Can Machines Learn the True Probabilities?
Abstract
When uncertainty exists, AI machines are designed to make decisions so as to reach the best expected outcomes. Expectations are based on true facts about the objective environment the machines interact with, and those facts can be encoded into AI models in the form of true objective probability functions. Accordingly, AI models involve probabilistic machine learning in which the probabilities should be objectively interpreted. Under some basic assumptions, we prove when machines can learn the true objective probabilities, if any, and when they cannot.
1 Introduction
In the standard AI model under uncertainty, how to measure the degree of uncertainty matters. This paper treats such measures in the form of probabilities. In particular, we focus on the true objective probabilities, if any. There are various probabilistic contexts in which the true objective probabilities matter. For example, causal relations of physical events are widely regarded as objective features of the world. Therefore, when causal relations are to be understood in terms of probabilities, mainly due to various regularity issues, a probabilistic causal model should include an objective probability function that assigns the true objective values for our world.
This paper addresses the question of whether machines can learn the true objective probabilities from the data to perform such probabilistic reasoning. Under some basic assumptions, we prove that machines can learn the true objective probabilities if and only if the probabilities are directly observable by them. Roughly speaking, a true probability is directly observable by a machine when the machine can calculate the probability from the empirical frequency of a true population available to it.
The outline of the proof is as follows. After defining some main concepts, we identify the Success Criterion and the necessary condition for any machine to learn the true objective probabilities. From these conditions, we derive the theorem that learning implies the true guarantee of well-calibration. Roughly speaking, "truly guaranteed well-calibration" means the following: when a machine collects data according to its subjective forecast along a stochastic path in which the associated events occur, the empirical frequency of the collected data matches the machine's probabilistic forecast with true probability one. Since the machine's forecasts must indeed be true when the machine learns the true probabilities, this calibration property can then be understood as a calibration version of the strong law of large numbers without the independence assumption.
Note that there exist connections here among machine forecasting, well-calibration, and machine learning. While proving our theorems, therefore, we establish connections between the true guarantee of well-calibration and various settings of the real forecasting games between Nature and a machine. In this game, what Nature forecasts are the true objective probabilities, while what the machine forecasts are its own subjective probabilities. The machine loses when Nature deviates from the probabilistic forecasts of the machine. Bridged by the property of truly guaranteed well-calibration, we then prove whether the machine learns the true probabilities or not under various settings of forecasting games.
With this proof, we provide the fundamental scope and limit of learning the true probabilities by AI machines. One important implication is that machines can relax the independence assumption among data to learn the true probabilities, but cannot relax the assumption of identical distribution, such as stationarity or ergodicity, along a stochastic path where the associated events occur. Another implication is that the problem of computability is directly connected to the problem of complexity in the case of learning the true probabilities.
2 Notations and Definitions
In this section, we define some main concepts, including “machine learning” and “true objective probability”. Adopting terminologies from (Nilsson, 2011) and (Boolos et al., 2002), let us first define a machine as an artifact or device that can effectively calculate or compute any target function if there exist definite and explicit instructions to do so in principle. Since we focus on probability functions in this paper, we particularly mean by “an effectively calculating or computing device” a machine that can in principle assign a probability measure (a value of a probability function) to each state (an argument of the probability function) in a given domain, an event space of a sigma-field.
Definition 2.1.
A function is effectively calculable or computable when there are definite and explicit instructions, following which its functional value can be calculated in principle for any given argument. (Boolos et al. (2002))
Two things about Definition 2.1 merit attention. First, this notion of effective calculation or computation is an ideal one, with no practical limits on time, expense, etc., necessary to calculate. Therefore, a proof of the limitation on effective calculation or computation of any function will imply a fundamental limit on computability that cannot be overcome by any practical real machine. Second, as (Kozen, 1997) points out, this notion is an informal one, something that is supposed to be captured in common by all formalisms such as computation by Turing machines, by the λ-calculus and by the μ-recursive method, etc. Accordingly, once we adopt this notion of effective calculation or computation to define "learning", we can be flexible about which formalism would be encoded as instructions to complete a given learning task.
Now, whatever such formalism is, machines can learn only if there exist some instructions they can follow to complete their tasks. So we can prove that it is impossible for machines to learn any target function under certain conditions in the following way: we first suppose that there exist some successful instructions to be encoded into machine programming to learn any given function under the conditions. We then show that this supposition leads to a conclusion that is impossible to satisfy. We thereby conclude that there cannot exist such instructions for the given function and, accordingly, that machines cannot learn it. This is a simple but clear way of proving the impossibility of learning without being committed to any complex procedure of constructing any formalism such as a Turing machine or the λ-calculus, etc.
Definition 2.2.
A machine learns when it succeeds in effectively calculating or computing a target function, if any, after processing a possibly infinite amount of data.
The phenomenon of learning must be at least computational in its essence when acquired by a machine. We thus adopt the notion of computation to define what learning is in Definition 2.2. Inspired by the ideas of (Turing, 1936) and (Church, 1936), we require that a machine be able to effectively calculate or compute a target function when the machine can learn the function.
In addition, we add the notion of success to Definition 2.2, which aims to capture the role of “learning” as an epistemic notion, not just a computational one. The epistemic notion of machine learning requires two components: if a machine learns, then (i) it must be indeed correct most of the time and (ii) it must be self-assured to be correct most of the time.
Learning is the phenomenon of knowledge acquisition. Once something is learned, knowledge about it is acquired. Now, knowledge must be a true representation, and it must be so not just by luck. We thus require that (i) what is effectively calculated or computed by a machine be true and further that it be true most of the time out of infinite opportunities to learn. In addition, if the machine admits errors too many times, say infinitely often, it cannot be said to learn. We thus require also that (ii) the machine be self-assured that what it calculates is correct most of the time. In sum, we provide the following Success Criterion:
(1) If a machine achieves computational success by learning, what it acquires in the end must be true to our world most of the time, which must be assured to the machine itself.
If what the machine computes turns out to be wrong or it admits errors repeatedly too often out of infinite opportunities to learn, then its computation cannot be considered successful. Later, we prove that the Success Criterion (1) is sufficient for learning in the case of computing true probabilities by Corollary 4.37. We also clarify there what we mean by “most of the time.”
Definition 2.3.
A true probability is what collectively constitutes a probability space, a triple $(\Omega, \mathcal{F}, \mathbb{P})$, of random variables $X_t$'s with a joint true probability $\mathbb{P}$ of the stochastic process according to which Nature generates a sequence of actual data $x_t$'s, where each of these data is realized as such with the very true probability $\mathbb{P}$.
Consider an enumerable set $S_t$ of $s_t$'s called states at time $t$ with $t = 1, 2, \ldots$. For example, $S_t$ may be the set $\{s_t^1, s_t^2, s_t^3\}$, where $s_t^1$ denotes the state of a sunny day, $s_t^2$ the state of a cloudy day, and $s_t^3$ the state of a rainy day at date $t$. Also, consider the set $\Omega$ that consists of all the infinite sequences of states, with a representative sequence $\omega = (s_1, s_2, s_3, \ldots)$. Here, $X_t$ is a random variable which takes some numerical value according as which $s_t$'s are realized at time $t$ in our world. Now, $X_t$ comes before $X_{t+1}$ in time, and thus the sequence of $X_t$'s represents a discrete-time stochastic process. Then Nature generates the actual data set with true probabilities $\mathbb{P}(X_t = x_t)$'s. So the probability function $\mathbb{P}$, if any, becomes true to our world when it corresponds to whatever amounts to the rules according to which the actual data are realized in our world. Broadly speaking, this is in line with the correspondence theory of truth, similarly to (Tarski, 1944).
Remark 2.4.
More detailed discussions on Definition 2.3, including examples, are provided in Appendix D.
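To make Definition 2.3 concrete, here is a minimal sketch in Python, assuming a hypothetical three-state weather process: Nature draws each day's state from a fixed true distribution, and a machine handed the realized sequence can compute its empirical frequencies. The state names and the numerical values in TRUE_P are illustrative assumptions, not part of the definition.

```python
import random

# A minimal sketch of Definition 2.3, assuming a hypothetical three-state
# weather process: Nature draws each day's state from a fixed true
# distribution P, and a machine only ever sees the realized sequence.
TRUE_P = {"sunny": 0.5, "cloudy": 0.3, "rainy": 0.2}  # assumed true probabilities

def nature_generates(t_max, rng):
    """Generate the actual data x_1, ..., x_{t_max} according to TRUE_P."""
    states, probs = zip(*TRUE_P.items())
    return rng.choices(states, weights=probs, k=t_max)

rng = random.Random(0)
data = nature_generates(100_000, rng)

# The machine's estimate: the empirical frequency of each state.
estimate = {s: data.count(s) / len(data) for s in TRUE_P}
print(estimate)  # close to TRUE_P for a large sample
```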
Now that we have defined learning and true probability, let us discuss under what conditions machines can or cannot learn the true probabilities. Before we move on, however, let us briefly mention how we can provide formal conditions for learning even though Definition 2.2 contains informal notions.
Recall from the second comment on Definition 2.1 that the general notion of computation has not been mathematically defined. This is why the Church-Turing thesis remains a thesis, not a theorem, given that it uses the general notion of computation. But the computability of any target function in each specific case can be formally specified by giving some definite and explicit instructions to derive the target function in each case, say by a Turing machine. Likewise, our general notion of machine learning cannot be mathematically defined, because Definition 2.2 uses the general notion of computation and the informal notion of success. But this does not prevent us from mathematically analyzing the notion of machine learning of the true probabilities by proving what the necessary and sufficient conditions are to learn them. We can do so by giving some definite and explicit instructions to statistically derive the true probability function by a machine while satisfying the Success Criterion (1).
3 Kinds of Probabilities and Learning
3.1 Subjective vs. Objective Probabilities
Broadly speaking, probabilities can be divided into two kinds, subjective and objective ones. Subjective probability, say $\Pi$, depends on each person's belief and thus possibly varies from person to person, while an objective one, say $\mathbb{P}$, does not.
The standard theory of subjective probability was first developed by Ramsey and then further by De Finetti and Savage. Subjective probability is designed to represent a degree of belief possessed by a subject, say some person or, if possible, a machine. Hence subjective probability represents whatever is in anyone's mind about anything, as long as his/her belief system is coherent, and so can be assigned even to what is merely imagined. For example, while arguing for cogito, ergo sum, (Descartes, 2008) imagined an evil spirit that has devoted all its efforts to deceiving him. Descartes can assign some value of subjective probability to his imagination of the evil spirit in accordance with how likely it is to him that the imagination can be realized in this world, as long as Descartes' belief system remains coherent.
In contrast, objective probability, if any, is what must be determined by objective features of our world that do not vary from person to person. The best way to understand objective probability is to consider examples. Following (Maher, 2010), for example, suppose that a coin has the same face on both sides, that is, it is two-headed or two-tailed. When this coin is tossed infinitely often, the relative frequency of heads surely converges to 1 or 0. Hence the limiting relative frequency here is either 1 or 0, depending on how our world turns out to be, which is an objective matter, and not on whatever we believe.
It should be noted that subjective and objective probabilities are conceptually bifurcated in two important ways. First, recall that subjective probability represents an aspect of someone’s subjective belief, while objective probability does not. Hence the subjective probability of Descartes’ demon is positive as long as it is believed at any degree that it could exist in our world. However, this does not necessarily imply that the true objective probability of Descartes’ demon is positive, since it might be the case that such a demon is possible only in one’s imagination but impossible in our real world. We will return to this potential bifurcation between subjective and objective probability in Section 4.1.
Second, there exists an asymmetric relation between subjective and objective probability: although the subjective probability of Descartes' demon does not necessarily bind its objective probability, the converse holds (e.g. (Lewis, 1980)). That is, once it is proven/assumed by any agent that the true objective probability of Descartes' demon is, say, zero, then the subjective probability of the same agent is bound to this proven/assumed result on the objective probability and thus must be zero as well. From this asymmetric relationship, we derive Lemma 4.23 in Section 4.2.
Remark 3.1.
More detailed discussions on various kinds of probabilities are provided in Appendix D.
3.2 What is Implied by Learning the True Objective Probabilities?
As we pointed out in Section 2, learning is the phenomenon of knowledge acquisition, and knowledge must be at least a true representation. In the case of human beings, the requirement of true representation is expressed as the requirement that (propositional) knowledge be at least a true belief (e.g. (Hintikka, 1962), (Moore, 1985)). What then is the counterpart of such a requirement for machines?
In general, if a machine achieves computational success at time $t$ by learning, what the machine represents by learning must be at least true at that time. Then we denote the true representation of the machine about what is learned by the "true belief" of the machine, a legitimate analogue to the true belief of human beings. It is a belief analogue, for we haven't yet shown that machines have minds or that they have the same kinds of mental representations as human beings. It is nevertheless a legitimate belief analogue, since the computational models of machine intelligence are based on understanding human intelligence. (e.g. (Pearl, 2018), (Russell, 1998), (Valiant, 1984, 2008))
That said, let us discuss the relation between belief and learning on the machine side: the knowledge acquired by machine learning must be at least a true belief. In (Hintikka, 1962), the knowledge of a person $a$ refers to the knowledge of that person on any proposition $p$. Likewise, a machine's learning of the true objective probability here refers to the knowledge acquired by any machine on the probabilistic proposition $q$. If a machine learns the true probability $\mathbb{P}(e)$ as $f$, then the probabilistic proposition $q$ amounts to this: the true objective probability $\mathbb{P}(e)$, if any, is what the very machine calculates as $f$. Here, we convert the non-propositional learning into propositional learning.
Now, just as a person $a$'s knowledge of proposition $p$ must satisfy the necessary condition that the person $a$'s belief in $p$ is true, machine learning of the true probability must also satisfy the condition that the machine's belief in $q$ is true. Note here that such a belief in $q$ is true when what has been calculated by the machine is indeed equal to the true probability $\mathbb{P}(e)$. Now, this probability function calculated by a machine is nothing more than the subjective probability $\Pi$ of the machine. Therefore, the necessary condition for machine learning of the true probability requires a machine to hold a true belief whose truth condition is satisfied when its subjective probability $\Pi(e)$ is, in fact, in congruence with the true objective probability $\mathbb{P}(e)$. In short, if a machine learns the true objective probability $\mathbb{P}(e)$, then the subjective probability $\Pi(e)$ of the machine is actually equal to the true probability $\mathbb{P}(e)$.
Remark 3.2.
There is a large literature in logic and economics whose discussion bears on when a machine holds a true belief in the probabilistic proposition $q$. We provide some of this literature in Appendix B.
Therefore, we obtain the following condition:
The Necessary Condition for any Machine to Learn the True Probability
(2) If a machine learns the true objective probability $\mathbb{P}(e_t)$, then $\Pi_t(e_t) = \mathbb{P}(e_t)$,
where $\Pi_t$ denotes the subjective probability of the machine at time $t$.
We assume, without loss of generality and for simplicity, that the event $e_t$ is an elementary event. So the event is a singleton, i.e. $e_t = \{s_t\}$.
Two things should be noted from (2): first, learning/knowledge is not necessarily equivalent to obtaining the true fact that $\Pi_t(e_t) = \mathbb{P}(e_t)$, as the converse of condition (2) does not necessarily hold. Second, if a machine is wrong in calculating the true probability at time $t$, so that $\Pi_t(e_t) \neq \mathbb{P}(e_t)$, then by modus tollens we can derive from (2) that the machine does not learn it at that time. However, this does not preclude the machine from learning it at any other time. Then what can be said about learnability in general? According to the Success Criterion (1), a machine cannot learn any target function if it is wrong most of the time, except for finitely many cases out of infinite opportunities to learn. But can a machine be said to learn if it is correct infinitely often but also wrong that often? We give a negative answer to this question by proving theorems in Section 4.2.
4 Can Machines Learn the True Probabilities?
4.1 Learning the True Probabilities and Calibration
Let us start with a simple example in which a machine is trying to learn the true probability that it will rain tomorrow. A forecasting system is said to be well-calibrated if, among the events in a test set to which it assigns some probability, say 30%, the long-run proportion that actually rain is 30%. According to (Dawid, 1982), a forecasting machine is self-assured that any fairly arbitrary test set of its forecasts is well-calibrated. This is Theorem 4.1. In addition, we prove in Theorem 4.6 that if the machine learns the true probability, then this machine's forecasting is truly guaranteed to be well-calibrated.
Now, let us assume that a machine has its own (not necessarily true in our context) probability distribution $\Pi$ defined over $(\Omega, \mathcal{F})$, where $\mathcal{F}_t$ denotes the totality of the true facts up to day $t$. The probability forecasts it makes on day $t$ are for events in $\mathcal{F}_{t+1}$ and are $\mathcal{F}_t$-measurable. For each day $k$ we have an arbitrary associated event $e_k \in \mathcal{F}_k$, say the event of raining on day $k$. We denote the indicator of $e_k$ by $x_k$, and introduce $p_k = \Pi(e_k \mid \mathcal{F}_{k-1})$, the probabilistic forecast of the machine for day $k$, made on day $k-1$. In addition, we introduce new indicator variables $\xi_k$ at choice to denote the inclusion of any particular day $k$ in the test set, where $\xi_k = 1$ if the day is included in the test set and $\xi_k = 0$ otherwise. Now, if we set the selection criterion to include any day $k$ into the test set as the assessed probability $p_k = p$ on that day, then we have the following theorem.
Theorem 4.1.
Suppose that $\xi_k$ is $\mathcal{F}_{k-1}$-measurable. Then, when $\nu \to \infty$, $\rho_\nu \to p$ with $\Pi$-probability one,
where $\nu = \sum_k \xi_k$: the number of days in the test set, and $\rho_\nu = \frac{1}{\nu} \sum_k \xi_k x_k$: the relative frequency of the associated events over the test set.
Here, let us use the terms as follows: machine forecasts are self-assured to be well-calibrated when $\Pi(\rho_\nu \to p) = 1$, while they are truly guaranteed to be so when $\mathbb{P}(\rho_\nu \to p) = 1$. It should be noted then that even if the forecasting machine is self-assured to be well-calibrated, this does not necessarily imply that its forecasts are truly guaranteed to be well-calibrated. Recall from Section 3.1 that there is a conceptual bifurcation between subjective and objective probability.
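As an illustration, the following minimal simulation of the setup of Theorem 4.1 assumes (benignly for Nature) that outcomes are in fact generated with the machine's own forecast probabilities; the grid of forecast values and the selection criterion $p = 0.3$ are illustrative choices, not part of the theorem.

```python
import random

# A minimal simulation of Theorem 4.1's setup, assuming outcomes are
# generated with the machine's own forecast probabilities: select test
# days by the criterion p_k = p and check that the test-set relative
# frequency rho_nu approaches p.
rng = random.Random(1)
p = 0.3                       # selection criterion: include day k iff p_k == p
hits, nu = 0, 0               # running sum of xi_k * x_k, and nu = sum of xi_k

for k in range(200_000):
    p_k = rng.choice([0.3, 0.7])           # machine's forecast for day k (made at k-1)
    x_k = 1 if rng.random() < p_k else 0   # here Nature conforms to the forecast
    if p_k == p:                           # xi_k = 1: day k enters the test set
        nu += 1
        hits += x_k

print(hits / nu)  # rho_nu, close to 0.3
```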
Now, suppose that a machine tries to learn the true probability of a particular event $e$. If this machine indeed learns the true probability of the event as $f$, then the machine should correctly calculate the true probability of the same events repeatedly as $f$ most of the time. Hence, the machine can construct a test set of those associated events $e_t$'s whose sequentially correct probabilities are $f$. Then we can show further from Theorem 4.1 that the test set will be well-calibrated with true probability one. This is Theorem 4.6. In short, here "being correct as $f$" itself serves as what (Dawid, 1982) calls a selection criterion.
However, note that if the size of $\mathcal{F}_t$ continues to grow as $t$ goes to infinity, then the $\mathcal{F}_t$'s might be different for each $t$. Then $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ might not stay the same as $f$ even for the same events $e_t$'s across infinitely many $t$'s. Now, in order for the correct probability $f$ to work as a selection criterion, it should be that $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ stays the same as $f$ at least for infinitely many $t$'s, even though $\mathcal{F}_t$ may vary as time passes. Therefore, we prove Lemma 4.5 from the following three assumptions. The justifications for the three assumptions are provided in Appendix C.
Assumption 4.2.
The $\mathcal{F}_t$'s in $(\Omega, \mathcal{F}, \mathbb{P})$ are the sets of all the true facts up to time $t$.
Assumption 4.3.
No further knowledge requirement is imposed on the conditioning information $\mathcal{F}_t$.
Assumption 4.4.
Once a probability $f$ of an event type is established, its associated event tokens $d_t$'s occur at some infinite subsequence of times $t_i$'s, so that $\mathbb{P}(d_{t_i})$ does not vanish to zero as $t_i \to \infty$.
It should be noted from 4.2 and 4.3 that if $\mathcal{F}_t$ were the set of known facts, the information on the associated events $e_t$'s in the $\mathcal{F}_t$'s might not be independent of one another over time. Once $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f$ has been known in the past at some time $t$, the same events $e_t$'s are more likely to be known afterwards. Repeatedly accumulated knowledge of the same events reinforces the probability that the very event will be known again in the future. However, this is not necessarily the case with the set of true facts. It will be clear in Lemma 4.5 why this independence condition matters.
Lemma 4.5.
For any $t$, let $d_t$ denote the event token at time $t$ whose event type almost surely determines the true probability of an event type as $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f$. Then, if for some subsequence of $t_i$'s the $d_{t_i}$'s are independent across $t_i$'s and $\mathbb{P}(d_{t_i}) > 0$ for any $t_i$, then $\mathbb{P}(d_{t_i} \text{ infinitely often}) = 1$.
Now that Lemma 4.5 has been established, $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ is truly guaranteed to stay at $f$ infinitely often, and thus the machine has infinite opportunities to learn $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ as $f$.
Theorem 4.6.
Let us consider any arbitrary $f$. If a machine learns the true objective probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ as $f$, then $\mathbb{P}(\rho_\nu \to f) = 1$.
It should be noted that the notion of learning in Theorem 4.6 is flexible enough to allow for finitely many potential errors, so that there can exist some $t$'s such that $\Pi_t(e_t) \neq \mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ while processing the data to learn.
Remark 4.7.
More detailed discussions on Theorem 4.6 are provided in Appendix D.
4.2 Can Machines Learn the True Probabilities?
Theorem 4.8.
It is impossible to obtain a joint distribution for an infinite sequence of events that could have the well-calibration property with subjective probability one.
The basic idea in the proof of Theorem 4.8 is to construct a counterexample in which the true probability function deviates infinitely often from the subjective probability function in such a way that the well-calibration property no longer holds.
Counterexample 1 Following (Oakes, 1985), let the true probability be such that $\mathbb{P}(e_k \mid \mathcal{F}_{k-1}) = g(p_k)$, with the function $g$ being defined so that $g(p) \neq p$ for any forecast $p$. Then, under $\mathbb{P}$, where for a subsequence of days the forecasts stay at $p$ and the $x_k$'s form a Bernoulli sequence with parameter $g(p) \neq p$, the well-calibration property does not hold.
Due to this counterexample from (Oakes, 1985), the machine forecaster cannot exclude the possibility that its test set may be mis-calibrated, and thus the machine cannot hold, with subjective probability one, that it is well-calibrated. Furthermore, if this artificially imagined possibility of mis-calibration is a real possibility, then it follows that no test set large enough can be guaranteed to be well-calibrated with true probability one. Later in this section, we prove that if such an imagined possibility is a real one, then machines cannot learn. Meanwhile, we also prove mathematically how the (Oakes, 1985) counterexample paralyzes Dawid's Theorem 4.1, which amounts to the proof of Theorem 4.8.
Remark 4.9.
More detailed discussion on the Counterexample 1 is provided in Appendix D.
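For intuition, the following sketch simulates an Oakes-style deviation under an assumed deviation function $g(p) = 1 - p$; the particular $g$, the forecast value, and the horizon are illustrative assumptions, and the actual construction in (Oakes, 1985) differs in detail. Any $g$ with $g(p) \neq p$ makes the same point.

```python
import random

# A sketch of an Oakes-style failure of calibration, assuming a hypothetical
# deviation function g(p) = 1 - p: Nature generates each x_k as
# Bernoulli(g(p_k)) rather than Bernoulli(p_k).
rng = random.Random(2)
p = 0.3
g = lambda q: 1.0 - q         # assumed perverse response to the forecast

hits, nu = 0, 0
for k in range(200_000):
    x_k = 1 if rng.random() < g(p) else 0  # true probability is g(p), not p
    nu += 1
    hits += x_k

print(hits / nu)  # about 0.7 = g(0.3): the test set is mis-calibrated at p = 0.3
```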
Lemma 4.10.
Suppose that a machine constructs a test set by the assessed probability $p$. Then $\mathbb{P}(\rho_\nu \to p) = 1$ if and only if $\mathbb{E}[\rho] = p$, where the expectation is taken with respect to the true probability $\mathbb{P}$. Here, $\rho = \lim_{\nu \to \infty} \rho_\nu$.
Lemma 4.11.
Let us fix $p$. Now, suppose that $\rho = \lim_{\nu \to \infty} \rho_\nu$ exists. Then $\mathbb{E}[\rho] = p$ if and only if $\lim_{\nu \to \infty} \mathbb{E}[\rho_\nu] = p$. In general, $\mathbb{E}[\rho] = \lim_{\nu \to \infty} \mathbb{E}[\rho_\nu]$.
Remark 4.12.
By Lemma 4.10 and Lemma 4.11, we establish a connection between the true guarantee of well-calibration and the real forecasting game between a machine and Nature. More discussions on such connection by Lemma 4.10 and Lemma 4.11 are provided in Appendix D.
Definition 4.13.
Nature is perverse when, for any fixed machine forecast $f$, $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) \neq f$ at least for infinitely many $t$'s along the stochastic path of the test set.
By "at least i.o." in Definition 4.13, we mean that Nature deviates from $f$ either (i) infinitely often or (ii) all but finitely often along the stochastic path of the test set. Thus, we clearly distinguish (i) from (ii). From now on, we mean by "infinitely often" that Nature not only deviates infinitely often, but also does not deviate infinitely often. On the other hand, by "all but finitely often" we mean it as usual. Then, if the true probability of Nature's perversity is zero, we denote it by $\mathbb{P}(\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) \neq f \text{ at least i.o.}) = 0$ along the path of the test set, which amounts to $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) \neq f$ at most finitely often along the path of the test set. Furthermore, if there is no confusion, we will abbreviate Nature's perversity to "$\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) \neq f$ at least i.o." while omitting "along the path of the test set."
Now, according to the Success Criterion (1), a machine fails to learn the true probability in case (ii), because the machine then makes wrong forecasts along the path except for a finite few of the infinite opportunities to learn. However, it seems unclear whether the machine can learn or not in case (i). On the one hand, the machine seems not to be able to learn because it makes too many errors, say infinitely many errors. On the other hand, it seems that the machine should be able to learn because it makes astronomically many correct forecasts, say infinitely often. Therefore, while adopting this definition, we clearly prove by Theorem 4.20 and Corollary 4.30 that a machine cannot learn the true probability even when it is correct infinitely often, if it is wrong that often.
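To fix ideas, this toy snippet prints two hypothetical deviation patterns: one that deviates infinitely often in the restricted sense above (deviations and conformities both recur forever), and one that deviates all but finitely often. Both rules are invented purely for illustration.

```python
# A toy illustration of the distinction drawn after Definition 4.13, with
# hypothetical deviation patterns over times t = 1, ..., 20. "D" marks a
# deviation from the forecast, "-" a conformity.
def deviates_io(t):       # deviates at even t: deviations and conformities both recur
    return t % 2 == 0

def deviates_abfo(t):     # deviates at every t > 5: conforms only finitely often
    return t > 5

for name, rule in [("i.o.", deviates_io), ("a.b.f.o.", deviates_abfo)]:
    pattern = ["D" if rule(t) else "-" for t in range(1, 21)]
    print(name, "".join(pattern))
```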
Observation Provided that the machine forecast $\Pi_t(e_t)$ is fixed as some value $f$, $\mathbb{P}(\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f)$ becomes the true second-order probability on the true first-order probability of such an event $e_t$; that is, $\alpha_t = \mathbb{P}(C_t)$, where $C_t$ denotes the event that the machine makes a correct forecast at $t$.
It should be noted here that the numbers computable by a machine are countably many (e.g. (Turing, 1936)). Thus, the true second-order probability here is a probability mass function on a countable space and therefore satisfies the Kolmogorov axioms, although $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ may potentially be any real number in $[0, 1]$.
Remark 4.14.
More detailed discussions on the connection between true second-order probability and the forecasting game are provided in Appendix D.
Lemma 4.15.
Let us consider the forecasting game between Nature and a machine. Also, let us further suppose that the structure of this game at any given time $t$, i.e. whether it is simultaneous or not, is certain to Nature. Now, by 4.2 and 4.3, let us suppose that $\mathcal{F}_t$ consists of the true facts, not necessarily knowledge. Then there exists a true second-order probability $\alpha_t$ such that $0 < \alpha_t < 1$ if and only if the real forecasting game is a simultaneous-move game at time $t$. In particular, $\alpha_t = 0$ if and only if the machine moves first and then Nature moves later after observing what move the machine takes in the forecasting game at time $t$.
There are various theories of learning in games (e.g. (Nisan et al., 2007)). Therefore, what matters is what is aimed to be learned through games and who is competing with whom in the games. In the standard model, a machine aims to learn which actions produce the minimized expected (total) loss or payoff, which is determined in a given environment, say a financial market. In this case, a machine usually competes with other machines in the game. For example, in some online learning settings, a machine aims to learn a sequence of estimates that returns sub-linear regret, given that the loss functions are convex. It gets a possibly different amount of payoff/loss at each round of the game along the stochastic path where the given sequence of games is played.
In our forecasting games, on the other hand, a machine aims to learn the true objective probability, if any, through the games, and so the machine competes with Nature. Also, whoever wins a game, the winner/loser will get a uniform payoff at every round along the path, for what counts is how many times the machine loses/wins along the path, not how much payoff it gets at each round once it loses/wins.
Theorem 4.16.
In the forecasting game between a machine and Nature, the machine does not necessarily learn that it wins at each round of the game even though it indeed wins.
Thus, a winning strategy is not equivalent to a learning strategy. Now, in the case where a machine does not learn that it wins/loses a game even though it indeed does so, it does not matter what it gets as payoff when it wins/loses, because it cannot learn how much it gets at each round. What matters, on the contrary, is how many times it wins along the path, and this is why our game setting in Lemma 4.15 adopts a uniform payoff at each round.
Theorem 4.17.
Let us consider any arbitrary event with any machine forecast $f$. If $\mathbb{P}(\rho_\nu \to f) = 1$, then the true probability that Nature is perverse is zero with any of these forecasts $f$.
Case 1 Let us suppose that $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) \neq f$ at most finitely often along the stochastic path where the associated events $e_t$'s occur. Then $\mathbb{P}(\rho_\nu \to f) = 1$, where $\lim_\nu \rho_\nu$ denotes the limiting relative frequency along the path.
Case 2 Let us suppose that $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ deviates from $f$ just as in (Oakes, 1985). Then $\mathbb{P}(\rho_\nu \to f) < 1$, where $\lim_\nu \rho_\nu$ denotes the limiting relative frequency along the stochastic path of the test set.
Case 3 Let us suppose that $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) \neq f$ at least i.o. along the test set. Then $\mathbb{P}(\rho_\nu \to f) < 1$, where $\lim_\nu \rho_\nu$ denotes the limiting relative frequency along the path of the test set.
Regarding Theorem 4.17, it is worth noting the following three things: (i) Case 1 is equivalent to the strong law of large numbers under a weaker assumption than i.i.d.: if the true probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ exists and is identically distributed as $f$ all but finitely often along the path, then the limiting relative frequency converges to the same $f$ with true probability one. (ii) Case 2 shows that if (Oakes, 1985) holds with positive subjective probability, then (Dawid, 1982) does not hold, which amounts to the proof of Theorem 4.8. (iii) Case 3 shows, combined with Theorem 4.6, that if $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) \neq f$ at least i.o. along the test set, then a machine cannot learn the true probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ as $f$. Thus, the third result (iii) has the following important implication for time-series analysis: a machine cannot relax the assumption that the true probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ is identically distributed along the stochastic path, if the machine aims to learn it. To learn, the machine needs some identical-distribution assumption such as stationarity or ergodicity.
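The contrast between Cases 1 and 3 can be seen numerically. The sketch below, under assumed dynamics (deviation only on the first 100 steps versus deviation on ever-longer alternating blocks), tracks the running relative frequency at several horizons; the block construction via int(sqrt(t)) is an arbitrary illustrative choice.

```python
import random

# A sketch of Cases 1 and 3 of Theorem 4.17 under assumed dynamics. In Case 1,
# Nature deviates from f = 0.3 only finitely often (the first 100 steps), so
# the running relative frequency settles at f. In Case 3, Nature alternates
# between 0.3 and 0.8 on ever-longer blocks, deviating infinitely often, so
# the running relative frequency keeps drifting and never settles.
rng = random.Random(3)
f = 0.3

def running_freqs(prob_at, horizons):
    hits, out, t = 0, [], 0
    for h in horizons:
        while t < h:
            t += 1
            hits += rng.random() < prob_at(t)   # x_t ~ Bernoulli(prob_at(t))
        out.append(round(hits / t, 4))
    return out

horizons = [10**3, 10**4, 10**5, 10**6]
print(running_freqs(lambda t: 0.8 if t <= 100 else f, horizons))              # approaches 0.3
print(running_freqs(lambda t: f if int(t**0.5) % 2 == 0 else 0.8, horizons))  # keeps wandering
```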
Definition 4.18.
Suppose that, with positive true probability, Nature is perverse with some forecast $f$. Then, Nature is uniformly perverse when, for any event $e$, there exists no forecast $f^*$ such that $\mathbb{P}(\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) \neq f^* \text{ at least i.o.}) = 0$.
In other words, when Nature deviates from forecasters for any event , she does not discriminate against some forecasters in favor of the others whose forecasts Nature decides to conform to all but finitely often for sure.
Theorem 4.19.
Suppose that, for any $f$, there exists a true second-order probability $\alpha_t$ such that $\alpha_t < 1$ at least for infinitely many $t$'s. Then, Nature is uniformly perverse.
Theorem 4.20.
Suppose that, for any $f$, there exists a true second-order probability $\alpha_t$ such that $\alpha_t < 1$ at least for infinitely many $t$'s. The machine cannot then learn the true objective probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ as $f$.
Now, let us discuss what is meant in Theorem 4.20 by the condition that the true second-order probability is strictly less than 1. Note from Lemma 4.15 that $\alpha_t = 1$ if and only if Nature moves first and then the machine moves later after observing what move Nature takes in the forecasting game at time $t$. Thus, it is clear from the condition of Theorem 4.20 why and when the machine fails to learn the true probability if Nature is uniformly perverse: when the machine cannot move later after observing the true move of Nature infinitely often, there always exists a real possibility that the machine may not be able to match Nature's move that often. Hence the machine cannot be truly guaranteed to be well-calibrated, which again implies the impossibility of machine learning. Since the machine cannot observe the true move of Nature in those forecasting games, the true probability is unobservable by the machine.
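Here is a minimal sketch of this game-theoretic reading under assumed rules: forecasts live on a small finite grid, Nature wins a round by deviating from the machine's forecast, and the long-run fraction of matched rounds estimates the second-order probability $\alpha$ under each move order. The grid and the uniformly random tie-breaking are illustrative assumptions.

```python
import random

# A sketch of the move orders in Lemma 4.15's forecasting game. alpha is
# estimated as the long-run fraction of rounds where the machine's forecast
# matches Nature's move.
rng = random.Random(4)
GRID = [0.0, 0.25, 0.5, 0.75, 1.0]   # countable (here finite) forecast space

def play(order, rounds=100_000):
    correct = 0
    for _ in range(rounds):
        if order == "machine_second":       # machine observes Nature, then matches
            nature = rng.choice(GRID); machine = nature
        elif order == "nature_second":      # Nature observes the machine, then deviates
            machine = rng.choice(GRID)
            nature = rng.choice([g for g in GRID if g != machine])
        else:                               # simultaneous: neither can guarantee a win
            machine, nature = rng.choice(GRID), rng.choice(GRID)
        correct += (machine == nature)
    return correct / rounds

for order in ["machine_second", "nature_second", "simultaneous"]:
    print(order, play(order))  # alpha = 1, alpha = 0, and 0 < alpha < 1 (here 0.2)
```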
So far we have shown that it is a real possibility that Nature is perverse, and thus that no machines can learn the true objective probability. Now someone might argue that this proof holds only under the condition that Nature is uniformly perverse. Nature may not be uniformly perverse, however, but only selectively perverse, so that, for some forecast $f^*$, Nature may decide to be benevolent enough to conform to that $f^*$. Then it may be the case that the true probability of Nature being perverse is zero for this $f^*$, and accordingly machines may be given an opportunity to learn the true objective probability for that $f^*$.
Note, however, that it is entirely Nature's decision when she will be benevolent to a machine and when she will not. Therefore, it is still a random event to the machine whether Nature is perverse or not. If so, we will show further that, even if the true probability of Nature's being perverse is zero with some $f^*$, a machine still cannot learn the true probability if it cannot learn which forecast is the right $f^*$ for any event $e$.
Definition 4.21.
A machine tolerates error at $t$ while pursuing its goal of learning the true probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$, when $\Pi_t(e_t) = f$ but $\Pi_t(\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f) < 1$ for some $t$.
Remark 4.22.
In relation to Lemma 4.23, more detailed interpretation on Definition 4.21 is provided in Appendix D.
Lemma 4.23.
Suppose that a machine aims to learn the true probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ and thus performs an effective calculation to return its result of $\Pi_t(e_t)$ as $f$ for the true probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$. Then, $\Pi_t(e_t) = f$ if and only if $\Pi_t(\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f) = 1$, for all but finitely many $t$'s.
Remark 4.24.
In relation to (Savage, 1972), more discussions on Lemma 4.23 are provided in Appendix D.
Definition 4.25.
Nature is selectively perverse when there exist $e$ and $f^*$ such that $\mathbb{P}(\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) \neq f^* \text{ at least i.o.}) = 0$, while $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) \neq f$ at least i.o. for any other forecast $f$.
Now, let us define Nature's decision to be selectively perverse at $t$, in order to show by Lemma 4.28 that once Nature decides so at $t$, our real world remains as such.
Definition 4.26.
Nature decides to be selectively perverse at $t$, when there exist forecasts $f^*$ and $f$ such that $\mathbb{P}(B_t(f^*)) = 0$, while $\mathbb{P}(B_t(f)) = 1$, where $B_t(f)$ denotes the event that, from $t$ onward, Nature is perverse with the associated events $e$'s whose assessed forecasts are $f$.
Definition 4.27.
Suppose that Nature is selectively perverse so that she freely decides at any time whether to be perverse at any rate or not. Then, $\tau$ denotes a stopping time if $\tau$ is the last time that Nature changes her mind into non-perversity so that, for any $f^*$ with which Nature is not perverse with true probability one, $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f^*$ for all $t > \tau$.
Note that $\tau$ is $\mathcal{F}_t$-measurable, because $\mathcal{F}_t$ includes all the true facts up to $t$, and so whatever Nature decides at $t$, say the event $\{\tau = t\}$, belongs to the set of true facts, $\mathcal{F}_t$.
Lemma 4.28.
Nature is selectively perverse if and only if there exists a stopping time $\tau$ for every forecast $f^*$ with which Nature is not perverse with true probability one, so that $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f^*$ for all $t > \tau$, while there is no stopping time for any other $f$.
Lemma 4.29.
Let us suppose that Nature is selectively perverse and that a machine learns which forecast $f^*$ is the right forecast for any associated $e$'s with which Nature is not perverse with true probability one. The machine is then self-assured that the stopping time $\tau$ arrives for that $f^*$.
Corollary 4.30.
Suppose that Nature is selectively perverse so that, with true probability one, she is not perverse with some machine forecasts $f^*$'s. Furthermore, suppose that the machine is not self-assured that the stopping time $\tau$ arrives for each of those $f^*$'s. The machine cannot then learn the true objective probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ as $f^*$.
Note that along the stochastic path considered in Corollary 4.30, $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f^*$ at least all but finitely often. Now, for this $f^*$,
(3) $\Pi_t(e_t) = f^* = \mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ at least all but finitely often.
Therefore, without loss of generality, letting $t > \tau$ with $\tau < \infty$,
(4) $\alpha_t = \mathbb{P}(\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f^*) = 1$ with $t > \tau$.
Now, (4) means by Lemma 4.15 that the true probability is observable at any time along this path. Then why is the machine still unable to learn the true probability, even though the machine can move after observing what move Nature takes in the forecasting games all along that path after $\tau$? According to Corollary 4.30, this is because the machine cannot be self-assured whether the true probability will remain observable at any time from $t+1$ onward, even if the machine observes Nature's true move at time $t+1$. Let us show this by the following Lemma 4.31.
Lemma 4.31.
Suppose that a machine is not self-assured of the stopping time $\tau$ for $f^*$. The machine cannot then be self-assured whether the true probability will remain observable at any time from $t+1$ onward, even if the machine observes Nature's true move at time $t+1$.
From Theorem 4.20 and Corollary 4.30, we conclude that the impossibility of learning is derived under the assumption either that Nature is uniformly perverse or that Nature is selectively perverse but a machine is not self-assured of whether the stopping time arrives or not. What would then happen in the case where Nature is selectively perverse and a machine is self-assured of the stopping time when the $\tau$ indeed exists? We show in the following that a machine can learn the true probability in this case, and further that this is the only case in which a machine can learn it.
Theorem 4.32.
Suppose that a machine learns the true probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ as $f^*$. The machine is then self-assured that the stopping time $\tau$ arrives for $f^*$, while the machine is not self-assured that the stopping time arrives for any $f$ for which such a $\tau$ does not exist.
Let us now define when the true probability is directly observable, based on the notion of population. The concept of population in Definition 4.33 is mainly indebted to (von Mises, 1957, 1967). Since the true probability is defined as the empirical distribution of this population available to a machine, the probability is said to be directly observable by the machine.
Definition 4.33.
Let us consider a set that consists of the sequence of events $e_1, \ldots, e_N$, with $N$ potentially infinite. Then, the set is defined to be a population with $N$ elements, when this set is assumed to have a certain attribute of interest, and so an indicator variable $x_i$ is assigned to each event $e_i$, where $x_i$ has the value 1 or 0 depending on whether the event $e_i$ satisfies such an attribute or not, once the set is collected. Then, the empirical distribution of the population with respect to the given attribute is defined to be $\frac{1}{N} \sum_{i=1}^{N} x_i$.
Definition 4.34.
A machine directly observes $\mathbb{P}(e)$ from the population at $t$ if the following two conditions are satisfied: (i) a population is in principle available to the machine; (ii) the machine calculates the empirical distribution of the population with respect to the given attribute, which is the true probability distribution of the event $e$.
Now, in the case where the sequence is a time series, Definition 4.34 means that $\mathbb{P}(e) = \frac{1}{N} \sum_{t=1}^{N} x_t$ with $N \to \infty$. Thus, when $N$ goes to infinity, the directly observable true probability becomes the limiting relative frequency, the representative objective true probability.
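A minimal sketch of Definitions 4.33 and 4.34, with a hypothetical finite population: direct observation amounts to computing the empirical distribution of the indicator variables. The sample values below are invented for illustration.

```python
# A minimal sketch of Definitions 4.33 and 4.34: given a population of events
# with indicator variables x_i for an attribute of interest, direct
# observation amounts to computing the empirical distribution (1/N) * sum(x_i).
population_indicators = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]  # hypothetical x_i, N = 10

def empirical_distribution(xs):
    """Empirical distribution of the population w.r.t. the given attribute."""
    return sum(xs) / len(xs)

print(empirical_distribution(population_indicators))  # 0.5
```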
Theorem 4.35.
Suppose that a machine is self-assured of the stopping time $\tau$ when there exists such a $\tau$, but that the machine is not self-assured of the stopping time when no $\tau$ exists. The machine then directly observes the true probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ as $f^*$.
Theorem 4.36.
A machine directly observes the true probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ as $f^*$ if and only if the machine learns the true probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ as $f^*$.
Two things should be noted from Theorem 4.36. First, whenever the true probability is not directly observable, a machine cannot learn the true probability. Now recall from Definition 2.1 that the machine is an ideal one with no practical limits on computational resources such as time or storage space. Therefore, this implies that no real machines, hindered by many practical limits in our world, can overcome this impossibility of learning either, whenever the true probability is not directly observable. Second, Theorem 4.36 also says that the true probability is directly observable by a machine whenever it can learn the true probability. Once a machine learns the true probability, so that it is successfully computable, the next question is how complex it is to compute. Now that the true probability is directly observable, this makes it easier to deal with the complexity problem (e.g., as with sorting algorithms). Thus, Theorem 4.36 directly connects the problem of computational solvability to the problem of complexity.
Now, let us finish this section by adding one more claim: satisfying the Success Criterion (1) in computing the true probability is sufficient for learning it.
Corollary 4.37.
If a machine calculates the true probability $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ correctly most of the time, and this is self-assured to the machine, then the machine can learn the true probability.
5 Conclusion
We have discussed so far when machines can learn the true probabilities and when they cannot. In summary:
• There exists some forecast $f$ such that Nature is perverse with $f$, by Theorem 4.19.
Now that Nature is perverse at least with one forecast $f$:
• (i) Nature is uniformly perverse: machines cannot learn, by Theorem 4.20.
• (ii) Nature is selectively perverse: for each event there exists some $f^*$ with which Nature is not perverse, by Lemma 4.28.
Then under (ii),
• (ii-1) Machines are not self-assured of the stopping time $\tau$: machines cannot learn, by Corollary 4.30.
• (ii-2) Machines are self-assured of the stopping time $\tau$.
Then under (ii-2),
• (ii-2-1) $\tau$ actually does not arrive: machines cannot learn, by Theorem 4.32.
• (ii-2-2) $\tau$ indeed arrives: machines can learn, and this is the only case in which machines can learn, by Theorem 4.35 and Theorem 4.36.
Before we close this section, let us add a few remarks. First, we emphasize that in this paper we have focused on a notion of "machine learning" that is not just a technical term, understood as the identification of a target function, but also an epistemic one, a counterpart to "human learning." We focus on this epistemic notion of machine learning because we particularly mean by "machines" those artifacts that perform human-level intelligent behaviors.
Second, note that we do not need to specify how machines learn the true objective probabilities to prove the impossibility of machine learning on the true probabilities. Instead, we only need the necessary condition for any machine to learn the true objective probabilities if it learns them in any way. Thanks to this flexibility about how to learn, we come to have a powerful and robust result: no matter what kind of learning method a machine uses, it cannot learn the true objective probabilities that are not directly observable.
Lastly, let us emphasize again that our learning machine is an ideal device with no practical limits on time and storage space, etc. Therefore, the scope and limit of machine learning on true probabilities discussed in this paper are more fundamental than practical ones.
Acknowledgements
The author is grateful to Tyler Burge, Michael Christ, Philip Dawid, Joseph Halpern, Jinho Kang, Steven Matthews, Thomas Sargent, and Byeong-uk Yi for discussions that were helpful in various ways to develop this paper. In particular, Tyler helped me pay attention to the idea of converting a non-propositional structure to a propositional one while learning, and Joe helped me open my eyes to the possibility of machine learning of the true probabilities. I discussed every detail of this paper with Jinho, so I insisted that he be listed as a co-author. Jinho refused on the ground that he did not make direct contributions to the mathematical proofs, with which I disagree. But Jinho has been right most of the time when we disagreed, so I decided to agree. Lastly, the author is grateful to three anonymous reviewers and a meta-reviewer. Their reviews were helpful in improving this paper.
Impact Statement
This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
References
- Blume & Easley (2006) Blume, L. and Easley, D. If you’re so smart, why aren’t you rich? belief selection in complete and incomplete markets. Econometrica, Vol. 74:929–966, 2006.
- Blume & Easley (2008) Blume, L. and Easley, D. Market selection and asset pricing. The Handbook of Financial Markets: Dynamics and Evolution, by T. Hens IV and K. Schenk-Hoppe (ed.), North-Holland:403–438, 2008.
- Boolos et al. (2002) Boolos, G., Burgess, J., and Jeffrey, R. Computability and Logic. Cambridge University Press, Cambridge, 2002.
- Carnap (1963) Carnap, R. Logical Foundations of Probability. The University of Chicago Press, 1963.
- Church (1936) Church, A. An unsolvable problem of elementary number theory. American Journal of Mathematics, Vol. 58(2):345–363, 1936.
- Cogley & Sargent (2008) Cogley, T. and Sargent, T. The market price of risk and the equity premium: A legacy of the great depression. Journal of Monetary Economics, Vol. 55:454–476, 2008.
- Cogley & Sargent (2009) Cogley, T. and Sargent, T. Diverse belief, survival and the market price of risk. Economic Journal, vol. 119:354–376, 2009.
- Dawid (1982) Dawid, P. The well-calibrated bayesian. Journal of the American Statistical Association, 77(379):604–613, 1982.
- Descartes (2008) Descartes, R. Meditations on First Philosophy. Translated by Moriarty. M. Oxford University Press, 2008.
- Foster & Vohra (1993) Foster, D. and Vohra, R. Asymptotic calibration. Biometrika, 85(2):379–390, 1993.
- Gaifman (1986) Gaifman, H. A theory of higher order probabilities. In TARK, 1986.
- Halpern (2016) Halpern, J. Actual Causality. The MIT Press, Cambridge, MA, 2016.
- Halpern & Fagin (1994) Halpern, J. and Fagin, R. Reasoning about knowledge and probability. Journal of the Association for Computing Machinery, Vol 41(2):340–367, 1994.
- Hintikka (1962) Hintikka, J. Knowledge and Belief. Cornell University Press, Ithaca, 1962.
- Kozen (1997) Kozen, D. Automata and Computability. Springer, New York, 1997.
- Lewis (1980) Lewis, D. A subjectivist’s guide to objective chance. Studies in Inductive Logic and Probability, Volume II, by R. Jeffrey (ed.):263–293, 1980.
- Maher (2010) Maher, P. Explication of inductive probability. Journal of Philosophical Logic, Vol. 39:593–616, 2010.
- Moore (1985) Moore, R. C. A formal theory of knowledge and action. Formal Theories of the Commonsense World, by J. Hobbs and R. C. Moore, (ed.). Ablex Publishing Corp:319–358, 1985.
- Nagel (1939) Nagel, E. Principles of the theory of probability. Int. Encycl. Unif. Sc., Vol. I(No. 6), 1939.
- Nilsson (1986) Nilsson, N. Probabilistic logic. Artificial Intelligence, 28:71–87, 1986.
- Nilsson (2011) Nilsson, N. Artificial Intelligence: A New Synthesis. Morgan Kaufmann, California, 2011.
- Nisan et al. (2007) Nisan, N., Roughgarden, T., Tardos, E., and Vazirani, V. V. Algorithmic Game Theory. Cambridge University Press, New York, 2007.
- Oakes (1985) Oakes, D. Self-calibrating priors do not exist. Journal of the American Statistical Association, 80(390):339, 1985.
- Pearl (2018) Pearl, J. A personal journey into bayesian networks. UCLA Cognitive Systems Laboratory, Technical Report (R-476), 2018.
- Ramsey (1931) Ramsey, F. Truth and probability. Studies in Subjective Probability, by Henry Kyburg and Howard Smokler (ed.). Krieger:25–52, 1931.
- Russell (1998) Russell, S. Learning agents for uncertain environments. Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pp. 101–103, 1998.
- Sandroni (2000) Sandroni, A. Do markets favor agents able to make accurate predictions? Econometrica, Vol. 68(6):1303–1341, 2000.
- Savage (1972) Savage, L. J. The Foundations of Statistics. Dover Publications, New York, 1972.
- Tarski (1944) Tarski, A. The semantic conception of truth: and the foundations of semantics. Philosophy and Phenomenological Research, Vol. 4(3):341–376, 1944.
- Turing (1936) Turing, A. On computable numbers, with an application to the entscheidungsproblem. Proceedings of the London Mathematical Society, 42:230–265, 1936.
- Valiant (1984) Valiant, L. A theory of the learnable. Communications of the ACM, 27, Nov.:1134–1142, 1984.
- Valiant (2008) Valiant, L. Knowledge infusion: In pursuit of robustness in artificial intelligence. Proc 28th Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 415–422, 2008.
- von Mises (1957) von Mises, R. Probability, Statistics and Truth. revised English edition, Macmillan, New York, 1957.
- von Mises (1967) von Mises, R. Mathematical Theory of Probability and Statistics. 2nd edition, Academic Press Inc., New York, 1967.
Appendix A Proofs for Lemmas, Theorems and Corollaries
Proof of Theorem 4.1 A proof of Theorem 4.1 is suggested in (Dawid, 1982). A simpler one is as follows: let $Y_n = \sum_{k=1}^{n} \xi_k (x_k - p_k)/\nu_k$, where $\nu_k = \sum_{j \le k} \xi_j$. Since $\xi_k$ and $p_k$ are $\mathcal{F}_{k-1}$-measurable, it follows that $\mathbb{E}_\Pi[\xi_k (x_k - p_k) \mid \mathcal{F}_{k-1}] = 0$, where the expectation is taken with respect to $\Pi$, so that $Y_n$ is a martingale adapted to $\mathcal{F}_{k-1}$. Also, because $x_k$ is an indicator variable, the increments are bounded, and so $\mathbb{E}_\Pi[Y_n^2]$ is uniformly bounded above by some $M$ such that $\mathbb{E}_\Pi[Y_n^2] \le M < \infty$. Then, by the martingale convergence theorem, $Y_n$ converges with probability one, which implies from Kronecker's lemma that, with probability one, $\frac{1}{\nu_n} \sum_{k \le n} \xi_k (x_k - p_k) \to 0$, where $\nu_n \to \infty$; with the selection criterion $p_k = p$ on the test set, this yields $\rho_\nu \to p$.
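As a sanity check on this argument, the following sketch simulates forecasts and outcomes in which $x_k$ is indeed generated with probability $p_k$, and verifies numerically that the averaged martingale differences $\frac{1}{\nu} \sum_k \xi_k (x_k - p_k)$ drift to zero. The grid of forecast values and the inclusion rule are illustrative assumptions.

```python
import random

# A numeric sanity check of the quantity controlled in the proof above: the
# averaged martingale differences (1/nu) * sum(xi_k * (x_k - p_k)) should
# drift to zero when x_k is generated with probability p_k.
rng = random.Random(5)
total, nu = 0.0, 0

for k in range(500_000):
    p_k = rng.choice([0.2, 0.5, 0.9])      # forecast, drawn before the outcome
    xi_k = 1 if p_k == 0.5 else 0          # test-set inclusion, also decided before x_k
    x_k = 1 if rng.random() < p_k else 0   # outcome realized with probability p_k
    total += xi_k * (x_k - p_k)
    nu += xi_k

print(total / nu)  # close to 0, as the martingale + Kronecker argument predicts
```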
Proof of Lemma 4.5 Let $d_t$ be an event token at time $t$ and $\mathbb{P}(e \mid d)$ be the true probability of event type $E$ conditional on event type $D$, whose event tokens are denoted by $e_t$ and $d_t$, respectively. Then, by the definition of $D$ with respect to $E$, $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f$ with true probability one whenever $d_t$ occurs. Now, once $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ is learned as such at some $t$, then $d_t$ must have happened at that time, and so $\mathbb{P}(d_t) > 0$. Also, by 4.4, consider a subsequence of $t_i$'s where $\mathbb{P}(d_{t_i}) > 0$ for any $t_i$. Then, for this subsequence, $\mathbb{P}(d_{t_i} \mid d_{t_1}, \ldots, d_{t_{i-1}}) = \mathbb{P}(d_{t_i})$ for any $t_i$, because the $d_{t_i}$'s are independent of one another.
Here, the $d_{t_i}$'s are independent for the following reason: recall that, by definition, $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f$ with true probability one. Then, note that $\mathcal{F}_{t-1}$ includes the fact that $\mathbb{P}(e_s \mid \mathcal{F}_{s-1}) = f$ for some $s < t$. Now, without loss of generality, let $i = 2$. Thus, we obtain
(1) $d_{t_1}, d_{t_2} \in \mathcal{F}_{t_2}$.
Now that $d_{t_1}$ and $d_{t_2}$ are all included in $\mathcal{F}_{t_2}$ by (1), to show that the $d_{t_i}$'s are independent, we need to prove that
(2) $\mathbb{P}(d_{t_2} \mid d_{t_1}) = \mathbb{P}(d_{t_2})$.
But (2) is satisfied because the $\mathcal{F}_t$'s consist of true facts rather than knowledge, so by 4.2 and 4.3 the occurrence of $d_{t_1}$ does not reinforce the occurrence of $d_{t_2}$.
Now that $\mathbb{P}(d_{t_i}) > 0$ for any $t_i$ in this subsequence, we can always find some small enough $\delta > 0$ such that $\mathbb{P}(d_{t_i}) \ge \delta$. Therefore, the probability of the elements in this subsequence does not vanish to zero, which implies that $\sum_i \mathbb{P}(d_{t_i}) = \infty$. Since the $d_{t_i}$'s are independent, by the second Borel–Cantelli lemma, $\mathbb{P}(d_{t_i} \text{ i.o.}) = 1$, which means $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) = f$ infinitely often with true probability one, for the desired result.
Proof of Theorem 4.6 Suppose that, for infinitely many $t$'s when $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ stays the same as $f$, machines learn this $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ as $f$ at time $t$. Then, by the Success Criterion (1), $\Pi_t(e_t) = \mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ at least infinitely often out of those infinite opportunities at the $t$'s to learn. (We prove in Corollary 4.37 what we mean exactly by "most of the time." Here we tentatively mean "at least infinitely often" by it, because machines are otherwise wrong too often to learn, given the Success Criterion (1).) Thus we can construct a test set which consists of the subsequence of $t$'s at which $\Pi_t(e_t)$ is equal to $f$, for those infinitely many $t$'s. Let $\xi_t = 1$ if and only if $\Pi_t(e_t) = f$. Note that $\xi_t$ is $\mathcal{F}_{t-1}$-measurable, because machine forecasting occurs at time $t-1$. Then, by Theorem 4.1, with true probability one, $\rho_\nu \to f$ as $\nu \to \infty$, where $\rho_\nu$ is defined over the test set and $\mathcal{F}_t$ denotes the totality of true facts up to day $t$.
Proof of Lemma 4.10 Clearly, if $\rho_\nu \to p$ with probability one, then $\mathbb{E}[\rho] = p$, where the mathematical expectation is taken with respect to the true probability $\mathbb{P}$, but not vice versa. The reverse does not necessarily hold, because $\mathbb{E}[\rho] = p$ even though $\rho_\nu$ converges to $p + \epsilon$ and $p - \epsilon$ with equal probability as $\nu \to \infty$. However, with probability one, $\rho_\nu \to p$ if and only if $\mathbb{E}[\rho] = p$, for the following reason: letting $A$ denote the event that $\rho_\nu \to p$ as $\nu$ goes to infinity, $\mathbb{E}[\rho] = \mathbb{P}(A)\,a + (1 - \mathbb{P}(A))\,b$, where $a$ denotes the value of $\rho$ when $A$ occurs, while $b$ denotes the value of $\rho$ when $A$ does not occur. Here, the "if" part is clear. For the "only if" part, if $\mathbb{P}(A) < 1$, then $a = p$ while $b \neq p$, which implies that $\mathbb{E}[\rho] \neq p$.
Proof of Lemma 4.11 By Fatou's lemma, $\mathbb{E}[\liminf_\nu \rho_\nu] \le \liminf_\nu \mathbb{E}[\rho_\nu] \le \limsup_\nu \mathbb{E}[\rho_\nu] \le \mathbb{E}[\limsup_\nu \rho_\nu]$. Now, since $\rho = \lim_\nu \rho_\nu$ exists by the assumption, the outer bounds coincide at $\mathbb{E}[\rho]$. Then, by the squeeze theorem, $\lim_\nu \mathbb{E}[\rho_\nu]$ also exists and thus equals $\mathbb{E}[\rho]$. Now, by the law of iterated expectations, $\mathbb{E}[\xi_k x_k] = \mathbb{E}[\xi_k\,\mathbb{P}(e_k \mid \mathcal{F}_{k-1})]$. Therefore, $\mathbb{E}[\rho] = p$ if and only if $\lim_\nu \mathbb{E}[\rho_\nu] = p$, which gives the general claim that $\mathbb{E}[\rho] = \lim_\nu \mathbb{E}[\rho_\nu]$.
Proof of Lemma 4.15 Consider a simple two-player game $G = (N, (S_i)_{i \in N}, (u_i)_{i \in N})$ between Nature (player $n$) and a representative machine (player $m$), where $N = \{n, m\}$ is the set of players, $S_i$ is the set of pure strategies $s_i$'s for each player $i$, and $u_i$ is the usual payoff function for player $i$. Since this is a probabilistic forecasting game, the pure strategy for each player could be any number in $[0, 1]$. But since the numbers computable by player $m$ are countably many, we restrict each $S_i$ to be countable. For simplicity, let $u_i \in \{0, 1\}$. In other words, for each profile $(s_n, s_m)$, if player $i$ wins, she obtains $1$, while she obtains $0$ otherwise. When Nature (player $n$) succeeds in deviating from the machine forecast, Nature wins. Otherwise, the machine (player $m$) wins. Thus, this is a kind of matching game with a countably infinite state space.
First, let us note that the structure of the forecasting game is given to Nature, because the structure itself is something objective about the world and thus belongs to the realm of Nature herself. In other words, it is certain to Nature whether Nature and the machine move simultaneously or not in the game, as follows: if the machine moves when Nature herself has not yet moved, then it is certain to Nature that the machine moves first and thus that it is not a simultaneous game. If the machine has not yet moved when Nature has not moved either, then it is certain to Nature that the machine does not move first, and thus whether it is a simultaneous game or not depends on Nature herself. If Nature reveals herself to the machine even before the machine moves, so that the machine can move after observing Nature's move, it is certain to Nature that it is not a simultaneous game. Otherwise, it is certain to Nature that it is a simultaneous game.
The proof of the "only if" part: first, let us fix the machine forecast $\Pi_t(e_t)$ as $f$ and then consider the relevant test set. Now, suppose that the forecasting game along the stochastic path of this test set is not a simultaneous-move game at time $t$. Then, either Nature or the machine moves first, and the other moves later after observing what move the opponent takes. Thus, the one who can observe the opponent's move can control her/its own forecasting to win the game, and so $C_t$ occurs or does not occur at time $t$ for sure, which is certain to Nature because the structure of the game is given to Nature. Then, since $\mathcal{F}_t$ includes $C_t$ or $C_t^c$ as part of the true facts by 4.2, $\alpha_t = \mathbb{P}(C_t) = 1$ or $\alpha_t = 0$. Thus, it is either $\alpha_t = 1$ or $\alpha_t = 0$, respectively, according as Nature moves first or the machine moves first. Therefore, the true second-order probability $\alpha_t$ is not strictly between $0$ and $1$.
The proof of the "if" part: again, let us fix the machine forecast $\Pi_t(e_t)$ as $f$ and then consider the relevant test set. Now, suppose that the forecasting game is a simultaneous-move game at time $t$. Then, for any fixed value $f$, it is not certain to Nature herself whether $\mathbb{P}(e_t \mid \mathcal{F}_{t-1}) \neq f$ or not, because there exists no pure-strategy Nash equilibrium in this simultaneous matching game. Thus, Nature cannot certainly control $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$ to make it deviate from $\Pi_t(e_t)$, and so we obtain
(3) $\alpha_t > 0$.
(3) holds even though the $\mathcal{F}_t$ in (3) includes $C_t$ or $C_t^c$ as part of the true facts by 4.2, if either of them indeed occurs at $t$. By the same logic, it is not certain to Nature that the machine can control $\Pi_t(e_t)$ to make it coincide with $\mathbb{P}(e_t \mid \mathcal{F}_{t-1})$, and so we obtain
(4) $\alpha_t < 1$.
Clearly, any mixed-strategy Nash equilibrium, if any, will lead to $0 < \alpha_t < 1$. Therefore, there exists a true second-order probability $\alpha_t$ such that $0 < \alpha_t < 1$.
Furthermore, if Nature moves first, then $\alpha_t = 1$, as we proved in the "only if" part. Therefore, if the machine does not move first, which amounts to either that Nature moves first or that the machine moves simultaneously with Nature, then clearly $\alpha_t > 0$.
Proof of Theorem 4.16 Consider the necessary condition (2): if a machine learns the true objective probability, then its forecast is congruent with that probability. Since this is just a necessary but not a sufficient condition, the converse of (2) does not necessarily hold. Now, for any machine forecast, suppose that the forecast matches the true probability at infinitely many times $t$ along the stochastic path where the associated events occur, but that it deviates from the true probability at infinitely many other times. Then, by Theorem 4.19, Nature is perverse with respect to some event. Thus, by (Case 3) of Theorem 4.17 and Theorem 4.6, the machine cannot learn the true probability, even though its forecast is correct at infinitely many times. Thus, the machine does not learn that it wins even though it indeed wins at those times. Clearly, the machine also does not learn whether it wins at the other times, when it loses. Now, since the machine does not learn whether it wins or not at each round of the game, the machine does not learn what its payoff is at each round. Furthermore, the machine is truly guaranteed to be well-calibrated along the path of the matching times, and so it has a winning strategy in the forecasting game between Nature and the machine (e.g., (Foster & Vohra, 1993)), but the machine still cannot learn the true probability. Thus, in this case, a winning strategy is not equivalent to a learning strategy.
Proof of Theorem 4.17 First, let us recall the following: by Nature's perversity with true probability, we mean that the meta-event $A_t$ occurs at least infinitely often with positive true probability for any fixed forecast $p$. Here, $A_t$ denotes the meta-event that $P(e_t \mid \beta_{t-1}) \ne p$ for any event $e_t$ at time $t$, for such a fixed forecast $p$. Given this, let us consider the following three cases, according as how $P(e_t \mid \beta_{t-1})$ actually varies with respect to $p$ along the path of the test set. Case 3 amounts to Theorem 4.17.
Case 1 Let us suppose that $P(e_t \mid \beta_{t-1}) \ne p$ for only finitely many $t$'s along the stochastic path. Now, as in Theorem 4.1, let $I_{e_k}$ denote the indicator of the event $e_k$. But, unlike in Theorem 4.1, here we do not require the machine forecast to equal $P(e_k \mid \beta_{k-1})$ for all $k$ along the stochastic path, not necessarily restricted to the test set. Now, consider those finitely many $t$'s for which $P(e_t \mid \beta_{t-1}) \ne p$ and denote the largest among them by $T$. Then, $P(e_k \mid \beta_{k-1}) = p$ for all $k > T$ along the stochastic path. Thus, the normalized partial sums of the forecast errors, where the expectation is taken with respect to the true probability, form a martingale adapted to $\beta_{k-1}$ along the path. Then, by the martingale convergence theorem and Kronecker's lemma, the empirical frequency along the test set converges to $p$ with true probability one.
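The martingale step cited here can be sketched in the standard way (a reconstruction assuming Dawid's (1982) setting, with $I_{e_k}$ the indicator of $e_k$): for $n > T$, let

$M_n = \sum_{k=T+1}^{n} \frac{I_{e_k} - P(e_k \mid \beta_{k-1})}{k}, \qquad E\big[M_n \mid \beta_{n-1}\big] = M_{n-1}.$

Since $\sup_n E[M_n^2] \le \sum_{k > T} k^{-2} < \infty$, $M_n$ converges with true probability one by the martingale convergence theorem, and Kronecker's lemma then yields

$\frac{1}{n} \sum_{k=T+1}^{n} \big( I_{e_k} - P(e_k \mid \beta_{k-1}) \big) \longrightarrow 0,$

so the empirical frequency inherits the limit $p$, because $P(e_k \mid \beta_{k-1}) = p$ for all $k > T$.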
Case 2 Let us consider the case where, with positive true probability, $P(e_t \mid \beta_{t-1})$ deviates from $p$ in the way described by Oakes (1985) along the test set. Then the calibration property is not truly guaranteed, for the following reason: Let $D$ be the event that $P(e_t \mid \beta_{t-1})$ deviates from $p$ in the way described by Oakes (1985) along the test set. Then, since some subsequence of the indicators along the test set forms a Bernoulli sequence whose relative frequency converges to a value other than $p$, the empirical frequency does not converge to $p$ when $D$ occurs. Now, let $a$ be the value of the limit when $D$ occurs, and $b$ its value when $D$ does not occur along the test set. Then, by the same logic as in Lemma 4.11, we obtain that the expected limit is a strict mixture of $a$ and $b$, and hence deviates from $p$. Thus, the calibration property is not truly guaranteed. However, the converse does not hold, for there are many other ways for the empirical frequency to fail to converge to $p$ than the one in Oakes (1985). Hence it does not follow that Nature behaves as in Oakes (1985) even if the calibration property is not truly guaranteed.
Now, suppose that, with positive subjective probability, $P(e_t \mid \beta_{t-1})$ behaves as in Oakes (1985). Then, again by the same logic as in Lemma 4.11, we obtain that the expected limit deviates from $p$, where the expectation is now taken with respect to the subjective probability. Hence the machine cannot be self-assured of well-calibration. Therefore, we conclude that if Oakes (1985) holds with positive subjective probability, then Dawid (1982) does not hold, which amounts to the proof of Theorem 4.8.
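To see concretely how an Oakes-style Nature defeats calibration, here is a minimal Python simulation (an illustration of the counterexample's mechanism only; the forecaster and Nature's rule are the textbook versions, not the paper's formal objects). The machine forecasts by the running empirical frequency; Nature, moving after the forecast, realizes the event exactly when the forecast falls below 1/2:

```python
n = 10_000
ones = 0
history = []  # (forecast, outcome) pairs

for t in range(1, n + 1):
    forecast = 0.5 if t == 1 else ones / (t - 1)  # empirical-frequency forecaster
    outcome = 1 if forecast < 0.5 else 0          # Oakes-style adversarial Nature
    history.append((forecast, outcome))
    ones += outcome

low = [y for f, y in history if f < 0.5]    # rounds with forecast just below 1/2
high = [y for f, y in history if f >= 0.5]  # rounds with forecast at/above 1/2
print(f"frequency when forecast <  0.5: {sum(low) / len(low):.3f}")
print(f"frequency when forecast >= 0.5: {sum(high) / len(high):.3f}")
```

The first group shows empirical frequency 1 and the second 0, even though all forecasts converge to 1/2, so no selection of rounds by forecast value is calibrated.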
Case 3 In general, suppose that the true probability of Nature's being perverse is not zero for any fixed forecast $p$ on any associated events. In other words, suppose that, at least along the test set, the meta-event $A_t$ that $P(e_t \mid \beta_{t-1}) \ne p$ occurs infinitely often with positive true probability. Then, we claim that this implies that the expected limit of the empirical frequency deviates from $p$, where the expectation is taken with respect to the true probability.
First, suppose that the limit of the empirical frequency exists. Also, suppose that the limit equals $p$, because (Case 3) trivially holds if the limit deviates from $p$. Now let us consider an infinite subsequence of events that is conditionally identically distributed along the test set where $A_t$ occurs at least infinitely often. We can do this by Kolmogorov axioms 1 and 2 and Lemma 4.5, for the following reason: note that by Kolmogorov axioms 1 and 2 there always exists one event type such that its probability is well defined for any type event, given that there exists a probability of the type event, if any. Then, for this type, the probabilities of the associated tokens agree according to Lemma 4.5. Thus, we have found one subsequence of events that is conditionally identically distributed. Now, fix a value $q$. Also, without loss of generality, suppose that $q \ne p$. Since $q$ is arbitrary, from this subsequence we can consider another subsequence whose true probability equals $q$ along the stochastic path of the test set in which $A_t$ occurs at least infinitely often.
For reductio, let us suppose that Nature deviates by picking numbers from uncountably many values in such a way that every value she picks is equal to the machine forecast for at most finitely many $t$'s along the test set, with true probability one. In other words,
(5) For each value $d$ that Nature picks, where $d \ne p$, $P(e_t \mid \beta_{t-1}) = d$ for at most finitely many $t$'s along the path of the test set where $A_t$ occurs at least infinitely often, with true probability one.
Note that there must then be a countably infinite number of different values in (5). Let us denote the value Nature picks at each time $t$ along the path by $d_t$, indexing from $t = 1$ without loss of generality. Now, recall that the limit of the empirical frequency is assumed to exist along the stochastic path of the test set. Thus, inspired by this assumption, let us further assume that the limiting average of Nature's values $d_t$ exists along the path of the test set. Then, comparing the two limits by the same martingale logic as before,

(6) $\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} I_{e_k} \;=\; \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} d_k$, with true probability one.

Thus,

(7) $\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} I_{e_k} = p$, if and only if, $\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} d_k = p$.
In other words, if Nature deviates from the machine forecasts by the values $d_t$ so that her deviating forecasts on average satisfy (7) under (5), then the empirical frequency converges to $p$, and thus the test set is truly guaranteed to be well-calibrated. But Nature then loses the repeated forecasting game along the path in the long run. So Nature has no reason to behave in this way with true probability one. Let us then consider the following three cases:
(Case i) $\mathbb{P}\big(P(e_t \mid \beta_{t-1}) = f_t\big) = 0$ at least infinitely often.
In this case, by Lemma 4.15, the machine moves first, so Nature observes the machine forecast at each time $t$ whenever the machine announces its forecast as $p$.
Now that $P(e_t \mid \beta_{t-1}) \ne f_t$ at least infinitely often, Nature would choose the deviating value $d_t$ in such a way that she does not allow (7) to hold, with true probability one. Thus,
(8) $\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} d_k \ne p$, with true probability one.
In other words, since Nature observes the machine forecast at every time, she would deviate from each forecast at each $t$ in such a way that (8) holds in the end. Otherwise, the empirical frequency would converge to $p$, so Nature would lose in the long run. Therefore, we conclude due to (8) that the test set is not truly guaranteed to be well-calibrated in case (i).
(Case ii) $\mathbb{P}\big(P(e_t \mid \beta_{t-1}) = f_t\big) = 1$ at least infinitely often.
In this case, by Lemma 4.15, Nature moves first, so the machine cannot fail to match $P(e_t \mid \beta_{t-1})$. But then, $P(e_t \mid \beta_{t-1}) = f_t$ at least infinitely often, whereas (5) requires each of Nature's values to be matched at most finitely often, which contradicts (5). Therefore, we exclude case (ii) under (5).
(Case iii) $0 < \mathbb{P}\big(P(e_t \mid \beta_{t-1}) = f_t\big) < 1$ at least infinitely often.
In this case, by Lemma 4.15, Nature moves simultaneously with the machine, so Nature has no reason to pick any particular $d_t$ at each $t$, for there exists no pure-strategy Nash equilibrium. Hence any combination of deviating values is equally likely. Now, without loss of generality, let us fix the candidate deviating values at each $t$. Then we claim that

(9) the running average of Nature's deviating values stays bounded away from $p$,

where the deviating values are drawn from some countably infinite set of real numbers in $[0, 1] \setminus \{p\}$, the unit interval without $p$.
First, recall that the limiting average of the deviating values is assumed to exist. Then, by the definition of the limit, for every $\epsilon > 0$ there exists $N_1$ such that the running average $\frac{1}{n} \sum_{k \le n} d_k$ stays within $\epsilon$ of its limit for all $n \ge N_1$, and there exists $N_2$ such that the empirical frequency stays within $\epsilon$ of its own limit for all $n \ge N_2$. Now, letting $N = \max(N_1, N_2)$,

(10) $\Big| \frac{1}{n} \sum_{k=1}^{n} d_k - p \Big| > \delta$ for some $\delta > 0$ and all $n \ge N$, in accordance with (9).
Therefore, we again obtain (8) by (10). Now, we have considered all possible cases under (5), and all of them lead to the failure of truly guaranteed well-calibration. But this result is what we are trying to show in this proof anyway. Therefore, to continue the proof, let us accept that, with true probability one, there exists a set of values each of which Nature picks infinitely often.
Now, note that when Nature picks the value $p$ for all $t$'s along the test set, $P(e_t \mid \beta_{t-1}) = p$ for all $t$'s along the test set. Then, since the deviation persists, $P(e_t \mid \beta_{t-1}) = q \ne p$ for all $t$'s along some subsequence of the test set. Thus, since we have found one such conditionally identically distributed subsequence along the test set with positive true probability, and the limit exists, the empirical frequency converges to $q$ along that subsequence of the test set. Then, by the same reasoning as in Lemma 4.10, the limit deviates from $p$ on that subsequence. Now, by Lemma 4.11, we obtain that the expected limit deviates from $p$ when the limit exists. Clearly, the calibration property also fails when the limit does not exist.
Therefore, we conclude that if $A_t$ occurs at least infinitely often with positive true probability, then the test set is not truly guaranteed to be well-calibrated.
Proof of Theorem 4.19 First, let us note that, with positive probability, $P(e_t \mid \beta_{t-1}) < 1$ at least infinitely often for some event $e$. Otherwise, beyond the near future, all events would certainly continue to occur, with probability one, and thus there would be no uncertainty about any of them. Now, if this were the case, then we would have to stop here and simply conclude that no machine can learn the true probability of any event, simply because there is no uncertainty for any machine to measure by the true probability in our world. Therefore, to continue to prove our main claim, we accept that $P(e_t \mid \beta_{t-1}) < 1$ at least infinitely often for some event $e$. Now, let us consider the test set where the machine assesses the probability of that event as $p = 1$. Then, along the stochastic path of this test set, $P(e_t \mid \beta_{t-1}) \ne p$ at least infinitely often. Therefore, we have found some $p$ for which Nature is perverse with positive true probability.
Now, suppose that, for any $p$, $P(e_t \mid \beta_{t-1}) \ne p$ at least infinitely often with positive true probability. In other words, suppose that Nature deviates from every machine forecast at least infinitely often. Then, the meta-event $A_t$ occurs at least infinitely often for every fixed $p$. Thus, by Definition 4.18, Nature is uniformly perverse, which again means by Definition 4.13 that Nature is perverse for any $p$.
Proof of Theorem 4.20 Suppose that, for any $p$, $P(e_t \mid \beta_{t-1}) \ne p$ at least infinitely often with positive true probability. Then, by Theorem 4.17 and Theorem 4.19, no test set is truly guaranteed to be well-calibrated, and so the expected limit of the empirical frequency deviates from $p$ for any $p$, where the true objective probability is defined over the histories $\beta_t$ and the expectation is taken with respect to this true probability. Then, by Theorem 4.6, the machine cannot learn the true objective probability $P(e_t \mid \beta_{t-1})$.
Proof of Lemma 4.23 Suppose that the machine effectively calculates its forecast as $p$ with the goal of learning the true value of $P(e_t \mid \beta_{t-1})$. Then, by the necessary condition for learning, the machine must return a forecast congruent to the true probability in order to achieve this goal. Now, suppose further that the machine at the same time assigns a positive subjective probability to the event that $P(e_t \mid \beta_{t-1}) \ne p$. Then the machine tolerates error, by Definition 4.21.
However, by Theorem 4.6, the machine cannot tolerate errors infinitely often while achieving this goal of learning, for the following reason: for any $\epsilon > 0$, suppose that the machine calculates the true probability as $p$ but tolerates error infinitely often. Now, since the forecast must be congruent to the true probability in order to learn it, it must also be, by Theorem 4.6, that the empirical frequency along the test set converges to $p$ with true probability one. But now, by assumption, the machine tolerates error infinitely often, which implies that it assigns positive subjective probability to $P(e_t \mid \beta_{t-1}) \ne p$ at least infinitely often. But this yields a contradiction by the same reasoning as in the proof of (Case 3) in Theorem 4.17, with the subjective probability replacing the true one, and so the machine cannot learn the true probability, by Theorem 4.6. Therefore, the machine cannot tolerate errors infinitely often if it aims to learn the true probability. Since $\epsilon$ was arbitrary, let $\epsilon \to 0$, which gives the desired result.
Proof of Lemma 4.28 (i) Proof of “if” part: suppose that there exists a stopping time $\tau$ for some forecast $p$ such that Nature ceases to deviate from $p$ from $\tau$ onward, while there exists no stopping time for any other forecast, so that Nature deviates from every other forecast at least infinitely often. Then, by the definition of $\tau$ and the law of iterated expectations,

(11) the true probability that Nature deviates from $p$ infinitely often is zero, because no deviation occurs from $\tau$ onward.

Now that the meta-event in question is the event that the deviation occurs at least infinitely often, and given that the relevant limit exists,

(12) Nature is not perverse with respect to the forecast $p$.

Also, by the same logic as for (12),

(13) Nature deviates from any other forecast at least infinitely often, and so is perverse with respect to it.

Thus, by Definition 4.25, Nature is selectively perverse.
(ii) Proof of “only if” part: suppose that Nature is selectively perverse. Then, by Definition 4.25, there must exist some $p$ such that Nature deviates from it at most finitely often. Now, for reductio, suppose that for any such $p$ there exists no stopping time, so that $P(e_t \mid \beta_{t-1}) \ne p$ at least infinitely often. In other words, Nature either keeps changing her mind infinitely often between perversity and non-perversity, or keeps being perverse all the way through. Then, by the law of iterated expectations, the deviation occurs at least infinitely often, which contradicts the selective perversity of Nature by the same reasoning as in (13).
Proof of Lemma 4.29 For any given $p$ with which Nature is not perverse with true probability one, there exists a stopping time $\tau$ for this $p$, by Lemma 4.28. Now, by assumption, the machine learns that $p$ is the true probability. Thus, $p$ is congruent to the true probability, by the necessary condition for learning. Then, by Lemma 4.23 and the same reasoning as for (11) in the proof of Lemma 4.28, the machine is self-assured that the stopping time $\tau$ arrives.
Proof of Corollary 4.30 (i) Suppose that Nature is selectively perverse, so that there exists a stopping time $\tau$ for some $p$, by Lemma 4.28. However, since the machine is assumed not to be self-assured that the stopping time arrives for that $p$, the machine cannot learn that $p$ is the true probability, by Lemma 4.29.
(ii) Now, note that if the machine learns the true probability as $p$, the machine also learns that the stopping time arrives, in the following way: first, by Theorem 4.6 and (Case 3) in Theorem 4.17, machine learning of the true probability as $p$ mathematically implies that Nature deviates from $p$ at most finitely often. Thus, once the machine learns the true probability as $p$, it cannot fail to effectively calculate the true probability as $p$, following Theorem 4.6 and (Case 3) in Theorem 4.17 as instructions. Then, by Definition 2.2, the machine learns in particular that the deviations stop from some time onward, so that the stopping time $\tau$ arrives, while following the law of iterated expectations as an instruction. However, as we proved in (i), the machine cannot learn that the stopping time arrives. Hence we conclude that the machine cannot learn the true objective probability as $p$ either.
Proof of Lemma 4.31 Suppose that the machine is not self-assured of the stopping time $\tau$ for a given forecast $p$. Then,

(14) the machine assigns a positive subjective probability to the event that the stopping time never arrives.

Now that Nature deviates from $p$ at least infinitely often when no stopping time arrives,

(15) the machine assigns a positive subjective probability to the event that $P(e_t \mid \beta_{t-1}) \ne p$ at least infinitely often.

Then, since the deviation occurs at least infinitely often under that event,

(16) the machine's forecast fails to match $P(e_t \mid \beta_{t-1})$ at some time $t$, with positive subjective probability.

Now, note that along the stochastic path considered in Corollary 4.30, $P(e_t \mid \beta_{t-1}) \ne p$ at least infinitely often. Now, for this path,

(17) Nature deviates from the machine forecast at least infinitely often.

Therefore, without loss of generality, letting $t$ be one of the deviating times,

(18) $P(e_t \mid \beta_{t-1}) \ne f_t$ at $t$, with positive true probability.

Then, without loss of generality, consider the machine forecast at $t + 1$ by (18). Thus, (16) and (18) lead to the desired result by Lemma 4.15.
Proof of Theorem 4.32 Suppose that the machine learns the true probability. Since the machine cannot learn if Nature is uniformly perverse, Nature must then be selectively perverse, so that the stopping time $\tau$ exists by Lemma 4.28. Then, by part (ii) of Corollary 4.30 and Lemma 4.29, the machine is self-assured of the stopping time when $\tau$ exists. We now finish the proof of Theorem 4.32 by showing that, if the machine learns the true probability, the machine is not self-assured of the stopping time when no such $\tau$ exists.
Suppose that the machine is self-assured of the stopping time even though no such $\tau$ exists. The machine is then wrong about $\tau$, so it cannot learn the true probability along the path where Nature deviates from the forecast at least infinitely often, for the following reason: first, by Lemma 4.28, with positive true probability, Nature is perverse to the forecast along the path where there is no stopping time $\tau$. Thus, $P(e_t \mid \beta_{t-1}) \ne p$ at least infinitely often for such a forecast $p$. Then, by (Case 3) of Theorem 4.17 and then Theorem 4.6, the machine cannot learn that $p$ is the true probability. In other words, the world does not exist in such a way that Nature allows the machine to learn the true probability. Notwithstanding, the machine holds a wrong belief about the stochastic path of the true probability, and so it cannot learn the true probability.
Proof of Theorem 4.35 Suppose that the machine is self-assured of the stopping time $\tau$ along the path where, for any given $p$, Nature ceases to deviate from $p$ from $\tau$ onward. Then, along this path, the machine obtains its forecast congruent to the true probability, and so its forecast equals $P(e_t \mid \beta_{t-1})$, by Lemma 4.23. Now, by the definition of $\tau$ and Lemma 4.23 again, the machine's assessed value equals the true probability from $\tau$ onward, for some assessed value $p$. Note also that the true probability equals $p$ from $\tau$ onward along this path. Thus,

(19) $P(e_t \mid \beta_{t-1}) = p$ for all $t \ge \tau$, with true probability one.
Then, as in Theorem 4.6, we can construct a test set along the stochastic path by using the assessed value $p$ as a selection criterion, by (19). This test set is also truly guaranteed to be well-calibrated.
Thus, from this test set along the path, the machine obtains the following by Lemma 4.10 and Lemma 4.11:

(20) $\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} I_{e_k} = P(e_t \mid \beta_{t-1})$ along the test set, with true probability one.
Now, let us gather the sequence of events $e_k$ along the path and call this set a population. The machine then effectively calculates the true probability as the empirical distribution of this population by (20), which satisfies part (i) of Definition 4.34. Also, this effective calculation of the empirical distribution must be successful in returning the true probability, for the right-hand side of (20) is equal to $p$ by (19), which satisfies part (ii) of Definition 4.34. Therefore, by Definition 4.34, the machine directly observes the true probability as $p$.
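As a toy illustration of “direct observation” by empirical frequency (all names hypothetical; the population and the indicator values stand in for the events $e_k$ and their attributes in Definition 4.34):

```python
import random

random.seed(0)

TRUE_P = 0.3  # hypothetical true objective probability of the attribute
# A given population of event tokens, each carrying an indicator value
# recording whether the token has the attribute of interest.
population = [1 if random.random() < TRUE_P else 0 for _ in range(100_000)]

# The machine "directly observes" the probability by effectively
# calculating the empirical frequency over the given population.
empirical = sum(population) / len(population)
print(f"empirical frequency = {empirical:.4f} (true p = {TRUE_P})")
```

With the whole population available, the calculation cannot fail to return the true value (approximately here, and exactly in the idealized limit), which is the content of parts (i) and (ii) of Definition 4.34.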
Proof of Theorem 4.36 (i) Proof of “if” part: follows directly from Theorem 4.32 and Theorem 4.35.
(ii) Proof of “only if” part: suppose that the machine directly observes the true probability as $p$ from the given population at some time $t$. The machine then effectively calculates the true probability as $p$ at $t$, while adopting the following as an instruction: recall that the given set consists of the sequence of events $e_k$'s, with the size of the set potentially infinite. Since the set is available in principle to the machine by part (i) of Definition 4.34, there must exist some rule on how to collect the available set of events. Then let the machine build up the population by collecting events while following that rule. Now, once the events are collected by the machine to constitute the set, it must have been observed whether each event has a certain attribute of interest or not, and so a value of the indicator variable must have been assigned accordingly to each event by the machine. Then, let the machine calculate the probability as the empirical frequency of those indicator values. Therefore, the machine effectively calculates the true probability as $p$.
Furthermore, note that the empirical frequency is defined to be the true probability at $t$ by part (ii) of Definition 4.34. The machine then cannot fail to compute the true probability as $p$ from the population. Therefore, the machine learns the true probability as $p$, by Definition 2.2.
Proof of Corollary 4.37 Let us first define what we mean by “most of the time” in the Success Criterion (1). By Lemma 4.10 and Theorem 4.17, machines cannot satisfy the calibration property when the test set is constructed by the selection criterion of an assessed probability $p$ if $P(e_t \mid \beta_{t-1}) \ne p$ at least infinitely often. Therefore, in order to learn, machines must return the correct calculations except for a finite number of times out of infinitely many opportunities to learn. Thus, “most of the time” in the Success Criterion (1) should mean “all but finitely often out of infinitely many opportunities to learn”: machines must be correct not merely infinitely often while also being wrong that often.
Now suppose that the machine is correct most of the time when it aims to learn the true probability $P(e_t \mid \beta_{t-1})$. Then, by (Case 1) in Theorem 4.17, $P(e_t \mid \beta_{t-1}) \ne p$ at most finitely often. Thus, there exists a stopping time $\tau$, because the deviation occurs at most finitely often if and only if there exists a stopping time for any machine forecast, by Lemma 4.28. Furthermore, suppose that the machine is self-assured that it is correct most of the time. Then, again by Lemma 4.28, the machine is self-assured that the stopping time $\tau$ arrives. Thus, if the machine satisfies the Success Criterion (1), then it satisfies the condition of Theorem 4.35. Therefore, if the machine satisfies the Success Criterion (1), it can learn the true probability by Theorem 4.35 and Theorem 4.36.
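Under this reading, “most of the time” admits a compact formal statement (in generic notation, assumed here since the paper's own symbols do not survive above): the machine's calculation $f_t$ is correct all but finitely often iff

$\exists N \; \forall t \ge N: \; f_t = P(e_t \mid \beta_{t-1}),$

which is strictly stronger than being correct merely infinitely often, $\forall N \; \exists t \ge N: f_t = P(e_t \mid \beta_{t-1})$, since the latter permits infinitely many errors as well.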
Appendix B Some Literature for the Necessary Condition in Sec. 3.2
There has been a large literature in logic and economics whose discussion bears on when a machine holds a true belief in a probabilistic proposition. For example, while defining the concept of rationality in economic models, (Cogley & Sargent, 2008, 2009), (Sandroni, 2000), (Blume & Easley, 2006, 2008), and many others stipulate that an agent is rational when his/her partial beliefs are correct in the sense that his/her subjective probability distributions are congruent to the true probability distribution that Nature identifies as such. In other words, a machine holds such a true belief when it is rational, which entails that its subjective probability is equal to the true objective probability.
Also, in probabilistic logic, (Nilsson, 1986), (Halpern & Fagin, 1994), and many others follow the probabilistic version of the Tarskian semantic theory of truth in the following way: a formula describing the subjective probability of an agent is true when the agent's probability assignment corresponds to what the sentence in fact represents. For example, in (Halpern & Fagin, 1994), a formula asserting that the event $\varphi$ is at least twice as probable as $\psi$ is true if, according to the probability assignment of the agent, $\varphi$ is indeed at least twice as probable as $\psi$. Now, if we extend this idea to the true objective probability, if any, a formula such as $P_a(e) = P_N(e)$, where $P_a$ denotes the probability operator of the agent $a$ and $P_N$ that of Nature, is true when, according to the agent's probability assignment, the event $e$ is exactly as probable as what Nature assigns to $e$ as the true probability value in our world.
It is worth noting, from the economics literature, when it becomes true that agent $a$'s partial belief in the event $e$ has a degree that corresponds to the true objective probability. This is indeed the case when the subjective probability of the agent is in congruence with the true objective probability, which again makes the formula $P_a(e) = P_N(e)$ true. Therefore, the condition for any agent (or any machine, in our context) to be rational in economics is equivalent to the truth condition for the formula in probabilistic logic.
Appendix C Justifications for the Three Assumptions
4.2 The conditions $\beta_t$ in $P(\cdot \mid \beta_t)$ are the set of all the true facts up to time $t$.
In other words, $\beta_t$ is the historical path of true facts up to time $t$. To recognize that 4.2 is reasonable, recall that we are dealing with objective probability true to our world. Therefore, its conditioning event must also be true in our world. Otherwise, $P(\cdot \mid \beta_t)$ cannot represent the true probability according to which the actual data are realized in our world. For example, if some special gravitational force were at work on Mars so that a fair coin landed on its edge as likely as on its head or tail, then the probability of the coin landing on heads conditional on this hypothesis would be 1/3. However, if such a special gravitational force does not actually exist on Mars, this conditional probability cannot be true either, because the data would not be realized according to the probability 1/3 in our world.
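For the record, the arithmetic behind the example: under the Martian hypothesis, head, tail, and edge are three equally likely outcomes, so

$P(\text{head} \mid \text{hypothesis}) = \frac{1}{1 + 1 + 1} = \frac{1}{3}.$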
4.3 No further knowledge requirement is imposed on the condition $\beta_t$.
To recognize that 4.3 is reasonable, note the following: if $\beta_t$ were the set of known facts, then $P(\cdot \mid \beta_t)$ could vary from person to person, as the set of events known to each person may differ, depending on who possesses what information. In order for $P(\cdot \mid \beta_t)$ to be objective, however, it should not depend on any particular person. Therefore, we require that $\beta_t$ consist of true facts, not necessarily known ones.
4.4 Once a probability of an event type is established, its associated event tokens $e_t$ occur at some infinite subsequence of times $t$, so that their true probability does not vanish to zero as $t \to \infty$.
Here, “event token” refers to an event that actually occurs at some specific time and place, while “event type” refers to the abstract object with no specific spatio-temporal location. For example, cloudy weather in Denver is an abstract event type with no time subscript, while cloudy weather in Denver on 29 May 2024 is a particular event token. Some literature (e.g., (Halpern, 2016)) deals mainly with probabilities of token events, while other literature (e.g., (Maher, 2010)) deals mainly with probabilities of type events. 4.4 establishes a connection between the probabilities of these two kinds of events.
To recognize that 4.4 is reasonable, consider the following example: suppose that we try to predict the probability that some person suffers from lung cancer caused by his/her smoking habit. As we discussed in the Introduction, this causal probability is objective and thus relevant to our discussion. Then, as long as the probability of the event type of having lung cancer from smoking is allowed to be considered for forecasting, we require that the true probability of the associated event tokens for some persons should not become identically zero from some time onward. In other words, although the true probability of such event tokens is allowed to be intermittently zero, it should not vanish to zero as $t \to \infty$.
It might be pointed out that a particular person, say Mary, will die at some time in the future, and that it will not make sense to consider the probability of Mary's suffering from lung cancer after that time. However, unless all generations of human beings suddenly become extinct in the near future, we can consider the true probability of this event token at least for some person at each time $t$. Hence it makes sense to forecast the probability of such an event token in each specific case.
Appendix D More Detailed Remarks
Remark 2.4 Now, let $\mathcal{F}_t$ be the sigma-field generated by the process up to time $t$, and let $h_t$ denote a partial history through date $t$. Then, for any probability measure on the space of paths, the measure of $h_t$ becomes the (marginal) probability of the partial history, and each forecast is assumed to be $\mathcal{F}_t$-measurable. Furthermore, when the relevant random variable takes only the values 0 or 1, it becomes an indicator function $I_e$ for an event $e$. Then, provided that there indeed exists any true objective probability, $P(e) = E[I_e]$, where the expectation is taken with respect to this true probability.
For example, let $X_t$ be an i.i.d. random variable whose value is $1$ if the event occurs at $t$ and $0$ otherwise. Then, $\sum_{k=1}^{t} X_k$ will be the number of events that have occurred up to time $t$. Since the $X_t$'s are i.i.d., $E[X_t]$ is the same across time. Now, let $\mu = \lim_{t \to \infty} \frac{1}{t} \sum_{k=1}^{t} X_k$ be the limiting ratio of events that ever occur. Then, provided that this limit indeed exists, the dominated convergence theorem and Fubini's theorem imply that $E[\mu] = E[X_1]$. Thus, in the i.i.d. case, we can derive that, with true probability one, the true objective probability of the event is the limiting relative frequency, which is objective.
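A quick numerical illustration of this i.i.d. case (hypothetical parameters, independent of the paper's formal machinery): simulating Bernoulli tokens and watching the relative frequency settle on the true probability.

```python
import random

random.seed(1)

TRUE_P = 0.62  # hypothetical true objective probability of the event type
count = 0
for t in range(1, 1_000_001):
    count += random.random() < TRUE_P  # X_t = 1 iff the event occurs at t
    if t in (10, 1_000, 1_000_000):
        print(f"t = {t:>9,}: relative frequency = {count / t:.4f}")
```

The printed frequencies drift toward 0.62, as the strong law of large numbers dictates for the i.i.d. case.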
By stipulating that the true objective probability follows the rule on how Nature generates each actual data point, we emphasize that the true probability here is something objective, not subjective, but no more or no less than that. “Nature” is just a metaphor for describing the relationship of true probability with our objective world. Adopting the widely accepted statistical notion of a data-generating process, we intend to use the term “Nature” to refer to whatever is supposed to govern the underlying true objective process to generate the actual data. Given that Nature is simply a metaphor, it is important to emphasize that, in order to prove the possibility or the impossibility of machine learning on the true objective probabilities, we do not need to commit ourselves to whether there really exists such a thing as a true objective process: probability might be merely something subjective which has nothing to do with “Nature.” If that is the case, then we conclude that no machines can learn the true objective probabilities simply because there exist no such things as true probabilities for machines to learn.
Remark 3.1 The standard theory of subjective probability was first developed by Ramsey and then further by De Finetti and Savage. Subjective probability is designed to represent a degree of belief possessed by a subject, say some person. Here, two words, degree and belief, deserve note. First, subjective probability represents some aspects of belief. However, belief is an inner thought that, in principle, resists direct observation, while probability quantification requires measurability. Note that the easiest method of measurement is observation. Thus, in order for the degree of belief to be quantified as a probability measure, the unobservable must be made observable. Here the relationship between unobservable belief and observable action comes in: belief causes action. According to (Ramsey, 1931), the strength of our belief can be judged in relation to how we should act in hypothetical situations. Given a preferential system on the lotteries over a set of conditions, the choice of action under hypothetical circumstances will reveal the degree of belief of the relevant agent. In this vein, subjective probability represents whatever is in anyone's mind about anything, as long as his/her belief system is coherent, and thus can even be assigned to what is merely imagined. For instance, while arguing for cogito, ergo sum, (Descartes, 2008) imagined some evil spirit that has devoted all its efforts to deceiving him. Descartes can then assign some subjective probability to this imagined evil spirit, according to how likely it is to him that the imagination can be realized in this world, as long as his belief system is coherent.
Second, it is assumed that the degree of belief ranges between 0 and 1. For example, your belief that there will be rain tomorrow has a degree strictly less than one and thus is called a partial belief, because you are not fully confident about future events. In addition to this quantitative usage of the term “belief”, however, there is another, categorical usage: “belief” refers to the proposition that something is the case, that something is not the case, or neither. For instance, your belief in the Moorean fact that here is one hand represents either the case or not, or it is under suspension. Compared to partial belief, this qualitative belief is called belief simpliciter. As the term “belief” has these two faces, the gradational quantitative one and the categorical qualitative one, numerical degrees are assigned to partial beliefs, while truth values are assigned to beliefs simpliciter. In this paper, we abbreviate belief simpliciter as “belief” and write partial belief as “partial belief”.
In contrast, objective probability, if any, is what must be determined by objective features of the world that do not vary from person to person. Following (Nagel, 1939) and (Carnap, 1963), we list chance, logical probability, and relative frequency as exhaustive examples of objective probability. The best way to clarify these concepts is to consider examples. Following (Maher, 2010), suppose that a coin has the same face on both sides, that is, it is two-headed or two-tailed. Provided further that it is completely uncertain which face value, head or tail, the coin has on both sides, the chance of getting a head when the coin is tossed is 1 or 0, while its logical probability is 1/2. Furthermore, when the coin is tossed infinitely often, its relative frequency surely converges to 1 or 0.
Here, the chance is either 1 or 0, depending on what our world is like, namely, on whether the coin is indeed two-headed or two-tailed. Therefore, the chance is objective in the sense that it depends on real features of the coin, not on any personal inner thought. On the other hand, the logical probability is 1/2, because it is logically implied by the given conditions that the coin has the same face value on both sides but that whether it is two-headed or two-tailed is completely uncertain. Therefore, logical probability is also objective in the sense that it depends on the logical features of our world, not on us. Clearly, the relative frequency is what our world turns out to be, not whatever we believe. However, no matter which interpretation of probability is adopted among these three kinds, it is important to note that the true objective probability in Definition 2.3 is a mathematical object that is supposed to represent any of them, as long as they satisfy the Kolmogorov axioms.
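The example can be made concrete with a small simulation (illustrative only): conditional on the coin's actual construction, the relative frequency converges to the chance (1 or 0), while the logical probability, which reflects the symmetric uncertainty over the two constructions, stays at 1/2.

```python
import random

random.seed(2)

# The coin's construction is a fact about the world: two-headed or two-tailed,
# and the two constructions are epistemically symmetric.
two_headed = random.choice([True, False])
chance = 1.0 if two_headed else 0.0  # objective chance, fixed by the world
logical_probability = 0.5            # implied by the symmetric uncertainty

tosses = 100_000
heads = tosses if two_headed else 0  # every toss shows the same face
print(f"chance = {chance}, logical probability = {logical_probability}")
print(f"relative frequency after {tosses:,} tosses = {heads / tosses}")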
Remark 4.7 It should be noted that Theorem 4.6 is our building block for proving when a machine cannot learn the true probability, because the limit in Theorem 4.6 denotes the limiting relative frequency along the test set, the representative true objective probability. We do not consider any limiting behavior of the relative frequency outside the test set, because learning per se is not possible outside the test set, by the necessary condition for learning in Section 3.2. Therefore, if it is shown to be impossible that the relative frequency converges to the assessed value $p$ with probability one along the stochastic path of the test set collected by the assessed $p$, then it follows from Theorem 4.6 that the machine cannot learn the true probability.
Now, note that the forecasts converge to the true probability if and only if, for any $\epsilon > 0$, there exists $N$ such that the forecasts stay within $\epsilon$ of the true probability for all $n \ge N$. Thus, if the machine learns, then for all $\epsilon$ small enough, the forecasts eventually remain within $\epsilon$ of the true probability. Thus, Theorem 4.6 is not committed to what the machine does with the first $N$ data points while “learning”. This concept of machine learning is flexible enough to allow for finitely many potential errors, where the forecast deviates from the true probability while the machine processes the data to learn.
Remark 4.9 Indeed, it may well be argued against the (Oakes, 1985) counterexample that, although it could be imagined, Nature never actually behaves in that way. There is no reason why Nature should be so perverse as to generate data in such a deviating way. The true objective probability of Nature's being perverse may simply be zero. If so, Theorem 4.1 and Theorem 4.8 do not necessarily imply that a machine cannot learn the true probability.
Theorem 4.8 shows only that if a machine can imagine such a counterexample, and thus sincerely believes in its possibility, then its subjective probability of long-run miscalibration is not zero. But recall the Descartes' demon case from Section 3.1. The mere possibility of imagination does not imply a real possibility, namely that the true objective probability of its occurring in the actual world is not zero. Theorem 4.1 and Theorem 4.8 show only that if a machine cannot exclude such a counterexample, it cannot be self-assured of being well-calibrated under its own subjective probability. However, recall that there exists an asymmetric relation between subjective and objective probabilities: objective probability binds subjective probability, but not necessarily vice versa. Thus, if the true probability of Nature's perversity is proven to be zero, the machine can exclude such a possibility, and so its subjective probability of Oakes' counterexample will be zero as well. Then it follows neither that the machine cannot be self-assured of being well-calibrated nor that it cannot be truly guaranteed to be so, which implies that the impossibility of machine learning does not necessarily follow from Theorem 4.6.
Later, by Theorem 4.19, we prove that such an imagined possibility of Nature's being perverse is a real one if the true probability is not observable. Meanwhile, we also prove mathematically how the (Oakes, 1985) counterexample paralyzes Dawid's Theorem 4.1, which amounts to the proof of Theorem 4.8. Note that if the true probability indeed escapes from the machine's forecast just as in (Oakes, 1985), Theorem 4.1 breaks down: Theorem 4.1 relies critically on the martingale property of the partial sums given $\beta_{k-1}$, which derives from the machine forecast being the true conditional probability. This martingale property, however, breaks down when the machine forecast differs from $P(e_k \mid \beta_{k-1})$ for all $k$. Note that (Dawid, 1982) takes it for granted that the forecast equals $P(e_k \mid \beta_{k-1})$ for all $k$. Therefore, if we relax this assumption, we can prove mathematically how (Oakes, 1985) works against (Dawid, 1982), as shown in Case 2 of the proof of Theorem 4.17.
Remark 4.12 Regarding Lemma 4.10 and Lemma 4.11, two things deserve note: first, we do not impose any standard assumption, such as the stochastic process being i.i.d., along the historical path of the test set, and so $P(e_t \mid \beta_{t-1})$ can vary along the path. Note also that, unlike (Blume & Easley, 2006, 2008), etc., we do not need to consider all the associated events along the stochastic path; we consider only the events whose assessed probabilities equal $p$. The set of those events is called a test set, because it is collected according to the selection criterion of being assessed constantly as $p$. Therefore, we do not assume any specific property of the stochastic process along the path of the test set, such as stationarity or ergodicity, because we include only arbitrary subsequences of the stochastic process in the test set according to the subjective assessment.
Second, by Lemma 4.10 and Lemma 4.11, we obtain that if the test set is truly guaranteed to be well-calibrated, then the expected true forecast equals the assessed value $p$, where the expectation is taken with respect to the true probability. From this equation, we establish a connection between the true guarantee of well-calibration and the real forecasting game between a machine and Nature: what the machine forecasts is $p$, while what Nature forecasts is $P(e_t \mid \beta_{t-1})$, and thus whether the equation holds is tied to how Nature and the machine play the forecasting game along the stochastic path of the test set. In this game, the machine loses at time $t$ whenever Nature succeeds in deviating from the machine's forecast at that time. There is some literature dealing with the problem of well-calibration in various forecasting-game settings (e.g., (Foster & Vohra, 1993)). Also, note that, in the proof of Lemma 4.11, we take both the inner and outer expectations with respect to the true probability while applying the law of iterated expectations. Thus, it is a real game, not an arbitrary imaginary one, for the equation is expected to hold with respect to the true probability, not some other subjective probability.
Remark 4.14 Now, let us establish a connection between the true second-order probability and the forecasting game between Nature and a machine. For simplicity, let us denote by $C_t$ the event at time $t$ that $P(e_t \mid \beta_{t-1}) = f_t$ for any machine forecast $f_t$. In other words, $C_t$ denotes the event that the machine makes the correct forecast at time $t$, which amounts to the machine winning the forecasting game at that time. Note here that, strictly speaking, $C_t$ is a complex event consisting of two events, the event that Nature's forecast takes a certain functional value and the event that the machine's forecast takes the same value, while the true forecast and the machine forecast are two probability functions about the common event $e_t$. However, since we consider only the test set along the stochastic path, here we take the machine forecast to be fixed as $p$ along the path.
Then, extending some notions from (Gaifman, 1986), let us derive a second-order probability, i.e., the probability of a probability, from the outcomes of the forecasting game between Nature and the machine as follows: for any event $e$, the true second-order probability is the probability of the meta-event that the first-order probability (either Nature's true forecast or the machine's subjective forecast) of $e$ actually has a certain numerical value $v$. Thus, the true second-order probability denotes $\mathbb{P}\big(P(e \mid \beta_{t-1}) = v\big)$.
Here, it deserves note that although we derive the notion of higher-order probabilities by extending some notions from (Gaifman, 1986), our notion differs from his in the following ways: we do not distinguish the first-order and second-order probabilities notationally, using the same symbol P for both, whereas Gaifman (1986) uses P and the PR operator to denote the second-order probability and the event on the first-order probability, respectively. This is because Gaifman's notions differ from ours in that (1) PR in Gaifman denotes the agent's subjective probability, while our second-order probability can be a true objective one, just like the first-order true probability, and (2) his operator accepts a closed interval as one of its arguments, while the domain of our second-order probability does not contain intervals of real numbers. Note that our domain of the second-order probability is assumed to be generated by the collection of all the singletons of the computable real values of the first-order true probability function, and that it is assumed to be countable. Thus, the domain contains no intervals of real numbers. (3) In addition, our notion of the first-order probability is a precise one, not an imprecise one, so it is not supposed to belong to any interval or any set of probability measures.
Now, the probability space of the second-order probability is defined as $(\Omega, \mathcal{F}, \mathbb{P})$, in which $\Omega$ is the set of all the computable functional values of any given true first-order probability function, $\mathcal{F}$ is a field generated by the collection of all the singletons in $\Omega$, and $\mathbb{P}$ is the second-order probability with $\mathbb{P}(\Omega) = 1$. Note here that $\Omega$ is countable and that it is the set of all the possible forecasts by machines on the event. Now, if the domain of the second-order probability were a sigma-field generated by those singletons, the problem would be that the sigma-field becomes uncountable even though $\Omega$ is countable. So we consider a field $\mathcal{F}$, not a sigma-field, for the probability space of the second-order probability $\mathbb{P}$.
Here are some justifications for using a field $\mathcal{F}$, not a sigma-field, as the domain of the second-order probability $\mathbb{P}$: we do not require the domain to include all countably infinite unions, for the number of strategies a machine can use would then become uncountable, which contradicts the fact that the set of numbers a machine can compute is countable. In our forecasting game, any singleton in $\Omega$ can be thought of as a pure strategy of the machine and any union of those singletons as a mixed strategy. Again, since the set of numbers a machine can compute is countable, a machine cannot compute uncountably many mixed strategies.
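To make the countability point explicit (a standard set-theoretic fact, stated in the notation introduced above): for a countable $\Omega$, the field generated by the singletons is

$\mathcal{F} = \{ A \subseteq \Omega : A \text{ is finite or } \Omega \setminus A \text{ is finite} \},$

which is countable, whereas the sigma-field generated by the same singletons is the full power set $2^{\Omega}$, whose cardinality is $2^{\aleph_0}$. Hence closing the domain under countably infinite unions would hand the machine uncountably many mixed strategies.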
Remark 4.22 Recall from the necessary condition for learning in Section 3.2 that the machine's forecast must be congruent to the true probability if the machine learns it as $p$. Definition 4.21 then means that, while the machine calculates the value of the true probability as $p$ to learn it at time $t$, the machine assigns a positive subjective probability to the event that the true probability is not $p$, because it tolerates the error that the true value may not be exactly $p$ at that time $t$. In Lemma 4.23, we prove that a machine cannot tolerate errors infinitely often if it aims to learn the true probability.
Remark 4.24 For example, in (Savage, 1972), a vacuous event is null, but not every null set is necessarily vacuous. Here, an event is null to an agent when the event is believed by that agent to be impossible, so that its subjective probability is zero for the agent. On the other hand, a vacuous event has absolute impossibility: its true objective probability is zero by the Kolmogorov axioms. Thus, the true objective probability of an absolutely impossible event binds its subjective probability to zero, but not necessarily vice versa.
We now extend this idea in (Savage, 1972) to all virtually impossible events. Here, note that absolute impossibility is assigned to a vacuous event by the Kolmogorov axioms, while virtual impossibility is assigned to any event whose true objective probability measure is set to zero by Nature. Thus, in Lemma 4.23, we derive that all virtually impossible events also receive subjective probability zero all but finitely often whenever the agent is self-assured that such events are truly impossible, for the subjective probability must be bound by the true objective probability zero, if any. Otherwise, the machine comes to tolerate error infinitely often, which makes it impossible for the machine to achieve its goal of learning the true probability.