Is the Volume of a Credal Set a Good Measure for Epistemic Uncertainty?
Abstract
Adequate uncertainty representation and quantification have become imperative in various scientific disciplines, especially in machine learning and artificial intelligence. As an alternative to representing uncertainty via one single probability measure, we consider credal sets (convex sets of probability measures). The geometric representation of credal sets as polytopes implies a geometric intuition about (epistemic) uncertainty. In this paper, we show that the volume of the geometric representation of a credal set is a meaningful measure of epistemic uncertainty in the case of binary classification, but less so for multi-class classification. Our theoretical findings highlight the crucial role of specifying and employing uncertainty measures in machine learning in an appropriate way, and of being aware of possible pitfalls.
1 Introduction
The notion of uncertainty has recently drawn increasing attention in machine learning (ML) and artificial intelligence (AI) due to the fields’ burgeoning relevance for practical applications, many of which have safety requirements, such as in medical domains [Lambrou et al., 2010, Senge et al., 2014, Yang et al., 2009] or socio-technical systems [Varshney, 2016, Varshney and Alemzadeh, 2017]. These applications to safety-critical contexts show that a suitable representation and quantification of uncertainty for modern, reliable machine learning systems is imperative.
In general, the literature makes a distinction between aleatoric and epistemic uncertainties (AU and EU, respectively) [Hora, 1996]. While the former is caused by the inherent randomness of the data-generating process, EU results from the learner’s lack of knowledge regarding the true underlying model; it also includes approximation uncertainty. Since EU can be reduced per se with further information (e.g., via data augmentation using semantic preserving transformations), it is also referred to as reducible uncertainty. In contrast, aleatoric uncertainty, as a property of the data-generating process, is irreducible [Hüllermeier and Waegeman, 2021]. The importance of distinguishing between different types of uncertainty is reflected in several areas of recent machine learning research, e.g. in Bayesian deep learning [Depeweg et al., 2018, Kendall and Gal, 2017], in adversarial example detection [Smith and Gal, 2018], or data augmentation in Bayesian classification [Kapoor et al., 2022]. A qualitative representation of total uncertainty, AU, and EU, and of their asymptotic behavior as the number of data points available to the learning agent increases, is given in Figure 1.

Typically, uncertainty in machine learning, artificial intelligence, and related fields is expressed solely in terms of probability theory. That is, given a measurable space , uncertainty is entirely represented by defining one single probability measure on . However, representing uncertainty in machine learning is not restricted to classical probability theory; various aspects of uncertainty representation and quantification in ML are discussed by Hüllermeier and Waegeman [2021]. Credal sets, i.e., (convex) sets of probability measures, are considered to be very popular models of uncertainty representation, especially in the field of imprecise probabilities (IP) [Augustin et al., 2014, Walley, 1991]. Credal sets are also very appealing from an ML perspective for representing uncertainty, as they can represent both aleatoric and epistemic uncertainty (as opposed to a single probability measure). Numerous scholars emphasized the utility of representing uncertainty in ML via credal sets, e.g., credal classification [Zaffalon, 2002, Corani and Zaffalon, 2008] based on the Imprecise Dirichlet Model (IDM) [Walley, 1996], generalizing Bayesian networks to credal classifiers [Corani et al., 2012], or building credal decision-trees [Abellán and Moral, 2003].
Uncertainty representation via credal sets also requires a corresponding quantification of the underlying uncertainty, referred to as credal uncertainty quantification (CUQ). The task of (credal) uncertainty quantification translates to finding a suitable measure that can accurately reflect the uncertainty inherent to a credal set. In many ML applications, such as active learning [Settles, 2009] or classification with abstention, there is a need to quantify (predictive) uncertainty in a scalar way. Appropriate measures of uncertainty are often axiomatically justified [Bronevich and Klir, 2008, 2010].
Contributions. In this work, we consider the volume of the geometric representation of a credal set on the label space as a quite obvious and intuitively plausible measure of EU. We argue that this measure is indeed meaningful if we are in a binary classification setting. However, in a multi-class setting, the volume exhibits shortcomings that make it unsuitable for quantifying EU associated with a credal set.
Structure of the paper. The paper is divided as follows. Section 2 formally introduces the framework we work in, and Section 3 discusses the related literature. Section 4 presents our main findings, which are further discussed in Section 5. Proofs of our theoretical results are given in Appendix A, and (a version of) Carl-Pajor’s theorem, intimately related to Theorem 1, is stated in Appendix B.
2 Uncertainty in ML and AI
Uncertainty is a crucial concept in many academic and applied disciplines. However, since its definition depends on the specific context a scholar works in, we now introduce the formal framework of supervised learning within which we will examine it.
Let $(\mathcal{X}, \mathcal{A}_{\mathcal{X}})$ and $(\mathcal{Y}, \mathcal{A}_{\mathcal{Y}})$ be two measurable spaces, where $\mathcal{A}_{\mathcal{X}}$ and $\mathcal{A}_{\mathcal{Y}}$ are suitable $\sigma$-algebras. We will refer to $\mathcal{X}$ as the instance space (or, equivalently, input space) and to $\mathcal{Y}$ as the label space. Further, the sequence of observed pairs is called training data. The pairs are realizations of random variables, which are assumed to be independent and identically distributed (i.i.d.) according to some probability measure on $\mathcal{X} \times \mathcal{Y}$.
Definition 1 (Credal set).
Let $(\Omega, \mathcal{F})$ be a generic measurable space and denote by $\mathbb{P}(\Omega)$ the set of all (countably additive) probability measures on it. A convex subset $\mathcal{Q} \subseteq \mathbb{P}(\Omega)$ is called a credal set.
Note that in Definition 1, the assumption of convexity is quite natural and considered to be rational (see, e.g., Levi [1980]). It is also mathematically appealing since, as shown by Walley [1991, Section 3.3.3], the “lower boundary” of the credal set, defined as $\underline{Q}(A) := \inf_{P \in \mathcal{Q}} P(A)$ for all measurable $A$ and called the lower probability associated with $\mathcal{Q}$, is coherent [Walley, 1991, Section 2.5].
Further, in a supervised learning setting, we assume a hypothesis space , where each hypothesis maps a query instance to a probability measure on . We distinguish between different “degrees” of uncertainty-aware predictions, which are depicted in Table 1.
Predictor | AU aware? | EU aware? |
---|---|---|
Hard label prediction: | ✖ | ✖ |
Probabilistic prediction: | ✔ | ✖ |
Credal prediction: | ✔ | ✔ |
We denote by the set of all credal sets on . While probabilistic predictions fail to capture the epistemic part of the (predictive) uncertainty, predictions in the form of credal sets account for both types of uncertainty. It should also be remarked that representing uncertainty is not restricted to the credal set formalism. Another possible framework to represent AU and EU is that of second-order distributions; they are commonly applied in Bayesian learning and have been recently inspected in the context of uncertainty quantification by Bengs et al. [2022].
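To make the distinction in Table 1 concrete, here is a minimal sketch in plain Python contrasting the three degrees of uncertainty awareness for a three-class problem; all numbers are purely illustrative and not taken from the paper:

```python
# Three degrees of uncertainty-aware prediction for one query instance
# over labels {0, 1, 2}; the numbers are purely illustrative.

# Hard label prediction: neither AU nor EU is represented.
hard_label = 1

# Probabilistic prediction: a single distribution captures AU only.
probabilistic = [0.2, 0.7, 0.1]

# Credal prediction: a (convex) set of distributions, here given by its
# extreme points; the spread across the set reflects EU on top of AU.
credal = [
    [0.1, 0.8, 0.1],
    [0.3, 0.6, 0.1],
    [0.2, 0.6, 0.2],
]

# Every element of a credal set is itself a probability vector.
assert abs(sum(probabilistic) - 1.0) < 1e-9
assert all(abs(sum(p) - 1.0) < 1e-9 for p in credal)
```

The credal prediction carries strictly more information: collapsing it to any single element recovers a probabilistic prediction, and taking that element's mode recovers a hard label.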
In this paper, we restrict our attention to the credal set representation. Given a credal prediction set, it remains to properly quantify the uncertainty encapsulated in it using a suitable measure. Credal set representations are often illustrated in low dimensions (usually or ). Examples of such geometrical illustrations can be found in the context of machine learning in [Hüllermeier and Waegeman, 2021] and in imprecise probability theory in [Walley, 1991, Chapter 4]. This suggests that a credal set and its geometric representation are strictly intertwined. We will show in the following sections that this intuitive view can have disastrous consequences in higher dimensions and that one should exercise caution in this respect. Furthermore, it remains to be discussed whether a geometric viewpoint on (predictive) uncertainty quantification is in fact sensible.
3 Measures of Credal Uncertainty
In this section we examine some axiomatically defined properties of (credal) uncertainty measures. For a more detailed discussion of various (credal) uncertainty measures in machine learning and a critical analysis thereof, we refer to Hüllermeier et al. [2022].
Let $H$ denote the Shannon entropy [Shannon, 1948], whose discrete version is defined as $H(p) := -\sum_{y \in \mathcal{Y}} p(y) \log p(y)$.
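As a quick sanity check, the discrete Shannon entropy can be computed directly (a minimal sketch; the helper name is ours):

```python
import math

def shannon_entropy(p, base=2):
    """Discrete Shannon entropy H(p) = -sum_y p(y) * log p(y),
    with the usual convention 0 * log 0 = 0."""
    return -sum(q * math.log(q, base) for q in p if q > 0)

# A degenerate distribution has zero entropy; the uniform distribution
# over four labels attains the maximum log2(4) = 2 bits.
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # 0 bits
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
```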
A suitable measure of credal uncertainty $U$ should satisfy the following axioms proposed by Abellán and Klir [2005], Jiroušek and Shenoy [2018]:

- A1 Non-negativity and boundedness: (i) $U(\mathcal{Q}) \geq 0$, for all credal sets $\mathcal{Q}$; (ii) there exists $B \in \mathbb{R}$ such that $U(\mathcal{Q}) \leq B$, for all $\mathcal{Q}$.
- A2 Continuity: $U$ is a continuous functional.
- A3 Monotonicity: for all $\mathcal{Q}, \mathcal{Q}'$ such that $\mathcal{Q} \subseteq \mathcal{Q}'$, we have $U(\mathcal{Q}) \leq U(\mathcal{Q}')$.
- A4 Probability consistency: for all $\mathcal{Q}$ such that $\mathcal{Q} = \{P\}$ is a singleton, we have $U(\mathcal{Q}) = H(P)$.
- A5 Sub-additivity: suppose $\mathcal{Y} = \mathcal{Y}_1 \times \mathcal{Y}_2$, and let $\mathcal{Q}$ be a joint credal set on $\mathcal{Y}$ such that $\mathcal{Q}_1$ is the marginal credal set on $\mathcal{Y}_1$ and $\mathcal{Q}_2$ is the marginal credal set on $\mathcal{Y}_2$, respectively. Then, we have
  $U(\mathcal{Q}) \leq U(\mathcal{Q}_1) + U(\mathcal{Q}_2)$. (1)
- A6 Additivity: if $\mathcal{Q}_1$ and $\mathcal{Q}_2$ are independent, (1) holds with equality.
In axiom A6, independence refers to a suitable notion for independence of credal sets, see e.g. Couso et al. [1999]. An axiomatic definition of properties for uncertainty measures is a common approach in the literature [Pal et al., 1992, 1993]. Examples of credal uncertainty measures that satisfy some of the axioms A1–A6 are the maximal entropy [Abellan and Moral, 2003] and the generalized Hartley measure [Abellán and Moral, 2000].
Recall that the lower probability of $\mathcal{Q}$ is defined as $\underline{Q}(A) := \inf_{P \in \mathcal{Q}} P(A)$, for all measurable $A$, and call upper probability its conjugate $\overline{Q}(A) := \sup_{P \in \mathcal{Q}} P(A)$, for all measurable $A$. Since we are concerned with the fundamental question of whether the volume functional is a suitable measure for epistemic uncertainty, we replace A4 with the following axiom that better suits our purposes.
- A4’ Probability consistency: $U(\mathcal{Q})$ reduces to $0$ as the distance between the lower probability $\underline{Q}$ and the upper probability $\overline{Q}$ goes to $0$, for all events.

While A4’ addresses solely the epistemic component of the uncertainty associated with the credal set $\mathcal{Q}$, A4 incorporates the aleatoric uncertainty. Finally, we introduce a seventh axiom that subsumes a desirable property of the measure proposed by Hüllermeier et al. [2022, Theorem 1.A3-A5].

- A7 Invariance: $U$ is invariant to rotation and translation.
Call $n$ the cardinality of the label space $\mathcal{Y}$. In the next section, we will note that many of these axioms are satisfied by the volume operator in the case $n = 2$, but can no longer be guaranteed for $n \geq 3$.
4 Geometry of Epistemic Uncertainty
As pointed out in Section 3, there is no unambiguous measure of (credal) uncertainty for machine learning purposes. In this section, we present a measure for EU rooted in the geometric concept of volume and show how it is well-suited for a binary classification setting, while it loses its appeal when moving to a multi-class setting.
Since we are considering a classification setting, we assume that $\mathcal{Y}$ is a finite Polish space of cardinality $n$, for some natural number $n$. We also work with the finest possible $\sigma$-algebra on $\mathcal{Y}$; the results we provide still hold for any coarser $\sigma$-algebra. (Footnote: the ideas expressed in this paper can be easily extended to the case where $\mathcal{Y}$ is not Polish; we require it to convey our results without dealing with topological subtleties.) Because $\mathcal{Y}$ is Polish, the space of probability measures on it is Polish as well. In particular, the topology it is endowed with is the weak topology, which – because we assumed $\mathcal{Y}$ to be finite – coincides with the topology induced by the Euclidean norm. Consider a credal set $\mathcal{Q}$, which can be seen as the outcome of a procedure involving an imprecise Bayesian neural network (IBNN) [Caprio et al., 2023a] or an imprecise neural network (INN) [Caprio et al., 2023b]; an ensemble-based approach is proposed by Shaker and Hüllermeier [2020].
Since $|\mathcal{Y}| = n$, each element $P \in \mathcal{Q}$ can be seen as an $n$-dimensional probability vector $(p_1, \dots, p_n)$, where $p_i \geq 0$, for all $i$, and $\sum_{i=1}^n p_i = 1$. This entails that if we denote by $\Delta^{n-1}$ the unit simplex in $\mathbb{R}^n$, we have $\mathcal{Q} \subseteq \Delta^{n-1}$, which means that $\mathcal{Q}$ is a convex body inscribed in $\Delta^{n-1}$. (Footnote: in the remaining part of the paper, we denote by $\mathcal{Q}$ both the credal set and its geometric representation, as no confusion arises.)
Intuitively, the “larger” $\mathcal{Q}$ is, the higher the credal uncertainty. A natural way of capturing the size of $\mathcal{Q}$, then, appears to be its volume $\mathrm{Vol}(\mathcal{Q})$. Notice that $\mathrm{Vol}(\mathcal{Q})$ is a bounded quantity: its value is bounded from below by $0$ and from above by $\mathrm{Vol}(\Delta^{n-1})$, the volume of the whole unit simplex. The latter corresponds to the case where $\mathcal{Q} = \Delta^{n-1}$, that is, to the case of completely vacuous beliefs: the agent is only able to say that the probability of each event is in $[0, 1]$. In this sense, the volume is a measure of the size of $\mathcal{Q}$ that increases the more uncertain the agent is about the elements of $\mathcal{Y}$. This argument shows that $\mathrm{Vol}$ is well suited to capture credal uncertainty. But why is it appropriate to describe EU? (Footnote: the concept of volume has been explored in the imprecise probabilities literature, see e.g., Bloch [1996], [Cuzzolin, 2021, Chapter 17], and Seidenfeld et al. [2012], but, to the best of our knowledge, has never been tied to the notion of epistemic uncertainty. More generally, the geometry of imprecise probabilities has been studied, e.g., by Anel [2021] and Cuzzolin [2021].) Think of the extreme case where EU does not exist, so that the agent faces AU only. In that case, they would be able to specify a unique probability measure $P$ (or, equivalently, the singleton $\{P\}$), and $\mathrm{Vol}(\{P\}) = 0$. Hence, if $\mathrm{Vol}(\mathcal{Q}) > 0$, then this means that the agent faces EU. In addition, let $(\mathcal{Q}_t)$ be a sequence of credal sets representing successive refinements of $\mathcal{Q}$ computed as new data becomes available to the agent. (Footnote: clearly $\mathcal{Q}_{t+1} \subseteq \mathcal{Q}_t$, for all $t$.) If, after observing enough evidence, the EU is resolved, the following holds: the sequence $(\mathcal{Q}_t)$ converges – say in the Hausdorff metric – as $t \to \infty$ to a limit all of whose elements are equal to $P$, the (unique) probability measure that encapsulates the AU. (Footnote: technically, the limit is a multiset, that is, a set where multiple instances of each of its elements are allowed.)
Through the learning process, we refine our estimates for the “true” underlying aleatoric uncertainty (pertaining to ), which is left after all the EU is resolved. Then, the geometric representation of is a point whose volume is . Hence, we have that the volume of converges from above to (that is, it possesses the continuity property), which is exactly the behavior we would expect as EU resolves.
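In the binary case this behavior is easy to simulate: a credal set is just an interval of probabilities for one of the two labels, and its volume (the interval length) shrinks continuously to $0$ as the EU resolves. A minimal sketch with made-up numbers:

```python
def credal_volume_binary(lower, upper):
    """Volume (length) of a binary-classification credal set,
    represented as the interval [lower, upper] for p(y = 1)."""
    assert 0.0 <= lower <= upper <= 1.0
    return upper - lower

# Successive refinements around a 'true' aleatoric probability 0.7:
# the interval tightens as data arrives, and the volume -> 0.
volumes = []
for n in [1, 10, 100, 1000]:
    lo, hi = 0.7 - 0.2 / n, 0.7 + 0.2 / n
    volumes.append(credal_volume_binary(lo, hi))
print(volumes)  # strictly decreasing toward 0
```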
As we shall see, while this intuitive explanation holds if , for , continuity fails, thus making the volume not suited to capture EU in a multi-class classification setting. We also show in Theorem 1 that the volume lacks robustness in higher dimensions. Small perturbations to the boundary of a credal set make its volume vary significantly. This may seriously hamper the results of a study, leading to potentially catastrophic consequences in downstream tasks.
4.1 Vol: a good measure for EU, but only if $n = 2$
Let $n = 2$, so that $\mathcal{Q}$ is a subset of $\Delta^1$, the segment linking the points $(1, 0)$ and $(0, 1)$ in a 2-dimensional Cartesian plane. Notice that in this case, the volume corresponds to the length of the segment. In this context, $\mathrm{Vol}$ is an appealing measure to describe the EU associated with the credal set $\mathcal{Q}$.
Proposition 1.
$\mathrm{Vol}$ satisfies axioms A1–A3, A4’, A5, and A7 of Section 3.
Let us now discuss additivity (axiom A6 of Section 3). Suppose the label space can be written as a product $\mathcal{Y} = \mathcal{Y}_1 \times \mathcal{Y}_2$. Let $\mathcal{Q}$ be a joint credal set on $\mathcal{Y}$ such that $\mathcal{Q}_1$ is the marginal credal set on $\mathcal{Y}_1$ and $\mathcal{Q}_2$ is the marginal credal set on $\mathcal{Y}_2$. In the proof of Proposition 1, we show that if both marginal label spaces are binary (footnote: this implies that $\mathcal{Y}$ has four elements), then the volume is sub-additive. Suppose instead now that one of the two marginal label spaces, say $\mathcal{Y}_1$, is a singleton (footnote: a similar argument holds with the roles of the two spaces swapped). Then, the marginal on $\mathcal{Y}_1$ of any element of $\mathcal{Q}$ gives probability $1$ to its unique element. This entails that $\mathcal{Q}_1$ is a singleton and that its geometric representation is a point (footnote: or, alternatively, a multiset whose elements are all equal), so that $\mathrm{Vol}(\mathcal{Q}_1) = 0$.

In turn, this line of reasoning implies that $\mathrm{Vol}(\mathcal{Q}) = \mathrm{Vol}(\mathcal{Q}_1) + \mathrm{Vol}(\mathcal{Q}_2)$, which shows that the volume is additive in this case.
This situation corresponds to an instance of strong independence (SI) [Couso et al., 1999, Section 3.5]. We have SI if and only if
(2)
In other words, there is complete lack of interaction between the probability measure on and those on . To see that this is the case, recall that is a credal set, and so is convex; recall also that is a singleton. Then, pick any , where denotes the set of extreme elements of . We have that , and so . With a slight abuse of notation, we can write . This immediately implies that the equality in (2) holds. As pointed out in [Couso et al., 1999, Section 3.5], SI implies independence of the marginal sets, epistemic independence of the marginal experiments, and independence in the selection [Couso et al., 1999, Sections 3.1, 3.4, and 3.5, respectively]. It is, therefore, a rather strong notion of independence.
The volume is also trivially additive if , but in that case would be a multiset.
The argument put forward so far can be summarized in the following proposition.
Proposition 2.
Let $n = 2$. $\mathrm{Vol}$ satisfies axiom A6 if we assume the instance of SI given by either of the following:

- ,
- ,
- and .
If $n \geq 3$, the volume ceases to be an appealing measure for EU. This is because quantifying the uncertainty associated with a credal set becomes challenging due to the dependency of the volume on the dimension. So far, we have written $\mathrm{Vol}$ without specifying a dimension to ease notation, but for $n \geq 3$ the dimension with respect to which the volume is taken becomes crucial. Let us give a simple example to illustrate this.
Example 1.
Let $n = 3$, so that the unit simplex is $\Delta^2$, the triangle whose extreme points are $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$ in a 3-dimensional Cartesian space (the purple triangle in Figure 2). Consider a sequence $(\mathcal{Q}_k)$ of credal sets whose geometric representations are triangles, and suppose their height reduces to $0$ as $k \to \infty$, so that the (geometric representation of the) limit $\mathcal{Q}$ of $(\mathcal{Q}_k)$ in the Hausdorff metric is a segment. The limiting set $\mathcal{Q}$, then, is not of full dimensionality; that is, its geometric representation is 1-dimensional, while the geometric representation of each $\mathcal{Q}_k$ is 2-dimensional. This implies that the 2-dimensional volume of $\mathcal{Q}$ is $0$, but – unless $\mathcal{Q}$ is a degenerate segment, i.e., a point – its 1-dimensional volume (length) is positive. As we can see, the EU has not resolved, yet $\mathcal{Q}$ has a zero 2-dimensional volume; this is clearly undesirable. It is easy to see how this problem is exacerbated in higher dimensions.
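The degeneracy in Example 1 can be checked numerically: the 2-dimensional volume (area) of the flattening triangles vanishes even though the limiting segment keeps a positive 1-dimensional volume. A sketch with illustrative numbers:

```python
# Triangular credal sets in the 2-simplex with a fixed base b and
# heights h_k = 1/k -> 0: the area (2-dim volume) goes to 0, although
# the limit is a segment of positive length, so the EU has NOT resolved.
b = 0.5
areas = [0.5 * b * (1.0 / k) for k in (1, 10, 100, 1000)]
print(areas)               # decreasing toward 0
print("limit length:", b)  # 1-dim volume of the limiting segment: 0.5
```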
There are two possible ways one could try to circumvent the issue in Example 1; alas, both exhibit shortcomings, that is, at least one of the axioms A1–A3, A4’, A5–A7 in Section 3 is not satisfied. The first one is to consider the volume operator as the volume taken with respect to the space in which set is of full dimensionality. In this case, we immediately see how A2 fails. Considering again the sequence in Example 1, we would have a sequence whose volume is going to zero. However, in the limit, its volume would be positive. Axiom A3 fails as well: consider a credal set whose representation is a triangle having base and height and suppose . Consider then a credal set whose representation is a segment having length . Then, , while .
The second one is to consider lift probability sets; let us discuss this idea in depth. Let , and let . Call
where is the -dimensional identity matrix. That is, is the Stiefel manifold of matrices with orthonormal rows [Cai and Lim, 2022]. Then, for any and any , define
Suppose now that, for some , (the geometric representation of) is a proper subset of , while (the geometric representation of) is a proper subset of . Pick any and any ; an embedding of in is a set such that for all , there exists a probability vector such that . Call the set of embeddings of in , and assume that it is nonempty.
Then, define
we call it the lift probability set for its heuristic similarity with lift zonoids [Mosler, 2002]. We define it in this way because we want the higher-dimensional set whose (full-dimensionality) volume is the closest possible to the lower-dimensional volume of the original set. A simple example is the following. Suppose the geometric representation of $\mathcal{Q}$ is a segment, i.e., a 1-dimensional set, and that we want to lift it to 2 dimensions. Then, a possible lift is any triangle whose base length is equal to the length of the segment representing $\mathcal{Q}$ and whose height is chosen so that the area of the triangle matches that length. This is because the area of such a triangle is half its base times its height. A visual representation is given in Figure 2.
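As a toy illustration of the area-matching idea (the specific numbers here are ours, chosen for illustration, not taken from the construction above): a segment of length L can be completed to a triangle whose 2-dimensional volume equals L by choosing the height appropriately.

```python
# A segment of length L has 1-dim volume L. A triangle with base L and
# height 2 has area (1/2) * L * 2 = L, so its full-dimensional volume
# matches the lower-dimensional volume of the segment it lifts.
L = 0.4
height = 2.0
area = 0.5 * L * height
print(area, area == L)  # the lifted set preserves the volume
```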

Notice that the lift probability set is well defined because the set of embeddings is nonempty and compact. (Footnote: if the maximizer is not a singleton, pick any of its elements.) We can then compare the volume of $\mathcal{Q}$ with that of its lift, and also compute the relative quantity
that captures the variation in volume between and . Alas, in this case, too, it is easy to see how A2 fails. Consider the same sequence as in Example 1. We would have that goes to zero as , but . Axiom A3 may fail as well since we could find credal sets and such that , but .
4.2 Lack of robustness in higher dimensions
In this section, we show how, if we measure the EU associated with a credal set on the label space using the volume, as the number of labels grows, “small” changes of the uncertainty representation may lead to catastrophic consequences in downstream tasks.
For a generic compact set $\mathcal{Q}$ and a positive real $r$, the $r$-packing of $\mathcal{Q}$ is the collection of sets that satisfies the following properties:

- (i) every element of the collection is contained in $\mathcal{Q}$;
- (ii) every element is a ball $B_r(x)$, where $B_r(x)$ denotes the ball of radius $r$ centered at $x$;
- (iii) the elements of the collection are pairwise disjoint;
- (iv) there does not exist a strictly larger collection such that (i)–(iii) are satisfied.
The packing number of $\mathcal{Q}$ is the cardinality of its $r$-packing. Notice that
(3)
where
(4)
where is any compact set in , possibly different than . That is, we can always find a real number depending on the dimension of the Euclidean space, on the radius of the balls, and on the set of interest, that relates the volume of and that of . Being in , it takes into account the fact that since is a union of pairwise disjoint balls within , its volume cannot exceed that of . This is easy to see in Figure 3. The second condition in (4) states that irrespective of the compact set of interest, we retain more of the volume of the original set if we pack it using balls of a smaller radius.
To give a simple illustration, consider such that . Then, by (3) and (4), we have that . This means that the difference in volume between and is larger than that between and .
Let $\mathcal{K}$ denote the class of compact sets in the ambient Euclidean space. As $r$ goes to $0$, the packing-based volume ratio increases to its optimal value. The optimal values have only been found for a handful of dimensions, notably $n = 8$ and $n = 24$ [Cohn et al., 2017, Viazovska, 2017]. The fact that the ratio increases as $r$ decreases to $0$ captures the idea that using balls of smaller radius leads to a better approximation of the volume of the compact set that is being packed.

Suppose our credal set $\mathcal{Q}$ is compact, so as to be able to use the concepts of $r$-packing and packing number. Consider then a set $\mathcal{Q}'$ that satisfies the following three properties:

- (a) $\mathcal{Q}'$ is a proper subset of $\mathcal{Q}$;
- (b) the Hausdorff distance between $\mathcal{Q}$ and $\mathcal{Q}'$ equals some $\varepsilon > 0$;
- (c) $\mathcal{Q}'$ is such that we can find a radius for which its packing number does not exceed that of $\mathcal{Q}$.
Property (a) tells us that $\mathcal{Q}'$ is a proper subset of $\mathcal{Q}$. Let $d$ denote the metric induced by the Euclidean norm. Property (b) tells us that the Hausdorff distance

$d_H(\mathcal{Q}, \mathcal{Q}') := \max\left\{ \sup_{x \in \mathcal{Q}} \inf_{y \in \mathcal{Q}'} d(x, y), \; \sup_{y \in \mathcal{Q}'} \inf_{x \in \mathcal{Q}} d(x, y) \right\}$ (5)

between $\mathcal{Q}$ and $\mathcal{Q}'$ is equal to some $\varepsilon > 0$. Property (c) ensures that $\mathcal{Q}'$ is “not too large”. To understand why, notice that if $\mathcal{Q}'$ is “large”, that is, if it is close to $\mathcal{Q}$, then the packing number of $\mathcal{Q}'$ can be larger than the packing number of $\mathcal{Q}$ for the radii under consideration. (Footnote: this makes packing using balls of radius $\varepsilon$ a sensible choice.) Requiring (c) ensures that this does not happen, and therefore that $\mathcal{Q}'$ is “small”. A representation of $\mathcal{Q}$ and $\mathcal{Q}'$ satisfying (a)–(c) is given in Figure 4. A (possibly very small) change in uncertainty representation is captured by a situation in which the agent specifies the credal set $\mathcal{Q}'$ in place of $\mathcal{Q}$. We are ready to state the main result of this section.

Theorem 1.
Let $\mathcal{Y}$ be a finite Polish space of cardinality $n$. Pick any compact credal set $\mathcal{Q} \subseteq \Delta^{n-1}$, and any set $\mathcal{Q}'$ that satisfies (a)–(c). The following holds
(6)
Notice that we implicitly assumed that at least one set $\mathcal{Q}'$ satisfying (a)–(c) exists. In light of (6), Theorem 1 states that as $n$ grows, most of the volume of $\mathcal{Q}$ concentrates near its boundary.
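The concentration phenomenon behind Theorem 1 can be illustrated with the simplest possible convex body: shrinking a set by a factor $(1 - \delta)$ scales its $n$-dimensional volume by $(1 - \delta)^n$, so for fixed $\delta$ the fraction of volume retained decays exponentially in $n$. This is a sketch of the intuition, not the theorem's exact bound:

```python
# Fraction of n-dimensional volume kept after shrinking a convex body
# by a factor (1 - delta): (1 - delta) ** n. For large n this fraction
# is tiny, i.e., almost all of the volume lies in a thin shell near
# the boundary -- which is why boundary perturbations are so damaging.
delta = 0.05
for n in [2, 10, 100, 1000]:
    print(n, (1 - delta) ** n)
```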
As a result, if we use the volume operator as a metric for the EU, the latter is very sensitive to perturbations of the boundary of the (geometric representation of the) credal set; this is problematic for credal sets in the context of ML. Suppose we are in a multi-class classification setting such that the cardinality of the label space is some large number $n$. Suppose that two different procedures produce two different credal sets; call one $\mathcal{Q}$ and the other $\mathcal{Q}'$, and suppose $\mathcal{Q}'$ satisfies (a)–(c). This means that the uncertainty representations associated with the two procedures differ only by a “small amount”. For instance, this could be the result of an agent specifying “slightly different” credal prior sets. This may well happen since defining the boundaries of credal sets is usually quite an arbitrary task to perform. Then, this would result in a (possibly massive) underestimation of the epistemic uncertainty in the results of the analysis, which would potentially translate into catastrophic consequences in downstream tasks. In Example 2, we describe a situation in which Theorem 1 is applied to credal prior sets.
Example 2.
Assume for simplicity that the parameter space is finite. Suppose an agent faces complete ignorance regarding the probabilities to assign to its elements. Although tempting, there is a pitfall in choosing the whole simplex as the credal prior set. As shown by Walley [1991, Chapter 5], completely vacuous beliefs – captured by the choice of the whole simplex as a credal prior set – cannot be Bayes-updated. This means that the posterior credal set will again be the whole simplex: no amount of data is enough to swamp the prior. Instead, suppose that the agent considers a credal prior set that satisfies (a)–(c). If the dimension is large enough, Theorem 1 tells us that its volume is much smaller than that of the whole simplex.
Two remarks are in order. First, in the binary classification setting (that is, when $n = 2$), the lack of robustness of the volume highlighted by Theorem 1 is not an issue, since the volume concentrates near the boundary only when the cardinality $n$ is large. Second, Theorem 1 is intimately related to Carl-Pajor’s Theorem [Ball and Pajor, 1990, Theorem 1]; this implies that in the future, more techniques from high-dimensional geometry may become useful in the study of epistemic, and potentially also aleatoric, uncertainties. (Footnote: we state (a version of) Carl-Pajor’s Theorem in Appendix B.)
5 Conclusion
Credal sets provide a flexible and powerful formalism for representing uncertainty in various scientific disciplines. In particular, uncertainty representation via credal sets can capture different degrees of uncertainty and allow for a more nuanced representation of epistemic and aleatoric uncertainty in machine learning systems. Moreover, the corresponding geometric representation of credal sets as polytopes enables a thoroughly intuitive view of uncertainty representation and quantification.
In this paper, we showed that the volume of a credal set is a sensible measure of epistemic uncertainty in the context of binary classification, as it enjoys many desirable properties suggested in the existing literature. On the other hand, the volume forfeits these properties in a multi-class classification setting, despite its intuitive meaningfulness.
In addition, this work stimulates a fundamental question as to what extent a geometric approach to uncertainty quantification (in ML) is sensible.
This is the first step toward studying the geometric properties of (epistemic) uncertainty in AI and ML. In the future, we plan to explore the geometry of aleatoric uncertainty and introduce techniques from high-dimensional geometry and high-dimensional probability to enhance and deepen the study of EU and AU in the contexts of AI and ML.
Yusuf Sale and Michele Caprio contributed equally to this paper.
Acknowledgements.
Michele Caprio would like to acknowledge partial funding by the Army Research Office (ARO MURI W911NF2010080). Yusuf Sale is supported by the DAAD programme Konrad Zuse Schools of Excellence in Artificial Intelligence, sponsored by the Federal Ministry of Education and Research.

References
- Abellán and Moral [2000] Joaquín Abellán and Serafín Moral. A non-specificity measure for convex sets of probability distributions. International journal of uncertainty, fuzziness and knowledge-based systems, 8(03):357–367, 2000.
- Abellán and Moral [2003] Joaquín Abellán and Serafín Moral. Building classification trees using the total uncertainty criterion. International Journal of Intelligent Systems, 18(12):1215–1225, 2003.
- Abellan and Moral [2003] Joaquin Abellan and Serafin Moral. Maximum of entropy for credal sets. International journal of uncertainty, fuzziness and knowledge-based systems, 11(05):587–597, 2003.
- Abellán and Klir [2005] Joaquín Abellán and George J. Klir. Additivity of uncertainty measures on credal sets. International Journal of General Systems, 34(6):691–713, 2005.
- Anel [2021] Mathieu Anel. The Geometry of Ambiguity: An Introduction to the Ideas of Derived Geometry, volume 1, pages 505–553. Cambridge University Press, 2021.
- Augustin et al. [2014] Thomas Augustin, Frank PA Coolen, Gert De Cooman, and Matthias CM Troffaes. Introduction to imprecise probabilities. John Wiley & Sons, 2014.
- Ball and Pajor [1990] Keith Ball and Alain Pajor. Convex bodies with few faces. Proceedings of the American Mathematical Society, 110(1):225–231, 1990.
- Bengs et al. [2022] Viktor Bengs, Eyke Hüllermeier, and Willem Waegeman. Pitfalls of epistemic uncertainty quantification through loss minimisation. In Advances in Neural Information Processing Systems, 2022.
- Bloch [1996] Isabelle Bloch. Some aspects of Dempster-Shafer evidence theory for classification of multi-modality medical images taking partial volume effect into account. Pattern Recognition Letters, 17(8):905–919, 1996.
- Bronevich and Klir [2008] Andrey Bronevich and George J Klir. Axioms for uncertainty measures on belief functions and credal sets. In NAFIPS 2008-2008 Annual Meeting of the North American Fuzzy Information Processing Society, pages 1–6. IEEE, 2008.
- Bronevich and Klir [2010] Andrey Bronevich and George J Klir. Measures of uncertainty for imprecise probabilities: an axiomatic approach. International journal of approximate reasoning, 51(4):365–390, 2010.
- Cai and Lim [2022] Yuhang Cai and Lek-Heng Lim. Distances between probability distributions of different dimensions. IEEE Transactions on Information Theory, 2022.
- Caprio et al. [2023a] Michele Caprio, Souradeep Dutta, Radoslav Ivanov, Kuk Jang, Vivian Lin, Oleg Sokolsky, and Insup Lee. Imprecise Bayesian Neural Networks. arXiv preprint arXiv:2302.09656, 2023a.
- Caprio et al. [2023b] Michele Caprio, Souradeep Dutta, Kaustubh Sridhar, Kuk Jang, Vivian Lin, Oleg Sokolsky, and Insup Lee. EpiC INN: Epistemic Curiosity Imprecise Neural Network. Technical report, University of Pennsylvania, Department of Computer and Information Science, 01 2023b.
- Cohn et al. [2017] Henry Cohn, Abhinav Kumar, Stephen D. Miller, Danylo Radchenko, and Maryna S. Viazovska. The sphere packing problem in dimension 24. Annals of Mathematics, 185(3):1017–1033, 2017.
- Corani and Zaffalon [2008] Giorgio Corani and Marco Zaffalon. Learning reliable classifiers from small or incomplete data sets: The naive credal classifier 2. Journal of Machine Learning Research, 9(4), 2008.
- Corani et al. [2012] Giorgio Corani, Alessandro Antonucci, and Marco Zaffalon. Bayesian networks with imprecise probabilities: Theory and application to classification. In Data Mining: Foundations and Intelligent Paradigms, pages 49–93. Springer, 2012.
- Couso et al. [1999] Inés Couso, Serafín Moral, and Peter Walley. Examples of independence for imprecise probabilities. In Proceedings of the First International Symposium on Imprecise Probabilities and Their Applications (ISIPTA 1999), pages 121–130, 1999.
- Cuzzolin [2021] Fabio Cuzzolin. The Geometry of Uncertainty. Artificial Intelligence: Foundations, Theory, and Algorithms. Springer Nature Switzerland, 2021.
- Depeweg et al. [2018] Stefan Depeweg, José-Miguel Hernández-Lobato, Finale Doshi-Velez, and Steffen Udluft. Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning. In International Conference on Machine Learning, pages 1184–1193. PMLR, 2018.
- Hifi and Yousef [2019] Mhand Hifi and Labib Yousef. A local search-based method for sphere packing problems. European Journal of Operational Research, 247:482–500, 2019.
- Hora [1996] Stephen C Hora. Aleatory and epistemic uncertainty in probability elicitation with an example from hazardous waste management. Reliability Engineering & System Safety, 54(2-3):217–223, 1996.
- Hüllermeier and Waegeman [2021] Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. Machine Learning, 110(3):457–506, 2021.
- Hüllermeier et al. [2022] Eyke Hüllermeier, Sébastien Destercke, and Mohammad Hossein Shaker. Quantification of credal uncertainty in machine learning: A critical analysis and empirical comparison. In Uncertainty in Artificial Intelligence, pages 548–557. PMLR, 2022.
- Hüllermeier [2022] Eyke Hüllermeier. Quantifying aleatoric and epistemic uncertainty in machine learning: Are conditional entropy and mutual information appropriate measures? arXiv preprint arXiv:2209.03302, 2022.
- Jiroušek and Shenoy [2018] Radim Jiroušek and Prakash P. Shenoy. A new definition of entropy of belief functions in the Dempster–Shafer theory. International Journal of Approximate Reasoning, 92:49–65, 2018.
- Kapoor et al. [2022] Sanyam Kapoor, Wesley J Maddox, Pavel Izmailov, and Andrew Gordon Wilson. On uncertainty, tempering, and data augmentation in Bayesian classification. arXiv preprint arXiv:2203.16481, 2022.
- Kendall and Gal [2017] Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision? Advances in Neural Information Processing Systems, 30, 2017.
- Lambrou et al. [2010] Antonis Lambrou, Harris Papadopoulos, and Alex Gammerman. Reliable confidence measures for medical diagnosis with evolutionary algorithms. IEEE Transactions on Information Technology in Biomedicine, 15(1):93–99, 2010.
- Levi [1980] Isaac Levi. The Enterprise of Knowledge. London : MIT Press, 1980.
- Mosler [2002] Karl Mosler. Zonoids and lift zonoids. In Multivariate Dispersion, Central Regions, and Depth: The Lift Zonoid Approach, volume 165 of Lecture Notes in Statistics, pages 25–78. New York : Springer, 2002.
- Pal et al. [1992] Nikhil R Pal, James C Bezdek, and Rohan Hemasinha. Uncertainty measures for evidential reasoning i: A review. International Journal of Approximate Reasoning, 7(3-4):165–183, 1992.
- Pal et al. [1993] Nikhil R Pal, James C Bezdek, and Rohan Hemasinha. Uncertainty measures for evidential reasoning ii: A new measure of total uncertainty. International Journal of Approximate Reasoning, 8(1):1–16, 1993.
- Seidenfeld et al. [2012] Teddy Seidenfeld, Mark J. Schervish, and Joseph B. Kadane. Forecasting with imprecise probabilities. International Journal of Approximate Reasoning, 53(8):1248–1261, 2012. Imprecise Probability: Theories and Applications (ISIPTA’11).
- Senge et al. [2014] Robin Senge, Stefan Bösner, Krzysztof Dembczyński, Jörg Haasenritter, Oliver Hirsch, Norbert Donner-Banzhoff, and Eyke Hüllermeier. Reliable classification: Learning classifiers that distinguish aleatoric and epistemic uncertainty. Information Sciences, 255:16–29, 2014.
- Settles [2009] Burr Settles. Active Learning Literature Survey. Technical report, University of Wisconsin-Madison, Department of Computer Sciences, 2009.
- Shaker and Hüllermeier [2020] Mohammad Hossein Shaker and Eyke Hüllermeier. Aleatoric and epistemic uncertainty with random forests. In Advances in Intelligent Data Analysis XVIII: 18th International Symposium on Intelligent Data Analysis, IDA 2020, Konstanz, Germany, April 27–29, 2020, Proceedings 18, pages 444–456. Springer, 2020.
- Shannon [1948] Claude E Shannon. A mathematical theory of communication. The Bell system technical journal, 27(3):379–423, 1948.
- Smith and Gal [2018] Lewis Smith and Yarin Gal. Understanding measures of uncertainty for adversarial example detection. arXiv preprint arXiv:1803.08533, 2018.
- Varshney [2016] Kush R Varshney. Engineering safety in machine learning. In 2016 Information Theory and Applications Workshop (ITA), pages 1–5. IEEE, 2016.
- Varshney and Alemzadeh [2017] Kush R Varshney and Homa Alemzadeh. On the safety of machine learning: Cyber-physical systems, decision sciences, and data products. Big data, 5(3):246–255, 2017.
- Vershynin [2018] Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science, volume 47. Cambridge University Press, 2018.
- Viazovska [2017] Maryna S. Viazovska. The sphere packing problem in dimension 8. Annals of Mathematics, 185(3):991–1015, 2017.
- Walley [1991] Peter Walley. Statistical Reasoning with Imprecise Probabilities, volume 42 of Monographs on Statistics and Applied Probability. London : Chapman and Hall, 1991.
- Walley [1996] Peter Walley. Inferences from multinomial data: learning about a bag of marbles. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):3–34, 1996.
- Yang et al. [2009] Fan Yang, Hua-zhen Wang, Hong Mi, Cheng-de Lin, and Wei-wen Cai. Using random forest for reliable classification and cost-sensitive learning for medical diagnosis. BMC bioinformatics, 10(1):1–14, 2009.
- Zaffalon [2002] Marco Zaffalon. The naive credal classifier. Journal of statistical planning and inference, 105(1):5–21, 2002.
Appendix A Proofs
Proof of Proposition 1.
Let $Q, Q'$ be credal sets, and assume $Q \subseteq Q'$. Then we have the following.
- $\mathrm{Vol}(Q) \geq 0$, and $\mathrm{Vol}(Q) = 0$ whenever $Q$ is a singleton. Hence $\mathrm{Vol}$ satisfies A1.
- That the volume is a continuous functional is a well-known fact that comes from the continuity of the Lebesgue measure, so $\mathrm{Vol}$ satisfies A2.
- Since $Q \subseteq Q'$, we have $\mathrm{Vol}(Q) \leq \mathrm{Vol}(Q')$. This comes from the fundamental monotonicity property of the Lebesgue measure, so $\mathrm{Vol}$ satisfies A3.
- Consider a sequence $(Q_n)_{n \in \mathbb{N}}$ of credal sets on $\mathcal{X}$ such that $Q_{n+1} \subseteq Q_n$, for all $n \in \mathbb{N}$. Then, this means that there exists $N \in \mathbb{N}$ such that for all $n \geq N$, the geometric representation of $Q_n$ is a subset of the geometric representation of $Q_N$. In addition, the limiting element of $(Q_n)$ is a (multi)set whose elements are all equal to some probability measure $P$, so its geometric representation is a point and its volume is $0$. Hence, probability consistency is implied by continuity (A2), so $\mathrm{Vol}$ satisfies A4'.
- The volume is invariant to rotation and translation. This is a well-known fact that comes from the fundamental properties of the Lebesgue measure, so $\mathrm{Vol}$ satisfies A7.
Let us now show that the volume operator satisfies sub-additivity A5. Let $\mathcal{X} \subseteq \mathcal{X}_1 \times \mathcal{X}_2$. In addition, suppose we are in the general case in which $\mathcal{X} \subsetneq \mathcal{X}_1 \times \mathcal{X}_2$. In particular, let $\mathcal{X}_1 = \{x_1, x_2\}$ and $\mathcal{X}_2 = \{y_1, y_2\}$, so that $\mathcal{X}_1 \times \mathcal{X}_2 = \{(x_1, y_1), (x_1, y_2), (x_2, y_1), (x_2, y_2)\}$ and $\mathcal{X} = \{(x_1, y_1), (x_2, y_2)\}$. Suppose also $x_1 \neq x_2$ and $y_1 \neq y_2$. Now, pick any probability measure $P$ on $\mathcal{X}$. In general, we would have that its marginal $P_1$ on $\mathcal{X}_1$ is such that $P_1(x) = \sum_{y \in \mathcal{X}_2} P(x, y)$. Similarly for the marginal $P_2$ on $\mathcal{X}_2$. In our case, though, the computation is easier. To see this, fix $x_1 \in \mathcal{X}_1$. Then, we should sum over the probability of the pairs $(x_1, y)$, $y \in \mathcal{X}_2$. But the only such pair in $\mathcal{X}$ is $(x_1, y_1)$. A similar argument holds if we fix $x_2$, or any of the elements of $\mathcal{X}_2$. Hence, we have that $P_1(x_1) = P_2(y_1) = P(x_1, y_1)$ and $P_1(x_2) = P_2(y_2) = P(x_2, y_2)$.
Let $\mathcal{M}_1$ and $\mathcal{M}_2$ denote the marginal convex sets of probability distributions on $\mathcal{X}_1$ and $\mathcal{X}_2$, respectively, and let $\mathcal{M}$ denote the convex set of joint probability distributions on $\mathcal{X}$ [Couso et al., 1999]. Then, given our argument above, we have that $\mathrm{Vol}(\mathcal{M}) \leq \mathrm{Vol}(\mathcal{M}_1) + \mathrm{Vol}(\mathcal{M}_2)$. So in the general case where $\mathcal{X} \subsetneq \mathcal{X}_1 \times \mathcal{X}_2$, the volume is sub-additive. ∎
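In the binary-classification case discussed in the main text, a credal set reduces to an interval of probabilities for the positive class, and $\mathrm{Vol}$ reduces to the interval length. The following is a minimal numerical sketch (our own illustration, not part of the paper; the function `vol` and the concrete intervals are hypothetical) of how several of the axioms play out in this one-dimensional case:

```python
def vol(credal_interval):
    """Volume (length) of a binary-classification credal set [l, u],
    where l and u are lower and upper probabilities of the positive class."""
    l, u = credal_interval
    assert 0.0 <= l <= u <= 1.0
    return u - l

# A1 (non-negativity): the volume is >= 0, and a singleton (precise
# probability) has volume 0.
assert vol((0.3, 0.7)) >= 0
assert vol((0.5, 0.5)) == 0

# A3 (monotonicity): a credal set contained in another has smaller volume.
assert vol((0.4, 0.6)) <= vol((0.2, 0.8))

# A7 (invariance): translating the interval leaves the volume unchanged.
assert vol((0.25, 0.5)) == vol((0.5, 0.75))

print(vol((0.25, 0.75)))  # prints 0.5: large interval, large epistemic uncertainty
```

The one-dimensional picture is exactly why the volume works well for binary classification: the geometric representation is a segment, and its length separates precise from imprecise predictions.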
Proof of Proposition 2.
Immediate from the assumption on the instance of SI. ∎
Proof of Theorem 1.
Pick any compact set in $\mathbb{R}^d$ and any set satisfying (a)-(c). Let $B_r$ denote a generic ball in $\mathbb{R}^d$ of radius $r$. The proof then proceeds through the chain of equalities and inequalities (7)-(11), where (7) comes from equation (3), (8) comes from (4), (9) comes from the set satisfying (a)-(c) being the union of pairwise disjoint balls of radius $r$, (10) comes from the properties of the volume of a ball of radius $r$ in $\mathbb{R}^d$, and (11) comes from property (c). ∎
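Step (10) rests on the closed-form volume of a ball of radius $r$ in $\mathbb{R}^d$, namely $\pi^{d/2} r^d / \Gamma(d/2 + 1)$. The following sketch (our own illustration, not part of the paper) checks this formula against the familiar low-dimensional cases and shows how quickly such volumes vanish as the dimension grows, which is the high-dimensional effect the theorem exploits:

```python
import math

def ball_volume(d, r):
    """Volume of a d-dimensional Euclidean ball of radius r:
    pi^(d/2) / Gamma(d/2 + 1) * r^d."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1) * r ** d

# Sanity checks against the familiar low-dimensional formulas.
assert abs(ball_volume(2, 1.0) - math.pi) < 1e-12          # area of the unit disk
assert abs(ball_volume(3, 1.0) - 4 / 3 * math.pi) < 1e-12  # volume of the unit 3-ball

# Past a moderate dimension, the unit-ball volume collapses toward zero.
for d in (10, 20, 50):
    print(d, ball_volume(d, 1.0))
assert ball_volume(10, 1.0) > ball_volume(20, 1.0) > ball_volume(50, 1.0)
```

(The unit-ball volume is not monotone from $d = 1$: it peaks around $d = 5$ before decaying super-exponentially.)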
Appendix B High-dimensional probability
Since Theorem 1 in Section 4.2 is intimately related to the Carl-Pajor theorem [Ball and Pajor, 1990], we state a version of the theorem here.
Theorem 2 (Carl-Pajor).
Let $B_2^d$ denote the $d$-dimensional unit Euclidean ball, and let $P \subseteq B_2^d$ be a polytope with $n$ vertices. Then, we have
$$\frac{\mathrm{Vol}(P)}{\mathrm{Vol}(B_2^d)} \leq \left(\frac{C \log n}{d}\right)^{d/2}, \qquad (12)$$
where $C > 0$ is an absolute constant.
For further results connecting high-dimensional probability and data science, see Vershynin [2018].
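To get a feel for Theorem 2, one can compute an exact special case: the cross-polytope $\mathrm{conv}\{\pm e_1, \dots, \pm e_d\}$ is a polytope with $n = 2d$ vertices inscribed in the unit ball, and its volume is $2^d / d!$. The sketch below (our own illustration; the function names are ours) shows that the fraction of the ball occupied by this few-vertex polytope decays extremely fast with the dimension, in line with the theorem:

```python
import math

def unit_ball_volume(d):
    """Volume of the d-dimensional unit Euclidean ball."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def cross_polytope_volume(d):
    """Volume of conv{+-e_1, ..., +-e_d}, a polytope with n = 2d vertices
    inscribed in the unit ball; the exact value is 2^d / d!."""
    return 2 ** d / math.factorial(d)

# Fraction of the unit ball occupied by a polytope with only 2d vertices.
for d in (2, 5, 10, 20):
    print(d, cross_polytope_volume(d) / unit_ball_volume(d))
```

For $d = 2$ the ratio is $2/\pi \approx 0.64$, but by $d = 20$ it is already on the order of $10^{-11}$: polytopes with few vertices occupy a vanishing fraction of the ball in high dimensions, which is the geometric phenomenon behind the volume's failure as an epistemic-uncertainty measure for many classes.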