
Superior Scoring Rules for Probabilistic Evaluation of Single-Label Multi-Class Classification Tasks

Rouhollah Ahmadian
Department of Mathematics and Computer Science
Amirkabir University of Technology (Tehran Polytechnic)
Iran
[email protected]
&Mehdi Ghatee (corresponding author)
Department of Mathematics and Computer Science
Amirkabir University of Technology (Tehran Polytechnic)
Iran
[email protected]
&Johan Wahlström
Department of Computer Science
University of Exeter
UK
[email protected]
Abstract

This study introduces novel superior scoring rules called Penalized Brier Score (PBS) and Penalized Logarithmic Loss (PLL) to improve model evaluation for probabilistic classification. Traditional scoring rules like the Brier Score and Logarithmic Loss sometimes assign better scores to misclassifications than to correct classifications. This departure from the actual preference for rewarding correct classifications can lead to suboptimal model selection. By integrating penalties for misclassifications, PBS and PLL modify traditional proper scoring rules to consistently assign better scores to correct predictions. Formal proofs demonstrate that PBS and PLL satisfy strictly proper scoring rule properties while also preferentially rewarding accurate classifications. Experiments showcase the benefits of using PBS and PLL for model selection, model checkpointing, and early stopping. PBS exhibits a higher negative correlation with the F1 score than the Brier Score during training. Thus, PBS more effectively identifies optimal checkpoints and early stopping points, leading to improved F1 scores. Comparative analysis verifies that models selected by PBS and PLL achieve superior F1 scores. Therefore, PBS and PLL address the gap between uncertainty quantification and accuracy maximization by encapsulating both proper scoring principles and an explicit preference for true classifications. The proposed metrics can enhance model evaluation and selection for reliable probabilistic classification.

Keywords: Strictly Proper Scoring Rules · Evaluation Metric · Probabilistic Evaluation · Probabilistic Classification · Model Selection

1 Introduction

Evaluation metrics play a critical role in model selection, feature selection, parameter tuning, and regularization when evaluating the performance of a classification model [1]. In model selection, evaluation metrics such as accuracy, precision, recall, and F-measures are used to compare the performance of different models and select the best model for a specific task [2]. Similarly, in feature selection, evaluation metrics are used to identify the most informative features for a specific task. By comparing the performance of a classification model with different feature subsets, irrelevant or redundant features can be eliminated, improving the model’s efficiency and effectiveness [3]. Model checkpointing, also known as snapshotting, involves saving the state of a model during the training process at regular intervals. By evaluating the performance of each snapshot on validation metrics, the best-performing snapshot can be selected for the final model. This helps prevent overfitting by choosing a snapshot that exhibits good generalization ability on the validation set, improving the model’s ability to generalize to unseen data [4, 5, 6, 7]. In regularization, evaluation metrics are used to balance the model’s complexity and its ability to generalize to new data. By evaluating the performance of a classification model with different regularization strengths, overfitting can be prevented, ensuring that the model performs well on unseen data [8, 9].

In classification tasks, the evaluation of the model often relies on simple accuracy measures and related metrics derived from the confusion matrix, which only consider whether a prediction specifies the true class [10]. However, this approach overlooks the crucial aspect of probabilistic uncertainty quantification [11]. Ensuring the accuracy of predictive uncertainty is critical for classification models used in safety-critical applications. To be considered reliable and calibrated in statistical terms, a classification model must exhibit an honest expression of its predictive distribution [12]. In other words, the model should not only make predictions but also convey the level of uncertainty associated with those predictions [13]. For example, consider a binary classification problem where the goal is to predict a certain disease. A classifier that outputs a single class prediction of disease or no disease may achieve a high accuracy rate in terms of correct predictions. However, it fails to capture and communicate the inherent uncertainty surrounding each prediction. This discrepancy between the use of simple accuracy measures in classification and the need for probabilistic assessments has motivated researchers to explore the use of scoring rules [14].

Scoring rules are used to evaluate probabilistic predictions or forecasts in decision theory [15]. When assessing probabilistic forecasts, it is crucial to use a scoring method that accurately reflects the reported probabilities and their alignment with actual outcomes [16]. One approach to achieving this is through the use of strictly proper scoring rules. These rules assign optimal scores only when the forecasts match the true probabilities [17]. The Brier Score, initially introduced in meteorology, is an early example of such a rule [18]. It was developed to discourage biased reporting and promote honest assessments of uncertainty. As a variant of the quadratic scoring rule, it remains widely utilized [16]. Another pioneering effort resulted in the logarithmic rule, which is notable for its theoretical connection to entropy [19]. Over time, numerous studies have examined commonly used scoring rules and their desirable statistical properties from different perspectives. Some analyses have focused on aspects such as forecast calibration and consistency incentives, while others have explored connections to information theory concepts [11, 20]. As probabilistic prediction continues to be a significant area of research, ongoing investigations into existing rule properties and potential novel approaches will contribute to further understanding and advancement of scoring methodologies. In this regard, this paper proposes a novel attribute for strictly proper scoring rules.

1.1 Motivation

The classification decision is commonly partitioned into the categories of true positive, true negative, false positive, and false negative. Let us examine an illustrative scenario involving a true vector of [0,1,0] and two predicted vectors: A=[0.33,0.34,0.33] and B=[0.51,0.49,0]. Vector A corresponds to a true negative prediction, while vector B represents a false negative prediction. It is crucial to critically evaluate which vector demonstrates superior performance, taking into account that the \arg\max of vector A accurately predicts the outcome, while vector B assigns a higher probability to the correct class. Upon analyzing the performance of vectors A and B in a single-label scenario, it becomes evident that vector A possesses an advantage over vector B in terms of classification accuracy. This attribute holds significant importance as accurate decision-making is highly desirable and essential in numerous applications. Consequently, considering the paramount significance of precise decisions, it can be concluded that vector A surpasses vector B in quality. Therefore, this paper advocates that a scoring rule ought to consistently assign better scores to decisions corresponding to true positives and true negatives. However, a discrepancy arises when considering the Brier Score and Logarithmic Loss metrics, as they occasionally contradict this advocated principle. Surprisingly, vector A exhibits higher (worse) Brier Score and Logarithmic Loss values than vector B, contrary to expectations. This inconsistency between the advocated principle and the observed outcomes highlights a gap that this paper aims to address and resolve. Therefore, this paper proposes a novel attribute for strictly proper scoring rules to tackle this issue. Our contributions can be stated as follows:

  • We hypothesize that evaluation metrics should favor accurate predictions (true positives and true negatives) over misclassified ones (false positives and false negatives) when evaluating single-label multi-class classifications.

  • We show that traditional strictly proper scoring rules like the Brier Score and Logarithmic Loss can assign better scores to incorrect predictions than to correct ones.

  • A new property, termed superior, is defined for scoring rules to formally establish that evaluations should prefer true predictions.

  • To this end, this research proposes two novel scoring rules called Penalized Brier Score (PBS) and Penalized Logarithmic Loss (PLL).

  • This research modifies the Brier Score and Logarithmic Loss scoring rules by incorporating a penalty term to make them superior. This penalizes incorrect predictions more strongly than the original scoring rules, aligning with the initial hypothesis that favors correct decisions over incorrect ones.

  • The utility of the proposed evaluation metrics for early stopping and checkpointing is experimentally assessed.

  • Experimental results demonstrate that the proposed evaluation metrics improve classifier performance. Furthermore, these metrics exhibit a high correlation with the F1 score while also measuring prediction uncertainty and calibration.

The paper is structured in the following way: Section 2 provides background information and sets the stage for the research. Section 3 reviews previous work related to the topic. Section 4 details the motivation for developing new scoring rules and introduces the proposed Penalized Brier Score and Penalized Logarithmic Loss methods. Section 5 presents experimental results obtained from spatio-temporal data. Finally, Section 6 concludes the paper.

2 Preliminaries

2.1 Scoring Rules

A scoring rule S(\cdot,\cdot) is a bivariate function that takes two arguments: a probability measure, representing a forecast, and an observed outcome [21]. The function returns a real number that measures the deviation between a forecast probability and reality. Scoring rules provide a summary measure for the evaluation of predictions that assign probabilities to events, such as the probabilistic classification of a set of mutually exclusive outcomes or classes [22]. Scoring rules can be thought of as cost or loss functions and are evaluated as the empirical mean of a given sample, simply called the score [17]. It is worth noting that in neural networks, loss functions are typically utilized during the backpropagation process to optimize the network. However, in this research, we focus on evaluation metrics for assessing model performance.

Let \Omega represent a general sample space, which contains all possible outcomes. Let \mathcal{A} represent an event space, which is a \sigma-algebra of subsets of \Omega. An event refers to a set of observations within the sample space. The extended real numbers \overline{\mathbb{R}}=[-\infty,\infty] denote possible scores. Let \mathcal{F} represent a convex class of probability measures on the space (\Omega,\mathcal{A}). A probabilistic forecast is any probability measure F\in\mathcal{F}.

Definition 1 (Scoring Rule).

A scoring rule S:\mathcal{F}\times\Omega\rightarrow\overline{\mathbb{R}} is a function assigning numerical values to pairs (P,i), where P\in\mathcal{F} is the issued probabilistic forecast and i\in\Omega is the observation that materializes.

Let the distributional forecast Q\in\mathcal{F} represent the forecaster’s best estimate of the probability distribution. In the context of multi-class classification tasks, Q represents the ground-truth vectors. A scoring rule S(P,Q) is defined as the expected score of S(P,i) when i\sim Q, that is, S(P,Q):=\mathbb{E}_{Q}[S(P,i)]=\int S(P,i)\,dQ(i). For a finite sample space, this reduces to S(P,Q)=\sum_{i\in\Omega}Q(\{i\})\,S(P,i).

Definition 2 (Strictly Proper Scoring Rule).

The scoring rule is proper with respect to \mathcal{F} if

\left\{\begin{matrix}S(P,Q)\geq S(Q,Q) & S\ \text{is negatively oriented}\\ S(P,Q)\leq S(Q,Q) & S\ \text{is positively oriented}\end{matrix}\right.

and it is strictly proper if equality holds if and only if P=Q.

2.2 Probabilistic Evaluation of Multi-Class Classification

In single-label multi-class classification, the objective is to predict the specific class label, denoted as \omega, for a given sample characterized by a feature vector, denoted as X. In probabilistic classification, the goal is to learn the entire conditional distribution p(\omega|X), which represents the probability distribution over classes conditioned on the given features [23]. This is accomplished through the use of a probabilistic classifier denoted as f:\mathcal{X}\rightarrow\mathcal{F}. The set \mathcal{F} of probability measures is typically identified with the probability c-simplex \mathcal{C}=\{p\in[0,1]^{c}\,|\,\sum_{i=1}^{c}p_{i}=1\}, and probability distributions are represented by vectors p\in\mathcal{C}. Therefore, the overall form of the scoring rules for single-label multi-class classification problems is S(p,i), where p is a probability vector obtained from the classifier and i is the true class.

Definition 3.

Brier Score (or squared loss or quadratic score), denoted by S_{BS}(p,i), is the most well-known strictly proper scoring rule and is defined as [18]

S_{BS}(p,i)=\sum_{j=1}^{c}(p_{j}-y_{j})^{2}    (1)

where c is the number of classes and y is the ground-truth label vector y=(y_{1},\cdots,y_{c}) such that y_{i}=1 if the true class is i, and y_{j}=0 otherwise.

Definition 4.

Logarithmic Loss (or cross-entropy loss or log-loss or ignorance score), denoted by S_{LL}(p,i), is a strictly proper scoring rule and is defined as [19]

S_{LL}(p,i)=-\sum_{j=1}^{c}y_{j}\log(p_{j})    (2)

where c is the number of classes and y is the ground-truth label vector y=(y_{1},\cdots,y_{c}) such that y_{i}=1 if the true class is i, and y_{j}=0 otherwise.

In practice, competing forecast procedures are ranked by the average score [17]. Suppose that we wish to fit a parametric model P_{\theta} based on random samples X^{(1)},\cdots,X^{(n)}. To estimate \theta, we might measure the goodness-of-fit by the mean score [24]

S_{n}(\theta)=\frac{1}{n}\sum_{i=1}^{n}S(P_{\theta}(X^{(i)}),\omega^{(i)})    (3)

where S is a strictly proper scoring rule. If \theta_{0} denotes the true parameter, asymptotic arguments indicate that \arg\min_{\theta}S_{n}(\theta)\rightarrow\theta_{0} as n\rightarrow\infty [17]. Suppose S is a negatively oriented scoring rule like the Brier Score. This suggests a general approach to estimation: choose a strictly proper scoring rule tailored to the problem at hand, minimize S_{n}(\theta) over the parameter space, and take \hat{\theta}_{n}=\arg\min_{\theta}S_{n}(\theta) as the optimum score estimator based on the scoring rule [17]. For positively oriented scoring rules like accuracy, \hat{\theta}_{n}=\arg\max_{\theta}S_{n}(\theta).
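As a concrete illustration, the following Python sketch (using NumPy; the function names are ours and not taken from any particular library) computes the per-sample Brier Score and Logarithmic Loss of Eqs. (1)-(2) and the mean score of Eq. (3) for a small batch of predictions.

import numpy as np

def brier_score(p, y):
    # Per-sample Brier Score, Eq. (1): sum_j (p_j - y_j)^2
    return np.sum((p - y) ** 2, axis=-1)

def log_loss(p, y, eps=1e-12):
    # Per-sample Logarithmic Loss, Eq. (2): -sum_j y_j * log(p_j)
    return -np.sum(y * np.log(np.clip(p, eps, 1.0)), axis=-1)

def mean_score(score_fn, p, y):
    # Mean score over n samples, Eq. (3)
    return float(np.mean(score_fn(p, y)))

p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
y = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(mean_score(brier_score, p, y))  # average Brier Score (lower is better)
print(mean_score(log_loss, p, y))     # average Logarithmic Loss (lower is better)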

3 Literature Review

Evaluation metrics are crucial for assessing the performance of classification algorithms. They are used to evaluate the performance and effectiveness of classifiers, as well as to discriminate and select the optimal solution during the classification training process [1]. Classification evaluation metrics can generally be divided into accuracy-based and confidence-based metrics.

Accuracy-based metrics only consider whether a prediction matches the true label, without regard to predictive confidence or uncertainty. This category includes metrics like accuracy, error rate, precision, recall, F-measures, AUC, and related threshold-based measures. Accuracy and error rate are among the most widespread evaluation metrics. However, they have limitations such as providing less distinctive values and favoring the majority class [25, 10]. Precision and recall focus on different targets, making them unsuitable for selecting the optimal solution [1]. F-measures and AUC aim to address some of the shortcomings of individual metrics. F-measures consider both precision and recall, while AUC reflects the overall ranking performance of a classifier [25]. However, AUC has a high computational cost for large datasets [1].

Confidence-based metrics assess predictive confidence by rewarding calibrated probability estimates [26]. These metrics reward probabilistic calibration and include strictly proper scoring rules. In addition to the scoring rules introduced in the preliminaries, there are several other scoring rules that have been utilized in forecast verification [27]. Some examples include the spherical score, quadratic score, pseudospherical score, and continuous ranked probability score (CRPS) [28, 29]. Variations of these rules have been used in different fields. This includes electricity price forecasting to assess volatility over time, and wind/weather modeling to evaluate probabilistic predictions [30, 31]. Financial prediction and spatial statistics also apply scoring rules [29].

A scoring rule is considered proper if the expected score is minimized by the true distribution of the outcome of interest [32]. This means that a forecaster who uses a proper scoring rule will be incentivized to provide a forecast that is as close as possible to the true distribution. In terms of elicitation, scoring rules have a significant role in motivating assessors to provide conscientious and truthful assessments [17]. Nonetheless, because the concept of strict propriety adds an additional level of uniqueness, the utilization of strictly proper scoring rules offers enhanced theoretical assurances regarding the optimality of truth-telling as a strategy [17]. This encourages predictors to provide more honest probabilistic forecasts.

In addition, assessing forecast characteristics like calibration and sharpness is important for understanding individual and combined forecasts [16]. Calibration examines the consistency between forecasts and outcomes, while sharpness measures the concentration of the forecast distribution for a variable of interest, regardless of outcomes [17]. The relationship between strictly proper scoring rules and measures of calibration and sharpness can be established by decomposing the rules into components. A common decomposition enables the expression of a scoring rule as a function of both a calibration measure and a sharpness measure [16, 26]. By utilizing strictly proper scoring rules, the evaluation process can capture the probabilistic uncertainty inherent in classification problems and prevent overconfidence [16]. Accuracy and F-measures are two proper scoring rules, which are positively oriented. They are not strictly proper scoring rules because their maxima are not unique [17]. Therefore, strictly proper scoring rules such as the Brier Score and Logarithmic Loss have more advantages than metrics derived from the confusion matrix. In conclusion, this paper recommends the evaluation of multi-class classification using strictly proper scoring rules for the following reasons:

  • Elicitation: Strictly proper scoring rules have desirable elicitation properties. They uniquely incentivize assessors to provide honest probabilistic forecasts by maximizing the expected score [17].

  • Reliability: Strictly proper scoring rules enable reliable assessments of uncertainty. Decomposing proper scoring rules into calibration and sharpness components facilitates the evaluation of forecast uncertainty. This prevents overconfidence by capturing the inherent probabilistic uncertainty in classification [16, 26].

While this paper recommends using strictly proper scoring rules, they are not always sufficient to ensure that more accurate forecasts receive better scores. Previous evaluation metrics for multi-class classification tasks have the shortcoming of not explicitly favoring accurate predictions over misclassified ones. Metrics like Brier Score and Logarithmic Loss, while being strictly proper scoring rules, do not consistently assign better scores to predictions that result in true positives and true negatives compared to those that are false positives and false negatives. This can potentially lead to suboptimal model selection if a model achieves better scoring by making incorrect predictions rather than correct ones. The paper addresses this shortcoming by proposing novel superior scoring rules called Penalized Brier Score and Penalized Logarithmic Loss.

4 Proposed Model

This section first presents the theoretical background, providing results that motivate the requirement for proper scoring rules that assign better scores to correct predictions than to incorrect ones. Next, the proposed methodology is introduced. It illustrates how the scoring rules can be modified by incorporating penalty terms. This serves to penalize incorrect predictions more strongly, thereby ensuring that the scoring rules consistently assign better scores to correct predictions, as intended.

Figure 1: The big picture of theoretical analysis.

4.1 Theoretical Analysis

The primary objective of any classifier is to accurately assign observations to their respective classes. Observations that are correctly classified are considered to be of greater value than those that are incorrectly classified. A classifier that can achieve a high rate of accurate classifications and a low error rate is deemed superior because it indicates a greater ability to identify patterns and differentiate between classes. Conversely, observations that are incorrectly classified suggest that the classifier has not accurately modeled the relationship between features and classes. These errors compromise the classifier’s capacity to generalize and make accurate predictions on new samples. Correct classifications represent successful outcomes for the classifier, while errors indicate failures. Therefore, it is crucial to maximize correct classifications and minimize errors in order to develop an effective and useful classifier. The reliability of a classifier’s performance is determined by its ability to consistently reproduce the correct labels instead of settling for inferior results that are prone to mistakes.

Fig. 1 shows the big picture of the theoretical analysis presented in this section. The subsequent part of this section demonstrates that both the Brier Score and the Logarithmic Loss are indifferent to this preference. To prove this point, a novel attribute for scoring rules is introduced. This attribute guarantees that the scoring rule assigns a superior score to the observations that are correctly classified.

Definition 5 (Superior Scoring Rule).

Let \psi and \xi be two subsets of the set of all predictions by a certain classifier:

\psi=\left\{p~|~\arg\max p=\arg\max y\right\},
\xi=\left\{p~|~\arg\max p\neq\arg\max y\right\}.

where y is the ground-truth label vector y=(y_{1},\cdots,y_{c}) such that \sum_{i=1}^{c}y_{i}=1 and y_{i}=1 when i is the true class. Therefore, \psi contains true positive and true negative predictions, and \xi contains false positive and false negative predictions. Let S(p,\omega) represent the score when the prediction p\in\mathcal{C} is issued and the true class is \omega. S is superior if its scores for every member of \psi are always better than those of \xi. Let p^{(1)}\in\psi with its corresponding true class \omega^{(1)}, and let p^{(2)}\in\xi with its corresponding true class \omega^{(2)}. Then, if the condition

\left\{\begin{matrix}S(p^{(1)},\omega^{(1)})<S(p^{(2)},\omega^{(2)}) & S\ \text{is negatively oriented}\\ S(p^{(1)},\omega^{(1)})>S(p^{(2)},\omega^{(2)}) & S\ \text{is positively oriented}\end{matrix}\right.

is always satisfied, S is superior.

Therefore, if a scoring rule has this property, it will always give better scores to correct predictions. In the following, we show theoretically that Logarithmic Loss and Brier Score are not superior.

Theorem 1 (LL Property).

In the context of single-label multi-class classification tasks with c>2, the Logarithmic Loss is not superior.

Proof.

See Appendix 7. ∎

Theorem 2 (BS Property).

In the context of single-label multi-class classification tasks with c>2, the Brier Score is not superior.

Proof.

See Appendix 8. ∎

According to the Monte Carlo simulations provided in Appendix 8, it is shown that the Brier Score outperforms the Logarithmic Loss in this respect. However, to ensure that the scoring rule consistently assigns a better score to accurately classified observations, a new criterion is necessary, which can be attained by modifying the current scoring rules. Specifically, if the probability vector belongs to the set \xi, an extra penalty is integrated into the scoring rule. Since this research focuses on the Brier Score and Logarithmic Loss, which should be minimized to be optimal, our aim is to penalize incorrect predictions so that their score is worse than the highest (worst) score possible for a correct prediction. To accomplish this, the penalty is set equal to the maximum value of the scoring rule over the set \psi.

Theorem 3 (Maximum S_{BS} in \psi).

The largest possible value of the Brier Score for the set \psi is \frac{c-1}{c}.

Proof.

See Appendix 9. ∎

Theorem 4 (Maximum S_{LL} in \psi).

The largest possible value of the Logarithmic Loss for the set \psi is -\log(\frac{1}{c}).

Proof.

See Appendix 10. ∎

Based on the highest possible scores of the Brier Score and Logarithmic Loss over the set \psi of correct predictions, an adjusted form of these scoring rules can be defined. The modified Brier Score with the penalty term, the Penalized Brier Score (PBS), can be expressed as:

S_{PBS}(q,i)=\sum_{j=1}^{c}(y_{j}-q_{j})^{2}+\left\{\begin{matrix}\frac{c-1}{c} & q\in\xi\\ 0 & \text{otherwise}\end{matrix}\right.    (4)

The modified Logarithmic Loss with the penalty term, Penalized Logarithmic Loss (PLL), can be expressed as:

S_{PLL}(q,i)=-\sum_{j=1}^{c}y_{j}\log(q_{j})-\left\{\begin{matrix}\log(\frac{1}{c}) & q\in\xi\\ 0 & \text{otherwise}\end{matrix}\right.    (5)

where y is the ground-truth vector, q is the probability vector predicted by the probabilistic classifier, and c is the number of classes. As shown in the following theorems, the two proposed evaluation metrics are not only strictly proper scoring rules but also superior.
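Before turning to the formal results, a short numerical check of the motivating example from Section 1.1 illustrates the effect of the penalty (a Python sketch of Eqs. (4)-(5); the function names are ours):

import numpy as np

def pbs(q, y, c):
    # Penalized Brier Score, Eq. (4)
    penalty = (c - 1) / c if np.argmax(q) != np.argmax(y) else 0.0
    return np.sum((y - q) ** 2) + penalty

def pll(q, y, c, eps=1e-12):
    # Penalized Logarithmic Loss, Eq. (5): subtracting log(1/c) adds log(c)
    penalty = -np.log(1.0 / c) if np.argmax(q) != np.argmax(y) else 0.0
    return -np.sum(y * np.log(np.clip(q, eps, 1.0))) + penalty

y = np.array([0.0, 1.0, 0.0])
A = np.array([0.33, 0.34, 0.33])  # correct argmax (A in psi)
B = np.array([0.51, 0.49, 0.00])  # incorrect argmax (B in xi)

# The unpenalized scores prefer the wrong prediction B:
#   BS(A) ~ 0.653 > BS(B) ~ 0.520 and LL(A) ~ 1.079 > LL(B) ~ 0.713.
# The penalized scores prefer the correct prediction A:
print(pbs(A, y, 3), pbs(B, y, 3))  # ~0.653 vs ~1.187
print(pll(A, y, 3), pll(B, y, 3))  # ~1.079 vs ~1.812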

Theorem 5 (PBS & PLL Scoring Rules).

PBS and PLL are strictly proper scoring rules.

Proof.

See Appendix 11. ∎

Theorem 6 (PBS & PLL Property).

PBS and PLL are superior.

Proof.

The penalty term equals the maximum (worst) score that can be assigned to a correct prediction. Since any incorrect prediction has a strictly positive base score, its penalized score strictly exceeds this maximum, whereas correct predictions receive no penalty. Consequently, the modified scoring rules consistently assign better scores to correct predictions than to incorrect ones. ∎

Penalties in Eq. (4) and Eq. (5) are designed to be applied only to incorrect predictions. Therefore, these penalties depend on the predicted probability vectors entering Eq. (1) and Eq. (2). Furthermore, the more incorrect predictions a model \theta makes in Eq. (3), the worse the score it receives. As a result, the modified evaluation metrics can more reliably identify optimal model checkpoints and early stopping points that achieve better generalization performance compared to the original metrics like the Brier Score and Logarithmic Loss.

4.2 Algorithm

In this section, we introduce vectorized implementations of the proposed evaluation metrics. The pseudocode of the Penalized Brier Score and the Penalized Logarithmic Loss is presented in Algorithm 2 and Algorithm 3, respectively. The PBS and PLL algorithms leverage the Penalizing procedure (Algorithm 1) to incorporate penalties into the Brier Score and the Logarithmic Loss.

The Penalizing procedure detects wrong predictions and penalizes them. Its output is a vector in which 0 indicates a correct prediction and the penalty value indicates a wrong, penalized prediction. This payoff vector is then used by the other scoring functions to calculate the performance metrics. It takes as input predicted probabilities q\in[0,1]^{n\times c}, ground-truth labels y\in\{0,1\}^{n\times c}, and a penalty value for wrong predictions. It first computes the hot values for each sample, which are the predicted probabilities of the correct class. It then subtracts the hot values from the predicted probabilities to obtain a container whose entries are positive only for incorrect predictions. All negative values in the container are set to zero, and the sum of the remaining values is computed for each sample. The penalty value is then multiplied by a vector of ones to become a vector. Finally, all positive values in the container are replaced by the penalty, and the container is returned as the payoff for each sample.

The PBS algorithm takes as input predicted probabilities q and ground-truth labels y. It first computes the penalty factor as (c-1)/c^{2}, where c is the number of classes; this is the Eq. (4) penalty \frac{c-1}{c} scaled by 1/c, because the per-sample Brier score is computed here as a mean over classes rather than a sum. It then calls the Penalizing procedure to compute the payoff for each sample. Next, the Brier scores are computed as the mean squared difference between the predicted probabilities and the ground-truth labels. Finally, the PBS is computed as the mean of the sum of the Brier scores and the payoffs.

The PLL algorithm is similar to the PBS algorithm, but it uses the Logarithmic Loss as the base score instead of the Brier Score. The per-sample Logarithmic Loss is computed as the negative sum of the element-wise product of the ground-truth labels and the logarithm of the predicted probabilities. Finally, the payoff, which equals \log(1/c) (a negative value) for wrong predictions, is subtracted from the per-sample Logarithmic Loss before taking the mean.

Algorithm 1 Penalizing
1: Input:
  • q: Predictions of size n×c,

  • y: Ground-truth labels of size n×c, where n is the number of samples and c is the number of classes.

  • penalty: A scalar for wrong predictions.

2: Consider the operator == applied to an array to mean applying element-wise == to the array. The result is a condition array consisting of True and False.
3: Consider the function where(condition, x, y), which takes three arrays of the same shape. The result is taken from x where condition is True and from y where it is False.
4: Consider the function sum(x, axis), which computes the sum of the elements of x over axis. The result is an array with the same shape as x, with the specified axis removed.
5: Consider the function mean(x, axis), which computes the average of the elements of x over axis. The result is an array with the same shape as x, with the specified axis removed.
6: Consider the function zeros(x, y), which returns a zero array of shape x×y, and the function ones(x, y), which returns an array of ones of shape x×y. If y is not provided, the result is 1-dimensional.
7: hotvalues ← sum(where(y == 1, q, y), axis=1)
8: container ← q − hotvalues·1_c^T
9: zeros ← zeros(n, c)
10: container ← where(container < 0, zeros, container)
11: container ← sum(container, axis=1)
12: penalty ← ones(n)·penalty
13: payoff ← where(container > 0, penalty, container)
14: return payoff
Algorithm 2 Penalized Brier Score
1: Input: probability measures q of size n×c and ground-truth labels y of size n×c, where n is the number of samples and c is the number of classes.
2: penalty ← (c−1)/c^2
3: payoff ← Penalizing(q, y, penalty)
4: bs ← mean(square(y − q), axis=1) ▷ per-sample Brier Scores
5: return mean(bs + payoff)
Algorithm 3 Penalized Logarithmic Loss
1: Input: probability measures q of size n×c and ground-truth labels y of size n×c, where n is the number of samples and c is the number of classes.
2: penalty ← log(1/c)
3: payoff ← Penalizing(q, y, penalty)
4: ll ← −sum(y·log(q), axis=1) ▷ per-sample Logarithmic Losses
5: return mean(ll − payoff)
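For reference, a vectorized NumPy sketch of Algorithms 1-3 is given below (our own illustrative implementation; q and y are assumed to be n×c arrays, and the function names are ours):

import numpy as np

def penalizing(q, y, penalty):
    # Algorithm 1: per-sample payoff, equal to `penalty` for wrong predictions and 0 otherwise
    hot_values = np.sum(np.where(y == 1, q, y), axis=1, keepdims=True)
    container = q - hot_values                  # positive entries appear only for wrong predictions
    container = np.where(container < 0, 0.0, container)
    container = np.sum(container, axis=1)       # > 0 iff the prediction is wrong
    return np.where(container > 0, penalty, 0.0)

def penalized_brier_score(q, y):
    # Algorithm 2: mean Penalized Brier Score over all samples
    c = q.shape[1]
    payoff = penalizing(q, y, (c - 1) / c**2)   # Eq. (4) penalty scaled by 1/c (mean over classes)
    bs = np.mean((y - q) ** 2, axis=1)          # per-sample Brier Score as a mean over classes
    return float(np.mean(bs + payoff))

def penalized_log_loss(q, y, eps=1e-12):
    # Algorithm 3: mean Penalized Logarithmic Loss over all samples
    c = q.shape[1]
    payoff = penalizing(q, y, np.log(1.0 / c))  # negative payoff for wrong predictions
    ll = -np.sum(y * np.log(np.clip(q, eps, 1.0)), axis=1)
    return float(np.mean(ll - payoff))          # subtracting log(1/c) adds log(c)

# Quick check on the motivating vectors A and B:
q = np.array([[0.33, 0.34, 0.33], [0.51, 0.49, 0.00]])
y = np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
print(penalized_brier_score(q, y), penalized_log_loss(q, y))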

5 Experimental results

5.1 Datasets

Scoring rules are commonly used in the field of spatio-temporal statistics to compare different models [33]. To assess the effectiveness of our proposed method, we have selected several spatio-temporal datasets. Classifying spatio-temporal data is generally challenging, as evidenced by the low accuracy scores achieved on these datasets. For this reason, thorough model evaluation takes on greater importance. Additionally, it is crucial to consider incorrect classes when the classifier’s ability to generalize is limited. While the probabilities assigned to incorrect classes may not be particularly informative if the probability of the true class is above 0.5 for all observations, they become increasingly important when the probability of the true class is below 0.5 for many observations. As a result, any evaluation method must take incorrect classes into account when making decisions.

The proposed model was evaluated using various classification problems, as described below. The datasets used for evaluation included three-axis accelerometer signals obtained from participants’ activities such as running or lying, as described in [34, 35, 36, 37]. While these datasets were intended for research purposes related to activity recognition, they also presented challenges for identifying individuals based on their motion patterns. Also, the dataset presented in [38] was generated for predicting motor failure time using three-axis accelerometer data. The dataset in [39] provided information about power consumption such as temperature, humidity, wind speed, consumption, general diffuse flows, and diffuse flows in three zones of Tetouan City, which is suitable for predicting a zone based on consumption information. The dataset presented in [40] includes hourly air pollutant data such as PM2.5, PM10, SO2, NO2, CO, O3, temperature, pressure, dew point temperature, precipitation, wind direction, wind speed from 12 air-quality monitoring sites, and the proposed model predicted the sites based on their air-quality information. The location of participants was collected using two received signal strength indicator (RSSI) measurements, as described in [41]. While the primary task was indoor localization, the proposed model aimed to identify participants based on their location. Finally, the dataset presented in [42] included three-axis accelerometer and three-axis gyroscope data from ten drivers, and the goal was to identify drivers based on their driving behaviors.

Figure 2: The architecture of the CNN classifier.

5.2 Evaluation Strategy

Since the purpose of this evaluation is to analyze two proposed metrics, the classification model used is a Convolutional Neural Network (CNN). CNNs are widely used for spatial and temporal data modeling tasks [2]. Fig. 2 illustrates the architecture of the proposed CNN model. As neural networks are optimized through iterative training procedures, the proposed metrics can be systematically tested during this process. Model checkpointing saves model weights periodically during training. This allows rolling back to previous checkpoints if overfitting or divergence occurs. Early stopping monitors validation performance and stops training if no improvement is seen for a set number of epochs, preventing overfitting. Implementing these techniques with the proposed metrics provides insights into their behavior at different stages of neural network optimization.

Algorithm 4 h-Block Cross-Validation
1: procedure h-BlockCrossValidation(data, h)
2:     Divide data into h non-overlapping blocks: B = {B_1, B_2, ..., B_h}
3:     Validation size nv ← ⌊0.2h⌋
4:     Test size nt ← ⌊0.3h⌋
5:     for i = 1 to h do
6:         Validation block B_v ← B[i : i+nv]
7:         Test block B_ts ← B[i+nv : i+nv+nt]
8:         Train block B_tr ← B \ (B_v ∪ B_ts)
9:         Segment the blocks using a sliding window
10:         Train and validate the model on B_tr and B_v
11:         Evaluate the model on B_ts
12:     end for
13: end procedure

When dealing with spatio-temporal data, it is essential to employ a cross-validation technique that accounts for the temporal dependencies within the data. This is crucial because the data exhibit interdependence, and employing a random sample selection for training and testing could introduce biases in the results [43]. h-block cross-validation is a type of temporal cross-validation that is often used for temporal data [43]. It helps address the issue of data leakage that can occur in standard k-fold cross-validation when applied to time series. Algorithm 4 shows the pseudocode of h-block CV. The time series data is first divided into h non-overlapping blocks, each representing a continuous time interval. For each block i, a validation set B_v is created by selecting a contiguous subset of nv blocks starting from block i. A test set B_ts is then created by selecting a contiguous subset of nt blocks starting from the end of the validation block. The remaining blocks are assigned to the training set B_tr. Within each block, windowing techniques are applied to segment the data. The model is trained on the segments of the training set B_tr and validated on the segments of the validation set B_v. The trained model is then evaluated on the segments of the test set B_ts. For each iteration, 50% of the subsets are allocated for training, 30% for testing, and 20% for validation. By adopting this approach, the model is trained on a sufficiently large dataset that enables effective generalization to new data, while the testing phase utilizes an independent dataset.
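A minimal Python sketch of this block-splitting scheme is shown below (our own illustration; folds whose validation or test windows would run past the end of the series are simply skipped here, a boundary-handling assumption that Algorithm 4 leaves unspecified):

import numpy as np

def h_block_splits(n_blocks, val_frac=0.2, test_frac=0.3):
    # Yield (train, validation, test) block-index lists for h-block cross-validation
    nv = int(np.floor(val_frac * n_blocks))   # number of validation blocks per fold
    nt = int(np.floor(test_frac * n_blocks))  # number of test blocks per fold
    for i in range(n_blocks):
        val = list(range(i, min(i + nv, n_blocks)))
        test = list(range(i + nv, min(i + nv + nt, n_blocks)))
        train = [b for b in range(n_blocks) if b not in val and b not in test]
        if len(val) == nv and len(test) == nt:  # skip folds truncated by the series boundary
            yield train, val, test

for train, val, test in h_block_splits(10):   # example with 10 temporal blocks
    print(train, val, test)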

The validation set guides hyperparameter tuning, such as selecting the optimal window length and overlap of the sliding window via grid search; Table 1 summarizes the number of classes and window settings of each dataset used in the experiments. The validation set also allows assessing model checkpointing and early stopping criteria by tracking validation performance over training epochs. Once hyperparameters are fixed, the final model performance is reported on the independent test sets from each fold. This rigorous evaluation process provides robust estimates of the proposed metrics. The CNN model is implemented using Python and the TensorFlow library, and the Nadam optimizer is employed to optimize the network parameters. The results of the experiments can be accessed via GitHub.

Table 1: Datasets used in evaluating the proposed method.
Dataset Number of Classes Window Length Window Overlap
[34] 13 00:00:03 75%
[35] 10 00:00:08 75%
[36] 12 00:00:06 75%
[37] 5 00:00:03 75%
[38] 3 00:00:06 75%
[39] 3 00:07:00 75%
[40] 12 00:01:00 75%
[41] 12 00:00:30 75%
[42] 10 00:00:10 75%

5.3 Details of Experiments

To ensure a thorough evaluation of the proposed evaluation metrics, the model training incorporated model checkpointing (CP) and early stopping (ES) techniques, considering both traditional evaluation metrics and the proposed metrics. Checkpoints were saved during training at epochs where either the traditional metrics (such as the Brier Score and Logarithmic Loss) or the proposed metrics demonstrated the best performance on the validation set. ES was implemented as well, terminating training if there was no improvement in either the traditional metrics or the proposed metrics for a specified number of epochs, thus preventing overfitting. To enhance the robustness of the results, this entire process was repeated 100 times to identify the optimal model hyperparameters. Such an approach enabled a fair comparison between the proposed metrics and the sole use of traditional metrics in determining the best model checkpoint. Subsequently, the checkpoint yielding the highest performance, as indicated by each metric, was evaluated on the test set, providing an accurate assessment of each proposed metric’s ability to identify the model with the highest true performance.
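To make the checkpointing and early-stopping procedure concrete, the following Keras-style sketch computes the PBS on the validation set at the end of each epoch and exposes it to the standard callbacks. This is our own illustrative wiring rather than the exact training script used in the experiments; the toy data and model below are placeholders for the real spatio-temporal datasets and CNN.

import numpy as np
import tensorflow as tf

def penalized_brier_score(q, y):
    # Mean PBS as in Algorithm 2 (per-sample Brier Score as a mean over classes)
    c = q.shape[1]
    wrong = np.argmax(q, axis=1) != np.argmax(y, axis=1)
    payoff = np.where(wrong, (c - 1) / c**2, 0.0)
    return float(np.mean(np.mean((y - q) ** 2, axis=1) + payoff))

class PBSMonitor(tf.keras.callbacks.Callback):
    # Adds 'val_pbs' to the epoch logs so ModelCheckpoint/EarlyStopping can monitor it
    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val, self.y_val = x_val, y_val

    def on_epoch_end(self, epoch, logs=None):
        q = self.model.predict(self.x_val, verbose=0)
        if logs is not None:
            logs["val_pbs"] = penalized_brier_score(q, self.y_val)

# toy stand-ins for the real data and CNN (illustration only)
n, d, c = 200, 16, 3
x_train = np.random.randn(n, d).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(c, size=n), c)
x_val = np.random.randn(50, d).astype("float32")
y_val = tf.keras.utils.to_categorical(np.random.randint(c, size=50), c)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(d,)),
    tf.keras.layers.Dense(c, activation="softmax"),
])
model.compile(optimizer="nadam", loss="categorical_crossentropy")

callbacks = [
    PBSMonitor(x_val, y_val),  # must come before the callbacks that monitor 'val_pbs'
    tf.keras.callbacks.ModelCheckpoint("best_model.keras", monitor="val_pbs",
                                       mode="min", save_best_only=True),
    tf.keras.callbacks.EarlyStopping(monitor="val_pbs", mode="min", patience=10),
]
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=50, callbacks=callbacks, verbose=0)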

The F1 score and accuracy are widely recognized as prominent evaluation metrics for classification tasks. However, accuracy alone is not a suitable metric for evaluating model performance on unbalanced datasets [44]. As most real-world datasets are either heavily or moderately imbalanced, accuracy can be misleading when the number of samples differs greatly between classes [45]. For this reason, the macro-averaged F1 score is also reported. The F1 score considers both precision and recall, providing a better sense of classification effectiveness on unbalanced problems [1]. However, it is important to note that this metric solely serves as a proper scoring rule and does not encompass the measurement of forecast uncertainty and overconfidence [11]. In order to address this limitation, it is proposed that strictly proper scoring rules be employed, as they possess the capability to gauge uncertainty. By exhibiting behavior similar to the F1 score while also measuring uncertainty, the proposed scoring rules can effectively fulfill both reliability and accuracy requirements. Therefore, incorporating such scoring rules in the evaluation process provides a comprehensive assessment of classification models, encompassing not only their predictive accuracy but also their ability to quantify uncertainty.

Figure 3: An illustrative example of validation statistics, plotted over training epochs, for each dataset; panels (a)–(i) correspond to the datasets of [42], [37], [36], [38], [39], [40], [41], [34], and [35], respectively. The orange lines show the F1 score at each epoch. The red lines show the flipped Brier Score (BS×-1). The green lines show the flipped Penalized Brier Score (PBS×-1). The symbol Cor denotes the Pearson correlation between the F1 score and the corresponding scoring rule.

5.4 Discussion

Fig. 3 presents validation statistics over training epochs for each dataset, including the macro-averaged F1 score, the Brier Score, and the proposed Penalized Brier Score. This provides an illustrative example of how the different scoring rules change during model optimization. The Pearson correlation (Cor) between each scoring rule and the F1 score is also shown. When plotting and comparing the different validation metric trends together, it is important to note that they are oriented in different directions. Specifically, the F1 score increases as the model improves, while the PBS and PLL decrease as performance improves. Therefore, to enable the results to be interpreted more easily, the graphed PBS values have been multiplied by -1 to match the positive F1 score trend.

Importantly, the figure also marks the optimal point on each trend with a data point symbol. This optimal point corresponds to the epoch at which the metric reached its best validation value. By pinpointing these peaks, we can see exactly where each scoring rule reached its best performance relative to the others. Notably, the PBS trends exhibit optimal points that are closer to the F1 score points than those of the standard Brier Score. This observation suggests that, in these specific examples, the PBS was more suitable for model checkpointing. Additionally, the slopes of the PBS trends display greater similarity to the F1 score trends than the slopes of the standard Brier Score. Moreover, the correlation values confirm that the PBS consistently maintains a stronger relationship with the F1 score than the standard Brier Score. Therefore, given these observations, implementing early stopping based on the PBS rather than the Brier Score has the potential to yield models with improved test performance.

5.5 Benchmark

The previous discussion examining superior scoring rule trends during training provided initial insights into how the proposed metrics may help optimize model performance. First, the optimal points of the PBS trends were closer to those of the F1 score trends compared to the standard Brier Score. Additionally, the correlation between the PBS and the F1 score was consistently stronger. Given these observations, it was expected that employing the PBS for model selection could lead to improved performance. To decisively validate their effectiveness, a rigorous quantitative evaluation was carried out using multiple experiments.

The first stage involved assessing the correlation between the metrics and the F1 score on validation data using k-fold cross-validation. To this end, the performance of models selected with different scoring rules using both model checkpointing (CP) and early stopping (ES) is evaluated. As shown in Table 2, Pearson correlation coefficients were calculated between each of the scoring rules and the macro-averaged F1 scores across datasets and folds. In this table, the symbol Cr_{x} represents the correlation between the F1 score and the scoring rule x. Within the -1 to 1 range, a strong positive correlation indicates close agreement between trends. The results demonstrate that the PBS and PLL trends exhibited the highest degree of similarity to the F1 score variations. With a correlation exceeding the other scores, PBS and PLL can be reliably used to track changes in model performance.

Next, the ability of the proposed superior scoring rules to select high-scoring models was evaluated through k-fold cross-validation. Table 3 compares the macro-averaged F1 scores of the optimized classifiers chosen by each metric on test data. These F1 scores were determined through ES or CP guided by the corresponding scoring rule. The symbol F1_{x} denotes the F1 score achieved when employing the scoring rule x. As depicted in the table, it is evident that the F1 score consistently performs better when the model is selected using each of the proposed superior scoring rules. This observation emphasizes the effectiveness of the superior scoring rules in identifying models that yield higher F1 scores. Furthermore, consistently better scores emerged when PBS guided selection rather than PLL.

Consequently, these findings underscore the importance of employing appropriate superior scoring rules to circumvent shortcomings of F1 score alone in model selection, ultimately enabling improved classification capability and more trustworthy predictions. Collectively, the correlational and benchmark results provide compelling quantitative evidence that proposed penalties within strictly proper scoring rules augment their capacity to reflect true performance changes, in addition to facilitating the identification of models with stronger predictive power for new samples. Therefore, by more faithfully reflecting F1 score behavior, the proposed criteria enhance optimal model evaluation, selection, and classification for challenging spatio-temporal applications.

Table 2: The table presents a comparison based on the Pearson correlation between F1 scores and the scoring rules using validation data. The abbreviations ES and CP represent Early Stopping and Model Checkpointing, respectively. The symbol Cr_{x} denotes the correlation between the F1 score and the scoring rule x.
Data ES CP Cr_{BS} Cr_{PBS} Δ Cr_{LL} Cr_{PLL} Δ
0.957 (±0.02) 0.969 (±0.01) 0.012 0.837 (±0.08) 0.900 (±0.06) 0.063
[34] 0.964 (±0.01) 0.980 (±0.01) 0.016 0.526 (±0.47) 0.640 (±0.30) 0.113
0.963 (±0.04) 0.983 (±0.02) 0.020 0.764 (±0.17) 0.845 (±0.11) 0.081
[35] 0.699 (±0.46) 0.926 (±0.09) 0.228 0.520 (±0.53) 0.589 (±0.45) 0.069
0.721 (±0.17) 0.740 (±0.22) 0.019 0.731 (±0.15) 0.745 (±0.14) 0.013
[36] 0.607 (±0.49) 0.748 (±0.45) 0.141 0.306 (±0.53) 0.508 (±0.45) 0.202
0.690 (±0.73) 0.748 (±0.45) 0.058 0.567 (±0.53) 0.619 (±0.49) 0.052
[37] 0.667 (±1.00) 0.674 (±0.72) 0.007 0.317 (±0.65) 0.378 (±0.65) 0.061
0.717 (±0.61) 0.728 (±0.61) 0.010 0.275 (±0.70) 0.292 (±0.71) 0.016
[38] 0.739 (±0.63) 0.740 (±0.63) 0.001 0.384 (±0.60) 0.425 (±0.55) 0.041
0.965 (±0.08) 0.987 (±0.03) 0.022 0.592 (±0.16) 0.664 (±0.13) 0.072
[39] 0.995 (±0.01) 0.997 (±0.01) 0.002 0.419 (±0.65) 0.446 (±0.66) 0.026
0.680 (±0.31) 0.899 (±0.05) 0.219 0.595 (±0.48) 0.777 (±0.25) 0.182
[40] 0.665 (±0.30) 0.813 (±0.13) 0.148 0.378 (±0.71) 0.802 (±0.14) 0.424
0.572 (±0.48) 0.597 (±0.27) 0.025 0.694 (±0.23) 0.724 (±0.21) 0.031
[41] 0.444 (±0.48) 0.704 (±0.21) 0.260 0.434 (±0.61) 0.498 (±0.53) 0.064
0.874 (±0.14) 0.924 (±0.07) 0.050 0.450 (±0.38) 0.553 (±0.32) 0.103
[42] 0.916 (±0.12) 0.953 (±0.06) 0.037 0.391 (±0.52) 0.629 (±0.32) 0.239
Table 3: This table provides a comparative analysis of scoring rules using F1 scores obtained from test data, which were determined through model selection based on a scoring rule. The abbreviations ES and CP represent Early Stopping and Model Checkpointing, respectively. Additionally, the symbol F1_{x} represents the F1 score achieved through the utilization of the scoring rule x.
Data ES CP F1_{BS} F1_{PBS} Δ F1_{LL} F1_{PLL} Δ
45.00 (±0.05) 51.65 (±0.07) 6.65 64.76 (±0.09) 65.85 (±0.08) 1.10
[34] 55.47 (±0.05) 60.03 (±0.07) 4.56 70.00 (±0.08) 71.82 (±0.07) 1.83
53.01 (±0.06) 57.60 (±0.04) 4.59 53.90 (±0.08) 55.48 (±0.05) 1.59
[35] 55.37 (±0.04) 58.03 (±0.05) 2.66 55.74 (±0.06) 57.63 (±0.07) 1.89
28.24 (±0.07) 32.48 (±0.09) 4.24 30.08 (±0.04) 30.62 (±0.04) 0.53
[36] 32.30 (±0.08) 34.28 (±0.09) 1.99 31.65 (±0.08) 31.80 (±0.05) 0.15
58.27 (±0.20) 59.14 (±0.19) 0.87 56.79 (±0.14) 59.55 (±0.13) 2.76
[37] 66.51 (±0.06) 67.49 (±0.05) 0.97 68.21 (±0.11) 69.78 (±0.08) 1.57
43.66 (±0.34) 50.81 (±0.32) 7.14 51.12 (±0.32) 59.69 (±0.26) 8.57
[38] 48.00 (±0.09) 54.83 (±0.16) 6.83 64.12 (±0.18) 66.82 (±0.22) 2.71
80.93 (±0.08) 83.20 (±0.07) 2.27 77.87 (±0.06) 80.13 (±0.08) 2.26
[39] 82.08 (±0.08) 83.88 (±0.07) 1.80 80.22 (±0.06) 83.42 (±0.06) 3.20
50.13 (±0.09) 53.51 (±0.07) 3.38 55.59 (±0.08) 56.33 (±0.08) 0.74
[40] 52.04 (±0.11) 53.63 (±0.10) 1.59 51.73 (±0.07) 52.39 (±0.07) 0.66
27.09 (±0.02) 27.77 (±0.03) 0.69 23.10 (±0.04) 23.75 (±0.02) 0.65
[41] 26.51 (±0.05) 29.37 (±0.04) 2.86 26.77 (±0.04) 27.39 (±0.05) 0.62
66.50 (±0.03) 66.53 (±0.03) 0.03 65.39 (±0.06) 65.81 (±0.05) 0.43
[42] 67.85 (±0.03) 68.36 (±0.02) 0.51 66.64 (±0.04) 67.07 (±0.05) 0.43

6 Conclusion

This study introduced novel superior scoring rules called Penalized Brier Score (PBS) and Penalized Logarithmic Loss (PLL) for evaluating probabilistic classification models. PBS and PLL modify the traditional Brier Score and Logarithmic Loss by integrating a penalty term for misclassified observations. As demonstrated formally, PBS and PLL satisfy the properties of strictly proper scoring rules while also consistently assigning superior scores to correctly classified observations. The experimental evaluation highlighted the benefits of using PBS and PLL for model checkpointing and early stopping. PBS and PLL demonstrated a higher negative correlation with the F1 score compared to traditional Brier Score and Logarithmic Loss during model training. Consequently, the proposed scoring functions were more effective in identifying optimal model checkpoints and determining early stopping points, leading to improved F1 scores. The test results substantiated that model selection based on PBS and PLL yielded superior F1 scores compared to traditional metrics. In conclusion, PBS and PLL enable more accurate model evaluation by encapsulating both proper scoring rule principles and preferential treatment of correct classifications. The proposed metrics address a critical gap between probabilistic uncertainty assessment and deterministic accuracy maximization. By accounting for uncertainty and the value of true classifications, PBS and PLL can enhance model selection, checkpointing, and early stopping in classification tasks requiring reliable predictive uncertainty. Further research can explore PBS and PLL with different model architectures and classification problems. Also, various penalties can be investigated to obtain better performance.

Appendix

7 Proof of Theorem 1

Let x be a member of the set \psi with x_{i}=\alpha, where i is the index of the true class, and let y be the ground-truth label vector. Due to the single-label classification property, Eq. (2) can be expressed as:

S_{LL}(x,i)=-\sum_{k=1}^{c}y_{k}\log(x_{k})=-\log(x_{i})    (6)

Now, let q\in\xi be another prediction. There are three possible cases for the true class probability q_{i}:

  • q_{i}=\alpha. This means that S_{LL}(x,i)=S_{LL}(q,i).

  • q_{i}=\alpha-\beta, where \beta>0. Since \alpha>\alpha-\beta and -\log(q_{i}) is monotonically decreasing and non-negative on (0,1], it follows that S_{LL}(q,i)>S_{LL}(x,i).

  • q_{i}=\alpha+\beta, where \beta>0. Since \alpha+\beta>\alpha and -\log(q_{i}) is monotonically decreasing and non-negative on (0,1], it follows that S_{LL}(x,i)>S_{LL}(q,i).

Therefore, when q_{i}\geq x_{i}, the Logarithmic Loss does not assign a strictly worse score to the incorrect prediction q than to the correct prediction x. Such a q\in\xi exists whenever \alpha<\frac{1}{2}, which is attainable for c>2 since \alpha can be arbitrarily close to \frac{1}{c}. Hence, the Logarithmic Loss cannot be considered superior.
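Violations of this kind can also be observed numerically. The following short random-search sketch (ours, for illustration; it draws prediction vectors from normalized exponential weights rather than reproducing the simulation set-up of Appendix 8) counts how often the Brier Score and Logarithmic Loss score a misclassified prediction at least as well as a correctly classified one:

import numpy as np

rng = np.random.default_rng(0)

def random_simplex(c):
    # Draw a random probability vector from the c-simplex
    v = rng.exponential(size=c)
    return v / v.sum()

def count_violations(c=3, trials=100_000):
    # Count pairs (p in psi, q in xi) where BS / LL score the incorrect prediction at least as well
    bs_viol = ll_viol = pairs = 0
    for _ in range(trials):
        y = np.eye(c)[rng.integers(c)]             # one-hot ground truth
        p, q = random_simplex(c), random_simplex(c)
        if np.argmax(p) != np.argmax(y) or np.argmax(q) == np.argmax(y):
            continue                                # keep only p in psi and q in xi
        pairs += 1
        bs_viol += np.sum((q - y) ** 2) <= np.sum((p - y) ** 2)
        ll_viol += -np.log(q[np.argmax(y)]) <= -np.log(p[np.argmax(y)])
    return pairs, bs_viol, ll_viol

print(count_violations())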

8 Proof of Theorem 2

Let x\in\psi such that x_{i}=\alpha, where i is the index of the true class, and let y be the ground-truth label vector. Due to the single-label classification property, Eq. (1) can be written as:

S_{BS}(x,i)=\sum_{k=1,k\neq i}^{c}x_{k}^{2}+(1-x_{i})^{2}    (7)

In the following, the term "hot value" is used to refer to the element of a probability vector that corresponds to the true class, whereas the other elements are referred to as "non-hot values". Now, let q\in\xi. There are three possible cases for the true class probability q_{i}:

8.1 q_{i}=\alpha

It is possible to demonstrate that the variance of the non-hot part has the greatest impact on the score value of the Brier Score. The Brier Score for q is given by the following expression:

S_{BS}(q,i)=\underbrace{\sum_{k=1,k\neq i}^{c}(q_{k})^{2}}_{\text{non-hot part}}+\underbrace{(1-q_{i})^{2}}_{\text{hot part}}    (8)

where the index i represents the true class, i.e., i=\arg\max y. The non-hot part can be expanded in the following manner:

$$\begin{aligned}
\sum_{k=1,k\neq i}^{c} q_k^2 &= \sum_{k=1,k\neq i}^{c}\left(q_k-\tilde{q}+\tilde{q}\right)^2 && (9)\\
&= \sum_{k=1,k\neq i}^{c}\left((q_k-\tilde{q})^2+2\tilde{q}(q_k-\tilde{q})+\tilde{q}^2\right) && (10)\\
&= \sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})^2+2\tilde{q}\sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})+(c-1)\tilde{q}^2 && (11)\\
&= \sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})^2+2\tilde{q}\Big(\sum_{k=1,k\neq i}^{c} q_k-(c-1)\tilde{q}\Big)+(c-1)\tilde{q}^2 && (12)\\
&= \sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})^2+2\tilde{q}\sum_{k=1,k\neq i}^{c} q_k-2(c-1)\tilde{q}^2+(c-1)\tilde{q}^2 && (13)\\
&= \sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})^2+2(c-1)\tilde{q}^2-2(c-1)\tilde{q}^2+(c-1)\tilde{q}^2 && (14)\\
&= \sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})^2+(c-1)\tilde{q}^2 && (15)
\end{aligned}$$

Since $\tilde{q}$ is the average of the non-hot part, Eq. (15) can be rewritten as:

$$\sum_{k=1,k\neq i}^{c} q_k^2=(c-1)\left(\underbrace{\frac{1}{c-1}\sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})^2}_{\text{non-hot part variance}}+\tilde{q}^2\right) \qquad (16)$$

It is evident that:

$$\sum_{k=1,k\neq i}^{c} q_k=1-q_i\;\Rightarrow\;\tilde{q}=\frac{1-q_i}{c-1} \qquad (17)$$

Consequently, it follows that:

$$S_{BS}(q,i)=(c-1)\left(\frac{1}{c-1}\sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})^2+\tilde{q}^2\right)+(1-q_i)^2 \qquad (18)$$

where $i=\arg\max y$. Therefore, based on Eq. (16), the Brier Score in Eq. (18) can be minimized by reducing the variance of the non-hot part and maximizing $q_i$.
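As a quick sanity check of this decomposition, the short Python sketch below compares the direct form of Eq. (7) with the variance-based form of Eq. (18) for an illustrative probability vector; the vector, the number of classes, and the true-class index are arbitrary choices made for this example.

```python
import numpy as np

# Sanity check of the variance decomposition in Eqs. (16)-(18) for an
# illustrative probability vector q with true class i = 0 (arbitrary choices).
q = np.array([0.5, 0.3, 0.15, 0.05])
i = 0
c = q.size
non_hot = np.delete(q, i)                       # the c-1 non-hot values
q_bar = (1.0 - q[i]) / (c - 1)                  # Eq. (17): mean of the non-hot part

direct = np.sum(non_hot ** 2) + (1.0 - q[i]) ** 2                        # Eq. (7)
decomposed = (c - 1) * (np.mean((non_hot - q_bar) ** 2) + q_bar ** 2) \
             + (1.0 - q[i]) ** 2                                         # Eq. (18)

print(direct, decomposed)   # both equal 0.365
```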

Furthermore, $q$ may have $t$ non-hot values that exceed $\alpha$, and the sum of these $t$ non-hot values equals $t\alpha+\delta$. The following expressions formulate the sum of the remaining $c-1-t$ non-hot values, which are less than $\alpha$:

$$\begin{aligned}
&\sum_{k=1,k\neq i}^{c} q_k+\alpha=1 && (19)\\
\Rightarrow\;&\sum_{k=1,k\neq i}^{c} q_k=1-\alpha && (20)\\
\Rightarrow\;&\sum_{k=1,k\neq i}^{c} q_k-(t\alpha+\delta)+(t\alpha+\delta)=1-\alpha && (21)\\
\Rightarrow\;&\sum_{k=1,k\neq i}^{c} q_k-(t\alpha+\delta)=1-(t+1)\alpha-\delta && (22)
\end{aligned}$$

This implies that the sum of the non-hot values equals $(t\alpha+\delta)+(1-(t+1)\alpha-\delta)$. As $\delta$ increases and approaches $1$, the difference between $t\alpha+\delta$ and $1-(t+1)\alpha-\delta$ also increases, leading to an increase in the variance of the non-hot part in Eq. (18). Therefore, it can be inferred that the variance of the non-hot part of $q$ is greater than that of $x$. Let $\tilde{x}$ represent the average of the non-hot part of $x$; then:

$$\left(\frac{1}{c-1}\sum_{k=1,k\neq i}^{c}(x_k-\tilde{x})^2\right)<\left(\frac{1}{c-1}\sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})^2\right) \qquad (23)$$

Now, since $q_i=x_i=\alpha$ and $\tilde{x}=\tilde{q}=\frac{1-\alpha}{c-1}$, we can proceed as follows:

$$\begin{aligned}
S_{BS}(x,i)-S_{BS}(q,i)=\;&\left(\sum_{k=1,k\neq i}^{c}(x_k-\tilde{x})^2+(c-1)\tilde{x}^2+(1-x_i)^2\right)- && (24)\\
&\left(\sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})^2+(c-1)\tilde{q}^2+(1-q_i)^2\right) && (25)\\
=\;&\sum_{k=1,k\neq i}^{c}(x_k-\tilde{x})^2-\sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})^2 && (26)\\
=\;&(c-1)\left(\frac{1}{c-1}\sum_{k=1,k\neq i}^{c}(x_k-\tilde{x})^2-\frac{1}{c-1}\sum_{k=1,k\neq i}^{c}(q_k-\tilde{q})^2\right)<0 && (27)
\end{aligned}$$

Thus, the condition $S_{BS}(q,i)>S_{BS}(x,i)$ still holds even if a non-hot value of $q$ is only slightly greater than $\alpha$, so superiority is not violated in this case.
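As a concrete illustration with $c=3$ and true class $i=1$, the correctly classified $x=(0.40,0.35,0.25)$ gives $S_{BS}(x,i)=0.35^2+0.25^2+(1-0.40)^2=0.545$, whereas the misclassified $q=(0.40,0.41,0.19)$, whose largest non-hot value only slightly exceeds $\alpha=0.40$, gives $S_{BS}(q,i)=0.41^2+0.19^2+(1-0.40)^2\approx 0.564>0.545$.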

Figure 4: Cumulative percentage of instances where the condition $S_{BS}(q,i)>S_{BS}(x,i)$ is met for the case $q_i=\alpha-\beta$.

8.2 $q_i=\alpha-\beta$

To verify whether the superiority condition holds, a Monte Carlo simulation can be conducted, which involves comparing $S_{BS}(q,i)$ to $S_{BS}(x,i)$ for numerous randomly generated values of $x\in\psi$ and $q\in\xi$. The variables in this simulation include $x$, $q$, $\alpha$, $\beta$, and $c$, where $\alpha$ denotes the hot value of $x$, $\beta$ represents the difference between the hot value of $q$ and $\alpha$, and $c$ represents the number of classes. All of these variables are randomly generated from a normal distribution.

For each comparison, a random $x$ is selected from $\psi$ such that $x_i=\alpha$ and $x\in\mathbb{R}^c$, where $\alpha$ represents the hot value and $c$ represents the number of classes. Next, a random $q$ is chosen from $\xi$ such that $q_i=\alpha-\beta$, where $\beta$ is a random positive value. The pair $(x,q)$ is then evaluated to determine whether $S_{BS}(q,i)>S_{BS}(x,i)$ holds.
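The following Python sketch is one possible implementation of this procedure. It is a minimal illustration, not the original experimental code: for simplicity it draws candidate probability vectors from a Dirichlet distribution rather than the normal-distribution scheme described above, it fixes the true class at index 0, and the function names are illustrative.

```python
import numpy as np

def brier_score(p, i):
    """Brier Score of probability vector p against a one-hot label at index i."""
    y = np.zeros_like(p)
    y[i] = 1.0
    return np.sum((p - y) ** 2)

def sample_pair(c, rng, i=0):
    """Draw a correctly classified x (x in psi) and a misclassified q with q_i < x_i."""
    while True:
        x = rng.dirichlet(np.ones(c))
        if np.argmax(x) == i:                   # x assigns its largest probability to the true class
            break
    while True:
        q = rng.dirichlet(np.ones(c))
        if np.argmax(q) != i and q[i] < x[i]:   # q is misclassified and q_i = alpha - beta
            return x, q

rng = np.random.default_rng(0)
n_trials, c, hits = 100_000, 5, 0
for _ in range(n_trials):
    x, q = sample_pair(c, rng)
    hits += brier_score(q, 0) > brier_score(x, 0)
print(f"S_BS(q,i) > S_BS(x,i) held in {100.0 * hits / n_trials:.3f}% of comparisons")
```

The analogous simulation for the case $q_i=\alpha+\beta$ in the next subsection is obtained by flipping the acceptance condition for $q$ to $q_i>x_i$ while still rejecting any $q$ whose largest entry lies at the true class.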

Figure 4 presents the results of the Monte Carlo simulation for varying numbers of comparisons. It displays the cumulative percentage of comparisons in which the condition $S_{BS}(q,i)>S_{BS}(x,i)$ holds, demonstrating the convergence of the Monte Carlo method and confirming that the condition is satisfied in approximately 99.996% of comparisons. When $q_i<\alpha$, the Brier Score typically assigns a higher score to the observation $q$; however, this is not always the case. Therefore, the Brier Score cannot be regarded as superior.

Figure 5: Cumulative percentage of instances where the condition $S_{BS}(q,i)>S_{BS}(x,i)$ is met for the case $q_i=\alpha+\beta$.

8.3 $q_i=\alpha+\beta$

To validate the superiority condition, another Monte Carlo simulation can be conducted, comparing $S_{BS}(q,i)$ to $S_{BS}(x,i)$ for a large number of randomly generated values of $x\in\psi$ and $q\in\xi$. The simulation includes the variables $x$, $q$, $\alpha$, $\beta$, and $c$, where $\alpha$ represents the hot value of $x$, $\beta$ represents the difference between the hot value of $q$ and $\alpha$, and $c$ represents the number of classes. All of these variables are randomly generated from a normal distribution.

In each comparison, a random $x$ is chosen from $\psi$ such that $x_i=\alpha$ and $x\in\mathbb{R}^c$, where $\alpha$ denotes the hot value and $c$ represents the number of classes. Then, a random $q$ is selected from $\xi$ such that $q_i=\alpha+\beta$, where $\beta$ is a random positive value. The pair $(x,q)$ is evaluated to determine whether $S_{BS}(q,i)>S_{BS}(x,i)$ is true.

Figure 5 depicts the outcomes of the Monte Carlo simulation for different numbers of comparisons, indicating the convergence of the Monte Carlo method. The figure illustrates the cumulative percentage of comparisons in which the condition $S_{BS}(q,i)>S_{BS}(x,i)$ is satisfied. The results confirm that the condition holds in roughly 63% of comparisons. When $q_i>\alpha$, it is common for the Brier Score to assign a higher score to the observation $x$; however, this is not always the case. Hence, the Brier Score cannot be considered superior.

9 Proof of Theorem 3

Let $x$ be an arbitrary member of the set $\psi$ and $y$ be the one-hot ground-truth label. Let $x_i=\alpha$, where $i$ is the index of the true class. To maximize $S_{BS}(x,i)$, the following relations are helpful:

$$\sum_{k=1}^{c} x_k=1\;\Rightarrow\;\sum_{k=1,k\neq i}^{c} x_k=1-\alpha \qquad (28)$$
$$\Rightarrow\;\sum_{k=1,k\neq i}^{c} x_k^2\leq(1-\alpha)^2 \qquad (29)$$

Since $(1-\alpha)^2$ is the primary component of the Brier Score, the maximization of $(1-\alpha)^2$ (or the minimization of $\alpha$) is essential to obtain the highest possible value of $S_{BS}(x,i)$. The minimum value of $\alpha$ is $\frac{1}{c}+\epsilon$, as any value of $\alpha$ below this threshold would make $x$ unsuitable for inclusion in the set $\psi$. As a result, $\alpha$ equals $\frac{1}{c}+\epsilon$. On the other hand, if $x_i=\frac{1}{c}+\epsilon$, the remaining elements of the vector $x$ cannot exceed $\frac{1}{c}+\epsilon$. To simplify the analysis, assuming $\epsilon=0$, the elements of $x$ become $x_k=\frac{1}{c}$ for all $k\in\{1,\cdots,c\}$. Therefore:

$$\begin{aligned}
\max S_{BS}(x,i)&=\sum_{k=1,k\neq i}^{c} x_k^2+(1-x_i)^2 && (30)\\
&=\sum_{k=1,k\neq i}^{c}\frac{1}{c^2}+\left(1-\frac{1}{c}\right)^2 && (31)\\
&=(c-1)\frac{1}{c^2}+\left(1-\frac{1}{c}\right)^2 && (32)\\
&=\frac{1}{c}-\frac{1}{c^2}+1-\frac{2}{c}+\frac{1}{c^2} && (33)\\
&=1-\frac{1}{c}=\frac{c-1}{c} && (34)
\end{aligned}$$

10 Proof of Theorem 4

Let $x$ be an arbitrary member of the set $\psi$, and let $y$ be the one-hot ground-truth label vector such that $x_i=\alpha$, where $i$ is the index of the true class. According to Eq. (6), and since $-\log$ is a decreasing and positive function on $(0,1)$, minimizing $\alpha$ is necessary to maximize $S_{LL}(x,i)$. The minimum value of $\alpha$ is $\frac{1}{c}+\epsilon$, as any value of $\alpha$ below this threshold would render $x$ unsuitable for inclusion in the set $\psi$. To simplify the analysis, we assume $\epsilon=0$, which yields:

$$\max S_{LL}(x,i)=-\log\left(\frac{1}{c}\right)=\log(c) \qquad (35)$$
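As a quick numerical check of the worst-case values derived in Theorems 3 and 4, the uniform prediction (the $\epsilon\to 0$ limit) attains both bounds; the Python snippet below is a small illustrative sketch.

```python
import numpy as np

# Worst case over the correctly classified set: the uniform prediction
# (epsilon -> 0) attains the maxima of Theorems 3 and 4.
c = 4
i = 0                                   # true class index
x = np.full(c, 1.0 / c)                 # uniform probability vector

brier = np.sum(np.delete(x, i) ** 2) + (1.0 - x[i]) ** 2
log_loss = -np.log(x[i])

print(brier, (c - 1) / c)               # 0.75  0.75
print(log_loss, np.log(c))              # 1.3862...  1.3862...
```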

11 Proof of Theorem 5

Since $S_{BS}$ and $S_{LL}$ are strictly proper, we have:

$$S_{BS}(P,Q)>S_{BS}(Q,Q) \qquad (36)$$
$$S_{LL}(P,Q)>S_{LL}(Q,Q) \qquad (37)$$

for $Q\neq P$. Furthermore, it is clear that:

$$S_{BS}(Q,Q)=S_{PBS}(Q,Q) \qquad (38)$$
$$S_{LL}(Q,Q)=S_{PLL}(Q,Q) \qquad (39)$$

and also:

$$S_{PBS}(P,Q)\geq S_{BS}(P,Q) \qquad (40)$$
$$S_{PLL}(P,Q)\geq S_{LL}(P,Q) \qquad (41)$$

Therefore:

$$S_{PBS}(P,Q)\geq S_{BS}(P,Q)>S_{BS}(Q,Q)=S_{PBS}(Q,Q) \qquad (42)$$
$$S_{PLL}(P,Q)\geq S_{LL}(P,Q)>S_{LL}(Q,Q)=S_{PLL}(Q,Q) \qquad (43)$$

As a result, $S_{PBS}(P,Q)>S_{PBS}(Q,Q)$ and $S_{PLL}(P,Q)>S_{PLL}(Q,Q)$ for all $P\neq Q$, so $S_{PBS}$ and $S_{PLL}$ are strictly proper.
