Superior Scoring Rules for Probabilistic Evaluation of Single-Label Multi-Class Classification Tasks
Abstract
This study introduces novel superior scoring rules called Penalized Brier Score (PBS) and Penalized Logarithmic Loss (PLL) to improve model evaluation for probabilistic classification. Traditional scoring rules like Brier Score and Logarithmic Loss sometimes assign better scores to misclassifications in comparison with correct classifications. This discrepancy from the actual preference for rewarding correct classifications can lead to suboptimal model selection. By integrating penalties for misclassifications, PBS and PLL modify traditional proper scoring rules to consistently assign better scores to correct predictions. Formal proofs demonstrate that PBS and PLL satisfy strictly proper scoring rule properties while also preferentially rewarding accurate classifications. Experiments showcase the benefits of using PBS and PLL for model selection, model checkpointing, and early stopping. PBS exhibits a higher negative correlation with the F1 score compared to the Brier Score during training. Thus, PBS more effectively identifies optimal checkpoints and early stopping points, leading to improved F1 scores. Comparative analysis verifies models selected by PBS and PLL achieve superior F1 scores. Therefore, PBS and PLL address the gap between uncertainty quantification and accuracy maximization by encapsulating both proper scoring principles and explicit preference for true classifications. The proposed metrics can enhance model evaluation and selection for reliable probabilistic classification.
Keywords: Strictly Proper Scoring Rules · Evaluation Metric · Probabilistic Evaluation · Probabilistic Classification · Model Selection
1 Introduction
Evaluation metrics play a critical role in model selection, feature selection, parameter tuning, and regularization when evaluating the performance of a classification model [1]. In model selection, evaluation metrics such as accuracy, precision, recall, and F-measures are used to compare the performance of different models and select the best model for a specific task [2]. Similarly, in feature selection, evaluation metrics are used to identify the most informative features for a specific task. By comparing the performance of a classification model with different feature subsets, irrelevant or redundant features can be eliminated, improving the model’s efficiency and effectiveness [3]. Model checkpointing, also known as snapshotting, involves saving the state of a model during the training process at regular intervals. By evaluating the performance of each snapshot on validation metrics, the best-performing snapshot can be selected for the final model. This helps prevent overfitting by choosing a snapshot that exhibits good generalization ability on the validation set, improving the model’s ability to generalize to unseen data [4, 5, 6, 7]. In regularization, evaluation metrics are used to balance the model’s complexity and its ability to generalize to new data. By evaluating the performance of a classification model with different regularization strengths, overfitting can be prevented, ensuring that the model performs well on unseen data [8, 9].
In classification tasks, the evaluation of the model often relies on simple accuracy measures and related metrics derived from the confusion matrix, which only consider whether a prediction specifies the true class [10]. However, this approach overlooks the crucial aspect of probabilistic uncertainty quantification [11]. Ensuring the accuracy of predictive uncertainty is critical for classification models used in safety-critical applications. To be considered reliable and calibrated in statistical terms, a classification model must exhibit an honest expression of its predictive distribution [12]. In other words, the model should not only make predictions but also convey the level of uncertainty associated with those predictions [13]. For example, consider a binary classification problem where the goal is to predict a certain disease. A classifier that outputs a single class prediction of disease or no disease may achieve a high accuracy rate in terms of correct predictions. However, it fails to capture and communicate the inherent uncertainty surrounding each prediction. This discrepancy between the use of simple accuracy measures in classification and the need for probabilistic assessments has motivated researchers to explore the use of scoring rules [14].
Scoring rules are used to evaluate probabilistic predictions or forecasts in decision theory [15]. When assessing probabilistic forecasts, it is crucial to use a scoring method that accurately reflects the reported probabilities and their alignment with actual outcomes [16]. One approach to achieving this is through the use of strictly proper scoring rules. These rules assign optimal scores only when the forecasts match the true probabilities [17]. The Brier Score, initially introduced in meteorology, is an early example of such a rule [18]. It was developed to discourage biased reporting and promote honest assessments of uncertainty. As a variant of the quadratic scoring rule, it remains widely utilized [16]. Another pioneering effort resulted in the logarithmic rule, which is notable for its theoretical connection to entropy [19]. Over time, numerous studies have examined commonly used scoring rules and their desirable statistical properties from different perspectives. Some analyses have focused on aspects such as forecast calibration and consistency incentives, while others have explored connections to information theory concepts [11, 20]. As probabilistic prediction continues to be a significant area of research, ongoing investigations into existing rule properties and potential novel approaches will contribute to further understanding and advancement of scoring methodologies. In this regard, this paper proposes a novel attribute for strictly proper scoring rules.
1.1 Motivation
The classification decision is commonly partitioned into the categories of true positive, true negative, false positive, and false negative. Let us examine an illustrative scenario involving a true one-hot vector $y$ and two predicted probability vectors $q_1$ and $q_2$. Vector $q_1$ corresponds to a true negative prediction, while vector $q_2$ represents a false negative prediction. It is crucial to critically evaluate which vector demonstrates superior performance, taking into account that the highest-probability class of vector $q_1$ accurately predicts the outcome, while vector $q_2$ assigns a higher probability to the correct class. Upon analyzing the performance of vectors $q_1$ and $q_2$ in a single-label scenario, it becomes evident that vector $q_1$ possesses an advantage over vector $q_2$ in terms of classification accuracy. This attribute holds significant importance, as accurate decision-making is highly desirable and essential in numerous applications. Consequently, considering the paramount significance of precise decisions, it can be concluded that vector $q_1$ surpasses vector $q_2$ in quality. Therefore, this paper advocates that a scoring rule ought to consistently assign better scores to true positive and true negative decisions. However, a scientific discrepancy arises when considering the Brier Score and Logarithmic Loss metrics, as they occasionally contradict this advocated principle: contrary to expectations, vector $q_1$ can exhibit higher (worse) Brier Score and Logarithmic Loss values than vector $q_2$. This inconsistency between the advocated principle and the observed scores highlights a scientific gap that this paper aims to address and resolve. Therefore, this paper proposes a novel attribute for strictly proper scoring rules to tackle this issue. Our contributions can be stated as follows:
• We hypothesize that evaluation metrics should favor accurate predictions (true positives and true negatives) over misclassified ones (false positives and false negatives) when evaluating single-label multi-class classifications.
• We show that traditional strictly proper scoring rules such as the Brier Score and Logarithmic Loss can assign better scores to incorrect predictions than to correct ones.
• A new property, termed superior, is defined for scoring rules to formally establish that evaluations should prefer correct predictions.
• To this end, this research proposes two novel scoring rules called Penalized Brier Score (PBS) and Penalized Logarithmic Loss (PLL).
• This research modifies the Brier Score and Logarithmic Loss by incorporating a penalty term to make them superior. The penalty punishes incorrect predictions more strongly than the original scoring rules, aligning with the initial hypothesis that favors correct decisions over wrong ones.
• The utility of the proposed evaluation metrics for early stopping and model checkpointing is experimentally assessed.
• Experimental results demonstrate that the proposed evaluation metrics enhance the efficiency of classifier selection. Furthermore, these metrics exhibit a high correlation with the F1 score while also encompassing the measurement of prediction uncertainty and calibration.
The paper is structured in the following way: Section 2 provides background information and sets the stage for the research. Section 3 reviews previous work related to the topic. Section 4 details the motivation for developing new scoring rules and introduces the proposed Penalized Brier Score and Penalized Logarithmic Loss methods. Section 5 presents experimental results obtained from spatio-temporal data. Finally, section 6 concludes the paper.
2 Preliminaries
2.1 Scoring Rules
A scoring rule is a bivariate function that takes two arguments: a probability measure, representing a forecast, and an observed outcome [21]. The function returns a real number that measures the deviation between a forecast probability and reality. Scoring rules provide a summary measure for the evaluation of predictions that assign probabilities to events, such as probabilistic classification over a set of mutually exclusive outcomes or classes [22]. Scoring rules can be thought of as cost or loss functions and are evaluated as the empirical mean over a given sample, simply called the score [17]. It is worth noting that in neural networks, loss functions are typically utilized during the backpropagation process to optimize the network. However, in this research, we focus on evaluation metrics for assessing model performance.
Let $\Omega$ represent a general sample space, which contains all possible outcomes. Let $\mathcal{A}$ represent an event space, which is a $\sigma$-algebra of subsets of $\Omega$. An event refers to a set of observations within the sample space. The extended real line $\overline{\mathbb{R}} = [-\infty, \infty]$ denotes the possible scores. Let $\mathcal{P}$ represent a convex class of probability measures on the space $(\Omega, \mathcal{A})$. A probabilistic forecast is any probability measure $P \in \mathcal{P}$.
Definition 1 (Scoring Rule).
A scoring rule is a function $S : \mathcal{P} \times \Omega \to \overline{\mathbb{R}}$ assigning a numerical value $S(P, \omega)$ to each pair $(P, \omega)$, where $P \in \mathcal{P}$ is the issued probabilistic forecast and $\omega \in \Omega$ is the observation that materializes.
Let the distributional forecast $Q \in \mathcal{P}$ represent the forecaster's best estimate of the probability distribution. In the context of multi-class classification tasks, $\omega$ represents the ground-truth vector. The expected score $S(P, Q)$ is defined as the expected value of $S(P, \omega)$ when $\omega \sim Q$. It means that $S(P, Q) = \mathbb{E}_{\omega \sim Q}[S(P, \omega)]$.
Definition 2 (Strictly Proper Scoring Rule).
The scoring rule $S$ is proper with respect to $\mathcal{P}$ if
$$S(Q, Q) \leq S(P, Q) \quad \text{for all } P, Q \in \mathcal{P},$$
and it is strictly proper if equality holds if and only if $P = Q$.
2.2 Probabilistic Evaluation of Multi-Class Classification
In single-label multi-class classification, the objective is to predict the specific class label, denoted as $y$, for a given sample characterized by a feature vector, denoted as $x$. In probabilistic classification, the goal is to learn the entire conditional distribution $p(y \mid x)$, which represents the probability distribution over classes conditioned on the given features [23]. This is accomplished through the use of a probabilistic classifier denoted as $f$. The set of probability measures is typically identified with the probability simplex $\Delta^{C-1} = \{ q \in [0, 1]^C : \sum_{i=1}^{C} q_i = 1 \}$, and probability distributions are represented by vectors $q = (q_1, \ldots, q_C)$. Therefore, the overall form of the scoring rules for single-label multi-class classification problems is $S(q, y)$, where $q$ is a probability vector obtained from the classifier and $y$ encodes the true class.
Definition 3.
Brier Score (or squared loss or quadratic score), denoted by $BS$, is the most well-known strictly proper scoring rule and is defined as [18]
$$BS(q, y) = \sum_{i=1}^{C} (q_i - y_i)^2, \qquad (1)$$
where $C$ is the number of classes and $y = (y_1, \ldots, y_C)$ is the ground-truth label as a one-hot vector such that $y_i = 1$ if the true class is $i$, and $y_i = 0$ otherwise.
Definition 4.
Logarithmic Loss (or cross-entropy loss or log-loss or ignorance score), denoted by $LL$, is a strictly proper scoring rule and is defined as [19]
$$LL(q, y) = -\sum_{i=1}^{C} y_i \log(q_i), \qquad (2)$$
where $C$ is the number of classes and $y$ is the ground-truth label as a one-hot vector such that $y_i = 1$ if the true class is $i$, and $y_i = 0$ otherwise.
In practice, competing forecast procedures are ranked by their average score [17]. Suppose that we wish to fit a parametric model $P_\theta$ based on random samples $x_1, \ldots, x_n$. To estimate $\theta$, we might measure the goodness-of-fit by the mean score [24]
$$S_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} S(P_\theta, x_i), \qquad (3)$$
where $S$ is a strictly proper scoring rule. If $\theta_0$ denotes the true parameter, asymptotic arguments indicate that $\arg\min_\theta S_n(\theta) \to \theta_0$ as $n \to \infty$ [17]. Suppose that $S$ is a negatively oriented scoring rule like the Brier Score. This suggests a general approach to estimation: choose a strictly proper scoring rule that is tailored to the problem at hand, minimize $S_n(\theta)$ over the parameter space, and take the minimizer $\hat{\theta}_n = \arg\min_\theta S_n(\theta)$ as the optimum score estimator based on the scoring rule [17]. For positively oriented scoring rules like accuracy, $\hat{\theta}_n = \arg\max_\theta S_n(\theta)$.
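To make Eq. (3) concrete, the following sketch ranks two candidate classifiers by their mean Brier Score on a small validation sample and selects the minimizer; the forecasts, labels, and model names are illustrative values of our own, not taken from the paper.

```python
import numpy as np

def brier_score(q, y):
    """Eq. (1): per-sample squared error between probability vectors and one-hot labels."""
    return np.sum((q - y) ** 2, axis=-1)

# Hypothetical validation labels (one-hot) and forecasts from two candidate models.
y_val = np.eye(3)[[0, 2, 1, 0]]
q_model_a = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.2, 0.7],
                      [0.3, 0.4, 0.3],
                      [0.5, 0.3, 0.2]])
q_model_b = np.array([[0.4, 0.3, 0.3],
                      [0.2, 0.6, 0.2],
                      [0.1, 0.8, 0.1],
                      [0.6, 0.2, 0.2]])

# Eq. (3): empirical mean score for each candidate; pick the minimizer,
# because the Brier Score is negatively oriented.
mean_scores = {"A": brier_score(q_model_a, y_val).mean(),
               "B": brier_score(q_model_b, y_val).mean()}
best = min(mean_scores, key=mean_scores.get)
print(mean_scores, "-> select model", best)
```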
3 Literature Review
Evaluation metrics are crucial for assessing the performance of classification algorithms. They are used to evaluate the performance and effectiveness of classifiers, as well as to discriminate and select the optimal solution during the classification training process [1]. Classification evaluation metrics can generally be divided into accuracy-based and confidence-based metrics.
Accuracy-based metrics only consider whether a prediction matches the true label, without regard to predictive confidence or uncertainty. This category includes metrics like accuracy, error rate, precision, recall, F-measures, AUC, and related threshold-based measures. Accuracy and error rate are among the most widespread evaluation metrics. However, they have limitations such as providing less distinctive values and favoring the majority class [25, 10]. Precision and recall focus on different targets, making them unsuitable for selecting the optimal solution [1]. F-measures and AUC aim to address some of the shortcomings of individual metrics. F-measures consider both precision and recall, while AUC reflects the overall ranking performance of a classifier [25]. However, AUC has a high computational cost for large datasets [1].
Confidence-based metrics assess predictive confidence by rewarding calibrated probability estimates [26]. These metrics reward probabilistic calibration and include strictly proper scoring rules. In addition to the scoring rules introduced in the preliminaries, there are several other scoring rules that have been utilized in forecast verification [27]. Some examples include the spherical score, quadratic score, pseudospherical score, and continuous ranked probability score (CRPS) [28, 29]. Variations of these rules have been used in different fields. This includes electricity price forecasting to assess volatility over time, and wind/weather modeling to evaluate probabilistic predictions [30, 31]. Financial prediction and spatial statistics also apply scoring rules [29].
A scoring rule is considered proper if the expected score is minimized by the true distribution of the outcome of interest [32]. This means that a forecaster who uses a proper scoring rule will be incentivized to provide a forecast that is as close as possible to the true distribution. In terms of elicitation, scoring rules have a significant role in motivating assessors to provide conscientious and truthful assessments [17]. Nonetheless, because the concept of strict propriety adds an additional level of uniqueness, the utilization of strictly proper scoring rules offers enhanced theoretical assurances regarding the optimality of truth-telling as a strategy [17]. This encourages predictors to provide more honest probabilistic forecasts.
In addition, assessing forecast characteristics like calibration and sharpness is important for understanding individual and combined forecasts [16]. Calibration examines the consistency between forecasts and outcomes, while sharpness measures the concentration of the forecast distribution for a variable of interest, regardless of outcomes [17]. The relationship between strictly proper scoring rules and measures of calibration and sharpness can be established by decomposing the rules into components. A common decomposition enables the expression of a scoring rule as a function of both a calibration measure and a sharpness measure [16, 26]. By utilizing strictly proper scoring rules, the evaluation process can capture the probabilistic uncertainty inherent in classification problems and prevent overconfidence [16]. Accuracy and F-measures are two proper scoring rules, which are positively oriented. They are not strictly proper scoring rules because their maximizers are not unique [17]. Therefore, strictly proper scoring rules such as Brier Score and Logarithmic Loss have more advantages than metrics derived from the confusion matrix. In conclusion, this paper recommends the evaluation of multi-class classification using strictly proper scoring rules for the following reasons:
• Elicitation: Strictly proper scoring rules have desirable elicitation properties. They uniquely incentivize assessors to provide honest probabilistic forecasts by maximizing the expected score [17].
• Reliability: Strictly proper scoring rules enable reliable assessments of uncertainty. Decomposing proper scoring rules into calibration and sharpness components facilitates the evaluation of forecast uncertainty. This prevents overconfidence by capturing the inherent probabilistic uncertainty in classification [16, 26].
While this paper recommends using strictly proper scoring rules, they are not always sufficient to ensure that more accurate forecasts receive better scores. Previous evaluation metrics for multi-class classification tasks have the shortcoming of not explicitly favoring accurate predictions over misclassified ones. Metrics like Brier Score and Logarithmic Loss, while being strictly proper scoring rules, do not consistently assign better scores to predictions that result in true positives and true negatives compared to those that are false positives and false negatives. This can potentially lead to suboptimal model selection if a model achieves better scoring by making incorrect predictions rather than correct ones. The paper addresses this shortcoming by proposing novel superior scoring rules called Penalized Brier Score and Penalized Logarithmic Loss.
4 Proposed Model
This section first presents the theoretical background, providing results that motivate the requirement for proper scoring rules that assign better scores to correct predictions than to incorrect ones. Next, the proposed methodology is introduced. It illustrates how the scoring rules can be modified by incorporating penalty terms. This serves to penalize incorrect predictions more strongly, thereby ensuring that the scoring rules consistently assign better scores to correct predictions, as intended.
4.1 Theoretical Analysis
The primary objective of any classifier is to accurately assign observations to their respective classes. Observations that are correctly classified are considered to be of greater value than those that are incorrectly classified. A classifier that can achieve a high rate of accurate classifications and a low error rate is deemed superior because it indicates a greater ability to identify patterns and differentiate between classes. Conversely, observations that are incorrectly classified suggest that the classifier has not accurately modeled the relationship between features and classes. These errors compromise the classifier’s capacity to generalize and make accurate predictions on new samples. Correct classifications represent successful outcomes for the classifier, while errors indicate failures. Therefore, it is crucial to maximize correct classifications and minimize errors in order to develop an effective and useful classifier. The reliability of a classifier’s performance is determined by its ability to consistently reproduce the correct labels instead of settling for inferior results that are prone to mistakes.
Fig. 1 shows the big picture of the theoretical analysis presented in this section. The subsequent part of this section demonstrates that both the Brier Score and the Logarithmic Loss are indifferent to this preference. To prove this point, a novel attribute for scoring rules is introduced. This attribute guarantees that the scoring rule assigns a better score to observations that are correctly classified.
Definition 5 (Superior Scoring Rule).
Let $\mathcal{G}$ and $\mathcal{W}$ be two subsets of the set of all predictions issued by a certain classifier:
$$\mathcal{G} = \{\, q : \arg\max_i q_i = \arg\max_i y_i \,\}, \qquad \mathcal{W} = \{\, q : \arg\max_i q_i \neq \arg\max_i y_i \,\},$$
where $y$ is the ground-truth label vector such that $y_j = 1$ and $y_i = 0$ for $i \neq j$ when $j$ is the true class. Therefore, $\mathcal{G}$ contains true positive and true negative predictions, and $\mathcal{W}$ contains false positive and false negative predictions. Let $S(q, y)$ represent the score when the prediction $q$ is issued and the ground truth is $y$. $S$ is superior if its scores for every member of $\mathcal{G}$ are always better than those for every member of $\mathcal{W}$. Let $q \in \mathcal{G}$ with its corresponding ground-truth vector $y$, and let $q' \in \mathcal{W}$ with its corresponding ground-truth vector $y'$. Then, if the condition
$$S(q, y) < S(q', y')$$
is always satisfied (for negatively oriented scoring rules such as the Brier Score and Logarithmic Loss), $S$ is superior.
Therefore, if a scoring rule has this property, it will always give better scores to correct predictions. In the following, we show theoretically that Logarithmic Loss and Brier Score are not superior.
Theorem 1 (LL Property).
In the context of single-label multi-class classification tasks with more than two classes ($C > 2$), the Logarithmic Loss is not superior.
Proof.
See Appendix 7. ∎
Theorem 2 (BS Property).
In the context of single-label multi-class classification tasks with more than two classes ($C > 2$), the Brier Score is not superior.
Proof.
See Appendix 8. ∎
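For concreteness, the following sketch constructs one hypothetical pair of predictions (the numeric values and variable names are ours, chosen purely for illustration, with $C = 3$ and the natural logarithm): the misclassified vector receives better, i.e., lower, Brier Score and Logarithmic Loss values than the correctly classified one, which is exactly the behavior established by Theorems 1 and 2.

```python
import numpy as np

y = np.array([1.0, 0.0, 0.0])             # one-hot ground truth: the true class is class 0
q_correct = np.array([0.36, 0.32, 0.32])  # argmax = class 0: correct but low-confidence prediction
q_wrong   = np.array([0.45, 0.50, 0.05])  # argmax = class 1: misclassified, yet more mass on the true class

def brier(q, y):
    # Eq. (1): sum of squared differences over the classes.
    return np.sum((q - y) ** 2)

def logloss(q, y):
    # Eq. (2): negative log-probability of the true class.
    return -np.sum(y * np.log(q))

print(brier(q_correct, y), brier(q_wrong, y))      # ~0.6144 vs ~0.5550 -> the wrong prediction scores better
print(logloss(q_correct, y), logloss(q_wrong, y))  # ~1.0217 vs ~0.7985 -> the wrong prediction scores better
```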
According to the Monte Carlo simulations provided in Appendix 8, it is shown that the Brier Score outperforms the Logarithmic Loss in this respect. However, to ensure that the scoring rule consistently assigns a better score to accurately classified observations, a new criterion is necessary, which can be attained by modifying the current scoring rules. Specifically, if the probability vector belongs to the set $\mathcal{W}$, an extra penalty is integrated into the scoring rule. Since this research focuses on the Brier Score and Logarithmic Loss, which should be minimized to be optimal, our aim is to assign a penalty to incorrect predictions such that the penalized score exceeds the highest score possible for a correct prediction. To accomplish this, the penalty is set equal to the maximum value of the scoring rule over the set $\mathcal{G}$.
Theorem 3 (Maximum in $\mathcal{G}$).
The largest possible value of the Brier Score over the set $\mathcal{G}$ is $\frac{C-1}{C}$.
Proof.
See Appendix 9. ∎
Theorem 4 (Maximum in $\mathcal{G}$).
The largest possible value of the Logarithmic Loss over the set $\mathcal{G}$ is $\log(C)$.
Proof.
See Appendix 10. ∎
Based on the highest possible scores that the Brier Score and Logarithmic Loss can assign to correct predictions, an adjusted form of these scoring rules can be defined. The modified Brier Score with the penalty term, the Penalized Brier Score (PBS), can be expressed as:
$$PBS(q, y) = \sum_{i=1}^{C} (q_i - y_i)^2 + \frac{C-1}{C}\,\mathbb{1}\!\left[\arg\max_i q_i \neq \arg\max_i y_i\right]. \qquad (4)$$
The modified Logarithmic Loss with the penalty term, the Penalized Logarithmic Loss (PLL), can be expressed as:
$$PLL(q, y) = -\sum_{i=1}^{C} y_i \log(q_i) + \log(C)\,\mathbb{1}\!\left[\arg\max_i q_i \neq \arg\max_i y_i\right], \qquad (5)$$
where $y$ is the ground-truth vector, $q$ is the probability vector predicted by the probabilistic classifier, $C$ is the number of classes, and $\mathbb{1}[\cdot]$ is the indicator function, which equals one exactly when the prediction is misclassified (i.e., $q \in \mathcal{W}$). As shown in the following theorems, the two proposed evaluation metrics are not only strictly proper scoring rules but also superior.
Theorem 5 (PBS & PLL Scoring Rules).
PBS and PLL are strictly proper scoring rules.
Proof.
See Appendix 11. ∎
Theorem 6 (PBS & PLL Property).
PBS and PLL are superior.
Proof.
The penalty term equals the maximum score that can be assigned to a correct prediction. Since the base score of an incorrect prediction is strictly positive, adding this penalty ensures that every member of $\mathcal{W}$ receives a strictly worse score than any member of $\mathcal{G}$. Consequently, the scoring rules modified by the penalty term consistently assign better scores to correct predictions than to incorrect ones. ∎
The penalties in Eq. (4) and Eq. (5) are applied only to incorrect predictions and are added on top of the base scores in Eq. (1) and Eq. (2). Furthermore, the more incorrect predictions a model makes, the worse its mean score in Eq. (3) becomes. As a result, the modified evaluation metrics can more reliably identify optimal model checkpoints and early stopping points that achieve better generalization performance compared to the original metrics like Brier Score and Logarithmic Loss.
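Continuing the illustrative pair used after Theorem 2 (hypothetical values with $C = 3$ and the natural logarithm), the penalties of Eq. (4) and Eq. (5) reverse the ranking:
$$PBS(q_{\mathrm{correct}}, y) = 0.6144, \qquad PBS(q_{\mathrm{wrong}}, y) = 0.5550 + \tfrac{2}{3} \approx 1.2217,$$
$$PLL(q_{\mathrm{correct}}, y) = 1.0217, \qquad PLL(q_{\mathrm{wrong}}, y) = 0.7985 + \log(3) \approx 1.8971,$$
so under both penalized metrics the correctly classified vector now receives the better (lower) score.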
4.2 Algorithm
In this section, we introduce vectorized implementations of the proposed evaluation metrics. The pseudocode of the Penalized Brier Score and the Penalized Logarithmic Loss is presented in Algorithm 2 and Algorithm 3, respectively. The PBS and PLL algorithms leverage the Penalizing procedure (Algorithm 1) to incorporate penalties into the Brier Score and the Logarithmic Loss.
The Penalizing algorithm detects wrong predictions and penalizes them. The output is a vector in which a value of zero means that the corresponding prediction was correct; otherwise the prediction was wrong and was penalized. This payoff vector is then used by the other scoring functions to calculate the performance metrics. The algorithm takes as input the predicted probabilities $q$, the ground-truth labels $y$, and a penalty value for wrong predictions. It first computes the hot value of each sample, which is the predicted probability of the correct class. It then subtracts the hot value from the predicted probabilities to obtain a container of values that are positive only for incorrect predictions. All negative values in the container are set to zero, and the sum of the remaining values is computed for each sample. The penalty value is then broadcast into a vector. Finally, all positive sums in the container are replaced by the penalty, and the container is returned as the payoff for each sample.
The PBS algorithm takes as input the predicted probabilities $q$ and the ground-truth labels $y$. It first computes the penalty factor as $\frac{C-1}{C}$, where $C$ is the number of classes. It then calls the Penalizing algorithm to compute the payoff for each sample. Next, the Brier scores are computed from the squared differences between the predicted probabilities and the ground-truth labels. Finally, the PBS is computed as the mean of the sum of the Brier scores and the payoffs.
The PLL algorithm is similar to the PBS algorithm, but it uses the Logarithmic Loss as the base score instead of the Brier Score, with $\log(C)$ as the penalty factor. The Logarithmic Loss of each sample is computed as the sum of the element-wise products of the ground-truth labels and the negative logarithm of the predicted probabilities. Finally, the PLL is computed as the mean of the sum of the log losses and the payoffs.
• q: predicted probabilities of size $N \times C$,
• y: ground-truth labels of size $N \times C$, where $N$ is the number of samples and $C$ is the number of classes,
• penalty: a scalar penalty applied to wrong predictions.
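Because the pseudocode of Algorithms 1–3 is not reproduced here, the following NumPy sketch mirrors the textual description above; the function names and the small epsilon guard inside the logarithm are our own choices rather than the paper's exact implementation, and the per-sample base scores are summed over classes as in Eq. (1) and Eq. (2).

```python
import numpy as np

def penalizing(q, y, penalty):
    """Return a per-sample payoff: 0 for correct predictions, `penalty` for wrong ones."""
    hot = np.sum(q * y, axis=1, keepdims=True)   # predicted probability of the true class per sample
    container = q - hot                          # positive entries exist only for misclassified samples
    container[container < 0] = 0.0               # discard negative values
    wrong = np.sum(container, axis=1) > 0        # True where some class outweighs the true class
    return np.where(wrong, penalty, 0.0)         # payoff vector of length N

def pbs(q, y):
    """Penalized Brier Score, Eq. (4)."""
    n_classes = q.shape[1]
    payoff = penalizing(q, y, penalty=(n_classes - 1) / n_classes)
    brier = np.sum((q - y) ** 2, axis=1)         # per-sample Brier Score, Eq. (1)
    return np.mean(brier + payoff)

def pll(q, y, eps=1e-12):
    """Penalized Logarithmic Loss, Eq. (5)."""
    n_classes = q.shape[1]
    payoff = penalizing(q, y, penalty=np.log(n_classes))
    logloss = -np.sum(y * np.log(q + eps), axis=1)  # per-sample Logarithmic Loss, Eq. (2)
    return np.mean(logloss + payoff)
```

In this sketch, ties in which another class exactly equals the true-class probability are treated as correct; a different convention would only require changing the comparison inside `penalizing`.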
5 Experimental results
5.1 Datasets
Scoring rules are commonly used in the field of spatio-temporal statistics to compare different models [33]. To assess the effectiveness of our proposed method, we have selected several spatio-temporal datasets. Classifying spatio-temporal data is generally challenging, as evidenced by the low accuracy scores achieved on these datasets. For this reason, thorough model evaluation takes on greater importance. Additionally, it is crucial to consider the incorrect classes when the classifier's ability to generalize is limited. While the probabilities assigned to incorrect classes may not be particularly informative if the probability of the true class is above 0.5 for all observations, they become increasingly important when the probability of the true class is below 0.5 for many observations. As a result, any evaluation method must take the incorrect classes into account when making decisions.
The proposed model was evaluated using various classification problems, as described below. The datasets used for evaluation included three-axis accelerometer signals obtained from participants’ activities such as running or lying, as described in [34, 35, 36, 37]. While these datasets were intended for research purposes related to activity recognition, they also presented challenges for identifying individuals based on their motion patterns. Also, the dataset presented in [38] was generated for predicting motor failure time using three-axis accelerometer data. The dataset in [39] provided information about power consumption such as temperature, humidity, wind speed, consumption, general diffuse flows, and diffuse flows in three zones of Tetouan City, which is suitable for predicting a zone based on consumption information. The dataset presented in [40] includes hourly air pollutant data such as PM2.5, PM10, SO2, NO2, CO, O3, temperature, pressure, dew point temperature, precipitation, wind direction, wind speed from 12 air-quality monitoring sites, and the proposed model predicted the sites based on their air-quality information. The location of participants was collected using two received signal strength indicator (RSSI) measurements, as described in [41]. While the primary task was indoor localization, the proposed model aimed to identify participants based on their location. Finally, the dataset presented in [42] included three-axis accelerometer and three-axis gyroscope data from ten drivers, and the goal was to identify drivers based on their driving behaviors.
5.2 Evaluation Strategy
Since the purpose of this evaluation is to analyze the two proposed metrics, the classification model used is a Convolutional Neural Network (CNN). CNNs are widely used for spatial and temporal data modeling tasks [2]. Fig. 2 illustrates the architecture of the proposed CNN model. As neural networks are optimized through iterative training procedures, the proposed metrics can be systematically tested during this process. Model checkpointing saves model weights periodically during training. This allows rolling back to previous checkpoints if overfitting or divergence occurs. Early stopping monitors validation performance and stops training if no improvement is seen for a set number of epochs, preventing overfitting. Implementing these techniques with the proposed metrics provides insights into their behavior at different stages of neural network optimization.
When dealing with spatio-temporal data, it is essential to employ a cross-validation technique that accounts for the temporal dependencies within the data. This is crucial because the data exhibit interdependence, and a random sample selection for training and testing could introduce biases in the results [43]. Block cross-validation is a type of temporal cross-validation that is often used for temporal data [43]. It helps address the issue of data leakage that can occur in standard $k$-fold cross-validation when applied to time series. Algorithm 4 shows the pseudocode of block CV. The time series data is first divided into non-overlapping blocks, each representing a continuous time interval. For each block $i$, a validation set is created by selecting a contiguous subset of blocks starting from block $i$. A test set is then created by selecting a contiguous subset of blocks starting from the end of the validation blocks. The remaining blocks are assigned to the training set. Within each block, windowing techniques are applied to segment the data. The model is trained on the segments of the training set and validated on the segments of the validation set. The trained model is then evaluated on the segments of the test set. For each iteration, 50% of the subsets are allocated for training, 30% for testing, and 20% for validation. By adopting this approach, the model is trained on a sufficiently large dataset that enables effective generalization to new data, while the testing phase utilizes an independent dataset.
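A minimal sketch of the block-wise splitting described above, under our interpretation of Algorithm 4 (which is not reproduced here); the number of blocks and the rounding of the 50/30/20 proportions are illustrative choices.

```python
def block_cv_splits(n_blocks, val_frac=0.2, test_frac=0.3):
    """Yield (train, val, test) block indices: validation blocks start at block i,
    test blocks follow them contiguously, and the remaining blocks form the training set."""
    n_val = max(1, int(round(val_frac * n_blocks)))
    n_test = max(1, int(round(test_frac * n_blocks)))
    for i in range(n_blocks - n_val - n_test + 1):
        val = list(range(i, i + n_val))
        test = list(range(i + n_val, i + n_val + n_test))
        train = [b for b in range(n_blocks) if b not in val and b not in test]
        yield train, val, test

for train, val, test in block_cv_splits(n_blocks=10):
    print("train:", train, "val:", val, "test:", test)
```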
The validation set guides hyperparameter tuning, such as selecting the optimal window length and overlap of the sliding window, evaluated via grid search; Table 1 summarizes the key attributes of each dataset used in the experiments. The validation set also allows assessing model checkpointing and early stopping criteria by tracking validation performance over training epochs. Once hyperparameters are fixed, the final model performance is reported on the independent test sets from each fold. This rigorous evaluation process provides robust estimates of the proposed metrics. The CNN model is implemented using Python and the TensorFlow library. Nadam is employed to optimize the network parameters. The results of the experiments can be accessed via GitHub.
Dataset | Number of Classes | Window Length (hh:mm:ss) | Window Overlap |
---|---|---|---|
[34] | 13 | 00:00:03 | 75% |
[35] | 10 | 00:00:08 | 75% |
[36] | 12 | 00:00:06 | 75% |
[37] | 5 | 00:00:03 | 75% |
[38] | 3 | 00:00:06 | 75% |
[39] | 3 | 00:07:00 | 75% |
[40] | 12 | 00:01:00 | 75% |
[41] | 12 | 00:00:30 | 75% |
[42] | 10 | 00:00:10 | 75% |
5.3 Details of Experiments
To ensure a thorough evaluation of the proposed evaluation metrics, the model training incorporated model checkpointing (CP) and early stopping (ES) techniques, considering both traditional evaluation metrics and the proposed metrics. Checkpoints were saved during training at epochs where either the traditional metrics (such as Brier Score and Logarithmic Loss) or the proposed metrics demonstrated the best performance on the validation set. ES was implemented as well, terminating training if there was no improvement in either the traditional metrics or the proposed metrics for a specified number of epochs, thus preventing overfitting. To enhance the robustness of the results, this entire process was repeated multiple times to identify the optimal model hyperparameters. Such an approach enabled a fair comparison between the proposed metrics and the sole use of traditional metrics in determining the best model checkpoint. Subsequently, the checkpoint yielding the highest performance, as indicated by each metric, was evaluated on the test set, providing an accurate assessment of each metric's ability to identify the model with the highest true performance.
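The checkpointing and early-stopping procedure described above can be sketched as a generic training loop; the helper callables, the Keras-style get_weights/set_weights interface, and the default patience are placeholders of ours rather than the paper's implementation.

```python
import numpy as np

def train_with_metric_selection(model, train_one_epoch, predict_val_proba, y_val,
                                metric_fn, max_epochs=100, patience=10):
    """Checkpoint on the best (lowest) validation metric and stop early when no
    improvement has been observed for `patience` consecutive epochs."""
    best_score, best_weights, wait = np.inf, None, 0
    for _ in range(max_epochs):
        train_one_epoch(model)                              # one pass over the training data
        score = metric_fn(predict_val_proba(model), y_val)  # e.g. pbs(...) or pll(...) on validation data
        if score < best_score:                              # improvement: keep this checkpoint
            best_score, best_weights, wait = score, model.get_weights(), 0
        else:
            wait += 1
            if wait >= patience:                            # early stopping
                break
    model.set_weights(best_weights)                         # roll back to the best checkpoint
    return model, best_score
```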
The F1 score and accuracy are widely recognized as prominent evaluation metrics for classification tasks. However, to effectively evaluate model performance on unbalanced datasets, accuracy alone is not a suitable metric [44]. As most real-world datasets are either heavily or moderately imbalanced, accuracy can be misleading when the number of samples differs greatly between classes [45]. For this reason, the macro-averaged F1 score is also reported. The F1 score considers both precision and recall, providing a better sense of classification effectiveness on unbalanced problems [1]. However, it is important to note that this metric serves only as a proper scoring rule and does not encompass the measurement of forecast uncertainty and overconfidence [11]. To address this limitation, it is proposed that strictly proper scoring rules be employed, as they possess the capability to gauge uncertainty. By exhibiting behavior similar to the F1 score while also measuring uncertainty, the proposed scoring rules can effectively fulfill both reliability and accuracy requirements. Therefore, incorporating such scoring rules in the evaluation process provides a comprehensive assessment of classification models, encompassing not only their predictive accuracy but also their ability to quantify uncertainty.
5.4 Discussion
Fig. 3 presents validation statistics over training epochs for each dataset, including the macro-averaged F1 score, the Brier Score, and the proposed Penalized Brier Score. This provides an illustrative example of how the different scoring rules change during model optimization. The Pearson correlation between each scoring rule and the F1 score is also shown. When plotting and comparing the different validation metric trends together, it is important to note that they are oriented in different directions. Specifically, the F1 score increases as the model improves, while the PBS and PLL decrease as performance improves. Therefore, to make the results easier to interpret, the graphed PBS values have been multiplied by -1 to match the positively oriented F1 score trend.
Importantly, the figure also marks the optimal point on each trend with a data point symbol. This optimal point corresponds to the epoch at which the metric reached its best validation value. By pinpointing these peaks, we can see exactly where each scoring rule reached its best performance relative to the others. Notably, the PBS trends exhibit optimal points that are closer to the F1 score points than those of the standard Brier Score. This observation suggests that, in these specific examples, the PBS was more suitable for model checkpointing. Additionally, the slopes of the PBS trends display greater similarity to the F1 score trends than the standard Brier Score slopes. Moreover, the negative correlation values confirm that the PBS consistently maintains a stronger relationship with the F1 score than the standard Brier Score. Therefore, given these observations, implementing early stopping based on the PBS rather than the Brier Score has the potential to yield models with improved test performance.
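The correlations reported in the figures can be computed directly from the per-epoch validation curves; the arrays below are hypothetical placeholders, and the PBS curve is negated to account for its negative orientation before comparing trends.

```python
import numpy as np

# Hypothetical per-epoch validation curves (illustrative values only).
f1_per_epoch  = np.array([0.42, 0.55, 0.61, 0.64, 0.63, 0.66])
pbs_per_epoch = np.array([0.95, 0.78, 0.69, 0.66, 0.68, 0.63])

# Pearson correlation between the F1 trend and the (negated) PBS trend.
r = np.corrcoef(f1_per_epoch, -pbs_per_epoch)[0, 1]
print(f"Pearson r between F1 and -PBS: {r:.3f}")
```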
5.5 Benchmark
The previous discussion examining superior scoring rule trends during training provided initial insights into how the proposed metrics may help optimize model performance. First, the optimal points of the PBS trends were closer to those of the F1 score trends compared to the standard Brier Score. Additionally, the negative correlation between the PBS and F1 score was consistently higher. Given this observation, it was expected that employing the PBS for model selection could lead to improved performance. To decisively validate their effectiveness, rigorous quantitative evaluation was carried out using multiple experiments.
The first stage involved assessing the correlation between the metrics and the F1 score on validation data using k-fold cross-validation. To this end, the performance of models selected with different scoring rules using both model checkpointing (CP) and early stopping (ES) is evaluated. As shown in Table 2, Pearson correlation coefficients were calculated between each of the scoring rules and the macro-averaged F1 scores across datasets and folds. In this table, each entry represents the correlation between the F1 score and the corresponding scoring rule. Strong positive correlation indicates close agreement between the trends. The results demonstrate that the PBS and PLL trends exhibited the highest degree of similarity to the F1 score variations. With correlations exceeding those of the other scores, PBS and PLL can be reliably used to track changes in model performance.
Next, the ability of the proposed superior scoring rules to select high-scoring models was evaluated through k-fold cross-validation. Table 3 compares the macro-averaged F1 scores on test data of the optimized classifiers chosen by each metric. These F1 scores were determined through ES or CP relying on a superior scoring rule. Each entry denotes the F1 score achieved when model selection employs the corresponding scoring rule. As depicted in the table, the F1 score is consistently better when the model is selected using each of the proposed superior scoring rules. This observation emphasizes the effectiveness of the superior scoring rules in identifying models that yield higher F1 scores. Furthermore, consistently better scores emerged when PBS guided selection rather than PLL.
Consequently, these findings underscore the importance of employing appropriate superior scoring rules to circumvent shortcomings of F1 score alone in model selection, ultimately enabling improved classification capability and more trustworthy predictions. Collectively, the correlational and benchmark results provide compelling quantitative evidence that proposed penalties within strictly proper scoring rules augment their capacity to reflect true performance changes, in addition to facilitating the identification of models with stronger predictive power for new samples. Therefore, by more faithfully reflecting F1 score behavior, the proposed criteria enhance optimal model evaluation, selection, and classification for challenging spatio-temporal applications.
Data | ES | CP | ||||||
✓ | 0.957 (0.02) | 0.969 (0.01) | 0.012 | 0.837 (0.08) | 0.900 (0.06) | 0.063 | ||
[34] | ✓ | 0.964 (0.01) | 0.980 (0.01) | 0.016 | 0.526 (0.47) | 0.640 (0.30) | 0.113 | |
✓ | 0.963 (0.04) | 0.983 (0.02) | 0.020 | 0.764 (0.17) | 0.845 (0.11) | 0.081 | ||
[35] | ✓ | 0.699 (0.46) | 0.926 (0.09) | 0.228 | 0.520 (0.53) | 0.589 (0.45) | 0.069 | |
✓ | 0.721 (0.17) | 0.740 (0.22) | 0.019 | 0.731 (0.15) | 0.745 (0.14) | 0.013 | ||
[36] | ✓ | 0.607 (0.49) | 0.748 (0.45) | 0.141 | 0.306 (0.53) | 0.508 (0.45) | 0.202 | |
✓ | 0.690 (0.73) | 0.748 (0.45) | 0.058 | 0.567 (0.53) | 0.619 (0.49) | 0.052 | ||
[37] | ✓ | 0.667 (1.00) | 0.674 (0.72) | 0.007 | 0.317 (0.65) | 0.378 (0.65) | 0.061 | |
✓ | 0.717 (0.61) | 0.728 (0.61) | 0.010 | 0.275 (0.70) | 0.292 (0.71) | 0.016 | ||
[38] | ✓ | 0.739 (0.63) | 0.740 (0.63) | 0.001 | 0.384 (0.60) | 0.425 (0.55) | 0.041 | |
✓ | 0.965 (0.08) | 0.987 (0.03) | 0.022 | 0.592 (0.16) | 0.664 (0.13) | 0.072 | ||
[39] | ✓ | 0.995 (0.01) | 0.997 (0.01) | 0.002 | 0.419 (0.65) | 0.446 (0.66) | 0.026 | |
✓ | 0.680 (0.31) | 0.899 (0.05) | 0.219 | 0.595 (0.48) | 0.777 (0.25) | 0.182 | ||
[40] | ✓ | 0.665 (0.30) | 0.813 (0.13) | 0.148 | 0.378 (0.71) | 0.802 (0.14) | 0.424 | |
✓ | 0.572 (0.48) | 0.597 (0.27) | 0.025 | 0.694 (0.23) | 0.724 (0.21) | 0.031 | ||
[41] | ✓ | 0.444 (0.48) | 0.704 (0.21) | 0.260 | 0.434 (0.61) | 0.498 (0.53) | 0.064 | |
✓ | 0.874 (0.14) | 0.924 (0.07) | 0.050 | 0.450 (0.38) | 0.553 (0.32) | 0.103 | ||
[42] | ✓ | 0.916 (0.12) | 0.953 (0.06) | 0.037 | 0.391 (0.52) | 0.629 (0.32) | 0.239 |
Data | ES | CP | ||||||
✓ | 45.00 (0.05) | 51.65 (0.07) | 6.65 | 64.76 (0.09) | 65.85 (0.08) | 1.10 | ||
[34] | ✓ | 55.47 (0.05) | 60.03 (0.07) | 4.56 | 70.00 (0.08) | 71.82 (0.07) | 1.83 | |
✓ | 53.01 (0.06) | 57.60 (0.04) | 4.59 | 53.90 (0.08) | 55.48 (0.05) | 1.59 | ||
[35] | ✓ | 55.37 (0.04) | 58.03 (0.05) | 2.66 | 55.74 (0.06) | 57.63 (0.07) | 1.89 | |
✓ | 28.24 (0.07) | 32.48 (0.09) | 4.24 | 30.08 (0.04) | 30.62 (0.04) | 0.53 | ||
[36] | ✓ | 32.30 (0.08) | 34.28 (0.09) | 1.99 | 31.65 (0.08) | 31.80 (0.05) | 0.15 | |
✓ | 58.27 (0.20) | 59.14 (0.19) | 0.87 | 56.79 (0.14) | 59.55 (0.13) | 2.76 | ||
[37] | ✓ | 66.51 (0.06) | 67.49 (0.05) | 0.97 | 68.21 (0.11) | 69.78 (0.08) | 1.57 | |
✓ | 43.66 (0.34) | 50.81 (0.32) | 7.14 | 51.12 (0.32) | 59.69 (0.26) | 8.57 | ||
[38] | ✓ | 48.00 (0.09) | 54.83 (0.16) | 6.83 | 64.12 (0.18) | 66.82 (0.22) | 2.71 | |
✓ | 80.93 (0.08) | 83.20 (0.07) | 2.27 | 77.87 (0.06) | 80.13 (0.08) | 2.26 | ||
[39] | ✓ | 82.08 (0.08) | 83.88 (0.07) | 1.80 | 80.22 (0.06) | 83.42 (0.06) | 3.20 | |
✓ | 50.13 (0.09) | 53.51 (0.07) | 3.38 | 55.59 (0.08) | 56.33 (0.08) | 0.74 | ||
[40] | ✓ | 52.04 (0.11) | 53.63 (0.10) | 1.59 | 51.73 (0.07) | 52.39 (0.07) | 0.66 | |
✓ | 27.09 (0.02) | 27.77 (0.03) | 0.69 | 23.10 (0.04) | 23.75 (0.02) | 0.65 | ||
[41] | ✓ | 26.51 (0.05) | 29.37 (0.04) | 2.86 | 26.77 (0.04) | 27.39 (0.05) | 0.62 | |
✓ | 66.50 (0.03) | 66.53 (0.03) | 0.03 | 65.39 (0.06) | 65.81 (0.05) | 0.43 | ||
[42] | ✓ | 67.85 (0.03) | 68.36 (0.02) | 0.51 | 66.64 (0.04) | 67.07 (0.05) | 0.43 |
6 Conclusion
This study introduced novel superior scoring rules called Penalized Brier Score (PBS) and Penalized Logarithmic Loss (PLL) for evaluating probabilistic classification models. PBS and PLL modify the traditional Brier Score and Logarithmic Loss by integrating a penalty term for misclassified observations. As demonstrated formally, PBS and PLL satisfy the properties of strictly proper scoring rules while also consistently assigning superior scores to correctly classified observations. The experimental evaluation highlighted the benefits of using PBS and PLL for model checkpointing and early stopping. PBS and PLL demonstrated a higher negative correlation with the F1 score compared to traditional Brier Score and Logarithmic Loss during model training. Consequently, the proposed scoring functions were more effective in identifying optimal model checkpoints and determining early stopping points, leading to improved F1 scores. The test results substantiated that model selection based on PBS and PLL yielded superior F1 scores compared to traditional metrics. In conclusion, PBS and PLL enable more accurate model evaluation by encapsulating both proper scoring rule principles and preferential treatment of correct classifications. The proposed metrics address a critical gap between probabilistic uncertainty assessment and deterministic accuracy maximization. By accounting for uncertainty and the value of true classifications, PBS and PLL can enhance model selection, checkpointing, and early stopping in classification tasks requiring reliable predictive uncertainty. Further research can explore PBS and PLL with different model architectures and classification problems. Also, various penalties can be investigated to obtain better performance.
Appendix
7 Proof of Theorem 1
Let $q$ be a member of the set $\mathcal{G}$ with hot value $q_j$, where $j$ is the index of the true class, and let $y$ be the ground-truth label vector. Due to the single-label classification property, Eq. (2) can be expressed as:
$$LL(q, y) = -\log(q_j). \qquad (6)$$
Now, let $q' \in \mathcal{W}$ be another prediction whose true class has index $j'$. There are three possible cases for its true class probability $q'_{j'}$:
• $q'_{j'} = q_j$. It means that $-\log(q'_{j'}) = -\log(q_j)$, so both predictions receive the same score.
• $q'_{j'} = q_j + \epsilon$, where $\epsilon > 0$. Since $q'_{j'} > q_j$ and $-\log(\cdot)$ is monotonically decreasing and positive in $(0, 1)$, it follows that $-\log(q'_{j'}) < -\log(q_j)$.
• $q'_{j'} = q_j - \epsilon$, where $\epsilon > 0$. Since $q'_{j'} < q_j$ and $-\log(\cdot)$ is monotonically decreasing and positive in $(0, 1)$, it follows that $-\log(q'_{j'}) > -\log(q_j)$.
Pairs falling into the first two cases exist whenever $C > 2$: a correct prediction may place only slightly more than $\frac{1}{C}$ on the true class, while an incorrect prediction can place up to just under $\frac{1}{2}$ on the true class. Therefore, when $q'_{j'} \geq q_j$, the Logarithmic Loss does not assign a strictly higher score to the incorrect prediction $q'$ compared to the true prediction $q$. Hence, it cannot be considered superior.
8 Proof of Theorem 2
Let $q$ be a prediction with hot value $q_j$, where $j$ is the index of the true class, and let $y$ be the ground-truth label vector. Due to the single-label classification property, Eq. (1) can be written as:
$$BS(q, y) = (1 - q_j)^2 + \sum_{i \neq j} q_i^2. \qquad (7)$$
In the following, the term "hot value" is used to refer to the element of a probability vector that corresponds to the true class, whereas the other elements are referred to as "non-hot values". Now, let $q'$ be another prediction. There are three possible cases for its true class probability:
8.1
It is possible to demonstrate that the variance of the non-hot part has the greatest impact on the score value of the Brier score function. The Brier Score for is given by the following expression:
(8) |
where the index of represents the true class or . The non-hot part can be expanded in the following manner:
(9) | |||
(10) | |||
(11) | |||
(12) | |||
(13) | |||
(14) | |||
(15) |
If $\bar{q}$ is the average of the non-hot part, then:
(16) |
It is evident that:
(17) |
Consequently, it can be obtained that:
(18) |
where . Therefore, based on Eq. (16), the Brier Score in Eq. (18) can be minimized by reducing the variance of the non-hot part and maximizing .
Furthermore, may have non-hot values that exceed , and the sum of these non-hot values is equal to . The following expression is utilized to formulate the sum of non-hot values that are less than :
(19) | |||
(20) | |||
(21) | |||
(22) |
This implies that the sum of the non-hot values is equivalent to . As increases and approaches , the difference between and also increases, leading to an increase in the variance of the non-hot part in Eq. (18). Therefore, it can be inferred that the variance of the non-hot part of is greater than . Let represent the average of the non-hot part of :
(23) |
Now, as and , we can proceed as follows:
(24) | |||
(25) | |||
(26) | |||
(27) |
Thus, the condition still holds even if a non-hot value is only slightly greater than the hot value, and the Brier Score is not superior in this case.
8.2
To verify the validity of the condition of being superior, a Monte Carlo simulation can be conducted, which involves comparing to for numerous randomly generated values of and . The variables in this simulation include , , , , and , where denotes the hot value of , represents the difference between the hot value of and , and represents the number of classes. All of these variables are randomly generated from a normal distribution.
For each comparison, a random is selected from such that and , where represents a hot value and represents the number of classes. Next, a random is chosen from such that , where is a random positive value. The pair is then evaluated to determine whether holds.
Figure 4 presents the results of the Monte Carlo simulation for varying numbers of comparisons. The figure demonstrates the convergence of the Monte Carlo method and confirms that the condition is satisfied in almost 99.996% of comparisons. The figure displays the cumulative percentage of comparisons in which the condition holds. When , the Brier Score typically assigns a higher score to the observation ; however, this is not always the case. Therefore, the Brier Score cannot be regarded as superior.
8.3
To validate the condition of being superior, it is possible to conduct another Monte Carlo simulation, which involves comparing for a large number of randomly generated values of and . The simulation includes variables such as , , , , and , where represents the hot value of , represents the difference between the hot value of and , and represents the number of classes. All of these variables are randomly generated from a normal distribution.
In each comparison, a random value of is chosen from such that and belongs to , where denotes a hot value and represents the number of classes. Then, a random value of is selected from such that , where is a random positive value. The pair is evaluated to determine if is true.
Figure 5 depicts the outcomes of the Monte Carlo simulation for different numbers of comparisons, indicating the convergence of the Monte Carlo method. The figure illustrates the cumulative percentage of comparisons where the condition is satisfied. The results confirm that the condition holds in roughly 63% of comparisons. When , it is common for the Brier Score to assign a higher score to the observation ; however, this is not always the case. Hence, the Brier Score cannot be considered superior.
9 Proof of Theorem 3
Let $q$ be an arbitrary member of the set $\mathcal{G}$ and let $y$ be the one-hot ground-truth label with $y_j = 1$, where $j$ is the index of the true class. To maximize $BS(q, y)$, the following decomposition is helpful:
$$BS(q, y) = \sum_{i=1}^{C} (q_i - y_i)^2 \qquad (28)$$
$$= (1 - q_j)^2 + \sum_{i \neq j} q_i^2. \qquad (29)$$
Since $(1 - q_j)^2$ is the primary component of the Brier Score, the maximization of $(1 - q_j)^2$ (or the minimization of $q_j$) is essential to obtain the highest possible value of $BS(q, y)$. The minimum value of $q_j$ is $\frac{1}{C}$, as any value of $q_j$ below this threshold would make $q$ unsuitable for inclusion in the set $\mathcal{G}$. As a result, the maximum of $(1 - q_j)^2$ is equivalent to $\left(1 - \frac{1}{C}\right)^2$. On the other hand, if $q_j = \frac{1}{C}$, the remaining elements of the vector cannot exceed $\frac{1}{C}$; since they must sum to $\frac{C-1}{C}$, all elements of $q$ equal $\frac{1}{C}$. Therefore:
$$BS(q, y) \leq \left(1 - \frac{1}{C}\right)^2 + (C-1)\left(\frac{1}{C}\right)^2 \qquad (30)$$
$$= \frac{(C-1)^2}{C^2} + \frac{C-1}{C^2} \qquad (31)$$
$$= \frac{(C-1)^2 + (C-1)}{C^2} \qquad (32)$$
$$= \frac{(C-1)\,C}{C^2} \qquad (33)$$
$$= \frac{C-1}{C}. \qquad (34)$$
10 Proof of Theorem 4
Let $q$ be an arbitrary member of the set $\mathcal{G}$, and let $y$ be the one-hot ground-truth label vector such that $y_j = 1$, where $j$ is the index of the true class. According to Eq. (6) and since $-\log(q_j)$ is a decreasing and positive function of $q_j$ in the range $(0, 1)$, minimizing $q_j$ is necessary to maximize $LL(q, y)$. The minimum value of $q_j$ is $\frac{1}{C}$, as any value of $q_j$ below this threshold would render $q$ unsuitable for inclusion in the set $\mathcal{G}$. To simplify the analysis, we assume $q_j = \frac{1}{C}$, which yields:
$$LL(q, y) \leq -\log\left(\frac{1}{C}\right) = \log(C). \qquad (35)$$
11 Proof of Theorem 5
As $BS$ and $LL$ are strictly proper, so:
$$BS(Q, Q) < BS(P, Q), \qquad (36)$$
$$LL(Q, Q) < LL(P, Q), \qquad (37)$$
for $P \neq Q$, where $S(P, Q)$ denotes the expected score as in Definition 2. Furthermore, writing $j_P = \arg\max_i P_i$ and $j_Q = \arg\max_i Q_i$, it is clear that the expected penalty terms are:
$$\mathbb{E}_{y \sim Q}\!\left[\frac{C-1}{C}\,\mathbb{1}\!\left[\arg\max_i P_i \neq \arg\max_i y_i\right]\right] = \frac{C-1}{C}\,(1 - Q_{j_P}), \qquad (38)$$
$$\mathbb{E}_{y \sim Q}\!\left[\log(C)\,\mathbb{1}\!\left[\arg\max_i P_i \neq \arg\max_i y_i\right]\right] = \log(C)\,(1 - Q_{j_P}), \qquad (39)$$
and also, since $Q_{j_Q} \geq Q_{j_P}$:
$$\frac{C-1}{C}\,(1 - Q_{j_Q}) \leq \frac{C-1}{C}\,(1 - Q_{j_P}), \qquad (40)$$
$$\log(C)\,(1 - Q_{j_Q}) \leq \log(C)\,(1 - Q_{j_P}). \qquad (41)$$
Therefore:
$$PBS(Q, Q) < PBS(P, Q), \qquad (42)$$
$$PLL(Q, Q) < PLL(P, Q). \qquad (43)$$
As a result, $PBS$ and $PLL$ are strictly proper.
References
- [1] Mohammad Hossin and Md Nasir Sulaiman. A review on evaluation metrics for data classification evaluations. International journal of data mining & knowledge management process, 5(2):1, 2015.
- [2] Senzhang Wang, Jiannong Cao, and Philip Yu. Deep learning for spatio-temporal data mining: A survey. IEEE Transactions on Knowledge and Data Engineering, 2020.
- [3] Nicholas Pudjihartono, Tayaza Fadason, Andreas W Kempa-Liehr, and Justin M O’Sullivan. A review of feature selection methods for machine learning-based disease risk prediction. Frontiers in Bioinformatics, 2:927312, 2022.
- [4] Wentao Zhang, Jiawei Jiang, Yingxia Shao, and Bin Cui. Snapshot boosting: a fast ensemble framework for deep neural networks. Science China Information Sciences, 63:1–12, 2020.
- [5] Chandra Sekhara Rao Annavarapu et al. Deep learning-based improved snapshot ensemble technique for covid-19 chest x-ray classification. Applied Intelligence, 51(5):3104–3120, 2021.
- [6] Muhammad Ibraheem Siddiqui, Khurram Khan, Adnan Fazil, and Muhammad Zakwan. Snapshot ensemble-based residual network (snapensemresnet) for remote sensing image scene classification. GeoInformatica, 27(2):341–372, 2023.
- [7] Andreas Griewank and Andrea Walther. Algorithm 799: revolve: an implementation of checkpointing for the reverse or adjoint mode of computational differentiation. ACM Transactions on Mathematical Software (TOMS), 26(1):19–45, 2000.
- [8] Gürol Canbek, Tugba Taskaya Temizel, and Seref Sagiroglu. Ptopi: A comprehensive review, analysis, and knowledge representation of binary classification performance measures/metrics. SN Computer Science, 4(1):13, 2022.
- [9] Lutz Prechelt. Early stopping-but when? In Neural Networks: Tricks of the trade, pages 55–69. Springer, 2002.
- [10] Alaa Tharwat. Classification assessment methods. Applied computing and informatics, 17(1):168–192, 2020.
- [11] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning, pages 1321–1330. PMLR, 2017.
- [12] Juozas Vaicenavicius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll, and Thomas Schön. Evaluating model calibration in classification. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 3459–3467. PMLR, 2019.
- [13] Shaoxun Xu, Yufei Chen, Chao Ma, and Xiaodong Yue. Deep evidential fusion network for medical image classification. International Journal of Approximate Reasoning, 150:188–198, 2022.
- [14] Tilmann Gneiting, Adrian E Raftery, Anton H Westveld, and Tom Goldman. Calibrated probabilistic forecasting using ensemble model output statistics and minimum crps estimation. Monthly Weather Review, 133(5):1098–1118, 2005.
- [15] Teddy Seidenfeld, Mark J Schervish, and Joseph B Kadane. Forecasting with imprecise probabilities. International Journal of Approximate Reasoning, 53(8):1248–1261, 2012.
- [16] Robert L Winkler, Yael Grushka-Cockayne, Kenneth C Lichtendahl Jr, and Victor Richmond R Jose. Probability forecasts and their combination: A research perspective. Decision Analysis, 16(4):239–260, 2019.
- [17] Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American statistical Association, 102(477):359–378, 2007.
- [18] Glenn W Brier. Verification of forecasts expressed in terms of probability. Monthly weather review, 78(1):1–3, 1950.
- [19] Irving John Good. Rational decisions. Journal of the Royal Statistical Society: Series B (Methodological), 14(1):107–114, 1952.
- [20] Arthur Carvalho. An overview of applications of proper scoring rules. Decision Analysis, 13(4):223–242, 2016.
- [21] David Bolin and Jonas Wallin. Local scale invariance and robustness of proper scoring rules. Statistical Science, 38(1):140–159, 2023.
- [22] Jürgen Landes. Probabilism, entropies and strictly proper scoring rules. International Journal of Approximate Reasoning, 63:1–21, 2015.
- [23] Wenbin Qian, Jintao Huang, Yinglong Wang, and Yonghong Xie. Label distribution feature selection for multi-label classification with rough set. International journal of approximate reasoning, 128:32–55, 2021.
- [24] Alexander Philip Dawid and Monica Musio. Theory and applications of proper scoring rules. Metron, 72(2):169–183, 2014.
- [25] Ebrahim Mortaz. Imbalance accuracy metric for model selection in multi-class imbalance classification problems. Knowledge-Based Systems, 210:106490, 2020.
- [26] Meelis Kull and Peter Flach. Novel decompositions of proper scoring rules for classification: Score adjustment as precursor to calibration. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2015, Porto, Portugal, September 7-11, 2015, Proceedings, Part I 15, pages 68–85. Springer, 2015.
- [27] Daniel S Wilks. Forecast verification. In International geophysics, volume 100, pages 301–394. Elsevier, 2011.
- [28] Hans Hersbach. Decomposition of the continuous ranked probability score for ensemble prediction systems. Weather and Forecasting, 15(5):559–570, 2000.
- [29] Hristos Tyralis and Georgia Papacharalampous. A review of predictive uncertainty estimation with machine learning. Artificial Intelligence Review, 57(4):94, 2024.
- [30] Jakub Nowotarski and Rafał Weron. Recent advances in electricity price forecasting: A review of probabilistic forecasting. Renewable and Sustainable Energy Reviews, 81:1548–1568, 2018.
- [31] Carlo Gaetan, Federica Giummolè, and Valentina Mameli. Calibrated emos: applications to temperature and wind speed forecasting. Environmental and Ecological Statistics, pages 1–25, 2024.
- [32] Johannes Resin. From classification accuracy to proper scoring rules: Elicitability of probabilistic top list predictions. Journal of Machine Learning Research, 24(173):1–21, 2023.
- [33] Matthew J Heaton, Abhirup Datta, Andrew O Finley, Reinhard Furrer, Joseph Guinness, Rajarshi Guhaniyogi, Florian Gerber, Robert B Gramacy, Dorit Hammerling, Matthias Katzfuss, et al. A case study competition among methods for analyzing large spatial data. Journal of Agricultural, Biological and Environmental Statistics, 24:398–425, 2019.
- [34] Pierluigi Casale, Oriol Pujol, and Petia Radeva. Personalization and user verification in wearable systems using biometric walking patterns. Personal and Ubiquitous Computing, 16(5):563–580, 2012.
- [35] Gary M Weiss, Kenichi Yoneda, and Thaier Hayajneh. Smartphone and smartwatch-based biometrics using activities of daily living. IEEE Access, 7:133190–133202, 2019.
- [36] Roberto L Shinmoto Torres, Damith C Ranasinghe, Qinfeng Shi, and Alanson P Sample. Sensor enabled wearable rfid technology for mitigating the risk of falls near beds. In 2013 IEEE international conference on RFID (RFID), pages 191–198. IEEE, 2013.
- [37] Boštjan Kaluža, Violeta Mirchevska, Erik Dovgan, Mitja Luštrek, and Matjaž Gams. An agent-based approach to care in independent living. In International joint conference on ambient intelligence, pages 177–186. Springer, 2010.
- [38] Gustavo Scalabrini Sampaio, Arnaldo Rabello de Aguiar Vallim Filho, Leilton Santos da Silva, and Leandro Augusto da Silva. Prediction of motor failure time using an artificial neural network. Sensors, 19(19):4342, 2019.
- [39] Abdulwahed Salam and Abdelaaziz El Hibaoui. Comparison of machine learning algorithms for the power consumption prediction: case study of Tetouan City. In 2018 6th International Renewable and Sustainable Energy Conference (IRSEC), pages 1–5. IEEE, 2018.
- [40] Shuyi Zhang, Bin Guo, Anlan Dong, Jing He, Ziping Xu, and Song Xi Chen. Cautionary tales on air-quality improvement in beijing. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 473(2205):20170457, 2017.
- [41] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.
- [42] Hamid Reza Eftekhari and Mehdi Ghatee. Hybrid of discrete wavelet transform and adaptive neuro fuzzy inference system for overall driving behavior recognition. Transportation research part F: traffic psychology and behaviour, 58:782–796, 2018.
- [43] Jun Shao. Linear model selection by cross-validation. Journal of the American statistical Association, 88(422):486–494, 1993.
- [44] Fares Grina, Zied Elouedi, and Eric Lefevre. Re-sampling of multi-class imbalanced data using belief function theory and ensemble learning. International Journal of Approximate Reasoning, 156:1–15, 2023.
- [45] Serafín Moral-García, Carlos J Mantas, Javier G Castellano, and Joaquín Abellán. Using credal C4.5 for calibrated label ranking in multi-label classification. International Journal of Approximate Reasoning, 147:60–77, 2022.