
Meta Pattern Concern Score: A Novel Evaluation Measure with Human Values for Multi-classifiers

Yanyun Wang Department of Computer Science
The University of Hong Kong
Hong Kong, China
[email protected]
   Dehui Du Software Engineering Institute
East China Normal University
Shanghai, China
[email protected]
   Yuanhao Liu Software Engineering Institute
East China Normal University
Shanghai, China
[email protected]
Abstract

While advanced classifiers are increasingly used in real-world safety-critical applications, how to properly evaluate such black-box models given specific human values remains a concern in the community. Such human values include punishing error cases of different severity to different degrees and making compromises in general performance to reduce specific dangerous cases. In this paper, we propose a novel evaluation measure named Meta Pattern Concern Score, based on an abstract representation of the probabilistic prediction and an adjustable threshold for the concession in prediction confidence, to introduce human values into multi-classifiers. Technically, we learn from the advantages and disadvantages of two common kinds of metrics, namely confusion matrix-based evaluation measures and loss values, so that our measure is as effective as they are even in general tasks, while the cross entropy loss becomes a special case of our measure in the limit. Besides, our measure can also be used to refine model training by dynamically adjusting the learning rate. Experiments on four kinds of models and six datasets confirm the effectiveness and efficiency of our measure. A case study shows it can not only find an ideal model that reduces dangerous cases by 0.53% while sacrificing only 0.04% of training accuracy, but also refine the learning rate to train a new model that outperforms the original one on average, with a 1.62% lower value of the measure itself and 0.36% fewer dangerous cases.

Index Terms:
Machine Learning, Classification, Evaluation Measure, Human Value.

I Introduction

A classifier is a learned model that approximates a specific target function to map given inputs to predicted outputs [1]. Through processing labeled data, supervised algorithms enable advanced classifiers to automatically learn the key features needed to construct accurate predictions [2]. While such simplicity lays a solid foundation for the widespread adoption of classifiers today, it also makes model learning a black-box process that is hard to evaluate flexibly and properly.

Currently, two types of metrics are commonly used in practice to assess the quality of learning-based classifiers. The first is the evaluation measure based on the confusion matrix [3], which gives people a certain degree of freedom to define the specific learning objective from different statistics of the output results, such as Precision and Recall. However, since such statistical values are always discrete, these evaluation measures are insensitive to slight improvements in the model [4]. Also, since there is a gap between the results observed on existing data and the underlying posterior probability distribution [5], they are considered theoretically incomplete for assessing the generalization ability of the model. The second type of metric is the loss value serving as the optimization objective in model training [6]. While researchers have shown that losses do not suffer from the issues above [4, 7], they are predefined in a fixed form over the probabilistic prediction scores of all classes and, as a result, lack the flexibility to be customized for different tasks.

With more and more classifiers used in safety-critical areas such as self-driving [8] and healthcare [9], uniformly assessing different models built for different tasks is gradually found to be limited and unsatisfying. This is because, beyond general performance, there are usually specific human values to be considered and weighed in such areas. For instance, misclassifying a red light as yellow and misclassifying it as green have completely different degrees of severity in real-world traffic [10]. In this paper, we mainly focus on introducing two kinds of human values into multi-classifiers. Firstly, many specific cases in human society are not simply black-or-white, so instead of equally punishing all kinds of incorrect predictions regardless of their destructiveness, we allow assigning a specific weight to every single error case. In this way, we can specify which kinds of errors should be strictly punished and which are relatively tolerable. Secondly, in order to satisfy certain safety-critical requirements in practice, we would rather make compromises in overall performance to reduce specific dangerous cases. For this purpose, we introduce a threshold to indicate the concession we can make in prediction confidence, leaving enough space to search for model parameters that are better under the given requirements.

Based on these ideas, we propose a novel evaluation measure named Meta Pattern Concern Score (MPCS) for multi-classification. We design this measure with the following targets: 1) it can be flexibly customized for different tasks to find better models given the human values mentioned; 2) it remains qualified to serve as a general evaluation measure, meaning the models it finds should also perform well under the common metrics; 3) it should reduce the impact of the two inherent drawbacks of existing evaluation measures mentioned above. Specifically, we take the basic idea of the confusion matrix to design an abstract representation of the probabilistic prediction result, so that a different punishment can be specified for every single kind of incorrect prediction. At the same time, by introducing an adjustable fine-grained interval into the representation, we make MPCS approximate the cross entropy loss in the limit. In other words, cross entropy can be viewed as a special case of MPCS. In this way, we not only make it inherit a certain mathematical completeness and work as effectively as general metrics, but also reduce the insensitivity caused by discrete values. Finally, in addition to evaluation, although the abstract representation makes MPCS non-differentiable, which means it cannot directly serve as a loss, we can still use it to refine training by dynamically adjusting the learning rate according to its value.

Experiments on four different kinds of models and six real-world and synthetic datasets confirm that MPCS is as effective and computationally efficient as the common metrics in the evaluation of multi-classifiers. A case study on MNIST shows that, given customized requirements, MPCS can not only pick out an ideal model that is better than the one selected by the common metrics without violating them too much, but also improve the training process to train a new, better model. Specifically, it can pick a model with 0.53% fewer dangerous misclassifications at the cost of only 0.04% of training accuracy, or directly train a new model that outperforms the original one on average, with a 1.62% lower MPCS value and 0.36% fewer dangerous cases, which can be especially useful in real-world safety-critical applications.

II Background

II-A Common Metrics for Classification

A number of metrics have been proposed to train and pick out the desired classifier. On the one hand, evaluation measures based on simple observations and statistics of the output results, such as the classification accuracy rate, are widely used. Most of these measures are derived from the confusion matrix, a structure that summarizes the results obtained from a classifier [3]. Given a $c$-class classification task, let $m_{ij}$ be the number of samples actually belonging to the $i$-th class that are classified into the $j$-th class; the confusion matrix is then $\mathcal{M}=[m_{ij}]$. Given a target positive label $r$, there are four different situations, namely "True Positive" (TP $=m_{rr}$), "False Negative" (FN $=\sum_{j=1}^{c}m_{rj}-m_{rr}$), "False Positive" (FP $=\sum_{i=1}^{c}m_{ir}-m_{rr}$) and "True Negative" (TN $=\sum_{i=1}^{c}\sum_{j=1}^{c}m_{ij}-\sum_{i=1}^{c}m_{ir}-\sum_{j=1}^{c}m_{rj}+m_{rr}$) [11], with which a series of evaluation measures are defined as follows:

$$\textit{Accuracy}=\frac{\textit{TP}+\textit{TN}}{\textit{TP}+\textit{FN}+\textit{FP}+\textit{TN}};\quad \textit{Precision}=\frac{\textit{TP}}{\textit{TP}+\textit{FP}};$$
$$\textit{Recall}=\frac{\textit{TP}}{\textit{TP}+\textit{FN}};\quad F_{1}\textit{-score}=2\times\frac{\textit{Precision}\times\textit{Recall}}{\textit{Precision}+\textit{Recall}};$$
$$\textit{MCC}=\frac{\textit{TP}\times\textit{TN}-\textit{FP}\times\textit{FN}}{\sqrt{(\textit{TP}+\textit{FP})(\textit{TP}+\textit{FN})(\textit{TN}+\textit{FP})(\textit{TN}+\textit{FN})}}.$$
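To make these definitions concrete, the following is a minimal NumPy sketch (our own illustration, not code from the paper) that derives TP, FN, FP and TN for a target positive label $r$ from a confusion matrix and computes the measures above, assuming all denominators are nonzero:

```python
import numpy as np

def binary_stats(conf_matrix: np.ndarray, r: int):
    """TP, FN, FP, TN for target positive label r from a c x c confusion matrix."""
    tp = conf_matrix[r, r]
    fn = conf_matrix[r, :].sum() - tp   # actual r, predicted otherwise
    fp = conf_matrix[:, r].sum() - tp   # predicted r, actually otherwise
    tn = conf_matrix.sum() - tp - fn - fp
    return tp, fn, fp, tn

def confusion_metrics(conf_matrix: np.ndarray, r: int):
    """Accuracy, Precision, Recall, F1 and MCC as defined above."""
    tp, fn, fp, tn = (float(v) for v in binary_stats(conf_matrix, r))
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return accuracy, precision, recall, f1, mcc
```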

On the other hand, the objective of model training is to find the optimal parameters $\omega^{*}$ that minimize the global loss function $\mathcal{L}(\omega)$ over the whole training dataset $\mathcal{X}$ [6]:

$$\omega^{*}=\arg\min\mathcal{L}(\omega)=\arg\min\frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}l(x,\omega) \quad (1)$$

where $l(x,\omega)$ denotes the loss value computed from sample $x$ with parameters $\omega$. Naturally, how well this objective is met can itself serve as a metric of model quality. Given a model $\mathcal{N}$ with a fully connected final layer and softmax activation, supervised labels $y$ in one-hot format, and the probability outputs $\hat{y}$ predicted by $\mathcal{N}$, the mean squared error (MS) and cross entropy (CE) losses commonly used in classification [12] can be written as:

$$\mathcal{L}_{\textit{MS}}=\frac{1}{2|\mathcal{X}|}\sum_{i=1}^{|\mathcal{X}|}\sum_{j=1}^{c}(\hat{y}_{i}^{(j)}-y_{i}^{(j)})^{2} \quad (2)$$
$$\mathcal{L}_{\textit{CE}}=-\frac{1}{|\mathcal{X}|}\sum_{i=1}^{|\mathcal{X}|}\sum_{j=1}^{c}y_{i}^{(j)}\log(\hat{y}_{i}^{(j)}) \quad (3)$$
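For reference, a minimal NumPy sketch of (2) and (3) over a batch of one-hot labels and softmax outputs; the clipping constant is our own addition for numerical stability, not part of (3):

```python
import numpy as np

def mse_loss(y_hat: np.ndarray, y: np.ndarray) -> float:
    """Eq. (2): y_hat and y have shape (|X|, c); y is one-hot, y_hat is the softmax output."""
    return float(np.sum((y_hat - y) ** 2) / (2 * len(y)))

def ce_loss(y_hat: np.ndarray, y: np.ndarray, eps: float = 1e-7) -> float:
    """Eq. (3); eps clips the predictions to avoid log(0)."""
    return float(-np.sum(y * np.log(np.clip(y_hat, eps, 1.0))) / len(y))
```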

II-B Drawbacks of Existing Metrics

The two kinds of metrics mentioned above each have their own advantages, while they also struggle with different dilemmas. Compared with loss values, confusion matrix-based evaluation measures have the advantage of flexible customization. Through various combinations of TP, FN, FP and TN, evaluation measures allow one to define the objective in an intuitive and direct form according to the specific application. For instance, as an over-abundance of negative examples is common in information retrieval, recommendation systems and social network analysis [13], a reasonable metric in such situations is Precision, which measures the classifier's ability not to label negative samples as positive, so that an always-negative classifier is not over-valued.

Nevertheless, evaluation measures refer directly to the classification results, which brings some potential problems. For one thing, each element in the confusion matrix is essentially a count of classification results with specific actual and predicted labels, which means the values of all evaluation measures based on the confusion matrix are discrete. As a consequence, they are insensitive to slight improvements in model parameters and only change discontinuously and abruptly when the prediction results of some samples change [4]. This can be especially disappointing in the later stages of model training. For another, evaluation measures focus heavily on the existing data, while the ideal model is expected to reliably classify previously unseen objects in the real world. So what really matters, according to Bayesian classification theory, is how well the model approximates the underlying posterior probability distribution [14]. Researchers have shown that improving observed accuracy rates and improving posterior probability estimation are not entirely synonymous [5]. In fact, efforts to improve posterior probability estimation may yield lower accuracy rates on the known dataset, but lead to better performance on future real-world tasks.

The problems mentioned above are in fact important reasons why losses were proposed [4]. Carefully designed loss functions ensure that the gradient of most parameters is not $\vec{0}$ during training, so as to guide the continuous optimization of the model. As for the latter problem, the theoretical relationship between Bayesian posterior probability estimation and the MS cost function has also been explored and widely adopted [7]. However, squared error assumes Gaussian-distributed target data, which is violated by the discrete targets used to train classifiers. Besides, it can be seen from (2) that MS is likely to be dominated in practice by a few outlier data points with particularly large errors.

As a log-linear error function based on the maximum likelihood approach [12] that is much less affected by these problems, CE has gradually become more popular in classification tasks. However, entropy-based measures were originally designed for binary targets, so when CE is applied to multi-classification tasks, it has to treat all classes other than the correct one as a single collective "incorrect class", without distinguishing between them. What is worse, CE only takes the predicted probability of the correct label into the error calculation, which means that the probabilities of the incorrect labels, which are more numerous in every single output of a multi-classification task and may also affect prediction effectiveness, are totally ignored.

All in all, there is indeed an uncharted territory: is the gap between the two kinds of metrics irreconcilable? Empirically, if we consider the multi-classification task from a perspective similar to the confusion matrix, but construct a specific formula whose mathematical principle is closer to a loss, it is likely that a metric can be proposed with not only the customizability of the former but also the theoretical completeness inherited from the latter. We take this idea into consideration when designing the formula of our own evaluation measure.

III Meta Pattern Concern Score

In the previous section, we discussed, from the perspective of the underlying techniques, the ideas and issues to be considered in our design. In this section, we first rely on two real-world scenarios to illustrate the motivation and intuitive scheme for taking the two kinds of human values into consideration, and then propose our novel evaluation measure, Meta Pattern Concern Score (MPCS), accordingly.

III-A Considering Human Values in MPCS

The two scenarios come from a typical safety-critical task, Traffic Light Recognition, which is important for the safety of traffic participants in autonomous driving [15]. Note that this task usually involves a variety of technologies in practice, but for convenience we simplify it here as a 3-class classification problem.

The first scenario happens when a red light shows, where there are two different kinds of misclassification depending on the specific prediction result. If the light is classified as green, the autonomous vehicle will be incorrectly allowed to proceed, which may endanger both its passengers and others and is therefore totally unacceptable. If, instead, the light is predicted to be yellow, which means "stop, unless it is unsafe to do so" [10], in most cases the car will still stop and not break the traffic rules. So it is clearly inappropriate to treat both errors with the same severity, which is, however, what the traditional metrics do.

Since this kind of scenario, in which negative classes cause different levels of damage, is common in practice, it makes sense to design for it. Before providing formal definitions in Section III-B, it is helpful to first show our idea intuitively. As mentioned previously, CE is only affected by the predicted value of the correct label. As shown in Fig. 1, this is marked as follows: the significance of the correct class "Red" is $+1$, while that of the others is assigned 0 to represent no influence. On the other hand, MS and all the evaluation measures derived from the confusion matrix treat the prediction as either True or False, corresponding respectively to $+1$ and $-1$ in the "Others" line.

In our idea, we mark the incorrect prediction "Yellow" with $-\alpha$ instead of $-1$ to represent that it is relatively less destructive. Furthermore, we assign the value of "Red" to be $-((-\alpha)+(-1))=1+\alpha$, which is basically a compromise between CE and the other metrics. Specifically, we neither ignore negative classes completely nor make each of them as important as the positive class; instead, the positive class and the negative classes as a whole are of equal significance. These marked values correspond to a concept to be formally defined later in our measure, called the concern degree.

Figure 1: The figure shows the simplified traffic light classification problem. The values in the figure illustrate how the three different predictions (i.e. predicting “Red” respectively as “Red”, “Yellow” and “Green”) contribute to the calculation of different metrics. This corresponds to a concept to be formally defined later named concern degree. The “Others” includes the MS and all the evaluation measures from the confusion matrix.
TABLE I: A simple case showing that the classifier with a lower loss value is not necessarily the better one in specific safety-critical tasks. Specifically, classifier A correctly predicts 99 samples with a confidence of 0.99 and wrongly predicts one sample with a confidence of 0.49, while classifier B correctly predicts all 100 samples with a confidence of 0.98.
Training Size | Classifier | Prediction Result | Number of Samples | Prediction Confidence | Cross Entropy Loss
100 | A | Correct | 99 | 0.99 | 0.741920
    | A | Wrong | 1 | 0.49 |
    | B | Correct | 100 | 0.98 | 0.877392
    | B | Wrong | 0 | - |
Figure 2: The figure illustrates our idea to set a threshold for prediction confidence (e.g. 1 - 0.02 = 0.98) to indicate the concession we can tolerate, and divide the confidence field into different intervals to extend the concession interval by interval, to accordingly calculate the specific punishment values.

The second scenario aims to illustrate the significance of introducing a threshold for the concession in prediction confidence. As shown in Table I, two classifiers are trained for a task with a training size of 100 using the CE loss. Classifier A makes a mistake on one of the input samples, predicting the probability of the correct label to be only 0.49, while classifier B makes correct predictions on the whole dataset. However, the confidence of all the correct predictions of classifier A is 0.99, while that of B is 0.98. In this case, the CE value of classifier A is indeed lower than that of B, but does that really mean A is the better choice? At least in our simplified Traffic Light Recognition case, the answer is no, because the failure of prediction is more intolerable than a minor weakness in confidence in such a safety-critical task. Note that although we take CE as an instance here, MS and other loss functions suffer from the same problem to different extents.

To deal with this problem, we can set a threshold value for the confidence and keep the model from being punished in specific samples once the confidence value of the correct label exceeds it. In our idea shown in Fig. 2, we further divide the entire confidence field into several intervals according to the granularity determined by the threshold, and calculate the punishment of a specific confidence value based on the distance between the interval it falls into and the interval representing the highest confidence.

III-B Detailed Design of MPCS

In this section, we propose the specific definition of MPCS and provide a detailed process to calculate its value. As shown in Fig. 3, the calculation process can be divided into three parts. The first part runs the multi-classifier and normalizes its results into a probability distribution using softmax. The second part is where the two constructors extract and build meta patterns from the processed results. Finally, in the third part, concern degrees and interval punishments are calculated given specific human values, and the MPCS over the entire dataset is obtained accordingly.

Figure 3: The figure shows the calculation process of MPCS. The meta pattern includes the prediction pattern and the confidence pattern constructed from probabilistic predictions of the target multi-classifier. Given specific human values, the concern degrees and interval punishments are calculated upon meta pattern, and they are then used to calculate the final MPCS value.

To begin with, to realize our idea it is not sufficient to take only the label with the highest confidence into account. However, considering all the labels may lead to unnecessary computational cost. So we introduce a hyper-parameter $k$ to indicate that the top-$k$ classes, in decreasing order of confidence, are considered according to the practical task. With $l_{i}$ denoting a specific label in a classification task, the prediction pattern is defined as a vector $P_{\textit{pred}}$ of length $k$:

$$P_{\textit{pred}}=[l_{1},l_{2},\dots,l_{k}] \quad (4)$$

where the value of $k$ is between one and the number of classes of the task.
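As a sketch of how $P_{\textit{pred}}$ can be built in practice (mirroring line 3 of Algorithm 1 below; `probs` is assumed to be the softmax output of the classifier for one sample):

```python
import numpy as np

def prediction_pattern(probs: np.ndarray, k: int) -> np.ndarray:
    """Top-k labels in decreasing order of confidence (Eq. (4))."""
    return np.argsort(-probs)[:k]
```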

To realize the idea described in Fig. 2, we divide the confidence field into $t$ intervals and determine the confidence level of a specific label under an input sample by the interval in which its confidence value falls and by whether the label is the correct one or not. The confidence pattern is defined as a vector $P_{\textit{conf}}$ of length $k$, which must be the same as that of $P_{\textit{pred}}$ in one task:

$$P_{\textit{conf}}=[C(l_{1}),C(l_{2}),\dots,C(l_{k})] \quad (5)$$

in which $C(l_{i})$ denotes the confidence level of the label $l_{i}$. Given a hyper-parameter $t\geq 1$ and the correct label $l$, and letting $c_{i}$ be the confidence of label $l_{i}$, $C(l_{i})$ can be calculated as follows:

$$C(l_{i})=\begin{cases}\lfloor t\cdot c_{i}\rfloor, & l_{i}=l\\ t-\lfloor t\cdot c_{i}\rfloor-1, & l_{i}\neq l\end{cases} \quad (6)$$
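A direct transcription of (6) as a sketch, assuming `c_i` is the softmax confidence of label $l_i$ and `is_correct` indicates whether $l_i$ is the correct label:

```python
import numpy as np

def confidence_level(c_i: float, is_correct: bool, t: int) -> int:
    """Interval index of confidence c_i with granularity t (Eq. (6))."""
    if is_correct:
        return int(np.floor(t * c_i))
    return t - int(np.floor(t * c_i)) - 1
```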

Notice that there is a correspondence between each pair of elements in the same position of $P_{\textit{pred}}$ and $P_{\textit{conf}}$ for any specific sample. As a pair of the two patterns records the key information that uniquely represents any specific output of the classifier, we always refer to them together. The meta pattern is an abstract representation of the probabilistic prediction result of a multi-classifier, defined as the combination of the corresponding prediction pattern $P_{\textit{pred}}$ and confidence pattern $P_{\textit{conf}}$.

Then it is time to recall the idea shown in Fig. 1. We introduce a release list as a hyper-parameter, with every element indicating a less destructive situation in practice, as well as a release factor indicating to what extent the concern for such situations can be released. For every data sample and every label selected into its $P_{\textit{pred}}$, the corresponding element of its concern degree takes one of three kinds of values, depending on whether the label is the correct one and, if not, whether it appears in an element of the release list together with the correct label. Given a prediction pattern $P_{\textit{pred}}$, its concern degree can be defined as a vector $\mathcal{I}$ with the same length $k$:

$$\mathcal{I}=[D(l_{1}),D(l_{2}),\dots,D(l_{k})] \quad (7)$$

Given the correct label $l$, a release list $\mathcal{R}$ with $r_{\alpha}$ denoting any of its elements, and a release factor $f_{\mathcal{R}}$, $D(l_{i})$ can be calculated as:

$$D(l_{i})=\begin{cases}\sum_{l_{j}\in P_{\textit{pred}},\,j\neq i}D(l_{j}), & l_{i}=l\\ f_{\mathcal{R}}, & l_{i}\neq l,\ [l,l_{i}]\in r_{\alpha}\\ 1, & l_{i}\neq l,\ [l,l_{i}]\notin r_{\alpha}\end{cases} \quad (8)$$
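The following sketch of (8) assumes, following Algorithm 1, that each element of the release list stores the correct label first, followed by the incorrect labels whose concern is released for it:

```python
def concern_degree(pred_pattern, correct_label, release_list, f_r):
    """One concern-degree weight per label in the prediction pattern (Eq. (8))."""
    k = len(pred_pattern)
    degrees = [1.0] * k
    # collect the labels released together with the correct label
    released = set()
    for r in release_list:                 # r = [correct_label, released_1, ...]
        if r[0] == correct_label:
            released.update(r[1:])
    for j, label in enumerate(pred_pattern):
        if label != correct_label and label in released:
            degrees[j] = f_r
    # the correct label (if present) gets the sum of all the other degrees
    for j, label in enumerate(pred_pattern):
        if label == correct_label:
            degrees[j] = sum(degrees) - degrees[j]
    return degrees
```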
Algorithm 1 Calculation of Meta Pattern Concern Score

Input: Classifier $\mathcal{N}$, Dataset $(\mathcal{X},\mathcal{Y})$, Release List $\mathcal{R}$, Release Factor $f_{\mathcal{R}}$, Hyper-parameters $k$ and $t$
Output: Meta Pattern Concern Score $\mathcal{S}$

1:  $\mathcal{S}' \leftarrow 0$
2:  while $x, y \in (\mathcal{X}, \mathcal{Y})$ do
3:    $P_{\textit{pred}} \leftarrow \mathrm{argsort}(-\mathcal{N}(x))[:k]$
4:    $P_{\textit{conf}} \leftarrow t - \mathrm{floor}((-\mathrm{sort}(-\mathcal{N}(x))[:k]) \times t) - 1$
5:    $P_{\textit{conf}}[P_{\textit{conf}} = -1] \leftarrow 0$
6:    $\mathcal{I} \leftarrow \mathrm{ones}(k)$
7:    if exist $i \leftarrow P_{\textit{pred}}.\mathrm{index}(y)$ then
8:      $P_{\textit{conf}}[i] \leftarrow t - P_{\textit{conf}}[i] - 1$
9:      while $r \in \mathcal{R}$ and $P_{\textit{pred}}[i] = r[0]$ do
10:       while $j$ from 0 to $k-1$ do
11:         if $j \neq i$ and $P_{\textit{pred}}[j] \in r[1:]$ then
12:           $\mathcal{I}[j] \leftarrow f_{\mathcal{R}}$
13:         end if
14:       end while
15:     end while
16:     $\mathcal{I}[i] \leftarrow \mathrm{sum}(\mathcal{I}) - 1$
17:   end if
18:   $P_{\textit{conf}}[P_{\textit{conf}} = 0] \leftarrow$ 1e-7
19:   $P' \leftarrow -\log(P_{\textit{conf}} / (t-1))$
20:   $\mathcal{I}' \leftarrow \mathcal{I} / \mathrm{sum}(\mathcal{I})$
21:   $\mathcal{S}' \leftarrow \mathcal{S}' + \mathrm{sum}(\mathrm{multiply}(P', \mathcal{I}'))$
22: end while
23: $\mathcal{S} \leftarrow \mathcal{S}' / |\mathcal{X}|$
24: return $\mathcal{S}$
TABLE II: The Spearman similarity between MPCS and the five benchmark metrics Accuracy (ACC), $F_{1}$-score (F1), MCC, MS and CE under the four models and six datasets. The high similarities confirm that MPCS is as effective as the benchmark metrics in general tasks.
ID | Model | Dataset | ACC | F1 | MCC | MS | CE
Exp. 1 | MLP | IRIS | -0.9091 | -0.9092 | -0.9011 | 0.9873 | 0.9827
Exp. 2 | MLP | DIGITS | -0.9886 | -0.9936 | -0.9935 | 0.9989 | 0.9989
Exp. 3 | MLP | MNIST | -0.9881 | -0.9886 | -0.9880 | 0.9880 | 0.9841
Exp. 4 | CNN | DIGITS | -0.9106 | -0.9106 | -0.9107 | 0.9547 | 0.9547
Exp. 5 | CNN | MNIST | -0.9231 | -0.9229 | -0.9220 | 0.9299 | 0.9168
Exp. 6 | RNN | UCR-CT | -0.9576 | -0.9549 | -0.9587 | 0.9618 | 0.9619
Exp. 7 | RNN | UCR-SS | -0.9846 | -0.9844 | -0.9827 | 0.9929 | 0.9928
Exp. 8 | LSTM | UCR-SS | -0.9382 | -0.9352 | -0.9373 | 0.9700 | 0.9718
Exp. 9 | LSTM | ADD | -0.9721 | -0.9735 | -0.9706 | 0.9815 | 0.9862
Exp. 10 | LSTM | MNIST | -0.9757 | -0.9754 | -0.9760 | 0.9774 | 0.9763
Figure 4: Corresponding to Table II, the figures show the trend of MPCS and the benchmark metrics in the ten different training processes, which also intuitively show their similarity in general practice. Notice that for better visual comparison between them, the specific values of MS, CE and MPCS are normalized.

Now we can propose our evaluation measure MPCS based on the above definitions. To calculate the interval punishment, given the confidence pattern $P_{\textit{conf}}$, we select the $\log$ function, just as CE does, to convert the interval distance into a concrete punishment value. Then, with the concern degree $\mathcal{I}$, the Meta Pattern Concern Score $\mathcal{S}$ of the classification result of every single sample can be calculated as:

$$\mathcal{S}=\sum_{i=1}^{k}\left(-\log\left(\frac{P_{i}}{t-1}\right)\times\frac{I_{i}}{\sum_{j=1}^{k}I_{j}}\right) \quad (9)$$

where $P_{i}$ and $I_{i}$ denote the $i$-th elements of $P_{\textit{conf}}$ and $\mathcal{I}$ respectively, and $k$, $t$ are the hyper-parameters defined previously. An important characteristic to notice is that MPCS approximates CE in the limit of $k=1$ and $t\to\infty$. In other words, CE can be viewed as a special case of MPCS under this limit condition. We illustrate the whole process of MPCS calculation in Algorithm 1, including the construction of the meta pattern (lines 3-5, 8), the calculation of the concern degree (lines 6-7, 9-17) and the interval punishment (lines 18-20), and the calculation of the MPCS value over the whole dataset (lines 21-24).
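For clarity, the per-sample computation of Algorithm 1 can be sketched in NumPy as follows (an illustrative re-implementation, not the repository code; `probs` is the softmax output for one sample, and each release-list element stores the correct label first):

```python
import numpy as np

def mpcs_per_sample(probs, correct_label, release_list, f_r, k, t, eps=1e-7):
    """Meta Pattern Concern Score of one probabilistic prediction (Eq. (9))."""
    pred = np.argsort(-probs)[:k]                  # prediction pattern, Eq. (4)
    conf = np.sort(probs)[::-1][:k].astype(float)  # top-k confidences, descending
    levels = t - np.floor(conf * t) - 1            # Eq. (6), incorrect-label case
    levels[levels == -1] = 0
    degrees = np.ones(k)                           # concern degree, Eq. (8)
    hits = np.where(pred == correct_label)[0]
    if hits.size > 0:
        i = hits[0]
        levels[i] = t - levels[i] - 1              # Eq. (6), correct-label case
        for r in release_list:                     # r = [correct_label, released_1, ...]
            if r[0] == correct_label:
                for j in range(k):
                    if j != i and pred[j] in r[1:]:
                        degrees[j] = f_r
        degrees[i] = degrees.sum() - 1             # concern of the correct label
    levels[levels == 0] = eps                      # avoid log(0)
    punishment = -np.log(levels / (t - 1))         # interval punishment
    return float(np.sum(punishment * degrees / degrees.sum()))

# Example: 4-class output, correct label 0, label 1 released with factor 0.5
# mpcs_per_sample(np.array([0.7, 0.2, 0.06, 0.04]), 0, [[0, 1]], 0.5, k=3, t=200)
```

Averaging this per-sample value over the whole dataset gives the final MPCS, as in lines 21-24 of Algorithm 1.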

IV Evaluation and Discussion

In this section, we first evaluate MPCS in terms of general effectiveness and efficiency, and then illustrate its specific advantages in introducing human values through a case study. Finally, we also discuss the contribution of MPCS based on the experiment results. The code and detailed experiment records are available in our GitHub repository: https://github.com/FlaAI/MPCS.

IV-A Experimental Setup

There are six datasets from four sources used in the experiments, namely IRIS and DIGITS from scikit-learn [16], MNIST, the pedestrian counting dataset CT and the smooth subspace clustering dataset SS from the UCR time-series archive [17], and the synthetic Autonomous Driving Dataset (ADD) generated with the Scenic scenario programming language and the Carla simulator [18]. Different models, including a Multi-Layer Perceptron (MLP), a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN) and a Long Short-Term Memory (LSTM) network, are adopted, and Adam is uniformly chosen as the optimizer. The Accuracy, $F_{1}$-score, MCC, MS and CE mentioned in Section II-A are adopted as benchmarks. The experiments are implemented with Python 3.8.8 and PyTorch 1.10.1.

Figure 5: The figures show the confusion matrices of the prediction results of the two multi-classifiers picked out respectively by the benchmark metrics (they all pick the same model in this case) and by MPCS given specific human values (the wrong predictions framed in red are less destructive according to the t-SNE projection [20]). The model picked by MPCS reduces dangerous cases by 0.53% while sacrificing only 0.04% of training accuracy.

IV-B General Effectiveness and Efficiency

In this section, we evaluate MPCS in two general respects. First, through a comparison of the value and trend of MPCS against the benchmark metrics, we verify its effectiveness as an evaluation measure in the training process of a multi-classifier. Second, by reporting the time cost of MPCS calculation, we show that its efficiency is comparable to that of the benchmark metrics.

IV-B1 Effectiveness

Inspired by the evaluation method adopted in [19], we use the Spearman correlation coefficient to calculate the similarity between MPCS and the benchmarks, and regard this relationship as a verification of its effectiveness. We report the results in Table II, with the corresponding figures as a supplement in Fig. 4. From the results, we find that all similarity values exceed $\pm 0.9$, and more than half of them are above $\pm 0.97$, from which we draw the empirical inference that MPCS is highly correlated with the benchmark metrics and can therefore indeed serve as an evaluation measure of practical significance.
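Such similarity values can be computed directly with `scipy.stats.spearmanr`; a small sketch with hypothetical per-epoch metric records (not values from our experiments):

```python
from scipy.stats import spearmanr

# hypothetical per-epoch records of two metrics from the same training run
mpcs_per_epoch = [1.52, 1.10, 0.85, 0.71, 0.64]
ce_per_epoch = [2.01, 1.43, 1.02, 0.88, 0.79]

rho, p_value = spearmanr(mpcs_per_epoch, ce_per_epoch)
print(f"Spearman similarity between MPCS and CE: {rho:.4f}")
```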

TABLE III: The computational time costs of MPCS and the benchmark metrics are also similar in the experiments in Table II.
ID | ACC | F1 | MCC | MS | CE | MPCS (time cost, $\times 10^{-3}$ s)
Exp. 1 | 0.28 | 1.76 | 2.01 | 0.49 | 0.49 | 4.27
Exp. 2 | 0.33 | 1.62 | 2.79 | 0.59 | 0.49 | 13.66
Exp. 3 | 26.15 | 50.21 | 88.63 | 33.88 | 33.74 | 497.49
Exp. 4 | 31.56 | 32.98 | 33.90 | 31.80 | 35.55 | 43.98
Exp. 5 | 692.03 | 696.62 | 702.07 | 693.26 | 788.27 | 710.92
Exp. 6 | 7.25 | 8.34 | 8.17 | 7.26 | 7.59 | 7.08
Exp. 7 | 21.30 | 22.27 | 22.34 | 21.39 | 22.43 | 22.40
Exp. 8 | 38.92 | 40.05 | 39.99 | 39.04 | 40.07 | 38.89
Exp. 9 | 228.87 | 230.86 | 232.93 | 229.05 | 245.07 | 230.59
Exp. 10 | 2672.93 | 2703.11 | 2746.43 | 2685.93 | 2927.46 | 2802.63

IV-B2 Efficiency

We verify the efficiency of MPCS by comparing its computational time cost with that of the benchmark metrics. Specifically, we calculate all metrics over the whole training dataset at every turn of a 150-turn training process and take the average time cost as the final result. As illustrated in Table III, the time costs of all the metrics are generally of the same order of magnitude, which is especially clear for the relatively complex models.

IV-C Specific Advantages given Human Values

So far we have confirmed that MPCS can work as a general evaluation measure, but the particular benefits it can bring given specific human values have not yet been discussed. In this section, we use a case study to illustrate how to use MPCS to introduce human values into the evaluation and even the learning of a multi-classifier. We build the case with MLP and MNIST as in Exp. 3, and determine the less destructive misclassifications according to the t-SNE projection [20], a technique that plots 2D embeddings for high-dimensional datasets while approximately preserving the neighborhood relationships between samples. Specifically, if the incorrect prediction for a sample is adjacent to its correct label in the projected clusters, the mistake is considered less destructive than confusing non-adjacent labels. So we put all such mistakes, framed in red in Fig. 5, into the release list $\mathcal{R}$. Notice that we take this as an example because its meaning is relatively simple to understand, while in practice the release condition can be customized according to any specific, complex requirement.
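As an illustrative sketch of how such a release list could be derived programmatically, the following simplified variant runs t-SNE on per-class centroids and releases confusions between nearby classes; it is our own assumption, not necessarily the exact procedure used above:

```python
import numpy as np
from sklearn.manifold import TSNE

def release_list_from_tsne(X, y, n_classes, n_neighbors=1, random_state=0):
    """Build a release list from class adjacency in a 2D t-SNE embedding.

    X: array of shape (n_samples, n_features); y: integer labels.
    Illustrative only: here we embed class centroids rather than all samples.
    """
    centroids = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
    emb = TSNE(n_components=2, perplexity=min(5, n_classes - 1),
               random_state=random_state).fit_transform(centroids)
    release = []
    for c in range(n_classes):
        d = np.linalg.norm(emb - emb[c], axis=1)
        neighbors = np.argsort(d)[1:1 + n_neighbors]   # nearest other classes
        release.append([c] + neighbors.tolist())       # [correct, released...]
    return release
```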

IV-C1 MPCS for Evaluation

We train the MLP using CE for 150 epochs and record the model parameters from each epoch as different candidates, then use the different metrics to pick out the optimal model among them. The $f_{\mathcal{R}}$, $k$ and $t$ of MPCS are set to 0.5, 5 and 200 here. In this case, all the metrics except MPCS pick out the same model, and the prediction results of the two models are shown as confusion matrices in Fig. 5. As can be calculated from the two confusion matrices, although the total number of misclassifications of the model picked by MPCS (5128) is greater than that of the model picked by the other metrics (5108), the number of destructive cases of the former (2605) is lower than that of the latter (2621). In other words, by applying MPCS we pick out a model that trades training accuracy, from 91.49% to 91.45%, for a destructive rate reduced from 51.31% to 50.78%. This result can be especially meaningful in real-world safety-critical applications.

Figure 6: The figures show the trend of training and test Accuracy, the MPCS value and the number of dangerous samples during the learning of a multi-classifier with and without MPCS (given the same human values as in Fig. 5) introduced to dynamically adjust the learning rate.

IV-C2 MPCS for Learning

In addition to evaluation, we also explore how to introduce the human values into learning through MPCS. Since the abstract representation makes MPCS non-differentiable, it cannot be used directly as a loss, but we can still introduce it to refine the training process by dynamically adjusting the learning rate according to its value. The rationale is that the loss is essentially a punishment to be learned from, so when a smaller (i.e. better) MPCS value is obtained, we can reduce the punishment with a proportionally smaller learning rate in the current turn. In Fig. 6, we record the change of the Accuracy on the training and test sets, the MPCS value and the number of destructive samples during model learning with and without MPCS. With MPCS introduced, the newly trained model outperforms the original one on average, with a 1.62% lower MPCS value and 0.36% fewer destructive samples. At the same time, there are even slight improvements under the Accuracy measure as well, namely 0.04% and 0.03% on the training and test data.
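A minimal PyTorch-style sketch of this refinement is given below; the exact scaling rule is our assumption (the text only states that a smaller MPCS value leads to a proportionally smaller learning rate), and `compute_mpcs` is a hypothetical helper wrapping Algorithm 1:

```python
import torch
import torch.nn.functional as F

def train_with_mpcs_lr(model, train_loader, compute_mpcs, epochs, base_lr):
    """Refine training by scaling the learning rate with the MPCS value.

    compute_mpcs(model) -> float: hypothetical helper returning the MPCS of
    `model` on the training data, evaluated with the given human values.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
    prev_mpcs = None
    for _ in range(epochs):
        for x, y in train_loader:                  # ordinary CE training step
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
        mpcs = compute_mpcs(model)                 # evaluate with human values
        if prev_mpcs is not None:
            # a smaller (better) MPCS proportionally lowers the next rate
            for group in optimizer.param_groups:
                group["lr"] = base_lr * mpcs / prev_mpcs
        prev_mpcs = mpcs
    return model
```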

IV-D Contributions

With all the results shown above, we are now ready to answer what MPCS can contribute to the community. For the first time, we provide a general way to introduce two kinds of specific human values into the evaluation and even the learning of multi-classifiers. Different from common metrics, which always have a fixed form, MPCS allows people to flexibly declare what they care about most in a model for different applications, and tries to cater to this specific intent when picking out the optimal model, under the premise of not violating the common metrics too much and having a similar time cost.

On the other hand, technically speaking, MPCS is designed around the advantages of the two existing kinds of metrics. Specifically, it provides an abstract view of the model output to enable customization, like the confusion matrix-based evaluation measures, and calculates its value in a way that approximates CE in the limit, so that the picked model approximates the posterior probability distribution as much as possible, like a loss. As for the negative labels, MPCS neither ignores them completely like CE, nor gives them the same importance as MS; instead it adopts a compromise that avoids their shortcomings and makes the assessment more reasonable. Although MPCS still cannot avoid discrete values under non-limit conditions, not only is the granularity adjustable, but the extent of discreteness is also generally much lower than that of the confusion matrix-based measures.

V Conclusion

In this paper, to acquire better multi-classifiers given specific human values, we proposed a novel evaluation measure called Meta Pattern Concern Score. It not only achieves effectiveness and efficiency comparable to common metrics in general tasks, but also shows particular advantages under given human values. MPCS is expected to support the customized evaluation and even training of multi-classifiers in real-world practice, especially in safety-critical areas where various human values have to be considered. In the future, we plan to evaluate and refine several existing applications using MPCS customized for them, to make them safer and more trustworthy for the public.

References

  • [1] T. M. Mitchell, Machine Learning.   McGraw-Hill New York, 1997, vol. 1, no. 9.
  • [2] F. Chollet, Deep learning with Python.   Simon and Schuster, 2021.
  • [3] A. Luque, A. Carrasco, A. Martín, and A. de Las Heras, “The impact of class imbalance in classification performance metrics based on the binary confusion matrix,” Pattern Recognition, vol. 91, pp. 216–231, 2019.
  • [4] S. Weidman, Deep learning from scratch: building with python from first principles.   O’Reilly Media, 2019.
  • [5] D. M. Kline and V. L. Berardi, “Revisiting squared-error and cross-entropy functions for training neural network classifiers,” Neural Computing & Applications, vol. 14, no. 4, pp. 310–318, 2005.
  • [6] C. Chen, H. Xu, W. Wang, B. Li, B. Li, L. Chen, and G. Zhang, “Communication-efficient federated learning with adaptive parameter freezing,” in 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS).   IEEE, 2021, pp. 1–11.
  • [7] M. Hung, M. Hu, M. Shanker, and B. Patuwo, “Estimating posterior probabilities in classification problems with neural networks,” International Journal of Computational Intelligence and Organizations, vol. 1, no. 1, pp. 49–60, 1996.
  • [8] E. Soares, P. Angelov, B. Costa, and M. Castro, “Actively semi-supervised deep rule-based classifier applied to adverse driving scenarios,” in 2019 international joint conference on neural networks (IJCNN).   IEEE, 2019, pp. 1–8.
  • [9] A. Lucieri, M. N. Bajwa, S. A. Braun, M. I. Malik, A. Dengel, and S. Ahmed, “On interpretability of deep learning based skin lesion classifiers using concept activation vectors,” in 2020 international joint conference on neural networks (IJCNN).   IEEE, 2020, pp. 1–10.
  • [10] HMG, “The traffic signs regulations and general directions 2016,” 2016.
  • [11] D. M. Ibrahim, N. M. Elshennawy, and A. M. Sarhan, “Deep-chest: Multi-classification deep learning model for diagnosing covid-19, pneumonia, and lung cancer chest diseases,” Computers in biology and medicine, vol. 132, p. 104348, 2021.
  • [12] R. Das and S. Chaudhuri, “On the separability of classes with the cross-entropy loss function,” arXiv preprint arXiv:1909.06930, 2019.
  • [13] P. Flach and M. Kull, “Precision-recall-gain curves: Pr analysis done right,” Advances in neural information processing systems, vol. 28, 2015.
  • [14] R. O. Duda, P. E. Hart et al., Pattern classification and scene analysis.   Wiley New York, 1973, vol. 3.
  • [15] C. Fernández, C. Guindel, N.-O. Salscheider, and C. Stiller, “A deep analysis of the existing datasets for traffic light state recognition,” in 2018 21st International Conference on Intelligent Transportation Systems (ITSC).   IEEE, 2018, pp. 248–254.
  • [16] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
  • [17] H. A. Dau, A. Bagnall, K. Kamgar, C.-C. M. Yeh, Y. Zhu, S. Gharghabi, C. A. Ratanamahatana, and E. Keogh, “The ucr time series archive,” IEEE/CAA Journal of Automatica Sinica, vol. 6, no. 6, pp. 1293–1305, 2019.
  • [18] M. Zhang, D. Du, M. Zhang, L. Zhang, Y. Wang, and W. Zhou, “A meta-modeling approach for autonomous driving scenario based on sttd.” International Journal of Software & Informatics, vol. 11, no. 3, 2021.
  • [19] T. Baluta, Z. L. Chua, K. S. Meel, and P. Saxena, “Scalable quantitative verification for deep neural networks,” in 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE).   IEEE, 2021, pp. 312–323.
  • [20] L. Van der Maaten and G. Hinton, “Visualizing data using t-sne.” Journal of machine learning research, vol. 9, no. 11, 2008.