An AI Architecture with the Capability to Explain Recognition Results
Abstract
Explainability is needed to establish confidence in machine learning results. Some explainability methods take a post hoc approach, explaining the weights of machine learning models; others highlight areas of the input that contribute to decisions. These methods do not adequately explain decisions in plain terms. Explainable property-based systems have been shown to provide explanations in plain terms; however, they have not performed as well as leading unexplainable machine learning methods. This research focuses on the importance of metrics to explainability and contributes two methods yielding performance gains. The first method introduces a combination of explainable and unexplainable flows, proposing a metric to characterize the explainability of a decision. The second method compares classic metrics for estimating the effectiveness of neural networks in the system and proposes a new metric as the leading performer. Results from the new methods, along with examples from handwritten datasets, are presented.
Index Terms:
Explainable Artificial Intelligence, Machine Learning, Support Vector Machine, Neural Network, Multilayer Perceptron

I Introduction
Explainability is needed to establish confidence in machine learning (ML) results. Much work has been proposed to assist in explaining automated decisions made by Artificial Intelligence (AI); however, current explainable results remain unsatisfactory.
Post hoc analysis of Neural Networks (NN) has assessed local and global decisions by examining the weights of the NN. Explainability through the identification of areas of the input that contribute to a decision has also been proposed. While these approaches add an element of explainability, they have been unable to explain, in plain terms, why a decision was made by an automated system.
The focus of this research is the use of techniques to provide explainable results with combined property-based NN models. Results center on explainability and are not meant to compete with the latest unexplainable recognition techniques.

Contributions of this work include two methods to achieve explainability and performance gains. The first is the use of an unexplainable ML model in the explainable architecture shown in Fig. 1. Unexplainability occurs when no property can account for an explanation, and the unexplainable model becomes an additional component in the architecture. In order to improve the overall explainability and results, and to quantify the explainability of decisions, a new explainability metric is introduced. The second is a comparison of classic ML performance metrics and the proposal of a new combined metric to improve accuracy in an explainable architecture.
The popular MNIST and EMNIST handwritten digit and character datasets [1, 2] are leveraged to explore the new methods in recognizing handwritten characters. With these datasets, the problem approached is one of recognizing the class, $c$, of an input, where a class is a particular digit in MNIST or an alphanumeric character in EMNIST.
Related work is outlined in Section II. Section III presents the explainable architecture, the addition of unexplainable flows to combined NN models to increase performance, and introduces per-class effectiveness metrics. Finally, results including an explainable example are discussed in Section IV.
II Related Work
Several works discuss the combination of results from multiple trained neural networks. Jacobs et al. identified local experts of trained networks [3]. Other works treated multiple trained networks as committees and combined NNs to form a collective decision [4, 5].
An explainable additive NN model is posed by Vaughan et al., where a system is composed by layering distinct NNs that are trained on transforms of the inputs. A layer then combines the outputs of the distinct NNs to perform a prediction. Explainability comes from each distinct NN modeling features of the input, which lends interpretability to the architecture [6].
An explainable architecture using explainable properties, transforms of the input related to those properties, Inference Engines (IE) for each property, probabilistic decision-making, and the attribution of decisions to relevant explainable properties was proposed [7, 8]. Like combined NN and additive NN systems, the explainable architecture examined decisions of multiple NNs.
Metrics such as Accuracy, Precision, and Recall have evolved from disciplines such as Statistics, Data Science, and Information Retrieval [9] for use in the evaluation of ML models. The Accuracy metric (1) for a binary predictor is the ratio of correct predictions (positive and negative), the sum of True Positives (TP) and True Negatives (TN), to the total number of predictions, which is the sum of TP, TN, False Positives (FP), and False Negatives (FN). The Precision metric (2) represents the ratio of correct positive predictions, TP, to all positive predictions, TP + FP. Recall (3) represents the ratio of correct positive predictions, TP, to all positive cases, TP + FN. Specificity (4) represents the ratio of correct negative predictions, TN, to all negative cases, TN + FP.
There has been extensive study of additional metrics for gauging the performance of ML models [10, 11, 12]. The Receiver Operating Characteristic (ROC) curve relates the True Positive and False Positive rates over a range of thresholds, and the Area Under the ROC Curve (AUC) [13, 14] provides a metric characterizing the performance of a classifier. The F1-Score is the harmonic mean of the Recall and Precision of a model [15]. Cohen’s Kappa was introduced as a metric for characterizing the agreement between observers of psychological behavior [16, 17]; for ML models, Cohen’s Kappa is effective in comparing the agreement between labels and predictions. The Matthews correlation coefficient (MCC) has been presented as a leading metric for comparisons on imbalanced datasets [18, 19]. AUC, F1-Score, Cohen’s Kappa, and MCC are compared as potential effectiveness metrics in the explainable architecture.
$ACC = \frac{TP + TN}{TP + TN + FP + FN}$   (1)

$P = \frac{TP}{TP + FP}$   (2)

$R = \frac{TP}{TP + FN}$   (3)

$S = \frac{TN}{TN + FP}$   (4)
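For concreteness, the sketch below computes the one-versus-others counts and the four base metrics (1)–(4) for a single class from arrays of labels and predictions. The function name and signature are illustrative and are not part of the original implementation.

```python
import numpy as np

def one_vs_others_metrics(y_true, y_pred, cls):
    """Counts and metrics (1)-(4) for one class treated as positive, all others negative."""
    pos_true = (y_true == cls)
    pos_pred = (y_pred == cls)
    tp = int(np.sum(pos_true & pos_pred))
    tn = int(np.sum(~pos_true & ~pos_pred))
    fp = int(np.sum(~pos_true & pos_pred))
    fn = int(np.sum(pos_true & ~pos_pred))
    acc = (tp + tn) / (tp + tn + fp + fn)             # (1) Accuracy
    p = tp / (tp + fp) if (tp + fp) else 0.0          # (2) Precision
    r = tp / (tp + fn) if (tp + fn) else 0.0          # (3) Recall
    s = tn / (tn + fp) if (tn + fp) else 0.0          # (4) Specificity
    return tp, tn, fp, fn, acc, p, r, s
```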
TABLE I: One-versus-others counts and Accuracy (ACC), in percent, per MNIST class.

Class | TP | TN | FP | FN | ACC
---|---|---|---|---|---
0 | 0.0 | 90.1 | 0.0 | 9.87 | 90.1 |
1 | 11.2 | 0.0 | 88.8 | 0.0 | 11.2 |
2 | 0.0 | 90.0 | 0.0 | 9.93 | 90.1 |
3 | 0.0 | 89.8 | 0.0 | 10.2 | 89.8 |
4 | 0.0 | 90.2 | 0.0 | 9.74 | 90.3 |
5 | 0.0 | 91.0 | 0.0 | 9.04 | 91.0 |
6 | 0.0 | 90.1 | 0.0 | 9.86 | 90.1 |
7 | 0.0 | 89.6 | 0.0 | 10.4 | 89.6 |
8 | 0.0 | 90.2 | 0.0 | 9.75 | 90.3 |
9 | 0.0 | 90.0 | 0.0 | 9.92 | 90.1 |
TABLE II: Overall system accuracy (%) on MNIST and EMNIST with MLP and SVM IEs, using explainable flows only (E) and combined explainable and unexplainable flows (E+U), for various per-class effectiveness metrics.

Effectiveness Metric | MNIST MLP (E) | MNIST MLP (E+U) | MNIST SVM (E) | MNIST SVM (E+U) | EMNIST MLP (E) | EMNIST MLP (E+U) | EMNIST SVM (E) | EMNIST SVM (E+U)
---|---|---|---|---|---|---|---|---
P·ACC·R·S (8) | 95.5 | 97.6 | 95.4 | 97.3 | 71.7 | 77.4 | 75.9 | 81.0
 | 95.2 | 97.4 | 95.1 | 97.1 | 71.7 | 77.3 | 75.9 | 81.0
 | 94.3 | 97.2 | 94.8 | 97.0 | 71.1 | 76.8 | 75.9 | 81.0
 | 95.0 | 97.3 | 94.4 | 96.8 | 70.4 | 76.9 | 74.7 | 80.8
Precision (P) | 94.4 | 97.0 | 94.2 | 96.7 | 70.4 | 76.8 | 74.7 | 80.8
Cohen’s Kappa | 94.1 | 96.9 | 94.2 | 96.7 | 71.5 | 77.5 | 74.4 | 80.6
 | 92.6 | 96.2 | 93.8 | 96.5 | 69.7 | 75.5 | 73.9 | 80.4
F1-Score | 91.9 | 95.9 | 93.5 | 96.4 | 70.6 | 76.7 | 74.2 | 80.5
 | 85.6 | 93.1 | 92.0 | 95.7 | 37.0 | 55.3 | 67.1 | 76.9
Specificity (S) | 88.1 | 93.7 | 92.0 | 95.1 | 49.2 | 61.5 | 68.5 | 76.9
Accuracy (ACC) | 85.6 | 92.6 | 92.0 | 95.1 | 47.7 | 60.0 | 68.0 | 76.3
AUC | 67.5 | 77.7 | 89.8 | 94.1 | 5.62 | 8.49 | 63.4 | 73.1
Balanced Accuracy | 75.0 | 85.4 | 89.7 | 94.2 | 8.46 | 16.4 | 63.1 | 72.9
Recall (R) | 52.4 | 68.0 | 85.1 | 91.7 | 2.53 | 3.35 | 50.5 | 63.7

TABLE III: Explainability data for the example using Recall (R) as effectiveness.

Flow | Property | Class Vote | E(‘1’) | E(‘S’) | E(‘E’) | E(‘n’) | x(‘1’) | x(‘S’) | x(‘E’) | x(‘n’)
---|---|---|---|---|---|---|---|---|---|---
F1 | Stroke | ‘S’ | | 0.9229 | | | | 1.0 | |
F2 | Circle | ‘1’ | 0.9954 | | | | 1.0 | | |
F3 | Crossing | ‘1’ | 0.9850 | | | | 1.0 | | |
F4 | Ellipse | ‘n’ | | | | 0.1079 | | | | 1.0
F5 | Ell-Cir | ‘1’ | 0.9954 | | | | 1.0 | | |
F6 | Endpoint | ‘S’ | | 0.4963 | | | | 1.0 | |
F7 | Encl. Reg. | ‘1’ | 0.9996 | | | | 1.0 | | |
F8 | Line | ‘S’ | | 0.4950 | | | | 1.0 | |
F9 | Convex Hull | ‘E’ | | | 0.2604 | | | | 1.0 |
F10 | Corner | ‘S’ | | 0.5088 | | | | 1.0 | |
F11 | No Property | ‘S’ | | 0.9238 | | | | 0.0 | |
Weights | | | | | | | | | |
Confidence / Explainability | | | 51.7% | 43.5% | 3.39% | 1.40% | 100% | 72.4% | 100% | 100%
TABLE IV: Ranked explanations for the example using Recall as effectiveness.

Class | Confidence | Explainability | Explainable Description
---|---|---|---
‘1’ | 51.7% | 100% | Confidence is medium for interpreting this character as a one due to the enclosed region, circle, ellipse-circle, and crossing properties. |
‘S’ | 43.5% | 72.4% | Confidence is medium for interpreting this character as an S due to the stroke, corner, endpoint, and line properties. |
‘E’ | 3.39% | 100% | Confidence is low for interpreting this character as an E due to the convex hull property. |
‘n’ | 1.40% | 100% | Confidence is low for interpreting this character as an n due to the ellipse property. |
III Method
III-A Explainable Architecture
Fig. 1 depicts the explainable architecture with explainable and unexplainable contributions. Consider each horizontal element in the architecture incident on the Decision Making Process as a Pre-Decision Flow (PDF) of the explainable system. An explainable PDF consists of one Property Transform and one Inference Engine (IE). An unexplainable PDF consists of only one IE. The explainable PDFs, $F_1$ through $F_n$, are depicted in green while the single unexplainable PDF, $F_{n+1}$, is shown in orange. Explainable PDFs are related to properties that contribute to the explainability of the system. Explainable properties often provide mediocre recognition results compared to an IE trained against the untransformed data. In many cases, this is because a property may not pertain to every class; e.g., the ellipse property does not apply to a well-formed digit one.
In this architecture, the unexplainable PDF operates on the untransformed input image. The unexplainable flow contributes excellent recognition performance but detracts from the explainability of the system, since its IE is an opaque box without an explainable property to justify a decision. An explainability metric quantifying the impact of unexplainable PDFs on decisions is introduced in Section III-B.
In Fig. 1, input flows into the system at the left. The initial stage of the architecture, for explainable PDFs, is the Transform phase. In the Transform phase, the input image is transformed according to the explainable property. Transforms emphasize particular explainable properties in the input and are essential to explainability.
The second stage of the architecture is the Property Inferencing stage. In Property Inferencing, IEs are used to make decisions on local data. IEs in this work are Multilayer Perceptrons (MLP) or Support Vector Machines (SVM). Explainable IEs are trained on transformed training data. The unexplainable IE, , is trained on untransformed input. Training data and training results are stored in a knowledgebase (KB) as depicted at the bottom of Fig. 1.
Local decisions from Property Inferencing are passed to Decision Making, where a global decision is made. This global decision is an ordering of the voted-on classes by Confidence. The Confidence, $C_c$, is given in (5), where the class weight $W_c$, defined in (6), is the summed effectiveness of the set $V_c$ of flows that voted for class $c$, and $K$ is the set of classes that received votes. Effectiveness, $E_{f,c}$, indicates how well IE $f$ recognizes class $c$ and is obtained from data stored in the KB. Section III-C details effectiveness and outlines new methods of gauging it, while Section IV-B provides results of the new effectiveness metrics.
$C_c = \frac{W_c}{\sum_{k \in K} W_k}$   (5)

$W_c = \sum_{f \in V_c} E_{f,c}$   (6)
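As a minimal sketch of this decision step, assuming discrete votes and effectiveness values retrieved from the KB (the data structures and names below are illustrative, not the paper's implementation):

```python
from collections import defaultdict

def rank_by_confidence(votes, effectiveness):
    """votes: {flow_id: voted_class}; effectiveness: {(flow_id, class): E_fc}.
    Returns classes ordered by confidence C_c per (5), using class weights W_c per (6)."""
    weights = defaultdict(float)
    for flow, cls in votes.items():
        weights[cls] += effectiveness[(flow, cls)]   # W_c: summed effectiveness of voters for c
    total = sum(weights.values())
    confidence = {cls: w / total for cls, w in weights.items()}   # C_c = W_c / sum_k W_k
    return sorted(confidence.items(), key=lambda kv: kv[1], reverse=True)
```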
The Explainability phase in the explainable architecture, Fig. 1, composes the textual rationale related to the explainable properties contributing to decisions. This is done by constructing phrases indicating the properties that voted for each class. The explainability measure, $X_c$ from (7), is also determined in this phase of the system. The measure indicates how explainable the decision for a class, $c$, is, based on each voting flow's relation to an explainable property. The user is finally presented with the ranked recognition results, the confidence of the decisions, the rationale for each decision, and the level of explainability of each decision. Sample explainable results are presented in Section IV-C.
III-B Explainability Metric for Unexplainable Flows
Explainability, in the property-based system, is provided by explainable properties, which are used to justify a local decision in plain terms to a user. With the introduction of unexplainable flows, each flow, $f$, is assigned an explainability metric $x_f$, where $0 \le x_f \le 1$. An $x_f$ close to zero signifies an unexplainable flow, while an $x_f$ near one indicates an explainable flow. Examples in this work use $x_f = 1$ for explainable flows and $x_f = 0$ for the unexplainable flow.
$X_c = \frac{\sum_{f \in V_c} x_f \, E_{f,c}}{\sum_{f \in V_c} E_{f,c}}$   (7)
The explainability of a decision for class $c$ is given by $X_c$ in (7), where the numerator is the sum of the products of the explainability metric and the effectiveness for the flows that voted for class $c$, and the denominator is the sum of the effectiveness for the flows that voted for class $c$.
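A corresponding sketch of (7), reusing the same hypothetical vote and effectiveness structures and a per-flow explainability value $x_f$ (1.0 for explainable flows, 0.0 for the unexplainable flow):

```python
from collections import defaultdict

def explainability_per_class(votes, effectiveness, x):
    """votes: {flow_id: voted_class}; effectiveness: {(flow_id, class): E_fc}; x: {flow_id: x_f}.
    Returns X_c per (7) for every class that received at least one vote."""
    num, den = defaultdict(float), defaultdict(float)
    for flow, cls in votes.items():
        e = effectiveness[(flow, cls)]
        num[cls] += x[flow] * e      # explainability-weighted effectiveness
        den[cls] += e                # total effectiveness of the voters for the class
    return {cls: num[cls] / den[cls] for cls in den}
```

Applied to the effectiveness values in Table VII, where four explainable flows and the unexplainable flow vote for ‘S’, this formulation yields roughly the 64.9% explainability reported for the ‘S’ decision.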
III-C Effectiveness Metrics
TABLE V: Explainability data for the example using Accuracy (ACC) as effectiveness.

Flow | Property | Class Vote | E(‘1’) | E(‘S’) | E(‘E’) | E(‘n’) | x(‘1’) | x(‘S’) | x(‘E’) | x(‘n’)
---|---|---|---|---|---|---|---|---|---|---
F1 | Stroke | ‘S’ | | 0.9964 | | | | 1.0 | |
F2 | Circle | ‘1’ | 0.8417 | | | | 1.0 | | |
F3 | Crossing | ‘1’ | 0.7354 | | | | 1.0 | | |
F4 | Ellipse | ‘n’ | | | | 0.9774 | | | | 1.0
F5 | Ell-Cir | ‘1’ | 0.8444 | | | | 1.0 | | |
F6 | Endpoint | ‘S’ | | 0.9776 | | | | 1.0 | |
F7 | Encl. Reg. | ‘1’ | 0.3615 | | | | 1.0 | | |
F8 | Line | ‘S’ | | 0.9787 | | | | 1.0 | |
F9 | Convex Hull | ‘E’ | | | 0.9767 | | | | 1.0 |
F10 | Corner | ‘S’ | | 0.9791 | | | | 1.0 | |
F11 | No Property | ‘S’ | | 0.9964 | | | | 0.0 | |
Weights | | | | | | | | | |
Confidence / Explainability | | | 28.8% | 51.0% | 10.1% | 10.1% | 100% | 79.8% | 100% | 100%
TABLE VI: Ranked explanations for the example using Accuracy as effectiveness.

Class | Confidence | Explainability | Explainable Description
---|---|---|---
‘S’ | 51.0% | 79.8% | Confidence is medium for interpreting this character as an S due to the stroke, corner, endpoint, and line properties. |
‘1’ | 28.8% | 100% | Confidence is medium for interpreting this character as a one due to the enclosed region, circle, ellipse-circle, and crossing properties. |
‘n’ | 10.1% | 100% | Confidence is low for interpreting this character as an n due to the ellipse property. |
‘E’ | 10.1% | 100% | Confidence is low for interpreting this character as an E due to the convex hull property. |
An important area of the architecture, influencing the overall performance, is the assignment of weights to the IE votes. A key concept in assigning these weights is the Effectiveness of an IE at recognizing a particular class. Effectiveness, $E_{f,c}$, was posed as a metric for how well IE $f$ performs at recognizing class $c$ and is used to weight IE votes. Recall, given in (3), was previously used for Effectiveness. While Recall produced acceptable results, the system exhibited false positives for some of the poor-performing flows.
It is not enough to know that an IE is effective overall. Effectiveness must be examined at the class level. This requires that metrics are taken as one class versus the others. The one-versus-others strategy imposes a significant imbalance between the single class and the others with the MNIST and EMNIST datasets [1, 2]. Some metrics are sensitive to imbalanced data and produce misleading results. This is evident in Table I, where Accuracy (ACC) is very high, around 90%, for nine out of ten classes despite no true positives (TP = 0).
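The effect in Table I can be reproduced with a short, illustrative calculation: a degenerate classifier that predicts the same class for every input still attains roughly 90% one-versus-others Accuracy for the other nine classes, because the TN count dominates. The sketch below uses synthetic balanced labels as a stand-in for MNIST.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 10, size=10_000)     # balanced ten-class labels (stand-in for MNIST)
y_pred = np.full_like(y_true, fill_value=1)   # degenerate classifier: always predicts class 1

for cls in range(10):
    tp = np.sum((y_true == cls) & (y_pred == cls))
    tn = np.sum((y_true != cls) & (y_pred != cls))
    acc = (tp + tn) / y_true.size              # ACC per (1)
    print(f"class {cls}: ACC = {acc:.1%}, TP = {tp}")   # ~90% ACC with TP = 0 for cls != 1
```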
Particular metrics from the literature that were examined and evaluated as effectiveness in the explainable architecture include Accuracy, Recall, Specificity, Precision, F1-Score, Cohen’s Kappa, the Matthews Correlation Coefficient, Balanced Accuracy, and the Area Under the Receiver Operating Characteristic Curve (AUC).
In addition to evaluating metrics from the literature, some metrics were combined and compared. The combination found to have the best performance as per-class effectiveness was the product of one-class-versus-others Precision (P), Accuracy (ACC), Recall (R), and Specificity (S), shown in (8) as the new effectiveness metric, P·ACC·R·S. Equation (9) expands the product in terms of TP, TN, FP, and FN.
$E_{f,c} = P \cdot ACC \cdot R \cdot S$   (8)

$E_{f,c} = \frac{TP^2 \, TN \, (TP + TN)}{(TP + FP)(TP + FN)(TN + FP)(TP + TN + FP + FN)}$   (9)
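A sketch of the combined effectiveness in (8)–(9), computed from one-versus-others counts (the helper name is illustrative):

```python
def combined_effectiveness(tp, tn, fp, fn):
    """Product of Precision, Accuracy, Recall, and Specificity per (8)-(9).
    Returns 0.0 if any factor is undefined (e.g., no positive predictions)."""
    if 0 in (tp + fp, tp + fn, tn + fp):
        return 0.0
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return precision * accuracy * recall * specificity
```

For instance, the counts listed for the enclosed region flow in Table IX (TP = 2.13, TN = 34.0, FP = 63.8, FN = 0.0, expressed as percentages of the test set) give a value of about 0.004, matching the low effectiveness shown for that flow in Table VII and the 0.40 entry in the last column of Table IX.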
III-D Inference Engine Training
SVM IEs used the Support Vector Classification implementation and NN IEs used the MLP Classifier implementation in scikit-learn version 1.2.2 [21]. In the SVMs, radial basis function kernels were used as they performed better than alternative kernels with the transformed data. MLPs used two hidden layers of 128 neurons with a rectified linear unit activation function. This MLP architecture performed well on the handwritten data and was fast to train without specialized hardware.
Results shown in the examples were obtained using discrete classification from the MLP and SVM models, which outputs either a one or a zero for each class rather than a continuous probability estimate $p$, where $0 \le p \le 1$. The default behavior for the scikit-learn SVM model is discrete classification. When the SVMs were trained with probability estimates enabled and the estimates were used along with effectiveness, there was much less variance between the metrics and higher system accuracy.
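The configuration described above can be sketched with scikit-learn as follows; the stand-in dataset and variable names are illustrative, and the property transforms and KB bookkeeping are omitted.

```python
from sklearn.datasets import load_digits            # small stand-in dataset for illustration
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)                 # stands in for (transformed) MNIST/EMNIST images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# MLP IE: two hidden layers of 128 neurons with ReLU activation, as described above.
mlp_ie = MLPClassifier(hidden_layer_sizes=(128, 128), activation="relu", max_iter=500)
mlp_ie.fit(X_train, y_train)

# SVM IE: radial basis function kernel; probability=True enables the optional per-class
# probability estimates discussed above (the scikit-learn default is discrete classification).
svm_ie = SVC(kernel="rbf", probability=True)
svm_ie.fit(X_train, y_train)

votes = svm_ie.predict(X_test)                      # discrete votes, as used in the examples
probs = svm_ie.predict_proba(X_test)                # continuous probability estimates p in [0, 1]
```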
IV Results
Table II depicts the MNIST and EMNIST overall accuracy percentage results obtained with MLP and SVM using various per-class effectiveness metrics in ranking and selecting a global decision. MLPs and SVMs performed comparably on MNIST and the SVMs performed a few percentage points better on EMNIST. SVMs also appeared to be more forgiving when used with lower performing effectiveness metrics.
Accuracy reflected in the tables is an appropriate overall measure of performance of the system since the classes are balanced and the metric is being taken on the entire architecture. The Explainable result (E) columns indicate the overall system accuracy for ten explainable flows while the combined Explainable and Unexplainable result (E+U) columns indicate the overall accuracy using ten explainable and one unexplainable flow.
IV-A Explainable Results
Adding an accurate but unexplainable classifier to the system improves performance. This is illustrated in Table II, where the combined results (E+U) are greater than the strictly explainable (E) results. A marked improvement in accuracy was observed when moving to combined explainable and unexplainable flows.
The combined (E+U) results were greater than the explainable (E) results in all cases. In the MNIST example using Recall as Effectiveness, the increase using MLP flows was over 15 percentage points. Typical increases with combined flows ranged from two to ten percentage points over strictly explainable flows.
IV-B Effectiveness Results
TABLE VII: Explainability data for the example using the combined P·ACC·R·S metric as effectiveness.

Flow | Property | Class Vote | E(‘1’) | E(‘S’) | E(‘E’) | E(‘n’) | x(‘1’) | x(‘S’) | x(‘E’) | x(‘n’)
---|---|---|---|---|---|---|---|---|---|---
F1 | Stroke | ‘S’ | | 0.8341 | | | | 1.0 | |
F2 | Circle | ‘1’ | 0.0830 | | | | 1.0 | | |
F3 | Crossing | ‘1’ | 0.0389 | | | | 1.0 | | |
F4 | Ellipse | ‘n’ | | | | 0.0406 | | | | 1.0
F5 | Ell-Cir | ‘1’ | 0.0848 | | | | 1.0 | | |
F6 | Endpoint | ‘S’ | | 0.2274 | | | | 1.0 | |
F7 | Encl. Reg. | ‘1’ | 0.0040 | | | | 1.0 | | |
F8 | Line | ‘S’ | | 0.2401 | | | | 1.0 | |
F9 | Convex Hull | ‘E’ | | | 0.1067 | | | | 1.0 |
F10 | Corner | ‘S’ | | 0.2508 | | | | 1.0 | |
F11 | No Property | ‘S’ | | 0.8385 | | | | 0.0 | |
Weights | | | | | | | | | |
Confidence / Explainability | | | 7.66% | 86.9% | 3.88% | 1.40% | 100% | 64.9% | 100% | 100%
TABLE VIII: Ranked explanations for the example using the combined P·ACC·R·S metric as effectiveness.

Class | Confidence | Explainability | Explainable Description
---|---|---|---
‘S’ | 86.9% | 64.9% | Confidence is high for interpreting this character as an S due to the stroke, corner, endpoint, and line properties. |
‘1’ | 7.66% | 100% | Confidence is low for interpreting this character as a one due to the enclosed region, circle, ellipse-circle, and crossing properties. |
‘E’ | 3.88% | 100% | Confidence is low for interpreting this character as an E due to the convex hull property. |
‘n’ | 1.40% | 100% | Confidence is low for interpreting this character as an n due to the ellipse property. |
Some of the performance metrics from the literature, especially those resilient to imbalanced data such as Cohen’s Kappa, provide outstanding results as a measure of effectiveness in the explainable architecture. Surprisingly, Precision alone, as well as products of Precision with other metrics, performs among the highest of those attempted. The best-performing metric found is given in (8): the product of Precision, Accuracy, Recall, and Specificity. The previously reported accuracy of the explainable architecture on MNIST with MLPs was about 92%, using Recall as Effectiveness and probability estimates. Employing the combined metric as effectiveness, without probability estimates, accuracy was observed at about 95.5%, an improvement of over 3 percentage points. When probability estimates were used with the combined metric as effectiveness, accuracy increased to 96.1%. Adding an unexplainable component to the system and using probability estimates increased accuracy to as high as 98.0%.
Observing the numerator in (9), the combined metric performs well because balance between its TP and TN factors is maintained under the one-versus-others splits used here (one class versus nine others for MNIST and one versus 46 others for the EMNIST balanced set). A larger dataset with more classes may not yield similar performance.
IV-C Explainable Example
This section presents an example and outlines how the explainable architecture comes to a decision. The example illustrates the confidence, rationale, and explainability provided to a user. Also included are results with differing effectiveness metrics. Where the terms low, medium, and high are used in this section, they denote below 25%, 25% to 75%, and above 75%, respectively.
Fig. 2 depicts the example, an EMNIST balanced test handwritten sample (index 18) labeled as a capital ‘S’. The explainable architecture used for the example is composed of eleven PDFs with SVM IEs. Data from multiple effectiveness metrics (Recall, Accuracy, and the combined P·ACC·R·S metric) are presented to demonstrate how the explainable system benefits from more robust effectiveness metrics.
The results of processing the example using the Recall metric are in Tables III and IV, the results using the Accuracy metric are in Tables V and VI, and the results using the combined P·ACC·R·S metric are in Tables VII and VIII. Table IX collects the metric particulars for the PDFs in the example in one place. Note that four classes were voted on for the example. Each PDF's vote and explainability metric are the same across the various tables; only the effectiveness varies.
When examining the Explainability data in Tables III, V, and VII, each flow identifier, $F_i$, is in the first column, where $i$ denotes the flow number. The second column indicates the property name. Flow $F_{11}$ is labeled as no property because it represents the unexplainable flow, without an explainable property. The Class Vote column indicates the class selected by each flow.
The remaining columns are related to effectiveness and explainability. Since four classes were voted on in this example, there are four columns each for effectiveness and explainability, one per voted class. The columns labeled E(c) give the effectiveness $E_{f,c}$ of each flow $f$ for class $c$; the columns labeled x(c) give the flow's explainability metric $x_f$. Only the column representing the class that a flow voted for holds a value for effectiveness and explainability. The last two rows of the Explainability tables give the per-class weights of effectiveness and explainability, along with the Confidence, $C_c$, and Explainability, $X_c$, for each class $c$.
Ordered explanation results are presented in Tables IV, VI, and VIII. The first column indicates the class, $c$. The second column gives the Confidence, $C_c$. The third column gives the Explainability, $X_c$, associated with the decision. The final column is the rationale provided to the user.
Observe in Tables III and IV that the four PDFs that voted for class one had high Recall, reflected in their effectiveness values in Table III. This resulted in the digit one winning with medium confidence, 51.7%, due to the enclosed region, circle, ellipse-circle, and crossing properties. The second choice was the ‘S’ with medium confidence, 43.5%. Since Recall in (3) has TP in the numerator and TP + FN in the denominator, the near-zero FN counts of the PDFs that voted for the digit one, observed in Table IX, explain why the digit one wins using the Recall metric.
Tables V and VI contain data and results from using the Accuracy metric on the example. Accuracy is given in (1). Due to the comparatively high TN counts of the PDFs that voted for the ‘S’, shown in Table IX, the ‘S’ wins with medium confidence, 51.0%, due to the stroke, corner, endpoint, and line properties. The ‘S’ also received a vote from the unexplainable PDF. The lower explainability metric, at 79.8%, reflects diminished explainability because no explainable property is associated with part of the contribution to the decision.
TABLE IX: Per-PDF one-versus-others counts and metrics (%) for the example.

Property | Class | TP | TN | FP | FN | R | ACC | P·ACC·R·S
---|---|---|---|---|---|---|---|---
Encl. Reg. | ‘1’ | 2.13 | 34.0 | 63.8 | 0.0 | 100 | 36.2 | 0.40 |
Circle | ‘1’ | 2.12 | 82.1 | 15.8 | 0.0 | 99.5 | 84.2 | 8.29 |
Ell-Cir | ‘1’ | 2.12 | 82.3 | 15.5 | 0.0 | 99.5 | 84.5 | 8.47 |
Crossing | ‘1’ | 2.01 | 71.5 | 26.4 | 0.0 | 98.5 | 73.5 | 3.88 |
Convex Hull | ‘E’ | 0.55 | 97.1 | 0.76 | 1.57 | 26.0 | 97.7 | 10.7 |
No Property | ‘S’ | 1.97 | 97.7 | 0.19 | 0.16 | 92.4 | 99.7 | 83.9 |
Stroke | ‘S’ | 1.96 | 97.7 | 0.20 | 0.16 | 92.3 | 99.6 | 83.4 |
Corner | ‘S’ | 1.08 | 96.8 | 1.04 | 1.05 | 50.9 | 97.9 | 25.1 |
Endpoint | ‘S’ | 1.06 | 96.7 | 0.20 | 0.20 | 49.6 | 97.8 | 22.7 |
Line | ‘S’ | 1.05 | 96.8 | 1.05 | 1.07 | 49.5 | 97.9 | 24.0 |
Ellipse | ‘n’ | 0.23 | 97.5 | 0.36 | 1.90 | 10.8 | 97.7 | 4.07 |
The final set of Explainability and Explanations tables, VII and VIII, involves the combined P·ACC·R·S metric. Under this metric, the stroke and unexplainable flows have a much higher effectiveness than the other flows. The high effectiveness of the Accuracy metric for the flows voting for the digit one was due to the insensitivity of Accuracy to FP counts. The same flows have a reduced effectiveness under the combined metric due to its sensitivity to FP counts, as observed in Table IX. The capital ‘S’ correctly wins with high confidence, 86.9%, using the combined metric.
V Conclusion
The introduction of unexplainable but high-performing flows into the explainable architecture increased the accuracy and explainability of the system. Using the previous effectiveness metric, an increase in MNIST accuracy with MLPs of over 15 percentage points was observed by introducing unexplainable flows. A metric characterizing the impact of the unexplainable additions to the system, $X_c$, was introduced, and an example from EMNIST illustrates its utility.
The results and analysis pertaining to performance metrics for gauging effectiveness suggest that several metrics from the literature perform better than the previously used effectiveness metric in the explainable architecture, as noted in Section IV-B. The best-performing metric was devised as the product of Precision, Accuracy, Recall, and Specificity. The example demonstrated that more robust and resilient effectiveness metrics improve results. From the analysis of the metric, one could speculate that larger datasets, especially those with more classes, may not perform as well with the combined metric. A derivative of the combined metric that scales its terms according to the dataset size and number of classes could be devised and may be suitable as a metric for gauging model performance on imbalanced datasets.
References
- [1] L. Deng, “The MNIST database of handwritten digit images for machine learning research,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 141–142, 2012.
- [2] G. Cohen, S. Afshar, J. Tapson, and A. Van Schaik, “EMNIST: Extending MNIST to handwritten letters,” in 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017, pp. 2921–2926.
- [3] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton, “Adaptive mixtures of local experts,” Neural Computation, vol. 3, no. 1, pp. 79–87, 1991.
- [4] M. Perrone, “Putting it all together: Methods for combining neural networks,” Advances in neural information processing systems, vol. 6, 1993.
- [5] A. J. C. Sharkey, “On combining artificial neural nets,” Connection Science, vol. 8, no. 3–4, pp. 299–314, 1996.
- [6] J. Vaughan, A. Sudjianto, E. Brahimi, J. Chen, and V. N. Nair, “Explainable neural networks based on additive index models,” arXiv preprint arXiv:1806.01933, 2018.
- [7] P. Whitten, F. Wolff, and C. n. Papachristou, “Explainable artificial intelligence methodology for handwritten applications,” in NAECON 2021 - IEEE National Aerospace and Electronics Conference, 2021, pp. 277–282.
- [8] P. Whitten, F. Wolff, and C. Papachristou, “Explainable neural network recognition of handwritten characters,” in 2023 IEEE 13th Annual Computing and Communication Workshop and Conference (CCWC), 2023, pp. 0176–0182.
- [9] S. P. Harter, “The cranfield ii relevance assessments: A critical evaluation,” The Library Quarterly, vol. 41, no. 3, pp. 229–243, 1971.
- [10] S. Picek, A. Heuser, A. Jovic, S. Bhasin, and F. Regazzoni, “The curse of class imbalance and conflicting metrics with machine learning for side-channel evaluations,” IACR Transactions on Cryptographic Hardware and Embedded Systems, pp. 209–237, 2019.
- [11] B. J. Erickson and F. Kitamura, “Magician’s corner: 9. performance metrics for machine learning models,” p. e200126, 2021.
- [12] M. Z. Naser and A. H. Alavi, “Error metrics and performance fitness indicators for artificial intelligence and machine learning in engineering and sciences,” Architecture, Structures and Construction, vol. 3, no. 4, p. 499–517, Nov. 2021.
- [13] C. E. Metz, “Basic principles of ROC analysis,” Seminars in Nuclear Medicine, vol. 8, no. 4, pp. 283–298, 1978.
- [14] J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve,” Radiology, vol. 143, no. 1, pp. 29–36, 1982.
- [15] Y. Sasaki et al., “The truth of the F-measure,” Teach Tutor Mater, vol. 1, no. 5, pp. 1–5, 2007.
- [16] J. Cohen, “A coefficient of agreement for nominal scales,” Educational and Psychological Measurement, vol. 20, pp. 37 – 46, 1960.
- [17] A. Ben-David, “About the relationship between ROC curves and Cohen’s kappa,” Engineering Applications of Artificial Intelligence, vol. 21, no. 6, pp. 874–882, 2008.
- [18] B. Matthews, “Comparison of the predicted and observed secondary structure of T4 phage lysozyme,” Biochimica et Biophysica Acta (BBA) – Protein Structure, vol. 405, no. 2, pp. 442–451, 1975.
- [19] D. Chicco, V. Starovoitov, and G. Jurman, “The benefits of the Matthews correlation coefficient (MCC) over the diagnostic odds ratio (DOR) in binary classification assessment,” IEEE Access, vol. 9, pp. 47112–47124, 2021.
- [20] C. Harris, M. Stephens et al., “A combined corner and edge detector,” in Alvey vision conference, vol. 15, no. 50. Citeseer, 1988, pp. 10–5244.
- [21] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.