
Inter Observer Variability Assessment through Ordered Weighted Belief Divergence Measure in MAGDM: Application to the Ensemble Classifier Feature Fusion

Pragya Gupta, Student Member, IEEE, Debjani Chakraborty, and Debashree Guha, Senior Member, IEEE
Pragya Gupta and Debjani Chakraborty are with the Department of Mathematics, Indian Institute of Technology Kharagpur, Kharagpur 721302, India (e-mail: [email protected]; [email protected]).
Debashree Guha is with the School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur 721302, India (e-mail: [email protected]).
Abstract

A large number of multi-attribute group decision-making (MAGDM) methods have been introduced to obtain consensus results. However, most of these methodologies ignore the conflict among experts' opinions and only assign them equal or variable priorities. Therefore, this study proposes an Evidential MAGDM method that assesses inter-observational variability and handles the uncertainty that emerges among experts. The proposed framework makes four contributions. First, a basic probability assignment (BPA) generation method is introduced to capture the inherent characteristics of each alternative by computing its degree of belief. Second, ordered weighted belief and plausibility measures are constructed to capture the overall intrinsic information of each alternative by assessing the inter-observational variability and addressing the conflicts emerging among the group of experts. Third, an ordered weighted belief divergence measure is constructed to acquire the weighted support of each group of experts and obtain the final preference relationship. Finally, we present an illustrative example of the proposed Evidential MAGDM framework. Further, we analyze its interpretation in a real-world application: ensemble classifier feature fusion for diagnosing retinal disorders from optical coherence tomography images.

Index Terms:
Ordered Weighted Belief Measure, Plausibility Measure, Ordered Weighted Belief Divergence Measure, BPA, Feature Fusion, Evidential MAGDM.

I Introduction

Multi-attribute group decision-making (MAGDM) is one of the most crucial parts of modern decision science; it is the process of determining the optimal solution from multiple decision attributes and alternatives [1, 2] with respect to a group of experts. Its key role is to support decision analysts in taking all necessary objective and subjective attributes of the problem into consideration and employing a rational, explicit decision procedure [3, 2]. MAGDM has been extensively applied in fields such as medicine, management, pattern analysis, and classification [4, 5, 6, 7, 2], and has attracted increasing attention from researchers [1, 2, 8]. In pattern classification, MAGDM is generally used in feature fusion to achieve desirable recognition performance based on the fused features. MAGDM problems exhibit three key characteristics: alternatives, multiple attributes with incomparable units, and multiple experts, among which the experts' weights play a significant part. Determining the weights of experts is an essential research area, and several techniques have been introduced for this purpose [9, 10].
Traditional group decision-making approaches commonly compute results through majority and voting principles, a straightforward arithmetic aggregation procedure that ignores trade-offs among alternatives and experts. To overcome this inconsistency and compute optimally agreed results, group decision-making (GDM) approaches [11, 12] have been proposed. In this regard, some approaches [13, 14, 15, 9] use the relative importance or preference relations among the group of experts by determining influence relations. Weight-determination methods based on pairwise comparisons include two classes of approaches [16] that can be utilized to mediate discrepancies among experts. In the first class, the individual decisions are aggregated: the pairwise comparisons of different experts are merged into one, and the resulting aggregated pairwise comparison is treated as a single-expert problem [16, 17]. In the second class, the individuals' preferences are aggregated: a weight vector is generated for each expert [18, 4, 19], and the resulting set of weight vectors is transformed into a single weight vector by employing Dempster-Shafer theory or various aggregation operators to determine the optimal weights. Although these approaches are simple to implement, the information loss is extensive, and the inter-observational variability among the alternatives is not taken into consideration while computing the experts' weights. These challenges motivate us to propose a MAGDM approach that considers the degree of belief of each alternative and incorporates the inter-observational variability among the group of experts.
With this view, this study develops a MAGDM framework based on an Evidential ordered weighted belief divergence measure to evaluate the support for each expert and transform it into a consensus situation. To compute the ordered weighted belief divergence measure, constructions of the ordered weighted belief and plausibility measures are proposed. A basic probability assignment (BPA) generation method is introduced to evaluate these measures; it analyzes the crucial information by considering the alternatives' responses corresponding to the attributes. It assesses the belongingness of each alternative to the domain of each attribute, which is partitioned by selecting a suitable number of linguistic terms, followed by computing the BPAs. The empirical study of the proposed Evidential MAGDM technique is demonstrated over an illustrative example to show the efficacy of the approach. We further demonstrate its effectiveness on a real-world problem to assess its capability of handling uncertainty and inter-observational behavior in a feature-fusion module for the diagnosis of retinal disorders. In our study, we propose an ensemble classifier feature fusion using the proposed Evidential MAGDM approach for the classification of various retinal disorders by integrating the multi-expert-view phenomenon. The proposed system includes three expert paths that generate the OCT dataset at different scale spaces, each integrated with the EfficientNet-B0 model [20]. The model extracts discriminative feature representations at various scales to capture the diversity in feature representation locally and globally, which influences the classification probability for the diagnosis of retinal disease through OCT images.
The extracted features from each path are fused using the proposed Evidential MAGDM approach, which handles the conflict and impreciseness emerging among the distinctive features at various scales, followed by the classification of retinal disorders. In a nutshell, the contributions of this work are summarized as follows:

  • This study proposes a generation framework of BPAs for MAGDM with respect to the group of experts by computing the belief degree based on partitioning the domain of attribute into linguistic terms.

  • We propose an ordered weighted belief measure to compute the overall belongingness of each alternative corresponding to the attributes by considering its inter-observational variability evidence; to analyze the conflicts and discrepancies among the group of experts, an ordered weighted plausibility measure is proposed.

  • We propose an ordered weighted belief generalized divergence measure to compute the weighted support for each expert for MAGDM.

  • We propose an Evidential MAGDM framework based on the constructed ordered weighted belief divergence measure to compute overall weights for each group of experts.

  • To analyze the effectiveness of the proposed Evidential MAGDM approach, an illustrative example is demonstrated.

  • We propose the application of the Evidential MAGDM method in the ensemble classifier feature fusion for the diagnosis of retinal disorders.

The remainder of this manuscript is organized as follows: Section II briefly introduces the key concepts of BPAs and divergence measures. Section III introduces the Evidential MAGDM approach. In Section IV, a numerical example and comparative analysis are provided to validate the effectiveness of the proposed approach, followed by an application to ensemble classifier feature fusion for diagnosing retinal disorders. Finally, concluding remarks are given in Section V.

II Prerequisites

II-A Theory of Belief Functions

Various approaches have been exploited to address uncertainty in recent times. Dempster-Shafer (DS) evidence theory, built on belief functions, is one of the tools that provides a general technique for modeling uncertainty and is widely applied in various areas. This section presents the basic concepts that are helpful for the subsequent work.

Definition 1.

Frame of Discernment [21]: Let $\Omega$ be a set of $n$ mutually exclusive elements, expressed as $\Omega=\{\omega_{1},...,\omega_{n}\}$. Then the set $\Omega$ is called a frame of discernment. The power set of $\Omega$ is defined as $2^{\Omega}=\{\varnothing,\{\omega_{1}\},\{\omega_{1},\omega_{2}\},...,\Omega\}$. If $U\in 2^{\Omega}$, $U$ is called a proposition.

Definition 2.

Basic Probability Assignment [21, 22]: It is a mapping $m:2^{\Omega}\mapsto[0,1]$ which satisfies:

$m(\varnothing)=0,\quad\sum_{U\subseteq\Omega}m(U)=1$ (1)

If $m(U)>0$, then $U$ is said to be a focal element. The value $m(U)$ signifies the degree of belief supporting proposition $U$. In DS evidence theory, $m$ is also called a basic belief assignment (BBA).

In DS evidence theory, the belief and plausibility functions are associated with a BPA and represent the lower and upper bounds of support for each proposition; they can be described as follows:

Definition 3.

Belief Function [22]: It is a mapping $Bel:2^{\Omega}\mapsto[0,1]$ defined as:

$Bel(U)=\sum_{A\subseteq U}m(A)$ (2)
Definition 4.

Plausibility Function [22]: It is a mapping $Pl:2^{\Omega}\mapsto[0,1]$ defined as:

$Pl(U)=\sum_{A\cap U\neq\varnothing}m(A),\quad Pl(U)=1-Bel(\overline{U})$ (3)

It is to be noted that $Pl(U)\geq Bel(U)$ for each $U\subseteq\Omega$.
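As a minimal sketch (not from the paper), Eqs. 2 and 3 can be computed by encoding a BPA as a dictionary from frozenset propositions to masses; the frame and the mass values below are illustrative:

```python
def bel(m, U):
    """Belief (Eq. 2): total mass of focal elements contained in U."""
    return sum(v for A, v in m.items() if A <= U)

def pl(m, U):
    """Plausibility (Eq. 3): total mass of focal elements intersecting U."""
    return sum(v for A, v in m.items() if A & U)

# Toy BPA on the frame {a, b}; masses sum to 1 as required by Eq. 1.
m = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.2, frozenset({"a", "b"}): 0.3}
U = frozenset({"a"})
print(bel(m, U), pl(m, U))  # 0.5 0.8
```

Note that $Pl(U)=1-Bel(\overline{U})$ holds here: the mass of $\{b\}$ is 0.2, and $1-0.2=0.8$.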

Definition 5.

Dempster's Combination Rule [21]: Let $m_{1}$ and $m_{2}$ be two independent BPAs on the frame of discernment $\Omega$. Dempster's rule of combination, $m_{1}\oplus m_{2}$, is defined as follows:

$(m_{1}\oplus m_{2})(A)=\begin{cases}\frac{1}{1-K}\sum_{A_{1}\cap A_{2}=A}m_{1}(A_{1})m_{2}(A_{2}),&A\neq\varnothing\\ 0,&A=\varnothing\end{cases}$ (4)

with $K=\sum_{A_{1}\cap A_{2}=\varnothing}m_{1}(A_{1})m_{2}(A_{2})$, where $K$ denotes the coefficient of conflict between the BPAs $m_{1}$ and $m_{2}$.
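Eq. 4 admits a direct implementation; the following sketch (illustrative BPAs, same dict encoding as above) accumulates the conflict mass $K$ and normalizes the non-empty intersections by $1-K$:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule (Eq. 4): normalized conjunctive combination."""
    combined = {}
    K = 0.0  # conflict coefficient: mass assigned to empty intersections
    for (A1, v1), (A2, v2) in product(m1.items(), m2.items()):
        A = A1 & A2
        if A:
            combined[A] = combined.get(A, 0.0) + v1 * v2
        else:
            K += v1 * v2
    if K >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {A: v / (1.0 - K) for A, v in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.3, frozenset({"a", "b"}): 0.7}
m12 = dempster_combine(m1, m2)  # here K = 0.6 * 0.3 = 0.18
```

The combined masses again sum to 1, so $m_{1}\oplus m_{2}$ is itself a valid BPA.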

II-B Generalized Jensen Shannon Divergence Measures

In information theory, divergence measures are employed to quantify the discrepancy between data distributions and are widely exploited in various application areas [23, 19]. The Kullback-Leibler (KL) divergence [24] is one of the most popular divergence measures; it is non-negative and additive but not symmetric. In this regard, the Jensen-Shannon divergence measure [25] was introduced; it is symmetric and also bounded, ranging between 0 and 1 for logarithmic base 2, as illustrated below.

Definition 6.

[25] The Jensen-Shannon divergence between two probability distributions $A=\{a_{1},a_{2},...,a_{n}\}$ and $B=\{b_{1},b_{2},...,b_{n}\}$ is defined as follows:

$JS(A,B)=\dfrac{1}{2}\Big[I\big(A,\dfrac{A+B}{2}\big)+I\big(B,\dfrac{A+B}{2}\big)\Big]$ (5)

where $I(A,B)$ is the KL divergence [24].

Definition 7.

Generalized Jensen-Shannon Divergence [25]: Let $A_{1},A_{2},...,A_{p}$ be $p$ probability distributions with weights $w_{1},w_{2},...,w_{p}$, respectively. The generalized Jensen-Shannon (JS) divergence can be expressed as

$JS_{w}(A_{1},A_{2},...,A_{p})=H\big(\sum_{i=1}^{p}w_{i}A_{i}\big)-\sum_{i=1}^{p}w_{i}H(A_{i})$ (6)

where $H(A_{i})=-\sum_{j}a_{ji}\log a_{ji}$ with $\sum_{j}a_{ji}=1$ $(i=1,2,...,p;\ j=1,2,...,n)$.
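Eq. 6 is the entropy of the weighted mixture minus the weighted entropies; a short sketch (illustrative distributions, base-2 logarithm) is:

```python
import math

def entropy(p):
    """Shannon entropy with base-2 logarithm; 0 * log 0 is taken as 0."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def generalized_js(dists, weights):
    """Generalized JS divergence (Eq. 6): H(sum_i w_i A_i) - sum_i w_i H(A_i)."""
    mix = [sum(w * d[j] for w, d in zip(weights, dists))
           for j in range(len(dists[0]))]
    return entropy(mix) - sum(w * entropy(d) for w, d in zip(weights, dists))

A1, A2 = [0.5, 0.5], [0.9, 0.1]
d = generalized_js([A1, A2], [0.5, 0.5])  # positive since A1 != A2
```

The measure vanishes when all distributions coincide and grows with their spread.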

The Belief Jensen-Shannon (BJS) divergence measure was proposed in [26] to measure the conflict between two BPAs by considering the belief and plausibility measures; it can be defined as:

Definition 8.

[26] The divergence measure between two independent BPAs $m_{1}$ and $m_{2}$ defined on $\Omega$ is given by:

$Div(m_{1},m_{2})=\dfrac{1}{2}\Big[I\big(WPBl_{m_{1}},\dfrac{WPBl_{m_{1}}+WPBl_{m_{2}}}{2}\big)+I\big(WPBl_{m_{2}},\dfrac{WPBl_{m_{1}}+WPBl_{m_{2}}}{2}\big)\Big]$ (7)

where $WPBl_{m}(A_{i})=\dfrac{Bel(A_{i})+Pl(A_{i})}{\sum_{A_{i}\subseteq\Omega}\big(Bel(A_{i})+Pl(A_{i})\big)}$.

$Div(m_{1},m_{2})$ can also be expressed in the following form:

$Div(m_{1},m_{2})=\dfrac{1}{2}\Big[\sum_{A_{i}\subseteq\Omega}WPBl_{m_{1}}(A_{i})\log\dfrac{2WPBl_{m_{1}}(A_{i})}{WPBl_{m_{1}}(A_{i})+WPBl_{m_{2}}(A_{i})}+\sum_{A_{i}\subseteq\Omega}WPBl_{m_{2}}(A_{i})\log\dfrac{2WPBl_{m_{2}}(A_{i})}{WPBl_{m_{1}}(A_{i})+WPBl_{m_{2}}(A_{i})}\Big]$ (8)
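The BJS divergence of Eq. 8 can be sketched as follows. For brevity this illustration (not the paper's code) evaluates $WPBl$ over singleton propositions only, whereas Def. 8 sums over all subsets of $\Omega$; the BPAs are toy values:

```python
import math

def _bel(m, U):
    return sum(v for A, v in m.items() if A <= U)

def _pl(m, U):
    return sum(v for A, v in m.items() if A & U)

def wpbl(m, frame):
    # Normalized Bel + Pl per proposition (Def. 8); singleton propositions
    # only, to keep the sketch short.
    raw = [_bel(m, frozenset({w})) + _pl(m, frozenset({w})) for w in frame]
    total = sum(raw)
    return [v / total for v in raw]

def bjs_divergence(m1, m2, frame):
    """Belief Jensen-Shannon divergence between two BPAs (Eq. 8)."""
    d = 0.0
    for pi, qi in zip(wpbl(m1, frame), wpbl(m2, frame)):
        if pi > 0:
            d += 0.5 * pi * math.log2(2 * pi / (pi + qi))
        if qi > 0:
            d += 0.5 * qi * math.log2(2 * qi / (pi + qi))
    return d

frame = ["a", "b"]
m1 = {frozenset({"a"}): 0.7, frozenset({"b"}): 0.3}
m2 = {frozenset({"a"}): 0.3, frozenset({"b"}): 0.7}
```

As expected of a JS-type measure, the result is symmetric and vanishes when the two BPAs coincide.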

Next, we propose a BPA generation method to evaluate the degree of support for each alternative with respect to the attributes for group decision-making, followed by the construction of the ordered weighted belief and plausibility functions. To calculate the weighted support of each expert for the fusion, a novel construction method for a generalized weighted belief divergence measure is proposed, which considers the inter-observational variability among the alternatives and handles the conflicts occurring among the group of experts.

III The proposed Evidential MAGDM methodology

This section introduces a BPA generation method for each alternative corresponding to the attribute associated with experts using the membership function for the linguistic value, followed by the construction of ordered weighted belief and plausibility function. To handle the conflicts occurring between the group of experts and impreciseness, a generalized ordered weighted belief divergence measure is proposed.

III-A BPAs generation for a group decision making

In this study, let $\{A_{1},A_{2},...,A_{p}\}$ $(p\geq 2,\ P=\{1,2,...,p\})$ be a discrete set of feasible alternatives, $\{t_{1},t_{2},...,t_{q}\}$ $(Q=\{1,2,...,q\})$ a set of attributes, and $\{u_{1},u_{2},...,u_{k}\}$ $(K=\{1,2,...,k\})$ a group of experts. A multi-attribute group decision-making (MAGDM) problem can be represented as follows:

$Y_{k}=\big(y_{ij}^{(k)}\big)_{p\times q}=\begin{pmatrix}y_{11}^{(k)}&y_{12}^{(k)}&\ldots&y_{1q}^{(k)}\\ y_{21}^{(k)}&y_{22}^{(k)}&\ldots&y_{2q}^{(k)}\\ \vdots&\vdots&\ddots&\vdots\\ y_{p1}^{(k)}&y_{p2}^{(k)}&\ldots&y_{pq}^{(k)}\end{pmatrix},\quad k\in K$ (9)

Here, $Y_{k}$, whose rows correspond to the alternatives $A_{1},...,A_{p}$ and columns to the attributes $t_{1},...,t_{q}$, is the decision matrix provided by the $k$th expert. In order to evaluate all attributes in dimensionless units and facilitate inter-attribute comparisons, each attribute of the decision matrix is normalized [27] using Eq. 10 to obtain the corresponding element $y_{ij}^{(k)}$ of the normalized decision matrix $Y_{k}$.

$y_{ij}^{(k)}=\dfrac{y_{ij}^{(k)}}{\sqrt{\sum_{i=1}^{p}\big(y_{ij}^{(k)}\big)^{2}}}\quad(i\in P,\ j\in Q,\ k\in K),$ (10)

for a benefit attribute $t_{j}$.
As considered above, there are $k$ experts, and each expert provides preferences over the alternatives corresponding to the attributes. These subjective/objective preferences may differ across the group of experts, which influences the final decision-making framework. The primary goal is to compute the weighted support for the above MAGDM problem while accounting for this fact.
Due to the subjectivity of experts' opinions, it is assumed that there is still uncertainty in their expressions: opinions are not fully precise. In a broader sense, a particular opinion may vary due to external causes such as the expert's attitude, peer influence, incomplete knowledge, the complexity of the studied system, or other environmental factors. To capture these chance factors, we present a framework to compute the degree of belief in terms of basic probability assignments (BPAs). Another challenge is to capture the diversity emerging among the group of experts without hindering the degree of belief (support) associated with the alternatives. Consequently, we propose a weighted belief divergence measure to assess the diversity emerging among the group of experts for each alternative. It is computed from the ordered weighted belief and plausibility measures, which handle the conflicts occurring among the group of experts and comprise the degree of belief of each alternative. The ordered weighted divergence measure relies on belief and plausibility measures, which capture the degree of confidence by evaluating the support of belongingness in the domain of the attributes, followed by examining the provided subjective/objective preference relationships among the experts for each alternative to handle conflicting information. The subjective preference of each alternative is stretched over the attribute domain for the evaluation of the ordered weighted belief and plausibility measures, which analyze the inter-observational variability among the group of experts. To compute the belief and plausibility measures, we need to evaluate the basic probability assignment, which defines the degree of confidence.
For computing the degree of belief of each alternative over the attribute domain, we scale the subjective preferences of the alternatives over the attribute domain by employing the idea presented in [28]. We utilize an augmented number of linguistic terms for partitioning the attribute domain to capture the intrinsic characteristics and inter-observational variability among the alternatives for each linguistic term set, as shown in Fig. 1.

Figure 1: Linguistic values membership function with $(H+1)$ terms

In this work, it is assumed that the values of each linguistic variable $y_{1},y_{2},...,y_{p}$ are defined on the interval $[c,d]\subset\mathbb{R}$ (where $[c,d]$ represents the attribute's domain). Consider $Y=[c,d]$ and let $\Im(y)$ consist of $H+1$ $(H\geq 2)$ terms, shown in Fig. 1 and elaborated below:
$\Im=\{low_{1},\ around(c+\alpha),\ around(c+2\alpha),...,\ around(c+(H-1)\alpha),\ high_{H}\}$;
where $\alpha=\dfrac{d-c}{H}$, and each term can be represented using the triangular membership functions $\{\mu_{\tilde{B}_{1}},\mu_{\tilde{B}_{2}},...,\mu_{\tilde{B}_{H+1}}\}$ of the following form:

$\mu_{\tilde{B}_{1}}(y)=\mu_{low_{1}}(y)=\begin{cases}1-\dfrac{y-c}{d-c}&\text{if }c\leq y\leq d\\ 0&\text{otherwise}\end{cases}$ (11)

$\mu_{\tilde{B}_{h}}(y)=\mu_{around(c+h\alpha)}(y)=\begin{cases}1-\dfrac{c+h\alpha-y}{h\alpha}&\text{if }c\leq y\leq c+h\alpha\\ 1-\dfrac{y-c-h\alpha}{d-c-h\alpha}&\text{if }c+h\alpha\leq y\leq d\\ 0&\text{otherwise}\end{cases}$ (12)

where $1\leq h\leq(H-1)$, and

$\mu_{\tilde{B}_{H+1}}(y)=\mu_{high_{H}}(y)=\begin{cases}1-\dfrac{d-y}{d-c}&\text{if }c\leq y\leq d\\ 0&\text{otherwise}.\end{cases}$ (13)
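Eqs. 11-13 can be sketched directly; the following illustrative function (names and values are our own) returns the $H+1$ membership degrees of a value $y$ in $[c,d]$:

```python
def memberships(y, c, d, H):
    """Triangular membership degrees of y in [c, d] for the H+1
    linguistic terms of Eqs. 11-13 (illustrative sketch)."""
    alpha = (d - c) / H
    mu = []
    # low_1 (Eq. 11): linearly decreasing over [c, d]
    mu.append(1 - (y - c) / (d - c) if c <= y <= d else 0.0)
    # around(c + h*alpha), h = 1..H-1 (Eq. 12): peak at c + h*alpha
    for h in range(1, H):
        peak = c + h * alpha
        if c <= y <= peak:
            mu.append(1 - (peak - y) / (h * alpha))
        elif peak < y <= d:
            mu.append(1 - (y - peak) / (d - peak))
        else:
            mu.append(0.0)
    # high_H (Eq. 13): linearly increasing over [c, d]
    mu.append(1 - (d - y) / (d - c) if c <= y <= d else 0.0)
    return mu

# With c = 0, d = 10, H = 2, the midpoint y = 5 peaks on the middle term.
print(memberships(5, 0, 10, 2))  # [0.5, 1.0, 0.5]
```

Each alternative's normalized value thus receives a graded belongingness to every linguistic term, which is the raw material for the BPAs below.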

In this work, we determine a set of $l$ linguistic terms associated with triangular membership functions $\{\mu_{\tilde{B}_{1}},\mu_{\tilde{B}_{2}},...,\mu_{\tilde{B}_{l}}\}$ to characterize each alternative's belongingness with respect to the attributes:

$R_{k}=\begin{pmatrix}\mu_{t_{1}\tilde{B}_{1}}^{k_{A_{1}}}&\ldots&\mu_{t_{1}\tilde{B}_{l}}^{k_{A_{1}}}&\ldots&\mu_{t_{q}\tilde{B}_{1}}^{k_{A_{1}}}&\ldots&\mu_{t_{q}\tilde{B}_{l}}^{k_{A_{1}}}\\ \mu_{t_{1}\tilde{B}_{1}}^{k_{A_{2}}}&\ldots&\mu_{t_{1}\tilde{B}_{l}}^{k_{A_{2}}}&\ldots&\mu_{t_{q}\tilde{B}_{1}}^{k_{A_{2}}}&\ldots&\mu_{t_{q}\tilde{B}_{l}}^{k_{A_{2}}}\\ \vdots&&\vdots&&\vdots&&\vdots\\ \mu_{t_{1}\tilde{B}_{1}}^{k_{A_{p}}}&\ldots&\mu_{t_{1}\tilde{B}_{l}}^{k_{A_{p}}}&\ldots&\mu_{t_{q}\tilde{B}_{1}}^{k_{A_{p}}}&\ldots&\mu_{t_{q}\tilde{B}_{l}}^{k_{A_{p}}}\end{pmatrix}$ (14)

where $c=\min_{1\leq i\leq p}\{y_{ij}^{(k)}\}$ and $d=\max_{1\leq i\leq p}\{y_{ij}^{(k)}\}$ $(k\in K)$ with respect to the attribute $t_{j}$, and $\mu_{t_{j}\tilde{B}_{l}}^{k_{A_{i}}}$ $(i\in P,\ j\in Q,\ k\in K)$ represents the degree of belongingness in the partitioned domain $\tilde{B}_{l}$ for the attribute $t_{j}$ corresponding to the $k$th expert. To obtain the basic probability assignments (BPAs) of each alternative corresponding to the attributes, the above matrix is normalized column-wise as follows, so as to satisfy the criterion for BPAs given in Eq. 1.

$\bar{R}_{k}=\begin{pmatrix}m_{t_{1}\tilde{B}_{1}}^{k_{A_{1}}}&\ldots&m_{t_{1}\tilde{B}_{l}}^{k_{A_{1}}}&\ldots&m_{t_{q}\tilde{B}_{1}}^{k_{A_{1}}}&\ldots&m_{t_{q}\tilde{B}_{l}}^{k_{A_{1}}}\\ m_{t_{1}\tilde{B}_{1}}^{k_{A_{2}}}&\ldots&m_{t_{1}\tilde{B}_{l}}^{k_{A_{2}}}&\ldots&m_{t_{q}\tilde{B}_{1}}^{k_{A_{2}}}&\ldots&m_{t_{q}\tilde{B}_{l}}^{k_{A_{2}}}\\ \vdots&&\vdots&&\vdots&&\vdots\\ m_{t_{1}\tilde{B}_{1}}^{k_{A_{p}}}&\ldots&m_{t_{1}\tilde{B}_{l}}^{k_{A_{p}}}&\ldots&m_{t_{q}\tilde{B}_{1}}^{k_{A_{p}}}&\ldots&m_{t_{q}\tilde{B}_{l}}^{k_{A_{p}}}\end{pmatrix}$ (15)

where $m_{t_{j}\tilde{B}_{l}}^{k_{A_{i}}}=\dfrac{\mu_{t_{j}\tilde{B}_{l}}^{k_{A_{i}}}}{\sum_{i=1}^{p}\mu_{t_{j}\tilde{B}_{l}}^{k_{A_{i}}}}$ $(i\in P,\ j\in Q,\ k\in K)$.
The decision matrix $\bar{R}_{k}$ in Eq. 15 represents the BPAs of each alternative with respect to the attributes corresponding to the $k$th expert; it indicates the degree of support (belief) with respect to each partitioned region and handles the impreciseness in the alternatives' data. Next, we construct the ordered weighted belief and plausibility functions to evaluate the overall support for each alternative by assessing the conflicts occurring in the decisions.
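The column-wise normalization of Eq. 15 can be sketched as follows; the membership values are illustrative, one column of $R_{k}$ (a single attribute-term pair across the $p$ alternatives) at a time:

```python
def column_bpas(mu_column):
    """Normalize one column of memberships (Eq. 15) so the resulting
    masses satisfy the BPA criterion of Eq. 1 (sum to 1)."""
    total = sum(mu_column)
    return [v / total for v in mu_column] if total > 0 else list(mu_column)

# Memberships of 3 alternatives in one linguistic term (toy values).
mus = [0.8, 0.4, 0.8]
bpas = column_bpas(mus)
print(bpas)  # [0.4, 0.2, 0.4]
```

Applying this to every attribute-term column of $R_{k}$ yields $\bar{R}_{k}$.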

III-B Construction of Ordered Weighted Belief, Plausibility, and Ordered Weighted Belief Divergence Measure based on experts

This section introduces the ordered weighted belief function and plausibility function, followed by a generalized ordered weighted divergence measure based on the belief and plausibility function.

Definition 9.

The ordered weighted belief function based on the weighting vector $w=(w_{1},w_{2},...,w_{l})$, with $\sum_{f=1}^{l}w_{f}=1$ and $w_{f}\in[0,1]$, for alternative $A_{i}$ with respect to the attribute $t_{j}$ is defined as:

$Bel_{wk}^{t_{j}}(A_{i})=\sum_{f=1}^{l}w_{f}\,m_{t_{j}\tilde{B}_{\sigma(f)}}^{k_{A_{i}}},\quad i\in P,\ j\in Q$ (16)

where $\sigma:\{1,2,...,l\}\mapsto\{1,2,...,l\}$ is a permutation such that $m_{t_{j}\tilde{B}_{\sigma(1)}}^{k_{A_{i}}}\geq m_{t_{j}\tilde{B}_{\sigma(2)}}^{k_{A_{i}}}\geq...\geq m_{t_{j}\tilde{B}_{\sigma(l)}}^{k_{A_{i}}}$.

Here, $Bel_{wk}^{t_{j}}(A_{i})$ represents the total amount of belief necessarily supporting the alternative $A_{i}$ corresponding to the attribute $t_{j}$ with respect to the $k$th expert, giving higher weight to the most supported belief degree using the orness concept for weight determination [29]. The more dispersed $w$ is, the more the individual belief degrees contribute to the overall support for alternative $A_{i}$ with respect to the $k$th expert.
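Eq. 16 is an OWA-style aggregation: the masses are sorted in descending order and the $f$th largest is paired with weight $w_{f}$. A minimal sketch with illustrative masses and weights:

```python
def ordered_weighted_belief(masses, w):
    """Eq. 16: sort the BPA masses m_{t_j B_f}^{k_{A_i}} in descending
    order and pair the f-th largest with weight w_f."""
    ordered = sorted(masses, reverse=True)
    return sum(wf * mf for wf, mf in zip(w, ordered))

# Orness-style weights emphasizing the strongest support (illustrative).
w = [0.5, 0.3, 0.2]
print(round(ordered_weighted_belief([0.1, 0.6, 0.3], w), 2))  # 0.41
```

With a uniform $w$ the measure reduces to the plain average of the masses; skewing $w$ toward $w_{1}$ emphasizes the dominant linguistic term.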

Definition 10.

The plausibility function for alternative $A_{i}$ based on the expert $u_{k}$ corresponding to the attribute $t_{j}$ is defined as

$Pl_{wk}^{t_{j}}(A_{i})=1-\dfrac{\sum_{\substack{r=1\\ r\neq k}}^{k}Bel_{wr}^{t_{j}}(A_{i})}{\sum_{r=1}^{k}Bel_{wr}^{t_{j}}(A_{i})}$ (17)

where $Pl_{wk}^{t_{j}}(A_{i})$ quantifies the overall amount of belief for alternative $A_{i}$ with respect to the group of experts while enduring the conflicts and impreciseness emerging among the experts. Next, we define an ordered weighted belief divergence measure based on the ordered weighted belief and plausibility functions to measure the discrepancy and conflict among the group of experts by considering the inter-observational variability among them for the alternatives.

Definition 11.

Given two independent BPAs $m_{1}$ and $m_{2}$ defined on the frame of discernment $\Omega$, the weighted divergence between $m_{1}$ and $m_{2}$ based on weights $w_{1},w_{2}\geq 0$ with $w_{1}+w_{2}=1$ is defined as:

$Div_{w}(m_{1},m_{2})=\sum_{i=1}^{2}w_{i}\sum_{A_{j}\subseteq\Omega}WPBl_{m_{\sigma(i)}}(A_{j})\log\dfrac{WPBl_{m_{\sigma(i)}}(A_{j})}{w_{1}WPBl_{m_{\sigma(1)}}(A_{j})+w_{2}WPBl_{m_{\sigma(2)}}(A_{j})}$ (18)

When $w_{i}=\dfrac{1}{2}$ $(i\in\{1,2\})$ and $WPBl_{m_{\sigma(1)}}(A_{j})\geq WPBl_{m_{\sigma(2)}}(A_{j})$, Eq. 18 reduces to Eq. 7. Eq. 18 can be viewed as a weighted Jensen-Shannon (JS) divergence measure [25]. One key feature of the weighted JS divergence is that distinct weights can be allocated to the distributions involved according to their prominence. Therefore, the weights in Eq. 18 restrain the trade-off behavior arising among the data with respect to the group of experts when evaluating the divergence measure. To account for the discrepancy among groups of experts, the generalized weighted belief divergence based on the Jensen-Shannon divergence, inspired by [30], in terms of belief and plausibility for multiple BPAs is defined as follows:

Definition 12.

Given a set of BPAs $\{m_{1},m_{2},...,m_{p}\}$ over the frame of discernment $\Omega$ with corresponding weights $\{w_{1},...,w_{p}\}$, $\sum_{i=1}^{p}w_{i}=1$, the generalized weighted divergence is defined as:

$GDiv_{w}(m_{1},m_{2},...,m_{p})=\sum_{i=1}^{p}w_{i}\sum_{A_{j}\subseteq\Omega}\dfrac{WPBl_{m_{\sigma(i)}}(A_{j})}{|A_{j}|}\log\dfrac{WPBl_{m_{\sigma(i)}}(A_{j})}{w_{1}WPBl_{m_{\sigma(1)}}(A_{j})+...+w_{p}WPBl_{m_{\sigma(p)}}(A_{j})}$ (19)

where $WPBl_{m_{\sigma(1)}}(A_{j})\geq...\geq WPBl_{m_{\sigma(p)}}(A_{j})$ for the permutation $\sigma:\{1,...,p\}\mapsto\{1,...,p\}$. When $p=2$ and $|A_{j}|=1$, Eq. 19 reduces to Eq. 18. Next, we present the properties of the proposed generalized divergence measure.
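A sketch of Eq. 19 follows, under two simplifying assumptions of ours: propositions are singletons ($|A_{j}|=1$), and the ordering $\sigma$ is read as sorting whole $WPBl$ rows by total mass in descending order. The inputs are precomputed $WPBl$ distributions (toy values):

```python
import math

def generalized_weighted_divergence(wpbls, w):
    """Eq. 19 restricted to |A_j| = 1: weighted KL of each (ordered)
    WPBl row against the w-weighted mixture of all rows."""
    ordered = sorted(wpbls, key=sum, reverse=True)  # sigma: largest first
    n = len(ordered[0])
    mix = [sum(wi * d[j] for wi, d in zip(w, ordered)) for j in range(n)]
    div = 0.0
    for wi, d in zip(w, ordered):
        for j in range(n):
            if d[j] > 0:
                div += wi * d[j] * math.log2(d[j] / mix[j])
    return div

same = generalized_weighted_divergence([[0.5, 0.5], [0.5, 0.5]], [0.5, 0.5])
diff = generalized_weighted_divergence([[0.9, 0.1], [0.2, 0.8]], [0.5, 0.5])
```

Consistent with Properties III.1 and III.2 below, `same` is 0 and `diff` lies in $(0,\log p]$.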

Property III.1.

$GDiv_{w}(m_{1},...,m_{i},...,m_{p})$ is bounded: $0\leq GDiv_{w}(m_{1},...,m_{i},...,m_{p})\leq\log p$.

Property III.2.

$GDiv_{w}(m_{1},...,m_{i},...,m_{p})=0$ if and only if the $m_{i}$ $(1\leq i\leq p)$ are all equal.

Property III.3.

The proposed divergence measure $GDiv_{w}$ is symmetric: $GDiv_{w}(m_{1},...,m_{p})=GDiv_{w}(m_{\sigma(1)},...,m_{\sigma(p)})$ for any permutation $\sigma:\{1,...,p\}\mapsto\{1,...,p\}$.

Next, we present the framework of the Evidential MAGDM technique, which computes the belief degree of each alternative to determine the experts' weights and obtain agreed results by considering all the necessary aspects of the group of experts.

III-C Proposed framework for group decision making

This section introduces the proposed framework for group decision-making, demonstrated in Fig. 2. It conceptualizes the novel ordered weighted belief and plausibility functions, followed by the ordered weighted belief divergence measure, for computing the experts' weights by considering inter-observational variability and controlling the impact of conflicting information so that it does not affect the final decision. The proposed MAGDM framework for evaluating the experts' weights can be elaborated in the following steps:
Step 1. Generate BPAs from the normalized decision matrix of Eq. 9 using the proposed BPA-computation method for each alternative corresponding to each expert, presented in Section III-A.
Step 2. Determine the ordered weighted belief measure for the $k$th expert using Eq. 16.

$Bel_{wk}=\begin{pmatrix}Bel_{wk}^{t_{1}}(A_{1})&Bel_{wk}^{t_{2}}(A_{1})&\ldots&Bel_{wk}^{t_{q}}(A_{1})\\ Bel_{wk}^{t_{1}}(A_{2})&Bel_{wk}^{t_{2}}(A_{2})&\ldots&Bel_{wk}^{t_{q}}(A_{2})\\ \vdots&\vdots&\ddots&\vdots\\ Bel_{wk}^{t_{1}}(A_{p})&Bel_{wk}^{t_{2}}(A_{p})&\ldots&Bel_{wk}^{t_{q}}(A_{p})\end{pmatrix}$ (20)

Step 3. Determine the weighted plausibility measure for the $k$th expert using Eq. 17.

$Pl_{wk}=\begin{pmatrix}Pl_{wk}^{t_{1}}(A_{1})&Pl_{wk}^{t_{2}}(A_{1})&\ldots&Pl_{wk}^{t_{q}}(A_{1})\\ Pl_{wk}^{t_{1}}(A_{2})&Pl_{wk}^{t_{2}}(A_{2})&\ldots&Pl_{wk}^{t_{q}}(A_{2})\\ \vdots&\vdots&\ddots&\vdots\\ Pl_{wk}^{t_{1}}(A_{p})&Pl_{wk}^{t_{2}}(A_{p})&\ldots&Pl_{wk}^{t_{q}}(A_{p})\end{pmatrix}$ (21)

Step 4. Calculate the weighted divergence measure for each pair of experts with respect to each alternative using Eq. 18:

$D=\begin{pmatrix}Div_{wA_{1}}(u_{1},u_{2})&\ldots&Div_{wA_{1}}(u_{k},u_{k-1})\\ Div_{wA_{2}}(u_{1},u_{2})&\ldots&Div_{wA_{2}}(u_{k},u_{k-1})\\ \vdots&&\vdots\\ Div_{wA_{p}}(u_{1},u_{2})&\ldots&Div_{wA_{p}}(u_{k},u_{k-1})\end{pmatrix}$ (22)

Step 5. Determine the divergence measure matrix for the group of experts.

$D_{MM}=\begin{pmatrix}d_{11}&d_{12}&\ldots&d_{1k}\\ d_{21}&d_{22}&\ldots&d_{2k}\\ \vdots&\vdots&\ddots&\vdots\\ d_{k1}&d_{k2}&\ldots&d_{kk}\end{pmatrix}$ (23)

with rows and columns indexed by the experts $u_{1},u_{2},...,u_{k}$,

where $d_{k_{1}k_{2}}=\frac{1}{p}\sum_{i=1}^{p}Div_{wA_{i}}(u_{k_{1}},u_{k_{2}})$ and $k_{1},k_{2}\in K$.
Step 6. Calculate the average divergence measure for each expert using the $D_{MM}$ matrix.

$$d_{u_{j}}=\frac{1}{k}\sum_{i=1}^{k}d_{ij},\quad j\in K \quad (24)$$

Step 7. Determine the support for each expert as follows using the average divergence measure illustrated in Step 6.

$$\tilde{S}_{u_{j}}=\dfrac{1}{d_{u_{j}}},\quad j\in K \quad (25)$$
Figure 2: The proposed framework for group decision-making by analyzing the inter-observational variability between each alternative corresponding to the attributes for handling the impreciseness and conflicts between the group of experts.

Step 8. Calculate the final weighted support for each expert for group decision-making.

$$W_{u_{k}}=\dfrac{\tilde{S}_{u_{k}}}{\sum_{j\in K}\tilde{S}_{u_{j}}} \quad (26)$$

Step 9. Compute the fusion of the attributes corresponding to the experts by applying the weight $W_{u_{k}}$ determined in the previous steps, as follows.

$$Y=\sum_{k\in K}W_{u_{k}}Y_{k} \quad (27)$$

Step 10. Rank the alternatives according to the higher weighted response using [9], based on the ideal solution $x_{j}^{+}=\max_{1\leq i\leq p}(y_{ij})$, which can be evaluated [31] as $RW_{x^{+}}(y_{i})=\dfrac{\sum_{j=1}^{q}y_{ij}x^{+}_{j}}{\sqrt{\sum_{j=1}^{q}(x_{j}^{+})^{2}}}$. Next, we present an illustrative example of simulated data using the proposed framework for MAGDM.
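Steps 5-10 can be sketched end to end as below. Since the divergence measure of Eq. 18 is defined earlier in the paper, the pairwise divergence matrix here is taken as a given input, and all matrices are illustrative; the averaging, support, weighting, fusion, and ideal-solution ranking follow Eqs. 24-27 and the relation of [31].

```python
import numpy as np

def expert_weights(D):
    """Steps 5-8: divergence matrix -> average divergence -> support -> weights."""
    k = D.shape[0]
    d_avg = D.sum(axis=1) / k            # Eq. 24: average divergence per expert
    support = 1.0 / d_avg                # Eq. 25: low divergence -> high support
    return support / support.sum()       # Eq. 26: normalized final weights

def fuse_and_rank(Ys, w):
    """Steps 9-10: weighted fusion of expert matrices, ideal-solution ranking."""
    Y = sum(wi * Yi for wi, Yi in zip(w, Ys))          # Eq. 27
    x_plus = Y.max(axis=0)                             # ideal solution x_j^+
    rw = (Y @ x_plus) / np.sqrt((x_plus ** 2).sum())   # RW relation of [31]
    order = np.argsort(-rw)                            # best alternative first
    return Y, rw, order

# Illustrative inputs: 3 experts, 4 alternatives, 2 attributes
D = np.array([[0.000, 0.002, 0.003],
              [0.002, 0.000, 0.001],
              [0.003, 0.001, 0.000]])    # pairwise divergences, assumed given
rng = np.random.default_rng(0)
Ys = [rng.random((4, 2)) for _ in range(3)]
w = expert_weights(D)
Y, rw, order = fuse_and_rank(Ys, w)
```

The second expert, whose average divergence from the others is smallest, receives the largest weight, which is exactly the behavior the support definition of Eq. 25 encodes.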

IV Experimental analysis

IV-A Illustrative example

In this section, a numerical example (adapted from [32]) illustrates the proposed approach. A company is endeavoring to recruit a manager. The human resources division of the company arranges several selection examinations as the benefit attributes to be assessed. These objective examinations cover knowledge and skill. After these objective examinations, 17 eligible candidates remain on the shortlist. Four experts are then responsible for selecting among the candidates based on subjective examinations. The primary data of the subjective attributes, consisting of panel interview and 1-on-1 interview examinations, are provided in Table I.

TABLE I: Example of decision matrices-subjective attributes

Candidate | $u_{1}$ Panel | $u_{1}$ 1-on-1 | $u_{2}$ Panel | $u_{2}$ 1-on-1 | $u_{3}$ Panel | $u_{3}$ 1-on-1 | $u_{4}$ Panel | $u_{4}$ 1-on-1
1 | 80 | 75 | 85 | 80 | 75 | 70 | 90 | 85
2 | 65 | 75 | 60 | 70 | 70 | 77 | 60 | 70
3 | 90 | 85 | 80 | 85 | 80 | 90 | 90 | 95
4 | 65 | 70 | 55 | 60 | 68 | 72 | 62 | 72
5 | 75 | 80 | 75 | 80 | 50 | 55 | 70 | 75
6 | 80 | 80 | 75 | 85 | 77 | 82 | 75 | 75
7 | 65 | 70 | 70 | 60 | 65 | 72 | 67 | 75
8 | 70 | 60 | 75 | 65 | 75 | 67 | 82 | 85
9 | 80 | 85 | 95 | 85 | 90 | 85 | 90 | 92
10 | 70 | 75 | 75 | 80 | 68 | 78 | 65 | 70
11 | 50 | 60 | 62 | 65 | 60 | 65 | 65 | 70
12 | 60 | 65 | 65 | 75 | 50 | 60 | 45 | 50
13 | 75 | 75 | 80 | 80 | 65 | 75 | 70 | 75
14 | 80 | 70 | 75 | 72 | 80 | 70 | 75 | 75
15 | 70 | 65 | 75 | 70 | 65 | 70 | 60 | 65
16 | 90 | 95 | 92 | 90 | 85 | 80 | 88 | 90
17 | 80 | 85 | 70 | 75 | 75 | 80 | 70 | 75


Step 1. Construct the normalized decision matrices for Table I, followed by BPA generation for each alternative corresponding to each attribute of the respective expert. In this study, we have adopted a 5-term linguistic set $\{very\ low, low, medium, high, very\ high\}$ and corresponding triangular membership functions $\{\mu_{\tilde{B}_{1}},\mu_{\tilde{B}_{2}},\ldots,\mu_{\tilde{B}_{5}}\}$ to characterize each alternative's belongingness with respect to the attributes. According to the empirical investigation by the psychologist Miller [33], using fewer than five linguistic terms is not competent for apprehending sufficient information, while selecting more than nine is disproportionate to understanding the essential differences; a set of $7\pm 2$ linguistic terms is therefore appropriate for characterizing objectives and decision variables in real decision situations [33]. The evaluation of the membership functions, followed by the BPAs, is illustrated in Tables II and III using the 5-term linguistic set.

TABLE II: Membership of each alternative with respect to the attributes for expert u1u_{1}

Candidate | Panel interview: $\mu^{u_{1}}_{t_{1}\tilde{B}_{1}},\ldots,\mu^{u_{1}}_{t_{1}\tilde{B}_{5}}$ | 1-on-1 interview: $\mu^{u_{1}}_{t_{2}\tilde{B}_{1}},\ldots,\mu^{u_{1}}_{t_{2}\tilde{B}_{5}}$
1 | 0.2500 0.3333 0.5000 1.0000 0.7500 | 0.5714 0.7619 0.8571 0.5714 0.4286
2 | 0.6250 0.8333 0.7500 0.5000 0.3750 | 0.5714 0.7619 0.8571 0.5714 0.4286
3 | 0.0000 0.0000 0.0000 0.0000 1.0000 | 0.2857 0.3809 0.5714 0.9523 0.7142
4 | 0.6250 0.8333 0.7500 0.5000 0.3750 | 0.7142 0.9523 0.5714 0.3809 0.2857
5 | 0.3750 0.5000 0.7500 0.8333 0.6250 | 0.4285 0.5714 0.8571 0.7619 0.5714
6 | 0.2500 0.3333 0.5000 1.0000 0.7500 | 0.4285 0.5714 0.8571 0.7619 0.5714
7 | 0.6250 0.8333 0.7500 0.5000 0.3750 | 0.7143 0.9523 0.5714 0.3809 0.2857
8 | 0.5000 0.6667 1.0000 0.6667 0.5000 | 1.0000 0.0000 0.0000 0.0000 0.0000
9 | 0.2500 0.3333 0.5000 1.0000 0.7500 | 0.2857 0.3809 0.5714 0.9523 0.7143
10 | 0.5000 0.6667 1.0000 0.6670 0.5000 | 0.5714 0.7619 0.8571 0.5714 0.4286
11 | 1.0000 0.0000 0.0000 0.0000 0.0000 | 1.0000 0.0000 0.0000 0.0000 0.0000
12 | 0.7500 1.0000 0.5000 0.3333 0.2500 | 0.8571 0.5714 0.2857 0.1904 0.1428
13 | 0.3750 0.5000 0.7500 0.8333 0.6250 | 0.5714 0.7619 0.8571 0.5714 0.3809
14 | 0.2500 0.3333 0.5000 1.0000 0.7500 | 0.7142 0.9523 0.5714 0.3809 0.2857
15 | 0.5000 0.6667 1.0000 0.6667 0.5000 | 0.8571 0.5714 0.2857 0.1905 0.1428
16 | 0.0000 0.0000 0.0000 0.0000 1.0000 | 0.0000 0.0000 0.0000 0.0000 1.0000
17 | 0.2500 0.3333 0.5000 1.0000 0.7500 | 0.2857 0.3809 0.5714 0.9523 0.7143

TABLE III: BPAs of each alternative with respect to the attributes for expert u1u_{1}

Candidate | Panel interview: $m^{u_{1}}_{t_{1}\tilde{B}_{1}},\ldots,m^{u_{1}}_{t_{1}\tilde{B}_{5}}$ | 1-on-1 interview: $m^{u_{1}}_{t_{2}\tilde{B}_{1}},\ldots,m^{u_{1}}_{t_{2}\tilde{B}_{5}}$
1 | 0.0351 0.0408 0.0513 0.0952 0.0759 | 0.0579 0.0816 0.0937 0.0697 0.0600
2 | 0.0877 0.1021 0.0769 0.0476 0.0379 | 0.0579 0.0816 0.0937 0.0697 0.0600
3 | 0.0000 0.0000 0.0000 0.0000 0.1012 | 0.0289 0.0408 0.0625 0.1163 0.1000
4 | 0.0877 0.1021 0.0769 0.0476 0.0379 | 0.0725 0.1021 0.0625 0.0465 0.0400
5 | 0.0526 0.0612 0.0769 0.0793 0.0633 | 0.0435 0.0612 0.0937 0.0931 0.0800
6 | 0.0351 0.0408 0.0512 0.0952 0.0759 | 0.0435 0.0612 0.0937 0.0931 0.0800
7 | 0.0877 0.1021 0.0769 0.0476 0.0379 | 0.0725 0.1021 0.0625 0.0465 0.0400
8 | 0.0702 0.0816 0.1026 0.0635 0.0506 | 0.1014 0.0000 0.0000 0.0000 0.0000
9 | 0.0351 0.0408 0.0512 0.0952 0.0759 | 0.0289 0.0408 0.0625 0.1162 0.1000
10 | 0.0702 0.0816 0.1026 0.0635 0.0506 | 0.0579 0.0816 0.0938 0.0697 0.0600
11 | 0.1404 0.0000 0.0000 0.0000 0.0000 | 0.1015 0.0000 0.0000 0.0000 0.0000
12 | 0.1053 0.1224 0.0513 0.0317 0.0253 | 0.0869 0.0612 0.0313 0.0233 0.0200
13 | 0.0526 0.0612 0.0769 0.0794 0.0633 | 0.0579 0.0816 0.0938 0.0698 0.0600
14 | 0.0351 0.0408 0.0513 0.0952 0.0759 | 0.0724 0.1021 0.0625 0.0465 0.0400
15 | 0.0702 0.0816 0.1026 0.0635 0.0506 | 0.0869 0.0612 0.0313 0.0233 0.0200
16 | 0.0000 0.0000 0.0000 0.0000 0.1013 | 0.0000 0.0000 0.0000 0.0000 0.1400
17 | 0.0351 0.0408 0.0512 0.0952 0.0759 | 0.0289 0.0408 0.0625 0.1163 0.1000
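A minimal sketch of Step 1 follows. The triangular partition of the normalized score axis below is hypothetical (the paper's membership functions of Section III-A evidently have wider supports), and the column-wise normalization mirrors one consistent reading of Tables II-III, in which each column of BPAs is the corresponding membership column rescaled to sum to one over the alternatives.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical 5-term partition ("very low" ... "very high") of [0, 1]
terms = [(-0.25, 0.00, 0.25), (0.00, 0.25, 0.50), (0.25, 0.50, 0.75),
         (0.50, 0.75, 1.00), (0.75, 1.00, 1.25)]

scores = np.array([0.2, 0.5, 0.8])  # normalized scores of 3 alternatives
M = np.array([[triangular(s, *t) for t in terms] for s in scores])

# Assumed BPA generation: normalize each linguistic-term column over the
# alternatives so every column of masses sums to one (columns with all-zero
# membership would need a guard in general)
bpa = M / M.sum(axis=0, keepdims=True)
```

With these scores, each column of `bpa` sums to one, matching the column sums observed in Table III.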

Step 2. Determine the ordered weighted belief measure for each alternative corresponding to the experts with respect to the attributes.
Step 3. Determine the ordered weighted plausibility measure for each alternative corresponding to the experts with respect to the attributes to handle the conflicts and impreciseness occurring between the group of experts.
Step 4. Determine $WPBl_{w1}, WPBl_{w2}, WPBl_{w3}, WPBl_{w4}$ using the belief and plausibility measures evaluated in Step 2 and Step 3.
Step 5. Determine the weighted belief divergence measure between each pair of experts based on the attribute propositions for each alternative using Step 4; the results are reported in Table IV.

TABLE IV: The proposed weighted divergence measure between each pair of experts.

Candidate | $D(u_{1},u_{2})$ | $D(u_{1},u_{3})$ | $D(u_{1},u_{4})$ | $D(u_{2},u_{3})$ | $D(u_{2},u_{4})$ | $D(u_{3},u_{4})$
1 | 0.0006 | 0.0000 | 0.0002 | 0.0007 | 0.0016 | 0.0003
2 | 0.0031 | 0.0004 | 0.0009 | 0.0059 | 0.0074 | 0.0001
3 | 0.0036 | 0.0058 | 0.0029 | 0.0003 | 0.0001 | 0.0005
4 | 0.0003 | 0.0001 | 0.0009 | 0.0006 | 0.0001 | 0.0013
5 | 0.0004 | 0.0009 | 0.0049 | 0.0001 | 0.0026 | 0.0016
6 | 0.0000 | 0.0001 | 0.0032 | 0.0001 | 0.0034 | 0.0026
7 | 0.0000 | 0.0001 | 0.0020 | 0.0000 | 0.0022 | 0.0004
8 | 0.0005 | 0.0034 | 0.0015 | 0.0012 | 0.0002 | 0.0004
9 | 0.0039 | 0.0037 | 0.0017 | 0.0001 | 0.0005 | 0.0004
10 | 0.0011 | 0.0001 | 0.0000 | 0.0009 | 0.0010 | 0.0000
11 | 0.0036 | 0.0025 | 0.0029 | 0.0001 | 0.0001 | 0.0000
12 | 0.0093 | 0.0053 | 0.0045 | 0.0006 | 0.0009 | 0.0000
13 | 0.0004 | 0.0040 | 0.0044 | 0.0019 | 0.0021 | 0.0001
14 | 0.0002 | 0.0021 | 0.0041 | 0.0035 | 0.0060 | 0.0003
15 | 0.0065 | 0.0003 | 0.0008 | 0.0041 | 0.0028 | 0.0001
16 | 0.0045 | 0.0044 | 0.0045 | 0.0000 | 0.0000 | 0.0000
17 | 0.0001 | 0.0007 | 0.0061 | 0.0014 | 0.0081 | 0.0027
Average | 0.0023 | 0.0021 | 0.0027 | 0.0012 | 0.0023 | 0.0007


Step 6. Construct the divergence measure matrix to handle the impreciseness and to capture the inter-observational variability between the experts corresponding to the attributes for each alternative. The divergence measure matrix $D_{MM}=[d_{ij}]_{4\times 4}$ is evaluated as follows:

$$D_{MM}=\begin{pmatrix}0.0000&0.0023&0.0021&0.0027\\ 0.0023&0.0000&0.0012&0.0023\\ 0.0021&0.0012&0.0000&0.0007\\ 0.0027&0.0023&0.0007&0.0000\end{pmatrix}$$

with rows and columns indexed by the experts $u_{1},u_{2},u_{3},u_{4}$.

Step 7. Calculate the average divergence measure from $D_{MM}$ for each expert $u_{k}$ as follows: $\tilde{D}_{u_{1}}=0.0017$, $\tilde{D}_{u_{2}}=0.0014$, $\tilde{D}_{u_{3}}=0.0010$, $\tilde{D}_{u_{4}}=0.0014$.
Step 8. Calculate the support ($\tilde{S}$) for each expert as follows: $\tilde{S}_{u_{1}}=573.03$, $\tilde{S}_{u_{2}}=691.65$, $\tilde{S}_{u_{3}}=997.56$, $\tilde{S}_{u_{4}}=704.38$.
Step 9. Calculate the final weighted support for each expert as follows: $W_{u_{1}}=0.1932$, $W_{u_{2}}=0.2331$, $W_{u_{3}}=0.3362$, $W_{u_{4}}=0.2374$. The final weighted supports are shown in Table V; a higher weight for an expert indicates a stronger influence of that expert's attributes in group decision-making. Therefore, the final ranking of the experts is $u_{3}\succ u_{4}\succ u_{2}\succ u_{1}$. We have compared this expert ranking with the existing methods, and the results are reported in Table V.
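The normalization from Step 8 to Step 9 can be checked directly from the reported supports (a quick numerical verification, not part of the method itself):

```python
# Recomputing the final weights (Eq. 26) from the supports of Step 8
supports = {"u1": 573.03, "u2": 691.65, "u3": 997.56, "u4": 704.38}
total = sum(supports.values())
weights = {u: s / total for u, s in supports.items()}
ranking = sorted(weights, key=weights.get, reverse=True)
# weights approx. u1: 0.193, u2: 0.233, u3: 0.336, u4: 0.237
# ranking: ['u3', 'u4', 'u2', 'u1']
```

This reproduces the reported weights to three decimal places and the expert ranking $u_{3}\succ u_{4}\succ u_{2}\succ u_{1}$.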

TABLE V: Experts weights, ranking and comparison of the results with other methods

Methods | Weights of the experts | Ranking orders
Z. Yue [27] | $\lambda_{1}=0.2350,\lambda_{2}=0.2601,\lambda_{3}=0.2485,\lambda_{4}=0.2564$ | $u_{2}\succ u_{4}\succ u_{3}\succ u_{1}$
Z. Yue [9] | $\lambda_{1}=0.2478,\lambda_{2}=0.2502,\lambda_{3}=0.2517,\lambda_{4}=0.2503$ | $u_{3}\succ u_{4}\succ u_{2}\succ u_{1}$
Proposed (Evidential MAGDM) | $W_{u_{1}}=0.1932,W_{u_{2}}=0.2331,W_{u_{3}}=0.3362,W_{u_{4}}=0.2374$ | $u_{3}\succ u_{4}\succ u_{2}\succ u_{1}$

TABLE VI: Collective analysis of all the alternatives.

Candidate | Panel interview | 1-on-1 interview | Weights | Ranking | Shih et al. [32] | Yue [9]
1 | 0.2715 | 0.2474 | 0.4809 | 4 | 5 | 4
2 | 0.2137 | 0.2364 | 0.4166 | 11 | 14 | 12
3 | 0.2797 | 0.2869 | 0.5247 | 3 | 3 | 3
4 | 0.2093 | 0.2218 | 0.3991 | 15 | 12 | 15
5 | 0.2164 | 0.2264 | 0.4101 | 13 | 11 | 11
6 | 0.2544 | 0.2599 | 0.4763 | 5 | 4 | 5
7 | 0.2211 | 0.2241 | 0.4122 | 12 | 13 | 13
8 | 0.2513 | 0.2235 | 0.4402 | 9 | 8 | 10
9 | 0.2961 | 0.2791 | 0.5331 | 1 | 2 | 2
10 | 0.2299 | 0.2449 | 0.4396 | 10 | 10 | 9
11 | 0.1982 | 0.2101 | 0.3781 | 16 | 16 | 16
12 | 0.1796 | 0.2002 | 0.3515 | 17 | 17 | 17
13 | 0.2373 | 0.2454 | 0.4470 | 8 | 9 | 7
14 | 0.2578 | 0.2308 | 0.4530 | 7 | 6 | 8
15 | 0.2225 | 0.2187 | 0.4087 | 14 | 15 | 14
16 | 0.2929 | 0.2821 | 0.5328 | 2 | 1 | 1
17 | 0.2444 | 0.2534 | 0.4609 | 6 | 7 | 6
Ideal Solution | 0.2961 | 0.2869 | - | - | - | -


Step 10. Perform the fusion of the attributes corresponding to the experts by applying the weights determined in Step 9, and rank the alternatives as shown in Table VI. The performance of the Evidential MAGDM is compared with the existing methods in Table VI; the proposed method remains effective even though expert weightage is not supplied to it, whereas the existing approaches require this information.
Next, we show the application of the proposed Evidential MAGDM for the ensemble classifier feature fusion for the classification of retinal disorders using OCT images.

IV-B Ensemble classifier feature fusion for classification using the proposed Evidential MAGDM method

This section introduces the real-world application of the proposed Evidential MAGDM framework in the ensemble classifier feature fusion for the diagnosis of retinal disorders using OCT images illustrated in Fig. 3. To analyze the effectiveness of the proposed Evidential MAGDM for feature fusion, we have employed a publicly available [34] OCT image dataset and compared it with state-of-the-art methods. Additionally, the experimental performance of the proposed method is compared by taking distinct weights for fusing features.

IV-B1 Proposed Ensemble Feature Fusion Classifier for Diagnosis of Retinal Disorders

Several crucial ocular and systemic disorders may cause vision loss or even blindness. Retinal imaging approaches are widely employed in ophthalmology to diagnose ocular disorders non-invasively. Optical coherence tomography (OCT) is currently the most widespread imaging modality for analyzing retinal disorders such as diabetic retinopathy, age-related macular degeneration, and glaucoma [35, 36]. OCT provides a cross-sectional view of biological tissues at microscopic spatial resolution [37], together with surface information of the retinal layers, which contributes substantially to the early diagnosis of ocular disease. However, the interpretation of 3D OCT images is a time-consuming procedure for the ophthalmologist. Therefore, various automated computer-aided diagnosis systems have been introduced in recent years [38, 5, 6, 39] to analyze OCT data.
We formulate a multi-scale space problem by taking different regularization levels to control the smoothing of the OCT image dataset; the resulting scale spaces function as different experts, as illustrated in Fig. 3. We assess three scale spaces that induce multiple sets of OCT image datasets (considered as experts) and feed them to three homogeneous EfficientNetB0 [20] models for extracting the features, which are treated as attributes, while the spatial resolutions of the features are considered as alternatives. The features extracted by EfficientNetB0 [20] characterize variations in the discriminative and textural information at multiple scales. The representation of the extracted features from each path is illustrated in Fig. 3. These extracted features are then fused by the proposed Evidential MAGDM fusion module. For each feature space, the weights are generated by analyzing the inter-observational variability between the extracted features through the proposed Evidential MAGDM approach, which also handles the impreciseness emerging between the distinct scale spaces. Finally, the fused features serve as input to a random forest classifier (RFC) using the transfer learning mechanism for classification. The Evidential MAGDM method apprehends the variations in the features during the fusion procedure by optimizing the impreciseness of the multi-scale features, and computes the weights such that the model's performance is not biased, suppressing the conflicts emerging in the feature information.
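The fusion step of the module can be sketched as follows. The feature arrays are synthetic stand-ins for pooled EfficientNetB0 embeddings, and the weights are fixed illustrative values in place of those computed by the Evidential MAGDM procedure.

```python
import numpy as np

# Three scale-space "experts": feature matrices extracted by homogeneous
# backbones from differently smoothed copies of the same OCT batch.
# Shapes mimic 1280-D pooled EfficientNetB0 embeddings; values are synthetic.
rng = np.random.default_rng(42)
n_images, n_features = 8, 1280
F = [rng.random((n_images, n_features)) for _ in range(3)]

# Weights as would be produced by the Evidential MAGDM module (Steps 1-8);
# plugged in here as fixed illustrative values rather than recomputed.
w = np.array([0.5, 0.3, 0.2])

# Step 9 analogue: weighted fusion of the three feature spaces
fused = sum(wi * Fi for wi, Fi in zip(w, F))   # shape (n_images, n_features)
# `fused` is then fed to the random forest classifier for the
# 7-class OCTDL prediction.
```

The fused matrix keeps the per-image feature dimensionality, so any downstream classifier trained on single-scale features can consume it unchanged.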

Figure 3: The proposed Evidential MAGDM feature fusion module integrated into the ensemble classifier to analyze inter-observational variability between the features.

IV-B2 Dataset and Implementation Details

To perform the empirical study, we have used the publicly available OCTDL dataset [34], which includes 2064 images spanning a range of retinal diseases and disorders. The dataset comprises high spatial resolution OCT B-scans, which enable the visualization of sub-surface layer information centered on the fovea and the choroidal blood vessels. The description of the dataset, covering seven classes, is given in Table VII and visualized in Fig. 4.

TABLE VII: OCTDL dataset distribution corresponding to the retinal disorder

Retinal Disorder | Number of Scans | Class
Normal | 332 | NO
Age-related macular degeneration | 1231 | AMD
Diabetic macular edema | 147 | DME
Epiretinal membrane | 155 | ERM
Retinal artery occlusion | 22 | RAO
Retinal vein occlusion | 101 | RVO
Vitreomacular interface disease | 76 | VID
Total | 2064 | -

The dataset is split 80:20 into training and testing parts. A data augmentation strategy is utilized, yielding 1200 images for each class of the OCTDL dataset, and the images are resized to a spatial resolution of $224\times 224$. The model is trained on the training set for 100 epochs with a cosine learning rate schedule for feature extraction, followed by integration of the proposed Evidential MAGDM feature fusion module, whose output is fed to the RFC for classification. To analyze the performance of the proposed Evidential MAGDM in the ensemble classifier feature fusion, we compute seven evaluation measures: Accuracy, Sensitivity, Specificity, Precision, F1 score, AUC (area under the receiver operating characteristic curve), and Kappa. They can be formulated as follows:

$$Accuracy=\dfrac{TP+TN}{TP+FP+TN+FN},\quad Sensitivity=\dfrac{TP}{TP+FN}$$
$$Specificity=\dfrac{TN}{TN+FP},\quad Precision=\dfrac{TP}{TP+FP}$$
$$F_{1}=\dfrac{2\times Precision\times Sensitivity}{Precision+Sensitivity}$$
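These binary formulas extend to the seven-class OCTDL setting by computing them one-vs-rest per class and macro-averaging. A minimal sketch, with made-up confusion-matrix counts:

```python
def binary_metrics(tp, fp, tn, fn):
    """Evaluation measures from the entries of a binary confusion matrix."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)            # sensitivity (recall)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    return acc, sens, spec, prec, f1

# Illustrative counts for one class treated one-vs-rest
acc, sens, spec, prec, f1 = binary_metrics(tp=50, fp=10, tn=35, fn=5)
# acc = 0.85; f1 = 2*50 / (2*50 + 10 + 5) ~ 0.8696
```

Macro-averaging the per-class values then yields the dataset-level figures of the kind reported in Table VIII.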
Figure 4: Description of the various retinal disorders included in the OCTDL dataset.

IV-B3 Experimental Results

We investigate the efficacy of the proposed Evidential MAGDM module by taking different combinations of weights for feature fusion in the proposed model illustrated in Fig. 3 for the OCTDL dataset. The comparison is accomplished with the same network backbone (EfficientNetB0) and the same training and testing strategies. We have considered a set of distinct weights to analyze the effectiveness of the proposed method. The experimental performance is reported in Table VIII in terms of the evaluation measures presented in Section IV-B2 over the OCTDL dataset for the classification of diverse retinal disorders in OCT images. The evaluation measures for the proposed Evidential MAGDM feature fusion module appear in the last row of Table VIII.

TABLE VIII: Average evaluation measures over the OCTDL dataset with the various considered weights for fusing the extracted features, compared with the state-of-the-art methods

Method | Accuracy | Sensitivity | Specificity | Precision | F1 | AUC | Kappa
EfficientB0(1,0,0) | 0.877 | 0.830 | 0.976 | 0.836 | 0.880 | 0.956 | 0.802
EfficientB0(0,1,0) | 0.901 | 0.861 | 0.980 | 0.873 | 0.903 | 0.962 | 0.839
EfficientB0(0,0,1) | 0.863 | 0.763 | 0.971 | 0.814 | 0.853 | 0.961 | 0.775
EfficientB0(0.5,0.5,0) | 0.901 | 0.855 | 0.980 | 0.881 | 0.902 | 0.973 | 0.840
EfficientB0(0.5,0.25,0.25) | 0.906 | 0.868 | 0.982 | 0.883 | 0.907 | 0.977 | 0.848
ResNet50 [34] | 0.846 | 0.846 | - | 0.898 | 0.866 | 0.988 | -
VGG16 [34] | 0.859 | 0.859 | - | 0.888 | 0.869 | 0.977 | -
Sunija et al. [38] | 0.882 | 0.882 | 0.883 | 0.884 | 0.881 | 0.971 | 0.807
George et al. [5] | 0.873 | 0.873 | - | 0.879 | 0.874 | 0.975 | 0.795
EfficientB0 (Evidential MAGDM) | 0.911 | 0.870 | 0.982 | 0.901 | 0.912 | 0.981 | 0.855

Figure 5: The confusion matrix of the proposed Evidential MAGDM method in the homogeneous ensemble classifier feature fusion with different weights (a) EfficientB0 (Evidential MAGDM), (b) EfficientB0(1,0,0), (c) EfficientB0(0,1,0), (d) EfficientB0(0,0,1), (e) EfficientB0(0.5,0.5,0), (f) EfficientB0(0.5,0.25,0.25).

The proposed Evidential MAGDM module achieves 0.911 Accuracy, which is favorable compared to the other considered weights. When only one expert is used for the decision (classification), the performance of the model is reduced compared to the proposed Evidential MAGDM in terms of all the evaluation measures, especially for the $(1,0,0)$ and $(0,0,1)$ weights. When the model is evaluated with the weight $(0,0,1)$, its Sensitivity is lower than the others, which indicates that the model cannot capture the discriminative feature representation when conflict and impreciseness are included, increasing the false positives. The performance of the model improves for the $(0.5,0.25,0.25)$ weights compared to the other considered weights, while remaining lower than the proposed strategy. It can be observed that taking arbitrary weights for the experts in feature fusion degrades the model's performance, and choosing the optimal weight for feature fusion manually is tedious. The confusion matrices for all the considered weights and for the proposed Evidential MAGDM module are shown in Fig. 5; the classification by the proposed method is favorable compared to the other experts' weights. To analyze the behavior of the proposed Evidential MAGDM in the ensemble model for classification, the ROC curve is plotted. The classification performance of the proposed method is satisfactory at different thresholds compared to the other experts' weights, which indicates the effectiveness of the proposed Evidential MAGDM approach in handling the impreciseness and considering the inter-observational variability between the features extracted through different experts. The performance of the proposed method is also compared with the state-of-the-art methods in Table VIII.
The proposed method achieves a higher Accuracy of 0.911, which shows the robustness of the Evidential MAGDM approach in the ensemble classifier fusion. Additionally, the proposed method achieves higher evaluation metrics compared to [34, 38, 5]. The experimental results indicate the efficacy of the proposed Evidential MAGDM in the ensemble classifier feature fusion compared to the utilization of a single classifier. The proposed method can support decision-making for the diagnosis of retinal disorders when several ophthalmologists are involved.

IV-C Discussion

Through the experimental analysis of the proposed Evidential MAGDM approach in the illustrative example and the ensemble classifier feature fusion, the effectiveness of the proposed method has been affirmed. The proposed method has several advantages over existing MAGDM approaches. It can effectively handle the conflicts occurring between the group of experts, which can influence decision-making. The existing methods require attribute weight information for the respective experts; a lack of such information may affect their performance, whereas it does not affect the decision-making of the proposed method. Additionally, the ordered weighted belief computed for each alternative considers the inter-observational variability between the other alternatives for the respective attribute by handling the impreciseness and integrating the partitioning of the attribute domain into linguistic terms. The ranking results of the group of experts are illustrated in Table V and compared with the existing approaches. It can be noted that the lack of attribute weights does not affect the final outcome in Table V, whereas the methods presented in [27, 9] utilize attribute weights and may be affected when this information is missing. The proposed Evidential MAGDM has a vast range of applications in the medical domain. We have analyzed its effectiveness by integrating it into the ensemble classifier feature fusion for the diagnosis of retinal disorders using OCT images. The performance of the proposed approach, demonstrated in Table VIII, indicates its effectiveness over the existing models. The features extracted at multiple scales capture the distinct types of feature information contained in the original source.
Therefore, the weight determination method must be chosen appropriately for fusing the feature information so that biases and imbalances do not emerge, which is achieved by the proposed method. The experimental results of the proposed method in the ensemble classifier feature fusion indicate the effectiveness of the computed weights for feature fusion. Further, the confusion matrices in Fig. 5 show that the classification performance of the proposed method is favorable compared to the other fusion methods.

V Conclusion

In this study, we propose a novel Evidential MAGDM method that constructs an ordered weighted belief divergence measure to compute the expert weightage for MAGDM. To evaluate the ordered weighted belief and plausibility measures, we have proposed a novel BPA generation method for computing the degree of belief of each alternative corresponding to the attributes of each expert. The proposed methodology captures the inter-observational behavior occurring between the alternatives for the respective attributes and handles the conflicts and impreciseness emerging between the group of experts. To analyze the effectiveness of the proposed MAGDM method, we have presented an illustrative example and ranked the preferences between the alternatives. Further, we have shown a real-world application for the diagnosis of retinal disorders using OCT images by modeling ensemble classifier feature fusion based on the proposed Evidential MAGDM model. The empirical study of the proposed method is accomplished on the publicly available OCTDL dataset, comparing different experts' weights as well as state-of-the-art methods. The experimental performance indicates the effectiveness of the proposed Evidential MAGDM method, which outperforms the compared approaches. In future work, we aim to incorporate multimodality imaging techniques for the diagnosis and localization of retinal disorders by enhancing the Evidential MAGDM approach for the feature fusion mechanism.

References

  • [1] D. R. Anderson, D. J. Sweeney, T. A. Williams, J. D. Camm, and J. J. Cochran, An introduction to management science: quantitative approach.   Cengage learning, 2018.
  • [2] T. Huang, X. Tang, Q. Zhang, Z. Cai, and W. Pedrycz, “An automatic consensus reaching approach with preference adjustment willingness for group decision-making,” IEEE Transactions on Fuzzy Systems, vol. 31, no. 10, pp. 3331–3345, 2023.
  • [3] S. Wang, J. Wu, F. Chiclana, Q. Sun, and E. Herrera-Viedma, “Two-stage feedback mechanism with different power structures for consensus in large-scale group decision making,” IEEE Transactions on Fuzzy Systems, vol. 30, no. 10, pp. 4177–4189, 2022.
  • [4] Z. Cao, C.-H. Chuang, J.-K. King, and C.-T. Lin, “Multi-channel eeg recordings during a sustained-attention driving task,” Scientific data, vol. 6, no. 1, p. 19, 2019.
  • [5] N. George, L. Shine, N. Ambily, B. Abraham, and S. Ramachandran, “A two-stage cnn model for the classification and severity analysis of retinal and choroidal diseases in oct images,” International Journal of Intelligent Networks, vol. 5, pp. 10–18, 2024.
  • [6] S. Al-Fahdawi, A. S. Al-Waisy, D. Q. Zeebaree, R. Qahwaji, H. Natiq, M. A. Mohammed, J. Nedoma, R. Martinek, and M. Deveci, “Fundus-deepnet: Multi-label deep learning classification system for enhanced detection of multiple ocular diseases through data fusion of fundus images,” Information Fusion, vol. 102, p. 102059, 2024.
  • [7] A. Hussain, S. U. Khan, I. Rida, N. Khan, and S. W. Baik, “Human centric attention with deep multiscale feature fusion framework for activity recognition in internet of medical things,” Information Fusion, vol. 106, p. 102211, 2024.
  • [8] J. Ye, B. Sun, J. Bai, Q. Bao, X. Chu, and K. Bao, “A preference-approval structure-based non-additive three-way group consensus decision-making approach for medical diagnosis,” Information Fusion, vol. 101, p. 102008, 2024.
  • [9] Z. Yue, “Approach to group decision making based on determining the weights of experts by using projection method,” Applied Mathematical Modelling, vol. 36, no. 7, pp. 2900–2910, 2012.
  • [10] K. Pang, L. Martínez, N. Li, J. Liu, L. Zou, and M. Lu, “A concept lattice-based expert opinion aggregation method for multi-attribute group decision-making with linguistic information,” Expert Systems with Applications, vol. 237, p. 121485, 2024.
  • [11] Z. Zhang and Z. Li, “Personalized individual semantics-based consistency control and consensus reaching in linguistic group decision making,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 9, pp. 5623–5635, 2021.
  • [12] Y. Dong, Q. Zha, H. Zhang, and F. Herrera, “Consensus reaching and strategic manipulation in group decision making with trust relationships,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 10, pp. 6304–6318, 2020.
  • [13] J. R. French Jr, “A formal theory of social power.” Psychological review, vol. 63, no. 3, p. 181, 1956.
  • [14] H. Theil, “On the symmetry approach to the committee decision problem,” Management Science, vol. 9, no. 3, pp. 380–393, 1963.
  • [15] Z. Yue, “Deriving decision maker’s weights based on distance measure for interval-valued intuitionistic fuzzy group decision making,” Expert Systems with Applications, vol. 38, no. 9, pp. 11 665–11 670, 2011.
  • [16] E. Forman and K. Peniwati, “Aggregating individual judgments and priorities with the analytic hierarchy process,” European journal of operational research, vol. 108, no. 1, pp. 165–169, 1998.
  • [17] B. Blagojevic, B. Srdjevic, Z. Srdjevic, and T. Zoranovic, “Heuristic aggregation of individual judgments in ahp group decision making using simulated annealing algorithm,” Information Sciences, vol. 330, pp. 260–273, 2016.
  • [18] L. Jin, R. Mesiar, and G. Qian, “Weighting models to generate weights and capacities in multicriteria group decision making,” IEEE Transactions on Fuzzy Systems, vol. 26, no. 4, pp. 2225–2236, 2017.
  • [19] F. Xiao, J. Wen, and W. Pedrycz, “Generalized divergence-based decision making method with an application to pattern classification,” IEEE Transactions on Knowledge and Data Engineering, 2022.
  • [20] M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for convolutional neural networks,” in International conference on machine learning.   PMLR, 2019, pp. 6105–6114.
  • [21] A. P. Dempster, “Upper and lower probabilities induced by a multivalued mapping,” in Classic works of the Dempster-Shafer theory of belief functions.   Springer, 2008, pp. 57–72.
  • [22] G. Shafer, A mathematical theory of evidence.   Princeton university press, 1976, vol. 42.
  • [23] Y. He, A. B. Hamza, and H. Krim, “A generalized divergence measure for robust image registration,” IEEE Transactions on Signal Processing, vol. 51, no. 5, pp. 1211–1220, 2003.
  • [24] S. Kullback, Information Theory and Statistics. Courier Corporation, 1997.
  • [25] J. Lin, “Divergence measures based on the Shannon entropy,” IEEE Transactions on Information Theory, vol. 37, no. 1, pp. 145–151, 1991.
  • [26] H. Wang, X. Deng, W. Jiang, and J. Geng, “A new belief divergence measure for Dempster–Shafer theory based on belief and plausibility function and its application in multi-source data fusion,” Engineering Applications of Artificial Intelligence, vol. 97, p. 104030, 2021.
  • [27] Z. Yue, “A method for group decision-making based on determining weights of decision makers using TOPSIS,” Applied Mathematical Modelling, vol. 35, no. 4, pp. 1926–1936, 2011.
  • [28] D. Chakraborty, D. Guha, and B. Dutta, “Multi-objective optimization problem under fuzzy rule constraints using particle swarm optimization,” Soft Computing, vol. 20, pp. 2245–2259, 2016.
  • [29] M. O’Hagan, “Aggregating template or rule antecedents in real-time expert systems with fuzzy set logic,” in Twenty-Second Asilomar Conference on Signals, Systems and Computers, vol. 2. IEEE, 1988, pp. 681–689.
  • [30] F. Xiao, “GEJS: A generalized evidential divergence measure for multisource information fusion,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 53, no. 4, pp. 2246–2258, 2022.
  • [31] G. R. Jahanshahloo, F. H. Lotfi, and M. Izadikhah, “An algorithmic method to extend TOPSIS for decision-making problems with interval data,” Applied Mathematics and Computation, vol. 175, no. 2, pp. 1375–1384, 2006.
  • [32] H.-S. Shih, H.-J. Shyur, and E. S. Lee, “An extension of TOPSIS for group decision making,” Mathematical and Computer Modelling, vol. 45, no. 7-8, pp. 801–813, 2007.
  • [33] G. A. Miller, “The magical number seven, plus or minus two: Some limits on our capacity for processing information,” Psychological Review, vol. 63, no. 2, p. 81, 1956.
  • [34] M. Kulyabin, A. Zhdanov, A. Nikiforova, A. Stepichev, A. Kuznetsova, M. Ronkin, V. Borisov, A. Bogachev, S. Korotkich, P. A. Constable et al., “OCTDL: Optical coherence tomography dataset for image-based deep learning methods,” Scientific Data, vol. 11, no. 1, p. 365, 2024.
  • [35] U. Schmidt-Erfurth, A. Sadeghipour, B. S. Gerendas, S. M. Waldstein, and H. Bogunović, “Artificial intelligence in retina,” Progress in Retinal and Eye Research, vol. 67, pp. 1–29, 2018.
  • [36] P. M. Burlina, N. Joshi, M. Pekala, K. D. Pacheco, D. E. Freund, and N. M. Bressler, “Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks,” JAMA Ophthalmology, vol. 135, no. 11, pp. 1170–1176, 2017.
  • [37] M. E. van Velthoven, D. J. Faber, F. D. Verbraak, T. G. van Leeuwen, and M. D. de Smet, “Recent developments in optical coherence tomography for imaging the retina,” Progress in Retinal and Eye Research, vol. 26, no. 1, pp. 57–77, 2007.
  • [38] A. Sunija, S. Kar, S. Gayathri, V. P. Gopi, and P. Palanisamy, “OCTNet: A lightweight CNN for retinal disease classification from optical coherence tomography images,” Computer Methods and Programs in Biomedicine, vol. 200, p. 105877, 2021.
  • [39] M. Elsharkawy, A. Sharafeldeen, F. Khalifa, A. Soliman, A. Elnakib, M. Ghazal, A. Sewelam, A. Thanos, H. Sandhu, and A. El-Baz, “A clinically explainable AI-based grading system for age-related macular degeneration using optical coherence tomography,” IEEE Journal of Biomedical and Health Informatics, 2024.