
Pairwise Comparisons Matrix Decomposition into Approximation and Orthogonal Component Using Lie Theory

W.W. Koczkodaj Computer Science
Laurentian University
Sudbury, Ontario P3E 2C6, Canada
[email protected]
V.W. Marek Department of Computer Science
University of Kentucky
Lexington, KY 40506, USA
[email protected]
 and  Y. Yayli Department of Mathematics, Faculty of Science, Ankara University
TR-06100 Ankara, Turkey
[email protected]
Abstract.

This paper examines the use of Lie group and Lie algebra theory to construct the geometry of pairwise comparisons matrices. The Hadamard product (also known as the coordinatewise, coordinate-wise, elementwise, or element-wise product) is analyzed in the context of inconsistency and inaccuracy by a decomposition method.

The two components of the decomposition are the approximation component and the orthogonal component. The decomposition constitutes a theoretical foundation for multiplicative pairwise comparisons.

Keywords: approximate reasoning, subjectivity, inconsistency, consistency-driven, pairwise comparison, matrix Lie group, Lie algebra, approximation, orthogonality, decomposition.

1. Introduction

Pairwise comparisons (PCs) take place when we somehow compare two entities (objects or abstract concepts). According to [13], Raymond Llull is credited with the first documented use of pairwise comparisons in “A system for the election of persons” (Artifitium electionis personarum) before 1283 and in “An electoral system” (De arte eleccionis) in 1299. Both manuscripts were handwritten (no scientific publication process had been established yet) for deciding the winner of elections.

There are two variants of pairwise comparisons: multiplicative and additive. The multiplicative PCs variant reflects a relationship:

A is x times [comparison operator] than B

The additive type expresses: “by how much (the percentage is often used) one entity is [comparison operator] than another entity”. The comparison operator could be: bigger, better, more important, or similar comparisons (see [37]).

The multiplicative pairwise comparison is determined by the ratio of two entities. For instance, the constant \pi, one of the most recognized ratios in mathematics, is defined as the ratio of a circle's circumference to its diameter.

Entities may be physical objects or abstract concepts (e.g., software dependability or software safety). In both cases, we can provide an estimate based on expert assessment. However, physical measurements (e.g., area or weight) should be considered if they are available.

In practice, multiplicative (or ratio) PCs are more popular than additive PCs. However, they are mathematically more challenging than additive comparisons. Additive PCs can be produced from the multiplicative form by a logarithmic mapping (introduced in [10]). Values of additive pairwise comparisons can be both negative and positive real numbers. The logarithmic mapping is used to prove the convergence of inconsistency reduction and to interpret the limit of convergence in [16, 27, 28]. Fuzzy extensions of pairwise comparisons are presented in [11, 32].

Pairwise comparisons are usually represented by a PC matrix. In the case of multiplicative PCs, it is a matrix of ratios of entities with 1s on the main diagonal (for the entity being compared to itself) and reciprocal values (x and 1/x) in the upper and lower triangles, as it is also reasonable to assume that the ratio B/A is the reciprocal of the ratio A/B. By definition, ratios are strictly positive real numbers.

For entities which are abstract concepts (e.g., software quality and software safety), the division operation is undefined but using the ratio still makes sense (e.g., by stating: “three times more important”). For this reason, ratios are often given by expert assessments.

The main goal of PCs is to split 1 into n real values assigned to n entities E_{i}, i=1,\ldots,n. We call them weights.

1.1. Problem outline

In [23] (a follow-up to [26]), Smarzewski observed that PC matrices form a group under the Hadamard product. In this paper, we show that it is, in fact, a Lie group. This is the first application of Lie theory to the pairwise comparisons method.

1.2. Group theory applications in Computer Science

In pairwise comparisons, abelian groups were first used in [9]. More recently, abelian groups were used in [39, 8, 31].

An intensive Internet search identified [5] as the first computer science publication related to Lie theory. Most computer science publications that use Lie theory are related to graphics and vision (see [4, 14, 20, 33, 42]), machine learning (see [12, 33, 34]), and networks (see [43]).

1.3. Structure of this paper

A brief introduction to the pairwise comparisons method is provided in Section 2. Section 2 also includes a simple example of using the PC method for a generic exam. Section 3 provides reasoning for the necessity of theorems and propositions. Section 4 shows the construction of Lie groups and Lie algebra for PC matrices. Section 5 introduces the exponential transformation and its properties. Section 6 presents the main theorem. Section 7 outlines the generalization of our results.

2. Example of using pairwise comparisons in practice

A Monte Carlo experiment on pairwise comparisons accuracy improvement was presented in [21, 22]. It provided statistical evidence that the accuracy gain from using pairwise comparisons is substantial. For simplicity, let us assume that we have three entities to compare: A, B, and C. The three comparisons are: A to B, A to C, and B to C. We assume the reciprocity of the PC matrix M: m_{ji}=1/m_{ij}, which is reasonable (when comparing B to A, we expect to get the inverse of the comparison of A to B). The exam is hence represented by the following PC matrix M:

(2.1) M=[m_{ij}]=\begin{bmatrix}1&A/B&A/C\\ B/A&1&B/C\\ C/A&C/B&1\end{bmatrix}

A/B reads “the ratio between A and B” and may not necessarily be a result of division (in the case of the exam problem, the use of the division operation makes no mathematical sense but using the ratio is still valid).

Ratios of three entities create a triad (A/B, A/C, B/C). This triad is said to be consistent provided that A/B * B/C = A/C. It is illustrated by Fig. 1. Random numbers of dots are hard to count but can be compared in pairs as two random clouds. [A/B] reflects the ratio of dots as assessed by expert opinion.

A large enough number of dots represents the concept of numerosity. They may, for example, represent votes of experts. In an emergency situation (e.g., a mine rescue), it is impossible to count votes in a short period of time. The exact number of votes exists, but all we need is to assess the numerosity of the votes.

Symbolically, in a PC matrix M, each triad (or cycle) is defined by (m_{ik}, m_{ij}, m_{kj}). Such a triad is consistent provided that m_{ik} * m_{kj} = m_{ij}. When all triads are consistent (known as the consistency condition or transitivity condition), the entire PC matrix is considered consistent.

Figure 1. An inconsistency indicator cycle

Looking at the above exam grading case, we have described a pairwise comparisons method which can be used to construct a PC matrix. The solution to the PC matrix is a vector of weights which are the geometric means of the rows (for more detail see, for example, [30]). We usually normalize it so that the weights sum to 1. The justification for the use of the vector w=[v_{i}] of geometric means (GM) of rows is not trivial and is the subject of this paper. The exact reconstruction of the PC matrix (say M) via M=[v_{i}/v_{j}] is guaranteed only for consistent matrices.

In our example, the computed weights (as normalized geometric means of rows) are approximately: [0.58, 0.31, 0.11]. By looking at the result, we can conclude that problem A is the most difficult, with the weight 0.58. The easiest problem is C, with the weight 0.11.

One of the challenges of pairwise comparisons is the inconsistency of assessments. It is well illustrated by Fig. 1. It seems that a trivial mistake took place: 6 should be in place of 5, since 2*3 gives this value. However, there are no grounds to assume that 2 and 3 are accurate assessments. No specific accuracy assumption is made about any assessment.
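The exam example can be reproduced numerically. Below is a minimal Python sketch (assuming the ratios shown in Fig. 1: A/B = 2, A/C = 5, B/C = 3); it checks the triad consistency condition and computes the normalized geometric-mean weights quoted above.

```python
import numpy as np

# PC matrix built from the ratios assumed from Fig. 1: A/B = 2, A/C = 5, B/C = 3,
# with 1s on the diagonal and reciprocal entries below it.
M = np.array([[1.0, 2.0, 5.0],
              [1/2, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Consistency condition for the triad: m12 * m23 should equal m13.
print(np.isclose(M[0, 1] * M[1, 2], M[0, 2]))   # False, since 2*3 = 6 != 5

# Weights: geometric means of the rows, normalized to sum to 1.
gm = np.prod(M, axis=1) ** (1.0 / M.shape[0])
weights = gm / gm.sum()
print(np.round(weights, 2))                     # [0.58 0.31 0.11]
```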

3. The problem formulation

This paper examines the use of group theory to construct the geometry of pairwise comparisons matrices and to improve the consistency-driven process of [17, 7]. The Hadamard product (also known as the coordinatewise, coordinate-wise, elementwise, or element-wise product) is examined in the context of inconsistency and inaccuracy. To achieve this goal, we prove that PC matrices form a Lie group. Subsequently, the Lie algebra of the Lie group of PC matrices is constructed. A decomposition method for PC matrices under the Hadamard product is introduced. One of the components is an approximation PC matrix and the other, orthogonal, component is interpreted as the approximation inaccuracy. The rationale for selecting the PC matrix components is also provided in this paper. Subgroups of the PC matrix Lie group have been identified and presented as an internal direct product.

4. Lie groups and Lie algebras of PC matrices

The monograph [41] stipulates that, “Intuitively, a manifold is a generalization of curves and surfaces to higher dimensions. It is locally Euclidean in that every point has a neighborhood, called a chart, homeomorphic to an open subset of \mathbb{R}^{n}”. We find this description sufficient for computer science researchers.

A group that is also a differentiable (or smooth) manifold is called a Lie group (after its proponent Sophus Lie). According to [3], a Lie group is an abstract group \mathcal{G} with a smooth structure, that is:

Definition 4.1.

  1. (1) \mathcal{G} is a group,

  2. (2) \mathcal{G} is a smooth manifold,

  3. (3) the operation \mathcal{G}\times\mathcal{G}\longrightarrow\mathcal{G}, (x,y)\longrightarrow xy^{-1} is smooth.

A matrix Lie group is a Lie group whose elements are matrices.

Definition 4.2.

The Lie algebra of a Lie group \mathcal{G} is the vector space \mathbb{T}_{e}\mathcal{G} equipped with the Lie bracket operation [\,,\,] of vector fields.

The bracket operation [\,,\,] is assumed to be bilinear, antisymmetric, and to satisfy the Jacobi identity: Cyclic([X,[Y,Z]])=0 for all X,Y,Z belonging to this algebra.

Lie groups and Lie algebras have been analyzed in [38, 2, 18, 6].

Proposition 4.3.

For every dimension n>0, the following group:

\mathcal{G}=\{M=[m_{ij}]_{n\times n}\mid M\cdot M^{T}=I,\ m_{ij}=\frac{1}{m_{ji}}>0\text{ for every }i,j=1,2,\ldots,n\}

is an abelian group of n\times n PC matrices with an operation

\cdot:\mathcal{G}\times\mathcal{G}\longmapsto\mathcal{G} \quad\text{defined by}\quad (M,N)\longrightarrow M\cdot N=[m_{ij}\cdot n_{ij}]

where “\cdot” is the Hadamard product.

Proof.

To begin, we know that I\cdot I^{-1}=I, so I\in\mathcal{G}, where I=[\eta_{ij}]_{n\times n} is the identity element of the group and satisfies the condition \eta_{ij}=1 for every i,j=1,\ldots,n.

Now, observe that if M\in\mathcal{G} then M\cdot M^{T}=I and M^{T}=M^{-1}. Thus M^{T}\cdot(M^{T})^{T}=I, so M^{-1}\in\mathcal{G}.

Let M and N be arbitrary elements of \mathcal{G}. Observe that, by the properties of \mathcal{G}:

NM(NM)^{T}=(NM)(N^{T}M^{T})=N(MM^{T})N^{T}=NIN^{T}=I.

Hence \mathcal{G} is closed and commutative under the Hadamard product. Consequently, we see that (\mathcal{G},\cdot) is an abelian group. ∎
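As a numerical illustration of Proposition 4.3 (not part of the proof), the following Python sketch with NumPy (the helper random_pc_matrix is ours) checks that a reciprocal PC matrix satisfies M \cdot M^{T}=I under the Hadamard product and that the Hadamard product of two PC matrices is again a PC matrix.

```python
import numpy as np

def random_pc_matrix(n, rng):
    """A random n x n multiplicative PC matrix: positive entries, 1s on the diagonal, m_ji = 1/m_ij."""
    M = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            M[i, j] = rng.uniform(0.1, 10.0)
            M[j, i] = 1.0 / M[i, j]
    return M

rng = np.random.default_rng(0)
M, N = random_pc_matrix(4, rng), random_pc_matrix(4, rng)
I = np.ones((4, 4))                          # the identity element: all entries equal to 1

print(np.allclose(M * M.T, I))               # True: M . M^T = I (Hadamard product)
print(np.allclose((M * N) * (M * N).T, I))   # True: M . N is again a PC matrix
```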

Definition 4.4.

Let \mathcal{G} be the PC matrix Lie group and M(t) be a path through \mathcal{G}. We say that M(t) is smooth if each entry of M(t) is differentiable. The derivative of M(t) at the point t is denoted by M^{\prime}(t), which is the matrix whose ij^{th} element is the derivative of the ij^{th} element of M(t).

Corollary 4.5.

The abelian group \mathcal{G} is a PC matrix Lie group.

Proof.

We know that the Hadamard product “\cdot” and the operation M\longrightarrow M^{-1}=M^{T} are differentiable for every PC matrix M.
Thus, \mathcal{G} is a PC matrix Lie group. ∎

We also know that the tangent space of any matrix Lie group at unity is a vector space.

The tangent space of any matrix group \mathcal{G} at unity I will be denoted by T_{I}(\mathcal{G})=g, where I is the unit matrix of \mathcal{G}.

Theorem 4.6.

The tangent space of the PC matrix Lie group \mathcal{G} at unity I consists of all n\times n real matrices X that satisfy X+X^{T}=0.

Proof.

Recall that any matrix A\in\mathcal{G} satisfies the condition A\cdot A^{T}=I. Let us consider a smooth path A(t) such that A(0)=I.

We know that:

(4.1) A(t)\cdot A(t)^{T}=I

for all parameters t. By differentiating equation 4.1, we get

(4.2) \frac{d}{dt}(A(t))\cdot A(t)^{T}+A(t)\cdot\frac{d}{dt}(A(t)^{T})=0

Considering that:

\frac{d}{dt}(A(t)^{T})=\left(\frac{d}{dt}(A(t))\right)^{T}

equation 4.2 can be rewritten as

A^{\prime}(t)\cdot A(t)^{T}+A(t)\cdot A^{\prime}(t)^{T}=0

and at the point t=0t=0, we obtain:

A^{\prime}(0)+A^{\prime}(0)^{T}=0

Thus, any tangent vector X=A^{\prime}(0) satisfies X+X^{T}=0. Conversely, any matrix X with X+X^{T}=0 is the tangent vector at t=0 of the smooth path A(t)=[e^{t\,x_{ij}}], which lies in \mathcal{G}, so the tangent space consists of all such matrices. ∎

Corollary 4.7.

The Lie algebra of \mathcal{G}, denoted by T_{I}(\mathcal{G}), is the space of skew-symmetric n\times n matrices. Observe that:

dim(\mathcal{G})=dim T_{I}(\mathcal{G})=\frac{n\cdot(n-1)}{2}.
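A quick numerical check of the corollary (a sketch; the entrywise logarithm used below matches the exponential map introduced in Section 5): the entrywise logarithm of a PC matrix is skew-symmetric, with n(n-1)/2 free parameters above the diagonal.

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
M = np.ones((n, n))
for i in range(n):
    for j in range(i + 1, n):
        M[i, j] = rng.uniform(0.1, 10.0)
        M[j, i] = 1.0 / M[i, j]

X = np.log(M)                       # entrywise logarithm of the PC matrix
print(np.allclose(X + X.T, 0))      # True: X is skew-symmetric (X + X^T = 0)
print(n * (n - 1) // 2)             # 6 = dim(G) = dim(T_I(G)) for n = 4
```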

5. Exponential map and PC matrices

The exponential map is a map from the Lie algebra of a given Lie group to that group. In this section, we will introduce the exponential transformation from g (the tangent space at the identity element of the PC matrix Lie group \mathcal{G}) to \mathcal{G}.

Let \mathcal{G} be the PC matrix Lie group and g be the Lie algebra of \mathcal{G}. Then, the exponential map:

exp: g \longrightarrow \mathcal{G}
A=[a_{ij}]_{n\times n} \longrightarrow exp[A]=[e^{a_{ij}}]

has the following properties:

  1. (1) G_{1}=\{\delta(t)=e^{tA}\mid t\in\mathbb{R},\ A\in g\} is a one-parameter subgroup of \mathcal{G}.

  2. (2) Let A and B be two elements of the Lie algebra g. Then, the following equality holds:

     e^{A+B}=e^{A}\cdot e^{B}

  3. (3) Given any matrix A\in g, the tangent vector of the smooth path \gamma(t)=e^{tA} is equal to A\cdot e^{tA}, that is,

     \gamma^{\prime}(t)=\frac{d}{dt}e^{tA}=A\cdot e^{tA}

  4. (4) For any matrix A\in g,

     (e^{A})^{-1}=e^{-A}=(e^{A})^{T}=e^{A^{T}}

     and

     (e^{A})\cdot(e^{A})^{T}=e^{A}\cdot e^{A^{T}}=e^{A+A^{T}}=e^{0}=I.

  5. (5) For any matrix A\in g, \gamma(t)=e^{tA} is a geodesic curve of the PC matrix Lie group \mathcal{G} passing through the point \gamma(0)=I.

  6. (6) For any matrix A\in g, we would like to stipulate that

     det(e^{A})=e^{Tr A}

     where Tr(A) is the trace function of A. However, this cannot always be achieved; a counterexample is presented in Example 5.1.

Example 5.1.

Let us consider the following matrix

A=\begin{pmatrix}0&-1&1\\ 1&0&0\\ -1&0&0\end{pmatrix}

A is an element of g; hence the exponential map of A is:

e^{A}=\begin{pmatrix}1&\frac{1}{e}&e\\ e&1&1\\ \frac{1}{e}&1&1\end{pmatrix}

The determinant of e^{A} is:

det(e^{A})=e^{2}+e^{-2}-2

and the trace of A is:

Tr(A)=\Sigma^{3}_{i=1}a_{i,i}=0

Consequently, det(e^{A}) is not equal to e^{Tr(A)} for the matrix A.
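The counterexample can be verified numerically. The Python sketch below (the matrix B is an extra example of ours) evaluates det(e^{A}) and Tr(A), and also confirms property (2) of the exponential map for the Hadamard product.

```python
import numpy as np

A = np.array([[ 0.0, -1.0, 1.0],
              [ 1.0,  0.0, 0.0],
              [-1.0,  0.0, 0.0]])

expA = np.exp(A)                      # entrywise exponential exp[A] = [e^{a_ij}]
print(np.isclose(np.linalg.det(expA), np.exp(2) + np.exp(-2) - 2))   # True
print(np.trace(A))                    # 0.0, so e^{Tr(A)} = 1 while det(e^A) is about 5.52

# Property (2): the entrywise exponential maps matrix addition to the Hadamard product.
B = np.array([[ 0.0,  2.0, -1.0],
              [-2.0,  0.0,  3.0],
              [ 1.0, -3.0,  0.0]])
print(np.allclose(np.exp(A + B), expA * np.exp(B)))                  # True
```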

6. Internal direct product of Lie group of 3×33\times 3 PC Matrices

The aim of this section is to provide both geometric and algebraic perspectives on PC matrices. Our presentation is based on the techniques in [1], [2], and [6]. However, a modified approach is used in this section. Let us recall that we consider only 3\times 3 PC matrices; Section 7 outlines the generalization to n>3. Let us introduce the definition of the internal direct product.

We use the notation I_{n}=\{1,\ldots,n\}.

Definition 6.1.

Let \mathcal{G} be a group and let \{N_{i}\mid i\in I_{n}\} be a family of normal subgroups of \mathcal{G}. Then \mathcal{G} is called the internal direct product of \{N_{i}\mid i\in I_{n}\} if \mathcal{G}=N_{1}N_{2}\dotsm N_{n} and N_{i}\cap(N_{1}\dotsm N_{i-1}N_{i+1}\dotsm N_{n})=\{e\} for all i\in\{1,2,\ldots,n\} (see [15]).

Theorem 6.2.

Let \mathcal{G} be a group and \{N_{i}\mid i\in I_{n}\} be a family of normal subgroups of \mathcal{G}. Then \mathcal{G} is an internal direct product of \{N_{i}\mid i\in I_{n}\} if and only if, for all A\in\mathcal{G}, A can be uniquely expressed as

A=a_{1}\cdot a_{2}\dotsm a_{n}

where a_{i}\in N_{i}, i\in\{1,\ldots,n\} (see [15]).

Theorem 6.3.

Let \mathcal{G} be the internal direct product of a family of normal subgroups \{N_{i}\mid i\in I_{n}\}. Then

\mathcal{G}\simeq N_{1}\times N_{2}\times\dotsm\times N_{n}.

The collection M_{n}=(M_{n},\cdot) of all consistent PC matrices M is a multiplicative Lie subgroup of the Lie group \mathcal{G}.

Moreover, let us consider additive consistent matrices represented by the following set:

l=C_{g}=\{A=[a_{i,j}]_{n\times n}\in g\mid a_{i,k}+a_{k,j}=a_{i,j}\text{ for every }(i,j,k)\in T_{n}\}

where T_{n}=\{(i,j,k)\in\{1,2,3,\dots,n\}^{3}:i<j<k\} was considered in [1].

If A,B\in C_{g} then A+B\in C_{g} and kA\in C_{g} for every k\in\mathbb{R}. Thus C_{g} is a Lie subalgebra of (g,+). Let us observe that the following equality holds:

g=C_{g}\oplus(C_{g})^{\bot},\qquad(C_{g})^{\bot}=h.

It follows that C_{\mathcal{G}} is a Lie subgroup of (\mathcal{G},\cdot); therefore, the following equality also holds:

\mathcal{G}=C_{\mathcal{G}}(C_{\mathcal{G}})^{\perp}

Considering the above results, we provide a new geometric and algebraic interpretation of 3\times 3 PC matrices.

Proposition 6.4.

Let

h=\Bigg\{\begin{pmatrix}0&x&-x\\ -x&0&x\\ x&-x&0\end{pmatrix}:x\in\mathbb{R}\Bigg\}

and

l=\Bigg\{\begin{pmatrix}0&y&y+z\\ -y&0&z\\ -y-z&-z&0\end{pmatrix}:y,z\in\mathbb{R}\Bigg\}

be two sets of 3\times 3 skew-symmetric matrices. Then the following holds:

  1. (i) h and l are 1- and 2-dimensional Lie subalgebras of the Lie algebra g, respectively,

  2. (ii) the vector space underlying the Lie algebra h is the orthogonal complement of the vector space underlying the Lie algebra l.

Proof.
  1. (i) For the proof, let us use sp for the linear span of a set of vectors. Then

     h=sp\Bigg\{\begin{pmatrix}0&1&-1\\ -1&0&1\\ 1&-1&0\end{pmatrix}\Bigg\}

     and

     l=sp\Bigg\{\begin{pmatrix}0&1&1\\ -1&0&0\\ -1&0&0\end{pmatrix},\begin{pmatrix}0&0&1\\ 0&0&1\\ -1&-1&0\end{pmatrix}\Bigg\}

     Since the space h is spanned by one matrix, dim(h)=1, and the space l is spanned by two linearly independent matrices, dim(l)=2. Moreover, h and l are Lie subalgebras of g.

  2. (ii) This follows from the basic properties of Lie algebras: a direct check shows that the basis matrix of h is orthogonal (with respect to the Frobenius inner product) to both basis matrices of l, and dim(h)+dim(l)=dim(g).

We will show that every element of the Lie algebra g can be written as the sum of one element of the Lie subalgebra h and one element of the Lie subalgebra l, that is, for all A\in g, A=A_{h}+A_{l}, with A_{h}\in h and A_{l}\in l.

\begin{pmatrix}0&a&b\\ -a&0&c\\ -b&-c&0\end{pmatrix}=\begin{pmatrix}0&x&-x\\ -x&0&x\\ x&-x&0\end{pmatrix}+\begin{pmatrix}0&y&y+z\\ -y&0&z\\ -y-z&-z&0\end{pmatrix}

where

x=\frac{1}{3}(a-b+c)
y=\frac{1}{3}(2a+b-c)
z=\frac{1}{3}(2c-a+b)
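The following short NumPy sketch (variable names are ours) verifies this additive decomposition and the orthogonality of the two components for arbitrary a, b, c.

```python
import numpy as np

a, b, c = 0.7, -1.2, 2.5                   # arbitrary entries of a skew-symmetric matrix in g
A = np.array([[0, a, b], [-a, 0, c], [-b, -c, 0]])

x = (a - b + c) / 3
y = (2 * a + b - c) / 3
z = (2 * c - a + b) / 3

A_h = np.array([[0, x, -x], [-x, 0, x], [x, -x, 0]])           # component in h
A_l = np.array([[0, y, y + z], [-y, 0, z], [-y - z, -z, 0]])   # component in l

print(np.allclose(A, A_h + A_l))           # True: A = A_h + A_l
print(np.isclose(np.sum(A_h * A_l), 0))    # True: the components are Frobenius-orthogonal
```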

Proposition 6.5.

Let

H=\Bigg\{N\ \Big|\ N=\begin{pmatrix}1&k&\frac{1}{k}\\ \frac{1}{k}&1&k\\ k&\frac{1}{k}&1\end{pmatrix},\ \text{where}\ k\in\mathbb{R}^{+}\Bigg\}

and

L=\Bigg\{W\ \Big|\ W=\begin{pmatrix}1&y&yz\\ \frac{1}{y}&1&z\\ \frac{1}{yz}&\frac{1}{z}&1\end{pmatrix},\ \text{where}\ y,z\in\mathbb{R}^{+}\Bigg\}

be two sets of 3\times 3 matrices. Then:

  1. (i) H and L are normal subgroups of the Lie group \mathcal{G},

  2. (ii) the Lie group \mathcal{G} is the internal direct product of the normal Lie subgroups H and L. In particular:

     \mathcal{G}\simeq H\times L.
Proof.

We know that:

\mathcal{G}=\Bigg\{M\ \Big|\ M=\begin{pmatrix}1&m_{12}&m_{13}\\ \frac{1}{m_{12}}&1&m_{23}\\ \frac{1}{m_{13}}&\frac{1}{m_{23}}&1\end{pmatrix},\ m_{12},m_{13},m_{23}\in\mathbb{R}^{+}\Bigg\}

Let us consider:

A=\begin{pmatrix}1&\xi&\eta\\ \frac{1}{\xi}&1&\gamma\\ \frac{1}{\eta}&\frac{1}{\gamma}&1\end{pmatrix}\in\mathcal{G}

Using the logarithmic transformation we get:

\tilde{A}=\begin{pmatrix}0&\ln(\xi)&\ln(\eta)\\ -\ln(\xi)&0&\ln(\gamma)\\ -\ln(\eta)&-\ln(\gamma)&0\end{pmatrix}

Proposition 6.4 implies that:

\tilde{A}=\begin{pmatrix}0&x&-x\\ -x&0&x\\ x&-x&0\end{pmatrix}+\begin{pmatrix}0&y&y+z\\ -y&0&z\\ -y-z&-z&0\end{pmatrix}=A_{h}+A_{l}

Moreover, we know that:

x=\frac{\ln(\xi)-\ln(\eta)+\ln(\gamma)}{3}=\ln\left(\frac{\xi\gamma}{\eta}\right)^{1/3}
y=\frac{2\ln(\xi)+\ln(\eta)-\ln(\gamma)}{3}=\ln\left(\frac{\xi^{2}\eta}{\gamma}\right)^{1/3}
z=\frac{2\ln(\gamma)-\ln(\xi)+\ln(\eta)}{3}=\ln\left(\frac{\gamma^{2}\eta}{\xi}\right)^{1/3}

Using the exponential transformation for \tilde{A}=A_{h}+A_{l}, we conclude that:

exp(\tilde{A})=exp(A_{h})\cdot exp(A_{l})

which implies:

A=A_{H}\cdot A_{L}, \text{ where } A_{H}=exp(A_{h})\in H \text{ and } A_{L}=exp(A_{l})\in L.

Observe that the matrix A_{L} is a consistent PC matrix. However, the following proposition needs to be considered for deciding on the classification of the matrix A_{H}.

Proposition 6.6.
  1. (i) The Lie algebra of the Lie group L of consistent PC matrices is l.

  2. (ii) The Lie algebra of the PC matrix normal Lie subgroup H is h.

From the above proposition for additive consistent matrices, we conclude that h is the orthogonal complement space of l. Hence every element of h can be classified as an ortho-additive consistent matrix and every element of H is an ortho-consistent PC matrix.
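The complete multiplicative decomposition can be sketched in a few lines of Python, under the assumptions above: the entrywise exponential/logarithm of Section 5 and the x, y, z formulas from the proof of Proposition 6.5 (the function decompose_pc3 is ours). It splits a 3\times 3 PC matrix A into an ortho-consistent component A_H and a consistent approximation A_L with A = A_H \cdot A_L (Hadamard product).

```python
import numpy as np

def decompose_pc3(A):
    """Split a 3x3 PC matrix into A_H (ortho-consistent) and A_L (consistent) with A = A_H * A_L."""
    xi, eta, gamma = A[0, 1], A[0, 2], A[1, 2]
    x = np.log(xi * gamma / eta) / 3                    # coordinate of A_h in h
    y = np.log(xi**2 * eta / gamma) / 3                 # coordinates of A_l in l
    z = np.log(gamma**2 * eta / xi) / 3
    A_h = np.array([[0, x, -x], [-x, 0, x], [x, -x, 0]])
    A_l = np.array([[0, y, y + z], [-y, 0, z], [-y - z, -z, 0]])
    return np.exp(A_h), np.exp(A_l)                     # entrywise exponential back to the group

A = np.array([[1.0, 2.0, 5.0],
              [1/2, 1.0, 3.0],
              [1/5, 1/3, 1.0]])                         # the exam matrix from Section 2
A_H, A_L = decompose_pc3(A)

print(np.allclose(A, A_H * A_L))                        # True: A = A_H . A_L (Hadamard product)
print(np.isclose(A_L[0, 1] * A_L[1, 2], A_L[0, 2]))     # True: A_L is a consistent PC matrix
print(np.round(A_H, 3))                                 # the orthogonal (inconsistency) component
```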

7. Generalization outline

Following [29], we would like to point out that PC matrices of size n>3 can be viewed as consisting of 3\times 3 PC submatrices obtained by deleting rows and columns with identical indices. That is, if we delete row j, we also need to delete column j. Fig. 2 demonstrates how to obtain one 3\times 3 PC matrix from a 5\times 5 PC matrix by deleting the rows and columns numbered 3 and 4.

Figure 2. Generalization to n>3

Definition 3.3 from [29] states this as follows:

Assume n<m, and let A and B be square matrices of degree n and m, respectively. We call the matrix A a submatrix of B (A\subset B) if there exists an injection

\sigma:\{1,\ldots,n\}\rightarrow\{1,\ldots,m\}

such that a_{ij}=b_{\sigma(i)\sigma(j)} for all i,j\in\{1,\ldots,n\}.

The PC matrix reconstruction from its 3\times 3 submatrix approximation components is the PC matrix S of geometric means of all corresponding elements in these components. The need for the geometric mean comes from the occurrence of the same matrix element (n-1)(n-2)(n-3)/6 times in the submatrices.

It is important to observe that the PC matrix S does not need to be consistent even though all of its 3\times 3 submatrix approximation components are consistent. However, the reconstruction process converges to a consistent PC matrix, as partly proven in [16] and completed in [27]. The above reconstruction process will be analyzed in the planned follow-up paper.
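A sketch of one possible reading of this outline (assumptions: every 3\times 3 principal submatrix is replaced by its consistent approximation A_L from Section 6, and each entry of S is the geometric mean of the corresponding entries over all submatrices containing it; the helpers consistent_3x3 and reconstruct are ours):

```python
import numpy as np
from itertools import combinations

def consistent_3x3(A):
    """Consistent approximation A_L of a 3x3 PC matrix (proof of Proposition 6.5)."""
    xi, eta, gamma = A[0, 1], A[0, 2], A[1, 2]
    y = np.log(xi**2 * eta / gamma) / 3
    z = np.log(gamma**2 * eta / xi) / 3
    return np.exp(np.array([[0, y, y + z], [-y, 0, z], [-y - z, -z, 0]]))

def reconstruct(M):
    """Aggregate the consistent 3x3 approximations of M into S by entrywise geometric means."""
    n = M.shape[0]
    log_sum = np.zeros((n, n))
    count = np.zeros((n, n))
    for idx in combinations(range(n), 3):            # all 3x3 principal submatrices
        sub = consistent_3x3(M[np.ix_(idx, idx)])
        log_sum[np.ix_(idx, idx)] += np.log(sub)
        count[np.ix_(idx, idx)] += 1
    return np.exp(log_sum / np.maximum(count, 1))    # geometric mean of corresponding entries

rng = np.random.default_rng(2)
M = np.ones((5, 5))
for i in range(5):
    for j in range(i + 1, 5):
        M[i, j] = rng.uniform(0.5, 4.0)
        M[j, i] = 1.0 / M[i, j]

S = reconstruct(M)
print(np.allclose(S * S.T, 1.0))                     # S is still a reciprocal PC matrix
```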

Conclusions

Using fundamental theorems from [35] and [36], the collaborative research effort [24] demonstrated that group generalization for pairwise comparisons matrices is a challenge. In particular, [24] provided the proof that elements of a multiplicative PC matrix must be selected from a torsion-free abelian group.

Our paper demonstrates that multiplicative PC matrices (not the elements of a PC matrix) form a Lie group under the Hadamard product. The Lie algebras of these Lie groups are identified here. It has been shown that these Lie algebras are spaces of skew-symmetric matrices. It has also been proven that the Lie group of PC matrices can be represented as an internal direct product using the direct summability property of vector spaces.

In conclusion, the relatively simple concept of pairwise comparisons turns out to be related to the theory of Lie groups and Lie algebras (which is commonly regarded as very sophisticated mathematics). For the first time, the decomposition of a PC matrix into an approximation component and an orthogonal component (interpreted as the approximation error) has been obtained. Without such a decomposition, the pairwise comparisons method had remained incomplete for 722 years since its first scholarly presentation.

Acknowledgments

We thank Tuǧçe Çalci for the verification of mathematical formulas and the terminology associated with them. The authors recognize the efforts of Tiffany Armstrong in proofreading this text and Lillian (Yingli) Song with the technical editing. We also acknowledge that algebraic terminology and basic concepts are based on [40].

References

  • [1] Adukov, V., On invertibility of matrix Wiener-Hopf operator on discrete linearly ordered abelian group, Integral Equations and Operator Theory, 23(4):373–386, 1995.
  • [2] Aksoyak, F.K.; Yayli, Y., Homothetic Motions and Lie Groups in \mathbb{R}_{2}^{4}, Journal of Dynamical Systems and Geometric Theories, 11(1–2): 23–38, 2013.
  • [3] Arvanitoyeorgos, A., Lie Transformation Groups and Geometry, In: Mladenov, I.M, (Ed), Ninth International Conference on Geometry, Integrability and Quantization, SOFTEX, 2008.
  • [4] Bansal, S.; Tatu, A., Affine Interpolation in a Lie Group Framework, ACM Transactions on Graphics, 38(4): 71, 2019.
  • [5] Beck, R.E.; Kolman, B., Computer Approaches to Representations of Lie-Algebras, Journal of the ACM, 19(4): 577–577, 1972.
  • [6] Bekar, M.; Yayli, Y., Lie Algebra of Unit Tangent Bundle, Advances in Applied Clifford Algebras, 27(2): 965–975, 2017.
  • [7] Bozoki, S.; Fueloep, J.; Koczkodaj, W.W., An LP-based inconsistency monitoring of pairwise comparison matrices, Mathematical and Computer Modelling, 54(1-2): 789-793, 2011.
  • [8] Cavallo, B.; Brunelli, M., A general unified framework for interval pairwise comparison matrices, International Journal of Approximate Reasoning, 93: 178–198, 2018.
  • [9] Cavallo, B.; D’Apuzzo L., A General unified framework for pairwise comparison matrices in multicriterial methods, International Journal of Intelligent Systems, 24(4): 377–398, 2009.
  • [10] Crawford, G.; Williams C., The Analysis of Subjective Judgment Matrices, RAND Report R–2572–1–AF, 1985.
  • [11] Deng, HP, Multicriteria analysis with fuzzy pairwise comparison, International Journal of Approximate Reasoning, 21(3): 215–231, 1999.
  • [12] Demisse, G.G.; Aouada, D.; Ottersten, B., IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6): 1338–1351, 2018.
  • [13] Drton, M. ; Hägele, G.; Haneberg, D.; Pukelsheim, F. ; Reif W., The Augsburg Web Edition of Llull’s Electoral Writings, https://www.math.uni-augsburg.de/htdocs/emeriti/pukelsheim/llull/
  • [14] Ebisu, T.; Ichise, R., Generalized Translation-Based Embedding of Knowledge Graph, IEEE Transactions on Knowledge and Data Engineering, 32(5): 941–951, 2020.
  • [15] Fitting, H., The theory of automorphism rings of Abelian groups and their analogue for non-commutative groups, Mathematische Annalen, 107: 514–542, 1933.
  • [16] Holsztynski, W.; Koczkodaj, W.W., Convergence of inconsistency algorithms for the pairwise comparisons, Information Processing Letters, 59(4): 197–202, 1996.
  • [17] Janicki, R; Koczkodaj, W.W., A weak order approach to group ranking, Computers & Mathematics with Applications, 32(2): 51-59, 1996.
  • [18] Gök, I.; Okuyucu, O.Z.; Ekmekci, N.; Yayli, Y., On Mannheim partner curves in three dimensional Lie groups, Miskolc Mathematical Notes, 15(2): 467–479, 2014.
  • [19] Jefferys, W.H.; Berger, J.O., Ockham’s Razor and Bayesian Statistics, American Scientist, 80(1): 64–72, 1991.
  • [20] Kobilarov, M.; Crane, K.; Desbrun, M., Lie Group Integrators for Animation and Control of Vehicles, ACM Transactions on Graphics, 28(2): 16, 2009.
  • [21] Koczkodaj, W.W., Testing the accuracy enhancement of pairwise comparisons by a Monte Carlo experiment, Journal of Statistical Planning and Inference, 69(1):21–31, 1996.
  • [22] Koczkodaj, W.W., Statistically accurate evidence of improved error rate by pairwise comparisons, Perceptual and Motor Skills, 82(1):43–48, 1996.
  • [23] Koczkodaj, W.W.; Smarzewski, R.; Szybowski, J., On Orthogonal Projections on the Space of Consistent Pairwise Comparisons Matrices, Fundamenta Informaticae, 172(4): 379–397, 2020.
  • [24] Koczkodaj, W.W.; Liu, F; Marek, V.W. ; Mazurek, J; Mazurek, M; Mikhailov, L; Ozel, C; Pedrycz, W; Przelaskowski, A; Schumann, A; Smarzewski, R; Strzalka, D; Szybowski, J; Yayli, Y, On the use of group theory to generalize elements of pairwise comparisons matrix: a cautionary note, International Journal of Approximate Reasoning, 124: 59–65, 2020.
  • [25] Koczkodaj, W.W.; Mikhailov, L.; Redlarski, G.; Soltys, M.; Szybowski, J.; Tamazian, G.; Wajch, E.; Yuen, K.K.F., Important Facts and Observations about Pairwise Comparisons (the special issue edition), Fundamenta Informaticae, 144(3–4): 291–307, 2016.
  • [26] Koczkodaj, W.W.; Orlowski, M, An orthogonal basis for computing a consistent approximation to a pairwise comparisons matrix, Computers & Mathematics With Applications, 34(10): 41–47, 1997.
  • [27] Koczkodaj, W.W.; Szarek, S., On distance-based inconsistency reduction algorithms for pairwise comparisons, Logic Journal of the IGPL, 18(6): 859–869, 2010.
  • [28] Koczkodaj, W.W.; Szybowski, J., The Limit of Inconsistency Reduction in Pairwise Comparisons, International Journal of Applied Mathematics and Computer Science 26(3):721–729, 2016.
  • [29] Koczkodaj, W.W.; Urban, R., Axiomatization of inconsistency indicators for pairwise comparisons, International Journal of Approximate Reasoning, 94: 18–29, 2018.
  • [30] Kulakowski, K., On the Properties of the Priority Deriving Procedure in the Pairwise Comparisons Method, Fundamenta Informaticae,139(4): 403-419, 2015.
  • [31] Kulakowski, K.; Mazurek, J.; Ramík, J., Soltys, M., When is the condition of order preservation met? European Journal of Operational Research, 277(1): 248–254, 2019.
  • [32] Kuo, M.-S.; Liang, G.-S.; Huang, W.-C., Extensions of the multicriteria analysis with pairwise comparison under a fuzzy environment, International Journal of Approximate Reasoning, 43(3): 268–285, 2006.
  • [33] Koudounas, A.; Fiori, S., Gradient-based Learning Methods Extended to Smooth Manifolds Applied to Automated Clustering, Journal of Artificial Intelligence Research, 68: 777–815, 2020.
  • [34] Lebanon, G., Metric learning for text documents, IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4): 497–508, 2006.
  • [35] Levi, F.W., Ordered groups, Proceedings of the Indian Academy of Sciences, Section A, 16: 256–263, 1942.
  • [36] Levi, F. W., Contributions to the theory of ordered groups, Proceedings of the Indian Academy of Sciences, 17:199–201, 1943.
  • [37] Li, CC; Dong, YC; Xu, YJ; Chiclana, F.; Herrera-Viedma; Herrera, F., An overview on managing additive consistency of reciprocal preference relations for consistency-driven decision making and fusion: Taxonomy and future directions, Information Fusion, 52: 143–156, 2019.
  • [38] Ozkaldi, S.; Yayli, Y., Tensor Product Surfaces in \mathbb{R}^{4} and Lie Groups, Bulletin of the Malaysian Mathematical Sciences Society, 33(1): 69–77, 2010.
  • [39] Ramík, J., Ranking alternatives by pairwise comparisons matrix with fuzzy element on Alo-group, in Intelligent decision technologies, part II, book series: Smart Innovation Systems and Technologies, Springer, 57: 371–380, 2016.
  • [40] Robinson, D.J.S., A Course in the Theory of Groups, Graduate Texts in Mathematics (GTM), vol. 80, Berlin, New York: Springer-Verlag, 1996.
  • [41] Tu, L.W., An Introduction to Manifolds, 2nd ed., Springer, 2010.
  • [42] Zhang, L.; Zheng, Q.-Z.; Huang, H., Intrinsic Motion Stability Assessment for Video Stabilization, IEEE Transactions on Visualization and Computer Graphics, 25(4): 1681–1692, 2019.
  • [43] Zhao, XL; Chen, QB; Xue, JF; Zhang, YM; Zhao, JJ, A Method for Calculating Network System Security Risk Based on a Lie Group, IEEE Access 7: 70610–70623, 2019.