
Application of tropical optimization for solving multicriteria problems of pairwise comparisons using log-Chebyshev approximation
(Internat. J. Approx. Reason. 169, 109168 (2024))

N. Krivulin, Faculty of Mathematics and Mechanics, Saint Petersburg State University, 28 Universitetsky Ave., St. Petersburg, 198504, Russia, [email protected].
Abstract

We consider a decision-making problem to find absolute ratings of alternatives that are compared in pairs under multiple criteria, subject to constraints in the form of two-sided bounds on ratios between the ratings. Given matrices of pairwise comparisons made according to the criteria, the problem is formulated as the log-Chebyshev approximation of these matrices by a common consistent matrix (a symmetrically reciprocal matrix of unit rank) to minimize the approximation errors for all matrices simultaneously. We rearrange the approximation problem as a constrained multiobjective optimization problem of finding a vector that determines the approximating consistent matrix. The problem is then represented in the framework of tropical algebra, which deals with the theory and applications of idempotent semirings and provides a formal basis for fuzzy and interval arithmetic. We apply methods and results of tropical optimization to develop a new approach for handling the multiobjective optimization problem according to various principles of optimality. New complete solutions in the sense of the max-ordering, lexicographic ordering and lexicographic max-ordering optimality are obtained, which are given in a compact vector form ready for formal analysis and efficient computation. We present numerical examples of solving multicriteria problems of rating four alternatives from pairwise comparisons to illustrate the technique and compare it with others.

Keywords: idempotent semifield, tropical optimization, log-Chebyshev approximation, constrained multiobjective optimization, multiple criteria evaluation.

MSC (2020): 15A80, 90C24, 41A50, 90C29, 90B50

1 Introduction

Decision-making problems that are encountered in real-world applications often have to cope with uncertain data and involve multiple objectives. To handle uncertainty in data in these problems, various approaches are adopted, including the application of interval, fuzzy and possibilistic data models [2, 11, 7]. A key component of many approaches that aim to address uncertainty is the use of the arithmetics of idempotent semirings and semifields, which provides a bridge between these approaches and tropical mathematics. The interaction between tropical (idempotent) mathematics, which concentrates on the theory and applications of algebraic systems with idempotent operations, and the fuzzy, interval and possibilistic mathematics appears to be bidirectional. On the one hand, fuzzy and possibilistic models have inspired the study of idempotent algebraic systems such as fuzzy algebra, Łukasiewicz semirings and Viterbi semirings (see, e.g. [19, 20, 44, 18]). On the other hand, tropical mathematics offers a conceptual and analytical framework for computations in fuzzy, interval and other arithmetics that involve $\max$ and $\min$ operations. Examples of related recent studies include [17, 56, 34, 53]. The above observations indicate that the development and investigation of novel solutions in tropical mathematics can benefit fuzzy set theory and other research domains where idempotency plays an essential role in dealing with uncertainty.

As another bridge between tropical mathematics and the mathematics of uncertainty, we consider the concept of Chebyshev distance and the related theory of Chebyshev approximation. In tropical semifields, the Chebyshev distance is introduced in a direct way using only internal operations, which offers a strong potential for developing efficient techniques to solve optimization problems involving Chebyshev approximation. At the same time, the methods and techniques of Chebyshev approximation appear to be in growing demand as a useful approach to solving optimization and other problems in fuzzy mathematics [10, 37, 36], which makes the related results in tropical mathematics relevant.

The known methods to address uncertain decision problems under multiple criteria include both crisp and fuzzy implementations of regular multiobjective optimization methods and heuristic algorithms [39, 6, 16, 13]. One of the approaches is to use methods and techniques of tropical optimization, which is concerned with optimization problems that are formulated and solved in terms of tropical mathematics. Due to the clear connection between fuzzy and tropical mathematics, the development of new solutions based on multiobjective tropical optimization offers a promising resource for handling uncertain multicriteria decision problems, for example through a fuzzy implementation of the optimization method.

The problem of evaluating ratings (scores, priorities, weights) of alternatives based on pairwise comparisons under multiple criteria is an uncertain decision problem, where the uncertainty arises from subjective judgment errors and imprecise comparison data. This uncertainty leads to inconsistent or contradictory results of rating and ranking alternatives, and needs to be managed to achieve acceptable solutions. The problem under consideration occurs widely and is in high demand in decision making, and has been studied for decades in many works including the classical paper by L. L. Thurstone [54]. Given results of comparisons of decision alternatives in the form of pairwise comparison matrices, the problem is to find a vector of individual ratings of alternatives, which can be used to rank alternatives according to their ratings and thus provide a basis for optimal decisions.

The solution techniques available for the problem apply various heuristic procedures that do not guarantee the optimal solution, but usually produce acceptable results in practice, and mathematical methods that are formally justified, but may be computationally very expensive. Along with the methods based on conventional algebra, there exist solutions that use interval arithmetic, fuzzy arithmetic and tropical algebra.

One of the most prominent approaches to solving multicriteria pairwise comparison problems, known as the analytic hierarchy process, was proposed in the 1970s by T. L. Saaty [47, 48, 49]. The approach employs the principal eigenvector method to derive a vector of individual ratings by calculating the eigenvectors of the pairwise comparison matrices that correspond to their maximal eigenvalues (the Perron eigenvectors). In succeeding years the approach was extended to handle problems with interval and fuzzy pairwise comparison matrices [57, 5, 51, 52] (see also literature overviews in [25, 46]) and to implement tropical algebra [14, 15, 22, 55, 18].

Another widely used approach is the weighted geometric mean method, which is based on matrix approximation in the sense of the Euclidean metric in logarithmic scale [43, 9, 1]. Under this approach, the individual ratings are obtained from a matrix of pairwise comparisons as the vector of geometric means of elements in the rows of the matrix. For interval and fuzzy extensions of the geometric mean method, one can consult [25, 46].

Different solution approaches to the pairwise comparison problems may yield results where the acquired ratings produce different or contradictory orders of alternatives [3, 1, 24, 41]. Some of the approaches offer analytical solutions with low computational complexity, whereas others are based on numerical algorithms that may incur a substantial computational cost. As a result, the development of new effective solutions of the problem to supplement and complement existing approaches remains relevant.

A technique that applies approximation of pairwise comparison matrices in the Chebyshev metric in logarithmic scale (log-Chebyshev approximation) is proposed and developed in [29, 30, 34]. The technique leads to optimization problems that are formulated and solved in the framework of tropical algebra where addition is defined as maximum. Using methods and results of tropical optimization yields analytical solutions of the problem in a compact vector form ready for both formal analysis and straightforward computations. In contrast to the methods of principal eigenvector and geometric mean, which offer unique solutions to the problem of pairwise comparisons, the log-Chebyshev approximation approach can produce multiple solutions. At the same time, an important advantage of this approach is that it can be directly extended to solve pairwise comparison problems with constraints imposed on the individual ratings of alternatives, whereas both the principal eigenvector and geometric mean methods cannot accommodate additional constraints without considerable complication of the solution.

The log-Chebyshev approximation technique differs from another existing approach based on tropical algebra, which is exploited in many works including [14, 15, 22, 55, 21]. This approach can be considered as a tropical analogue of the principal eigenvector method, where the ordinary Perron eigenvector is formally replaced by the tropical eigenvector of the pairwise comparison matrix. The solution obtained as the tropical eigenvector is known to be one of the solutions provided by the log-Chebyshev approximation. However, as shown in [30], the tropical eigenvector can yield a less relevant solution in the context of the pairwise comparison problem than other vectors found using the approximation. Since the tropical analogue of the principal eigenvector method may miss better results provided by the log-Chebyshev approximation technique, it seems unnecessary to consider this method separately.

Multicriteria pairwise comparison problems can be considered as optimization problems with multiple objectives and solved using methods and techniques available in multiobjective optimization [12, 38, 4, 42, 45]. The most common approach to solving multiobjective optimization problems is to obtain Pareto-optimal (nondominated) solutions that cannot be improved for any one objective without worsening at least one other objective. To obtain Pareto-optimal solutions, various techniques are applied to reduce the problem with a vector objective function to one or more problems with ordinary (scalar) objective functions. As examples, one can consider linear scalarization used in the weighted geometric mean method [9, 1] and Chebyshev scalarization in the log-Chebyshev approximation method [34]. Other approaches to solve multiobjective problems include max-ordering, lexicographic ordering and lexicographic max-ordering techniques [12].

In this paper, we consider a new constrained decision problem to find absolute ratings of alternatives that are compared in pairs under multiple criteria, subject to constraints in the form of two-sided bounds (box-constraints) on ratios between the ratings. Given matrices of pairwise comparisons according to the criteria, the problem is formulated as the log-Chebyshev approximation of these matrices by a common consistent matrix (a symmetrically reciprocal matrix of unit rank) to minimize the approximation errors for all matrices simultaneously [34]. We rearrange the approximation problem as a constrained multiobjective optimization problem of finding a vector that determines the approximating consistent matrix. The constrained optimization problem is then represented in the framework of tropical algebra where addition is defined as taking maximum and multiplication is defined as usual.

We apply results and further develop methods of tropical optimization to handle the multiobjective optimization problem according to various principles of optimality. The results obtained include several statements proved as lemmas and theorems to provide a basis for formal derivation and justification of the solution procedures. We present numerical examples of solving multicriteria problems of rating four alternatives to illustrate the technique. Specifically, we show how the proposed technique is used to solve the problem of selecting a plan for vacation from [47], and compare results.

The novelty of this study is twofold and lies in the development of new solution methods and results that are important for both decision making and tropical optimization. First, we consider a new class of constrained multicriteria pairwise comparison problems where the individual ratings of alternatives are evaluated subject to box-constraints on the ratios of ratings. These constraints can describe predefined relations between ratings, derived from some prior knowledge (such as approved and adopted results of previous studies) independent of the current comparison data. We extend the solution obtained in [33] for a constrained bi-criteria problem to problems with an arbitrary fixed number of criteria. The constrained problems under study make it possible to take into account the increasing complexity of contemporary decision processes, yet they have received little or no attention in the literature.

Furthermore, we develop a new technique to handle the above constrained problems, which combines the log-Chebyshev approximation of pairwise comparison matrices with multicriteria optimization based on the max-ordering, lexicographic ordering and lexicographic max-ordering principles of optimality. The solutions are given analytically in a compact parametric form that represents all solution vectors of ratings in a way ready for formal analysis and efficient computation. We observe that implementation of the lexicographic ordering and lexicographic max-ordering optimization schemes to solve multiobjective problems involves solving a series of constrained scalar optimization problems, where the constraints of each problem define a feasible set as the solution set of the previous problem in the series [12]. The need for this predetermines and justifies the application of the log-Chebyshev approximation, which can solve constrained problems analytically, rather than the methods of principal eigenvector and geometric mean, which cannot address the constrained pairwise comparison problems in a direct way.

Second, we present new results on the solution of tropical optimization problems, which are of independent interest. A theorem is derived that formally establishes the equivalence between the solution of a constrained tropical optimization problem, the solution of a linear vector inequality and a set of vectors given in parametric form. The theorem is obtained as a consequence of some known facts (see, e.g. [27, 28, 31, 32]) combined into one result here for the first time to serve as a useful analytical tool for implementation in tropical optimization. Specifically, this result plays a key role in the construction and justification of solution procedures for the constrained multicriteria pairwise comparison problem under consideration.

We derive new solutions for particular problems of maximizing and minimizing a function in the multiplicative form of the Hilbert seminorm under a normalization constraint. This result is used to deal with multiple solutions of log-Chebyshev approximation in the pairwise comparison problem, which can make it difficult to choose the optimal solution in decision making. We improve and simplify the technique developed in [34] to handle the multiple solutions issue by finding two solution vectors that have the highest ratio (the best differentiating solution) and the lowest ratio (the worst differentiating solution) between ratings. For some problems, this technique may fail to overcome the issue and may yield best and worst differentiating solutions that are not unique. We propose a new approach, which in the case of multiple solutions makes it possible to reduce the number of best differentiating solutions and to replace all worst differentiating solutions by one solution.

Finally, we develop new computational procedures to solve a class of constrained multiobjective tropical optimization problems according to various principles of optimality, which can find application in other practical contexts including temporal project scheduling and facility location problems.

The rest of the paper is organized as follows. We start with Section 2 where a constrained pairwise comparison problem with a single criterion and its solution based on the log-Chebyshev approximation are considered. In Section 3, we extend the pairwise comparison problem to take into account multiple criteria and describe various solution approaches. Section 4 offers an outline of basic definitions and results of tropical max-algebra needed below. In Section 5, we present solutions to tropical optimization problems, which are used to handle multicriteria pairwise comparison problems in Section 6. Sections 7 and 8 give numerical examples to illustrate the obtained results. Finally, Section 9 includes short concluding remarks.

2 Single-Criterion Pairwise Comparison Problem

Suppose that $n$ alternatives are compared in pairs, which results in an $(n\times n)$-matrix $\bm{C}=(c_{ij})$ of pairwise comparisons, where the entry $c_{ij}>0$ shows by how many times alternative $i$ is considered more preferable than alternative $j$. The pairwise comparison matrix $\bm{C}$ is assumed to satisfy the condition $c_{ij}=1/c_{ji}$ (and thus $c_{ii}=1$) for all $i,j=1,\ldots,n$, which makes the matrix $\bm{C}$ symmetrically reciprocal. This condition says that if alternative $i$ is estimated to be $c_{ij}$ times better than $j$, then alternative $j$ must be $1/c_{ij}$ times better ($c_{ij}$ times worse) than $i$.

Given a pairwise comparison matrix $\bm{C}$ which represents the results of relative evaluation of one alternative against another, the problem of interest is to calculate absolute ratings of alternatives in the form of an $n$-vector $\bm{x}=(x_{j})$ where $x_{j}>0$ represents the rating of alternative $j$.

A pairwise comparison matrix $\bm{C}$ is referred to as consistent if the condition $c_{ij}=c_{ik}c_{kj}$ holds for all $i,j,k=1,\ldots,n$. This condition corresponds to the natural transitivity of judgments, which requires that if alternative $i$ is considered $c_{ik}$ times better than $k$, and alternative $k$ is $c_{kj}$ times better than $j$, then alternative $i$ should be $c_{ik}c_{kj}$ times better than $j$.

For a consistent matrix $\bm{C}$, it is not difficult to verify that there exists a positive vector $\bm{x}=(x_{j})$ whose entries determine the entries of $\bm{C}$ by the relation $c_{ij}=x_{i}/x_{j}$ for all $i,j$, which means that the matrix $\bm{C}$ is of unit rank. Indeed, the transitivity condition $c_{ij}=c_{ik}c_{kj}$ implies that any two columns $j$ and $k$ in $\bm{C}$ are collinear, and thus we can write $\bm{C}=\bm{x}\bm{y}^{T}$ where $\bm{x}=(x_{j})$ and $\bm{y}=(y_{j})$ are positive column vectors.

Since each entry of the matrix $\bm{C}$ is given by $c_{ij}=x_{i}y_{j}$ with $x_{i}y_{i}=c_{ii}=1$, we have $y_{i}=1/x_{i}$ for all $i$, which yields the above relation. Moreover, it directly follows from this relation that the vector $\bm{x}$, which is defined up to a positive factor, can be taken as a vector of absolute ratings of alternatives and thus gives the solution of the pairwise comparison problem.

The matrices of pairwise comparisons that appear in real-world problems are commonly not consistent, which makes the problem of evaluating absolute ratings nontrivial. The solution techniques available to handle the problem include various heuristic methods that offer acceptable results in many cases in practice, and approximation methods that provide mathematically justified optimal solutions.

2.1 Solution Approaches to Pairwise Comparison Problem

The heuristic methods are mainly based on aggregating columns of the pairwise comparison matrix to obtain a solution by calculating a weighted sum of these columns [8]. In the most widely used method of principal eigenvector [47, 48, 49], the solution vector $\bm{x}$ is defined as the sum of columns taken with weights assumed proportional to the components of $\bm{x}$. Under this assumption, the vector $\bm{x}$ must satisfy the equation $\bm{C}\bm{x}=\lambda\bm{x}$ where $1/\lambda$ is a coefficient of proportionality, and it is found as the principal (Perron) eigenvector of the matrix $\bm{C}$ corresponding to the maximum eigenvalue.

The approximation methods reduce the problem to finding a consistent matrix $\bm{X}$ that approximates a given matrix $\bm{C}$ in the sense of some distance function as approximation error [8]. Then, a positive vector $\bm{x}$ that determines the approximating consistent matrix is taken as a vector of absolute ratings of alternatives. The approximation problem is solved as an optimization problem of minimizing the distance between the matrices $\bm{C}$ and $\bm{X}$, which provides a formal justification of the solution obtained.

If the approximation error is measured on the standard linear scale, the approach normally leads to complex multiextremal optimization problems that are very hard to solve [50]. On the contrary, the application of the logarithmic scale (with logarithm to a base greater than $1$) makes the approximation problem more tractable and even allows the solution to be derived analytically in an exact explicit form.

A widespread approximation technique to solve the pairwise comparison problem measures the error between the matrices $\bm{C}=(c_{ij})$ and $\bm{X}=(x_{i}/x_{j})$ by using the Euclidean metric in logarithmic scale [43, 9, 1]. This technique, known as the method of geometric mean, involves finding a positive vector $\bm{x}=(x_{j})$ that solves the problem

$$\min_{\bm{x}>\bm{0}}\ \sum_{1\leq i,j\leq n}\left(\log c_{ij}-\log\frac{x_{i}}{x_{j}}\right)^{2}.$$

The standard solution approach, which applies the first derivative test to find the roots of the partial derivatives of the objective function with respect to all $x_{i}$, yields a solution vector $\bm{x}$ with the components given in the parametric form

$$x_{i}=\left(\prod_{j=1}^{n}c_{ij}\right)^{1/n}u,\qquad u>0,\qquad i=1,\ldots,n.$$

After normalization (e.g. by dividing by the sum of components), this vector is taken as the optimal solution of the pairwise comparison problem.
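For illustration, the following minimal sketch (assuming NumPy and a hypothetical $3\times 3$ comparison matrix) evaluates the geometric-mean ratings and normalizes them by the sum of components:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (symmetrically reciprocal).
C = np.array([[1.0, 2.0, 4.0],
              [1/2, 1.0, 2.0],
              [1/4, 1/2, 1.0]])

# Rating of alternative i: geometric mean of row i (taking u = 1).
x = np.prod(C, axis=1) ** (1.0 / C.shape[0])

# Normalize by dividing by the sum of components.
x = x / np.sum(x)
print(x)
```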

As another approximation technique, a method of minimizing the Chebyshev distance in logarithmic scale (a log-Chebyshev approximation method) is proposed in [29, 30, 34]. The method aims to find positive vectors $\bm{x}$ that solve the problem

$$\min_{\bm{x}>\bm{0}}\ \max_{1\leq i,j\leq n}\left|\log c_{ij}-\log\frac{x_{i}}{x_{j}}\right|.\qquad(1)$$

We observe that the methods of principal eigenvector and geometric mean lead to unique solutions and offer efficient computational procedures for calculating the result. At the same time, the solution provided by log-Chebyshev approximation can be nonunique and thus requires further analysis. Considering the approximate character of the pairwise comparison model, which usually leads to inconsistent results of pairwise comparisons, multiple solutions of the problem look quite reasonable and even make it possible to select a solution satisfying additional constraints.

2.2 Log-Chebyshev Approximation of Pairwise Comparison Matrix

Let us consider the problem of unconstrained log-Chebyshev approximation at (1). Observing that the logarithm to a base greater than $1$ monotonically increases, the objective function is rewritten as (see, e.g. [34])

$$\max_{1\leq i,j\leq n}\left|\log c_{ij}-\log\frac{x_{i}}{x_{j}}\right|=\log\max_{1\leq i,j\leq n}\frac{c_{ij}x_{j}}{x_{i}}.$$

Since the logarithmic function on the right-hand side attains its maximum where its argument is maximal, we remove the logarithm from the objective function. Moreover, it is not difficult to verify (see [14, 15, 34]) that the obtained objective function is connected with the standard relative approximation error by the equality

$$\max_{1\leq i,j\leq n}\frac{c_{ij}x_{j}}{x_{i}}=\max_{1\leq i,j\leq n}\frac{|c_{ij}-x_{i}/x_{j}|}{c_{ij}}+1.$$

This means that the log-Chebyshev approximation of the matrix $\bm{C}$ is equivalent to the approximation in the sense of relative error as well.
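As a numerical illustration of this equivalence, the sketch below (hypothetical data, NumPy assumed) evaluates both sides of the equality for a slightly inconsistent matrix and a candidate vector of ratings:

```python
import numpy as np

C = np.array([[1.0, 2.0, 5.0],
              [1/2, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # hypothetical inconsistent comparison matrix
x = np.array([4.0, 2.0, 1.0])    # candidate vector of ratings

lhs = np.max(C * x[None, :] / x[:, None])           # max_{i,j} c_ij x_j / x_i
rhs = np.max(np.abs(C - np.outer(x, 1.0 / x)) / C) + 1

print(lhs, rhs)   # both evaluate to 1.25 here
```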

Consider an extension of the pairwise comparison problem where the ratings of alternatives are to be evaluated subject to some predefined constraints. Suppose the constraints imposed on the ratings are in the form of two-sided bounds on ratios between the ratings. These constraints can reflect prior knowledge about the relationship between ratings, which can be obtained by different assessment techniques or result from the nature of the alternatives.

Suppose that a nonnegative matrix $\bm{B}=(b_{ij})$ is given, where $b_{ij}>0$ shows that alternative $i$ must be considered not less than $b_{ij}$ times better than $j$, and $b_{ij}=0$ indicates that no constraint is imposed on $i$ with respect to $j$. The constraints are given by the inequalities $b_{ij}x_{j}\leq x_{i}$ for all $i,j=1,\ldots,n$, or equivalently by the double inequality $b_{ij}\leq x_{i}/x_{j}\leq 1/b_{ji}$ when $b_{ji}>0$. Combining the former inequalities over all $j$ yields the system of inequalities

$$\max_{1\leq j\leq n}b_{ij}x_{j}\leq x_{i},\qquad i=1,\ldots,n.$$
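In computational terms, feasibility of a candidate vector under this system is a one-line check; the sketch below uses hypothetical data and the convention that zero entries of $\bm{B}$ impose no constraint:

```python
import numpy as np

B = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0]])   # hypothetical constraint matrix
x = np.array([4.0, 2.0, 1.0])

# Check max_j b_ij x_j <= x_i for all i (zero entries impose nothing).
feasible = np.all(np.max(B * x[None, :], axis=1) <= x)
print(feasible)   # True for this data
```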

We incorporate the constraints into (1) to formulate the following problem of constrained log-Chebyshev approximation. Given a positive $(n\times n)$-matrix $\bm{C}=(c_{ij})$ of pairwise comparisons and a nonnegative $(n\times n)$-matrix $\bm{B}=(b_{ij})$ of constraints, the problem is to find positive $n$-vectors $\bm{x}=(x_{j})$ that achieve

$$\begin{aligned} \min_{\bm{x}>\bm{0}}\quad & \max_{1\leq i,j\leq n}\frac{c_{ij}x_{j}}{x_{i}};\\ \text{s.t.}\quad & \max_{1\leq j\leq n}b_{ij}x_{j}\leq x_{i},\qquad i=1,\ldots,n. \end{aligned}\qquad(2)$$

Note that multiplying all components of the vector $\bm{x}$ by a common positive factor changes neither the objective function nor the inequality constraints in problem (2), and hence the solutions of the problem are scale-invariant.

Finally, we observe that the methods of principal eigenvector and geometric mean cannot accommodate the above constraints in a straightforward way without considerable modification and complication of the solution. As shown later, the constrained problem of log-Chebyshev approximation at (2) can be directly solved to give the result in a compact vector form.

2.3 Best and Worst Differentiating Solutions

If problem (2) has a unique solution up to a positive factor, this solution is taken as the vector of absolute ratings of alternatives in question.

Suppose that the solution of (2) is not unique and denote the set of obtained solution vectors $\bm{x}$ by

$$X=\arg\min_{\bm{x}>\bm{0}}\left\{\max_{1\leq i,j\leq n}\frac{c_{ij}x_{j}}{x_{i}}\ :\ \max_{1\leq j\leq n}b_{ij}x_{j}\leq x_{i},\quad i=1,\ldots,n\right\}.$$

A nonunique solution vector of the pairwise comparison problem may make it difficult to derive an optimal decision, and thus further analysis is needed to characterize the result by one or two vectors that are reasonably representative of the solution. An approach developed in [34] suggests reducing the entire set of solutions to two vectors that provide the “best” and “worst” answers to the questions of which alternative has the highest rating and how much this rating differs from the ratings of other alternatives. As the best solution, the approach takes a vector that maximally differentiates the alternatives with the highest and lowest ratings, and as the worst, a vector that minimally differentiates all alternatives. The difference between alternatives is measured by the ratio in the form of a multiplicative version of the Hilbert (span, range) seminorm of the vector $\bm{x}$, which is given by

$$\max_{1\leq i\leq n}x_{i}\Big/\min_{1\leq j\leq n}x_{j}=\max_{1\leq i\leq n}x_{i}\times\max_{1\leq j\leq n}x_{j}^{-1}.\qquad(3)$$

The best and worst differentiating solutions are then obtained by maximizing and minimizing the Hilbert seminorm over all vectors $\bm{x}\in X$, which leads to the optimization problems

$$\max_{\bm{x}\in X}\ \max_{1\leq i\leq n}x_{i}\times\max_{1\leq j\leq n}x_{j}^{-1};\qquad\qquad\min_{\bm{x}\in X}\ \max_{1\leq i\leq n}x_{i}\times\max_{1\leq j\leq n}x_{j}^{-1}.$$

As examples show (see, e.g. [34]), one of the shortcomings of this approach is that the best (worst) differentiating solution found in this way may not be unique. To overcome this possible issue, we now propose a new improved procedure of finding the best and worst differentiating solutions, which provides a reduced set of the best solutions and a unique worst solution.

Since the solutions of problem (2) are invariant under multiplication by a positive factor, we can restrict ourselves to vectors that are normalized by dividing by the maximum component. Adding the normalization condition transforms the last two problems into the problems

$$\begin{aligned} \max_{\bm{x}\in X}\quad & \max_{1\leq i\leq n}x_{i}^{-1};\\ \text{s.t.}\quad & \max_{1\leq i\leq n}x_{i}=1; \end{aligned}\qquad\qquad \begin{aligned} \min_{\bm{x}\in X}\quad & \max_{1\leq i\leq n}x_{i}^{-1};\\ \text{s.t.}\quad & \max_{1\leq i\leq n}x_{i}=1. \end{aligned}\qquad(4)$$

Furthermore, we note that both problems can have nonunique normalized solutions. Under this circumstance, it is reasonable to concentrate only on the minimal normalized solution to the maximization problem and the maximal normalized solution to the minimization problem. As usual, a solution $\bm{x}_{0}$ is called minimal (maximal) if the componentwise inequality $\bm{x}_{0}\leq\bm{x}$ ($\bm{x}_{0}\geq\bm{x}$) holds for any solution $\bm{x}$.

Indeed, all normalized vectors that maximize or minimize the ratio between the highest and lowest ratings have two common components whose ratio is fixed: the maximum component equal to $1$, and the minimum component less than or equal to $1$. In this case, the lower (higher) the ratings of the other alternatives, the better (worse) the alternative with the maximum rating is distinguishable from the others.

3 Multicriteria Pairwise Comparison Problems

We now assume that $n$ alternatives are compared in pairs according to $m$ criteria. For each criterion $l=1,\ldots,m$, the results of pairwise comparisons are represented by a matrix $\bm{C}_{l}=(c_{ij}^{(l)})$. The problem is to assess the alternatives by evaluating a vector $\bm{x}=(x_{j})$ of ratings subject to constraints given by a matrix $\bm{B}=(b_{ij})$. Application of the log-Chebyshev approximation technique yields the following constrained multiobjective problem:

$$\begin{aligned} \min_{\bm{x}>\bm{0}}\quad & \left(\max_{1\leq i,j\leq n}\frac{c_{ij}^{(1)}x_{j}}{x_{i}},\ldots,\max_{1\leq i,j\leq n}\frac{c_{ij}^{(m)}x_{j}}{x_{i}}\right);\\ \text{s.t.}\quad & \max_{1\leq j\leq n}b_{ij}x_{j}\leq x_{i},\qquad i=1,\ldots,n. \end{aligned}\qquad(5)$$

Suppose we have a solution to the problem, which forms a nonempty set of solution vectors. If the solution obtained is a unique vector up to a positive factor, we normalize this vector by dividing by the maximum element and take its entries as absolute ratings of alternatives in the pairwise comparison problem. Otherwise, the best and worst differentiating vectors of ratings are obtained respectively as the minimal solution of the maximization problem and the maximal solution of the minimization problem at (4).

In the rest of this section, we consider three common approaches to handle problem (5), which results in the development of new procedures to find the solution set. The proposed solutions follow the max-ordering, lexicographic ordering and lexicographic max-ordering principles of optimality [12].

3.1 Max-Ordering Solution

The max-ordering optimization aims at minimizing the worst value of the scalar objective functions in the multiobjective problem. For each $\bm{x}$, the approach considers the function that takes the maximal (worst) value. This leads to the replacement of the vector objective function by a scalar function given by the maximum component of the vector function, which is known as the Chebyshev scalarization approach in multiobjective optimization [42].

The solution of the constrained problem at (5) starts with the introduction of the feasible solution set $X_{0}$ given by the constraints as

$$X_{0}=\left\{\bm{x}>\bm{0}\ :\ \max_{1\leq j\leq n}b_{ij}x_{j}\leq x_{i},\quad i=1,\ldots,n\right\}.\qquad(6)$$

We apply the Chebyshev scalarization and use the associativity of the maximum operation to form the scalar objective function

$$\max_{1\leq l\leq m}\max_{1\leq i,j\leq n}\frac{c_{ij}^{(l)}x_{j}}{x_{i}}=\max_{1\leq i,j\leq n}\frac{a_{ij}x_{j}}{x_{i}},\qquad a_{ij}=\max_{1\leq l\leq m}c_{ij}^{(l)}.$$
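The aggregation of the comparison matrices into the matrix $\bm{A}=(a_{ij})$ amounts to an entrywise maximum, as in the following sketch (hypothetical matrices for $m=2$ criteria):

```python
import numpy as np

# Hypothetical comparison matrices for two criteria.
C1 = np.array([[1.0, 2.0], [1/2, 1.0]])
C2 = np.array([[1.0, 3.0], [1/3, 1.0]])

# a_ij = max over criteria l of c_ij^(l).
A = np.max(np.stack([C1, C2]), axis=0)
print(A)
```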

Then, the problem reduces to the minimization problem

$$\min_{\bm{x}\in X_{0}}\ \max_{1\leq i,j\leq n}\frac{a_{ij}x_{j}}{x_{i}},\qquad(7)$$

which is solved to obtain the max-ordering solution in the form of the set

$$X_{1}=\arg\min_{\bm{x}\in X_{0}}\max_{1\leq i,j\leq n}\frac{a_{ij}x_{j}}{x_{i}}.$$

Note that the solution obtained by the max-ordering optimization is known to be weakly Pareto-optimal, and becomes Pareto-optimal if it is unique [42].

If the solution is not unique (up to a positive factor), we further reduce the solution set by extracting the best and worst differentiating vectors of ratings given by the minimal and maximal normalized solutions of problems (4) over the set $X_{1}$.

3.2 Lexicographic Ordering Solution

The lexicographic optimization examines the scalar objective functions in a hierarchical order based on a certain ranking of objectives. Suppose the objectives are numbered in such a way that objective $1$ has the highest rank, objective $2$ has the second highest, and so on until objective $m$ with the lowest rank. The lexicographic approach first solves the problem of minimizing function $1$ and examines the set of optimal solutions obtained. If the solution is unique, it is taken as the solution of the overall multiobjective problem. Otherwise function $2$ is minimized over all solutions of the first problem, and the procedure continues until a unique solution is obtained or the problem with function $m$ is solved.

To apply this approach to problem (5), we first take the set $X_{0}$ given by (6), and then successively obtain the solution sets $X_{s}$ for each problem

$$\min_{\bm{x}\in X_{s-1}}\ \max_{1\leq i,j\leq n}\frac{c_{ij}^{(s)}x_{j}}{x_{i}},\qquad s=1,\ldots,m.\qquad(8)$$

The solution procedure stops at step $s<m$ if the set $X_{s}$ consists of a single solution vector, or at step $s=m$ otherwise. The last found set $X_{s}$ is taken as the lexicographic solution for the entire problem. In the same way as before, if the solution given by $X_{s}$ is not unique, it is reduced to the minimal and maximal solutions of the respective problems at (4) over $X_{s}$.

3.3 Lexicographic Max-Ordering Solution

This approach combines the lexicographic ordering and max-ordering into one procedure that improves the accuracy of the assessment provided by the max-ordering approach. The procedure consists of several steps, each of which finds the max-ordering solution of a reduced problem with fewer objectives and a smaller feasible set. The first step coincides with the max-ordering solution described above for the constrained problem with $m$ objectives and the feasible solution set given by the constraints. Each subsequent step takes the solution from the previous step as the current feasible set and selects the objectives that can be further minimized over this set, to incorporate into a current vector objective function. A scalar objective function is included if its minimum value over the current feasible set is below the minimum of the objective function at the previous step.

We denote by $I_{s}$ the set of indices of the scalar objective functions that form components of the vector objective function, and by $X_{s}$ the solution set obtained at step $s$. We initially set $I_{0}=\{1,\ldots,m\}$ and define $X_{0}$ as in (6).

We find a solution set $X_{s}$ by solving the problem

$$\min_{\bm{x}\in X_{s-1}}\ \max_{l\in I_{s-1}}\max_{1\leq i,j\leq n}\frac{c_{ij}^{(l)}x_{j}}{x_{i}},\qquad s=1,\ldots,m.\qquad(9)$$

With the minimum value of the objective function at step $s$ denoted by $\theta_{s}$, we define the index set

$$I_{s}=\left\{l\in I_{s-1}:\ \theta_{s}>\min_{\bm{x}\in X_{s}}\max_{1\leq i,j\leq n}\frac{c_{ij}^{(l)}x_{j}}{x_{i}}\right\}.\qquad(10)$$

The procedure is completed at step $s<m$ if either the set $X_{s}$ reduces to a single solution vector or the condition $I_{s}=\emptyset$ holds. We additionally solve optimization problems (4) if the final set $X_{s}$ provides a nonunique solution.
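To summarize the procedure just described, the following schematic skeleton traces its control flow; the helpers solve_min (returning the minimum value and the solution set of a constrained scalar problem of the form (9)) and is_single_vector are hypothetical placeholders, not parts of the method itself:

```python
def lex_max_ordering(C_list, X0, solve_min, is_single_vector):
    # C_list: comparison matrices; X0: initial feasible set (6);
    # solve_min(matrices, X): min value and solution set of (9) over X.
    m = len(C_list)
    I, X = set(range(m)), X0
    for _ in range(m):
        # Max-ordering step (9) over the currently active objectives.
        theta, X = solve_min([C_list[l] for l in I], X)
        # Index set (10): keep objectives still improvable over X.
        I = {l for l in I if solve_min([C_list[l]], X)[0] < theta}
        if is_single_vector(X) or not I:
            break
    return X
```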

Below we show how the solutions offered by the above three approaches can be directly represented in explicit analytical form using methods and results of tropical mathematics.

4 Preliminary Algebraic Definitions, Notation and Results

In this section, we offer a brief overview of definitions, notation and results of tropical algebra that are used in the sequel. For further detail on the theory and applications of tropical mathematics, one can consult the recent monographs and textbooks [19, 23, 20, 40].

4.1 Idempotent Semifield

Consider the set $\mathbb{R}_{+}$ of nonnegative reals with two binary operations: addition denoted by $\oplus$ and defined as maximum, and multiplication denoted and defined as usual. Both operations are associative and commutative, and multiplication distributes over addition. Addition is idempotent since $x\oplus x=\max(x,x)=x$ for all $x\in\mathbb{R}_{+}$, and thus not invertible. It induces a partial order by the rule: $x\leq y$ if and only if $x\oplus y=y$, which is in agreement with the natural linear order on $\mathbb{R}_{+}$. With these properties, the system $\langle\mathbb{R}_{+},0,\oplus\rangle$ forms an idempotent Abelian semigroup with identity.

With usual multiplication, the system $\langle\mathbb{R}_{+},1,\times\rangle$ has the structure of an Abelian group where powers with rational (and even real) exponents are well defined, which means that this system is radicable (algebraically complete). In what follows, the multiplication sign $\times$ is omitted as in standard algebra to save writing. Under the above properties, the system $\langle\mathbb{R}_{+},0,1,\oplus,\times\rangle$ is commonly classified as a linearly ordered idempotent radicable semifield and referred to as max-algebra.

Both addition and multiplication are monotone in each argument: if the inequality $x\leq y$ holds for some $x,y\in\mathbb{R}_{+}$, then the inequalities $x\oplus z\leq y\oplus z$ and $xz\leq yz$ are valid for any $z\in\mathbb{R}_{+}$ as well. Furthermore, it follows from the inequality $x\leq y$ with $x,y\neq 0$ that $x^{q}\geq y^{q}$ if $q\leq 0$ and $x^{q}\leq y^{q}$ if $q>0$.

4.2 Matrix and Vector Algebra

Matrices over max-algebra are introduced in the usual way; addition and multiplication of compatible matrices as well as multiplication of matrices by scalars follow the standard entrywise rules where the arithmetic addition $+$ is replaced by $\oplus=\max$ (whereas the multiplication does not change). The monotonicity of the scalar operations extends to the matrix operations, where the inequalities are understood entrywise.
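As a computational aside, these entrywise rules are easy to express with NumPy; in the sketch below the helper name maxmul is our own, not notation from the paper:

```python
import numpy as np

def maxmul(A, B):
    # Tropical (max-times) matrix product: (A B)_ij = max_k a_ik * b_kj.
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[0.5, 1.0], [2.0, 0.0]])

print(np.maximum(A, B))   # tropical sum: entrywise maximum
print(maxmul(A, B))       # tropical product
```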

The transpose of a matrix $\bm{A}$ is denoted by $\bm{A}^{T}$. The matrices that consist of one column are vectors, and those of one row are transposed vectors. The set of matrices with $m$ rows and $n$ columns is denoted by $\mathbb{R}_{+}^{m\times n}$, and the set of column vectors with $n$ entries by $\mathbb{R}_{+}^{n}$.

The zero matrix denoted by $\bm{0}$ and the identity matrix denoted by $\bm{I}$ are defined in the same way as in conventional algebra. A matrix without zero rows (columns) is called row-regular (column-regular), and a vector without zero entries is called regular. Matrices without zero entries and regular vectors are also called positive.

A vector $\bm{y}$ is collinear to a vector $\bm{x}$ if $\bm{y}=c\bm{x}$ for some $c\in\mathbb{R}_{+}$.

For any nonzero matrix $\bm{A}=(a_{ij})$ from the set $\mathbb{R}_{+}^{m\times n}$, its multiplicative conjugate transpose (or simply conjugate) is the matrix $\bm{A}^{-}=(a_{ij}^{-})$ in $\mathbb{R}_{+}^{n\times m}$ where $a_{ij}^{-}=a_{ji}^{-1}$ if $a_{ji}\neq 0$, and $a_{ij}^{-}=0$ otherwise. If a matrix $\bm{A}$ is row-regular, then the following inequality obviously holds:

$$\bm{A}\bm{A}^{-}\geq\bm{I}.$$

For any nonzero vector $\bm{x}=(x_{j})$ from $\mathbb{R}_{+}^{n}$, its conjugate is the row vector $\bm{x}^{-}=(x_{j}^{-})$ where $x_{j}^{-}=x_{j}^{-1}$ if $x_{j}\neq 0$, and $x_{j}^{-}=0$ otherwise. If a vector $\bm{x}$ is regular, then the following matrix inequality and scalar equality are valid:

$$\bm{x}\bm{x}^{-}\geq\bm{I},\qquad\bm{x}^{-}\bm{x}=1$$

(where the equality holds for any nonzero vector $\bm{x}$).
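The conjugate and the two displayed relations can be checked numerically; in the sketch below (helper names our own) conjugation inverts the nonzero entries and transposes:

```python
import numpy as np

def maxmul(A, B):
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def conjugate(A):
    # Invert nonzero entries of the transpose; zeros stay zero.
    C = np.zeros_like(A.T)
    C[A.T != 0] = 1.0 / A.T[A.T != 0]
    return C

x = np.array([[4.0], [2.0], [1.0]])   # regular column vector
xm = conjugate(x)                     # row vector x^-

print(maxmul(x, xm))   # x x^- has unit diagonal, hence >= I
print(maxmul(xm, x))   # x^- x = 1
```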

For any square matrix $\bm{A}=(a_{ij})$ from $\mathbb{R}_{+}^{n\times n}$, the nonnegative integer powers represent repeated multiplication of the matrix by itself, defined by $\bm{A}^{0}=\bm{I}$ and $\bm{A}^{p}=\bm{A}\bm{A}^{p-1}$ for all integer $p>0$. The trace of $\bm{A}$ is given by

$$\mathrm{tr}\,\bm{A}=a_{11}\oplus\cdots\oplus a_{nn}=\bigoplus_{k=1}^{n}a_{kk}.$$

For any matrix $\bm{A}\in\mathbb{R}_{+}^{n\times n}$, a trace function is introduced to serve as a tropical analogue of the matrix determinant in the form

$$\mathrm{Tr}(\bm{A})=\mathrm{tr}\,\bm{A}\oplus\cdots\oplus\mathrm{tr}\,\bm{A}^{n}=\bigoplus_{k=1}^{n}\mathrm{tr}\,\bm{A}^{k}.$$

If the condition $\mathrm{Tr}(\bm{A})\leq 1$ holds, then the Kleene star operator is defined to map the matrix $\bm{A}$ into the matrix

$$\bm{A}^{\ast}=\bm{I}\oplus\bm{A}\oplus\cdots\oplus\bm{A}^{n-1}=\bigoplus_{k=0}^{n-1}\bm{A}^{k}.$$

Moreover, under the same condition, the inequality

$$\bm{A}^{\ast}\geq\bm{A}^{p},$$

is valid for all integer $p\geq 0$, from which it follows directly that

$$\bm{A}^{\ast}\bm{A}^{\ast}=\bm{A}^{\ast}.$$

The spectral radius of a matrix $\bm{A}\in\mathbb{R}_{+}^{n\times n}$ is given by

$$\lambda=\mathrm{tr}\,\bm{A}\oplus\cdots\oplus\mathrm{tr}^{1/n}(\bm{A}^{n})=\bigoplus_{k=1}^{n}\mathrm{tr}^{1/k}(\bm{A}^{k}).$$

Consider a matrix $\bm{A}=(a_{ij})$ in $\mathbb{R}_{+}^{m\times n}$ and a vector $\bm{x}=(x_{j})$ in $\mathbb{R}_{+}^{n}$. With the notation $\bm{1}=(1,\ldots,1)^{T}$, tropical analogues of the matrix and vector norms are defined as

$$\|\bm{A}\|=\bm{1}^{T}\bm{A}\bm{1}=\bigoplus_{i=1}^{m}\bigoplus_{j=1}^{n}a_{ij},\qquad\|\bm{x}\|=\bm{1}^{T}\bm{x}=\bm{x}^{T}\bm{1}=\bigoplus_{j=1}^{n}x_{j}.$$
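The quantities of this subsection (the trace, the function $\mathrm{Tr}$, the Kleene star, the spectral radius and the norms) translate directly into code; a minimal sketch follows, with helper names of our own choosing:

```python
import numpy as np

def maxmul(A, B):
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def Tr(A):
    # Tr(A) = tr A + ... + tr A^n in max-times arithmetic.
    P, t = np.eye(A.shape[0]), 0.0
    for _ in range(A.shape[0]):
        P = maxmul(P, A)
        t = max(t, np.max(np.diag(P)))
    return t

def kleene_star(A):
    # A^* = I + A + ... + A^(n-1); assumes Tr(A) <= 1.
    P, S = np.eye(A.shape[0]), np.eye(A.shape[0])
    for _ in range(A.shape[0] - 1):
        P = maxmul(P, A)
        S = np.maximum(S, P)
    return S

def spectral_radius(A):
    # lambda = max over k = 1..n of tr(A^k)^(1/k).
    P, lam = np.eye(A.shape[0]), 0.0
    for k in range(1, A.shape[0] + 1):
        P = maxmul(P, A)
        lam = max(lam, np.max(np.diag(P)) ** (1.0 / k))
    return lam

B = np.array([[0.0, 0.5], [0.25, 0.0]])
print(Tr(B), spectral_radius(B), np.max(B))   # Tr, lambda, norm ||B||
print(kleene_star(B))
```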

4.3 Vector Inequalities

We conclude the overview with two inequalities that appear in the derivation of the solutions of optimization problems in what follows. Suppose $\bm{A}\in\mathbb{R}_{+}^{m\times n}$ is a given matrix and $\bm{d}\in\mathbb{R}_{+}^{m}$ is a given vector, and consider the problem of finding all vectors $\bm{x}\in\mathbb{R}_{+}^{n}$ that satisfy the inequality

$$\bm{A}\bm{x}\leq\bm{d}.\qquad(11)$$

Solutions of (11) are known under various assumptions in different forms. We use a solution suggested by the following assertion (see, e.g. [27]).

Lemma 1.

For any column-regular matrix $\bm{A}$ and regular vector $\bm{d}$, all solutions of inequality (11) are given by the inequality $\bm{x}\leq(\bm{d}^{-}\bm{A})^{-}$.

As a consequence of the lemma, the solution of the equation $\bm{A}\bm{x}=\bm{d}$, if it exists, can be represented as a family of solutions, each defined by the vector inequality $\bm{x}\leq(\bm{d}^{-}\bm{A})^{-}$ where one scalar inequality is replaced by an equality.
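A short numerical sketch of Lemma 1 (with hypothetical data; the helpers are redefined here for self-containment) computes the greatest solution $(\bm{d}^{-}\bm{A})^{-}$ and verifies it:

```python
import numpy as np

def maxmul(A, B):
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def conjugate(A):
    C = np.zeros_like(A.T)
    C[A.T != 0] = 1.0 / A.T[A.T != 0]
    return C

A = np.array([[1.0, 2.0], [4.0, 1.0]])   # column-regular (hypothetical)
d = np.array([[8.0], [8.0]])             # regular right-hand side

x_max = conjugate(maxmul(conjugate(d), A))    # greatest x with A x <= d
print(x_max.ravel())                          # [2. 4.]
print(maxmul(A, x_max).ravel() <= d.ravel())  # verification: [True, True]
```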

Assume that for a given square matrix $\bm{B}\in\mathbb{R}_{+}^{n\times n}$, we need to find all regular solutions $\bm{x}\in\mathbb{R}_{+}^{n}$ of the inequality

$$\bm{B}\bm{x}\leq\bm{x}.\qquad(12)$$

A complete solution of (12) can be obtained as follows [27, 28].

Lemma 2.

For any matrix $\bm{B}$, the following statements are true:

1. If $\mathrm{Tr}(\bm{B})\leq 1$, then all regular solutions of inequality (12) are given in parametric form by $\bm{x}=\bm{B}^{\ast}\bm{u}$ where $\bm{u}\neq\bm{0}$ is a vector of parameters.

2. If $\mathrm{Tr}(\bm{B})>1$, then there is no regular solution.
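Computationally, Lemma 2 turns into a feasibility test followed by generation of solutions from the Kleene star; a sketch with a hypothetical matrix and helpers as before:

```python
import numpy as np

def maxmul(A, B):
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def Tr(A):
    P, t = np.eye(A.shape[0]), 0.0
    for _ in range(A.shape[0]):
        P = maxmul(P, A)
        t = max(t, np.max(np.diag(P)))
    return t

def kleene_star(A):
    P, S = np.eye(A.shape[0]), np.eye(A.shape[0])
    for _ in range(A.shape[0] - 1):
        P = maxmul(P, A)
        S = np.maximum(S, P)
    return S

B = np.array([[0.0, 0.5], [0.25, 0.0]])    # hypothetical matrix
assert Tr(B) <= 1                          # regular solutions exist

u = np.array([[1.0], [2.0]])               # any nonzero parameter vector
x = maxmul(kleene_star(B), u)              # a regular solution of B x <= x
print(np.all(np.max(B * x.T, axis=1, keepdims=True) <= x))   # True
```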

5 Tropical Optimization Problems

We are now concerned with optimization problems formulated and solved in the framework of tropical algebra to provide the basis for the solutions of multicriteria pairwise comparison problems in what follows. We present a new result that establishes a direct correspondence between the solutions of a conjugate quadratic optimization problem and the solutions of a linear inequality. Next, we offer new direct solutions to the problems of both maximization and minimization of a function in the form of the Hilbert seminorm.

5.1 Conjugate Quadratic Optimization Problem

Let us consider a problem of minimizing a conjugate quadratic (or pseudoquadratic) vector form subject to a linear constraint. Given matrices $\bm{A},\bm{B}\in\mathbb{R}_{+}^{n\times n}$, we find regular vectors $\bm{x}\in\mathbb{R}_{+}^{n}$ that attain the minimum

$$\begin{aligned} \min_{\bm{x}>\bm{0}}\quad & \bm{x}^{-}\bm{A}\bm{x};\\ \text{s.t.}\quad & \bm{B}\bm{x}\leq\bm{x}. \end{aligned}\qquad(13)$$

Complete solutions to the problem and its variations are proposed in [27, 28, 31, 32]. We use a solution in the form of the next statement.

Theorem 3.

Let $\bm{A}$ be a matrix with nonzero spectral radius and $\bm{B}$ be a matrix such that $\mathrm{Tr}(\bm{B})\leq 1$. Then, the minimum value of the objective function in problem (13) is equal to

$$\theta=\bigoplus_{k=1}^{n}\bigoplus_{0\leq i_{1}+\cdots+i_{k}\leq n-k}\mathrm{tr}^{1/k}(\bm{A}\bm{B}^{i_{1}}\cdots\bm{A}\bm{B}^{i_{k}}),\qquad(14)$$

and all regular solutions of the problem are given in the parametric form

$$\bm{x}=\bm{G}\bm{u},\qquad\bm{G}=(\theta^{-1}\bm{A}\oplus\bm{B})^{\ast},\qquad(15)$$

where $\bm{u}\neq\bm{0}$ is a vector of parameters.

We note that the above solution has polynomial computational complexity in the dimension $n$. Specifically, it is shown in [31, 32] that this solution can be calculated with at most $O(n^{5})$ scalar operations.
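For small $n$, the minimum value (14) can be evaluated by direct enumeration of the index tuples; the brute-force sketch below (helper names our own) only illustrates the formula and is not the polynomial algorithm of [31, 32]:

```python
import numpy as np
from itertools import product

def maxmul(A, B):
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def theta(A, B):
    # Direct evaluation of (14): enumerate products A B^{i1} ... A B^{ik}
    # over k = 1..n and i1 + ... + ik <= n - k.
    n = A.shape[0]
    powers = [np.eye(n)]                  # powers[i] = B^i
    for _ in range(n):
        powers.append(maxmul(powers[-1], B))
    best = 0.0
    for k in range(1, n + 1):
        for idx in product(range(n - k + 1), repeat=k):
            if sum(idx) > n - k:
                continue
            P = np.eye(n)
            for i in idx:
                P = maxmul(maxmul(P, A), powers[i])
            best = max(best, np.max(np.diag(P)) ** (1.0 / k))
    return best

A = np.array([[1.0, 2.0], [0.5, 1.0]])
B = np.array([[0.0, 0.5], [0.25, 0.0]])
print(theta(A, B))   # 1.0 for this data
```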

We now formulate an equivalence statement that allows all solutions of problem (13) to be described using solutions of a vector inequality in the form of (12) and vice versa. This new result plays a key role in the development of solutions for the multiobjective optimization problems in subsequent sections.

Theorem 4.

Let $\bm{A}$ be a matrix with nonzero spectral radius, $\bm{B}$ a matrix such that $\mathrm{Tr}(\bm{B})\leq 1$, and $\theta$ the minimum value of the objective function in problem (13), given by (14). Then, the following assertions are equivalent:

(i) $\bm{x}$ is a regular solution of problem (13).

(ii) $\bm{x}$ is a regular solution of the inequality
$$(\theta^{-1}\bm{A}\oplus\bm{B})\bm{x}\leq\bm{x}.$$

(iii) $\bm{x}$ is a regular vector given in the parametric form
$$\bm{x}=(\theta^{-1}\bm{A}\oplus\bm{B})^{\ast}\bm{u},$$
where $\bm{u}\neq\bm{0}$ is a vector of parameters.

Proof.

The proof of this equivalence theorem is readily obtained by combining the result of Theorem 3 with that of Lemma 2. ∎

5.2 Maximization and Minimization of Hilbert Seminorm

In this subsection, we concentrate on optimization problems with the objective function in the form of the Hilbert seminorm, which appear in the analysis of multiple solutions of the pairwise comparison problem. We consider constrained problems of maximization and minimization of the Hilbert seminorm and present results that make it possible to find all solutions of these problems.

We start with the conventional form given by (3) for the Hilbert seminorm of a vector $\bm{x}=(x_{i})$. After rewriting it in terms of max-algebra, we obtain

$$\bigoplus_{i=1}^{n}x_{i}\bigoplus_{j=1}^{n}x_{j}^{-1}=\bm{1}^{T}\bm{x}\bm{x}^{-}\bm{1}=\bm{x}^{-}\bm{1}\bm{1}^{T}\bm{x}=\|\bm{x}\|\|\bm{x}^{-}\|.$$

We examine optimization problems with $\bm{x}^{-}\bm{1}\bm{1}^{T}\bm{x}$ as the objective function, which we refer to as a tropical representation of the multiplicative Hilbert seminorm or simply as the Hilbert seminorm.
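In code, the seminorm is simply the product of the maximum component and the maximum reciprocal component, i.e. the max/min ratio; a one-line sketch:

```python
import numpy as np

x = np.array([4.0, 2.0, 1.0])

# ||x|| * ||x^-|| = (max_i x_i) * (max_j 1/x_j), the max/min ratio.
print(np.max(x) * np.max(1.0 / x))   # 4.0
```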

First, we suppose that, given a matrix $\bm{B}\in\mathbb{R}_{+}^{n\times n}$, we seek regular solutions $\bm{x}\in\mathbb{R}_{+}^{n}$ of the constrained maximization problem

$$\begin{aligned} \max_{\bm{x}>\bm{0}}\quad & \bm{x}^{-}\bm{1}\bm{1}^{T}\bm{x};\\ \text{s.t.}\quad & \bm{B}\bm{x}\leq\bm{x}. \end{aligned}\qquad(16)$$

A solution to problem (16) can be obtained as follows (see [34]).

Lemma 5.

For any matrix $\bm{B}$ without zero entries and with $\mathrm{Tr}(\bm{B})\leq 1$, the maximum in problem (16) is equal to $\|\bm{B}^{\ast}(\bm{B}^{\ast})^{-}\|$, and all regular solutions of the problem are given in the parametric form

$$\bm{x}=\bm{G}\bm{u},\qquad\bm{G}=\bm{B}^{\ast}(\bm{B}_{lk}^{\ast})^{-}\bm{B}^{\ast}\oplus\bm{B}^{\ast},\qquad\bm{u}\neq\bm{0},$$

where $\bm{B}_{lk}^{\ast}$ is the matrix obtained from the matrix $\bm{B}^{\ast}=(\bm{b}_{j}^{\ast})$ with columns $\bm{b}_{j}^{\ast}=(b_{ij}^{\ast})$ by setting to $0$ all entries except the entry $b_{lk}^{\ast}$ with indices given by

$$k=\arg\max_{j}\|\bm{b}_{j}^{\ast}\|\|(\bm{b}_{j}^{\ast})^{-}\|,\qquad l=\arg\min_{i}b_{ik}^{\ast}.$$

It is not difficult to see that the most computationally intensive component of the solution is the calculation of the Kleene star matrix $\bm{B}^{\ast}$. This calculation takes no more than $O(n^{4})$ scalar operations, which determines the computational complexity of the solution.

Another problem of interest is to find regular vectors $\bm{x}$ that achieve the minimum

$$\begin{aligned} \min_{\bm{x}>\bm{0}}\quad & \bm{x}^{-}\bm{1}\bm{1}^{T}\bm{x};\\ \text{s.t.}\quad & \bm{B}\bm{x}\leq\bm{x}. \end{aligned}\qquad(17)$$

The statement below offers a solution to problem (17), obtained by applying Theorem 3 with the substitution $\bm{A}=\bm{1}\bm{1}^{T}$ (see [35] for a solution of the problem with an extended system of constraints).

Lemma 6.

For any matrix $\bm{B}$ with $\mathrm{Tr}(\bm{B})\leq 1$, the minimum value of the objective function in problem (17) is equal to $\|\bm{B}^{\ast}\|$, and all regular solutions are given in the parametric form

$$\bm{x}=\bm{G}\bm{u},\qquad\bm{G}=\bigoplus_{0\leq i+j\leq n-1}\|\bm{B}^{\ast}\|^{-1}\bm{B}^{i}\bm{1}\bm{1}^{T}\bm{B}^{j}\oplus\bm{B}^{\ast},\qquad\bm{u}\neq\bm{0}.$$

The computational complexity of this solution is shown in [35] to be $O(n^{4})$.
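A direct transcription of the generating matrix $\bm{G}$ of Lemma 6 is sketched below (hypothetical data; helpers as in the earlier sketches); all regular solutions of (17) are then obtained as $\bm{x}=\bm{G}\bm{u}$ with nonzero $\bm{u}$:

```python
import numpy as np

def maxmul(A, B):
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def kleene_star(A):
    P, S = np.eye(A.shape[0]), np.eye(A.shape[0])
    for _ in range(A.shape[0] - 1):
        P = maxmul(P, A)
        S = np.maximum(S, P)
    return S

def min_hilbert_generator(B):
    # Minimum value ||B^*|| and the matrix G of Lemma 6.
    n = B.shape[0]
    Bstar = kleene_star(B)
    val = np.max(Bstar)
    one = np.ones((n, 1))
    powers = [np.eye(n)]                  # powers[i] = B^i
    for _ in range(n - 1):
        powers.append(maxmul(powers[-1], B))
    G = Bstar.copy()
    for i in range(n):
        for j in range(n - i):
            # Rank-one term B^i 1 1^T B^j scaled by ||B^*||^{-1}.
            term = maxmul(powers[i], one) @ maxmul(one.T, powers[j]) / val
            G = np.maximum(G, term)
    return val, G

B = np.array([[0.0, 0.5], [0.25, 0.0]])
val, G = min_hilbert_generator(B)
print(val)
print(G)
```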

Let us examine problems (16) and (17) under additional conditions to implement an improved technique of evaluating the best and worst differentiating solutions of the pairwise comparison problem. We add to both problems a normalization constraint on the solution vector $\bm{x}$ in the form

$$\bm{1}^{T}\bm{x}=\|\bm{x}\|=1.$$

We now concentrate on finding extremal (minimal and maximal) normalized solutions of problems (16) and (17). To solve the problems with normalization constraint, one can refine the results of Lemmas 5 and 6 and then find the extremal solutions. Below we offer direct solutions to the problems, which use less specific assumptions and involve less complicated calculations.

5.3 Maximization Problem

Suppose that we need to find the minimal solution of problem (16) with the normalization constraint added. The problem takes the following form:

$$\begin{aligned} \max_{\bm{x}>\bm{0}}\quad & \bm{x}^{-}\bm{1};\\ \text{s.t.}\quad & \bm{B}\bm{x}\leq\bm{x},\quad\bm{1}^{T}\bm{x}=1. \end{aligned}\qquad(18)$$

The next statement improves the result of Lemma 5 and extends it to matrices that may have zero entries.

Lemma 7.

For any matrix $\bm{B}$ with $\mathrm{Tr}(\bm{B})\leq 1$, the maximum value of the objective function in problem (18) is equal to $\|\bm{B}^{\ast}(\bm{B}^{\ast})^{-}\|$, and the minimal solutions are given by the set of normalized columns of the matrix $\bm{B}^{\ast}=(\bm{b}_{j}^{\ast})$ in the form

$$\bm{x}=\bm{b}_{k}^{\ast}\|\bm{b}_{k}^{\ast}\|^{-1},\qquad k=\arg\max_{1\leq j\leq n}\|\bm{b}_{j}^{\ast}\|\|(\bm{b}_{j}^{\ast})^{-}\|.$$

If a vector of the set is dominated by the other vectors (the minimal vector) or is the only vector in the set, it uniquely determines the minimal solution of the problem. Otherwise, the minimal solution is not uniquely defined.

Proof.

We solve the inequality constraint in problem (18) by using Lemma 2 to write $\bm{x}=\bm{B}^{\ast}\bm{u}$ where $\bm{u}\neq\bm{0}$, and then represent the problem in the form

$$\begin{aligned} \max_{\bm{u}\neq\bm{0}}\quad & (\bm{B}^{\ast}\bm{u})^{-}\bm{1};\\ \text{s.t.}\quad & \bm{1}^{T}\bm{B}^{\ast}\bm{u}=1. \end{aligned}$$

First, we observe that the equality constraint always has solutions. It follows from this constraint that $\bm{u}\leq(\bm{1}^{T}\bm{B}^{\ast})^{-}$ where at least for one component of the vector $\bm{u}$, the scalar inequality is satisfied as an equality.

Suppose that the equality holds for some $k=1,\ldots,n$, which defines the components of $\bm{u}=(u_{j})$ by the conditions

$$u_{k}=(\bm{1}^{T}\bm{b}_{k}^{\ast})^{-1}=\|\bm{b}_{k}^{\ast}\|^{-1},\qquad u_{j}\leq(\bm{1}^{T}\bm{b}_{j}^{\ast})^{-1}=\|\bm{b}_{j}^{\ast}\|^{-1},\quad j\neq k.$$

Under this assumption, we use properties of idempotent addition to bound the objective function from above as follows:

$$(\bm{B}^{\ast}\bm{u})^{-}\bm{1}=\bigoplus_{i=1}^{n}(b_{i1}^{\ast}u_{1}\oplus\cdots\oplus b_{in}^{\ast}u_{n})^{-1}\leq\bigoplus_{i=1}^{n}(b_{ik}^{\ast}u_{k})^{-1}=\|(\bm{b}_{k}^{\ast})^{-}\|\|\bm{b}_{k}^{\ast}\|.$$

Note that the above inequality becomes an equality if we put $u_{j}=0$ for all $j\neq k$, since in this case we have $(b_{i1}^{\ast}u_{1}\oplus\cdots\oplus b_{in}^{\ast}u_{n})^{-1}=(b_{ik}^{\ast}u_{k})^{-1}$.

Furthermore, the maximum value of the objective function $(\bm{B}^{\ast}\bm{u})^{-}\bm{1}$ is attained if the index $k$ is chosen as

$$k=\arg\max_{1\leq j\leq n}\|\bm{b}_{j}^{\ast}\|\|(\bm{b}_{j}^{\ast})^{-}\|,$$

which yields this maximum equal to $\bm{1}^{T}\bm{B}^{\ast}(\bm{B}^{\ast})^{-}\bm{1}=\|\bm{B}^{\ast}(\bm{B}^{\ast})^{-}\|$.

For each index $k$ that provides the above maximum, we take as a solution of the maximization problem the vector $\bm{u}$ with the components

$$u_{k}=\|\bm{b}_{k}^{\ast}\|^{-1},\qquad u_{j}=0,\quad j\neq k.$$

Since the equality $\bm{x}=\bm{B}^{\ast}\bm{u}$ holds, this vector $\bm{u}$ produces the vector

$$\bm{x}=u_{1}\bm{b}_{1}^{\ast}\oplus\cdots\oplus u_{n}\bm{b}_{n}^{\ast}=\bm{b}_{k}^{\ast}\|\bm{b}_{k}^{\ast}\|^{-1}.$$

We observe that the solution vector obtained has the form of a normalized column of the matrix $\bm{B}^{\ast}$, and that for a fixed $k$ it is the minimal solution due to the monotonicity of the matrix operations.

In the case of several indices kk that yield the maximum value of the objective function, we combine the corresponding solution vectors in a set. If there is the minimal vector among them, we take this vector as the unique minimal solution. Otherwise, the minimal solution cannot be uniquely determined, and therefore it is given by multiple vectors. ∎

5.4 Minimization Problem

We turn to the derivation of the maximal solution to the following minimization problem obtained from (17) by adding the normalization constraint:

min𝒙>𝟎\displaystyle\min_{\bm{x}>\bm{0}} 𝒙𝟏;\displaystyle\bm{x}^{-}\bm{1}; (19)
s.t. 𝑩𝒙𝒙,𝟏T𝒙=1.\displaystyle\bm{B}\bm{x}\leq\bm{x},\quad\bm{1}^{T}\bm{x}=1.

A complete solution of the problem is given by the next statement.

Lemma 8.

For any matrix 𝐁\bm{B} with missingTr(𝐁)1\mathop{\mathrm{missing}}{Tr}(\bm{B})\leq 1, the minimum value of the objective function in problem (19) is equal to 𝐁\|\bm{B}^{\ast}\|, and the maximal solution is given by

𝒙=(𝟏T𝑩).\bm{x}=(\bm{1}^{T}\bm{B}^{\ast})^{-}.
Proof.

Similarly as before, we replace the inequality 𝑩𝒙𝒙\bm{B}\bm{x}\leq\bm{x} by its solution 𝒙=𝑩𝒖\bm{x}=\bm{B}^{\ast}\bm{u} with 𝒖𝟎\bm{u}\neq\bm{0}. Combining with the constraint 𝟏T𝒙=1\bm{1}^{T}\bm{x}=1 yields the equality 𝟏T𝑩𝒖=1\bm{1}^{T}\bm{B}^{\ast}\bm{u}=1, from which the inequality 𝒖(𝟏T𝑩)\bm{u}\leq(\bm{1}^{T}\bm{B}^{\ast})^{-} follows.

After multiplication of the last inequality by 𝑩\bm{B}^{\ast} from the left, we arrive at a constraint on 𝒙\bm{x} in the form

𝒙=𝑩𝒖𝑩(𝟏T𝑩).\bm{x}=\bm{B}^{\ast}\bm{u}\leq\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast})^{-}.

In the same way as in [26], we verify the equality 𝑩(𝟏T𝑩)=(𝟏T𝑩)\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast})^{-}=(\bm{1}^{T}\bm{B}^{\ast})^{-}. Indeed, since 𝑩𝑰\bm{B}^{\ast}\geq\bm{I}, the inequality 𝑩(𝟏T𝑩)(𝟏T𝑩)\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast})^{-}\geq(\bm{1}^{T}\bm{B}^{\ast})^{-} holds.

To show the opposite inequality 𝑩(𝟏T𝑩)(𝟏T𝑩)\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast})^{-}\leq(\bm{1}^{T}\bm{B}^{\ast})^{-}, we note that the identity 𝑩𝑩=𝑩\bm{B}^{\ast}\bm{B}^{\ast}=\bm{B}^{\ast} is valid according to the properties of the Kleene star matrix. Application of the properties of conjugate transposition yields

(𝟏T𝑩)𝟏T𝑩𝑰,𝟏T𝑩𝑩(𝟏T𝑩𝑩)=1.(\bm{1}^{T}\bm{B}^{\ast})^{-}\bm{1}^{T}\bm{B}^{\ast}\geq\bm{I},\qquad\bm{1}^{T}\bm{B}^{\ast}\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast}\bm{B}^{\ast})^{-}=1.

We combine these relations to obtain the desired inequality as follows:

𝑩(𝟏T𝑩)=𝑩(𝟏T𝑩𝑩)(𝟏T𝑩)𝟏T𝑩𝑩(𝟏T𝑩𝑩)=(𝟏T𝑩).\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast})^{-}=\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast}\bm{B}^{\ast})^{-}\leq(\bm{1}^{T}\bm{B}^{\ast})^{-}\bm{1}^{T}\bm{B}^{\ast}\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast}\bm{B}^{\ast})^{-}=(\bm{1}^{T}\bm{B}^{\ast})^{-}.

As a result, the inequality constraint on the vector 𝒙\bm{x} reduces to

𝒙(𝟏T𝑩).\bm{x}\leq(\bm{1}^{T}\bm{B}^{\ast})^{-}.

Furthermore, we examine the vector 𝒙=(𝟏T𝑩)\bm{x}=(\bm{1}^{T}\bm{B}^{\ast})^{-}, which is the maximal solution of the last inequality, to verify that this vector satisfies both constraints of the problem. Since 𝑩𝑩\bm{B}^{\ast}\geq\bm{B}, we can write

𝑩𝒙=𝑩(𝟏T𝑩)𝑩(𝟏T𝑩)=(𝟏T𝑩)=𝒙,\bm{B}\bm{x}=\bm{B}(\bm{1}^{T}\bm{B}^{\ast})^{-}\leq\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast})^{-}=(\bm{1}^{T}\bm{B}^{\ast})^{-}=\bm{x},

and thus this vector 𝒙\bm{x} satisfies the inequality constraint. Moreover, we have

𝟏T𝒙=𝟏T(𝟏T𝑩)=𝟏T𝑩(𝟏T𝑩)=1,\bm{1}^{T}\bm{x}=\bm{1}^{T}(\bm{1}^{T}\bm{B}^{\ast})^{-}=\bm{1}^{T}\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast})^{-}=1,

which means that the equality constraint holds as well.

After conjugate transposition of both sides of the inequality 𝒙(𝟏T𝑩)\bm{x}\leq(\bm{1}^{T}\bm{B}^{\ast})^{-} and multiplication by 𝟏\bm{1}, we obtain a lower bound on the objective function

𝒙𝟏𝟏T𝑩𝟏=𝑩.\bm{x}^{-}\bm{1}\geq\bm{1}^{T}\bm{B}^{\ast}\bm{1}=\|\bm{B}^{\ast}\|.

It is clear that the maximal feasible solution 𝒙=(𝟏T𝑩)\bm{x}=(\bm{1}^{T}\bm{B}^{\ast})^{-} attains this bound, which means that 𝑩\|\bm{B}^{\ast}\| is the minimum of the objective function.

Note that the solution obtained is unique and can be represented as

𝒙=(𝒃11𝒃n1)T,\bm{x}=\begin{pmatrix}\|\bm{b}_{1}^{\ast}\|^{-1}&\ldots&\|\bm{b}_{n}^{\ast}\|^{-1}\end{pmatrix}^{T},

which is the vector of inverses of the maximum column elements in 𝑩\bm{B}^{\ast}. ∎

Let us verify that the solution (𝟏T𝑩)(\bm{1}^{T}\bm{B}^{\ast})^{-} of the minimization problem (19) is greater or equal to any solution of the maximization problem (18). Indeed, with the equality (𝟏T𝑩)=𝑩(𝟏T𝑩)(\bm{1}^{T}\bm{B}^{\ast})^{-}=\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast})^{-}, we have for each k=1,,nk=1,\ldots,n that

(𝟏T𝑩)=𝑩(𝟏T𝑩)=j=1n𝒃j𝒃j1𝒃k𝒃k1,(\bm{1}^{T}\bm{B}^{\ast})^{-}=\bm{B}^{\ast}(\bm{1}^{T}\bm{B}^{\ast})^{-}=\bigoplus_{j=1}^{n}\bm{b}_{j}^{\ast}\|\bm{b}_{j}^{\ast}\|^{-1}\geq\bm{b}_{k}^{\ast}\|\bm{b}_{k}^{\ast}\|^{-1},

which shows that all normalized columns of the matrix 𝑩\bm{B}^{\ast}, including the solution vectors of problem (18), are less or equal to the solution of (19).

To conclude this section, we note that the obtained solutions to both maximization and minimization problems involve the calculation of the Kleene star matrix 𝑩\bm{B}^{\ast} as the main component, and thus have the complexity O(n4)O(n^{4}).

6 Solution of Multicriteria Pairwise Comparison Problems

Consider the multiobjective optimization problem (5), which arises in the solution of the constrained multicriteria pairwise comparison problem using log-Chebyshev approximation. After rewriting the objective functions and inequality constraints in the max-algebra setting, the problem becomes

min𝒙>𝟎\displaystyle\min_{\bm{x}>\bm{0}} (1i,jnxi1cij(1)xj,,1i,jnxi1cij(m)xj);\displaystyle\left(\bigoplus_{1\leq i,j\leq n}x_{i}^{-1}c_{ij}^{(1)}x_{j},\ldots,\bigoplus_{1\leq i,j\leq n}x_{i}^{-1}c_{ij}^{(m)}x_{j}\right);
s.t. 1jnbijxjxi,i=1,,n.\displaystyle\bigoplus_{1\leq j\leq n}b_{ij}x_{j}\leq x_{i},\qquad i=1,\ldots,n.

By using vector and matrix notation, we finally formulate the problem as follows. Given (n×n)(n\times n)-matrices 𝑪l\bm{C}_{l} of pairwise comparisons of nn alternatives for criteria l=1,,ml=1,\ldots,m, and nonnegative (n×n)(n\times n)-matrix 𝑩\bm{B} of constraints, find positive nn-vectors 𝒙\bm{x} of ratings that solve the multiobjective problem

min𝒙>𝟎\displaystyle\min_{\bm{x}>\bm{0}} (𝒙𝑪1𝒙,,𝒙𝑪m𝒙);\displaystyle(\bm{x}^{-}\bm{C}_{1}\bm{x},\ldots,\bm{x}^{-}\bm{C}_{m}\bm{x}); (20)
s.t. 𝑩𝒙𝒙.\displaystyle\bm{B}\bm{x}\leq\bm{x}.

In this section, we offer new max-ordering, lexicographic and lexicographic max-ordering optimal solutions to the problem. If the solution obtained is not a unique vector of ratings, this solution is reduced to two the best and worst solution vectors, which most and least differentiate between the alternatives with the highest and lowest ratings.

6.1 Max-Ordering Solution

We start with the max-ordering solution technique, which leads to the solution of problem (7) over the feasible set X0X_{0} given by (6).

In terms of max-algebra, the set X0X_{0} is the set of regular vectors 𝒙\bm{x} that solve the inequality 𝑩𝒙𝒙\bm{B}\bm{x}\leq\bm{x}. The objective function at (7) takes the form

max1lm𝒙𝑪l𝒙=𝒙𝑨𝒙,𝑨=l=1m𝑪l.\max_{1\leq l\leq m}\bm{x}^{-}\bm{C}_{l}\bm{x}=\bm{x}^{-}\bm{A}\bm{x},\qquad\bm{A}=\bigoplus_{l=1}^{m}\bm{C}_{l}.

Combining the objective function with the constraint leads to the problem

min𝒙>𝟎\displaystyle\min_{\bm{x}>\bm{0}} 𝒙𝑨𝒙;\displaystyle\bm{x}^{-}\bm{A}\bm{x};
s.t. 𝑩𝒙𝒙.\displaystyle\bm{B}\bm{x}\leq\bm{x}.

Since the problem takes the form of (13), we apply Theorem 3 to find the minimum value θ\theta of the objective function, which is given by (14). Next, we obtain the solution set X1X_{1} of the problem as the set of vectors

𝒙=𝑩1𝒖,𝑩1=θ1𝑨𝑩,𝒖𝟎.\bm{x}=\bm{B}_{1}^{\ast}\bm{u},\qquad\bm{B}_{1}=\theta^{-1}\bm{A}\oplus\bm{B},\qquad\bm{u}\neq\bm{0}.

If the columns in the matrix 𝑮=𝑩1\bm{G}=\bm{B}_{1}^{\ast} are collinear, then any column after normalization can be taken as the unique solution. Otherwise by Theorem 4, the set X1X_{1} coincides with the set of regular solutions to the inequality

𝑩1𝒙𝒙.\bm{B}_{1}\bm{x}\leq\bm{x}.

It remains to obtain the best and worst differentiating solutions in the set, which are the minimal and maximal solutions of the respective problems

max𝒙>𝟎𝒙𝟏;s.t.𝑩1𝒙𝒙,𝟏T𝒙=1;min𝒙>𝟎𝒙𝟏;s.t.𝑩1𝒙𝒙,𝟏T𝒙=1.\begin{aligned} \max_{\bm{x}>\bm{0}}&&&\bm{x}^{-}\bm{1};\\ \text{s.t.}&&&\bm{B}_{1}\bm{x}\leq\bm{x},\quad\bm{1}^{T}\bm{x}=1;\end{aligned}\qquad\qquad\begin{aligned} \min_{\bm{x}>\bm{0}}&&&\bm{x}^{-}\bm{1};\\ \text{s.t.}&&&\bm{B}_{1}\bm{x}\leq\bm{x},\quad\bm{1}^{T}\bm{x}=1.\end{aligned}

By applying Lemma 7, we find the best differentiating solutions represented in terms of the columns of the matrix 𝑮=(𝒈j)\bm{G}=(\bm{g}_{j}) as the set of vectors

𝒙best=𝒈k𝒈k1,k=argmax1jn𝒈j(𝒈j).\bm{x}^{\textup{best}}=\bm{g}_{k}\|\bm{g}_{k}\|^{-1},\qquad k=\arg\max_{1\leq j\leq n}\|\bm{g}_{j}\|\|(\bm{g}_{j})^{-}\|.

If there is a minimal solution in the set, it is taken as the unique minimal best differentiating solution; otherwise, the minimal solution cannot be uniquely determined.

By Lemma 8, the maximal worst differentiating solution is obtained as

𝒙worst=(𝟏T𝑮).\bm{x}^{\textup{worst}}=(\bm{1}^{T}\bm{G})^{-}.

The next statement combines the results of the derivation of all max-ordering solution vectors for problem (20) with the calculation of the best and worst differentiating vectors among them.

Theorem 9.

Let 𝐂l\bm{C}_{l} for all l=1,,ml=1,\ldots,m be matrices with nonzero spectral radii and 𝐁\bm{B} be a matrix such that missingTr(𝐁)1\mathop{\mathrm{missing}}{Tr}(\bm{B})\leq 1. Define

θ=k=1n0i1++iknkmissingtr1/k(𝑨𝑩i1𝑨𝑩ik),𝑨=l=1m𝑪l,\displaystyle\theta=\bigoplus_{k=1}^{n}\bigoplus_{0\leq i_{1}+\cdots+i_{k}\leq n-k}\mathop{\mathrm{missing}}{tr}\nolimits^{1/k}(\bm{A}\bm{B}^{i_{1}}\cdots\bm{A}\bm{B}^{i_{k}}),\qquad\bm{A}=\bigoplus_{l=1}^{m}\bm{C}_{l},
𝑩1=θ1𝑨𝑩.\displaystyle\bm{B}_{1}=\theta^{-1}\bm{A}\oplus\bm{B}.

Then, with the notation 𝐆=𝐁1\bm{G}=\bm{B}_{1}^{\ast}, the following statements hold:

  1. (i)

    All max-ordering solutions of problem (20) are given by the matrix 𝑮=(𝒈j)\bm{G}=(\bm{g}_{j}) in the parametric form

    𝒙=𝑮𝒖,𝒖𝟎.\bm{x}=\bm{G}\bm{u},\qquad\bm{u}\neq\bm{0}.
  2. (ii)

    The minimal normalized best differentiating solution is given by

    𝒙best=𝒈k𝒈k1,k=argmax1jn𝒈j𝒈j.\bm{x}^{\textup{best}}=\bm{g}_{k}\|\bm{g}_{k}\|^{-1},\qquad k=\arg\max_{1\leq j\leq n}\|\bm{g}_{j}\|\|\bm{g}_{j}^{-}\|.
  3. (iii)

    The maximal normalized worst differentiating solution is

    𝒙worst=(𝟏T𝑮).\bm{x}^{\textup{worst}}=(\bm{1}^{T}\bm{G})^{-}.

We observe that if the matrix 𝑮\bm{G} produces a unique solution, then the minimal best and maximal worst differentiating solutions obviously coincide. This allows one not to check whether all columns in 𝑮\bm{G} are collinear or not. Instead one can directly calculate the best and worst solutions, which provide a common result if the solution is unique.

The above result is valid for the unconstrained problem which is obtained by setting 𝑩=𝟎\bm{B}=\bm{0}. In this case, the minimum value θ\theta of the objective function is reduced to the spectral radius of the matrix 𝑨\bm{A}.

Finally, note that with additional computational effort required to find the matrix 𝑨\bm{A}, the computational complexity of the solution is O(mn2+n5)O(mn^{2}+n^{5}).

6.2 Lexicographic Ordering Solution

Consider the implementation of the lexicographic ordering technique which involves a series of problems (8). We solve problem (20) in no more than mm steps each consisting in the minimization of a scalar objective function over a feasible set given by the solutions of the previous step.

At the step s=1s=1, we use the symbol 𝑩0=𝑩\bm{B}_{0}=\bm{B} and formulate the problem

min𝒙>𝟎\displaystyle\min_{\bm{x}>\bm{0}} 𝒙𝑪1𝒙;\displaystyle\bm{x}^{-}\bm{C}_{1}\bm{x};
s.t. 𝑩0𝒙𝒙.\displaystyle\bm{B}_{0}\bm{x}\leq\bm{x}.

To solve the problem, we apply Theorem 3 to calculate the minimum

θ1=k=1n0i1++iknkmissingtr1/k(𝑪1𝑩0i1𝑪1𝑩0ik).\theta_{1}=\bigoplus_{k=1}^{n}\bigoplus_{0\leq i_{1}+\cdots+i_{k}\leq n-k}\mathop{\mathrm{missing}}{tr}\nolimits^{1/k}(\bm{C}_{1}\bm{B}_{0}^{i_{1}}\cdots\bm{C}_{1}\bm{B}_{0}^{i_{k}}).

Then, we find the solution set, which is given in parametric form by

𝒙=𝑩1𝒖,𝑩1=θ11𝑪1𝑩0,𝒖𝟎,\bm{x}=\bm{B}_{1}^{\ast}\bm{u},\qquad\bm{B}_{1}=\theta_{1}^{-1}\bm{C}_{1}\oplus\bm{B}_{0},\qquad\bm{u}\neq\bm{0},

or according to Theorem 4, as the set of solutions of the inequality

𝑩1𝒙𝒙.\bm{B}_{1}\bm{x}\leq\bm{x}.

We take the last inequality as a constraint that determines the set X1X_{1} and formulate the problem of step s=2s=2 as follows:

min𝒙>𝟎\displaystyle\min_{\bm{x}>\bm{0}} 𝒙𝑪2𝒙;\displaystyle\bm{x}^{-}\bm{C}_{2}\bm{x};
s.t. 𝑩1𝒙𝒙.\displaystyle\bm{B}_{1}\bm{x}\leq\bm{x}.

The minimum in the problem is given by

θ2=k=1n0i1++iknkmissingtr1/k(𝑪2𝑩1i1𝑪2𝑩1ik),\theta_{2}=\bigoplus_{k=1}^{n}\bigoplus_{0\leq i_{1}+\cdots+i_{k}\leq n-k}\mathop{\mathrm{missing}}{tr}\nolimits^{1/k}(\bm{C}_{2}\bm{B}_{1}^{i_{1}}\cdots\bm{C}_{2}\bm{B}_{1}^{i_{k}}),

and the set of solutions X2X_{2} is defined as

𝒙=𝑩2𝒖,𝑩2=θ21𝑪2𝑩1,𝒖𝟎.\bm{x}=\bm{B}_{2}^{\ast}\bm{u},\qquad\bm{B}_{2}=\theta_{2}^{-1}\bm{C}_{2}\oplus\bm{B}_{1},\qquad\bm{u}\neq\bm{0}.

We repeat the procedure for each step s=3,,ms=3,\ldots,m. Upon completion of step s=ms=m, we arrive at the lexicographic solution given by

𝒙=𝑩m𝒖,𝒖𝟎.\bm{x}=\bm{B}_{m}^{\ast}\bm{u},\qquad\bm{u}\neq\bm{0}.

If the solution obtained is not unique (up to a positive factor), we calculate the best and worst differentiating solution vectors.

We summarize the above computational scheme in the following form.

Theorem 10.

Let 𝐂l\bm{C}_{l} for all l=1,,ml=1,\ldots,m be matrices with nonzero spectral radii and 𝐁\bm{B} be a matrix such that missingTr(𝐁)1\mathop{\mathrm{missing}}{Tr}(\bm{B})\leq 1. Denote 𝐁0=𝐁\bm{B}_{0}=\bm{B} and define the recurrence relations

θs=k=1n0i1++iknkmissingtr1/k(𝑪s𝑩s1i1𝑪s𝑩s1ik),\displaystyle\theta_{s}=\bigoplus_{k=1}^{n}\bigoplus_{0\leq i_{1}+\cdots+i_{k}\leq n-k}\mathop{\mathrm{missing}}{tr}\nolimits^{1/k}(\bm{C}_{s}\bm{B}_{s-1}^{i_{1}}\cdots\bm{C}_{s}\bm{B}_{s-1}^{i_{k}}),\qquad
𝑩s=θs1𝑪s𝑩s1,s=1,,m.\displaystyle\bm{B}_{s}=\theta_{s}^{-1}\bm{C}_{s}\oplus\bm{B}_{s-1},\qquad s=1,\ldots,m.

Then, with the notation 𝐆=𝐁m\bm{G}=\bm{B}_{m}^{\ast}, the following statements hold:

  1. (i)

    All lexicographic ordering solutions of problem (20) are given by the matrix 𝑮=(𝒈j)\bm{G}=(\bm{g}_{j}) in the parametric form

    𝒙=𝑮𝒖,𝒖𝟎.\bm{x}=\bm{G}\bm{u},\qquad\bm{u}\neq\bm{0}.
  2. (ii)

    The minimal normalized best differentiating solution is given by

    𝒙best=𝒈k𝒈k1,k=argmax1jn𝒈j𝒈j.\bm{x}^{\textup{best}}=\bm{g}_{k}\|\bm{g}_{k}\|^{-1},\qquad k=\arg\max_{1\leq j\leq n}\|\bm{g}_{j}\|\|\bm{g}_{j}^{-}\|.
  3. (iii)

    The maximal normalized worst differentiating solution is

    𝒙worst=(𝟏T𝑮).\bm{x}^{\textup{worst}}=(\bm{1}^{T}\bm{G})^{-}.

Suppose that for some step s<ms<m, all columns in the matrix 𝑩s\bm{B}_{s}^{\ast} are collinear, and thus the matrix generates a unique solution vector. In this case, further steps cannot change the obtained solution and thus can be avoided to stop the procedure. Note that if a matrix 𝑩s\bm{B}_{s}^{\ast} generates a unique solution, then both the minimal best and maximal worst normalized vectors obtained from 𝑩s\bm{B}_{s}^{\ast} are the same. As a result, we can calculate these vectors to stop the procedure if these vectors coincide or continue otherwise.

The complexity of the solution can be estimated not greater than O(mn5)O(mn^{5}).

6.3 Lexicographic Max-Ordering Solution

We now describe a solution technique based on the lexicographic max-ordering optimality principle. Similar to the lexicographic ordering solution, we handle problem (20) by solving a series of problems, where each problem has a scalar objective function and inequality constraint provided by the solution of the previous problem. According to the computational scheme given by formulas (6), (9) and (10), the solution involves no more than mm steps, which are described in the framework of max-algebra as follows.

We set 𝑩0=𝑩\bm{B}_{0}=\bm{B} and define X0={𝒙>𝟎|𝑩0𝒙𝒙}X_{0}=\{\bm{x}>\bm{0}|\ \bm{B}_{0}\bm{x}\leq\bm{x}\}, I0={1,,m}I_{0}=\{1,\ldots,m\}.

The step s=1s=1 starts with calculating the matrix

𝑨1=lI0𝑪l=𝑪1𝑪m.\bm{A}_{1}=\bigoplus_{l\in I_{0}}\bm{C}_{l}=\bm{C}_{1}\oplus\cdots\oplus\bm{C}_{m}.

The purpose of this step is to solve the problem

min𝒙>𝟎\displaystyle\min_{\bm{x}>\bm{0}} 𝒙𝑨1𝒙;\displaystyle\bm{x}^{-}\bm{A}_{1}\bm{x};
s.t. 𝑩0𝒙𝒙.\displaystyle\bm{B}_{0}\bm{x}\leq\bm{x}.

We apply Theorem 3 to find the minimum of the objective function

θ1=k=1n0i1++iknkmissingtr1/k(𝑨1𝑩0i1𝑨1𝑩0ik),\theta_{1}=\bigoplus_{k=1}^{n}\bigoplus_{0\leq i_{1}+\cdots+i_{k}\leq n-k}\mathop{\mathrm{missing}}{tr}\nolimits^{1/k}(\bm{A}_{1}\bm{B}_{0}^{i_{1}}\cdots\bm{A}_{1}\bm{B}_{0}^{i_{k}}),

and then obtain all solutions in the parametric form

𝒙=𝑩1𝒖,𝑩1=θ11𝑨1𝑩0,𝒖𝟎.\bm{x}=\bm{B}_{1}^{\ast}\bm{u},\qquad\bm{B}_{1}=\theta_{1}^{-1}\bm{A}_{1}\oplus\bm{B}_{0},\qquad\bm{u}\neq\bm{0}.

By Theorem 4 these solutions can be defined by the inequality 𝑩1𝒙𝒙\bm{B}_{1}\bm{x}\leq\bm{x} to provide the new feasible set

X1={𝒙>𝟎:𝑩1𝒙𝒙}.X_{1}=\{\bm{x}>\bm{0}:\ \bm{B}_{1}\bm{x}\leq\bm{x}\}.

To prepare the next step, we find the minimums

θ1l=min𝒙X1𝒙𝑪l𝒙=k=1n0i1++iknkmissingtr1/n(𝑪l𝑩1i1𝑪l𝑩1ik),lI0.\theta_{1l}=\min_{\bm{x}\in X_{1}}\bm{x}^{-}\bm{C}_{l}\bm{x}=\bigoplus_{k=1}^{n}\bigoplus_{0\leq i_{1}+\cdots+i_{k}\leq n-k}\mathop{\mathrm{missing}}{tr}\nolimits^{1/n}(\bm{C}_{l}\bm{B}_{1}^{i_{1}}\cdots\bm{C}_{l}\bm{B}_{1}^{i_{k}}),\qquad l\in I_{0}.

At the step s=2s=2, we form the matrix

𝑨2=lI1𝑪l,I1={lI0:θ1>θ1l},\bm{A}_{2}=\bigoplus_{l\in I_{1}}\bm{C}_{l},\qquad I_{1}=\left\{l\in I_{0}\ :\ \theta_{1}>\theta_{1l}\right\},

and then solve the problem

min𝒙>𝟎\displaystyle\min_{\bm{x}>\bm{0}} 𝒙𝑨2𝒙;\displaystyle\bm{x}^{-}\bm{A}_{2}\bm{x};
s.t. 𝑩1𝒙𝒙.\displaystyle\bm{B}_{1}\bm{x}\leq\bm{x}.

The minimum in the problem is given by

θ2=k=1n0i1++iknkmissingtr1/k(𝑨2𝑩1i1𝑨2𝑩1ik),\theta_{2}=\bigoplus_{k=1}^{n}\bigoplus_{0\leq i_{1}+\cdots+i_{k}\leq n-k}\mathop{\mathrm{missing}}{tr}\nolimits^{1/k}(\bm{A}_{2}\bm{B}_{1}^{i_{1}}\cdots\bm{A}_{2}\bm{B}_{1}^{i_{k}}),

and the set of solutions is

X2={𝒙>𝟎:𝑩2𝒙𝒙},𝑩2=θ21𝑨2𝑩1.X_{2}=\{\bm{x}>\bm{0}:\ \bm{B}_{2}\bm{x}\leq\bm{x}\},\qquad\bm{B}_{2}=\theta_{2}^{-1}\bm{A}_{2}\oplus\bm{B}_{1}.

To complete this step, we calculate the values

θ2l=k=1n0i1++iknkmissingtr1/n(𝑪l𝑩2i1𝑪l𝑩2ik),lI1,\theta_{2l}=\bigoplus_{k=1}^{n}\bigoplus_{0\leq i_{1}+\cdots+i_{k}\leq n-k}\mathop{\mathrm{missing}}{tr}\nolimits^{1/n}(\bm{C}_{l}\bm{B}_{2}^{i_{1}}\cdots\bm{C}_{l}\bm{B}_{2}^{i_{k}}),\qquad l\in I_{1},

and then define

𝑨3=lI2𝑪l,I2={lI1:θ2>θ2l}.\bm{A}_{3}=\bigoplus_{l\in I_{2}}\bm{C}_{l},\qquad I_{2}=\left\{l\in I_{1}\ :\ \theta_{2}>\theta_{2l}\right\}.

We repeat the procedure for all remaining steps sms\leq m. The procedure stops at step ss if the set XsX_{s} consists of a single solution, or Is=I_{s}=\emptyset.

The above solution scheme can be summarized as follows.

Theorem 11.

Let 𝐂l\bm{C}_{l} for all l=1,,ml=1,\ldots,m be matrices with nonzero spectral radii and 𝐁\bm{B} be a matrix such that missingTr(𝐁)1\mathop{\mathrm{missing}}{Tr}(\bm{B})\leq 1. Denote 𝐁0=𝐁\bm{B}_{0}=\bm{B} and I0={1,,m}I_{0}=\{1,\ldots,m\}, and define the recurrence relations

θs=k=1n0i1++iknkmissingtr1/k(𝑨s𝑩s1i1𝑨s𝑩s1ik),𝑨s=lIs1𝑪l,\displaystyle\theta_{s}=\bigoplus_{k=1}^{n}\bigoplus_{0\leq i_{1}+\cdots+i_{k}\leq n-k}\mathop{\mathrm{missing}}{tr}\nolimits^{1/k}(\bm{A}_{s}\bm{B}_{s-1}^{i_{1}}\cdots\bm{A}_{s}\bm{B}_{s-1}^{i_{k}}),\qquad\bm{A}_{s}=\bigoplus_{l\in I_{s-1}}\bm{C}_{l},
𝑩s=θs1𝑨s𝑩s1,Is={lIs1:θs>θsl},\displaystyle\bm{B}_{s}=\theta_{s}^{-1}\bm{A}_{s}\oplus\bm{B}_{s-1},\qquad I_{s}=\left\{l\in I_{s-1}\ :\ \theta_{s}>\theta_{sl}\right\},
θsl=k=1n0i1++iknkmissingtr1/n(𝑪l𝑩si1𝑪l𝑩sik),lIs1,s=1,,m.\displaystyle\theta_{sl}=\bigoplus_{k=1}^{n}\bigoplus_{0\leq i_{1}+\cdots+i_{k}\leq n-k}\mathop{\mathrm{missing}}{tr}\nolimits^{1/n}(\bm{C}_{l}\bm{B}_{s}^{i_{1}}\cdots\bm{C}_{l}\bm{B}_{s}^{i_{k}}),\quad l\in I_{s-1},\qquad s=1,\ldots,m.

Then, with the notation 𝐆=𝐁m\bm{G}=\bm{B}_{m}^{\ast}, the following statements hold:

  1. (i)

    All lexicographic max-ordering solutions of problem (20) are given by the matrix 𝑮=(𝒈j)\bm{G}=(\bm{g}_{j}) in the parametric form

    𝒙=𝑮𝒖,𝒖𝟎.\bm{x}=\bm{G}\bm{u},\qquad\bm{u}\neq\bm{0}.
  2. (ii)

    The minimal normalized best differentiating solution is given by

    𝒙best=𝒈k𝒈k1,k=argmax1jn𝒈j𝒈j.\bm{x}^{\textup{best}}=\bm{g}_{k}\|\bm{g}_{k}\|^{-1},\qquad k=\arg\max_{1\leq j\leq n}\|\bm{g}_{j}\|\|\bm{g}_{j}^{-}\|.
  3. (iii)

    The maximal normalized worst differentiating solution is

    𝒙worst=(𝟏T𝑮).\bm{x}^{\textup{worst}}=(\bm{1}^{T}\bm{G})^{-}.

Considering evaluation of the minimums θsl\theta_{sl} at each step ss, we see that the computational complexity of the solution is not more than O(m2n5)O(m^{2}n^{5}).

7 Illustrative Examples

In this section, we present numerical examples intended to illustrate the solution technique developed. We consider a constrained multicriteria problem of pairwise comparisons, which is formulated as a multiobjective optimization problem of constrained log-Chebyshev matrix approximation, and then solved in the framework of tropical optimization.

Suppose that there are n=4n=4 alternatives that are compared in pairs according to m=4m=4 criteria. The results of comparisons are given by the following pairwise comparison matrices:

𝑪1=(12341/21321/31/311/31/41/231),𝑪2=(12341/21231/31/2121/41/31/21),\displaystyle\bm{C}_{1}=\left(\begin{array}[]{cccc}1&2&3&4\\ 1/2&1&3&2\\ 1/3&1/3&1&1/3\\ 1/4&1/2&3&1\end{array}\right),\qquad\bm{C}_{2}=\left(\begin{array}[]{cccc}1&2&3&4\\ 1/2&1&2&3\\ 1/3&1/2&1&2\\ 1/4&1/3&1/2&1\end{array}\right),
𝑪3=(13231/31241/21/2111/31/411),𝑪4=(12211/211/231/221211/31/21).\displaystyle\bm{C}_{3}=\left(\begin{array}[]{cccc}1&3&2&3\\ 1/3&1&2&4\\ 1/2&1/2&1&1\\ 1/3&1/4&1&1\end{array}\right),\qquad\bm{C}_{4}=\left(\begin{array}[]{cccc}1&2&2&1\\ 1/2&1&1/2&3\\ 1/2&2&1&2\\ 1&1/3&1/2&1\end{array}\right).

The problem is to evaluate the vector 𝒙=(x1,x2,x3,x4)T\bm{x}=(x_{1},x_{2},x_{3},x_{4})^{T} of individual ratings of alternatives subject to the constraint x3x4x_{3}\geq x_{4}, which specifies that the rating of alternative 33 cannot be less than the rating of alternative 44.

In terms of max-algebra, the problem takes the form of (20) where the constraint matrix is given by

𝑩=(0000000000010000).\bm{B}=\left(\begin{array}[]{cccc}0&0&0&0\\ 0&0&0&0\\ 0&0&0&1\\ 0&0&0&0\end{array}\right).

Note that a straightforward analysis of the pairwise comparison matrices shows that according to their entries, alternatives 11 and 22 should respectively receive the first and second highest ratings. Alternatives 33 and 44 have lower ratings and must satisfy the constraint specified in the problem.

Below we describe solutions obtained according to the max-ordering, lexicographic ordering and lexicographic max-ordering principles of optimality.

7.1 Max-Ordering Solution

To obtain the max-ordering solution, we apply Theorem 9. Under the same notation as in this theorem, we calculate

𝑨=(13341/21341/221211/231),θ=3.\bm{A}=\left(\begin{array}[]{cccc}1&3&3&4\\ 1/2&1&3&4\\ 1/2&2&1&2\\ 1&1/2&3&1\end{array}\right),\qquad\theta=3.

Furthermore, we form the matrices

𝑩1=(1/3114/31/61/314/31/62/31/311/31/611/3),𝑮=𝑩1=(114/34/34/914/34/31/32/3111/32/311).\bm{B}_{1}=\left(\begin{array}[]{cccc}1/3&1&1&4/3\\ 1/6&1/3&1&4/3\\ 1/6&2/3&1/3&1\\ 1/3&1/6&1&1/3\end{array}\right),\qquad\bm{G}=\bm{B}_{1}^{\ast}=\left(\begin{array}[]{cccc}1&1&4/3&4/3\\ 4/9&1&4/3&4/3\\ 1/3&2/3&1&1\\ 1/3&2/3&1&1\end{array}\right).

Evaluation of the minimal normalized best differentiating and maximal normalized worst differentiating solutions from the matrix 𝑮\bm{G} gives

𝒙best=(14/91/31/3)(1.00000.44440.33330.3333),𝒙worst=(113/43/4)=(1.00001.00000.75000.7500).\bm{x}^{\textup{best}}=\left(\begin{array}[]{c}1\\ 4/9\\ 1/3\\ 1/3\end{array}\right)\approx\left(\begin{array}[]{c}1.0000\\ 0.4444\\ 0.3333\\ 0.3333\end{array}\right),\qquad\bm{x}^{\textup{worst}}=\left(\begin{array}[]{c}1\\ 1\\ 3/4\\ 3/4\end{array}\right)=\left(\begin{array}[]{c}1.0000\\ 1.0000\\ 0.7500\\ 0.7500\end{array}\right).

We can combine both solutions in a vector where some entries are given in interval form as

𝒙(1.00000.4444 1.00000.3333 0.75000.3333 0.7500).\bm{x}\approx\left(\begin{array}[]{c}1.0000\\ 0.4444\ \ldots\ 1.0000\\ 0.3333\ \ldots\ 0.7500\\ 0.3333\ \ldots\ 0.7500\end{array}\right).

7.2 Lexicographic Ordering Solution

According to Theorem 10, the lexicographic ordering solution consists of several steps and proceeds as follows. First, we calculate

θ1=3,\theta_{1}=3,

and then obtain the matrices

𝑩1=(1/32/314/31/61/312/31/91/91/311/121/611/3),𝑩1=(12/34/34/31/61111/91/6111/91/611).\bm{B}_{1}=\left(\begin{array}[]{cccc}1/3&2/3&1&4/3\\ 1/6&1/3&1&2/3\\ 1/9&1/9&1/3&1\\ 1/12&1/6&1&1/3\end{array}\right),\qquad\bm{B}_{1}^{\ast}=\left(\begin{array}[]{cccc}1&2/3&4/3&4/3\\ 1/6&1&1&1\\ 1/9&1/6&1&1\\ 1/9&1/6&1&1\end{array}\right).

To check whether the matrix B1B_{1}^{\ast} generates a nonunique solution, we find the corresponding best and worst differentiating solution vectors

𝒙1best=(11/61/91/9),𝒙1worst=(113/43/4).\bm{x}_{1}^{\textup{best}}=\left(\begin{array}[]{c}1\\ 1/6\\ 1/9\\ 1/9\end{array}\right),\qquad\bm{x}_{1}^{\textup{worst}}=\left(\begin{array}[]{c}1\\ 1\\ 3/4\\ 3/4\end{array}\right).

Since these vectors are different, we continue to the next step to evaluate

θ2=2.\theta_{2}=2.

Further calculations lead to the matrices

𝑩2=(1/213/221/41/213/21/61/41/211/81/611/2),𝑩2=(11221/413/23/21/61/4111/61/411).\bm{B}_{2}=\left(\begin{array}[]{cccc}1/2&1&3/2&2\\ 1/4&1/2&1&3/2\\ 1/6&1/4&1/2&1\\ 1/8&1/6&1&1/2\end{array}\right),\qquad\bm{B}_{2}^{\ast}=\left(\begin{array}[]{cccc}1&1&2&2\\ 1/4&1&3/2&3/2\\ 1/6&1/4&1&1\\ 1/6&1/4&1&1\end{array}\right).

The best and worst differentiating solution vectors given by 𝑩2\bm{B}_{2}^{\ast} are

𝒙2best=(11/41/61/6),𝒙2worst=(111/21/2),\bm{x}_{2}^{\textup{best}}=\left(\begin{array}[]{c}1\\ 1/4\\ 1/6\\ 1/6\end{array}\right),\qquad\bm{x}_{2}^{\textup{worst}}=\left(\begin{array}[]{c}1\\ 1\\ 1/2\\ 1/2\end{array}\right),

which shows that these vectors do not coincide.

Because the solution is not unique, we perform the next step and calculate

θ3=61/31.8171.\theta_{3}=6^{1/3}\approx 1.8171.

Furthermore, we use the above result to obtain the matrices

𝑩3=(1/θ33/θ33/221/41/θ32/θ34/θ31/2θ31/2θ31/θ311/3θ31/611/θ3),𝑩3=(13/θ32θ32θ3θ3/314/θ34/θ31/2θ3θ3/4111/2θ3θ3/411).\bm{B}_{3}=\left(\begin{array}[]{cccc}1/\theta_{3}&3/\theta_{3}&3/2&2\\ 1/4&1/\theta_{3}&2/\theta_{3}&4/\theta_{3}\\ 1/2\theta_{3}&1/2\theta_{3}&1/\theta_{3}&1\\ 1/3\theta_{3}&1/6&1&1/\theta_{3}\end{array}\right),\quad\bm{B}_{3}^{\ast}=\left(\begin{array}[]{cccc}1&3/\theta_{3}&2\theta_{3}&2\theta_{3}\\ \theta_{3}/3&1&4/\theta_{3}&4/\theta_{3}\\ 1/2\theta_{3}&\theta_{3}/4&1&1\\ 1/2\theta_{3}&\theta_{3}/4&1&1\end{array}\right).

After evaluation of the best and worst solutions from 𝑩3\bm{B}_{3}^{\ast}, we have

𝒙3best=(1θ3/31/2θ31/2θ3),𝒙3worst=(1θ3/31/2θ31/2θ3).\bm{x}_{3}^{\textup{best}}=\left(\begin{array}[]{c}1\\ \theta_{3}/3\\ 1/2\theta_{3}\\ 1/2\theta_{3}\end{array}\right),\qquad\bm{x}_{3}^{\textup{worst}}=\left(\begin{array}[]{c}1\\ \theta_{3}/3\\ 1/2\theta_{3}\\ 1/2\theta_{3}\end{array}\right).

Since both solutions coincide, we complete the procedure. As the unique final solution, we take the vector

𝒙=(1θ3/31/2θ31/2θ3)(1.00000.60570.27520.2752).\bm{x}=\left(\begin{array}[]{c}1\\ \theta_{3}/3\\ 1/2\theta_{3}\\ 1/2\theta_{3}\end{array}\right)\approx\left(\begin{array}[]{c}1.0000\\ 0.6057\\ 0.2752\\ 0.2752\end{array}\right).

7.3 Lexicographic Max-Ordering Solution

At the first step of this solution, we set I0={1,2,3,4}I_{0}=\{1,2,3,4\} and then obtain

𝑨1=(13341/21341/221211/231),θ1=3.\bm{A}_{1}=\left(\begin{array}[]{cccc}1&3&3&4\\ 1/2&1&3&4\\ 1/2&2&1&2\\ 1&1/2&3&1\end{array}\right),\qquad\theta_{1}=3.

Furthermore, as in the max-ordering solution, we calculate the matrices

𝑩1=(1/3114/31/61/314/31/62/31/311/31/611/3),𝑩1=(114/34/34/914/34/31/32/3111/32/311),\bm{B}_{1}=\left(\begin{array}[]{cccc}1/3&1&1&4/3\\ 1/6&1/3&1&4/3\\ 1/6&2/3&1/3&1\\ 1/3&1/6&1&1/3\end{array}\right),\qquad\bm{B}_{1}^{\ast}=\left(\begin{array}[]{cccc}1&1&4/3&4/3\\ 4/9&1&4/3&4/3\\ 1/3&2/3&1&1\\ 1/3&2/3&1&1\end{array}\right),

and then obtain the best and worst differentiating vectors

𝒙1best=(14/91/31/3),𝒙1worst=(113/43/4).\bm{x}_{1}^{\textup{best}}=\left(\begin{array}[]{c}1\\ 4/9\\ 1/3\\ 1/3\end{array}\right),\qquad\bm{x}_{1}^{\textup{worst}}=\left(\begin{array}[]{c}1\\ 1\\ 3/4\\ 3/4\\ \end{array}\right).

We observe that these vectors do not coincide and further calculate

θ11=3,θ12=2,θ13=8/3,θ14=8/3.\theta_{11}=3,\qquad\theta_{12}=2,\qquad\theta_{13}=8/3,\qquad\theta_{14}=8/3.

Since θ12,θ13,θ14<θ1\theta_{12},\theta_{13},\theta_{14}<\theta_{1}, we have the index set I1={2,3,4}I_{1}=\{2,3,4\}.

We turn to the next step and use the set I1I_{1} to calculate

𝑨2=(13341/21241/221211/311),θ2=81/22.8284.\bm{A}_{2}=\left(\begin{array}[]{cccc}1&3&3&4\\ 1/2&1&2&4\\ 1/2&2&1&2\\ 1&1/3&1&1\end{array}\right),\qquad\theta_{2}=8^{1/2}\approx 2.8284.

To describe the solution set for this step, we form the matrices

𝑩2=(1/θ23/θ23/θ24/θ21/2θ21/θ214/θ21/2θ22/θ21/θ211/θ21/611/θ2),𝑩2=(13/θ23/23/21/214/θ24/θ21/θ22/θ2111/θ22/θ211)\bm{B}_{2}=\left(\begin{array}[]{cccc}1/\theta_{2}&3/\theta_{2}&3/\theta_{2}&4/\theta_{2}\\ 1/2\theta_{2}&1/\theta_{2}&1&4/\theta_{2}\\ 1/2\theta_{2}&2/\theta_{2}&1/\theta_{2}&1\\ 1/\theta_{2}&1/6&1&1/\theta_{2}\end{array}\right),\quad\bm{B}_{2}^{\ast}=\left(\begin{array}[]{cccc}1&3/\theta_{2}&3/2&3/2\\ 1/2&1&4/\theta_{2}&4/\theta_{2}\\ 1/\theta_{2}&2/\theta_{2}&1&1\\ 1/\theta_{2}&2/\theta_{2}&1&1\end{array}\right)

and then find the vectors

𝒙2best=(11/21/θ21/θ2),𝒙2worst=(1θ2/32/32/3).\bm{x}_{2}^{\textup{best}}=\left(\begin{array}[]{c}1\\ 1/2\\ 1/\theta_{2}\\ 1/\theta_{2}\end{array}\right),\qquad\bm{x}_{2}^{\textup{worst}}=\left(\begin{array}[]{c}1\\ \theta_{2}/3\\ 2/3\\ 2/3\end{array}\right).

Next, we calculate the minimums

θ22=381/2/4,θ23=81/2,θ24=81/2,\theta_{22}=3\cdot 8^{1/2}/4,\qquad\theta_{23}=8^{1/2},\qquad\theta_{24}=8^{1/2},

which leads to the index set I2={2}I_{2}=\{2\}.

According to the set I2I_{2}, we define

𝑨3=(12341/21231/31/2121/41/31/21),θ3=381/2/42.1213.\bm{A}_{3}=\left(\begin{array}[]{cccc}1&2&3&4\\ 1/2&1&2&3\\ 1/3&1/2&1&2\\ 1/4&1/3&1/2&1\end{array}\right),\qquad\theta_{3}=3\cdot 8^{1/2}/4\approx 2.1213.

Furthermore, we calculate the matrices

𝑩3=(1/θ3θ3/24/θ34/θ3θ3/91/θ313/θ3θ3/12θ3/31/θ31θ3/61/611/θ3),𝑩3=(14/3θ3/24/θ31/213/θ33/θ3θ3/6θ3/311θ3/6θ3/311).\bm{B}_{3}=\left(\begin{array}[]{cccc}1/\theta_{3}&\theta_{3}/2&4/\theta_{3}&4/\theta_{3}\\ \theta_{3}/9&1/\theta_{3}&1&3/\theta_{3}\\ \theta_{3}/12&\theta_{3}/3&1/\theta_{3}&1\\ \theta_{3}/6&1/6&1&1/\theta_{3}\end{array}\right),\quad\bm{B}_{3}^{\ast}=\left(\begin{array}[]{cccc}1&4/3&\theta_{3}/2&4/\theta_{3}\\ 1/2&1&3/\theta_{3}&3/\theta_{3}\\ \theta_{3}/6&\theta_{3}/3&1&1\\ \theta_{3}/6&\theta_{3}/3&1&1\end{array}\right).

The best and worst differentiating vectors are given by

𝒙3best=(11/2θ3/6θ3/6)(1.00000.50000.35360.3536),𝒙3worst=(13/4θ3/4θ3/4)(1.00000.75000.53030.5303).\bm{x}_{3}^{\textup{best}}=\left(\begin{array}[]{c}1\\ 1/2\\ \theta_{3}/6\\ \theta_{3}/6\end{array}\right)\approx\left(\begin{array}[]{c}1.0000\\ 0.5000\\ 0.3536\\ 0.3536\end{array}\right),\qquad\bm{x}_{3}^{\textup{worst}}=\left(\begin{array}[]{c}1\\ 3/4\\ \theta_{3}/4\\ \theta_{3}/4\end{array}\right)\approx\left(\begin{array}[]{c}1.0000\\ 0.7500\\ 0.5303\\ 0.5303\end{array}\right).

Since at this step we have I3=I_{3}=\emptyset, the procedure terminates. We can couple both obtained vectors into one vector with interval entries as

𝒙(1.00000.5000 0.75000.3536 0.53030.3536 0.5303).\bm{x}\approx\left(\begin{array}[]{c}1.0000\\ 0.5000\ \ldots\ 0.7500\\ 0.3536\ \ldots\ 0.5303\\ 0.3536\ \ldots\ 0.5303\end{array}\right).

To conclude we observe that all solutions result in the same preference order of alternatives in the form (1)(2)(3)(4)(1)\succ(2)\succ(3)\equiv(4), whereas the individual ratings of alternatives may differ. We note that this order is in agreement with the above conclusions based on straightforward analysis of pairwise comparison matrices, and satisfies the constraint imposed on the ratings.

8 Solution to Vacation Plan Problem

As another example which offers a potential to consider the proposed approach in line with existing methods, we solve the problem of selecting a plan for vacation from [47]. The aim is to choose a destination for vacation trip from Philadelphia among the alternatives 𝐒\mathbf{S}: short trips (i.e., New York, Washington, Atlantic City, New Hope, etc.), 𝐐\mathbf{Q}: Quebec, 𝐃\mathbf{D}: Denver, 𝐂\mathbf{C}: California. These places are evaluated in terms of the following criteria: (1) cost of the trip from Philadelphia, (2) sight-seeing opportunities, (3) entertainment (doing things), (4) way of travel, (5) eating places.

The results of pairwise comparison of criteria are given by the matrix

𝑪0=(11/51/511/3511/51/515511/51155153111/51).\bm{C}_{0}=\left(\begin{array}[]{ccccc}1&1/5&1/5&1&1/3\\ 5&1&1/5&1/5&1\\ 5&5&1&1/5&1\\ 1&5&5&1&5\\ 3&1&1&1/5&1\end{array}\right).

The pairwise comparison matrices of places with respect to the criteria are as follows:

𝑪1=(13791/31671/71/6131/91/71/31),𝑪2=(11/51/61/4512461/21641/41/61),\displaystyle\bm{C}_{1}=\left(\begin{array}[]{cccc}1&3&7&9\\ 1/3&1&6&7\\ 1/7&1/6&1&3\\ 1/9&1/7&1/3&1\end{array}\right),\qquad\bm{C}_{2}=\left(\begin{array}[]{cccc}1&1/5&1/6&1/4\\ 5&1&2&4\\ 6&1/2&1&6\\ 4&1/4&1/6&1\end{array}\right),
𝑪3=(1771/21/7111/71/7111/72771),𝑪4=(141/41/31/411/23421331/31/31),\displaystyle\bm{C}_{3}=\left(\begin{array}[]{cccc}1&7&7&1/2\\ 1/7&1&1&1/7\\ 1/7&1&1&1/7\\ 2&7&7&1\end{array}\right),\qquad\bm{C}_{4}=\left(\begin{array}[]{cccc}1&4&1/4&1/3\\ 1/4&1&1/2&3\\ 4&2&1&3\\ 3&1/3&1/3&1\end{array}\right),
𝑪5=(117411631/71/611/41/41/341).\displaystyle\bm{C}_{5}=\left(\begin{array}[]{cccc}1&1&7&4\\ 1&1&6&3\\ 1/7&1/6&1&1/4\\ 1/4&1/3&4&1\end{array}\right).

To solve the problem, the analytical hierarchy process method is applied in [47], which provides the following order of alternatives: 𝐒𝐃𝐂𝐐\mathbf{S}\succ\mathbf{D}\succ\mathbf{C}\succ\mathbf{Q}. Another solution based on the log-Chebyshev approximation in [34] yields a different order 𝐂𝐒𝐃𝐐\mathbf{C}\succeq\mathbf{S}\succ\mathbf{D}\succeq\mathbf{Q}. Below, we show how the new approach can be used to obtain solutions according to various principles of optimality.

8.1 Max-Ordering Solution

We start with the max-ordering solution given by Theorem 9. Observing that 𝑩0=𝑩=𝟎\bm{B}_{0}=\bm{B}=\bm{0}, we calculate

𝑨=(1779516762164771),θ=3141/37.2304,\bm{A}=\left(\begin{array}[]{cccc}1&7&7&9\\ 5&1&6&7\\ 6&2&1&6\\ 4&7&7&1\end{array}\right),\qquad\theta=3\cdot 14^{1/3}\approx 7.2304,

Next, we form the matrices

𝑩1=θ1(1779516762164771),𝑮=𝑩1=(1θ/6θ/69/θ7/917θ/547/θ6/θ11θ/7θ/97/θ7/θ1).\bm{B}_{1}=\theta^{-1}\left(\begin{array}[]{cccc}1&7&7&9\\ 5&1&6&7\\ 6&2&1&6\\ 4&7&7&1\end{array}\right),\qquad\bm{G}=\bm{B}_{1}^{\ast}=\left(\begin{array}[]{cccc}1&\theta/6&\theta/6&9/\theta\\ 7/9&1&7\theta/54&7/\theta\\ 6/\theta&1&1&\theta/7\\ \theta/9&7/\theta&7/\theta&1\end{array}\right).

The minimal normalized best differentiating and maximal normalized worst differentiating solutions obtained from 𝑮\bm{G} takes the form

𝒙best=(17/96/θθ/9)(1.00000.77780.82980.8034),𝒙worst=(16/θ6/θθ/9)(1.00000.82980.82980.8034),\bm{x}^{\textup{best}}=\left(\begin{array}[]{c}1\\ 7/9\\ 6/\theta\\ \theta/9\end{array}\right)\approx\left(\begin{array}[]{c}1.0000\\ 0.7778\\ 0.8298\\ 0.8034\\ \end{array}\right),\qquad\bm{x}^{\textup{worst}}=\left(\begin{array}[]{c}1\\ 6/\theta\\ 6/\theta\\ \theta/9\end{array}\right)\approx\left(\begin{array}[]{c}1.0000\\ 0.8298\\ 0.8298\\ 0.8034\end{array}\right),

and specify the respective orders 𝐒𝐃𝐂𝐐\mathbf{S}\succ\mathbf{D}\succ\mathbf{C}\succ\mathbf{Q} and 𝐒𝐃𝐐𝐂\mathbf{S}\succ\mathbf{D}\equiv\mathbf{Q}\succ\mathbf{C}.

Both solutions can be written as the vector

𝒙(1.00000.7778 0.82980.82980.8034).\bm{x}\approx\left(\begin{array}[]{c}1.0000\\ 0.7778\ \ldots\ 0.8298\\ 0.8298\\ 0.8034\end{array}\right).

The obtained solutions result in the order 𝐒𝐃𝐂𝐐\mathbf{S}\succ\mathbf{D}\succeq\mathbf{C}\parallel\mathbf{Q}, where the alternatives 𝐂\mathbf{C} and 𝐐\mathbf{Q} cannot be uniquely ordered.

8.2 Lexicographic Ordering Solution

In order to obtain a lexicographic ordering solution, we first need to rank the criteria from the pairwise comparison matrix 𝑪0\bm{C}_{0}. By comparing the rows of the matrix 𝑪0\bm{C}_{0}, one can expect that the most important is criterion 44 and the second is 33. Next come criteria 22 and 55, and finally 11 as the least important criterion. These considerations are confirmed by the result in [47], where a priority vector is calculated from 𝑪0\bm{C}_{0} by using the principal eigenvector method, which puts the criteria into the order 4,3,2,5,14,3,2,5,1.

To simplify the application of the lexicographic ordering based solutions in what follows, we renumber the criteria in the problem according to their importance. Specifically, we assign the number 11 one to criterion 44, the number 22 to criterion 33 and so on. We use the new numbers of criteria to rename the pairwise comparison matrices for alternatives as follows:

𝑪1=(141/41/31/411/23421331/31/31),𝑪2=(1771/21/7111/71/7111/72771),\displaystyle\bm{C}_{1}=\left(\begin{array}[]{cccc}1&4&1/4&1/3\\ 1/4&1&1/2&3\\ 4&2&1&3\\ 3&1/3&1/3&1\end{array}\right),\qquad\bm{C}_{2}=\left(\begin{array}[]{cccc}1&7&7&1/2\\ 1/7&1&1&1/7\\ 1/7&1&1&1/7\\ 2&7&7&1\end{array}\right),
𝑪3=(11/51/61/4512461/21641/41/61),𝑪4=(117411631/71/611/41/41/341),\displaystyle\bm{C}_{3}=\left(\begin{array}[]{cccc}1&1/5&1/6&1/4\\ 5&1&2&4\\ 6&1/2&1&6\\ 4&1/4&1/6&1\end{array}\right),\qquad\bm{C}_{4}=\left(\begin{array}[]{cccc}1&1&7&4\\ 1&1&6&3\\ 1/7&1/6&1&1/4\\ 1/4&1/3&4&1\end{array}\right),
𝑪5=(13791/31671/71/6131/91/71/31).\displaystyle\bm{C}_{5}=\left(\begin{array}[]{cccc}1&3&7&9\\ 1/3&1&6&7\\ 1/7&1/6&1&3\\ 1/9&1/7&1/3&1\end{array}\right).

We now solve the problem by applying Theorem 10. We first calculate

θ1=361/33.3019,\theta_{1}=36^{1/3}\approx 3.3019,

and then construct the matrices

𝑩1=θ11(141/41/31/411/23421331/31/31),𝑩1=(14/θ1θ1/18θ1/3θ1/411/2θ13/θ14/θ14θ1/914/33/θ1θ1/31/61).\bm{B}_{1}=\theta_{1}^{-1}\left(\begin{array}[]{cccc}1&4&1/4&1/3\\ 1/4&1&1/2&3\\ 4&2&1&3\\ 3&1/3&1/3&1\end{array}\right),\qquad\bm{B}_{1}^{\ast}=\left(\begin{array}[]{cccc}1&4/\theta_{1}&\theta_{1}/18&\theta_{1}/3\\ \theta_{1}/4&1&1/2\theta_{1}&3/\theta_{1}\\ 4/\theta_{1}&4\theta_{1}/9&1&4/3\\ 3/\theta_{1}&\theta_{1}/3&1/6&1\end{array}\right).

Calculation of the best and worst differentiating solutions from 𝑩1\bm{B}_{1}^{\ast} yields

𝒙1best=(θ1/181/2θ111/6),𝒙1worst=(θ1/49/4θ113/4).\bm{x}_{1}^{\textup{best}}=\left(\begin{array}[]{c}\theta_{1}/18\\ 1/2\theta_{1}\\ 1\\ 1/6\end{array}\right),\qquad\bm{x}_{1}^{\textup{worst}}=\left(\begin{array}[]{c}\theta_{1}/4\\ 9/4\theta_{1}\\ 1\\ 3/4\end{array}\right).

Observing that the obtained vectors do not coincide, we further evaluate

θ2=28/39.3333,\theta_{2}=28/3\approx 9.3333,

and find the matrices

𝑩2=(1/θ14/θ13/41/3θ11/4θ11/θ11/2θ13/θ14/θ12/θ11/θ13/θ13/θ13/43/41/θ1),𝑩2=(14/θ1θ1/4θ1/3θ1/419/4θ13/θ14/θ14θ1/914/33/θ1θ1/33/41).\bm{B}_{2}=\left(\begin{array}[]{cccc}1/\theta_{1}&4/\theta_{1}&3/4&1/3\theta_{1}\\ 1/4\theta_{1}&1/\theta_{1}&1/2\theta_{1}&3/\theta_{1}\\ 4/\theta_{1}&2/\theta_{1}&1/\theta_{1}&3/\theta_{1}\\ 3/\theta_{1}&3/4&3/4&1/\theta_{1}\end{array}\right),\qquad\bm{B}_{2}^{\ast}=\left(\begin{array}[]{cccc}1&4/\theta_{1}&\theta_{1}/4&\theta_{1}/3\\ \theta_{1}/4&1&9/4\theta_{1}&3/\theta_{1}\\ 4/\theta_{1}&4\theta_{1}/9&1&4/3\\ 3/\theta_{1}&\theta_{1}/3&3/4&1\end{array}\right).

Further calculation shows that the best and worst differentiating solutions given by 𝑩2\bm{B}_{2}^{\ast} coincide and thus provide a unique solution to the problem as

𝒙=𝒙2best=𝒙2worst=(θ1/49/4θ113/4)(0.82550.68141.00000.7500).\bm{x}=\bm{x}_{2}^{\textup{best}}=\bm{x}_{2}^{\textup{worst}}=\left(\begin{array}[]{c}\theta_{1}/4\\ 9/4\theta_{1}\\ 1\\ 3/4\\ \end{array}\right)\approx\left(\begin{array}[]{c}0.8255\\ 0.6814\\ 1.0000\\ 0.7500\end{array}\right).

The obtained solution gives the order 𝐃𝐒𝐂𝐐\mathbf{D}\succ\mathbf{S}\succ\mathbf{C}\succ\mathbf{Q}.

8.3 Lexicographic Max-Ordering Solution

We now apply Theorem 11 to obtain the lexicographic max-ordering solution. We set I0={1,2,3,4}I_{0}=\{1,2,3,4\} and calculate

𝑨1=(1779516762164771),θ1=3141/37.2304.\bm{A}_{1}=\left(\begin{array}[]{cccc}1&7&7&9\\ 5&1&6&7\\ 6&2&1&6\\ 4&7&7&1\end{array}\right),\qquad\theta_{1}=3\cdot 14^{1/3}\approx 7.2304.

Furthermore, we obtain the matrices

𝑩1=θ1(1779516762164771),𝑩1=(1θ/6θ/69/θ7/917θ/547/θ6/θ11θ/7θ/97/θ7/θ1).\bm{B}_{1}=\theta^{-1}\left(\begin{array}[]{cccc}1&7&7&9\\ 5&1&6&7\\ 6&2&1&6\\ 4&7&7&1\end{array}\right),\qquad\bm{B}_{1}^{\ast}=\left(\begin{array}[]{cccc}1&\theta/6&\theta/6&9/\theta\\ 7/9&1&7\theta/54&7/\theta\\ 6/\theta&1&1&\theta/7\\ \theta/9&7/\theta&7/\theta&1\end{array}\right).

The best and worst differentiating solutions are given by the vectors

𝒙best=(17/96/θθ/9)(1.00000.77780.82980.8034),𝒙worst=(16/θ6/θθ/9)(1.00000.82980.82980.8034).\bm{x}^{\textup{best}}=\left(\begin{array}[]{c}1\\ 7/9\\ 6/\theta\\ \theta/9\end{array}\right)\approx\left(\begin{array}[]{c}1.0000\\ 0.7778\\ 0.8298\\ 0.8034\\ \end{array}\right),\qquad\bm{x}^{\textup{worst}}=\left(\begin{array}[]{c}1\\ 6/\theta\\ 6/\theta\\ \theta/9\end{array}\right)\approx\left(\begin{array}[]{c}1.0000\\ 0.8298\\ 0.8298\\ 0.8034\end{array}\right).

Considering that these vectors do not coincide, we further calculate

θ11=2θ1/3,θ12=θ1,θ13=θ1,θ14=6,θ15=θ1.\theta_{11}=2\theta_{1}/3,\qquad\theta_{12}=\theta_{1},\qquad\theta_{13}=\theta_{1},\qquad\theta_{14}=6,\qquad\theta_{15}=\theta_{1}.

We observe that θ11,θ14<θ1\theta_{11},\theta_{14}<\theta_{1} and take the set I1={1,4}I_{1}=\{1,4\} to calculate

𝑨2=(14741163421331/341),θ2=6.\bm{A}_{2}=\left(\begin{array}[]{cccc}1&4&7&4\\ 1&1&6&3\\ 4&2&1&3\\ 3&1/3&4&1\end{array}\right),\qquad\theta_{2}=6.

Further calculations yield the matrices

𝑩2=(1/67/θ17/69/θ15/θ11/617/θ16/θ11/31/66/θ14/θ17/θ17/θ11/6),𝑩2=(1θ1/6θ1/69/θ16/θ111θ1/76/θ111θ1/7θ1/97/θ17/θ11).\bm{B}_{2}=\left(\begin{array}[]{cccc}1/6&7/\theta_{1}&7/6&9/\theta_{1}\\ 5/\theta_{1}&1/6&1&7/\theta_{1}\\ 6/\theta_{1}&1/3&1/6&6/\theta_{1}\\ 4/\theta_{1}&7/\theta_{1}&7/\theta_{1}&1/6\end{array}\right),\quad\bm{B}_{2}^{\ast}=\left(\begin{array}[]{cccc}1&\theta_{1}/6&\theta_{1}/6&9/\theta_{1}\\ 6/\theta_{1}&1&1&\theta_{1}/7\\ 6/\theta_{1}&1&1&\theta_{1}/7\\ \theta_{1}/9&7/\theta_{1}&7/\theta_{1}&1\end{array}\right).

Evaluation of the best and worst differentiating solutions yields the vector

𝒙=𝒙2best=𝒙2worst=(16/θ16/θ1θ1/9)(1.00000.82980.82980.8034),\bm{x}=\bm{x}_{2}^{\textup{best}}=\bm{x}_{2}^{\textup{worst}}=\left(\begin{array}[]{c}1\\ 6/\theta_{1}\\ 6/\theta_{1}\\ \theta_{1}/9\end{array}\right)\approx\left(\begin{array}[]{c}1.0000\\ 0.8298\\ 0.8298\\ 0.8034\end{array}\right),

which places the alternatives in the order 𝐒𝐃𝐐𝐂\mathbf{S}\succ\mathbf{D}\equiv\mathbf{Q}\succ\mathbf{C}.

A comparison of the results obtained shows that the solutions obtained according to the max-ordering and lexicographic max-ordering principles of optimality and the solution obtained by the analytical hierarchy process method in [47] rank the alternative 𝐒\mathbf{S} higher than the others. A different result provided by lexicographic ordering is explained by the fact that this technique can overestimate the contribution of pairwise comparisons made according to the most important criterion, and thereby provide little or no consideration of the other comparisons. Finally, we note that the order obtained by the analytical hierarchy process method and the order given by the best differentiating max-ordering solution completely coincide.

9 Conclusion

In this paper, we have considered a decision-making problem of rating alternatives, based on pairwise comparisons under multiple criteria, subject to constraints imposed on ratings. The problem was reduced to constrained log-Chebyshev approximation of pairwise comparison matrices by a common consistent matrix that directly determines the vector of ratings. We have represented the approximation problem as a constrained multiobjective optimization problem in terms of tropical algebra where the addition is defined as maximum. We have applied methods and results of tropical optimization to solve the multiobjective problem according to the max-ordering, lexicographic ordering and lexicographic max-ordering principles of optimality. The proposed technique was illustrated with numerical solutions of some pairwise comparison problems, including the problem of selecting a plan for vacation from [47].

The obtained results show that the application of the log-Chebyshev approximation in combination with tropical algebra allows one to obtain analytical solutions ready for both formal analysis and straightforward computations. Although the new technique to solve pairwise comparison problems in general involves more computational effort than the existing principal eigenvector and geometric mean methods, the computational complexity of this technique remains of a moderate degree and does not exceed O(m2n5)O(m^{2}n^{5}) where mm is the number of criteria and nn is the number of alternatives. At the same time, it offers an advantage over the above two methods that cannot handle constraints in a direct and efficient way, and thus do not provide easy implementation of the optimality principles which involve solving constrained optimization problems.

The application of the log-Chebyshev approximation can yield multiple solutions to the pairwise comparison problem. Although the multiple solutions seem to correspond well to the approximate nature of the model of pairwise comparisons, which in most cases is poorly consistent with the data, the existence of many optimal solutions can make it difficult to find an appropriate decision. To overcome this possible drawback, we have developed a new technique that reduces the multiple solutions to the “best” and “worst” solutions that can be reasonably representative of the entire solution set and hence simplify the identification and selection of the right decision.

The technique examines the solution vectors of ratings, which are normalized to have the maximum value of 11. As the best solution, a minimal vector is taken that maximizes the ratio between the maximum and minimum components (the highest and lowest ratings). The worst solution is the maximal vector that minimizes the ratio between the maximum and minimum components. While the worst solution is always unique, there may be more than one best solution, which can be considered as a limitation of the technique. Note however that multiple best solutions (the number of which cannot exceed nn either) should be explained as a result of unstable pairwise comparison data and hence indicate unreliable input information, which is difficult to interpret unambiguously.

The main contribution of the study is twofold. First, we have formulated and investigated new constrained multicriteria decision problems that model complex decision-making processes, but received no attention in the literature. We have proposed an approach based on the log-Chebyshev approximation, which in contrast to other methods in decision making makes it possible to obtain analytical solutions to the problems of interest. Moreover, new formal methods and real-world applications developed to extend the mathematical instruments and broaden the application scope of tropical optimization, present another significant outcome.

Practical implications of the study are that it suggests new practically applicable solution techniques to supplement and complement existing methods in the situation when these methods are known to produce different and even opposite results. In the case of conflicting results, the new technique can serve to obtain additional arguments in favor or against solutions offered by other methods, and thereby help make more accurate decisions. As another practical consequence, one can consider the ability to solve real-world multicriteria decision problems with constraints, where weights of criteria are not specified or the criteria are differentiated only by ranks.

Possible lines of further investigation include extension of the results to solve multicriteria problems under different types of constraints and principles of optimality. The development of interval and fuzzy implementations of the proposed solutions is another area of interest for future research.

Acknowledgments

The author is very grateful to two anonymous reviewers for their valuable comments and suggestions, which have been incorporated in the revised manuscript.

References

  • [1] J. Barzilai, W. Cook, and B. Golany. Consistent weights for judgements matrices of the relative importance of alternatives. Oper. Res. Lett., 6(3):131–134, 1987. doi:10.1016/0167-6377(87)90026-5.
  • [2] R. E. Bellman and L. A. Zadeh. Decision-making in a fuzzy environment. Management Sci., 17(4):B141–B164, 1970. doi:10.1287/mnsc.17.4.B141.
  • [3] V. Belton and T. Gear. On a short-coming of Saaty’s method of analytic hierarchies. Omega, 11(3):228–230, 1983. doi:10.1016/0305-0483(83)90047-6.
  • [4] H. P. Benson. Multi-objective optimization: Pareto optimal solutions, properties. In C. A. Floudas and P. M. Pardalos, editors, Encyclopedia of Optimization, pages 2478–2481. Springer, 2 edition, 2009. doi:10.1007/978-0-387-74759-0_426.
  • [5] J. J. Buckley. Fuzzy hierarchical analysis. Fuzzy Sets and Systems, 17(3):233–247, 1985. doi:10.1016/0165-0114(85)90090-9.
  • [6] J. J. Buckley. Multiobjective possibilistic linear programming. Fuzzy Sets and Systems, 35(1):23–28, 1990. doi:10.1016/0165-0114(90)90015-X.
  • [7] C. Carlsson and R. Fullér. Possibility for Decision, volume 270 of Studies in Fuzziness and Soft Computing. Springer, Berlin, 2011. doi:10.1007/978-3-642-22642-7.
  • [8] E. U. Choo and W. C. Wedley. A common framework for deriving preference values from pairwise comparison matrices. Comput. Oper. Res., 31(6):893–908, 2004. doi:10.1016/S0305-0548(03)00042-X.
  • [9] G. Crawford and C. Williams. A note on the analysis of subjective judgment matrices. J. Math. Psych., 29(4):387–405, 1985. doi:10.1016/0022-2496(85)90002-1.
  • [10] D. Dubois and P. Fortemps. Computing improved optimal solutions to max-min flexible constraint satisfaction problems. European J. Oper. Res., 118(1):95–126, 1999. doi:10.1016/S0377-2217(98)00307-5.
  • [11] D. J. Dubois and H. M. Prade. Fuzzy Sets and Systems, volume 144 of Mathematics in Science and Engineering. Academic Press, San Diego, 1980.
  • [12] M. Ehrgott. Multicriteria Optimization. Springer, Berlin, 2 edition, 2005. doi:10.1007/3-540-27659-9.
  • [13] W. F. A. El-Wahed. Intelligent fuzzy multi-criteria decision making: Review and analysis. In C. Kahraman, editor, Fuzzy Multi-Criteria Decision Making: Theory and Applications with Recent Developments, pages 19–50. Springer, Boston, MA, 2008. doi:10.1007/978-0-387-76813-7_2.
  • [14] L. Elsner and P. van den Driessche. Max-algebra and pairwise comparison matrices. Linear Algebra Appl., 385(1):47–62, 2004. doi:10.1016/S0024-3795(03)00476-2.
  • [15] L. Elsner and P. van den Driessche. Max-algebra and pairwise comparison matrices, II. Linear Algebra Appl., 432(4):927–935, 2010. doi:10.1016/j.laa.2009.10.005.
  • [16] M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, and K. Zimmermann. Linear Optimization Problems with Inexact Data. Springer, Boston, MA, 2006. doi:10.1007/0-387-32698-7.
  • [17] M. Gavalec, Z. Němcová, and S. Sergeev. Tropical linear algebra with the Łukasiewicz T-norm. Fuzzy Sets and Systems, 276:131–148, 2015. doi:10.1016/j.fss.2014.11.008.
  • [18] M. Gavalec, J. Ramík, and K. Zimmermann. Decision Making and Optimization, volume 677 of Lecture Notes in Economics and Mathematical Systems. Springer, Cham, 2015. doi:10.1007/978-3-319-08323-0.
  • [19] J. S. Golan. Semirings and Affine Equations Over Them, volume 556 of Mathematics and Its Applications. Kluwer Acad. Publ., Dordrecht, 2003. doi:10.1007/978-94-017-0383-3.
  • [20] M. Gondran and M. Minoux. Graphs, Dioids and Semirings, volume 41 of Operations Research/ Computer Science Interfaces. Springer, New York, 2008. doi:10.1007/978-0-387-75450-5.
  • [21] H. Goto and S. Wang. Polyad inconsistency measure for pairwise comparisons matrices: Max-plus algebraic approach. Oper. Res. Int. J., 22(1):401–422, 2022. doi:10.1007/s12351-020-00547-9.
  • [22] B. B. Gursoy, O. Mason, and S. Sergeev. The analytic hierarchy process, max algebra and multi-objective optimisation. Linear Algebra Appl., 438(7):2911–2928, 2013. doi:10.1016/j.laa.2012.11.020.
  • [23] B. Heidergott, G. J. Olsder, and J. van der Woude. Max Plus at Work. Princeton Series in Applied Mathematics. Princeton Univ. Press, Princeton, NJ, 2006.
  • [24] A. Ishizaka and M. Lusti. How to derive priorities in AHP: A comparative study. Cent. Eur. J. Oper. Res., 14(4):387–400, 2006. doi:10.1007/s10100-006-0012-9.
  • [25] J. Krejčí. Pairwise Comparison Matrices and their Fuzzy Extension. Studies in Fuzziness and Soft Computing. Springer, Cham, 2018. doi:10.1007/978-3-319-77715-3.
  • [26] N. Krivulin. Complete solution of a constrained tropical optimization problem with application to location analysis. In P. Höfner, P. Jipsen, W. Kahl, and M. E. Müller, editors, Relational and Algebraic Methods in Computer Science, volume 8428 of Lecture Notes in Comput. Sci., pages 362–378. Springer, Cham, 2014. arXiv:1311.2795, doi:10.1007/978-3-319-06251-8_22.
  • [27] N. Krivulin. Extremal properties of tropical eigenvalues and solutions to tropical optimization problems. Linear Algebra Appl., 468:211–232, 2015. arXiv:1311.0442, doi:10.1016/j.laa.2014.06.044.
  • [28] N. Krivulin. A multidimensional tropical optimization problem with a non-linear objective function and linear constraints. Optimization, 64(5):1107–1129, 2015. arXiv:1303.0542, doi:10.1080/02331934.2013.840624.
  • [29] N. Krivulin. Rating alternatives from pairwise comparisons by solving tropical optimization problems. In Z. Tang, J. Du, S. Yin, L. He, and R. Li, editors, 2015 12th Intern. Conf. on Fuzzy Systems and Knowledge Discovery (FSKD), pages 162–167. IEEE, 2015. arXiv:1504.00800, doi:10.1109/FSKD.2015.7381933.
  • [30] N. Krivulin. Using tropical optimization techniques to evaluate alternatives via pairwise comparisons. In A. H. Gebremedhin, E. G. Boman, and B. Ucar, editors, 2016 Proc. 7th SIAM Workshop on Combinatorial Scientific Computing, pages 62–72. SIAM, Philadelphia, PA, 2016. arXiv:1503.04003, doi:10.1137/1.9781611974690.ch7.
  • [31] N. Krivulin. Direct solution to constrained tropical optimization problems with application to project scheduling. Comput. Manag. Sci., 14(1):91–113, Jan 2017. arXiv:1501.07591, doi:10.1007/s10287-016-0259-0.
  • [32] N. Krivulin. Tropical optimization problems in time-constrained project scheduling. Optimization, 66(2):205–224, 2017. arXiv:1502.06222, doi:10.1080/02331934.2016.1264946.
  • [33] N. Krivulin. Algebraic solution to constrained bi-criteria decision problem of rating alternatives through pairwise comparisons. Mathematics, 9(4):303, 2021. arXiv:1911.09700, doi:10.3390/math9040303.
  • [34] N. Krivulin and S. Sergeev. Tropical implementation of the Analytical Hierarchy Process decision method. Fuzzy Sets and Systems, 377:31–51, 2019. arXiv:1802.01989, doi:10.1016/j.fss.2018.10.013.
  • [35] N. K. Krivulin and S. A. Gubanov. Algebraic solution of a problem of optimal project scheduling in project management. Vestnik St. Petersburg Univ. Math., 54(1):58–68, 2021. doi:10.1134/S1063454121010088.
  • [36] P. Li and S.-C. Fang. Chebyshev approximation of inconsistent fuzzy relational equations with max-T{T} composition. In W. A. Lodwick and J. Kacprzyk, editors, Fuzzy Optimization, volume 254 of Studies in Fuzziness and Soft Computing, pages 109–124. Springer, Berlin, 2010. doi:10.1007/978-3-642-13935-2_5.
  • [37] W. Liu and M. Liang. Multi-objective design optimization of reconfigurable machine tools: A modified fuzzy-Chebyshev programming approach. Int. J. Prod. Res., 46(6):1587–1618, 2008. doi:10.1080/00207540600943944.
  • [38] D. T. Luc. Pareto optimality. In A. Chinchuluun, P. M. Pardalos, A. Migdalas, and L. Pitsoulis, editors, Pareto Optimality, Game Theory and Equilibria, pages 481–515. Springer, New York, 2008. doi:10.1007/978-0-387-77247-9_18.
  • [39] M. Luhandjula. Fuzzy optimization: An appraisal. Fuzzy Sets and Systems, 30(3):257–282, 1989. doi:10.1016/0165-0114(89)90019-5.
  • [40] D. Maclagan and B. Sturmfels. Introduction to Tropical Geometry, volume 161 of Graduate Studies in Mathematics. AMS, Providence, RI, 2015. doi:10.1090/gsm/161.
  • [41] J. Mazurek, R. Perzina, J. Ramík, and D. Bartl. A numerical comparison of the sensitivity of the geometric mean method, eigenvalue method, and best–worst method. Mathematics, 9(5), 2021. doi:10.3390/math9050554.
  • [42] H. Nakayama, Y. Yun, and M. Yoon. Sequential Approximate Multiobjective Optimization Using Computational Intelligence. Vector Optimization. Springer, Berlin, 2009. doi:10.1007/978-3-540-88910-6.
  • [43] R. Narasimhan. A geometric averaging procedure for constructing supertransitive approximation to binary comparison matrices. Fuzzy Sets and Systems, 8(1):53–61, 1982. doi:10.1016/0165-0114(82)90029-X.
  • [44] A. D. Nola and C. Russo. The semiring-theoretic approach to MV-algebras: A survey. Fuzzy Sets and Systems, 281:134–154, 2015. Special Issue Celebrating the 50th Anniversary of Fuzzy Sets. doi:10.1016/j.fss.2015.08.026.
  • [45] R. Ramesh and S. Zionts. Multiple criteria decision making. In S. I. Gass and M. C. Fu, editors, Encyclopedia of Operations Research and Management Science, pages 1007–1013. Springer, Boston, MA, 2013. doi:10.1007/978-1-4419-1153-7_653.
  • [46] J. Ramík. Pairwise Comparisons Method, volume 690 of Lecture Notes in Economics and Mathematical Systems. Springer, Cham, 2020. doi:10.1007/978-3-030-39891-0.
  • [47] T. L. Saaty. A scaling method for priorities in hierarchical structures. J. Math. Psych., 15(3):234–281, 1977. doi:10.1016/0022-2496(77)90033-5.
  • [48] T. L. Saaty. The Analytic Hierarchy Process. RWS Publications, Pittsburgh, PA, 2 edition, 1990.
  • [49] T. L. Saaty. On the measurement of intangibles: A principal eigenvector approach to relative measurement derived from paired comparisons. Notices Amer. Math. Soc., 60(2):192–208, 2013. doi:10.1090/noti944.
  • [50] T. L. Saaty and L. G. Vargas. Comparison of eigenvalue, logarithmic least squares and least squares methods in estimating ratios. Math. Modelling, 5(5):309–324, 1984. doi:10.1016/0270-0255(84)90008-3.
  • [51] T. L. Saaty and L. G. Vargas. Uncertainty and rank order in the analytic hierarchy process. European J. Oper. Res., 32(1):107–117, 1987. doi:10.1016/0377-2217(87)90275-X.
  • [52] A. A. Salo and R. P. Hämäläinen. Preference programming through approximate ratio comparisons. European J. Oper. Res., 82(3):458–475, 1995. doi:10.1016/0377-2217(93)E0224-L.
  • [53] Y. Shitov. Factoring a band matrix over a semiring. Fuzzy Sets and Systems, 382:165–171, 2020. doi:10.1016/j.fss.2019.02.004.
  • [54] L. L. Thurstone. A law of comparative judgment. Psychol. Review, 34(4):273–286, 1927. doi:10.1037/h0070288.
  • [55] N. M. Tran. Pairwise ranking: Choice of method can produce arbitrarily different rank order. Linear Algebra Appl., 438(3):1012–1024, 2013. doi:10.1016/j.laa.2012.08.028.
  • [56] F. J. Valverde-Albacete and C. Peláez-Moreno. The spectra of irreducible matrices over completed idempotent semifields. Fuzzy Sets and Systems, 271:46–69, 2015. doi:10.1016/j.fss.2014.09.022.
  • [57] P. J. M. van Laarhoven and W. Pedrycz. A fuzzy extension of Saaty’s priority theory. Fuzzy Sets and Systems, 11(1):229–241, 1983. doi:10.1016/S0165-0114(83)80082-7.