Application of tropical optimization for solving multicriteria problems of pairwise comparisons using log-Chebyshev approximation
(Published in: Internat. J. Approx. Reason. 169, 109168 (2024))
Abstract
We consider a decision-making problem to find absolute ratings of alternatives that are compared in pairs under multiple criteria, subject to constraints in the form of two-sided bounds on ratios between the ratings. Given matrices of pairwise comparisons made according to the criteria, the problem is formulated as the log-Chebyshev approximation of these matrices by a common consistent matrix (a symmetrically reciprocal matrix of unit rank) to minimize the approximation errors for all matrices simultaneously. We rearrange the approximation problem as a constrained multiobjective optimization problem of finding a vector that determines the approximating consistent matrix. The problem is then represented in the framework of tropical algebra, which deals with the theory and applications of idempotent semirings and provides a formal basis for fuzzy and interval arithmetic. We apply methods and results of tropical optimization to develop a new approach for handling the multiobjective optimization problem according to various principles of optimality. New complete solutions in the sense of the max-ordering, lexicographic ordering and lexicographic max-ordering optimality are obtained, which are given in a compact vector form ready for formal analysis and efficient computation. We present numerical examples of solving multicriteria problems of rating four alternatives from pairwise comparisons to illustrate the technique and compare it with others.
Keywords: idempotent semifield, tropical optimization, log-Chebyshev approximation, constrained multiobjective optimization, multiple criteria evaluation.
MSC (2020): 15A80, 90C24, 41A50, 90C29, 90B50
1 Introduction
Decision-making problems that are encountered in real-world applications often have to cope with uncertain data and involve multiple objectives. To handle uncertainty in data in these problems, various approaches are adopted, including the application of interval, fuzzy and possibilistic data models [2, 11, 7]. A key component of many approaches that aim to address uncertainty is the use of the arithmetics of idempotent semirings and semifields, which provides a bridge between these approaches and tropical mathematics. The interaction between tropical (idempotent) mathematics, which concentrates on the theory and applications of algebraic systems with idempotent operations, and fuzzy, interval and possibilistic mathematics appears to be bidirectional. On the one hand, fuzzy and possibilistic models have inspired the study of idempotent algebraic systems such as fuzzy algebra, Łukasiewicz semirings and Viterbi semirings (see, e.g. [19, 20, 44, 18]). On the other hand, tropical mathematics offers a conceptual and analytical framework for computations in fuzzy, interval and other arithmetics that involve operations of taking maximum and minimum. Examples of related recent studies include [17, 56, 34, 53]. The above observations indicate that the development and investigation of novel solutions in tropical mathematics can benefit fuzzy set theory and other research domains where idempotency plays an essential role in dealing with uncertainty.
As another bridge between tropical mathematics and the mathematics of uncertainty, we consider the concept of Chebyshev distance and the related theory of Chebyshev approximation. In tropical semifields, the Chebyshev distance is introduced in a direct way using only internal operations, which offers a strong potential for developing efficient techniques to solve optimization problems involving Chebyshev approximation. At the same time, the methods and techniques of Chebyshev approximation appear to be in growing demand as a useful approach to solving optimization and other problems in fuzzy mathematics [10, 37, 36], which makes the related results in tropical mathematics relevant.
The known methods to address uncertain decision problems under multiple criteria include both crisp and fuzzy implementations of regular multiobjective optimization methods and heuristic algorithms [39, 6, 16, 13]. One of the approaches is to use methods and techniques of tropical optimization which is concerned with optimization problems that are formulated and solved in terms of tropical mathematics. Due to the clear connection between the fuzzy and tropical mathematics, the development of new solutions that are based on multiobjective tropical optimization suggests a promising resource to handle uncertain multicriteria decision problems, for example through a fuzzy implementation of the optimization method.
The problem of evaluating ratings (scores, priorities, weights) of alternatives based on pairwise comparisons under multiple criteria is an uncertain decision problem, where the uncertainty arises from subjective judgment errors and imprecise comparison data. This uncertainty leads to inconsistent or contradictory results of rating and ranking alternatives, and needs to be managed to achieve acceptable solutions. The problem under consideration is widely occurring and highly demanded in decision making, and has been studied for decades in many works, including the classical paper by L. L. Thurstone [54]. Given results of comparisons of decision alternatives in the form of pairwise comparison matrices, the problem is to find a vector of individual ratings of alternatives, which can be used to rank alternatives according to their ratings and thus provide a basis for optimal decisions.
The solution techniques available for the problem apply various heuristic procedures that do not guarantee the optimal solution, but usually produce acceptable results in practice, and mathematical methods that are formally justified, but may be computationally very expensive. Along with the methods based on conventional algebra, there exist solutions that use interval arithmetic, fuzzy arithmetic and tropical algebra.
One of the most prominent approaches to solving multicriteria pairwise comparison problems, known as the analytical hierarchy process, was proposed in the 1970s by T. L. Saaty [47, 48, 49]. The approach employs the principal eigenvector method to derive a vector of individual ratings by calculating the eigenvectors of the pairwise comparison matrices corresponding to their maximal eigenvalues (the Perron eigenvectors). In succeeding years the approach was extended to handle problems with interval and fuzzy pairwise comparison matrices [57, 5, 51, 52] (see also literature overviews in [25, 46]) and to implementations based on tropical algebra [14, 15, 22, 55, 18].
Another widely used approach is the weighted geometric mean method, which is based on matrix approximation in the sense of the Euclidean metric in logarithmic scale [43, 9, 1]. Under this approach, the individual ratings are obtained from a matrix of pairwise comparisons as the vector of geometric means of elements in the rows of the matrix. For interval and fuzzy extensions of the geometric mean method, one can consult [25, 46].
Different solution approaches to the pairwise comparison problems may yield results where the acquired ratings produce different or contradictory orders of alternatives [3, 1, 24, 41]. Some of the approaches offer analytical solutions with low computational complexity, whereas others are based on numerical algorithms that may have a considerable computational cost. As a result, the development of new effective solutions that supplement and complement existing approaches remains relevant.
A technique that applies approximation of pairwise comparison matrices in the Chebyshev metric in logarithmic scale (log-Chebyshev approximation) is proposed and developed in [29, 30, 34]. The technique leads to optimization problems that are formulated and solved in the framework of tropical algebra where addition is defined as maximum. Using methods and results of tropical optimization yields analytical solutions of the problem in a compact vector form ready for both formal analysis and straightforward computation. In contrast to the methods of principal eigenvector and geometric mean, which offer unique solutions to the problem of pairwise comparisons, the log-Chebyshev approximation approach can produce multiple solutions. At the same time, an important advantage of this approach is that it can be directly extended to solve pairwise comparison problems with constraints imposed on the individual ratings of alternatives, whereas both the principal eigenvector and geometric mean methods cannot accommodate additional constraints without considerable complication of the solution.
The log-Chebyshev approximation technique differs from another existing approach based on tropical algebra, which is exploited in many works including [14, 15, 22, 55, 21]. This approach can be considered as a tropical analogue of the principal eigenvector method, where the ordinary Perron eigenvector is formally replaced by the tropical eigenvector of the pairwise comparison matrix. The solution obtained as the tropical eigenvector is known to be one of the solutions provided by the log-Chebyshev approximation. However, as shown in [30], the tropical eigenvector can yield a less relevant solution in the context of the pairwise comparison problem than other vectors found using the approximation. Since the tropical analogue of the principal eigenvector method may miss better results provided by the log-Chebyshev approximation technique, it seems unnecessary to consider this method separately.
Multicriteria pairwise comparison problems can be considered as optimization problems with multiple objectives and solved using methods and techniques available in multiobjective optimization [12, 38, 4, 42, 45]. The most common approach to solving multiobjective optimization problems is to obtain Pareto-optimal (nondominated) solutions that cannot be improved for any one objective without worsening at least one other objective. To obtain Pareto-optimal solutions, various techniques are applied to reduce the problem with a vector objective function to one or more problems with ordinary (scalar) objective functions. As examples, one can consider linear scalarization used in the weighted geometric mean method [9, 1] and Chebyshev scalarization in the log-Chebyshev approximation method [34]. Other approaches to solve multiobjective problems include max-ordering, lexicographic ordering and lexicographic max-ordering techniques [12].
In this paper, we consider a new constrained decision problem to find absolute ratings of alternatives that are compared in pairs under multiple criteria, subject to constraints in the form of two-sided bounds (box-constraints) on ratios between the ratings. Given matrices of pairwise comparisons according to the criteria, the problem is formulated as the log-Chebyshev approximation of these matrices by a common consistent matrix (a symmetrically reciprocal matrix of unit rank) to minimize the approximation errors for all matrices simultaneously [34]. We rearrange the approximation problem as a constrained multiobjective optimization problem of finding a vector that determines the approximating consistent matrix. The constrained optimization problem is then represented in the framework of tropical algebra where addition is defined as taking maximum and multiplication is as usual.
We apply results and further develop methods of tropical optimization to handle the multiobjective optimization problem according to various principles of optimality. The results obtained include several statements proved as lemmas and theorems to provide a basis for formal derivation and justification of the solution procedures. We present numerical examples of solving multicriteria problems of rating four alternatives to illustrate the technique. Specifically, we show how the proposed technique is used to solve the problem of selecting a plan for vacation from [47], and compare the results.
The novelty of this study is twofold and lies in the development of new solution methods and results that are important for both decision making and tropical optimization. First, we consider a new class of constrained multicriteria pairwise comparison problems where the individual ratings of alternatives are evaluated subject to box-constraints on the ratios of ratings. These constraints can describe predefined relations between ratings, derived from some prior knowledge (such as approved and adopted results of previous studies) independent of the current comparison data. We extend the solution obtained in [33] for a constrained bi-criteria problem to problems with an arbitrary fixed number of criteria. The constrained problems under study make it possible to take into account the increasing complexity of contemporary decision processes, yet they have received little or no attention in the literature.
Furthermore, we develop a new technique to handle the above constrained problems, which combines the log-Chebyshev approximation of pairwise comparison matrices with multicriteria optimization based on the max-ordering, lexicographic ordering and lexicographic max-ordering principles of optimality. The solutions are given analytically in a compact parametric form that represents all solution vectors of ratings in a way ready for formal analysis and efficient computation. We observe that implementation of the lexicographic ordering and lexicographic max-ordering optimization schemes to solve multiobjective problems involves solving a series of constrained scalar optimization problems, where the constraints of a problem define a feasible set as the solution set of the previous problem in the series [12]. The need for this predetermines and justifies the application of the log-Chebyshev approximation which can solve constrained problems analytically, rather than the methods of the principal eigenvector and geometric mean, which are not able to address the constrained pairwise comparison problems in a direct way.
Second, we present new results on the solution of tropical optimization problems, which are of independent interest. A theorem is derived that formally establishes the equivalence between the solution of a constrained tropical optimization problem, the solution of a linear vector inequality and a set of vectors given in parametric form. The theorem is obtained as a consequence of some known facts (see, e.g. [27, 28, 31, 32]) combined into one result here for the first time to serve as a useful analytical tool for implementation in tropical optimization. Specifically, this result plays a key role in the construction and justification of solution procedures for the constrained multicriteria pairwise comparison problem under consideration.
We derive new solutions for particular problems of maximizing and minimizing a function in the multiplicative form of the Hilbert seminorm under a normalization constraint. This result is used to deal with multiple solutions of log-Chebyshev approximation in the pairwise comparison problem, which can make it difficult to choose the optimal solution in decision making. We improve and simplify the technique developed in [34] to handle the multiple-solutions issue by finding two solution vectors that have the highest ratio (the best differentiating solution) and the lowest ratio (the worst differentiating solution) between ratings. For some problems, the technique may fail to overcome the issue and yield best and worst differentiating solutions that are not unique. We propose a new approach which, in the case of multiple solutions, allows one to reduce the number of best differentiating solutions and to replace all worst differentiating solutions by one solution.
Finally, we develop new computational procedures to solve a class of constrained multiobjective tropical optimization problems according to various principles of optimality, which can find application in other practical contexts including temporal project scheduling and facility location problems.
The rest of the paper is organized as follows. We start with Section 2 where a constrained pairwise comparison problem with a single criterion and its solution based on the log-Chebyshev approximation are considered. In Section 3, we extend the pairwise comparison problem to take into account multiple criteria and describe various solution approaches. Section 4 offers an outline of basic definitions and results of tropical max-algebra needed below. In Section 5, we present solutions to tropical optimization problems, which are used to handle multicriteria pairwise comparison problems in Section 6. Sections 7 and 8 give numerical examples to illustrate the obtained results. Finally, Section 9 includes short concluding remarks.
2 Single-Criterion Pairwise Comparison Problem
Suppose that $n$ alternatives are compared in pairs, which results in an $(n\times n)$-matrix $\mathbf{A}=(a_{ij})$ of pairwise comparisons, where the entry $a_{ij}$ shows by how many times alternative $i$ is considered more preferable than alternative $j$. The pairwise comparison matrix is assumed to satisfy the condition $a_{ij}=1/a_{ji}$ (and thus $a_{ii}=1$) for all $i,j$, which makes the matrix symmetrically reciprocal. This condition says that if alternative $i$ is estimated to be $a_{ij}$ times better than $j$, then alternative $j$ must be $1/a_{ij}$ times better ($a_{ij}$ times worse) than $i$.
Given a pairwise comparison matrix $\mathbf{A}$ which represents the results of relative evaluation of one alternative against another, the problem of interest is to calculate absolute ratings of alternatives in the form of an $n$-vector $\mathbf{x}=(x_{i})$ where $x_{i}$ represents the rating of alternative $i$.
A pairwise comparison matrix $\mathbf{A}$ is referred to as consistent if the condition $a_{ij}=a_{ik}a_{kj}$ holds for all $i,j,k$. This condition corresponds to the natural transitivity of judgments, which requires that if alternative $i$ is considered $a_{ik}$ times better than $k$, and alternative $k$ is $a_{kj}$ times better than $j$, then alternative $i$ should be $a_{ik}a_{kj}$ times better than $j$.
For a consistent matrix $\mathbf{A}$, it is not difficult to verify that there exists a positive vector $\mathbf{x}=(x_{i})$ whose entries determine the entries of $\mathbf{A}$ by the relation $a_{ij}=x_{i}/x_{j}$, valid for all $i,j$, which means that the matrix $\mathbf{A}$ is of unit rank. Indeed, the transitivity condition implies that any two columns in $\mathbf{A}$ are collinear, and thus we can write $\mathbf{A}=\mathbf{x}\mathbf{y}^{T}$ where $\mathbf{x}$ and $\mathbf{y}$ are positive column vectors.
Since each entry of the matrix $\mathbf{A}=\mathbf{x}\mathbf{y}^{T}$ is given by $a_{ij}=x_{i}y_{j}$ with $a_{ii}=x_{i}y_{i}=1$, we have $y_{i}=1/x_{i}$ for all $i$, which yields the above relation. Moreover, it directly follows from this relation that the vector $\mathbf{x}$, which is defined up to a positive factor, can be taken as a vector of absolute ratings of alternatives and thus gives the solution of the pairwise comparison problem.
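To make the relation between a consistent matrix and its generating vector concrete, here is a minimal numerical sketch; the rating vector below is an arbitrary illustrative choice, not data from the paper.

```python
import numpy as np

# Hypothetical ratings of three alternatives (illustrative values only)
x = np.array([4.0, 2.0, 1.0])

# Consistent pairwise comparison matrix generated by x: a_ij = x_i / x_j
A = np.outer(x, 1.0 / x)

# Symmetric reciprocity a_ij * a_ji = 1 and transitivity a_ij = a_ik * a_kj
assert np.allclose(A * A.T, 1.0)
n = len(x)
assert all(np.isclose(A[i, j], A[i, k] * A[k, j])
           for i in range(n) for j in range(n) for k in range(n))

# Any column recovers the generating vector up to a positive factor
print(A[:, 0] / A[-1, 0])   # -> [4. 2. 1.]
```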
The matrices of pairwise comparisons that appear in real-world problems are commonly not consistent, which makes the problem of evaluating absolute ratings nontrivial. The solution techniques available to handle the problem include various heuristic methods that offer acceptable results in many cases in practice, and approximation methods that provide mathematically justified optimal solutions.
2.1 Solution Approaches to Pairwise Comparison Problem
The heuristic methods are mainly based on aggregating columns of the pairwise comparison matrix to obtain a solution by calculating a weighted sum of these columns [8]. In the most widely used method of principal eigenvector [47, 48, 49], the solution vector $\mathbf{x}$ is defined as the sum of columns taken with weights assumed proportional to the components of $\mathbf{x}$ itself. Under this assumption, the vector $\mathbf{x}$ must satisfy the equation $\mathbf{A}\mathbf{x}=\lambda\mathbf{x}$ where $\lambda$ is a coefficient of proportionality, and it is found as the principal (Perron) eigenvector of the matrix $\mathbf{A}$ corresponding to the maximum eigenvalue.
The approximation methods reduce the problem to finding a consistent matrix that approximates a given matrix in the sense of some distance function as approximation error [8]. Then, a positive vector that determines the approximating consistent matrix is taken as a vector of absolute ratings of alternatives. The approximation problem is solved as an optimization problem of minimizing the distance between the matrices and , which provides a formal justification of the solution obtained.
If the approximation error is measured on the standard linear scale, the approach normally leads to complex multiextremal optimization problems that are very hard to solve [50]. On the contrary, the application of the logarithmic scale (with the logarithm taken to a base greater than one) makes the approximation problem more tractable and even allows the solution to be derived analytically in an exact explicit form.
A widespread approximation technique to solve the pairwise comparison problem measures the error between the matrices by using the Euclidean metric in logarithmic scale [43, 9, 1]. This technique, known as the method of geometric mean, involves finding a positive vector $\mathbf{x}$ that solves the problem
$$\min_{\mathbf{x}>\mathbf{0}}\ \sum_{i=1}^{n}\sum_{j=1}^{n}\left(\log a_{ij}-\log\frac{x_{i}}{x_{j}}\right)^{2}.$$
The standard solution approach, which applies the first derivative test to find the roots of the partial derivatives of the objective function with respect to all $x_{i}$, yields a solution vector with the components given in the parametric form
$$x_{i}=c\left(\prod_{j=1}^{n}a_{ij}\right)^{1/n},\qquad c>0.$$
After normalization (e.g. by dividing by the sum of components), this vector is taken as the optimal solution of the pairwise comparison problem.
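For illustration, the following sketch applies the geometric mean method to a small made-up reciprocal matrix; the matrix values are hypothetical and serve only to show the computation.

```python
import numpy as np

# Hypothetical symmetrically reciprocal (but inconsistent) comparison matrix
A = np.array([[1.0, 2.0, 4.0],
              [1/2, 1.0, 3.0],
              [1/4, 1/3, 1.0]])

# Geometric mean of the entries in each row: x_i = (prod_j a_ij)^(1/n)
x = np.prod(A, axis=1) ** (1.0 / A.shape[0])

# Normalize, e.g. by the sum of components
print(x / x.sum())
```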
As another approximation technique, a method of minimizing the Chebyshev distance in logarithmic scale (a log-Chebyshev approximation method) is proposed in [29, 30, 34]. The method aims to find positive vectors that solve the problem
$$\min_{\mathbf{x}>\mathbf{0}}\ \max_{1\le i,j\le n}\left|\log a_{ij}-\log\frac{x_{i}}{x_{j}}\right|.\qquad(1)$$
We observe that the methods of principal eigenvector and geometric mean lead to unique solutions and offer efficient computational procedures for calculating the result. At the same time, the solution provided by the log-Chebyshev approximation can be nonunique and thus requires further analysis. Considering the approximate character of the pairwise comparison model, which usually leads to inconsistent results of pairwise comparisons, multiple solutions of the problem look quite reasonable and even make it possible to select a solution that satisfies additional constraints.
2.2 Log-Chebyshev Approximation of Pairwise Comparison Matrix
Let us consider the problem of unconstrained log-Chebyshev approximation at (1). Observing that the logarithm to a base greater than one is monotonically increasing, the objective function can be rewritten as (see, e.g. [34])
$$\max_{1\le i,j\le n}\left|\log a_{ij}-\log\frac{x_{i}}{x_{j}}\right|=\log\max_{1\le i,j\le n}\frac{a_{ij}x_{j}}{x_{i}}.$$
Since the logarithm on the right-hand side attains its maximum where its argument is maximal, we can remove the logarithm from the objective function and minimize $\max_{i,j}a_{ij}x_{j}/x_{i}$ instead. Moreover, it is not difficult to verify (see [14, 15, 34]) that the obtained objective function is connected with the standard relative approximation error by a simple equality.
This means that the log-Chebyshev approximation of the matrix $\mathbf{A}$ is equivalent to the approximation in the sense of relative error as well.
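The next sketch evaluates the log-Chebyshev approximation error of a candidate vector of ratings in the equivalent multiplicative form $\max_{i,j}a_{ij}x_{j}/x_{i}$ discussed above; both the matrix and the vector are made-up illustrative values.

```python
import numpy as np

def log_chebyshev_error(A, x):
    """Multiplicative form of the log-Chebyshev error: max_{i,j} a_ij * x_j / x_i."""
    return np.max(A * np.outer(1.0 / x, x))

# Hypothetical data: an inconsistent comparison matrix and a candidate rating vector
A = np.array([[1.0, 2.0, 4.0],
              [1/2, 1.0, 3.0],
              [1/4, 1/3, 1.0]])
x = np.array([2.2, 1.3, 0.5])

err = log_chebyshev_error(A, x)   # equals 1 exactly when A is consistent and generated by x
print(err, np.log(err))           # multiplicative error and its value on the logarithmic scale
```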
Consider an extension of the pairwise comparison problem where the ratings of alternatives are to be evaluated subject to some predefined constraints. Suppose the constraints imposed on the ratings are in the form of two-sided bounds on ratios between the ratings. These constraints can reflect prior knowledge about the relationship between ratings, which can be obtained by different assessment techniques or resulted from the nature of alternatives.
Suppose that a nonnegative matrix $\mathbf{C}=(c_{ij})$ is given, where an entry $c_{ij}>0$ shows that alternative $i$ must be considered not less than $c_{ij}$ times better than $j$, and $c_{ij}=0$ indicates that no constraint is imposed on $i$ with respect to $j$. The constraints are given by the inequalities $c_{ij}\le x_{i}/x_{j}$ for all $i,j$, or equivalently by the double inequality $c_{ij}\le x_{i}/x_{j}\le 1/c_{ji}$ when $c_{ji}\ne 0$. Combining the former inequalities over $j$ for each $i$ yields the system of inequalities
$$\max_{1\le j\le n}c_{ij}x_{j}\le x_{i},\qquad i=1,\dots,n.$$
We incorporate the constraints into (1) to formulate the following problem of constrained log-Chebyshev approximation. Given a positive $(n\times n)$-matrix $\mathbf{A}=(a_{ij})$ of pairwise comparisons and a nonnegative $(n\times n)$-matrix $\mathbf{C}=(c_{ij})$ of constraints, the problem is to find positive $n$-vectors $\mathbf{x}$ that achieve
$$\min_{\mathbf{x}>\mathbf{0}}\ \max_{1\le i,j\le n}\left|\log a_{ij}-\log\frac{x_{i}}{x_{j}}\right|,\qquad\text{s.t.}\quad\max_{1\le j\le n}c_{ij}x_{j}\le x_{i},\quad i=1,\dots,n.\qquad(2)$$
Note that multiplying all components of the vector $\mathbf{x}$ by a common positive factor changes neither the objective function nor the inequality constraints in problem (2), and hence the solutions of the problem are scale-invariant.
Finally, we observe that the methods of principal eigenvector and geometric mean cannot accommodate the above constraints in a straightforward way without considerable modification and complication of the solution. As is shown later, the constrained problem of log-Chebyshev approximation at (2) can be solved directly to give the result in a compact vector form.
2.3 Best and Worst Differentiating Solutions
If problem (2) has a unique solution up to a positive factor, this solution is taken as the vector of absolute ratings of alternatives in question.
Suppose that the solution of (2) is not unique and denote the set of obtained solution vectors by
A nonunique solution vector of the pairwise comparison problem may make it difficult to derive an optimal decision, and thus further analysis is needed to characterize the result by one or two vectors that are reasonably representative of the solution. An approach developed in [34] suggests reducing the entire set of solutions to two vectors that provide the “best” and “worst” answers to the questions of which alternative has the highest rating and how much this rating differs from the ratings of the other alternatives. As the best solution, the approach takes a vector that maximally differentiates the alternatives with the highest and lowest ratings, and as the worst, a vector that minimally differentiates all alternatives. The difference between alternatives is measured by the ratio of the highest to the lowest rating, in the form of a multiplicative version of the Hilbert (span, range) seminorm of the vector $\mathbf{x}$, which is given by
$$\frac{\displaystyle\max_{1\le i\le n}x_{i}}{\displaystyle\min_{1\le j\le n}x_{j}}.\qquad(3)$$
The best and worst differentiating solutions are then obtained by maximizing and minimizing the Hilbert seminorm over all vectors , which leads to the optimization problems
As examples show (see, e.g. [34]), one of the shortcomings of this approach is that the best (worst) differentiating solution found in this way may not be unique. To overcome this possible issue, we now propose a new improved procedure of finding the best and worst differentiating solutions, which provides a reduced set of the best solutions and a unique worst solution.
Since the solutions of problem (2) are invariant under multiplication by a positive factor, we can restrict ourselves to vectors that are normalized by dividing by the maximum component. Adding the normalization condition transforms the last two problems into the problems
$$\max_{\mathbf{x}}\ \frac{\max_{1\le i\le n}x_{i}}{\min_{1\le j\le n}x_{j}}\qquad\text{and}\qquad\min_{\mathbf{x}}\ \frac{\max_{1\le i\le n}x_{i}}{\min_{1\le j\le n}x_{j}},\qquad\text{where }\mathbf{x}\text{ solves (2) and }\max_{1\le i\le n}x_{i}=1.\qquad(4)$$
Furthermore, we note that both problems can have nonunique normalized solutions. Under this circumstance, it is reasonable to concentrate only on the minimal normalized solution of the maximization problem and the maximal normalized solution of the minimization problem. As usual, a solution $\mathbf{x}$ is called minimal (maximal) if the componentwise inequality $\mathbf{x}\le\mathbf{x}'$ ($\mathbf{x}\ge\mathbf{x}'$) holds for any solution $\mathbf{x}'$.
Indeed, all normalized vectors that maximize or minimize the ratio between the highest and lowest ratings have two common components whose ratio is fixed: the maximum component, equal to 1, and the minimum component, less than or equal to 1. In this case, the lower (higher) the ratings of the other alternatives, the better (worse) the alternative with the maximum rating is distinguishable from the others.
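A short sketch of the two quantities used in this subsection, the multiplicative Hilbert seminorm and normalization by the maximum component, with illustrative vectors chosen to contrast a strongly and a weakly differentiating solution.

```python
import numpy as np

def hilbert_seminorm(x):
    """Multiplicative Hilbert (span) seminorm: max_i x_i / min_j x_j."""
    return np.max(x) / np.min(x)

def normalize_max(x):
    """Normalize a positive vector by dividing by its maximum component."""
    return x / np.max(x)

# Two hypothetical solution vectors that rank the alternatives identically
x_best  = np.array([1.0, 0.5, 0.2])   # strongly differentiates the alternatives
x_worst = np.array([1.0, 0.9, 0.8])   # weakly differentiates them

print(hilbert_seminorm(x_best), hilbert_seminorm(x_worst))   # 5.0 and 1.25
print(normalize_max(3.0 * x_best))                           # normalization removes the scale factor
```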
3 Multicriteria Pairwise Comparison Problems
We now assume that $n$ alternatives are compared in pairs according to $m$ criteria. For each criterion $k=1,\dots,m$, the results of pairwise comparisons are represented by a matrix $\mathbf{A}_{k}=(a_{ij}^{(k)})$. The problem is to assess the alternatives by evaluating a vector $\mathbf{x}=(x_{i})$ of ratings subject to constraints given by a matrix $\mathbf{C}=(c_{ij})$. Application of the log-Chebyshev approximation technique yields the following constrained multiobjective problem:
$$\min_{\mathbf{x}>\mathbf{0}}\ \left(\max_{1\le i,j\le n}\left|\log a_{ij}^{(1)}-\log\frac{x_{i}}{x_{j}}\right|,\ \dots,\ \max_{1\le i,j\le n}\left|\log a_{ij}^{(m)}-\log\frac{x_{i}}{x_{j}}\right|\right),\qquad\text{s.t.}\quad\max_{1\le j\le n}c_{ij}x_{j}\le x_{i},\quad i=1,\dots,n.\qquad(5)$$
Suppose we have a solution to the problem, which forms a nonempty set of solution vectors. If the solution obtained is a unique vector up to a positive factor, we normalize this vector by dividing by the maximum element and take its entries as absolute ratings of alternatives in the pairwise comparison problem. Otherwise, the best and worst differentiating vectors of ratings are obtained respectively as the minimal solution of the maximization problem and the maximal solution of the minimization problem at (4).
In the rest of this section, we consider three common approaches to handle problem (5), which results in the development of new procedures to find the solution set. The proposed solutions follow the max-ordering, lexicographic ordering and lexicographic max-ordering principles of optimality [12].
3.1 Max-Ordering Solution
The max-ordering optimization aims at minimizing the worst value of the scalar objective functions in the multiobjective problem. For each vector $\mathbf{x}$, the approach considers the objective function that takes the maximal (worst) value. This leads to the replacement of the vector objective function by a scalar function given by the maximum component of the vector function, which is known as the Chebyshev scalarization approach in multiobjective optimization [42].
The solution of the constrained problem at (5) starts with the introduction of the feasible solution set given by the constraints as
(6)
We apply the Chebyshev scalarization and use the associativity of the maximum operation to form the scalar objective function
Then, the problem reduces to the minimization problem
(7)
which is solved to obtain the max-ordering solution in the form of the set
Note that the solution obtained by the max-ordering optimization is known to be weakly Pareto-optimal, and it becomes Pareto-optimal if unique [42].
If the solution is not unique (up to a positive factor), we further reduce the solution set by extracting the best and worst differentiating vectors of ratings given by the minimal and maximal normalized solutions of problems (4) over the set .
3.2 Lexicographic Ordering Solution
The lexicographic optimization examines the scalar objective functions in a hierarchical order based on a certain ranking of objectives. Suppose the objectives are numbered in such a way that the first objective has the highest rank, the second has the second highest rank, and so on until the last objective, which has the lowest rank. The lexicographic approach first solves the problem of minimizing the highest-ranked objective function and examines the set of optimal solutions obtained. If the solution is unique, it is taken as the solution of the overall multiobjective problem. Otherwise, the next objective function is minimized over all solutions of the first problem, and the procedure continues until a unique solution is obtained or the problem with the last objective function is solved.
To apply this approach to problem (5), we first take the set given by (6), and then successively obtain the solution sets for each problem
(8)
The solution procedure stops at a step where the set consists of a single solution vector, or otherwise at the last step. The last found set is taken as the lexicographic solution for the entire problem. In the same way as before, if the solution given by this set is not unique, it is reduced to the minimal and maximal solutions of the respective problems at (4) over the set.
3.3 Lexicographic Max-Ordering Solution
This approach combines the lexicographic ordering and max-ordering into one procedure that improves the accuracy of the assessment provided by the max-ordering approach. The procedure consists of several steps, each of which finds the max-ordering solution of a reduced problem that has a lower multiplicity of objectives and smaller feasible set. The first solution step coincides with the above described max-ordering solution of the constrained problem with objectives and the feasible solution set given by the constraints. Each subsequent step takes the solution from the previous step as the current feasible set and selects objectives that can be further minimized over this set, to incorporate into a current vector objective function. A scalar objective function is included if it has its minimum value over the current feasible set below the minimum of the objective function at the previous step.
We denote by the set of indices of scalar objective functions that form components of the vector objective function, and by the solution set obtained at step . We initially set and define as in (6).
We find a solution set by solving the problem
(9)
With the minimum value of the objective function at step denoted by , we define the index set
(10)
The procedure is completed at step if either the set reduces to a single solution vector or the condition holds. We additionally solve optimization problems (4) if the final set provides a nonunique solution.
Below we show how the solutions offered by the above three approaches can be directly represented in explicit analytical form using methods and results of tropical mathematics.
4 Preliminary Algebraic Definitions, Notation and Results
In this section, we offer a brief overview of definitions, notation and results of tropical algebra that are used in the sequel. For further details on the theory and applications of tropical mathematics, one can consult the recent monographs and textbooks [19, 23, 20, 40].
4.1 Idempotent Semifield
Consider the set of nonnegative reals $\mathbb{R}_{+}$ with two binary operations: addition, denoted by $\oplus$ and defined as maximum, and multiplication, denoted by $\otimes$ and defined as usual. Both operations are associative and commutative, and multiplication distributes over addition. Addition is idempotent since $x\oplus x=x$ for all $x\in\mathbb{R}_{+}$, and thus is not invertible. It induces a partial order by the rule: $x\le y$ if and only if $x\oplus y=y$, which is in agreement with the natural linear order on $\mathbb{R}_{+}$. With these properties, the system $\langle\mathbb{R}_{+},\oplus\rangle$ forms an idempotent Abelian semigroup with identity.
With usual multiplication, the system of positive reals has the structure of an Abelian group where powers with rational (and even real) exponents are well defined, which means that this system is radicable (algebraically complete). In what follows, the multiplication sign $\otimes$ is omitted as in the standard algebra to save writing. Under the above properties, the system $\langle\mathbb{R}_{+},0,1,\oplus,\otimes\rangle$ is commonly classified as a linearly ordered idempotent radicable semifield and referred to as max-algebra.
Both addition and multiplication are monotone in each argument: if the inequality $x\le y$ holds for some $x,y\in\mathbb{R}_{+}$, then the inequalities $x\oplus z\le y\oplus z$ and $xz\le yz$ are valid for any $z$ as well. Furthermore, it follows from the inequality $x\le y$ with $x,y>0$ that $x^{r}\le y^{r}$ if $r>0$ and $x^{r}\ge y^{r}$ if $r<0$.
4.2 Matrix and Vector Algebra
Matrices over max-algebra are introduced in the usual way; addition and multiplication of compatible matrices as well as multiplication of matrices by scalars follow the standard entrywise rules where the arithmetic addition is replaced by (whereas the multiplication does not change). The monotonicity of scalar operations extends to the matrix operations, where the inequalities are understood entrywise.
The transpose of a matrix $\mathbf{A}$ is denoted by $\mathbf{A}^{T}$. The matrices which consist of one column are vectors, and those of one row are transposed vectors. The set of matrices with $m$ rows and $n$ columns is denoted by $\mathbb{R}_{+}^{m\times n}$, and the set of column vectors with $n$ entries by $\mathbb{R}_{+}^{n}$.
The zero matrix, denoted by $\mathbf{0}$, and the identity matrix, denoted by $\mathbf{I}$, are defined in the same way as in conventional algebra. A matrix without zero rows (columns) is called row-regular (column-regular), and a vector without zero entries is called regular. Matrices without zero entries and regular vectors are also called positive.
A vector $\mathbf{y}$ is collinear to a vector $\mathbf{x}$ if $\mathbf{y}=c\mathbf{x}$ for some scalar $c>0$.
For any nonzero matrix $\mathbf{A}=(a_{ij})$ from the set $\mathbb{R}_{+}^{m\times n}$, its multiplicative conjugate transpose (or simply conjugate) is the matrix $\mathbf{A}^{-}=(a_{ij}^{-})$ in $\mathbb{R}_{+}^{n\times m}$ where $a_{ij}^{-}=a_{ji}^{-1}$ if $a_{ji}\ne 0$, and $a_{ij}^{-}=0$ otherwise. If a matrix $\mathbf{A}$ is row-regular, then the inequality $\mathbf{A}\mathbf{A}^{-}\ge\mathbf{I}$ obviously holds.
For any nonzero vector $\mathbf{x}=(x_{i})$ from $\mathbb{R}_{+}^{n}$, its conjugate is the row vector $\mathbf{x}^{-}=(x_{i}^{-})$ where $x_{i}^{-}=x_{i}^{-1}$ if $x_{i}\ne 0$, and $x_{i}^{-}=0$ otherwise. If a vector $\mathbf{x}$ is regular, then the matrix inequality $\mathbf{x}\mathbf{x}^{-}\ge\mathbf{I}$ and the scalar equality $\mathbf{x}^{-}\mathbf{x}=1$ are valid (where the equality holds for any nonzero vector $\mathbf{x}$).
For any square matrix $\mathbf{A}$ from $\mathbb{R}_{+}^{n\times n}$, the nonnegative integer powers represent repeated multiplication of the matrix by itself, defined by $\mathbf{A}^{0}=\mathbf{I}$ and $\mathbf{A}^{p}=\mathbf{A}^{p-1}\mathbf{A}$ for all integer $p\ge 1$. The trace of a matrix $\mathbf{A}=(a_{ij})$ is given by $\mathrm{tr}\,\mathbf{A}=a_{11}\oplus\cdots\oplus a_{nn}$.
For any matrix $\mathbf{A}\in\mathbb{R}_{+}^{n\times n}$, a trace function is introduced to serve as a tropical analogue of the matrix determinant in the form $\mathrm{Tr}(\mathbf{A})=\mathrm{tr}\,\mathbf{A}\oplus\mathrm{tr}\,\mathbf{A}^{2}\oplus\cdots\oplus\mathrm{tr}\,\mathbf{A}^{n}$.
If the condition $\mathrm{Tr}(\mathbf{A})\le 1$ holds, then the Kleene star operator is defined to map the matrix $\mathbf{A}$ into the matrix $\mathbf{A}^{\ast}=\mathbf{I}\oplus\mathbf{A}\oplus\cdots\oplus\mathbf{A}^{n-1}$.
Moreover, under the same condition, the inequality
is valid for all integer , from which it follows directly that
The spectral radius of a matrix $\mathbf{A}\in\mathbb{R}_{+}^{n\times n}$ is given by $\lambda=\mathrm{tr}\,\mathbf{A}\oplus\mathrm{tr}^{1/2}(\mathbf{A}^{2})\oplus\cdots\oplus\mathrm{tr}^{1/n}(\mathbf{A}^{n})$.
Consider a matrix $\mathbf{A}=(a_{ij})$ in $\mathbb{R}_{+}^{m\times n}$ and a vector $\mathbf{x}=(x_{i})$ in $\mathbb{R}_{+}^{n}$. With the notation $\mathbf{1}=(1,\dots,1)^{T}$, tropical analogues of the matrix and vector norms are defined as $\|\mathbf{A}\|=\mathbf{1}^{T}\mathbf{A}\mathbf{1}=\bigoplus_{i,j}a_{ij}$ and $\|\mathbf{x}\|=\mathbf{1}^{T}\mathbf{x}=\bigoplus_{i}x_{i}$.
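The following sketch collects the max-algebra ingredients of this section as plain numpy helpers: the max-times matrix product, conjugate transposition, the traces $\mathrm{tr}$ and $\mathrm{Tr}$, the Kleene star, the spectral radius and the tropical norm. The function names are ours, and the code is only a numerical companion to the definitions above.

```python
import numpy as np

def mt_mul(A, B):
    """Max-times matrix product: (A B)_ij = max_k a_ik * b_kj."""
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def conjugate(A):
    """Multiplicative conjugate transpose: entry (i, j) equals 1/a_ji if a_ji != 0, else 0."""
    At = A.T.astype(float)
    C = np.zeros_like(At)
    np.divide(1.0, At, out=C, where=At != 0)
    return C

def tr(A):
    """Tropical trace: maximum of the diagonal entries."""
    return np.max(np.diag(A))

def powers(A):
    """Max-times powers A, A^2, ..., A^n."""
    P, out = A.copy(), [A.copy()]
    for _ in range(A.shape[0] - 1):
        P = mt_mul(P, A)
        out.append(P)
    return out

def Tr(A):
    """Trace function Tr(A) = tr A (+) tr A^2 (+) ... (+) tr A^n."""
    return max(tr(P) for P in powers(A))

def kleene_star(A):
    """Kleene star A* = I (+) A (+) ... (+) A^(n-1), defined when Tr(A) <= 1."""
    S = np.eye(A.shape[0])
    for P in powers(A)[:-1]:
        S = np.maximum(S, P)
    return S

def spectral_radius(A):
    """Spectral radius: maximum of tr(A^k)^(1/k) over k = 1, ..., n."""
    return max(tr(P) ** (1.0 / (k + 1)) for k, P in enumerate(powers(A)))

def norm(A):
    """Tropical norm: the maximum entry of a matrix or vector."""
    return np.max(A)
```

For a positive column vector stored as an (n, 1) array, conjugate(x) returns the row vector $\mathbf{x}^{-}$, so that expressions of the form $\mathbf{x}^{-}\mathbf{A}\mathbf{x}$ used in the following sections can be evaluated as mt_mul(mt_mul(conjugate(x), A), x).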
4.3 Vector Inequalities
We conclude the overview with two inequalities that appear in the derivation of the solution of optimization problems in what follows. Suppose $\mathbf{A}\in\mathbb{R}_{+}^{m\times n}$ is a given matrix and $\mathbf{d}\in\mathbb{R}_{+}^{m}$ is a given vector, and consider the problem of finding all vectors $\mathbf{x}\in\mathbb{R}_{+}^{n}$ that satisfy the inequality
$$\mathbf{A}\mathbf{x}\le\mathbf{d}.\qquad(11)$$
Solutions of (11) are known under various assumptions in different forms. We use a solution suggested by the following assertion (see, e.g. [27]).
Lemma 1.
For any column-regular matrix $\mathbf{A}$ and regular vector $\mathbf{d}$, all solutions of inequality (11) are given by the inequality $\mathbf{x}\le(\mathbf{d}^{-}\mathbf{A})^{-}$.
As a consequence of the lemma, the solution of the equation $\mathbf{A}\mathbf{x}=\mathbf{d}$, if it exists, can be represented as a family of solutions each defined by the vector inequality $\mathbf{x}\le(\mathbf{d}^{-}\mathbf{A})^{-}$ where one scalar inequality is replaced by an equality.
Assume that, for a given square matrix $\mathbf{A}\in\mathbb{R}_{+}^{n\times n}$, we need to find all regular solutions $\mathbf{x}$ of the inequality
$$\mathbf{A}\mathbf{x}\le\mathbf{x}.\qquad(12)$$
Lemma 2.
For any matrix $\mathbf{A}\in\mathbb{R}_{+}^{n\times n}$, the following statements are true.
1. If $\mathrm{Tr}(\mathbf{A})\le 1$, then all regular solutions of inequality (12) are given in parametric form by $\mathbf{x}=\mathbf{A}^{\ast}\mathbf{u}$ where $\mathbf{u}>\mathbf{0}$ is a vector of parameters.
2. If $\mathrm{Tr}(\mathbf{A})>1$, then there is no regular solution.
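A quick numerical check of Lemma 2 with a made-up matrix whose cycle weights do not exceed one: whenever $\mathrm{Tr}(\mathbf{A})\le 1$, every vector of the form $\mathbf{A}^{\ast}\mathbf{u}$ satisfies $\mathbf{A}\mathbf{x}\le\mathbf{x}$.

```python
import numpy as np

def mt_mul(A, B):
    """Max-times matrix product."""
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def Tr(A):
    """Trace function Tr(A) = tr A (+) ... (+) tr A^n."""
    P, t = A.copy(), np.max(np.diag(A))
    for _ in range(A.shape[0] - 1):
        P = mt_mul(P, A)
        t = max(t, np.max(np.diag(P)))
    return t

def kleene_star(A):
    """Kleene star A* = I (+) A (+) ... (+) A^(n-1)."""
    S, P = np.eye(A.shape[0]), np.eye(A.shape[0])
    for _ in range(A.shape[0] - 1):
        P = mt_mul(P, A)
        S = np.maximum(S, P)
    return S

# Hypothetical matrix whose cycle weights do not exceed one, so that Tr(A) <= 1
A = np.array([[0.5, 0.8, 0.0],
              [0.3, 0.2, 1.0],
              [0.4, 0.0, 0.6]])
assert Tr(A) <= 1.0

u = np.array([[1.0], [2.0], [0.7]])        # arbitrary positive vector of parameters
x = mt_mul(kleene_star(A), u)              # candidate solution x = A* u
assert np.all(mt_mul(A, x) <= x + 1e-12)   # the inequality A x <= x holds componentwise
print(x.ravel())
```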
5 Tropical Optimization Problems
We are now concerned with optimization problems formulated and solved in the framework of tropical algebra to provide the basis for the solutions of multicriteria pairwise comparison problems in what follows. We present a new result that establishes a direct correspondence between the solutions of a conjugate quadratic optimization problem and the solutions of a linear inequality. Next, we offer new direct solutions to the problems of both maximization and minimization of a function in the form of the Hilbert seminorm.
5.1 Conjugate Quadratic Optimization Problem
Let us consider a problem to minimize a conjugate quadratic (or pseudoquadratic) vector form subject to a linear constraint. Given matrices $\mathbf{A},\mathbf{B}\in\mathbb{R}_{+}^{n\times n}$, we find regular vectors $\mathbf{x}\in\mathbb{R}_{+}^{n}$ that attain the minimum
$$\min_{\mathbf{x}>\mathbf{0}}\ \mathbf{x}^{-}\mathbf{A}\mathbf{x},\qquad\text{s.t.}\quad\mathbf{B}\mathbf{x}\le\mathbf{x}.\qquad(13)$$
Complete solutions to the problem and its variations are proposed in [27, 28, 31, 32]. We use a solution in the form of the next statement.
Theorem 3.
Let $\mathbf{A}$ be a matrix with nonzero spectral radius and $\mathbf{B}$ be a matrix such that $\mathrm{Tr}(\mathbf{B})\le 1$. Then, the minimum value $\theta$ of the objective function in problem (13) is equal to
(14)
and all regular solutions of the problem are given in the parametric form
$$\mathbf{x}=\left(\theta^{-1}\mathbf{A}\oplus\mathbf{B}\right)^{\ast}\mathbf{u},\qquad(15)$$
where $\mathbf{u}>\mathbf{0}$ is a vector of parameters.
We note that the above solution has a polynomial computational complexity in the dimension . Specifically, it is shown in [31, 32] that this solution can be calculated with at most scalar operations.
We now formulate an equivalence statement that allows all solutions of problem (13) to be described using solutions of a vector inequality in the form of (12) and vice versa. This new result plays a key role in the development of solutions for the multiobjective optimization problems in subsequent sections.
Theorem 4.
Let $\mathbf{A}$ be a matrix with nonzero spectral radius, $\mathbf{B}$ a matrix such that $\mathrm{Tr}(\mathbf{B})\le 1$, and $\theta$ be the minimum value of the objective function in problem (13), given by (14). Then, the following assertions are equivalent:
(i) $\mathbf{x}$ is a regular solution of problem (13).
(ii) $\mathbf{x}$ is a regular solution of the inequality $(\theta^{-1}\mathbf{A}\oplus\mathbf{B})\mathbf{x}\le\mathbf{x}$.
(iii) $\mathbf{x}$ is a regular vector given in the parametric form $\mathbf{x}=(\theta^{-1}\mathbf{A}\oplus\mathbf{B})^{\ast}\mathbf{u}$, where $\mathbf{u}>\mathbf{0}$ is a vector of parameters.
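The closed-form expression (14) for the minimum is not reproduced here, so the sketch below locates $\theta$ numerically by bisection, using the fact that a value $t$ is attainable exactly when the inequality $(t^{-1}\mathbf{A}\oplus\mathbf{B})\mathbf{x}\le\mathbf{x}$ has regular solutions, that is, when $\mathrm{Tr}(t^{-1}\mathbf{A}\oplus\mathbf{B})\le 1$ (Lemma 2 and Theorem 4); the parametric form of the solution set then follows (15). This is a numerical stand-in for the analytical solution, not the paper's formula, and the data are made up.

```python
import numpy as np

def mt_mul(A, B):
    """Max-times matrix product: (A B)_ij = max_k a_ik * b_kj."""
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def Tr(A):
    """Trace function Tr(A) = tr A (+) tr A^2 (+) ... (+) tr A^n."""
    P, t = A.copy(), np.max(np.diag(A))
    for _ in range(A.shape[0] - 1):
        P = mt_mul(P, A)
        t = max(t, np.max(np.diag(P)))
    return t

def kleene_star(A):
    """Kleene star A* = I (+) A (+) ... (+) A^(n-1), assuming Tr(A) <= 1."""
    S, P = np.eye(A.shape[0]), np.eye(A.shape[0])
    for _ in range(A.shape[0] - 1):
        P = mt_mul(P, A)
        S = np.maximum(S, P)
    return S

def objective(A, x):
    """x^- A x = max_{i,j} a_ij x_j / x_i for a positive vector x."""
    return np.max(A * np.outer(1.0 / x, x))

def solve_problem_13(A, B, iters=80):
    """Sketch for problem (13): minimize x^- A x subject to B x <= x, with Tr(B) <= 1.

    The minimum theta is located by bisection on the solvability condition
    Tr(t^{-1} A (+) B) <= 1; the columns of the returned Kleene star matrix S
    then generate the solutions x = S u (cf. (15) and Theorem 4)."""
    x0 = np.max(kleene_star(B), axis=1)   # a feasible point: row maxima of B*
    lo, hi = 1e-12, objective(A, x0)      # theta lies in (0, objective value at x0]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if Tr(np.maximum(A / mid, B)) <= 1.0:
            hi = mid                      # the value mid is attainable
        else:
            lo = mid
    theta = hi
    return theta, kleene_star(np.maximum(A / theta, B))

# Made-up data: a reciprocal comparison matrix A and a constraint matrix B with Tr(B) <= 1
A = np.array([[1.0, 2.0, 4.0],
              [1/2, 1.0, 3.0],
              [1/4, 1/3, 1.0]])
B = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.5],   # the second rating must be at least half of the third
              [0.0, 0.0, 0.0]])
theta, S = solve_problem_13(A, B)
print(theta)
print(S)
```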
5.2 Maximization and Minimization of Hilbert Seminorm
In this subsection, we concentrate on optimization problems with the objective function in the form of Hilbert seminorm, which appear in the analysis of multiple solutions of the pairwise comparison problem. We consider constrained problems of maximization and minimization of Hilbert seminorm and present results that allow to find all solutions of the problems.
We start with a conventional form given by (3) for the Hilbert seminorm of a vector $\mathbf{x}\in\mathbb{R}_{+}^{n}$. After rewriting it in terms of max-algebra, we obtain $\mathbf{1}^{T}\mathbf{x}\,\mathbf{x}^{-}\mathbf{1}$.
We examine optimization problems with as the objective function, which we refer to as a tropical representation of the multiplicative Hilbert seminorm or simply as Hilbert seminorm.
First, we suppose that, given a matrix, we seek regular solutions of the constrained maximization problem
(16)
s.t.
Lemma 5.
For any matrix without zero entries and with , the maximum in problem (16) is equal to , and all regular solutions of the problem are given in the parametric form
where is a matrix obtained from the matrix with columns by setting to all entries except the entry with indices given by
It is not difficult to see that the most computationally intensive component of the solution is the calculation of the Kleene star matrix. This calculation can take no more than scalar operations, which determines the computational complexity involved in the solution.
Another problem of interest is to find regular vectors that achieve the minimum
(17)
s.t.
The statement below offers a solution to problem (17), obtained by applying Theorem 3 with substitution (see [35] for a solution of the problem with an extended system of constraints).
Lemma 6.
For any matrix with , the minimum value of the objective function in problem (17) is equal to , and all regular solutions are given in the parametric form
The computational complexity of the solution is shown in [35] to be .
Let us examine problems (16) and (17) under additional conditions to implement an improved technique of evaluating the best and worst differentiating solutions of the pairwise comparison problem. We add to both problems a normalization constraint on the solution vector in the form
We now concentrate on finding extremal (minimal and maximal) normalized solutions of problems (16) and (17). To solve the problems with normalization constraint, one can refine the results of Lemmas 5 and 6 and then find the extremal solutions. Below we offer direct solutions to the problems, which use less specific assumptions and involve less complicated calculations.
5.3 Maximization Problem
Suppose that we need to find the minimal solution of problem (16) with the normalization constraint added. The problem takes the following form:
(18)
s.t.
The next statement improves the result of Lemma 5 and extends it to matrices that may have zero entries.
Lemma 7.
For any matrix with , the maximum value of the objective function in problem (18) is equal to , and the minimal solutions are given by the set of normalized columns in the matrix in the form
If a vector of the set is dominated by the other vectors (the minimal vector) or is the only vector in the set, it uniquely determines the minimal solution of the problem. Otherwise, the minimal solution is not uniquely defined.
Proof.
We solve the inequality constraint in problem (18) by using Lemma 2 to write where , and then represent the problem in the form
s.t.
First, we observe that the equality constraint always has solutions. It follows from this constraint that where at least for one component of the vector , the scalar inequality is satisfied as an equality.
Suppose that the equality holds for some , which defines the components of by the conditions
Under this assumption, we use properties of idempotent addition to bound the objective function from above as follows:
Note that the above inequality becomes an equality if we put for all since in this case, we have .
Furthermore, the maximum value of the objective function is attained if the index is chosen as
which yields this maximum equal to .
For each index that provides the above maximum, we take as a solution to the maximization problem the vector with the components
Since the equality holds, this vector produces the vector
We observe that the solution vector obtained has the form of a normalized column of the matrix , and that for a fixed , it is the minimal solution due to monotonicity of the matrix operations.
In the case of several indices that yield the maximum value of the objective function, we combine the corresponding solution vectors in a set. If there is the minimal vector among them, we take this vector as the unique minimal solution. Otherwise, the minimal solution cannot be uniquely determined, and therefore it is given by multiple vectors. ∎
5.4 Minimization Problem
We turn to the derivation of the maximal solution to the following minimization problem obtained from (17) by adding the normalization constraint:
(19)
s.t.
A complete solution of the problem is given by the next statement.
Lemma 8.
For any matrix with , the minimum value of the objective function in problem (19) is equal to , and the maximal solution is given by
Proof.
Similarly as before, we replace the inequality by its solution with . Combining with the constraint yields the equality , from which the inequality follows.
After multiplication of the last inequality by from the left, we arrive at a constraint on in the form
In the same way as in [26], we verify the equality . Indeed, since , the inequality holds.
To show the opposite inequality , we note that the identity is valid according to the properties of the Kleene star matrix. Application of the properties of conjugate transposition yields
We combine these relations to obtain the desired inequality as follows:
As a result, the inequality constraint on the vector reduces to
Furthermore, we examine the vector , which is the maximal solution of the last inequality, to verify that this vector satisfies both constraints of the problem. Since , we can write
and thus this vector satisfies the inequality constraint. Moreover, we have
which means that the equality constraint holds as well.
After conjugate transposition of both sides of the inequality and multiplication by , we obtain a lower bound on the objective function
It is clear that the maximal feasible solution attains this bound, which means that is the minimum of the objective function.
Note that the solution obtained is unique and can be represented as
which is the vector of inverses of the maximum column elements in . ∎
Let us verify that the solution of the minimization problem (19) is greater than or equal to any solution of the maximization problem (18). Indeed, with the equality , we have for each that
which shows that all normalized columns of the matrix , including the solution vectors of problem (18), are less than or equal to the solution of (19).
To conclude this section, we note that the obtained solutions to both maximization and minimization problems involve the calculation of the Kleene star matrix as the main component, and thus have the complexity .
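To connect Lemmas 7 and 8 with computation, the sketch below takes a matrix of generators of the solution set (in the applications of the next section this is a Kleene star matrix) and extracts the candidate best differentiating solutions as its normalized columns of maximal Hilbert seminorm, and the worst differentiating solution as the vector of inverses of its column maxima; the matrix shown is a made-up stand-in.

```python
import numpy as np

def hilbert_seminorm(x):
    """Multiplicative Hilbert seminorm: max_i x_i / min_j x_j."""
    return np.max(x) / np.min(x)

def best_differentiating(S):
    """Candidate best differentiating solutions: normalized columns of S whose
    Hilbert seminorm attains the maximum over all columns (cf. Lemma 7)."""
    cols = [S[:, j] / np.max(S[:, j]) for j in range(S.shape[1])]
    top = max(hilbert_seminorm(c) for c in cols)
    return [c for c in cols if np.isclose(hilbert_seminorm(c), top)]

def worst_differentiating(S):
    """Worst differentiating solution: inverses of the column maxima of S (cf. Lemma 8)."""
    return 1.0 / np.max(S, axis=0)

# A made-up matrix with unit diagonal standing in for a Kleene star matrix of generators
S = np.array([[1.0, 0.5, 0.8],
              [0.6, 1.0, 0.9],
              [0.3, 0.4, 1.0]])

print(best_differentiating(S))
print(worst_differentiating(S))
```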
6 Solution of Multicriteria Pairwise Comparison Problems
Consider the multiobjective optimization problem (5), which arises in the solution of the constrained multicriteria pairwise comparison problem using log-Chebyshev approximation. After rewriting the objective functions and inequality constraints in the max-algebra setting, the problem becomes
s.t.
By using vector and matrix notation, we finally formulate the problem as follows. Given $(n\times n)$-matrices $\mathbf{A}_{1},\dots,\mathbf{A}_{m}$ of pairwise comparisons of alternatives for criteria $1,\dots,m$, and a nonnegative $(n\times n)$-matrix $\mathbf{C}$ of constraints, find positive $n$-vectors $\mathbf{x}$ of ratings that solve the multiobjective problem
$$\min_{\mathbf{x}>\mathbf{0}}\ \left(\mathbf{x}^{-}\mathbf{A}_{1}\mathbf{x},\ \dots,\ \mathbf{x}^{-}\mathbf{A}_{m}\mathbf{x}\right),\qquad\text{s.t.}\quad\mathbf{C}\mathbf{x}\le\mathbf{x}.\qquad(20)$$
In this section, we offer new max-ordering, lexicographic and lexicographic max-ordering optimal solutions to the problem. If the solution obtained is not a unique vector of ratings, it is reduced to two vectors, the best and worst differentiating solutions, which respectively most and least differentiate between the alternatives with the highest and lowest ratings.
6.1 Max-Ordering Solution
We start with the max-ordering solution technique, which leads to the solution of problem (7) over the feasible set given by (6).
In terms of max-algebra, the set defined by (6) is the set of regular vectors that solve the inequality $\mathbf{C}\mathbf{x}\le\mathbf{x}$. The objective function at (7) takes the form
Combining the objective function with the constraint leads to the problem
s.t.
Since the problem takes the form of (13), we apply Theorem 3 to find the minimum value of the objective function, which is given by (14). Next, we obtain the solution set of the problem as the set of vectors
If the columns in the matrix are collinear, then any column after normalization can be taken as the unique solution. Otherwise by Theorem 4, the set coincides with the set of regular solutions to the inequality
It remains to obtain the best and worst differentiating solutions in the set, which are the minimal and maximal solutions of the respective problems
By applying Lemma 7, we find the best differentiating solutions represented in terms of the columns of the matrix as the set of vectors
If there is a minimal solution in the set, it is taken as the unique minimal best differentiating solution; otherwise, the minimal solution cannot be uniquely determined.
By Lemma 8, the maximal worst differentiating solution is obtained as
The next statement combines the results of the derivation of all max-ordering solution vectors for problem (20) with the calculation of the best and worst differentiating vectors among them.
Theorem 9.
Let $\mathbf{A}_{k}$ for all $k=1,\dots,m$ be matrices with nonzero spectral radii and let $\mathbf{C}$ be a matrix such that $\mathrm{Tr}(\mathbf{C})\le 1$. Define
Then, with the notation , the following statements hold:
(i) All max-ordering solutions of problem (20) are given by the matrix in the parametric form
(ii) The minimal normalized best differentiating solution is given by
(iii) The maximal normalized worst differentiating solution is
We observe that if the matrix produces a unique solution, then the minimal best and maximal worst differentiating solutions obviously coincide. This allows one not to check whether all columns in are collinear or not. Instead one can directly calculate the best and worst solutions, which provide a common result if the solution is unique.
The above result is also valid for the unconstrained problem, which is obtained by setting $\mathbf{C}=\mathbf{0}$. In this case, the minimum value of the objective function reduces to the spectral radius of the matrix $\mathbf{A}_{1}\oplus\cdots\oplus\mathbf{A}_{m}$.
Finally, note that with additional computational effort required to find the matrix , the computational complexity of the solution is .
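As a compact illustration of the max-ordering scheme in the unconstrained case ($\mathbf{C}=\mathbf{0}$), the sketch below merges made-up comparison matrices by entrywise maximum (which scalarizes the objectives $\mathbf{x}^{-}\mathbf{A}_{k}\mathbf{x}$), takes the spectral radius as the minimum value in line with the remark above, and generates the solution set from the Kleene star; helper names and data are ours.

```python
import numpy as np

def mt_mul(A, B):
    """Max-times matrix product."""
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def powers(A):
    """Max-times powers A, A^2, ..., A^n."""
    P, out = A.copy(), [A.copy()]
    for _ in range(A.shape[0] - 1):
        P = mt_mul(P, A)
        out.append(P)
    return out

def spectral_radius(A):
    """Spectral radius: maximum of tr(A^k)^(1/k) over k = 1, ..., n."""
    return max(np.max(np.diag(P)) ** (1.0 / (k + 1)) for k, P in enumerate(powers(A)))

def kleene_star(A):
    """Kleene star A* = I (+) A (+) ... (+) A^(n-1), assuming Tr(A) <= 1."""
    S = np.eye(A.shape[0])
    for P in powers(A)[:-1]:
        S = np.maximum(S, P)
    return S

# Two hypothetical reciprocal comparison matrices for the same three alternatives
A1 = np.array([[1.0, 2.0, 4.0], [1/2, 1.0, 3.0], [1/4, 1/3, 1.0]])
A2 = np.array([[1.0, 3.0, 2.0], [1/3, 1.0, 1.0], [1/2, 1.0, 1.0]])

B = np.maximum(A1, A2)       # entrywise maximum scalarizes the objectives x^- A_k x
lam = spectral_radius(B)     # minimum of the scalarized objective in the unconstrained case
S = kleene_star(B / lam)     # columns of S generate the max-ordering solutions x = S u

# Best differentiating candidate: normalized column with the largest Hilbert seminorm;
# worst differentiating solution: inverses of the column maxima of S
best = max((S[:, j] / S[:, j].max() for j in range(S.shape[1])),
           key=lambda c: c.max() / c.min())
worst = 1.0 / S.max(axis=0)
print(lam, best, worst)
```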
6.2 Lexicographic Ordering Solution
Consider the implementation of the lexicographic ordering technique, which involves a series of problems (8). We solve problem (20) in no more than $m$ steps, each consisting of the minimization of a scalar objective function over a feasible set given by the solutions of the previous step.
At the step , we use the symbol and formulate the problem
s.t.
To solve the problem, we apply Theorem 3 to calculate the minimum
Then, we find the solution set, which is given in parametric form by
or according to Theorem 4, as the set of solutions of the inequality
We take the last inequality as a constraint that determines the set and formulate the problem of step as follows:
s.t.
The minimum in the problem is given by
and the set of solutions is defined as
We repeat the procedure for each step . Upon completion of step , we arrive at the lexicographic solution given by
If the solution obtained is not unique (up to a positive factor), we calculate the best and worst differentiating solution vectors.
We summarize the above computational scheme in the following form.
Theorem 10.
Let $\mathbf{A}_{k}$ for all $k=1,\dots,m$ be matrices with nonzero spectral radii and let $\mathbf{C}$ be a matrix such that $\mathrm{Tr}(\mathbf{C})\le 1$. Denote and define the recurrence relations
Then, with the notation , the following statements hold:
(i) All lexicographic ordering solutions of problem (20) are given by the matrix in the parametric form
(ii) The minimal normalized best differentiating solution is given by
(iii) The maximal normalized worst differentiating solution is
Suppose that at some step all columns in the matrix obtained are collinear, and thus the matrix generates a unique solution vector. In this case, further steps cannot change the obtained solution and thus can be skipped, which stops the procedure. Note that if a matrix generates a unique solution, then both the minimal best and maximal worst normalized vectors obtained from it are the same. As a result, we can calculate these vectors and stop the procedure if they coincide, or continue otherwise.
The complexity of the solution can be estimated not greater than .
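A sketch of the lexicographic ordering loop of this subsection under the same assumptions as the earlier sketches: at each step the minimum of the current scalar objective is found numerically by bisection, and the feasible set is tightened by replacing the constraint matrix with $\theta^{-1}\mathbf{A}\oplus\mathbf{C}$ as prescribed by Theorem 4; the criteria matrices and their ranking are hypothetical.

```python
import numpy as np

def mt_mul(A, B):
    """Max-times matrix product."""
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

def Tr(A):
    """Trace function Tr(A) = tr A (+) ... (+) tr A^n."""
    P, t = A.copy(), np.max(np.diag(A))
    for _ in range(A.shape[0] - 1):
        P = mt_mul(P, A)
        t = max(t, np.max(np.diag(P)))
    return t

def kleene_star(A):
    """Kleene star A* = I (+) A (+) ... (+) A^(n-1), assuming Tr(A) <= 1."""
    S, P = np.eye(A.shape[0]), np.eye(A.shape[0])
    for _ in range(A.shape[0] - 1):
        P = mt_mul(P, A)
        S = np.maximum(S, P)
    return S

def constrained_min(A, C, iters=80):
    """theta = min of x^- A x subject to C x <= x, found by bisection on Tr(t^{-1} A (+) C) <= 1."""
    x0 = np.max(kleene_star(C), axis=1)                  # a feasible point
    lo, hi = 1e-12, np.max(A * np.outer(1.0 / x0, x0))   # objective value at x0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if Tr(np.maximum(A / mid, C)) <= 1.0 else (mid, hi)
    return hi

def lexicographic_ordering(As, C):
    """Process the criteria in rank order; after each step the feasible set is
    tightened to the solution set of that step by the update C <- theta^{-1} A (+) C
    (the inequality form given by Theorem 4)."""
    for A in As:                        # As is assumed sorted from highest to lowest rank
        theta = constrained_min(A, C)
        C = np.maximum(A / theta, C)    # constraint matrix describing the current solution set
    return kleene_star(C)               # columns generate the lexicographic ordering solutions

# Hypothetical data: two ranked criteria and no prior constraints (zero constraint matrix)
A1 = np.array([[1.0, 2.0, 4.0], [1/2, 1.0, 3.0], [1/4, 1/3, 1.0]])
A2 = np.array([[1.0, 3.0, 2.0], [1/3, 1.0, 1.0], [1/2, 1.0, 1.0]])
print(lexicographic_ordering([A1, A2], np.zeros((3, 3))))
```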
6.3 Lexicographic Max-Ordering Solution
We now describe a solution technique based on the lexicographic max-ordering optimality principle. Similarly to the lexicographic ordering solution, we handle problem (20) by solving a series of problems, where each problem has a scalar objective function and an inequality constraint provided by the solution of the previous problem. According to the computational scheme given by formulas (6), (9) and (10), the solution involves no more than $m$ steps, which are described in the framework of max-algebra as follows.
We set and define , .
The step starts with calculating the matrix
The purpose of this step is to solve the problem
s.t.
We apply Theorem 3 to find the minimum of the objective function
and then obtain all solutions in the parametric form
By Theorem 4 these solutions can be defined by the inequality to provide the new feasible set
To prepare the next step, we find the minimums
At the step , we form the matrix
and then solve the problem
s.t.
The minimum in the problem is given by
and the set of solutions is
To complete this step, we calculate the values
and then define
We repeat the procedure for all remaining steps . The procedure stops at step if the set consists of a single solution, or .
The above solution scheme can be summarized as follows.
Theorem 11.
Let $\mathbf{A}_{k}$ for all $k=1,\dots,m$ be matrices with nonzero spectral radii and let $\mathbf{C}$ be a matrix such that $\mathrm{Tr}(\mathbf{C})\le 1$. Denote and , and define the recurrence relations
Then, with the notation , the following statements hold:
(i) All lexicographic max-ordering solutions of problem (20) are given by the matrix in the parametric form
(ii) The minimal normalized best differentiating solution is given by
(iii) The maximal normalized worst differentiating solution is
Considering evaluation of the minimums at each step , we see that the computational complexity of the solution is not more than .
7 Illustrative Examples
In this section, we present numerical examples intended to illustrate the solution technique developed. We consider a constrained multicriteria problem of pairwise comparisons, which is formulated as a multiobjective optimization problem of constrained log-Chebyshev matrix approximation, and then solved in the framework of tropical optimization.
Suppose that there are four alternatives that are compared in pairs according to several criteria. The results of comparisons are given by the following pairwise comparison matrices:
The problem is to evaluate the vector of individual ratings of alternatives subject to the constraint , which specifies that the rating of alternative cannot be less than the rating of alternative .
In terms of max-algebra, the problem takes the form of (20) where the constraint matrix is given by
Note that a straightforward analysis of the pairwise comparison matrices shows that according to their entries, alternatives and should respectively receive the first and second highest ratings. Alternatives and have lower ratings and must satisfy the constraint specified in the problem.
Below we describe solutions obtained according to the max-ordering, lexicographic ordering and lexicographic max-ordering principles of optimality.
7.1 Max-Ordering Solution
To obtain the max-ordering solution, we apply Theorem 9. Under the same notation as in this theorem, we calculate
Furthermore, we form the matrices
Evaluation of the minimal normalized best differentiating and maximal normalized worst differentiating solutions from the matrix gives
We can combine both solutions in a vector where some entries are given in interval form as
7.2 Lexicographic Ordering Solution
According to Theorem 10, the lexicographic ordering solution consists of several steps and proceeds as follows. First, we calculate
and then obtain the matrices
To check whether the matrix generates a nonunique solution, we find the corresponding best and worst differentiating solution vectors
Since these vectors are different, we continue to the next step to evaluate
Further calculations lead to the matrices
The best and worst differentiating solution vectors given by are
which shows that these vectors do not coincide.
Because the solution is not unique, we perform the next step and calculate
Furthermore, we use the above result to obtain the matrices
After evaluation of the best and worst solutions from , we have
Since both solutions coincide, we complete the procedure. As the unique final solution, we take the vector
7.3 Lexicographic Max-Ordering Solution
At the first step of this solution, we set and then obtain
Furthermore, as in the max-ordering solution, we calculate the matrices
and then obtain the best and worst differentiating vectors
We observe that these vectors do not coincide and further calculate
Since , we have the index set .
We turn to the next step and use the set to calculate
To describe the solution set for this step, we form the matrices
and then find the vectors
Next, we calculate the minimums
which leads to the index set .
According to the set , we define
Furthermore, we calculate the matrices
The best and worst differentiating vectors are given by
Since at this step we have , the procedure terminates. We can couple both obtained vectors into one vector with interval entries as
To conclude, we observe that all solutions result in the same preference order of alternatives in the form , whereas the individual ratings of the alternatives may differ. We note that this order agrees with the above conclusions based on a straightforward analysis of the pairwise comparison matrices, and that it satisfies the constraint imposed on the ratings.
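As a small illustration of how two representative vectors are coupled into interval-valued ratings and how a preference order is read off from a rating vector, consider the following sketch; the vectors used are hypothetical and are not those computed in the examples above.

```python
import numpy as np

# Hypothetical best and worst differentiating vectors, normalized to maximum entry 1
# (illustrative values only, not those computed in the examples above).
x_best = np.array([1.0, 0.5, 0.2, 0.4])
x_worst = np.array([1.0, 0.5, 0.3, 0.5])

# Couple the two vectors componentwise into interval-valued ratings.
intervals = list(zip(np.minimum(x_best, x_worst), np.maximum(x_best, x_worst)))

# Preference order induced by the ratings (indices from highest to lowest rating).
order = np.argsort(-x_best)
print(intervals, order)
```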
8 Solution to Vacation Plan Problem
As another example, which offers an opportunity to examine the proposed approach alongside existing methods, we solve the problem of selecting a vacation plan from [47]. The aim is to choose a destination for a vacation trip from Philadelphia among the alternatives : short trips (i.e., New York, Washington, Atlantic City, New Hope, etc.), : Quebec, : Denver, : California. These places are evaluated in terms of the following criteria: (1) cost of the trip from Philadelphia, (2) sight-seeing opportunities, (3) entertainment (doing things), (4) way of travel, (5) eating places.
The results of pairwise comparison of criteria are given by the matrix
The pairwise comparison matrices of places with respect to the criteria are as follows:
To solve the problem, the analytical hierarchy process method is applied in [47], which provides the following order of alternatives: . Another solution based on the log-Chebyshev approximation in [34] yields a different order . Below, we show how the new approach can be used to obtain solutions according to various principles of optimality.
8.1 Max-Ordering Solution
We start with the max-ordering solution given by Theorem 9. Observing that , we calculate
Next, we form the matrices
The minimal normalized best differentiating and maximal normalized worst differentiating solutions obtained from take the form
and specify the respective orders and .
Both solutions can be written as the vector
The obtained solutions result in the order , where the alternatives and cannot be uniquely ordered.
8.2 Lexicographic Ordering Solution
In order to obtain a lexicographic ordering solution, we first need to rank the criteria from the pairwise comparison matrix . By comparing the rows of the matrix , one can expect that the most important is criterion and the second is . Next come criteria and , and finally as the least important criterion. These considerations are confirmed by the result in [47], where a priority vector is calculated from by using the principal eigenvector method, which puts the criteria into the order .
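For completeness, we sketch how a priority vector of this kind can be computed numerically by the principal eigenvector method. The comparison matrix in the code is a hypothetical reciprocal matrix introduced only for illustration; the entries of the matrix from [47] are not reproduced here.

```python
import numpy as np

def principal_eigenvector(C):
    """Priority vector of a positive pairwise comparison matrix computed as its
    Perron (principal) eigenvector, normalized so that the entries sum to one."""
    eigvals, eigvecs = np.linalg.eig(C)
    v = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    v = np.abs(v)
    return v / v.sum()

# A hypothetical 5x5 reciprocal matrix of criteria comparisons (not the matrix from [47]).
C = np.array([[1.0, 3.0, 5.0, 7.0, 9.0],
              [1/3, 1.0, 3.0, 5.0, 7.0],
              [1/5, 1/3, 1.0, 3.0, 5.0],
              [1/7, 1/5, 1/3, 1.0, 3.0],
              [1/9, 1/7, 1/5, 1/3, 1.0]])
weights = principal_eigenvector(C)
ranking = np.argsort(-weights)   # criteria indices ordered from most to least important
```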
To simplify the application of the lexicographic ordering based solutions in what follows, we renumber the criteria according to their importance: the most important criterion receives the number one, the second most important receives the number two, and so on. We use the new numbers of the criteria to rename the pairwise comparison matrices for the alternatives as follows:
Calculation of the best and worst differentiating solutions from yields
Observing that the obtained vectors do not coincide, we further evaluate
and find the matrices
Further calculation shows that the best and worst differentiating solutions given by coincide and thus provide a unique solution to the problem as
The obtained solution gives the order .
8.3 Lexicographic Max-Ordering Solution
We now apply Theorem 11 to obtain the lexicographic max-ordering solution. We set and calculate
Furthermore, we obtain the matrices
The best and worst differentiating solutions are given by the vectors
Considering that these vectors do not coincide, we further calculate
We observe that and take the set to calculate
Further calculations yield the matrices
Evaluation of the best and worst differentiating solutions yields the vector
which places the alternatives in the order .
A comparison of the results shows that the solutions obtained according to the max-ordering and lexicographic max-ordering principles of optimality, as well as the solution obtained by the analytical hierarchy process method in [47], rank the alternative higher than the others. The different result provided by the lexicographic ordering is explained by the fact that this technique can overestimate the contribution of the pairwise comparisons made according to the most important criterion, and thereby give little or no consideration to the other comparisons. Finally, we note that the order obtained by the analytical hierarchy process method and the order given by the best differentiating max-ordering solution completely coincide.
9 Conclusion
In this paper, we have considered a decision-making problem of rating alternatives based on pairwise comparisons under multiple criteria, subject to constraints imposed on the ratings. The problem was reduced to the constrained log-Chebyshev approximation of pairwise comparison matrices by a common consistent matrix that directly determines the vector of ratings. We have represented the approximation problem as a constrained multiobjective optimization problem in terms of tropical algebra, where addition is defined as maximum. We have applied methods and results of tropical optimization to solve the multiobjective problem according to the max-ordering, lexicographic ordering and lexicographic max-ordering principles of optimality. The proposed technique was illustrated with numerical solutions of pairwise comparison problems, including the problem of selecting a vacation plan from [47].
The obtained results show that the application of the log-Chebyshev approximation in combination with tropical algebra allows one to obtain analytical solutions ready for both formal analysis and straightforward computation. Although the new technique for solving pairwise comparison problems generally involves more computational effort than the existing principal eigenvector and geometric mean methods, its computational complexity remains moderate and does not exceed , where is the number of criteria and is the number of alternatives. At the same time, it offers an advantage over the above two methods, which cannot handle constraints in a direct and efficient way and thus do not provide an easy implementation of the optimality principles that involve solving constrained optimization problems.
The application of the log-Chebyshev approximation can yield multiple solutions to the pairwise comparison problem. Although multiple solutions seem to correspond well to the approximate nature of the pairwise comparison model, which in most cases is only poorly consistent with the data, the existence of many optimal solutions can make it difficult to select an appropriate decision. To overcome this possible drawback, we have developed a new technique that reduces the multiple solutions to “best” and “worst” solutions that serve as reasonable representatives of the entire solution set and hence simplify the identification and selection of the right decision.
The technique examines the solution vectors of ratings, which are normalized so that their maximum component is equal to one. As the best solution, a minimal vector is taken that maximizes the ratio between the maximum and minimum components (the highest and lowest ratings). The worst solution is the maximal vector that minimizes this ratio. While the worst solution is always unique, there may be more than one best solution, which can be considered a limitation of the technique. Note, however, that multiple best solutions (the number of which is bounded as well) should be interpreted as the result of unstable pairwise comparison data, and hence indicate unreliable input information that is difficult to interpret unambiguously.
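To make the selection rule concrete, the following sketch applies it to a finite sample of candidate normalized vectors. In the paper, the best and worst differentiating solutions are obtained analytically over the entire parametric solution set, so the code below only illustrates the criterion itself on hypothetical data.

```python
import numpy as np

def contrast_ratio(x):
    """Ratio between the highest and lowest ratings of a normalized vector."""
    return np.max(x) / np.min(x)

def best_and_worst(candidates):
    """From a finite sample of normalized rating vectors, pick one with the largest
    max/min ratio (best differentiating) and one with the smallest (worst
    differentiating). The paper derives these vectors analytically over the whole
    parametric solution set; this routine only illustrates the selection rule."""
    ratios = [contrast_ratio(x) for x in candidates]
    return candidates[int(np.argmax(ratios))], candidates[int(np.argmin(ratios))]

# Hypothetical candidate solutions, each normalized to have maximum entry 1.
candidates = [np.array([1.0, 0.5, 0.25, 0.5]),
              np.array([1.0, 0.5, 0.5, 0.5]),
              np.array([1.0, 1.0, 0.5, 0.5])]
best, worst = best_and_worst(candidates)
```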
The main contribution of the study is twofold. First, we have formulated and investigated new constrained multicriteria decision problems that model complex decision-making processes but have received no attention in the literature, and we have proposed an approach based on the log-Chebyshev approximation which, in contrast to other methods in decision making, makes it possible to obtain analytical solutions to the problems of interest. Second, the new formal methods and real-world applications developed here, which extend the mathematical instruments and broaden the application scope of tropical optimization, constitute another significant outcome.
A practical implication of the study is that it suggests new, readily applicable solution techniques that supplement and complement existing methods in situations where those methods are known to produce different or even opposite results. In the case of conflicting results, the new technique can provide additional arguments in favor of or against the solutions offered by other methods, and thereby help make more accurate decisions. Another practical consequence is the ability to solve real-world multicriteria decision problems with constraints, where the weights of the criteria are not specified or the criteria are differentiated only by rank.
Possible lines of further investigation include extension of the results to solve multicriteria problems under different types of constraints and principles of optimality. The development of interval and fuzzy implementations of the proposed solutions is another area of interest for future research.
Acknowledgments
The author is very grateful to two anonymous reviewers for their valuable comments and suggestions, which have been incorporated in the revised manuscript.
References
- [1] J. Barzilai, W. Cook, and B. Golany. Consistent weights for judgements matrices of the relative importance of alternatives. Oper. Res. Lett., 6(3):131–134, 1987. doi:10.1016/0167-6377(87)90026-5.
- [2] R. E. Bellman and L. A. Zadeh. Decision-making in a fuzzy environment. Management Sci., 17(4):B141–B164, 1970. doi:10.1287/mnsc.17.4.B141.
- [3] V. Belton and T. Gear. On a short-coming of Saaty’s method of analytic hierarchies. Omega, 11(3):228–230, 1983. doi:10.1016/0305-0483(83)90047-6.
- [4] H. P. Benson. Multi-objective optimization: Pareto optimal solutions, properties. In C. A. Floudas and P. M. Pardalos, editors, Encyclopedia of Optimization, pages 2478–2481. Springer, 2 edition, 2009. doi:10.1007/978-0-387-74759-0_426.
- [5] J. J. Buckley. Fuzzy hierarchical analysis. Fuzzy Sets and Systems, 17(3):233–247, 1985. doi:10.1016/0165-0114(85)90090-9.
- [6] J. J. Buckley. Multiobjective possibilistic linear programming. Fuzzy Sets and Systems, 35(1):23–28, 1990. doi:10.1016/0165-0114(90)90015-X.
- [7] C. Carlsson and R. Fullér. Possibility for Decision, volume 270 of Studies in Fuzziness and Soft Computing. Springer, Berlin, 2011. doi:10.1007/978-3-642-22642-7.
- [8] E. U. Choo and W. C. Wedley. A common framework for deriving preference values from pairwise comparison matrices. Comput. Oper. Res., 31(6):893–908, 2004. doi:10.1016/S0305-0548(03)00042-X.
- [9] G. Crawford and C. Williams. A note on the analysis of subjective judgment matrices. J. Math. Psych., 29(4):387–405, 1985. doi:10.1016/0022-2496(85)90002-1.
- [10] D. Dubois and P. Fortemps. Computing improved optimal solutions to max-min flexible constraint satisfaction problems. European J. Oper. Res., 118(1):95–126, 1999. doi:10.1016/S0377-2217(98)00307-5.
- [11] D. J. Dubois and H. M. Prade. Fuzzy Sets and Systems, volume 144 of Mathematics in Science and Engineering. Academic Press, San Diego, 1980.
- [12] M. Ehrgott. Multicriteria Optimization. Springer, Berlin, 2 edition, 2005. doi:10.1007/3-540-27659-9.
- [13] W. F. A. El-Wahed. Intelligent fuzzy multi-criteria decision making: Review and analysis. In C. Kahraman, editor, Fuzzy Multi-Criteria Decision Making: Theory and Applications with Recent Developments, pages 19–50. Springer, Boston, MA, 2008. doi:10.1007/978-0-387-76813-7_2.
- [14] L. Elsner and P. van den Driessche. Max-algebra and pairwise comparison matrices. Linear Algebra Appl., 385(1):47–62, 2004. doi:10.1016/S0024-3795(03)00476-2.
- [15] L. Elsner and P. van den Driessche. Max-algebra and pairwise comparison matrices, II. Linear Algebra Appl., 432(4):927–935, 2010. doi:10.1016/j.laa.2009.10.005.
- [16] M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, and K. Zimmermann. Linear Optimization Problems with Inexact Data. Springer, Boston, MA, 2006. doi:10.1007/0-387-32698-7.
- [17] M. Gavalec, Z. Němcová, and S. Sergeev. Tropical linear algebra with the Łukasiewicz T-norm. Fuzzy Sets and Systems, 276:131–148, 2015. doi:10.1016/j.fss.2014.11.008.
- [18] M. Gavalec, J. Ramík, and K. Zimmermann. Decision Making and Optimization, volume 677 of Lecture Notes in Economics and Mathematical Systems. Springer, Cham, 2015. doi:10.1007/978-3-319-08323-0.
- [19] J. S. Golan. Semirings and Affine Equations Over Them, volume 556 of Mathematics and Its Applications. Kluwer Acad. Publ., Dordrecht, 2003. doi:10.1007/978-94-017-0383-3.
- [20] M. Gondran and M. Minoux. Graphs, Dioids and Semirings, volume 41 of Operations Research/ Computer Science Interfaces. Springer, New York, 2008. doi:10.1007/978-0-387-75450-5.
- [21] H. Goto and S. Wang. Polyad inconsistency measure for pairwise comparisons matrices: Max-plus algebraic approach. Oper. Res. Int. J., 22(1):401–422, 2022. doi:10.1007/s12351-020-00547-9.
- [22] B. B. Gursoy, O. Mason, and S. Sergeev. The analytic hierarchy process, max algebra and multi-objective optimisation. Linear Algebra Appl., 438(7):2911–2928, 2013. doi:10.1016/j.laa.2012.11.020.
- [23] B. Heidergott, G. J. Olsder, and J. van der Woude. Max Plus at Work. Princeton Series in Applied Mathematics. Princeton Univ. Press, Princeton, NJ, 2006.
- [24] A. Ishizaka and M. Lusti. How to derive priorities in AHP: A comparative study. Cent. Eur. J. Oper. Res., 14(4):387–400, 2006. doi:10.1007/s10100-006-0012-9.
- [25] J. Krejčí. Pairwise Comparison Matrices and their Fuzzy Extension. Studies in Fuzziness and Soft Computing. Springer, Cham, 2018. doi:10.1007/978-3-319-77715-3.
- [26] N. Krivulin. Complete solution of a constrained tropical optimization problem with application to location analysis. In P. Höfner, P. Jipsen, W. Kahl, and M. E. Müller, editors, Relational and Algebraic Methods in Computer Science, volume 8428 of Lecture Notes in Comput. Sci., pages 362–378. Springer, Cham, 2014. arXiv:1311.2795, doi:10.1007/978-3-319-06251-8_22.
- [27] N. Krivulin. Extremal properties of tropical eigenvalues and solutions to tropical optimization problems. Linear Algebra Appl., 468:211–232, 2015. arXiv:1311.0442, doi:10.1016/j.laa.2014.06.044.
- [28] N. Krivulin. A multidimensional tropical optimization problem with a non-linear objective function and linear constraints. Optimization, 64(5):1107–1129, 2015. arXiv:1303.0542, doi:10.1080/02331934.2013.840624.
- [29] N. Krivulin. Rating alternatives from pairwise comparisons by solving tropical optimization problems. In Z. Tang, J. Du, S. Yin, L. He, and R. Li, editors, 2015 12th Intern. Conf. on Fuzzy Systems and Knowledge Discovery (FSKD), pages 162–167. IEEE, 2015. arXiv:1504.00800, doi:10.1109/FSKD.2015.7381933.
- [30] N. Krivulin. Using tropical optimization techniques to evaluate alternatives via pairwise comparisons. In A. H. Gebremedhin, E. G. Boman, and B. Ucar, editors, 2016 Proc. 7th SIAM Workshop on Combinatorial Scientific Computing, pages 62–72. SIAM, Philadelphia, PA, 2016. arXiv:1503.04003, doi:10.1137/1.9781611974690.ch7.
- [31] N. Krivulin. Direct solution to constrained tropical optimization problems with application to project scheduling. Comput. Manag. Sci., 14(1):91–113, Jan 2017. arXiv:1501.07591, doi:10.1007/s10287-016-0259-0.
- [32] N. Krivulin. Tropical optimization problems in time-constrained project scheduling. Optimization, 66(2):205–224, 2017. arXiv:1502.06222, doi:10.1080/02331934.2016.1264946.
- [33] N. Krivulin. Algebraic solution to constrained bi-criteria decision problem of rating alternatives through pairwise comparisons. Mathematics, 9(4):303, 2021. arXiv:1911.09700, doi:10.3390/math9040303.
- [34] N. Krivulin and S. Sergeev. Tropical implementation of the Analytical Hierarchy Process decision method. Fuzzy Sets and Systems, 377:31–51, 2019. arXiv:1802.01989, doi:10.1016/j.fss.2018.10.013.
- [35] N. K. Krivulin and S. A. Gubanov. Algebraic solution of a problem of optimal project scheduling in project management. Vestnik St. Petersburg Univ. Math., 54(1):58–68, 2021. doi:10.1134/S1063454121010088.
- [36] P. Li and S.-C. Fang. Chebyshev approximation of inconsistent fuzzy relational equations with max-T composition. In W. A. Lodwick and J. Kacprzyk, editors, Fuzzy Optimization, volume 254 of Studies in Fuzziness and Soft Computing, pages 109–124. Springer, Berlin, 2010. doi:10.1007/978-3-642-13935-2_5.
- [37] W. Liu and M. Liang. Multi-objective design optimization of reconfigurable machine tools: A modified fuzzy-Chebyshev programming approach. Int. J. Prod. Res., 46(6):1587–1618, 2008. doi:10.1080/00207540600943944.
- [38] D. T. Luc. Pareto optimality. In A. Chinchuluun, P. M. Pardalos, A. Migdalas, and L. Pitsoulis, editors, Pareto Optimality, Game Theory and Equilibria, pages 481–515. Springer, New York, 2008. doi:10.1007/978-0-387-77247-9_18.
- [39] M. Luhandjula. Fuzzy optimization: An appraisal. Fuzzy Sets and Systems, 30(3):257–282, 1989. doi:10.1016/0165-0114(89)90019-5.
- [40] D. Maclagan and B. Sturmfels. Introduction to Tropical Geometry, volume 161 of Graduate Studies in Mathematics. AMS, Providence, RI, 2015. doi:10.1090/gsm/161.
- [41] J. Mazurek, R. Perzina, J. Ramík, and D. Bartl. A numerical comparison of the sensitivity of the geometric mean method, eigenvalue method, and best–worst method. Mathematics, 9(5), 2021. doi:10.3390/math9050554.
- [42] H. Nakayama, Y. Yun, and M. Yoon. Sequential Approximate Multiobjective Optimization Using Computational Intelligence. Vector Optimization. Springer, Berlin, 2009. doi:10.1007/978-3-540-88910-6.
- [43] R. Narasimhan. A geometric averaging procedure for constructing supertransitive approximation to binary comparison matrices. Fuzzy Sets and Systems, 8(1):53–61, 1982. doi:10.1016/0165-0114(82)90029-X.
- [44] A. D. Nola and C. Russo. The semiring-theoretic approach to MV-algebras: A survey. Fuzzy Sets and Systems, 281:134–154, 2015. Special Issue Celebrating the 50th Anniversary of Fuzzy Sets. doi:10.1016/j.fss.2015.08.026.
- [45] R. Ramesh and S. Zionts. Multiple criteria decision making. In S. I. Gass and M. C. Fu, editors, Encyclopedia of Operations Research and Management Science, pages 1007–1013. Springer, Boston, MA, 2013. doi:10.1007/978-1-4419-1153-7_653.
- [46] J. Ramík. Pairwise Comparisons Method, volume 690 of Lecture Notes in Economics and Mathematical Systems. Springer, Cham, 2020. doi:10.1007/978-3-030-39891-0.
- [47] T. L. Saaty. A scaling method for priorities in hierarchical structures. J. Math. Psych., 15(3):234–281, 1977. doi:10.1016/0022-2496(77)90033-5.
- [48] T. L. Saaty. The Analytic Hierarchy Process. RWS Publications, Pittsburgh, PA, 2 edition, 1990.
- [49] T. L. Saaty. On the measurement of intangibles: A principal eigenvector approach to relative measurement derived from paired comparisons. Notices Amer. Math. Soc., 60(2):192–208, 2013. doi:10.1090/noti944.
- [50] T. L. Saaty and L. G. Vargas. Comparison of eigenvalue, logarithmic least squares and least squares methods in estimating ratios. Math. Modelling, 5(5):309–324, 1984. doi:10.1016/0270-0255(84)90008-3.
- [51] T. L. Saaty and L. G. Vargas. Uncertainty and rank order in the analytic hierarchy process. European J. Oper. Res., 32(1):107–117, 1987. doi:10.1016/0377-2217(87)90275-X.
- [52] A. A. Salo and R. P. Hämäläinen. Preference programming through approximate ratio comparisons. European J. Oper. Res., 82(3):458–475, 1995. doi:10.1016/0377-2217(93)E0224-L.
- [53] Y. Shitov. Factoring a band matrix over a semiring. Fuzzy Sets and Systems, 382:165–171, 2020. doi:10.1016/j.fss.2019.02.004.
- [54] L. L. Thurstone. A law of comparative judgment. Psychol. Review, 34(4):273–286, 1927. doi:10.1037/h0070288.
- [55] N. M. Tran. Pairwise ranking: Choice of method can produce arbitrarily different rank order. Linear Algebra Appl., 438(3):1012–1024, 2013. doi:10.1016/j.laa.2012.08.028.
- [56] F. J. Valverde-Albacete and C. Peláez-Moreno. The spectra of irreducible matrices over completed idempotent semifields. Fuzzy Sets and Systems, 271:46–69, 2015. doi:10.1016/j.fss.2014.09.022.
- [57] P. J. M. van Laarhoven and W. Pedrycz. A fuzzy extension of Saaty’s priority theory. Fuzzy Sets and Systems, 11(1):229–241, 1983. doi:10.1016/S0165-0114(83)80082-7.