A Linear Input Dependence Model for Interdependent Networks
Abstract
We consider a linear relaxation of a generalized minimum-cost network flow problem with binary input dependencies. In this model the flows through certain arcs are bounded by linear (or more generally, piecewise linear concave) functions of the flows through other arcs. This formulation can be used to model interrelated systems in which the components of one system require the delivery of material from another system in order to function (for example, components of a subway system may require delivery of electrical power from a separate system). We propose and study randomized rounding schemes that use this model to approximate solutions to a related mixed integer linear program modeling binary input dependencies. The introduction of side constraints prevents this problem from being solved with the well-known network simplex algorithm; however, by characterizing its basis structure we develop a generalization of the network simplex algorithm that can be used for its computationally efficient solution.
1 Introduction
As society continues its trend towards urbanization it also becomes more reliant on highly interconnected civil infrastructure systems, which include services such as transportation, electrical power, telecommunications, water, and waste management. Interconnections arise when the functionality of one system relies on the delivery of resources from another, for example the reliance of telephone lines and subway systems on the delivery of power from the electrical grid in order to function. These interdependencies make the analysis of such systems much more complicated, and cause them to become more vulnerable to both attack and natural disaster due to the possibility of cascading failures.
A cascading failure occurs when the failure of one component of a system causes another component which depends on the functionality of the first component (possibly in a different system) to also fail. This may in turn cause a third component (depending on the second component) to fail, and then a fourth component, and so on, resulting in a chain reaction that causes damage far beyond the initial failures. This phenomenon has been the subject of a great deal of recent research incorporating a wide variety of modeling techniques [22]. For example, Kinney et al. 2005 [14] studied the North American power grid by examining the effects of removing random substations, which may result in a chain reaction of substation overloads that damages a large portion of the grid. Ouyang and Dueñas-Osorio 2011 [24] examined different strategies for designing the interfaces between different infrastructure systems in an effort to maximize resistance to random natural disaster scenarios, Ouyang and Wang 2015 [26] examined different interdependent network restoration strategies, and Ouyang 2017 [23] examined worst-case scenarios as the results of directed attacks. Zhong et al. 2019 [33] studied the repair process of various interdependent networks with load-dependent cascading failures. It should also be mentioned that the interdependent network paradigm has been used to study many other types of systems in recent years, for example in Wang et al. 2014 [32], which used interdependent networks to study evolutionary games for the resolution of social dilemmas.
Many interdependent network studies focus purely on topological aspects of the networks, drawing from random graph theory and percolation theory to examine how damage can propagate through a collection of interconnected networks. Gao et al. 2012 [10] showed that reducing the number of interdependencies between networks could reduce their susceptibility to attack. Parandehgheibi and Modiano 2013 [27], Nguyen et al. 2013 [19], and Sen et al. 2014 [30] all studied the interaction between an electrical power network and a communication network by finding optimal strategies for causing as much damage propagation as possible by removing as few nodes as possible. Havlin et al. 2014 [11] showed results for various attack strategies on various types of random network. Lam and Tai 2018 [16] used fuzzy set theory to model networks with uncertain interdependencies.
Rather than focusing purely on the network topology, many other studies of interdependent infrastructure systems have instead made use of network flows models [25], which have long been in use for other common civil infrastructure problems like transportation design [9]. Under this modeling paradigm each infrastructure system is thought of as transporting a flow of material through a collection of nodes and arcs, with the flows and the network components representing different things in different systems. For example, in part of the transportation network, flow might represent cars, arcs roads, and nodes intersections, while in the water network, flow might represent water, arcs pipelines, and nodes water treatment facilities and homes. This model allows design and recovery problems to be stated as minimum-cost network flow (MCNF) problems [1, pp. 4–5].
Interdependencies can be introduced into the standard MCNF problem by including side constraints in addition to the usual flow conservation and capacity constraints. Lee et al. 2007 [17] put forward a model for describing these relationships. This paper described a type of interdependence called input dependence, in which a component of one infrastructure system requires delivery of resources from another system in order to function (for example, the relationship between the electrical power network and the subway network described above). Input dependence can be modeled by causing an arc in one network (the child) to become unable to transport flow unless a demand node in another network (the parent) receives its full flow demand. Instead of strictly enforcing that all demand be satisfied as in the standard MCNF, shortfall is allowed but is penalized by adding a penalty cost to the objective and by causing other components of the network to fail. This type of constraint has since been incorporated into other models, notably by Cavdaroglu et al., who used it as part of their disaster recovery scheduling model [8] to study a series of similar problems involving interdependent infrastructure systems [7, 20, 21, 31]. A similar notion of input dependence has been used in other network flows-based models [3, 12].
The modeling technique used by Lee et al. included a binary indicator variable for each parent/child pair with logical constraints to force its value to 0 if the parent’s demand was not fully met and 1 otherwise. This variable was used as a multiplier for the child arc’s capacity constraint, effectively leaving its capacity unchanged if the parent’s demand was satisfied and setting it to 0 otherwise. The use of binary variables can cause the resulting mixed integer linear program (MILP) to become computationally intractable for large civil infrastructure systems.
Motivation and contributions
The purpose of this paper is to examine the linear program (LP) relaxation of the Lee et al. input dependence MILP model obtained by replacing the binary indicator variables with real-valued variables on the interval $[0,1]$. In fact we study a generalization of this relaxation, which we will refer to as the minimum-cost network flow problem with linear interdependencies (MCNFLI), in which the child's capacity may be bounded by any linear function of the parent's inflow. The main contribution of this paper is to formulate this novel model and to study its methodological and algorithmic properties. Most importantly this includes characterizing the basis of the underlying LP, which enables the development of a generalized network simplex algorithm [1, pp. 402–460] that can be used for its efficient solution.
We also explore applications of the MCNFLI model as a modeling and algorithmic tool. This includes its ability to describe relationships outside the scope of the original binary input dependence model and its use as part of an approximation algorithm for the MILP model. This usage is particularly important for larger multi-staged optimization models that include an underlying interdependent flow network, such as the disaster recovery scheduling models of Cavdaroglu et al. cited above. The solution algorithms for such models often involve an iterative approach in which the underlying network flows problem must be solved repeatedly, in which case the computational savings of the LP relaxation over the MILP model may be greatly multiplied. Moreover, pure LPs possess some important mathematical properties that MILPs lack (such as strong duality [5, p. 148]), which can allow the LP relaxation to be used in formulating far more computationally efficient non-iterative approximation algorithms for the overall model. This application of the MCNFLI is explored further in Rumpf 2020 [29], which uses the MCNFLI model to develop several approximation algorithms for solving an interdependent network interdiction game.
Structure of this paper
In Section 2 we describe the formulation of the MCNFLI model, obtained by generalizing the LP relaxation of the binary input dependence model from Lee et al. 2007, along with the motivation underlying its formulation. Section 3 discusses an application wherein the solution of the MCNFLI model is combined with a variety of randomized rounding schemes to build an approximate solution for the computationally intractable MILP model. Having illustrated the usefulness of solutions of the MCNFLI model, the remainder of the paper is dedicated to the development of an efficient solution algorithm for the MCNFLI. In Section 4 we explore the structure of the MCNFLI and derive some important theoretical results, including the main result of this paper: the characterization of its basic feasible solutions. These results are then applied in Section 5 to formulate a generalized network simplex algorithm. Finally we conclude in Section 6 with a summary of results and a discussion of future work.
2 Model Formulation and Motivation
The purpose of this section is to give a general formulation for the MCNFLI model to be used in developing the main theoretical results of this study. The MCNFLI consists of an MCNF with side constraints. Since any collection of interdependent networks can be merged into a single network using artificial arcs with high cost, for the remainder of the paper we consider, without loss of generality, a single flow network $G = (N, A)$ with node set $N$ and arc set $A$. For each node $i \in N$ we have a supply value $b_i$ which is positive for supply nodes, negative for demand nodes, and zero for transshipment nodes. We assume that $\sum_{i \in N} b_i = 0$. For each arc $(i,j) \in A$ we have a constant upper bound $u_{ij}$, a unit flow cost $c_{ij}$, and a nonnegative flow variable $x_{ij}$.
We also have a set $\mathcal{I}$ of interdependencies. Rather than the node-to-arc interdependencies described in Section 1, we will use arc-to-arc interdependencies in which one arc acts as the parent of a different arc. This formulation can be obtained from a node-to-arc interdependence using a transformation similar to that of the minimum-cost formulation of the maximum flow problem [5, pp. 430–433], with a parent arc being saturated corresponding to a parent node's demand being met. Each element of $\mathcal{I}$ is an ordered pair $((i,j),(k,l))$ whose first element $(i,j)$ represents a parent arc and whose second element $(k,l)$ represents the corresponding child arc. Without loss of generality we assume a one-to-one correspondence between parents and children, and that each arc appears in at most one interdependence, since many-to-one interdependencies, one-to-many interdependencies, and mutually interdependent pairs of arcs can all be modeled as arrangements of interdependent arcs in series. For each interdependence $((i,j),(k,l)) \in \mathcal{I}$ we are also given nonnegative constants $\alpha_{kl}$ and $\beta_{kl}$ which define the linear interdependence. The resulting model, which we will refer to as the linear input dependence model (LIDM), is
$$\min \ \sum_{(i,j) \in A} c_{ij} x_{ij} \qquad (1)$$
$$\text{s.t.} \quad \sum_{j : (i,j) \in A} x_{ij} - \sum_{j : (j,i) \in A} x_{ji} = b_i \qquad \forall i \in N \qquad (2)$$
$$0 \le x_{ij} \le u_{ij} \qquad \forall (i,j) \in A \qquad (3)$$
$$x_{kl} \le \alpha_{kl}\, x_{ij} + \beta_{kl} \qquad \forall ((i,j),(k,l)) \in \mathcal{I} \qquad (4)$$
The LIDM represents one particular formulation of the MCNFLI. Note that (1)–(3) are exactly the objective and constraints of the MCNF. The new side constraints (4) describe the interdependencies, bounding the flow through a child arc $(k,l)$ by a linear function of the flow through its corresponding parent arc $(i,j)$.
The MILP from Lee et al. 2007, which we will refer to as the binary input dependence model (BIDM), also takes the form of the MCNF with side constraints. It makes use of a binary linking variable $y_{kl} \in \{0,1\}$ for each interdependence $((i,j),(k,l)) \in \mathcal{I}$. Additional constraints and slack variables force $y_{kl} = 1$ when $x_{ij} = u_{ij}$ and $y_{kl} = 0$ when $x_{ij} < u_{ij}$, allowing $y_{kl}$ to act as an indicator of whether the parent is saturated. The interdependence itself is then enforced by a constraint of the form $x_{kl} \le u_{kl}\, y_{kl}$, forcing the child's flow to be bounded by $u_{kl}$ if the parent is saturated and 0 otherwise.
The LP relaxation of the BIDM allows the linking variables to take values in the interval $[0,1]$ rather than only the set $\{0,1\}$. Doing so allows the linking variables to be eliminated via substitution. The resulting linking constraint takes the form $x_{kl} \le (u_{kl}/u_{ij})\, x_{ij}$, which is the special case of the LIDM for which $\alpha_{kl} = u_{kl}/u_{ij}$ and $\beta_{kl} = 0$ for all $((i,j),(k,l)) \in \mathcal{I}$. For this reason, throughout the remainder of the paper we will refer to the LIDM as the LP relaxation of the BIDM.
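To make the substitution concrete, the following short derivation uses the notation above; the intermediate inequality $y_{kl} \le x_{ij}/u_{ij}$ paraphrases the relaxed indicator-forcing constraints described in the previous paragraph, and should be read as an assumption about their form rather than a quotation of the BIDM:
$$x_{kl} \le u_{kl}\, y_{kl}, \qquad 0 \le y_{kl} \le \frac{x_{ij}}{u_{ij}} \le 1 \quad\Longrightarrow\quad x_{kl} \le u_{kl}\cdot\frac{x_{ij}}{u_{ij}} = \frac{u_{kl}}{u_{ij}}\, x_{ij},$$
which is exactly the linking constraint above with $\alpha_{kl} = u_{kl}/u_{ij}$ and $\beta_{kl} = 0$.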
Linear input dependencies have several modeling applications of their own beyond simply approximating binary input dependencies. While binary input dependence renders a child arc completely unusable if there is any shortfall at its parent arc, linear input dependence still permits partial functionality from partial delivery, which may be more appropriate for modeling certain systems. In addition, by placing a sequence of parents and children in series, linear input dependencies as modeled in the LIDM can be used to generate a piecewise linear concave bound on the child’s capacity as a function of the parent’s saturation, which can model input dependencies in which the delivery of more material has diminishing returns.
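As a small worked illustration of the series construction (with constants chosen purely for this example), suppose the child's flow must pass through two arcs in series, each tied to the same parent flow $x$ through its own linear interdependence. The effective bound on the child's flow is then
$$x_{\text{child}} \;\le\; \min\left\{\,x,\ \tfrac{1}{2}x + 5\,\right\},$$
a minimum of affine functions and hence piecewise linear and concave: the bound equals $x$ for $x \le 10$ and $\tfrac{1}{2}x + 5$ for $x > 10$, so each unit of parent flow beyond 10 buys only half a unit of child capacity, capturing diminishing returns.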
3 Approximate Mixed Integer Solutions
Although, as discussed above, a linear input dependency model like the LIDM is useful in its own right, in this section we will investigate how solutions of the LIDM can be applied to find good approximations to the solutions of the BIDM. As the BIDM is NP-hard, it is not in general computationally tractable to solve large instances of it exactly (particularly if it is being used as a sub-model that must be solved repeatedly within a larger problem); however, its linear relaxation in the form of the LIDM can be solved quickly, and those solutions can then be used to build a solution for the BIDM. In this section we describe a family of randomized rounding algorithms for obtaining a near-optimal, feasible solution to the BIDM based on its linear relaxation. We then conduct computational experiments to demonstrate that these schemes typically generate high-quality approximations within a small number of attempts for a large number of test cases, particularly for networks with relatively few interdependencies.
3.1 Randomized Rounding Algorithms
In this section we describe the randomized rounding approximation algorithms which will be the subject of the remainder of Section 3, and explain why a randomized rounding approach was chosen. In the BIDM the binary linking variable $y_{kl}$ for each interdependent pair either forces the parent arc to be saturated (when $y_{kl} = 1$) or the child arc to be unused (when $y_{kl} = 0$). A common approximation technique for such a program with binary variables would be to solve the linear relaxation (restricting $y_{kl}$ to $[0,1]$ instead of $\{0,1\}$), deterministically round each relaxed binary variable to 1 if it lies above some threshold or to 0 if it lies below, fix the rounded variables, and finally solve the resulting pure LP. Unfortunately there is no deterministic threshold that could be applied in this case that would guarantee the existence of a feasible solution. As an alternative we consider the related idea of randomized rounding algorithms.
Let $x^*$ be the optimal flow for the LP relaxation of the BIDM. For each interdependence $((i,j),(k,l)) \in \mathcal{I}$ we set $y_{kl}$ to 1 with probability $P_{kl}$ and to 0 otherwise, where $P_{kl}$ describes the relative saturation of either the parent or the child. If the resulting binary variables define an infeasible LP then the random binary variables can be re-realized, repeating as necessary until a feasible solution is found. As it would be reasonable to base the value of $P_{kl}$ on the saturation of either the parent or the child, we define three basic types of randomized rounding algorithm: randomized rounding based on child flow (RR-Child), in which $P_{kl} = x^*_{kl}/u_{kl}$; randomized rounding based on parent flow (RR-Parent), in which $P_{kl} = x^*_{ij}/u_{ij}$; and the control test, fair randomized rounding (RR-Fair), in which $P_{kl} = 1/2$.
There is a danger that such a randomized rounding algorithm may not terminate regardless of how many times the random variables are realized. This can occur in certain cases where $P_{kl}$ is allowed to equal exactly 0 or 1, as shown in the examples in Appendix B. To prevent this we define modified versions of the RR-Child and RR-Parent schemes with alternate definitions for $P_{kl}$. For any $\epsilon \in [0, \tfrac{1}{2})$, let RR-Child($\epsilon$) be equivalent to RR-Child described above, but with its value of $P_{kl}$ restricted to the interval $[\epsilon, 1-\epsilon]$ (that is, equal to $\min\{1-\epsilon, \max\{\epsilon, x^*_{kl}/u_{kl}\}\}$) in order to avoid probabilities too close to 0 or 1. For the purposes of our study we explore the special case $\epsilon = 0$, which is equivalent to RR-Child as described above, along with two larger values of $\epsilon$ that restrict $P_{kl}$ to correspondingly narrower intervals. We define RR-Parent($\epsilon$) analogously, with $P_{kl} = x^*_{ij}/u_{ij}$ restricted to the same interval.
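A minimal sketch of the rounding step is given below, assuming the LP-relaxation flows and the arc capacities are stored in dictionaries keyed by arc; all function and variable names are hypothetical and are not taken from the trial code cited later.

```python
import random

def round_linking_variables(interdeps, flow, capacity, scheme="child", eps=0.0):
    """Draw one random 0/1 assignment of the linking variables.

    interdeps -- list of (parent_arc, child_arc) pairs
    flow      -- dict: LP-relaxation flow value of each arc
    capacity  -- dict: capacity of each arc
    scheme    -- "child" (RR-Child), "parent" (RR-Parent), or "fair" (RR-Fair)
    eps       -- clamp each rounding probability to the interval [eps, 1 - eps]
    """
    y = {}
    for parent, child in interdeps:
        if scheme == "child":        # probability = relative saturation of the child
            prob = flow[child] / capacity[child]
        elif scheme == "parent":     # probability = relative saturation of the parent
            prob = flow[parent] / capacity[parent]
        else:                        # fair coin, ignoring the LP solution
            prob = 0.5
        prob = min(max(prob, eps), 1.0 - eps)
        y[(parent, child)] = 1 if random.random() < prob else 0
    return y
```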
Finally we note some simple theoretical bounds. Clearly the LP relaxation of the BIDM provides a lower bound for its optimal value, and any feasible solution to the BIDM obtained from a randomized rounding scheme provides an upper bound. Unfortunately in general these bounds may not be very tight, and there exist example networks for which they are arbitrarily far from the true optimum (see Appendix C). For this reason theoretical bounds are not very useful in evaluating the LP relaxation and randomized rounding approximations, and so in order to learn more about the typical errors in these approximations we turn to empirical results obtained through computational trials.
In order to constitute reasonable approximation methods we should expect that the LP relaxation and the randomized rounding schemes produce typically similar objective values to those of the BIDM solution, and that the randomized rounding schemes typically find a feasible solution in a small number of iterations. Moreover, in order to justify the utility of the LP relaxation in finding a feasible solution to the BIDM we should expect all RR-Child and RR-Parent schemes to typically produce higher-quality solutions than the RR-Fair scheme, as the RR-Child and RR-Parent schemes require an LP relaxation pre-solve while the RR-Fair scheme does not. It is also reasonable to expect that all approximation methods perform better at lower interdependence densities, as fewer interdependencies leads to fewer relaxed constraints. These expectations were substantiated through use of the computational experiments explained below.
3.2 Computational Trials
The approximation methods described above were empirically tested on a large number of random interdependent networks. Each network was solved once as a MILP by treating its interdependencies as binary, once as an LP by treating its interdependencies as linear, and then once again as a MILP using each of the randomized rounding schemes (based on the LP solution). For each solution method the objective value was recorded, and for the randomized rounding schemes the number of realizations required to first achieve a feasible MILP solution was recorded. In order to evaluate the effect of the interdependence density as well as the network’s size, arc density, and interdependence structure on these results, several network parameters were varied between problem sets.
We used a modified version of NETGEN [15], a random MCNF problem generator, to create our test cases, adding a procedure that generates pairs of interdependent arcs alongside the standard MCNF problem parameters (which we used to define instances of the BIDM) as well as a preliminary check to ensure that all problem instances were feasible. We tested two main types of problem: those in which parent arcs corresponded to demand nodes, and those in which parent arcs were distributed randomly throughout the network. We will refer to these problems as Structured-Type and Unstructured-Type, respectively. Structured-Type problems represent a more realistic interdependence structure that might be seen in infrastructure networks, while Unstructured-Type problems assume no particular interdependence structure in order to evaluate the models' behaviors in a more general setting. For Structured-Type problems, we applied a transformation to convert parent demand nodes into equivalent parent arcs, relaxing the demand values at these nodes as well as the supply values. The full source code for the computational trial generation can be viewed online [28].
For each problem type, we generated test networks on 256 and 512 nodes. For each test network the number of arcs was either 4 or 8 times the number of nodes. For Structured-Type problems we randomly selected either 2%, 5%, 10%, or 15% of the demand nodes as parents, with their corresponding children being chosen uniformly at random from the arc set. For Unstructured-Type problems we randomly selected either 1%, 2%, 5%, or 10% of the arcs within the network as parents and children.
For each combination of parameters, 60 test cases were generated. In all cases, 20% of the nodes were sources and 20% were sinks (in Structured-Type problems, this refers to the original number of sink nodes, before some were converted into transshipment nodes during the parent arc transformation). Each arc’s cost was chosen uniformly at random from the interval , with its capacity chosen from and with 100% of the NETGEN skeleton arcs having maximum cost. The total supply was 10,000 per 256 nodes. The total number of test cases generated over all parameter combinations listed above was 1960.
Each test began by solving the binary input dependence MILP for the current network using CPLEX. Next the LP relaxation was solved using CPLEX, and the flow values from the optimal solution were recorded. Finally each of the seven randomized rounding schemes described above was executed in turn, using the flow values from the LP solution to determine the probability of setting each binary variable to 0 or 1. During each attempt a random binary vector was generated according to the rules of the current scheme, the resulting constraints were added to the MILP formulation, and the resulting problem was evaluated with CPLEX. If infeasible, a new random binary vector was generated from the same probability distribution and the process was repeated. If no feasible solution was found after an arbitrarily-chosen cutoff of 1000 attempts then the scheme was labeled as a failure for that trial and the next scheme was tested.
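The overall retry loop is sketched below; the solver call is hidden behind a hypothetical solve_fixed function (the actual trials used CPLEX, but no solver-specific calls are reproduced here).

```python
def randomized_rounding(solve_fixed, draw_assignment, max_attempts=1000):
    """Repeat the draw-and-solve loop until a feasible restricted MILP is found.

    solve_fixed(y)    -- hypothetical solver wrapper: returns the objective value
                         of the MILP with the linking variables fixed to y, or
                         None if that restriction is infeasible
    draw_assignment() -- draws one random 0/1 assignment of the linking variables,
                         e.g. via round_linking_variables above
    Returns (objective, attempts) on success or (None, max_attempts) on failure.
    """
    for attempt in range(1, max_attempts + 1):
        objective = solve_fixed(draw_assignment())
        if objective is not None:
            return objective, attempt
    return None, max_attempts
```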
3.3 Computational Results
In this section we summarize the results of the computational trials described above. Full data tables can be found in Appendix E, while the raw data can be viewed online [13]. Across all groups of trials the RR-Child and RR-Parent schemes performed so similarly that only the RR-Child and RR-Fair results will be discussed here.
LP Relaxation
For the LP relaxation the primary value of interest is the relative error (as a relative difference between the MILP and LP objective values), which was extremely small across all trials. Among the Structured-Type trial sets all showed a mean error of less than 0.2% with a maximum of less than 1.7%. More than half of the Structured-Type trial errors were exactly zero. Among the Unstructured-Type trials all mean errors were less than 1.7% with a maximum of less than 6.3% and a median of less than 0.2%. In addition, within each Unstructured-Type trial set, increasing the interdependence density for a fixed node and arc count resulted in an increased mean relative error. No clear trend was displayed among the Structured-Type trials.
Randomized Rounding, Structured-Type Problems
For the randomized rounding schemes there are two values of interest: the relative error and the number of iterations required to reach a feasible solution. For Structured-Type problems, the randomized rounding schemes were able to find a feasible solution after a single iteration in all but one instance. The exception occurred on a test network with 512 nodes, 4 arcs per node, and 10% of sinks acting as interdependencies, for which one scheme failed to find a solution within 1000 iterations, although the remaining randomized rounding schemes all succeeded.

Figure 1 shows the mean relative errors for the three RR-Child schemes and the single RR-Fair scheme applied to one group of Structured-Type problem instances, although all other Structured-Type test groups showed a similar trend. All schemes show a general trend of the mean error increasing as the interdependence density increases, with the largest errors occurring for the networks with the lowest arc density. One RR-Child scheme achieved the smallest relative error for most trial groups, followed very closely by a second, with all relative errors being less than 6.5%. The remaining RR-Child trials showed larger errors, but still all less than 9.4%. The RR-Fair errors were by far the largest, with a maximum of approximately 18.7%.
Randomized Rounding, Unstructured-Type Problems
Structured-Type problems require relaxing some demand constraints while Unstructured-Type problems do not, which makes obtaining a feasible solution more difficult for Unstructured-Type problems. Figure 2(a) shows the fraction of each type of trial in which each randomized rounding scheme failed to find a feasible solution within 1000 attempts. As expected, in most cases failure rates increase as the number of interdependencies increases, since this introduces additional side constraints and makes it more difficult to find a feasible solution. Other network sizes displayed less of a clear trend for small numbers of interdependencies, but in all cases the largest failure rates coincided with the largest number of interdependencies.


More important is the comparison between the failure rates of the different randomized rounding schemes. In all trial groups except for one (256 nodes, 4 arcs per node, and 2% of arcs interdependent) the RR-Fair scheme showed by far the largest failure rate, followed by one of the RR-Child schemes, with the other two showing by far the lowest failure rates. For the trial groups in which 10% of arcs were interdependent, all RR-Child failure rates fell below 15% while all RR-Fair failure rates were above 95%. Among the RR-Child schemes, as seen in Figure 2(a), one variant failed significantly more often than the other two.
Figure 2(b) shows the mean relative error of each randomized rounding scheme for Unstructured-Type problems, restricted only to instances for which a feasible solution was obtained within 1000 attempts. The trend appears extremely similar to that of the Structured-Type trials. All mean errors increase with the number of interdependencies. In all cases RR-Fair produces by far the largest error, followed in order by the three RR-Child variants. While two of the RR-Child variants performed almost identically in the Structured-Type trials, there is a more pronounced difference in the Unstructured-Type trials, in which one produces clearly smaller errors than the other. Across all trial sets the largest mean errors for the three RR-Child variants were approximately 16.8%, 4.1%, and 0.8%, respectively.
3.4 Discussion
The computational results of the previous section largely confirm our expectations for the approximation algorithms as laid out in Section 3.1. The LP relaxation and the resulting randomized rounding solutions typically produce reasonable bounds for the MILP solution. The relative error of the LP relaxation had a mean of less than 1.7% for all trial groups. For Structured-Type problems, with one exception the best-performing RR-Child scheme was able to find a feasible solution at a relative error of less than 6.5%, while another variant achieved nearly the same accuracy with a better success rate. For Unstructured-Type problems one RR-Child scheme achieved the smallest relative errors and another achieved the smallest failure rate, but both performed well, especially for the smallest trials. In all trial sets, all RR-Child and RR-Parent schemes achieved significantly smaller errors and higher success rates than the RR-Fair scheme, which further justifies the utility of solving an instance of the LIDM when approximating the solution to the BIDM. However, these results also show that the error and failure rates of the approximation methods increase with the interdependence density. While this was expected, it also implies that the use of the randomized rounding algorithms should be limited to networks with relatively low interdependence density. That being said, in spite of the upward trend, the relative error in the LP relaxation was so small across all trial sets that it remains a useful approximation method even for networks with relatively large interdependence densities, which motivates the development of an efficient solution algorithm for the LIDM.
The remainder of the paper will be dedicated to developing and proving the correctness of a generalized version of network simplex capable of solving an arbitrary instance of the MCNFLI. This problem is technically a special case of the MCNF problem with side constraints studied in Mathies and Mevert 1998 [18], but we can exploit the structure of our side constraints to define a simpler solution algorithm. Similar algorithms have been studied in Calvete 2003 [6] for the general equal flow problem and in Bahçeci and Feyzioğlu 2012 [4] for the proportional flow problem.
4 Problem Structure and Basis
In this section we present some of the notation and theoretical results that will be used throughout the rest of the paper. We prove some important properties of the matrix form that will be used in Section 5 to develop a modified network simplex algorithm, most importantly the characterization of a basis for the MCNFLI in Section 4.2. Before moving on it will be useful to express the LIDM in matrix form. To this end we introduce a slack variable $s_{kl} \ge 0$ for each $((i,j),(k,l)) \in \mathcal{I}$ and rewrite the linking constraint (4) as the equality
$$x_{kl} - \alpha_{kl}\, x_{ij} + s_{kl} = \beta_{kl} \qquad \forall ((i,j),(k,l)) \in \mathcal{I} \qquad (5)$$
Let $n = |N|$, $m = |A|$, and $p = |\mathcal{I}|$. Let $x$ be the vector of flow variables, $s$ the vector of slack variables, $c$ the vector of unit flow costs, $u$ the vector of arc capacities, $b$ the vector of supply values, $\beta$ the vector of linear inequality constants, and $M$ the node-arc incidence matrix of $G$ [5, p. 277]. Let $D$ be a matrix with one column for each arc and one row for each interdependence in $\mathcal{I}$. Within the row corresponding to interdependence $((i,j),(k,l))$, the element in the column of $(i,j)$ is $-\alpha_{kl}$, the element in the column of $(k,l)$ is $1$, and all other elements are zero. Then the block matrix form of the LIDM, which we will refer to as the LIDM-B, is
$$\min \ c^T x \qquad (6)$$
$$\text{s.t.} \quad \begin{bmatrix} M & 0 \\ D & I \end{bmatrix} \begin{bmatrix} x \\ s \end{bmatrix} = \begin{bmatrix} b \\ \beta \end{bmatrix} \qquad (7)$$
$$0 \le x \le u \qquad (8)$$
$$s \ge 0 \qquad (9)$$
where $c^T$ denotes the transpose of $c$, $I$ is a $p \times p$ identity matrix, and $0$ is a matrix or vector of zeros of the appropriate dimensions. Let $\bar{M}$ be the matrix on the lefthand side of (7), which constitutes the constraint matrix of the LIDM-B.
4.1 Terminology
In this section we define some terminology that will be used throughout the remainder of the paper. For brevity we will use the terms “arc”, “flow variable”, and “column” interchangeably to refer to an arc, an arc’s associated flow variable, and a flow variable’s associated column of $\bar{M}$. Similarly we will use the terms “node”, “flow conservation constraint”, and “row” interchangeably to refer to a node, a node’s associated flow conservation constraint, and a constraint’s associated row of $\bar{M}$.
During each iteration of simplex, let $B$ denote the set of basic variables. For such a set $B$, let $\bar{M}_B$ be the submatrix of $\bar{M}$ containing only basic columns. Let $L$ and $U$ be the sets of nonbasic variables at their lower or upper bound, respectively. Let a variable be called interdependent if it is involved in any interdependence as either a parent arc, child arc, or slack variable, and independent otherwise. For each interdependence $((i,j),(k,l)) \in \mathcal{I}$, we say that flow variables $x_{ij}$ and $x_{kl}$ and slack variable $s_{kl}$ are all linked to each other. We call arcs $(i,j)$ and $(k,l)$ partners of each other. For any interdependent arc, let $\gamma$ be defined as the coefficient of its flow variable in its corresponding linking constraint (5), which is $-\alpha_{kl}$ if the arc is the parent of child $(k,l)$ and 1 if it is a child.
4.2 Basis Characterization
In this section we prove the main result of the paper, the characterization of the basis of the LIDM-B, which will require first establishing some preliminary observations regarding the problem’s structure. A basis of the LIDM-B consists of a set of basic variables whose corresponding columns of $\bar{M}$ are full-rank, plus a partition of all nonbasic variables into sets $L$ and $U$. We begin by observing the rank of $\bar{M}$.
Observation 1.
The coefficient matrix $\bar{M}$ of the LIDM-B has rank $n + p - 1$.
The proof of this follows from the fact that, as an incidence matrix, the submatrix $M$ has rank $n - 1$ [5, p. 280] while the identity matrix in the lower right has rank $p$, and the block structure of $\bar{M}$ makes it clear that the last $p$ rows are linearly independent when taken together with any full-rank subset of the first $n$ rows. This tells us that our basis must always contain $n + p - 1$ basic variables.
Our next goal is to find a graph-based characterization of this basis analogous to the spanning tree basis of the standard MCNF. We assume without loss of generality that $G$ contains at least one spanning tree made up entirely of independent arcs. As in the MCNF, the subgraph of basic arcs must contain only a single component.
Observation 2.
Given any basis $B$, the subgraph of $G$ containing only the arcs in $B$ must be connected.
The proof of this relies on the fact that the basis matrix can be rearranged into a block form with one block for each component of the basic subgraph. Each block is an incidence matrix and thus rank-deficient by 1, meaning that adding all rows corresponding to one component results in a zero row. Since the basis matrix is rank-deficient by only 1, only one zero row should be possible, which implies that only one component is possible.
Unlike the standard MCNF it is possible for the MCNFLI basis to contain cycles of arcs, however no such cycle can consist entirely of independent arcs.
Observation 3.
Given any basis $B$, the subgraph of $G$ containing only the independent arcs in $B$ must be free of cycles.
The proof of this is almost identical to that of the MCNF problem [5, p. 283]. Observations 2 and 3, combined, imply that the subgraph of basic independent arcs must form a spanning forest of $G$. Let a spanning forest with $\tau$ components be referred to as a $\tau$-spanning forest.
For the remainder of this paper we will let $\lambda$ represent the number of basic interdependent variables, which could include a combination of interdependent arcs and slack variables. Note that it is always the case that $\lambda \ge p$, or else Observation 3 would be violated. This allows us to define the structure of the basic independent arcs.
Lemma 1.
The basic independent arcs must form a $(\lambda - p + 1)$-spanning forest of $G$.
This follows from Observations 2 and 3. Then the collection of basic variables consists of a spanning forest of independent arcs, plus possibly some interdependent arcs linking the components of the forest, plus possibly some slack variables. For the remainder of the paper we will use $F$ to refer to the spanning $\tau$-forest of $G$ formed by the basic independent arcs, where $T_1, \ldots, T_\tau$ (with $\tau = \lambda - p + 1$) are the subgraphs of $G$ that form the components of the forest.
Define matrix as
(14) |
where is the number of basic interdependent arcs, is the number of basic slack variables, and are the submatrices of and containing only basic columns, and is defined as
(18) |
This matrix proves to be the key for determining the rank of .
Lemma 2.
The rank of the basis matrix is $n + p - 1$ if and only if the matrix defined in (14) is full-rank.
Proof.
We can reorder the columns and rows of any basis matrix to obtain the block structure
(24) |
where is the incidence matrix of and is the submatrix of whose columns correspond to interdependent and slack variables and whose rows correspond to the nodes of . For notational convenience, for the remainder of the proof we will refer to arcs by a single index rather than the indices of both of their endpoints. Let the basic interdependent arcs be labeled with indices corresponding to the order of the columns in each submatrix from (24).
Consider one of the submatrices . As the incidence matrix of a tree, it has exactly one more row than it has columns and it is rank-deficient by exactly 1 [5, p. 280]. Summing all rows of yields the zero vector. Transform into an equivalent matrix by adding all rows corresponding to to the first row of , for each tree . After this addition, the element corresponding to the first row of in column will become , which is exactly . The result is
(33) |
where and are exactly and , respectively, with their first row removed. All matrices are full-rank [5, p. 281], and since they occupy disjoint sets of columns, the rows associated with must all be linearly independent. Their total rank when taken together is ( nodes, minus one deleted row for each of the trees). Then has rank exactly when all remaining rows have rank , and are linearly independent when taken together with the rows of . Since the first row associated with each matrix is zero in (33) we need only examine the remaining rows for columns 1 through . The submatrix defined by this subset is
(38) |
The dimensions of this matrix are , since it has one column for each basic interdependent variable and one row for each of the trees in the spanning forest, plus one for each of the interdependencies. Consider the effects of summing rows 1 through . In column , from our earlier conclusion that , this is , which is zero since all columns of an incidence matrix sum to zero. Because of this we may drop any arbitrary row without affecting the rank of this matrix. Dropping the row corresponding to tree gives exactly the matrix defined earlier in (14). ∎
Adopting the terminology of [2, 4, 6], we refer to a $\tau$-spanning forest of $G$ as a good $\tau$-forest with respect to a set of non-independent variables if the corresponding matrix (14) is full-rank. Then the preceding results allow us to finally characterize the basis of the LIDM-B.
Theorem 1.
A basis of the LIDM-B consists of a set of $n + p - 1$ basic variables, containing a good $\tau$-spanning forest of $G$ verified by a set of interdependent variables, plus an assignment of all nonbasic variables to a category $L$ or $U$, containing variables at their lower or upper bounds, respectively.
The proof of Theorem 1 follows directly from Lemma 2 and the definition of a basis. It should also be noted that the basis characterization presented in Theorem 1 can be modified so that it depends only on a submatrix, as shown in Appendix A. In either case the spanning forest element of the LIDM-B basis makes it structurally similar to that of the standard MCNF basis. This enables us to develop a generalized network simplex algorithm for its solution, as will be explained in the next section.
5 Modified Network Simplex
The network simplex algorithm is a well-known, computationally efficient, exact solution algorithm for network flow problems [1, pp. 402–421]. It is a specialized version of the simplex algorithm that takes advantage of the network structure of the MCNF; however, the introduction of side constraints in the LIDM-B prevents network simplex from being applied directly. Fortunately the basis structure derived in the previous section is so similar to the spanning tree basis of a standard MCNF problem that the network simplex algorithm can be generalized to develop an efficient solution algorithm for the MCNFLI.
This section will focus primarily on the elements of network simplex that need to be modified. Our overall methodology in developing this generalization follows the general approach used by Calvete 2003 [6] and by Bahçeci and Feyzioğlu 2012 [4] in generalizing network simplex for the equal flow and proportional flow problems, respectively, though these problems are not equivalent to the MCNFLI and so their specific results cannot be directly applied. Among other changes, solving the LIDM-B requires further procedures to accommodate the non-network slack variables as well as casework to accommodate different types of basis changes. In Section 5.1 we describe the modified reduced cost calculation procedure (which searches for candidates to enter the basis). In Section 5.2 we describe the modified change of basis procedure (consisting of pivoting to bring a candidate into the basis while another variable leaves). We conclude in Section 5.3 with an analysis of their computational complexity. See also Appendix D for an illustrative numerical example of the algorithm carried out on a test network.
5.1 Reduced Cost Calculation
The vector of reduced costs $\bar{c}$ is defined by $\bar{c}^T = c^T - (\pi, \mu)^T \bar{M}$ (with each slack variable assigned zero cost), where $(\pi, \mu)$ is the potential vector [5, p. 84]. There is one reduced cost for each flow variable $x_{ij}$ and one for each slack variable $s_{kl}$. There is one potential $\pi_i$ for each node $i \in N$ and one potential $\mu_{kl}$ for each interdependence $((i,j),(k,l)) \in \mathcal{I}$. Given our definitions of $M$ and $D$, we have the relationships
$$\bar{c}_{ij} = c_{ij} - \pi_i + \pi_j \qquad \text{if } (i,j) \text{ is independent} \qquad (39)$$
$$\bar{c}_{ij} = c_{ij} - \pi_i + \pi_j + \alpha_{kl}\,\mu_{kl} \qquad \text{if } (i,j) \text{ is the parent of child } (k,l) \qquad (40)$$
$$\bar{c}_{kl} = c_{kl} - \pi_k + \pi_l - \mu_{kl} \qquad \text{if } (k,l) \text{ is a child} \qquad (41)$$
$$\bar{c}_{s_{kl}} = -\mu_{kl} \qquad \text{if } s_{kl} \text{ is the slack variable of interdependence } ((i,j),(k,l)) \qquad (42)$$
The reduced cost of a basic variable is zero, allowing us to use equations (39)–(42) to solve for the potentials by adopting a two-step strategy based on that of Calvete 2003 [6] and Bahçeci and Feyzioğlu 2012 [4]. First we make an initial guess for the potential vector. To do this, within each tree of $F$, arbitrarily select a root and arbitrarily assign it a potential. Assign a potential of zero to all interdependencies of basic slack variables and arbitrarily assign a potential to all remaining interdependencies. Then equations (39) can be used to solve for all remaining node potentials within each tree in order from root to leaf, since $\bar{c}_{ij} = 0$ for all basic independent arcs $(i,j)$.
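A sketch of this first step is given below. It assumes the relation $\pi_i - \pi_j = c_{ij}$ for every basic independent arc $(i,j)$, matching the sign convention used in (39) above; all container layouts and names are hypothetical.

```python
from collections import deque

def initial_node_potentials(trees, basic_independent_arcs, cost, root_value=0.0):
    """Guess node potentials by propagating from an arbitrary root in each tree.

    trees                  -- list of node sets, one per tree of the basic forest
    basic_independent_arcs -- iterable of (tail, head) pairs
    cost                   -- dict mapping (tail, head) to the arc's unit cost
    """
    adjacent = {}
    for (i, j) in basic_independent_arcs:
        adjacent.setdefault(i, []).append((j, cost[(i, j)], True))    # traverse tail -> head
        adjacent.setdefault(j, []).append((i, cost[(i, j)], False))   # traverse head -> tail
    pi = {}
    for tree_nodes in trees:
        root = next(iter(tree_nodes))       # arbitrary root with an arbitrary potential
        pi[root] = root_value
        queue = deque([root])
        while queue:
            node = queue.popleft()
            for neighbor, c, tail_to_head in adjacent.get(node, []):
                if neighbor in pi:
                    continue
                # enforce pi[tail] - pi[head] = c along each basic independent arc
                pi[neighbor] = pi[node] - c if tail_to_head else pi[node] + c
                queue.append(neighbor)
    return pi
```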
Second, we test whether the guessed potentials were correct by checking whether they produce reduced costs of zero for all basic variables when substituted into (39)–(42). If so, then the guess is the correct potential vector and we can move on. If not, then we must calculate a “corrected potential” vector defined by
(43)
(44)
The vector of correction terms is defined as the solution of the linear system , which includes the reduced cost vector resulting from the initial guess .
Proposition 1.
Proof.
Let be the solution to . From the first part of the two-step strategy described above it is clear that results in for all , and since the same constant offset is applied to all node potentials in , this remains the case after applying the correction terms. From the structure of we find that the row of system corresponding to interdependence is simply the equation . For a basic slack variable we have , in which case the corrected potential and thus the new reduced cost are all also zero.
All that remains is to show that the corrected potentials also result in zero reduced cost for the basic interdependent arcs. For convenience define . For any basic parent arc with , where and , substituting the corrected potentials into the righthand side of (40) and simplifying gives , which is zero if and only if . To show this, the row of corresponding to arc is . If , then is 1 for , , and for all other , resulting in . The same result holds even if , since in this case and . In either case, this row of is set equal to , meaning that must satisfy . As noted above, this implies that produces when substituted into (40). A similar argument holds for the basic child arcs, thus the overall claim holds. ∎
At the end of the two-step process the resulting corrected potential vector can be applied to equations (39)–(42) to compute the reduced cost of any nonbasic variable. As noted at the end of Section 4.2, there is a way to characterize the basis using only a submatrix, and this reduced version also gives rise to a reduced version of the above algorithm, explained in Appendix A. In addition, note that the full version of the above algorithm only needs to be used during the initial iteration. Between consecutive iterations of simplex, within most components of $F$ the previous potential vector will still satisfy (39) after the change of basis. As a result the previous potential vector can be used to calculate the correction vector without the need to recalculate all node potentials within each component of $F$.
To explain, if a component of $F$ remains unchanged between iterations (meaning that no arc within it leaves the basis and no independent arc incident to one of its nodes enters the basis), or if it loses an arc and splits into two separate components, then all previous node potentials within it still satisfy (39). Only if an independent arc enters the basis, causing two components of $F$ to merge, can one of these equations become violated. In this event equations (39) can be satisfied by increasing all node potentials within one of the merged components by an appropriate constant: if an independent arc with endpoints in two separate components of $F$ enters the basis, then an appropriate constant can be added to all node potentials in either one of the two components. All basic slack potentials can then be set to 0 as before, after which the correction vector can be calculated and used to calculate the corrected potential vector.
In any case, if all reduced costs of variables in $L$ are nonnegative and all reduced costs of variables in $U$ are nonpositive, then the current basic solution is optimal and the algorithm may terminate. Otherwise we choose some variable in $L$ with a negative reduced cost or some variable in $U$ with a positive reduced cost to enter the basis and conduct a change of basis.
5.2 Change of Basis
After deciding on a variable to enter the basis we conduct pivoting and update the solution vector. Before describing this process it will be helpful to first discuss how to calculate the values of the basic variables given a particular basis. This may be a necessary step during the algorithm’s first iteration starting from an initial basis, and it will also introduce some of the notation and results required to define the change of basis procedure in Section 5.2.2.
5.2.1 Computing the Values of the Basic Variables
Our goal in this section is to derive a procedure to calculate the values of the basic variables given a valid basis . For any such basis the variables in and are fixed at their lower or upper bound, respectively, and the variables in must satisfy transshipment constraints (2) and linking constraints (5). For , define and .
Adopting the terminology of Calvete 2003 [6], for each tree of we define a net requirement , which represents the net inflow to through nonbasic arcs and supply values. Let be the vector of basic interdependent flow variables and all slack variables, arranged in the same order as the columns of . For interdependence corresponding to row of , define its net requirement as
(49) |
Let be a vector of the form , containing all tree net requirement values for followed by all interdependence net requirement values for . These net requirement values can be used to compute the values of the basic variables.
Proposition 2.
Given a basis , the values of the basic interdependent variables and the slack variables are given by the solution of the system .
Proof.
Suppose that satisfies . Our goal is to show that the values in satisfy constraints (2) and (5). Let be the set of basic independent arcs, be the set of basic interdependent arcs, be the set of nonbasic independent arcs at their upper bound, and be the set of nonbasic interdependent arcs at their upper bound. Define
(50) |
This is the outflow from node required to satisfy (2) after all nonbasic arc flows are taken into account. The basic flow variables must carry this amount of net outflow from node , giving
(51) |
For any tree , consider the result of summing equations (51) over all . All basic independent arcs have either both or neither endpoints in , thus for all the terms and will each appear in one equation, causing the two to cancel. The same cancellation occurs when summing equations (50) over all . Equating the two gives
(52) |
Both sides of (52) represent the net outflow of an entire tree after taking nonbasic flows into account. The only arcs capable of transporting a variable amount of flow between trees are the basic interdependent arcs, and so equations (52) define a necessary and sufficient condition for the basic interdependent flow variables to satisfy (2). On the lefthand side, for any arc , the term is present if exits , the term is present if enters , and otherwise no term of the form is present. Then the lefthand side is equal to . Similarly, on the righthand side, for any arc , the term is present if exits , the term is present if enters , and otherwise no term of the form is present. Then the righthand side is equal to , which is exactly the definition of . Combining these two gives , which is row of system .
All that remains is to show that the slack variables in satisfy constraints (5). Each linking constraint has the form . If either flow variable is nonbasic then its value is either zero or its upper bound (depending on whether it is in or ), and placing the resulting constant value on the righthand side produces the net requirement as defined in (49). Then linking constraint is exactly row of the system . Combining this with our earlier result we may conclude that consists of the values of the basic interdependent variables and slack variables. ∎
After obtaining the values of all basic interdependent variables and slack variables, the values of the basic independent variables can be computed in the same way as in standard network simplex [1, pp. 413–415] within each tree. With this procedure in mind, we are finally ready to describe the change of basis procedure.
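The within-tree step mirrors standard network simplex; a minimal sketch is given below under the convention that flow on arc $(i,j)$ leaves $i$ and enters $j$. The imbalance argument is the required net outflow of each node once all nonbasic and basic interdependent flows have been fixed; all names and container layouts are hypothetical.

```python
from collections import defaultdict

def tree_arc_flows(tree_nodes, tree_arcs, imbalance):
    """Compute the flows on one tree's basic independent arcs by peeling off leaves."""
    e = dict(imbalance)                       # remaining required net outflow per node
    incident = defaultdict(set)
    for arc in tree_arcs:
        i, j = arc
        incident[i].add(arc)
        incident[j].add(arc)
    leaves = [v for v in tree_nodes if len(incident[v]) == 1]
    flow = {}
    while leaves:
        v = leaves.pop()
        if not incident[v]:
            continue                          # only the final node of the tree remains
        arc = incident[v].pop()
        i, j = arc
        flow[arc] = e[v] if i == v else -e[v]  # flow conservation at the leaf v
        other = j if i == v else i
        e[other] += e[v]                      # pass the leaf's imbalance up the tree
        incident[other].discard(arc)
        if len(incident[other]) == 1:
            leaves.append(other)
    return flow
```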
5.2.2 Change of Basis Procedure
In this section we define a procedure for bringing the entering variable into the basis and choosing a variable to leave the basis. We consider here only entering variables that begin at their lower bound, but an analogous procedure may be described for those at their upper bound. Pivoting consists of increasing the entering variable by units while some of the basic variables change in response in order to maintain feasibility. Assuming finite capacities, at some increase the entering variable or a basic variable will reach one of its bounds, becoming a blocking variable (with infinite capacities there may be no blocking variable, in which case the algorithm can terminate with an objective value of ). A standard pivoting rule [5, p. 111] can be used to select a blocking variable to leave the basis into or while the entering variable takes its place in , at which point the structure of the basic forest is updated.
We will use (and ) to refer to the increase in a given variable as a result of increasing the entering variable by units. Our main goal is to determine the vector of increases . The process for doing so depends on whether changing the entering variable causes flow to be transferred between different components of .
Case 1
If increasing the entering variable causes no flow to be transferred between components of , then pivoting can be conducted similarly to standard network simplex [5, p. 284]. This case can occur when the entering variable is an independent arc or an interdependent arc with a basic slack variable, and when both endpoints of the arc lie within the same tree, in which case units of flow are circulated around the unique cycle defined by the arc in the tree (and for the basic linked slack variable, if it exists). It can also occur when the entering variable is interdependent with a basic linked arc whose endpoints both lie in the same component of , in which case additional flow is circulated around the unique cycle defined by the entering arc’s partner. The amount of flow circulated is if the entering variable is slack, if it is the parent of a child , and if it is the child of a parent .
Case 2
If increasing the entering variable does transfer flow between different components of , we apply a two-step process: first determine the flow increases of the basic interdependent variables, and then determine the flow increases of the basic independent arcs independently within each component of . For the first step, if the entering variable is an arc with , , and , then increasing its flow effectively increases the net requirements and by and , respectively. If is interdependent, then the value associated with its interdependence is also increased by . These changes in net requirements can be accommodated by solving the following perturbed version of the system .
(54) |
The instance of in row of the above system may be ignored if is independent. If the entering variable is slack then we may ignore the instances of in rows and and replace with . In any case solving system (54) gives a vector describing the values of the basic interdependent variables after the entering variable has increased by units, from which can be calculated. These are then used in the second step to calculate net requirement increases for all , which can in turn be used to calculate basic independent arc increases using the procedure described in Section 5.2.1 to process the arcs within each component of .
In either case, upon completion we obtain a vector of basic (and entering) variable increases. These can be used to calculate the maximum increase as well as the blocking variables as in network simplex [5, p. 290]. We then decide on a leaving variable and update the basis structure , the solution vector , the basic forest , and the matrix .
5.3 Computational Complexity
As a specialized form of simplex, the modified network simplex algorithm defined in the previous sections goes through exactly the same number of basis changes as simplex: exponentially many in the worst case, but empirically only polynomially many [5, pp. 127–128]. For this reason a more useful point of comparison is to measure the time complexity of a single iteration of both algorithms. Table 1 shows the time complexities and memory requirements of two standard implementations of simplex [5, p. 107] alongside our modified network simplex algorithm, as applied to the LIDM-B.
|  | Memory Requirement | Worst-Case Time |
|---|---|---|
| Simplex (Tableau) |  |  |
| Simplex (Revised) |  |  |
| Modified Network Simplex |  |  |
The memory requirement for our algorithm is , dominated by the node-level attributes, arc-level attributes, and elements of . The time complexity is , dominated by the operations to set node potentials and net requirements, operations to set flow values, and operations to solve systems and by Gaussian elimination. For the applications of the MCNFLI described earlier in Section 3 it is reasonable to assume that a relatively small portion of the network’s arcs are interdependent. In particular if is then both the tableau and the revised implementations of simplex have memory and time requirements that are quadratic in and , while our specialized algorithm’s requirements are only linear in and . As with network simplex, this allows it to become significantly more efficient than standard simplex for large networks [5, p. 287].
6 Conclusion and Future Work
The MCNFLI model introduced in Section 2 began as simply an LP relaxation of the binary input dependence model developed by Lee et al. From our discussion we see that the MCNFLI has some interesting modeling applications beyond its original purpose, extending to interdependent networks in which the flows through certain arcs are bounded by piecewise linear concave functions of the flows through other arcs. In Section 3 we also see how it can be used to obtain approximate solutions and near-optimal feasible mixed integer solutions to the original MILP. In particular the objective value of the LP relaxation is typically extremely similar to that of the MILP, and in the case of Structured-Type trials nearly all randomized rounding schemes were able to find a feasible MILP solution after a single iteration, typically with an extremely small relative error. Moreover, due to its special basis structure, the modified network simplex algorithm described in Section 5 allows the MCNFLI to be solved much faster than with general LP solution techniques. If the number of interdependencies is sufficiently small in comparison to the size of the overall network we can even achieve a running time that approaches that of network simplex, in spite of the fact that network simplex cannot be directly applied to this problem.
From the computational results presented in Section 3.2 it is clear that the linear relaxation of the BIDM typically produces extremely similar objective values while being significantly less computationally expensive to solve, though it should be noted that these results are based on a relatively limited range of network topologies, and that more computational testing would be required to extend these results to a broader range of applications. Regardless, the computational savings associated with the linear relaxation could become even more pronounced within a larger model that requires solving instances of the BIDM repeatedly during its solution process, for example in the disaster recovery models by Cavdaroglu et al. [7, 8, 20, 21, 31]. Of particular interest are applications for which only the objective value of the BIDM is significant within the overall model and not the flow vector, itself, since in this case the objective of the LIDM can be used as a direct substitute for that of the BIDM without requiring a randomized rounding algorithm to obtain a feasible solution. Rumpf 2020 [29] explored applications of the MCNFLI model to network interdiction games for planning interdependent network defenses against intelligent attacks, but further computational experiments in this area are needed.
Acknowledgments
The authors would like to thank Dr. Lili Du, Dr. Robert Ellis, Dr. Zongzhi Li, and Dr. Michael Pelsmajer for their valuable input, as well as Susanne Mathies for assistance regarding their previous research. The random problem instance generator mentioned in Section 3.2 is based on a NETGEN implementation programmed in C by Norbert Schlenker. We also thank Daniel Simmons, Dr. Qipeng Phil Zheng, and Dr. Leandro C. Coelho for their CPLEX guides, which proved invaluable in setting up our computational trials, and the anonymous referees for comments that helped improve the exposition of the paper.
References
- [1] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, Englewood Cliffs, NJ, 1993.
- [2] R. K. Ahuja, J. B. Orlin, G. M. Sechi, and P. Zuddas. Algorithms for the simple equal flow problem. Management Science, 45(10):1440–1455, 1999. doi:10.1287/mnsc.45.10.1440.
- [3] Y. Almoghathawi, K. Barker, and L. A. Albert. Resilience-driven restoration model for interdependent infrastructure networks. Reliability Engineering & System Safety, 185:12–23, 2019. doi:10.1016/j.ress.2018.12.006.
- [4] U. Bahçeci and O. Feyzioğlu. A network simplex based algorithm for the minimum cost proportional flow problem with disconnected subnetworks. Optimization Letters, 6:1173–1184, 2012. doi:10.1007/s11590-011-0356-5.
- [5] D. Bertsimas and J. N. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, Belmont, MA, 1997.
- [6] H. I. Calvete. Network simplex algorithm for the general equal flow problem. European Journal of Operational Research, 150(3):585–600, 2003. doi:10.1016/S0377-2217(02)00505-2.
- [7] B. Cavdaroglu, E. Hammel, J. E. Mitchell, T. C. Sharkey, and W. A. Wallace. Integrating restoration and scheduling decisions for disrupted interdependent infrastructure systems. Annals of Operations Research, 203:279–294, 2013. doi:10.1007/s10479-011-0959-3.
- [8] B. Cavdaroglu, S. G. Nurre, J. E. Mitchell, T. C. Sharkey, and W. A. Wallace. Decomposition methods for restoring infrastructure systems. In B. M. Ayyub, editor, Vulnerability, Uncertainty, and Risk: Analysis, Modeling, and Management, pages 171–179. American Society of Civil Engineers, 2011. doi:10.1061/41170(400)21.
- [9] R. Z. Farahani, E. Miandoabchi, W. Y. Szeto, and H. Rashidi. A review of urban transportation network design problems. European Journal of Operational Research, 229:281–302, 2013. doi:10.1016/j.ejor.2013.01.001.
- [10] J. Gao, S. V. Buldyrev, H. E. Stanley, and S. Havlin. Networks formed from interdependent networks. Nature Physics, 8:40–48, 2012. doi:10.1038/nphys2180.
- [11] S. Havlin, D. Y. Kenett, A. Bashan, J. Gao, and H. E. Stanley. Vulnerability of network of networks. European Physical Journal Special Topics, 223:2087–2106, 2014. doi:10.1140/epjst/e2014-02251-6.
- [12] R. Holden, D. V. Val, R. Burkhard, and S. Nodwell. A network flow model for interdependent infrastructures at the local scale. Safety Science, 53:51–60, 2013. doi:10.1016/j.ssci.2012.08.013.
- [13] H. Kaul and A. Rumpf. A linear input dependence model for interdependent networks. Mendeley Data, 2021. doi:10.17632/ptzc7jxhmn.
- [14] R. Kinney, P. Crucitti, R. Albert, and V. Latora. Modeling cascading failures in the North American power grid. European Physical Journal B, 46:101–106, 2005. doi:10.1140/epjb/e2005-00237-9.
- [15] D. Klingman, A. Napier, and J. Stutz. NETGEN: A program for generating large scale capacitated assignment, transportation, and minimum cost flow network problems. Management Science, 20(5):814–821, 1974. doi:10.1287/mnsc.20.5.814.
- [16] C. Y. Lam and K. Tai. Modeling infrastructure interdependencies by integrating network and fuzzy set theory. International Journal of Critical Infrastructure Protection, 22:51–61, 2018. doi:10.1016/j.ijcip.2018.05.005.
- [17] E. E. Lee II, J. E. Mitchell, and W. A. Wallace. Restoration of services in interdependent infrastructure systems: A network flows approach. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 37(6):1303–1317, 2007. doi:10.1109/TSMCC.2007.905859.
- [18] S. Mathies and P. Mevert. A hybrid algorithm for solving network flow problems with side constraints. Computers & Operations Research, 25(9):745–756, 1998. doi:10.1016/S0305-0548(98)00001-X.
- [19] D. T. Nguyen, Y. Shen, and M. T. Thai. Detecting critical nodes in interdependent power networks for vulnerability assessment. IEEE Transactions on Smart Grid, 4(1):151–159, 2013. doi:10.1109/TSG.2012.2229398.
- [20] S. G. Nurre, B. Cavdaroglu, J. E. Mitchell, T. C. Sharkey, and W. A. Wallace. Restoring infrastructure systems: An integrated network design and scheduling (INDS) problem. European Journal of Operational Research, 223:794–806, 2012. doi:10.1016/j.ejor.2012.07.010.
- [21] S. G. Nurre and T. C. Sharkey. Integrated network design and scheduling problems with parallel identical machines: Complexity results and dispatching rules. Networks, 63:306–326, 2014. doi:10.1002/net.21547.
- [22] M. Ouyang. Review on modeling and simulation of interdependent critical infrastructure systems. Reliability Engineering & System Safety, 121:43–60, 2014. doi:10.1016/j.ress.2013.06.040.
- [23] M. Ouyang. A mathematical framework to optimize resilience of interdependent critical infrastructure systems under spatial localized attacks. European Journal of Operational Research, 262(3):1072–1084, 2017. doi:10.1016/j.ejor.2017.04.022.
- [24] M. Ouyang and L. Dueñas-Osorio. An approach to design interface topologies across interdependent urban infrastructure systems. Reliability Engineering & System Safety, 96:1462–1473, 2011. doi:10.1016/j.ress.2011.06.002.
- [25] M. Ouyang, L. Hong, Z.-J. Mao, M.-H. Yu, and F. Qi. A methodological approach to analyze vulnerability of interdependent infrastructures. Simulation Modelling Practice and Theory, 17:817–828, 2009. doi:10.1016/j.simpat.2009.02.001.
- [26] M. Ouyang and Z. Wang. Resilience assessment of interdependent infrastructure systems: With a focus on joint restoration modeling and analysis. Reliability Engineering & System Safety, 141:74–82, 2015. doi:10.1016/j.ress.2015.03.011.
- [27] M. Parandehgheibi and E. Modiano. Robustness of interdependent networks: The case of communication networks and the power grid. In IEEE Global Communications Conference, GLOBECOM, pages 2164–2169, Atlanta, GA, December 2013. doi:10.1109/GLOCOM.2013.6831395.
- [28] A. Rumpf. MCNFLI computational trials. GitHub Repository, 2019. URL: https://github.com/adam-rumpf/mcnfli-trials.
- [29] A. Rumpf. Mathematics of Civil Infrastructure Network Optimization. PhD thesis, Illinois Institute of Technology, 2020.
- [30] A. Sen, A. Mazumder, J. Banerjee, A. Das, and R. Compton. Identification of K most vulnerable nodes in multi-layered network using a new model of interdependency. In Proceedings - IEEE INFOCOM, pages 831–836. Institute of Electrical and Electronics Engineers Inc., 2014. doi:10.1109/INFCOMW.2014.6849338.
- [31] T. C. Sharkey, B. Cavdaroglu, H. Nguyen, J. Holman, J. E. Mitchell, and W. A. Wallace. Interdependent network restoration: Modeling restoration interdependencies and evaluating the value of information-sharing. European Journal of Operational Research, 244(1):309–321, 2015. doi:10.1016/j.ejor.2014.12.051.
- [32] Z. Wang, A. Szolnoki, and M. Perc. Self-organization towards optimally interdependent networks by means of coevolution. New Journal of Physics, 16:033041, 2014. doi:10.1088/1367-2630/16/3/033041.
- [33] J. Zhong, F.-M. Zhang, S. Yang, and D. Li. Restoration of interdependent network against cascading overload failure. Physica A: Statistical Mechanics and its Applications, 514:884–891, 2019. doi:10.1016/j.physa.2018.09.130.
Appendix A Simplified Basis Characterization
In this appendix we describe an alternative way to characterize the basis that does not require storing the entire matrix as defined in (14). For the purposes of this explanation we will call a basic interdependent arc loose if its linked slack variable is basic, and tight otherwise. Suppose that the columns are arranged so that all tight arcs are listed before all loose arcs, and that the rows are arranged so that all interdependencies with nonbasic slack variables are listed above all interdependencies with basic slack variables. Then has the block structure
(55)
where contains only the columns and rows corresponding to tight arcs, and contains only the columns and rows corresponding to loose arcs. Similarly, each submatrix has the block structure . Then (24) becomes
(63)
where and represent identity matrices of the appropriate dimensions. Starting from this block structure and going through the same steps as in the proof of Lemma 2 results in the following block structure for .
(70)
Here, is the number of tight arcs and is the number of loose arcs. This matrix is identical to the version shown in (14), simply arranged into a different block structure. By Lemma 2, is rank if and only if is full-rank. Consider the submatrix of defined by
(75)
This matrix is created by dropping from all columns corresponding to basic slack variables and all rows corresponding to interdependencies whose associated slack variable is basic. Note that it is possible to have , in which case is (this occurs exactly when there are no basic interdependent arcs, or equivalently when all slack variables are basic). For convenience we consider an empty matrix to be full-rank. This allows the following observation:
Observation 4.
The proof of this relies on the fact that the rows dropped from to obtain each contain the only nonzero entry in one of the columns that is also dropped. Thus the dropped rows are always linearly independent when taken together with the remaining rows, and their contents do not affect the rank of . Then we have the following corollary:
Corollary 1.
The rank of matrix is if and only if matrix as defined in (75) is full-rank.
This follows immediately from Lemma 2 and Observation 4. The advantage of maintaining matrix instead of is that is smaller (its dimensions are between and , in contrast to the dimensions of which are between and ), and can thus be used to reduce the sizes of the linear systems that need to be solved during the modified network simplex algorithm presented in Section 5. This requires some minor modifications to the procedures defined above.
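As a concrete illustration of this reduction, the following sketch (Python with NumPy; the function and variable names are ours and the index layout is an assumption, since the paper defines these objects only symbolically) builds the reduced matrix by deleting the columns of basic slack variables and the rows of interdependencies whose slack variable is basic, then tests it for full rank, treating an empty matrix as full-rank per the convention above.

```python
import numpy as np

def build_reduced_matrix(M, slack_cols, slack_rows):
    """Drop from the full matrix M the columns listed in slack_cols
    (columns of basic slack variables) and the rows listed in slack_rows
    (interdependencies whose linked slack variable is basic)."""
    keep_rows = [r for r in range(M.shape[0]) if r not in set(slack_rows)]
    keep_cols = [c for c in range(M.shape[1]) if c not in set(slack_cols)]
    return M[np.ix_(keep_rows, keep_cols)]

def is_full_rank(Q):
    """Treat a 0 x 0 matrix as full-rank, matching the convention above."""
    if Q.size == 0:
        return True
    return np.linalg.matrix_rank(Q) == min(Q.shape)
```

In the algorithm this check plays the role of Corollary 1: a candidate basis is accepted only if the reduced matrix passes the full-rank test.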
Within the reduced cost calculation algorithm described in Section 5.1, the system is solved to obtain a potential correction term for each component of and each slack variable. Let and be the subvectors of and , respectively, which exclude the elements corresponding to basic slack variables. The proof of Proposition 1 shows that the system results in for all basic slack variables , and so it suffices to solve the system in place of and then set for all basic slack variables.
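The corresponding modification to the reduced-cost step can be sketched as follows (Python; a minimal sketch assuming the reduced system is square and that the entries associated with basic slack variables are simply set to zero afterwards, as stated above; all names are hypothetical).

```python
import numpy as np

def potential_correction(Q, rhs_reduced, n_full, basic_slack_idx):
    """Solve the smaller system involving the reduced matrix Q, then expand
    the result to full length by assigning zero to every position associated
    with a basic slack variable (hypothetical layout: one entry per
    interdependency)."""
    basic_slack_idx = set(basic_slack_idx)
    keep = [i for i in range(n_full) if i not in basic_slack_idx]
    mu = np.zeros(n_full)
    if keep:
        mu[keep] = np.linalg.solve(Q, rhs_reduced)
    return mu
```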
Within Case 2 of the change of basis algorithm described in Section 5.2.2, the perturbed system is solved to determine the values of the basic interdependent variables. Let be the subvector of which excludes elements corresponding to basic slack variables, and let be the subvector of which excludes elements corresponding to the interdependencies of loose arcs. Then the system can be solved to obtain the flows of all basic interdependent arcs, which along with the definitions of and gives the flows of all interdependent arcs. These flows, along with the linking equations (5), give the values of all basic slack variables. Because excludes basic slack variables, additional steps are required to calculate the entries of corresponding to slack variables. If the incoming variable has a basic linked slack variable, say , then . If is basic and both linked arcs are also basic, then (if is nonbasic then we can set , and likewise if is nonbasic we can set ).
Applying these improvements which use in place of does not change the worst-case time complexity of the modified network simplex algorithm, which remains since in the worst case still has dimensions of . However in practice is much smaller than , resulting in a practical reduction in computational time as well as improving the best-case time complexity from to , which occurs when is .
Appendix B Examples of Randomized Rounding Failure
The and randomized rounding schemes described in Section 3.1 carry the potential of failing to terminate for certain problem instances. This can occur if takes a value of either 0 or 1, in which case the scheme will choose the same value for in every iteration, even if that decision can never lead to a feasible MILP solution. Figures 3(a) and 3(b) show two different ways in which this can occur.
Figure 3(a) shows an example in which and both attempt in each iteration to force the saturation of an arc which is unusable in any feasible MILP solution. In this network the optimal LP solution saturates both and , which form a parent/child pair, and so and both set equal to 1. However, child arcs and cannot be used in any feasible MILP solution as their parent arcs and cannot be saturated due to bottlenecks. As a result, and both create an infeasible MILP in each iteration.
Similarly, Figure 3(b) shows an example in which and both attempt in each iteration to disallow the use of an arc which is required in any feasible MILP solution. In this network the optimal LP solution sends no flow through either or , which form a parent/child pair, resulting in both and setting equal to 0. However in this case all feasible MILP solutions require the use of both and , and so again each iteration of and results in an infeasible MILP.
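The 1000-attempt cap used in our computational trials (see Appendix E) is the practical safeguard against this behavior. The sketch below (Python; the rounding rule and the MILP solver callback are hypothetical stand-ins for the schemes of Section 3.1) shows the generic loop structure: when the LP-derived probability is exactly 0 or 1 the draw never changes, so without the cap the loop could run forever on instances like those in Figure 3.

```python
import random

def randomized_rounding(lp_ratios, solve_milp_with_fixed_y, max_attempts=1000):
    """Generic randomized rounding loop (hypothetical sketch).

    lp_ratios[a] is the LP-derived probability of fixing the binary
    variable y_a to 1.  A probability of exactly 0 or 1 produces the same
    fixing in every iteration, so the attempt cap guards against the
    non-terminating cases illustrated in Figure 3."""
    for _ in range(max_attempts):
        y = {a: 1 if random.random() < p else 0 for a, p in lp_ratios.items()}
        solution = solve_milp_with_fixed_y(y)  # assumed to return None if infeasible
        if solution is not None:
            return solution
    return None  # no feasible rounding found within the attempt cap
```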




Appendix C Examples of Arbitrarily Bad Theoretical Bounds
As noted in Section 3.1, the LP relaxation of the BIDM provides a lower bound for its optimal value while any randomized rounding solution provides an upper bound, but both of these bounds may be arbitrarily far away from the true value. Figures 4(a) and 4(b) show examples for which each of these is the case.
In the network shown in Figure 4(a), the optimal LP solution sends 1 unit of flow through node 2 to half-saturate parent arc , allowing the remaining 1 unit of flow to be sent through node 3 to take advantage of the half-usable capacity of child arc , at a flow cost of 0. The only feasible MILP solution sends all flow through arc at a cost of , and since can be arbitrarily large, the LP objective can be arbitrarily far below the MILP objective.
Similarly, in the network shown in Figure 4(b) the optimal LP solution splits the flow equally between the path through node 2 and the path through node 4 by half-saturating the parent/child pair and . Assuming , the optimal MILP solution sends all flow through node 2 at a total cost of . Because both the parent and the child are half-saturated in the LP solution, any randomized rounding scheme has a probability of 0.5 of either forcing the use of or disallowing the use of . If is disallowed, then the only remaining feasible MILP solution sends all flow through at a cost of , and so the randomized rounding objective can be arbitrarily far above the MILP objective.
Appendix D Numerical Example
In order to illustrate how our proposed solution algorithm works in practice we will conduct a few iterations on the example network shown in Figure 5. This network is based on the example network from Calvete 2003 [6]. It has nodes, arcs, and interdependencies, meaning that its basis must contain variables at all times.




Initial Basis
The initial basis contains the arcs shown in Figure 6(a) as well as slack variables , , and . Most nonbasic variables are in and currently have a value of zero, while and are in , putting them at their upper bound of 15. The initial values of the basic variables are
The basic forest contains four components: , , , and . All components, combined, contain the 7 basic independent arcs, while the remaining 7 basic variables are interdependent: , , , , , , and . This means that , thus is . Arranging the basic interdependent variables and trees in this order, the matrix as described in (14) is
Iteration 1
To calculate potentials using the method described in Section 5.1, we begin by tentatively assigning a potential of zero to each interdependence. For the node potentials we select an arbitrary root for each component of the basic forest and assign it a potential of zero, in this case selecting 1 as the root of , 2 as the root of , 5 as the root of , and 8 as the root of . We then use equations (39) to calculate the remaining node potentials from root to leaf, obtaining a complete potential vector of
Some of these are nonzero, and so we must calculate a correction term for each forest component and interdependence. Solving gives a correction vector of
Equipped with the potential values, we can use equations (39)–(42) to calculate the reduced cost of any nonbasic variable. In this case, we might notice that . Since and it has a negative reduced cost it is a candidate to enter the basis, and so we proceed with a change of basis.
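A minimal sketch of the root-to-leaf potential propagation used in this step is shown below (Python; we assume the standard network simplex convention that every basic arc has zero reduced cost, so that pi_j = pi_i - c_ij along a basic arc (i, j); equations (39) are not reproduced in this appendix, so the sign convention is an assumption).

```python
from collections import defaultdict

def tree_potentials(root, tree_arcs, cost):
    """Propagate node potentials over one component of the basic forest,
    assuming each basic arc (i, j) has zero reduced cost, i.e.
    pi[j] = pi[i] - cost[(i, j)] when the arc is traversed forward."""
    adj = defaultdict(list)
    for (i, j) in tree_arcs:
        adj[i].append((j, cost[(i, j)], +1))  # traverse arc forward
        adj[j].append((i, cost[(i, j)], -1))  # traverse arc backward
    pi = {root: 0.0}
    stack = [root]
    while stack:
        u = stack.pop()
        for v, c, direction in adj[u]:
            if v not in pi:
                pi[v] = pi[u] - direction * c
                stack.append(v)
    return pi
```

Running this on each component of the basic forest with the roots chosen above produces the tentative potential vector, which is then adjusted by the correction terms.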
is an independent arc and bridges the gap between and , making this a Case 2 change of basis. We first determine the net requirement values as defined in Section 5.2.1. For each component of we find the total supply value and inflow from arcs in , resulting in , , and . Interdependencies do not contain any variables in , and so by (49) we have , , and . Interdependent arc is in , so .
We use these values to form the perturbed system , adding to and to because the incoming arc leaves and enters . The solution to the system is
Subtracting the initial basic feasible solution from this vector gives us the change vector
Next we must determine which basic independent variables change. We may ignore because it contains no arcs, and because it is not incident to any of the affected arcs. Within and , we must calculate changes in the modified net requirements of each affected node as referenced in Section 5.2.1. We need only calculate the modified requirement values of nodes 1, 2, 3, and 5, since these are the only nodes incident to affected arcs. This gives
The only nonzero changes occur at nodes 2 and 3. Finally we can scan through each tree from leaf to root, calculating the change in each independent flow variable as described in Section 5.2.1. There are only two nonzero changes: and .
Having determined the full change vector , we use the method outlined in Section 5.2.2 to calculate and the blocking variables. The first variable to reach a bound as increases is , which reaches zero when . Then the change of basis consists of entering the basis from , leaving the basis into , and a change increment of . The new basis is shown in Figure 6(b), and the values of the basic variables are
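For reference, the step-size computation in this and the following iterations amounts to a standard bounded-variable ratio test over the change vector. The sketch below (Python; hypothetical names, with lower bounds assumed to be zero as in this example) returns the largest feasible increment and a blocking variable.

```python
def ratio_test(x, delta, upper, entering_bound):
    """Bounded-variable ratio test: find the largest step theta for which
    every basic variable x[k] + theta * delta[k] stays within [0, upper[k]],
    along with the index of a blocking variable (None if the incoming
    variable's own bound is the binding limit)."""
    theta, blocking = entering_bound, None
    for k, d in delta.items():
        if d < 0 and x[k] / -d < theta:
            theta, blocking = x[k] / -d, k              # k would drop below 0
        elif d > 0 and (upper[k] - x[k]) / d < theta:
            theta, blocking = (upper[k] - x[k]) / d, k  # k would exceed its bound
    return theta, blocking
```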
Iteration 2
Due to the previous basis change, components and merge into a single component. We will label the new components , , and . Under this naming scheme, and keeping the interdependencies in their previous order, we now have
Going through the same steps as in the previous iteration, the corrected potential vector is
From this we can calculate , and since , it is a candidate to enter the basis. is an independent arc that bridges the gap between and . This leads to another instance of change of basis Case 2. The perturbed net requirement vector is
Solving gives
The relevant supply value changes are
The only nonzero changes occur for nodes in components and , and so only arcs in these trees need be considered. Calculating the changes in their arcs as before, we find three nonzero values: , , and .
A total of eight variables change as increases. The first to reach a bound is , which becomes zero when . Then moves from into , moves from into , and the remaining variables are adjusted by substituting into their change terms. The new basis is shown in Figure 6(c), and the values of the basic variables are now
Iteration 3
If the new components are labeled as , , and , then remains exactly the same as in the previous iteration since all basic independent arcs still bridge the same tree indices. Applying the same potential calculation technique, the corrected potential vector is
Using these potentials to calculate reduced costs, we find that no variable in has a negative reduced cost and no variable in has a positive reduced cost. This implies that the current solution is optimal. We may terminate the algorithm and output the current solution vector , which has a total cost of .
Appendix E Computational Trial Data Tables
This section contains the full data tables for all trials described in Section 3.2. The raw data summarized in these tables can be viewed online [13]. Tables 2 and 3 show the results of the LP relaxation trials for Structured-Type and Unstructured-Type problems, respectively. Within each table, columns correspond to different percentages of sink nodes (for Structured-Type) or arcs (for Unstructured-Type) acting as parents, while rows correspond to different network densities. Each table entry gives the mean and standard deviation of the relative error over all successful trials of the given network type, as well as the number of trials on which the statistics are based. All tables throughout this section are organized in the same format.
Tables 4–7 show the results of the , , , and RR-Fair trials for Structured-Type problems, respectively, and correspond to Figure 1. All results displayed are based on the full set of 60 trials, except for the single failed case of for 512 nodes and 10% of sinks interdependent. Tables 8–11 show the failure rates of the , , , and RR-Fair trials for Unstructured-Type problems, respectively, in terms of the percentage of the 60 trials in each category that failed to reach a feasible solution within 1000 attempts. Tables 12–15 show the relative errors of the , , , and RR-Fair trials for Unstructured-Type problems, respectively, limited to the successful trials only. Tables 8–11 correspond to Figure 2(a), while Tables 12–15 correspond to Figure 2(b).
Table 2: Structured-Type, LP (relative errors)

| Nodes | Arcs/Node | Statistic | 2% | 5% | 10% | 15% |
|---|---|---|---|---|---|---|
| 256 | 4 | Mean | 0.03218% | 0.09867% | 0.08198% | 0.06282% |
| | | Std. Dev. | 0.09196% | 0.29471% | 0.20290% | 0.12985% |
| | | Trials | 60 | 60 | 60 | 60 |
| 256 | 8 | Mean | 0.02036% | 0.03139% | 0.06379% | 0.05054% |
| | | Std. Dev. | 0.06776% | 0.13905% | 0.24593% | 0.14242% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 4 | Mean | 0.01396% | 0.06195% | 0.07669% | 0.13231% |
| | | Std. Dev. | 0.03793% | 0.11006% | 0.10339% | 0.18200% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 8 | Mean | 0.00692% | 0.00984% | 0.03386% | 0.02207% |
| | | Std. Dev. | 0.02401% | 0.03239% | 0.06100% | 0.06443% |
| | | Trials | 60 | 60 | 60 | 60 |
Table 3: Unstructured-Type, LP (relative errors)

| Nodes | Arcs/Node | Statistic | 1% | 2% | 5% | 10% |
|---|---|---|---|---|---|---|
| 256 | 4 | Mean | 0.12551% | 0.33397% | 0.40693% | 1.60084% |
| | | Std. Dev. | 0.28373% | 0.46781% | 0.57130% | 1.48633% |
| | | Trials | 60 | 60 | 60 | 60 |
| 256 | 8 | Mean | 0.08949% | 0.13669% | 0.28389% | 0.71868% |
| | | Std. Dev. | 0.29966% | 0.31658% | 0.46065% | 1.06470% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 4 | Mean | 0.11005% | 0.28175% | 0.41681% | 1.43813% |
| | | Std. Dev. | 0.24153% | 0.36669% | 0.40287% | 0.92611% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 8 | Mean | 0.05810% | 0.12283% | 0.33781% | 0.83951% |
| | | Std. Dev. | 0.19562% | 0.33859% | 0.51161% | 0.96077% |
| | | Trials | 60 | 60 | 60 | 60 |
Table 4: Structured-Type, (relative errors)

| Nodes | Arcs/Node | Statistic | 2% | 5% | 10% | 15% |
|---|---|---|---|---|---|---|
| 256 | 4 | Mean | 0.08396% | 0.12902% | 0.30359% | 0.47040% |
| | | Std. Dev. | 0.45237% | 0.30627% | 0.69299% | 0.91533% |
| | | Trials | 60 | 60 | 60 | 60 |
| 256 | 8 | Mean | 0.04560% | 0.05072% | 0.12079% | 0.20614% |
| | | Std. Dev. | 0.20377% | 0.19954% | 0.39895% | 0.64658% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 4 | Mean | 0.11863% | 0.15723% | 0.22745% | 0.53016% |
| | | Std. Dev. | 0.33121% | 0.36941% | 0.42859% | 0.67835% |
| | | Trials | 60 | 60 | 59 | 60 |
| 512 | 8 | Mean | 0.05038% | 0.09224% | 0.20583% | 0.48200% |
| | | Std. Dev. | 0.14546% | 0.29561% | 0.47322% | 0.70649% |
| | | Trials | 60 | 60 | 60 | 60 |
Table 5: Structured-Type, (relative errors)

| Nodes | Arcs/Node | Statistic | 2% | 5% | 10% | 15% |
|---|---|---|---|---|---|---|
| 256 | 4 | Mean | 0.15995% | 0.17999% | 0.42908% | 0.51373% |
| | | Std. Dev. | 0.73355% | 0.50584% | 1.06876% | 0.93875% |
| | | Trials | 60 | 60 | 60 | 60 |
| 256 | 8 | Mean | 0.08235% | 0.05400% | 0.16980% | 0.20614% |
| | | Std. Dev. | 0.34519% | 0.20031% | 0.53964% | 0.64658% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 4 | Mean | 0.12555% | 0.16074% | 0.24860% | 0.61767% |
| | | Std. Dev. | 0.33302% | 0.36889% | 0.43587% | 0.89859% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 8 | Mean | 0.08089% | 0.10822% | 0.22707% | 0.51905% |
| | | Std. Dev. | 0.25132% | 0.32182% | 0.47803% | 0.75221% |
| | | Trials | 60 | 60 | 60 | 60 |
Table 6: Structured-Type, (relative errors)

| Nodes | Arcs/Node | Statistic | 2% | 5% | 10% | 15% |
|---|---|---|---|---|---|---|
| 256 | 4 | Mean | 0.15995% | 0.27764% | 0.83326% | 0.99746% |
| | | Std. Dev. | 0.73355% | 0.70453% | 1.78568% | 1.82337% |
| | | Trials | 60 | 60 | 60 | 60 |
| 256 | 8 | Mean | 0.17594% | 0.14096% | 0.24590% | 0.26612% |
| | | Std. Dev. | 0.64976% | 0.53061% | 0.62247% | 0.70883% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 4 | Mean | 0.29846% | 0.32916% | 0.51934% | 1.02361% |
| | | Std. Dev. | 0.63220% | 0.63879% | 0.77623% | 1.32727% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 8 | Mean | 0.15358% | 0.17735% | 0.40364% | 0.63427% |
| | | Std. Dev. | 0.39285% | 0.43903% | 0.63889% | 0.89402% |
| | | Trials | 60 | 60 | 60 | 60 |
Table 7: Structured-Type, RR-Fair (relative errors)

| Nodes | Arcs/Node | Statistic | 2% | 5% | 10% | 15% |
|---|---|---|---|---|---|---|
| 256 | 4 | Mean | 1.69196% | 1.68475% | 3.71021% | 5.60403% |
| | | Std. Dev. | 2.51891% | 1.88534% | 3.31362% | 4.90118% |
| | | Trials | 60 | 60 | 60 | 60 |
| 256 | 8 | Mean | 1.30573% | 1.33120% | 2.03500% | 2.76798% |
| | | Std. Dev. | 2.21282% | 1.83162% | 2.67148% | 3.29595% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 4 | Mean | 0.93481% | 1.80408% | 3.24969% | 5.04507% |
| | | Std. Dev. | 0.98142% | 1.50691% | 2.20683% | 3.20177% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 8 | Mean | 0.92729% | 1.27037% | 1.27321% | 1.92335% |
| | | Std. Dev. | 1.00769% | 1.40316% | 1.14424% | 1.81029% |
| | | Trials | 60 | 60 | 60 | 60 |
Table 8: Unstructured-Type, (failure rates)

| Nodes | Arcs/Node | 1% | 2% | 5% | 10% |
|---|---|---|---|---|---|
| 256 | 4 | 3.3% | 1.7% | 0.0% | 8.3% |
| 256 | 8 | 0.0% | 0.0% | 0.0% | 5.0% |
| 512 | 4 | 0.0% | 1.7% | 6.7% | 15.0% |
| 512 | 8 | 1.7% | 1.7% | 1.7% | 1.7% |
Table 9: Unstructured-Type, (failure rates)

| Nodes | Arcs/Node | 1% | 2% | 5% | 10% |
|---|---|---|---|---|---|
| 256 | 4 | 3.3% | 0.0% | 0.0% | 3.3% |
| 256 | 8 | 0.0% | 0.0% | 0.0% | 0.0% |
| 512 | 4 | 0.0% | 0.0% | 0.0% | 5.0% |
| 512 | 8 | 1.7% | 0.0% | 0.0% | 1.7% |
Table 10: Unstructured-Type, (failure rates)

| Nodes | Arcs/Node | 1% | 2% | 5% | 10% |
|---|---|---|---|---|---|
| 256 | 4 | 3.3% | 0.0% | 0.0% | 1.7% |
| 256 | 8 | 0.0% | 0.0% | 0.0% | 0.0% |
| 512 | 4 | 0.0% | 0.0% | 0.0% | 1.7% |
| 512 | 8 | 1.7% | 0.0% | 0.0% | 1.7% |
Table 11: Unstructured-Type, RR-Fair (failure rates)

| Nodes | Arcs/Node | 1% | 2% | 5% | 10% |
|---|---|---|---|---|---|
| 256 | 4 | 3.3% | 0.0% | 11.7% | 96.7% |
| 256 | 8 | 0.0% | 8.3% | 86.7% | 100.0% |
| 512 | 4 | 0.0% | 11.7% | 91.7% | 100.0% |
| 512 | 8 | 11.7% | 71.7% | 100.0% | 100.0% |
Table 12: Unstructured-Type, (relative errors)

| Nodes | Arcs/Node | Statistic | 1% | 2% | 5% | 10% |
|---|---|---|---|---|---|---|
| 256 | 4 | Mean | 0.08042% | 0.09553% | 0.25293% | 0.78186% |
| | | Std. Dev. | 0.28710% | 0.36233% | 0.52748% | 1.10045% |
| | | Trials | 58 | 59 | 60 | 55 |
| 256 | 8 | Mean | 0.01565% | 0.08292% | 0.32456% | 0.17906% |
| | | Std. Dev. | 0.07513% | 0.34117% | 1.29113% | 0.49942% |
| | | Trials | 60 | 60 | 60 | 57 |
| 512 | 4 | Mean | 0.07700% | 0.10265% | 0.20542% | 0.49078% |
| | | Std. Dev. | 0.26175% | 0.25362% | 0.45111% | 0.57372% |
| | | Trials | 60 | 59 | 56 | 51 |
| 512 | 8 | Mean | 0.45111% | 0.09384% | 0.09531% | 0.34873% |
| | | Std. Dev. | 0.08387% | 0.30559% | 0.21521% | 0.60217% |
| | | Trials | 59 | 59 | 59 | 59 |
Table 13: Unstructured-Type, (relative errors)

| Nodes | Arcs/Node | Statistic | 1% | 2% | 5% | 10% |
|---|---|---|---|---|---|---|
| 256 | 4 | Mean | 0.14013% | 0.27409% | 0.52975% | 1.50022% |
| | | Std. Dev. | 0.52865% | 0.72796% | 1.22847% | 2.16678% |
| | | Trials | 58 | 60 | 60 | 58 |
| 256 | 8 | Mean | 0.61139% | 1.09969% | 2.04546% | 3.54538% |
| | | Std. Dev. | 1.79169% | 2.93195% | 4.06981% | 5.15958% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 4 | Mean | 0.14450% | 0.35749% | 0.51950% | 1.65035% |
| | | Std. Dev. | 0.38393% | 0.92922% | 1.03785% | 2.41296% |
| | | Trials | 60 | 60 | 60 | 57 |
| 512 | 8 | Mean | 0.49411% | 0.71963% | 1.44727% | 4.06781% |
| | | Std. Dev. | 1.42606% | 1.38286% | 2.13480% | 3.92718% |
| | | Trials | 59 | 60 | 60 | 59 |
Table 14: Unstructured-Type, (relative errors)

| Nodes | Arcs/Node | Statistic | 1% | 2% | 5% | 10% |
|---|---|---|---|---|---|---|
| 256 | 4 | Mean | 0.57540% | 1.33160% | 2.41167% | 5.33097% |
| | | Std. Dev. | 1.44484% | 2.25426% | 3.64680% | 5.02341% |
| | | Trials | 58 | 60 | 60 | 59 |
| 256 | 8 | Mean | 1.72941% | 3.44290% | 8.96035% | 16.80720% |
| | | Std. Dev. | 2.91170% | 4.67988% | 8.84133% | 10.32456% |
| | | Trials | 60 | 60 | 60 | 60 |
| 512 | 4 | Mean | 0.39119% | 1.13194% | 2.18552% | 4.94240% |
| | | Std. Dev. | 0.85671% | 1.66150% | 2.41066% | 3.47922% |
| | | Trials | 60 | 60 | 60 | 59 |
| 512 | 8 | Mean | 1.57806% | 3.87011% | 9.29088% | 15.76693% |
| | | Std. Dev. | 2.56120% | 3.59570% | 6.51458% | 8.10469% |
| | | Trials | 59 | 60 | 60 | 59 |
Table 15: Unstructured-Type, RR-Fair (relative errors)

| Nodes | Arcs/Node | Statistic | 1% | 2% | 5% | 10% |
|---|---|---|---|---|---|---|
| 256 | 4 | Mean | 6.25055% | 12.06161% | 22.86896% | 57.94037% |
| | | Std. Dev. | 6.03445% | 7.11377% | 8.44746% | 0.78205% |
| | | Trials | 58 | 60 | 53 | 2 |
| 256 | 8 | Mean | 19.62852% | 38.37295% | 83.90358% | – |
| | | Std. Dev. | 11.38919% | 18.65668% | 15.18412% | – |
| | | Trials | 60 | 55 | 8 | 0 |
| 512 | 4 | Mean | 6.29454% | 10.59501% | 22.63133% | – |
| | | Std. Dev. | 3.56471% | 5.17104% | 8.17028% | – |
| | | Trials | 60 | 53 | 5 | 0 |
| 512 | 8 | Mean | 19.80782% | 37.17727% | – | – |
| | | Std. Dev. | 7.31611% | 11.84658% | – | – |
| | | Trials | 53 | 17 | 0 | 0 |