
University of Arizona, Tucson, United States · University of Michigan, Ann Arbor, United States · University of Arizona, Tucson, United States · University of Texas at Arlington, Arlington, United States · University of Arizona, Tucson, United States · University of Arizona, Tucson, United States
Copyright Reyan Ahmed, Greg Bodwin, Faryad Darabi Sahneh, Keaton Hamm, Stephen Kobourov, and Richard Spence
CCS Concepts: Theory of computation → Design and analysis of algorithms
Supplement: All algorithms, implementations, the ILP solver, experimental data and analysis are available on GitHub at https://github.com/abureyanahmed/multi_level_weighted_additive_spanners.

Acknowledgements.
The research for this paper was partially supported by NSF grants CCF-1740858, CCF-1712119, and DMS-1839274.

Multi-level Weighted Additive Spanners
(This paper has been accepted in the 19th Symposium on Experimental Algorithms, SEA 2021.)

Reyan Ahmed    Greg Bodwin    Faryad Darabi Sahneh    Keaton Hamm    Stephen Kobourov    Richard Spence
Abstract

Given a graph $G=(V,E)$, a subgraph $H$ is an additive $+\beta$ spanner if $\operatorname{dist}_{H}(u,v)\leq\operatorname{dist}_{G}(u,v)+\beta$ for all $u,v\in V$. A pairwise spanner is a spanner for which the above inequality only must hold for specific pairs $P\subseteq V\times V$ given on input, and when the pairs have the structure $P=S\times S$ for some subset $S\subseteq V$, it is specifically called a subsetwise spanner. Spanners in unweighted graphs have been studied extensively in the literature, but have only recently been generalized to weighted graphs.

In this paper, we consider a multi-level version of the subsetwise spanner in weighted graphs, where the vertices in $S$ possess varying level, priority, or quality of service (QoS) requirements, and the goal is to compute a nested sequence of spanners with the minimum total number of edges. We first generalize the $+2$ subsetwise spanner of [Pettie 2008, Cygan et al., 2013] to the weighted setting. We experimentally measure the performance of this and several other algorithms for weighted additive spanners, both in terms of runtime and sparsity of the output spanner, when applied at each level of the multi-level problem. Spanner sparsity is compared to the sparsest possible spanner satisfying the given error budget, obtained using an integer programming formulation of the problem. We run our experiments on input graphs generated by several different random graph generators: Erdős–Rényi, Watts–Strogatz, Barabási–Albert, and random geometric models. Based on the analysis of our experimental results, we propose a technique of varying an initialization parameter that yields better performance in practice.

keywords:
multi-level, graph spanner, approximation algorithms

1 Introduction

This paper studies spanners of undirected input graphs with edge weights. Given an input graph, a spanner is a sparse subgraph with approximately the same distance metric as the original graph. Spanners are used as a primitive for many algorithmic tasks involving the analysis of distances or shortest paths in enormous input graphs; it is often advantageous to first replace the graph with a spanner, which can be analyzed much more quickly and stored in much smaller space, at the price of a small amount of error. See the recent survey [5] for more details on these applications.

Spanners were first studied in the setting of multiplicative error, where for an input graph $G$ and an error (“stretch”) parameter $k$, the spanner $H$ must satisfy $\operatorname{dist}_{H}(s,t)\leq k\cdot\operatorname{dist}_{G}(s,t)$ for all vertices $s,t$. This setting was quickly resolved in a seminal paper by Althöfer, Das, Dobkin, Joseph, and Soares [8], where the authors proved that for all positive integers $k$, all $n$-vertex graphs have spanners on $O(n^{1+1/k})$ edges with stretch $2k-1$, and that this tradeoff is best possible. Thus, as expected, one can trade off error for spanner sparsity, increasing the stretch $k$ to pay more and more error for sparser and sparser spanners.

For very large graphs, purely additive error is arguably a much more appealing paradigm. For a constant $c>0$, a $+c$ spanner of an $n$-vertex graph $G$ is a subgraph $H$ such that $\operatorname{dist}_{H}(s,t)\leq\operatorname{dist}_{G}(s,t)+c$ for all vertices $s,t$. Thus, for additive error the excess distance in $H$ is independent of the graph size and of $\operatorname{dist}_{G}(s,t)$, which can be large when $n$ is large. Additive spanners were introduced by Liestman and Shermer [28], and followed by three landmark theoretical results on the sparsity of additive spanners in unweighted graphs: Aingworth, Chekuri, Indyk, and Motwani [7] showed that all graphs have $+2$ spanners on $O(n^{3/2})$ edges, Chechik [17, 13] showed that all graphs have $+4$ spanners on $O(n^{7/5})$ edges, and Baswana, Kavitha, Mehlhorn, and Pettie [11] showed that all graphs have $+6$ spanners on $O(n^{4/3})$ edges.

Despite the inherent appeal of additive error, spanners with multiplicative error remain much more commonly used in practice. There are two reasons for this.

  1.

    First, while the multiplicative spanner of Althöfer et al. [8] works without issue for weighted graphs, the previous additive spanner constructions hold only for unweighted graphs, whereas the metrics that arise in applications often require edge weights. Addressing this, recent work of the authors [3] and two papers of Elkin, Gitlitz, and Neiman [22, 23] gave natural extensions of the classic additive spanner constructions to weighted graphs. For example, the $+2$ spanner bound becomes the following statement: for all $n$-vertex weighted graphs $G$, there is a subgraph $H$ satisfying $\operatorname{dist}_{H}(s,t)\leq\operatorname{dist}_{G}(s,t)+2W(s,t)$, where $W(s,t)$ denotes the maximum edge weight along an arbitrary $s\leadsto t$ shortest path in $G$. The $+4$ spanner generalizes similarly, and the $+6$ spanner does as well, with the small exception that the error increases to $+(6+\varepsilon)W(s,t)$ for arbitrarily small $\varepsilon>0$, which trades off with the implicit constant in the spanner size.

  2.

    Second, $\text{poly}(n)$ factors in spanner size can be quite serious in large graphs, and so applications often require spanners of near-linear size, say $O(n^{1.01})$ edges for an $n$-vertex input graph. The worst-case spanner sizes of $O(n^{4/3})$ or greater for additive spanner constructions are thus undesirable, and unfortunately, there is a theoretical barrier to improving them: Abboud and Bodwin [1] proved that one cannot generally trade off more additive error for sparser spanners, as one can in the multiplicative setting. Specifically, for any constant $c>0$, there is no general construction of $+c$ spanners for $n$-node input graphs on $O(n^{4/3-0.001})$ edges. However, the lower bound construction is rather pathological, and it is not likely to arise in practice. It is known that for many practical graph classes, e.g., those with good expansion, near-linear additive spanners always exist [11]. Thus, towards applications of additive error, it is currently an important open question whether modern additive spanner constructions on practical graphs of interest tend to exhibit performance closer to the worst-case bounds from [1], or bounds closer to the best ones available for the given input graphs. (We remark here that there are strong computational barriers to designing algorithms that achieve the sparsest possible $+c$ spanners directly, or which even closely approximate this quantity in general [18].)

The goal of this work is to address the second point by measuring the experimental performance of the state-of-the-art constructions for weighted additive spanners on graphs generated from various popular random models. We consider both $+cW$ spanners (where $W=\max_{uv\in E}w(uv)$ is the maximum edge weight) and $+cW(\cdot,\cdot)$ spanners, whose additive error is $+cW(s,t)$ for each pair $s,t\in V$. We are interested both in runtime and in the ratio of output spanner size to the size of the sparsest possible spanner (which we obtain using an ILP, discussed in Section 5). We specifically consider generalizations of the three staple constructions for weighted additive spanners mentioned above, in which the spanner distance constraint only needs to be satisfied for given pairs of vertices.
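To make the local error term concrete, the following minimal sketch (our illustration, not code from the paper's repository; it assumes a NetworkX-style graph with "weight" attributes, and the helper name local_weight is ours) computes $W(s,t)$ as the maximum edge weight along one shortest $s\leadsto t$ path:

```python
import networkx as nx

def local_weight(G, s, t):
    """Max edge weight along one shortest s-t path: the W(s, t) term in the +cW(.,.) error."""
    path = nx.shortest_path(G, s, t, weight="weight")              # one shortest s-t path
    return max(G[u][v]["weight"] for u, v in zip(path, path[1:]))

# A +cW(.,.) spanner H then promises dist_H(s, t) <= dist_G(s, t) + c * local_weight(G, s, t).
```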

In particular, the following extensions are considered. A pairwise spanner is a subgraph that must satisfy the spanner error inequality for a given set of vertex pairs $P$ taken on input, and a subsetwise spanner is a pairwise spanner with the additional structure $P=S\times S$ for some vertex subset $S$. See [31, 21, 27, 26, 12, 20, 14, 15] for recent prior work on pairwise and subsetwise spanners. We also discuss a multi-level version of the subsetwise additive spanner problem where we have an edge-weighted graph $G=(V,E)$, a nested sequence of terminals $S_{\ell}\subseteq S_{\ell-1}\subseteq\cdots\subseteq S_{1}\subseteq V$, and a real number $c\geq 0$ as input. We want to compute a nested sequence of subgraphs $G_{\ell}\subseteq G_{\ell-1}\subseteq\cdots\subseteq G_{1}$ such that $G_{i}$ is a $+cW$ subsetwise spanner of $G$ over $S_{i}$. The objective is to minimize the total number of edges in all subgraphs. Similar generalizations have been studied for the Steiner tree problem under various names including Multi-level Network Design [9], Quality of Service Multicast Tree (QoSMT) [16, 25], Priority Steiner Tree [19], Multi-Tier Tree [29], and Multi-level Steiner Tree [2, 6]. However, multi-level or QoS generalizations of spanner problems appear to have been much less studied in the literature. Section 2 generalizes the $+2$ subsetwise construction [21], and Section 4 generalizes to the multi-level setting.

2 Subsetwise spanners

All unweighted graphs have polynomially constructible $+2$ subsetwise spanners over $S\subseteq V$ on $O(n\sqrt{|S|})$ edges [31, 21]. For weighted graphs, Ahmed et al. [3] recently gave a $+4W$ subsetwise spanner construction, also using $O(n\sqrt{|S|})$ edges. In this section we show how to generalize the $+2$ subsetwise construction [31, 21] to the weighted setting by giving a construction which produces a subsetwise $+2W$ spanner of a weighted graph (with integer edge weights in $[1,W]$) on $O(nW\sqrt{|S|})$ edges. Due to space constraints, we omit most proof details here and describe the construction instead.

A clustering $\mathcal{C}=\{C_{1},C_{2},\ldots,C_{q}\}$ is a set of disjoint subsets of vertices. Initially, every vertex is unclustered. The subsetwise $+2W$ construction has two steps: the clustering phase and the path-buying phase. The clustering phase is exactly the same as that of [31, 21], in which we construct a cluster subgraph $G_{\mathcal{C}}$ as follows: set $\beta=\log_{n}\sqrt{|S|W}$, and while there is a vertex $v$ with at least $\lceil n^{\beta}\rceil$ unclustered neighbors, we add a cluster $C$ to $\mathcal{C}$ containing exactly $\lceil n^{\beta}\rceil$ unclustered neighbors of $v$ (note that $v\not\in C$). We add to $G_{\mathcal{C}}$ all edges $vx$ ($x\in C$) and $xy$ ($x,y\in C$). When there are no more vertices with at least $\lceil n^{\beta}\rceil$ unclustered neighbors, we add all the unclustered vertices and their incident edges to $G_{\mathcal{C}}$.
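The following sketch illustrates the clustering phase under the stated assumptions (a NetworkX graph with integer "weight" attributes); the function name cluster_subgraph is our illustrative choice, and this is not the implementation from the paper's repository. It uses $\lceil n^{\beta}\rceil=\lceil\sqrt{|S|W}\rceil$ directly.

```python
import math
import networkx as nx

def cluster_subgraph(G, S, W):
    """Clustering phase sketch: repeatedly cluster ceil(sqrt(|S|*W)) unclustered
    neighbors of some vertex; leftover vertices keep all their incident edges."""
    size = math.ceil(math.sqrt(len(S) * W))          # ceil(n^beta) with beta = log_n sqrt(|S| W)
    G_C = nx.Graph()
    G_C.add_nodes_from(G.nodes())
    clusters, clustered = [], set()
    progress = True
    while progress:
        progress = False
        for v in G.nodes():
            free = [u for u in G.neighbors(v) if u not in clustered][:size]
            if len(free) < size:
                continue
            clusters.append(set(free))               # note: v itself is not in the cluster
            clustered.update(free)
            G_C.add_weighted_edges_from((v, x, G[v][x]["weight"]) for x in free)
            G_C.add_weighted_edges_from((x, y, G[x][y]["weight"])
                                        for x in free for y in free if G.has_edge(x, y))
            progress = True
    for v in G.nodes():                              # unclustered vertices keep all incident edges
        if v not in clustered:
            G_C.add_weighted_edges_from((v, u, G[v][u]["weight"]) for u in G.neighbors(v))
    return G_C, clusters
```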

In the second (path-buying) phase, we start with a clustering $\mathcal{C}$ and a cluster subgraph $G_{0}:=G_{\mathcal{C}}$. There are $z:=\binom{|S|}{2}$ unordered pairs of vertices in $S$; let $\pi_{1},\pi_{2},\ldots,\pi_{z}$ denote the shortest paths between these vertex pairs, where $\pi_{i}=\pi(u_{i},v_{i})$ has endpoints $\{u_{i},v_{i}\}$. As in [21], we iterate from $i=1$ to $i=z$ and determine whether to add path $\pi_{i}$ to the spanner. Define the cost and value of a path $\pi_{i}$ as follows:

\begin{align*}
\text{cost}(\pi_{i}) &:= \#\text{ edges of }\pi_{i}\text{ which are absent in }G_{i-1}\\
\text{value}(\pi_{i}) &:= \#\text{ pairs }(x,C)\text{ where }x\in\{u_{i},v_{i}\},\ C\in\mathcal{C},\ C\text{ contains at least one vertex in }\pi_{i},\\
&\phantom{:=\ }\text{and }\operatorname{dist}_{\pi_{i}}(x,C)<\operatorname{dist}_{G_{i-1}}(x,C)
\end{align*}

If $\text{cost}(\pi_{i})\leq(2W+1)\,\text{value}(\pi_{i})$, then we add (“buy”) $\pi_{i}$ to the spanner by letting $G_{i}=G_{i-1}\cup\pi_{i}$. Otherwise, we do not add $\pi_{i}$, and let $G_{i}=G_{i-1}$. The final spanner returned is $H=G_{z}$.
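Continuing the sketch above, the path-buying phase can be written as follows. This is again our illustrative code (the name path_buying is ours), reusing the cluster_subgraph output and NetworkX shortest paths; it buys $\pi_{i}$ whenever $\text{cost}(\pi_{i})\leq(2W+1)\,\text{value}(\pi_{i})$.

```python
import itertools
import networkx as nx

def path_buying(G, S, W, G_C, clusters):
    """Path-buying phase sketch for the +2W subsetwise spanner."""
    H = G_C.copy()
    for u, v in itertools.combinations(S, 2):
        pi = nx.shortest_path(G, u, v, weight="weight")
        edges = list(zip(pi, pi[1:]))
        cost = sum(1 for a, b in edges if not H.has_edge(a, b))
        # prefix distances along pi, so dist_pi(u, .) and dist_pi(v, .) are easy to read off
        pref = [0]
        for a, b in edges:
            pref.append(pref[-1] + G[a][b]["weight"])
        along = {u: {w: pref[i] for i, w in enumerate(pi)},
                 v: {w: pref[-1] - pref[i] for i, w in enumerate(pi)}}
        value = 0
        for x in (u, v):
            dist_H = nx.single_source_dijkstra_path_length(H, x, weight="weight")
            for C in clusters:
                hit = C.intersection(pi)                 # clusters touching pi
                if not hit:
                    continue
                d_pi = min(along[x][w] for w in hit)
                d_H = min(dist_H.get(w, float("inf")) for w in C)
                if d_pi < d_H:                           # pi improves the distance from x to C
                    value += 1
        if cost <= (2 * W + 1) * value:                  # buy the path
            H.add_weighted_edges_from((a, b, G[a][b]["weight"]) for a, b in edges)
    return H
```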

Lemma 2.1.

For any $u_{i},v_{i}\in S$, we have $\operatorname{dist}_{H}(u_{i},v_{i})\leq\operatorname{dist}_{G}(u_{i},v_{i})+2W$.

Proof 2.2 (Proof sketch.).

The proof is largely the same as in [21] except with one main difference: in the unweighted case, the distance between any two points within the same cluster is at most 2, and it is shown in [21] that, using this property, if there are $t$ edges in $\pi(u,v)$ missing from $G_{\mathcal{C}}$, then there are at least $\frac{t}{2}$ clusters containing at least one vertex on $\pi(u,v)$. In the weighted case, assuming $W$ is constant, there are $\Omega(t)$ clusters of $\mathcal{C}$ which contain at least one vertex on $\pi(u,v)$.

Corollary 2.3.

Let $G$ be a weighted graph with integer edge weights in $[1,W]$. Then $G$ has a $+6W$ pairwise spanner on $O(Wn|P|^{1/4})$ edges.

This follows from applying the $+8W$ construction of Ahmed et al. [3] (Appendix 3, Algorithm 3), except we use the above $+2W$ subsetwise spanner instead of the $+4W$ subsetwise spanner construction given in [3] as a subroutine.

3 Pairwise spanner constructions [3]

Here, we provide pseudocode (Algorithms 1–3) describing the $+2W$, $+4W$, and $+8W$ pairwise spanner constructions by Ahmed et al. [3]. (Using a tighter analysis, or the above $+2W$ subsetwise construction in place of the $+4W$ construction in Algorithm 3, the additive error can be improved to $+2W(\cdot,\cdot)$, $+4W(\cdot,\cdot)$, and $+6W$ for integer edge weights.) These spanner constructions have a similar theme: first, construct a $d$-light initialization, which is a subgraph $H$ obtained by adding the $d$ lightest edges incident to each vertex (or all edges if the degree is at most $d$). Then for each pair $(s,t)\in P$, consider the number of edges in $\pi(s,t)$ which are absent in the current subgraph $H$. Add $\pi(s,t)$ to $H$ if the number of missing edges is at most some threshold $\ell$, or otherwise randomly sample vertices and either add a shortest path tree rooted at these vertices, or construct a subsetwise spanner among them.
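As a concrete illustration of the shared first step, here is a minimal sketch of a $d$-light initialization (our code, assuming a NetworkX graph with "weight" attributes; the function name is ours):

```python
import networkx as nx

def d_light_initialization(G, d):
    """Keep the d lightest edges incident to each vertex (all of them if deg(v) <= d)."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for v in G.nodes():
        incident = sorted(G.edges(v, data="weight"), key=lambda e: e[2])  # lightest first
        H.add_weighted_edges_from(incident[:int(d)])
    return H
```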

Algorithm 1: $+2W$ pairwise spanner [3]

$d=|P|^{1/3}$, $\ell=n/|P|^{2/3}$
$H = d$-light initialization
let $m'$ be the number of missing edges needed for a valid construction
while $m' > nd$ do
    for $(s,t)\in P$ do
        $x=|E(\pi(s,t))\setminus E(H)|$
        if $x\leq\ell$ then
            add $\pi(s,t)$ to $H$
    $R=$ random sample of vertices, each with probability $1/(\ell d)$
    for $r\in R$ do
        add a shortest path tree rooted at $r$ to each vertex
add the $m'$ missing edges
return $H$
Algorithm 2: $+4W$ pairwise spanner [3]

$d=|P|^{2/7}$, $\ell=n/|P|^{5/7}$
$H = d$-light initialization
let $m'$ be the number of missing edges needed for a valid construction
while $m' > nd$ do
    for $(s,t)\in P$ do
        $x=|E(\pi(s,t))\setminus E(H)|$
        if $x\leq\ell$ then
            add $\pi(s,t)$ to $H$
        else if $x\geq n/d^{2}$ then
            $R_{1}=$ random sample of vertices, each w.p. $d^{2}/n$
            add a shortest path tree rooted at each $r\in R_{1}$
        else
            add first $\ell$ and last $\ell$ missing edges of $\pi(s,t)$ to $H$
            $R_{2}=$ i.i.d. sample of vertices, w.p. $1/(\ell d)$
            for each $r,r'\in R_{2}$ do
                if there exists an $r\to r'$ path missing $\leq n/d^{2}$ edges then
                    add to $H$ a shortest $r\to r'$ path among paths missing $\leq n/d^{2}$ edges
add the $m'$ missing edges
return $H$
Algorithm 3: $+8W$ pairwise spanner [3]

$d=|P|^{1/4}$, $\ell=n/|P|^{3/4}$
$H = d$-light initialization
let $m'$ be the number of missing edges needed for a valid construction
while $m' > nd$ do
    for $(s,t)\in P$ do
        $x=|E(\pi(s,t))\setminus E(H)|$
        if $x\leq\ell$ then
            add $\pi(s,t)$ to $H$
        else
            add first $\ell$ and last $\ell$ missing edges of $\pi(s,t)$ to $H$
            $R=$ random sample of vertices, each w.p. $1/(\ell d)$
            $H'=$ $+4W$ subsetwise $(R\times R)$-spanner [3]
            add $H'$ to $H$
add the $m'$ missing edges
return $H$

4 Multi-level spanners

Here we study a multi-level variant of graph spanners. We first define the problem:

Definition 4.1 (Multi-level weighted additive spanner).

Given a weighted graph $G=(V,E)$ with maximum edge weight $W$, a nested sequence of subsets of vertices $S_{\ell}\subseteq S_{\ell-1}\subseteq\ldots\subseteq S_{1}\subseteq V$, and $c\geq 0$, a multi-level additive spanner is a nested sequence of subgraphs $G_{\ell}\subseteq G_{\ell-1}\subseteq\ldots\subseteq G_{1}\subseteq G$, where $G_{i}$ is a subsetwise $+cW$ spanner over $S_{i}$.

Observe that Definition 4.1 generalizes the subsetwise spanner, which is the special case where $\ell=1$. We measure the size of a multi-level spanner by its sparsity, defined by $\text{sparsity}(\{G_{i}\}_{i=1}^{\ell}):=\sum_{i=1}^{\ell}|E(G_{i})|$.

The problem can equivalently be phrased in terms of priorities and rates: each vertex $v\in S_{1}$ has a priority $P(v)$ between 1 and $\ell$ (namely, $P(v)=\max\{i:v\in S_{i}\}$), and we wish to compute a single subgraph containing edges of different rates such that for all $u,v\in S_{1}$, there is a $+cW$ spanner path consisting of edges of rate at least $\min\{P(u),P(v)\}$. With this, we will typically refer to the priority of $v$ to denote the highest $i$ such that $v\in S_{i}$, or 0 if $v\not\in S_{1}$. In this section, we show that the multi-level version is not significantly harder than the ordinary “single-level” version: a subroutine which can compute an additive spanner can be used to compute a multi-level spanner whose sparsity is comparably good. Let OPT denote the minimum sparsity over all candidate multi-level additive spanners.
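For instance, the translation from nested terminal sets to priorities is captured by the following short sketch (our illustration; the helper name priorities_from_levels is ours):

```python
def priorities_from_levels(levels):
    """levels = [S_1, S_2, ..., S_l] (nested); returns P(v) = max{i : v in S_i}."""
    P = {}
    for i, S_i in enumerate(levels, start=1):
        for v in S_i:
            P[v] = i          # nestedness means the last (highest) level containing v wins
    return P
```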

We first describe a simple rounding-up approach based on an algorithm by Charikar et al. [16] for the QoSMT problem, a similar generalization of the Steiner tree problem. For this approach, assume we have a subroutine which computes an exact or approximate single-level subsetwise spanner. Given $v\in S_{1}$, let $P(v)\in[1,\ell]$ denote the priority of $v$. The rounding-up approach is as follows: for each $v$, round $P(v)$ up to the nearest power of 2. This effectively constructs a “rounded-up” instance where all vertices in $S_{1}$ have priority either 1, 2, 4, …, or $2^{\lceil\log_{2}\ell\rceil}$. The sparsity of the optimum solution in the rounded-up instance is at most $2\,\text{OPT}$; given the optimum solution to the original instance with sparsity OPT, a feasible solution to the rounded-up instance with sparsity at most $2\,\text{OPT}$ can be obtained by rounding up the rate of each edge to the nearest power of 2.

For each $i\in\{1,2,4,\ldots,2^{\lceil\log_{2}\ell\rceil}\}$, use the subroutine to compute a level-$i$ subsetwise spanner over all vertices whose rounded-up priority is at least $i$. The final multi-level additive spanner is obtained by taking the union of these computed spanners, by keeping an edge at the highest level it appears in.
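A compact sketch of the whole rounding-up approach could look as follows; this is illustrative only, assumes any single-level subsetwise spanner subroutine (exact or approximate), and reuses priorities_from_levels from the sketch above.

```python
import math

def rounding_up_multilevel(G, levels, subsetwise_spanner):
    """Round priorities up to powers of two, run the single-level subroutine once per
    rounded level, and keep every edge at the highest level on which it appears."""
    P = priorities_from_levels(levels)                       # P(v) = max{i : v in S_i}
    rounded = {v: 2 ** math.ceil(math.log2(p)) for v, p in P.items()}
    l = len(levels)
    rate = {}                                                # edge -> highest level it appears on
    for i in (2 ** j for j in range(math.ceil(math.log2(l)) + 1)):
        S_i = {v for v, r in rounded.items() if r >= i}      # rounded-up priority at least i
        H_i = subsetwise_spanner(G, S_i)                     # single-level spanner over S_i
        for a, b in H_i.edges():
            e = frozenset((a, b))
            rate[e] = max(rate.get(e, 0), i)
    return rate   # the level-k spanner consists of the edges e with rate[e] >= k
```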

Theorem 4.2.

Assuming an exact subsetwise spanner subroutine, the solution computed by the rounding-up approach has sparsity at most $4\cdot\text{OPT}$.

Figure 1: (a) The rounding-up approach computes an optimal spanner at each level (assuming an exact subroutine), so the sizes of the spanners on each level are at most that of the optimal solution ($9+40$ edges vs. $12+40$). (b) However, when an edge is present in a top-level solution, it must be present in lower-level solutions. The rounding-up approach takes the union of the spanners in the bottom level; in this case, the sparsity of the rounded-up solution ($9+48$ vs. $12+40$) is greater than that of the optimum.

This is proved using the same ideas as the $4\rho$-approximation for QoSMT [16]. As mentioned earlier, in practice we use an approximation algorithm to compute the subsetwise spanner instead of computing the minimum spanner.

Theorem 4.3.

There exists an $\tilde{O}(n/\sqrt{|S_{1}|})$-approximation algorithm to compute multi-level weighted additive spanners with additive stretch $2W$ when $W=O(\log n)$.

This follows from using the $+2W$ subsetwise construction in Section 2. The approximation ratio of this subsetwise spanner algorithm is $O(nW/\sqrt{|S|})$, as the construction produces a spanner of size $O(nW\sqrt{|S|})$, while the sparsest additive spanner trivially has at least $|S|-1=\Omega(|S|)$ edges.

We now show that, under certain conditions, if we have a subroutine which computes a subsetwise spanner of $G$, $S$ of size $O(n^{a}|S|^{b})$ where $a$ and $b$ are absolute constants, a very naïve algorithm can be used to obtain a multi-level spanner also with sparsity $O(n^{a}|S_{1}|^{b})$.

Theorem 4.4.

Suppose there is an absolute constant $0<\alpha<1$ such that $|S_{i}|\leq\alpha|S_{i-1}|$ for all $i\in\{2,\ldots,\ell\}$. Then we can compute a multi-level spanner with sparsity $O(n^{a}|S_{1}|^{b})$.

Proof 4.5.

Consider the following simple construction: for each $i\in\{1,2,3,\ldots,\ell\}$, compute a level-$i$ subsetwise spanner of size $O(n^{a}|S_{i}|^{b})$. Consider the union of these spanners, by keeping each edge at the highest level it appears. The sparsity of the returned multi-level spanner is at most

\begin{align*}
\text{sparsity}(\{G_{i}\}) &= O(n^{a}|S_{1}|^{b}+2n^{a}|S_{2}|^{b}+3n^{a}|S_{3}|^{b}+\ldots+\ell n^{a}|S_{\ell}|^{b})\\
&\leq O\big(n^{a}|S_{1}|^{b}(1+2\alpha^{b}+3\alpha^{2b}+\ldots+\ell\alpha^{(\ell-1)b})\big)\\
&= O(n^{a}|S_{1}|^{b})
\end{align*}

where we used the arithmetico-geometric series $1+2(\alpha^{b})+3(\alpha^{b})^{2}+\ldots=\frac{1}{(1-\alpha^{b})^{2}}$, which is constant for fixed $\alpha$, $b$. Note that $0<\alpha<1$ and $b>0$, which implies $0<\alpha^{b}<1$.

The assumption that $|S_{i}|\leq\alpha|S_{i-1}|$ for some constant $\alpha$ is fairly natural, as many realistic networks tend to have significantly fewer hubs than non-hubs.

Corollary 4.6.

Under the assumption $|S_{i}|\leq\alpha|S_{i-1}|$ for all $i\in\{2,\ldots,\ell\}$, there exists a poly-time algorithm which computes a multi-level $+2$ spanner of sparsity $O(n\sqrt{|S_{1}|})$.

Proof 4.7.

This follows by using the $+2$ construction by Cygan et al. [21] on $O(n\sqrt{|S|})$ edges as the subroutine.

5 Exact algorithm

To compute a minimum-size additive spanner, we utilize a slight modification of the ILP in [5, Section 9], wherein we choose the specific distortion function $f(t)=t+cW$ and minimize the sparsity rather than the total weight of the spanner. For completeness, we present the full ILP for computing a single-level additive subsetwise spanner below, along with a brief description of the multi-level extension. Here $E'$ represents the bidirected edge set, obtained by adding directed edges $(u,v)$ and $(v,u)$ for each edge $uv\in E$. The binary variable $x_{(i,j)}^{uv}$ is 1 if edge $(i,j)$ is included on the selected $u$–$v$ path and 0 otherwise, and $w(e)$ is the weight of edge $e$.

\begin{align}
&\text{Minimize}\ \sum_{e\in E}x_{e}\ \text{ subject to} \\
&\sum_{(i,j)\in E'}x_{(i,j)}^{uv}\,w(e) \;\leq\; \operatorname{dist}_{G}(u,v)+cW && \forall (u,v)\in S\times S;\ e=ij \\
&\sum_{(i,j)\in Out(i)}x_{(i,j)}^{uv}-\sum_{(j,i)\in In(i)}x_{(j,i)}^{uv} \;=\; \begin{cases}1 & i=u\\ -1 & i=v\\ 0 & \text{else}\end{cases} && \forall (u,v)\in S\times S;\ \forall i\in V \\
&\sum_{(i,j)\in Out(i)}x_{(i,j)}^{uv} \;\leq\; 1 && \forall (u,v)\in S\times S;\ \forall i\in V \\
&x_{(i,j)}^{uv}+x_{(j,i)}^{uv} \;\leq\; x_{e} && \forall (u,v)\in S\times S;\ \forall e=\{i,j\}\in E \\
&x_{e},\ x_{(i,j)}^{uv} \;\in\; \{0,1\}
\end{align}

Inequalities (3)–(4) enforce that for each $u,v\in S$, the selected edges corresponding to $u$, $v$ form a path; inequality (2) enforces that the length of this path is at most $\operatorname{dist}_{G}(u,v)+cW$ (note that $W$ may be replaced with $W(u,v)$). Inequality (5) ensures that if $x_{(i,j)}^{uv}=1$ or $x_{(j,i)}^{uv}=1$, then edge $ij$ is taken.

To generalize the ILP formulation to the multi-level problem, we take a similar set of variables for every level. The rest of the constraints are similar, except we define $x_{e}^{k}=1$ if edge $e$ is present on level $k$, and the variables $x_{(i,j)}^{uv}$ are also indexed by level. We add the constraint $x_{e}^{k}\leq x_{e}^{k-1}$ for all $k\in\{2,\ldots,\ell\}$, which enforces that if edge $e$ is present on level $k$, it is also present on all lower levels. Finally, the objective is to minimize the sparsity $\sum_{k=1}^{\ell}\sum_{e\in E}x_{e}^{k}$.
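As an illustration only (the experiments use CPLEX; this sketch uses the open-source PuLP modeling layer with its default solver, assumes integer node labels, and all function and variable names are ours), the single-level ILP above can be written as:

```python
import itertools
import networkx as nx
import pulp

def subsetwise_spanner_ilp(G, S, c):
    """Single-level +cW subsetwise spanner ILP sketch, following constraints (1)-(6)."""
    W = max(w for _, _, w in G.edges(data="weight"))
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    arcs = [(i, j) for i, j in G.edges()] + [(j, i) for i, j in G.edges()]   # bidirected E'
    pairs = list(itertools.combinations(S, 2))
    prob = pulp.LpProblem("subsetwise_spanner", pulp.LpMinimize)
    x = {frozenset((i, j)): pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for i, j in G.edges()}
    y = {(u, v, i, j): pulp.LpVariable(f"y_{u}_{v}_{i}_{j}", cat="Binary")
         for u, v in pairs for i, j in arcs}                                 # x_{(i,j)}^{uv}
    prob += pulp.lpSum(x.values())                                           # (1) minimize sparsity
    for u, v in pairs:
        prob += pulp.lpSum(y[u, v, i, j] * G[i][j]["weight"]                 # (2) length budget
                           for i, j in arcs) <= dist[u][v] + c * W
        for i in G.nodes():                                                  # (3)-(4) path/flow constraints
            out_flow = pulp.lpSum(y[u, v, i, j] for j in G.neighbors(i))
            in_flow = pulp.lpSum(y[u, v, j, i] for j in G.neighbors(i))
            prob += out_flow - in_flow == (1 if i == u else -1 if i == v else 0)
            prob += out_flow <= 1
        for i, j in G.edges():                                               # (5) path edges must be bought
            prob += y[u, v, i, j] + y[u, v, j, i] <= x[frozenset((i, j))]
    prob.solve()
    return [tuple(e) for e, var in x.items() if var.value() and var.value() > 0.5]
```

The multi-level extension adds one copy of these variables per level plus the nesting constraints $x_{e}^{k}\leq x_{e}^{k-1}$, exactly as described above.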

6 Experiments

In this section, we provide experimental results involving the rounding-up framework described in Section 4. This framework needs a single-level subroutine; we use the $+2W$ subsetwise construction in Section 2 and the three pairwise $+2W(\cdot,\cdot)$, $+4W(\cdot,\cdot)$, and $+6W$ constructions provided in [3] (see Appendix 3). (Note that one can show that the $+2W$, $+4W$, and $+8W$ spanners in [3] are actually $+2W(\cdot,\cdot)$, $+4W(\cdot,\cdot)$, and $+6W$ spanners, respectively, by using a tighter analysis [4].) We generate multi-level instances and solve them using our exact algorithm and the four approximation algorithms. We consider natural questions about how the number of levels $\ell$, the number of vertices $n$, and the decay rate of terminals with respect to levels affect the running times and (experimental) approximation ratios, defined as the sparsity of the returned multi-level spanner divided by OPT.

We used CPLEX 12.6.2 as the ILP solver for all experiments, run on a high-performance computer (a Lenovo NeXtScale nx360 M5 system with 400 nodes, each with 192 GB of memory). We used Python to implement the algorithms and spanner constructions. Since we ran the experiments on a couple of thousand instances, we limited the solver to four hours per instance.

6.1 Experiment Parameters

We run experiments first to test the experimental approximation ratio vs. the parameters, and then to test the running time vs. the parameters. Each set of experiments has several parameters: the graph generator, the number of levels $\ell$, the number of vertices $n$, and how the sizes of the terminal sets $S_{i}$ (vertices requiring level or priority at least $i$) decrease as $i$ increases.

In what follows, we use the Erdős–Rényi (ER) [24], Watts–Strogatz (WS) [32], Barabási–Albert (BA) [10], and random geometric (GE) [30] models. Let $p$ be the edge selection probability. If we set $p=(1+\varepsilon)\frac{\ln n}{n}$, then the generated Erdős–Rényi graph is connected with high probability for $\varepsilon>0$ [24]. For our experiments we use $\varepsilon=1$. In the Watts–Strogatz model, we initially create a ring lattice of constant degree $K$. For our experiments we use $K=6$ and $p=0.2$. In the Barabási–Albert model, a new node is connected to $m$ existing nodes. For our experiments we use $m=5$. In the random geometric graph model, two nodes are connected to each other if their Euclidean distance is not larger than a threshold $r_{c}$. For $r_{c}=\sqrt{\frac{(1+\epsilon)\ln n}{\pi n}}$ with $\epsilon>0$, the synthesized graph is connected with high probability [30].

We generate a set of small graphs ($10\leq n\leq 40$) and a set of large graphs ($50\leq n\leq 500$). We only compute the exact solutions for the small graphs, since the ILP has an exponential running time. In this paper, we provide the results for Erdős–Rényi graphs since it is the most popular model. However, the radius (the minimum over all $v\in V$ of $\max_{w\in V}d_{G}(v,w)$, where $d_{G}(v,w)$ is the graph distance between $v$ and $w$ measured by number of edges, not total weight) of Erdős–Rényi graphs is relatively small; in our dataset, the range of the radius is 2–4. Hence, we also provide results for random geometric graphs, which have larger radius (4–12). The remaining results and the radius distributions of the different generators are available at the supplemental GitHub link.

We consider the number of levels $\ell\in\{1,2,3\}$ for small graphs and $\ell\in\{1,\dots,10\}$ for large graphs, and adopt two methods for selecting terminal sets: linear and exponential. A terminal set $S_{1}$ with lowest priority, of size $n(1-\frac{1}{\ell+1})$ in the linear case and $\frac{n}{2}$ in the exponential case, is chosen uniformly at random. For each subsequent level, a $\frac{1}{\ell+1}$ fraction of the $n$ vertices is deleted at random in the linear case, whereas half of the remaining vertices are deleted in the exponential case. Levels/priorities and terminal sets are related via $S_{i}=\{v\in S_{1}:P(v)\geq i\}$. We choose edge weights $w(e)$ randomly, independently, and uniformly from $\{1,2,3,\dots,10\}$.
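The following sketch (our illustration, using NetworkX generators; the value of $\epsilon$ for the random geometric graphs is an assumption here) reproduces this instance generation:

```python
import math
import random
import networkx as nx

def generate_instance(generator, n, l, tsm, max_weight=10, eps=1.0):
    """Sketch of the Section 6.1 instance generation: graph, weights, nested terminals."""
    if generator == "ER":
        G = nx.erdos_renyi_graph(n, (1 + eps) * math.log(n) / n)       # eps = 1 in the paper
    elif generator == "WS":
        G = nx.watts_strogatz_graph(n, 6, 0.2)                         # K = 6, rewiring p = 0.2
    elif generator == "BA":
        G = nx.barabasi_albert_graph(n, 5)                             # m = 5
    else:                                                              # "GE": random geometric
        G = nx.random_geometric_graph(n, math.sqrt((1 + eps) * math.log(n) / (math.pi * n)))
    for u, v in G.edges():
        G[u][v]["weight"] = random.randint(1, max_weight)              # uniform integer weights
    size_1 = round(n * (1 - 1 / (l + 1))) if tsm == "linear" else n // 2
    S = set(random.sample(list(G.nodes()), size_1))                    # S_1, chosen uniformly at random
    levels = [set(S)]
    for _ in range(l - 1):                                             # nested S_1 ⊇ S_2 ⊇ ... ⊇ S_l
        drop = round(n / (l + 1)) if tsm == "linear" else len(S) // 2
        S = set(random.sample(sorted(S), max(len(S) - drop, 1)))
        levels.append(set(S))
    return G, levels
```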

An experimental instance of the multi-level problem here is thus characterized by four parameters: graph generator, number of vertices $n$, number of levels $\ell$, and terminal selection method $\textsc{TSM}\in\{\textsc{Linear},\textsc{Exponential}\}$. As there is randomness involved, we generated five instances for every choice of parameters (e.g., ER, $n=30$, $\ell=2$, Linear). For each instance of the small graphs, we compute the approximate solution using either the $+2W$, $+2W(\cdot,\cdot)$, $+4W$, or $+6W$ spanner subroutine, and the exact solution using the ILP described in Section 5. We compute the experimental approximation ratio by dividing the sparsity of the approximate solution by the sparsity of the optimum solution (OPT). For large graphs, we only compute the approximate solution.

6.2 Results

We consider different spanner constructions as the single-level subroutine in the multi-level spanner. We first consider the $+2W$ subsetwise construction (Section 2).

6.2.1 The $+2W$ Subsetwise Construction-based Approximation

We first describe the experimental results on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 2. The average experimental ratio increases as $n$ increases. This is expected since the theoretical approximation ratio of $\tilde{O}(n/\sqrt{|S_{1}|})$ is proportional to $n$. The average and minimum experimental ratios do not change much as the number of levels increases; however, the maximum ratio increases. The experimental ratio of the linear terminal selection method is slightly better compared to that of the exponential method.

Figure 2: Performance of the algorithm that uses the $+2W$ subsetwise spanner as the single-level solver on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and terminal selection method.

We describe the experimental results on random geometric graphs w.r.t. $n$, $\ell$, and the terminal selection methods in Figure 3. In both cases, the average ratio increases as $n$ and $\ell$ increase. The average ratio is relatively lower for the linear terminal selection method.

Figure 3: Performance of the algorithm that uses the $+2W$ subsetwise spanner as the single-level solver on random geometric graphs w.r.t. $n$, $\ell$, and terminal selection method.

The plots of Watts–Strogatz and Barabási–Albert graphs are available in the Github repository.

6.2.2 The $+2W(\cdot,\cdot)$ Pairwise Construction-based Approximation

We now consider the $+2W(\cdot,\cdot)$ pairwise construction [3] (Algorithm 1). We first describe the experimental results on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 4. The average experimental ratio increases as $n$ increases. This is expected since the theoretical approximation ratio is proportional to $n$. The average and minimum experimental ratios do not change much as the number of levels increases; however, the maximum ratio increases. The experimental ratio of the linear terminal selection method is also slightly better compared to that of the exponential method.

Figure 4: Performance of the algorithm that uses the $+2W(\cdot,\cdot)$ pairwise spanner as the single-level solver on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and terminal selection method.

We describe the experimental results on random geometric graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 5. The average experimental ratio increases as $n$ increases. The maximum ratio increases as $\ell$ increases. Again, the experimental ratio of the linear terminal selection method is relatively smaller compared to the exponential method.

Figure 5: Performance of the algorithm that uses the $+2W(\cdot,\cdot)$ pairwise spanner as the single-level solver on random geometric graphs w.r.t. $n$, $\ell$, and terminal selection method.

6.2.3 Comparison between Global and Local Setups

One major difference between the subsetwise and pairwise constructions is that the subsetwise construction considers the (global) maximum edge weight $W$ of the graph in the error, whereas the $+cW(\cdot,\cdot)$ spanners consider the (local) maximum edge weight on a shortest path for each pair of vertices $s,t$. We provide a comparison between the global and local settings.

We describe the experimental results on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 6. The average experimental ratio increases as $n$ increases for both the global and local settings. However, the ratio of the local setting is smaller compared to that of the global setting. One reason for this difference is that the optimal solution in the global setting is relatively smaller, since the global setting allows larger error. The ratio of the global setting increases as the number of levels increases and for the exponential terminal selection method. For the local setting, the ratio does not change much.

Figure 6: Performance of the global and local construction-based algorithms on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and terminal selection method.

We describe the experimental results on random geometric graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 7. The ratio of the local setting is smaller compared to the global setting.

Figure 7: Performance of the global and local construction-based algorithms on random geometric graphs w.r.t. $n$, $\ell$, and terminal selection method.

6.2.4 The $+4W(\cdot,\cdot)$ Pairwise Construction-based Approximation

We now consider the $+4W(\cdot,\cdot)$ pairwise construction [3] (Algorithm 2) as a single-level subroutine. We first describe the experimental results on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 8. The average experimental ratio increases as $n$ increases. This is expected since the theoretical approximation ratio is proportional to $n$. The average experimental ratio does not change much as the number of levels increases; however, the maximum ratio increases. The experimental ratio of the linear terminal selection method is also slightly better compared to that of the exponential method.

Figure 8: Performance of the algorithm that uses the $+4W(\cdot,\cdot)$ pairwise spanner as the single-level solver on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and terminal selection method.

We describe the experimental results on random geometric graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 9. The experimental ratio increases as the number of vertices increases. The maximum ratio increases as the number of levels increases. Again, the experimental ratio of the linear terminal selection method is relatively smaller compared to the exponential method.

Figure 9: Performance of the algorithm that uses the $+4W(\cdot,\cdot)$ pairwise spanner as the single-level solver on random geometric graphs w.r.t. $n$, $\ell$, and terminal selection method.

6.2.5 Comparison between $+2W(\cdot,\cdot)$ and $+4W(\cdot,\cdot)$ Setups

We now provide a comparison between the pairwise $+2W(\cdot,\cdot)$ and $+4W(\cdot,\cdot)$ construction-based approximation algorithms. We first describe the experimental results on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 10. The average experimental ratio increases as $n$ increases for both the $+2W(\cdot,\cdot)$ and $+4W(\cdot,\cdot)$ settings. As $n$ increases, the ratio of the $+4W(\cdot,\cdot)$ construction-based algorithm decreases slightly. The $+4W(\cdot,\cdot)$ construction-based algorithm slightly outperforms the $+2W(\cdot,\cdot)$ algorithm for $\ell=3$ and the exponential selection method.

Figure 10: Performance of the pairwise $+2W(\cdot,\cdot)$ and $+4W(\cdot,\cdot)$ construction-based algorithms on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and terminal selection method.

We describe the experimental results on random geometric graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 11. As $n$ increases, the average ratio of the $+4W(\cdot,\cdot)$-based approximation algorithm becomes smaller than that of the $+2W(\cdot,\cdot)$-based algorithm. The average ratio of $+4W(\cdot,\cdot)$ is relatively smaller for the exponential terminal selection method.

Figure 11: Performance of the pairwise $+2W(\cdot,\cdot)$ and $+4W(\cdot,\cdot)$ construction-based algorithms on random geometric graphs w.r.t. $n$, $\ell$, and terminal selection method.

6.2.6 The $+6W$ Pairwise Construction-based Approximation

We now consider the $+6W$ pairwise construction [3] (Algorithm 3) as a single-level solver. We first describe the experimental results on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 12. The average experimental ratio increases as $n$ increases. This is expected since the theoretical approximation ratio is proportional to $n$. The average experimental ratio does not change much as the number of levels increases; however, the maximum ratio increases. The maximum and average experimental ratios of the linear terminal selection method are slightly better compared to those of the exponential method.

Figure 12: Performance of the algorithm that uses the $+6W$ pairwise spanner as the single-level solver on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and terminal selection method.

We describe the experimental results on random geometric graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 13. The experimental ratio increases as the number of vertices increases. The maximum ratio increases as the number of levels increases. Again, the experimental ratio of the linear terminal selection method is relatively smaller compared to the exponential method.

Figure 13: Performance of the algorithm that uses the $+6W$ pairwise spanner as the single-level solver on random geometric graphs w.r.t. $n$, $\ell$, and terminal selection method.

6.2.7 Comparison between $+2W$ and $+6W$ Setups

We now provide a comparison between the pairwise $+2W$ and $+6W$ construction-based approximation algorithms. We first describe the experimental results on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 14. The average experimental ratio increases as $n$ increases for both the $+2W$ and $+6W$ settings. As the number of vertices increases, the ratio of the $+6W$ construction-based algorithm gets smaller. This is expected since a larger error makes the problem easier to solve. Similarly, as $\ell$ increases, the $+6W$ construction-based algorithm outperforms the $+2W$ algorithm. The average experimental ratio of the $+6W$ construction-based algorithm is smaller for both the linear and exponential terminal selection methods.

Figure 14: Performance of the pairwise $+2W$ and $+6W$ construction-based algorithms on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and terminal selection method.

We describe the experimental results on random geometric graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 15. We can see that as $n$ gets larger, the ratio of $+6W$ gets smaller. The situation is similar when $\ell$ increases.

Figure 15: Performance of the pairwise $+2W$ and $+6W$ construction-based algorithms on random geometric graphs w.r.t. $n$, $\ell$, and terminal selection method.

6.2.8 Experiment on Large Graphs

We generate some large instances on up to 500 vertices and run the different multi-level spanner algorithms on them. We use $n\in\{50,100,150,\ldots,500\}$ and $\ell\in\{1,2,3,\ldots,10\}$. We describe the experimental results on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 16. We compare four multi-level algorithms, namely those using the $+2W$ subsetwise and the $+2W(\cdot,\cdot)$, $+4W(\cdot,\cdot)$, $+6W$ pairwise constructions [3] as subroutines with $P=S\times S$. Since computing the optimal solution exactly via the ILP is computationally expensive on large instances, we report the ratio in terms of relative sparsity, defined as the sparsity of the multi-level spanner returned by one algorithm divided by the minimum sparsity over the spanners returned by all four. The ratio of the $+6W$ construction-based algorithm is lowest and that of the $+2W$ construction-based algorithm is highest. This is expected since a higher additive error generally reduces the number of edges needed. Overall, the ratio decreases as $n$ increases. This is because the significance of a small additive error decreases as the graph size and distances get larger. The relative ratio for the $+2W$ construction increases as $\ell$ increases, and for the exponential terminal selection method.

Figure 16: Performance of different approximation algorithms on large Erdős–Rényi graphs w.r.t. $n$, $\ell$, and terminal selection method.

We describe the experimental results on random geometric graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 17.

Figure 17: Performance of different approximation algorithms on large random geometric graphs w.r.t. $n$, $\ell$, and terminal selection method.

6.2.9 Impact of Initial Parameters

It is worth mentioning that the $+2$ subsetwise spanner [21] and the $+2W$ subsetwise spanner (Section 2) begin with a clustering phase, while the algorithms described in Appendix 3 begin with a $d$-light initialization. In a $d$-light initialization, we add the $d$ lightest edges incident to each vertex; these edges tend to be on shortest paths. In practice, there may be relatively few edges which appear on shortest paths, and some of these edges might be redundant. Hence, we compute $+2W(\cdot,\cdot)$ spanners with different values of $d$. We describe the experimental results on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 18. We have computed the ratio as described in Section 6.2.8. From the figures, we see that as we reduce the value of $d$ exponentially, the ratio decreases: in particular, the optimal choice of the parameter $d$ in practice might be significantly smaller than the optimal value of $d$ in theory. Generally, it could make sense in practical implementations of spanner algorithms to try all values $\{d,d/2,d/4,d/8,\dots\}$, computing $\log d$ different spanners, and then use only the sparsest one. This costs only a $\log d$ factor in the running time of the algorithm, which is typically reasonable.
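A sketch of this parameter sweep is shown below (our illustration; build_spanner stands for any of the $d$-light-initialized constructions, parameterized by $d$):

```python
def sparsest_over_d_sweep(G, P, build_spanner, d0):
    """Try d0, d0/2, d0/4, ... and keep the sparsest spanner produced."""
    best, d = None, d0
    while d >= 1:
        H = build_spanner(G, P, d)                  # e.g., the +2W(.,.) pairwise construction
        if best is None or H.number_of_edges() < best.number_of_edges():
            best = H
        d /= 2                                      # exponential decrease: about log(d0) candidates
    return best
```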

Figure 18: Impact of different values of $d$ on large Erdős–Rényi graphs w.r.t. $n$, $\ell$, and terminal selection method.

We describe the experimental results on random geometric graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 19. Again, the experiment suggests that we can exponentially reduce the value of $d$ and take the solution that has the minimum number of edges.

Figure 19: Impact of different values of $d$ on large random geometric graphs w.r.t. $n$, $\ell$, and terminal selection method.

6.3 Running Time

We now provide the running times of the different algorithms. We show the running time of the ILP on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and the terminal selection method in Figure 20. The running time of the ILP increases exponentially as $n$ increases, as expected. The execution time of a single-level instance with 45 vertices is more than 64 hours using a 28-core processor. Hence, we kept the number of vertices less than or equal to 40 for our small graphs. The experimental running time should increase as $\ell$ increases, but we do not see that pattern in these plots because some of the instances were not able to finish within four hours.

Figure 20: Running time of all exact algorithms on Erdős–Rényi graphs w.r.t. $n$, $\ell$, and terminal selection method.

We provide the experimental running time of the approximation algorithms on Erdős–Rényi graphs in Figure 21. The running time of the $+2W$ construction-based algorithm is the largest. Overall, the running time increases as $n$ increases. There is no straightforward relation between the running time and $\ell$. Although the number of calls to the single-level subroutine increases as $\ell$ increases, the running time also depends on the size of the subset at each level. Hence, if the subset sizes are larger, then it may take longer even for small $\ell$. The running time for the linear terminal selection method is larger.

Figure 21: Running time of all approximation algorithms on large Erdős–Rényi graphs w.r.t. $n$, $\ell$, and terminal selection method.

The running times appear reasonable in other settings too; see the supplemental GitHub repository for details and experimental results.

7 Conclusion

We have provided a framework in which different spanner subroutines can be used to compute multi-level spanners of varying additive error. We additionally introduced a generalization of the $+2$ subsetwise spanner [21] to integer edge weights, and illustrated that this can reduce the $+8W$ error in [3] to $+6W$. A natural open question is to provide an approximation algorithm that can handle a different additive error for each level. We also provided an ILP to find the exact solution in both the global and local settings. Using the ILP and the given spanner constructions, we ran experiments and reported the experimental approximation ratio over the graphs we generated. The experimental results suggest that the $+2W$ clustering-based approach is slower than the initialization-based approaches. We provided a method of changing the initialization parameter $d$ which reduces the sparsity in practice.

References

  • [1] Amir Abboud and Greg Bodwin. The 4/3 additive spanner exponent is tight. Journal of the ACM (JACM), 64(4):1–20, 2017.
  • [2] Abu Reyan Ahmed, Patrizio Angelini, Faryad Darabi Sahneh, Alon Efrat, David Glickenstein, Martin Gronemann, Niklas Heinsohn, Stephen Kobourov, Richard Spence, Joseph Watkins, and Alexander Wolff. Multi-level Steiner trees. In 17th International Symposium on Experimental Algorithms, (SEA), pages 15:1–15:14, 2018. URL: https://doi.org/10.4230/LIPIcs.SEA.2018.15, doi:10.4230/LIPIcs.SEA.2018.15.
  • [3] Reyan Ahmed, Greg Bodwin, Faryad Darabi Sahneh, Stephen Kobourov, and Richard Spence. Weighted additive spanners. In Isolde Adler and Haiko Müller, editors, Graph-Theoretic Concepts in Computer Science, pages 401–413, Cham, 2020. Springer International Publishing.
  • [4] Reyan Ahmed, Greg Bodwin, Keaton Hamm, Stephen Kobourov, and Richard Spence. Weighted sparse and lightweight spanners with local additive error. arXiv preprint arXiv:2103.09731, 2021.
  • [5] Reyan Ahmed, Greg Bodwin, Faryad Darabi Sahneh, Keaton Hamm, Mohammad Javad Latifi Jebelli, Stephen Kobourov, and Richard Spence. Graph spanners: A tutorial review. Computer Science Review, 37:100253, 2020.
  • [6] Reyan Ahmed, Faryad Darabi Sahneh, Keaton Hamm, Stephen Kobourov, and Richard Spence. Kruskal-Based Approximation Algorithm for the Multi-Level Steiner Tree Problem. In Fabrizio Grandoni, Grzegorz Herman, and Peter Sanders, editors, 28th Annual European Symposium on Algorithms (ESA 2020), volume 173 of Leibniz International Proceedings in Informatics (LIPIcs), pages 4:1–4:21, Dagstuhl, Germany, 2020. Schloss Dagstuhl–Leibniz-Zentrum für Informatik. URL: https://drops.dagstuhl.de/opus/volltexte/2020/12870, doi:10.4230/LIPIcs.ESA.2020.4.
  • [7] Donald Aingworth, Chandra Chekuri, Piotr Indyk, and Rajeev Motwani. Fast estimation of diameter and shortest paths (without matrix multiplication). SIAM Journal on Computing, 28:1167–1181, 04 1999. doi:10.1137/S0097539796303421.
  • [8] Ingo Althöfer, Gautam Das, David Dobkin, and Deborah Joseph. Generating sparse spanners for weighted graphs. In Scandinavian Workshop on Algorithm Theory (SWAT), pages 26–37, Berlin, Heidelberg, 1990. Springer Berlin Heidelberg.
  • [9] Anantaram Balakrishnan, Thomas L. Magnanti, and Prakash Mirchandani. Modeling and heuristic worst-case performance analysis of the two-level network design problem. Management Sci., 40(7):846–867, 1994. doi:10.1287/mnsc.40.7.846.
  • [10] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. science, 286(5439):509–512, 1999.
  • [11] Surender Baswana, Telikepalli Kavitha, Kurt Mehlhorn, and Seth Pettie. Additive spanners and ($\alpha$, $\beta$)-spanners. ACM Transactions on Algorithms (TALG), 7(1):5, 2010.
  • [12] Greg Bodwin. Linear size distance preservers. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 600–615. Society for Industrial and Applied Mathematics, 2017.
  • [13] Greg Bodwin. A note on distance-preserving graph sparsification. arXiv preprint arXiv:2001.07741, 2020.
  • [14] Greg Bodwin and Virginia Vassilevska Williams. Better distance preservers and additive spanners. In Proceedings of the Twenty-seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 855–872. Society for Industrial and Applied Mathematics, 2016. URL: http://dl.acm.org/citation.cfm?id=2884435.2884496.
  • [15] Hsien-Chih Chang, Pawel Gawrychowski, Shay Mozes, and Oren Weimann. Near-Optimal Distance Emulator for Planar Graphs. In Proceedings of 26th Annual European Symposium on Algorithms (ESA 2018), volume 112, pages 16:1–16:17, 2018.
  • [16] M. Charikar, J. Naor, and B. Schieber. Resource optimization in QoS multicast routing of real-time multimedia. IEEE/ACM Transactions on Networking, 12(2):340–348, April 2004. doi:10.1109/TNET.2004.826288.
  • [17] Shiri Chechik. New additive spanners. In Proceedings of the twenty-fourth annual ACM-SIAM symposium on Discrete algorithms (SODA), pages 498–512. Society for Industrial and Applied Mathematics, 2013.
  • [18] Eden Chlamtáč, Michael Dinitz, Guy Kortsarz, and Bundit Laekhanukit. Approximating spanners and directed steiner forest: Upper and lower bounds. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 534–553. SIAM, 2017.
  • [19] Julia Chuzhoy, Anupam Gupta, Joseph (Seffi) Naor, and Amitabh Sinha. On the approximability of some network design problems. ACM Trans. Algorithms, 4(2):23:1–23:17, 2008. doi:10.1145/1361192.1361200.
  • [20] Don Coppersmith and Michael Elkin. Sparse sourcewise and pairwise distance preservers. SIAM Journal on Discrete Mathematics, 20(2):463–501, 2006.
  • [21] Marek Cygan, Fabrizio Grandoni, and Telikepalli Kavitha. On pairwise spanners. In Proceedings of 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013), volume 20, pages 209–220, 2013. URL: http://drops.dagstuhl.de/opus/volltexte/2013/3935, doi:10.4230/LIPIcs.STACS.2013.209.
  • [22] Michael Elkin, Yuval Gitlitz, and Ofer Neiman. Almost shortest paths and PRAM distance oracles in weighted graphs. arXiv preprint arXiv:1907.11422, 2019.
  • [23] Michael Elkin, Yuval Gitlitz, and Ofer Neiman. Improved weighted additive spanners. arXiv preprint arXiv:2008.09877, 2020.
  • [24] Paul Erdős and Alfréd Rényi. On random graphs, I. Publicationes Mathematicae (Debrecen), 6:290–297, 1959.
  • [25] Marek Karpinski, Ion I. Mandoiu, Alexander Olshevsky, and Alexander Zelikovsky. Improved approximation algorithms for the quality of service multicast tree problem. Algorithmica, 42(2):109–120, 2005. doi:10.1007/s00453-004-1133-y.
  • [26] Telikepalli Kavitha. New pairwise spanners. Theory of Computing Systems, 61(4):1011–1036, Nov 2017. URL: https://doi.org/10.1007/s00224-016-9736-7, doi:10.1007/s00224-016-9736-7.
  • [27] Telikepalli Kavitha and Nithin M. Varma. Small stretch pairwise spanners and approximate $d$-preservers. SIAM Journal on Discrete Mathematics, 29(4):2239–2254, 2015. URL: https://doi.org/10.1137/140953927, doi:10.1137/140953927.
  • [28] Arthur Liestman and Thomas Shermer. Additive graph spanners. Networks, 23:343 – 363, 07 1993. doi:10.1002/net.3230230417.
  • [29] Prakash Mirchandani. The multi-tier tree problem. INFORMS J. Comput., 8(3):202–218, 1996.
  • [30] Mathew Penrose. Random geometric graphs. Number 5. Oxford university press, 2003.
  • [31] Seth Pettie. Low distortion spanners. ACM Transactions on Algorithms (TALG), 6(1):7, 2009.
  • [32] Duncan J. Watts and Steven H. Strogatz. Collective dynamics of ‘small-world’ networks. Nature, 393(6684):440, 1998.