
Faster Multi-Source Directed Reachability via Shortcuts and Matrix Multiplication

Michael Elkin
Ben-Gurion University of the Negev, Israel
[email protected]
   Chhaya Trehan
University of Bristol, UK
[email protected]
Abstract

Given an n-vertex m-edge directed graph G=(V,E) and a designated source vertex s∈V, let 𝒱_s ⊆ V be the set of vertices reachable from s in G. Given a subset S⊆V of |S|=n^σ, for some 0<σ≤1, designated sources, the S×V-direachability problem is to compute the sets 𝒱_s for every s∈S. Known naive algorithms for this problem either run a BFS/DFS exploration separately from every source, and as a result require O(m·n^σ) time, or compute the transitive closure TC(G) of the graph G in Õ(n^ω) time, where ω < 2.371552… is the matrix multiplication exponent. Hence, the current state-of-the-art bound for the problem on graphs with m=Θ(n^μ) edges is Õ(n^{min{μ+σ, ω}}), which may be as large as Õ(n^{min{2+σ, ω}}).

Our first contribution is an algorithm with running time Õ(n^{1+(2/3)ω(σ)}) for this problem on general graphs, where ω(σ) is the rectangular matrix multiplication exponent (for multiplying an n^σ×n matrix by an n×n matrix). Using current state-of-the-art estimates on ω(σ), our exponent is better than min{2+σ, ω} for σ̃ ≤ σ ≤ 0.53, where 1/3 < σ̃ < 0.3336 is a universal constant. For graphs with m=Θ(n^μ) edges, for 1≤μ≤2, our algorithm has running time Õ(n^{(1+μ+2ω(σ))/3}). For every σ̃ ≤ σ < 1, there exists a non-empty interval I_σ ⊆ [1,2], so that our running time is better than the state-of-the-art one whenever the input graph has m=Θ(n^μ) edges with μ∈I_σ.

Our second contribution is a sequence of algorithms 𝒜_0, 𝒜_1, 𝒜_2, … for the S×V-direachability problem, where 𝒜_0 is the algorithm discussed above. We argue that under a certain assumption that we introduce, for every σ̃ ≤ σ < 1, there exists a sufficiently large index k=k(σ) so that 𝒜_k improves upon the current state-of-the-art bounds for S×V-direachability with |S|=n^σ in the densest regime μ=2. (Unconditionally, 𝒜_0 satisfies this for σ̃ ≤ σ ≤ 0.53.) We show that to prove this assumption, it suffices to devise an algorithm that computes a rectangular max-min matrix product roughly as efficiently as the ordinary (+,·) matrix product.

Our algorithms heavily exploit recent constructions of directed shortcuts [KP22b, KP22a].

1 Introduction

1.1 Our Unconditional Results

Consider an n-vertex directed graph G=(V,E) and a pair of vertices u,v∈V. We say that v is reachable from u (denoted u⇝v) iff there exists a directed path in G that starts at u and ends in v. Given a subset S⊆V of n^σ designated vertices called sources, for some 0≤σ≤1, the S×V-reachability matrix R=R(S,V) is a rectangular Boolean matrix of dimension n^σ×n, in which for every pair (s,v)∈S×V, the respective entry R[s,v] is equal to 1 iff v is reachable from s in G. The S×V-direachability problem is, given a directed graph G=(V,E) and a subset S⊆V of sources, to compute the reachability matrix R(S,V).

To the best of our knowledge, despite the fundamental nature of this problem, it has not been explicitly studied so far. There are two obvious existing solutions for it. The first one runs a BFS (or DFS) from every source s∈S, and as a result requires O(m·n^σ) time. We call this algorithm BFS-based. For dense graphs this running time may be as large as O(n^{2+σ}). The second solution (which we call TC-based) uses fast square matrix multiplication to compute the transitive closure TC(G) in Õ(n^ω) time, where ω ≤ 2.371552… is the matrix multiplication exponent. The upper bound ω ≤ 2.371552 is due to [WXXZ24] (see also [Gal24, GU18, LG14, CW90]).¹ (¹The expression O(n^ω) is the running time of the state-of-the-art algorithm for computing the product of two square matrices of dimensions n×n.)
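As a concrete illustration, the BFS-based solution described above can be sketched in a few lines. This is a minimal sketch, not the paper's construction; the function and variable names are ours.

```python
from collections import deque

def reachability_matrix(n, edges, sources):
    """BFS from each source separately; O(m * |S|) total time.

    Returns a dict mapping each source s to the set of vertices
    reachable from s (the row R[s, *] of the reachability matrix).
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    reach = {}
    for s in sources:
        seen = {s}          # a vertex is reachable from itself
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        reach[s] = seen
    return reach
```

For dense graphs (m = Θ(n²)) this indeed degrades to the O(n^{2+σ}) bound mentioned above, which is what the shortcut-based approach of this paper improves upon.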

In this paper we employ recent breakthroughs due to Kogan and Parter [KP22b, KP22a] concerning directed shortcuts, and devise an algorithm that outperforms the two existing solutions (BFS-based and TC-based) for computing the S×V-reachability matrix whenever σ̃ ≤ σ ≤ 0.53. (Recall that σ = log_n |S|. Here 1/3 < σ̃ < 0.3336 is a universal constant.) The running time of our algorithm is Õ(n^{1+(2/3)ω(σ)}), where ω(σ) is the rectangular matrix multiplication exponent, i.e., O(n^{ω(σ)}) is the running time of the state-of-the-art algorithm for multiplying a rectangular matrix of dimensions n^σ×n by a square matrix of dimensions n×n. See Table 1 for the values of this exponent (due to [WXXZ24]), and Table 2 for a comparison of the running time of our algorithm with that of previous algorithms for the S×V-direachability problem in the range σ̃ ≤ σ ≤ 0.53 (in which we improve the previous state-of-the-art). The largest gap between our exponent and the previous state-of-the-art is for 0.37 ≤ σ ≤ 0.38. Specifically, our exponent for σ=0.38 is 2.33751, while the state-of-the-art is 2.371552.

Moreover, we show that our algorithm improves the state-of-the-art solutions for the S×V-direachability problem in a much wider range of σ on sparser graphs. Specifically, we show that for every σ̃ ≤ σ < 1, there exists a non-empty interval I_σ ⊂ [1,2] of values μ, so that for all input graphs with m=Θ(n^μ) edges, our algorithm outperforms both the BFS-based and TC-based solutions. The running time of our algorithm on such graphs is Õ(n^{(1+μ+2ω(σ))/3}).

We provide some sample values of these intervals in Table 4. See also Table 5 for a comparison between our new bounds for S×V-direachability on sparser graphs and the state-of-the-art ones. For example, for σ=0.5 (when |S|=√n), the interval is I_σ=(1.793, 2], i.e., our algorithm improves the state-of-the-art solutions as long as μ > 1.793. Specifically, when μ=1.9, our algorithm requires Õ(n^{2.3287…}) time for computing reachability from √n sources, while the state-of-the-art solution is the TC-based one, and it requires Õ(n^{2.371552…}) time. Another example is σ=0.6. The interval I_σ = I_{0.6} is (1.693, 1.93), and for μ=1.75 our solution requires Õ(n^{2.312…}) time, while the state-of-the-art BFS-based solution requires O(n^{2.35}) time.

1.2 Our Conditional Results

We also devise a recursive variant of our algorithm that further improves our bounds conditioned on a certain algorithmic assumption. Specifically, we define the directed reachability problem with parameters n^σ and n^δ, denoted DR(n^σ, n^δ), for a pair of parameters 0 ≤ σ, δ ≤ 1. The parameter σ is log_n |S| as above. The parameter δ determines the hop-bound D = n^δ. The problem DR(n^σ, n^δ) accepts as input an n-vertex graph G=(V,E) and a set S of n^σ sources, and asks to compute, for every s∈S, the set 𝒱_s of vertices reachable from s in at most n^δ hops. For a problem Π, we denote by 𝒯(Π) its time complexity. In particular, 𝒯(DR(n^σ, n^δ)) stands for the time complexity of DR(n^σ, n^δ). Observe that DR(n^σ, n^δ) can be easily solved by n^δ rectangular Boolean matrix products of an n^σ×n matrix by an n×n square matrix. Let A = A(G) be the adjacency matrix of the input graph G. In the first iteration, the rectangular matrix B is just the matrix A restricted to the n^σ rows that correspond to the sources S, while the square matrix A′ is the adjacency matrix A augmented with a self-loop at every vertex. In the next iteration B is replaced by the Boolean matrix product B⋆A′, etc. This process continues for n^δ iterations. Thus, 𝒯(DR(n^σ, n^δ)) = O(n^{ω(σ)+δ}). Now consider the following paths directed reachability problem, PDR(n^σ, n^δ), in which instead of n^σ designated source vertices we are given n^σ dipaths 𝒫 = {p_1, p_2, …, p_{n^σ}}. For every path p∈𝒫 and every vertex u∈V, we want to decide whether u is reachable from some vertex v∈V(p) in at most n^δ hops, and if so, to output the last vertex v_p(u) on p from which u is reachable within at most n^δ hops.
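The iterated-product solution for DR(n^σ, n^δ) described above can be transcribed directly. The following is a small pure-Python sketch with dense Boolean matrices (names are ours); for simplicity we start B from the rows of A′, so each source is also marked reachable from itself in zero hops.

```python
def bool_rect_product(B, A):
    """Boolean product C = B * A: C[i][j] = OR_k (B[i][k] AND A[k][j])."""
    n = len(A)
    return [[any(row[k] and A[k][j] for k in range(n)) for j in range(n)]
            for row in B]

def dr(adj, sources, D):
    """Solve DR(|S|, D): for each source, mark the vertices reachable
    within at most D hops, via D-1 rectangular Boolean products."""
    n = len(adj)
    # A' = adjacency matrix augmented with a self-loop at every vertex.
    A1 = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    # B restricted to the rows of the sources (reachability within <= 1 hop).
    B = [A1[s][:] for s in sources]
    for _ in range(D - 1):
        B = bool_rect_product(B, A1)
    return B
```

Each call to `bool_rect_product` plays the role of one of the n^δ rectangular Boolean products; replacing it by fast rectangular matrix multiplication gives the O(n^{ω(σ)+δ}) bound stated above.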

Interestingly, the BFS-based algorithm for the DR(n^σ, n^δ) problem applies (with slight modifications) to the PDR(n^σ, n^δ) problem, providing a solution with running time O(n^{μ+σ}) (like for the DR(n^σ, n^δ) problem) [KP22a]. On the other hand, this does not seem to be the case for the algorithm with running time O(n^{ω(σ)+δ}) for DR(n^σ, n^δ) that we described above. Our assumption, which we call the paths direachability assumption, is that one can nevertheless achieve running time Õ(n^{ω(σ)+δ}) for the PDR(n^σ, n^δ) problem as well.

We show that under this assumption a variant of our algorithm improves the state-of-the-art S×V-reachability algorithms for dense graphs (μ=2) for all values of σ̃ ≤ σ < 1 (and not just in the range σ̃ ≤ σ ≤ 0.53, which we show unconditionally). Recall that 1/3 < σ̃ < 0.3336 is a universal constant.

We analyse a sequence of algorithms 𝒜_0, 𝒜_1, 𝒜_2, …. For each algorithm 𝒜_i, i=0,1,2,…, we consider the threshold value σ_i < 1 up to which this algorithm outperforms the existing state-of-the-art solutions. We show that 0.53 ≤ σ_0 < σ_1 < … < σ_i < σ_{i+1} < … ≤ 1, and that lim_{i→∞} σ_i = 1. In other words, for any exponent σ < 1 of the number of sources, there exists a (sufficiently large) index i such that the algorithm 𝒜_i outperforms the existing state-of-the-art solutions for the S×V-direachability problem with |S|=n^σ.

We also identify some additional assumptions under which the paths direachability assumption holds. In particular, consider the max-min matrix product. Given two matrices B and A of appropriate dimensions (so that the matrix product B·A is defined), their max-min product C is defined as follows: for indices i,j such that i is a row index of B and j is a column index of A, we have C[i,j] = max_k min{B[i,k], A[k,j]}, where k ranges over all column indices of B.

We show that the PDR(n^σ, n^δ) problem reduces to n^δ invocations of the max-min matrix product for matrices of dimensions n^σ×n and n×n. Thus, the assumption that such a product can be computed in Õ(n^{ω(σ)}) time implies the paths direachability assumption. We refer to this assumption as the max-min product assumption. The max-min matrix product was studied in [DP09] and [GIA+21]. However, the current state-of-the-art bounds for it are far from the desired bound of Õ(n^{ω(σ)}).
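The max-min product itself is easy to compute naively (in time cubic in the matrix dimensions); the open question is only whether it can be computed as fast as the ordinary product. A definitional sketch, with names of our choosing:

```python
def max_min_product(B, A):
    """Max-min matrix product: C[i][j] = max_k min(B[i][k], A[k][j]).

    B has dimensions p x n and A has dimensions n x n; the inner index k
    ranges over the n columns of B (= rows of A).
    """
    n = len(A)
    return [[max(min(B[i][k], A[k][j]) for k in range(n))
             for j in range(n)]
            for i in range(len(B))]
```

In the reduction above, an entry of B would encode the position of a vertex along a dipath p, so that the max over k recovers the last vertex of p from which the target is reachable.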

Admittedly, it is not clear how likely these assumptions are. We find this to be an intriguing direction for future research. Also, we believe that the analysis of our algorithms under these assumptions is of independent interest, and that it sheds light on the complexity of the multi-source reachability problem. Furthermore, getting sufficiently close to the running time of Õ(n^{ω(σ)+δ}) for the paths direachability assumption, or to Õ(n^{ω(σ)}) for the max-min product assumption, would suffice for providing further improvements to the state-of-the-art bounds for this fundamental problem.

1.3 Our Techniques

In this section we provide a high-level overview of our algorithms. Given a digraph G=(V,E) and a positive integer parameter D, we say that a graph G′=(V,H) is a D-shortcut of G if for any ordered pair u,v∈V of vertices, v is reachable from u in G iff it is reachable from u in G∪G′=(V, E∪H) using at most D hops. Kogan and Parter [KP22b, KP22a] showed that for any parameter 1 ≤ D ≤ √n and any digraph G, there exists a D-shortcut G′ with Õ(n²/D³ + n) edges, and moreover, this shortcut can be constructed in Õ(m·n/D² + n^{3/2}) time [KP22a].

Our first algorithm starts by invoking the algorithm of [KP22a] with an appropriate parameter D. We call this the diameter-reduction step of our algorithm. We then build two matrices. The first one is the adjacency matrix A′ of the graph G∪G′, with all the diagonal entries equal to 1. This is a square n×n matrix, and the diagonal entries correspond to self-loops. The second matrix B is a rectangular |S|×n matrix. It is just the adjacency matrix of the graph G∪G′ restricted to the |S| rows corresponding to the designated sources of S.

Next, our algorithm computes the Boolean matrix product B⋆A′^D. Specifically, the algorithm computes the product B⋆A′, then multiplies the result by A′, etc., and does so D times. Hence, this computation reduces to D Boolean matrix products of a rectangular matrix of dimensions |S|×n by a square matrix of dimensions n×n. Each of these D Boolean matrix products can be computed using standard matrix multiplication (see Section 2.1). We use the fast rectangular matrix multiplication algorithm of [WXXZ24] to compute these products. As |S|=n^σ, this computation, henceforth referred to as the reachability computation step, requires O(D·n^{ω(σ)}) time.

Observe that the running time of the reachability computation step grows with D, while the running time of the diameter-reduction step decreases with D. It remains to balance these two running times by carefully choosing D. This completes the overview of our basic direachability algorithm, which improves (unconditionally) the state-of-the-art running time for S×V-direachability with |S|=n^σ, for σ̃ ≤ σ ≤ 0.53, σ̃ < 0.3336. Moreover, as mentioned above, for any σ with σ̃ ≤ σ < 1, there is a non-empty interval I_σ ⊆ [1,2] of values μ such that this algorithm improves (also unconditionally) upon the state-of-the-art solutions for this problem on graphs with m=Θ(n^μ) edges.

To improve these results further (based on the paths direachability assumption), we observe that the algorithm of Kogan and Parter [KP22a] for building shortcuts has two parts. In the first part it computes a subset P′ of Õ(n/D²) dipaths and a subset V′ of Õ(n/D) vertices. This step is implemented via a reduction to a min-cost max-flow (MCMF) instance. Using the recent advances in this area [CKL+22], it can be done within Õ(m^{1+o(1)} + n^{3/2}) time. The second part of the algorithm of [KP22a] involves computing reachabilities between all pairs (p,v)∈P′×V′. Here for every pair (p,v)∈P′×V′, one needs to compute the last (if any) vertex on p from which v is reachable. Under our paths direachability assumption, this second part can be implemented via a recursive invocation of our S×V-reachability algorithm. Our conditional bounds are achieved by analysing this recursive scheme.

Related Work. Fast rectangular matrix multiplication in conjunction with hopsets was recently employed in a similar way in the context of distance computation in undirected graphs by Elkin and Neiman [EN22]. Directed hopsets, based on the Kogan–Parter constructions of shortcuts [KP22b, KP22a], were recently devised in [BW23].

2 Preliminaries

Graph Notation. For an n-vertex, m-edge digraph G=(V,E), let TC(G) denote its transitive closure. For a vertex pair u,v∈V with (u,v)∈TC(G), let d_G(u,v) be the length (measured by the number of arcs) of a shortest dipath from u to v. For (u,v)∉TC(G), we have d_G(u,v)=∞. The diameter of G, denoted Diam(G), is defined as max_{(u,v)∈TC(G)} d_G(u,v). An edge (u,v)∈TC(G) which is not in E is called a shortcut edge.

For a dipath p, let V(p) denote the vertex set of p. Also, for two distinct vertices u,v∈V(p), we write u <_p v if u appears before v on p, and write v <_p u otherwise. We use this notation for any vertex sequence p, i.e., even if p is not necessarily a dipath.

As mentioned in Section 1, a naive solution to the S×V-direachability problem involves performing a BFS or DFS exploration from each of the source vertices in S.

Definition 1.

For an n-vertex m-edge digraph G=(V,E), we refer to the naive method for S×V-direachability (which involves running a BFS or DFS separately from each of the sources in S) as the S×V-naiveReach method. We denote by 𝒯_Naive the time complexity of S×V-naiveReach, and it is given by 𝒯_Naive = O(m·|S|).

Another way to solve S×V-direachability is to compute the transitive closure of the input digraph using square matrix multiplication.

Definition 2.

For an n-vertex m-edge digraph G=(V,E), we refer to the technique of computing the transitive closure of G using square matrix multiplication as S×V-squareReach. Observe that computing the transitive closure of G solves S×V-direachability for up to |S|=n sources. We denote by 𝒯_Square the time complexity of S×V-squareReach, and it is given by 𝒯_Square = Õ(n^ω), where ω is the exponent of n in the number of operations required to compute the product of two n×n square matrices.

Definition 3.

For a digraph G, a set of shortcut edges H ⊆ TC(G) is called a D-shortcut if Diam(G∪H) ≤ D.

The following theorem summarizes the fast shortcut computation algorithm of Kogan and Parter [KP22a].

Theorem 1.

(Theorem 1.4, [KP22a]) There exists a randomized algorithm that for every n-vertex m-edge digraph G and D = O(√n), computes, w.h.p., in time Õ(m·n/D² + n^{3/2}) a D-shortcut set H ⊆ TC(G) with |E(H)| = Õ(n²/D³ + n) edges.

Matrix Notation. For a matrix B, we denote by B[i,j] the entry in row i and column j of B. The transpose of B is B^T. We use * to denote a wildcard, e.g., the notation B[*,j] refers to the vector which is the j-th column of B.

2.1 Matrix Multiplication and Boolean Matrix Product

Recall that for a fixed integer n and 0 < σ < 1, ω(σ) denotes the exponent of n in the number of algebraic operations required to compute the product of an n^σ×n matrix by an n×n matrix. Let B be an n^σ×n Boolean matrix (i.e., each entry of B is either 0 or 1). Let A be another Boolean matrix of dimensions n×n. Define the Boolean matrix product C = B⋆A by

C[i,j] = ⋁_{1≤k≤n} B[i,k] ∧ A[k,j],   (1)

where ∨ and ∧ stand for the binary operations OR and AND, respectively.

Note that the Boolean matrix product (BMM henceforth) of B and A can be easily computed by treating the two matrices as standard integer matrices, computing the integer matrix product C̃ = B·A over the integers, and then setting C[i,j]=1 iff C̃[i,j] ≥ 1. Thus the number of operations required to compute the BMM of B and A as above is O(n^{ω(σ)}). Vassilevska Williams et al. presented the current state-of-the-art upper bounds on ω(σ) (Table 1 of [WXXZ24]). We reproduce these bounds in Table 1 here for completeness.
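This reduction from BMM to an ordinary integer product can be sketched directly. The sketch below uses naive integer products for clarity (a fast matrix multiplication routine would be substituted in practice); the function names are ours.

```python
def bmm_via_integers(B, A):
    """Boolean matrix product via an ordinary integer product:
    compute C~ = B . A over the integers, then threshold at >= 1."""
    n = len(A)
    # Integer matrix product C~ (naive; stands in for fast rectangular MM).
    C_tilde = [[sum(B[i][k] * A[k][j] for k in range(n)) for j in range(n)]
               for i in range(len(B))]
    # C[i][j] = 1 iff C~[i][j] >= 1.
    return [[1 if x >= 1 else 0 for x in row] for row in C_tilde]
```

The thresholding step is what makes the count of witnesses k (the value C̃[i,j]) irrelevant: only its positivity matters.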

σ	upper bound on ω(σ)
0.321334	2
0.33	2.000100
0.34	2.000600
0.35	2.001363
0.40	2.009541
0.45	2.023788
0.50	2.042994
0.527661	2.055322
0.55	2.066134
0.60	2.092631
0.65	2.121734
0.70	2.153048
0.75	2.186210
0.80	2.220929
0.85	2.256984
0.90	2.294209
0.95	2.332440
1.00	2.371552
Table 1: Upper bounds on the exponent of n in the number of operations required to multiply an n^σ×n matrix by an n×n matrix (reproduced from [WXXZ24] here for completeness).

3 Basic Two-Step S×VS\times V-Direachability

In this section, we present the description and analysis of our two-step reachability scheme. Let G=(V,E) be a directed unweighted graph. Let S⊆V be a set of source vertices such that |S|=n^σ for some 0<σ<1. We compute S×V-direachability by executing Algorithm 1. We will henceforth refer to it as the Basic S×V-Direachability Algorithm. The algorithm accepts as input the graph G and the set S of sources. In addition, it accepts a parameter D. The algorithm is described for an arbitrary choice of D; in fact, we will set D as a certain function of n and |S|, which will be explicated in the sequel. (See Equation (3).)

Algorithm 1 DiReach(G, S, D)
1: Compute a D-shortcut H for G (see Theorem 1);
2: Define a matrix A′ = A + I, where A is the adjacency matrix of G∪H;
3: Let B^(1) = A_{S*}; ▷ Matrix A restricted to the rows corresponding to vertices in the set S.
4: for k from 1 to D−1 do
5:     Compute B^(k+1) = B^(k) ⋆ A′;
6: return B^(D);

Lines 1 and 2 constitute the diameter reduction step. We compute a D-shortcut H (line 1) of the input digraph G using [KP22a] (see Theorem 1) and define an n×n Boolean matrix A′ = A + I, where A is the adjacency matrix of G∪H (line 2). In line 3 we define a new matrix B^(1), which is the adjacency matrix of G∪H restricted to the rows corresponding to the set S of source vertices. Lines 4 and 5 constitute the reachability computation step. We repeatedly perform a rectangular Boolean matrix product, and return the final product as our output.

Let 𝒯_DR and 𝒯_RC denote the time complexities of the diameter reduction step and the reachability computation step, respectively. Then the overall time complexity of our algorithm is 𝒯_DR + 𝒯_RC.

3.1 General Graphs

In this section, we consider general n-vertex, m-edge digraphs, i.e., m may be as large as n(n−1). In the next section, we analyse the case of sparser graphs, and show that when m=Θ(n^μ), for some μ<2, our bounds can be further improved.

Let D=n^δ for some 0<δ≤1/2. In the diameter reduction step we compute a D-shortcut of G. By Theorem 1, this can be done in time 𝒯_DR = Õ(m·n/D² + n^{3/2}) = Õ(n^{3−2δ}). In the reachability computation step we perform D=n^δ iterations, each of which computes a rectangular matrix product of an n^σ×n matrix by an n×n matrix. Each matrix product requires O(n^{ω(σ)}) time. It follows that 𝒯_RC = O(n^{ω(σ)+δ}). The overall time complexity of the algorithm is therefore 𝒯_DR + 𝒯_RC = Õ(n^{3−2δ} + n^{ω(σ)+δ}). Let g_0(σ) denote the exponent of n (as a function of σ) in the overall time complexity of our algorithm. It follows that

g_0(σ) = min_δ max{3−2δ, ω(σ)+δ}.   (2)

Since 3−2δ decreases as δ grows while ω(σ)+δ grows with δ, the minimum is achieved when the following equation holds:

3−2δ = ω(σ)+δ, i.e., δ = 1 − (1/3)·ω(σ).   (3)

The parameter D is therefore set as D=n^δ, with δ given by Equation (3). It follows from (2) and (3) that

g_0(σ) = 1 + (2/3)·ω(σ).   (4)
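The balancing step leading to Equations (3) and (4) can be checked numerically. A small sketch (function names ours), which recomputes δ and g_0(σ) from a given value of ω(σ):

```python
def balance_exponents(omega_sigma):
    """Balance T_DR = n^(3 - 2*delta) against T_RC = n^(omega(sigma) + delta).

    Setting 3 - 2*delta = omega(sigma) + delta gives
    delta = 1 - omega(sigma)/3 and overall exponent g0 = 1 + (2/3)*omega(sigma).
    """
    delta = 1 - omega_sigma / 3
    g0 = 1 + (2 / 3) * omega_sigma
    # Sanity check: both terms of the running time have the same exponent.
    assert abs((3 - 2 * delta) - (omega_sigma + delta)) < 1e-12
    return delta, g0
```

For instance, with the Table 1 value ω(0.5) ≤ 2.042994, this gives δ ≈ 0.319 and g_0(0.5) ≈ 2.362, below ω ≈ 2.371552.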

Recall that the time complexity 𝒯_Naive of the naive algorithm is given by 𝒯_Naive = O(m·|S|) = O(n^{2+σ}). Hence, in the range ω−2 = 0.371552 ≤ σ ≤ 1, the naive algorithm is no better than computing the transitive closure of the input digraph. The latter requires Õ(n^ω) time (where ω = ω(1)). For our algorithm to do better than the naive algorithm in this range we require

1 + (2/3)·ω(σ) < ω.

Incorporating the state-of-the-art upper bounds on ω(σ) from Table 1, we get that g_0(σ) < ω for σ ≤ 0.53. Therefore, since ω(·) is a monotone function, our algorithm can compute S×V-direachability in o(n^ω) time for 1 ≤ |S| ≤ n^{0.53} sources, as opposed to the naive algorithm, which can only do so for 1 ≤ |S| ≤ n^{ω−2}.

Moreover, for 0.335 ≤ σ < 0.371552, we have g_0(σ) = 1 + (2/3)ω(σ) < 2+σ. (Observe that by convexity of ω(σ), it suffices to verify the inequality at the endpoints σ=0.335 and σ=0.371552.) Therefore, in the range n^{0.335} ≤ |S| < n^{0.53}, our algorithm outperforms the state-of-the-art solutions.

Denote by σ̃ the threshold value such that

g_0(σ̃) = 1 + (2/3)ω(σ̃) = 2 + σ̃.   (5)

For all σ̃ < σ < 1, we have

g_0(σ) < 2 + σ.   (6)

As Inequality (6) holds for σ=0.335, it follows that 1/3 < σ̃ < 0.335. (Inequality (6) does not hold for σ=1/3.) In fact, in Section 3.2 (between Lemma 1 and Lemma 2) we argue that σ̃ < 0.3336. We conclude with the following theorem:

Theorem 2.

Let G=(V,E) be an n-vertex directed unweighted graph with Θ(n²) edges. Fix S⊂V, |S|=n^σ, for σ̃ ≤ σ ≤ 0.53. Then, there exists a randomized algorithm that with high probability computes S×V-direachability on G in o(n^ω) time. This algorithm outperforms the state-of-the-art solutions for S×V-direachability for n^{σ̃} ≤ |S| ≤ n^{0.53}.

For example, we compute S×V-direachability for |S|=n^σ, σ=0.4, in time O(n^{g_0(0.4)}) = O(n^{2.33969}), improving upon the state-of-the-art bound of O(n^{2.4}) of the naive algorithm and the state-of-the-art bound of Õ(n^{2.371552}) obtained by fast square matrix multiplication. In Table 2, we present values of g_0(σ) corresponding to some specific values of σ in the range [σ̃, 0.53] and show how they compare to the exponent of n in S×V-naiveReach (see Definition 1) and S×V-squareReach (see Definition 2).

σ	g_0(σ)	Exponent of n in 𝒯_Naive	Exponent of n in 𝒯_Square
0.335 2.333565 2.335 2.371552
0.34 2.3337 2.34 2.371552
0.35 2.334241 2.35 2.371552
0.36 2.33533 2.36 2.371552
0.37 2.336422 2.37 2.371552
0.38 2.33751 2.38 2.371552
0.39 2.3386 2.39 2.371552
0.40 2.33969 2.40 2.371552
0.41 2.34159 2.41 2.371552
0.42 2.34349 2.42 2.371552
0.43 2.34539 2.43 2.371552
0.44 2.34729 2.44 2.371552
0.45 2.34919 2.45 2.371552
0.46 2.35175 2.46 2.371552
0.47 2.35431 2.47 2.371552
0.48 2.35687 2.48 2.371552
0.49 2.359435 2.49 2.371552
0.50 2.361996 2.5 2.371552
0.51 2.365081 2.51 2.371552
0.52 2.368166 2.52 2.371552
0.53 2.371252 2.53 2.371552
Table 2: Comparison of our algorithm from Theorem 2 with the S×V-naiveReach (BFS-based) and S×V-squareReach (TC-based) methods in terms of the exponent of n in their respective time complexities. Previous state-of-the-art exponents (for the corresponding values of σ) are marked in bold font. Our exponent (given in the column titled g_0(σ)) is better than the previous state-of-the-art in the entire range presented in this table (i.e., 0.335 ≤ σ ≤ 0.53).

3.2 Graphs with m=Θ(n^μ) edges, for μ<2

In this section we argue that when the input n-vertex m-edge digraph G has m=Θ(n^μ) edges, for some μ<2, our bounds from Theorem 2 can be further improved. Let m=Θ(n^μ) be the size of the edge set of G, where 0≤μ<2. As before, let D=n^δ, for 0<δ≤1/2. Applying Theorem 1 with m=Θ(n^μ) and D=n^δ, it follows that 𝒯_DR = Õ(n^{1+μ−2δ} + n^{3/2}). Recall that the running time of the reachability computation step is 𝒯_RC = O(n^{ω(σ)+δ}). The second term of 𝒯_DR is always dominated by 𝒯_RC. The overall time complexity of the algorithm is therefore 𝒯_DR + 𝒯_RC = Õ(n^{1+μ−2δ} + n^{ω(σ)+δ}). Let g_0^{(μ)}(σ) denote the exponent of n (as a function of σ) in the overall time complexity of our algorithm. It follows that

g_0^{(μ)}(σ) = min_δ max{1+μ−2δ, ω(σ)+δ}.   (7)

This expression is minimized when

1+μ2δ=ω(σ)+δ, i.e.,δ=1+μω(σ)3.\displaystyle\begin{aligned} 1+\mu-2\delta&=\omega(\sigma)+\delta\text{, i.e.,}\\ \delta&=\frac{1+\mu-\omega(\sigma)}{3}.\end{aligned} (8)

It follows from (7) and (8) that

g0(μ)(σ)=1+μ+2ω(σ)3.g^{(\mu)}_{0}(\sigma)=\frac{1+\mu+2\omega(\sigma)}{3}. (9)
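The balancing in Equations (8) and (9) is easy to reproduce numerically. In the sketch below, omega_sigma = 2.0433 is an assumed estimate of ω(0.5) consistent with Table 1 (not an exact constant), and the grid search serves only as a sanity check of the min-max in Equation (7):

```python
# Numeric sketch of Equations (7)-(9): pick delta equating the two
# exponents 1 + mu - 2*delta and omega(sigma) + delta.

def balanced_exponent(mu, omega_sigma):
    """Return (delta, g) realizing the min-max of Equation (7)."""
    delta = (1 + mu - omega_sigma) / 3        # Equation (8)
    g = (1 + mu + 2 * omega_sigma) / 3        # Equation (9)
    return delta, g

def grid_min(mu, omega_sigma, steps=200000):
    """Brute-force the min over delta in (0, 1/2] as a sanity check."""
    return min(max(1 + mu - 2 * d, omega_sigma + d)
               for d in (0.5 * i / steps for i in range(1, steps + 1)))

# mu = 2 with omega(0.5) ~ 2.0433 (assumed) reproduces g_0(0.5) ~ 2.3622,
# matching the sigma = 0.50 row of Table 2.
delta, g = balanced_exponent(2.0, 2.0433)
```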

Note that for μ=2\mu=2, we have g0(μ)(σ)=g0(σ)g^{(\mu)}_{0}(\sigma)=g_{0}(\sigma). Also, when μ<2\mu<2, the exponent g0(μ)(σ)g^{(\mu)}_{0}(\sigma) is smaller than ω\omega for a wider range of σ\sigma than the exponent g0(σ)g_{0}(\sigma). On the other hand, the sparsity of the input digraph also improves the time complexity 𝒯Naive=O(mnσ)=O(nμ+σ)\mathcal{T}_{Naive}=O(m\cdot n^{\sigma})=O(n^{\mu+\sigma}) of the naive algorithm. For a given combination of σ\sigma and μ\mu, 𝒯Naive\mathcal{T}_{Naive} may thus be better than the time complexity, O~(n1+μ+2ω(σ)3)\tilde{O}(n^{\frac{1+\mu+2\omega(\sigma)}{3}}), of our algorithm. For our algorithm to outperform the naive algorithm, we need

1+μ+2ω(σ)3<μ+σ.\frac{1+\mu+2\omega(\sigma)}{3}<\mu+\sigma.

This condition is equivalent to

μ>ω(σ)+1232σ.\mu>\omega(\sigma)+\frac{1}{2}-\frac{3}{2}\sigma. (10)

By substituting values of ω(σ)\omega(\sigma) from Table 1 for specific values of σ\sigma, we get corresponding lower bounds on μ\mu. In Table 3, we present these lower bounds for various values of σ\sigma.

σ\sigma μ\mu
0 >2.5>2.5
0.34 >1.9906>1.9906
0.4 >1.91>1.91
0.5 >1.793>1.793
0.6 >1.693>1.693
0.7 >> 1.603
0.8 >1.521>1.521
0.9 >1.444>1.444
1 >ω1>\omega-1
Table 3: Values of μ\mu for which g0(μ)(σ)<μ+σg^{(\mu)}_{0}(\sigma)<\mu+\sigma. The condition μ>2.5\mu>2.5 in the first row means that for small values of σ\sigma, the state-of-the-art solutions outperform our algorithm for all densities of the input graph.

For our algorithm to work in o(nω)o(n^{\omega}) time, we need

1+μ+2ω(σ)3<ω, i.e., μ<3ω2ω(σ)1.\frac{1+\mu+2\omega(\sigma)}{3}<\omega\text{, i.e.,~{}}\mu<3\omega-2\omega(\sigma)-1. (11)

For σ=1\sigma=1, Inequality (11) implies that μ<ω1\mu<\omega-1, matching the lower bound μ>ω1\mu>\omega-1 implied by (10). Indeed, our algorithm improves existing bounds only for σ<1\sigma<1. For every σ<1\sigma<1, we get some non-trivial intervals IσI_{\sigma} of μ\mu for which g0(μ)(σ)<min{μ+σ,ω}g^{(\mu)}_{0}(\sigma)<\min\{\mu+\sigma,\omega\}. For example, for σ=0.5\sigma=0.5, μ>1.793\mu>1.793 (see Table 3) ensures that g0(μ)(σ)<μ+σg^{(\mu)}_{0}(\sigma)<\mu+\sigma. For the inequality g0(μ)(σ)<ωg^{(\mu)}_{0}(\sigma)<\omega to hold, substituting σ=0.5\sigma=0.5 in (11) shows that it suffices to have μ<2.028668\mu<2.028668. Since μ2\mu\leq 2, this inequality holds for all admissible μ\mu, i.e., the condition on μ\mu for σ=0.5\sigma=0.5 is simply μ>1.793\mu>1.793. Specifically, for a number of values of σ\sigma, using the best known upper bounds on ω(σ)\omega(\sigma), we numerically computed the intervals Iσ=(μ1,μ2)I_{\sigma}=(\mu_{1},\mu_{2}), such that for graphs with m=Θ(nμ)m=\Theta(n^{\mu}) edges, μ1<μ<μ2\mu_{1}<\mu<\mu_{2}, our algorithm improves the existing bounds on the S×VS\times V-direachability problem. In Table 4, we present these intervals for various values of σ\sigma.
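For instance, the condition for σ = 0.5 discussed above can be reproduced as follows. The value used for ω(0.5) is an assumed estimate consistent with Table 1, so the endpoints are approximate:

```python
# Sketch of how the intervals I_sigma of Table 4 arise from
# Inequalities (10) and (11); densities are capped at mu = 2.

OMEGA = 2.371552                                    # current bound on omega(1)

def interval(sigma, omega_sigma):
    lo = omega_sigma + 0.5 - 1.5 * sigma            # Inequality (10)
    hi = min(2.0, 3 * OMEGA - 2 * omega_sigma - 1)  # Inequality (11)
    return (lo, hi) if lo < hi else None

lo, hi = interval(0.5, 2.0433)   # omega(0.5) ~ 2.0433 (assumed)
# lo ~ 1.793 and hi = 2, i.e., the only constraint is mu > 1.793.
```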

σ\sigma Interval Iσ=(μ1,μ2)I_{\sigma}=(\mu_{1},\mu_{2})
0.335 μ>1.99785\mu>1.99785
0.34 μ>1.9911\mu>1.9911
0.4 μ>1.91\mu>1.91
0.5 μ>1.793\mu>1.793
0.55 (1.74,1.982)(1.74,1.982)
0.6 (1.693,1.93)(1.693,1.93)
0.7 (1.603,1.809)(1.603,1.809)
0.8 (1.521,1.673)(1.521,1.673)
0.9 (1.4442,1.526)(1.4442,1.526)
0.99 (1.3787,1.3872)(1.3787,1.3872)
1 (ω1,ω1)=∅(\omega-1,\omega-1)=\emptyset
Table 4: Intervals of values of μ\mu for which g0(μ)(σ)<min{μ+σ,ω}.g^{(\mu)}_{0}(\sigma)<\min\{\mu+\sigma,\omega\}. These intervals are non-empty for every σ\sigma with σ~σ<1\tilde{\sigma}\leq\sigma<1, where σ~<0.335\tilde{\sigma}<0.335 is a universal constant.

These results suggest that for all σ,σ~<σ<1\sigma,\tilde{\sigma}<\sigma<1, there is a non-empty interval of μ\mu such that for digraphs with m=Θ(nμ)m=\Theta(n^{\mu}) edges, our algorithm improves over existing bounds for S×VS\times V-direachability for |S|=nσ|S|=n^{\sigma}. We will soon prove it formally.

In Table 5, we present a comparison of our algorithm for graphs with m=Θ(nμ)m=\Theta(n^{\mu}) edges for various combinations of μ\mu and σ\sigma.

μ\mu σ\sigma g0(μ)(σ)g^{(\mu)}_{0}(\sigma) Exponent of nn in 𝒯Naive\mathcal{T}_{Naive} Exponent of nn in 𝒯Square\mathcal{T}_{Square}
1.95 0.375 2.32 2.325 2.371552
0.4 2.323 2.35 2.371552
0.45 2.3325 2.4 2.371552
0.50 2.345 2.45 2.371552
1.9 0.45 2.3159 2.35 2.371552
0.5 2.3287 2.4 2.371552
0.55 2.344 2.45 2.371552
0.6 2.362 2.5 2.371552
1.75 0.55 2.294 2.3 2.371552
0.6 2.312 2.35 2.371552
0.65 2.331 2.4 2.371552
0.7 2.352 2.45 2.371552
1.525 0.8 2.323 2.325 2.371552
0.85 2.346 2.375 2.371552
0.9 2.3711 2.425 2.371552
Table 5: Comparison of our algorithm for graphs with m=Θ(nμ)m=\Theta(n^{\mu}) edges with the S×VS\times V-naiveReach and S×VS\times V-squareReach methods in terms of exponent of nn in their respective time complexities. For each pair (μ,σ)(\mu,\sigma) that we list in the table, we have μIσ\mu\in I_{\sigma}. We mark the previous state-of-the-art bounds by bold font.

Recall that 12+ω(σ)32σ\frac{1}{2}+\omega(\sigma)-\frac{3}{2}\sigma is the lower bound on the value of μ\mu implied by Inequality (10), i.e., for μ>1/2+ω(σ)3/2σ\mu>1/2+\omega(\sigma)-3/2\sigma, our algorithm is better than the S×VS\times V-naiveReach method. The threshold σ~\tilde{\sigma} is the value such that 12+ω(σ)32σ=2\frac{1}{2}+\omega(\sigma)-\frac{3}{2}\sigma=2, i.e., ω(σ~)=32(1+σ~)\omega(\tilde{\sigma})=\frac{3}{2}(1+\tilde{\sigma}) holds. (Recall that we defined it in Section 3.1 as the value that satisfies 1+23ω(σ~)=2+σ~1+\frac{2}{3}\omega(\tilde{\sigma})=2+\tilde{\sigma}. These two conditions are equivalent.)

Lemma 1.

For every σ>σ~\sigma>\tilde{\sigma}, we have

12+ω(σ)32σ<2, i.e., ω(σ)<32(1+σ).\displaystyle\begin{aligned} \frac{1}{2}+\omega(\sigma)-\frac{3}{2}\sigma<2\text{, i.e., }\omega(\sigma)<\frac{3}{2}(1+\sigma).\end{aligned} (12)
Proof.

Note that for σ=σ~\sigma=\tilde{\sigma}, we have ω(σ)=32(1+σ)\omega(\sigma)=\frac{3}{2}(1+\sigma) and for σ=1\sigma=1, ω=ω(1)<32(1+σ)=3\omega=\omega(1)<\frac{3}{2}(1+\sigma)=3. The inequality (12) now follows by convexity of the function ω(σ)\omega(\sigma). ∎

To evaluate the value of σ~\tilde{\sigma}, we note that for σ=0.34\sigma=0.34, we have ω(σ)=2.0006\omega(\sigma)=2.0006 (see Table 1) and 32(1+σ)=2.01\frac{3}{2}(1+\sigma)=2.01. For σ=1\sigma=1, we have ω(σ)=2.371552\omega(\sigma)=2.371552 and 32(1+σ)=3\frac{3}{2}(1+\sigma)=3. Thus ω(σ)<32(1+σ)\omega(\sigma)<\frac{3}{2}(1+\sigma) holds for σ=0.34\sigma=0.34 and for σ=1\sigma=1. By convexity of the function ω(σ)\omega(\sigma), it follows that this inequality holds for all 0.34σ10.34\leq\sigma\leq 1. However, for σ=1/3\sigma=1/3, ω(σ)ω(0.33)>2.0001>2\omega(\sigma)\geq\omega(0.33)>2.0001>2, whereas 32(1+σ)=2\frac{3}{2}(1+\sigma)=2. Therefore, there exists a constant σ~\tilde{\sigma}, 1/3<σ~<0.341/3<\tilde{\sigma}<0.34 (satisfying ω(σ~)=32(1+σ~)\omega(\tilde{\sigma})=\frac{3}{2}(1+\tilde{\sigma})), such that the inequality ω(σ)<32(1+σ)\omega(\sigma)<\frac{3}{2}(1+\sigma) (and thus g0(σ)>ω(σ)+1σ2g_{0}(\sigma)>\omega(\sigma)+\frac{1-\sigma}{2}) holds for σ~<σ1\tilde{\sigma}<\sigma\leq 1. In fact, one can verify that 1/3<σ~<0.33361/3<\tilde{\sigma}<0.3336, as ω(0.3336)<2.00028\omega(0.3336)<2.00028, while 32(1+0.3336)=2.0004\frac{3}{2}(1+0.3336)=2.0004. In the following lemma we formally prove that for σ~<σ<1\tilde{\sigma}<\sigma<1, the intervals IσI_{\sigma} are non-empty.
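The bracketing of σ~ above can be checked numerically. In the sketch below, the ω(·) values are the assumed bounds quoted in the text (ω(1/3) > 2.0001 and ω(0.3336) < 2.00028), not exact constants:

```python
# f(sigma) = omega(sigma) - (3/2)(1 + sigma) changes sign on
# (1/3, 0.3336); by convexity of omega(.), its root is sigma~.

def f(sigma, omega_sigma):
    return omega_sigma - 1.5 * (1 + sigma)

left = f(1 / 3, 2.0001)      # (3/2)(1 + 1/3) = 2, so f > 0 here
right = f(0.3336, 2.00028)   # (3/2)(1.3336) = 2.0004, so f < 0 here
```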

Lemma 2.

There exists a threshold σ~<0.3336\tilde{\sigma}<0.3336 such that for every σ~<σ<1\tilde{\sigma}<\sigma<1, we have 12+ω(σ)32σ<3ω2ω(σ)1\frac{1}{2}+\omega(\sigma)-\frac{3}{2}\sigma<3\omega-2\omega(\sigma)-1, and there is equality for σ=1\sigma=1.

Proof.

For σ=σ~\sigma=\tilde{\sigma}, we have 12+ω(σ)32σ=2\frac{1}{2}+\omega(\sigma)-\frac{3}{2}\sigma=2 and 3ω2ω(σ)1>23\omega-2\omega(\sigma)-1>2 (as σ~<0.34\tilde{\sigma}<0.34, and ω()\omega(\cdot) is a monotonically increasing function). For σ~<σ<1\tilde{\sigma}<\sigma<1, we argue that 12+ω(σ)32σ<3ω2ω(σ)1\frac{1}{2}+\omega(\sigma)-\frac{3}{2}\sigma<3\omega-2\omega(\sigma)-1, i.e., 32+3ω(σ)32σ<3ω\frac{3}{2}+3\omega(\sigma)-\frac{3}{2}\sigma<3\omega. It is sufficient to prove that

12(1σ)<ω(1)ω(σ),\displaystyle\begin{aligned} \frac{1}{2}(1-\sigma)<\omega(1)-\omega(\sigma),\end{aligned}

or equivalently

ω(1)ω(σ)1σ>12.\displaystyle\begin{aligned} \frac{\omega(1)-\omega(\sigma)}{1-\sigma}>\frac{1}{2}.\end{aligned} (13)

We prove that the Inequality (13) holds in the following claim.

Claim 1.

For σ~<σ<1\tilde{\sigma}<\sigma<1, we have ω(1)ω(σ)1σ>12\frac{\omega(1)-\omega(\sigma)}{1-\sigma}>\frac{1}{2}.

Proof.

Let σ1=α=0.321334\sigma_{1}=\alpha=0.321334 and σ2=1\sigma_{2}=1. According to Table 1, ω(σ1)=2\omega(\sigma_{1})=2 and ω(σ2)=ω=2.371552\omega(\sigma_{2})=\omega=2.371552. It follows that

ω(σ2)ω(σ1)1σ1=0.5475>1/2.\displaystyle\begin{aligned} \frac{\omega(\sigma_{2})-\omega(\sigma_{1})}{1-\sigma_{1}}=0.5475>1/2.\end{aligned} (14)

Note that ω(1)ω(σ1)1σ1\frac{\omega(1)-\omega(\sigma_{1})}{1-\sigma_{1}} is the slope of the line connecting points (σ1,ω(σ1))(\sigma_{1},\omega(\sigma_{1})) and (1,ω(1))(1,\omega(1)). By convexity of the function ω()\omega(\cdot), this slope is smaller than the respective slope ω(1)ω(σ)1σ\frac{\omega(1)-\omega(\sigma)}{1-\sigma}, for any σ1<σ<1\sigma_{1}<\sigma<1. Hence the latter slope is greater than 1/21/2 too. See Figure 1 for an illustration.


Figure 1: The segment connecting the points (0.321334,ω(0.321334))(0.321334,\omega(0.321334)) and (1,ω(1))(1,\omega(1)) lies above the plot of ω(σ)\omega(\sigma). Its slope is however smaller than the slope of the segment connecting the points (σ,ω(σ))(\sigma,\omega(\sigma)) and (1,ω(1))(1,\omega(1)) for any σ~<σ<1\tilde{\sigma}<\sigma<1. The latter segment is illustrated by a red line.

Hence, the assertion of the lemma holds for σ~<σ<1\tilde{\sigma}<\sigma<1.

For σ=1\sigma=1, we have 12+ω(σ)32σ=3ω2ω(σ)1=ω1\frac{1}{2}+\omega(\sigma)-\frac{3}{2}\sigma=3\omega-2\omega(\sigma)-1=\omega-1. Hence equality holds for σ=1\sigma=1. ∎
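The slope computation in Equation (14) can be verified directly from the assumed Table 1 values ω(0.321334) = 2 and ω(1) = 2.371552:

```python
# Chord slope from (0.321334, 2) to (1, omega(1)); Equation (14)
# claims it exceeds 1/2.
sigma1, omega_sigma1 = 0.321334, 2.0
omega_of_1 = 2.371552
slope = (omega_of_1 - omega_sigma1) / (1 - sigma1)   # ~ 0.5475
```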

We proved the following theorem.

Theorem 3.

There is a threshold value 1/3<σ~<0.33361/3<\tilde{\sigma}<0.3336, such that for every σ~<σ<1\tilde{\sigma}<\sigma<1, there exists a non-empty interval IσI_{\sigma} that satisfies the following property:
For every μIσ\mu\in I_{\sigma} and any nn-vertex digraph with m=Θ(nμ)m=\Theta(n^{\mu}) edges, Algorithm 1 computes S×VS\times V-direachability for |S|=nσ|S|=n^{\sigma} faster than the current state-of-the-art algorithms for the problem. (The latter are the S×VS\times V-naiveReach method (see Definition 1) and the S×VS\times V-squareReach method (see Definition 2).) See Table 4 for sample values of IσI_{\sigma} and Table 5 for sample exponents of our and state-of-the-art algorithms.

4 Recursive S×VS\times V-direachability

In this section, we describe and analyse our recursive scheme. The algorithm accepts as input an integer parameter k=0,1,2,,k=0,1,2,\ldots, in addition to the input graph G=(V,E)G=(V,E), the set SVS\subseteq V of sources, and the parameter DD that will be set below (as a function of nn and |S||S|). The algorithm will be referred to as Procedure Recur-DiReach(G,S,D,kG,S,D,k). If k=0k=0, then the algorithm invokes Algorithm 1, i.e., Procedure DiReach(G,S,DG,S,D), with the parameter DD set as nδn^{\delta} with δ\delta given by Equation (3). Otherwise, for k1k\geq 1, Procedure Recur-DiReach(G,S,D,kG,S,D,k) (like Algorithm 1) starts by computing a DD-shortcut. The difference is, however, that while Algorithm 1 computes a DD-shortcut via a blackbox invocation of Kogan-Parter’s algorithm [KP22a] (see Theorem 1), Procedure Recur-DiReach(G,S,D,kG,S,D,k) employs itself recursively for computing the shortcut in the following way. It starts with computing the sets PP^{\prime} and VV^{\prime} of paths and vertices, respectively, exactly in the same way as Kogan-Parter’s algorithm [KP22a] does. That is, it computes a collection PP of n/Dn/D paths by a reduction to an instance of min-cost max-flow, and then it samples every path in PP independently with probability Θ(logn/D)\Theta(\log n/D) to obtain the set PP^{\prime}, and also samples every vertex vVv\in V independently with probability Θ(logn/D)\Theta(\log n/D) to obtain VV^{\prime}. However, to compute shortcuts between paths in PP^{\prime} and vertices in VV^{\prime}, it invokes Procedure Recur-DiReach(G,P,D,k1G,P^{\prime},D^{\prime},k-1) recursively. The parameter DD^{\prime} will be set as a certain function of nn and |P||P^{\prime}| later in the sequel. See Equation (20). (Note that the paths become sources for this invocation.)
Finally, once the shortcut set HH is computed, Procedure Recur-DiReach(G,S,D,kG,S,D,k) proceeds in the same way as Procedure DiReach(G,S,DG,S,D) does, i.e., it executes lines 2-5 of Algorithm 1. The pseudocode of Procedure Recur-DiReach(G,S,D,kG,S,D,k) is given in Algorithm 2.

Algorithm 2 Recur-DiReach(G,S,D,kG,S,D,k)
1:if k = 0 then
2:    Invoke DiReach (G,S,D)(G,S,D) (Algorithm 1) and return its output;
3:else
4:    Compute the sets PP^{\prime} and VV^{\prime} of paths and vertices, respectively, as in [KP22a] \triangleright these sets are needed for computing a DD-shortcut (see [KP22a])
5:    Invoke Recur-DiReach(G,P,D,k1G,P^{\prime},D^{\prime},k-1), where DD^{\prime} will be set in the sequel.
6:    Add the output of the previous step (a DD^{\prime}-shortcut set HH of GG) to GG;
7:    Execute lines  2-5 of Algorithm 1 and return its output matrix B(D)B^{(D)}.

For k0k\geq 0, 0<σ<10<\sigma<1, we denote by gk(μ)(σ)g^{(\mu)}_{k}(\sigma) the exponent of nn in the number of operations required to compute S×VS\times V-direachability by Procedure Recur-DiReach with depth parameter equal to kk, with nσn^{\sigma} sources (|S|=nσ|S|=n^{\sigma}), on graphs with m=Θ(nμ)m=\Theta(n^{\mu}) edges, for μ2\mu\leq 2. We also write gk(σ)=gk(2)(σ)g_{k}(\sigma)=g^{(2)}_{k}(\sigma) to denote this exponent in the worst-case, i.e., when μ=2\mu=2. Recall that for a given 0<σ<10<\sigma<1, we have g0(σ)=1+23ω(σ)g_{0}(\sigma)=1+\frac{2}{3}\cdot\omega(\sigma). The base step enables us to compute S×VS\times V-direachability from up to O(n0.53)O(n^{0.53}) sources in o(nω)o(n^{\omega}) time (see Theorem 2) on general graphs. Below we overview the recursive step for k>0k>0.
Recursive Invocation, k1k\geq 1: Recall that (see Section 1.3) the two main computational parts of the DD-shortcut computation algorithm of Kogan and Parter [KP22a], for a parameter 1Dn1\leq D\leq\sqrt{n}, are the computation of a set PP of vertex-disjoint dipaths in TC(G)TC(G), and the computation of shortcut edges between a set VVV^{\prime}\subset V, |V|=O(nlogn/D)|V^{\prime}|=O(n\log n/D), of randomly selected vertices and a set PPP^{\prime}\subset P, |P|=O(nlogn/D2)|P^{\prime}|=O(n\log n/D^{2}), of randomly selected paths. The first part can be computed by reducing the computation of the path collection to an instance of the min-cost max-flow problem, for which we use the algorithm by Chen et al. [CKL+22]. See [KP22a] for more details. The time complexity of Chen et al.’s algorithm [CKL+22], and therefore of the first part, is O~(m1+o(1)+n3/2)\tilde{O}(m^{1+o(1)}+n^{3/2}). For the second part, for every (v,p)V×P(v,p)\in V^{\prime}\times P^{\prime}, one needs to add a shortcut edge from vv to the first vertex reachable from vv (if any) on pp. Equivalently, we can reverse the edge orientations and compute reachabilities from PP^{\prime} to all the vertices in VV^{\prime}, where for each pPp\in P^{\prime} and vVv\in V^{\prime} we aim to find the last vertex on pp from which vv is reachable. For the sake of the current analysis, we assume that this can be done by computing reachability from |P||P^{\prime}| sources, where each path is treated as a source. We further elaborate on this assumption in the next section.

4.1 Paths Direachability Assumption

In this section we introduce a number of auxiliary problems, and assumptions about their time complexities. In the next section we analyse our recursive scheme under these assumptions, and show that it improves our basic direachability algorithm (described and analysed in Section 3). Without loss of generality, we assume that the input graph GG to the DD-shortcut computation algorithm is a DAG. (If this is not the case, one can compute and contract its strongly connected components in O(n+m)O(n+m) time, and reduce the problem to DAGs.) Denote by ζ=(v1,v2,,vn)\zeta=(v_{1},v_{2},\ldots,v_{n}) a fixed topological ordering of vertices of GG.

Definition 4.

Given an nn-vertex DAG GG, and a pair of parameters 0σ,δ10\leq\sigma,\delta\leq 1, and a set SS of |S|=nσ|S|=n^{\sigma} sources, the direachability problem with diameter nδn^{\delta}, denoted DR(nσ,nδ)DR(n^{\sigma},n^{\delta}), is to compute for every sSs\in S the set of vertices 𝒱sV\mathcal{V}_{s}\subseteq V reachable from ss via at most nδn^{\delta} hops.

We denote by 𝒯(DR(nσ,nδ))\mathcal{T}(DR(n^{\sigma},n^{\delta})) the time complexity of the problem DR(nσ,nδ)DR(n^{\sigma},n^{\delta}). More generally, we will use the notation 𝒯()\mathcal{T}(\cdot) for the time complexities of various problems that we will consider below. Note that

𝒯(DR(nσ,nδ))=O~(nω(σ)+δ).\mathcal{T}(DR(n^{\sigma},n^{\delta}))=\tilde{O}(n^{\omega(\sigma)+\delta}). (15)
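The bound in (15) reflects the fact that reachability within n^δ hops from n^σ sources can be computed by n^δ successive Boolean products of the |S| × n reachability matrix with the adjacency matrix, each costing O~(n^{ω(σ)}) with fast rectangular matrix multiplication. The following minimal sketch (with naive products in place of fast ones; names are illustrative) shows the iteration:

```python
# DR(n^sigma, n^delta) by repeated Boolean products: after t rounds,
# row s of `reach` marks all vertices within t hops of source s.

def bounded_hop_reach(adj, sources, hops):
    n = len(adj)
    reach = [[1 if v == s else 0 for v in range(n)] for s in sources]
    for _ in range(hops):
        for i in range(len(sources)):
            new_row = reach[i][:]          # R := R or (R * A), one Boolean product
            for k in range(n):
                if reach[i][k]:
                    for j in range(n):
                        if adj[k][j]:
                            new_row[j] = 1
            reach[i] = new_row
    return reach

adj = [[0, 1, 0, 0],                       # path 0 -> 1 -> 2 -> 3
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
R = bounded_hop_reach(adj, [0], 2)         # within 2 hops of vertex 0: {0, 1, 2}
```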

We also consider a variant of this problem in which each source is replaced by a (di-)path.

Definition 5.

Given an nn-vertex DAG GG, and a pair of parameters 0σ,δ10\leq\sigma,\delta\leq 1, and a set 𝒫\mathcal{P}^{\prime} of |𝒫|=nσ|\mathcal{P}^{\prime}|=n^{\sigma} of (source) dipaths, the path direachability problem with diameter nδn^{\delta}, denoted PDR(nσ,nδ)PDR(n^{\sigma},n^{\delta}), is to compute for every p𝒫p\in\mathcal{P}^{\prime} the set of vertices 𝒱pV\mathcal{V}_{p}\subseteq V, such that each z𝒱pz\in\mathcal{V}_{p} is reachable from at least one vertex vV(p)v\in V(p) via at most nδn^{\delta} hops. Moreover, it is also required to compute, for every z𝒱pz\in\mathcal{V}_{p}, the last vertex vp(z)V(p)v_{p}(z)\in V(p) on pp from which zz is reachable within nδn^{\delta} hops.

The assumption that we will make in this section is that PDR(nσ,nδ)PDR(n^{\sigma},n^{\delta}) can be computed within essentially the same time as DR(nσ,nδ)DR(n^{\sigma},n^{\delta}).

Assumption 1.

(Paths Direachability Assumption)

𝒯(PDR(nσ,nδ))=O~(𝒯(DR(nσ,nδ))).\mathcal{T}(PDR(n^{\sigma},n^{\delta}))=\tilde{O}(\mathcal{T}(DR(n^{\sigma},n^{\delta}))). (16)

Next, we discuss several other related problems and algorithms for them that may eventually lead to bounds close to that of Equation (16).

First, the one-hop paths direachability problem, denoted OHPDR(nσ)OHPDR(n^{\sigma}), is the paths direachability problem with the second parameter nδ=1n^{\delta}=1, i.e., OHPDR(nσ)=PDR(nσ,1)OHPDR(n^{\sigma})=PDR(n^{\sigma},1). A slightly more general variant of this problem, which we call the one-hop monotone subsequence direachability problem, abbreviated OHMSDR(nσ)OHMSDR(n^{\sigma}), is when each p𝒫p\in\mathcal{P}^{\prime} is a subsequence η=(vi1,vi2,,vit)\eta=(v_{i_{1}},v_{i_{2}},\ldots,v_{i_{t}}), for some t1t\geq 1, i1<i2<<iti_{1}<i_{2}<\ldots<i_{t}, of the (fixed) topological ordering ζ=(v1,v2,,vn)\zeta=(v_{1},v_{2},\ldots,v_{n}) of vertices of GG. We say that η\eta is a ζ\zeta-monotone subsequence. Note that any dipath in TC(G)TC(G) is a ζ\zeta-monotone subsequence, but the converse is not necessarily true.

A yet more general problem is when each p𝒫p\in\mathcal{P}^{\prime} is a general (not necessarily ζ\zeta-monotone) vertex sequence p=(v1,v2,,vq)p=(v_{1},v_{2},\ldots,v_{q}) for some q1q\geq 1. In this case we call the respective problem one-hop (general) sequence direachability problem, abbreviated OHSDR(nσ)OHSDR(n^{\sigma}).

Lemma 3.

PDR(nσ,nδ)PDR(n^{\sigma},n^{\delta}) can be computed via nδn^{\delta} applications of OHSDR(nσ)OHSDR(n^{\sigma}), i.e.,
𝒯(PDR(nσ,nδ))O(𝒯(OHSDR(nσ))nδ\mathcal{T}(PDR(n^{\sigma},n^{\delta}))\leq O(\mathcal{T}(OHSDR(n^{\sigma}))\cdot n^{\delta}).

Proof.

We prove by induction on ii, 1inδ1\leq i\leq n^{\delta}, that 𝒯(PDR(nσ,i))Ci𝒯(OHSDR(nσ))\mathcal{T}(PDR(n^{\sigma},i))\leq C\cdot i\cdot\mathcal{T}(OHSDR(n^{\sigma})), for some universal constant C>0C>0. The induction base is immediate, as PDR(nσ,1)PDR(n^{\sigma},1) is a special case of OHSDR(nσ)OHSDR(n^{\sigma}). Thus, 𝒯(PDR(nσ,1))𝒯(OHSDR(nσ))\mathcal{T}(PDR(n^{\sigma},1))\leq\mathcal{T}(OHSDR(n^{\sigma})).

For the induction step, we consider some ii, 1inδ11\leq i\leq n^{\delta}-1. By definition, an algorithm for solving PDR(nσ,i)PDR(n^{\sigma},i), for an input set 𝒫\mathcal{P}^{\prime} of nσn^{\sigma} dipaths, returns for every dipath p𝒫p\in\mathcal{P}^{\prime}, a set of vertices 𝒱p\mathcal{V}_{p}. For every z𝒱pz\in\mathcal{V}_{p}, we have a vertex vp(z)V(p)v_{p}(z)\in V(p) such that zz is reachable from vp(z)v_{p}(z) via at most ii hops, and moreover, vp(z)v_{p}(z) is the last vertex on pp from which zz is reachable within at most ii hops.

By the induction hypothesis, ii invocations of an algorithm for OHSDR(nσ)OHSDR(n^{\sigma}) solve PDR(nσ,i)PDR(n^{\sigma},i), i.e., they produce the sets {𝒱p|pP}\{\mathcal{V}_{p}~{}|~{}p\in P^{\prime}\}, and for every z𝒱pz\in\mathcal{V}_{p}, they produce the vertex vp(z)v_{p}(z) as above.

As we argue below, via one more invocation of an algorithm for OHSDR(nσ)OHSDR(n^{\sigma}), one can obtain a solution for PDR(nσ,i+1)PDR(n^{\sigma},i+1). Specifically, for every p𝒫p\in\mathcal{P}^{\prime}, p=(x1,x2,,xq)p=(x_{1},x_{2},\ldots,x_{q}), for some q1q\geq 1, we sort the set 𝒱p\mathcal{V}_{p} (from the induction step k=ik=i) such that the vertices zz with vp(z)=x1v_{p}(z)=x_{1} appear first, sorted among them with respect to the fixed topological ordering ζ\zeta, and then appear vertices zz with vp(z)=x2v_{p}(z)=x_{2}, etc., and finally, vertices zz with vp(z)=xqv_{p}(z)=x_{q} appear last. (Within each such subset, vertices are ordered according to ζ\zeta.) Denote by η(p)\eta(p) the resulting vertex sequence.

We invoke an algorithm for OHSDR(nσ)OHSDR(n^{\sigma}), on the input set of nσn^{\sigma} sequences η(p)\eta(p), for all p𝒫p\in\mathcal{P}^{\prime}. For every vertex zz reachable within at most one hop from sequence η(p)\eta(p), let y=vη(p)(z)y=v_{\eta(p)}(z) be the last vertex in η(p)\eta(p) from which zz is reachable within at most one hop. The algorithm then sets vp(z)=vp(y)v_{p}(z)=v_{p}(y) and returns it. (This is done for all vertices zz reachable within one hop from η(p)\eta(p), for every p𝒫p\in\mathcal{P}^{\prime}.) Also for every pPp\in P^{\prime}, the algorithm returns 𝒱p=𝒱η(p)\mathcal{V}_{p}=\mathcal{V}_{\eta(p)}. (The sets 𝒱η(p)\mathcal{V}_{\eta(p)} are returned by an algorithm for OHSDR(nσ)OHSDR(n^{\sigma}) that receives sequences {η(p)|pP}\{\eta(p)~{}|~{}p\in P^{\prime}\} as input).

To prove correctness, suppose that a vertex zVz\in V is reachable within at most i+1i+1 hops from some vertex on a path p𝒫p\in\mathcal{P}^{\prime} and let v=vp(z)v=v_{p}(z) be the last vertex on pp that satisfies this condition. Then there is an incoming neighbour yy of zz which is reachable from vp(z)v_{p}(z) within at most ii hops. The algorithm for PDR(nσ,i)PDR(n^{\sigma},i) inserts yy into η(p)\eta(p), and the algorithm for OHSDR(nσ)OHSDR(n^{\sigma}) discovers that zz is reachable from yy, and thus from η(p)\eta(p) (within one hop). Thus it inserts zz into 𝒱p\mathcal{V}_{p}.

Suppose for contradiction that the algorithm returns some vertex vV(p)v^{\prime}\in V(p), v<pvv<_{p}v^{\prime}, as vp(z)v_{p}(z). But then, by correctness of the algorithms that we use for PDR(nσ,i)PDR(n^{\sigma},i) and OHSDR(nσ)OHSDR(n^{\sigma}), it follows that there exists an incoming neighbour yy^{\prime} of zz which is reachable from vv^{\prime} within ii hops and v=vp(y)v^{\prime}=v_{p}(y^{\prime}). This is a contradiction to the assumption that v=vp(z)v=v_{p}(z). Note also that if zz is not (i+1)(i+1)-reachable from pp, then it is not 11-reachable from η(p)\eta(p). Thus, it will not be included in the output of the algorithm for PDR(nσ,i+1)PDR(n^{\sigma},i+1). ∎
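The induction in the proof can also be phrased as an iterative procedure that maintains, for every reached vertex z, the index on p of the last path vertex v_p(z). A minimal naive sketch for a single path on an adjacency-matrix input (illustrative names; a fast one-hop subroutine would replace the inner loops):

```python
# After `hops` rounds, last[z] is the largest index j such that z is
# reachable from path[j] within `hops` hops (-1 if unreached); this is
# exactly the v_p(z) information maintained in the proof of Lemma 3.

def paths_direach(adj, path, hops):
    n = len(adj)
    last = [-1] * n
    for j, v in enumerate(path):           # 0 hops: the path vertices themselves
        last[v] = max(last[v], j)
    for _ in range(hops):                  # one one-hop relaxation per round
        new_last = last[:]
        for v in range(n):
            if last[v] >= 0:
                for z in range(n):
                    if adj[v][z]:
                        new_last[z] = max(new_last[z], last[v])
        last = new_last
    return last

adj = [[0] * 5 for _ in range(5)]          # DAG: 0 -> 2, 1 -> 3, 3 -> 4
adj[0][2] = adj[1][3] = adj[3][4] = 1
last = paths_direach(adj, [0, 1], 2)       # path p = (0, 1), two hops
```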

This leads to the following assumption.

Assumption 2.
𝒯(OHSDR(nσ))=O~(nω(σ)).\mathcal{T}(OHSDR(n^{\sigma}))=\tilde{O}(n^{\omega(\sigma)}). (17)

By Lemma 3, Assumption 2 implies the Paths Direachability Assumption (see Assumption 1).

Assumption 1 is sufficient for the analysis that we conduct in this section. Next, we identify a number of possible approaches that may be used to provide upper bounds for 𝒯(PDR(nσ,nδ))\mathcal{T}(PDR(n^{\sigma},n^{\delta})) (in the spirit of Assumption 1).

First, one can reduce PDR(nσ,nδ)PDR(n^{\sigma},n^{\delta}) on graphs with diameter at most nδn^{\delta} to the exact S×VS\times V distance computation problem in a directed graph with |S|=nσ|S|=n^{\sigma} sources, with integer weights in the range {0,1,,n1}\{0,1,\ldots,n-1\}. Specifically, given an instance (G=(V,E),𝒫)(G=(V,E),\mathcal{P}^{\prime}) of PDR(nσ,nδ)PDR(n^{\sigma},n^{\delta}), one adds sources S={sp|p𝒫}S=\{s_{p}~{}|~{}p\in\mathcal{P}^{\prime}\}, and connects each source sps_{p} to all the vertices of its respective path pp. Denote by G=(V,E,w)G^{\prime}=(V^{\prime},E^{\prime},w) the new graph that we construct. The vertex set VV^{\prime} is given by V=V{sp|p𝒫}V^{\prime}=V\cup\{s_{p}~{}|~{}p\in\mathcal{P}^{\prime}\}. The edge set EE^{\prime} is given by E=E{(sp,v)|vV(p),p𝒫}E^{\prime}=E\cup\{(s_{p},v)~{}|~{}v\in V(p),p\in\mathcal{P}^{\prime}\}.

Denote path p=(v0,v1,,v(p))p=(v_{0},v_{1},\ldots,v_{\ell(p)}) (pPp\in P^{\prime}), where (p)\ell(p) is the length (number of hops) of the path pp. The weight function ww is set by assigning weight (p)\ell(p) to the edge sp,v0\langle s_{p},v_{0}\rangle, weight (p)1\ell(p)-1 to the edge sp,v1\langle s_{p},v_{1}\rangle, etc. (Generally, for all 0i(p)0\leq i\leq\ell(p), w(sp,vi)w(\langle s_{p},v_{i}\rangle) is set to (p)i\ell(p)-i.) Finally, all original edges of GG are assigned weight 0.

One then invokes an algorithm for the S×VS\times V^{\prime} distance computation problem on the graph GG^{\prime} (with the set SS of nσn^{\sigma} sources). For each pair (p,z)𝒫×V(p,z)\in\mathcal{P}^{\prime}\times V, one sets the last vertex vp(z)v_{p}(z) on pp from which zz is reachable in the following way: If the distance dG(sp,z)=d_{G^{\prime}}(s_{p},z)=\infty, then zz is not reachable from pp at all. Otherwise, let i=dG(sp,z)i=d_{G^{\prime}}(s_{p},z). Note that 0i(p)0\leq i\leq\ell(p). We then set v(p)iv_{\ell(p)-i} as vp(z)v_{p}(z). It is easy to verify the correctness of this reduction. However, to the best of our knowledge, there are currently no known directed exact distance computation algorithms (for multiple sources) that require less than O~(nω)\tilde{O}(n^{\omega}) time. The state-of-the-art algorithm by Zwick [Zwi02] requires O~(n2.523667)\tilde{O}(n^{2.523667}) time for directed all-pairs shortest paths.
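A minimal sketch of this reduction for a single path, with textbook Dijkstra standing in for the (hypothetical) fast multi-source distance computation; all names are illustrative:

```python
import heapq

# The source s_p is kept implicit: we seed Dijkstra with
# dist[path[i]] = l - i, mimicking the edge (s_p, v_i) of weight
# l(p) - i; original edges have weight 0, and for finite dist[z] the
# answer is v_p(z) = path[l - dist[z]].

def last_path_vertex(adj, path):           # adj: list of out-neighbour lists
    n, l = len(adj), len(path) - 1
    INF = float("inf")
    dist = [INF] * n
    pq = []
    for i, v in enumerate(path):
        if l - i < dist[v]:
            dist[v] = l - i
            heapq.heappush(pq, (l - i, v))
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue
        for z in adj[v]:                   # all graph edges have weight 0
            if d < dist[z]:
                dist[z] = d
                heapq.heappush(pq, (d, z))
    return [None if dist[z] == INF else path[l - dist[z]] for z in range(n)]

adj = [[2], [3], [], [4], []]              # 0 -> 2, 1 -> 3, 3 -> 4
vp = last_path_vertex(adj, [0, 1])         # path p = (0, 1)
```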

Another possible approach to the study of the complexity of PDR(nσ,nδ)PDR(n^{\sigma},n^{\delta}) is through the maximum witnesses of Boolean matrix multiplication (abbreviated MWBMM) problem. Recall that given two Boolean matrices AA and BB, C=BAC=B\star A denotes their Boolean matrix product (see (1)). The MWBMM problem asks, for every entry i,ji,j of CC for which C[i,j]=1C[i,j]=1, to compute the maximum index kk such that B[i,k]=A[k,j]=1B[i,k]=A[k,j]=1.

One can reduce the one-hop monotone sequence direachability problem (OHMSDR(nσ)OHMSDR(n^{\sigma})) to this problem in the following way: define the matrix BB as a 𝒫×V\mathcal{P}^{\prime}\times V incidence matrix, where entry B[p,z]=1B[p,z]=1, if vertex zz belongs to V(p)V(p), and 0 otherwise. The columns of BB are ordered according to the fixed topological ordering ζ\zeta of GG. We also need another matrix AA which is the V×VV\times V adjacency matrix of the input graph GG. Here both rows and columns are ordered according to ordering ζ\zeta.

Consider the Boolean matrix product C=BAC=B\star A. For every entry C[p,z]C[p,z], p𝒫p\in\mathcal{P}^{\prime}, zVz\in V, such that C[p,z]=0C[p,z]=0, the vertex zz is not reachable from pp within one hop. On the other hand C[p,z]=1C[p,z]=1 iff zz is one-hop reachable from pp, and moreover, the largest witness vv such that B[p,v]=A[v,z]=1B[p,v]=A[v,z]=1 is the last vertex on pp from which zz is one-hop reachable.
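A naive cubic-time sketch that illustrates only the semantics of this reduction (the maximum witness equals the last on-path one-hop predecessor), not a fast MWBMM algorithm:

```python
# B is the P' x V path-incidence matrix, A the adjacency matrix; for
# each (p, z) with C[p, z] = 1, keep the largest witness k with
# B[p][k] = A[k][z] = 1.

def max_witnesses(B, A):
    rows, n = len(B), len(A)
    W = [[-1] * n for _ in range(rows)]    # -1 encodes C[p, z] = 0
    for p in range(rows):
        for z in range(n):
            for k in range(n):
                if B[p][k] and A[k][z]:
                    W[p][z] = k            # later witnesses overwrite earlier ones
    return W

B = [[1, 1, 0, 0]]                         # one path p through vertices 0, 1
A = [[0, 0, 1, 0],                         # edges 0 -> 2, 1 -> 2, 1 -> 3
     [0, 0, 1, 1],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
W = max_witnesses(B, A)                    # last on-path predecessor of 2 and 3 is vertex 1
```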

This approach is problematic for two reasons. First, it only solves the one-hop monotone sequence direachability problem, as opposed to the one-hop general sequence direachability problem that we need for proving Assumption 1. Second, to the best of our knowledge, there is currently no known algorithm for the MWBMMMWBMM problem for matrices of dimensions nσ×nn^{\sigma}\times n and n×nn\times n that works in O~(nω(σ))\tilde{O}(n^{\omega(\sigma)}) time.

Finally, the third approach for making progress towards proving (possibly a weaker form of) Assumption 1 is via max-min matrix product (see [DP09] and references therein).

Given two matrices BB and AA (of appropriate dimensions, i.e., the product BAB\cdot A is defined), the max-min matrix product C=B∨⃝AC=B\ovee A is defined in the following way: The entry C[i,j]C[i,j] (for ii and jj in the appropriate respective ranges) is given by C[i,j]=max𝑘min{B[i,k],A[k,j]}C[i,j]=\underset{k}{\max}~{}\min\{B[i,k],A[k,j]\}.

To reduce OHSDR(nσ)OHSDR(n^{\sigma}) problem to this matrix product (for matrices of dimensions nσ×nn^{\sigma}\times n and n×nn\times n, respectively), we define matrices BB and AA in the following way: Let 𝒬\mathcal{Q} be a set of nσn^{\sigma} sequences, which serves as an input instance for OHSDR(nσ)OHSDR(n^{\sigma}). Rows of BB are indexed by sequences q𝒬q\in\mathcal{Q} and its columns are indexed by vertices vVv\in V. Rows and columns of AA are indexed by vertices vVv\in V. The ordering of the columns of BB is the same as that of the rows (and columns) of AA. For a sequence q𝒬q\in\mathcal{Q} and a vertex vV(q)v\in V(q), we set B[q,v]=jq(v)B[q,v]=j_{q}(v), where jq(v)j_{q}(v) is the index of vv in the sequence qq. Otherwise (if vV(q)v\notin V(q)), we set B[q,v]=B[q,v]=-\infty. The matrix AA is the adjacency matrix of the input graph, in which A[v,z]=A[v,z]=\infty iff v,zE\langle v,z\rangle\in E and otherwise A[v,z]=A[v,z]=-\infty.

Given a pair q𝒬,zVq\in\mathcal{Q},z\in V, consider the entry C[q,z]C[q,z] of the product matrix C=B∨⃝AC=B\ovee A. If zz is not one-hop reachable from qq, then for all vVv\in V, either B[q,v]=B[q,v]=-\infty or A[v,z]=A[v,z]=-\infty, i.e., we have min{B[q,v],A[v,z]}=\min\{B[q,v],A[v,z]\}=-\infty. Hence in this case, C[q,z]=max𝑣min{B[q,v],A[v,z]}=C[q,z]=\underset{v}{\max}~{}\min\{B[q,v],A[v,z]\}=-\infty. On the other hand, if zz is one-hop reachable from qq, then for every vV(q)v\in V(q) for which v,zE\langle v,z\rangle\in E, we have B[q,v]=jq(v)B[q,v]=j_{q}(v) and A[v,z]=A[v,z]=\infty, i.e., min{B[q,v],A[v,z]}=jq(v)\min\{B[q,v],A[v,z]\}=j_{q}(v). Also, for every vV(q)v\notin V(q), we have B[q,v]=B[q,v]=-\infty, implying that min{B[q,v],A[v,z]}=\min\{B[q,v],A[v,z]\}=-\infty.

Hence in this case we have

C[q,z]=max𝑣min{B[q,v],A[v,z]}=maxvV(q),v,zEjq(v),C[q,z]=\underset{v}{\max}~{}\min\{B[q,v],A[v,z]\}=\underset{v\in V(q),\langle v,z\rangle\in E}{\max}j_{q}(v),

as desired. We also need to compute the maximum C[q,z]C^{\prime}[q,z] of C[q,z]C[q,z] (computed by the above product) and B[q,z]B[q,z], as it is possible that zV(q)z\in V(q), and that its index in qq is higher than the largest index of a vertex yV(q)y\in V(q) from which zz is reachable by exactly one hop.

Recall also that each sequence qq is associated with a path pPp\in P^{\prime}, and for every vertex yy with finite B[q,y]B[q,y] (not -\infty) we store the vertex vp(y)v_{p}(y). At this point these values are recomputed along the lines described in the proof of Lemma 3. Specifically, if C[q,z]=C[q,z]=maxvV(q),v,zEjq(v)C^{\prime}[q,z]=C[q,z]=\underset{v\in V(q),\langle v,z\rangle\in E}{\max}j_{q}(v), then let yV(q)y\in V(q) be a vertex such that C[q,z]=C[q,z]=jq(y)C^{\prime}[q,z]=C[q,z]=j_{q}(y), and we set vp(z)=vp(y)v_{p}(z)=v_{p}(y). Otherwise C[q,z]=B[q,z]C^{\prime}[q,z]=B[q,z] and the value of vp(z)v_{p}(z) stays unchanged.
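A tiny sketch of this construction and of the max-min product itself, evaluated naively (illustrative names; this does not reflect the fast max-min product algorithms cited in the text):

```python
# B[q, v] = index of v in sequence q (else -inf); A[v, z] = +inf iff
# (v, z) is an edge; then C[q, z] is the largest index of a vertex of q
# with an edge into z, and -inf if z is not one-hop reachable from q.

NEG, POS = float("-inf"), float("inf")

def maxmin_product(B, A):
    rows, n = len(B), len(A)
    return [[max(min(B[q][v], A[v][z]) for v in range(n))
             for z in range(n)] for q in range(rows)]

B = [[1, 2, NEG, NEG]]                     # sequence q = (v0, v1), indices 1 and 2
A = [[NEG, NEG, POS, NEG],                 # edges 0 -> 2, 1 -> 2, 1 -> 3
     [NEG, NEG, POS, POS],
     [NEG, NEG, NEG, NEG],
     [NEG, NEG, NEG, NEG]]
C = maxmin_product(B, A)                   # largest-index vertex of q reaching each z
```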

Denote by $\mathcal{T}(MMMP(n^{\sigma}))$ the complexity of the max-min product of an $n^{\sigma}\times n$ matrix by an $n\times n$ matrix. We have proved that

\mathcal{T}(OHSDR(n^{\sigma}))\leq O(\mathcal{T}(MMMP(n^{\sigma}))).

Hence by Lemma 3, we conclude the following corollary:

Corollary 1.
\mathcal{T}(PDR(n^{\sigma},n^{\delta}))=O(\mathcal{T}(MMMP(n^{\sigma}))\cdot n^{\delta}).

Hence the following assumption would suffice to establish the Paths Direachability Assumption (see Assumption 1).

Assumption 3.
\mathcal{T}(MMMP(n^{\sigma}))=\tilde{O}(n^{\omega(\sigma)}).

However, to the best of our knowledge, there are currently no known bounds on $\mathcal{T}(MMMP(n^{\sigma}))$ which are close to $\tilde{O}(n^{\omega(\sigma)})$. See [DP09, GIA+21] for the state-of-the-art bounds on $\mathcal{T}(MMMP(n^{\sigma}))$.

We note that the state-of-the-art quantum algorithm for $MMMP(n)$ (i.e., $\sigma=1$, that is, for square matrices) [LGN14] requires $O(n^{2.473})$ time. It is plausible that for general $\sigma$, $0\leq\sigma\leq 1$, the time complexity of the quantum algorithm for $MMMP(n^{\sigma})$ is not much larger than $\tilde{O}(n^{\omega(\sigma)})$. This could lead to some results along the lines of those we obtain under Assumption 1, albeit admittedly not as strong as those that we attain under Assumption 1 (see Section 4.2).

4.2 Analysis of the Recursive Algorithm under Paths Direachability Assumption

The current state-of-the-art algorithms based on matrix multiplication for the paths direachability problem do not provide the running time given in Assumption 1. However, we believe that it is of interest to analyse the (hypothetical) situation that such algorithms will eventually be devised. It is worth pointing out that using (BFS-based) naive methods one can compute paths direachability for $n^{\sigma}$ dipaths in the same time as $S\times V$-direachability for $|S|=n^{\sigma}$ sources, i.e., within $O(mn^{\sigma})$ time [KP22a].

Let $D_{k+1}=n^{\delta}$, $\delta<1/2$, be the desired diameter of the invocation of Procedure Recur-DiReach (Algorithm 2) with depth parameter $k+1$. It follows that $|P^{\prime}|=\tilde{O}(n^{1-2\delta})$ and we need to invoke Procedure Recur-DiReach recursively with depth $k$ for $S\times V$-direachability with $|S|=\tilde{O}(n^{1-2\delta})$. Then, the diameter reduction sub-step of the invocation with depth parameter $k+1$ requires time $\tilde{O}(n^{g_{k}(1-2\delta)})$ on graphs with $\Theta(n^{2})$ edges, and more generally, $\tilde{O}(n^{g_{k}^{(\mu)}(1-2\delta)})$ time on graphs with $\Theta(n^{\mu})$ edges, for any $1\leq\mu\leq 2$. (Note that the diameter reduction sub-step of the algorithm of [KP22a] is applied to the original graph that has $\Theta(n^{\mu})$ edges. The reachability sub-step is then applied to a possibly denser graph.) We remark also that the implication that this step requires $\tilde{O}(n^{g_{k}(1-2\delta)})$ time in general graphs and $\tilde{O}(n^{g_{k}^{(\mu)}(1-2\delta)})$ time on graphs with $\Theta(n^{\mu})$ edges also follows, under Assumption 1, by induction on $k$. Under Assumption 1, step 5 of Algorithm 2 (Procedure Recur-DiReach) is indeed a recursive invocation of the same algorithm on the same input graph $G$ with a decremented depth parameter $k$. The reachability computation sub-step of this invocation is the same as the reachability computation step of Algorithm 1. By Assumption 1, for a given $0\leq\sigma\leq 1$, this computation requires $D_{k+1}=n^{\delta}$ iterations of fast rectangular matrix multiplication, i.e., $\tilde{O}(n^{\omega(\sigma)+\delta})$ time.

4.3 Analysis for General Graphs

We first analyse the recursive scheme for the most general case, when $m$ may be as large as $\Theta(n^{2})$. As evident from the above discussion, the following equation defines the function $g_{k+1}(\cdot)$ that expresses the exponent of $n$ in the running time of an invocation of Procedure Recur-DiReach (Algorithm 2) with depth parameter $k+1$ in terms of the exponent $\delta$ of the desired diameter $D_{k+1}=n^{\delta}$ of this invocation and the function $g_{k}(\cdot)$ that expresses the exponent of $n$ in the running time of its recursive invocation (with depth parameter $k$):

g_{k+1}(\sigma)=\min_{\delta}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\} (18)

Notice that for $k=1$,

g_{1}(\sigma)=\min_{\delta}\max\{g_{0}(1-2\delta),\omega(\sigma)+\delta\}
=\min_{\delta}\max\{1+\frac{2}{3}\cdot\omega(1-2\delta),\omega(\sigma)+\delta\}. (19)

Note that $1+\frac{2}{3}\omega(1-2\delta)$ is monotonically non-increasing as $\delta$ grows, while $\omega(\sigma)+\delta$ is increasing. Also, for $\delta=0$, we have $1+\frac{2}{3}\omega(1-2\delta)=1+\frac{2}{3}\omega(1)>\omega(1)>\omega(\sigma)=\omega(\sigma)+\delta$, for any $\sigma<1$. For $\delta=1/2$, for any $0\leq\sigma\leq 1$, we have $1+\frac{2}{3}\omega(0)=\frac{7}{3}<2\frac{1}{2}\leq\omega(\sigma)+1/2=\omega(\sigma)+\delta$ (since $\omega(0)=2$). Thus the function $g_{1}(\sigma)$ is minimized when $1+\frac{2}{3}\cdot\omega(1-2\delta)=\omega(\sigma)+\delta$. We will soon show that since $\omega(\sigma)$ is a continuous function, so are the functions $g_{0}(\sigma),g_{1}(\sigma),g_{2}(\sigma),\ldots$ (this can be argued by induction on $k$). We will also show that they are all monotonically increasing, for $\sigma>\tilde{\sigma}$. Thus, the parameter $\delta$ will be set so that

g_{k}(1-2\delta)=\omega(\sigma)+\delta, (20)

and the parameter $D=D_{k+1}$ will be set as $n^{\delta}$.
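The balancing rule (20) can be solved numerically. The sketch below is only illustrative: it uses a crude piecewise-linear stand-in `omega_hat` for $\omega(\cdot)$ (equal to $2$ up to an assumed dual exponent $\alpha=0.32$, then linear up to $\omega(1)\approx 2.3716$), whereas the paper's analysis uses the state-of-the-art bounds of Table 1; the function names are ours.

```python
def omega_hat(s, alpha=0.32, omega1=2.3716):
    """Assumed stand-in for the rectangular matrix multiplication exponent:
    omega(s) = 2 for s <= alpha, then linear up to omega(1). Illustration only."""
    if s <= alpha:
        return 2.0
    return 2.0 + (omega1 - 2.0) * (s - alpha) / (1.0 - alpha)

def balance_delta(sigma, g_prev, lo=0.0, hi=0.5, iters=60):
    """Bisect for the delta with g_prev(1 - 2*delta) = omega_hat(sigma) + delta.
    The first term is non-increasing in delta and the second is increasing,
    so their difference has a single sign change in [0, 1/2]."""
    f = lambda d: g_prev(1.0 - 2.0 * d) - (omega_hat(sigma) + d)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

g0 = lambda s: 1.0 + (2.0 / 3.0) * omega_hat(s)  # exponent of the base algorithm
delta = balance_delta(0.5, g0)                   # balancing delta for sigma = 0.5
g1_at_half = omega_hat(0.5) + delta              # = g0(1 - 2*delta), up to bisection error
```

Under this stand-in, one recursion step at $\sigma=0.5$ already yields an exponent strictly below $g_{0}(0.5)$, mirroring the improvement discussed below.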

Analysing Equation (19) numerically using the state-of-the-art upper bounds on $\omega(\sigma)$ for various values of $\sigma$ (see Table 1), we get that $g_{1}(\sigma)<\omega(1)=\omega$, for $\sigma\leq 0.66$. Recall that $g_{0}(\sigma)<\omega(1)=\omega$, for $\sigma\leq 0.53$ (see Theorem 2). Thus, with only one step of our recursive scheme, we can increase the number of sources for which we can compute $S\times V$-direachability in $o(n^{\omega})$ time in general graphs from $|S|=O(n^{0.53})$ to $|S|=O(n^{0.66})$ (under Assumption 1). In Table 6, we present $g_{k}(\sigma)$ for various values of $k$ and $\sigma$. Each row of the table corresponds to a fixed value of $\sigma$ and shows the values of $g_{0}(\sigma)$ and of $g_{k}(\sigma)$ for $k=1,3,5,7$ and $9$. For each $k$, the row corresponding to the first value of $\sigma$ for which $g_{k}(\sigma)>\omega$ is highlighted. Empirically, one can see that with each step of the recursion, the number of sources from which we can compute $S\times V$-direachability in $o(n^{\omega})$ time in general graphs increases. For example, for $k=7$, the number of sources from which we can compute $S\times V$-direachability in $o(n^{\omega})$ time is at least $n^{0.96}$.

$\sigma$ $g_{0}(\sigma)$ $g_{1}(\sigma)$ $g_{3}(\sigma)$ $g_{5}(\sigma)$ $g_{7}(\sigma)$ $g_{9}(\sigma)$
0.34 2.333733 2.333733 2.333733 2.333733 2.333733 2.333733
0.36 2.33533 2.333733 2.333733 2.333733 2.333733 2.333733
0.40 2.33969 2.335332 2.333733 2.333733 2.333733 2.333733
0.44 2.34729 2.3375132 2.333733 2.333733 2.333733 2.333733
0.48 2.35687 2.339694 2.335239 2.333733 2.333733 2.333733
0.52 2.368166 2.3434932 2.3353324 2.333733 2.333733 2.333733
0.54 2.374075 2.3453928 2.3353324 2.333733 2.333733 2.333733
0.56 $>\omega$ 2.3514334 2.3353324 2.333733 2.333733 2.333733
0.60 $>\omega$ 2.3568744 2.337513 2.333733 2.333733 2.333733
0.64 $>\omega$ 2.365913 2.3396939 2.335239 2.333733 2.333733
0.66 $>\omega$ 2.368166 2.34203 2.3353324 2.333733 2.333733
0.68 $>\omega$ 2.374075 2.342994 2.3353324 2.333733 2.333733
0.72 $>\omega$ $>\omega$ 2.3463128 2.336308 2.334272 2.333733
0.76 $>\omega$ $>\omega$ 2.353146 2.341734 2.335332 2.333733
0.80 $>\omega$ $>\omega$ 2.361996 2.344272 2.3395696 2.3353324
0.84 $>\omega$ $>\omega$ 2.36977 2.351433 2.344272 2.3397729
0.85 $>\omega$ $>\omega$ 2.374075 2.355351 2.346984 2.341734
0.88 $>\omega$ $>\omega$ $>\omega$ 2.359773 2.3514334 2.349319
0.92 $>\omega$ $>\omega$ $>\omega$ 2.369501 2.359773 2.359501
0.93 $>\omega$ $>\omega$ $>\omega$ 2.374209 2.364429 2.359501
0.96 $>\omega$ $>\omega$ $>\omega$ $>\omega$ 2.370262 2.370262
0.97 $>\omega$ $>\omega$ $>\omega$ $>\omega$ 2.374793 2.370262
0.98 $>\omega$ $>\omega$ $>\omega$ $>\omega$ $>\omega$ 2.375907
Table 6: $g_{k}(\sigma)$ for various values of $k$ and $\sigma$. We remark that in each row the values strictly decrease as the index $k$ grows, and in each column the values strictly increase as $\sigma$ grows. In the table some of these values look equal, because (naturally) we are using only a finite number of digits of precision.
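The qualitative shape of Table 6 can be reproduced by iterating recurrence (18) on a grid. The sketch below again uses an assumed piecewise-linear stand-in for $\omega(\cdot)$ rather than the actual Table 1 bounds, so it reproduces the behaviour of the table (values non-increasing in $k$, bounded below by $\omega(\sigma)+\frac{1-\sigma}{2}$) but not its exact entries; all names are ours.

```python
import bisect

def omega_hat(s, alpha=0.32, omega1=2.3716):
    # Assumed stand-in for the rectangular matrix multiplication exponent.
    if s <= alpha:
        return 2.0
    return 2.0 + (omega1 - 2.0) * (s - alpha) / (1.0 - alpha)

N = 400
xs = [i / N for i in range(N + 1)]            # grid of sigma values in [0, 1]
deltas = [0.5 * i / N for i in range(N + 1)]  # grid of delta values in [0, 1/2]

def interp(tab, x):
    """Piecewise-linear interpolation of the table tab (values on xs) at x."""
    j = min(max(bisect.bisect_left(xs, x), 1), N)
    t = (x - xs[j - 1]) / (xs[j] - xs[j - 1])
    return tab[j - 1] * (1 - t) + tab[j] * t

# g_0(sigma) = 1 + (2/3) * omega(sigma); then iterate recurrence (18) five times.
tables = [[1 + (2 / 3) * omega_hat(x) for x in xs]]
for k in range(5):
    prev = tables[-1]
    tables.append([min(max(interp(prev, 1 - 2 * d), omega_hat(x) + d)
                       for d in deltas) for x in xs])
```

At any fixed $\sigma$ above the threshold, the computed exponents decrease with $k$ while staying above the lower bound $\omega(\sigma)+\frac{1-\sigma}{2}$ established in Lemma 5 below.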

We next prove that the functions $g_{k}(\sigma)$ are monotonically non-decreasing and continuous.

Lemma 4.

For any $k\geq 0$, the function $g_{k}(\sigma)$ is continuous and monotonically non-decreasing in the entire range $0\leq\sigma\leq 1$, and $g_{k}(1)>\omega$.

Proof.

The proof is by induction on $k$. For $k=0$, by Equation (4) we have $g_{0}(\sigma)=1+\frac{2}{3}\cdot\omega(\sigma)$. We also saw that $g_{0}(1)>\omega$. Monotonicity and continuity of the function $g_{0}(\sigma)$ follow from the fact that $\omega(\sigma)$ is monotonically non-decreasing and continuous.

Assume inductively that $g_{0}(\sigma),g_{1}(\sigma),\ldots,g_{k}(\sigma)$ are all monotonically non-decreasing and continuous, and that $g_{0}(1),g_{1}(1),\ldots,g_{k}(1)>\omega$. Recall that (see (18)),

g_{k+1}(\sigma)=\min_{\delta}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}.

In particular, $g_{k+1}(1)=\min_{\delta}\max\{g_{k}(1-2\delta),\omega(1)+\delta\}$. Note that for $\delta=0$, by the inductive hypothesis $g_{k}(1)>\omega(1)$. Also by the inductive hypothesis, $g_{k}(1-2\delta)$ does not increase as $\delta$ grows, while $\omega(1)+\delta$ increases. For any $\delta>0$, $\max\{g_{k}(1-2\delta),\omega(1)+\delta\}\geq\omega(1)+\delta>\omega(1)$. Hence $g_{k+1}(1)>\omega$.

Fix two values $0\leq\sigma\leq\sigma^{\prime}\leq 1$. Since $\omega(\cdot)$ is a monotonically non-decreasing function,

g_{k+1}(\sigma^{\prime})=\min_{\delta}\max\{g_{k}(1-2\delta),\omega(\sigma^{\prime})+\delta\}\geq\min_{\delta}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}=g_{k+1}(\sigma), (21)

i.e., $g_{k+1}(\cdot)$ is a monotonically non-decreasing function.

By the induction hypothesis, we have $g_{k}(1)>\omega(1)>\omega(\sigma)$, i.e., for $\delta=0$, it holds that $g_{k}(1-2\delta)>\omega(\sigma)+\delta$. We now argue that

g_{k}(0)\leq\omega(\sigma)+1/2, (22)

i.e., for $\delta=1/2$, we have $g_{k}(1-2\delta)\leq\omega(\sigma)+\delta$.

Claim 2.

For any $k\geq 0$, $g_{k}(0)\leq 2\frac{1}{2}$.

Proof.

For $k=0$, we have $g_{0}(0)=1+\frac{2}{3}\omega(0)=2\frac{1}{3}<2\frac{1}{2}$. For $k>0$, assume inductively that $g_{k}(0)\leq 2\frac{1}{2}$. Then

g_{k+1}(0)\leq\max\{g_{k}(1-2\cdot\frac{1}{2}),\omega(0)+1/2\}=\max\{g_{k}(0),2\frac{1}{2}\}=2\frac{1}{2}. ∎

On the other hand, for any $0\leq\sigma\leq 1$, we have $\omega(\sigma)+1/2\geq 2\frac{1}{2}$, implying inequality (22).

Hence, for $\delta=0$, we have $g_{k}(1-2\delta)=g_{k}(1)>\omega(\sigma)+\delta=\omega(\sigma)$, and for $\delta=1/2$ it holds that $g_{k}(1-2\delta)=g_{k}(0)\leq\omega(\sigma)+\delta=\omega(\sigma)+1/2$. Thus by continuity and monotonicity of $g_{k}(\cdot)$ (which is a part of the induction hypothesis), there exists a value of $\delta$, $0<\delta\leq 1/2$, such that

g_{k+1}(\sigma)=\min_{\delta}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}=g_{k}(1-2\delta)=\omega(\sigma)+\delta.

For fixed $0\leq\sigma\leq\sigma^{\prime}\leq 1$, let $\delta$ and $\delta^{\prime}$, respectively, be the values such that $g_{k+1}(\sigma)=g_{k}(1-2\delta)=\omega(\sigma)+\delta$ and $g_{k+1}(\sigma^{\prime})=g_{k}(1-2\delta^{\prime})=\omega(\sigma^{\prime})+\delta^{\prime}$. Then, $g_{k+1}(\sigma^{\prime})\geq g_{k+1}(\sigma)$ (by Inequality (21)) implies $g_{k}(1-2\delta^{\prime})\geq g_{k}(1-2\delta)$. By the induction hypothesis, $g_{k}(\cdot)$ is monotonically non-decreasing. Thus, $1-2\delta^{\prime}\geq 1-2\delta$, i.e., $\delta^{\prime}\leq\delta$.

Finally, we argue that $g_{k+1}(\cdot)$ is a continuous function. Let $\sigma$, $0\leq\sigma<1$, be a fixed value. Since $\omega(\cdot)$ is a continuous function, for any $\epsilon>0$, there exists $\xi>0$ such that for any $\sigma^{\prime}$ with $\sigma<\sigma^{\prime}<\sigma+\xi$, we have $\omega(\sigma^{\prime})-\omega(\sigma)<\epsilon$. Recall that $g_{k+1}(\sigma)=\omega(\sigma)+\delta=g_{k}(1-2\delta)$, for some fixed $\delta>0$. Also,

\omega(\sigma^{\prime})+\delta\geq g_{k}(1-2\delta)=\omega(\sigma)+\delta=g_{k+1}(\sigma).

Let $\delta^{\prime}$ be the value that minimizes $g_{k+1}(\sigma^{\prime})$. As $\delta^{\prime}\leq\delta$ and $g_{k+1}(\sigma^{\prime})=\omega(\sigma^{\prime})+\delta^{\prime}=g_{k}(1-2\delta^{\prime})$, we have

g_{k+1}(\sigma^{\prime})-g_{k+1}(\sigma)=(\omega(\sigma^{\prime})+\delta^{\prime})-(\omega(\sigma)+\delta)\leq(\omega(\sigma^{\prime})+\delta)-(\omega(\sigma)+\delta)=\omega(\sigma^{\prime})-\omega(\sigma)<\epsilon. (23)

This proves that $g_{k+1}(\cdot)$ is continuous to the right of $\sigma$ (for any $\sigma<1$). The proof that it is continuous to the left of $\sigma$, for any $0<\sigma\leq 1$, is symmetric. ∎

In the next lemma we argue that each application of the recursion yields a running time which is no worse than the one we had before the current application. Moreover, we show that the recursion cannot lead to a better exponent than $\omega(\sigma)+\frac{1-\sigma}{2}$. Note that unlike Lemma 4, which applies in the entire range $0\leq\sigma\leq 1$, the next lemma applies only for $\tilde{\sigma}<\sigma<1$.

Lemma 5.

For any $\tilde{\sigma}<\sigma\leq 1$ and any integer $k\geq 0$, we have

g_{0}(\sigma)\geq g_{1}(\sigma)\geq\ldots\geq g_{k}(\sigma)\geq\omega(\sigma)+\frac{1-\sigma}{2}.
Remark.

See Equation (5) in Section 3.1 for the definition of $\tilde{\sigma}$.

Proof.

The proof follows by induction on $k$. For $k=0$, we require

g_{0}(\sigma)\geq\omega(\sigma)+\frac{1-\sigma}{2}. (24)

Recall that $g_{0}(\sigma)=1+\frac{2}{3}\cdot\omega(\sigma)$. Inequality (24) (for $\sigma\geq\tilde{\sigma}$) now follows from Lemma 1, i.e., from

\omega(\sigma)\leq\frac{3}{2}\cdot(1+\sigma). (25)

Assume inductively that for some $k$,

g_{0}(\sigma)\geq g_{1}(\sigma)\geq\ldots\geq g_{k}(\sigma)\geq\omega(\sigma)+\frac{1-\sigma}{2}.

Recall that

g_{k+1}(\sigma)=\min_{\delta}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}. (26)

Fix $\sigma$, $\tilde{\sigma}<\sigma\leq 1$. For $0\leq\delta\leq\frac{1-\sigma}{2}$, the function $\omega(\sigma)+\delta$ increases as $\delta$ grows, but still we have $\omega(\sigma)+\delta\leq\omega(\sigma)+\frac{1-\sigma}{2}$. Also, by the inductive hypothesis, $g_{k}(\sigma)\geq\omega(\sigma)+\frac{1-\sigma}{2}$. Moreover, $g_{k}(1-2\delta)\geq g_{k}(\sigma)$, because $1-2\delta\geq\sigma$ and, by Lemma 4, $g_{k}(\cdot)$ is monotonically non-decreasing. Hence $\max\{\omega(\sigma)+\delta,g_{k}(1-2\delta)\}=g_{k}(1-2\delta)$ for any $\delta\leq\frac{1-\sigma}{2}$. The minimum in this range is achieved by setting $\delta=\frac{1-\sigma}{2}$, i.e., we have $\min_{\delta\leq\frac{1-\sigma}{2}}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}=g_{k}(\sigma)\geq\omega(\sigma)+\frac{1-\sigma}{2}$. At this point we still have $g_{k}(1-2\delta)=g_{k}(\sigma)\geq\omega(\sigma)+\frac{1-\sigma}{2}$. As we increase $\delta$ further, $\omega(\sigma)+\delta$ increases and $g_{k}(1-2\delta)$ decreases or stays the same. Recall that by Lemma 4, $g_{k}(\sigma)$ is a monotonically non-decreasing function in the entire range $0<\sigma<1$. Suppose first that

g_{k}(\sigma)>\omega(\sigma)+\frac{1-\sigma}{2}. (27)

Our analysis splits into two sub-cases. In the first sub-case, there exists a value $\frac{1-\sigma}{2}<\delta\leq 1/2$ so that $g_{k}(1-2\delta)=\omega(\sigma)+\delta$. Then for $\epsilon=\delta-\frac{1-\sigma}{2}>0$, we have $g_{k}(1-2\delta)=g_{k}(\sigma-2\epsilon)=\omega(\sigma)+\frac{1-\sigma}{2}+\epsilon$. By Lemma 4, $g_{k}(1-2\delta)$ is a monotonically non-increasing function of $\delta$, while $\omega(\sigma)+\delta$ is a monotonically increasing function of $\delta$. Thus,

g_{k+1}(\sigma)=\min_{\delta}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}=g_{k}(\sigma-2\epsilon)=\omega(\sigma)+\frac{1-\sigma}{2}+\epsilon>\omega(\sigma)+\frac{1-\sigma}{2}.

In this sub-case, by Lemma 4 (monotonicity of $g_{k}(\cdot)$) we also have $g_{k+1}(\sigma)=g_{k}(\sigma-2\epsilon)\leq g_{k}(\sigma)$. On the other hand, if for all $\delta\leq 1/2$, we have $g_{k}(1-2\delta)>\omega(\sigma)+\delta$, then, in particular, we have $g_{k}(0)>\omega(\sigma)+1/2\geq 2\frac{1}{2}$. However, by the induction hypothesis, $g_{k}(0)\leq g_{0}(0)=1+\frac{2}{3}\omega(0)=\frac{7}{3}$, a contradiction.

The second case is $g_{k}(\sigma)=\omega(\sigma)+\frac{1-\sigma}{2}$. Then by monotonicity of $g_{k}(1-2\delta)$ and $\omega(\sigma)+\delta$ (as functions of $\delta$) we have

\min_{\delta\leq\frac{1-\sigma}{2}}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}=\omega(\sigma)+\frac{1-\sigma}{2}=g_{k}(\sigma).

For $\delta>\frac{1-\sigma}{2}$, $\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}>\omega(\sigma)+\frac{1-\sigma}{2}$, and thus

g_{k+1}(\sigma)=\min_{\delta}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}=\omega(\sigma)+\frac{1-\sigma}{2}=g_{k}(\sigma). ∎

Recall that the dual matrix multiplication exponent $\alpha$ is the maximum (in fact, supremum) value such that $\omega(\alpha)=2$.

Next, we argue that in the range $\tilde{\sigma}\leq\sigma\leq 1$, the functions $g_{k}(\sigma)$ ($k\geq 0$) are strictly monotonically increasing. (We have already shown in Lemma 4 that these functions are monotonically non-decreasing in the entire range $0\leq\sigma\leq 1$.)

Lemma 6.

For any integer $k\geq 0$, the function $g_{k}(\sigma)$ is strictly monotonically increasing in the range $\alpha<\sigma\leq 1$. Moreover, in the range $\tilde{\sigma}<\sigma<1$, we also have

g_{0}(\sigma)>g_{1}(\sigma)>\ldots>g_{k}(\sigma)>\omega(\sigma)+\frac{1-\sigma}{2}.
Proof.

We start with proving the second assertion of the lemma. The proof is by induction on $k$. The induction base holds as $\omega(\sigma)$ is a monotonically increasing function for $\alpha\leq\sigma\leq 1$, and $g_{0}(\sigma)=1+\frac{2}{3}\omega(\sigma)$. Moreover, we have seen that for $\sigma>\tilde{\sigma}$, we have $g_{0}(\sigma)>\omega(\sigma)+\frac{1-\sigma}{2}$ (see Inequalities (24) and (25) and Lemma 1).

Assume inductively that for some $k\geq 0$, the function $g_{k}(\sigma)$ is strictly monotonically increasing in the range $\tilde{\sigma}<\sigma\leq 1$, and $g_{k}(\sigma)>\omega(\sigma)+\frac{1-\sigma}{2}$. Fix a value $\tilde{\sigma}<\sigma\leq 1$. We have

g_{k+1}(\sigma)=\min_{\delta}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}.

As by Lemma 4 we have $g_{k}(1)>\omega(1)>\omega(\sigma)$ and (by Claim 2) $g_{k}(0)\leq 2\frac{1}{2}\leq\omega(\sigma)+1/2$, by continuity of the functions $g_{k}(1-2\delta)$ and $\omega(\sigma)+\delta$ (in terms of $\delta$) in the entire range $0\leq\delta\leq 1/2$ (see Lemma 4), there exists a value $\delta^{*}$ such that $\omega(\sigma)+\delta^{*}=g_{k}(1-2\delta^{*})$. By monotonicity of the function $g_{k}(1-2\delta)$ in the entire range of $\delta$ (by Lemma 4, $g_{k}(\cdot)$ is monotonically non-decreasing, so $g_{k}(1-2\delta)$ is non-increasing in $\delta$), we also conclude that

g_{k+1}(\sigma)=\min_{\delta}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}=g_{k}(1-2\delta^{*})=\omega(\sigma)+\delta^{*}.

For $\delta\leq\frac{1-\sigma}{2}$, we have $1-2\delta\geq\sigma>\tilde{\sigma}$. By the induction hypothesis, $g_{k}(1-2\delta)$ is a strictly monotonically decreasing function of $\delta$ in this range. Using the induction hypothesis again, we recall that

g_{k}(\sigma)>\omega(\sigma)+\frac{1-\sigma}{2}.

Hence for $\delta=\frac{1-\sigma}{2}$ we have

g_{k+1}(\sigma)\leq\max\{g_{k}(\sigma),\omega(\sigma)+\frac{1-\sigma}{2}\}=g_{k}(\sigma).

By slightly increasing $\delta$ beyond $\frac{1-\sigma}{2}$ but below $\frac{1-\tilde{\sigma}}{2}$ (so that $\sigma>1-2\delta>\tilde{\sigma}$ and so that $g_{k}(\sigma)>\omega(\sigma)+\delta$), we increase $\omega(\sigma)+\delta$ and decrease $g_{k}(1-2\delta)$. Moreover, we can also have

g_{k}(1-2\delta)>\omega(\sigma)+\delta>\omega(\sigma)+\frac{1-\sigma}{2}.

(Recall that by Lemma 4, $g_{k}(\cdot)$ is continuous.) Let $\delta^{\prime}$ denote such a value of $\delta$. Since $\sigma>1-2\delta^{\prime}>\tilde{\sigma}$ and, by the induction hypothesis, $g_{k}(\cdot)$ is monotonically increasing for $\sigma>\tilde{\sigma}$, we have

g_{k}(\sigma)>\max\{g_{k}(1-2\delta^{\prime}),\omega(\sigma)+\delta^{\prime}\}=g_{k}(1-2\delta^{\prime})>\omega(\sigma)+\delta^{\prime}>\omega(\sigma)+\frac{1-\sigma}{2}.

It follows that $\delta^{*}>\delta^{\prime}$. (As for all $\hat{\delta}\leq\delta^{\prime}$, we have $g_{k}(1-2\hat{\delta})>\omega(\sigma)+\hat{\delta}$, both functions are monotone and continuous in terms of $\delta$ (by Lemma 4), and $g_{k}(1-2\delta^{*})=\omega(\sigma)+\delta^{*}$.) Hence

g_{k+1}(\sigma)=\omega(\sigma)+\delta^{*}>\omega(\sigma)+\delta^{\prime}>\omega(\sigma)+\frac{1-\sigma}{2},

and also, $g_{k+1}(\sigma)=g_{k}(1-2\delta^{*})\leq g_{k}(1-2\delta^{\prime})<g_{k}(\sigma)$.

Next, we prove that the functions $g_{0}(\sigma),g_{1}(\sigma),g_{2}(\sigma),\ldots$ are all strictly monotonically increasing for $\alpha<\sigma$. We have already seen that this is the case for $g_{0}(\sigma)=1+\frac{2}{3}\omega(\sigma)$. We assume inductively that it is the case for $g_{k}(\sigma)$ for some $k=0,1,2,\ldots$, and prove it for

g_{k+1}(\sigma)=\min_{0\leq\delta\leq 1/2}\max\{g_{k}(1-2\delta),\omega(\sigma)+\delta\}.

Observe that for $\delta=0$, $g_{k}(1-2\delta)=g_{k}(1)>\omega(1)=\omega$, and for $\delta\geq\frac{1-\alpha}{2}$, by monotonicity of $g_{k}(\cdot)$ we have

g_{k}(1-2\delta)\leq g_{k}(\alpha)\leq g_{0}(\alpha)=1+\frac{2}{3}\omega(\alpha)=\frac{7}{3}. (28)

On the other hand, for $\delta=0$ (and $\sigma<1$),

\omega(\sigma)+\delta=\omega(\sigma)<\omega<g_{k}(1-2\delta)=g_{k}(1),

and for $\delta\geq\frac{1-\alpha}{2}$, we have by (28)

\omega(\sigma)+\delta>\omega(\sigma)+\frac{1-\alpha}{2}>\frac{7}{3}\geq g_{k}(1-2\delta).

By continuity and strict monotonicity of $g_{k}(1-2\delta)$ for $1-2\delta>\alpha$ (by the induction hypothesis), the value $\delta(\sigma)$ for which $g_{k+1}(\sigma)=\omega(\sigma)+\delta(\sigma)=g_{k}(1-2\delta(\sigma))$ therefore satisfies $0<\delta(\sigma)<\frac{1-\alpha}{2}$.

Consider now two values $\alpha\leq\sigma<\sigma^{\prime}\leq 1$. Then

\omega(\sigma^{\prime})+\delta(\sigma)>\omega(\sigma)+\delta(\sigma)=g_{k}(1-2\delta(\sigma)). (29)

Consider the value $\delta(\sigma^{\prime})$ for which

g_{k+1}(\sigma^{\prime})=\omega(\sigma^{\prime})+\delta(\sigma^{\prime})=g_{k}(1-2\delta(\sigma^{\prime})). (30)

If $\delta(\sigma^{\prime})\geq\delta(\sigma)$ then

g_{k+1}(\sigma^{\prime})=\omega(\sigma^{\prime})+\delta(\sigma^{\prime})\geq\omega(\sigma^{\prime})+\delta(\sigma)>\omega(\sigma)+\delta(\sigma)=g_{k+1}(\sigma),

as desired. Henceforth we assume that

\delta(\sigma)>\delta(\sigma^{\prime}).

Note also that

1-2\delta(\sigma),\ 1-2\delta(\sigma^{\prime})>\alpha,

i.e., by the induction hypothesis, the function $g_{k}(\cdot)$ is strictly monotone between $1-2\delta(\sigma^{\prime})$ and $1-2\delta(\sigma)$, and

1-2\delta(\sigma^{\prime})>1-2\delta(\sigma).

Hence

g_{k+1}(\sigma^{\prime})=g_{k}(1-2\delta(\sigma^{\prime}))>g_{k}(1-2\delta(\sigma))=g_{k+1}(\sigma). ∎

For any integer $k\geq 0$, let $0<\sigma_{k}<1$ be the value such that $g_{k}(\sigma_{k})=\omega(1)=\omega$. Observe that by Lemma 5, we have $\tilde{\sigma}<\sigma_{0}\leq\sigma_{1}\leq\sigma_{2}\leq\sigma_{3}\leq\ldots\leq 1$. Hence the sequence $(\sigma_{k})_{k=1}^{\infty}$ converges to a limit $\sigma^{*}\leq 1$, i.e., $\lim_{k\to\infty}\sigma_{k}=\sigma^{*}$. Next, we analyze the value of $\sigma^{*}$ and show that it is equal to $1$. This means that under Assumption 1, sufficiently many recursive invocations of our algorithm lead to an algorithm that improves the state-of-the-art directed reachability bounds for general graphs for all $\tilde{\sigma}<\sigma<1$.

Lemma 7.
$\sigma^{*}=\lim_{k\to\infty}\sigma_{k}=1$.
Proof.

Applying Lemma 5 to $\sigma^{*}$, we have

g_{0}(\sigma^{*})\geq g_{1}(\sigma^{*})\geq\ldots\geq g_{k}(\sigma^{*})\geq\ldots\geq\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}.

It follows that the sequence $(g_{k}(\sigma^{*}))_{k=0}^{\infty}$ converges to a limit greater than or equal to $\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}$. Assume for contradiction that $\sigma^{*}<1$. The following two cases arise:

Case 1.
$\lim_{k\to\infty}g_{k}(\sigma^{*})=\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}$.

In this case, for any arbitrarily small $\epsilon>0$, there exists $k_{\epsilon}$ such that for every $k>k_{\epsilon}$, we have

\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}\leq g_{k}(\sigma^{*})<\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon.

By Claim 1, for any $\tilde{\sigma}<\sigma<1$, we also have $\omega(\sigma)+\frac{1-\sigma}{2}<\omega(1)$. We can pick a sufficiently small $\epsilon>0$ so that

\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon<\omega(1).

Hence, for $k>k_{\epsilon}$, $g_{k}(\sigma^{*})<\omega(1)$.

Since $\tilde{\sigma}<\sigma_{0}\leq\sigma_{k}\leq\sigma^{*}$ and $g_{k}(\cdot)$ is a monotonically non-decreasing function for $\sigma>\tilde{\sigma}$ (see Lemma 4), it follows that

g_{k}(\sigma_{k})\leq g_{k}(\sigma^{*})<\omega(1).

But by definition, we have $g_{k}(\sigma_{k})=\omega(1)$. Hence we get a contradiction in this case.

Case 2.

There exists some value $\epsilon^{*}>0$ such that

\lim_{k\to\infty}g_{k}(\sigma^{*})=\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon^{*}.

If $\epsilon^{*}$ is such that $\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon^{*}<\omega(1)$, we get a contradiction similar to the one obtained in Case 1. So we assume from this point on that

\lim_{k\to\infty}g_{k}(\sigma^{*})=\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon^{*}\geq\omega(1).

It is easy to see that $\epsilon^{*}\leq\frac{\sigma^{*}}{2}$. Indeed, otherwise $\lim_{k\to\infty}g_{k}(\sigma^{*})\geq\omega(\sigma^{*})+1/2$. As $\sigma^{*}\geq\sigma_{9}>0.97$ (see Table 6), it follows that $\lim_{k\to\infty}g_{k}(\sigma^{*})\geq\omega(0.97)+1/2>2.85$. On the other hand, for any $k\geq 1$, $g_{k}(\sigma^{*})\leq g_{0}(\sigma^{*})=1+\frac{2}{3}\omega(\sigma^{*})\leq 2.59$, and this is a contradiction.

Hence, for all $k=0,1,2,\ldots$ and $\delta=\frac{1-\sigma^{*}}{2}+\epsilon^{*}$, by Equation (26), we have

g_{k+1}(\sigma^{*})\leq\max\{g_{k}(\sigma^{*}-2\epsilon^{*}),\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon^{*}\}.

Next, we prove that $g_{k}(\sigma^{*}-2\epsilon^{*})$ maximises the right-hand side.

Claim 3.
g_{k}(\sigma^{*}-2\epsilon^{*})\geq\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon^{*}.
Proof.

Suppose for contradiction that $g_{k}(\sigma^{*}-2\epsilon^{*})<\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon^{*}$. Then, since $g_{k}(\cdot)$ is a continuous function (see Lemma 4), there exists $0\leq\epsilon^{\prime}<\epsilon^{*}$ such that

g_{k+1}(\sigma^{*})=\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon^{\prime}=g_{k}(\sigma^{*}-2\epsilon^{\prime}).

But then

g_{k+1}(\sigma^{*})<\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon^{*}=\lim_{t\to\infty}g_{t}(\sigma^{*}).

But by Lemma 5, the sequence $(g_{t}(\sigma^{*}))_{t=1}^{\infty}$ is monotonically non-increasing, and so $g_{k+1}(\sigma^{*})$ cannot be smaller than the limit of this sequence. (Recall that we assumed that $\sigma^{*}<1$.) ∎

Therefore, we have $g_{k}(\sigma^{*}-2\epsilon^{*})\geq\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon^{*}\geq\omega(1)=g_{k}(\sigma_{k})$. Hence $\sigma^{*}-2\epsilon^{*}\geq\sigma_{k}$. But $\sigma_{k}$ (weakly) increases as $k$ grows, and it is equal to $\sigma^{*}$ in the limit. On the other hand, there exists a fixed constant $\epsilon^{*}>0$ such that (for all $k$) $\sigma_{k}\leq\sigma^{*}-2\epsilon^{*}$. This is a contradiction.

Under the assumption that $\sigma^{*}<1$, we have derived a contradiction in all the cases for which $\lim_{t\to\infty}g_{t}(\sigma^{*})=\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon^{*}$, for some $\epsilon^{*}\geq 0$, regardless of whether $\omega(\sigma^{*})+\frac{1-\sigma^{*}}{2}+\epsilon^{*}$ is less than $\omega(1)$ or at least $\omega(1)$.

Therefore we conclude that $\sigma^{*}=1$. In other words, for any $\epsilon>0$, there exists $k_{\epsilon}$ such that for any $k\geq k_{\epsilon}$, we have $\sigma_{k}>1-\epsilon$, i.e., $g_{k}(1-\epsilon)<g_{k}(\sigma_{k})=\omega(1)$. ∎

It also follows that

limkgk(1)=limkgk(σk)=limkω(1)=ω=ω(1)+112,\lim_{k\to\infty}g_{k}(1)=\lim_{k\to\infty}g_{k}(\sigma_{k})=\lim_{k\to\infty}\omega(1)=\omega=\omega(1)+\frac{1-1}{2},

and also for any σ<σ=1\sigma<\sigma^{*}=1, it holds that

limkgk(σ)limkgk(1)=ω(1).\lim_{k\to\infty}g_{k}(\sigma)\leq\lim_{k\to\infty}g_{k}(1)=\omega(1). (31)

4.4 Analysis for graphs with m=Θ(nμ)m=\Theta(n^{\mu}) edges, for μ<2\mu<2

In this section, we discuss the convergence of the sequence $g_{0}^{(\mu)}(\sigma),g_{1}^{(\mu)}(\sigma),\ldots,g_{k}^{(\mu)}(\sigma),\ldots$, for a general value $\mu<2$. Recall that for graphs with $m=\Theta(n^{\mu})$ edges, $g_{k}^{(\mu)}(\sigma)$ is the exponent of $n$ in the number of operations required (under Assumption 1) for computing $S\times V$-direachability ($|S|=n^{\sigma}$) by our recursive scheme of depth $k$. By Inequality (9) we have

g0(μ)(σ)=1+μ3+23ω(σ).g_{0}^{(\mu)}(\sigma)=\frac{1+\mu}{3}+\frac{2}{3}\omega(\sigma).

The following equation expresses the function gk+1(μ)()g_{k+1}^{(\mu)}(\cdot) in terms of the exponent δ\delta of the desired diameter Dk+1D_{k+1} and the function gk(μ)()g_{k}^{(\mu)}(\cdot).

gk+1(μ)(σ)=minδmax{gk(μ)(12δ),ω(σ)+δ}.g_{k+1}^{(\mu)}(\sigma)=\min_{\delta}\max\{g_{k}^{(\mu)}(1-2\delta),\omega(\sigma)+\delta\}.

The formula for $g_{k+1}^{(\mu)}(\cdot)$ for $k>0$ follows because the running time required to build a $D$-shortcut is dominated by the time needed to solve the path-direachability problem for $\tilde{O}(n^{1-2\delta})$ paths in a graph with $\Theta(n^{\mu})$ edges. The latter, by Assumption 1, can be done in $\tilde{O}(n^{g_{k}^{(\mu)}(1-2\delta)})$ time. (Observe that the recursive step (Line 5 of Algorithm 2) is invoked on the original input graph $G$. Hence, if $G$ has $O(n^{\mu})$ edges, all recursive invocations are performed on a graph with $O(n^{\mu})$ edges as well.)
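The recursion above can be explored numerically. The sketch below is illustrative only: `omega_hat` is a crude piecewise-linear stand-in for the rectangular matrix multiplication exponent $\omega(\cdot)$ (whose true values are known only through tabulated upper bounds), and the minimisation over the diameter exponent $\delta$ is carried out on a finite grid.

```python
# Numerical sketch of the recursion for g_k^{(mu)}(sigma).
# omega_hat is an ASSUMED piecewise-linear stand-in for the rectangular
# matrix multiplication exponent omega(.), not the actual function.

def omega_hat(s: float) -> float:
    """Assumed stand-in: flat at 2 for small s, linear up to ~2.372 at s = 1."""
    if s <= 0.33:
        return 2.0
    return 2.0 + (s - 0.33) * (2.372 - 2.0) / (1.0 - 0.33)

def g(k: int, sigma: float, mu: float, steps: int = 400) -> float:
    """Exponent g_k^{(mu)}(sigma) of the depth-k recursive scheme."""
    if k == 0:
        # Base case (Inequality (9)): g_0 = (1 + mu)/3 + (2/3) * omega(sigma).
        return (1.0 + mu) / 3.0 + (2.0 / 3.0) * omega_hat(sigma)
    # Recursive step: minimise over delta the larger of the
    # shortcut-building cost g_{k-1}^{(mu)}(1 - 2*delta) and the
    # reachability-computation cost omega(sigma) + delta.
    best = float("inf")
    for i in range(steps + 1):
        delta = 0.5 * i / steps
        best = min(best, max(g(k - 1, 1.0 - 2.0 * delta, mu),
                             omega_hat(sigma) + delta))
    return best
```

For example, `g(1, 0.5, 2.0)` comes out below `g(0, 0.5, 2.0)`, consistent with the sequence being non-increasing in $k$, while staying above the assumed `omega_hat(0.5)`.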

The following lemma is an easy lower bound on gk(μ)(σ)g_{k}^{(\mu)}(\sigma).

Lemma 8.

For any k=0,1,2,,k=0,1,2,\ldots, and σ\sigma and μ\mu that satisfy 1+μ>ω(σ)1+\mu>\omega(\sigma), we have

gk(μ)(σ)>ω(σ).g_{k}^{(\mu)}(\sigma)>\omega(\sigma).
Remark.

If ω(σ)1+μ\omega(\sigma)\geq 1+\mu, then a naive direachability algorithm that computes all-pairs direachabilities in O(mn)=O(n1+μ)O(m\cdot n)=O(n^{1+\mu}) time is at least as fast as any algorithm based on rectangular matrix multiplication. (The rectangular matrix multiplication requires Ω(nω(σ))\Omega(n^{\omega(\sigma)}) time.)

Proof.

The proof is by induction on $k$. For the induction base, observe that

$g_{0}^{(\mu)}(\sigma)-\omega(\sigma)=\frac{1+\mu}{3}+\frac{2}{3}\omega(\sigma)-\omega(\sigma)=\frac{1+\mu-\omega(\sigma)}{3}>0,$

since $1+\mu>\omega(\sigma)$ by assumption.

Assume that for some k0k\geq 0, we have gk(μ)(σ)>ω(σ)g_{k}^{(\mu)}(\sigma)>\omega(\sigma). By setting δ=1σ2\delta=\frac{1-\sigma}{2}, we get

gk+1(μ)(σ)max{ω(σ)+1σ2,gk(μ)(σ)}.\displaystyle g_{k+1}^{(\mu)}(\sigma)\leq\max\{\omega(\sigma)+\frac{1-\sigma}{2},g_{k}^{(\mu)}(\sigma)\}. (32)

For δ<1σ2\delta<\frac{1-\sigma}{2}, we have gk(μ)(12δ)gk(μ)(σ)>ω(σ)g_{k}^{(\mu)}(1-2\delta)\geq g_{k}^{(\mu)}(\sigma)>\omega(\sigma). The last inequality is by inductive hypothesis. For δ1σ2\delta\geq\frac{1-\sigma}{2}, we have ω(σ)+δ>ω(σ)\omega(\sigma)+\delta>\omega(\sigma). In either case, max{ω(σ)+δ,gk(μ)(12δ)}>ω(σ)\max\{\omega(\sigma)+\delta,g_{k}^{(\mu)}(1-2\delta)\}>\omega(\sigma), and thus

gk+1(μ)(σ)=minδmax{ω(σ)+δ,gk(μ)(12δ)}>ω(σ).g_{k+1}^{(\mu)}(\sigma)=\min_{\delta}\max\{\omega(\sigma)+\delta,g_{k}^{(\mu)}(1-2\delta)\}>\omega(\sigma).

We next prove a stronger inequality.

Lemma 9.

Assuming μω(σ)32σ+12\mu\geq\omega(\sigma)-\frac{3}{2}\sigma+\frac{1}{2}, and σσ~\sigma\geq\tilde{\sigma}, for all k=0,1,2,k=0,1,2,\dots, we have

gk(μ)(σ)ω(σ)+1σ2.g_{k}^{(\mu)}(\sigma)\geq\omega(\sigma)+\frac{1-\sigma}{2}. (33)

We also have

g0(μ)(σ)g1(μ)(σ)gk(μ)(σ)g_{0}^{(\mu)}(\sigma)\geq g_{1}^{(\mu)}(\sigma)\geq\ldots\geq g_{k}^{(\mu)}(\sigma)\geq\ldots (34)

Strict inequalities in (33) and (34) hold when μ>ω(σ)32σ+12\mu>\omega(\sigma)-\frac{3}{2}\sigma+\frac{1}{2}. Moreover, each gk(μ)(σ)g_{k}^{(\mu)}(\sigma) is a continuous and monotonically non-decreasing function in the entire range [0,1][0,1].

Remark.

Recall that the expression $\omega(\sigma)-\frac{3}{2}\sigma+\frac{1}{2}$ is the left boundary of the feasibility interval $I_{\sigma}$. This is the interval in which $g_{0}^{(\mu)}(\sigma)$ improves over the existing bound provided by the naive direachability algorithm (see Inequality (10)).

Proof.

The proof is by induction on kk. The induction base is k=0k=0.
For δ=1σ2\delta=\frac{1-\sigma}{2}, we have ω(σ)+δ=ω(σ)+1σ2\omega(\sigma)+\delta=\omega(\sigma)+\frac{1-\sigma}{2}, and

1+μ3+23ω(12δ)=1+μ3+23ω(σ)=g0(μ)(σ).\frac{1+\mu}{3}+\frac{2}{3}\omega(1-2\delta)=\frac{1+\mu}{3}+\frac{2}{3}\omega(\sigma)=g_{0}^{(\mu)}(\sigma).

Hence we need to argue that ω(σ)+1σ21+μ3+23ω(σ)\omega(\sigma)+\frac{1-\sigma}{2}\leq\frac{1+\mu}{3}+\frac{2}{3}\omega(\sigma), i.e.,

ω(σ)μ+32σ12.\displaystyle\omega(\sigma)\leq\mu+\frac{3}{2}\sigma-\frac{1}{2}. (35)

This is exactly the assumption of the lemma. Note also that strict inequality in (35) implies that g0(μ)(σ)>ω(σ)+1σ2g_{0}^{(\mu)}(\sigma)>\omega(\sigma)+\frac{1-\sigma}{2}. (For μ=2\mu=2, the right-hand-side of (35) is 32(1+σ)\frac{3}{2}(1+\sigma). The inequality ω(σ)32(1+σ)\omega(\sigma)\leq\frac{3}{2}(1+\sigma) holds for σσ~\sigma\geq\tilde{\sigma}. See Inequality (12) in Section 3.2.) Note also that since ω()\omega(\cdot) is continuous and monotonically non-decreasing in the range [0,1][0,1], so is g0(μ)(σ)=1+μ3+23ω(σ)g_{0}^{(\mu)}(\sigma)=\frac{1+\mu}{3}+\frac{2}{3}\omega(\sigma) as well.
Similarly, since $\omega(\cdot)$ is monotonically increasing in the range $[\tilde{\sigma},1]$, so is $g_{0}^{(\mu)}(\sigma)$. For the induction step we again use $\delta=\frac{1-\sigma}{2}$. It follows that

gk+1(μ)(σ)\displaystyle g_{k+1}^{(\mu)}(\sigma) =minδmax{gk(μ)(12δ),ω(σ)+δ}\displaystyle=\min_{\delta}\max\{g_{k}^{(\mu)}(1-2\delta),\omega(\sigma)+\delta\} (36)
max{gk(μ)(σ),ω(σ)+1σ2}=gk(μ)(σ)\displaystyle\leq\max\{g_{k}^{(\mu)}(\sigma),\omega(\sigma)+\frac{1-\sigma}{2}\}=g_{k}^{(\mu)}(\sigma)

(The last equality is by induction hypothesis.) Hence,

g_{0}^{(\mu)}(\sigma)\geq g_{1}^{(\mu)}(\sigma)\geq\ldots\geq g_{k}^{(\mu)}(\sigma)\geq\ldots, (37)

and strict inequality holds if strict inequality holds in (35).

Also, for all δ<1σ2\delta<\frac{1-\sigma}{2}, by induction hypothesis, gk(μ)()g_{k}^{(\mu)}(\cdot) is monotonically increasing. Thus, we have

gk(μ)(12δ)>gk(μ)(σ)ω(σ)+1σ2>ω(σ)+δ.g_{k}^{(\mu)}(1-2\delta)>g_{k}^{(\mu)}(\sigma)\geq\omega(\sigma)+\frac{1-\sigma}{2}>\omega(\sigma)+\delta.

If gk(μ)(σ)=ω(σ)+1σ2g_{k}^{(\mu)}(\sigma)=\omega(\sigma)+\frac{1-\sigma}{2}, then since gk(μ)(12δ)g_{k}^{(\mu)}(1-2\delta) is a monotonically non-increasing function of δ\delta and ω(σ)+δ\omega(\sigma)+\delta is a monotonically increasing function of δ\delta, it follows that

gk+1(μ)(σ)=minδmax{ω(σ)+δ,gk(μ)(12δ)}=ω(σ)+1σ2.g_{k+1}^{(\mu)}(\sigma)=\min_{\delta}\max\{\omega(\sigma)+\delta,g_{k}^{(\mu)}(1-2\delta)\}=\omega(\sigma)+\frac{1-\sigma}{2}.

Otherwise, if $g_{k}^{(\mu)}(\sigma)>\omega(\sigma)+\frac{1-\sigma}{2}$, then there exists $\delta^{*}>\frac{1-\sigma}{2}$ such that $g_{k+1}^{(\mu)}(\sigma)=\omega(\sigma)+\delta^{*}=g_{k}^{(\mu)}(1-2\delta^{*})>\omega(\sigma)+\frac{1-\sigma}{2}$, proving that $g_{k+1}^{(\mu)}(\sigma)\geq\omega(\sigma)+\frac{1-\sigma}{2}$ in both cases. Moreover, this argument implies that if, inductively, $g_{k}^{(\mu)}(\sigma)>\omega(\sigma)+\frac{1-\sigma}{2}$, then $g_{k+1}^{(\mu)}(\sigma)>\omega(\sigma)+\frac{1-\sigma}{2}$ as well.

Finally, the inductive argument showing that $g_{k+1}^{(\mu)}(\cdot)$ is a continuous and monotonically non-decreasing function of $\sigma$ in the range $[0,1]$ is identical to the proof of the corresponding claim for $g_{k+1}(\sigma)=g_{k+1}^{(\mu=2)}(\sigma)$, given in the proof of Lemma 4. ∎

We have seen that for any σ~<σ<1\tilde{\sigma}<\sigma<1, there exists a non-empty interval Iσ(0)I^{(0)}_{\sigma} of values of μ\mu such that for all μIσ(0)\mu\in I^{(0)}_{\sigma} we have

g0(μ)(σ)<min{μ+σ,ω(1)}.g_{0}^{(\mu)}(\sigma)<\min\{\mu+\sigma,\omega(1)\}.

Since, for any σ\sigma in this range we have

g_{0}^{(\mu)}(\sigma)\geq g_{1}^{(\mu)}(\sigma)\geq\ldots\geq g_{k}^{(\mu)}(\sigma)\geq\ldots,

it follows that these intervals for gk(μ)(σ)g_{k}^{(\mu)}(\sigma) become only wider as kk grows, i.e.,

I^{(0)}_{\sigma}\subseteq I^{(1)}_{\sigma}\subseteq\ldots\subseteq I^{(k)}_{\sigma}\subseteq\ldots.

In fact, the left boundaries of all these intervals agree, as they correspond to the same inequality

μ>12+ω(σ)32σ.\displaystyle\mu>\frac{1}{2}+\omega(\sigma)-\frac{3}{2}\sigma. (38)

(To see this, observe that assuming inductively that $g_{k}^{(\mu)}(\sigma)<\mu+\sigma$ for $\mu>\omega(\sigma)+\frac{1}{2}-\frac{3}{2}\sigma$ implies that $g_{k+1}^{(\mu)}(\sigma)<g_{k}^{(\mu)}(\sigma)<\mu+\sigma$ under the same condition as well. See Lemma 9 and Inequality (37).) Recall that the inequalities $g_{0}^{(\mu)}(\sigma)<\mu+\sigma$ and $\omega(\sigma)+\frac{1-\sigma}{2}<\mu+\sigma$ both lead to Inequality (38). However, as $k$ grows, the right boundaries of these intervals tend to $1$.
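Since the two conditions leading to Inequality (38) are simple linear rearrangements, they can be sanity-checked mechanically. In the sketch below, `omega_sigma` is an assumed placeholder value for $\omega(\sigma)$, not a computed exponent.

```python
# Both conditions g_0^{(mu)}(sigma) < mu + sigma and
# omega(sigma) + (1 - sigma)/2 < mu + sigma rearrange to the same left
# boundary mu > 1/2 + omega(sigma) - (3/2)*sigma.
# omega_sigma is an ASSUMED placeholder value for omega(sigma).

def left_boundary(sigma: float, omega_sigma: float) -> float:
    """Left boundary of the feasibility interval, Inequality (38)."""
    return 0.5 + omega_sigma - 1.5 * sigma

def g0_beats_naive(mu: float, sigma: float, omega_sigma: float) -> bool:
    """Does g_0^{(mu)}(sigma) = (1 + mu)/3 + (2/3)*omega(sigma) beat mu + sigma?"""
    return (1.0 + mu) / 3.0 + (2.0 / 3.0) * omega_sigma < mu + sigma

def limit_beats_naive(mu: float, sigma: float, omega_sigma: float) -> bool:
    """Does the limiting exponent omega(sigma) + (1 - sigma)/2 beat mu + sigma?"""
    return omega_sigma + (1.0 - sigma) / 2.0 < mu + sigma
```

Away from the boundary itself (where floating-point equality is delicate), the two predicates agree with the single threshold test `mu > left_boundary(sigma, omega_sigma)`.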

Recall that the right boundaries of these intervals are determined by the inequalities gk(μ)(σ)<ω(1)g_{k}^{(\mu)}(\sigma)<\omega(1). These inequalities hold for all σ~<σ<σk\tilde{\sigma}<\sigma<\sigma_{k}, and by Lemma 7, limtσt=1\lim_{t\to\infty}\sigma_{t}=1.

Note also that for μ=ω(σ)32σ+12\mu=\omega(\sigma)-\frac{3}{2}\sigma+\frac{1}{2}, all these functions agree.

Lemma 10.

For $\mu=\omega(\sigma)-\frac{3}{2}\sigma+\frac{1}{2}$, we have

g0(μ)(σ)=g1(μ)(σ)==gk(μ)(σ)==ω(σ)+1σ2.g_{0}^{(\mu)}(\sigma)=g_{1}^{(\mu)}(\sigma)=\ldots=g_{k}^{(\mu)}(\sigma)=\ldots=\omega(\sigma)+\frac{1-\sigma}{2}. (39)
Proof.
g0(μ)(σ)=1+μ3+23ω(σ)=1+(ω(σ)3/2σ+1/2)3+23ω(σ)=ω(σ)+1σ2.\displaystyle\begin{aligned} g_{0}^{(\mu)}(\sigma)&=\frac{1+\mu}{3}+\frac{2}{3}\omega(\sigma)\\ &=\frac{1+(\omega(\sigma)-3/2\cdot\sigma+1/2)}{3}+\frac{2}{3}\omega(\sigma)\\ &=\omega(\sigma)+\frac{1-\sigma}{2}.\end{aligned}

Also, assuming inductively that gk(μ)(σ)=ω(σ)+1σ2g_{k}^{(\mu)}(\sigma)=\omega(\sigma)+\frac{1-\sigma}{2}, we derive that

gk+1(μ)(σ)=minδmax{ω(σ)+δ,gk(μ)(12δ)}max{ω(σ)+1σ2,gk(μ)(σ)}=ω(σ)+1σ2.\displaystyle\begin{aligned} g_{k+1}^{(\mu)}(\sigma)&=\min_{\delta}\max\{\omega(\sigma)+\delta,g_{k}^{(\mu)}(1-2\delta)\}\\ &\leq\max\{\omega(\sigma)+\frac{1-\sigma}{2},g_{k}^{(\mu)}(\sigma)\}\\ &=\omega(\sigma)+\frac{1-\sigma}{2}.\end{aligned}

Also, by Lemma 9 for σσ~\sigma\geq\tilde{\sigma}, we have gk+1(μ)(σ)ω(σ)+1σ2g_{k+1}^{(\mu)}(\sigma)\geq\omega(\sigma)+\frac{1-\sigma}{2}, and thus gk+1(μ)(σ)=ω(σ)+1σ2g_{k+1}^{(\mu)}(\sigma)=\omega(\sigma)+\frac{1-\sigma}{2}. ∎
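The algebra in the proof of Lemma 10 can be verified directly: substituting $\mu=\omega(\sigma)-\frac{3}{2}\sigma+\frac{1}{2}$ into $g_{0}^{(\mu)}(\sigma)$ collapses it to $\omega(\sigma)+\frac{1-\sigma}{2}$. A minimal check, with `omega_sigma` an assumed placeholder value for $\omega(\sigma)$:

```python
# Fixed-point check for Lemma 10: at mu = omega(sigma) - (3/2)*sigma + 1/2
# the base exponent g_0^{(mu)}(sigma) already equals
# omega(sigma) + (1 - sigma)/2, so deeper recursion cannot improve it.
# omega_sigma is an ASSUMED placeholder value, not a computed exponent.

def g0(mu: float, sigma: float, omega_sigma: float) -> float:
    """Base exponent g_0^{(mu)}(sigma) = (1 + mu)/3 + (2/3)*omega(sigma)."""
    return (1.0 + mu) / 3.0 + (2.0 / 3.0) * omega_sigma

def fixed_point_gap(sigma: float, omega_sigma: float) -> float:
    """g_0^{(mu)}(sigma) minus omega(sigma) + (1 - sigma)/2 at the boundary mu."""
    mu = omega_sigma - 1.5 * sigma + 0.5
    return g0(mu, sigma, omega_sigma) - (omega_sigma + (1.0 - sigma) / 2.0)
```

The gap is identically zero, matching the chain of equalities in (39).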
