

Monika Henzinger: Department of Computer Science, University of Vienna, Vienna, [email protected]
Ami Paz: LISN, CNRS & Paris-Saclay University, [email protected]
A. R. Sricharan: Department of Computer Science, UniVie Doctoral School Computer Science DoCS, University of Vienna, Vienna, [email protected]

Funding (Monika Henzinger and A. R. Sricharan): This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 101019564 "The Design of Modern Fully Dynamic Data Structures (MoDynStruct)") and from the Austrian Science Fund (FWF) project "Fast Algorithms for a Reactive Network Layer (ReactNet)", P 33775-N, with additional funding from the netidee SCIENCE Stiftung, 2020-2024.

30th Annual European Symposium on Algorithms (ESA 2022), September 5-9, 2022, Berlin/Potsdam, Germany. Editors: Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, and Grzegorz Herman. LIPIcs Volume 244, Article No. 34.

Copyright: Monika Henzinger, Ami Paz, and A. R. Sricharan. ACM subject classification: Theory of computation → Dynamic graph algorithms.

Fine-Grained Complexity Lower Bounds for Families of Dynamic Graphs

Monika Henzinger    Ami Paz    A. R. Sricharan
Abstract

A dynamic graph algorithm is a data structure that answers queries about a property of the current graph while supporting graph modifications such as edge insertions and deletions. Prior work has shown strong conditional lower bounds for general dynamic graphs, yet graph families that arise in practice often exhibit structural properties that the existing lower bound constructions do not possess. We study three specific graph families that are ubiquitous, namely constant-degree graphs, power-law graphs, and expander graphs, and give the first conditional lower bounds for them. Our results show that even when restricting attention to one of these graph classes, no algorithm for fundamental graph problems such as distance computation or approximation or maximum matching can simultaneously achieve sub-polynomial update time and query time. For example, we show that the same lower bounds as for general graphs hold for maximum matching and (s,t)-distance in constant-degree graphs, power-law graphs, and expanders. Namely, in an m-edge graph, there is no dynamic algorithm with both O(m^{1/2-\epsilon}) update time and O(m^{1-\epsilon}) query time, for any small \epsilon>0. Note that for (s,t)-distance the trivial dynamic algorithm achieves an almost matching upper bound of constant update time and O(m) query time. We prove similar bounds for the other graph families and for other fundamental problems such as densest subgraph detection and perfect matching.

keywords:
Dynamic graph algorithms, Expander graphs, Power-law graphs

1 Introduction

A dynamic graph algorithm is a data structure that stores a graph and supports update operations, usually consisting of edge insertions and deletions, as well as query operations that ask about a specific property of the graph. The introduction of strong conditional lower bounds based on widely-believed complexity assumptions [1, 16] has had a fundamental influence on the field, pushing algorithm design towards more specialized settings such as partially-dynamic or even offline-dynamic algorithms, or towards approximate solutions. However, graphs arising in real-world applications often differ significantly from the very specifically crafted graphs for which the lower bound results are shown. Frequently, real-world graphs have some special structure, such as having a power-law degree distribution, a constant degree, or being planar. Expanders, on the other hand, have recently been used to design dynamic algorithms for general graphs. This naturally leads to the question of determining the complexity of dynamic graph algorithms for these graph classes, and this is exactly the question investigated in this paper.

While the complexity of dynamic graph algorithms for planar graphs has already been studied quite extensively [25, 18, 22, 20, 3, 2, 1, 19, 8], the question is still widely open for other families of graphs, including power-law graphs, constant-degree graphs, and expanders. (To avoid confusion with the parameter n and the matrix M used in the online-matrix-vector multiplication conjecture, we use N to denote the number of vertices and m the number of edges in the dynamic graphs.) Certain problems become easier for these graph classes: As an N-node constant-degree graph has O(N) edges, computing all-pairs shortest paths (APSP) takes only time \tilde{O}(N^{2}), while the popular APSP conjecture postulates that for general graphs, there exists a small constant c>0 such that any algorithm in the word RAM model with O(\log N)-bit words requires N^{3-o(1)} expected time to compute APSP. Moreover, some problems become trivial in these graph classes, e.g., computing shortest paths with logarithmic additive error on expander graphs is trivial, due to their low diameter.

In this paper we will concentrate on graph problems that have real-world applications such as shortest-paths (which has applications in online navigation), matching (which has applications in reconfigurable datacenters), and densest subgraphs (which has applications in network analysis), yet we believe that our general approach can be applied to further graph problems. For these three problems, the known conditional lower bounds construct graphs that are far from being in the classes we consider: They have maximum degree Ω(N)\Omega(N) and small cuts, and their degree distribution is unknown as it depends on the instance that is postulated to be hard.

Constant-degree graphs.

Various dynamic graph problems that admit strong lower bounds in general graphs have very efficient algorithms on constant-degree graphs. Let \Delta be the maximum degree in the graph. For local problems, where the solution at a node v can be computed by simply analyzing information stored at the neighbors of v, such as maintaining a maximal matching, a maximal independent set, or a (\Delta+1)-vertex coloring, there exist simple dynamic algorithms with O(\Delta) update time and constant query time. Additionally, for various problems that count or detect certain fixed subgraphs with c nodes (such as triangle counting for c=3) there exist dynamic algorithms with O(\Delta^{c-1}) update time and constant query time, even though they have polynomial conditional lower bounds in general graphs (see Table 1). These efficient algorithms for local problems rule out the possibility of any non-trivial lower bound in the constant-degree setting.
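To make the notion of a local dynamic algorithm concrete, the following sketch (our illustration, not taken from the cited works) maintains a maximal matching with O(\Delta) work per update by inspecting only the neighbours of the two affected endpoints; the names `adj` and `mate` are hypothetical.

```python
# A minimal sketch of a local dynamic algorithm: maintaining a maximal
# matching with O(Delta) work per edge update.  adj[v] is the set of
# neighbours of v, and mate[v] is v's matched partner or None.

def try_match(v, adj, mate):
    """Scan the <= Delta neighbours of v for a free partner."""
    if mate[v] is not None:
        return
    for w in adj[v]:
        if mate[w] is None:
            mate[v], mate[w] = w, v
            return

def insert_edge(u, v, adj, mate):
    adj[u].add(v)
    adj[v].add(u)
    if mate[u] is None and mate[v] is None:   # keep the matching maximal
        mate[u], mate[v] = v, u

def delete_edge(u, v, adj, mate):
    adj[u].discard(v)
    adj[v].discard(u)
    if mate[u] == v:                          # the deleted edge was matched
        mate[u] = mate[v] = None
        try_match(u, adj, mate)               # two O(Delta) rematch scans
        try_match(v, adj, mate)
```

Since the matching is stored explicitly, queries about it take constant time.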

Furthermore, even for the non-local problem of maintaining a maximum matching, Gupta and Peng [15] designed a (1+\delta)-approximation algorithm for any small \delta>0 that runs in O\left(\min\left\{\frac{m}{|M(t)|},|M(t)|\right\}\delta^{-2}\right) amortized time per update, where M(t) denotes the maximum cardinality matching after the t-th update operation. As in a graph with maximum degree \Delta it holds that |M(t)|\geq m/(2\Delta), their algorithm achieves an amortized update time of O(\Delta\,\delta^{-2}), which is O(\delta^{-2}) in constant-degree graphs. This raises the question of how efficiently other non-local dynamic graph problems such as exact maximum matching, shortest paths, and densest subgraph can be solved in dynamic constant-degree graphs, and whether it is possible to show (conditional) lower bounds for them.

Problem | General graphs [16] (lower bounds: u(m,N), q(m,N)) | Erdős-Rényi avg-case [17] (lower bounds: p(m,N), u(m,N), q(m,N)) | Constant \Delta, trivial (upper bounds: u(m,N), q(m,N))
Triangle counting | m^{1/2-\epsilon}, m^{1-\epsilon} | N^{3-\epsilon}, m^{1/2-\epsilon}, m^{1-\epsilon} | \Delta, 1
C_{4} subgraph counting | | | \Delta^{3}, 1
5-length (s,t)-path count | | N^{2-\epsilon}, N^{\omega-2-\epsilon}, 1 | \Delta^{4}, 1

Table 1: Counting problems which admit polynomial conditional lower bounds on general graphs (amortized) and on Erdős-Rényi graphs (average case), but have algorithms with constant update and query times in constant-degree graphs. For the lower bounds above, there is no dynamic algorithm with pre-processing time p(m,N), update time u(m,N), and query time q(m,N) unless the OMv conjecture is false. When p(m,N) is unspecified, poly(N) pre-processing time is allowed.
Expanders.

Expander decompositions are increasingly becoming a central tool for designing dynamic graph algorithms with improved running time bounds for various graph problems such as connectivity, minimum spanning tree, shortest paths, conductance, edge-connectivity, maximum flows, and minimum cuts [21, 13, 10, 9]. One of the central subproblems that these algorithms have to handle is to solve a graph problem on a dynamically changing expander. To understand the limitations of this approach it is crucial to understand which problems can be solved efficiently on expanders, and which cannot. We present novel lower bounds for dynamic problems on expanders, more specifically on constant-degree expanders.

Note that these results also have an interesting connection to the average-case hardness of dynamic graph algorithms. Recently, lower bounds on the average-case hardness were shown for various subgraph counting problems in dynamic Erdős-Rényi graphs (see Table 1 for some of them) [17]. As random graphs are usually expanders, giving lower bounds for a problem on dynamic expanders gives an indication that this problem might also be hard in the average case and can motivate further work in this direction.

Power-law graphs.

Graphs are called power-law graphs if the fraction of nodes with degree d is proportional to 1/d^{c} for some constant c>0. Static and dynamic power-law graphs arise surprisingly often in real-world networks, such as the Internet, the world-wide web, and citation graphs, as well as in physics, linguistics, and economics. Even though the existence of large dynamic power-law graphs was already pointed out in 2004 [14], no efficient dynamic algorithms have been developed specifically for this class of graphs. This leads to the question of whether sub-polynomial time dynamic algorithms are even possible for power-law graphs or not. In fact, dynamic power-law graphs were not only never studied, they were not even defined: removing even a single edge from a power-law graph changes the degree distribution and thus violates the power-law distribution. Hence, we first present several definitions of dynamic power-law graphs, where some slackness in the degree distribution is allowed. Then we prove lower bounds that hold for all of these dynamic power-law graph definitions.

1.1 Our Results

Throughout the paper we use the standard assumption that queries output one value, such as the size, length or weight of the solution. Note that this makes it only more challenging to prove lower bounds. All our results are conditioned on the popular OMv conjecture [16], defined in Section 2, but to simplify the terminology we usually drop the word "conditional". Our results are also summarized in Table 2.

  1.

    Main results. We study the hardness of dynamic algorithms for (i) constant-degree graphs, (ii) expanders, and (iii) power-law graphs, for the following graph problems: Determining (a) the size of a maximum matching, (b) the length of the (s,t) shortest path (i.e., the (s,t)-distance), and (c) the density of the densest subgraph. Specifically, we show the following tradeoff between the update time u and the query time q in an m-edge graph for maximum matching and (s,t)-distance: There is no dynamic algorithm which achieves both u=O(m^{1/2-\epsilon}) and q=O(m^{1-\epsilon}) for any small \epsilon>0. Note that these bounds match the bounds given for general graphs in [16] and that the lower bound for (s,t)-distance is almost tight, as the simple algorithm that only records the edge change at update time and computes the solution from scratch at query time achieves u=O(1) and q=O(m). For densest subgraph we show that there is no dynamic algorithm which achieves both u=O(N^{1/4-\epsilon}) and q=O(N^{1/2-\epsilon}) for any small \epsilon>0, which is weaker than the lower bound on general graphs (of u=O(N^{1/2-\epsilon}) and q=O(N^{1-\epsilon})).

    The only relevant prior work consists of conditional lower bounds for planar graphs [1], which have constant degree: In unweighted graphs they show for all-pairs shortest paths a weaker tradeoff between update time u and query time q than we do, namely they prove \max(u^{2}\cdot q,u\cdot q^{2})=\Omega(m^{1-o(1)}). In weighted graphs they show for (s,t)-distance a tradeoff of \max(u,q)=\Omega(m^{1/2-o(1)}). Note that our result is stronger, as it shows that in unweighted graphs no algorithm with u=O(m^{49/100}) and q=O(m^{99/100}) is possible.

  2.

    Degree–lower bound trade-off. While the constant-degree lower bounds are equal to the lower bounds for general graphs in terms of m, they are naturally quadratically lower in terms of the number of nodes N. To understand the behaviour of the bounds also with respect to N, we extend our constant-degree lower bounds for maximum matching, perfect matching, and (s,t)-distance to graphs with maximum degree O(N^{t}), for any 0\leq t\leq 1. We show the following result: There is no dynamic algorithm which achieves both u=O(N^{(1+t)/2-\epsilon}) and q=O(N^{1+t-\epsilon}) in a graph with maximum degree O(N^{t}) for any \epsilon>0. These results hold even in bipartite graphs. Note that for t=1 these results match exactly the bounds for general graphs in [16], and for t=0, they match the aforementioned results for constant-degree graphs.

  3.

    Approximation results. In constant-degree graphs we extend the lower bound to the problem of approximating the (s,t)-distance within a factor of 3-\delta, for any small constant \delta. This naturally extends the (3-\delta)-approximation lower bounds on general graphs to the constant-degree case. In planar graphs, the (s,t)-distance lower bound holds only for exact answers.

    Note that a similar extension to approximation algorithms is not possible for maximum cardinality matching and for densest subgraph: (a) For maximum matching, for any small \delta>0 the above-mentioned (1+\delta)-approximation algorithm [15] achieves an amortized update time of O(\delta^{-2}), which is constant for a constant \delta, thereby precluding any non-trivial lower bounds for approximate maximum matching in the constant-degree setting. Stated differently, our work shows an interesting dichotomy for dynamic matching in constant-degree graphs: For the exact setting there is no dynamic algorithm which achieves both u=O(m^{1/2-\epsilon}) and q=O(m^{1-\epsilon}) for any small \epsilon>0, while a (1+\delta)-approximation can be achieved in constant time, for any small \delta>0. (b) The same dichotomy arises for densest subgraph: For any small \delta>0 there exists a (1-\delta)-approximation algorithm with polylogarithmic time per operation [24], while we show a polynomial lower bound for the exact value.

  4.

    Partially dynamic algorithms. We extend the constant-degree reductions for maximum matching and (s,t)-distance also to the insertions-only and the deletions-only setting, achieving the same lower bound as in the fully dynamic setting.

  5.

    Perfect matching. A special case of maximum cardinality matching is determining whether a perfect matching exists in a bipartite graph. For constant-degree graphs and expander graphs we show the following lower bound: There is no dynamic algorithm which achieves both u=O(m^{1/2-\epsilon}) and q=O(m^{1-\epsilon}) for any small \epsilon>0. This can also be extended to the varying-degree setting.

To summarize, our paper opens up the research field of dynamic graph algorithms for more specific, practical graph classes, in contrast with previous work that concentrated on general or planar graphs. We believe that our techniques can be extended to further classes of dynamic graphs or even in other domains of theoretical computer science, such as distributed graph algorithms or streaming algorithms. One further interesting implication of our work is presenting the limitations of dynamic graph algorithms on expanders, thus complementing recent algorithmic results that use expander decompositions in dynamic graphs.

Problem | Class | Section | u(m,N) | q(m,N)
Maximum Matching | \Delta\leq 3 | Section 3.1 | m^{1/2-\epsilon} | m^{1-\epsilon}
 | constant degree & expansion | Section 3.3 | |
 | power-law graphs | Section 3.4 | |
 | \Delta\leq 3, partially dynamic | Section B.2 | |
 | \Delta\leq N^{t} | Section 3.2 | N^{(1+t)/2-\epsilon} | N^{1+t-\epsilon}
(s,t)-distance | \Delta\leq 3 | Section 4.1 | m^{1/2-\epsilon} | m^{1-\epsilon}
 | (3-\delta)-approx, \Delta\leq 3 | Section 4.2 | |
 | constant degree & expansion | Section 4.4 | |
 | power-law graphs | Section 4.5 | |
 | \Delta\leq 3, partially dynamic | Section B.1 | |
 | \Delta\leq N^{t} | Section 4.3 | N^{(1+t)/2-\epsilon} | N^{1+t-\epsilon}
Densest Subgraph | \Delta\leq 5 | Section 5.1 | N^{1/4-\epsilon} | N^{1/2-\epsilon}
 | constant degree & expansion | Section 5.2 | |
 | power-law graphs | Section 5.3 | |

Table 2: Our results for graphs on N nodes with m edges. For every u and q stated above, there is no algorithm for the corresponding problem with amortized O(u(m,N)) update time and O(q(m,N)) query time simultaneously unless the OMv conjecture is false. The first three rows hold also for perfect matching. All the lower bounds in the table except for densest subgraph match the general OMv lower bounds.

1.2 Our Techniques

We prove lower bounds by reductions from the online matrix vector (OMv) conjecture [16]. In these reductions, the input of an online problem, which is an n\times n matrix M and a sequence of n pairs (u,v) of n-vectors, is translated into a dynamic graph. The reduction is built so that there exists a pair (u,v) satisfying uMv=1 if and only if the dynamic graph has some desired property at some point of time. While we follow the general framework of OMv lower bounds, the details are delicate, as the dynamic graphs we construct should fall into specific graph classes at all times, while still maintaining the graph property under consideration. We give a high-level overview of our reductions below.

One way to turn known OMv-to-dynamic-graph reductions into reductions that produce bounded-degree graphs is by replacing high-degree nodes by bounded-degree trees. This technique has a rather clear and straightforward effect on the distances in the graph, so it is applicable when considering distance-related problems. This, however, is far from being the case when considering other problems, such as maximum matchings. Here, replacing a high-degree node with a gadget could adversely affect the desired matching size, since the gadget might create several augmenting paths that would not have existed when it was a single high-degree node. To overcome this, we limit the possible maximum matching sizes by designing a reduction graph with bounded-degree gadgets composed of paths, where the maximum matching is always either a perfect matching or a near-perfect matching, i.e., the matching size is either N/2 or N/2-1. This reduction thus involves a large matching and a small gap between the uMv=0 and uMv=1 cases, and hence cannot be extended to achieve a lower bound for the approximation of the maximum matching size. While this might seem like a limitation of our construction, recall that this is actually not the case: As described above, for any small \delta>0 there is a constant-time (1+\delta)-approximation dynamic algorithm for the problem, and, thus, such a lower bound cannot exist.

An even more delicate reduction we present is for proving a lower bound on the densest-subgraph problem. A straightforward reduction would change O(n) graph edges for every bit of the input, which would allow us to make sure that the density of the densest subgraph changes by a significant amount when uMv=0 versus when uMv=1. However, this would involve O(n^{2}) updates for each (u,v) input pair, and the reduction would fail to yield any non-trivial lower bound. Thus, we are forced to change very few edges for each input bit, which renders an almost negligible effect on the density, making it difficult to control the exact density of the densest subgraph. Our reduction balances these two factors, using a construction where each gadget is a sufficiently dense regular graph, while having each bit of the input translate into the existence or nonexistence of merely two edges inside specific gadgets. As in the case of matchings, our lower bounds cannot be extended to approximations, as for any \delta>0 there exists a fast algorithm with polylogarithmic update time for computing (1-\delta)-approximations to the densest subgraph.

We then extend these reductions from bounded-degree graphs to constant-degree constant-expansion graphs. First, the standard lower bound reductions contain sparse cuts if the inputs M,uM,u or vv are sparse, making a standard reduction graph far from being an expander. Thus, we have to augment the graph with many more edges to make sure that it has no sparse cuts regardless of M,uM,u and vv. We do this augmentation “inside a layer” to prevent the additional edges from creating undesired short paths between ss and tt, or spurious augmenting paths in the case of matchings. Sparse cuts also exist in parts of the graph that do not depend on M,uM,u or vv, and to handle these, we add edges of a constant-degree expander between a well-chosen set of nodes, thus guaranteeing the expansion without changing the required graph property. Finally, in the case of distance-related problems, we note that expander graphs can have at most logarithmic diameter, but the substitution of nodes by trees described above increases the diameter to be at least logarithmic, leaving only a very small slack for our construction.

When studying densest subgraphs on expanders, adding edges in order to avoid sparse cuts might change the location and structure of the densest subgraph in an undesired way. In order to guarantee the expansion in this case, we add a copy of all the graph nodes, build a constant-degree expander on the copies of the nodes, and then connect each node to its copy by a matching.

In dynamic power-law graphs where the node degrees may depend on the inputs u,M,vu,M,v and change over time, we have to guarantee that the degree changes incurred by the processing of different inputs do not cause a violation of the power-law distribution of degrees. As before, all the changes must also be done without changing the graph property under consideration, and without performing too many update operations. We address this problem by inserting or deleting edges in an online fashion in other parts of the reduction graph, to compensate for the changes incurred by processing the input vector pairs.

Organisation

Section 2 has notation and definitions. Section 3 presents the dynamic maximum matching lower bounds, Section 4 presents the dynamic (s,t)(s,t)-distance lower bounds, and Section 5 presents the dynamic densest subgraph lower bounds. The lower bounds for the partially dynamic setting are deferred to the appendix.

2 Preliminaries

Throughout the paper, we consider vectors and matrices that are boolean, and so a vector-matrix-vector multiplication outputs a single bit. Henzinger et al. [16] define the Online Matrix Vector (OMv) and the Online Vector Matrix Vector (OuMv) multiplication problems.

Definition 2.1 (Online Matrix Vector Multiplication).

Let M be a boolean n\times n matrix. Preprocessing the matrix is allowed. Then, n vectors v^{1},v^{2},\ldots,v^{n} arrive one at a time, and the task is to output the product Mv^{i} before the next vector is revealed.

Definition 2.2 (Online Vector Matrix Vector Multiplication).

Let M be a boolean n\times n matrix. Preprocessing the matrix is allowed. Then, n vector pairs (u^{1},v^{1}),(u^{2},v^{2}),\ldots,(u^{n},v^{n}) arrive one at a time, and the task is to output the bit u^{i}Mv^{i} before the next vector pair is revealed.

In their paper, they show that the OuMv problem can be reduced to the OMv problem, and conjecture that there is no truly subcubic time algorithm for OMv.

Conjecture 2.3 (OMv).

There is no algorithm for the OMv (and hence the OuMv) problem running in time O(n^{3-\epsilon}) for any constant \epsilon>0.

We work with the OuMv problem for all the reductions in our paper. We denote the length of our input vectors u^{i},v^{i} by n, and thus the matrix M is of dimension n\times n. We use upper indices to indicate the vector's location in the stream, but usually focus on one pair (u,v), omitting these indices. We use lower indices for a location in the vector or matrix, e.g., u_{i} and M_{ij}. We use N to denote the number of nodes in our reduction graph.
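As a point of reference, here is a minimal sketch (ours, not from [16]) of the trivial OuMv algorithm: it spends O(n^{2}) boolean operations per pair, i.e., O(n^{3}) overall, and the conjecture asserts that no algorithm improves on this by a polynomial factor.

```python
# Naive OuMv: answer each online pair (u, v) in O(n^2) boolean operations,
# for O(n^3) total work over the n pairs -- the baseline that the OMv
# conjecture says cannot be polynomially improved.

def oumv_naive(M, pairs):
    n = len(M)
    answers = []
    for u, v in pairs:                      # pairs arrive one at a time
        bit = any(u[i] and M[i][j] and v[j]
                  for i in range(n) for j in range(n))
        answers.append(1 if bit else 0)     # output before the next pair arrives
    return answers
```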

Definition 2.4 (Expansion).

The expansion parameter of a graph G=(V,E) is defined as

h=\min\left\{\frac{|E(S,\overline{S})|}{|S|}\;\middle|\;\emptyset\neq S\subseteq V,\;|S|\leq|V|/2\right\}

where E(S,\overline{S}) is the number of edges from S to V\setminus S. We call a graph with expansion h an h-expander. Works on dynamic algorithms use a different definition of the expansion parameter h^{\prime}, called volume expansion. However, when considering constant-degree graphs with constant expansion (as we do in this paper), both parameters are within a \Delta factor of each other, so we only consider the expansion parameter h in our proofs.
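To make Definition 2.4 concrete, the following brute-force sketch (ours) evaluates the expansion parameter of a tiny graph directly from the definition; it enumerates all candidate sets S and is exponential in |V|, so it is an illustration only, not an algorithm we use.

```python
from itertools import combinations

# Brute-force evaluation of the expansion parameter h of Definition 2.4
# on a toy graph.  edges is a collection of frozenset({u, v}) pairs.

def expansion(nodes, edges):
    best = float("inf")
    for k in range(1, len(nodes) // 2 + 1):              # nonempty S with |S| <= |V|/2
        for subset in combinations(nodes, k):
            S = set(subset)
            cut = sum(1 for e in edges if len(e & S) == 1)   # |E(S, S-bar)|
            best = min(best, cut / len(S))
    return best

# Example: the 4-cycle 0-1-2-3-0 has expansion 1.
cycle = [frozenset({i, (i + 1) % 4}) for i in range(4)]
print(expansion(range(4), cycle))   # -> 1.0
```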

We study power-law graphs as introduced in [4], and only consider the setting where \beta>2. In the following definition, if the number of nodes N in the graph is fixed, then we get that \alpha is roughly \ln N.

Definition 2.5 (Power Law).

A graph is said to follow an (\alpha,\beta)-power law distribution if the number N_{d} of nodes with degree d is inversely proportional to d^{\beta} for some constant \beta>0. That is,

N_{d}=\left\lfloor\frac{e^{\alpha}}{d^{\beta}}\right\rfloor\approx\left\lfloor\frac{1}{\zeta(\beta)}\cdot\frac{N}{d^{\beta}}\right\rfloor,

where \zeta(\beta)=\sum_{i=1}^{\infty}1/i^{\beta} is the Riemann Zeta function.

Since dynamic graphs allow edge insertions and deletions, it is impossible to maintain an exact degree distribution at all times. Hence, we introduce the notion of approximate power-law distributions to afford some slack for dynamic changes. One natural relaxation is to allow β\beta to vary within an interval.

Definition 2.6 (β\beta-Varying Power Law).

A graph is said to follow an (\alpha,\beta_{1},\beta_{2})-varying power law distribution if the number N_{d} of nodes with degree d satisfies

\min\left\{\left\lfloor\frac{1}{\zeta(\beta_{1})}\cdot\frac{N}{d^{\beta_{1}}}\right\rfloor,\left\lfloor\frac{1}{\zeta(\beta_{2})}\cdot\frac{N}{d^{\beta_{2}}}\right\rfloor\right\}\leq N_{d}\leq\max\left\{\left\lfloor\frac{1}{\zeta(\beta_{1})}\cdot\frac{N}{d^{\beta_{1}}}\right\rfloor,\left\lfloor\frac{1}{\zeta(\beta_{2})}\cdot\frac{N}{d^{\beta_{2}}}\right\rfloor\right\}.

This relaxation of an exact power law, while being natural, is a global relaxation rather than a local one. Thus we also define two locally approximate definitions below that allow similar slack for all degrees.

Definition 2.7 (Additively Approximate Power Law).

A graph is said to follow an (\alpha,\beta,c)-additively approximate power law distribution if the number N_{d} of nodes of degree d, for a realisable degree d, satisfies

\left\lfloor\frac{1}{\zeta(\beta)}\cdot\frac{N}{d^{\beta}}\right\rfloor-c\leq N_{d}\leq\left\lfloor\frac{1}{\zeta(\beta)}\cdot\frac{N}{d^{\beta}}\right\rfloor+c

where we say that d is a realisable degree if there is a node of degree d in an (\alpha,\beta)-power law graph.

Definition 2.8 (Multiplicatively Approximate Power Law).

A graph is said to follow an (\alpha,\beta,\epsilon)-multiplicatively approximate power law distribution if the number N_{d} of nodes of degree d satisfies

\frac{1}{(1+\epsilon)}\cdot\left\lfloor\frac{1}{\zeta(\beta)}\cdot\frac{N}{d^{\beta}}\right\rfloor\leq N_{d}\leq(1+\epsilon)\cdot\left\lfloor\frac{1}{\zeta(\beta)}\cdot\frac{N}{d^{\beta}}\right\rfloor.

Our lower bounds contain at most four nodes that are one degree away from an exact power-law distribution, and thus hold in all the models discussed above with any reasonable parameter regime. We note a couple of properties of power-law graphs that we use in our lower bounds.

The maximum realizable degree in a power law graph is \left\lceil e^{\alpha/\beta}\right\rceil, since

N_{d}\geq 1\iff\left\lfloor\frac{e^{\alpha}}{d^{\beta}}\right\rfloor\geq 1\iff d\leq\left\lceil e^{\alpha/\beta}\right\rceil.

In terms of N, the maximum degree in a power-law graph is \Delta=\left\lceil\left(N/\zeta(\beta)\right)^{1/\beta}\right\rceil<\sqrt{N}.

We work in the setting where \beta>2. Note that in this setting, the number of edges in the graph is given by

|E|=\frac{1}{2}\cdot\frac{\zeta(\beta-1)}{\zeta(\beta)}\cdot N,

which is linear in the number of nodes for a fixed \beta.
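The two facts above are easy to check numerically; the sketch below (our illustration, with a hypothetical function name) tabulates N_{d}=\lfloor e^{\alpha}/d^{\beta}\rfloor, the largest realisable degree, and the resulting node and edge totals for a given (\alpha,\beta).

```python
import math

# Numerical illustration of the power-law formulas above: degree counts
# N_d = floor(e^alpha / d^beta), the largest realisable degree, and the
# resulting totals N = sum_d N_d and |E| = (1/2) * sum_d d * N_d.

def power_law_profile(alpha, beta):
    max_deg = 1
    while math.floor(math.exp(alpha) / (max_deg + 1) ** beta) >= 1:
        max_deg += 1                                   # largest d with N_d >= 1
    N_d = {d: math.floor(math.exp(alpha) / d ** beta)
           for d in range(1, max_deg + 1)}
    N = sum(N_d.values())
    m = sum(d * count for d, count in N_d.items()) // 2
    return N_d, max_deg, N, m

# For beta > 2 and large alpha, m / N approaches zeta(beta-1) / (2*zeta(beta)),
# i.e. the number of edges is linear in the number of nodes.
```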

3 Lower Bounds for Dynamic Maximum Matching

In this section, we present our lower bound results for the maximum cardinality matching problem. The previous matching lower bounds on general graphs [16, 11] use reduction graphs that contain nodes with degree Ω(N)\Omega(N). Towards showing a lower bound on expanders, we first sparsify the original reduction.

In Section 3.1, we give a lower bound for maximum matching on graphs with maximum degree 3. In Section 3.2, we show that the distinction between the unbounded and constant-degree reductions is not discrete, by giving a lower bound reduction parameterized on the maximum degree allowed in the graph. In Section 3.3, we give a reduction graph that has constant expansion. Finally, we show our lower bounds for power-law graphs in Section 3.4.

3.1 Constant-Degree Graph

We first perform a simple reduction that shows that maintaining maximum matchings is hard even on graphs where the maximum degree is 3. We use the following gadget composed of paths to maintain matching properties in our reduction graph during sparsification; see Figure 1. Our gadget construction starts by replacing each node of a dense reduction by a path; we refer to each path as a 'subgadget'. Connecting every node of this new subgadget with nodes outside the subgadget might create unwanted matchings of larger sizes, so instead we carefully choose a subset of the path nodes to connect outside the subgadget.

Figure 1: Odd and even sized paths used in the maximum matching lower bounds. The canonical matchings are marked in red.

Figure 1 shows odd and even paths (“odd” and “even” describe the number of nodes) with a “canonical” matching for each of them marked in red. Next, we detail the connections outside the subgadgets.

Consider an odd path on 2n+1 vertices, and a bipartition of the vertices into (X^{\prime},X) with |X^{\prime}|\leq|X|. Indexing the vertices as X[0] and X^{\prime}[i],X[i] for 1\leq i\leq n, our canonical matching matches X[i] with X^{\prime}[i], and leaves X[0] unmatched. We connect only the vertices \{X[i]\mid 1\leq i\leq n\} outside the subgadget, while vertices in X^{\prime} and X[0] only have edges inside the subgadget. For an even path on 2n+2 vertices indexed as above, our canonical matching is perfect, and matches X[i] to X^{\prime}[i]. Only vertex X^{\prime}[0] and all the vertices in X are connected outside the subgadget, and all vertices X^{\prime}[i], 1\leq i\leq n, only have edges within the subgadget.
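For concreteness, the following sketch (ours; the node labels are hypothetical) builds the two subgadgets of Figure 1, together with their canonical matchings and the vertices that are allowed to receive edges from outside the subgadget.

```python
# Path subgadgets of Figure 1.  Nodes are labelled ("X", i) for X[i] and
# ("Xp", i) for X'[i]; only the returned `external` vertices may be
# connected to nodes outside the subgadget.

def odd_subgadget(n):
    """Path on 2n+1 nodes: X[0], X'[1], X[1], ..., X'[n], X[n]."""
    order = [("X", 0)]
    for i in range(1, n + 1):
        order += [("Xp", i), ("X", i)]
    path = list(zip(order, order[1:]))                            # path edges
    canonical = [(("Xp", i), ("X", i)) for i in range(1, n + 1)]  # X[0] unmatched
    external = [("X", i) for i in range(1, n + 1)]
    return path, canonical, external

def even_subgadget(n):
    """Path on 2n+2 nodes: X'[0], X[0], ..., X'[n], X[n]; canonical matching is perfect."""
    order = []
    for i in range(n + 1):
        order += [("Xp", i), ("X", i)]
    path = list(zip(order, order[1:]))
    canonical = [(("Xp", i), ("X", i)) for i in range(n + 1)]
    external = [("Xp", 0)] + [("X", i) for i in range(n + 1)]
    return path, canonical, external
```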

Definition 3.1 (Reduction gadget).

A reduction gadget with x subgadgets of size y is a bipartite graph composed of x subgraphs, each of which is a path on y nodes.

Reduction Graph

The reduction graph is composed of two odd-sized reduction gadgets, and two even-sized reduction gadgets as follows.

  • A reduction gadget with one subgadget of size 2n+1, on a set L_{1}\cup L_{2}. The nodes are labelled L_{1}[i] for 1\leq i\leq n, and L_{2}[i] for 0\leq i\leq n, and the path is from L_{2}[0] to L_{2}[n].

  • A reduction gadget with n subgadgets of size 2n+2 each, on a set L_{3}\cup L_{4}. The subgadgets are labelled LG[i] for 1\leq i\leq n, and the nodes of subgadget LG[i] are labelled L_{3}[i,j] or L_{4}[i,j] for 0\leq j\leq n depending on whether the node is in L_{3} or L_{4}. The path in each subgadget goes from L_{3}[i,0] to L_{4}[i,n].

  • A copy of the above structure, with node sets marked R_{1},R_{2},R_{3},R_{4} instead of L_{1},L_{2},L_{3},L_{4}, respectively.

  • For the matrix M, add the edge (L_{4}[i,j],R_{4}[j,i]) if M_{ij}=1.

  • Given an input vector u, for each i\in[n], add the edge (L_{2}[i],L_{3}[i,0]) if u_{i}=1.

  • Given an input vector v, for each j\in[n], add the edge (R_{2}[j],R_{3}[j,0]) if v_{j}=1.

The total number of nodes in the reduction graph is N=4n^{2}+8n+2=\Theta(n^{2}). Note that our reduction graph is bipartite, and thus our lower bounds in this section hold for maintaining an exact maximum matching even on bipartite graphs.
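The construction above can be assembled mechanically; the sketch below (our illustration, with hypothetical node labels such as ("L4", i, j)) builds the static part of the reduction graph from M and returns, separately, the O(n) input-dependent edges of a pair (u,v). Indices are shifted to 0-based Python indexing of M, u and v.

```python
# Sketch of the Section 3.1 reduction graph.  The static edges depend only
# on M; vector_edges(u, v) gives the O(n) edges that change with each pair.

def static_edges(M):
    n = len(M)
    E = set()
    for side in ("L", "R"):
        # Odd gadget: path L2[0], L1[1], L2[1], ..., L1[n], L2[n].
        order = [(side + "2", 0)]
        for i in range(1, n + 1):
            order += [(side + "1", i), (side + "2", i)]
        E |= {frozenset(p) for p in zip(order, order[1:])}
        # Even gadgets LG[i]: path L3[i,0], L4[i,0], L3[i,1], ..., L4[i,n].
        for i in range(1, n + 1):
            order = []
            for j in range(n + 1):
                order += [(side + "3", i, j), (side + "4", i, j)]
            E |= {frozenset(p) for p in zip(order, order[1:])}
    # Matrix edges: (L4[i,j], R4[j,i]) whenever M_{ij} = 1.
    for i in range(n):
        for j in range(n):
            if M[i][j]:
                E.add(frozenset({("L4", i + 1, j + 1), ("R4", j + 1, i + 1)}))
    return E

def vector_edges(u, v):
    n = len(u)
    E = {frozenset({("L2", i + 1), ("L3", i + 1, 0)}) for i in range(n) if u[i]}
    E |= {frozenset({("R2", j + 1), ("R3", j + 1, 0)}) for j in range(n) if v[j]}
    return E
```

Every node touches at most two path edges plus at most one matrix or vector edge, which is where the degree bound of 3 comes from.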

Matchings in the Graph

We start by defining a base matching B on the graph, which is made up of the canonical matchings on each of the gadgets. On the left side, B matches L_{3}[i,j] to L_{4}[i,j], and L_{1}[i] to L_{2}[i] for all i,j. The matching on the right side is similar. Note that this matching always exists regardless of the input, and only L_{2}[0] and R_{2}[0] are unmatched in the entire graph. Thus |B|=\frac{N}{2}-1. We claim that this graph has a perfect matching if and only if uMv=1. Let C denote the maximum cardinality matching.

Lemma 3.2.

If uMv=1, then |C|=\frac{N}{2}, and otherwise |C|=\frac{N}{2}-1.

Proof 3.3.

Since B is always a matching of size \frac{N}{2}-1 regardless of the input, the claim is equivalent to showing that uMv=1 if and only if there is an augmenting path with respect to the matching B.

(\implies) Suppose that uMv=1, with u_{i}=M_{ij}=v_{j}=1. Consider the path P composed of the following subpaths, of which all except P_{4} start with an unmatched edge and end with a matched edge, while P_{4} both starts and ends with an unmatched edge.

  • P_{1}=L_{2}[0],L_{1}[1],L_{2}[1],\ldots,L_{2}[i]

  • P_{2}=L_{2}[i],L_{3}[i,0],L_{4}[i,0],\ldots,L_{4}[i,j]

  • P_{3}=L_{4}[i,j],R_{4}[j,i],R_{3}[j,i],\ldots,R_{3}[j,0]

  • P_{4}=R_{3}[j,0],R_{2}[j],R_{1}[j],\ldots,R_{2}[0]

Thus, P is an augmenting path to the base matching B, which gives us that the maximum matching C has to have size >\frac{N}{2}-1, implying that the maximum matching C is a perfect matching.

(\impliedby) Suppose now that there exists an augmenting path P to the base matching B that starts at s=L_{2}[0] and ends at t=R_{2}[0].

  • Since (\cup_{i}L_{i},\cup_{j}R_{j}) is an (s,t)-cut, there is at least one crossing edge, say (L_{4}[i,j],R_{4}[j,i]), in P. Thus M_{ij}=1.

  • Since P leaves the subgadget LG[i] using (L_{4}[i,j],R_{4}[j,i]), it should have entered the subgadget at some previous instance. Since (L_{4}[i,j],R_{4}[j,i]) is an unmatched edge and all the matching edges in LG[i] are within the subgadget, P should have entered the subgadget using an unmatched edge. As all the matching edges in LG[i] are between L_{3} and L_{4}, P cannot both enter and exit the subgadget through L_{4}. Thus P enters LG[i] through L_{3}. However, the only possible unmatched edge from L_{3} leaving the subgadget is the edge (L_{3}[i,0],L_{2}[i]). Thus P uses the edge (L_{3}[i,0],L_{2}[i]) to enter the subgadget LG[i], and so u_{i}=1.

  • The path P now enters the subgadget RG[j] through the unmatched edge (L_{4}[i,j],R_{4}[j,i]). As before, all the matched edges in RG[j] are between R_{4} and R_{3}, and so P has to exit the subgadget using an unmatched edge from R_{3}. However, the only possible unmatched edge from R_{3} leaving the subgadget is the edge (R_{3}[j,0],R_{2}[j]). Thus the edge (R_{3}[j,0],R_{2}[j]) is used by P, giving us that v_{j}=1.

This gives us that uMv=1 as required.

Complexity of the Reduction

We are now ready to prove the theorem.

Theorem 3.4.

For any constant \epsilon>0, there is no dynamic algorithm maintaining a maximum matching or determining the existence of a perfect matching, on all N-node graphs with maximum degree \Delta\leq 3, with amortized O(N^{1/2-\epsilon}) update time and O(N^{1-\epsilon}) query time, unless the OMv conjecture is false.

Proof 3.5.

Consider the reduction graph above. It consists of N=4n^{2}+8n+2=\Theta(n^{2}) nodes. Every time we get a new (u,v) input vector pair, we delete all the edges between L_{2}\times L_{3} and R_{2}\times R_{3} and insert edges according to the new input vectors. This takes O(n) updates in total. After that, we query once for the size of the maximum matching in this new graph, and return 1 if and only if |C|=\frac{N}{2}.

Thus for each pair of input vectors, we perform O(n) updates and O(1) queries. In total, checking n pairs takes O(n^{2}) updates and O(n) queries. If there were an algorithm for maximum matching on constant-degree graphs with update time O(N^{1/2-\epsilon}) (i.e., O(n^{1-2\epsilon})) and query time O(N^{1-\epsilon}) (i.e., O(n^{2-2\epsilon})), then we could decide if uMv=1 for all n pairs in O(n^{3-2\epsilon}) time, contradicting the OMv conjecture.
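Putting the pieces together, the reduction in this proof can be phrased as the following driver (a sketch under the assumption that `DynamicMatching` is a placeholder for the hypothetical dynamic algorithm with insert, delete and max_matching_size operations; static_edges and vector_edges are the helpers sketched after the construction above).

```python
# Driver for the reduction of Theorem 3.4: n OuMv pairs are answered with
# O(n) updates and one query each, so update time O(N^{1/2-eps}) and query
# time O(N^{1-eps}) would yield an O(n^{3-2eps}) OuMv algorithm.

def solve_oumv_via_matching(M, pairs, DynamicMatching):
    n = len(M)
    N = 4 * n * n + 8 * n + 2                  # nodes of the reduction graph
    alg = DynamicMatching(static_edges(M))     # preprocessing on the static part
    current = set()                            # input-dependent edges present now
    answers = []
    for u, v in pairs:
        new = vector_edges(u, v)
        for e in current - new:                # O(n) deletions ...
            alg.delete(e)
        for e in new - current:                # ... and O(n) insertions per pair
            alg.insert(e)
        current = new
        answers.append(1 if alg.max_matching_size() == N // 2 else 0)
    return answers
```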

3.2 Varying Degree Graph

We present a reduction that gives a lower bound parameterized on the maximum degree of the graph. Note that setting t=1t=1 and t=0t=0 in the following theorem give us the unbounded degree lower bound of [16] and Theorem 3.4 respectively.

Theorem 3.6.

For any 0\leq t\leq 1 and any constant \epsilon>0, there is no dynamic algorithm maintaining a maximum matching or determining the existence of a perfect matching, on all N-node graphs with maximum degree \Delta=O(N^{t}), with amortized O(N^{\frac{1+t}{2}-\epsilon}) update time and O(N^{1+t-\epsilon}) query time, unless the OMv conjecture is false.

Given 0t10\leq t\leq 1, we construct a reduction graph that has maximum degree O(Nt)O(N^{t}). L1,L2,R1,R2L_{1},L_{2},R_{1},R_{2} and the edges for uu and vv are the same as in the previous construction. We now detail the changes for L3,L4,R3,R4L_{3},L_{4},R_{3},R_{4} and the edges that depend on MM.

  • LGLG is now an even reduction gadget with nn subgadgets of size 2n1t1+t+22n^{\frac{1-t}{1+t}}+2 each. The path in each subgadget goes from L3[i,0]L_{3}[i,0] to L4[i,n1t1+t]L_{4}[i,n^{\frac{1-t}{1+t}}], and similarly for R3R_{3} and R4R_{4}.

  • For the matrix M, if M_{ij}=1, let i^{\prime}=\left\lceil i\cdot n^{-2t/(t+1)}\right\rceil and j^{\prime}=\left\lceil j\cdot n^{-2t/(t+1)}\right\rceil and add the edge (L_{4}[i,j^{\prime}],R_{4}[i^{\prime},j]) to the graph.

Note that the augmenting paths in this reduction graph are the same as in the constant-degree reduction graph by a similar proof as in Lemma 3.9. By construction, each node in L4L_{4} is connected to at most n2t/(t+1)n^{2t/(t+1)} nodes in R4R_{4}, and each node in R4R_{4} is connected to at most n2t/(t+1)n^{2t/(t+1)} nodes in L4L_{4}. The proof of the theorem is similar to the proof of Theorem 3.4.

Proof 3.7.

The number of nodes in the reduction graph described in Section 3.2 is dominated by the number of nodes in L4L_{4} and R4R_{4}. Thus the total number of nodes in the reduction graph is N=Θ(n2/(t+1))N=\Theta(n^{2/(t+1)}) nodes. Each node in L4L_{4} and R4R_{4} has at most n2t/(t+1)n^{2t/(t+1)} edges of MM incident on it by a similar argument as in the proof of Theorem 4.9. Thus the maximum degree in the graph is O(n2t/(t+1))=O(Nt)O(n^{2t/(t+1)})=O(N^{t}) as required. The rest is similar to Theorem 3.4.

Every time we get a new (u,v)(u,v) input vector pair, we delete all the edges between L2×L3L_{2}\times L_{3} and R2×R3R_{2}\times R_{3} and insert edges according to the new input. Thus, for each pair of input vectors, we perform O(n)O(n) updates and O(1)O(1) queries. In total, checking nn pairs takes us O(n2)O(n^{2}) updates and O(n)O(n) queries. If there were an algorithm for maximum matching on graphs with maximum degree bounded by NtN^{t} with update time O(N1+t2ϵ)O(N^{\frac{1+t}{2}-\epsilon}) (i.e., O(n12ϵ)O(n^{1-2\epsilon})) and query time O(N1+tϵ)O(N^{1+t-\epsilon}) (i.e., O(n22ϵ)O(n^{2-2\epsilon})), then we can decide if uMv=1uMv=1 for all nn pairs in O(n32ϵ)O(n^{3-2\epsilon}) time, contradicting the OMv conjecture.

3.3 Expander Graph

The previous matching lower bounds on general graphs [16, 11] use reduction graphs that contain nodes with degree Ω(N)\Omega(N). In this section, we construct a constant-degree reduction graph with constant expansion.

Reduction gadgets

While the reduction gadgets in Section 3.1 suffice for sparsification, we need additional constructions in order to guarantee constant expansion. In particular, it turns out that adding edges inside a subgadget does not suffice for constant expansion, and we are forced to add edges between subgadgets. Our construction adds edges on the same side of the bipartition across subgadgets, and our proof implicitly shows that if the newly added edges take part in any augmenting path, then there also exists an augmenting path in the subgraph devoid of any newly inserted edge.

The reduction graph consists of a left subgraph (L) and a right subgraph (R), connected together by edges corresponding to the matrix M. Note that for constant expansion, we need the number of edges of M to be a constant fraction of the sizes of L and R. While it would be possible to construct a reduction graph with |L| and |R| that depend on the input matrix M, we instead choose to augment the input matrix and vectors as it simplifies notation. We thus augment the input beforehand to ensure that there are \Omega(n^{2}) edges crossing from L to R. To this end, we work with the vectors \hat{u}=(u\,\,0) and \hat{v}=(v\,\,0) of dimension 2n, and the matrix \hat{M}=\begin{pmatrix}M&1\\ 1&1\end{pmatrix} of dimension 2n\times 2n. It is easy to see that uMv=1\iff\hat{u}\hat{M}\hat{v}=1.
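The padding itself is a one-liner; the sketch below (ours, with hypothetical names) builds \hat{u}, \hat{v} and \hat{M} and makes explicit that \hat{M} has at least 3n^{2} one-entries while uMv=\hat{u}\hat{M}\hat{v}.

```python
# Padding the OuMv input: u_hat = (u 0), v_hat = (v 0), and M_hat has M in
# its top-left n x n block and 1 everywhere else, so M_hat has >= 3n^2 ones
# while u M v = u_hat M_hat v_hat (the padded coordinates of u_hat, v_hat are 0).

def pad_input(M, u, v):
    n = len(M)
    u_hat = list(u) + [0] * n
    v_hat = list(v) + [0] * n
    M_hat = [[M[i][j] if i < n and j < n else 1 for j in range(2 * n)]
             for i in range(2 * n)]
    return M_hat, u_hat, v_hat
```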

Definition 3.8 (Reinforced gadget).

A reinforced gadget with x subgadgets of size y consists of x subgraphs, each of which is a path on y nodes. The nodes are bipartitioned into sets (X^{\prime},X) with the larger side of the partition labelled as X in each subgadget; thus |X^{\prime}|\leq|X|. It is then augmented with the following edge set: Consider a degree-d expander graph on x\cdot\left\lceil\frac{y}{2}\right\rceil nodes, choose an arbitrary bijection between the expander nodes and X, and add the expander edges to these nodes accordingly. The resulting graph is the reinforced gadget.

Note that reinforced gadgets are not bipartite. Thus, while the constant-degree lower bounds hold for bipartite matching, the expander result is for maximum matching on general graphs. In what follows, we drop the hats from \hat{u},\hat{M} and \hat{v} for simplicity, but continue our analysis with their dimensions as 2n, 2n\times 2n, and 2n respectively.
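The sketch below (ours) assembles a reinforced gadget out of the even path subgadgets sketched after Figure 1 above; a real construction places a constant-degree expander on the X side, and the cycle-plus-chords overlay used here is only a placeholder for such an expander, not one itself.

```python
# Sketch of a reinforced gadget (Definition 3.8): x path subgadgets, with
# extra edges overlaid on the larger side X of the bipartition.  The
# cycle-plus-chords graph below merely stands in for a constant-degree
# expander on X and is not claimed to have constant expansion.

def stand_in_expander(nodes, offsets=(1, 2, 5)):
    k = len(nodes)
    return {frozenset({nodes[i], nodes[(i + off) % k]})
            for i in range(k) for off in offsets}

def reinforced_gadget(x, n):
    """x even subgadgets of size 2n+2 each (as in the L3/L4 gadget)."""
    edges, X = set(), []
    for s in range(x):
        path, _, _ = even_subgadget(n)             # sketched in Section 3.1
        edges |= {frozenset({(s,) + a, (s,) + b}) for a, b in path}
        X += [(s, "X", i) for i in range(n + 1)]   # the larger side of subgadget s
    return edges | stand_in_expander(X)
```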

Reduction Graph

We use the following reduction graph, composed of two odd-sized reinforced gadgets and two even-sized reinforced gadgets.

  • A reinforced gadget with one subgadget of size 4n+1, on a set L_{1}\cup L_{2}. The nodes are labelled L_{1}[i] for 1\leq i\leq 2n, and L_{2}[i] for 0\leq i\leq 2n. The path is from L_{2}[0] to L_{2}[2n], and the expander is on L_{2}.

  • A reinforced gadget with 2n subgadgets of size 4n+2, on a set L_{3}\cup L_{4}. The subgadgets are labelled LG[i] for 1\leq i\leq 2n, and the nodes of subgadget LG[i] are labelled L_{3}[i,j] or L_{4}[i,j] for 0\leq j\leq 2n depending on whether the node is in L_{3} or L_{4}. The path in each subgadget goes from L_{3}[i,0] to L_{4}[i,2n], and the expander is on L_{4}.

  • A copy of the above structure, with node sets marked R_{i} instead of L_{i}, respectively.

  • For the matrix M, add the edge (L_{4}[i,j],R_{4}[j,i]) if M_{ij}=1.

  • Given an input vector u, for each i\in[2n], add the edge (L_{2}[i],L_{3}[i,0]) if u_{i}=1, and the edge (L_{2}[i],L_{4}[i,0]) otherwise.

  • Given an input vector v, for each j\in[2n], add the edge (R_{2}[j],R_{3}[j,0]) if v_{j}=1, and the edge (R_{2}[j],R_{4}[j,0]) otherwise.

The total number of nodes in the reduction graph is N=16n^{2}+16n+2=\Theta(n^{2}).

Figure 2: The expander reduction graph for maximum matching. The red lines denote the canonical matching, the blue lines denote the paths in each subgadget, the grey lines denote the expander edges, and the green lines denote the input-dependent edges.
Matchings in the Graph

Our base matching B is similar to the constant-degree case, and is made up of the canonical matchings on each of the gadgets. On the left side, B matches L_{3}[i,j] to L_{4}[i,j], and L_{1}[i] to L_{2}[i] for all i,j. The matching on the right side is similar. Note that this matching always exists regardless of the input, and only L_{2}[0] and R_{2}[0] are unmatched in the entire graph. Thus |B|=\frac{N}{2}-1. We claim that this graph has a perfect matching if and only if uMv=1. Let C denote the maximum cardinality matching.

Lemma 3.9.

If uMv=1, then |C|=\frac{N}{2}, and otherwise |C|=\frac{N}{2}-1.

The proof of Lemma 3.2 without any modifications shows Lemma 3.9 even with the additional edges present.

Complexity of the Reduction

On the arrival of a new vector pair, we first add all the edges corresponding to the new input (if they do not already exist), and then remove the previous vector pair's edges, as opposed to the usual convention of first deleting the previous edges and then inserting the new ones. This ensures that the graph remains an expander at all steps. Assuming that we have proved the required expansion properties, and since the number of edges in a constant-degree graph is O(N), we get the following theorem for expanders.
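A sketch of this update order (ours; `alg` is the hypothetical dynamic algorithm, and `expander_vector_edges` is a placeholder returning the O(n) input-dependent edges prescribed by the construction above):

```python
# Update order used in the expander reduction: insert the edges of the new
# pair before deleting the stale ones, so the graph never loses its
# expansion in the middle of processing a pair.

def process_pair(alg, current, u, v):
    new = expander_vector_edges(u, v)    # edges (L2[i], L3[i,0]) or (L2[i], L4[i,0]), etc.
    for e in new - current:
        alg.insert(e)                    # add the new edges first ...
    for e in current - new:
        alg.delete(e)                    # ... then delete the stale ones
    return new
```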

Theorem 3.10.

For any constant \epsilon>0, there is no dynamic algorithm maintaining a maximum matching or determining the existence of a perfect matching, on all N-node graphs with constant degree and constant expansion, with amortized O(N^{1/2-\epsilon}) update time and O(N^{1-\epsilon}) query time, unless the OMv conjecture is false.

Proof 3.11.

Consider the reduction graph above. It consists of N=16n^{2}+16n+2=\Theta(n^{2}) nodes. Every time we get a new (u,v) input vector pair, we update L_{2}\times L_{3} and R_{2}\times R_{3} as detailed above. This takes O(n) updates in total. After that, we query once for the size of the maximum matching in this new graph, and return 1 if and only if |C|=\frac{N}{2}.

Thus for each pair of input vectors, we perform O(n) updates and O(1) queries. In total, checking n vector pairs takes O(n^{2}) updates and O(n) queries. If there were an algorithm for maximum matching on constant-degree graphs with update time O(N^{1/2-\epsilon}) (i.e., O(n^{1-2\epsilon})) and query time O(N^{1-\epsilon}) (i.e., O(n^{2-2\epsilon})), then we could decide if uMv=1 for all n pairs in O(n^{3-2\epsilon}) time, contradicting the OMv conjecture.

Let us now show that the graph described is indeed an h_{1}-expander graph, for some constant h_{1}>0.

Lemma 3.12.

The reduction graph has constant expansion.

Throughout the rest of this section, let SS be an arbitrary subset of vertices in the reduction graph, with |S|<N/2=8n2+8n+1|S|<N/2=8n^{2}+8n+1. To simplify our proofs, we consider the reduction graphs with n>90n>90, since then 8n2+8n+1<8.1n28n^{2}+8n+1<8.1n^{2}. We use “SS expands” as a shorthand for |E(S,S¯)|c|S||E(S,\bar{S})|\geq c\cdot|S| for some constant c>0c>0.

Note that the node sets in the graph have the following sizes: |L1|=2n|L_{1}|=2n; |L2|=2n+1|L_{2}|=2n+1; |L3|=|L4|=4n2+2n|L_{3}|=|L_{4}|=4n^{2}+2n, and |Ri|=|Li||R_{i}|=|L_{i}|. We divide the expansion proof into two sections based on whether its intersection with the middle layers (L4L_{4} or R4R_{4}) is large or small.

We first deal with the case when the intersection is large.

Lemma 3.13.

If |SL4|0.1n2|S\cap L_{4}|\geq 0.1n^{2} or |SR4|0.1n2|S\cap R_{4}|\geq 0.1n^{2}, then SS expands.

We encapsulate the proof ideas for this lemma in Table 3, and split the proof into three lemmas for convenience. First, we show that if its intersection with both the middle layers is very large, then SS expands. Our proof uses the canonical matching on L3L4L_{3}\cup L_{4}.

Sizes | Proof ideas | Proof
S_{L4}>3.9n^{2} \wedge S_{R4}>3.9n^{2} | Use the perfect matching on L_{3}\cup L_{4} | Lemma 3.14
0.1n^{2}\leq S_{L4}\leq 3.9n^{2} | Use the expander on L_{4} | Lemma 3.16
S_{L4}>3.9n^{2} \wedge S_{R4}<0.1n^{2} | Use the edges of the matrix M | Lemma 3.18

Table 3: Ideas for the proof of expansion in the matching reduction graph when either S\cap L_{4} or S\cap R_{4} is large. We use S_{L4} as shorthand for |S\cap L_{4}|. The cases when 0.1n^{2}\leq S_{R4}\leq 3.9n^{2} and when S_{R4}>3.9n^{2} \wedge S_{L4}<0.1n^{2} are symmetric to the ones presented in the table.
Lemma 3.14.

If |SL4|>3.9n2|S\cap L_{4}|>3.9n^{2} and |SR4|>3.9n2|S\cap R_{4}|>3.9n^{2}, then SS expands.

Proof 3.15.

The two conditions together imply that |SL3|0.4n2|S\cap L_{3}|\leq 0.4n^{2}. Thus

|E(S,S¯)|\displaystyle|E(S,\bar{S})| |E(SL4,S¯L3)|\displaystyle\geq|E(S\cap L_{4},\bar{S}\cap L_{3})|
=|E(SL4,L3)||E(SL4,SL3)|\displaystyle=|E(S\cap L_{4},L_{3})|-|E(S\cap L_{4},S\cap L_{3})|
3.9n2|E(SL4,SL3)|\displaystyle\geq 3.9n^{2}-|E(S\cap L_{4},S\cap L_{3})| (by the canonical matching)
3.9n21.2n2\displaystyle\geq 3.9n^{2}-1.2n^{2} (since deg(v)3\deg(v)\leq 3 for all vL3v\in L_{3})
(2.7/8.1)|S|\displaystyle\geq(2.7/8.1)\cdot|S| (by upper bound on |S||S|)

which proves our claim.

Next, we show that if the intersection with one of the middle layers is of medium size, then SS expands. We prove this by using the fact that there is an h0h_{0}-expander on the middle layers.

Lemma 3.16.

If 0.1n2|SL4|3.9n20.1n^{2}\leq|S\cap L_{4}|\leq 3.9n^{2}, then SS expands.

Proof 3.17.

Let TT be the smaller of the two sets SL4S\cap L_{4} and S¯L4\bar{S}\cap L_{4}, then |T||L4|/2|T|\leq|L_{4}|/2. Thus

|E(S,S¯)|\displaystyle|E(S,\bar{S})| |E(SL4,S¯L4)|\displaystyle\geq|E(S\cap L_{4},\bar{S}\cap L_{4})|
=|E(T,L4T)|\displaystyle=|E(T,L_{4}\setminus T)| (by definition of TT)
h0|T|\displaystyle\geq h_{0}\cdot|T| (by expansion)
h00.1n2\displaystyle\geq h_{0}\cdot 0.1\cdot n^{2} (by constraint on |SL4||S\cap L_{4}|)
h0(0.1/8.1)|S|\displaystyle\geq h_{0}\cdot(0.1/8.1)\cdot|S| (by upper bound on |S||S|)

which proves our claim.

The above observation also holds when the size of SR4S\cap R_{4} is in the same range, using the expander on R4R_{4}. Finally, we show that if one of the intersections is large and the other is small, then SS expands. We use the edges of the matrix MM to do this.

Lemma 3.18.

If |SL4|>3.9n2|S\cap L_{4}|>3.9n^{2} and |SR4|<0.1n2|S\cap R_{4}|<0.1n^{2}, then SS expands.

Proof 3.19.

Recall that our reduction graph has 3n2\geq 3n^{2} edges crossing from L4L_{4} to R4R_{4}. Thus

|E(S,S¯)|\displaystyle|E(S,\bar{S})| |E(SL4,S¯R4)|\displaystyle\geq|E(S\cap L_{4},\bar{S}\cap R_{4})|
=|E(SL4,R4)||E(SL4,SR4)|\displaystyle=|E(S\cap L_{4},R_{4})|-|E(S\cap L_{4},S\cap R_{4})|
|E(SL4,R4)|0.1n2\displaystyle\geq|E(S\cap L_{4},R_{4})|-0.1n^{2} (by constraint on |SR4||S\cap R_{4}|)
=|E(L4,R4)||E(S¯L4,R4)|0.1n2\displaystyle=|E(L_{4},R_{4})|-|E(\bar{S}\cap L_{4},R_{4})|-0.1n^{2}
3n2|E(S¯L4,R4)|0.1n2\displaystyle\geq 3n^{2}-|E(\bar{S}\cap L_{4},R_{4})|-0.1n^{2} (by construction)
3n20.2n20.1n2\displaystyle\geq 3n^{2}-0.2n^{2}-0.1n^{2} (by constraint on |SL4||S\cap L_{4}|)
(2.7/8.1)|S|\displaystyle\geq(2.7/8.1)\cdot|S| (by upper bound on |S||S|)

which proves our claim.

A symmetric argument works by swapping L4L_{4} and R4R_{4} in the above proof. Lemmas 3.14, 3.16, and 3.18 together prove Lemma 3.13. We are left with the case when the intersection of SS with both the middle layers is small.

Lemma 3.20.

If |SL4|<0.1n2|S\cap L_{4}|<0.1n^{2} and |SR4|<0.1n2|S\cap R_{4}|<0.1n^{2}, then SS expands.

In this case, we show that the intersection of SS with the left side (L=iLiL=\cup_{i}L_{i}) and the right side (R=iRiR=\cup_{i}R_{i}) of the graph expand within their respective sides. This then proves expansion of SS, since

|E(S,\bar{S})| \geq |E(S\cap L,\bar{S}\cap L)|+|E(S\cap R,\bar{S}\cap R)|
\geq h_{1}\cdot|S\cap L|+h_{1}\cdot|S\cap R| (by expansion within each side)
\geq h_{1}\cdot|S|

We now concentrate on proving the expansion of SLS\cap L in LL, since the right side expansion follows by similar arguments.

Sizes | Proof ideas | Proof
S_{L3}>2S_{L4} \wedge S_{L3}>0.2n | Use the perfect matching on L_{3}\cup L_{4} | Lemma 3.21
S_{L3}\leq 2S_{L4} \wedge S_{L4}>0.1n | Use the expander on L_{4} | Lemma 3.23

Table 4: Ideas for the first part of the proof of expansion in the left side of the matching reduction graph when S\cap L_{4} is small (<0.1n^{2}). Here we deal with the case when either S_{L4}>0.1n or S_{L3}>0.2n.
Lemma 3.21.

If |SL3|>2|SL4||S\cap L_{3}|>2|S\cap L_{4}| and |SL3|>0.2n|S\cap L_{3}|>0.2n, then SS expands.

Proof 3.22.

First, we lower bound the number of crossing edges by c|SL3|c\cdot|S\cap L_{3}|.

|E(SL,S¯L)|\displaystyle|E(S\cap L,\bar{S}\cap L)| |E(SL3,S¯L4)|\displaystyle\geq|E(S\cap L_{3},\bar{S}\cap L_{4})|
|SL3||SL4|\displaystyle\geq|S\cap L_{3}|-|S\cap L_{4}| (by the canonical matching)
0.5|SL3|\displaystyle\geq 0.5\cdot|S\cap L_{3}| (by constraint on |SL4||S\cap L_{4}|)

Then we lower bound 22|SL3|22\cdot|S\cap L_{3}| by |SL||S\cap L|, which proves our claim. The final inequality uses the constraint that |SL3|>0.2n|S\cap L_{3}|>0.2n.

22|SL3|\displaystyle 22\cdot|S\cap L_{3}| =|SL3|+|SL3|+20|SL3|\displaystyle=|S\cap L_{3}|+|S\cap L_{3}|+20\cdot|S\cap L_{3}|
|SL3|+|SL4|+20|SL3|\displaystyle\geq|S\cap L_{3}|+|S\cap L_{4}|+20\cdot|S\cap L_{3}| (by constraint on |SL4||S\cap L_{4}|)
|SL3|+|SL4|+|S(L1L2)|\displaystyle\geq|S\cap L_{3}|+|S\cap L_{4}|+|S\cap(L_{1}\cup L_{2})| (using |L1L2|=4n+1|L_{1}\cup L_{2}|=4n+1)

as required.

Lemma 3.23.

If |SL3|2|SL4||S\cap L_{3}|\leq 2|S\cap L_{4}| and |SL4|>0.1n|S\cap L_{4}|>0.1n, then SS expands.

Proof 3.24.

Recall that we are working in the case when |SL4|<0.1n2<|L4|/2|S\cap L_{4}|<0.1n^{2}<|L_{4}|/2.

|E(SL,S¯L)|\displaystyle|E(S\cap L,\bar{S}\cap L)| |E(SL4,S¯L4)|\displaystyle\geq|E(S\cap L_{4},\bar{S}\cap L_{4})|
h0|SL4|\displaystyle\geq h_{0}\cdot|S\cap L_{4}| (by expansion)

Lower bounding |SL4||S\cap L_{4}| by c|SL|c\cdot|S\cap L| as in Lemma 3.21 gives us the claim.

Now we are only left with the case when |SL4|<0.1n|S\cap L_{4}|<0.1n and |SL3|<0.2n|S\cap L_{3}|<0.2n. In what follows, we use the bound below on |SL||S\cap L|.

|SL|\displaystyle|S\cap L| =|S(L1L2)|+|S(L3L4)|\displaystyle=|S\cap(L_{1}\cup L_{2})|+|S\cap(L_{3}\cup L_{4})|
5n+|S(L3L4)|\displaystyle\leq 5n+|S\cap(L_{3}\cup L_{4})| (by bound on |L1L2||L_{1}\cup L_{2}|)
5.3n\displaystyle\leq 5.3n (by constraint on |S(L3L4)||S\cap(L_{3}\cup L_{4})|)
Sizes Proof ideas Proof
SL20.7nS_{L2}\geq 0.7n Use the edges of uu Lemma 3.25
SL2<0.7nS_{L2}<0.7n \wedge SL11.3nS_{L1}\geq 1.3n Use the canonical matching on L1L2L_{1}\cup L_{2} Lemma 3.27
SL2<0.7nS_{L2}<0.7n \wedge SL1<1.3nS_{L1}<1.3n Use the gadget expansion Lemma 3.30
Table 5: Ideas for the second part of the proof of expansion in the left side of the matching reduction graph when SL4S\cap L_{4} is small. Here we deal with the case when SL4<0.1nS_{L4}<0.1n and SL3<0.2nS_{L3}<0.2n.
Lemma 3.25.

If |SL2|>0.7n|S\cap L_{2}|>0.7n, then SS expands.

Proof 3.26.

We use the fact that every node of L2L_{2} has an edge to L3L4L_{3}\cup L_{4} regardless of the value of uu.

|E(SL,S¯L)|\displaystyle|E(S\cap L,\bar{S}\cap L)| |E(SL2,S¯(L3L4))|\displaystyle\geq|E(S\cap L_{2},\bar{S}\cap(L_{3}\cup L_{4}))|
=|E(SL2,L3L4)||E(SL2,S(L3L4))|\displaystyle=|E(S\cap L_{2},L_{3}\cup L_{4})|-|E(S\cap L_{2},S\cap(L_{3}\cup L_{4}))|
0.7n|E(SL2,S(L3L4))|\displaystyle\geq 0.7n-|E(S\cap L_{2},S\cap(L_{3}\cup L_{4}))| (by edges of uu)
0.4n\displaystyle\geq 0.4n (by constraint on S(L3L4)S\cap(L_{3}\cup L_{4}))
(0.4/5.3)|SL|\displaystyle\geq(0.4/5.3)\cdot|S\cap L| (by constraint on SLS\cap L)

which proves our claim.

Lemma 3.27.

If |SL2|<0.7n|S\cap L_{2}|<0.7n and |SL1|1.3n|S\cap L_{1}|\geq 1.3n, then SS expands.

Proof 3.28.

We use the matching on L1L2L_{1}\cup L_{2}, as follows

|E(SL,S¯L)|\displaystyle|E(S\cap L,\bar{S}\cap L)| |E(SL1,S¯L2)|\displaystyle\geq|E(S\cap L_{1},\bar{S}\cap L_{2})|
|SL1||SL2|\displaystyle\geq|S\cap L_{1}|-|S\cap L_{2}| (by the canonical matching on L1L2L_{1}\cup L_{2})
0.6n\displaystyle\geq 0.6n (by constraint on L1L_{1} and L2L_{2})
(0.6/5.3)|SL|\displaystyle\geq(0.6/5.3)\cdot|S\cap L| (by constraint on SLS\cap L)

which proves our claim.

For the final case, we will need the following lemma about each reinforced gadget being an expander.

Lemma 3.29.

A reinforced gadget has constant expansion.

We use this lemma as follows.

Lemma 3.30.

If |SL2|<0.7n|S\cap L_{2}|<0.7n and |SL1|<1.3n|S\cap L_{1}|<1.3n, then SS expands.

Proof 3.31.

In this case, we reason about L1L2L_{1}\cup L_{2} and L3L4L_{3}\cup L_{4} separately. Since the intersection of SS with each reinforced gadget covers less than half the nodes, i.e.,

|S(L1L2)|\displaystyle|S\cap(L_{1}\cup L_{2})| < 2n<|L1L2|/2,\displaystyle<\ 2n\ <|L_{1}\cup L_{2}|/2,
|S(L3L4)|\displaystyle|S\cap(L_{3}\cup L_{4})| <0.3n<|L3L4|/2,\displaystyle<0.3n<|L_{3}\cup L_{4}|/2,

we use the expansion of each reinforced gadget from Lemma 3.29 to get

|E(S(L1L2),S¯(L1L2))|\displaystyle|E(S\cap(L_{1}\cup L_{2}),\bar{S}\cap(L_{1}\cup L_{2}))| c|S(L1L2)|,\displaystyle\geq c\cdot|S\cap(L_{1}\cup L_{2})|,
|E(S(L3L4),S¯(L3L4))|\displaystyle|E(S\cap(L_{3}\cup L_{4}),\bar{S}\cap(L_{3}\cup L_{4}))| c|S(L3L4)|,\displaystyle\geq c\cdot|S\cap(L_{3}\cup L_{4})|,

giving the expansion of SLS\cap L as required.

Lemmas 3.213.30 together prove Lemma 3.20. Lemmas 3.13 and 3.20 together show Lemma 3.12. All that is left is to prove Lemma 3.29, which we do below.

See 3.29

Proof 3.32.

Let XXX^{\prime}\cup X be a reinforced gadget, with the h0h_{0}-expander on XX. Let SS be a subset of nodes of size <|V|/2<|V|/2. The proof of expansion follows in similar lines as before.

If the intersection with XX is large, then we use the matching edges (similar to Lemma 3.14). Concretely, if |SX|>0.9|X||S\cap X|>0.9|X|, then since |X||X||X^{\prime}|\leq|X| and |S|<|V|/2|S|<|V|/2, we get that |SX|<(1/9)|SX||S\cap X^{\prime}|<(1/9)\cdot|S\cap X|. Thus

|E(S,S¯)|\displaystyle|E(S,\bar{S})| |SX||SX|\displaystyle\geq|S\cap X|-|S\cap X^{\prime}| (by the matching on XXX^{\prime}\cup X)
(8/9)|SX|\displaystyle\geq(8/9)\cdot|S\cap X| (by constraint on |SX||S\cap X^{\prime}|)
(8/9)(9/10)|X|\displaystyle\geq(8/9)\cdot(9/10)\cdot|X| (by constraint on |SX||S\cap X|)
(4/5)|S|\displaystyle\geq(4/5)\cdot|S| (since |X||V|/2|X|\geq|V|/2 and |S||V|/2|S|\leq|V|/2)

If the intersection with XX is of medium size, then we use the expander on XX (similar to Lemma 3.16). Concretely, if 0.1|X||SX|0.9|X|0.1|X|\leq|S\cap X|\leq 0.9|X|, then let TT be the smaller of the two sets SXS\cap X and S¯X\bar{S}\cap X. Then

|E(S,S¯)|\displaystyle|E(S,\bar{S})| |E(SX,S¯X)|\displaystyle\geq|E(S\cap X,\bar{S}\cap X)|
=|E(T,XT)|\displaystyle=|E(T,X\setminus T)| (by definition of TT)
h0|T|\displaystyle\geq h_{0}\cdot|T| (by expansion)
h00.1|X|\displaystyle\geq h_{0}\cdot 0.1\cdot|X| (by constraint on |SX||S\cap X|)
h00.1|S|\displaystyle\geq h_{0}\cdot 0.1\cdot|S| (since |S||X||S|\leq|X|)

We are left with the case when the intersection with XX is small, namely, |SX|<0.1|X||S\cap X|<0.1|X|. If |SX|>2|SX||S\cap X^{\prime}|>2|S\cap X|, then we use the matching edges again.

|E(S,S¯)|\displaystyle|E(S,\bar{S})| |SX||SX|\displaystyle\geq|S\cap X^{\prime}|-|S\cap X| (by the matching on XXX^{\prime}\cup X)
=(1/3)(2|SX|4|SX|+|SX|+|SX|)\displaystyle=(1/3)\cdot(2|S\cap X^{\prime}|-4|S\cap X|+|S\cap X^{\prime}|+|S\cap X|)
(1/3)(|SX|+|SX|)\displaystyle\geq(1/3)\cdot(|S\cap X^{\prime}|+|S\cap X|) (by constraint on |SX||S\cap X^{\prime}|)

which leaves us with the final case when |SX|2|SX||S\cap X^{\prime}|\leq 2|S\cap X|. Here, we use the expander on XX

|E(S,S¯)|\displaystyle|E(S,\bar{S})| |E(SX,S¯X)|\displaystyle\geq|E(S\cap X,\bar{S}\cap X)|
h0|SX|\displaystyle\geq h_{0}\cdot|S\cap X| (by expansion)
(1/3)h0(|SX|+|SX|)\displaystyle\geq(1/3)\cdot h_{0}\cdot(|S\cap X|+|S\cap X^{\prime}|) (by constraint on |SX||S\cap X^{\prime}|)

which shows that the reinforced gadget has constant expansion.
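Lemma 3.29 can also be checked empirically on small instances. The following Python sketch (not part of the proof) assembles a reinforced gadget, using networkx and a random regular graph in place of the h_0-expander on X, and estimates the edge expansion over random subsets of at most half the nodes; all function and variable names here are ours.

```python
# Sanity-check sketch for Lemma 3.29 (illustration only, assuming networkx).
import random
import networkx as nx

def reinforced_gadget(m, d=3, seed=0):
    """Perfect matching between X' = {('Xp', i)} and X = {('X', i)},
    plus a d-regular (expander-like) graph on X."""
    G = nx.Graph()
    G.add_edges_from((('Xp', i), ('X', i)) for i in range(m))    # the matching
    H = nx.random_regular_graph(d, m, seed=seed)                  # stands in for the expander
    G.add_edges_from((('X', a), ('X', b)) for a, b in H.edges())
    return G

def edge_expansion(G, S):
    S = set(S)
    cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    return cut / max(1, len(S))

random.seed(0)
G = reinforced_gadget(64)
nodes = list(G.nodes())
worst = min(edge_expansion(G, random.sample(nodes, random.randint(1, len(nodes) // 2)))
            for _ in range(2000))
print("empirical expansion estimate over sampled subsets:", round(worst, 3))
```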

3.4 Power-law Graph

We first make the reduction graph robust with respect to degree changes. We use the following static graph in our reduction.

  • A reduction gadget with one subgadget of size 2n+12n+1, on a set L1L2L_{1}\cup L_{2} as earlier.

  • A reduction gadget with 2n2n subgadgets of size 2n+22n+2, on a set L3L4L_{3}\cup L_{4}. The subgadgets are labelled LG[i]LG[i] for 1i2n1\leq i\leq 2n, and the nodes of subgadget LG[i]LG[i] are labelled L3[i,j]L_{3}[i,j] or L4[i,j]L_{4}[i,j] for 0jn0\leq j\leq n depending on whether the node is in L3L_{3} or L4L_{4}. The path in each subgadget goes from L3[i,0]L_{3}[i,0] to L4[i,n]L_{4}[i,n].

  • A copy of the above structure, with node sets marked RiR_{i} instead of LiL_{i}.

  • If Mij=1M_{ij}=1, then add the edges (L4[i,j],R4[j,i])(L_{4}[i,j],R_{4}[j,i]) and (L4[n+i,n+j],R4[n+j,n+i])(L_{4}[n+i,n+j],R_{4}[n+j,n+i]).

  • If Mij=0M_{ij}=0, then add the edges (L4[i,j],R4[n+j,n+i])(L_{4}[i,j],R_{4}[n+j,n+i]) and (L4[n+i,n+j],R4[j,i])(L_{4}[n+i,n+j],R_{4}[j,i]).

  • The edges for an input pair of vectors (u,v)(u,v) will be detailed later.

Layer Deg 11 Deg 22 Deg 33
L1L_{1} 0 nn 0
L2L_{2} 11 nn 0
L3L_{3} 2n2n 2n22n^{2} 0
L4L_{4} 0 2n2n 2n22n^{2}
Table 6: Degree distribution of the nodes on the left side of the reduction graph.

The degree distribution for each layer on the left side of the reduction gadget in the current instance, before adding any edges for (u,v)(u,v), is given in Table 6. For ease of notation, let us use (d,Nd)(d,N_{d}) to denote that there are NdN_{d} nodes of degree dd in the graph. Thus the degree distribution in the entire reduction graph is as follows: (1,4n+2),(2,4n2+8n),(3,4n2)(1,4n+2),(2,4n^{2}+8n),(3,4n^{2}). Some of the nodes will change their degree when we add edges for the input vector pair (u,v)(u,v), and we take care of these changes later.

Let β>2\beta>2 be the exponent for which we want to show our lower bound, and NN be the number of nodes of the power-law graph we construct. First, choose NN such that

N>ζ(β)max{(2N1+2n),(2N2+2n)2β,(2N3+2n)3β},N>\zeta(\beta)\cdot\max\{(2N_{1}+2n),(2N_{2}+2n)\cdot 2^{\beta},(2N_{3}+2n)\cdot 3^{\beta}\},

and pick any power-law graph GG on NN nodes. If NdN^{\prime}_{d} is the number of nodes of degree dd in GG, then by construction we get that Nd>2Nd+2nN^{\prime}_{d}>2N_{d}+2n.
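For illustration, the threshold on NN can be computed directly from the degree counts derived above; the following Python sketch mirrors the displayed inequality (the helper name is ours, and scipy's zeta is used for \zeta(\beta)).

```python
# Sketch: the smallest admissible host-graph size N for exponent beta,
# using the reduction graph's degree counts N_1, N_2, N_3 computed above.
from math import ceil
from scipy.special import zeta

def host_graph_size(n, beta):
    N1, N2, N3 = 4 * n + 2, 4 * n**2 + 8 * n, 4 * n**2
    bound = zeta(beta) * max((2 * N1 + 2 * n),
                             (2 * N2 + 2 * n) * 2**beta,
                             (2 * N3 + 2 * n) * 3**beta)
    return ceil(bound) + 1   # any N strictly above the bound suffices

print(host_graph_size(100, beta=2.5))
```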

We now want to embed our reduction graph into this power-law graph. To do so, we reduce NdN^{\prime}_{d} by exactly NdN_{d} for each d{1,2,3}d\in\{1,2,3\}, which frees up the nodes into which we embed our graph. In what follows, we use the fact that N1<N3N_{1}<N_{3}. We “make space” for our nodes as follows.

  • Do the following N1N_{1} times: Pick a node uGu\in G of degree 11. Let ww be its neighbour. Since deg(w)<N\deg(w)<\sqrt{N} and there are N3N3>n2\geq N^{\prime}_{3}-N_{3}>n^{2} nodes of degree 33 in GG, there exists a node vGv\in G of degree 33 such that vv has a neighbour xN(v)x\in N(v) with (x,w)G(x,w)\not\in G. Delete the edges (u,w),(v,x)(u,w),(v,x) and add the edge (x,w)(x,w). This gives us a node uu which we can assign degree 11 to, in our reduction graph. We have also converted a degree 33 node to a degree 22 node, which we take care of in the next step.

  • Do the following N2+N1N_{2}+N_{1} times: Pick a node uGu\in G of degree 22. Let u1,u2u_{1},u_{2} be its neighbours. Since deg(u1)+deg(u2)<2N\deg(u_{1})+\deg(u_{2})<2\sqrt{N}, and there are N2N2\geq N^{\prime}_{2}-N_{2} nodes of degree 22 left in GG, there exists a node vGv\in G of degree 22 that has neighbours v1,v2v_{1},v_{2}, with uivju_{i}\neq v_{j}. Remove the edges (u,ui),(v,vi)(u,u_{i}),(v,v_{i}), and add the edge (ui,vi)(u_{i},v_{i}). This gives us two nodes u,vu,v which we can assign degree 11 to, in our reduction graph.

  • Similarly, free up N3N1N_{3}-N_{1} nodes of degree 33 for our reduction graph.

Note that we will have at most one extra node of each degree which we can leave unused because of the slack allowed in approximate power-law graphs.
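The first rewiring step above can be made concrete as in the following sketch (an illustration only, assuming networkx; the function name is ours). It frees one node to later receive degree 1 in the embedded reduction graph, while preserving the degrees of all other nodes except for turning one degree-3 node into a degree-2 node.

```python
# Sketch of the degree-1 "make space" rewiring described above (assumes networkx).
import networkx as nx

def free_degree_one_node(G):
    u = next(v for v in G if G.degree(v) == 1)
    w = next(iter(G[u]))                        # the unique neighbour of u
    for v in G:                                 # find a degree-3 node v with a
        if G.degree(v) != 3 or v in (u, w):     # neighbour x not adjacent to w
            continue
        for x in G[v]:
            if x not in (u, w) and not G.has_edge(x, w):
                G.remove_edge(u, w)
                G.remove_edge(v, x)
                G.add_edge(x, w)                # degrees of w and x are preserved;
                return u                        # v drops to degree 2, u is now free
    raise ValueError("no suitable rewiring found")
```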

We can now embed our reduction graph into a power-law graph with at most two nodes having different degrees, since the number of nodes of degree 22 in our reduction graph is even, and we requisitioned degree 11 nodes one-by-one and not in pairs. Thus, in particular, we are at most one degree 22 node and one degree 33 node away from a perfect power-law graph.

We now come to the question of the input vectors (u,v)(u,v). If ui=1u_{i}=1, then we add the edge (L2[i],L3[i,0])(L_{2}[i],L_{3}[i,0]). Note that this changes the degree distribution in the following way: One degree 22 node increases to degree 33, and a degree 11 node increases to degree 22. We need to adjust for this in the power law graph, while making sure that the size of the maximum matching in the remaining graph is still known. We do this as follows:

During preprocessing, pick 2n2n disjoint pairs of nodes (ai,bi)(a_{i},b_{i}) in the graph GG with deg(ai)=2\deg(a_{i})=2 and deg(bi)=3\deg(b_{i})=3, such that ciN(ai),diN(bi)\exists\,c_{i}\in N(a_{i}),d_{i}\in N(b_{i}) with (ci,di)G(c_{i},d_{i})\not\in G. Let G0G_{0} be the graph GG, and let GjG_{j} be the graph with the edges (ai,ci),(bi,di)(a_{i},c_{i}),(b_{i},d_{i}) deleted and the edges (ci,di)(c_{i},d_{i}) added for all 1ij1\leq i\leq j. Since poly(n)\textrm{{poly}}(n) preprocessing is allowed in the OMv conjecture, we can afford to find the sizes of the maximum matchings in all the graphs GjG_{j} before we receive any input. Denote the sizes of these matchings as mjm_{j}, 0j2n0\leq j\leq 2n.

On input (u,v)(u,v), we do the following:

  • If ui=1u_{i}=1, then add the edge (L2[i],L3[i,0])(L_{2}[i],L_{3}[i,0]).

  • If vi=1v_{i}=1, then add the edge (R2[i],R3[i,0])(R_{2}[i],R_{3}[i,0]).

  • Let k=supp(u)+supp(v)k=\textrm{{supp}}(u)+\textrm{{supp}}(v). Delete the edges (ai,ci),(bi,di)(a_{i},c_{i}),(b_{i},d_{i}) and add the edges (ci,di)(c_{i},d_{i}) for all 1ik1\leq i\leq k.

Now we ask for the size of the maximum matching in this graph. Let G¯\overline{G} be the subgraph of GG which is the reduction graph, and let N¯\overline{N} be the number of nodes in G¯\overline{G}. Note that we already know the size of the maximum matching in GG¯G\setminus\overline{G} to be mkm_{k}. uMv=1uMv=1 if and only if a maximum matching restricted to G¯\overline{G} is perfect, and since G¯\overline{G} is disjoint from GG¯G\setminus\overline{G}, if and only if the maximum matching on GG is of size mk+N¯2m_{k}+\frac{\overline{N}}{2}. We then roll back the graph to its previous state and process the new input vector pair.
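The online phase just described can be summarized by the following sketch, written against a hypothetical dynamic matching data structure with insert, delete, and matching_size operations (this interface and all names are ours, not an existing library). It performs the updates, the single query, and the rollback for one input pair (u, v).

```python
# Sketch of one round of the OuMv reduction; m[j] are the precomputed matching
# sizes of the graphs G_j, pairs[i] = (a_i, c_i, b_i, d_i), and Nbar is the
# number of nodes of the embedded reduction graph.
def process_pair(oracle, u, v, pairs, m, Nbar, L2, L3, R2, R3):
    added = [(L2[i], L3[i][0]) for i, ui in enumerate(u) if ui == 1] \
          + [(R2[j], R3[j][0]) for j, vj in enumerate(v) if vj == 1]
    k = len(added)                                 # k = supp(u) + supp(v)
    for e in added:
        oracle.insert(*e)
    for a, c, b, d in pairs[:k]:                   # move from G_0 to G_k
        oracle.delete(a, c); oracle.delete(b, d); oracle.insert(c, d)
    answer = 1 if oracle.matching_size() == m[k] + Nbar // 2 else 0
    for a, c, b, d in pairs[:k]:                   # roll back for the next pair
        oracle.delete(c, d); oracle.insert(a, c); oracle.insert(b, d)
    for e in added:
        oracle.delete(*e)
    return answer
```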

We make O(n)O(n) updates and one query for each input pair, and the graph consists of Θ(n2)\Theta(n^{2}) nodes, which gives us the same lower bounds as in the constant-degree reduction.

4 Lower Bounds for Dynamic (s,t)(s,t)-Shortest Path

In this section, we present our lower bound results for the dynamic (s,t)(s,t)-shortest path problem. In Section 4.1, we give a lower bound for dynamic (s,t)(s,t)-distance on graphs with maximum degree 33. We extend this lower bound to (3δ)(3-\delta)-approximations in Section 4.2. In Section 4.3, we show that the distinction between the unbounded and constant-degree reductions is not discrete, by giving a lower bound reduction parameterized on the maximum degree allowed in the graph. In Section 4.4, we show that the lower bound on constant-degree graphs holds even on expanders, by constructing a more involved reduction graph. Finally, we prove the power-law graph lower bounds in Section 4.5.

4.1 Constant-Degree Graph

Consider the OuMv problem on vectors of length nn and an n×nn\times n matrix. We first perform a simple reduction that shows that maintaining (s,t)(s,t)-distance is hard even on graphs where the maximum degree is 33.

Theorem 4.1.

For any constant ϵ>0\epsilon>0, there is no dynamic algorithm maintaining (s,t)(s,t)-distance, SSSP or APSP, on all NN-node bipartite graphs with maximum degree Δ3\Delta\leq 3, with amortized O(N1/2ϵ)O(N^{1/2-\epsilon}) update time and O(N1ϵ)O(N^{1-\epsilon}) query time, unless the OMv conjecture is false.

Since the original reduction [16] could possibly have unbounded degree, we use binary forests to sparsify our reduction graph.

Definition 4.2 (Binary Forest).

A binary forest of xx trees of height yy is a graph composed of xx disjoint binary trees, each of height yy.

We naturally split the nodes of a binary forest between internal nodes and leaves. Intuitively, we would like to replace a high degree node of the original reduction with a binary forest to moderate the maximum allowed degree. Note that a binary forest has x2yx\cdot 2^{y} leaves in total.

4.1.1 Static Graph

We use the following static graph as the base for our reduction:

  • A (logn)(\log n)-depth binary forest with a single tree. The set of n1n-1 internal nodes is marked L1L_{1} (LL for left) and the nn leaves L2L_{2}; the root of the tree is the source node ss, and the nn nodes of L2L_{2} are marked as L2[i]L_{2}[i] for 1in1\leq i\leq n.

  • A (logn)(\log n)-depth binary forest with nn trees. The n(n1)n(n-1) internal nodes are marked L3L_{3} and the n2n^{2} leaves L4L_{4}. The roots of each of the nn trees are marked as L3[i]L_{3}[i], for 1in1\leq i\leq n, and the leaves of the tree with root L3[i]L_{3}[i] are marked L4[i,j]L_{4}[i,j], for 1jn1\leq j\leq n.

  • A copy of the above structure, with node sets marked R1,R2,R3,R4R_{1},R_{2},R_{3},R_{4} instead of L1,L2,L3,L4L_{1},L_{2},L_{3},L_{4}, respectively. The root of the single tree of R1R_{1} is the target node tt.

  • Edges from L4L_{4} to R4R_{4} by the matrix MM, as detailed next.

  • For an input pair of vectors (u,v)(u,v), edges between L2L_{2} and L3L_{3} by uu, and between R2R_{2} and R3R_{3} by vv, as detailed next.

The total number of nodes in the reduction graph is N=4n2+2n2=Θ(n2)N=4n^{2}+2n-2=\Theta(n^{2}).
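For concreteness, the static graph just described can be assembled as in the following Python sketch, assuming networkx and that nn is a power of two; the helper and node names are ours.

```python
# Sketch of the static constant-degree reduction graph (Section 4.1.1),
# including the matrix edges; input-dependent edges are added separately.
import math
import networkx as nx

def binary_forest(G, prefix, trees, n):
    """Add `trees` complete binary trees with n leaves each (n a power of two).
    Returns (roots, leaves), where leaves[i][j] is the j-th leaf of tree i."""
    roots, leaves = [], []
    for i in range(trees):
        T = nx.balanced_tree(2, int(math.log2(n)))
        T = nx.relabel_nodes(T, {k: (prefix, i, k) for k in T})
        G.add_edges_from(T.edges())
        roots.append((prefix, i, 0))
        leaves.append([(prefix, i, k) for k in range(n - 1, 2 * n - 1)])
    return roots, leaves

def static_graph(n, M):
    G = nx.Graph()
    (s,), L2 = binary_forest(G, 'L12', 1, n)     # L1 internal, L2 leaves, root s
    L3, L4 = binary_forest(G, 'L34', n, n)       # roots L3[i], leaves L4[i][j]
    (t,), R2 = binary_forest(G, 'R12', 1, n)
    R3, R4 = binary_forest(G, 'R34', n, n)
    for i in range(n):
        for j in range(n):
            if M[i][j]:                          # edge (L4[i,j], R4[j,i]) iff M_ij = 1
                G.add_edge(L4[i][j], R4[j][i])
    return G, s, t, L2[0], L3, R2[0], R3
```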

4.1.2 Input-Dependent Edges

We add the following edges depending on the input matrix MM and vectors u,vu,v:

  • For the matrix MM, add the edge (L4[i,j],R4[j,i])(L_{4}[i,j],R_{4}[j,i]) if Mij=1M_{ij}=1.

  • Given an input vector uu, for each i[n]i\in[n], add the edge (L2[i],L3[i])(L_{2}[i],L_{3}[i]) if ui=1u_{i}=1.

  • Given an input vector vv, for each j[n]j\in[n], add the edge (R2[j],R3[j])(R_{2}[j],R_{3}[j]) if vj=1v_{j}=1.

4.1.3 Distances in the Graph

We now show the correctness of the reduction, by considering the (s,t)(s,t) distance in different scenarios.

Lemma 4.3.

[constant-degree reduction] uMv=1uMv=1 if and only if dist(s,t)4logn+3\operatorname{dist}(s,t)\leq 4\log n+3. Moreover, the graph is bipartite.

Proof 4.4.

We first show that any path from ss to tt has to be of length at least 4logn+34\log n+3, by partitioning the node set into layers. We maintain the property that a node at level \ell can have neighbours only in levels 1\ell-1, \ell, or +1\ell+1. The layering is as follows: The layer of a node in L1L2L_{1}\cup L_{2} is its distance from ss; in L3L4L_{3}\cup L_{4} is its distance from its closest root, plus logn+1\log n+1; in R3R4R_{3}\cup R_{4} is its distance from its closest leaf, plus 2logn+22\log n+2; in R1R2R_{1}\cup R_{2} is its distance from its closest leaf, plus 3logn+33\log n+3. Specifically, the layer of node tt is 4logn+34\log n+3. Thus dist(s,t)4logn+3\operatorname{dist}(s,t)\geq 4\log n+3, regardless of uu, MM, and vv. Moreover, the fact that edges only connect consecutive layers implies that the graph is bipartite.

(\implies) If uMv=1uMv=1, then there exists indices i,ji,j such that ui=Mij=vj=1u_{i}=M_{ij}=v_{j}=1. Then consider the path PP composed of the following sub-paths:

  • P1P_{1} is the shortest path from ss to L3[i]L_{3}[i], which follows the tree L1L2L_{1}\cup L_{2} and then the (L2[i],L3[i])(L_{2}[i],L_{3}[i]) edge (logn+1\log n+1 edges).

  • P2P_{2} is the shortest path from L3[i]L_{3}[i] to R4[j,i]R_{4}[j,i], which follows the ithi^{th} tree rooted at L3[i]L_{3}[i] and then the (L4[i,j],R4[j,i])(L_{4}[i,j],R_{4}[j,i]) edge (logn+1\log n+1 edges).

  • P3P_{3} is the shortest path from R4[j,i]R_{4}[j,i] to R2[j]R_{2}[j], which follows the jthj^{th} tree rooted at R3[j]R_{3}[j] and then the (R3[j],R2[j])(R_{3}[j],R_{2}[j]) edge (logn+1\log n+1 edges).

  • P4P_{4} is the shortest path from R2[j]R_{2}[j] to tt, which follows the tree R1R2R_{1}\cup R_{2} (logn\log n edges).

PP is then a path of length 4logn+34\log n+3 from ss to tt.

(\impliedby) Assume that there is a path of length 4logn+34\log n+3 from ss to tt. By the layering, each edge in the path must connect a layer \ell node and a layer (+1)(\ell+1) node, and there is exactly one such edge in the path for each [4logn+2]\ell\in[4\log n+2]. Next, we use these two facts (sometimes implicitly) to show that the path must have the form of the above described path, and conclude that uMv=1uMv=1.

The path must contain an edge from L2L_{2} to L3L_{3}. The only such edges are of the form (L2[i],L3[i])(L_{2}[i],L_{3}[i]) for some i[n]i\in[n], implying ui=1u_{i}=1. From there, the path must continue to some leaf L4[i,j]L_{4}[i,j] of the tree rooted at L3[i]L_{3}[i]. Since the only edge from L4[i,j]L_{4}[i,j] that strictly increases in level is the edge (L4[i,j],R4[j,i])(L_{4}[i,j],R_{4}[j,i]), the path must contain such an edge, implying Mij=1M_{ij}=1 for some j[n]j\in[n]. From R4[j,i]R_{4}[j,i], the path must continue to the root R3[j]R_{3}[j], and then to R2[j]R_{2}[j] over an edge (R3[j],R2[j])(R_{3}[j],R_{2}[j]), implying vj=1v_{j}=1. The path ends trivially by going from R2[j]R_{2}[j] to the root tt. Thus uMv=1uMv=1.

4.1.4 Complexity of the Reduction

We are now ready to prove the theorem.

See 4.1

Proof 4.5.

Consider the reduction graph above, which is bipartite by Lemma 4.3. It consists of N=4n2+2n2=Θ(n2)N=4n^{2}+2n-2=\Theta(n^{2}) nodes. Every time we get a new (u,v)(u,v) input vector pair, we delete all the edges between L2×L3L_{2}\times L_{3} and R2×R3R_{2}\times R_{3} and insert edges according to the new input vectors. This takes O(n)O(n) updates in total. After that, we query once for the (s,t)(s,t)-distance in this new graph, and return 11 if and only if dist(s,t)=4logn+3\operatorname{dist}(s,t)=4\log n+3.

Thus for each pair of input vectors, we perform O(n)O(n) updates and O(1)O(1) query. In total, checking nn pairs takes us O(n2)O(n^{2}) updates and O(n)O(n) query. If there were an algorithm for (s,t)(s,t)-distance on constant-degree graphs with update time O(N1/2ϵ)O(N^{1/2-\epsilon}) (i.e., O(n12ϵ)O(n^{1-2\epsilon})) and query time O(N1ϵ)O(N^{1-\epsilon}) (i.e., O(n22ϵ)O(n^{2-2\epsilon})), then we can decide if uMv=1uMv=1 for all nn pairs in O(n32ϵ)O(n^{3-2\epsilon}) time, contradicting the OMv conjecture.
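A sketch of this decision loop, written against a hypothetical dynamic (s,t)-distance oracle with insert, delete, and distance operations (the interface and names are ours), is given below.

```python
# One round of the OuMv reduction for (s,t)-distance: O(n) updates, one query.
def oumv_round(oracle, u, v, prev_edges, L2, L3, R2, R3, logn):
    for e in prev_edges:                          # remove the previous pair's edges
        oracle.delete(*e)
    new_edges = [(L2[i], L3[i]) for i, b in enumerate(u) if b == 1] \
              + [(R2[j], R3[j]) for j, b in enumerate(v) if b == 1]
    for e in new_edges:
        oracle.insert(*e)
    answer = 1 if oracle.distance() <= 4 * logn + 3 else 0
    return answer, new_edges
```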

4.2 (3δ)(3-\delta)-Approximation Lower Bound for Constant-Degree Graphs

We show that the above lower bounds on (s,t)(s,t)-distances also holds for (3δ)(3-\delta)-approx (s,t)(s,t)-distances by minimally modifying the reduction graph to exploit the following observation: If uMv=0uMv=0, then every path from ss to tt in the simple reduction graph needs to take at least three edges corresponding to the matrix MM, as opposed to just one such edge when uMv=1uMv=1.

We use the same reduction graph as before, but with one important difference: Earlier, we added a single edge (L4[i,j],R4[j,i])(L_{4}[i,j],R_{4}[j,i]) if Mij=1M_{ij}=1. Now, we add a path with Θ(logn)\Theta(\log n) new nodes between L4[i,j]L_{4}[i,j] and R4[j,i]R_{4}[j,i]. Note that since we only do this for MM and not for every new input vector pair, we can do this in just polynomial pre-processing time, which is allowed for in the OMv conjecture.

Formally, let α=12δ4\alpha=\left\lceil\frac{12}{\delta}-4\right\rceil. Add n2αlognn^{2}\cdot\alpha\log n new nodes to the reduction graph, with the nodes labelled vij[k]v_{ij}[k] for 1kαlogn1\leq k\leq\alpha\log n and 1i,jn1\leq i,j\leq n. If Mij=1M_{ij}=1, add the path L4[i,j],vij[1],,vij[αlogn]L_{4}[i,j],v_{ij}[1],\ldots,v_{ij}[\alpha\log n], R4[j,i]R_{4}[j,i] to the reduction graph, while otherwise all the nodes vij[k]v_{ij}[k] remain disconnected.
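For instance, for \delta=1 this gives \alpha=\lceil 12/1-4\rceil=8, so the two cases of Lemma 4.6 below are separated by distances 12\log n+2 and at least 28\log n+2, a multiplicative gap of more than 2=3-\delta for every n\geq 2.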

The following lemma is an analogue of Lemma 4.3, and the proof follows from the proof of Lemma 4.3 and the above observation.

Lemma 4.6.

If uMv=1uMv=1, then dist(s,t)=(4+α)logn+2\operatorname{dist}(s,t)=(4+\alpha)\log n+2, and otherwise dist(s,t)(4+3α)logn+2\operatorname{dist}(s,t)\geq(4+3\alpha)\log n+2. Moreover, the graph is bipartite.

Corollary 4.7.

For any constant ϵ>0\epsilon>0, there is no dynamic algorithm maintaining (3δ)(3-\delta)-approximate (s,t)(s,t)-distance, SSSP or APSP, on all NN-node bipartite graphs with maximum degree Δ3\Delta\leq 3, with amortized O(N1/2ϵ)O(N^{1/2-\epsilon}) update time and O(N1ϵ)O(N^{1-\epsilon}) query time, unless the OMv conjecture is false.

Proof 4.8.

From Lemma 4.6, we get that any approximate reported distance dd where (4+α)logn+2d<(4+3α)logn+2(4+\alpha)\log n+2\leq d<(4+3\alpha)\log n+2 would imply that uMv=1uMv=1. Note that

(4+3α)logn+2(4+α)logn+2\displaystyle\frac{(4+3\alpha)\log n+2}{(4+\alpha)\log n+2} =38logn+4(4+α)logn+2\displaystyle=3-\frac{8\log n+4}{(4+\alpha)\log n+2}
312logn(4+α)logn\displaystyle\geq 3-\frac{12\log n}{(4+\alpha)\log n} (1)
3δ\displaystyle\geq 3-\delta (2)

where Equation 1 follows from the fact that 4logn44\log n\geq 4, and Equation 2 follows from the definition of α\alpha. Thus any (3δ)(3-\delta)-approx algorithm would be able to distinguish between uMv=1uMv=1 and uMv=0uMv=0.

The number of nodes in the reduction graph is N=Θ(n2logn)=O(n2+ϵ)N=\Theta(n^{2}\log n)=O(n^{2+\epsilon}). Thus for each pair of input vectors, we perform O(n)O(n) updates and O(1)O(1) queries. In total, checking nn pairs takes us O(n2)O(n^{2}) updates and O(n)O(n) queries. If there were an algorithm for (3δ)(3-\delta)-approx (s,t)(s,t)-distance on constant-degree graphs with update time O(N1/2ϵ)=O(n13ϵ/2ϵ2)O(N^{1/2-\epsilon})=O(n^{1-3\epsilon/2-\epsilon^{2}}) and query time O(N1ϵ)=O(n2ϵϵ2)O(N^{1-\epsilon})=O(n^{2-\epsilon-\epsilon^{2}}), then we can decide if uMv=1uMv=1 for all nn pairs in O(n3c)O(n^{3-c}) time for some constant cc, contradicting the OMv conjecture.

4.3 Varying-Degree Graph

We present a reduction that gives a lower bound parameterized on the maximum degree in the graph.

Theorem 4.9.

For any 0t10\leq t\leq 1 and any constant ϵ>0\epsilon>0, there is no dynamic algorithm maintaining (s,t)(s,t)-distance, SSSP or APSP, on all NN-node bipartite graphs with maximum degree Δ=O(Nt)\Delta=O(N^{t}), with amortized O(N1+t2ϵ)O(N^{\frac{1+t}{2}-\epsilon}) update time and O(N1+tϵ)O(N^{1+t-\epsilon}) query time, unless the OMv conjecture is false.

The reduction graph

For a value 0t10\leq t\leq 1 (that may depend on NN), we construct a reduction graph on NN nodes that has maximum degree Θ(Nt)\Theta(N^{t}). The node sets L1,L2,R1,R2L_{1},L_{2},R_{1},R_{2} and the edges that depend on uu and vv are the same as in the previous construction in Section 4.1. We detail the changes for L3,L4,R3,R4L_{3},L_{4},R_{3},R_{4} and MM below.

  • A (1t1+tlogn)\left(\frac{1-t}{1+t}\cdot\log n\right)-depth binary forest with nn trees. The n(n1t1+t1)n\cdot\left(n^{\frac{1-t}{1+t}}-1\right) internal nodes are marked L3L_{3} and the nn1t1+tn\cdot n^{\frac{1-t}{1+t}} leaves are marked L4L_{4}. The roots of each of the nn trees are marked as L3[i]L_{3}[i], for 1in1\leq i\leq n, and the leaves of the tree with root L3[i]L_{3}[i] are marked L4[i,j]L_{4}[i,j], for 1jn1t1+t1\leq j\leq n^{\frac{1-t}{1+t}}, and similarly for R3R_{3} and R4R_{4}.

  • For the matrix MM, if Mij=1M_{ij}=1, let i=in2t/(t+1)i^{\prime}=\left\lceil i\cdot n^{-2t/(t+1)}\right\rceil and j=jn2t/(t+1)j^{\prime}=\left\lceil j\cdot n^{-2t/(t+1)}\right\rceil. Add the edge (L4[i,j],R4[i,j])(L_{4}[i,j^{\prime}],R_{4}[i^{\prime},j]) to the graph.

Note that an analogous characterization of the (s,t)(s,t)-distance holds in this reduction graph, by a similar proof as used to prove Lemma 4.3. Furthermore, each node in L4L_{4} is connected to at most n2t/(t+1)n^{2t/(t+1)} nodes in R4R_{4}, and each node in R4R_{4} is connected to at most n2t/(t+1)n^{2t/(t+1)} nodes in L4L_{4}.
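The parameter choices can be sanity-checked numerically; the following Python sketch (names are ours) illustrates that the node count grows as n^{2/(1+t)} and that the maximum degree n^{2t/(t+1)} is \Theta(N^{t}).

```python
# Numeric sanity check (not part of the proof) for the varying-degree reduction.
def parameters(n, t):
    leaves_per_side = n * n ** ((1 - t) / (1 + t))   # |L_4| = |R_4| = n^{2/(1+t)}
    N = 4 * leaves_per_side                          # node count up to lower-order terms
    max_deg = n ** (2 * t / (1 + t))                 # matrix edges per leaf
    return round(N), round(max_deg), max_deg / N**t  # last ratio stays Theta(1)

for t in (0.0, 0.25, 0.5, 1.0):
    print(t, parameters(10**4, t))
```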

We can now prove Theorem 4.9.

Proof 4.10.

The number of nodes in the reduction graph above is dominated by the number of nodes in L4L_{4} and R4R_{4}. Thus, the total number of nodes in the reduction graph is N=Θ(n2/(t+1))N=\Theta(n^{2/(t+1)}) nodes. Since each node not in L4L_{4} or R4R_{4} have at most 33 edges adjacent on it, its degree is trivially O(Nt)O(N^{t}). Every node in L4L_{4} and R4R_{4} has one tree edge incident on it, and at most n2t/(t+1)n^{2t/(t+1)} edges of MM incident on it by construction. Thus the maximum degree in the graph is O(n2t/(t+1))=O(Nt)O(n^{2t/(t+1)})=O(N^{t}) as claimed. The rest is similar to Theorem 4.1. Every time we get a new (u,v)(u,v) input vector pair, we delete all the edges between L2×L3L_{2}\times L_{3} and R2×R3R_{2}\times R_{3} and insert edges according to the new input vectors.

For each pair of input vectors, we thus perform O(n)O(n) updates and O(1)O(1) queries. In total, checking nn pairs takes O(n2)O(n^{2}) updates and O(n)O(n) queries. If there was an algorithm for (s,t)(s,t)-distance on graphs with maximum degree bounded by NtN^{t} with update time O(N1+t2ϵ)O(N^{\frac{1+t}{2}-\epsilon}) (i.e., O(n12ϵ)O(n^{1-2\epsilon})) and query time O(N1+tϵ)O(N^{1+t-\epsilon}) (i.e., O(n22ϵ)O(n^{2-2\epsilon})), then we can decide if uMv=1uMv=1 for all nn pairs in O(n32ϵ)O(n^{3-2\epsilon}) time, contradicting the OMv conjecture.

4.4 Expander Graph

For our expander lower bound, we use the following gadgets, called reinforced forests.

Definition 4.11 (Reinforced forest).

A reinforced forest of xx trees of height yy is a graph composed of xx disjoint binary trees, each of them of height y>0y>0. These trees have x2yx2^{y} leaves in total; consider a degree-dd expander graph on x2yx2^{y} nodes, choose an arbitrary bijection between the expander nodes and the forest’s leaves, and add the expander edges to the leaves accordingly. Finally, in order to guarantee that the graph is bipartite, on each edge added from the expander, we add a dummy node. The resulting graph is the reinforced forest.


We naturally split the nodes of a reinforced forest between internal nodes and leaves.
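A reinforced forest can be constructed along the following lines; this is a sketch assuming networkx, with a random regular graph standing in for the degree-dd expander and with our own node names.

```python
# Sketch of a reinforced forest (Definition 4.11): x binary trees of height y,
# an expander on the x * 2^y leaves, and a dummy node on every expander edge.
import networkx as nx

def reinforced_forest(x, y, d=3, seed=0):
    G, leaves = nx.Graph(), []
    for i in range(x):
        T = nx.balanced_tree(2, y)
        T = nx.relabel_nodes(T, {k: ('tree', i, k) for k in T})
        G.add_edges_from(T.edges())
        leaves += [('tree', i, k) for k in range(2**y - 1, 2**(y + 1) - 1)]
    H = nx.random_regular_graph(d, len(leaves), seed=seed)   # expander stand-in
    for idx, (a, b) in enumerate(H.edges()):
        dummy = ('dummy', idx)                    # subdivide to keep the graph bipartite
        G.add_edge(leaves[a], dummy)
        G.add_edge(dummy, leaves[b])
    return G
```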

Reduction Graph
Figure 3: The reduction graph. The input-dependent edges are dotted.

We use the following graph as the base for our reduction (see Figure 3).

  • A (logn)(\log n)-depth reinforced forest with a single tree. The set of n1n-1 internal nodes is marked L1L_{1} and the nn leaves L2L_{2}; the root of the tree is the source node ss, and the nn nodes of L2L_{2} are marked as L2[i]L_{2}[i] for 1in1\leq i\leq n.

  • A (logn)(\log n)-depth reinforced forest with 3n3n trees. The 3n(n1)3n(n-1) internal nodes are marked L3L_{3} and the 3n23n^{2} leaves L4L_{4}. We split this reinforced forest into sets of nn trees each, and label them LU,LM,LLLU,LM,LL (upper, middle, and lower). The roots of each of the nn trees are marked as LX[i]LX[i], for 1in1\leq i\leq n, X{U,M,L}X\in\{U,M,L\}, and the leaves of the tree with root LX[i]LX[i] are marked LX[i,j]LX[i,j], for 1jn1\leq j\leq n.

  • A copy of the above structure, with node sets marked RiR_{i} instead of LiL_{i}, respectively. The root of the single tree of R1R_{1} is the target node tt.

  • For the matrix MM, add the edge (LM[i,j],RM[j,i])(LM[i,j],RM[j,i]) if Mij=1M_{ij}=1, and add the edges (LM[i,j],RL[j,i])(LM[i,j],RL[j,i]) and (LL[i,j],RM[j,i])(LL[i,j],RM[j,i]) otherwise.

  • Given an input vector uu, for each i[n]i\in[n], add the edge (L2[i],LM[i])(L_{2}[i],LM[i]) if ui=1u_{i}=1, and (L2[i],LU[i])(L_{2}[i],LU[i]) otherwise.

  • Given an input vector vv, for each j[n]j\in[n], add the edge (R2[j],RM[j])(R_{2}[j],RM[j]) if vj=1v_{j}=1, and (R2[j],RU[j])(R_{2}[j],RU[j]) otherwise.

The following lemma characterizes (s,t)(s,t)-distance in the expander reduction graph based on the value of uMvuMv. We split the graph nodes into layers (indicated at the bottom of Figure 3) and use the fact that edges can only connect consecutive layers in order to prove this lemma in Section 4.4.1.

Lemma 4.12.

uMv=1uMv=1 if and only if dist(s,t)4logn+3\operatorname{dist}(s,t)\leq 4\log n+3. Moreover, the graph is bipartite.

The number of nodes in the reduction graph is dominated by the nodes in L4R4L_{4}\cup R_{4}, yielding the same asymptotic lower bounds as in the constant-degree case (Section 4.1). When a vector pair arrives, we first add all the potential edges and then remove the unnecessary ones, in a similar fashion as before, to preserve expansion. We defer the proof of expansion to Section 4.4.2. Thus, we get the following theorem for expander graphs.

Theorem 4.13.

For any constant ϵ>0\epsilon>0, there is no dynamic algorithm maintaining (s,t)(s,t)-distance, SSSP or APSP, on all NN-node constant-degree bipartite graphs with constant expansion, with amortized O(N1/2ϵ)O(N^{1/2-\epsilon}) update time and O(N1ϵ)O(N^{1-\epsilon}) query time, unless the OMv conjecture is false.

4.4.1 Distances in the expander graph

We prove the following lemma for the expander reduction graph described in Section 4.4. This is basically the graph described in Section 4.1, with expander edges added in carefully chosen places, and one dummy node added on each such edge. Note that adding the dummy nodes does not change the expansion by more than a constant factor, does not affect the asymptotic number of nodes, and the shortest paths never use the expander edges and thus also stay unaffected by the addition of the dummy nodes. To simplify the writing, we thus ignore the dummy nodes in the following. See 4.12

Proof 4.14.

The proof uses the same ideas and layering as in the proof of Lemma 4.3. We include the dummy nodes of L2L_{2} and L4L_{4} in one layer before their (non-dummy) neighbors, and the other dummy nodes in one layer after their neighbors. This already shows that the graph is bipartite.

(\implies) This direction of the proof is the same as in the proof of Lemma 4.3, since we only add edges, and the same path as earlier still exists in the graph.

(\impliedby) We still make use of the layering argument for this direction, but the argument is slightly more nuanced than the one for the constant-degree case. Assume that there is a path of length 4logn+34\log n+3 from ss to tt. By the layering, each edge in the path must connect a layer \ell node and a layer (+1)(\ell+1) node, and there is exactly one such edge in the path for each [4logn+2]\ell\in[4\log n+2]. Next, we use these two facts (sometimes implicitly) to show that the path must have the form of the above described path, and conclude that uMv=1uMv=1.

  • The path must contain an edge from L2L_{2} to L3L_{3}. Suppose the edge was to a node in LULU. Since no node in LULU has an edge to R4R_{4}, the path does not always connect two nodes of strictly increasing layers, contradicting the claimed length. Thus the edge is of the form (L2[i],LM[i])(L_{2}[i],LM[i]) for some i[n]i\in[n], implying that ui=1u_{i}=1.

  • From there, the path must continue to some leaf LM[i,j]LM[i,j] of the tree rooted at LM[i]LM[i]. Suppose the path continues to RLRL instead of RMRM. Since there are no edges from RLRL to R2R_{2}, the path would have to connect two nodes of non-increasing layers at least once, which cannot happen on a 4logn+34\log n+3 length path. Thus the path uses the edge (LM[i,j],RM[j,i])(LM[i,j],RM[j,i]), implying that Mij=1M_{ij}=1.

  • From RM[j,i]RM[j,i], the path must continue to the root RM[j]RM[j], and then to R2[j]R_{2}[j] by using the edge (RM[j],R2[j])(RM[j],R_{2}[j]), implying vj=1v_{j}=1.

We have established that there exist indices i,j[n]i,j\in[n] such that ui=Mij=vj=1u_{i}=M_{ij}=v_{j}=1, implying uMv=1uMv=1.

4.4.2 Expansion of the expander graph

Let us now verify that the graph is indeed an h1h_{1}-expander graph, for some constant h1>0h_{1}>0. Recall that the reinforced forests contain h0h_{0} expanders, for some constant h0>0h_{0}>0.

While a similar proof as in the maximum matching expander reduction works for this setting as well, we take a different approach here. Consider a reinforced forest with a set UU of internal nodes, a set UU^{\prime} of leaves, and an h0h_{0}-expander on the leaves. Let SS be a set of nodes in the forest (we do not bound |S||S|).

Observation.

Let 0<c<1/20<c<1/2 be a value that may depend on nn. If c|U||SU|(1c)|U|c|U^{\prime}|\leq|S\cap U^{\prime}|\leq(1-c)|U^{\prime}|, then |E(S,S¯)|ch0|U||E(S,\bar{S})|\geq ch_{0}|U^{\prime}|. Proof. Consider first the case |SU||U|/2|S\cap U^{\prime}|\leq|U^{\prime}|/2, in which the expander graph edges on UU^{\prime} guarantee |E(S,S¯)U|h0|SU||E(S,\bar{S})\cap U^{\prime}|\geq h_{0}|S\cap U^{\prime}|, implying |E(S,S¯)||E(S,S¯)U|h0|SU|ch0|U||E(S,\bar{S})|\geq|E(S,\bar{S})\cap U^{\prime}|\geq h_{0}|S\cap U^{\prime}|\geq ch_{0}|U^{\prime}|. Otherwise, apply the analogous argument to the set USU^{\prime}\setminus S.

Observation.

|E(S,S¯)||US||US||E(S,\bar{S})|\geq|U\cap S|-|U^{\prime}\cap S|. Proof. Every node in USU\cap S is a tree node that has two children. In total, these nodes have 2|US|2|U\cap S| edges going to children, of which at most |US|+|US||U\cap S|+|U^{\prime}\cap S| go to children in SS. Hence, |E(S,S¯)|2|US|(|US|+|US|)=|US||US||E(S,\bar{S})|\geq 2|U\cap S|-(|U\cap S|+|U^{\prime}\cap S|)=|U\cap S|-|U^{\prime}\cap S|.

Observation.

Let 0<c<1/20<c<1/2 be a constant. If |US|c|US||U\cap S|\leq c|U^{\prime}\cap S| then |E(S,S¯)|(0.5c)|US||E(S,\bar{S})|\geq(0.5-c)|U^{\prime}\cap S|. Proof. If |US|c|US||U\cap S|\leq c|U^{\prime}\cap S|, consider the parent nodes of the leaves in USU^{\prime}\cap S: these have at least 0.5|US|0.5|U^{\prime}\cap S| distinct parents, of which at most c|US|c|U^{\prime}\cap S| are in USU\cap S, and the others are in US¯U\cap\bar{S}. Hence |E(US¯,US)|(0.5c)|US||E(U\cap\bar{S},U^{\prime}\cap S)|\geq(0.5-c)|U^{\prime}\cap S|.

The proof of the following lemma is similar to that of Lemma 3.29.

Lemma 4.15.

A reinforced forest is an expander.

Proof 4.16.

Fix a set SS of nodes in the forest, |S|(|UU|)/2|S|\leq(|U\cup U^{\prime}|)/2. We consider different ranges of |US||U^{\prime}\cap S|, and show that for each of them, |E(S,S¯)|h1|S||E(S,\bar{S})|\geq h_{1}|S| for some constant h1h_{1}.

  1.

    If |US|>0.9|U||U^{\prime}\cap S|>0.9|U^{\prime}|, the inequalities |U|<|U||U|<|U^{\prime}| and |S||UU|/2|S|\leq|U\cup U^{\prime}|/2 imply |US|<0.1|U||U\cap S|<0.1|U^{\prime}|, and hence |US|>(1/9)|US|>0.1|US||U^{\prime}\cap S|>(1/9)|U\cap S|>0.1|U\cap S|. Observation 4.4.2 implies |E(S,S¯)|0.4|US|0.36|U|>0.18|UU|0.36|S||E(S,\bar{S})|\geq 0.4|U^{\prime}\cap S|\geq 0.36|U^{\prime}|>0.18|U\cup U^{\prime}|\geq 0.36|S|.

  2.

    If 0.1|U||US|0.9|U|0.1|U^{\prime}|\leq|U^{\prime}\cap S|\leq 0.9|U^{\prime}|, then by Observation 4.4.2 we have |E(S,S¯)|0.1h0|U||E(S,\bar{S})|\geq 0.1h_{0}|U^{\prime}|. Since |U|<|U||U|<|U^{\prime}| and |S||UU|/2|S|\leq|U\cup U^{\prime}|/2, the above implies |E(S,S¯)|0.05h0|UU|0.1h0|S||E(S,\bar{S})|\geq 0.05h_{0}|U\cup U^{\prime}|\geq 0.1h_{0}|S|.

  3.

    If |US|<0.1|U||U^{\prime}\cap S|<0.1|U^{\prime}|, then

    (a)

      If |US|>2|US||U\cap S|>2|U^{\prime}\cap S|, add 0.5|US|1.5|US|0.5|U\cap S|-1.5|U^{\prime}\cap S| to both sides of the inequality and multiply by 2/32/3 to get |US||US|>1/3(|US|+|US|)=|S|/3|U\cap S|-|U^{\prime}\cap S|>1/3(|U\cap S|+|U^{\prime}\cap S|)=|S|/3. By Observation 4.4.2, |E(S,S¯)||US||US|>|S|/3|E(S,\bar{S})|\geq|U\cap S|-|U^{\prime}\cap S|>|S|/3.

    (b)

      If |US|2|US||U\cap S|\leq 2|U^{\prime}\cap S| then |S|=|US|+|US|3|US||S|=|U\cap S|+|U^{\prime}\cap S|\leq 3|U^{\prime}\cap S|. The expander edges on UU^{\prime} guarantee that |E(S,S¯)|h0|US|(h0/3)|S||E(S,\bar{S})|\geq h_{0}|U^{\prime}\cap S|\geq(h_{0}/3)|S|.

Lemma 4.17.

The reduction graph is an expander.

Proof 4.18.

Consider a set SVS\subseteq V of |S|N/2=6n2n1|S|\leq N/2=6n^{2}-n-1 nodes. Note that the node sets in the graph have the following sizes: |L1|=|R1|=n1|L_{1}|=|R_{1}|=n-1; |L2|=|R2|=n|L_{2}|=|R_{2}|=n; |L3|=|R3|=3n23n|L_{3}|=|R_{3}|=3n^{2}-3n; |L4|=|R4|=3n2|L_{4}|=|R_{4}|=3n^{2}. We show that there exists a constant h1h_{1} such that |E(S,S¯)|h1|S||E(S,\bar{S})|\geq h_{1}|S|, by considering different ranges for the set size |L4S||L_{4}\cap S| (which ranges in 0,,3n20,\ldots,3n^{2}). In most cases, we show that |E(S,S¯)|cn2|E(S,\bar{S})|\geq cn^{2} for some constant cc, which is enough since |S|<6n2|S|<6n^{2}.

  1.

    If |L4S|>2.9n2|L_{4}\cap S|>2.9n^{2}, then consider following sub-cases.

    (a)

      If |R4S|>2.1n2|R_{4}\cap S|>2.1n^{2}, then since |S|6n2n1|S|\leq 6n^{2}-n-1, we have |L3S|<n2|L_{3}\cap S|<n^{2}, which implies |L4S|>2.9|L3S||L_{4}\cap S|>2.9|L_{3}\cap S|. By Observation 4.4.2, |E(S,S¯)|0.15|L4S|>0.4n2|E(S,\bar{S})|\geq 0.15|L_{4}\cap S|>0.4n^{2}.

    (b)

      If 0.1n2|R4S|2.1n20.1n^{2}\leq|R_{4}\cap S|\leq 2.1n^{2}, the expander edges on R4R_{4} (Observation 4.4.2 with c=0.03c=0.03) guarantee |E(S,S¯)|0.03h0n2|E(S,\bar{S})|\geq 0.03h_{0}n^{2}.

    (c)

      If |R4S|<0.1n2|R_{4}\cap S|<0.1n^{2}, note that there are n2n^{2} edges in L4×R4L_{4}\times R_{4}, of which at most 0.1n20.1n^{2} are in (L4S¯)×(R4S¯)(L_{4}\cap\bar{S})\times(R_{4}\cap\bar{S}) (since L4SL_{4}\cap S is large), at most 0.1n20.1n^{2} are in (L4S)×(R4S)(L_{4}\cap S)\times(R_{4}\cap S) (since R4SR_{4}\cap S is small), and the remaining edges, at least 0.8n20.8n^{2} of them, are in E(S,S¯)E(S,\bar{S}).

  2.

    If 0.1n2|L4S|2.9n20.1n^{2}\leq|L_{4}\cap S|\leq 2.9n^{2}, then as in case 1(b), the expander edges on L4L_{4} (Observation 4.4.2 with c=0.03c=0.03) guarantee |E(S,S¯)|0.03h0n2|E(S,\bar{S})|\geq 0.03h_{0}n^{2}.

  3.

    If |L4S|<0.1n2|L_{4}\cap S|<0.1n^{2}, consider the following sub-cases.

    (a)

      If |R4S|>2.1n2|R_{4}\cap S|>2.1n^{2}, we are in a case symmetric to case 1(c): of the n2n^{2} edges in L4×R4L_{4}\times R_{4}, at most 0.1n20.1n^{2} are in (L4S¯)×(R4S¯)(L_{4}\cap\bar{S})\times(R_{4}\cap\bar{S}) (since L4SL_{4}\cap S is small), at most 0.1n20.1n^{2} are in (L4S)×(R4S)(L_{4}\cap S)\times(R_{4}\cap S) (since R4SR_{4}\cap S is large), and the remaining edges, at least 0.8n20.8n^{2} edges, are in E(S,S¯)E(S,\bar{S}).

    (b)

      If 0.1n2|R4S|2.1n20.1n^{2}\leq|R_{4}\cap S|\leq 2.1n^{2}, again the expander edges on R4R_{4} (Observation 4.4.2 with c=0.03c=0.03) guarantee |E(S,S¯)|0.03h0n2|E(S,\bar{S})|\geq 0.03h_{0}n^{2}.

    (c)

      In the last case, namely |R4S|<0.1n2|R_{4}\cap S|<0.1n^{2}, we are bound to analyze the subgraphs on L=L1L2L3L4L=L_{1}\cup L_{2}\cup L_{3}\cup L_{4} and on R=R1R2R3R4R=R_{1}\cup R_{2}\cup R_{3}\cup R_{4} separately. We will show that there is a constant h2h_{2} such that |E(LS,LS¯)|h2|LS||E(L\cap S,L\cap\bar{S})|\geq h_{2}|L\cap S| and |E(RS,RS¯)|h2|RS||E(R\cap S,R\cap\bar{S})|\geq h_{2}|R\cap S|, which implies |E(S,S¯)||E(LS,LS¯)|+|E(RS,RS¯)|h2|LS|+h2|RS|=h2|S||E(S,\bar{S})|\geq|E(L\cap S,L\cap\bar{S})|+|E(R\cap S,R\cap\bar{S})|\geq h_{2}|L\cap S|+h_{2}|R\cap S|=h_{2}|S|, as desired.

      We focus on LL; the proof for RR is analogous.

      i.

        If 0.1n|L4S|<0.1n20.1n\leq|L_{4}\cap S|<0.1n^{2},

        A.

          If |L3S|2|L4S||L_{3}\cap S|\geq 2|L_{4}\cap S|, then by Observation 4.4.2, we have |E(LS,LS¯)||L3S||L4S||E(L\cap S,L\cap\bar{S})|\geq|L_{3}\cap S|-|L_{4}\cap S|, and we bound this difference from below several times. By |L3S|2|L4S||L_{3}\cap S|\geq 2|L_{4}\cap S|, we have |L3S||L4S|0.5|L3S||L_{3}\cap S|-|L_{4}\cap S|\geq 0.5|L_{3}\cap S|. By the same inequality |L3S||L4S||L4S||L_{3}\cap S|-|L_{4}\cap S|\geq|L_{4}\cap S|. In addition, |L4S|0.1n0.05|L1L2||L_{4}\cap S|\geq 0.1n\geq 0.05|L_{1}\cup L_{2}|. Hence, |E(LS,LS¯)|(0.5|L3S|+|L4S|+0.05|L1L2|)/30.01|LS||E(L\cap S,L\cap\bar{S})|\geq(0.5|L_{3}\cap S|+|L_{4}\cap S|+0.05|L_{1}\cup L_{2}|)/3\geq 0.01|L\cap S|, and we are done.

        B.

          If |L3S|2|L4S||L_{3}\cap S|\leq 2|L_{4}\cap S|, then |(L3L4)S||(L3L4)|/2|(L_{3}\cup L_{4})\cap S|\leq|(L_{3}\cup L_{4})|/2, and by Lemma 4.15 we have |E(LS,LS¯)|h1|(L3L4)S||E(L\cap S,L\cap\bar{S})|\geq h_{1}|(L_{3}\cup L_{4})\cap S|. We also have |(L3L4)S||L4S|0.1n0.05|L1L2||(L_{3}\cup L_{4})\cap S|\geq|L_{4}\cap S|\geq 0.1n\geq 0.05|L_{1}\cup L_{2}|. Hence, |E(LS,LS¯)|0.02h1|LS||E(L\cap S,L\cap\bar{S})|\geq 0.02h_{1}|L\cap S| and we are done.

      ii.

        If |L4S|<0.1n|L_{4}\cap S|<0.1n,

        A.

          If |L3S|0.2n|L_{3}\cap S|\geq 0.2n, then again by Observation 4.4.2, we have |E(LS,LS¯)||L3S||L4S||L3S|/2|E(L\cap S,L\cap\bar{S})|\geq|L_{3}\cap S|-|L_{4}\cap S|\geq|L_{3}\cap S|/2.

          Since |L3S|/2>|L4S|/4|L_{3}\cap S|/2>|L_{4}\cap S|/4 and also |L3S|/20.1n>0.05|L1L2||L_{3}\cap S|/2\geq 0.1n>0.05|L_{1}\cup L_{2}|, we have |E(LS,LS¯)|0.01|L3S||E(L\cap S,L\cap\bar{S})|\geq 0.01|L_{3}\cap S|.

        B.

          If |L3S|<0.2n|L_{3}\cap S|<0.2n, then again by Lemma 4.15 we have |E(LS,LS¯)|h1|(L3L4)S||E(L\cap S,L\cap\bar{S})|\geq h_{1}|(L_{3}\cup L_{4})\cap S|. We treat (L1L2)S(L_{1}\cup L_{2})\cap S separately, considering three cases by the sizes of L2SL_{2}\cap S and L1SL_{1}\cap S.

          • If |L2S|0.3n|L_{2}\cap S|\geq 0.3n, note that each node in L2L_{2} is connected by an edge to exactly one node in L3L_{3}, and these nodes are distinct. So there are at least 0.3n0.3n edges in (L2S)×L3(L_{2}\cap S)\times L_{3}, of which at most 0.2n0.2n have their L3L_{3} endpoint in |L3S||L_{3}\cap S|, hence |E(LS,LS¯)|0.1n0.05|(L1L2)S||E(L\cap S,L\cap\bar{S})|\geq 0.1n\geq 0.05|(L_{1}\cup L_{2})\cap S|.

          • If |L2S|<0.3n|L_{2}\cap S|<0.3n and |L1S|<0.7n|L_{1}\cap S|<0.7n, then by Lemma 4.15 we have |E(LS,LS¯)|h1|(L1L2)S||E(L\cap S,L\cap\bar{S})|\geq h_{1}|(L_{1}\cup L_{2})\cap S|.

          • If |L2S|<0.3n|L_{2}\cap S|<0.3n and |L1S|0.7n|L_{1}\cap S|\geq 0.7n, then by Observation 4.4.2, |E(LS,LS¯)||L1S||L2S|>0.4n0.2|L1L2|0.4|(L1L2)S||E(L\cap S,L\cap\bar{S})|\geq|L_{1}\cap S|-|L_{2}\cap S|>0.4n\geq 0.2|L_{1}\cup L_{2}|\geq 0.4|(L_{1}\cup L_{2})\cap S|.

4.5 Power-law Graph

As in the case of maximum matching, we first make our reduction graph robust to degree changes. The following reduction is close in spirit to the expander graph reduction. The graph is as follows:

  • A (logn)(\log n)-depth binary forest with a single tree, labelled L1L2L_{1}\cup L_{2} as before, with ss as the root of the tree.

  • A (logn)(\log n)-depth binary forest with 3n3n trees, labelled L3L4L_{3}\cup L_{4}. We split this binary forest into sets of nn trees each, and label them LU,LM,LLLU,LM,LL (upper, middle, and lower). The roots of each of the nn trees are marked as LX[i]LX[i], for 1in1\leq i\leq n, X{U,M,L}X\in\{U,M,L\}, and the leaves of the tree with root LX[i]LX[i] are marked LX[i,j]LX[i,j], for 1jn1\leq j\leq n.

  • A copy of the above structure, with node sets marked RiR_{i} instead of LiL_{i}, respectively. The root of the single tree of R1R_{1} is the target node tt.

  • For the matrix MM, add the edges (LM[i,j],RM[j,i])(LM[i,j],RM[j,i]) and (LL[i,j],RL[j,i])(LL[i,j],RL[j,i]) if Mij=1M_{ij}=1, and add the edges (LM[i,j],RL[j,i])(LM[i,j],RL[j,i]) and (LL[i,j],RM[j,i])(LL[i,j],RM[j,i]) otherwise.

  • Given an input vector uu, for each i[n]i\in[n], add the edge (L2[i],LM[i])(L_{2}[i],LM[i]) if ui=1u_{i}=1, and (L2[i],LU[i])(L_{2}[i],LU[i]) otherwise.

  • Given an input vector vv, for each j[n]j\in[n], add the edge (R2[j],RM[j])(R_{2}[j],RM[j]) if vj=1v_{j}=1, and (R2[j],RU[j])(R_{2}[j],RU[j]) otherwise.

Note that this fixes the degree distribution present in the graph regardless of the input (u,M,v)(u,M,v), unlike in the case of maximum matching where we needed to compensate for the degrees elsewhere in the graph. Table 7 shows the degree distribution on the left side of the reduction graph, regardless of the input bits.

Layer Deg 11 Deg 22 Deg 33
L1L_{1} 0 11 n2n-2
L2L_{2} 0 nn 0
L3L_{3} 0 2n2n 3n25n3n^{2}-5n
L4L_{4} n2n^{2} 2n22n^{2} 0
Table 7: Degree distribution of the nodes on the left side of the (s,t)(s,t)-distance reduction graph.

Thus the degree distribution in the entire reduction graph is as follows: (1,2n2),(2,4n2+6n+2),(3,6n28n4)(1,2n^{2}),(2,4n^{2}+6n+2),(3,6n^{2}-8n-4). As earlier, choose NN such that

N>ζ(β)max{(2N1+2n),(2N2+2n)2β,(2N3+2n)3β},N>\zeta(\beta)\cdot\max\{(2N_{1}+2n),(2N_{2}+2n)\cdot 2^{\beta},(2N_{3}+2n)\cdot 3^{\beta}\},

and pick any power-law graph GG on NN nodes. We reduce the count of degree 1,21,2 and 33 nodes in the graph as earlier, and embed our reduction graph using these nodes. Unlike in the case of matchings, note that the (s,t)(s,t)-distance is not affected by the rest of the graph, and thus uMv=1uMv=1 if and only if dist(s,t)4logn+3\operatorname{dist}(s,t)\leq 4\log n+3.

5 Lower Bounds for Dynamic Densest Subgraph

For a constant d3d\geq 3, we work with (2d)(2d)-regular graphs of two different sizes: an nn-node gadget for each vector entry, and an n2n^{2}-node gadget for each matrix entry.

Definition 5.1 (Vector and Matrix Gadgets).

A vector gadget is a 66-edge connected 2d2d-regular graph on nn nodes. A matrix gadget is a 66-edge connected 2d2d-regular graph on n2n^{2} nodes with one edge removed.

A 66-edge connected graph is a graph that does not disconnect after the removal of any 55 edges. Such a (2d)(2d)-regular graph is, e.g., a dd-dimensional torus, or a union of dd edge-disjoint cycles, each going through all the nodes. Note that the density of a vector gadget is dd, and the density of a matrix gadget is d1n2d-\frac{1}{n^{2}}. We prove the following theorems for maintaining densest subgraphs.
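One concrete instantiation of these gadgets is the circulant graph C_m(1,\ldots,d), which is 2d-regular for m\geq 2d+1 and, being a Harary-type graph, is 2d-edge connected and hence at least 6-edge connected for d\geq 3. The following Python sketch (assuming networkx; the helper names are ours) builds the two gadgets this way.

```python
# Sketch of the vector and matrix gadgets of Definition 5.1 via circulant graphs.
import networkx as nx

def vector_gadget(n, d=3):
    """A 2d-regular, (at least) 6-edge-connected graph on n nodes (n >= 2d+1)."""
    return nx.circulant_graph(n, list(range(1, d + 1)))

def matrix_gadget(n, d=3):
    """The same construction on n^2 nodes, with one designated edge removed."""
    G = nx.circulant_graph(n * n, list(range(1, d + 1)))
    G.remove_edge(0, 1)          # nodes 0 and 1 play the roles of M_ij[0] and M_ij[1]
    return G, 0, 1
```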

Theorem 5.2.

For any constant ϵ>0\epsilon>0, there is no dynamic algorithm maintaining an exact densest subgraph, on all NN-node graphs with maximum degree Δ7\Delta\leq 7, with amortized O(N1/4ϵ)O(N^{1/4-\epsilon}) update time and O(N1/2ϵ)O(N^{1/2-\epsilon}) query time, unless the OMv conjecture is false.

Theorem 5.3.

For any constant ϵ>0\epsilon>0, there is no dynamic algorithm maintaining an exact densest subgraph, on all NN-node graphs with constant degree and constant expansion, with amortized O(N1/4ϵ)O(N^{1/4-\epsilon}) update time and O(N1/2ϵ)O(N^{1/2-\epsilon}) query time, unless the OMv conjecture is false.

5.1 Constant-Degree Graph

5.1.1 The static graph

The reduction graph is composed of 2n2n vector gadgets, and n2n^{2} matrix gadgets as follows.

  • 2n2n vector gadgets labelled UiU_{i} resp. ViV_{i}, for 1in1\leq i\leq n. The nodes in each gadget are labelled Ui[j]U_{i}[j] resp. Vi[j]V_{i}[j], for 1jn1\leq j\leq n.

  • n2n^{2} matrix gadgets labelled MijM_{ij}, for 1i,jn1\leq i,j\leq n. The missing edge in the gadget MijM_{ij} is between the two nodes labelled Mij[0]M_{ij}[0] and Mij[1]M_{ij}[1].

  • An edge from Ui[j]U_{i}[j] to Mij[0]M_{ij}[0], and an edge from Vj[i]V_{j}[i] to Mij[1]M_{ij}[1] for all 1i,jn1\leq i,j\leq n. Note that this implies that every node of MijM_{ij} has degree 2d2d and that the total number of edges incident to at least one node of MijM_{ij} is dn2+1dn^{2}+1.

The total number of nodes in the reduction graph is N=n4+2n2=Θ(n4)N=n^{4}+2n^{2}=\Theta(n^{4}).

5.1.2 Input-dependent edges

The above graph is adapted to the specific input instance as follows.

  • For the matrix MM, remove one arbitrary edge from the matrix gadget MijM_{ij} if Mij=0M_{ij}=0.

  • Given an input vector uu, for each i[n]i\in[n], remove two arbitrary edges from the vector gadget UiU_{i} if ui=0u_{i}=0.

  • Given an input vector vv, for each j[n]j\in[n], remove two arbitrary edges from the vector gadget VjV_{j} if vj=0v_{j}=0.

5.1.3 Densest subgraphs in the graph

We use the following simple lemma in our density proof.

Lemma 5.4.

[16] For all numbers a,b,c,d,a,b,c,d, and rr, we have:

  1.

    If abr\frac{a}{b}\geq r and cdr\frac{c}{d}\geq r, then a+cb+dr\frac{a+c}{b+d}\geq r.

  2.

    If abr\frac{a}{b}\geq r and cdr\frac{c}{d}\leq r, then acbdr\frac{a-c}{b-d}\geq r.

Let ρ\rho be the density of the densest subgraph in the current graph.

Lemma 5.5.

If uMv=1uMv=1, then ρd+1n2+2n\rho\geq d+\frac{1}{n^{2}+2n}, and otherwise ρ<d+1n2+2n\rho<d+\frac{1}{n^{2}+2n}.

Proof 5.6.

(\implies) First assume that uMv=1uMv=1. Then there are indices i,ji,j such that ui=Mij=vj=1u_{i}=M_{ij}=v_{j}=1. Consider the subgraph S=UiMijVjS=U_{i}\cup M_{ij}\cup V_{j}. It consists of n2+2nn^{2}+2n nodes and d(n2+2n)+1d(n^{2}+2n)+1 edges. Thus ρ(S)=d+1n2+2n\rho(S)=d+\frac{1}{n^{2}+2n}.

(\impliedby) Now assume that uMv=0uMv=0, and that there exists some subset SVS\subset V with ρ(S)d+1n2+2n\rho(S)\geq d+\frac{1}{n^{2}+2n}. We first claim that we can modify SS in a particular manner without loss of generality, and then derive a contradiction. Specifically, first we remove from SS all subgraphs MijM_{ij} that are not completely contained in SS.

Claim 1.

Let TT be a subgraph of a matrix gadget MijM_{ij} that is not the whole gadget. Then, after removing TT from SS, it still holds that ρ(S)d+1n2+2n\rho(S)\geq d+\frac{1}{n^{2}+2n}.

Proof.

Recall that every node in MijM_{ij} has degree at most 2d2d. Let q=|T|<n2q=|T|<n^{2}. It follows that there are at most qdqd edges incident to a node of TT in G[S]G[S]: If neither Mij[0]M_{ij}[0] nor Mij[1]M_{ij}[1] belong to TT then there are at most qd1qd-1 edges incident to at least one node of TT in G[S]G[S] (all being edges between nodes of TT), as at least one node of TT must have an edge to a node in the rest of MijM_{ij} that does not belong to SS. If either Mij[0]M_{ij}[0] or Mij[1]M_{ij}[1], but not both, belongs to TT then there are at most qd1qd-1 edges incident to two nodes of TT and there is one edge between TT and the rest of SS. If both Mij[0]M_{ij}[0] and Mij[1]M_{ij}[1] belong to TT then there are at most qd2qd-2 edges incident to two nodes of TT (as there is no edge between Mij[0]M_{ij}[0] and Mij[1]M_{ij}[1]) and there are two edges between TT and the rest of SS.

Thus the removal of TT from SS removes at most qdqd edges and qq nodes and, hence by Part 2 of Lemma 5.4, STS\setminus T has density d+1n2+2n\geq d+\frac{1}{n^{2}+2n}.

Next we remove all subgraphs MijM_{ij} such that Mij=0M_{ij}=0.

Claim 2.

For all i,ji,j, if the bit Mij=0M_{ij}=0 and MijM_{ij} is fully contained in SS, then after removing the corresponding subgraph from SS, we still have ρ(S)d+1n2+2n\rho(S)\geq d+\frac{1}{n^{2}+2n}.

Proof.

As Mij=0M_{ij}=0, there are dn2dn^{2} edges incident to at least one node of MijM_{ij}. Thus, removing MijM_{ij} from SS removes n2n^{2} nodes and at most dn2dn^{2} edges from G[S]G[S], i.e., a subgraph of density at most dd. By Part 2 of Lemma 5.4, SMijS\setminus M_{ij} has density d+1n2+2n\geq d+\frac{1}{n^{2}+2n}.

Finally we add to SS every vector subgraph that is partially contained in SS.

Claim 3.

For all i,ji,j, after we add any partially contained subgraph UiU_{i} or VjV_{j} to SS it still holds that ρ(S)d+1n2+2n\rho(S)\geq d+\frac{1}{n^{2}+2n}.

Proof.

We only prove our claim for the gadget UiU_{i}, since the proof for the gadget VjV_{j} is analogous. Suppose some subset UUiU\subset U_{i} is contained in SS with |U|=q<n|U|=q<n. We will show that adding A:=UiUA:=U_{i}\setminus U to SS adds at least d(nq)+1d(n-q)+1 edges. As d(nq)+1nq>d+1n2+2n\frac{d(n-q)+1}{n-q}>d+\frac{1}{n^{2}+2n}, the claim follows from Part 1 of Lemma 5.4.

We are left with showing that adding AA to SS adds at least d(nq)+1d(n-q)+1 edges. Recall that UiU_{i} is a 2d2d-regular 6-edge connected graph with nn nodes from which 2 edges were removed if ui=0u_{i}=0 and no edges were removed if ui=1u_{i}=1. We consider the following cases:

Case 1: ui=1u_{i}=1 or no removed edge is incident to a node in AA. In this case every node in AA has degree 2d2d in G[Ui]G[U_{i}]. Let aa be the number of edges between UU and AA, and note that a6a\geq 6 since UiU_{i} is 6-edge connected. Thus, the sum of the degrees of the nodes of AA in G[A]G[A] is 2d(nq)a2d(n-q)-a. It follows that there are d(nq)a/2d(n-q)-a/2 edges between nodes of AA. Thus, the total number of edges incident to nodes of AA in G[Ui]G[U_{i}] is d(nq)+a/2d(nq)+3d(n-q)+a/2\geq d(n-q)+3 and this is a lower bound on the number of edges that are added to SS when AA is added.

Case 2: ui=0u_{i}=0 and bb removed edges belong to (A,U)(A,U) and cc removed edges have both endpoints in AA with b+c1b+c\geq 1. As only 2 edges are removed, b+c2b+c\leq 2. As UiU_{i} is 6-edge connected, let a6ba\geq 6-b be the number of edges between UU and AA. In this case the sum of the degrees of the nodes of AA in G[A]G[A] is 2d(nq)(a+b)2c2d(n-q)-(a+b)-2c. Thus, the total number of edges incident to nodes of AA in G[A]G[A] is d(nq)(a+b)/2cd(n-q)-(a+b)/2-c. Hence, the total number of edges incident to nodes of AA in G[Ui]G[U_{i}] is d(nq)(a+b)/2c+a=d(nq)+(ab)/2c.d(n-q)-(a+b)/2-c+a=d(n-q)+(a-b)/2-c. As a6ba\geq 6-b this is at least d(nq)+3bcd(n-q)+3-b-c. Since b+c2b+c\leq 2, it follows that at least d(nq)+1d(n-q)+1 edges are added to SS when AA is added.

Thus SS has the following structure: It has some matrix gadgets of set bits MijM_{ij} with all the nodes and both outgoing edges present, and it has some vector gadgets of set or unset bits with all nodes present as well. Let xx denote the number of matrix gadgets of set bits contained in SS, yy denote the number of vector gadgets of set bits in SS, and zz denote the number of vector gadgets of unset bits in SS. We now consider two cases:

Case 1: y+z=0y+z=0. Then SS consists only of xx matrix gadgets and no vector gadgets. Thus ρ(S)=x(dn21)n2x<d\rho(S)=\frac{x\cdot(dn^{2}-1)}{n^{2}x}<d, which is a contradiction.

Case 2: y+z>0y+z>0. Then the density of SS is given by

ρ(S)=x(dn2+1)+ydn+z(dn2)n2x+ny+nz\rho(S)=\frac{x\cdot(dn^{2}+1)+y\cdot dn+z\cdot(dn-2)}{n^{2}x+ny+nz}

We claimed that ρ(S)d+1n2+2n\rho(S)\geq d+\frac{1}{n^{2}+2n}, which is equivalent to

2nxny+(2n2+5n)z2nx\geq ny+(2n^{2}+5n)\cdot z

Since uMv=0uMv=0, for every matrix gadget MijM_{ij} of a set bit, either the corresponding UiU_{i} or VjV_{j} must be unset. Thus each such matrix gadget can be assigned to an unset vector gadget contained in SS, and each unset vector gadget is assigned at most nn matrix gadgets; this gives xnzx\leq nz and hence

2n2z2nxny+(2n2+5n)z,2n^{2}z\geq 2nx\geq ny+(2n^{2}+5n)\cdot z,

which is a contradiction: it implies 0ny+5nz0\geq ny+5nz and hence y=z=0y=z=0, contradicting y+z>0y+z>0.

5.1.4 Complexity of the reduction

We are now ready to prove the theorem.

See 5.2

Proof 5.7.

Consider the reduction graph above with d=3d=3. It consists of N=n4+2n2=Θ(n4)N=n^{4}+2n^{2}=\Theta(n^{4}) nodes. Every time we get a new (u,v)(u,v) input vector pair, we reinsert all the removed edges in the vector gadgets Ui,ViU_{i},V_{i}, for all 1in1\leq i\leq n, and then delete edges according to the new input vectors. This takes O(n)O(n) updates in total. After that, we query once for the density of the densest subgraph in this new graph, and return 11 if and only if ρ3+1n2+2n\rho\geq 3+\frac{1}{n^{2}+2n}.

Thus, for each pair of input vectors, we perform O(n) updates and a single query. In total, checking n pairs takes O(n²) updates and O(n) queries. If there were an algorithm for maintaining the density of the densest subgraph on constant-degree graphs with update time O(N^{1/4−ε}) (i.e., O(n^{1−4ε})) and query time O(N^{1/2−ε}) (i.e., O(n^{2−4ε})), then we could decide whether uMv = 1 for all n pairs in O(n^{3−4ε}) time, contradicting the OMv conjecture.
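
To make this update/query schedule concrete, the following sketch (in Python) drives a hypothetical fully dynamic densest-subgraph data structure; the interface (insert_edge, delete_edge, query_density) and the lists u_gadget_edges[i], v_gadget_edges[i] holding the two designated removable edges of Ui and Vi are our own naming for the construction above, not an existing library.

def run_omv_reduction(oracle, u_gadget_edges, v_gadget_edges, vector_pairs, n):
    """Decide u M v for each online pair with O(n) updates and one query per pair (d = 3)."""
    threshold = 3 + 1.0 / (n * n + 2 * n)
    removed = []                       # edges currently deleted from the vector gadgets
    answers = []
    for (u, v) in vector_pairs:        # the n online vector pairs
        for e in removed:              # restore every vector gadget to its 2d-regular state
            oracle.insert_edge(*e)
        removed = []
        for i in range(n):             # encode the new pair: each unset bit loses two edges
            if u[i] == 0:
                removed.extend(u_gadget_edges[i])
            if v[i] == 0:
                removed.extend(v_gadget_edges[i])
        for e in removed:
            oracle.delete_edge(*e)
        # a single density query decides the pair
        answers.append(1 if oracle.query_density() >= threshold else 0)
    return answers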

5.2 Expander Graph

The extension required to make the densest subgraph reduction hold even for expanders is simpler than the corresponding extensions for the previous problems. Pick d such that there is a d′-regular expander for some d′ ≤ d − 2. The static graph is constructed as follows.

  • The reduction graph G0G_{0} from Section 5.1.1 as a subgraph together with the input-dependent edges from Section 5.1.2.

  • A d′-regular expander graph G1 on n⁴ + 2n² nodes, for some d′ ≤ d − 2, with constant expansion h0, together with a bijection π from the nodes of G0 to the nodes of G1.

  • An edge between every node v of G0 and the node π(v) of G1, giving a perfect matching connecting the nodes of G0 with those of G1.

The total number of nodes in the reduction graph is N=2n4+4n2=Θ(n4)N=2n^{4}+4n^{2}=\Theta(n^{4}).
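
A minimal sketch (in Python) of assembling this static graph from edge lists, assuming G0 and G1 are both given on the node set {0, …, n⁴+2n²−1} and taking the bijection π to be the identity, realized by shifting the labels of G1:

def build_expander_augmented_graph(g0_edges, g1_edges, num_nodes):
    """Edge list of the combined static graph: the reduction graph G0, a copy of a
    d'-regular expander G1 (d' <= d-2) on the same number of nodes, and a perfect
    matching joining node v of G0 to node pi(v) of G1 (here pi is the identity,
    so the partner of v is v + num_nodes)."""
    edges = list(g0_edges)
    edges += [(a + num_nodes, b + num_nodes) for (a, b) in g1_edges]  # copy of G1
    edges += [(v, v + num_nodes) for v in range(num_nodes)]           # matching edges
    return edges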

The density arguments in this reduction graph are similar to those in the constant-degree case, as in Lemma 5.5. The proof relies on the following fact: for any subset T with density > d, removing all of the nodes of G1 from T cannot decrease the density of T, since each node of G1 has degree at most d′ + 1 ≤ d − 1 in G[T]. Thus, in the proof of Lemma 5.5 in the setting where uMv = 0, we add a first step removing all the nodes of G1 from the subset S and then proceed as before. This leads to the same lower bounds as in the constant-degree case. We have the following theorem for expander graphs:

See 5.3

To verify that the graph is indeed an h1-expander graph for some constant h1 > 0, we follow a similar pattern as in the proof of expansion for maximum matching in Section 3.3. Consider the reduction graph with node set V = U ∪ U′, where U is the node set of G0 and U′ is the node set of the expander G1. Let S be some set of nodes (we do not bound |S|). We have the following observations.

{observation}

Let 0 < c < 1/2, possibly depending on n. If c|U′| ≤ |S ∩ U′| ≤ (1−c)|U′|, then |E(S,S¯)| ≥ ch0|U′|. {claimproof} Consider first the case |S ∩ U′| ≤ |U′|/2. The expansion of G1 guarantees that at least h0|S ∩ U′| edges of G1 leave S ∩ U′, and all of them lie in E(S,S¯); hence |E(S,S¯)| ≥ h0|S ∩ U′| ≥ ch0|U′|. Otherwise, apply the analogous argument to the set U′ ∖ S.

{observation}

|E(S,S¯)| ≥ |U ∩ S| − |U′ ∩ S|. {claimproof} Every node in U ∩ S has a matching partner in U′ (via the bijection used to map the nodes of U to U′). In total, the nodes of U ∩ S have |U ∩ S| edges going to their matching partners, of which at most |U′ ∩ S| lie in S. Hence, |E(S,S¯)| ≥ |U ∩ S| − |U′ ∩ S|.

{observation}

Let 0 < c < 1/2 be a constant. If |U ∩ S| ≤ c|U′ ∩ S|, then |E(S,S¯)| ≥ (1−c)|U′ ∩ S|. {claimproof} Consider the matching partners of the nodes in U′ ∩ S: there are exactly |U′ ∩ S| such partners in U, of which at most |U ∩ S| ≤ c|U′ ∩ S| lie in S, and the others lie in U ∖ S. Hence |E(S,S¯)| ≥ |E(U ∖ S, U′ ∩ S)| ≥ (1−c)|U′ ∩ S|.

Lemma 5.8.

The reduction graph is an expander.

Proof 5.9.

Fix a set S ⊆ V of nodes with |S| ≤ n⁴ + 2n² = |U ∪ U′|/2. We consider different ranges of |U′ ∩ S|, and show that for each of them, |E(S,S¯)| ≥ h1|S| for some constant h1 > 0.

  1. If |U′ ∩ S| > 0.9|U′|, the inequalities |U| ≤ |U′| and |S| ≤ |U ∪ U′|/2 imply |U ∩ S| ≤ 0.1|U′| < (1/9)|U′ ∩ S|. The third observation above, applied with c = 1/9, gives |E(S,S¯)| ≥ (8/9)|U′ ∩ S| ≥ 0.8|U′| = 0.4|U ∪ U′| ≥ 0.8|S|.

  2. If 0.1|U′| ≤ |U′ ∩ S| ≤ 0.9|U′|, then the first observation above, applied with c = 0.1, gives |E(S,S¯)| ≥ 0.1h0|U′|. Since |U| ≤ |U′| and |S| ≤ |U ∪ U′|/2, this implies |E(S,S¯)| ≥ 0.05h0|U ∪ U′| ≥ 0.1h0|S|.

  3. If |U′ ∩ S| < 0.1|U′|, then:

    (a) If |U ∩ S| > 2|U′ ∩ S|, add 0.5|U ∩ S| − 1.5|U′ ∩ S| to both sides of the inequality and multiply by 2/3 to get |U ∩ S| − |U′ ∩ S| > (1/3)(|U ∩ S| + |U′ ∩ S|) = |S|/3. By the second observation above, |E(S,S¯)| ≥ |U ∩ S| − |U′ ∩ S| > |S|/3.

    (b) If |U ∩ S| ≤ 2|U′ ∩ S|, then |S| = |U ∩ S| + |U′ ∩ S| ≤ 3|U′ ∩ S|. Since |U′ ∩ S| < 0.1|U′| ≤ |U′|/2, the expander edges on U′ guarantee that |E(S,S¯)| ≥ h0|U′ ∩ S| ≥ (h0/3)|S|.

5.3 Power-law Graph

Let d = 3. We show our densest subgraph lower bounds for β > 2.74, using a reduction graph similar to the one in the constant-degree case. It consists of the vector and matrix gadgets as before, along with new nodes that we introduce so that all node degrees are independent of the input: we add a cycle C_i^u on four new nodes for each vector gadget, and a cycle C_ij on four new nodes for each matrix gadget.

  • 2n2n vector gadgets labelled UiU_{i} resp. ViV_{i}, for 1in1\leq i\leq n. The nodes in each gadget are labelled Ui[j]U_{i}[j] resp. Vi[j]V_{i}[j], for 1jn1\leq j\leq n.

  • 2n cycles C_i^u and C_j^v on 4 nodes each, with the nodes labelled C_i^w[x] for w ∈ {u,v} and x ∈ {a,b,c,d}.

  • n2n^{2} matrix gadgets labelled MijM_{ij}, for 1i,jn1\leq i,j\leq n. The missing edge in the gadget MijM_{ij} is between the two nodes labelled Mij[0]M_{ij}[0] and Mij[1]M_{ij}[1].

  • n² cycles C_ij, each on four nodes C_ij[x] for x ∈ {a,b,c,d}.

  • An edge from Ui[j]U_{i}[j] to Mij[0]M_{ij}[0], and an edge from Vj[i]V_{j}[i] to Mij[1]M_{ij}[1] for all 1i,jn1\leq i,j\leq n.

  • If Mij=0M_{ij}=0, remove two arbitrary edges, say, (Mij[a],Mij[b]),(Mij[c],Mij[d])(M_{ij}[a],M_{ij}[b]),(M_{ij}[c],M_{ij}[d]), and the edges (Cij[a],Cij[b]),(Cij[c],Cij[d])(C_{ij}[a],C_{ij}[b]),(C_{ij}[c],C_{ij}[d]), and add the edges (Mij[x],Cij[x])(M_{ij}[x],C_{ij}[x]) for x{a,b,c,d}x\in\{a,b,c,d\} to the graph.

  • Given an input vector uu, for each i[n]i\in[n], do the following if ui=0u_{i}=0. Remove two arbitrary edges from the vector gadget UiU_{i}, say (Ui[a],Ui[b]),(Ui[c],Ui[d])(U_{i}[a],U_{i}[b]),(U_{i}[c],U_{i}[d]), and also remove the edges (Ciu[a],Ciu[b]),(Ciu[c],Ciu[d])(C_{i}^{u}[a],C_{i}^{u}[b]),(C_{i}^{u}[c],C_{i}^{u}[d]). Add the edges (Ui[x],Ciu[x])(U_{i}[x],C_{i}^{u}[x]) for x{a,b,c,d}x\in\{a,b,c,d\}. Similarly for an input vector vv.

Note that each node in a vector gadget always has degree 2d + 1, and each node in a matrix gadget has degree 2d, regardless of the input. Further, the nodes of all the cycles have degree exactly 2 for all inputs. The degree distribution of the reduction graph, written as pairs (degree, number of nodes), is therefore (2, 4n²+8n), (2d, n⁴), (2d+1, 2n²). Choose N such that

N > ζ(β)·max{(4n²+8n)·2^β, n⁴·(2d)^β, 2n²·(2d+1)^β}.

In a power-law graph with NN nodes, the sum of degrees of all nodes is given by ζ(β1)N/ζ(β)\zeta(\beta-1)\cdot N/\zeta(\beta), while the number of nodes of degree 11 is N/ζ(β)N/\zeta(\beta). Thus for all β\beta such that ζ(β1)<2\zeta(\beta-1)<2, more than half the total degree comes from nodes of degree 11, since

N′_1 = N/ζ(β) > (ζ(β−1)/2)·N/ζ(β) = (Σ_{v∈V} deg(v))/2   (3)

This holds in particular for all β > 2.74. We thus first construct the reduction graph as above on n⁴ + 6n² + 8n nodes; by the choice of N, the number of its nodes of each degree does not exceed the corresponding N′_d. Then, for every further node of degree d′ > 1 that the degree distribution requires, we add a new node and attach d′ new nodes of degree 1 to it, forming a star. Equation 3 guarantees that there are enough degree-1 nodes available for all of these stars. If any degree-1 requirements remain afterwards, we add a perfect matching on that many nodes.
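
A counting sketch of this padding step (in Python), under the convention N′_d ≈ N/(ζ(β)·d^β) used above; the function and argument names are ours, rounding and parity issues are glossed over, and only the numbers of stars and matching edges are computed, not the actual edge list:

from math import floor

def power_law_padding(base_counts, N, beta, zeta_beta):
    """Count the star centres (per degree) and extra matching edges needed so that,
    together with the reduction graph whose degree histogram is base_counts
    (degree -> number of nodes), the final graph has roughly N/(zeta(beta)*d**beta)
    nodes of every degree d.  This mirrors the argument above; it is not the
    actual edge-by-edge construction."""
    def quota(d):
        return floor(N / (zeta_beta * d ** beta))

    max_deg = int((N / zeta_beta) ** (1.0 / beta))  # largest degree with quota >= 1
    degree_one_budget = quota(1)                    # consumed by star leaves
    stars = {}
    for d in range(2, max_deg + 1):
        stars[d] = max(quota(d) - base_counts.get(d, 0), 0)
        degree_one_budget -= d * stars[d]           # each star centre needs d leaves
    # Equation (3) guarantees the budget stays non-negative; the leftover
    # degree-1 quota is realised by a perfect matching on that many nodes.
    matching_edges = max(degree_one_budget, 0) // 2
    return stars, matching_edges

# With d = 3, the reduction graph's histogram stated above would be
# base_counts = {2: 4*n*n + 8*n, 6: n**4, 7: 2*n*n}.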

None of the stars or matching edges can be part of a subgraph of density > d, since these components have density < 1. Thus any subgraph of density > d must lie inside our reduction graph. Further, the cycle gadgets can be removed from any subgraph of density > d without decreasing its density, since each of their nodes has degree exactly 2 ≤ d − 1 (as in the expander case). Thus ρ ≥ d + 1/(n²+2n) if and only if uMv = 1, which proves our claim.

References

  • [1] Amir Abboud and Søren Dahlgaard. Popular conjectures as a barrier for dynamic planar graph algorithms. In FOCS, pages 477–486. IEEE Computer Society, 2016.
  • [2] Ittai Abraham, Shiri Chechik, Daniel Delling, Andrew V. Goldberg, and Renato F. Werneck. On dynamic approximate shortest paths for planar graphs with worst-case costs. In Robert Krauthgamer, editor, Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10-12, 2016, pages 740–753. SIAM, 2016. doi:10.1137/1.9781611974331.ch53.
  • [3] Ittai Abraham, Shiri Chechik, and Cyril Gavoille. Fully dynamic approximate distance oracles for planar graphs via forbidden-set distance labels. In Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing, STOC ’12, page 1199–1218, New York, NY, USA, 2012. Association for Computing Machinery. doi:10.1145/2213977.2214084.
  • [4] William Aiello, Fan Chung Graham, and Linyuan Lu. A random graph model for power law graphs. Exp. Math., 10(1):53–66, 2001. doi:10.1080/10586458.2001.10504428.
  • [5] Bertie Ancona. Conditional Lower Bounds for Graph Sensitivity Problems. Master’s thesis, Massachusetts Institute of Technology, Cambridge, MA, 2019.
  • [6] Bertie Ancona, Monika Henzinger, Liam Roditty, Virginia Vassilevska Williams, and Nicole Wein. Algorithms and hardness for diameter in dynamic graphs. In Christel Baier, Ioannis Chatzigiannakis, Paola Flocchini, and Stefano Leonardi, editors, 46th International Colloquium on Automata, Languages, and Programming, ICALP 2019, July 9-12, 2019, Patras, Greece, volume 132 of LIPIcs, pages 13:1–13:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.ICALP.2019.13.
  • [7] Thiago Bergamaschi, Monika Henzinger, Maximilian Probst Gutenberg, Virginia Vassilevska Williams, and Nicole Wein. New techniques and fine-grained hardness for dynamic near-additive spanners. In Dániel Marx, editor, Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms, SODA 2021, Virtual Conference, January 10 - 13, 2021, pages 1836–1855. SIAM, 2021. doi:10.1137/1.9781611976465.110.
  • [8] Panagiotis Charalampopoulos and Adam Karczmarz. Single-source shortest paths and strong connectivity in dynamic planar graphs. In ESA, volume 173 of LIPIcs, pages 31:1–31:23. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
  • [9] Julia Chuzhoy. Decremental all-pairs shortest paths in deterministic near-linear time. In Samir Khuller and Virginia Vassilevska Williams, editors, STOC ’21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, pages 626–639. ACM, 2021. doi:10.1145/3406325.3451025.
  • [10] Julia Chuzhoy and Thatchaphol Saranurak. Deterministic algorithms for decremental shortest paths via layered core decomposition. In Dániel Marx, editor, Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms, SODA 2021, Virtual Conference, January 10 - 13, 2021, pages 2478–2496. SIAM, 2021. doi:10.1137/1.9781611976465.147.
  • [11] Søren Dahlgaard. On the hardness of partially dynamic graph problems and connections to diameter. CoRR, abs/1602.06705, 2016. URL: http://arxiv.org/abs/1602.06705, arXiv:1602.06705.
  • [12] Gramoz Goranci, Monika Henzinger, and Pan Peng. Dynamic effective resistances and approximate schur complement on separable graphs. In Yossi Azar, Hannah Bast, and Grzegorz Herman, editors, 26th Annual European Symposium on Algorithms, ESA 2018, August 20-22, 2018, Helsinki, Finland, volume 112 of LIPIcs, pages 40:1–40:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018. doi:10.4230/LIPIcs.ESA.2018.40.
  • [13] Gramoz Goranci, Harald Räcke, Thatchaphol Saranurak, and Zihan Tan. The expander hierarchy and its applications to dynamic graph algorithms. In Dániel Marx, editor, Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms, SODA 2021, Virtual Conference, January 10 - 13, 2021, pages 2212–2228. SIAM, 2021. doi:10.1137/1.9781611976465.132.
  • [14] Fan Chung Graham. Large dynamic graphs: What can researchers learn from them? SIAM News, 37(3), 2004.
  • [15] Manoj Gupta and Richard Peng. Fully dynamic (1+ ϵ\epsilon)-approximate matchings. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, pages 548–557, 2013. doi:10.1109/FOCS.2013.65.
  • [16] Monika Henzinger, Sebastian Krinninger, Danupon Nanongkai, and Thatchaphol Saranurak. Unifying and strengthening hardness for dynamic problems via the online matrix-vector multiplication conjecture. In STOC, pages 21–30. ACM, 2015.
  • [17] Monika Henzinger, Andrea Lincoln, and Barna Saha. The complexity of average-case dynamic subgraph counting. In Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 459–498. SIAM, 2022.
  • [18] Monika Rauch Henzinger, Philip N. Klein, Satish Rao, and Sairam Subramanian. Faster shortest-path algorithms for planar graphs. J. Comput. Syst. Sci., 55(1):3–23, 1997. doi:10.1006/jcss.1997.1493.
  • [19] Giuseppe F. Italiano, Adam Karczmarz, Jakub Lacki, and Piotr Sankowski. Decremental single-source reachability in planar digraphs. In Hamed Hatami, Pierre McKenzie, and Valerie King, editors, Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 1108–1121. ACM, 2017. doi:10.1145/3055399.3055480.
  • [20] Giuseppe F. Italiano, Yahav Nussbaum, Piotr Sankowski, and Christian Wulff-Nilsen. Improved algorithms for min cut and max flow in undirected planar graphs. In Lance Fortnow and Salil P. Vadhan, editors, Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC 2011, San Jose, CA, USA, 6-8 June 2011, pages 313–322. ACM, 2011. doi:10.1145/1993636.1993679.
  • [21] Wenyu Jin and Xiaorui Sun. Fully dynamic c-edge connectivity in subpolynomial time. CoRR, abs/2004.07650, 2020. URL: https://arxiv.org/abs/2004.07650, arXiv:2004.07650.
  • [22] P. N. Klein and S. Subramanian. A fully dynamic approximation scheme for shortest paths in planar graphs. Algorithmica, 22(3):235–249, Nov 1998. doi:10.1007/PL00009223.
  • [23] David Peleg and Shay Solomon. Dynamic (1+ϵ\epsilon)-approximate matchings: A density-sensitive approach. In Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms, pages 712–729. SIAM, 2016.
  • [24] Saurabh Sawlani and Junxing Wang. Near-optimal fully dynamic densest subgraph. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, page 181–193, New York, NY, USA, 2020. Association for Computing Machinery. doi:10.1145/3357713.3384327.
  • [25] Sairam Subramanian. A fully dynamic data structure for reachability in planar digraphs. In Thomas Lengauer, editor, Algorithms - ESA ’93, First Annual European Symposium, Bad Honnef, Germany, September 30 - October 2, 1993, Proceedings, volume 726 of Lecture Notes in Computer Science, pages 372–383. Springer, 1993. doi:10.1007/3-540-57273-2_72.

Appendix A Related Work

We describe further related work in this section.

Planar graphs. Prior work for fully dynamic shortest paths in planar graphs is summarized in Table 8 and Table 9. There is one further lower bound result in dynamic planar graphs, namely for bipartite maximum weighted matching, showing a tradeoff of max{u,q} = Ω(N^{1/2−ε}), where u denotes the update time and q the query time [1]. The planar graphs used in these lower bound constructions all have constant degree.

There is also further work on upper bounds for planar graphs. Italiano et al. [20] designed a fully dynamic algorithm for maximum flow and minimum cut with Õ(N^{2/3}) update time in planar graphs. In the deletions-only setting in planar digraphs, Italiano et al. [19] gave an algorithm for single-source reachability with Õ(1) time per operation.

Other graph classes. In N\sqrt{N}-separable graphs Goranci et al. [12] give almost tight upper and lower bounds for maintaining (1+ϵ)(1+\epsilon)-approximations of the all-pairs effective resistances.

In graphs with constant arboricity, Peleg and Solomon [23] gave a (1+ε)-approximate matching algorithm with constant update time.

Insertions-only and deletions-only lower bounds. In general graphs, Dahlgaard [11] presented lower bounds of Ω(N^{1−o(1)}) for incremental or decremental maximum cardinality bipartite matching, of Ω(m^{1−o(1)}) for incremental or decremental maximum flow in directed and weighted sparse graphs, and of Ω(N^{1/2−o(1)}) for incremental or decremental (4/3−δ)-approximation of the diameter of an unweighted graph, for any small constant δ > 0. These lower bounds for diameter were later improved in [6]. Results for dynamic near-additive spanners were given in [7].

Sensitivity model. Ancona [5] studied diameter approximation and related problems in dynamic constant-degree graphs. She focused on a different model than ours, called the sensitivity model, and similarly to us, proved conditional lower bounds using reductions to the OMv conjecture.

Problem Ref. Assumption Lower bound
APSP, weighted [1] APSP conjecture u·q = Ω(N^{1−o(1)})
APSP, unit weights [1] OMv conjecture max{u²·q, q²·u} = Ω(N^{1−o(1)})
(s,t)-distance, girth, diameter [1] OMv conjecture max{u,q} = Ω(N^{1/2−ε})
Table 8: Lower bounds for fully dynamic shortest-paths algorithms in planar graphs, where u denotes the time per update and q the time per query.
Problem Ref. Update Query
undirected (1+ε)-approx. (s,t)-distance [22] Õ(n^{2/3}) Õ(n^{2/3})
undirected (1+ε)-approx. (s,t)-distance [3] Õ(n^{1/2}) Õ(n^{1/2})
undirected (s,t)-distance with treewidth k [2] O(k³ log n) O(k² log n · log(k log n))
SSSP on weighted digraphs [8] Õ(n^{4/5}) O(log² n)
Table 9: Upper bounds for fully dynamic shortest-path algorithms in planar graphs.

Appendix B Partially Dynamic Lower Bounds

B.1 Dynamic (s,t)(s,t)-Distance

We show that our dynamic (s,t)(s,t)-distance lower bound for constant-degree graphs also holds for partially dynamic algorithms using the following reduction graph.

  • A (logn)(\log n)-depth binary forest with a single tree. The internal nodes are marked L1L_{1} and the leaves L2L_{2}; the root of the tree is the source node ss, and the nodes of L2L_{2} are marked as L2[i]L_{2}[i] for 1in1\leq i\leq n.

  • A (logn)(\log n)-depth binary forest with nn trees. The internal nodes are marked L3L_{3} and the leaves L4L_{4}. The roots of each of the nn trees are marked as L3[i]L_{3}[i], for 1in1\leq i\leq n, and the leaves of the tree with root L3[i]L_{3}[i] are marked L4[i,j]L_{4}[i,j], for 1jn1\leq j\leq n.

  • An edge from L2[i]L_{2}[i] to L3[i]L_{3}[i] for 1in1\leq i\leq n.

  • nn paths P[i]P[i] on nn nodes. The nodes of the path P[i]P[i] are marked P[i,n+1j]P[i,n+1-j], 1jn1\leq j\leq n. Edges from L4[i,j]L_{4}[i,j] to P[i,j]P[i,j] for all 1i,jn1\leq i,j\leq n.

  • A (logn)(\log n)-depth binary forest with nn trees. The internal nodes are marked L5L_{5} and the leaves L6L_{6}. The roots of each of the nn trees are marked as L5[i]L_{5}[i], for 1in1\leq i\leq n, and the leaves of the tree with root L5[i]L_{5}[i] are marked L6[i,j]L_{6}[i,j], for 1jn1\leq j\leq n.

  • An edge from P[i,1]P[i,1] to L5[i]L_{5}[i] for all 1in1\leq i\leq n.

  • A copy of the above structure, with node sets marked RiR_{i} and Q[i]Q[i] instead of LiL_{i} and P[i]P[i]. The root of the single tree of R1R_{1} is the target node tt.

  • For the matrix MM, add the edge (L6[i,j],R6[j,i])(L_{6}[i,j],R_{6}[j,i]) if Mij=1M_{ij}=1.

  • Deletion of edges for each input pair (uj,vj)(u^{j},v^{j}) as detailed next.

We perform the following deletions upon the arrival of the j-th input vector pair (u^j, v^j):

  • Given the jthj^{th} input vector uju^{j}, for each i[n]i\in[n], delete the edge (L4[i,j],P[i,j])(L_{4}[i,j],P[i,j]) if uij=0u_{i}^{j}=0.

  • Given the jthj^{th} input vector vjv^{j}, for each i[n]i\in[n], delete the edge (R4[i,j],Q[i,j])(R_{4}[i,j],Q[i,j]) if vij=0v_{i}^{j}=0.

Before the (j+1)-th input vector pair arrives, delete all edges (L4[i,j], P[i,j]) and (R4[i,j], Q[i,j]), for all 1 ≤ i ≤ n. It is easy to see that there is an s-t path of length 6 log n + 5 + 2j if and only if u^j M v^j = 1.
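
The deletion schedule can be summarized by the following driver (a sketch in Python; the decremental data structure `oracle` with delete_edge and st_distance is hypothetical, and the tuple node labels mirror the names in the construction):

def run_decremental_distance_reduction(oracle, vector_pairs, n, log_n):
    """Deletions-only driver: O(n) deletions and one (s,t)-distance query per pair."""
    answers = []
    for j, (u, v) in enumerate(vector_pairs, start=1):
        deleted = set()
        for i in range(1, n + 1):
            if u[i - 1] == 0:
                oracle.delete_edge(("L4", i, j), ("P", i, j))
                deleted.add(("L", i))
            if v[i - 1] == 0:
                oracle.delete_edge(("R4", i, j), ("Q", i, j))
                deleted.add(("R", i))
        # u^j M v^j = 1 iff an s-t path of length 6 log n + 5 + 2j exists
        answers.append(1 if oracle.st_distance() == 6 * log_n + 5 + 2 * j else 0)
        # the remaining column-j edges are deleted before the next pair arrives
        for i in range(1, n + 1):
            if ("L", i) not in deleted:
                oracle.delete_edge(("L4", i, j), ("P", i, j))
            if ("R", i) not in deleted:
                oracle.delete_edge(("R4", i, j), ("Q", i, j))
    return answers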

Since the reduction graph consists of Θ(n²) nodes and we make O(n) updates and one query for each input pair, we get the same lower bounds as in the fully dynamic setting. Note that this lower bound can be made to work in the insertions-only setting as well, by reversing the paths P and Q.

B.2 Dynamic Maximum Matching

We use the following reduction graph to make our fully dynamic maximum matching lower bounds hold for the partially dynamic setting as well.

  • A reduction gadget with nn subgadgets of size 2n+22n+2, on a set L1L2L_{1}\cup L_{2}. The subgadgets are labelled LE[j]LE[j] for 1jn1\leq j\leq n, and the nodes of subgadget LE[j]LE[j] are labelled L1[j,i]L_{1}[j,i] or L2[j,i]L_{2}[j,i] for 0in0\leq i\leq n. The path in each subgadget goes from L1[j,0]L_{1}[j,0] to L2[j,n]L_{2}[j,n].

  • A reduction gadget with n subgadgets of size 2n+2, on a set L3 ∪ L4. The subgadgets are labelled LF[i] for 1 ≤ i ≤ n, and the nodes of subgadget LF[i] are labelled L3[i,j] or L4[i,j] for 0 ≤ j ≤ n. The path in each subgadget goes from L3[i,0] to L4[i,n].

  • Edges from L2[j,i]L_{2}[j,i] to L3[i,j]L_{3}[i,j] for all 1i,jn1\leq i,j\leq n.

  • A reduction gadget with nn subgadgets of size 2n+22n+2, on a set L5L6L_{5}\cup L_{6}. The subgadgets are labelled LG[i]LG[i] for 1in1\leq i\leq n, and the nodes of subgadget LG[i]LG[i] are labelled L5[i,j]L_{5}[i,j] or L6[i,j]L_{6}[i,j] for 0jn0\leq j\leq n. The path in each subgadget goes from L5[i,0]L_{5}[i,0] to L6[i,n]L_{6}[i,n].

  • An edge from L4[i,n]L_{4}[i,n] to L5[i,0]L_{5}[i,0] for all 1in1\leq i\leq n.

  • A copy of the above structure, with node sets marked RiR_{i} instead of LiL_{i}.

  • For the matrix MM, add the edge (L6[i,j],R6[j,i])(L_{6}[i,j],R_{6}[j,i]) if Mij=1M_{ij}=1.

  • Deletion of edges for each input pair (uj,vj)(u^{j},v^{j}) as detailed next.

We perform the following deletions upon the arrival of the j-th input vector pair (u^j, v^j):

  • Delete the edge (L1[j,0],L2[j,0])(L_{1}[j,0],L_{2}[j,0]).

  • Given the jthj^{th} input vector uju^{j}, for each i[n]i\in[n], delete the edge (L2[j,i],L3[i,j])(L_{2}[j,i],L_{3}[i,j]) if uij=0u_{i}^{j}=0.

  • Given the jthj^{th} input vector vjv^{j}, for each i[n]i\in[n], delete the edge (R2[j,i],R3[i,j])(R_{2}[j,i],R_{3}[i,j]) if vij=0v_{i}^{j}=0.

Before the (j+1)-th input vector pair arrives, delete the edges (L2[j,0], L1[j,1]), (R2[j,0], R1[j,1]), (L2[j,i], L3[i,j]), and (R2[j,i], R3[i,j]) for all 1 ≤ i ≤ n. It is easy to see that there is a matching with only 4j − 2 nodes unmatched if and only if u^j M v^j = 1.
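
The analogous deletions-only driver for matching is sketched below (again with a hypothetical interface; delete_edge and unmatched_count are our own names, where unmatched_count() denotes the number of nodes left unmatched by a maximum matching in the current graph):

def run_decremental_matching_round(oracle, u, v, j, n):
    """Process the j-th pair: O(n) deletions, one query, then clean up before pair j+1."""
    oracle.delete_edge(("L1", j, 0), ("L2", j, 0))
    deleted = set()
    for i in range(1, n + 1):
        if u[i - 1] == 0:
            oracle.delete_edge(("L2", j, i), ("L3", i, j))
            deleted.add(("L", i))
        if v[i - 1] == 0:
            oracle.delete_edge(("R2", j, i), ("R3", i, j))
            deleted.add(("R", i))
    # u^j M v^j = 1 iff only 4j - 2 nodes are left unmatched (per the claim above)
    answer = 1 if oracle.unmatched_count() == 4 * j - 2 else 0
    # deletions performed before the (j+1)-th pair arrives
    oracle.delete_edge(("L2", j, 0), ("L1", j, 1))
    oracle.delete_edge(("R2", j, 0), ("R1", j, 1))
    for i in range(1, n + 1):
        if ("L", i) not in deleted:
            oracle.delete_edge(("L2", j, i), ("L3", i, j))
        if ("R", i) not in deleted:
            oracle.delete_edge(("R2", j, i), ("R3", i, j))
    return answer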

Since the reduction graph consists of Θ(n²) nodes and we make O(n) updates and one query for each input pair, we get the same lower bounds as in the fully dynamic setting. This reduction can be made to work in the insertions-only setting by reversing the above construction.