Distributed Algorithms for Matching in Hypergraphs
Abstract
We study the $d$-Uniform Hypergraph Matching ($d$-UHM) problem: given an $n$-vertex hypergraph $G$ where every hyperedge is of size $d$, find a maximum cardinality set of disjoint hyperedges. For $d \ge 3$, the problem of finding the maximum matching is NP-complete, and was one of Karp's 21 NP-complete problems. In this paper we are interested in the problem of finding matchings in hypergraphs in the massively parallel computation (MPC) model that is a common abstraction of MapReduce-style computation. In this model, we present the first three parallel algorithms for $d$-Uniform Hypergraph Matching, and we analyse them in terms of resources such as memory usage, rounds of communication needed, and approximation ratio. The highlights include:
• An $O(\log n)$-round $d$-approximation algorithm that uses $\tilde{O}(dn)$ space per machine.
• A 3-round, $O(d^2)$-approximation algorithm that uses $\tilde{O}(\sqrt{nm})$ space per machine.
• A 3-round algorithm that computes a subgraph containing a $d(d-1+1/d)^2$-approximation, using $\tilde{O}(\sqrt{nm})$ space per machine for linear hypergraphs, and $\tilde{O}(n\sqrt{nm})$ in general.
For the third algorithm, we introduce the concept of HyperEdge Degree Constrained Subgraph (HEDCS), which can be of independent interest. We show that an HEDCS contains a fractional matching with total value at least $\frac{\mu(G)}{d(d-1+1/d)}$, where $\mu(G)$ is the size of the maximum matching in the hypergraph. Moreover, we investigate the experimental performance of these algorithms both on random input and real instances. Our results support the theoretical bounds and confirm the trade-offs between the quality of approximation and the speed of the algorithms.
1 Introduction
As massive graphs become more ubiquitous, the need for scalable parallel and distributed algorithms that solve graph problems grows as well. In recent years, we have seen progress in many graph problems (e.g. spanning trees, connectivity, shortest paths [4, 5]) and, most relevant to this work, matchings [27, 22]. A natural generalization of matchings in graphs is to matchings in hypergraphs. Hypergraph Matching is an important problem with many applications such as capital budgeting, crew scheduling, facility location, scheduling airline flights [53], forming a coalition structure in multi-agent systems [50] and determining the winners in combinatorial auctions [49] (see [56] for a partial survey). Although matching problems in graphs are one of the most well-studied problems in algorithms and optimization, the NP-hard problem of finding a maximum matching in a hypergraph is not as well understood.
In this work, we are interested in the problem of finding matchings in very large hypergraphs, large enough that we cannot solve the problem on one computer. We develop, analyze and experimentally evaluate three parallel algorithms for hypergraph matchings in the MPC model. Two of the algorithms are generalizations of parallel algorithms for matchings in graphs. The third algorithm develops new machinery which we call a hyper-edge degree constrained subgraph (HEDCS), generalizing the notion of an edge-degree constrained subgraph (EDCS). The EDCS has been recently used in parallel and dynamic algorithms for graph matching problems [16, 17, 6]. We will show a range of algorithm tradeoffs between approximation ratio, rounds, memory and computation, evaluated both as worst case bounds, and via computational experiments.
More formally, a hypergraph $G$ is a pair $G = (V, E)$ where $V$ is the set of vertices and $E$ is the set of hyperedges. A hyperedge $e \in E$ is a nonempty subset of the vertices. The cardinality of a hyperedge is the number of vertices it contains. When every hyperedge has the same cardinality $d$, the hypergraph is said to be $d$-uniform. A hypergraph is linear if the intersection of any two hyperedges has at most one vertex. A hypergraph matching is a subset of the hyperedges $M \subseteq E$ such that every vertex is covered at most once, i.e. the hyperedges are mutually disjoint. This notion generalizes matchings in graphs. The cardinality of a matching is the number of hyperedges it contains. A matching is called maximum if it has the largest cardinality of all possible matchings, and maximal if it is not contained in any other matching. In the $d$-Uniform Hypergraph Matching Problem (also referred to as Set Packing or $d$-Set Packing), a $d$-uniform hypergraph is given and one needs to find the maximum cardinality matching.
We adopt the most restrictive MapReduce-like model of modern parallel computation among [40, 29, 11, 4], the Massively Parallel Computation (MPC) model of [11]. This model is widely used to solve different graph problems such as matching, vertex cover [44, 2, 7, 6, 27], independent set [27, 34], as well as many other algorithmic problems. In this model, we have $k$ machines (processors), each with space $s$. $N$ is the size of the input and our algorithms will satisfy $k \cdot s = \tilde{O}(N)$, which means that the total space in the system is only a polylogarithmic factor more than the input size. The computation proceeds in rounds. At the beginning of each round, the data (e.g. vertices and edges) is distributed across the machines. In each round, a machine performs local computation on its data (of size $s$), and then sends messages to other machines for the next round. Crucially, the total amount of communication sent or received by a machine is bounded by $s$, its space. For example, a machine can send one message of size $s$, or $s$ messages of size 1. It cannot, however, broadcast a size-$s$ message to every machine. Each machine treats the received messages as the input for the next round. Our model limits the number of machines and the memory per machine to be substantially sublinear in the size of the input. On the other hand, no restrictions are placed on the computational power of any individual machine. The main complexity measure is therefore the memory per machine and the number of rounds required to solve a problem, which we consider to be the "parallel time" of the algorithm. For the rest of the paper, $G(V,E)$ is a $d$-uniform hypergraph with $n$ vertices and $m$ hyperedges, and when the context is clear, we will simply refer to $G$ as a graph and to its hyperedges as edges. $M^*(G)$ denotes a maximum matching in $G$, and $\mu(G)$ is the size of that matching. We define $d_G(v)$ to be the degree of a vertex $v$ (the number of hyperedges that $v$ belongs to) in $G$. When the context is clear, we will omit indexing by $G$ and simply denote it $d(v)$.
2 Our contribution and results.
We design and implement algorithms for the $d$-UHM in the MPC model. We will give three different algorithms, demonstrating different trade-offs between the model's parameters. Our algorithms are inspired by methods to find maximum matchings in graphs, but require developing significant new tools to address hypergraphs. We are not aware of previous algorithms for hypergraph matching in the MPC model. First we generalize the randomized coreset algorithm of [7], which finds a 3-round $O(1)$-approximation for matching in graphs. Our algorithm partitions the graph into random pieces across the machines, and then simply picks a maximum matching of each machine's subgraph. We show that this natural approach results in an $O(d^2)$-approximation. While the algorithmic generalization is straightforward, the analysis requires several new ideas.
Theorem (Theorem 4.2 (restated)).
There exists an algorithm that with high probability computes an $O(d^2)$-approximation for the $d$-UHM problem in 3 rounds on machines of memory $\tilde{O}(\sqrt{nm})$.
Our second result concerns the MPC model with per-machine memory $\tilde{O}(dn)$. We adapt the sampling technique and post-processing strategy of [44] to construct maximal matchings in hypergraphs, and are able to show that in $d$-uniform hypergraphs, this technique yields a maximal matching, and thus a $d$-approximation to the $d$-UHM problem, in $O(\log n)$ rounds.
Theorem (Theorem 5.1 (restated)).
There exists an algorithm that, given a $d$-uniform hypergraph $G$, with high probability computes a maximal matching in $G$ in $O(\log n)$ rounds on machines of memory $\tilde{O}(dn)$.
Our third result generalizes the edge degree constrained subgraphs (EDCS), originally introduced by Bernstein and Stein [16] for maintaining large matchings in dynamic graphs, and later used for several other problems including matching and vertex cover in the streaming and MPC models [6, 17]. We call these generalized subgraphs hyper-edge degree constrained subgraphs (HEDCS). We show that they exist for specific parameters, and that they contain a good approximation for the $d$-UHM problem. We prove that an HEDCS of a hypergraph $G$, with well chosen parameters, contains a fractional matching with a value at least $\frac{\mu(G)}{d(d-1+1/d)}$, and that the underlying fractional matching is special in the sense that for each hyperedge, it either assigns a value of $1$ or a value less than some chosen $\epsilon$. We call such a fractional matching an $\epsilon$-restricted fractional matching. For Theorem 6.15, we compute an HEDCS of the hypergraph in a distributed fashion. This procedure relies on the robustness properties that we prove for HEDCSs under sampling.
Theorem (Theorem 6.4 (restated informally)).
Let $G$ be a $d$-uniform hypergraph and $0 < \epsilon < 1$. There exists an HEDCS of $G$ that contains an $\epsilon$-restricted fractional matching with total value at least $(1 - O(\epsilon))\,\frac{\mu(G)}{d(d-1+1/d)}$.
Theorem (Theorem 6.15 (restated)).
There exists an algorithm that, given a $d$-uniform hypergraph $G$ and suitable HEDCS parameters $\beta$ and $\beta^-$, can construct an HEDCS of $G$ in 2 rounds on machines of memory $\tilde{O}(n\sqrt{nm})$ in general and $\tilde{O}(\sqrt{nm})$ for linear hypergraphs.
Theorem (Corollary 6.16 (restated)).
There exists an algorithm that with high probability achieves a $d(d-1+1/d)^2$-approximation to the $d$-Uniform Hypergraph Matching problem in 3 rounds.
Table 1 summarizes our results.
Approximation ratio | Rounds | Memory per machine | Computation per round
$O(d^2)$ | 3 | $\tilde{O}(\sqrt{nm})$ | Exponential
$d$ | $O(\log n)$ | $\tilde{O}(dn)$ | Polynomial
$d(d-1+1/d)^2$ | 3 | $\tilde{O}(n\sqrt{nm})$ in general, $\tilde{O}(\sqrt{nm})$ for linear hypergraphs | Polynomial
Experimental results. We implement our algorithms in a simulated MPC model environment and test them both on random and real-world instances. Our experimental results are consistent with the theoretical bounds on most instances, and show that there is a trade-off between the extent to which the algorithms use the power of parallelism and the quality of the approximations. This trade-off is illustrated by comparing the number of rounds and the performance of the algorithms on machines with the same memory size. See Section 7 for more details.
Our techniques. For Theorem 4.2 and Theorem 6.15, we use the concept of composable coresets, which has been employed in several distributed optimization models such as the streaming and MapReduce models [1, 47, 8, 9, 10, 37]. Roughly speaking, the main idea behind this technique is as follows: first partition the data into smaller parts. Then compute a representative solution, referred to as a coreset, from each part. Finally, obtain a solution by solving the optimization problem over the union of the coresets for all parts. We use a randomized variant of composable coresets, first introduced in [46], where the above idea is applied to a random clustering of the data. This randomized variant has since been used for Graph Matching and Vertex Cover [7, 6], as well as Column Subset Selection [18]. Our algorithm for Theorem 4.2 is similar to previous works, but the analysis requires new techniques for handling hypergraphs. Theorem 5.1 is a relatively straightforward generalization of the corresponding result for matching in graphs.
The majority of our technical innovation is contained in Theorems 6.4 and 6.15. Our general approach is to construct, in parallel, an HEDCS that will contain a good approximation of the maximum matching in the original graph and that will fit on one machine. Then we can run an approximation algorithm on the resultant HEDCS to come up with a good approximation to the maximum matching in this HEDCS and hence in the original graph. In order to make this approach work, we need to generalize much of the known EDCS machinery [16, 17] to hypergraphs. This endeavor is quite involved, as almost all the proofs do not generalize easily and, as the results show, the resulting bounds are weaker than those for graphs. We first show that HEDCSs exist, and that they contain large fractional matchings. We then show that they are useful as a coreset, which amounts to showing that even though there can be many different HEDCSs of some fixed hypergraph $G$, the degree distributions of every HEDCS (for the same parameters) are almost identical. In other words, the degree of any vertex is almost the same in every HEDCS of $G$. We also show that HEDCSs are robust under edge sampling, in the sense that edge sampling from an HEDCS yields another HEDCS. These properties allow us to use HEDCSs in our coresets and parallel algorithms in the rest of the paper.
3 Related Work
Hypergraph Matching. The problem of finding a maximum matching in $d$-uniform hypergraphs is NP-hard for any $d \ge 3$ [41, 48], and APX-hard for $d = 3$ [39]. The most natural approach for an approximation algorithm for the hypergraph matching problem is the greedy algorithm: repeatedly add an edge that doesn't intersect any edges already in the matching. This solution is clearly within a factor of $d$ from optimal: for each edge chosen in an iteration, the optimal solution can contain at most $d$ edges (at most one for each vertex of the chosen edge). It is also easy to construct examples showing that this analysis is tight for the greedy approach.
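To make the greedy procedure concrete, here is a short single-machine sketch in Python (our own illustrative encoding of hyperedges as vertex sets; not code from the paper). The returned matching is maximal, and hence a $d$-approximation:

```python
def greedy_matching(hyperedges):
    """Greedily build a maximal hypergraph matching.

    hyperedges: iterable of collections of vertices (each of size d).
    Returns a list of pairwise-disjoint hyperedges.
    """
    matching = []
    covered = set()                   # vertices already matched
    for e in hyperedges:
        if covered.isdisjoint(e):     # e conflicts with no chosen edge
            matching.append(e)
            covered.update(e)
    return matching

# 3-uniform example: the greedy answer is maximal but may not be maximum.
edges = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}]
print(greedy_matching(edges))  # → [{1, 2, 3}, {5, 6, 7}]
```

Note that processing the edges in a different order can change the output; any order yields a maximal matching.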
All the best known approximation algorithms for the Hypergraph Matching Problem in $d$-uniform hypergraphs are based on local search methods [13, 15, 20, 30, 36]. The first such result, by Hurkens and Schrijver [36], gave a $(d/2 + \epsilon)$-approximation algorithm. Halldorsson [30] presented a quasi-polynomial $(d+2)/3$-approximation algorithm for the unweighted $d$-UHM. Sviridenko and Ward [54] established a polynomial time $(d+2)/3$-approximation algorithm that is the first polynomial time improvement over the result from [36]. Cygan [21] and Furer and Yu [26] both provide a polynomial time $(d+1+\epsilon)/3$-approximation algorithm, which is the best approximation guarantee known so far. On the other hand, Hazan, Safra and Schwartz [35] proved that it is hard to approximate the $d$-UHM problem within a factor of $\Omega(d/\log d)$.
Matching in parallel computation models. The study of the graph maximum matching problem in parallel computation models can be traced back to PRAM algorithms of the 1980s [38, 3, 45]. Since then it has been studied in the LOCAL and MPC models, where an $O(1)$-approximation can be achieved in $O(\log\log n)$ rounds using a space of $\tilde{O}(n)$ per machine [6, 22, 27]. The question of finding a maximal matching in a small number of rounds has also been considered in [28, 44]. Recently, Behnezhad et al. [12] presented an $O(\log\log \Delta)$-round algorithm for maximal matching with $\tilde{O}(n)$ memory per machine. While we are not aware of previous work on hypergraph matching in the MPC model, finding a maximal hypergraph matching has been considered in the LOCAL model. Both Fischer et al. [24] and Harris [33] provide a deterministic distributed algorithm that computes a maximal matching, and hence a $d$-approximation, to the $d$-UHM.
Maximum Independent Set. The $d$-UHM is strongly related to different variants of the Independent Set problem as well as other combinatorial problems. The maximum independent set (MIS) problem on degree-bounded graphs can be mapped to the $d$-UHM when the degree bound is $d$ [31, 14, 32, 55]. The $d$-UHM problem can also be studied under the more general problem of maximum independent set on $(d+1)$-claw-free graphs [13, 20]. (See [19] for connections between the $d$-UHM and other combinatorial optimization problems.)
4 A 3-round $O(d^2)$-approximation
In this section, we generalize the randomized composable coreset algorithm of Assadi and Khanna [7]. They used a maximum matching as a coreset and obtained an $O(1)$-approximation. We use a hypergraph maximum matching and we obtain an $O(d^2)$-approximation. We first define a random $k$-partitioning and then present our greedy approach.
Definition 4.1 (Random $k$-partitioning).
Let $E$ be the edge-set of a hypergraph $G(V,E)$. We say that a collection of edge-sets $E^{(1)}, \dots, E^{(k)}$ is a random $k$-partition of $E$ if the sets are constructed by assigning each edge $e \in E$ to some $E^{(i)}$ chosen uniformly at random. A random $k$-partition of $E$ naturally results in partitioning the graph $G$ into $k$ subgraphs $G^{(1)}, \dots, G^{(k)}$ where $G^{(i)} := G(V, E^{(i)})$ for all $i \in [k]$.
Let $G(V,E)$ be any $d$-uniform hypergraph and $G^{(1)}, \dots, G^{(k)}$ be a random $k$-partitioning of $G$. We describe a simple greedy process, Greedy, for combining the maximum matchings of $G^{(1)}, \dots, G^{(k)}$: the matchings are considered one by one, and every edge that does not conflict with the edges kept so far is added. We prove that this process results in an $O(d^2)$-approximation of the maximum matching of $G$.
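The process above can be simulated on a single machine as follows. This is an illustrative sketch (not the paper's code); as noted at the end of this section, the per-machine maximum matchings can be replaced by maximal ones in practice, which is what this sketch does:

```python
import random

def random_k_partition(edges, k, seed=0):
    """Assign each hyperedge to one of k machines uniformly at random."""
    rng = random.Random(seed)
    parts = [[] for _ in range(k)]
    for e in edges:
        parts[rng.randrange(k)].append(e)
    return parts

def maximal_matching(edges):
    """Greedy maximal matching (stand-in for each machine's coreset)."""
    matching, covered = [], set()
    for e in edges:
        if covered.isdisjoint(e):
            matching.append(e)
            covered.update(e)
    return matching

def greedy_coreset_matching(edges, k, seed=0):
    """Simulate Greedy: each machine reports a matching of its part,
    and a central machine merges the k matchings one edge at a time."""
    merged, covered = [], set()
    for part in random_k_partition(edges, k, seed):
        for e in maximal_matching(part):   # machine i's coreset
            if covered.isdisjoint(e):      # extend the global matching
                merged.append(e)
                covered.update(e)
    return merged
```

The merged result is always a valid matching of $G$; the analysis below shows that, with maximum matchings as coresets, it is also large with high probability.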
Theorem 4.2.
Greedy computes, with high probability, an $O(d^2)$-approximation for the $d$-UHM problem in 3 rounds on machines of memory $\tilde{O}(\sqrt{nm})$.
In the rest of the section, we present the proof of Theorem 4.2. Let $\mu := \mu(G)$; we show that w.h.p. $|M| = \Omega(\mu/d^2)$, where $M$ is the output of Greedy. The randomness stems from the fact that the matchings $M^{(i)}$ (for $i \in [k]$) constructed by Greedy are random variables depending on the random $k$-partitioning. We adapt the general approach in [7] for $d$-uniform hypergraphs. Suppose at the beginning of the $i$-th step of Greedy, the matching $M^{(i-1)}$ is of size at most $\mu/(3d^2)$. One can see that in this case, there is a matching of size $\Omega(\mu)$ in $G$ that is entirely incident on vertices of $G$ that are not matched by $M^{(i-1)}$. We can show that in fact $\Omega(\mu/k)$ edges of this matching are appearing in $G^{(i)}$, even when we condition on the assignment of the edges in the first $i-1$ graphs. Next we argue that the existence of these edges forces any maximum matching of $G^{(i)}$ to match $\Omega(\mu/k)$ edges in $G^{(i)}$ between the vertices that are not matched by $M^{(i-1)}$. These edges can always be added to the matching $M^{(i-1)}$ to form $M^{(i)}$. Therefore, while the maintained matching in Greedy is of size at most $\mu/(3d^2)$, we can increase its size by $\Omega(\mu/k)$ edges in each of the first $k/6$ steps, hence obtaining a matching of size $\Omega(\mu/d^2)$ at the end. The following Lemma 4.3 formalizes this argument.
Lemma 4.3.
For any $i \le k/6$, if $|M^{(i-1)}| \le \mu/(3d^2)$, then, w.p. $1 - O(1/n)$, $|M^{(i)}| \ge |M^{(i-1)}| + \Omega(\mu/k)$.
Before proving the lemma, we define some notation. Let $M^*$ be an arbitrary maximum matching of $G$. For any $i \in [k]$, we define $M^*_{<i}$ as the part of $M^*$ assigned to the first $i-1$ graphs in the random $k$-partitioning, i.e., the graphs $G^{(1)}, \dots, G^{(i-1)}$. We have the following concentration result:
Claim 4.1.
W.p. $1 - O(1/n)$, for any $i \in [k]$: $|M^*_{<i}| \le \frac{(i-1)}{k}\,\mu + O(\sqrt{\mu \log n})$.
Proof of Claim 4.1.
Fix an $i \in [k]$; each edge in $M^*$ is assigned to $G^{(1)}, \dots, G^{(i-1)}$ w.p. $(i-1)/k$, hence in expectation, the size of $M^*_{<i}$ is $(i-1)\mu/k$. For a large $\mu$, the ratio $\mu/\log n$ is large and the claim follows from a standard application of the Chernoff bound. ∎
Proof of Lemma 4.3.
Fix an $i \le k/6$ and the set of edges for $G^{(1)}, \dots, G^{(i-1)}$. This also fixes the matching $M^{(i-1)}$ while the set of edges in $G^{(i)}$ together with the matching $M^{(i)}$ are still random variables. We further condition on the event that $|M^*_{<i}| \le (i-1)\mu/k + O(\sqrt{\mu\log n})$ after fixing the edges in $G^{(1)}, \dots, G^{(i-1)}$, which happens w.p. $1 - O(1/n)$ by Claim 4.1.
Let $V_{old}$ be the set of vertices incident on $M^{(i-1)}$ and $V_{new} := V \setminus V_{old}$ be the remaining vertices. Let $E_i$ be the set of edges in $G^{(i)}$. We partition $E_i$ into two parts: (i) $E_{old}$: the set of edges with at least one endpoint in $V_{old}$, and (ii) $E_{new}$: the set of edges incident entirely on $V_{new}$. Our goal is to show that w.h.p. any maximum matching of $G^{(i)}$ matches $\Omega(\mu/k)$ vertices in $V_{new}$ to each other by using the edges in $E_{new}$. The lemma then follows easily from this.
The edges in the graph $G^{(i)}$ are chosen by independently assigning each edge in $G \setminus (G^{(1)} \cup \dots \cup G^{(i-1)})$ to $G^{(i)}$ w.p. $1/(k-i+1)$. This independence makes it possible to treat the edges in $E_{old}$ and $E_{new}$ separately. We can fix the set of sampled edges of $E_{old}$ in $G^{(i)}$, denoted by $E^s_{old}$, without changing the distribution of edges in $G^{(i)}$ chosen from $E_{new}$. Let $\mu_{old} := \mu(G(V, E^s_{old}))$, i.e., the maximum number of edges that can be matched in $G^{(i)}$ using only the edges in $E^s_{old}$. In the following, we show that w.h.p., there exists a matching of size $\mu_{old} + \Omega(\mu/k)$ in $G^{(i)}$. By the definition of $\mu_{old}$, this implies that any maximum matching of $G^{(i)}$ has to use at least $\Omega(\mu/k)$ edges in $E_{new}$.
Let $M_{old}$ be any arbitrary maximum matching of size $\mu_{old}$ in $G(V, E^s_{old})$. Let $V(M_{old})$ be the set of vertices in $V_{new}$ that are incident on $M_{old}$. We show that there is a large matching in $E_{new}$ that avoids $V(M_{old})$.
Claim 4.2.
$|V(M_{old})| \le \mu/3$.
Proof of Claim 4.2.
Since any edge in $E^s_{old}$ has at least one endpoint in $V_{old}$, we have $\mu_{old} \le |V_{old}| = d\,|M^{(i-1)}|$. By the assertion of the lemma, $|M^{(i-1)}| \le \mu/(3d^2)$, and hence $|V(M_{old})| \le d\,\mu_{old} \le d^2\,|M^{(i-1)}| \le \mu/3$. ∎
Claim 4.3.
There exists a matching in $E_{new}$ of size at least $\mu/3$ that avoids the vertices of $V(M_{old})$.
Proof of Claim 4.3.
By the assumption that $|M^*_{<i}| \le (i-1)\mu/k + O(\sqrt{\mu\log n})$ and $i \le k/6$, there is a matching of size at least $(1-o(1))\,5\mu/6$ in the graph $G \setminus (G^{(1)} \cup \dots \cup G^{(i-1)})$. By removing the edges of this matching that are either incident on $V_{old}$ or on $V(M_{old})$, at most $|V_{old}| + |V(M_{old})| \le \mu/(3d) + \mu/3$ edges are removed from it. Now the remaining matching, of size at least $\mu/3$, is entirely contained in $E_{new}$ and also avoids $V(M_{old})$, hence proving the claim. ∎
We are now ready to finalize the proof of Lemma 4.3. Let $M_{new}$ be the matching guaranteed by Claim 4.3. Each edge in this matching is chosen in $G^{(i)}$ w.p. $1/(k-i+1) \ge 1/k$ independent of the other edges. Hence, by the Chernoff bound, w.h.p. there is a matching of size $\Omega(\mu/k)$ in the edges of $M_{new}$ that appear in $G^{(i)}$ (for $\mu/k = \Omega(\log n)$). This matching can be directly added to the matching $M_{old}$, implying the existence of a matching of size $\mu_{old} + \Omega(\mu/k)$ in $G^{(i)}$. As we argued before, this ensures that any maximum matching of $G^{(i)}$ contains at least $\Omega(\mu/k)$ edges in $E_{new}$. These edges can always be added to $M^{(i-1)}$ to form $M^{(i)}$, hence proving the lemma. ∎
Proof of Theorem 4.2.
Recall that $M$ is the output matching of Greedy. For the first $k/6$ steps of Greedy, if at any step we got a matching of size at least $\mu/(3d^2)$, then we are already done. Otherwise, at each step, by Lemma 4.3, w.p. $1 - O(1/n)$, we increase the size of the maintained matching by $\Omega(\mu/k)$ edges; consequently, by taking a union bound on the first $k/6$ steps, w.p. $1 - O(k/n)$, the size of the matching would be $\Omega(\mu)$, contradicting the assumption that it stayed below $\mu/(3d^2)$. In either case, the matching computed by Greedy is of size $\Omega(\mu/d^2)$, and this proves that Greedy is an $O(d^2)$-approximation. All that is left is to prove that Greedy can be implemented in 3 rounds with a memory of $\tilde{O}(\sqrt{nm})$ per machine. Let $k := \sqrt{m/n}$ be the number of machines, each with a memory of $\tilde{O}(\sqrt{nm})$. We claim that Greedy can run in three rounds. In the first round, each machine randomly partitions the edges assigned to it across the $k$ machines. This results in a random $k$-partitioning of the graph across the machines. In the second round, each machine sends a maximum matching of its input to a designated central machine $C$; as there are $k = \sqrt{m/n}$ machines and each machine is sending a coreset of size $O(n)$, the input received by $C$ is of size $O(nk) = O(\sqrt{nm})$ and hence can be stored entirely on that machine. Finally, $C$ computes the answer by combining the matchings. ∎
We conclude this section by noting that computing a maximum matching on every machine is only required for the analysis, i.e., to show that there exists a large matching in the union of the coresets. In practice, we can use an approximation algorithm to obtain a large matching from the coresets. In our experiments, we use a maximal matching instead of a maximum one.
5 An $O(\log n)$-round $d$-approximation algorithm
In this section, we show how to compute a maximal matching in $O(\log n)$ MPC rounds, by generalizing the algorithm in [44] to $d$-uniform hypergraphs. The algorithm provided by Lattanzi et al. [44] computes a maximal matching in graphs in $O(1)$ rounds if the memory per machine is $n^{1+c}$, and in at most $O(\log n)$ iterations when the memory per machine is $\tilde{O}(n)$, where $c > 0$ is a fixed constant. Harvey et al. [34] show a similar result.
The algorithm, Iterated-Sampling, first samples a set $E'$ of edges and finds a maximal matching $M_1$ on the resulting subgraph (we will specify a bound on the memory later). Given this matching, we can safely remove edges that are in conflict (i.e. those incident on nodes matched by $M_1$) from the original graph $G$. If the resulting filtered graph $H$ is small enough to fit onto a single machine, the algorithm augments $M_1$ with a matching found on $H$. Otherwise, we augment $M_1$ with the matching found by recursing on $H$. Note that since the size of the graph reduces from round to round, the effective sampling probability increases, resulting in a larger sample of the remaining graph.
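A single-machine simulation of this sample-match-filter loop can be sketched as follows (illustrative Python, not the paper's code; the sampling probability $s/(2|E|)$ is one natural choice that keeps the expected sample within a memory bound $s$):

```python
import random

def iterated_sampling(edges, memory, seed=0):
    """Sketch of Iterated-Sampling, simulated on one machine.

    Repeatedly: sample edges so the sample fits in `memory`, add a
    maximal matching of the sample, drop conflicting edges, and recurse
    on the residual graph until it fits in memory.
    """
    rng = random.Random(seed)
    matching, covered = [], set()
    remaining = list(edges)
    while remaining:
        if len(remaining) <= memory:
            sample = remaining                   # last round: take everything
        else:
            p = memory / (2.0 * len(remaining))  # expected sample size: memory/2
            sample = [e for e in remaining if rng.random() < p]
        for e in sample:                         # maximal matching of the sample
            if covered.isdisjoint(e):
                matching.append(e)
                covered.update(e)
        # filter: keep only edges with no matched vertex
        remaining = [e for e in remaining if covered.isdisjoint(e)]
    return matching
```

At termination every discarded edge conflicts with the matching, so the output is maximal in $G$; the analysis below bounds the number of iterations.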
Theorem 5.1.
Given a $d$-uniform hypergraph $G$, Iterated-Sampling computes a maximal matching in $G$ with high probability in $O(\log n)$ rounds on machines of memory $\tilde{O}(dn)$.
Next we present the proof of Theorem 5.1. We first show that after sampling edges, the number of edges of the graph induced by unmatched vertices decreases exponentially with high probability, and therefore the algorithm terminates in $O(\log n)$ rounds. Next we argue that the output $M$ is indeed a maximal matching by showing that if, after terminating, there were an edge whose vertices are all still unmatched, this would yield a contradiction. This is formalized in the following lemmas.
Lemma 5.2.
Let $E' \subseteq E$ be a set of edges chosen independently with probability $p$. Then with probability at least $1 - 1/n$, for all $I \subseteq V$: either $|E_I| \le 2n\log n/p$ or $E_I \cap E' \neq \emptyset$, where $E_I$ denotes the set of edges of the subgraph of $G$ induced by $I$.
Proof of Lemma 5.2.
Fix one such subgraph $G_I = (I, E_I)$ with $|E_I| > 2n\log n/p$. The probability that none of the edges in $E_I$ were chosen to be in $E'$ is $(1-p)^{|E_I|} < (1-p)^{2n\log n/p} \le e^{-2n\log n} = n^{-2n}$. Since there are at most $2^n$ total possible induced subgraphs (because each vertex is either matched or unmatched), the probability that there exists one that does not have an edge in $E'$ is at most $2^n \cdot n^{-2n} \le 1/n$. ∎
Lemma 5.3.
If each iteration samples with probability $p_i = s/(2|E_i|)$, where $E_i$ is the set of remaining edges and $s \ge 8n\log n$, then Iterated-Sampling runs for at most $O(\log n)$ iterations with high probability.
Proof of Lemma 5.3.
Fix an iteration $i$ of Iterated-Sampling and let $p_i$ be the sampling probability for this iteration. Let $E_i$ be the set of edges at the beginning of this iteration, and denote by $I$ the set of unmatched vertices after this iteration. From Lemma 5.2, if $|E_I| > 2n\log n/p_i$ then an edge of $E_I$ will be sampled with high probability. Note that no edge in $E_I$ is incident on any edge of the current matching. Thus, if an edge from $E_I$ were sampled then Iterated-Sampling would have chosen this edge to be in the matching. This contradicts the fact that no vertex in $I$ is matched. Hence, $|E_I| \le 2n\log n/p_i$ with high probability.
Now consider the first iteration of the algorithm, and let $G_1$ be the induced graph on the unmatched nodes after the first step of the algorithm. The above argument implies that $|E(G_1)| \le 2n\log n/p_1 = 4n\log n\,|E|/s \le |E|/2$ for $s \ge 8n\log n$. Similarly $|E(G_2)| \le |E(G_1)|/2$. After $t$ iterations, $|E(G_t)| \le |E|/2^t$, and the algorithm will terminate after $O(\log m) = O(\log n)$ iterations. ∎
Lemma 5.4.
Iterated-Sampling finds a maximal matching of $G$ with high probability.
Proof of Lemma 5.4.
First consider the case that the algorithm does not fail. Suppose there is an edge $e \in E$ such that none of its vertices are matched in the final matching $M$ that the algorithm outputs. In the last iteration of the algorithm, since $e$ and its endpoints are not matched, $e$ belongs to the residual graph. Since this is the last run of the algorithm, a maximal matching of the residual graph is computed on one machine. Since that matching is maximal, at least one of $e$'s vertices must be matched in it. All of the edges of this matching get added to $M$ in the last step. This yields a contradiction.
Next, consider the case that the algorithm fails. This occurs due to the set of sampled edges $E'$ having size larger than the memory $s$ in some iteration of the algorithm. Note that $\mathbb{E}[|E'|] = p_i|E_i| = s/2$ in a given iteration. By the Chernoff bound it follows that $\Pr[|E'| > s] \le e^{-s/6} \le 1/n^2$ (since $s \ge 8n\log n$). By Lemma 5.3 the algorithm completes in at most $O(\log n)$ rounds, thus the total failure probability is bounded by $O(\log n)/n^2 \le 1/n$ using the union bound. ∎
We are now ready to show that Iterated-Sampling can be implemented in $O(\log n)$ MPC rounds.
Proof of Theorem 5.1.
We show that Iterated-Sampling can be implemented in the MPC model with machines of memory $s = \tilde{O}(dn)$ in $O(\log n)$ MPC rounds. Combining this with Lemma 5.4 on the correctness of Iterated-Sampling, we immediately obtain Theorem 5.1. Every iteration of Iterated-Sampling can be implemented in $O(1)$ MPC rounds. Suppose the edges are initially distributed over all machines. We can sample every edge with the probability $p_i$ (in each machine) and then send the sampled edges to the first machine. With high probability we will have $|E'| \le s$, so we can fit the sampled edges in the first machine. Therefore, the sampling can be done in one round. Computing the subgraph of $G$ induced by the unmatched vertices can be done in 2 rounds; one round to send the list of unmatched vertices from the first machine to all the other machines, and the other round to compute the subgraph, and send it to a single machine if it fits, or start the sampling again. ∎
6 A 3-round $d(d-1+1/d)^2$-approximation using HEDCSs
In graphs, edge degree constrained subgraphs (EDCS) [16, 17] have been used as a local condition for identifying large matchings, leading to good dynamic and parallel algorithms [16, 17, 6]. These papers showed that an EDCS exists, that it contains a large matching, that it is robust to sampling and composition, and that it can be used as a coreset.
In this section, we present a generalization of EDCS for hypergraphs. We prove that an HEDCS exists, that it contains a large matching, that it is robust to sampling and composition, and that it can be used as a coreset. The proofs and algorithms, however, require significant developments beyond the graph case. We first present definitions of a fractional matching and HEDCS.
Definition 6.1 (Fractional matching).
In a hypergraph $G(V,E)$, a fractional matching is a mapping $x : E \to [0,1]$ such that for every edge $e$ we have $0 \le x_e \le 1$ and for every vertex $v \in V$: $\sum_{e \ni v} x_e \le 1$. The value of such a fractional matching is equal to $\sum_{e \in E} x_e$. For $\epsilon < 1$, a fractional matching is an $\epsilon$-restricted fractional matching if for every edge $e$: $x_e = 1$ or $x_e \le \epsilon$.
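Both conditions of this definition are easy to check programmatically. The sketch below (our own illustrative helper, not from the paper) validates an $\epsilon$-restricted fractional matching given as a map from hyperedges to values:

```python
from collections import Counter

def matching_value(x):
    """Value of a fractional matching: the sum of edge values."""
    return sum(x.values())

def is_restricted_fractional_matching(x, eps):
    """Check that x (a dict: frozenset hyperedge -> value) is a valid
    eps-restricted fractional matching: every value lies in [0, 1] and
    is either exactly 1 or at most eps, and every vertex's total load
    is at most 1."""
    load = Counter()
    for e, val in x.items():
        if not (0.0 <= val <= 1.0):
            return False
        if val != 1.0 and val > eps:       # eps-restricted condition
            return False
        for v in e:
            load[v] += val                 # accumulate per-vertex load
    return all(l <= 1.0 + 1e-9 for l in load.values())
```

For example, assigning value 1 to one hyperedge and small values to disjoint hyperedges passes the check, while overloading a shared vertex fails it.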
Definition 6.2.
For any hypergraph $G(V,E)$ and integers $\beta \ge \beta^- \ge 0$, a hyperedge degree constrained subgraph HEDCS$(G, \beta, \beta^-)$ is a subgraph $H := (V, E_H)$ of $G$ satisfying:
• (P1): For any hyperedge $e \in E_H$: $\sum_{v \in e} d_H(v) \le \beta$.
• (P2): For any hyperedge $e \in E \setminus E_H$: $\sum_{v \in e} d_H(v) \ge \beta^-$.
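Properties (P1) and (P2) are purely local degree conditions, so verifying whether a subgraph is an HEDCS takes one pass over the edges. A small illustrative checker (our own encoding, not code from the paper):

```python
from collections import Counter

def is_hedcs(all_edges, sub_edges, beta, beta_minus):
    """Check properties (P1)/(P2) of an HEDCS(G, beta, beta_minus).

    all_edges: hyperedges of G; sub_edges: hyperedges of the candidate
    subgraph H (a subset of all_edges). The edge degree of a hyperedge e
    is sum over v in e of deg_H(v).
    """
    sub = {frozenset(e) for e in sub_edges}
    deg = Counter(v for e in sub for v in e)       # deg_H(v)
    for e in all_edges:
        edge_degree = sum(deg[v] for v in e)
        if frozenset(e) in sub:
            if edge_degree > beta:                 # violates (P1)
                return False
        elif edge_degree < beta_minus:             # violates (P2)
            return False
    return True
```

Note that for an edge in $H$, the edge degree already counts the edge itself once per vertex, matching the definition above.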
We show via a constructive proof that a hypergraph contains an HEDCS when the parameters of this HEDCS satisfy the inequality $\beta - \beta^- \ge d - 1$.
Lemma 6.3.
Any hypergraph $G$ contains an HEDCS$(G, \beta, \beta^-)$ for any parameters $\beta - \beta^- \ge d - 1$.
Proof.
Consider the following simple procedure for creating an HEDCS $H$ of a given hypergraph $G$: start by initializing $H$ to be equal to $G$. While $H$ is not an HEDCS$(G, \beta, \beta^-)$, find an edge $e$ which violates one of the properties of HEDCS and fix it. Fixing the edge $e$ implies removing it from $H$ if it was violating Property (P1), and adding it to $H$ if it was violating Property (P2). The output of the above procedure is clearly an HEDCS of the graph $G$. However, a priori it is not clear that this procedure ever terminates, as fixing one edge can result in many edges violating the HEDCS properties, potentially undoing the previous changes. In the following, we use a potential function argument to show that this procedure always terminates after a finite number of steps, hence implying that an HEDCS$(G, \beta, \beta^-)$ always exists.
We define the following potential function $\Phi$:
$$\Phi(H) := (2\beta - d + 1)\,|E_H| \;-\; \sum_{e \in E_H}\sum_{v \in e} d_H(v).$$
We argue that in any step of the procedure above, the value of $\Phi$ increases by at least 1. Since the maximum value of $\Phi$ is at most $2\beta m$, this immediately implies that this procedure terminates in $O(\beta m)$ iterations.
Define $\Phi_1 := (2\beta - d + 1)\,|E_H|$ and $\Phi_2 := \sum_{e \in E_H}\sum_{v \in e} d_H(v)$, so that $\Phi = \Phi_1 - \Phi_2$. Let $e$ be the edge we choose to fix at this step, $H$ be the subgraph before fixing the edge $e$, and $H'$ be the resulting subgraph. Suppose first that the edge $e$ was violating Property (P1) of HEDCS. As the only change is the removal of $e$, $\Phi_1$ decreases by $2\beta - d + 1$. On the other hand, originally $\sum_{v \in e} d_H(v) \ge \beta + 1$ (as $e$ was violating Property (P1) of HEDCS), and hence after removing $e$, the term of $e$ itself decreases $\Phi_2$ by at least $\beta + 1$. Additionally, for each vertex $v \in e$ and each other edge $e'$ incident on $v$ in $H$, the edge degree of $e'$ decreases by one after removing $e$. As there are at least $\sum_{v \in e} d_H(v) - d \ge \beta + 1 - d$ such incidences, $\Phi_2$ decreases in total by at least $2\beta + 2 - d$. As a result, in this case $\Phi$ increases by at least 1 after fixing the edge $e$.
Now suppose that the edge $e$ was violating Property (P2) of HEDCS instead. In this case, the degrees of the vertices of $e$ all increase by one, hence $\Phi_1$ increases by $2\beta - d + 1$. Additionally, note that since edge $e$ was violating Property (P2) we had $\sum_{v \in e} d_H(v) \le \beta^- - 1$, so the addition of $e$'s own term increases $\Phi_2$ by at most $\beta^- - 1 + d$. Moreover, for each other edge $e'$ incident upon a vertex of $e$, the edge degree of $e'$ increases by one after adding $e$, and since there are at most $\beta^- - 1$ such incidences, $\Phi_2$ further increases by at most $\beta^- - 1$. The total variation in $\Phi$ is therefore at least $(2\beta - d + 1) - (2\beta^- + d - 2) = 2(\beta - \beta^-) - 2d + 3$. So if $\beta - \beta^- \ge d - 1$, we have that $\Phi$ increases by at least 1 after fixing edge $e$. ∎
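The constructive procedure from this proof translates directly into code. The sketch below is illustrative (not the paper's implementation); it terminates provided the gap between $\beta$ and $\beta^-$ is large enough, as in the potential argument above:

```python
from collections import Counter

def construct_hedcs(edges, beta, beta_minus):
    """Build an HEDCS(G, beta, beta_minus) by repeatedly fixing violating
    edges: remove (P1) violators from H, add (P2) violators to H.
    Termination is guaranteed when beta - beta_minus >= d - 1."""
    edges = [frozenset(e) for e in edges]
    H = set(edges)                                 # start with H = G
    deg = Counter(v for e in H for v in e)         # deg_H(v)
    while True:
        violating, action = None, None
        for e in edges:
            edge_degree = sum(deg[v] for v in e)
            if e in H and edge_degree > beta:      # (P1) violated
                violating, action = e, "remove"
                break
            if e not in H and edge_degree < beta_minus:   # (P2) violated
                violating, action = e, "add"
                break
        if violating is None:
            return H                               # both properties hold
        if action == "remove":
            H.discard(violating)
            deg.subtract(violating)
        else:
            H.add(violating)
            deg.update(violating)
```

For instance, with three 3-uniform edges sharing a common vertex and $(\beta, \beta^-) = (4, 2)$, the first edge is removed for violating (P1) and the procedure then stops.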
The main result in this section shows that an HEDCS of a graph $G$ contains a large $\epsilon$-restricted fractional matching that approximates the maximum matching to within a factor of at most $d(d-1+1/d)$.
Theorem 6.4.
Let $G$ be a $d$-uniform hypergraph and $0 < \epsilon < 1$. Let $H$ := HEDCS$(G, \beta, \beta^-)$ with $\beta \ge d/\epsilon$ and $\beta^- = (1-\lambda)\beta$ for $\lambda$ sufficiently small. Then $H$ contains an $\epsilon$-restricted fractional matching with total value at least $\big(1 - \lambda - (d+1)\epsilon\big)\,\frac{\mu(G)}{d(d-1+1/d)}$.
In order to prove Theorem 6.4, we will need the following two lemmas. The first lemma is an algebraic result that will help us bound the contribution of vertices in the $\epsilon$-restricted fractional matching. The second lemma identifies additional structure on the HEDCS that we will use to construct the fractional matching of the theorem. For the proofs of both lemmas, see Appendix A.2.
Lemma 6.5.
Let . If and for some , then .
Lemma 6.6.
Given any HEDCS $H$ of $G$ as in Theorem 6.4, we can find two disjoint sets of vertices $S$ and $T$ that satisfy the following properties:
1. $|S| + |T| \ge \frac{\mu(G)}{d-1+1/d}$.
2. There is a perfect matching in $S$ using edges in $H$.
3. Letting $\bar\mu := |S|/d$ denote the size of this perfect matching, we have that $\bar\mu \le \mu(H)$.
4. All edges in $H$ with vertices in $T$ have at least one other vertex in $T$, and have vertices only in $S$ and $T$.
We are now ready to prove Theorem 6.4.
Proof of theorem 6.4.
Suppose we have two sets and satisfying the properties of Lemma 6.6. We construct an -restricted fractional matching using the edges in such that
where is the value of the fractional matching . Now, by Property 2 of Lemma 6.6, contains a perfect matching using edges in . Let be a subset of obtained by randomly sampling exactly edges of and adding their endpoints to Let and observe that and .
Let be the subgraph of induced by (each edge in has vertices in only and ). We define a fractional matching on the edges of in which all edges have value at most . We will then let our final fractional matching be the fractional matching joined with the perfect matching in of (so assigns value to the edges in this perfect matching). is, by definition, an -restricted fractional matching.
We now give the details for the construction of . Let be the vertices of , and let be its edges. For any vertex , define to be the degree of in . Recall that by Property 4 of Lemma 6.6, if then all the edges of incident to go to (but some might go to ). Thus, for , we have .
We now define as follows. For every , we arbitrrily order the edges of incident to , and then assign a value of to these edges one by one, stopping when either (the sum of the values assigned so far) reaches 1 or there are no more edges in incident to , whichever comes first. In the case that reaches 1, the last edge considered might receive a value less than .
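As a sanity check, the greedy assignment step just described can be sketched as follows. This is illustrative code, not from the paper; the helper name `assign_fractional` is ours, and `inc` stands in for the fixed per-edge increment, whose exact value depends on the HEDCS parameters.

```python
def assign_fractional(edges_at, vertices, inc):
    """edges_at maps each vertex to its (arbitrarily ordered) incident edges.
    Adds `inc` to the edges of each vertex one by one, capping the total
    assigned per vertex at 1; the last edge may receive a smaller value."""
    value = {}
    total = {v: 0.0 for v in vertices}
    for v in vertices:
        for e in edges_at[v]:
            if total[v] >= 1.0:
                break  # vertex v is saturated; stop assigning on its edges
            add = min(inc, 1.0 - total[v])
            value[e] = value.get(e, 0.0) + add
            total[v] += add
    return value
```

Note that the cap is only enforced on the low-degree side; as in the proof, validity on the other side follows from the HEDCS degree bounds rather than from explicit tracking.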
We now verify that is a valid fractional matching in that all vertices have value at most 1. This is clearly true of vertices by construction. For a vertex , it suffices to show that each edge incident to receives a value of at most . To see this, first note that the only edges to which assigns non-zero values have at least two endpoints in . Any such edge receives value at most , but since is in and so in , we have by Property (P1) of an HEDCS that , and so .
By construction, for any , we have that the value of in satisfies:
Furthermore, we can bound the value of the fractional matching as follows. For convenience, we use such that
(1)
(2)
Next we present a lemma, proved in Appendix A.2, that bounds for each vertex.
Lemma 6.7.
For any , .
This last lemma, combined with (2), allows us to lower bound the value of :
Note that we have constructed by taking the fractional value in and adding the perfect matching on edges from . The latter matching has size , and the value of is bounded by:
To complete the proof, recall that contains a perfect matching in of edges, so if then there already exists a matching in of size at least , and the theorem is true. We can thus assume that , in which case the previous equation yields that:
In both cases we get that
∎
6.1 Sampling and constructing an HEDCS in the MPC model
Our results in this section are general and apply to any computation model. We prove structural properties of HEDCSs that will help us construct them in the MPC model. We show that the degree distributions of all HEDCSs (for the same parameters and ) are almost identical; in other words, the degree of any vertex is almost the same in every HEDCS of the same hypergraph . We also show that HEDCSs are robust under edge sampling: edge sampling an HEDCS yields another HEDCS, and the degree distributions of any two HEDCSs of two different edge sampled subgraphs of are almost the same, no matter how the two HEDCSs are selected. In the following lemma, we argue that any two HEDCSs of a graph (for the same parameters , ) are essentially identical in that their degree distributions are close to each other. In the rest of this section, we fix the parameters , and two subgraphs and that are both HEDCSs.
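For concreteness, the two defining properties of an HEDCS can be checked mechanically. The sketch below assumes the definition used in this paper: for every edge kept in the subgraph, the sum of its vertices' subgraph degrees is at most an upper parameter (P1), and for every edge left out, that sum is at least a lower parameter (P2). The helper name `is_hedcs` is ours.

```python
from collections import Counter

def is_hedcs(G_edges, H_edges, beta, beta_minus):
    """Check (P1)/(P2) for a candidate subgraph H of G (edges as vertex sets)."""
    H = set(map(frozenset, H_edges))
    deg = Counter(v for e in H for v in e)  # vertex degrees inside H
    for e in map(frozenset, G_edges):
        s = sum(deg[v] for v in e)          # degree sum over e's vertices
        if e in H and s > beta:             # (P1) violated by a kept edge
            return False
        if e not in H and s < beta_minus:   # (P2) violated by a dropped edge
            return False
    return True
```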
Lemma 6.8.
(Degree Distribution Lemma). Fix a -uniform hypergraph and parameters , (for small enough). For any two subgraphs and that are HEDCSs, and any vertex , we have .
Proof.
Suppose that for some , and that . We will show that this leads to a contradiction. Let be one of the edges incident to in , so . Of these edges, at most can be in in order to respect (P1), so at least edges are in ; thus we have now covered edges in both and . Let us focus on the edge , and especially on one of its incident edges in . Let be such an edge. , therefore . The edges incident to in that we have covered so far number at most , so we still need at least new edges in to respect (P1). Of these edges, at most can be in (because has already covered edges incident to it in ). Therefore at least are in . Thus, we have so far covered at least edges. We can repeat this argument until we cover at least edges in both and . The number of edges in each of and cannot exceed (each vertex has degree ), so we get a contradiction if , which holds if . Therefore
∎
The next corollary shows that if the hypergraph is linear (every two hyperedges intersect in at most one vertex), then the degree distribution is tighter. The proof is in Appendix A.3.
Corollary 6.9.
For -uniform linear hypergraphs, the degree distribution is tighter, and
Next we prove two lemmas on the structure of HEDCSs across sampled subgraphs. The first shows that edge sampling an HEDCS results in an HEDCS of the sampled subgraph. The second shows that the degree distributions of any two HEDCSs of two different edge sampled subgraphs of are almost the same.
Lemma 6.10.
Let be an HEDCS of for parameters , and such that . Suppose is an edge sampled subgraph of and ; then, with high probability:
-
1.
For any vertex :
-
2.
is a HEDCS of with parameters .
Proof.
Let . For any vertex , and by Property (P1) of HEDCS . Moreover, since each edge incident upon in is sampled in independently, by the Chernoff bound:
In the following, we condition on the event that:
This event happens with probability at least by the above equation and a union bound over vertices. This finishes the proof of the first part of the claim. We are now ready to prove that is indeed an HEDCS conditioned on this event. Consider any edge . Since , as well. Hence, we have,
because , where the inequality is by Property (P1) of HEDCS and the equality is by the choice of . As a result, satisfies Property (P1) of HEDCS for parameter . Now consider an edge . Since , as well. Hence,
∎
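The Chernoff-bound step in the proof above rests on the fact that independently sampling each edge with probability p scales a vertex's degree by roughly p. A quick illustrative simulation (not from the paper; parameter values are arbitrary):

```python
import random

random.seed(0)
p, deg, trials = 0.5, 400, 2000
# for one vertex of degree `deg`, count how many incident edges survive
# independent sampling with probability p, repeated over many trials
samples = [sum(random.random() < p for _ in range(deg)) for _ in range(trials)]
mean = sum(samples) / trials  # concentrates near p * deg = 200
```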
Lemma 6.11.
(HEDCS in Edge Sampled Subgraph). Fix any hypergraph and . Let and be two edge sampled subgraphs of with probability (chosen not necessarily independently). Let and be arbitrary HEDCSs of and with parameters . Suppose ; then, with probability , simultaneously for all : .
Proof.
Let be an HEDCS of for the parameters and as defined in the previous lemma; the existence of follows since . Define and . By Lemma 6.10, (resp. ) is an HEDCS of (resp. ) with parameters , with probability . In the following, we condition on this event. By Lemma 6.8 (the Degree Distribution Lemma), since both (resp. ) and (resp. ) are HEDCSs of (resp. ), the degrees of vertices in both of them should be close to each other. Moreover, since by Lemma 6.10 the degree of each vertex in and is close to times its degree in , we can argue that the vertex degrees in and are close. Formally, for any , we have
∎
Corollary 6.12.
If is linear, then .
We are now ready to present a parallel algorithm that uses the HEDCS subgraph. We first compute an HEDCS in parallel via edge sampling. Let be a random -partition of a graph . We show that if we compute an arbitrary HEDCS of each graph (with no coordination across different graphs) and combine them, we obtain an HEDCS of the original graph . We then store this HEDCS on one machine and compute a maximal matching on it. We present our algorithm for the full range of memory . Lemma 6.13 and Corollary 6.14 serve as a proof of Theorem 6.15.
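The outline above can be sketched end to end. This is an illustrative simulation, not the paper's HEDCS-Matching: the per-machine HEDCS computation is replaced by a simple greedy rule that enforces only a degree-sum cap in the spirit of (P1), and parallelism is simulated by a loop over the k parts. All helper names are ours.

```python
import random
from collections import Counter

def greedy_subgraph(edges, beta):
    """Keep an edge when the degree sum over its vertices stays within beta
    (a stand-in for the per-machine HEDCS computation)."""
    deg, H = Counter(), []
    for e in edges:
        if sum(deg[v] for v in e) + len(e) <= beta:
            H.append(e)
            for v in e:
                deg[v] += 1
    return H

def hedcs_matching_sketch(edges, k, beta, seed=0):
    rng = random.Random(seed)
    parts = [[] for _ in range(k)]
    for e in edges:                      # random k-partition of the edges
        parts[rng.randrange(k)].append(e)
    # "per-machine" subgraphs, combined on a single machine
    combined = [e for part in parts for e in greedy_subgraph(part, beta)]
    matching, used = [], set()           # greedy maximal matching on the result
    for e in combined:
        if used.isdisjoint(e):
            matching.append(e)
            used.update(e)
    return matching
```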
Lemma 6.13.
Suppose . Then with high probability
-
1.
The subgraph is an HEDCS for parameters: .
-
2.
The total number of edges in each subgraph of is .
-
3.
If , then the graph can fit in the memory of one machine.
Proof of Lemma 6.13.
-
1.
Recall that each graph is an edge sampled subgraph of with sampling probability . By Lemma 6.11, for graphs and (for ) and their HEDCSs and , with probability , for all vertices :
By taking a union bound on all pairs of subgraphs and for , the above property holds for all , with probability at least . In the following, we condition on this event.
We now prove that is indeed a . First, consider an edge and let be such that as well. We have
Hence, satisfies Property (P1) of HEDCS for parameter . Now consider an edge and let be such that (recall that each edge in is sent to exactly one graph in the random -partition). We have,
-
2.
Let be the edges of . By the independent sampling of edges in an edge sampled subgraph, we have that . By the Chernoff bound, with probability , the size of is . We can then take a union bound over all machines in and conclude that with probability , each graph has size .
-
3.
The number of edges in is bounded by .
∎
Corollary 6.14.
If is linear, then by choosing and in HEDCS-Matching we have:
-
1.
With high probability, the subgraph is an HEDCS for parameters: .
-
2.
If then can fit in the memory of one machine.
Proof of Corollary 6.14.
-
1.
Similarly to Lemma 6.13, and by using Corollary 6.12, we know that for graphs and (for ) and their HEDCSs and , with high probability, for all vertices :
By taking a union bound on all pairs of subgraphs and for , the above property holds for all , with probability at least . In the following, we condition on this event. Showing that is indeed a follows by the same analysis from the proof of Lemma 6.13.
-
2.
The number of edges in is bounded by .
∎
The previous lemmas allow us to formulate the following theorem.
Theorem 6.15.
HEDCS-Matching constructs a HEDCS of in 3 rounds on machines of memory in general and for linear hypergraphs.
Corollary 6.16.
HEDCS-Matching achieves a -approximation to the -Uniform Hypergraph Matching problem in 3 rounds with high probability.
Proof of Corollary 6.16.
We show that with high probability, satisfies the assumptions of Theorem 6.4. From Lemma 6.13, we get that with high probability the subgraph is a for parameters . We can see that . Therefore, by Theorem 6.4, contains a -approximate -restricted matching. Since the integrality gap of the -UHM is at most (see [19] for details), contains a -approximate matching. Taking a maximal matching in multiplies the approximation factor by at most . Therefore, any maximal matching in is a -approximation. ∎
7 Computational Experiments
To understand the relative performance of the proposed algorithms, we conduct a wide variety of experiments on both random and real-life data [51, 57, 43]. We implement the three algorithms Greedy, Iterated-Sampling and HEDCS-Matching in Python, relying on the module pygraph and its class pygraph.hypergraph to construct and manipulate hypergraphs. We simulate the MPC model by computing a -partitioning and splitting it into different inputs; parallel computations on different parts of the -partitioning are handled through Python's multiprocessing library. We compute the optimal matching through an integer program for small instances of random uniform hypergraphs ( ) as well as geometric hypergraphs. The experiments were conducted on a workstation with a 2.6 GHz Intel Core i7 processor and 16 GB of RAM. The datasets differ in their numbers of vertices and hyperedges, vertex degrees and hyperedge cardinalities. In the following tables, and denote the number of vertices and hyperedges respectively, and is the size of the hyperedges. For Table 3, the graphs might have different numbers of edges, and denotes the average number of edges. is the number of machines used to distribute the hypergraph initially. We limit the number of edges that a machine can store to . In the columns Gr, IS and HEDCS, we report the average ratio between the size of the matching computed by Greedy, Iterated-Sampling and HEDCS-Matching respectively, and the size of a benchmark. This ratio is computed as the percentage , where is the output of the algorithm and is the size of the benchmark. The benchmarks are the size of the optimal solution, when it is possible to compute, or else the size of a maximal matching computed via a sequential algorithm. denotes the number of instances of random graphs that we generated for fixed , and . and are the parameters used to construct the HEDCS subgraphs in HEDCS-Matching.
These subgraphs are constructed using the procedure in the proof of Lemma 6.3.
7.1 Experiments with random hypergraphs
We perform experiments on two classes of -uniform random hypergraphs. The first contains random uniform hypergraphs, and the second contains random geometric hypergraphs.
Random Uniform Hypergraphs.
For fixed , and , each potential hyperedge is sampled independently and uniformly at random from the set of vertices. In Table 2, we use the size of a perfect matching as a benchmark, because a perfect matching in random graphs exists with probability under some conditions on and . If is the expected degree of a random uniform hypergraph, Frieze and Janson [25] showed that is a sufficient condition for the hypergraph to have a perfect matching with high probability. Kim [42] further weakened this condition to . We empirically verify, by solving the IP formulation, that for and for small instances of , our random graphs contain a perfect matching.
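A generator matching this description can be sketched as follows (illustrative; the function name is ours): each of m hyperedges is an independent, uniformly random d-subset of the n vertices.

```python
import random

def random_uniform_hypergraph(n, m, d, seed=0):
    """m hyperedges, each a uniform random d-subset of the n vertices."""
    rng = random.Random(seed)
    return [tuple(sorted(rng.sample(range(n), d))) for _ in range(m)]
```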
In Table 2, the benchmark is the size of a perfect matching, while in Table 5 it is the size of a greedy maximal matching. In terms of solution quality (Tables 2 and 5), HEDCS-Matching performs consistently better than Greedy, and Iterated-Sampling performs significantly better than the other two. None of the three algorithms is capable of finding perfect matchings in a significant number of the runs.
When compared to the size of a maximal matching, Iterated-Sampling still performs best, followed by HEDCS-Matching. However, the ratios are smaller when compared to a maximal matching, which is explained by the deterioration of the quality of the greedy maximal matching as and grow. Dufossé et al. [23] confirm that the approximation quality of a greedy maximal matching on random graphs that contain a perfect matching degrades as a function of and . The performance of the algorithms decreases as grows, which is theoretically expected since their approximation ratios are both proportional to . The number of rounds of Iterated-Sampling grows slowly with , consistent with the bound. Recall that the number of rounds of the other two algorithms is constant and equal to 3.
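The sequential greedy maximal matching used as a benchmark is the standard routine (a sketch, not the paper's code): scan the hyperedges in arbitrary order and keep each one that is disjoint from all previously kept edges.

```python
def greedy_maximal_matching(edges):
    """Scan hyperedges in order; keep those disjoint from all kept so far."""
    matching, used = [], set()
    for e in edges:
        if used.isdisjoint(e):
            matching.append(e)
            used.update(e)
    return matching
```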
Geometric Random Hypergraphs. The second class we experiment on is random geometric hypergraphs. The vertices of a random geometric hypergraph (RGH) are sampled uniformly at random from the space . A set of distinct vertices forms a hyperedge if, and only if, the distance between any and is less than a previously specified parameter . The parameters and fully characterize an RGH. We fix and generate different geometric hypergraphs by varying and . We compare the output of the algorithms to the optimal solution, which we compute through the IP formulation. Table 3 shows that the performance of our three algorithms is broadly similar, with Iterated-Sampling outperforming Greedy and HEDCS-Matching as the size of the graphs grows. We also observe that random geometric hypergraphs do not contain perfect matchings, mainly because of the existence of outlier vertices that do not belong to any edge. The number of rounds of Iterated-Sampling still grows with , confirming the theoretical bound and the results on random uniform hypergraphs.
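The RGH construction above can be sketched as follows (illustrative, with points in the unit square; enumerating all d-subsets is only feasible for small instances, and the function name is ours):

```python
import itertools
import math
import random

def random_geometric_hypergraph(n, d, r, seed=0):
    """n uniform points in the unit square; every d-subset whose pairwise
    distances are all below r becomes a hyperedge."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [s for s in itertools.combinations(range(n), d)
             if all(math.dist(pts[i], pts[j]) < r
                    for i, j in itertools.combinations(s, 2))]
    return pts, edges
```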
#I | Gr | IS | HEDCS | Rounds IS | ||||||
---|---|---|---|---|---|---|---|---|---|---|
15 | 200 | 3 | 5 | 500 | 77.6% | 86.6% | 82.8% | 5 | 3 | 3.8 |
30 | 400 | 5 | 78.9% | 88.1% | 80.3% | 7 | 4 | 4.56 | ||
100 | 3200 | 10 | 81.7% | 93.4% | 83.1% | 5 | 2 | 5.08 | ||
300 | 4000 | 10 | 78.8% | 88.7% | 80.3% | 8 | 6 | 7.05 | ||
50 | 800 | 5 | 6 | 500 | 66.0% | 76.2% | 67.0% | 16 | 11 | 4.89 |
100 | 2,800 | 10 | 68.0% | 79.6% | 69.8% | 16 | 11 | 4.74 | ||
300 | 4,000 | 10 | 62.2% | 75.1% | 65.5% | 10 | 5 | 6.62 | ||
500 | 8,000 | 16 | 63.3% | 76.4% | 65.6% | 10 | 5 | 7.62 | ||
500 | 15,000 | 10 | 16 | 500 | 44.9% | 58.3% | 53.9% | 20 | 10 | 6.69 |
1,000 | 50,000 | 20 | 47.3% | 61.3% | 50.5% | 20 | 10 | 8.25 | ||
2,500 | 100,000 | 20 | 45.6% | 59.9% | 48.2% | 20 | 10 | 8.11 | ||
5,000 | 200,000 | 20 | 45.0% | 59.7% | 47.8% | 20 | 10 | 7.89 | ||
1,000 | 50,000 | 25 | 25 | 100 | 27.5% | 34.9% | 30.8% | 75 | 50 | 8.10 |
2,500 | 100,000 | 25 | 26.9% | 34.0% | 27.0% | 75 | 50 | 8.26 | ||
5,000 | 250,000 | 30 | 26.7% | 33.8% | 28.8% | 75 | 50 | 8.23 | ||
10,000 | 500,000 | 30 | 26.6% | 34.1% | 28.2% | 75 | 50 | 8.46 | ||
5,000 | 250,000 | 50 | 30 | 100 | 22.4% | 30.9% | 27.9% | 100 | 50 | 10.22 |
10,000 | 500,000 | 30 | 22.2% | 31.0% | 26.5% | 100 | 50 | 10.15 | ||
15,000 | 750,000 | 30 | 20.9% | 30.8% | 26.4% | 100 | 50 | 10.26 | ||
25,000 | 1,000,000 | 30 | 20.9% | 30.8% | 26.4% | 100 | 50 | 10.29 |
#I | Gr | IS | HEDCS | Rounds IS | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|
100 | 0.2 | 930.1 ± 323 | 3 | 5 | 100 | 88.3% | 89.0% | 89.6% | 3 | 5 | 4.1
100 | 0.25 | 1329.5 ± 445 | 10 | 88.0% | 89.0% | 89.5% | 3 | 5 | 5.2 | |
250 | 0.15 | 13222 ± 3541 | 5 | 85.0% | 88.6% | 85.5% | 4 | 7 | 8.1 | |
300 | 0.15 | 14838 ± 4813 | 10 | 85.5% | 88.0% | 85.2% | 4 | 7 | 11.1 | |
300 | 0.2 | 27281 ± 8234 | 10 | 85.0% | 89.0% | 86.3% | 4 | 7 | 13.5 |
7.2 Experiments with real data
PubMed and Cora Datasets.
We employ two citation network datasets, namely the Cora and PubMed datasets [51, 57]. These datasets are represented as graphs, with vertices being publications and edges being citation links from one article to another. We construct the hypergraphs in two steps: 1) each article is a vertex; 2) each article is taken as a centroid and forms a hyperedge connecting the articles that have citation links to it (either citing it or being cited by it). The Cora hypergraph has an average edge size of , while the average in the PubMed hypergraph is . The number of edges in both is significantly smaller than the number of vertices, so we allow each machine to store only edges. We randomly split the edges across the machines, and because the subgraphs are small, we are able to compute the optimal matchings on each machine, as well as on the whole hypergraphs. We perform ten runs of each algorithm with different random -partitionings and take the maximum cardinality obtained. Table 4 shows that none of the algorithms is able to retrieve the optimal matching. This behaviour can be explained by the loss of information that distributing the edges across machines implies. We see that Iterated-Sampling, as in previous experiments, outperforms the other algorithms due to its highly sequential design. HEDCS-Matching performs notably worse than the other algorithms, mainly because it fails to construct sufficiently large HEDCSs.
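The two-step centroid construction can be sketched on a toy citation list (illustrative code, with the function name ours; pairs (a, b) mean article a cites article b):

```python
from collections import defaultdict

def citation_hypergraph(citations):
    """Step 1: every article is a vertex.  Step 2: each article is a centroid
    whose hyperedge contains itself and all articles linked to it by a
    citation (in either direction)."""
    nbrs = defaultdict(set)
    for a, b in citations:
        nbrs[a].add(b)
        nbrs[b].add(a)
    return {v: frozenset({v} | nbrs[v]) for v in nbrs}
```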
Social Network Communities. We include two larger real-world datasets, orkut-groups and LiveJournal, from the Koblenz Network Collection [43]. We use two hypergraphs that were constructed from these datasets by Shun [52]. Vertices represent individual users, and hyperedges represent communities in the network. Because membership in these communities does not require the same commitment as collaborating on academic research, these hypergraphs have characteristics different from those of co-citation hypergraphs in terms of size, vertex degree and hyperedge cardinality. We use the size of a maximal matching as a benchmark. Table 4 shows that Iterated-Sampling still provides the best approximation. HEDCS-Matching performs worse than Greedy on LiveJournal, mainly because the ratio is not large enough to construct an HEDCS with a large matching.
Name | Gr | IS | HEDCS | Rounds IS | |||
---|---|---|---|---|---|---|---|
Cora | 2,708 | 1,579 | 2 | 75.0% | 83.2% | 63.9% | 6 |
PubMed | 19,717 | 7,963 | 3 | 72.0% | 86.6% | 62.4% | 9 |
Orkut | 2,32 | 1,53 | 10 | 55.6% | 66.1% | 58.1% | 11 |
Livejournal | 3,20 | 7,49 | 3 | 44.0% | 55.2% | 43.3% | 10 |
7.3 Experimental conclusions
In the majority of our experiments, Iterated-Sampling provides the best approximation to the -UHM problem, which is consistent with its theoretical superiority. On random graphs, HEDCS-Matching performs consistently better than Greedy, even though Greedy has a better theoretical approximation ratio. We suspect this is because the -approximation bound on HEDCS-Matching is loose. We conjecture that rounding an -restricted matching can be done efficiently, which would improve the approximation ratio. The performance of all three algorithms decreases as grows. The results on the number of rounds of Iterated-Sampling are also consistent with the theoretical bound. However, due to its sequential design, centralizing the computation on a single machine while using the other machines merely for coordination, Iterated-Sampling not only takes more rounds than the other two algorithms but is also slower in absolute runtime as well as runtime per round. We compared the runtimes of our three algorithms on a set of random uniform hypergraphs. Figures 1 and 2 show that the absolute and per-round runtimes of Iterated-Sampling grow considerably faster with the size of the hypergraphs. We also see that HEDCS-Matching is slower than Greedy, since the former performs heavier computation on each machine. This confirms the trade-off between the extent to which the algorithms use the power of parallelism and the quality of the approximations.
8 Conclusion
We have presented the first algorithms for the -UHM problem in the MPC model. Our theoretical and experimental results highlight the trade-off between the approximation ratio, the memory needed per machine, and the number of rounds the algorithm takes. We have also introduced the notion of HEDCS subgraphs, and have shown that an HEDCS contains a good approximation of the maximum matching and can be constructed in a few rounds in the MPC model. We believe better approximation algorithms should be possible, especially if we can give better rounding algorithms for a -restricted fractional hypergraph matching. For future work, it would be interesting to explore whether we can achieve a better-than- approximation in the MPC model in a polylogarithmic number of rounds; algorithms relying on vertex sampling instead of edge sampling might be good candidates. In addition, our analysis in this paper is specific to unweighted hypergraphs, and we would like to extend it to weighted hypergraphs.
Figures 1 and 2: absolute and per-round runtimes of the three algorithms (images not shown).
References
- [1] S. Abbar, S. Amer-Yahia, P. Indyk, S. Mahabadi, and K. R. Varadarajan. Diverse near neighbor problem. In Proceedings of the twenty-ninth annual symposium on Computational geometry, pages 207–214. ACM, 2013.
- [2] K. J. Ahn and S. Guha. Access to data and number of iterations: Dual primal algorithms for maximum matching under resource constraints. ACM Transactions on Parallel Computing (TOPC), 4(4):17, 2018.
- [3] N. Alon, L. Babai, and A. Itai. A fast and simple randomized parallel algorithm for the maximal independent set problem. Journal of algorithms, 7(4):567–583, 1986.
- [4] A. Andoni, A. Nikolov, K. Onak, and G. Yaroslavtsev. Parallel algorithms for geometric graph problems. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pages 574–583. ACM, 2014.
- [5] A. Andoni, Z. Song, C. Stein, Z. Wang, and P. Zhong. Parallel graph connectivity in log diameter rounds. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 674–685. IEEE, 2018.
- [6] S. Assadi, M. Bateni, A. Bernstein, V. Mirrokni, and C. Stein. Coresets meet edcs: algorithms for matching and vertex cover on massive graphs. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1616–1635. SIAM, 2019.
- [7] S. Assadi and S. Khanna. Randomized composable coresets for matching and vertex cover. In Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures, pages 3–12, 2017.
- [8] A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, and A. Krause. Streaming submodular maximization: Massive data summarization on the fly. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 671–680. ACM, 2014.
- [9] M.-F. F. Balcan, S. Ehrlich, and Y. Liang. Distributed -means and -median clustering on general topologies. In Advances in Neural Information Processing Systems, pages 1995–2003, 2013.
- [10] M. Bateni, A. Bhaskara, S. Lattanzi, and V. Mirrokni. Mapping core-sets for balanced clustering. Advances in Neural Information Processing Systems, 26:5–8, 2014.
- [11] P. Beame, P. Koutris, and D. Suciu. Communication steps for parallel query processing. In Proceedings of the 32nd ACM SIGMOD-SIGACT-SIGAI symposium on Principles of database systems, pages 273–284. ACM, 2013.
- [12] S. Behnezhad, M. Hajiaghayi, and D. G. Harris. Exponentially faster massively parallel maximal matching. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 1637–1649. IEEE, 2019.
- [13] P. Berman. A d/2 approximation for maximum weight independent set in d-claw free graphs. In Scandinavian Workshop on Algorithm Theory, pages 214–219. Springer, 2000.
- [14] P. Berman and M. Fürer. Approximating maximum independent set in bounded degree graphs. In Soda, volume 94, pages 365–371, 1994.
- [15] P. Berman and P. Krysta. Optimizing misdirection. In Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, pages 192–201. Society for Industrial and Applied Mathematics, 2003.
- [16] A. Bernstein and C. Stein. Fully dynamic matching in bipartite graphs. In International Colloquium on Automata, Languages, and Programming, pages 167–179. Springer, 2015.
- [17] A. Bernstein and C. Stein. Faster fully dynamic matchings with small approximation ratios. In Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms, pages 692–711. Society for Industrial and Applied Mathematics, 2016.
- [18] A. Bhaskara, A. Rostamizadeh, J. Altschuler, M. Zadimoghaddam, T. Fu, and V. Mirrokni. Greedy column subset selection: New bounds and distributed algorithms. 2016.
- [19] Y. H. Chan and L. C. Lau. On linear and semidefinite programming relaxations for hypergraph matching. Mathematical programming, 135(1-2):123–148, 2012.
- [20] B. Chandra and M. M. Halldórsson. Greedy local improvement and weighted set packing approximation. Journal of Algorithms, 39(2):223–240, 2001.
- [21] M. Cygan. Improved approximation for 3-dimensional matching via bounded pathwidth local search. In Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on, pages 509–518. IEEE, 2013.
- [22] A. Czumaj, J. Łacki, A. Madry, S. Mitrovic, K. Onak, and P. Sankowski. Round compression for parallel matching algorithms. SIAM Journal on Computing, (0):STOC18–1, 2019.
- [23] F. Dufossé, K. Kaya, I. Panagiotas, and B. Uçar. Effective heuristics for matchings in hypergraphs. In International Symposium on Experimental Algorithms, pages 248–264. Springer, 2019.
- [24] M. Fischer, M. Ghaffari, and F. Kuhn. Deterministic distributed edge-coloring via hypergraph maximal matching. In Foundations of Computer Science (FOCS), 2017 IEEE 58th Annual Symposium on, pages 180–191. IEEE, 2017.
- [25] A. Frieze and S. Janson. Perfect matchings in random s-uniform hypergraphs. Random Structures & Algorithms, 7(1):41–57, 1995.
- [26] M. Furer and H. Yu. Approximate the k-set packing problem by local improvements. arXiv preprint arXiv:1307.2262, 2013.
- [27] M. Ghaffari, T. Gouleakis, C. Konrad, S. Mitrović, and R. Rubinfeld. Improved massively parallel computation algorithms for mis, matching, and vertex cover. In Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing, pages 129–138, 2018.
- [28] M. Ghaffari and J. Uitto. Sparsifying distributed algorithms with ramifications in massively parallel computation and centralized local computation. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1636–1653. SIAM, 2019.
- [29] M. T. Goodrich, N. Sitchinava, and Q. Zhang. Sorting, searching, and simulation in the mapreduce framework. In International Symposium on Algorithms and Computation, pages 374–383. Springer, 2011.
- [30] M. M. Halldórsson. Approximating discrete collections via local improvements. In SODA, volume 95, pages 160–169, 1995.
- [31] M. M. Halldórsson. Approximations of weighted independent set and hereditary subset problems. In Graph Algorithms And Applications 2, pages 3–18. World Scientific, 2004.
- [32] M. M. Halldórsson and J. Radhakrishnan. Greed is good: Approximating independent sets in sparse and bounded-degree graphs. Algorithmica, 18(1):145–163, 1997.
- [33] D. G. Harris. Distributed approximation algorithms for maximum matching in graphs and hypergraphs. arXiv preprint arXiv:1807.07645, 2018.
- [34] N. J. Harvey, C. Liaw, and P. Liu. Greedy and local ratio algorithms in the mapreduce model. In Proceedings of the 30th on Symposium on Parallelism in Algorithms and Architectures, pages 43–52, 2018.
- [35] E. Hazan, S. Safra, and O. Schwartz. On the complexity of approximating k-set packing. computational complexity, 15(1):20–39, 2006.
- [36] C. A. J. Hurkens and A. Schrijver. On the size of systems of sets every t of which have an sdr, with an application to the worst-case ratio of heuristics for packing problems. SIAM Journal on Discrete Mathematics, 2(1):68–72, 1989.
- [37] P. Indyk, S. Mahabadi, M. Mahdian, and V. S. Mirrokni. Composable core-sets for diversity and coverage maximization. In Proceedings of the 33rd ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, pages 100–108. ACM, 2014.
- [38] A. Israeli and A. Itai. A fast and simple randomized parallel algorithm for maximal matching. Information Processing Letters, 22(2):77–80, 1986.
- [39] V. Kann. Maximum bounded 3-dimensional matching in max snp-complete. Inf. Process. Lett., 37(1):27–35, 1991.
- [40] H. Karloff, S. Suri, and S. Vassilvitskii. A model of computation for mapreduce. In Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms, pages 938–948. SIAM, 2010.
- [41] R. M. Karp. Reducibility among combinatorial problems. In Complexity of computer computations, pages 85–103. Springer, 1972.
- [42] J. H. Kim. Perfect matchings in random uniform hypergraphs. Random Structures & Algorithms, 23(2):111–132, 2003.
- [43] J. Kunegis. Konect: the koblenz network collection. In Proceedings of the 22nd International Conference on World Wide Web, pages 1343–1350, 2013.
- [44] S. Lattanzi, B. Moseley, S. Suri, and S. Vassilvitskii. Filtering: a method for solving graph problems in mapreduce. In Proceedings of the twenty-third annual ACM symposium on Parallelism in algorithms and architectures, pages 85–94. ACM, 2011.
- [45] M. Luby. A simple parallel algorithm for the maximal independent set problem. SIAM journal on computing, 15(4):1036–1053, 1986.
- [46] V. Mirrokni and M. Zadimoghaddam. Randomized composable core-sets for distributed submodular maximization. In Proceedings of the forty-seventh annual ACM symposium on Theory of computing, pages 153–162. ACM, 2015.
- [47] B. Mirzasoleiman, A. Karbasi, R. Sarkar, and A. Krause. Distributed submodular maximization: Identifying representative elements in massive data. In Advances in Neural Information Processing Systems, pages 2049–2057, 2013.
- [48] C. H. Papadimitriou. Computational complexity. John Wiley and Sons Ltd., 2003.
- [49] T. Sandholm. Algorithm for optimal winner determination in combinatorial auctions. Artificial intelligence, 135(1-2):1–54, 2002.
- [50] T. Sandholm, K. Larson, M. Andersson, O. Shehory, and F. Tohmé. Coalition structure generation with worst case guarantees. Artificial Intelligence, 111(1-2):209–238, 1999.
- [51] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93–93, 2008.
- [52] J. Shun. Practical parallel hypergraph algorithms. In Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 232–249, 2020.
- [53] S. S. Skiena. The algorithm design manual: Text, volume 1. Springer Science & Business Media, 1998.
- [54] M. Sviridenko and J. Ward. Large neighborhood local search for the maximum set packing problem. In International Colloquium on Automata, Languages, and Programming, pages 792–803. Springer, 2013.
- [55] L. Trevisan. Non-approximability results for optimization problems on bounded degree instances. In Proceedings of the thirty-third annual ACM symposium on Theory of computing, pages 453–461, 2001.
- [56] R. Vemuganti. Applications of set covering, set packing and set partitioning models: A survey. In Handbook of combinatorial optimization, pages 573–746. Springer, 1998.
- [57] J. Yang and J. Leskovec. Defining and evaluating network communities based on ground-truth. Knowledge and Information Systems, 42(1):181–213, 2015.
Appendix A Omitted proofs
A.1 Proof of Lemma 6.7
Lemma 6.7 (restated).
Proof.
We distinguish three cases:
• : in this case and . This implies that , so that:
Now consider the case in which . Then
because , and by a Chernoff bound:
(3)
• Let us now consider the case and . With probability at least , we have that:
Thus with probability at least , we have that and:
• The only case left to check is and , so that . Again, we have that with probability at least
In other words, with probability at least , we have , so that . This shows that in all cases ∎
A.2 Proof of Lemma 6.5 and Lemma 6.6
Lemma 6.5 (restated).
Proof.
We will provide a proof for that is easy to generalize. We first show that if , then . The claim is true if or or . Suppose that and and . Then:
By Nesbitt’s inequality we know that
and therefore .
By the general form of Nesbitt’s inequality (see Appendix D), we know that for
So if , then . Now, let . To complete the proof, it is sufficient to show that we always have . To prove this inequality, note that if then and thus . Now, if then:
which is increasing in and maximized at , in which case . In the end we get:
∎
Lemma 6.6 (restated).
Proof.
Let be some maximum integral matching in . Some of the edges in are in , while others are in . Let contain all vertices incident to edges in
, and let contain all vertices incident to
edges in . We now show that and satisfy
the first three properties of the lemma. Property 1 is
satisfied because consists of all matched vertices
in . Property 2 is satisfied by definition of . To
see that Property 3 is satisfied, note that the vertices of each contribute exactly . Now, consists of
disjoint edges in , and
by Property P2 of an HEDCS, for each such edge:
and by Lemma 6.5, we have and each of these vertices contributes on average at least to , as desired. Loosely
speaking, will end up corresponding to the
profit gained by vertex in the fractional matching .
Consider and from above. These sets might not satisfy Property 4 (that all edges in with an endpoint in have at least one other endpoint in ). We can, however, transform them into sets and such that the first three properties still hold and there are no hyperedges with endpoints in both and ; at this stage, there may still be edges in with different endpoints in . To construct , , we start with and , and present a transformation that terminates with and . Recall that has a perfect matching using edges in . The set maintains this property throughout the transformation, and each vertex always has a unique mate . The construction does the following: as long as there exists an edge in containing and only endpoints in and , we let be the mate of , remove the endpoints of from , and add the endpoints of to . Property 1 is maintained because we have removed vertices from and added to . Property 2 is maintained because the vertices we added to were connected by an edge in . Property 3 is maintained because clearly still has a perfect matching in , and for the vertices , the average contribution is still at least , as above. We continue this process while there is an edge with endpoints in and . The process terminates because each step removes vertices from and adds vertices to . We end up with two sets and such that the first three properties of the lemma are satisfied and there are no edges with endpoints in both and . This means that any edge in incident to is either incident to as well or incident only to vertices in .
We now set and and show how to transform and into two sets that satisfy all four properties of the lemma. Recall that still contains a perfect matching using edges in ; denote this matching . Our final set, however, will not guarantee such a perfect matching. Let be a maximal matching in using edges in (with edges not incident to , because they already satisfy Property 4). Consider the edge set .

We now perform the following simple transformation: we remove the endpoints of the edges in from and add them directly to . Property 1 is preserved because we delete from and add to the same number of vertices. Property 2 is preserved because the endpoints we add to are matched in and thus in . We will return to Property 3 later.
We first check that Property 4 is preserved. Note that we took to be maximal among edges of that contain only endpoints in , and moved all the matched vertices in to . Thus all vertices that remain in are free in , and so after the transformation there are no edges with all endpoints in . There are also no edges in containing endpoints from both and , because none existed originally for
We next check that Property 3 is preserved. As before, this involves showing that after the transformation, the average contribution of a vertex in to is at least .
(Because every vertex in is incident to an edge in , each vertex is accounted for in the transformation.) Now, all vertices that were in remain in , so
their average contribution remains at . We thus
need to show that the average contribution to among
vertices in remains at least after the
transformation.
Let and . Since is a perfect matching on , we always have . If , then all vertices of are transferred to and clearly their average contribution to is . Now consider when . Let the edges of be and those of be . Let be the set of vertices that remain in . Because the edges of are not in , by the two properties of an HEDCS:
The sum of the degrees of vertices in can be written as the following difference:
Now we prove that (A.2) implies that the contribution of vertices from is on average at least .
Claim A.1.
.
By Claim A.1, the average contribution among the vertices considered is
which proves Property 3 and completes the proof of the lemma.∎
Proof of Claim A.1.
Let us denote . Recall that the number of vertices in is equal to and . Here we prove the claim for ; that is, we prove that
If , then the result clearly holds by Lemma 6.5. Suppose therefore that and thus . By Lemma 6.5, we know that:
(5)
We also know that:
(6)
(7)
By convexity of the function :
(9)
where the last inequality is due to
A.3 Proof of Corollary 6.9
Corollary 6.9 (restated).
Proof.
For linear hypergraphs, assume that with and . The difference in the analysis is that, for every edge among the edges that are incident to in , we can find a new set of at least edges in . In fact, for such an edge , every one of the edges that verify intersects in exactly one vertex that is not . The same goes for the subset of at least that are in A; these edges already intersect , and can have at most one intersection between them. At most of these edges can be considered simultaneously for different from the edges incident to . Therefore, for every edge , we can find a set of at least new edges in . This means that at this point we have already covered edges in both and . We can continue covering new edges just as in the previous lemma, so that at iteration , the number of covered edges is at least
for . It is easy to see that for , we will have if , which will be a contradiction. ∎
Appendix B Figures and Tables
#I | Gr | IS | HEDCS | ||||||
---|---|---|---|---|---|---|---|---|---|
15 | 200 | 3 | 5 | 500 | 79.1% | 87.5% | 82.3% | 5 | 3 |
30 | 400 | 5 | 82.6% | 91.3% | 84.4% | 7 | 4 | ||
100 | 3200 | 10 | 83.9% | 96.2% | 88.2% | 5 | 2 | ||
300 | 4000 | 10 | 81.1% | 92.0% | 86.3% | 4 | 2 | ||
50 | 800 | 5 | 6 | 500 | 76.0% | 89.1% | 78.5% | 16 | 11 |
100 | 2,800 | 10 | 77.9% | 92.1% | 81.9% | 16 | 11 | ||
300 | 4,000 | 10 | 77.8% | 93.9% | 87.1% | 10 | 5 | ||
500 | 8,000 | 16 | 79.2% | 94.3% | 85.9% | 10 | 5 | ||
500 | 15,000 | 10 | 16 | 500 | 71.8% | 90.6% | 79.2% | 20 | 10 |
1,000 | 50,000 | 20 | 73.8% | 92.3% | 81.5% | 20 | 10 | ||
2,500 | 100,000 | 20 | 72.2% | 91.5% | 80.7% | 20 | 10 | ||
5,000 | 200,000 | 20 | 72.5% | 90.7% | 79.8% | 20 | 10 | ||
1,000 | 50,000 | 25 | 20 | 100 | 68.2% | 87.5% | 75.6% | 75 | 50 |
2,500 | 100,000 | 25 | 69.0% | 87.9% | 74.3% | 75 | 50 | ||
5,000 | 250,000 | 30 | 67.8% | 87.3% | 75.1% | 75 | 50 | ||
10,000 | 500,000 | 30 | 67.2% | 86.9% | 73.7% | 75 | 50 | ||
5,000 | 250,000 | 50 | 30 | 100 | 67.4% | 86.6% | 74.0% | 100 | 50 |
10,000 | 500,000 | 30 | 68.1% | 87.1% | 73.4% | 100 | 50 | ||
15,000 | 750,000 | 30 | 66.9% | 86.2% | 73.2% | 100 | 50 | ||
25,000 | 1,000,000 | 30 | 67.3% | 86.0% | 72.8% | 100 | 50 |
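The table above compares a greedy baseline (Gr) with the other algorithms. As a point of reference, here is a minimal sketch of such a baseline, assuming Gr is a sequential maximal-matching heuristic that scans hyperedges in input order; the actual scan order and tie-breaking used in the experiments may differ, and `greedy_matching` with its sample edge list is illustrative only:

```python
# A hedged sketch of a greedy maximal-matching baseline for uniform
# hypergraphs (illustrative only; the experimental Gr baseline may use a
# different scan order or tie-breaking). Hyperedges are tuples of vertex ids.
def greedy_matching(edges):
    """Scan edges once, keeping each edge that is disjoint from all kept ones."""
    matched = set()   # vertices covered so far
    matching = []
    for e in edges:
        if all(v not in matched for v in e):
            matching.append(e)
            matched.update(e)
    return matching

# Small 3-uniform example: edge (3, 4, 5) is blocked by vertex 3.
edges = [(1, 2, 3), (3, 4, 5), (4, 5, 6), (7, 8, 9)]
print(greedy_matching(edges))  # [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
```

Since every kept edge of size d can intersect at most d edges of an optimal matching, any maximal matching found this way is a d-approximation, which is consistent with greedy being the fastest but least accurate method in the table.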
Appendix C Chernoff Bound
Let $X_1, \ldots, X_n$ be independent random variables taking values in $[0,1]$, and let $X = \sum_{i=1}^{n} X_i$. Then, for any $0 < \delta < 1$,
$$\Pr\big[\,|X - \mathbb{E}[X]| \ge \delta \cdot \mathbb{E}[X]\,\big] \le 2\exp\left(-\frac{\delta^2 \, \mathbb{E}[X]}{3}\right).$$
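As a quick empirical sanity check, the following sketch estimates the deviation probability for a sum of Bernoulli variables and compares it with the standard two-sided multiplicative bound $2\exp(-\delta^2\mu/3)$; the exact variant invoked in the proofs may differ, and `deviation_frequency` with its parameters is illustrative:

```python
import math
import random

# Empirical sanity check of the Chernoff bound: X is a sum of n independent
# Bernoulli(p) variables, so E[X] = n*p. We estimate the frequency of
# |X - E[X]| >= delta*E[X] over many trials and compare it with the
# two-sided bound 2*exp(-delta^2 * E[X] / 3).
def deviation_frequency(n, p, delta, trials=2000, seed=0):
    rng = random.Random(seed)
    mu = n * p
    hits = 0
    for _ in range(trials):
        x = sum(rng.random() < p for _ in range(n))
        if abs(x - mu) >= delta * mu:
            hits += 1
    return hits / trials

n, p, delta = 1000, 0.5, 0.2
bound = 2 * math.exp(-delta ** 2 * n * p / 3)
freq = deviation_frequency(n, p, delta)
print(freq <= bound)  # True: the empirical frequency respects the bound
```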
Appendix D Nesbitt’s Inequality
Nesbitt’s inequality states that for positive real numbers $a$, $b$ and $c$,
$$\frac{a}{b+c} + \frac{b}{c+a} + \frac{c}{a+b} \ge \frac{3}{2},$$
with equality when all the variables are equal. More generally, if $a_1, \ldots, a_n$ are positive real numbers and $S = \sum_{i=1}^{n} a_i$, then
$$\sum_{i=1}^{n} \frac{a_i}{S - a_i} \ge \frac{n}{n-1},$$
with equality when all the $a_i$ are equal.
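A quick numeric check of both forms, assuming the general statement reads $\sum_i a_i/(S-a_i) \ge n/(n-1)$ with $S=\sum_i a_i$; `nesbitt_sum` is an illustrative name:

```python
import random

# Numeric check of Nesbitt's inequality and its general form:
# for positive reals a_1..a_n with S = sum(a_i),
#     sum(a_i / (S - a_i)) >= n / (n - 1),
# with equality exactly when all a_i are equal (n = 3 gives 3/2).
def nesbitt_sum(a):
    s = sum(a)
    return sum(x / (s - x) for x in a)

rng = random.Random(42)
for n in (3, 5, 10):
    for _ in range(100):
        a = [rng.uniform(0.1, 10.0) for _ in range(n)]
        assert nesbitt_sum(a) >= n / (n - 1) - 1e-12

print(abs(nesbitt_sum([2.0, 2.0, 2.0]) - 1.5) < 1e-12)  # True (equality case)
```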