Parallelized Conflict Graph Cut Generation
Abstract
A conflict graph represents logical relations between binary variables, and effective use of the graph can significantly accelerate branch-and-cut solvers for mixed-integer programming (MIP). In this paper we develop efficient parallel conflict graph management: conflict detection, maximal clique generation, clique extension, and clique merging. We leverage parallel computing in order to intensify computational effort on the conflict graph, thereby generating a much larger pool of cutting planes than can practically be achieved in serial. Computational experiments demonstrate that the expanded pool of cuts enabled by parallel computing leads to substantial reductions in total MIP solve time, especially for more challenging cases.
1 Introduction
Various progress and benchmarking reports from the literature [9, 7, 3, 22] indicate the increasing importance of parallelization as well as preprocessing subroutines in solvers for mixed-integer programming (MIP). We build upon this progress by parallelizing conflict graph (CG) management, a key subroutine in branch-and-cut solvers [19, 5, 1, 10] that begins at the preprocessing stage and is subsequently deployed throughout the search tree (see, e.g. [34]). This work focuses on modest levels of parallelism with shared-memory cores, typical of personal computing setups now and in the near future; in contrast, work on massively parallel MIP (e.g. [29, 13, 31, 23, 32, 28]) addresses issues specific to distributed computing, such as higher communication costs. Moreover, our work deals with general-purpose MIP; for instance, certain stochastic optimization problems have special structure amenable to specially tailored decomposition-based distributed schemes (see e.g. [27, 30, 25]).
Perhaps the most closely related work on parallel MIP is by Gleixner, Gottwald, and Hoen [14], in which a host of preprocessing techniques are parallelized. Using 32 threads, their solver PaPILO is reported to reduce presolve time by almost 50% in shifted geometric mean, with a 5x speedup on preprocessing-intensive instances. Our method attains over an 80% reduction using 64 threads (see Figure 2); we emphasize, however, that a direct comparison would be misguided, as the two works are complementary, parallelizing different procedures. Likewise, our work complements other efforts such as parallel LP solvers (e.g. [21, 20, 4, 18]) and concurrent solves (see e.g. [22]). Our approach is further differentiated from the literature in that we do not merely accelerate existing serial algorithms—indeed, the small fraction of total runtime typically spent on CG management suggests rather modest potential from acceleration alone, due to Amdahl’s law [17]—but instead modify the serial CG procedures of Brito and Santos [10] to generate more cuts. We observe empirically that this more intensive cut management scheme is ineffective in a serial implementation, but attains substantial overall speedups when executed in parallel.
2 Parallel Conflict Graph Management
The serial algorithm development in this section follows predominantly the recent work of Brito and Santos [10] on CG management, which has been implemented in the COIN-OR branch-and-cut solver. Furthermore, in the parallel computing discussions here it is assumed that threads share memory across cores. Moreover, the theoretical results herein hold irrespective of the particular PRAM configuration (EREW, CRCW, etc.) because our algorithms involve no conflicting memory transactions.
Consider a mixed integer program (MIP) in the following generic form:
$$
\min_{x} \; c^\top x \quad \text{s.t.} \quad A_i x \;\circ_i\; b_i \ \text{ for each row } i, \qquad l \le x \le u, \qquad x_j \in \mathbb{Z} \ \text{ for } j \in I \qquad (1)
$$
with objective vector $c$, constraint matrix $A$ with rows $A_i$, right-hand side $b$, and bound vectors $l$ and $u$; variables $x$ with $x_j \in \mathbb{Z}$ for $j \in I$; and constraints $A_i x \circ_i b_i$ with relations $\circ_i \in \{\le, =, \ge\}$ for each row $i$. Furthermore, let $B \subseteq I$ be the set of indices for all binary variables.
A conflict graph over $B$ has one node for each binary variable $x_j$, $j \in B$, representing the assignment $x_j = 1$, and another node for the complement variable $\bar{x}_j := 1 - x_j$ assigned to 1. An edge between two nodes represents a conflict, i.e. an infeasible joint assignment; for instance, one can automatically place an edge between each variable and its complement. Conflicts need to be inferred or detected from the problem formulation or else during the branch-and-bound procedure. In this paper we consider two types of constraints from which conflicts can be extracted: set packing constraints and conflicting knapsack constraints. A set packing constraint is defined as
$$
\sum_{j \in S} x_j \le 1 \qquad (2)
$$
for some $S \subseteq B$. Since each variable in $S$ has a conflict with all others, $S$ forms a clique in the conflict graph.
A knapsack constraint is defined as
$$
\sum_{j \in S} a_j x_j \le b \qquad (3)
$$
with $a_j > 0$ for all $j \in S \subseteq B$. Suppose WLOG that $a_1 \ge a_2$ are the two largest of the coefficients $a_j$. Then if $a_1 + a_2 > b$, we call this a conflicting knapsack constraint, as a CG clique can be generated from $\{x_1, x_2\}$ (and possibly other variables); in the absence of this condition, no conflicts can be inferred from the knapsack.
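As a concrete illustration, the following minimal Julia sketch checks the conflicting-knapsack condition; the function name `is_conflicting_knapsack` is ours and purely illustrative.

```julia
# Minimal sketch: decide whether a knapsack constraint  sum_j a[j]*x[j] <= b
# (with a[j] > 0 and x binary) is conflicting, i.e. whether its two largest
# coefficients alone already exceed the right-hand side.
function is_conflicting_knapsack(a::Vector{Float64}, b::Float64)
    length(a) < 2 && return false
    s = sort(a; rev = true)          # largest coefficients first
    return s[1] + s[2] > b           # two largest exceed b => at least one conflict
end

# Example: 3x1 + 4x2 + 5x3 <= 6 is conflicting, since 5 + 4 > 6.
@assert is_conflicting_knapsack([3.0, 4.0, 5.0], 6.0)
```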
Set packing and knapsack constraints can, in turn, be inferred from the general MIP formulation. For a given mixed integer constraint $A_i x \le b_i$ whose relation is $\le$, and whose non-binary variables all have finite bounds, we extract a pure binary variables constraint (PBC) by moving the non-binary terms to the right-hand side at their minimal activity:
$$
\sum_{j \in B} a_{ij} x_j \;\le\; b_i - \sum_{j \notin B} \min\{a_{ij} l_j,\; a_{ij} u_j\}. \qquad (4)
$$
Note that if the relation is $\ge$, we can rewrite the constraint as $-A_i x \le -b_i$, and if the relation is $=$, we can split the constraint into the two constraints $A_i x \le b_i$ and $-A_i x \le -b_i$. So we only consider constraints of the form $A_i x \le b_i$ in the remainder of this section.
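The following sketch illustrates one standard way to carry out this extraction, moving the non-binary terms to the right-hand side at their minimal activity; it assumes finite bounds on the non-binary variables, and the function name `extract_pbc` is illustrative rather than part of the paper's code.

```julia
# Sketch (assumes finite bounds on the non-binary variables): derive a pure
# binary constraint from a mixed row  sum_j a[j]*x[j] <= b  by shifting the
# non-binary terms to the right-hand side at their minimal activity.
function extract_pbc(a::Vector{Float64}, b::Float64, is_binary::Vector{Bool},
                     lb::Vector{Float64}, ub::Vector{Float64})
    rhs = b
    bin_idx, bin_coef = Int[], Float64[]
    for j in eachindex(a)
        if is_binary[j]
            push!(bin_idx, j); push!(bin_coef, a[j])
        else
            rhs -= min(a[j] * lb[j], a[j] * ub[j])   # minimal contribution of a[j]*x[j]
        end
    end
    return bin_idx, bin_coef, rhs    # PBC: sum(bin_coef .* x[bin_idx]) <= rhs
end
```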
The remainder of this section describes both serial and parallel algorithms for certain aspects of CG management, presented in the order of execution in branch-and-bound code. Both serial and parallel algorithms are analyzed in terms of worst-case and average-case complexity.
2.1 Serial Pure Binary Constraint Extraction
At the preprocessing stage, given a MIP of form (1), we perform a one-round simple presolve to detect set packing constraints and conflicting knapsack constraints for subsequent conflict graph cut generation, as described in Alg 1. This applies well-studied techniques from the CG literature (see e.g. Achterberg et al. [2]).
All techniques mentioned in line 3 can be found in [2, Sections 3.1 and 3.2]. A $\ge$ constraint is rewritten as an equivalent $\le$ constraint, and an $=$ constraint is treated as two $\le$ inequalities.
In line 4 of Alg 1, if the constraint is a set packing constraint, we call it an original set packing (OSP) constraint and collect all such constraints in the OSP set (line 6).
In line 10, variable bounds are tightened by standard activity-based arguments: depending on the sign of a variable's coefficient in the constraint, either its upper bound or its lower bound is updated.
In line 11 of Alg 1, if the resulting constraint is a set packing constraint, we call it an inferred set packing (ISP) constraint and collect all such constraints in the ISP set.
In line 14 we collect all (i.e. both original and PBC-inferred) conflicting knapsack (CK) constraints in the CK constraint set (line 15).
The complexity of Alg 1 is $O(\mathrm{nnz}(A))$, where $\mathrm{nnz}(A)$ is the number of nonzero elements in $A$. We implement this procedure solely in serial since it executes very quickly in practice.
2.2 Parallel Maximal Clique Detection from Knapsack Constraint
Following PBC generation, we proceed to detect maximal cliques from conflicting knapsack constraints with the Clique Detection method of Brito and Santos [10, Algorithm 1]. We modify the method, described as (serial) Alg 2: namely, instead of returning all detected maximal cliques in a single outputted set, we separately return the first detected maximal clique (see line 7 in Alg 2) as the original maximal clique, and all other detected cliques as the other maximal cliques. This is used for more intensive clique extension and merging applied specifically to the original cliques, described in Section 2.4. Alg 2 is, in turn, called in parallel over all conflicting knapsack constraints via Alg 3.
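A simplified serial sketch of the underlying detection idea follows; it mirrors the structure of Alg 2 (returning the largest-coefficient clique separately from the remaining maximal cliques), but the function name and implementation details are illustrative rather than the paper's exact code.

```julia
# Sketch: maximal cliques hidden in a conflicting knapsack  sum_j a[j]*x[j] <= b
# with a[j] > 0 and x binary.  Sorting coefficients ascending, the variables in
# sorted positions k..n (k minimal with s[k] + s[k+1] > b) form one maximal
# clique; each earlier variable j that conflicts with the largest coefficient
# yields a further maximal clique {j} ∪ {o(j), ..., n}.
function detect_maximal_cliques(a::Vector{Float64}, b::Float64)
    perm = sortperm(a)                                 # ascending coefficient order
    s = a[perm]
    n = length(s)
    k = findfirst(i -> s[i] + s[i+1] > b, 1:n-1)
    k === nothing && return Int[], Vector{Int}[]       # knapsack is not conflicting
    original = perm[k:n]                               # the "original" maximal clique
    others = Vector{Int}[]
    for j in 1:k-1
        s[j] + s[n] > b || continue                    # variable j has no conflict at all
        o = searchsortedlast(s, b - s[j]) + 1          # smallest o with s[j] + s[o] > b
        push!(others, vcat(perm[j], perm[o:n]))
    end
    return original, others
end
```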
Furthermore, in line 2 of Alg 3, we randomly shuffle the conflicting knapsack constraints from Alg 1 and then partition them into subsets of equal cardinality (modulo the last processor) as a simple and fast heuristic for load balancing. Moreover, the shuffle itself can be efficiently parallelized [6, 33]. The same trick is used in Alg 5 and Alg 7. Note that the general problem of optimal load balancing—dividing up tasks with known computational load as evenly as possible across cores—is an NP-hard partitioning problem [11, 12]. The shuffling heuristic is justified both by average-case analysis as well as computational experiments indicating high parallel efficiency on hard instances.
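A minimal Julia sketch of this shuffle-and-partition pattern is given below; `parallel_over` and `process` are illustrative names (the latter standing in for, e.g., clique detection on one constraint), and the chunking details are an assumption rather than the paper's exact Alg 3.

```julia
using Random

# Shuffle the work items, split them into nthreads() contiguous chunks of
# (nearly) equal size, and process each chunk on its own thread; results are
# gathered on the main thread.  No shared state is written in parallel.
function parallel_over(items::Vector, process)
    shuffled = shuffle(items)
    p = Threads.nthreads()
    chunk = cld(length(shuffled), p)                       # ceil(#items / p)
    parts = [shuffled[(t-1)*chunk+1:min(t*chunk, end)] for t in 1:p]
    results = Vector{Any}(undef, p)
    Threads.@threads for t in 1:p
        results[t] = [process(item) for item in parts[t]]
    end
    return reduce(vcat, results)
end
```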
The average-case and worst-case complexities of Algorithm 2 are analyzed in [10, Page 4, Paragraphs 2-3] in terms of the number of elements in the constraint. The average-case complexity of Algorithm 3 (averaged over random shuffles) and its worst-case complexity follow by dividing the work of Algorithm 2 over all constraints across the available threads; see Proposition A.1 in Appendix A.
2.3 Parallel Conflict Graph Construction
Following clique detection from PBC-derived conflicting knapsack constraints, we proceed to CG construction. Suppose there are $n$ binary variables in $B$, and thus $n$ complementary variables. We represent the CG with a $2n \times 2n$ sparse Boolean matrix, with the first $n$ rows representing the original variables $x_j$ and the next $n$ rows representing the complements $\bar{x}_j$. We build this matrix in parallel via Alg 5.
We choose a sparse adjacency matrix here instead of a clique table in order to enable faster clique extension (see Section 2.4) by avoiding redundant computation. A clique table, however, is substantially more memory-efficient, so as a workaround for the sparse data structure we set limits on the number of nonzeros considered (see Section 2.6). Moreover, for cliques that we choose not to extend (namely the other maximal cliques returned by Alg 3), we adopt the more memory-efficient data structure described in Section 2.3 of [10].
The initialization of the clique set in line 1 includes: the trivial pairwise conflicts between each variable and its complement (i.e. the pairs $\{x_j, \bar{x}_j\}$ for all $j \in B$); the cliques from the set packing constraints obtained by Alg 1; and the cliques extracted by Alg 3.
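A small sketch of this initialization, under the node-indexing convention of a complement node $n + j$ for variable $j$ (the function name is illustrative):

```julia
# Assemble the initial clique list fed to CG construction (Alg 5): trivial
# {x_j, complement_j} pairs, cliques from set packing constraints, and cliques
# detected from conflicting knapsacks by Alg 3.  Both clique arguments are
# assumed to be vectors of node-index vectors.
initial_cliques(n::Int, setpack_cliques, knapsack_cliques) =
    vcat([[j, n + j] for j in 1:n], setpack_cliques, knapsack_cliques)
```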
As with Alg 3, in line 2 random shuffle and partition is applied as a load balancing heuristic.
In line 5, each thread's local CG is updated in parallel from its assigned clique subset via Alg 4.
In lines 7-12, we reduce the individual per-thread CGs to the global CG via binary combination (see Fig 1). In line 9, the logical OR merge is applied elementwise to pairs of CGs.
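The sketch below illustrates the construction: a builder that marks every pair within each clique of its subset, and a parallel wrapper that merges the per-thread graphs with elementwise OR. It is a minimal stand-in for Alg 4 and Alg 5 with illustrative names, and for brevity it uses a simple fold where the paper uses the binary-tree reduction of Fig 1.

```julia
using SparseArrays

# Node indexing as in the text: node j (1 <= j <= n) is x_j = 1 and node n + j
# is its complement set to 1.  Each clique is a vector of node indices; every
# ordered pair of its members is marked in a 2n-by-2n Boolean adjacency matrix.
function build_cg(cliques::Vector{Vector{Int}}, n::Int)
    I = Int[]; J = Int[]
    for c in cliques, u in c, v in c
        u == v && continue
        push!(I, u); push!(J, v)
    end
    return sparse(I, J, fill(true, length(I)), 2n, 2n, |)   # duplicates merged by OR
end

# Each thread builds a local CG from its clique subset (cf. Alg 4); the
# per-thread graphs are then merged by elementwise OR (cf. Alg 5).
function build_cg_parallel(cliques::Vector{Vector{Int}}, n::Int)
    p = Threads.nthreads()
    parts = [cliques[t:p:end] for t in 1:p]
    local_cgs = Vector{SparseMatrixCSC{Bool,Int}}(undef, p)
    Threads.@threads for t in 1:p
        local_cgs[t] = build_cg(parts[t], n)
    end
    return reduce((A, B) -> A .| B, local_cgs)
end
```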
2.4 Parallel Clique Extension and Merging
After generating the conflict graph, we apply clique strengthening [1, 2, 10]—in particular, we modify the Clique Extension of Brito and Santos [10]: a greedy algorithm used to generate one strengthened clique based on the original clique and the CG (see [10, Algorithm 2]). Our modification (see Alg 6) involves (potentially) multiple clique strengthenings, and the increase in computational load is made practical by parallelization.
In line 1 of Alg 6, we generate a list including all variables that do not belong to the clique but which conflict with all variables in the clique. In lines 19-20, we isolate a largest extended clique from the generated clique set. We distinguish this largest extended clique from all other cliques in order to perform cut management, described in Section 2.5.
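A minimal sketch of a single greedy extension is shown below; the paper's Alg 6 repeats and varies this step to produce several extended cliques and keeps a largest one separate, whereas this illustrative version (names are ours) returns just one extension.

```julia
using SparseArrays

# Greedily extend a clique using the CG adjacency matrix G (2n-by-2n, Boolean):
# first collect every node conflicting with all current members, then absorb
# candidates that remain in conflict with everything added so far.
function extend_clique(G::SparseMatrixCSC{Bool,Int}, clique::Vector{Int})
    nnodes = size(G, 1)
    candidates = [v for v in 1:nnodes if v ∉ clique && all(G[v, u] for u in clique)]
    extended = copy(clique)
    for v in candidates
        if all(G[v, u] for u in extended)    # still conflicts with every member
            push!(extended, v)
        end
    end
    return extended
end
```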
Strengthening each clique is an independent procedure, and so we propose to apply strengthening in parallel as Alg 7.
The complexity of Alg 6 for a single clique is analyzed in Lemma A.6 in Appendix A; applying it to $k$ cliques multiplies this bound by $k$. The complexity of the parallel Alg 7 is given in Proposition A.7 in Appendix A.
After obtaining extended cliques from Alg 7, we check for domination between extended cliques and remove dominated ones. For a clique defined by a set packing constraint (Equation (2)), we store only the index set $S$; thus, to check domination between two cliques $S_1$ and $S_2$, we only need to check whether $S_1 \subseteq S_2$ or $S_2 \subseteq S_1$, which can be performed in $O(n)$ thanks to sorted sparse data structures.
Given $k$ cliques, the domination checking process examines all pairs and is thus in $O(k^2 n)$. The parallel version of this, Alg 8, distributes the pairwise checks across $p$ threads, with complexity $O(k^2 n / p)$.
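The following sketch illustrates the dominance filter; the paper performs the pairwise subset test in $O(n)$ with sorted index sets, whereas this illustrative version uses hash sets for brevity. Each thread writes only its own entries of `keep`, so no synchronization is required.

```julia
# Drop every clique that is contained in another clique (ties broken by index,
# so exactly one copy of duplicated cliques survives).
function remove_dominated(cliques::Vector{Vector{Int}})
    sets = [Set(c) for c in cliques]
    keep = fill(true, length(cliques))
    Threads.@threads for i in eachindex(cliques)
        for j in eachindex(cliques)
            j == i && continue
            if issubset(sets[i], sets[j]) &&
               (length(sets[i]) < length(sets[j]) || i > j)
                keep[i] = false
                break
            end
        end
    end
    return cliques[keep]
end
```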
2.5 Cut Management
Cut management applies the results from previous subsections (which are conducted in presolve) throughout the branch-and-cut tree.
In Alg 1, three constraint sets are generated: the OSP, ISP, and CK constraint sets. Subsequently, cliques are extracted from the CK constraints using Alg 3, yielding the original maximal cliques and the other maximal cliques. Applying Alg 5 to these four sets (the OSP and ISP constraints together with the two sets of detected cliques), the CG is constructed. Cliques are then strengthened/extended from three of the sets (the OSP constraints, the ISP constraints, and the original maximal cliques) with parallel Alg 7. As a result, this process yields clique sets: the extended cliques from the OSP constraints, the extended cliques from the ISP constraints, the extended cliques from the original maximal cliques, and the (non-extended) other maximal cliques. However, the number of cliques in these sets can be quite large, and so judicious management is critical for practical application. For instance, incorporating all corresponding inequalities at the root node is impractical both due to resource limitations on linear programming solves as well as potential numerical issues from excessive cuts.
Our cut management procedure adds only a few key inequalities, drawn from the largest cliques, at the root node, and applies the remaining cuts throughout the search tree:
First, we replace each OSP constraint in the MIP formulation (deleting it in line 5 of Alg 1) with the strengthened constraints obtained from its extended cliques.
Second, cliques from the remaining clique sets are, by default, incorporated via user cuts, which are placed in the cut pool and may be added to the model at any node in the branch-and-cut search tree to cut off relaxation solutions (see e.g. [16, Page 769, Lazy Attribute]).
Third, for cliques from two of these sets, we either add them to the root node or else keep them as user cuts, depending on the following rules:
1. If the size of one of these two clique sets is excessively large (greater than 20000) or excessively small (less than 100), we add the cliques from the other set.
2. If the sizes of the two sets fall within a first pair of thresholds, add the cliques from the first set to the root node; otherwise, place them as user cuts.
3. If the sizes of the two sets fall within a second pair of thresholds, add the cliques from the second set to the root node; otherwise, place them as user cuts.
4. If the size of the second set is excessively large (greater than 12000) or excessively small (less than 1500), add its cliques to the root node; otherwise, place them as user cuts.
This heuristic attempts to prioritize some inequalities and place them directly in the formulation at the root node, leaving the remaining inequalities for Gurobi to manage in the search tree via user cuts.
2.6 Computation Limits
CG management is time-consuming (see e.g. [2, Page 491, Paragraph 2]) and memory-consuming. For instance, in CG construction, the sparse adjacency matrix costs $O(\ell^2)$ memory for a clique of length $\ell$. However, we are not obliged to consider every possible variable, nor identify every clique, etc.; as such, to avoid excessive memory or time costs, we set the following limits.
For CG construction in Alg 4, if the length of a clique exceeds a preset limit, we randomly select that many elements from the clique to construct the CG. For clique extension in Alg 7, each thread processes at most a preset number of nonzero terms and then returns its generated extended cliques to the main thread. For clique merging in Alg 8, if the number of cliques exceeds a preset limit, we skip Alg 8 altogether.
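A one-line sketch of the clique-size cap used during CG construction; `max_len` stands in for the limit (not specified here) and the function name is illustrative.

```julia
using Random

# When a clique exceeds the cap, only a random sample of its members is used
# to fill the adjacency matrix (cf. the limit on Alg 4 described above).
truncate_clique(clique::Vector{Int}, max_len::Int) =
    length(clique) <= max_len ? clique : shuffle(clique)[1:max_len]
```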
3 Numerical Experiments
All code and data can be found in our repository: https://github.com/foreverdyz/ParallelCliqueMerge.
3.1 Experimental Setup
3.1.1 Test Set
Experiments are conducted on the benchmark set of the MIPLIB 2017 Collection [15], consisting of 240 instances. Because our parallel presolve method focuses on set packing constraints and conflicting knapsack constraints, we remove instances for which the simple presolve procedure yields few or no such constraints. The results below are run on the remaining 173 cases.
3.1.2 Software and Hardware
All algorithms are implemented in Julia 1.10.2 [8] and run on a desktop with 64-bit Windows 11 Pro, an AMD Ryzen Threadripper PRO 5975WX 32-core CPU, and 64 GB of RAM. This CPU has 32 physical cores and 64 logical processors. We solve both the original MIPs and the MIPs after applying our conflict graph management with Gurobi 11.0.0 [16] and an alpha (prototype) version of JuMP 1.28 [24].
3.1.3 Configurations
Experiments are run on 1 (serial), 2, 4, 8, 16, 32, and 64 threads. For benchmark experiments (MIP without our CG) we run Gurobi with 10 threads active; this is the empirically observed best setting over the benchmark, as performance degradation is observed with more threads.
3.1.4 Time Measurement
Measured presolving time excludes overhead from reading the input file as well as the runtime of Alg 1, because Alg 1 is already implemented in Gurobi [2] and the focus of the experiments is on the effects of our new components. Moreover, for the cases where the ISP and CK sets from Alg 1 are empty, we do not deploy CG management, since Gurobi has already implemented CG management for cliques from OSP constraints—note that such cases are nevertheless included in our performance comparisons. Times are always given in seconds and represent wall-clock measurements. Furthermore, for all aggregations we calculate the geometric mean—note that we apply a 1-second shift to runtime means, as is typical in the MIP literature (e.g. [15]).
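For concreteness, the shifted geometric mean used for aggregation can be computed as in the following sketch; the 1-second shift matches the text, and the function name is ours.

```julia
# Shifted geometric mean of runtimes: geometric mean of (t + shift), minus the shift.
shifted_geomean(times; shift = 1.0) =
    exp(sum(log(t + shift) for t in times) / length(times)) - shift

# Example: shifted_geomean([0.5, 10.0, 300.0]) ≈ 16.1
```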
3.2 Parallel CG Performance
In the following experiments we analyze the parallel efficiency of our parallel CG management algorithms (excluding branch-and-cut solver time). We select the cases for which the CG procedure can be completed with a single thread within the time limit, and we furthermore exclude from this subsection cases that can be handled trivially fast with a single thread. On the remaining cases, parallel speed-ups (serial runtime divided by parallel runtime) are presented in Fig. 2. The parallel efficiency naturally depends on the total work available: when CG takes more than 50 seconds we observe substantial speedups up to 64 threads, while on more modest instances the speedup tails off at 32 threads.
3.3 Gurobi Performance
In this section we consider the impact of our CG procedure on total solver time, using Gurobi as our benchmark. As aforementioned, we focus on 173 cases from MIPLIB that have constraints applicable to our CG procedure. We exclude from comparison in this subsection an additional 74 cases for which our method cannot derive additional inequalities beyond standard CG techniques, namely those instances for which the ISP and CK sets are empty. For these 74 excluded cases our procedure simply reduces to a standard (albeit parallelized) CG procedure that is already incorporated into most solvers' presolve routines (including Gurobi). This leaves us with 99 cases; among these 99, 3 more are excluded from the aggregated results because excessively large cliques caused issues in either JuMP (1 instance) or our procedure (2 instances). Note that a simple maximum clique-size limit would allow skipping the CG procedure with minimal effect on runtime in these 3 instances. Thus 96 cases are considered for comparison purposes to analyze the impact of our CG routine on Gurobi.
We present results in two parts: one excluding CG management/overhead time, and one including CG management time while varying the number of threads used. Since our CG implementation duplicates certain efforts within Gurobi's own CG routines, we expect a fully integrated implementation to have less overhead than what is reported; hence the two sets of times provide optimistic (without overhead) and conservative (with overhead) bounds on potential impact, respectively.
3.3.1 Gurobi excluding CG management time
Table 1 compares Gurobi with and without our CG procedure: Runtime Comp. is the total solve time of Gurobi+CG, ignoring CG time, divided by that of Gurobi alone; Nodes Comp. is the same ratio in terms of the number of search tree nodes; Faster and Slower are the numbers of solves for which the CG procedure led to faster or slower (respectively) runtimes. Excluded from this table are the 8 cases that Gurobi cannot solve within the time limit; including our CG procedure allows some of them to be solved. The CG procedure appears especially effective on harder instances, seemingly driven by a greater reduction in search tree nodes.
Bracket | Number of Cases | Runtime Comp. | Nodes Comp. | Faster | Slower |
---|---|---|---|---|---|
All | 88 | 0.88 | 0.60 | 42 | 24 |
sec | 80 | 0.83 | 0.57 | 42 | 22 |
sec | 62 | 0.76 | 0.46 | 37 | 16 |
sec | 35 | 0.69 | 0.47 | 25 | 8 |
sec | 7 | 0.60 | 0.35 | 7 | 0 |
3.3.2 Gurobi including CG management time
Table 2 compares Gurobi with and without our parallel CG procedure: Runtime Comp. is the total solve time of Gurobi+CG, including CG time, on different numbers of threads, divided by that of Gurobi alone; Nodes Comp. is the same ratio in terms of the number of search tree nodes; Faster and Slower are the numbers of solves for which the CG procedure led to faster or slower (respectively) runtimes. Again, excluded from this table are the cases unsolved by Gurobi within the time limit, 1 of which can be solved after including our CG procedure.
Table 3 breaks out the 64-thread results by problem difficulty. The benefit of our parallel CG management procedure increases as problem difficulty increases, with a 40% overall runtime reduction on instances that take default Gurobi over 1000 seconds. On such problems our CG procedure's overhead is relatively negligible, as roughly the same reduction is reported in Table 1. This suggests that further gains in average speedup may be available if problem difficulty can be reasonably estimated a priori—for instance, by only activating parallel CG on sufficiently large instances.
Considering these total time results, our CG procedure offers modest gains at best (and possibly a slight deceleration) in serial, but demonstrates a significant advantage in parallel. The gains from parallelization tail off substantially around 16 threads but remain positive or break even up to 64 threads. This suggests promise for personal computers, but more limited gains with massively parallel compute.
Bracket | Runtime Comp. | Faster | Slower |
---|---|---|---|
1 Thread | 1.10 | 36 | 41 |
2 Threads | 1.04 | 38 | 40 |
4 Threads | 0.96 | 38 | 39 |
8 Threads | 0.93 | 38 | 36 |
16 Threads | 0.91 | 39 | 35 |
32 Threads | 0.90 | 39 | 34 |
64 Threads | 0.90 | 39 | 34 |
Bracket | Number of Cases | Runtime Comp. | Nodes Comp. | Faster | Slower |
---|---|---|---|---|---|
All | 88 | 0.90 | 0.60 | 39 | 34 |
sec | 80 | 0.87 | 0.57 | 39 | 29 |
sec | 62 | 0.79 | 0.46 | 38 | 20 |
sec | 35 | 0.70 | 0.47 | 26 | 8 |
sec | 7 | 0.60 | 0.35 | 7 | 0 |
4 Conclusion
We develop an efficient parallel CG management scheme that enables the effective use of a larger number of valid inequalities than could otherwise be practically found in serial. Computational experiments demonstrate that this approach yields substantial overall speedups for Gurobi, primarily due to search tree node reductions from the larger pool of generated cuts and valid inequalities. Rather than accelerating an existing solver subroutine, we leverage parallelism to intensify effort, modifying the serial subroutine to conduct more computation within the same time budget. This parallelization approach enables substantial acceleration on personal computing setups, and could yield benefits for other subroutines used in branch-and-cut solvers.
Acknowledgements
This work was funded by the Office of Naval Research under grant N00014-23-1-2632.
References
- [1] Achterberg, T.: Conflict analysis in mixed integer programming. Discrete Optimization 4(1), 4–20 (2007)
- [2] Achterberg, T., Bixby, R.E., Gu, Z., Rothberg, E., Weninger, D.: Presolve reductions in mixed integer programming. INFORMS Journal on Computing 32(2), 473–506 (2020)
- [3] Achterberg, T., Wunderling, R.: Mixed integer programming: Analyzing 12 years of progress. In: Facets of Combinatorial Optimization: Festschrift for Martin Grötschel, pp. 449–481. Springer (2013)
- [4] Andersen, E.D., Gondzio, J., Mészáros, C., Xu, X., et al.: Implementation of interior point methods for large scale linear programming. HEC/Université de Geneve (1996)
- [5] Atamtürk, A., Nemhauser, G.L., Savelsbergh, M.W.: Conflict graphs in solving integer programming problems. European Journal of Operational Research 121(1), 40–55 (2000)
- [6] Bacher, A., Bodini, O., Hollender, A., Lumbroso, J.: MergeShuffle: a very fast, parallel random permutation algorithm (2015)
- [7] Berthold, T., Gamrath, G., Gleixner, A., Heinz, S., Koch, T., Shinano, Y.: Solving mixed integer linear and nonlinear problems using the SCIP Optimization Suite (2012)
- [8] Bezanson, J., Edelman, A., Karpinski, S., Shah, V.B.: Julia: A fresh approach to numerical computing. SIAM review 59(1), 65–98 (2017)
- [9] Bixby, R., Rothberg, E.: Progress in computational mixed integer programming–a look back from the other side of the tipping point. Annals of Operations Research 149(1), 37 (2007)
- [10] Brito, S.S., Santos, H.G.: Preprocessing and cutting planes with conflict graphs. Computers & Operations Research 128, 105176 (2021)
- [11] Chopra, S., Rao, M.R.: The partition problem. Mathematical programming 59(1-3), 87–115 (1993)
- [12] Devine, K.D., Boman, E.G., Karypis, G.: Partitioning and load balancing for emerging parallel applications and architectures. In: Parallel processing for scientific computing, pp. 99–126. SIAM (2006)
- [13] Eckstein, J., Hart, W.E., Phillips, C.A.: PEBBL: an object-oriented framework for scalable parallel branch and bound. Mathematical Programming Computation 7, 429–469 (2015)
- [14] Gleixner, A., Gottwald, L., Hoen, A.: PaPILO: A parallel presolving library for integer and linear optimization with multiprecision support. INFORMS Journal on Computing (2023)
- [15] Gleixner, A., Hendel, G., Gamrath, G., Achterberg, T., Bastubbe, M., Berthold, T., Christophel, P.M., Jarck, K., Koch, T., Linderoth, J., Lübbecke, M., Mittelmann, H.D., Ozyurt, D., Ralphs, T.K., Salvagnin, D., Shinano, Y.: MIPLIB 2017: Data-Driven Compilation of the 6th Mixed-Integer Programming Library. Mathematical Programming Computation (2021)
- [16] Gurobi Optimization, LLC: Gurobi Optimizer Reference Manual (2023), https://www.gurobi.com
- [17] Gustafson, J.L.: Reevaluating Amdahl's law. Communications of the ACM 31(5), 532–533 (1988)
- [18] Hall, J.: Towards a practical parallelisation of the simplex method. Computational Management Science 7, 139–170 (2010)
- [19] Hoffman, K.L., Padberg, M.: Solving airline crew scheduling problems by branch-and-cut. Management science 39(6), 657–682 (1993)
- [20] Huangfu, Q., Hall, J.J.: Parallelizing the dual revised simplex method. Mathematical Programming Computation 10(1), 119–142 (2018)
- [21] Klabjan, D., Johnson, E.L., Nemhauser, G.L.: A parallel primal–dual simplex algorithm. Operations Research Letters 27(2), 47–55 (2000)
- [22] Koch, T., Berthold, T., Pedersen, J., Vanaret, C.: Progress in mathematical programming solvers from 2001 to 2020. EURO Journal on Computational Optimization 10, 10–31 (2022)
- [23] Koch, T., Ralphs, T., Shinano, Y.: Could we use a million cores to solve an integer program? Mathematical Methods of Operations Research 76, 67–93 (2012)
- [24] Lubin, M., Dowson, O., Garcia, J.D., Huchette, J., Legat, B., Vielma, J.P.: JuMP 1.0: recent improvements to a modeling language for mathematical optimization. Mathematical Programming Computation 15, 581–589 (2023)
- [25] Munguía, L.M., Oxberry, G., Rajan, D., Shinano, Y.: Parallel pips-sbb: multi-level parallelism for stochastic mixed-integer programs. Computational Optimization and Applications 73, 575–601 (2019)
- [26] Nguyen, D.: A probabilistic approach to the moments of binomial random variables and application. The American Statistician 75(1), 101–103 (2021)
- [27] Papavasiliou, A., Oren, S.S., Rountree, B.: Applying high performance computing to transmission-constrained stochastic unit commitment for renewable energy integration. IEEE Transactions on Power Systems 30(3), 1109–1120 (2014)
- [28] Perumalla, K., Alam, M.: Design considerations for GPU-based mixed integer programming on parallel computing platforms. In: 50th International Conference on Parallel Processing Workshop. pp. 1–7 (2021)
- [29] Phillips, C.A., Eckstein, J., Hart, W.: Massively parallel mixed-integer programming: Algorithms and applications. In: Parallel Processing for Scientific Computing, pp. 323–340. SIAM (2006)
- [30] Ryan, K., Rajan, D., Ahmed, S.: Scenario decomposition for 0-1 stochastic programs: Improvements and asynchronous implementation. In: 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). pp. 722–729. IEEE (2016)
- [31] Shinano, Y., Achterberg, T., Berthold, T., Heinz, S., Koch, T., Winkler, M.: Solving open MIP instances with ParaSCIP on supercomputers using up to 80,000 cores. In: 2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS). pp. 770–779. IEEE (2016)
- [32] Shinano, Y., Heinz, S., Vigerske, S., Winkler, M.: FiberSCIP—a shared memory parallelization of SCIP. INFORMS Journal on Computing 30(1), 11–30 (2018)
- [33] Shun, J., Gu, Y., Blelloch, G.E., Fineman, J.T., Gibbons, P.B.: Sequential random permutation, list contraction and tree contraction are highly parallel. In: Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4-6, 2015. pp. 431–448. SIAM (2015)
- [34] Witzig, J., Berthold, T., Heinz, S.: Experiments with conflict analysis in mixed integer programming. In: Integration of AI and OR Techniques in Constraint Programming: 14th International Conference, CPAIOR 2017, Padua, Italy, June 5-8, 2017, Proceedings 14. pp. 211–220. Springer (2017)
Appendix A Complexity Results
Proposition A.1.
Alg 3 has an average runtime and a worst-case runtime equal to $\lceil m/p \rceil$ times the corresponding average and worst-case runtimes of Alg 2 on a single constraint, where $m$ is the number of constraints and $p$ is the number of threads.
Proof.
In lines 3-10, there is a parallel for loop with $p$ threads. Each thread handles at most $\lceil m/p \rceil$ constraints, and we perform Alg 2 on each constraint. Therefore, the complexity of lines 3-10 is $\lceil m/p \rceil$ times the (average or worst-case) complexity of Alg 2 on a single constraint.
Consequently, Alg 3 has an average (over random shuffles) runtime and a worst-case runtime of $\lceil m/p \rceil$ times the corresponding runtimes of Alg 2. ∎
Lemma A.2.
For a clique set $\mathcal{C}$, suppose that for each clique $C \in \mathcal{C}$, whether a given variable is in $C$ is determined by an independent Bernoulli trial with probability $q$. Then Alg 4 has an average runtime of $O(kn^2q^2)$, where $n$ is the total number of binary variables and $k$ is the number of cliques in $\mathcal{C}$.
Proof.
In line 2, there are $k$ cliques. For each clique, the number of variables it contains is binomially distributed with parameters $n$ and $q$; writing $S$ for this number, the number of variable pairs examined in line 3 is $S(S-1)/2$, and $\mathbb{E}[S(S-1)] = n(n-1)q^2$ (see e.g. [26]). Since line 4 can be completed in $O(1)$, the expected work per clique is $O(n^2q^2)$, and summing over the $k$ cliques yields an average runtime of $O(kn^2q^2)$. ∎
Corollary A.3.
Alg 4 has a worst-case runtime of $O(kn^2)$, where $n$ is the total number of binary variables and $k$ is the number of cliques in the clique set.
Proof.
In line 2, there are $k$ cliques. For each clique, there are at most $n$ variables; therefore, in line 3, there are at most $n(n-1)/2$ pairs of variables per clique, i.e. at most $O(kn^2)$ iterations in total. Since line 4 can be completed in $O(1)$, Alg 4 has a worst-case runtime in $O(kn^2)$. ∎
Proposition A.4.
For a clique set $\mathcal{C}$, suppose that for each clique $C \in \mathcal{C}$, whether a given variable is in $C$ is determined by an independent Bernoulli trial with probability $q$. Then Alg 5 has an average runtime of $O(\lceil k/p \rceil n^2 q^2 + n^2 \log p)$, where $n$ is the total number of binary variables, $k$ is the number of cliques, and $p$ is the number of threads.
Proof.
In lines 3-6, there is a parallel for loop involving $p$ threads. Each thread calls Alg 4 on at most $\lceil k/p \rceil$ cliques. Due to Lemma A.2, the average complexity of lines 3-6 is $O(\lceil k/p \rceil n^2 q^2)$.
In lines 7-11, there is a binary combination with $\lceil \log_2 p \rceil$ iterations. In the $i$-th iteration, we perform $p/2^i$ OR operations in parallel. The runtime of each OR over a $2n \times 2n$ matrix is in $O(n^2)$; thus the complexity of lines 7-11 (the binary combination) is $O(n^2 \log p)$.
Line 12 is another OR operation, whose complexity is $O(n^2)$.
Consequently, Alg 5 has an average runtime of $O(\lceil k/p \rceil n^2 q^2 + n^2 \log p)$. ∎
Corollary A.5.
Alg 5 has a worst-case runtime of $O(\lceil k/p \rceil n^2 + n^2 \log p)$, where $n$ is the number of binary variables, $k$ is the number of cliques, and $p$ is the number of threads.
Proof.
The argument of Proposition A.4 applies verbatim, with the worst-case bound of Corollary A.3 in place of Lemma A.2. ∎
Lemma A.6.
Alg 6 has complexity polynomial in $n$, the total number of binary variables.
Proof.
A clique $Clq$ has at most $n$ variables. In line 1, checking whether a given variable conflicts with all variables of $Clq$ takes at most $n$ operations; since there are $O(n)$ candidate variables, line 1 has complexity $O(n^2)$.
In line 6, the number of iterations is at most the length of the candidate list generated in line 1, and the same bound applies to line 10. Combining these iteration counts with the cost of lines 11-15 bounds the complexity of lines 6-18 by a polynomial in $n$.
In line 19, the number of generated cliques is at most the length of the candidate list, so line 19 can be completed in time linear in that number.
As a result, Alg 6 has complexity polynomial in $n$. ∎
Proposition A.7.
Alg 7 has complexity $\lceil k/p \rceil$ times the single-clique complexity of Alg 6 (which depends on the number of binary variables $n$), where $k$ is the number of cliques and $p$ is the number of threads.