
New Characterizations and Efficient Local Search for General Integer Linear Programming

Peng Lin, Shaowei Cai ([email protected]), Mengchuan Zou, Jinkun Lin
Abstract

Integer linear programming (ILP) models a wide range of practical combinatorial optimization problems and significantly impacts industry and management sectors. This work proposes new characterizations of ILP based on the concept of boundary solutions. Motivated by these characterizations, we develop a new local search algorithm, Local-ILP, which is efficient for solving general ILP, as validated on a large heterogeneous problem dataset. We propose a new local search framework that switches between three modes, namely Search, Improve, and Restore modes. Two new operators are proposed, namely the tight move and lift move operators, each associated with an appropriate scoring function. Different modes apply different operators to realize different search strategies, and the algorithm switches between the three modes according to the current search state. Putting these together, we develop a local search ILP solver called Local-ILP. Experiments conducted on the MIPLIB dataset show the effectiveness of our algorithm in solving large-scale hard ILP problems. In terms of finding a good feasible solution quickly, Local-ILP is competitive and complementary to the state-of-the-art commercial solver Gurobi and significantly outperforms the state-of-the-art non-commercial solver SCIP. Moreover, our algorithm establishes new records for 6 MIPLIB open instances. A theoretical analysis of our algorithm is also presented, showing that it avoids visiting unnecessary regions of the search space.

keywords:
local search, integer linear programming, heuristic algorithm
journal: Artificial Intelligence
Affiliations: State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China; SeedMath Technology Limited, Beijing, China

1 Introduction

Integer linear programming (ILP) is a fundamental problem in combinatorial optimization, where the objective is to optimize a linear objective function under linear constraints while the variables must take integer values. ILP is NP-hard, and one of its special cases, the 0-1 ILP problem, belongs to Karp's list of 21 NP-complete problems. Moreover, ILP has strong descriptive capability and is usually more convenient for describing practical combinatorial optimization problems than the SAT, MaxSAT, or 0-1 ILP (PBO) formulations. Many combinatorial optimization problems, such as the knapsack problem, the traveling salesman problem, the warehouse location problem, decreasing-cost and machinery selection problems, network and graph problems (maximum flow, set covering, matching, weighted matching, spanning trees, etc.), and scheduling problems, can all be formulated and solved as ILP problems [1].

Along with the powerful ability of ILP to model combinatorial optimization problems, there have also been plenty of efforts devoted to solving ILP problems. As the general ILP problem is NP-hard [2], no known polynomial-time algorithm can solve ILP exactly. Accordingly, solving methods are categorized into complete and incomplete methods: complete methods compute the exact optimal solution and prove its optimality, while incomplete methods attempt to obtain a good solution quickly without guaranteeing optimality.

Existing ILP solvers are primarily based on complete methods. The best-known complete approach is branch-and-bound (BnB), a classical approach for solving combinatorial optimization problems by iteratively dividing the feasible region and bounding the objective function [3, 4]. In addition, other methods, such as cutting planes [5], and auxiliary procedures, such as domain propagation [6, 7, 8], have been proposed for ILP; they are often integrated into the BnB process. Together, these form hybrid approaches, which are promising because they leverage the strengths of different methods in a complementary manner. Hybrid approaches are the infrastructure of state-of-the-art commercial and non-commercial ILP solvers such as Gurobi [9] and SCIP [8], both of which are based on the BnB algorithm combined with cutting planes, domain propagation, etc. However, due to their worst-case exponential running time, complete algorithms are impractical for large-scale instances.

Local search is an important class of incomplete algorithms and is powerful for solving NP-hard combinatorial problems in many areas of computer science and operations research [10]. Local search algorithms iteratively explore the neighborhood of a solution and move towards a better one. They have shown great success in solving SAT and MaxSAT [11, 12, 13].

However, research on local search solvers for general ILP is quite rare. Some attempts on sub-classes of ILP have been proposed, such as over-constrained ILP [14, 15] and 0-1 ILP [16, 17, 18, 19]. In [20], the authors proposed a local search algorithm that accepts the general ILP form, but it was only tested on one special type of problem. A recent work [21] proposed a local search algorithm for ILP, Feasibility Jump, which was tested on a heterogeneous benchmark and won 1st place in the MIP 2022 Computational Competition (https://www.mixedinteger.org/2022/competition/). However, Feasibility Jump aims only at finding feasible solutions and does not consider the objective function, thus neglecting the original optimization purpose of ILP. In general, the ability of local search to quickly compute good solutions of ILP remains to be explored.

1.1 Contributions

We develop a new local search algorithm, Local-ILP, which is efficient for solving general ILP, as validated on a variety of different problems (i.e., the MIPLIB dataset).

Our algorithm is developed with two new operators tailored for ILP and a three-mode search framework. The two operators, namely the tight move and the lift move, are designed based on a characterization of the solution space of ILP that involves the concept of boundary solutions. We show that searching for boundary solutions is complete for feasible and finite ILP instances, i.e., all optimal solutions belong to the boundary solutions. We also design a local search framework that switches between three modes, namely Search, Improve, and Restore modes. Depending on the state of the best-found solution and the current solution, the framework selects the process of the appropriate mode to execute, where suitable operators are leveraged to improve the quality of the current solution in different situations.

For the Search and Restore modes, the tight move operator is adopted, which jointly considers variable and constraint information and tries to make some constraint tight. To distinguish important constraints and help guide the search towards feasible, high-quality solutions, we design a tailored weighting scheme and scoring function to select operations. For the Improve mode, an efficient operator, lift move, is used to improve the quality of the objective function while maintaining feasibility. To drive the lift move operator, we propose a way to compute a variable's new candidate values, called local domain reduction. Additionally, we design a specialized scoring function for lift move to pick a good operation. Putting these together, we develop a local search ILP solver called Local-ILP.

Experiments are conducted to evaluate Local-ILP on the MIPLIB benchmark, from which the ILP instances labeled hard and open are selected. We compare our algorithm with the state-of-the-art non-commercial ILP solver SCIP, as well as the state-of-the-art commercial solver Gurobi. Experimental results show that, in terms of finding a good feasible solution within a reasonable time, Local-ILP is competitive and complementary with Gurobi, and significantly better than SCIP. Experiments also demonstrate that Local-ILP significantly outperforms another local search algorithm, Feasibility Jump, which won 1st place in the MIP 2022 Workshop's Computational Competition. Moreover, Local-ILP establishes 6 new records for MIPLIB open instances by finding new best solutions.

Finally, we perform a theoretical analysis of our operators and algorithm based on the concept of boundary solutions, which are more explicit and closely related to the original form of ILP than the integral hull in polyhedral theory. We show that all feasible solutions visited by our algorithm are boundary solutions, efficiently avoiding unnecessary regions.

1.2 Outline

The remainder of the paper is organized as follows: Section 2 introduces the ILP problem and basic concepts for the local search algorithm, and then presents some basic characterizations of ILP that we leverage to design our local search algorithm. Section 3 proposes our local search framework for solving general ILP. In Sections 4 and 5, we introduce two new operators that are the key techniques for the different modes of the framework. Section 6 presents detailed descriptions of how the Local-ILP algorithm implements the framework. The experimental results of our algorithm on public benchmark instances are reported in Section 7. The analysis of our algorithm is presented in Section 8, followed by the conclusion and future work in Section 9.

2 Preliminary

In this section, we present some fundamental integer linear programming and local search concepts pertinent to the paper.

2.1 Integer Linear Programming Problem

An instance of general ILP has the following form:

\begin{split}\text{Minimize}\quad &obj(\bm{x})=\bm{c}^{\top}\bm{x}\\ \text{subject to}\quad &\bm{A}\bm{x}\leq\bm{b}\\ &\bm{x^{l}}\leq\bm{x}\leq\bm{x^{u}}\\ &\bm{x}\in\mathbb{Z}^{n}\end{split} (1)

over integer variables \bm{x}, where \bm{A}\in\mathbb{R}^{m\times n}, \bm{b}\in\mathbb{R}^{m}, \bm{c}\in\mathbb{R}^{n}, \bm{x^{l}}\in(\mathbb{Z}\cup\{-\infty\})^{n} and \bm{x^{u}}\in(\mathbb{Z}\cup\{+\infty\})^{n} are given inputs. The goal of ILP is to minimize the value of the objective function while satisfying all constraints. We denote the i-th constraint in the constraint system \bm{A}\bm{x}\leq\bm{b} by con_{i}: \bm{A_{i}}\bm{x}\leq b_{i}, where \bm{A_{i}}\in\mathbb{R}^{n} and b_{i}\in\mathbb{R}. The coefficient of x_{j} in con_{i} is A_{ij}, and con_{i} contains x_{j} if A_{ij}\neq 0. The variables' bounds are denoted by \bm{x^{l}}\leq\bm{x}\leq\bm{x^{u}}; they are indeed part of \bm{A}\bm{x}\leq\bm{b}, but are treated separately from the coefficient matrix \bm{A} in practical algorithms. An infinite value of \bm{x^{l}} or \bm{x^{u}} indicates no lower or upper bound on the corresponding variable.

A solution \bm{\alpha} for an ILP instance Q is a vector of values assigned to the variables; specifically, \alpha_{j} denotes the value of x_{j}. A solution \bm{\alpha} of Q satisfies con_{i} in \bm{A}\bm{x}\leq\bm{b} if \bm{A_{i}}\cdot\bm{\alpha}\leq b_{i}, and \bm{\alpha} is feasible if and only if it satisfies all constraints in Q. The objective value of a solution \bm{\alpha} is denoted as obj(\bm{\alpha}).

2.2 Local Search Algorithm

Local search algorithms are widely used for solving combinatorial optimization problems. A local search algorithm explores the search space comprised of all candidate solutions. Normally, local search starts with a solution and iteratively alters it by modifying the value of one variable, in order to find a feasible solution with a high-quality objective value. The key to local search is to decide, under different circumstances, which variable to modify and to what value, to obtain a new solution.

An operator defines how the candidate solution is modified, i.e., given the variables to be modified, how to set them to new values. When an operator is instantiated by specifying the variable to operate on, we obtain an operation. Given an operation op, the scoring function score(op) is used to measure how good op is. An operation op is said to be positive if score(op)>0, which indicates that performing op can improve the quality of the current solution.

2.3 New Characterizations of ILP for Local Search

Currently, the most common theoretical tool for analyzing ILP is polyhedral theory [22], whose key component is the convex hull of all feasible solutions of an ILP (i.e., the integral hull); it establishes the equivalence between searching for the optimal solution and finding this integral hull. This equivalence provides additional theoretical explanations for the cutting plane method, which is widely used in commercial solvers, although the integral hull is hard to compute and may require exponential time.

However, the characterizations based on the integral hull are difficult to use for analyzing local search for ILP, as the integral hull has no explicit form and no direct relation to the constraints in the original form of ILP. To facilitate the analysis of local search algorithms for ILP, we introduce a new way to characterize solutions of ILP based on its original form, and show that our local search operators and their combinations are well suited to ILP. We briefly present here the intuitions that we leveraged to design our local search algorithm, and give precise formulations and theoretical arguments in Section 8.

An important feature of ILP is the linearity of its objective function and constraints. Although ILP does not have true convexity, it still has some properties of this nature, such as all feasible solutions staying in the feasible domain of its LP relaxation, which is convex. From this perspective, some simple facts can be observed. Let J=\{1,...,n\} and let \bm{e}_{j} be the unit vector with 1 in the j-th coordinate and 0 elsewhere. We have:

Fact 1.

If \bm{x}_{1},\bm{x}_{2}\in\mathbb{Z}^{n} are both feasible for an ILP instance and there exist j\in J and k\in\mathbb{Z}, k>0, such that \bm{x}_{2}=\bm{x}_{1}+k\bm{e}_{j}, then \bm{x}^{\prime}=\bm{x}_{1}+t\bm{e}_{j} is also feasible for every t\in\{0,...,k\}.

Fact 2.

Let \bm{x}_{1},\bm{x}_{2}\in\mathbb{Z}^{n} be such that \bm{x}_{2}=\bm{x}_{1}+k\bm{e}_{j} for some j\in J and k\in\mathbb{Z}, k>0, and let obj(\bm{x}) be the objective function of an ILP. If obj(\bm{x}_{1})\leq obj(\bm{x}_{2}), then for \bm{x}^{\prime}=\bm{x}_{1}+t\bm{e}_{j}, t\in\{0,...,k\}, we have obj(\bm{x}_{1})\leq obj(\bm{x}^{\prime})\leq obj(\bm{x}_{2}).

In other words, a solution that lies between two solutions differing in only one dimension has an objective value that also lies between (or equals) those of the two solutions.
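Both facts follow directly from linearity. As a brief justification sketch for Fact 1 (the formal treatment is deferred to Section 8 and the appendices): since t\in\{0,...,k\}, \bm{x}^{\prime} is integral and can be written as a convex combination of \bm{x}_{1} and \bm{x}_{2},

\bm{x}^{\prime}=\left(1-\tfrac{t}{k}\right)\bm{x}_{1}+\tfrac{t}{k}\bm{x}_{2},\qquad\text{so}\qquad\bm{A}\bm{x}^{\prime}=\left(1-\tfrac{t}{k}\right)\bm{A}\bm{x}_{1}+\tfrac{t}{k}\bm{A}\bm{x}_{2}\leq\bm{b},

hence \bm{x}^{\prime} is feasible; applying the same decomposition to obj(\bm{x})=\bm{c}^{\top}\bm{x} yields Fact 2.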

Thus we could make the following observations about ILP feasible solutions and optimal solutions, which will be used later to design new operators and our three-mode framework:

(1) all feasible solutions lie within a region (the integral hull)

(2) all optimal solutions lie in the “boundary” of the region

Furthermore, we call a set of solutions a complete space for an ILP if all optimal solutions are contained in this space. We will formalize the concept of "boundary" in Section 8 and show that the boundary solutions are complete for an ILP. We design our operators based on this concept such that all feasible solutions our algorithm visits are boundary solutions.

For a search algorithm, we can distinguish three different stages:

(1) enter the integral hull (= find a feasible solution)

(2) find good solutions within the integral hull (= improve the quality of a feasible solution)

(3) get back to the integral hull when jumping out of it (= from an infeasible solution to get a good feasible solution)

The characteristics of the three stages motivate us to propose a three-mode framework, enabling our algorithm to behave differently in different situations, improving efficiency.

In the following, we will first give descriptions of our algorithm, and later present the formal definitions and analysis of our algorithm in Section 8.

3 A New Local Search Framework for ILP

As depicted in Figure 1, we propose a new local search framework that takes advantage of adopting three different modes, namely Search, Improve, and Restore modes. In each mode, we use tailored operators to explore new solutions and thus diversify the search behaviors during different periods.

Figure 1: A Local Search Framework for ILP

(1) Search mode. The algorithm initially works in Search mode to find a feasible solution. When a feasible solution is found, the algorithm switches between Improve and Restore modes.

(2) Improve mode. Based on a known feasible solution, the Improve mode attempts to improve the value of the objective function while maintaining feasibility. If no such operation is found, the algorithm will break feasibility, obtain an infeasible solution, and enter Restore mode.

(3) Restore mode. The Restore mode is to repair an infeasible solution to a good feasible solution. For an infeasible solution, the Restore mode concentrates on searching for a new high-quality feasible solution by considering the objective function and repairing the feasibility, and then returns to Improve mode after success.

Input: ILP instance Q, cutoff time cutoff
Output: A solution \bm{\alpha^{*}} of Q and its objective value obj^{*}
1  \bm{\alpha} := an initial solution; \bm{\alpha^{*}} := \emptyset; obj^{*} := +\infty;
2  while running time < cutoff do
3      if \bm{\alpha} is feasible and obj(\bm{\alpha}) < obj^{*} then \bm{\alpha^{*}} := \bm{\alpha}; obj^{*} := obj(\bm{\alpha});
4      if \bm{\alpha^{*}} == \emptyset then perform operation for Search mode;
5      else if \bm{\alpha} is feasible then perform operation for Improve mode;
6      else perform operation for Restore mode;
7      if \bm{\alpha^{*}} is not improved for enough steps then restart the search;
8  return \bm{\alpha^{*}} and obj^{*};
Algorithm 1 Local Search Framework for ILP

An outline of the framework is presented in Algorithm 1. In the beginning, the algorithm constructs a solution \bm{\alpha}, and initializes the best-found solution \bm{\alpha^{*}} as \emptyset and its objective value obj^{*} to +\infty. The core of the algorithm consists of a loop (lines 2-7) in which the solution \bm{\alpha} is modified iteratively until a given time limit is reached. Depending on the state of the solutions \bm{\alpha} and \bm{\alpha^{*}}, it chooses the Search, Improve, or Restore mode to perform operations on variables (lines 4-6). Once a better feasible solution is discovered during the search, \bm{\alpha^{*}} and obj^{*} are updated correspondingly (line 3). The search is restarted if \bm{\alpha^{*}} is not updated within a sufficient number of steps (line 7). When the time limit is reached, the algorithm returns the best-found solution \bm{\alpha^{*}} and its objective value obj^{*}.
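To make the control flow concrete, the following is a minimal C++ sketch of Algorithm 1's loop. It is a sketch under our own naming: Solution and the declared hook functions are hypothetical placeholders for the components described in the following sections, not the authors' implementation.

#include <chrono>
#include <limits>
#include <optional>
#include <vector>

using Solution = std::vector<long long>;  // one value per variable

// Hypothetical hooks, left as declarations to be filled in.
bool isFeasible(const Solution&);         // all constraints satisfied?
double objValue(const Solution&);         // c^T x
void searchStep(Solution&);               // one step of Search mode (Algorithm 3)
void improveStep(Solution&);              // one step of Improve mode (Algorithm 4)
void restoreStep(Solution&);              // one step of Restore mode (Algorithm 5)
bool stagnated();                         // best solution not improved for enough steps
void restartFrom(Solution& alpha, const std::optional<Solution>& best);

std::optional<Solution> localILP(Solution alpha, double cutoffSeconds) {
    const auto start = std::chrono::steady_clock::now();
    std::optional<Solution> best;         // alpha*, empty until a feasible solution
    double bestObj = std::numeric_limits<double>::infinity();  // obj*
    auto elapsed = [&] {
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
    };
    while (elapsed() < cutoffSeconds) {
        if (isFeasible(alpha) && objValue(alpha) < bestObj) {  // line 3
            best = alpha;
            bestObj = objValue(alpha);
        }
        if (!best)                  searchStep(alpha);         // Search mode
        else if (isFeasible(alpha)) improveStep(alpha);        // Improve mode
        else                        restoreStep(alpha);        // Restore mode
        if (stagnated()) restartFrom(alpha, best);             // line 7
    }
    return best;                                               // alpha* (and obj*)
}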

Since we have a common general framework integrating the different operators of each mode, we describe the general framework with the notion of an "X operation" to represent the corresponding operation used in each mode. In each mode X (X is Search, Improve, or Restore), an X operation is iteratively picked to modify \bm{\alpha}, where an X operation refers to an operation customized for mode X. When the feasibility of \bm{\alpha} changes in the current mode, the algorithm shifts to another mode as explained above. All three modes adopt the general procedure described in Algorithm 2. It prefers to pick a positive operation (according to some heuristic) if one exists. If no such operation exists, at which point the algorithm is stuck in a local optimum, a random perturbation operation on \bm{\alpha} is performed.

Input: ILP instance Q, a solution \bm{\alpha}
1  if \exists positive X operations then op := a positive X operation;
2  else op := a random perturbation X operation to escape local optima;
3  perform op to modify \bm{\alpha};
Algorithm 2 Process for Mode X

In the following two sections, we present these operations (also called X operations) for each mode, which involve two new operators we propose, namely tight move and lift move; Section 6 presents the whole Local-ILP algorithm that implements the above framework.

4 Tight Move Operator

In this section, we propose the tight move (tm for short) operator, which is used in Search mode and Restore mode, with an efficient weighting scheme and a tailored scoring function.

4.1 Tight Move

For solving ILP instances that may have infinite domains, a naive local search operator modifies the value of a variable x_{j} by a fixed increment inc, i.e., \alpha_{j}:=\alpha_{j}\pm inc. However, a fixed inc is hard to choose and may be instance-dependent: if inc is too small, it may take many iterations to satisfy a violated constraint; if inc is too large, the algorithm may never be able to satisfy some constraints, and therefore struggle to find a feasible solution. Our tight move operator modifies variable values according to the current solution and the constraints, and thus automatically adapts to different situations; it is defined below.

Definition 1.

Given a solution \bm{\alpha}, the tight move operator, denoted as tm(x_{j},con_{i},\bm{\alpha}), assigns an integer variable x_{j} the value that makes the constraint con_{i} as tight as possible while keeping x_{j}'s bounds satisfied. Here, con_{i} contains x_{j} and can be either violated or satisfied. Precisely, let \Delta=b_{i}-\bm{A_{i}}\cdot\bm{\alpha}, which is commonly known as the slack; a tm operation is:

(1) If \Delta<0, con_{i} is violated, and there exists a tm operation tm(x_{j},con_{i},\bm{\alpha}) for variable x_{j}:

If A_{ij}<0, then tm(x_{j},con_{i},\bm{\alpha}) increases \alpha_{j} by \min(\lceil\Delta/A_{ij}\rceil,\ x^{u}_{j}-\alpha_{j})

If A_{ij}>0, then tm(x_{j},con_{i},\bm{\alpha}) decreases \alpha_{j} by \min(|\lfloor\Delta/A_{ij}\rfloor|,\ |x^{l}_{j}-\alpha_{j}|)

(2) If \Delta\geq 0, con_{i} is satisfied, and there exists a tm operation tm(x_{j},con_{i},\bm{\alpha}) for variable x_{j}:

If A_{ij}<0, then tm(x_{j},con_{i},\bm{\alpha}) decreases \alpha_{j} by \min(|\lceil\Delta/A_{ij}\rceil|,\ |x^{l}_{j}-\alpha_{j}|)

If A_{ij}>0, then tm(x_{j},con_{i},\bm{\alpha}) increases \alpha_{j} by \min(\lfloor\Delta/A_{ij}\rfloor,\ x^{u}_{j}-\alpha_{j})

Our tight move operator has two properties, while always keeping x_{j}'s bounds satisfied:

(1) If con_{i} is violated, it makes con_{i} as close to being satisfied as possible while minimally influencing other constraints, since it selects the minimal change of x_{j} that does so.

(2) If con_{i} is satisfied, it can push x_{j} to the extreme value that keeps con_{i} satisfied. This explores the influence of the maximal change of x_{j} on the objective function and also helps to jump out of local optima.
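For concreteness, here is a self-contained C++ sketch of a single tm operation under a dense-row assumption; the data layout is illustrative, not the authors' implementation (infinite bounds can be passed as large sentinels).

#include <cmath>
#include <cstdlib>
#include <vector>

struct Row { std::vector<double> a; double b; };   // con_i : a . x <= b_i

// Signed change that tm(x_j, con_i, alpha) applies to alpha[j], clamped to the
// variable bounds [xl, xu], following the four cases of Definition 1.
long long tightMoveDelta(const Row& con, const std::vector<long long>& alpha,
                         int j, long long xl, long long xu) {
    double activity = 0;
    for (size_t k = 0; k < con.a.size(); ++k) activity += con.a[k] * alpha[k];
    const double slack = con.b - activity;         // Delta = b_i - A_i . alpha
    const double aij = con.a[j];
    if (aij == 0) return 0;                        // con_i does not contain x_j
    if (slack < 0) {                               // case (1): con_i violated
        if (aij < 0)  // increase alpha_j by min(ceil(Delta/A_ij), x^u_j - alpha_j)
            return std::min((long long)std::ceil(slack / aij), xu - alpha[j]);
        else          // decrease alpha_j by min(|floor(Delta/A_ij)|, |x^l_j - alpha_j|)
            return -std::min(std::abs((long long)std::floor(slack / aij)),
                             std::abs(xl - alpha[j]));
    } else {                                       // case (2): con_i satisfied
        if (aij < 0)  // decrease alpha_j by min(|ceil(Delta/A_ij)|, |x^l_j - alpha_j|)
            return -std::min(std::abs((long long)std::ceil(slack / aij)),
                             std::abs(xl - alpha[j]));
        else          // increase alpha_j by min(floor(Delta/A_ij), x^u_j - alpha_j)
            return std::min((long long)std::floor(slack / aij), xu - alpha[j]);
    }
}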

The tight move operator, which considers both variable and constraint information, embodies the idea of modifying a variable's value to make a constraint tight. This idea resembles some previous works that also consider some form of tightening constraints. The simplex algorithm [23] adopts a similar operator and is commonly used to solve linear systems. Specifically, the tight move is inspired by an operator named critical move, proposed in the context of solving Satisfiability Modulo Integer Arithmetic theories [24]. The critical move operator assigns a variable x_{j} the threshold value that makes a violated literal (similar to a constraint in ILP) true, while our tight move operator keeps variables within their global bounds and also allows modifications from satisfied constraints, thus expanding the range of operations to choose from.

We will show in Section 8 that although ILP does not have classical convexity, our tight move operator still has good theoretical properties to produce promising solutions and avoid visiting unnecessary search space.

4.2 Scoring Function for Tight Move

During the local search process, scoring functions are used to compare different operations and pick one to execute in each step. For the tight move, we propose a weighted scoring function. Our scoring function has two ingredients: a weighting scheme to distinguish important constraints, and score computations to select operations.

4.2.1 Weighting Scheme

In order to guide the search process, weighting techniques are widely applied in local search algorithms, and are used primarily to increase the weight of violated constraints, hence guiding the search process towards satisfying violated constraints.

Here we present the weighting scheme we utilize, which will be further used in the scoring function for selecting operations.

We assign an integral weight to each constraint and also to the objective function, denoted as w(con_{i}) and w(obj), respectively. At the beginning of the search, these weights are initialized to 1. To prevent overly large values, we impose upper limits ul_{con} and ul_{obj} on them, respectively:

(1) For the weights of the constraints, ul_{con}=\max(1000, nCons), where nCons denotes the number of constraints in the instance.

(2) For the weight of the objective function, ul_{obj}=ul_{con}/10.

Regarding the weight setting, only when a solution is feasible do we consider it meaningful to improve the quality of the objective function. With this consideration, w(obj) should not be too large compared to the weights of the constraints.

Our weighting scheme is based on the probabilistic version of the PAWS scheme [25, 26, 24], which updates the weights according to a smoothing probability sp; we use the same setting as [24], sp=0.0003. When the weighting scheme is activated, the weights of the constraints and the objective function are updated as follows:

(1) The weights of the constraints are updated as follows: with probability 1-sp, for each violated constraint con_{i}, if w(con_{i})<ul_{con}, w(con_{i}) is increased by one; otherwise (with probability sp), for each satisfied constraint con_{i}, if w(con_{i})>1, w(con_{i}) is decreased by one.

(2) The weight of the objective function is updated only after some feasible solution has been found: with probability 1-sp, if obj(\bm{\alpha})\geq obj^{*} and w(obj)<ul_{obj}, w(obj) is increased by one; otherwise, if obj(\bm{\alpha})<obj^{*} and w(obj)>1, w(obj) is decreased by one.

By changing the weights of the constraints, and thus focusing on constraints that are often violated in local optima, we help the local search process find feasible solutions. The weight updates for the objective function help guide the search towards solutions with better objective values.
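The update rule can be sketched in C++ as follows; the container layout and the two independent probability draws are our simplifications of the scheme described above.

#include <random>
#include <vector>

struct Weights {
    std::vector<int> con;   // w(con_i), each initialized to 1
    int obj = 1;            // w(obj)
    int ulCon = 1000;       // ul_con = max(1000, nCons)
    int ulObj = 100;        // ul_obj = ulCon / 10
};

void updateWeights(Weights& w, const std::vector<bool>& violated,
                   bool haveFeasible, bool notBetterThanBest,
                   std::mt19937& rng, double sp = 0.0003) {
    std::bernoulli_distribution increase(1.0 - sp);
    if (increase(rng)) {                       // with probability 1 - sp
        for (size_t i = 0; i < w.con.size(); ++i)
            if (violated[i] && w.con[i] < w.ulCon) ++w.con[i];
    } else {                                   // smoothing, probability sp
        for (size_t i = 0; i < w.con.size(); ++i)
            if (!violated[i] && w.con[i] > 1) --w.con[i];
    }
    if (haveFeasible) {                        // w(obj) only after a feasible solution
        if (increase(rng)) {
            if (notBetterThanBest && w.obj < w.ulObj) ++w.obj;   // obj(alpha) >= obj*
        } else if (!notBetterThanBest && w.obj > 1) --w.obj;     // obj(alpha) < obj*
    }
}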

4.2.2 Score Computations

Based on the weighting scheme, we propose the scoring function for tight move, which helps the local search algorithm pick a tm operation to execute. Specifically, it has two parts: a score for reducing the violation of constraints, and a score for improving the objective function.

Score for reducing the violation of constraints. The score for reducing the violation of constraints is determined according to the satisfiability of the constraints in the following three cases:

(1) If a violated constraint con_{i} becomes satisfied after performing op, it incurs a positive reward of w(con_{i}).

(2) If a satisfied constraint con_{i} becomes violated after performing op, it incurs a negative reward of -w(con_{i}).

(3) If a violated constraint con_{i} is still violated after performing op but \bm{A_{i}}\cdot\bm{\alpha} is reduced, i.e., con_{i} is closer to being satisfied, it incurs a positive reward of \beta\cdot w(con_{i}); otherwise, if \bm{A_{i}}\cdot\bm{\alpha} is increased, exacerbating con_{i}'s violation, the reward is -\beta\cdot w(con_{i}), where \beta\in[0,1] is a parameter.

The score of this impact of a tm operation op, denoted by score_{reduce}(op), is set as the sum of the rewards over all constraints obtained by performing op.
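As a concrete illustration, the following C++ sketch computes score_{reduce} for one candidate operation from precomputed constraint activities; the parallel-vector layout is our assumption, not the paper's implementation.

#include <vector>

// score_reduce per cases (1)-(3) above, given the activities A_i . alpha
// before and after the move, the right-hand sides b, and the weights w.
double scoreReduce(const std::vector<double>& actBefore,
                   const std::vector<double>& actAfter,
                   const std::vector<double>& b,
                   const std::vector<int>& w, double beta) {
    double s = 0;
    for (size_t i = 0; i < b.size(); ++i) {
        const bool violBefore = actBefore[i] > b[i];
        const bool violAfter  = actAfter[i]  > b[i];
        if (violBefore && !violAfter)      s += w[i];            // (1) repaired
        else if (!violBefore && violAfter) s -= w[i];            // (2) broken
        else if (violBefore && violAfter) {                      // (3) still violated
            if (actAfter[i] < actBefore[i])      s += beta * w[i];  // closer to satisfied
            else if (actAfter[i] > actBefore[i]) s -= beta * w[i];  // violation worsened
        }
    }
    return s;
}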

Score for improving the objective function. The score for improving the objective function of a tm operation op is denoted by score_{improve}(op). If the value of the objective function after performing op is smaller than obj^{*}, score_{improve}(op):=w(obj); otherwise, score_{improve}(op):=-w(obj). Note that score_{improve}(op) is only meaningful in Restore mode, and is a constant in Search mode.

Definition 2.

The tight move score of a tight move operation op, denoted as score_{tm}(op), is defined as

score_{tm}(op)=score_{reduce}(op)+score_{improve}(op)

5 Lift Move Operator

In this section, we propose the lift move (lm for short) operator, which is the key technique of the Improve mode. The property of the lift move operator is that it can improve the quality of the objective function while maintaining feasibility. For this purpose, we propose a way to compute a variable’s new candidate values called local domain reduction, and a specialized scoring function for lift move.

5.1 Local Domain Reduction

To maintain the feasibility of a feasible solution, we must ensure that it satisfies every constraint. Therefore, we propose local domain reduction to compute the range within which a variable can be changed.

Definition 3.

For a variable x_{j} in a feasible solution \bm{\alpha}, its local feasible domain, denoted as lfd(x_{j},\bm{\alpha}), is an interval for x_{j} such that when x_{j} varies within this interval and all other variables stay unchanged, the satisfiability of all constraints is kept unchanged. We call local domain reduction the process of computing this local feasible domain lfd(x_{j},\bm{\alpha}).

In order to compute lfd(x_{j},\bm{\alpha}), we consider the feasible domain of x_{j} in con_{i}, denoted as ldc(x_{j},con_{i},\bm{\alpha}), meaning that x_{j} can vary within this interval while keeping con_{i} satisfied, assuming the other variables in \bm{\alpha} stay unchanged, where con_{i} is a constraint containing x_{j}. Specifically, let \Delta=b_{i}-\bm{A_{i}}\cdot\bm{\alpha}; then ldc(x_{j},con_{i},\bm{\alpha}) is derived according to the sign of A_{ij}:

(1) If A_{ij}<0, then ldc(x_{j},con_{i},\bm{\alpha})=[\alpha_{j}+\lceil\Delta/A_{ij}\rceil,\,+\infty)

(2) Otherwise, ldc(x_{j},con_{i},\bm{\alpha})=(-\infty,\,\alpha_{j}+\lfloor\Delta/A_{ij}\rfloor].

Then, with ldc(x_{j},con_{i},\bm{\alpha}), we calculate the local feasible domain of x_{j} as follows:

lfd(x_{j},\bm{\alpha})=\left(\bigcap_{i} ldc(x_{j},con_{i},\bm{\alpha})\right)\cap[x_{j}^{l},x_{j}^{u}]
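A direct C++ sketch of this computation under a dense-row assumption follows; the types are illustrative, and infinite bounds can be passed as plus or minus infinity.

#include <algorithm>
#include <cmath>
#include <vector>

struct Interval { double lo, hi; };   // empty if lo > hi

// Intersects ldc(x_j, con_i, alpha) over all constraints containing x_j,
// then with the global bounds [xl, xu].
Interval localFeasibleDomain(const std::vector<std::vector<double>>& A,
                             const std::vector<double>& b,
                             const std::vector<long long>& alpha,
                             int j, double xl, double xu) {
    Interval d{xl, xu};                               // start from [x_j^l, x_j^u]
    for (size_t i = 0; i < A.size(); ++i) {
        const double aij = A[i][j];
        if (aij == 0) continue;                       // con_i does not contain x_j
        double dot = 0;
        for (size_t k = 0; k < A[i].size(); ++k) dot += A[i][k] * alpha[k];
        const double slack = b[i] - dot;              // Delta = b_i - A_i . alpha
        if (aij < 0)   // ldc = [alpha_j + ceil(Delta/A_ij), +inf)
            d.lo = std::max(d.lo, alpha[j] + std::ceil(slack / aij));
        else           // ldc = (-inf, alpha_j + floor(Delta/A_ij)]
            d.hi = std::min(d.hi, alpha[j] + std::floor(slack / aij));
    }
    return d;
}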

5.2 Lift Move

Clearly, moving x_{j} within the integers of lfd(x_{j},\bm{\alpha}) does not break the feasibility of \bm{\alpha} as long as the other variables are kept constant. So once the current solution \bm{\alpha} is feasible, we can choose a suitable integer in lfd(x_{j},\bm{\alpha}) for updating x_{j} to improve the quality of the objective function. We create the lift move operator for this purpose, based on local domain reduction.

Definition 4.

For a feasible solution \bm{\alpha}, the lift move operator, denoted as lm(x_{j},\bm{\alpha}), assigns x_{j} to the upper or lower bound of lfd(x_{j},\bm{\alpha}), whichever improves the objective function the most. Specifically, let c_{j} denote the coefficient of x_{j} in \bm{c}^{\top}\bm{x}; an lm operation is described as follows:

(1) If c_{j}<0, then lm(x_{j},\bm{\alpha}) assigns \alpha_{j} to the upper bound of lfd(x_{j},\bm{\alpha}).

(2) If c_{j}>0, then lm(x_{j},\bm{\alpha}) assigns \alpha_{j} to the lower bound of lfd(x_{j},\bm{\alpha}).

5.3 Scoring Function for Lift Move

If there are multiple variables in the objective function, then multiple lm operations can be constructed. To guide the search in Improve mode, we customize a scoring function to select an lm operation. Since all lm operations maintain the feasibility of the solution, we propose the lift score to measure the improvement of the objective function.

Definition 5.

The lift score of a lift move operation op, denoted as score_{lm}(op), is defined as

score_{lm}(op)=obj(\bm{\alpha})-obj(\bm{\alpha}^{\prime})

where \bm{\alpha} and \bm{\alpha}^{\prime} denote the solutions before and after performing op, respectively.

In Improve mode, we pick the operation with the best score_{lm}(op).
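Continuing the previous sketch (and reusing its Interval type), an lm operation and its lift score for a single variable can be formed as follows; LmOp is an illustrative name, and both lfd bounds are assumed finite after intersection with the global bounds.

#include <cmath>

struct LmOp { int j; long long newValue; double scoreLm; };

LmOp liftMove(int j, double cj, long long alphaJ, const Interval& lfd) {
    // c_j < 0: push x_j to lfd's upper bound; c_j > 0: to its lower bound.
    const long long target = (cj < 0) ? (long long)std::floor(lfd.hi)
                                      : (long long)std::ceil(lfd.lo);
    // score_lm(op) = obj(alpha) - obj(alpha') = -c_j * (target - alpha_j)
    return {j, target, -cj * (double)(target - alphaJ)};
}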

6 Full Local-ILP Algorithm with Three Modes

Based on the ideas in the previous sections, we develop a local search ILP solver called Local-ILP. As described in Section 3, after initialization, the algorithm works in three modes, namely Search, Improve, and Restore modes, to iteratively modify \bm{\alpha} until a given time limit is reached. This section is dedicated to the details of the initialization and the three modes of local search, as well as other optimization techniques.

Initialization: Local-ILP generates a solution \bm{\alpha} by assigning the variables one by one until all variables are assigned. For a variable x_{j}, if x^{l}_{j}>0, it is assigned x^{l}_{j}; if x^{u}_{j}<0, it is assigned x^{u}_{j}; otherwise, the variable is set to 0.

6.1 Search Mode

In Search mode (Algorithm 3), the goal of the algorithm is to find the first feasible solution.

Input: ILP instance Q, an infeasible solution \bm{\alpha}
1  if \exists positive tm operation in violated constraints then
2      op := such an operation with the greatest score_{tm};
3  else
4      update constraint weights;
5      op := a tm operation with the greatest score_{tm} in a random violated constraint;
6  perform op to modify \bm{\alpha};
Algorithm 3 Process for Search Mode

If the algorithm fails to find any positive tm operation in violated constraints, it first updates the weights of the constraints according to the weighting scheme described in Section 4.2.1 (line 4). Then, it picks a random violated constraint and chooses from it a tm operation with the greatest score_{tm} (line 5).

6.2 Improve Mode

The Improve mode (Algorithm 4) seeks to improve the objective value while maintaining the feasibility of a feasible solution. If no such operation is found, the algorithm will break feasibility, obtain an infeasible solution, and enter Restore mode.

Input: ILP instance Q, a feasible solution \bm{\alpha}
1  if \exists positive lm operation then
2      op := such an operation with the greatest score_{lm};
3  else
4      op := a unit incremental move in the objective function within variables' global bounds;
5  perform op to modify \bm{\alpha};
Algorithm 4 Process for Improve Mode

If the algorithm fails to find any positive lm operation, it randomly picks a variable x_{j} in the objective function and performs a simple unit incremental move on x_{j} according to its coefficient c_{j} (lines 3-4): if c_{j}<0, then \alpha_{j}:=\alpha_{j}+1; otherwise, \alpha_{j}:=\alpha_{j}-1. If a unit incremental move would break the global bound of a variable, we randomly select another variable. A unit incremental move that keeps all global bounds must exist; otherwise, every variable is at the bound value that improves the objective function, which means the current solution is optimal and we can finish the search.

6.3 Restore Mode

For an infeasible solution obtained from the Improve mode, the Restore mode (Algorithm 5) focuses on repairing the feasibility to obtain a new high-quality feasible solution.

Input: ILP instance Q, an infeasible solution \bm{\alpha}
1  if \exists positive tm operation in violated constraints then
2      op := such an operation with the greatest score_{tm};
3  else if \exists positive tm operation in satisfied constraints then
4      op := such an operation with the greatest score_{tm};
5  else
6      update constraint weights;
7      op := a tm operation with the greatest score_{tm} in a random violated constraint;
8  perform op to modify \bm{\alpha};
Algorithm 5 Process for Restore Mode

If the algorithm fails to find any positive tm operation in either violated or satisfied constraints, it updates the weights and picks a random tm operation similarly to Search mode (lines 6-7).

The difference between Restore and Search modes is that Restore mode also picks tm operations in satisfied constraints, because the number of violated constraints in Restore mode is usually much smaller than in Search mode. More importantly, since score_{improve}(op) is meaningful in Restore mode while a constant in Search mode, the score_{tm} used in Restore mode considers the quality of the objective function while that in Search mode does not.

6.4 Optimization Techniques

Classical techniques of local search are applied in our algorithm, including random sampling, a forbidding strategy, and a restart mechanism.

Random Sampling: We use a sampling method called Best from Multiple Selections, dubbed BMS [27]. For all constraints that need to be considered in Search mode and Restore mode, the algorithm randomly samples a certain number of them, and among all operations derived from the sampled constraints that could be selected, it randomly samples a certain number of these operations and selects the one with the highest score_{tm}. In total, five parameters control the numbers of samples: c_{v} for violated constraints and o_{v} for tm operations in sampled violated constraints; c_{s} for satisfied constraints and o_{s} for tm operations in sampled satisfied constraints; and o_{r} for tm operations in a random violated constraint.
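A generic C++ sketch of the BMS selection rule follows; the template names are ours, and sampling is done with replacement for simplicity.

#include <limits>
#include <random>
#include <vector>

// Draw t random candidates from a pool and keep the best-scoring one.
template <class Cand, class ScoreFn>
const Cand* bmsPick(const std::vector<Cand>& pool, size_t t,
                    ScoreFn score, std::mt19937& rng) {
    if (pool.empty()) return nullptr;
    std::uniform_int_distribution<size_t> pick(0, pool.size() - 1);
    const Cand* best = nullptr;
    double bestScore = -std::numeric_limits<double>::infinity();
    for (size_t s = 0; s < t; ++s) {
        const Cand& c = pool[pick(rng)];
        const double sc = score(c);
        if (sc > bestScore) { best = &c; bestScore = sc; }
    }
    return best;
}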

Forbidding Strategy: We employ a forbidding strategy, the tabu strategy [28, 24], to address the cycling phenomenon. After an operation is executed, the tabu strategy forbids the reverse operation for the following tt iterations, where tt is called the tabu tenure; we use the same setting as [24], tt=3+rand(10). The tabu strategy is directly applied to Local-ILP: if a tm operation that increases (resp. decreases) the value of an integer variable x_{j} is performed, then decreasing (resp. increasing) the value of x_{j} by a tm operation is forbidden for the following tt iterations.
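A sketch of the corresponding bookkeeping in C++ (an illustrative structure of our own; whether rand(10) yields 0-9 or 1-10 is an assumption here):

#include <random>
#include <vector>

struct Tabu {
    std::vector<long long> allowIncAfter, allowDecAfter;  // one entry per variable
    std::mt19937 rng;
    explicit Tabu(size_t nVars) : allowIncAfter(nVars, 0), allowDecAfter(nVars, 0) {}
    // Is the given direction of a tm move on x_j allowed at iteration `it`?
    bool allowed(int j, bool increase, long long it) const {
        return increase ? it >= allowIncAfter[j] : it >= allowDecAfter[j];
    }
    // After x_j is increased (decreased), forbid the reverse move for tt iterations.
    void onMove(int j, bool increased, long long it) {
        const long long tt = 3 + (long long)(rng() % 10);  // tabu tenure 3 + rand(10)
        (increased ? allowDecAfter : allowIncAfter)[j] = it + tt;
    }
};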

Restart Mechanism: The search is restarted when \bm{\alpha^{*}} has not been updated for a sufficient number of steps, which is simply set to 1,500,000. At each restart, we crossover \bm{\alpha^{*}} with random integers within the variables' bounds to reset \bm{\alpha}: x_{j} is assigned \alpha^{*}_{j} or a random integer within [x^{l}_{j},x^{u}_{j}], each with 50% probability. Additionally, all weights are restored to 1 upon restarting.

7 Experiments

We carry out experiments to evaluate Local-ILP on the MIPLIB dataset (https://miplib.zib.de/), which is a standard dataset including a broad range of problem types. We compare Local-ILP with state-of-the-art ILP solvers and another heuristic, Feasibility Jump, in terms of performance in finding a good feasible solution quickly. Experiments are also conducted to analyze the effectiveness of the proposed new operators and framework. Additionally, the MIPLIB dataset records the best-known solutions for its instances, and we establish new records for 6 instances in MIPLIB with Local-ILP.

7.1 Experiment Preliminaries

7.1.1 Implementation

Local-ILP is implemented in C++ and compiled by g++ with the '-O3' option. There are two types of parameters: \beta for the scoring function of tight move, and the sampling numbers for the BMS heuristic. The default values of the parameters, tuned on 20% randomly sampled instances, are shown in Table 1.

Table 1: Tuned parameters of our proposed algorithms.
Parameter Range Final value
β\beta {0.1,0.2,…,1} 0.5
cvc_{v} {1,2,…,10} 3
ovo_{v} {1000,2000,…,10000} 2000
csc_{s} {10,20,…,100} 30
oso_{s} {50,100,…,500} 350
oro_{r} {50,100,…,500} 150

7.1.2 Competitors

We compare Local-ILP with the latest state-of-the-art commercial and non-commercial ILP solvers, namely Gurobi 10.0.0 (https://www.gurobi.com/) and SCIP 8.0.1 (https://www.scipopt.org/). For Gurobi, we use both its complete and heuristic versions. Moreover, we compare Local-ILP with another heuristic, Feasibility Jump (FJ for short; https://github.com/sintef/feasibilityjump), which won 1st place in the MIP 2022 Workshop's Computational Competition. For all competitors, we always use their default parameter settings.

7.1.3 Benchmarks

MIPLIB is widely recognized as the standard benchmark set for integer linear programming problems. Our experiments are carried out with the union of MIPLIB 2003 [29], MIPLIB 2010 [30] and MIPLIB 2017 [31], selecting the ILP instances tagged hard and open. As Local-ILP is not an exhaustive algorithm, infeasible instances are excluded, resulting in a benchmark consisting of 121 instances.

7.1.4 Experiment Setup

All experiments are carried out on a server with AMD EPYC 7763 CPU and 2048G RAM under the system Ubuntu 20.04.4. For each instance, each solver is executed by one thread with time limits of 10, 60, and 300 seconds.

We use three metrics to evaluate the performance of each solver:

(1) #feas. To evaluate the ability to find a feasible solution for an instance, we count #feas, the number of instances where a solver can find a feasible solution within the time limit.

(2) #win. To evaluate the ability to find a high-quality feasible solution for an instance, we count #win, the number of instances where a solver finds the best solution among all solutions output by the tested solvers within the time limit. Note that, for each instance, if the best solution is found by more than one solver, this instance is counted as a #win for all these solvers; and if all solvers find no solution, this instance is not counted as a #win for any solver.

(3) P(T). We also use a well-established measure, the primal integral P(T) [32], to evaluate the performance of each solver; it depends on the quality of the solutions found during the solving process as well as on the points in time when they are found. To define the primal integral P(T), we first define the primal gap \gamma and the primal gap function p(t).

Let \bm{\tilde{x}} denote a solution for an ILP instance and \bm{\tilde{x}}_{opt} denote an optimal (or best known) solution for that instance; the primal gap \gamma is defined as

\gamma(\bm{\tilde{x}})=\begin{cases}0,&\text{if }|\bm{c}^{\top}\bm{\tilde{x}}_{opt}|=|\bm{c}^{\top}\bm{\tilde{x}}|=0,\\ 1,&\text{if }\bm{c}^{\top}\bm{\tilde{x}}_{opt}\cdot\bm{c}^{\top}\bm{\tilde{x}}<0,\\ |\bm{c}^{\top}\bm{\tilde{x}}_{opt}-\bm{c}^{\top}\bm{\tilde{x}}|\,/\,\max\{|\bm{c}^{\top}\bm{\tilde{x}}_{opt}|,|\bm{c}^{\top}\bm{\tilde{x}}|\},&\text{otherwise.}\end{cases} (2)

Let t_{max}\in\mathbb{R}_{\geq 0} be the time limit of an ILP solver. Its primal gap function p(t): [0,t_{max}]\rightarrow[0,1] is defined as:

p(t)=\begin{cases}1,&\text{if no feasible solution is found up to time }t,\\ \gamma(\bm{\tilde{x}}(t)),&\text{otherwise, where }\bm{\tilde{x}}(t)\text{ is the best solution found up to time }t.\end{cases} (3)

Let T\in[0,t_{max}] and let t_{i}\in[0,T] for i\in\{1,...,I-1\} be the points in time when a new best solution is found, with t_{0}=0 and t_{I}=T. We define the primal integral P(T) of a run as:

P(T):=\int_{t=0}^{T}p(t)\,dt=\sum_{i=1}^{I}p(t_{i-1})\cdot(t_{i}-t_{i-1}) (4)

For a benchmark, the primal integral P(T) is the average of the primal integrals over its instances. According to the above definition, smaller values of P(T) indicate better performance.
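For reference, P(T) is straightforward to compute from the solution-improvement history; the following is a short C++ helper of our own, with an assumed event representation.

#include <utility>
#include <vector>

// P(T) per Eq. (4). `events` holds (t_i, primal gap after t_i) for each new
// best solution, in increasing time; the gap is 1 before the first feasible
// solution, per Eq. (3).
double primalIntegral(const std::vector<std::pair<double, double>>& events,
                      double T) {
    double P = 0.0, prevT = 0.0, gap = 1.0;   // p(t) = 1 until a solution exists
    for (const auto& [t, g] : events) {
        P += gap * (t - prevT);               // p(t_{i-1}) * (t_i - t_{i-1})
        prevT = t;
        gap = g;
    }
    P += gap * (T - prevT);                   // final interval up to T
    return P;
}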

We organize the overall results into three types (#win, #feas, P(T)). For each time limit setting, the best results are marked in bold.

7.2 Comparisons with Competitors

In the MIPLIB dataset, each instance may contain multiple types of constraints (https://miplib.zib.de/statistics.html), such as knapsack constraints, set covering constraints, and so on. We categorize all instances by the type of their main constraint class. We mark an instance as hybrid if it contains multiple main constraint classes.

The results of the comparison with all competitors in terms of the ability to find a feasible solution, the quality of the best-found solution, and the primal integral are shown in Tables 2, 3 and 4, respectively.

Table 2: Empirical results on comparing Local-ILP with FJ, SCIP, and Gurobi in terms of the ability to find a feasible solution, with 10s, 60s, and 300s time limits. #inst denotes the number of instances in each class.
Main Constraint Class #inst #feas
10 s 60 s 300 s
Local-ILP FJ SCIP Gurobi Local-ILP FJ SCIP Gurobi Local-ILP FJ SCIP Gurobi
comp. heur. comp. heur. comp. heur.
Singleton 2 2 0 2 2 2 2 0 2 2 2 2 0 2 2 2
Aggregations 2 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1
Bin Packing 2 1 1 0 2 2 2 2 1 2 2 2 2 1 2 2
Equation Knapsack 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Knapsack 4 4 4 3 3 3 4 4 3 3 3 4 4 3 4 4
Set Packing 5 4 4 3 4 4 4 4 4 4 4 4 4 4 4 4
Cardinality 6 2 1 1 2 2 2 2 1 2 2 2 2 2 3 3
Hybrid 7 4 5 2 3 3 4 5 3 4 4 4 5 4 5 5
Mixed Binary 8 4 4 1 2 2 4 5 1 3 3 5 5 2 3 3
Set Partitioning 9 5 5 3 7 7 6 7 4 7 7 7 7 5 7 7
Set Covering 11 9 9 7 9 9 9 9 8 9 9 9 9 8 9 9
Precedence 13 12 9 11 12 12 12 10 12 12 12 12 10 12 12 12
General Linear 15 10 6 6 8 8 10 6 7 9 10 10 6 9 11 11
Variable Bound 16 14 12 10 13 13 15 13 12 14 14 15 13 12 14 14
Invariant Knapsack 18 13 12 10 12 12 13 12 11 12 12 13 12 13 15 15
Total 121 85 73 59 80 80 88 80 70 84 85 90 80 78 92 92

7.2.1 The Ability to Find a Feasible Solution

Here we present the results of the number of instances in which a feasible solution could be found by each solver.

As shown in Table 2, Local-ILP performs best with 12 types of all main constraint classes in the 10s time limit, 13 types in the 60s, and 10 types in the 300s. Local-ILP wins the most types in the 10s and 60s time limits, and the second most in the 300s.

In general, for the ability to find a feasible solution quickly, SCIP is the worst solver with the least #feas. It is obvious that Local-ILP, FJ, and Gurobi perform better than SCIP. Local-ILP performs best in the 10s and 60s time limits but loses to Gurobi in the 300s.

7.2.2 The Quality of the Best Found Solution

For each solver, we present the number of instances where it found the best solution among all candidate solvers.

Table 3: Empirical results on comparing Local-ILP with FJ, SCIP, and Gurobi in terms of the quality of the best-found solution, with 10s, 60s, and 300s time limits. #inst denotes the number of instances in each class.
Main Constraint Class #inst #win
10 s 60 s 300 s
Local-ILP FJ SCIP Gurobi Local-ILP FJ SCIP Gurobi Local-ILP FJ SCIP Gurobi
comp. heur. comp. heur. comp. heur.
Singleton 2 1 0 0 0 1 1 0 0 0 1 1 0 0 1 0
Aggregations 2 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0
Bin Packing 2 0 0 0 2 2 0 0 0 2 2 0 0 1 1 1
Equation Knapsack 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Knapsack 4 2 0 1 0 1 2 0 0 0 3 1 0 0 3 2
Set Packing 5 1 0 0 3 2 2 0 0 1 2 2 0 0 1 2
Cardinality 6 1 0 0 1 0 1 0 0 0 1 0 0 0 1 3
Hybrid 7 3 2 0 0 0 3 1 0 1 1 2 1 0 2 3
Mixed Binary 8 2 2 1 2 2 2 3 1 1 2 2 1 1 0 1
Set Partitioning 9 3 0 0 3 4 3 1 1 1 2 1 1 0 3 4
Set Covering 11 5 0 1 4 4 2 0 2 2 3 1 0 0 1 8
Precedence 13 6 0 0 3 5 8 0 1 2 1 5 0 1 1 5
General Linear 15 3 0 1 5 6 1 0 1 5 7 1 0 1 7 7
Variable Bound 16 11 1 0 1 3 9 1 2 3 7 6 0 1 4 9
Invariant Knapsack 18 10 1 2 5 5 8 1 1 7 6 4 0 2 10 8
Total 121 48 6 6 29 36 42 7 9 25 39 26 3 7 36 53

As shown in Table 3, Local-ILP performs best for 9 types of all 15 main constraint classes in the 10s and 60s time limit, and 4 types in the 300s time limit. Local-ILP wins the most types in the 10s and 60s time limits, and the second most in the 300s time limit. In particular, Local-ILP exhibits the best performance for the types of singleton, mixed binary, and precedence main constraint classes over all time limits.

Overall, for the ability to find a high-quality solution quickly, FJ performs the worst with the least #win in each time limit setting. Local-ILP and Gurobi perform much better than FJ and SCIP. For the comparison between Local-ILP and Gurobi, Local-ILP consistently performs best in the 10s and 60s time limits, but in the 300s time limits, Gurobi wins more instances than Local-ILP, especially its heuristic version.

Table 4: Empirical results on comparing Local-ILP with FJ, SCIP, and Gurobi in terms of the primal integral P(T)P(T), with 10s, 60s, and 300s time limits. #inst denotes the number of instances in each class.
Main Constraint Class #inst P(T)
10 s 60 s 300 s
Local-ILP FJ SCIP Gurobi Local-ILP FJ SCIP Gurobi Local-ILP FJ SCIP Gurobi
comp. heur. comp. heur. comp. heur.
Singleton 2 0.274 1.000 0.423 0.352 0.373 0.176 1.000 0.370 0.189 0.196 0.143 1.000 0.225 0.135 0.126
Aggregations 2 0.802 0.854 1.000 0.538 0.542 0.699 0.809 0.772 0.509 0.508 0.632 0.802 0.557 0.502 0.502
Bin Packing 2 0.989 0.997 1.000 0.960 0.959 0.964 0.994 0.999 0.752 0.760 0.954 0.994 0.987 0.604 0.606
Equation Knapsack 3 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
Knapsack 4 0.166 0.431 0.360 0.320 0.320 0.143 0.427 0.309 0.281 0.280 0.133 0.427 0.298 0.216 0.209
Set Packing 5 0.367 0.903 0.715 0.424 0.423 0.293 0.888 0.468 0.275 0.274 0.275 0.885 0.314 0.233 0.233
Cardinality 6 0.690 1.000 0.861 0.751 0.752 0.657 0.965 0.825 0.721 0.720 0.651 0.871 0.785 0.564 0.557
Hybrid 7 0.641 0.682 1.000 0.780 0.780 0.540 0.605 1.000 0.648 0.648 0.524 0.592 0.671 0.544 0.542
Mixed Binary 8 0.894 0.886 1.000 0.986 0.986 0.874 0.874 1.000 0.976 0.976 0.867 0.871 0.992 0.973 0.970
Set Partitioning 9 0.906 0.901 0.927 0.819 0.819 0.862 0.866 0.895 0.811 0.812 0.836 0.855 0.822 0.752 0.744
Set Covering 11 0.734 0.777 0.757 0.623 0.623 0.727 0.773 0.679 0.496 0.491 0.723 0.773 0.549 0.376 0.346
Precedence 13 0.694 0.993 0.877 0.729 0.722 0.547 0.993 0.727 0.625 0.622 0.439 0.993 0.629 0.513 0.517
General Linear 15 0.516 0.751 0.696 0.480 0.483 0.503 0.736 0.607 0.408 0.404 0.501 0.734 0.508 0.301 0.321
Variable Bound 16 0.506 0.982 0.831 0.624 0.619 0.371 0.981 0.633 0.479 0.472 0.335 0.981 0.464 0.356 0.321
Invariant Knapsack 18 0.670 0.860 0.754 0.720 0.720 0.621 0.858 0.732 0.619 0.628 0.563 0.857 0.693 0.483 0.496
Total 121 0.658 0.871 0.807 0.675 0.675 0.597 0.860 0.721 0.588 0.587 0.563 0.854 0.628 0.492 0.487

7.2.3 Primal Integral

Here we present the results of the average primal integral P(T)P(T) of each solver. As shown in Table 4, Local-ILP performs best with 8 types of all main constraint classes in the 10s time limits, 6 types in the 60s, and 4 types in the 300s. Moreover, Local-ILP performs best in most types in the 10s and 60s time limits. Particularly, Local-ILP exhibits the best performance for the types of knapsack, mixed binary, hybrid, and precedence main constraint classes over all time limits.

In general, for the primal integral, Local-ILP performs best in the 10s time limit, and is competitive with Gurobi in the 60s with a very close P(T)P(T), but obviously loses to it in the 300s. However, Local-ILP is significantly better than FJ and SCIP over all time limits.

7.3 Effectiveness of Proposed New Techniques

To analyze the effectiveness of our proposed new techniques in Local-ILP, we tested 6 variations of our Local-ILP algorithm as follows:

(1) To analyze the effectiveness of the tight move operator, we modify Local-ILP by replacing the tight move operator with an operator that directly modifies an integer variable by a fixed increment inc, leading to two versions, v fix 1 and v fix 5, where inc is set to 1 and 5, respectively.

(2) To analyze the effectiveness of the lift move operator and the Improve mode, we modify Local-ILP by removing the Improve mode from the framework and using only the Search and Restore modes, leading to the version v no improve. To analyze the effectiveness of the Restore mode, we modify Local-ILP by removing the Restore mode from the framework and returning to Search mode when Improve mode gets stuck in a local optimum, leading to the version v no restore.

(3) To compare different ways of escaping local optima in Improve mode, we modify Local-ILP by replacing the unit incremental move with operators that have larger step sizes, leading to two versions, v per bound and v per random, where the step size is set as the distance to the bound of the variable and as a random size between 1 and the bound, respectively.

Table 5: Empirical results on comparing Local-ILP with 6 variations, with 10s, 60s and 300s time limits.
TimeLimit Local-ILP v fix 1 v fix 5 Local-ILP v no improve v no restore Local-ILP v per bound v per random
#win P(T) #win P(T) #win P(T) #win P(T) #win P(T) #win P(T) #win P(T) #win P(T) #win P(T)
10 s 64 0.658 27 0.723 5 0.924 61 0.658 7 0.811 37 0.662 56 0.658 27 0.682 37 0.680
60 s 71 0.597 31 0.675 1 0.910 63 0.597 4 0.797 35 0.608 54 0.597 23 0.627 45 0.618
300 s 70 0.563 31 0.649 2 0.902 67 0.563 4 0.793 31 0.585 58 0.563 26 0.602 41 0.576

We compare Local-ILP with these modified versions on the benchmark, with 10s, 60s, and 300s time limits. As shown in Table 5, Local-ILP outperforms all other variations, confirming the effectiveness of the strategies.

7.4 Stability with Repetitive Experiments

To examine the stability of Local-ILP, which involves randomness, we run Local-ILP 10 times with 10 different seeds on the benchmark for the 10s, 60s, and 300s time limits.

Table 6: Experimental results of Local-ILP with 10 different seeds on the benchmark for 10s, 60s, and 300s time limits.
10 s 60 s 300 s
avgP(T)avg_{P(T)} stdP(T)std_{P(T)} stdP(T)/avgP(T){std_{P(T)}}/{avg_{P(T)}} avgP(T)avg_{P(T)} stdP(T)std_{P(T)} stdP(T)/avgP(T){std_{P(T)}}/{avg_{P(T)}} avgP(T)avg_{P(T)} stdP(T)std_{P(T)} stdP(T)/avgP(T){std_{P(T)}}/{avg_{P(T)}}
0.65634 0.00278 0.00424 0.59599 0.00462 0.00775 0.56427 0.00592 0.01049

Across the 10 runs, we denote the average primal integral by avg_{P(T)} and the standard deviation of the primal integral by std_{P(T)}. The experimental results presented in Table 6 demonstrate that, for each time limit, the value of std_{P(T)}/avg_{P(T)} for Local-ILP is less than 1.1%, indicating that Local-ILP exhibits stable performance.

7.5 New Records to Open Instances

The optimal solutions for the instances labeled open have not yet been reported. The current best solutions known for each open instance are available on the official website of MIPLIB 2017.

Table 7: New records on open instances

Instance        Objective (Local-ILP)   Objective (Previous)   Constraint Classification
sorrell7        -197                    -196                   variable bound
supportcase22   117                     N/A                    set covering, aggregations, bin packing and mixed binary
cdc7-4-3-2      -294                    -289                   set packing
ns1828997       8                       9                      precedence, invariant knapsack, variable bound and cardinality
scpm1           544                     554                    set covering
scpn2           490                     501                    set covering

As shown in Table 7, Local-ILP established new best-known objective values for 6 instances. These 6 instances contain different constraint types, which demonstrates both the strong solving power of Local-ILP and its applicability to diverse types of problems. The new records have been submitted to MIPLIB 2017 and accepted; the corresponding instance pages are https://miplib.zib.de/instance_details_sorrell7.html, https://miplib.zib.de/instance_details_supportcase22.html, https://miplib.zib.de/instance_details_cdc7-4-3-2.html, https://miplib.zib.de/instance_details_ns1828997.html, and https://miplib.zib.de/instance_details_scpn2.html; the page for the remaining record is to be published in the next update.

8 Theoretical Aspects of Our Algorithm

In this section, we provide theoretical aspects of our algorithm. While it is hard to give a worst-case performance guarantee for general ILP heuristic algorithms, we can still reveal some underlying properties of our algorithm.

We show the properties of our new operators and of our algorithm on finite, feasible ILP instances. Since our algorithm does not prove infeasibility or unboundedness, we do not analyze these two cases. From Algorithms 3-5, we can summarize that there are four ways to generate a solution in our algorithm:

(1) apply a tight move operator to an infeasible solution

(2) apply a lift move operator to a feasible solution

(3) apply a unit incremental move when no positive $lm$ operation is found in Improve mode

(4) move a variable’s value to one side of its global bound

Following the observations in Section 2.3, we define the concept of boundary solutions, and show that all feasible solutions visited by our algorithm are boundary solutions. The proofs of all propositions are presented in the appendices.

8.1 Boundary Solutions

We assume an instance of ILP of the form:

\begin{split}
\text{Minimize}\quad & obj(\bm{x})=\bm{c}^{\top}\bm{x}\\
\text{subject to}\quad & \bm{A}\bm{x}\leq\bm{b}\\
& \bm{x}\in\mathbb{Z}^{n}
\end{split} \qquad (5)

For the convenience of analysis, here we do not distinguish variables' bounds from other constraints; thus all variables' bounds are included in $\bm{A}\bm{x}\leq\bm{b}$. This is slightly different from Formula (1), and we take Formula (5) as the description of an ILP instance within Section 8. We assume at least one coefficient of the objective function is non-zero: $\exists j\in\{1,\dots,n\}$ with $c_{j}\neq 0$; otherwise the instance has a constant objective function and reduces to a satisfiability problem, and since our algorithm does not prove feasibility, we exclude this case from the analysis. We denote the index sets $I=\{1,\dots,m\}$, $J=\{1,\dots,n\}$, and $J_{\neq 0}=\{j\mid c_{j}\neq 0,\ j=1,\dots,n\}$, so $J_{\neq 0}\neq\emptyset$. Moreover, we assume that no variable is free, i.e., no variable simultaneously appears in no constraint, has both infinite upper and lower bounds, and does not appear in $\bm{c}^{\top}\bm{x}$.
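For instance (our illustration), a finite bound pair $x_{j}^{l}\leq x_{j}\leq x_{j}^{u}$ is folded into $\bm{A}\bm{x}\leq\bm{b}$ as the two rows $\bm{e}_{j}^{\top}\bm{x}\leq x_{j}^{u}$ and $-\bm{e}_{j}^{\top}\bm{x}\leq-x_{j}^{l}$, so Formula (5) loses no generality compared with Formula (1).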

We assume the ILP has a finite optimal solution $\bm{x}^{*}$ and a finite optimal objective value $opt$. We assume $\bm{x}^{*}$ exists (though it need not be unique), as our algorithm aims to find good solutions and does not try to prove that an ILP is infeasible; we assume the optimal objective value is finite, as otherwise there is no meaningful optimal objective value.

Now we present the concept of boundary solutions that will be used to analyze our operators, and show that all optimal solutions are boundary solutions.

For an ILP instance in the form of Formula (5), let the polyhedron $P=\{\bm{x}\in\mathbb{R}^{n}\mid\bm{A}\bm{x}\leq\bm{b}\}$; then the set of feasible solutions of the ILP can be described as $P\cap\mathbb{Z}^{n}$, and all feasible solutions belong to the integer hull $P_{I}=conv(P\cap\mathbb{Z}^{n})$, where $conv(S)\subseteq\mathbb{R}^{n}$ for $S\subseteq\mathbb{Z}^{n}$ denotes the convex hull of the set of points $S$. Let $U=\bigcup_{j\in J}\{\bm{e}_{j},-\bm{e}_{j}\}$. We call $\bm{x}+\bm{d}$, $\bm{d}\in U$, the neighbors of $\bm{x}$. Given $P=\{\bm{x}\in\mathbb{R}^{n}\mid\bm{A}\bm{x}\leq\bm{b}\}$, we define the set of boundary points of $P$:

Definition 6.

$\bm{x}\in\mathbb{Z}^{n}$ is a boundary point of $P$ if $\bm{x}\in P$ and $\exists\bm{d}\in U$ such that $\bm{x}+\bm{d}\notin P$. The set of boundary points of $P$ is denoted by $\delta(P)$.

This definition says a boundary point has at least one neighbor that is out of the feasible region, which is similar to the definition of boundary in topology.
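To make Definition 6 concrete, below is a minimal brute-force sketch (our illustration, not part of Local-ILP); the helper name is_boundary_point and the toy data A, b are our own. It tests the boundary-point condition by checking whether any of the $2n$ unit-step neighbors of a feasible integer point leaves $P$.

    import numpy as np

    def is_boundary_point(A, b, x):
        """Brute-force test of Definition 6: x is a boundary point of
        P = {y in R^n | A y <= b} iff x lies in P and at least one
        unit-step neighbor x +/- e_j falls outside P."""
        x = np.asarray(x)
        in_P = lambda y: bool(np.all(A @ y <= b))
        if not in_P(x):
            return False  # infeasible points are never boundary points
        for j in range(len(x)):
            for step in (1, -1):
                y = x.copy()
                y[j] += step
                if not in_P(y):
                    return True  # some neighbor leaves P
        return False  # all 2n neighbors stay inside P

    # Toy polyhedron P = {0 <= x1 <= 2, 0 <= x2 <= 2} written as A x <= b.
    A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    b = np.array([2, 0, 2, 0])
    print(is_boundary_point(A, b, [2, 1]))  # True: (3, 1) is outside P
    print(is_boundary_point(A, b, [1, 1]))  # False: all neighbors stay in P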

Figure 2: An illustration of the polyhedron of the LP relaxation of an ILP and different solutions: grey/red/blue points represent infeasible/boundary/other solutions.

We call a solution $\bm{x}$ of Formula (5) a boundary solution if and only if $\bm{x}$ is a boundary point of the polyhedron $P$ of its LP relaxation. Figure 2 illustrates the different types of solutions (boundary, infeasible, and others). We now exhibit the first property of ILP that we use: every optimal solution is a boundary solution.

Proposition 1.

For an ILP instance in the form of Formula (5), any optimal solution is a boundary solution, and moreover $\min_{\bm{x}\in P\cap\mathbb{Z}^{n}}obj(\bm{x})=\min_{\bm{x}\in\delta(P)\cap\mathbb{Z}^{n}}obj(\bm{x})$.

This proposition states that $\delta(P)$ is “complete” for obtaining an optimal solution:

Remark 1.

The search space of an ILP instance in the form of Formula (5) can be reduced to $\delta(P)$ while all optimal solutions are kept. Hence $\delta(P)$ can be considered a complete search space for ILP, in the sense that restricting the search to it misses no optimal solution.
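Remark 1 can be checked by brute force on small instances. The following sketch (a toy example of ours, reusing is_boundary_point from the sketch above) verifies that the minimum of the objective over all feasible integer points coincides with the minimum over boundary points alone.

    import itertools
    import numpy as np

    # Toy instance: box [0,3]^2 cut by x1 + x2 <= 4, with obj(x) = -2*x1 - x2.
    A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1], [1, 1]])
    b = np.array([3, 0, 3, 0, 4])
    c = np.array([-2, -1])

    feasible = [np.array(p) for p in itertools.product(range(4), repeat=2)
                if np.all(A @ np.array(p) <= b)]
    boundary = [p for p in feasible if is_boundary_point(A, b, p)]

    # Both minima coincide (here -7, attained at the boundary point (3, 1)).
    assert min(int(c @ p) for p in feasible) == min(int(c @ p) for p in boundary)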

We also show that the concept of $\delta(P)$ is significant: it is not a definition so general that it trivially contains all optimal solutions. Consider a polyhedron $P=\{\bm{x}\in\mathbb{R}^{n}\mid\bm{A}\bm{x}\leq\bm{b}\}$; different ILP instances can be specified by $P$ together with different objective functions. We show a simple fact: given $P$ and $\delta(P)$, any proper subset of $\delta(P)$ may miss an optimal solution for some ILP instance consisting of $P$ and some objective function.

Fact 3.

For a polyhedron $P=\{\bm{x}\in\mathbb{R}^{n}\mid\bm{A}\bm{x}\leq\bm{b}\}$ and any boundary point $\bm{x}\in\delta(P)$, there exists a linear objective function $obj(\bm{x})$ such that $\bm{x}$ is an optimal solution of the ILP instance consisting of $P$ and this objective function.

Since $\bm{x}\in P$ and $\bm{x}+\bm{e}_{j}\notin P$ for some $j$, it is easy to obtain such an instance by setting the objective function $obj(\bm{x})=-\bm{e}_{j}^{\top}\bm{x}$, which is minimized by pushing $x_{j}$ upward, precisely the direction blocked at $\bm{x}$; it is then easy to see that $\bm{x}$ is optimal.
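As a small worked example (ours, for illustration): take $P=\{\bm{x}\in\mathbb{R}^{2}\mid 0\leq x_{1}\leq 2,\ 0\leq x_{2}\leq 2\}$ and the boundary point $\bm{x}=(2,1)$, for which $\bm{x}+\bm{e}_{1}=(3,1)\notin P$. With $obj(\bm{x})=-\bm{e}_{1}^{\top}\bm{x}=-x_{1}$, the minimum over $P\cap\mathbb{Z}^{2}$ is $-2$, attained exactly when $x_{1}=2$; hence $(2,1)$ is an optimal solution.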

Fact 4.

For a 0-1 ILP instance (all variables are binary), all feasible solutions are boundary solutions; indeed, for any binary variable $x_{j}$, one of $\bm{x}+\bm{e}_{j}$ and $\bm{x}-\bm{e}_{j}$ violates the bound $0\leq x_{j}\leq 1$.

8.2 Properties of our Operators

Now we show how our operators make use of this property. We first analyze the tight move operator and the lift move operator individually, and then show that all feasible solutions obtained by our algorithm are boundary solutions.

Tight Move Operator. We show a property of the tight move operator: if a solution obtained by the tight move operator is feasible, then it is a boundary solution.

We introduce some notation to facilitate the analysis of the operators. Given $\bm{x}\in\mathbb{Z}^{n}$, for all $j\in J$ and $i\in I$ with $A_{ij}\neq 0$, let $\phi_{ji}:\mathbb{Z}^{n}\rightarrow\mathbb{Z}^{n}$ be such that $\phi_{ji}(\bm{x})=\bm{x}^{\prime}$ if $\bm{x}^{\prime}$ is obtained from $\bm{x}$ by $tm(x_{j},con_{i},\bm{x})$.

Proposition 2.

Any feasible solution obtained by the $tm$ operator is a boundary solution: for $\bm{x}\in\mathbb{Z}^{n}$ and all $j\in J$, $i\in I$, if $\bm{x}^{\prime}=\phi_{ji}(\bm{x})$ and $\bm{x}^{\prime}\in P$, then $\bm{x}^{\prime}\in\delta(P)$.
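To illustrate Proposition 2, the following is a minimal sketch of the tight move in the case analyzed in the proof in A.2 ($A_{ij}>0$ and constraint $i$ violated); it is our simplified reading of the formula used there, not the full operator of Definition 1, whose remaining sign cases are symmetric.

    import math
    import numpy as np

    def tight_move_case(A, b, x, i, j, lower_bounds):
        """Sketch of tm(x_j, con_i, x) for A[i, j] > 0 and constraint i
        violated (Delta = b_i - A_i . x < 0): decrease x_j just far enough
        to satisfy con_i, without crossing x_j's global lower bound."""
        delta = b[i] - A[i] @ x
        assert A[i, j] > 0 and delta < 0
        shift = min(abs(math.floor(delta / A[i, j])),
                    abs(lower_bounds[j] - x[j]))
        x_new = x.copy()
        x_new[j] -= shift
        return x_new

    # Constraint 3*x1 + 2*x2 <= 6 is violated at x = (3, 2).
    A = np.array([[3.0, 2.0]])
    b = np.array([6.0])
    print(tight_move_case(A, b, np.array([3, 2]), i=0, j=0,
                          lower_bounds=[0, 0]))  # -> [0, 2], feasible again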

Lift Move Operator. For the lift move operator, we show another property: it maps a feasible solution to a boundary solution.

Given $\bm{x}\in\mathbb{Z}^{n}$, for all $j\in J$, let $\chi_{j}:\mathbb{Z}^{n}\rightarrow\mathbb{Z}^{n}$ be such that $\chi_{j}(\bm{x})=\bm{x}^{\prime}$ if $\bm{x}^{\prime}$ is obtained from $\bm{x}$ by $lm(x_{j},\bm{x})$.

Proposition 3.

The lift move operator maps a feasible solution to a boundary solution: for $\bm{x}\in P\cap\mathbb{Z}^{n}$ and all $j\in J$, $\chi_{j}(\bm{x})\in\delta(P)$.
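Similarly, below is a minimal sketch of the lift move for the case $c_{j}<0$ analyzed in A.3 (our simplified reading of Definition 5, not the full operator): $x_{j}$ is raised to the upper end of its local feasible domain, i.e., as far as every constraint and the global upper bound allow, so the resulting solution is tight against some constraint or bound.

    import math
    import numpy as np

    def lift_move_up(A, b, x, j, upper_bound_j):
        """Sketch of lm(x_j, x) for a feasible x and c_j < 0: raise x_j to
        the upper end of lfd(x_j, x). Rows with A[i, j] > 0 cap the
        increase; rows with A[i, j] <= 0 remain satisfied as x_j grows."""
        ub = upper_bound_j
        for i in range(A.shape[0]):
            if A[i, j] > 0:
                slack = b[i] - A[i] @ x          # >= 0 since x is feasible
                ub = min(ub, x[j] + math.floor(slack / A[i, j]))
        x_new = x.copy()
        x_new[j] = ub
        return x_new

    # Raise x2 inside 3*x1 + 2*x2 <= 6 starting from the feasible x = (0, 1).
    A = np.array([[3.0, 2.0]])
    b = np.array([6.0])
    print(lift_move_up(A, b, np.array([0, 1]), j=1, upper_bound_j=10))  # -> [0, 3]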

Our Algorithm. As seen in Algorithms 3 - 5, our algorithm adopts the strategy of applying the lift move operator to feasible solutions and the tight move operator to infeasible solutions. This coincides with the properties shown in the preceding sections; additionally, we show here that the perturbations are also consistent with this strategy, so we can make the following argument about our algorithm:

Proposition 4.

Any feasible solution obtained in our algorithm is a boundary solution.

This property shows that our algorithm avoids visiting points in $P\setminus\delta(P)$, which is very helpful for problems with large variable domains.

8.3 Different Behaviors in Three Modes

The three modes of our algorithm have different functionalities. The Search mode aims to find a feasible solution, i.e., to locate $P$ within the full domain of the variables; the Improve mode tries to generate a better solution while keeping feasibility, i.e., to walk inside $P$ in a direction that improves the objective function; and the Restore mode seeks to reach a feasible solution in $P$ again when an infeasible solution has been produced by a perturbation in Improve mode. Thus, different settings of the operators provide different behaviors in the three modes, adapted to their different purposes.

8.3.1 Scoring Functions

The Search and Restore modes both aim at finding feasible solutions, and both use the $tm$ operator. The $tm$ scoring function therefore focuses on minimizing the degree of violation of the constraints, which can be regarded as a “distance” to the boundary solutions of $P$. For Improve mode, the purpose is to improve the objective function; since all solutions visited in Improve mode are feasible, the only measure is the objective value, which is used as the $lm$ scoring function.
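As an illustrative stand-in for the $tm$ scoring function (our assumption of its general shape; the exact weighted definition is given earlier in the paper), a violation-reduction score can be sketched as follows: a move scores positively exactly when it decreases the total degree of violation of $\bm{A}\bm{x}\leq\bm{b}$, i.e., moves the assignment closer to $P$.

    import numpy as np

    def total_violation(A, b, x):
        """Sum of the violations of A x <= b (0 iff x is feasible)."""
        return float(np.sum(np.maximum(A @ x - b, 0.0)))

    def violation_score(A, b, x_old, x_new):
        """Positive iff the move x_old -> x_new reduces total violation."""
        return total_violation(A, b, x_old) - total_violation(A, b, x_new)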

8.3.2 Step Size and Perturbations

As our algorithm does not have a fixed step size to generate new solutions, in each mode, the actual step size and degree of perturbation depend on different operators.

For the Search and Restore modes, which aim at finding a feasible solution, the search strategy should realize a larger discrepancy so as to obtain a rough location of $P$ quickly. This is handled in Algorithm 3 line 5 and Algorithm 5 line 7, which allow these modes to pick non-positive $tm$ operations. This provides a larger degree of perturbation and results in a larger discrepancy within the full domain.

For Improve mode, since we showed that the $lm$ operator always reaches boundary solutions, the step size is automatically adapted to the actual search state. Once Improve mode encounters a local optimum, i.e., there is no $lm$ operation that stays inside $P$, the search leaves $P$ by taking a unit incremental move on one variable, with step size only $1$, to jump out of the local optimum. Since all feasible solutions lie in $P$, this smallest step size avoids moving far away from $P$ and thus helps the search enter $P$ again.

9 Conclusions and Future Work

This work proposed new characterizations of ILP with the concept of boundary solutions. Motivated by the new characterizations, we proposed a new local search algorithm Local-ILP, which is an efficient incomplete solver for general integer linear programming validated on widely different types of problems. The main features of our algorithm include a new framework adopting three different modes, and two new operators designed for general ILP, the tight move and lift move operators, with tailored scoring functions for each. Experiments show that, in solving large-scale hard ILP problems within a reasonably short time, our algorithm is competitive and complementary to the state-of-the-art commercial solver Gurobi, and significantly outperforms the state-of-the-art non-commercial solver SCIP. More encouragingly, our algorithm established new records for 6 MIPLIB open instances. We also presented the theoretical analysis of our algorithm, which shows our algorithm could avoid visiting unnecessary regions.

As for future work, we would like to design more sophisticated operators and scoring functions related to the constraints and the objective function, which we believe can further improve the performance of local search algorithms for ILP. Given the promising results of Local-ILP, we would also like to apply our local search framework to other combinatorial search problems, such as mixed integer programming, which is a more general model.

Acknowledgement

This work is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDA0320000 and XDA0320300, and NSFC Grant 62122078.

Appendix A Proofs of Propositions

A.1 Proof of Proposition 1

Consider an optimal solution $\bm{x}^{*}$ of Formula (5); as assumed, $\bm{x}^{*}$ exists and is finite. For a $j\in J_{\neq 0}$, let $\bm{d}=\bm{e}_{j}$ if $c_{j}<0$, and $\bm{d}=-\bm{e}_{j}$ if $c_{j}>0$. Assume $\bm{x}^{*}+\bm{d}\in P$; then, since $\bm{x}^{*}+\bm{d}\in\mathbb{Z}^{n}$, $\bm{x}^{*}+\bm{d}$ is a feasible solution of Formula (5) and

$obj(\bm{x}^{*}+\bm{d})=\bm{c}^{\top}\bm{x}^{*}+\bm{c}^{\top}\bm{d}=obj(\bm{x}^{*})-|c_{j}|<obj(\bm{x}^{*}),$

so there is another feasible solution with a strictly smaller objective value, contradicting the assumption that $\bm{x}^{*}$ is optimal. Hence $\bm{x}^{*}+\bm{d}\notin P$ and $\bm{x}^{*}$ is a boundary solution. ∎

A.2 Proof of Proposition 2

Let $\Delta=b_{i}-\bm{A_{i}}\cdot\bm{x}$. W.l.o.g. we assume $A_{ij}>0$ and $\Delta<0$; the other cases of $A_{ij}$ and $\Delta$ can be shown similarly.

From Definition 1 we know $\bm{x}^{\prime}=\bm{x}-\min(\left|\left\lfloor{\Delta}/{A_{ij}}\right\rfloor\right|,\left|x^{l}_{j}-x_{j}\right|)\cdot\bm{e}_{j}$. Since $A_{ij}>0$, we consider $\bm{x}^{\prime}+\bm{e}_{j}$:

(1) If $\min(\left|\left\lfloor{\Delta}/{A_{ij}}\right\rfloor\right|,\left|x^{l}_{j}-x_{j}\right|)=\left|\left\lfloor{\Delta}/{A_{ij}}\right\rfloor\right|$, then since $\Delta<0$ and $A_{ij}>0$, $\left|\left\lfloor{\Delta}/{A_{ij}}\right\rfloor\right|=-\left\lfloor{\Delta}/{A_{ij}}\right\rfloor$, and

$\bm{A_{i}}\cdot(\bm{x}^{\prime}+\bm{e}_{j})=\bm{A_{i}}\cdot\bm{x}+A_{ij}(\left\lfloor{\Delta}/{A_{ij}}\right\rfloor+1)>\bm{A_{i}}\cdot\bm{x}+A_{ij}({\Delta}/{A_{ij}})=\bm{A_{i}}\cdot\bm{x}+\Delta=b_{i},$

that is, $\bm{A_{i}}\cdot(\bm{x}^{\prime}+\bm{e}_{j})>b_{i}$, which means $\bm{x}^{\prime}+\bm{e}_{j}\notin P$; since we assumed $\bm{x}^{\prime}\in P$, we have $\bm{x}^{\prime}\in\delta(P)$.

(2) If $\min(\left|\left\lfloor{\Delta}/{A_{ij}}\right\rfloor\right|,\left|x^{l}_{j}-x_{j}\right|)=\left|x^{l}_{j}-x_{j}\right|$, then $\bm{x}^{\prime}=\bm{x}-\left|x^{l}_{j}-x_{j}\right|\cdot\bm{e}_{j}$, and we consider $\bm{x}^{\prime}-\bm{e}_{j}$. Since $x_{j}$ always satisfies its global bounds (by definition, our algorithm never breaks the global bounds), $x^{l}_{j}-x_{j}\leq 0$, and then

$(\bm{x}^{\prime}-\bm{e}_{j})_{j}=x_{j}-\left|x^{l}_{j}-x_{j}\right|-1=x_{j}+(x^{l}_{j}-x_{j})-1<x^{l}_{j},$

so $\bm{x}^{\prime}-\bm{e}_{j}$ violates the global bound of the variable $x_{j}$, which means $\bm{x}^{\prime}-\bm{e}_{j}\notin P$; since we assumed $\bm{x}^{\prime}\in P$, we have $\bm{x}^{\prime}\in\delta(P)$. ∎

A.3 Proof of Proposition 3

Let $\bm{x}^{\prime}=\chi_{j}(\bm{x})$ and $\Delta=b_{i}-\bm{A_{i}}\cdot\bm{x}$. W.l.o.g. assume $c_{j}<0$ and $A_{ij}>0$. From Definition 5, $lm(x_{j},\bm{x})$ changes $x^{\prime}_{j}$ to the upper bound of $lfd(x_{j},\bm{x})$. Since $lfd(x_{j},\bm{x})=(\cap_{i}ldc(x_{j},con_{i},\bm{x}))\cap[x_{j}^{l},x_{j}^{u}]$, the upper bound of $lfd(x_{j},\bm{x})$ is either the upper bound of $[x_{j}^{l},x_{j}^{u}]$ or that of $ldc(x_{j},con_{i},\bm{x})$ for some $i$.

In the former case, $\bm{x}^{\prime}+\bm{e}_{j}$ exceeds the upper bound of $[x_{j}^{l},x_{j}^{u}]$, so $\bm{x}^{\prime}+\bm{e}_{j}\notin P$.

In the latter case, by definition $ldc(x_{j},con_{i},\bm{x})=\left(-\infty,x_{j}+\left\lfloor{\Delta}/{A_{ij}}\right\rfloor\right]$, thus $\bm{x}^{\prime}=\bm{x}+\left\lfloor{\Delta}/{A_{ij}}\right\rfloor\bm{e}_{j}$, and then

$\bm{A_{i}}(\bm{x}^{\prime}+\bm{e}_{j})=\bm{A_{i}}(\bm{x}+\left\lfloor{\Delta}/{A_{ij}}\right\rfloor\bm{e}_{j}+\bm{e}_{j})>\bm{A_{i}}(\bm{x}+({\Delta}/{A_{ij}})\bm{e}_{j})=\bm{A_{i}}\bm{x}+\Delta=b_{i},$

thus $\bm{A_{i}}(\bm{x}^{\prime}+\bm{e}_{j})>b_{i}$, which means $\bm{x}^{\prime}+\bm{e}_{j}\notin P$.

So in both cases we have $\bm{x}^{\prime}+\bm{e}_{j}\notin P$. Moreover, it is easy to check that $\bm{x}^{\prime}\in P$, since by definition $ldc(\cdot)$ is computed so that all constraints and the variable's global bounds are satisfied; thus $\bm{x}^{\prime}\in\delta(P)$. ∎

A.4 Proof of Proposition 4

From Algorithms 3 - 5 we can see that our algorithm has 4 ways to generate a new assignment:

(1) apply a tight move operator to an infeasible solution

(2) apply a lift move operator to a feasible solution

(3) (perturbation) apply a unit incremental move when no positive $lm$ operation is found in Improve mode

(4) (perturbation) move a variable’s value to one side of its global bound

From Proposition 2 and Proposition 3, we know that the feasible solutions generated by case (1) and (2) must be boundary solutions.

Case (3) always generates an infeasible solution: when there is no positive $lm$ operation, no modification of a single variable's value yields a better feasible solution, so a unit incremental move must produce an infeasible solution; otherwise, it would contradict the condition that no positive $lm$ operation is found.

In case (4), if the generated solution is feasible, it is trivially a boundary solution, as one variable is set to one side of its global bound.

In total, all feasible solutions generated by our algorithm are boundary solutions. ∎
