
Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut

Hongyu Cheng, Sammy Khalife, Barbara Fiedorowicz, Amitabh Basu
Department of Applied Mathematics and Statistics, Johns Hopkins University
{hongyucheng, khalife.sammy, bfiedor1, basu.amitabh}@jhu.edu
Abstract

Data-driven algorithm design is a paradigm that uses statistical and machine learning techniques to select from a class of algorithms for a computational problem an algorithm that has the best expected performance with respect to some (unknown) distribution on the instances of the problem. We build upon recent work in this line of research by considering the setup where, instead of selecting a single algorithm that has the best performance, we allow the possibility of selecting an algorithm based on the instance to be solved, using neural networks. In particular, given a representative sample of instances, we learn a neural network that maps an instance of the problem to the most appropriate algorithm for that instance. We formalize this idea and derive rigorous sample complexity bounds for this learning problem, in the spirit of recent work in data-driven algorithm design. We then apply this approach to the problem of making good decisions in the branch-and-cut framework for mixed-integer optimization (e.g., which cut to add?). In other words, the neural network will take as input a mixed-integer optimization instance and output a decision that will result in a small branch-and-cut tree for that instance. Our computational results provide evidence that our particular way of using neural networks for cut selection can make a significant impact in reducing branch-and-cut tree sizes, compared to previous data-driven approaches.

1 Background and motivation

Often there are several competing algorithms for a computational problem and no single algorithm dominates all the others. The choice of an algorithm in such cases is often dictated by the "typical" instance one expects to see, which may differ from one application context to another. Data-driven algorithm design has emerged in recent years as a way to formalize this question of algorithm selection and draws upon statistical and machine learning techniques; see Balcan (2020) for a survey and references therein. More formally, suppose one has a class \mathcal{I} of instances of some computational problem with some unknown distribution, and a class of algorithms that can solve this problem, parameterized by some "tunable parameters". Suppose that for each setting of the parameters, the corresponding algorithm can be evaluated by a score function that tells us how well the algorithm does on different instances (e.g., running time, memory usage, etc.). We wish to find the set of parameters that minimizes (or maximizes, depending on the nature of the score) the expected score on the instances (with respect to the unknown distribution), after getting access to an i.i.d. sample of instances from the distribution. For example, \mathcal{I} could be a family of mixed-integer optimization problems, the class of algorithms could be branch-and-cut methods with their different possible parameter settings, and the score function could be the size of the branch-and-cut tree.

In Balcan et al. (2021a), the authors prove a central result on the sample complexity of this problem: if for any fixed instance in \mathcal{I}, the score function has a piecewise structure as a function of the parameters, where the number of pieces is upper bounded by a constant R independent of the instance, the pieces come from a family of functions of "low complexity", and the regions partitioning the parameter space into pieces have "low complexity" in terms of their shape, then the sample complexity is a precise function of R and these two complexity numbers (formalized by the notion of pseudo-dimension – see below). See Theorem 3.3 in Balcan et al. (2021a) for details. The authors then proceed to apply this key result to a diverse variety of algorithmic problems ranging from computational biology and computational economics to integer optimization.

Our work in this paper is motivated by the following consideration. In many of these applications, one would like to select not a single algorithm, i.e., a single setting of the parameters, but would like to decide on the parameters after receiving a new instance of the problem. In other words, we would like to learn not the best parameter, but the best mapping from instances to parameters. We consider this version of the problem, where the mapping is taken from a class of neural network functions. At a high level, switching the problem from finding the "best" parameter to finding the "best" mapping in a parameterized family of mappings gives the same problem: we now have a new set of parameters – the ones parameterizing the mappings (neural networks) – and we have to select the best parameter. And indeed, the result from Balcan et al. (2021a) can be applied to this new setting, as long as one can prove good bounds on pseudo-dimension in the space of these new parameters. The key point of our result is twofold:

  1.

    Even if the original space of parameters for the algorithms is amenable to the analysis of pseudo-dimension as done in Balcan et al. (2021a), it is not clear that this immediately translates to a similar analysis in the parameter space of the neural networks, because the "low complexity piecewise structure" from the original parameter space may not be preserved.

  2.

    We suspect that our proof technique results in tighter sample complexity bounds, compared to what one would obtain if one could do the analysis from Balcan et al. (2021a) in the parameter space of the neural network directly. However, we do not have quantitative evidence of this yet because their analysis seems difficult to carry out directly in the neural parameter space.

One of the foundational works in the field of selecting algorithms based on specific instances is Rice (1976), and recently, Gupta and Roughgarden (2016); Balcan et al. (2021c) have explored the sample complexity of learning mappings from instances to algorithms for particular problems. Our approach is also related to recent work on algorithm design with predictions; see Mitzenmacher and Vassilvitskii (2022) for a short introduction and the references therein for more thorough surveys and recent work. However, our emphasis and the nature of our results are quite different from the focus of previous research. We establish sample complexity bounds for a general learning framework that employs neural networks to map instances to algorithms, which is suitable for handling some highly complex score functions, such as the size of the branch-and-cut tree. The introduction of neural networks in our approach brings more intricate technical challenges than traditional settings like linear predictors or regression trees Gupta and Roughgarden (2016). On the application side, as long as we know some information about the algorithms (see Theorems 2.5 and 2.6), our results can be directly applied. This is demonstrated in this paper through applications to various cutting plane selection problems in branch-and-cut in Propositions 3.3, 3.5, 3.6, and 3.7. Several references to fundamental applied work in data-driven algorithm selection can be found in Balcan (2020); Gupta and Roughgarden (2016); Balcan et al. (2021a, c).

1.1 Applications in branch-and-cut methods for mixed-integer linear optimization

Mixed-integer linear optimization is a powerful tool that is used in a diverse number of application domains. Branch-and-cut, which is founded upon a well-developed theory of convexification and systematic enumeration Schrijver (1986); Nemhauser and Wolsey (1988); Conforti et al. (2014), is the solution methodology of choice for all state-of-the-art solvers for mixed-integer optimization. However, even after decades of theoretical and computational advances, several key aspects of branch-and-cut are not well understood. During the execution of the branch-and-cut algorithm on an instance, the algorithm has to repeatedly make certain decisions such as which node of the search tree to process next, whether one should branch or add cutting planes, which cutting planes to add, or which branching strategy to use. The choices that give a small tree size for a particular instance may not be good choices for a different instance and may result in a much larger tree. Thus, adapting these choices to the particular instance can be beneficial for overall efficiency. Of course, the strategies already in place in the best implementations of branch-and-cut have to adapt to the instances. For example, certain cutting plane choices may not be possible for certain instances. But even beyond that, there are certain heuristics in place that adapt these choices to the instance. These heuristics have been arrived at through decades of computational experience from practitioners. The goal in recent years is to provide a data-driven approach to making these choices. In a recent series of papers Balcan et al. (2021a, d, b, 2022, 2018), the authors apply the general sample complexity result from Balcan et al. (2021a) in the specific context of branch-and-cut methods for mixed-integer linear optimization to obtain several remarkable and first-of-their-kind results. We summarize here those results that are most relevant to the cut selection problem, since this is the focus of our work.

  1.

    In Balcan et al. (2021d), the authors consider the problem of selecting the best Chvátal-Gomory (CG) cutting plane (or collection of cutting planes) to be added at the root node of the branch-and-cut tree. Thus, the “tunable parameters" are the possible multipliers to be used to generate the CG cut at the root node. The score function is the size of the resulting branch-and-cut tree. The authors observe that for a fixed instance of a mixed-integer linear optimization problem, there are only finitely many CG cuts possible and the number can be bounded explicitly in terms of entries of the linear constraints of the problem. Via a sophisticated piece of analysis, this gives the required piecewise structure on the space of multipliers to apply the general result explained above. This gives concrete sample complexity bounds for choosing the multipliers with the best expected performance. See Theorem 3.3, 3.5 and 3.6 in Balcan et al. (2021d) for details. Note that this result is about selecting a single set of multipliers/cutting planes that has the best expected performance across all instances. This contrasts with selecting a good strategy to select multipliers depending on the instance, that has good expected performance (see point 2. below).

  2.

    The authors in Balcan et al. (2021d) also consider the problem of learning a good strategy that selects multipliers based on the instance. In particular, they consider various auxiliary score functions used in integer optimization practice that map a pair consisting of an instance I and a cutting plane c for I to a measure of how well c will perform in processing I. The strategy will be a linear function of these auxiliary scores, i.e., a weighted average of these auxiliary scores, and the learning problem becomes the problem of finding the best linear coefficients for the different auxiliary scores. So now these linear coefficients become the "tunable parameters" for the general result from Balcan et al. (2021a). It is not hard to find the piecewise structure in the space of these new parameters, given the analysis in the space of CG cut multipliers from point 1. above. This then gives concrete sample complexity bounds for learning the best set of weights for these auxiliary score functions for cut selection. See Theorem 4.1 and Corollary 4.2 in Balcan et al. (2021d) for details.

  3.

    In all the previous results discussed above, the cutting planes considered were CG cuts, or it was essentially assumed that there are only a finite number of cutting planes available at any stage of the branch-and-cut algorithm. In the most recent paper Balcan et al. (2022), the authors consider general cutting plane paradigms, and also consider the possibility of allowing more general strategies to select cutting planes beyond using weighted combinations of auxiliary score functions. They uncover the subtlety that allowing general mappings from instances to cutting planes can lead to infinite sample complexity and learning such mappings could be impossible, if the class of mappings is allowed to be too rich. See Theorem 5.1 in Balcan et al. (2022). This point will be important when we discuss our approach below.

    On the positive side, they show that the well-known class of Gomory-Mixed-Integer (GMI) cuts has a similar structure to CG cuts, and therefore, using similar techniques as discussed above, they derive sample complexity bounds for selecting GMI cuts at the root node. See Theorem 5.5 in  Balcan et al. (2022). As far as we understand, the analysis should extend to the problem of learning weighted combinations of auxiliary score functions to select the GMI cuts as well using the same techniques as Balcan et al. (2021d), although the authors do not explicitly do this in Balcan et al. (2022).

Our approach and results.

Our point of departure from the line of work discussed above is that instead of using weighted combinations of auxiliary scores to select cutting planes, we wish to select these cutting planes using neural networks that map instances to cutting planes. In other words, in the general framework described above, the “tunable parameters" are the weights of the neural network. The overall score function is the size of the branch-and-cut tree after cuts are added at the root. We highlight the two main differences caused by this change in perspective.

  1.

    In the approach where weighted combinations of auxiliary score functions are used, after the weights are learnt from the sample instances, for every new/unseen instance one has to compute the cut that maximizes the weighted score. This could be an expensive optimization problem in its own right. In contrast, with our neural approach, after training the net (i.e., learning the weights of the neural network), any new instance is just fed into the neural network and the output is the cutting plane(s) to be used for this instance. This is, in principle, a much simpler computational problem than optimizing the combined auxiliary score functions over the space of cuts.

  2.

    Since we use the neural network to directly search for a good cut, bypassing the weighted combinations of auxiliary scores, we are actually able to find better cuts that reduce the tree sizes by a significant factor, compared to the approach of using weighted combinations of auxiliary scores.

The above two points are clearly evidenced by our computational investigations, which we present in Section 4. The theoretical sample complexity bounds for cut selection are presented in Section 3. As mentioned before, these are obtained using the main sample complexity result for using neural networks for data-driven algorithm design that is presented in Section 2.

Comparison with prior work on cut selection using learning techniques.

As already discussed above, our theoretical sample complexity work in cut selection is closest in spirit to the work in Balcan et al. (2021a, d, b, 2022, 2018). However, there are several other works in the literature that use machine learning ideas to approach the problem of cut selection; see Deza and Khalil (2023) for a survey. Tang et al. (2020) initiated the exploration of applying Reinforcement Learning (RL) to select CG cuts derived from the simplex tableau. Huang et al. (2022) apply supervised learning to rank a "bag of cuts" from a set of cuts to reduce the total runtime. More recently, the authors in Turner et al. (2023) propose a novel strategy that uses RL and Graph Neural Networks (GNN) to select instance-dependent weights for the combination of auxiliary score functions.

Our computational investigations, as detailed in Section 4, distinguish themselves from these prior computational explorations in several key aspects:

  1.

    Previous methods were limited to a finite set of candidate cuts, requiring either an optimal simplex tableau or relying on a finite collection of combinatorial cuts. In contrast, our approach allows the possibility of selecting from a possibly infinite family of cutting planes. Moreover, our method eliminates the need for computing a simplex tableau, which can lead to significant savings in computation (see Table 1 and the discussion in Section 4.2).

  2.

    Many prior studies aimed at improving the objective value rather than directly reducing the branch-and-cut runtime, with the exception of Huang et al. (2022), who explored this through supervised learning. To the best of our knowledge, our RL-based model is the first to directly target the reduction of the branch-and-cut tree size as its reward metric, which is strongly correlated with the overall running time.

  3.

    Prior deep learning approaches are not underpinned by theoretical guarantees, such as sample complexity bounds. Our empirical work takes the theoretical insights for the branch-and-cut problem presented in Theorem 2.3 and Proposition 3.3 as its basis.

The limitations of our approach are discussed in Section 5.

2 Formal statement of results

We denote by [d] the set \{1,2,\ldots,d\} for any positive integer d\in\mathbb{Z}_{+}. For a set of vectors \{\mathbf{x}^{1},\ldots,\mathbf{x}^{t}\}\subseteq\mathbb{R}^{d}, we use superscripts to denote vector indices, while subscripts specify the coordinates of a vector. For instance, \mathbf{x}^{i}_{j} refers to the j-th coordinate of \mathbf{x}^{i}. Additionally, the sign function, denoted \operatorname{sgn}:\mathbb{R}\rightarrow\{0,1\}, is defined such that for any x\in\mathbb{R}, \operatorname{sgn}(x)=0 if x<0, and 1 otherwise. This function is applied entrywise when applied to a vector. Lastly, the notation \lfloor\cdot\rfloor indicates the elementwise floor function, rounding down each component of a vector to the nearest integer.

2.1 Preliminaries

2.1.1 Background from learning theory

Definition 2.1 (Parameterized function classes).

A parameterized function class is given by a function defined as

h:\mathcal{I}\times\mathcal{P}\rightarrow\mathcal{O},

where \mathcal{I} represents the input space, \mathcal{P} denotes the parameter space, and \mathcal{O} denotes the output space. For any fixed parameter setting \mathbf{p}\in\mathcal{P}, we define a function h_{\mathbf{p}}:\mathcal{I}\rightarrow\mathcal{O} as h_{\mathbf{p}}(I)=h(I,\mathbf{p}) for all I\in\mathcal{I}. The set of all such functions defines the parameterized function class, a.k.a. the hypothesis class \mathcal{H}=\{h_{\mathbf{p}}:\mathcal{I}\rightarrow\mathcal{O}\mid\mathbf{p}\in\mathcal{P}\}, defined by h.

Definition 2.2 (Pseudo-dimension).

Let \mathcal{F} be a non-empty collection of functions from an input space \mathcal{I} to \mathbb{R}. For any positive integer t, we say that a set \{I_{1},\ldots,I_{t}\}\subseteq\mathcal{I} is pseudo-shattered by \mathcal{F} if there exist real numbers s_{1},\dots,s_{t} such that

2^{t}=\left|\left\{\left(\operatorname{sgn}(f(I_{1})-s_{1}),\dots,\operatorname{sgn}(f(I_{t})-s_{t})\right):f\in\mathcal{F}\right\}\right|.

The pseudo-dimension of \mathcal{F}, denoted \operatorname{Pdim}(\mathcal{F})\in\mathbb{N}\cup\{+\infty\}, is the size of the largest set that can be pseudo-shattered by \mathcal{F}.
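As a toy illustration of Definition 2.2 (not from the paper), the brute-force sketch below enumerates the sign patterns achievable by the one-parameter class \mathcal{F}=\{x\mapsto ax : a\in[-1,1]\}: a single input can be pseudo-shattered, but no pair of inputs can, so \operatorname{Pdim}(\mathcal{F})=1.

```python
import numpy as np

# Brute-force illustration of pseudo-shattering (Definition 2.2) for the
# one-parameter class F = { f_a(x) = a * x : a in [-1, 1] }.
def sign01(z):
    # sgn as defined in the paper: 0 if z < 0, and 1 otherwise
    return (np.asarray(z) >= 0).astype(int)

def achievable_patterns(xs, ss, a_grid):
    """All sign patterns (sgn(f_a(x_j) - s_j))_j realized over a grid of parameters a."""
    return {tuple(sign01(a * np.asarray(xs) - np.asarray(ss))) for a in a_grid}

a_grid = np.linspace(-1.0, 1.0, 4001)
print(achievable_patterns([1.0], [0.0], a_grid))          # {(0,), (1,)}: one point is shattered
print(achievable_patterns([1.0, 2.0], [0.5, 0.5], a_grid))
# Only 3 of the 4 patterns appear for two points (and no choice of thresholds
# yields all 4), consistent with Pdim(F) = 1 for this class.
```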

The main goal of statistical learning theory is to solve a problem of the following form, given a fixed parameterized function class defined by some h with output space \mathcal{O}=\mathbb{R}:

\min_{\mathbf{p}\in\mathcal{P}}\;\mathbb{E}_{I\sim\mathcal{D}}[h(I,\mathbf{p})], (1)

for an unknown distribution \mathcal{D}, given access to i.i.d. samples I_{1},\ldots,I_{t} from \mathcal{D}. In other words, one tries to "learn" the best decision \mathbf{p} for minimizing an expected "score" with respect to an unknown distribution, given only samples from the distribution. Such a "best" decision can be thought of as a property of the unknown distribution, and the problem is to "learn" this property of the unknown distribution given only access to samples.

The following is a fundamental result in empirical process theory and is foundational for the above learning problem; see, e.g., Chapters 17, 18 and 19 in Anthony et al. (1999), especially Theorem 19.2.

Theorem 2.3.

There exists a universal constant C such that the following holds. Let \mathcal{H} be a hypothesis class defined by some h:\mathcal{I}\times\mathcal{P}\to\mathbb{R} such that the range of h is in [0,B] for some B>0. For any distribution \mathcal{D} on \mathcal{I}, \epsilon>0, \delta\in(0,1), and

t\geq\frac{CB^{2}}{\epsilon^{2}}\left(\operatorname{Pdim}(\mathcal{H})\ln\left(\frac{B}{\epsilon}\right)+\ln\left(\frac{1}{\delta}\right)\right),

we have

\left|\frac{1}{t}\sum_{i=1}^{t}h(I_{i},\mathbf{p})-\mathbb{E}_{I\sim\mathcal{D}}[h(I,\mathbf{p})]\right|\leq\epsilon\quad\text{for all }\mathbf{p}\in\mathcal{P},

with probability 1-\delta over i.i.d. samples I_{1},\ldots,I_{t}\in\mathcal{I} of size t drawn from \mathcal{D}.

Thus, if one solves the sample average problem \min_{\mathbf{p}\in\mathcal{P}}\frac{1}{t}\sum_{i=1}^{t}h(I_{i},\mathbf{p}) with a large enough sample to within O(\epsilon) accuracy, the corresponding \mathbf{p} would solve (1) to within O(\epsilon) accuracy (with high probability over the sample). Thus, the pseudo-dimension \operatorname{Pdim}(\mathcal{H}) is a key parameter that can be used to bound the size of a sample that is sufficient to solve the learning problem.
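For concreteness, a minimal sketch of how the bound in Theorem 2.3 would be evaluated; the universal constant C is unspecified by the theorem, so C = 1 below is a placeholder assumption and the printed number is illustrative only.

```python
import math

def sample_size_bound(pdim, B, eps, delta, C=1.0):
    """Sample size t from Theorem 2.3: t >= C * B^2 / eps^2 * (Pdim * ln(B/eps) + ln(1/delta))."""
    return math.ceil(C * B**2 / eps**2 * (pdim * math.log(B / eps) + math.log(1.0 / delta)))

# Example: Pdim(H) = 100, scores bounded by B = 1000 (e.g., a cap on tree size),
# target accuracy eps = 50 tree nodes, failure probability delta = 0.01.
print(sample_size_bound(pdim=100, B=1000, eps=50, delta=0.01))
```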

2.1.2 Neural networks

We formalize the definition of neural networks for the purposes of stating our results. Given any function \sigma:\mathbb{R}\rightarrow\mathbb{R}, we will use the notation \sigma(\mathbf{x}) for \mathbf{x}\in\mathbb{R}^{d} to mean [\sigma(\mathbf{x}_{1}),\sigma(\mathbf{x}_{2}),\ldots,\sigma(\mathbf{x}_{d})]^{\mathsf{T}}\in\mathbb{R}^{d}.

Definition 2.4 (Neural networks).

Let \sigma:\mathbb{R}\to\mathbb{R} and let L be a positive integer. A neural network with activation \sigma and architecture \bm{w}=[w_{0},w_{1},\dots,w_{L},w_{L+1}]^{\mathsf{T}}\in\mathbb{Z}_{+}^{L+2} is a parameterized function class, parameterized by L+1 affine transformations \{T_{i}:\mathbb{R}^{w_{i-1}}\rightarrow\mathbb{R}^{w_{i}},\ i\in[L+1]\} with T_{L+1} linear, where each parameter setting defines the function

T_{L+1}\circ\sigma\circ T_{L}\circ\cdots\circ T_{2}\circ\sigma\circ T_{1}.

L denotes the number of hidden layers in the network, while w_{i} signifies the width of the i-th hidden layer for i\in[L]. The input and output dimensions of the neural network are denoted by w_{0} and w_{L+1}, respectively. If T_{i} is represented by the matrix A^{i}\in\mathbb{R}^{w_{i}\times w_{i-1}} and vector \mathbf{b}^{i}\in\mathbb{R}^{w_{i}}, i.e., T_{i}(\mathbf{x})=A^{i}\mathbf{x}+\mathbf{b}^{i} for i\in[L+1], then the weights of neuron j\in[w_{i}] in the i-th hidden layer come from the entries of the j-th row of A^{i}, while the bias of the neuron is given by the j-th coordinate of \mathbf{b}^{i}. The size of the neural network is defined as w_{1}+\cdots+w_{L}, denoted by U.

In the terminology of Definition 2.1, we define the neural network parameterized functions N^{\sigma}:\mathbb{R}^{w_{0}}\times\mathbb{R}^{W}\to\mathbb{R}^{w_{L+1}}, with \mathbb{R}^{w_{0}} denoting the input space and \mathbb{R}^{W} representing the parameter space. This parameter space is structured through the concatenation of all entries from the matrices A^{i} and vectors \mathbf{b}^{i}, for i\in[L+1], into a single vector of length W. The functions are defined as N^{\sigma}(\mathbf{x},\mathbf{w})=T_{L+1}(\sigma(T_{L}(\cdots T_{2}(\sigma(T_{1}(\mathbf{x})))\cdots))) for any \mathbf{x}\in\mathbb{R}^{w_{0}} and \mathbf{w}\in\mathbb{R}^{W}, where each T_{i} represents the affine transformation associated with \mathbf{w}\in\mathbb{R}^{W}.

In the context of this paper, we will focus on the following activation functions:

  • sgn: The Linear Threshold (LT) activation function \operatorname{sgn}:\mathbb{R}\rightarrow\{0,1\}, defined as \operatorname{sgn}(x)=0 if x<0 and \operatorname{sgn}(x)=1 otherwise.

  • ReLU: The Rectified Linear Unit (ReLU) activation function \operatorname{ReLU}:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}, defined as \operatorname{ReLU}(x)=\max\{0,x\}.

  • CReLU: The Clipped Rectified Linear Unit (CReLU) activation function \operatorname{CReLU}:\mathbb{R}\rightarrow[0,1], defined as \operatorname{CReLU}(x)=\min\{\max\{0,x\},1\}.

  • Sigmoid: The Sigmoid activation function \operatorname{Sigmoid}:\mathbb{R}\rightarrow(0,1), defined as \operatorname{Sigmoid}(x)=\frac{1}{1+e^{-x}}.
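A minimal NumPy sketch (not from the paper) of Definition 2.4 with the activations above: the architecture \bm{w} fixes the layer widths, the parameter vector collects all A^{i} and \mathbf{b}^{i}, and the network is the alternating composition of affine maps and \sigma.

```python
import numpy as np

# The four activations used in the paper, applied entrywise.
sgn     = lambda x: (x >= 0).astype(float)          # linear threshold
relu    = lambda x: np.maximum(x, 0.0)
crelu   = lambda x: np.clip(x, 0.0, 1.0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def forward(x, layers, sigma):
    """layers = [(A^1, b^1), ..., (A^{L+1}, b^{L+1})]: computes T_{L+1}(sigma(...sigma(T_1(x))...))."""
    for A, b in layers[:-1]:
        x = sigma(A @ x + b)        # hidden layers: affine map followed by activation
    A, b = layers[-1]
    return A @ x + b                # last transformation, no activation

# Example architecture w = [3, 4, 2]: one hidden layer of width 4, so U = 4 and
# W = (3*4 + 4) + (4*2 + 2) = 26 parameters.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
          (rng.standard_normal((2, 4)), rng.standard_normal(2))]
print(forward(rng.standard_normal(3), layers, relu))
```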

2.2 Our results

In this study, we extend the framework introduced by Balcan et al. (2021a) to explore the learnability of tunable algorithmic parameters through neural networks. Consider a computational problem given by a family of instances \mathcal{I}. Let us say we have a suite of algorithms for this problem, parameterized by parameters in \mathcal{P}. We also have a score function that evaluates how well a particular algorithm, given by a specific setting of the parameters, performs on a particular instance. In other words, the score function is given by S:\mathcal{I}\times\mathcal{P}\rightarrow[0,B], where B\in\mathbb{R}_{+} gives an a priori upper bound on the score. The main goal of data-driven algorithm design is to find a particular algorithm in our parameterized family of algorithms – equivalently, find a parameter setting \mathbf{p}\in\mathcal{P} – that minimizes the expected score on the family of instances with respect to an unknown distribution on \mathcal{I}, given access to a sample of i.i.d. instances from the distribution. This then becomes a special case of the general learning problem (1), where h=S, and one can provide precise sample complexity bounds via Theorem 2.3 if one can bound the pseudo-dimension of the corresponding hypothesis class. A bound on this pseudo-dimension is precisely the central result in Balcan et al. (2021a); see the discussion in Section 1.

We assume the parameter space \mathcal{P} is a Cartesian product of intervals [\eta_{1},\tau_{1}]\times\cdots\times[\eta_{\ell},\tau_{\ell}], where \eta_{i}\leq\tau_{i} for each i\in[\ell]. The transformation from the instance space \mathcal{I} to the parameter space \mathcal{P} is structured through the following mappings:

  1.

    An encoder function \operatorname{Enc}:\mathcal{I}\rightarrow\mathbb{R}^{d} is defined to convert an instance I\in\mathcal{I} into a vector \mathbf{x}=\operatorname{Enc}(I)\in\mathbb{R}^{d}, so that instances can be suitably processed by a neural network. A simple example of such an encoder could be a compilation of all the instance's numerical data into a single vector; but one can allow encodings that use some predetermined features of the instances.

  2.

    A family of neural network mappings, denoted N^{\sigma}:\mathbb{R}^{d}\times\mathbb{R}^{W}\rightarrow\mathbb{R}^{\ell}, is utilized. These mappings are characterized by an activation function \sigma and a fixed architecture represented by \bm{w}=[d,w_{1},\ldots,w_{L},\ell]\in\mathbb{Z}_{+}^{L+2}. For any given neural network parameters \mathbf{w}\in\mathbb{R}^{W}, this network maps an encoded instance \mathbf{x}\in\mathbb{R}^{d} to \mathbf{y}:=N^{\sigma}(\mathbf{x},\mathbf{w})\in\mathbb{R}^{\ell}.

  3.

    A squeezing activation function \sigma^{\prime}:\mathbb{R}\rightarrow[0,1] is introduced to adjust the neural network's output to the parameter space \mathcal{P}. The parameter \mathbf{p}\in\mathcal{P} is computed by \mathbf{p}_{i}=\eta_{i}+(\tau_{i}-\eta_{i})\sigma^{\prime}(\mathbf{y}_{i}) for i=1,\ldots,\ell.

The composite mapping from the instance space \mathcal{I} to the parameter space \mathcal{P} is denoted by \varphi_{\mathbf{w}}^{N^{\sigma},\sigma^{\prime}} (the encoder is suppressed from the notation, since the results of this study are applicable to any fixed and predetermined encoder function \operatorname{Enc}).
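A minimal sketch of the composite mapping \varphi_{\mathbf{w}}^{N^{\sigma},\sigma^{\prime}} just described; `encode` and `net_forward` are placeholders for \operatorname{Enc} and N^{\sigma}(\cdot,\mathbf{w}).

```python
import numpy as np

crelu = lambda y: np.clip(y, 0.0, 1.0)   # one possible squeezing activation sigma'

def phi(instance, encode, net_forward, eta, tau, squeeze=crelu):
    """Instance -> Enc -> R^d -> neural net -> R^ell -> squeeze into P = prod_i [eta_i, tau_i]."""
    x = encode(instance)                       # Enc: I -> R^d
    y = net_forward(x)                         # N^sigma(., w): R^d -> R^ell
    return eta + (tau - eta) * squeeze(y)      # p_i = eta_i + (tau_i - eta_i) * sigma'(y_i)
```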

The problem of learning the best neural mapping then becomes the learning problem (1) with h:\mathcal{I}\times\mathbb{R}^{W}\to\mathbb{R} defined by h(I,\mathbf{w}):=S\left(I,\varphi_{\mathbf{w}}^{N^{\sigma},\sigma^{\prime}}(I)\right). We use

\mathcal{F}_{N^{\sigma},\sigma^{\prime}}^{S}:=\left\{S\left(\cdot,\varphi_{\mathbf{w}}^{N^{\sigma},\sigma^{\prime}}(\cdot)\right):\mathcal{I}\rightarrow[0,B]\mid\mathbf{w}\in\mathbb{R}^{W}\right\}

to denote the corresponding hypothesis class (Definition 2.1).

Our first result employs linear threshold neural networks for generating algorithm parameters, inspired by their analytically tractable structure and rich expressive capabilities, as supported by findings in Khalife et al. (2023).

Theorem 2.5.

Consider a set \mathcal{I} of instances of a computational problem with a suite of algorithms parameterized by \mathcal{P}=[\eta_{1},\tau_{1}]\times\cdots\times[\eta_{\ell},\tau_{\ell}], with score function S:\mathcal{I}\times\mathcal{P}\rightarrow[0,B]. Suppose that, for any given instance I\in\mathcal{I}, there exist at most \Gamma polynomials on \mathbb{R}^{\ell}, each of degree at most \gamma, such that within each region of \mathcal{P} where these polynomials have the same sign pattern, the function S(I,\cdot) is a polynomial on \mathbb{R}^{\ell} with degree at most \lambda. For linear threshold neural networks N^{\operatorname{sgn}}:\mathbb{R}^{d}\times\mathbb{R}^{W}\rightarrow\mathbb{R}^{\ell} with a fixed architecture \bm{w}=[d,w_{1},\dots,w_{L},\ell]\in\mathbb{Z}_{+}^{L+2}, having size U and W parameters (Definition 2.4), and using a Sigmoid squeezing function, we have

\operatorname{Pdim}\left(\mathcal{F}^{S}_{N^{\operatorname{sgn}},\operatorname{Sigmoid}}\right)=\mathcal{O}\left(W\log(U\gamma\Gamma(\lambda+1))\right).

In addition to this, we investigate the sample complexity associated with the use of \operatorname{ReLU} neural networks for parameter selection.

Theorem 2.6.

Under the same conditions as Theorem 2.5, with ReLU neural networks N^{\operatorname{ReLU}}:\mathbb{R}^{d}\times\mathbb{R}^{W}\rightarrow\mathbb{R}^{\ell} having the same fixed architecture and a clipped ReLU squeezing function, we have

\operatorname{Pdim}\left(\mathcal{F}^{S}_{N^{\operatorname{ReLU}},\operatorname{CReLU}}\right)=\mathcal{O}\left(LW\log(U+\ell)+W\log(\gamma\Gamma(\lambda+1))\right).

It is not hard to adapt the proofs of Theorem 2.5 and Theorem 2.6 to show that if any dimension of the parameter space is all of \mathbb{R} rather than a bounded interval, the pseudo-dimension bounds only become smaller, under the same conditions. Additionally, if \mathcal{P}=\{\mathbf{p}_{1},\dots,\mathbf{p}_{r}\} is a finite set, the problem can be viewed as a multi-class classification problem. That is, consider a neural network N^{\sigma}:\mathbb{R}^{d}\times\mathbb{R}^{W}\to\mathbb{R}^{r}, where for any \mathbf{x}\in\mathbb{R}^{d} and \mathbf{w}\in\mathbb{R}^{W}, N^{\sigma}(\mathbf{x},\mathbf{w}) outputs an r-dimensional vector, and we select the parameter corresponding to the largest coordinate. The pseudo-dimension of this problem is given by the following:

Corollary 2.7.

Under the same conditions as Theorems 2.5 and 2.6, but with \mathcal{P}=\{\mathbf{p}_{1},\dots,\mathbf{p}_{r}\},

\operatorname{Pdim}(\mathcal{F}^{S}_{N^{\operatorname{sgn}},\operatorname{Sigmoid}})=\mathcal{O}\left(W\log(Ur)\right)\text{ and }\operatorname{Pdim}(\mathcal{F}^{S}_{N^{\operatorname{ReLU}},\operatorname{CReLU}})=\mathcal{O}\left(LW\log(U+r)\right).
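A minimal sketch of the finite-parameter selection rule behind Corollary 2.7: the network outputs an r-dimensional score vector and the parameter corresponding to the largest coordinate is chosen.

```python
import numpy as np

def select_parameter(instance, encode, net_forward, params):
    """params = [p_1, ..., p_r]; pick the p_i whose coordinate in N^sigma(Enc(I), w) is largest."""
    scores = net_forward(encode(instance))     # vector in R^r
    return params[int(np.argmax(scores))]
```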

3 Application to branch-and-cut

3.1 Preliminaries

Definition 3.1 (Integer linear programming (ILP)).

Let m,n\in\mathbb{N}_{+} be fixed natural numbers, and let A\in\mathbb{Q}^{m\times n},\ \mathbf{b}\in\mathbb{Q}^{m},\ \mathbf{c}\in\mathbb{R}^{n}. The integer linear programming problem is formulated as

\max\{\mathbf{c}^{\mathsf{T}}\mathbf{x}:A\mathbf{x}\leq\mathbf{b},\ \mathbf{x}\geq 0,\ \mathbf{x}\in\mathbb{Z}^{n}\}.

The most successful algorithms and solvers for integer programming problems are based on a methodology called branch-and-cut. In a branch-and-cut algorithm, one maintains two things in every iteration of the algorithm: 1) a current guess for the optimal solution, and 2) a collection of polyhedra that are subsets of the original polyhedral relaxation of the ILP. In every iteration, one of these polyhedra is selected and the continuous linear programming (LP) solution for that selected polyhedron is computed. If the solution has objective value worse than the current guess, this polyhedron is discarded from the list and the algorithm moves to the next iteration. Otherwise, if the solution is integral, the guess is updated with this integral solution and this polyhedron is removed from further consideration. If the LP solution is not integral, one decides to either add some cutting planes or branch. In the former case, additional linear constraints are added to the polyhedron under consideration without eliminating any feasible solutions. In the latter case, one selects a fractional variable \mathbf{x}_{i} in the LP solution and partitions the current polyhedron into two polyhedra by adding the constraints \mathbf{x}_{i}\leq\lfloor f_{i}\rfloor and \mathbf{x}_{i}\geq\lfloor f_{i}\rfloor+1, where f_{i} is the value of this fractional variable. The current polyhedron is then replaced in the list by these two new polyhedra. This entire process can be tracked by a branch-and-cut tree whose nodes are precisely the different polyhedra processed by the algorithm. The algorithm terminates when there are no more polyhedra left in the active list, and the current guess is reported as the optimal solution. As is often done in practice, an a priori bound B is set on the size of the tree; if this bound is exceeded at any stage, the algorithm exits early and the current guess for the solution is returned.
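The schematic sketch below mirrors the loop just described; `solve_lp`, `choose_cut`, `is_integral`, `fractional_index`, and the polyhedron operations are hypothetical placeholders, and the node counter plays the role of the tree-size score capped by B.

```python
import math

def branch_and_cut(root_polyhedron, B, solve_lp, is_integral, choose_cut, fractional_index):
    """Schematic branch-and-cut (maximization). Returns the incumbent and the number of nodes processed."""
    best_value, best_x = -math.inf, None
    active = [root_polyhedron]                   # list of polyhedra (nodes of the tree)
    nodes = 0
    while active:
        nodes += 1
        if nodes > B:                            # a priori cap on tree size: exit early
            break
        poly = active.pop()
        value, x = solve_lp(poly)                # LP solution over this polyhedron
        if x is None or value <= best_value:     # infeasible, or pruned by the incumbent bound
            continue
        if is_integral(x):
            best_value, best_x = value, x        # update the current guess
            continue
        cut = choose_cut(poly, x)                # either add a cutting plane ...
        if cut is not None:
            active.append(poly.add_constraint(cut))
        else:                                    # ... or branch on a fractional coordinate x_i = f
            i, f = fractional_index(x)
            active.append(poly.add_bound(i, upper=math.floor(f)))       # x_i <= floor(f)
            active.append(poly.add_bound(i, lower=math.floor(f) + 1))   # x_i >= floor(f) + 1
    return best_value, best_x, nodes
```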

The branch-and-cut tree size is a very good indication of how long the algorithm takes to solve the problem since the main time is spent on solving the individual LPs in the iterations of the algorithm. We will thus use the tree size as the “score" function to decide how well branch-and-cut did on any instance.

There are many different strategies to generate cutting planes in branch-and-cut Conforti et al. (2014); Nemhauser and Wolsey (1988); Schrijver (1986). We will focus on the so-called Chvátal-Gomory (CG) cutting planes and Gomory Mixed-Integer (GMI) cuts Conforti et al. (2014). There are usually several choices of such cutting planes to add (and some families are even infinite in size Conforti et al. (2014)). We wish to apply the results of Section 2 to decide which cutting plane to select so that the branch-and-cut tree size is small.
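For reference (standard material; see, e.g., Conforti et al. (2014)): for the system A\mathbf{x}\leq\mathbf{b}, \mathbf{x}\geq 0, \mathbf{x}\in\mathbb{Z}^{n} and any multiplier \mathbf{u}\geq 0, the CG cut is \lfloor\mathbf{u}^{\mathsf{T}}A\rfloor\mathbf{x}\leq\lfloor\mathbf{u}^{\mathsf{T}}\mathbf{b}\rfloor. A minimal sketch:

```python
import numpy as np

def cg_cut(A, b, u):
    """Chvatal-Gomory cut floor(u^T A) x <= floor(u^T b), valid for Ax <= b, x >= 0 integer, u >= 0."""
    A, b, u = np.asarray(A, float), np.asarray(b, float), np.asarray(u, float)
    return np.floor(u @ A), np.floor(u @ b)     # (left-hand-side coefficients, right-hand side)
```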

3.2 Learnability of parameterized CG cut(s)

Let m,n be positive integers. We consider the ILP instance space \mathcal{I}\subseteq\{(A,\mathbf{b},\mathbf{c}):A\in\mathbb{Q}^{m\times n},\mathbf{b}\in\mathbb{Q}^{m},\mathbf{c}\in\mathbb{R}^{n}\}, along with a fixed encoder function \operatorname{Enc}:\mathcal{I}\rightarrow\mathbb{R}^{d}. A simple encoder might stack all elements of (A,\mathbf{b},\mathbf{c})\in\mathcal{I} into a single vector of length d=mn+m+n. We also impose the conditions that \sum_{i=1}^{m}\sum_{j=1}^{n}|A_{ij}|\leq a and \sum_{i=1}^{m}|\mathbf{b}_{i}|\leq b for any (A,\mathbf{b},\mathbf{c})\in\mathcal{I}.
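A minimal sketch of the simple stacking encoder mentioned above, flattening (A,\mathbf{b},\mathbf{c}) into a vector of length d=mn+m+n:

```python
import numpy as np

def stack_encoder(A, b, c):
    """Enc(A, b, c): concatenate all numerical data of the instance into one vector."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    return np.concatenate([A.reshape(-1), b, c])

# Example: m = 2, n = 3 gives a vector of length d = 2*3 + 2 + 3 = 11.
print(stack_encoder([[1, 2, 3], [4, 5, 6]], [7, 8], [1, 0, 1]).shape)   # (11,)
```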

Following the discussion in Balcan et al. (2021d), we define f_{\operatorname{CG}}(I,\mathbf{u}) as the size of the branch-and-bound tree for a given ILP instance I\in\mathcal{I} with a CG cut parameterized by a multiplier \mathbf{u}\in[0,1]^{m} added at the root. We interpret f_{\operatorname{CG}} as a score function as elaborated in Section 2.2. The piecewise structure of f_{\operatorname{CG}} in its parameters is characterized by:

Lemma 3.2 (Lemma 3.2 in Balcan et al. (2021d)).

For any ILP instance I\in\mathcal{I}, there are at most M:=2(a+b+n) hyperplanes partitioning the parameter space [0,1]^{m} into regions where f_{\operatorname{CG}}(I,\mathbf{u}) remains constant for all \mathbf{u} within each region.

Applying Theorem 2.5 and Theorem 2.6 to f_{\operatorname{CG}} yields these pseudo-dimension bounds:

Proposition 3.3.

Under the same conditions as Theorems 2.5 and 2.6, with the score function f_{\operatorname{CG}},

\operatorname{Pdim}\left(\mathcal{F}_{N^{\operatorname{sgn}},\operatorname{Sigmoid}}^{f_{\operatorname{CG}}}\right)=\mathcal{O}\left(W\log(UM)\right),
\operatorname{Pdim}\left(\mathcal{F}_{N^{\operatorname{ReLU}},\operatorname{CReLU}}^{f_{\operatorname{CG}}}\right)=\mathcal{O}\left(LW\log(U+m)+W\log M\right).

Extending this to adding k CG cuts sequentially, we define f^{k}_{\operatorname{CG}}(I,(\mathbf{u}_{1},\dots,\mathbf{u}_{k})) as the branch-and-bound tree size after adding a sequence of k CG cuts parameterized by \mathbf{u}_{1},\dots,\mathbf{u}_{k} at the root for a given ILP instance I\in\mathcal{I}. The piecewise structure of f^{k}_{\operatorname{CG}} in its parameters is given by:

Lemma 3.4 (Lemma 3.4 in Balcan et al. (2021d)).

For any ILP instance I\in\mathcal{I}, there are \mathcal{O}(k2^{k}M) multivariate polynomials with mk+k(k-1)/2 variables and degree at most k partitioning the parameter space [0,1]^{mk+k(k-1)/2} into regions where f^{k}_{\operatorname{CG}}(I,(\mathbf{u}_{1},\dots,\mathbf{u}_{k})) remains constant for all (\mathbf{u}_{1},\dots,\mathbf{u}_{k}) within each region.

Accordingly, the pseudo-dimension bounds are

Proposition 3.5.

Under the same conditions as Theorems 2.5 and 2.6, with the score function f_{\operatorname{CG}}^{k},

\operatorname{Pdim}\left(\mathcal{F}_{N^{\operatorname{sgn}},\operatorname{Sigmoid}}^{f_{\operatorname{CG}}^{k}}\right)=\mathcal{O}\left(W\log(UM)+Wk\right),
\operatorname{Pdim}\left(\mathcal{F}_{N^{\operatorname{ReLU}},\operatorname{CReLU}}^{f_{\operatorname{CG}}^{k}}\right)=\mathcal{O}\left(LW\log(U+mk)+W\log M+Wk\right).

3.3 Learnability of cutting plane(s) from a finite set

The selection of an optimal cut from an infinite pool of candidate cuts, as discussed in Propositions 3.3 and 3.5, is often difficult and inefficient in practice. Consequently, a popular approach is to select cuts based on information from the simplex tableau (such as GMI cuts), as well as some combinatorial cuts, which inherently limits the number of candidate cuts to a finite set.

Suppose we have a finite set of cuts \mathcal{C}, and we define f_{\operatorname{ROW}}(I,c) as the branch-and-bound tree size after adding a cut c\in\mathcal{C} at the root for a given ILP instance I\in\mathcal{I}. Then Corollary 2.7 implies

Proposition 3.6.

Under the same conditions as Theorems 2.5 and 2.6, with the score function f_{\operatorname{ROW}},

\operatorname{Pdim}\left(\mathcal{F}_{N^{\operatorname{sgn}},\operatorname{Sigmoid}}^{f_{\operatorname{ROW}}}\right)=\mathcal{O}\left(W\log(U|\mathcal{C}|)\right)\text{ and }\operatorname{Pdim}\left(\mathcal{F}_{N^{\operatorname{ReLU}},\operatorname{CReLU}}^{f_{\operatorname{ROW}}}\right)=\mathcal{O}\left(LW\log(U+|\mathcal{C}|)\right).

3.4 Learnability of cut selection policy

One of the leading open-source solvers, SCIP Gamrath et al. (2020), uses cut selection methodologies that rely on combining several auxiliary score functions. For a finite set of cutting planes \mathcal{C}, instead of directly using a neural network for selection, this model selects c^{*}\in\operatorname{arg\,max}_{c\in\mathcal{C}}\sum_{i=1}^{\ell}\mu_{i}\operatorname{Score}_{i}(c,I) for each instance I\in\mathcal{I}. Here, \operatorname{Score}_{i} represents different heuristic scoring functions that assess various aspects of a cut, such as "Efficacy" Balas et al. (1996) and "Parallelism" Achterberg (2007), for a specific instance I, and the coefficients \bm{\mu}\in[0,1]^{\ell} are tunable weights for these scoring models. Since \mathcal{C} is finite, the above optimization problem is solved through enumeration. The authors in Turner et al. (2023) have experimentally implemented the idea of using a neural network to map instances to weights. We provide an upper bound on the pseudo-dimension of this learning problem in the following proposition.

Proposition 3.7.

Under the same conditions as Theorems 2.5 and 2.6, let f_{S}(I,\bm{\mu}) denote the size of the branch-and-bound tree for I after adding a cutting plane determined by the weighted scoring model parameterized by \bm{\mu}\in[0,1]^{\ell}. The pseudo-dimension bounds are given by:

\operatorname{Pdim}\left(\mathcal{F}_{N^{\operatorname{sgn}},\operatorname{Sigmoid}}^{f_{S}}\right)=\mathcal{O}\left(W\log(U|\mathcal{C}|)\right),
\operatorname{Pdim}\left(\mathcal{F}_{N^{\operatorname{ReLU}},\operatorname{CReLU}}^{f_{S}}\right)=\mathcal{O}\left(LW\log(U+\ell)+W\log(|\mathcal{C}|)\right).

4 Numerical experiments

In this section, for a given ILP instance space \mathcal{I} conforming to the description in Section 3.2, and a fixed distribution \mathcal{D} over it, we primarily attempt to employ ReLU neural networks for choosing a CG cut multiplier for each instance in the distribution, which translates into addressing the following neural network training (empirical risk minimization) problem:

\min_{\mathbf{w}\in\mathbb{R}^{W}}\frac{1}{t}\sum_{i=1}^{t}f_{\operatorname{CG}}\left(I_{i},\varphi_{\mathbf{w}}^{N^{\operatorname{ReLU}},\operatorname{CReLU}}(I_{i})\right), (2)

where I_{1},\dots,I_{t}\in\mathcal{I} are i.i.d. samples drawn from \mathcal{D}, and recall that \varphi_{\mathbf{w}}^{N^{\operatorname{ReLU}},\operatorname{CReLU}}(\cdot)=\operatorname{CReLU}(N^{\operatorname{ReLU}}(\operatorname{Enc}(\cdot),\mathbf{w})).

This problem, concerning the selection from an infinite pool of cuts, presents a significant challenge. The target function f_{\operatorname{CG}}, as a function of its parameters, is an intricately complex, high-dimensional, piecewise constant function with numerous pieces (recall Lemma 3.2), making the direct application of classical gradient-based methods seemingly impractical. As such, we use an RL approach, with the goal switched to identifying a relatively "good" neural network for this distribution. Then, based on Proposition 3.3 and Theorem 2.3, the average performance of the chosen parameter setting on sampled instances closely approximates the expected performance on the distribution with high probability, provided that the sample size t is sufficiently large.

4.1 Experimental setup

Data.

We consider multiple knapsack problems Kellerer et al. (2004) with 16 items and 2 knapsacks, using the Chvátal distribution as utilized in Balcan et al. (2021b), originally proposed in Chvátal (1980). Our synthetic dataset has a training set of 5,000 instances and a test set of 1,000 instances from the same distribution.

Training.

Each instance I sampled from the distribution \mathcal{D} is treated as a state in RL, with the outputs of the neural network considered as actions in the RL framework. The neural network thus functions as the actor in an actor-critic scheme Silver et al. (2014), where the reward is defined as the percentage reduction in the tree size after adding a cut, i.e., \frac{f_{\operatorname{CG}}(I,\mathbf{0})-f_{\operatorname{CG}}(I,\mathbf{u})}{f_{\operatorname{CG}}(I,\mathbf{0})}. The Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm Fujimoto et al. (2018) is used here for the training of the neural network.
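A minimal PyTorch sketch of this setup (an illustration under our reading of the training scheme, not the authors' released code; the hidden sizes are arbitrary): the actor maps an encoded instance to a multiplier \mathbf{u}\in[0,1]^{m} via a CReLU output, and the reward is the percentage reduction in tree size, where the tree sizes would be obtained by running branch-and-cut (e.g., with Gurobi) with and without the cut.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps an encoded ILP instance (R^d) to a CG multiplier u in [0, 1]^m."""
    def __init__(self, d, m, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, m))

    def forward(self, x):
        return torch.clamp(self.net(x), 0.0, 1.0)    # CReLU squeeze into [0, 1]^m

def reward(tree_size_no_cut, tree_size_with_cut):
    """(f_CG(I, 0) - f_CG(I, u)) / f_CG(I, 0): fraction of the tree saved by the cut."""
    return (tree_size_no_cut - tree_size_with_cut) / tree_size_no_cut
```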

The experiments were conducted on a Linux machine with a 12-core Intel i7-12700F CPU, 32GB of RAM, and an NVIDIA RTX 3070 GPU with 8GB of VRAM. We used Gurobi 11.0.1 Gurobi Optimization, LLC (2023) to solve the ILPs, with default cuts, heuristics, and presolve settings turned off. The neural networks were implemented using PyTorch 2.3.0. The details of the implementation are available at https://github.com/Hongyu-Cheng/LearnCGusingNN.

4.2 Empirical results

Better cuts.

The experimental results indicate that even a suboptimal neural network parameterization can outperform the cut selection methodologies used in leading solvers like SCIP. Figure 1 presents the average tree size comparison across 1,000 novel test instances, with three distinct strategies:

  1.

    The blue solid line represents the tree size when the cut is selected based on the highest convex combination score of cut efficacy and parallelism, adjusted by a parameter \mu\in[0,1] (incremented in steps of 0.01); a minimal sketch of this scoring rule is given after this list. The candidate set of cuts includes all CG and GMI cuts generated from the appropriate rows of the optimal simplex tableau.

  2.

    The green dash-dotted line demonstrates a notable reduction in tree size when using cuts generated through our RL approach;

  3.

    The purple dash-dotted line follows the same approach as 2., but uses linear threshold neural networks to generate CG cut parameters. The training process uses the idea of Straight-Through Estimator (STE) Bengio et al. (2013); Yin et al. (2019).
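The sketch below spells out the baseline scoring rule from item 1 (one common definition of the two auxiliary scores; SCIP's exact formulas may differ): each candidate cut \alpha^{\mathsf{T}}\mathbf{x}\leq\beta is scored by \mu\cdot\text{efficacy}+(1-\mu)\cdot\text{parallelism}, and the highest-scoring cut is added.

```python
import numpy as np

def efficacy(alpha, beta, x_lp):
    """Distance by which the cut alpha^T x <= beta is violated by the LP optimum x_lp."""
    return (alpha @ x_lp - beta) / np.linalg.norm(alpha)

def parallelism(alpha, c):
    """Cosine-style alignment between the cut normal alpha and the objective c."""
    return abs(alpha @ c) / (np.linalg.norm(alpha) * np.linalg.norm(c))

def select_cut(cuts, x_lp, c, mu):
    """cuts = [(alpha, beta), ...]; return the cut maximizing the convex combination score."""
    scores = [mu * efficacy(a, b, x_lp) + (1 - mu) * parallelism(a, c) for a, b in cuts]
    return cuts[int(np.argmax(scores))]
```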

Figure 1: Comparison of branch-and-bound tree sizes using different cut selection strategies.
Faster selection of cuts.

Cut selection using neural networks requires only matrix multiplications and ReLU activations; it is notably rapid and lends itself to efficient parallelization on GPUs. In contrast, the procedure for selecting CG cuts from a simplex tableau, which involves solving relaxed LP problems, is considerably slower, even without taking into account the time to score and compare potential cuts. Our empirical studies, conducted on a test set of 1,000 instances repeated 100 times, highlight the significant disparity in computational speed, as shown in Table 1.

Table 1: Comparison of the computational speeds between ReLU neural network inference and LP solving. This table presents the total time in seconds for 100 runs on a test set of 1,000 instances.

Task | Time (s)
Computing cuts via trained neural network on GPU | 0.010
Computing cuts via trained neural network on CPU | 0.055
Solving required LPs using Gurobi | 2.031

5 Discussions and open questions

In our study, we concentrated on adding CG cuts solely at the root of the branch-and-cut tree. However, devising a strategy that generates high-quality cutting planes efficiently across the entire branch-and-cut tree poses a significant and intriguing challenge for future research. Further, our theoretical findings are applicable to any encoder that maps instances into Euclidean spaces. Hence, utilizing a fixed encoder capable of converting ILPs with different numbers of constraints into the same Euclidean space can in principle enable the training of a unified neural network to generate cutting planes across the branch-and-cut tree. Moreover, an effective encoder could improve the neural network's performance beyond that achieved with the basic stacking encoder used in our paper.

The neural network training problem (2) also requires further study. We used an RL approach (see Section 4) to update the neural network parameters and minimize the average tree size for ILP instances sampled from a given distribution. However, this method does not guarantee convergence to optimal parameters, and relies heavily on the exploratory nature of the RL algorithm. For ILP distributions where random exploration of Chvátal multipliers is unlikely to yield smaller tree sizes, the RL algorithm may struggle to identify an effective parameter setting. Developing a more efficient and robust training methodology would greatly improve the practical value of our work.

Acknowledgments and Disclosure of Funding

All four authors gratefully acknowledge support from Air Force Office of Scientific Research (AFOSR) grant FA95502010341. Hongyu Cheng, Barbara Fiedorowicz and Amitabh Basu also gratefully acknowledge support from National Science Foundation (NSF) grant CCF2006587.

References

  • Achterberg (2007) Tobias Achterberg. Constraint integer programming. 2007.
  • Anthony et al. (1999) Martin Anthony and Peter L. Bartlett. Neural Network Learning: Theoretical Foundations, volume 9. Cambridge University Press, Cambridge, 1999.
  • Balas et al. (1996) Egon Balas, Sebastián Ceria, and Gérard Cornuéjols. Mixed 0-1 programming by lift-and-project in a branch-and-cut framework. Management Science, 42(9):1229–1246, 1996.
  • Balcan (2020) Maria-Florina Balcan. Data-driven algorithm design. In Tim Roughgarden, editor, Beyond the Worst Case Analysis of Algorithms. Cambridge University Press, 2020.
  • Balcan et al. (2018) Maria-Florina Balcan, Travis Dick, Tuomas Sandholm, and Ellen Vitercik. Learning to branch. In International conference on machine learning, pages 344–353. PMLR, 2018.
  • Balcan et al. (2021a) Maria-Florina Balcan, Dan DeBlasio, Travis Dick, Carl Kingsford, Tuomas Sandholm, and Ellen Vitercik. How much data is sufficient to learn high-performing algorithms? generalization guarantees for data-driven algorithm design. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 919–932, 2021a.
  • Balcan et al. (2021b) Maria-Florina Balcan, Siddharth Prasad, Tuomas Sandholm, and Ellen Vitercik. Improved sample complexity bounds for branch-and-cut. arXiv preprint arXiv:2111.11207, 2021b.
  • Balcan et al. (2021c) Maria-Florina Balcan, Tuomas Sandholm, and Ellen Vitercik. Generalization in portfolio-based algorithm selection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12225–12232, 2021c.
  • Balcan et al. (2021d) Maria-Florina F Balcan, Siddharth Prasad, Tuomas Sandholm, and Ellen Vitercik. Sample complexity of tree search configuration: Cutting planes and beyond. Advances in Neural Information Processing Systems, 34:4015–4027, 2021d.
  • Balcan et al. (2022) Maria-Florina F Balcan, Siddharth Prasad, Tuomas Sandholm, and Ellen Vitercik. Structural analysis of branch-and-cut and the learnability of gomory mixed integer cuts. Advances in Neural Information Processing Systems, 35:33890–33903, 2022.
  • Bartlett et al. (1998) Peter Bartlett, Vitaly Maiorov, and Ron Meir. Almost linear vc dimension bounds for piecewise polynomial networks. Advances in neural information processing systems, 11, 1998.
  • Bartlett et al. (2019) Peter L Bartlett, Nick Harvey, Christopher Liaw, and Abbas Mehrabian. Nearly-tight vc-dimension and pseudodimension bounds for piecewise linear neural networks. The Journal of Machine Learning Research, 20(1):2285–2301, 2019.
  • Bengio et al. (2013) Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
  • Chvátal (1980) Vasek Chvátal. Hard knapsack problems. Operations Research, 28(6):1402–1411, 1980.
  • Conforti et al. (2014) Michele Conforti, Gérard Cornuéjols, and Giacomo Zambelli. Integer programming, volume 271. Springer, 2014.
  • Deza and Khalil (2023) Arnaud Deza and Elias B Khalil. Machine learning for cutting planes in integer programming: A survey. arXiv preprint arXiv:2302.09166, 2023.
  • Edelsbrunner (1987) Herbert Edelsbrunner. Algorithms in combinatorial geometry, volume 10. Springer Science & Business Media, 1987.
  • Fujimoto et al. (2018) Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International conference on machine learning, pages 1587–1596. PMLR, 2018.
  • Gamrath et al. (2020) Gerald Gamrath, Daniel Anderson, Ksenia Bestuzheva, Wei-Kun Chen, Leon Eifler, Maxime Gasse, Patrick Gemander, Ambros Gleixner, Leona Gottwald, Katrin Halbig, et al. The SCIP Optimization Suite 7.0. 2020.
  • Gupta and Roughgarden (2016) Rishi Gupta and Tim Roughgarden. A pac approach to application-specific algorithm selection. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, pages 123–134, 2016.
  • Gurobi Optimization, LLC (2023) Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual, 2023. URL https://www.gurobi.com.
  • Huang et al. (2022) Zeren Huang, Kerong Wang, Furui Liu, Hui-Ling Zhen, Weinan Zhang, Mingxuan Yuan, Jianye Hao, Yong Yu, and Jun Wang. Learning to select cuts for efficient mixed-integer programming. Pattern Recognition, 123:108353, 2022.
  • Kellerer et al. (2004) Hans Kellerer, Ulrich Pferschy, and David Pisinger. Multidimensional knapsack problems. Springer, 2004.
  • Khalife et al. (2023) Sammy Khalife, Hongyu Cheng, and Amitabh Basu. Neural networks with linear threshold activations: structure and algorithms. Mathematical Programming, pages 1–24, 2023.
  • Matousek (1999) Jiri Matousek. Geometric discrepancy: An illustrated guide, volume 18. Springer Science & Business Media, 1999.
  • Mitzenmacher and Vassilvitskii (2022) Michael Mitzenmacher and Sergei Vassilvitskii. Algorithms with predictions. Communications of the ACM, 65(7):33–35, 2022.
  • Nemhauser and Wolsey (1988) George L Nemhauser and Laurence A Wolsey. Integer and combinatorial optimization, volume 18. Wiley New York, 1988.
  • Rice (1976) John R Rice. The algorithm selection problem. In Advances in computers, volume 15, pages 65–118. Elsevier, 1976.
  • Schrijver (1986) Alexander Schrijver. Theory of Linear and Integer Programming. John Wiley and Sons, New York, 1986.
  • Silver et al. (2014) David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International conference on machine learning, pages 387–395. Pmlr, 2014.
  • Sontag et al. (1998) Eduardo D Sontag et al. Vc dimension of neural networks. NATO ASI Series F Computer and Systems Sciences, 168:69–96, 1998.
  • Tang et al. (2020) Yunhao Tang, Shipra Agrawal, and Yuri Faenza. Reinforcement learning for integer programming: Learning to cut. In International conference on machine learning, pages 9367–9376. PMLR, 2020.
  • Turner et al. (2023) Mark Turner, Thorsten Koch, Felipe Serrano, and Michael Winkler. Adaptive cut selection in mixed-integer linear programming. Open Journal of Mathematical Optimization, 4:1–28, 2023.
  • Yin et al. (2019) Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, and Jack Xin. Understanding straight-through estimator in training activation quantized neural nets. arXiv preprint arXiv:1903.05662, 2019.

Appendix A Auxiliary lemmas

Lemma A.1.

For any x_{1},\dots,x_{n},\lambda_{1},\dots,\lambda_{n}>0, the following inequalities hold:

\log x_{1} \leq \frac{x_{1}}{\lambda_{1}}+\log\left(\frac{\lambda_{1}}{e}\right), \qquad (3)
x_{1}^{\lambda_{1}}\cdots x_{n}^{\lambda_{n}} \leq \left(\frac{\lambda_{1}x_{1}+\cdots+\lambda_{n}x_{n}}{\lambda_{1}+\cdots+\lambda_{n}}\right)^{\lambda_{1}+\cdots+\lambda_{n}}. \qquad (4)
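Inequality (3) follows by minimizing the difference of the two sides over x_{1}>0 (the minimum, attained at x_{1}=\lambda_{1}, equals zero), and inequality (4) is a restatement of the weighted AM–GM inequality. The following purely illustrative sanity check, which is not part of any proof, verifies both inequalities numerically on random positive inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, size=4)    # x_1, ..., x_n > 0
lam = rng.uniform(0.1, 10.0, size=4)  # lambda_1, ..., lambda_n > 0

# Inequality (3): log x_1 <= x_1/lambda_1 + log(lambda_1/e)
assert np.log(x[0]) <= x[0] / lam[0] + np.log(lam[0] / np.e) + 1e-12

# Inequality (4): weighted AM-GM in the form stated above
lhs = np.prod(x ** lam)
rhs = (np.dot(lam, x) / lam.sum()) ** lam.sum()
assert lhs <= rhs * (1 + 1e-12)
```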
Lemma A.2 (Theorem 5.5 in Matousek [1999], Lemma 17 in Bartlett et al. [2019], Lemma 3.3 in Anthony et al. [1999], Theorem 1.3 in Edelsbrunner [1987]).

Let \mathcal{P}\subseteq\mathbb{R}^{\ell} and let f_{1},\ldots,f_{t}:\mathbb{R}^{\ell}\to\mathbb{R} with t\geq\ell be functions that are polynomials of degree m when restricted to \mathcal{P}. Then

|\{(\operatorname{sgn}(f_{1}(\mathbf{p})),\ldots,\operatorname{sgn}(f_{t}(\mathbf{p}))):\mathbf{p}\in\mathcal{P}\}| = 1, \qquad m=0,
|\{(\operatorname{sgn}(f_{1}(\mathbf{p})),\ldots,\operatorname{sgn}(f_{t}(\mathbf{p}))):\mathbf{p}\in\mathcal{P}\}| \leq \left(\frac{et}{\ell+1}\right)^{\ell+1}, \qquad m=1,
|\{(\operatorname{sgn}(f_{1}(\mathbf{p})),\ldots,\operatorname{sgn}(f_{t}(\mathbf{p}))):\mathbf{p}\in\mathcal{P}\}| \leq 2\left(\frac{2etm}{\ell}\right)^{\ell}, \qquad m\geq 2.
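The m=1 case can be checked empirically in a small dimension. Random sampling of parameter vectors \mathbf{p} only discovers a subset of the realizable sign patterns, so the count found below is a lower bound on the true number of patterns, which in turn should respect the stated upper bound. This is a minimal illustrative sketch, not part of any proof:

```python
import numpy as np

rng = np.random.default_rng(1)
ell, t = 2, 6                     # parameter dimension and number of functions
A = rng.normal(size=(t, ell))     # f_i(p) = <A_i, p> + b_i, i.e., degree m = 1
b = rng.normal(size=t)

# Sample many parameter vectors p and record the sign patterns they realize.
P = rng.normal(size=(200000, ell))
patterns = set(map(tuple, np.sign(P @ A.T + b).astype(int)))

bound = (np.e * t / (ell + 1)) ** (ell + 1)   # bound for the m = 1 case
print(len(patterns), "<=", bound)
```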
Lemma A.3.

Let h:\mathcal{I}\times\mathcal{P}\to\mathbb{R} define a parameterized function class with \mathcal{P}\subseteq\mathbb{R}^{\ell}, and let \mathcal{H} be the corresponding hypothesis class (Definition 2.1). Let m\in\mathbb{N} and let R:\mathbb{N}\to\mathbb{N} be a function with the following property: for any t\in\mathbb{N} and I_{1},\ldots,I_{t}\in\mathcal{I}, there exist R(t) subsets \mathcal{P}_{1},\ldots,\mathcal{P}_{R(t)} of \mathcal{P} such that \mathcal{P}=\cup_{i=1}^{R(t)}\mathcal{P}_{i} and, for all i\in[R(t)] and j\in[t], h(I_{j},\mathbf{p}) restricted to \mathcal{P}_{i} is a polynomial function of degree at most m depending on at most \ell'\leq\ell of the coordinates. In other words, the map

\mathbf{p}\mapsto(h(I_{1},\mathbf{p}),\ldots,h(I_{t},\mathbf{p}))

is a piecewise polynomial map from \mathcal{P} to \mathbb{R}^{t} with at most R(t) pieces. Then

\operatorname{Pdim}(\mathcal{H})\leq\sup\left\{t\geq 1:2^{t-1}\leq R(t)\left(\frac{2et(m+1)}{\ell'}\right)^{\ell'}\right\}.
Proof.

Given any t\in\mathbb{N} and (I_{1},s_{1}),\ldots,(I_{t},s_{t})\in\mathcal{I}\times\mathbb{R}, we first bound the size of

\left\{\left(\operatorname{sgn}(h(I_{1},\mathbf{p})-s_{1}),\dots,\operatorname{sgn}(h(I_{t},\mathbf{p})-s_{t})\right):\mathbf{p}\in\mathcal{P}\right\}.

By the hypothesis of the lemma, within each \mathcal{P}_{i}, h(I_{j},\mathbf{p})-s_{j} is a polynomial in \mathbf{p} for every j=1,\ldots,t. Applying Lemma A.2,

|\{\left(\operatorname{sgn}(h(I_{1},\mathbf{p})-s_{1}),\ldots,\operatorname{sgn}(h(I_{t},\mathbf{p})-s_{t})\right):\mathbf{p}\in\mathcal{P}_{i}\}|\leq 2\left(\frac{2et(m+1)}{\ell'}\right)^{\ell'}.

Summing over the different \mathcal{P}_{i}, i\in[R(t)], we obtain

\left|\left\{\left(\operatorname{sgn}(h(I_{1},\mathbf{p})-s_{1}),\dots,\operatorname{sgn}(h(I_{t},\mathbf{p})-s_{t})\right):\mathbf{p}\in\mathcal{P}\right\}\right|\leq 2R(t)\left(\frac{2et(m+1)}{\ell'}\right)^{\ell'}.

Since pseudo-shattering t points requires realizing all 2^{t} sign patterns, \operatorname{Pdim}(\mathcal{H}) is bounded by the largest t such that 2^{t}\leq 2R(t)\left(\frac{2et(m+1)}{\ell'}\right)^{\ell'}. ∎

Lemma A.4.

Let N^{\operatorname{ReLU}}:\mathbb{R}^{d}\times\mathbb{R}^{W}\to\mathbb{R}^{\ell} be a neural network function with \operatorname{ReLU} activation and architecture \bm{w}=[d,w_{1},\ldots,w_{L},\ell] (Definition 2.4). Then for every natural number t>LW and any \mathbf{x}^{1},\ldots,\mathbf{x}^{t}\in\mathbb{R}^{d}, there exist subsets \mathcal{W}_{1},\ldots,\mathcal{W}_{Q} of \mathbb{R}^{W} with Q\leq 2^{L}\left(\frac{2et\sum_{i=1}^{L}(iw_{i})}{LW}\right)^{LW} whose union is all of \mathbb{R}^{W}, such that N^{\operatorname{ReLU}}(\mathbf{x}^{j},\mathbf{w}) restricted to \mathbf{w}\in\mathcal{W}_{i} is a polynomial function of degree at most L+1 for all (i,j)\in[Q]\times[t].

Proof.

Follows from Section 2 in Bartlett et al. [1998] and Section 4 in Bartlett et al. [2019]. ∎
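As a toy illustration of the statement (not part of the proof), consider the smallest case d=1, L=1, w_{1}=1, \ell=1 with biases omitted, so the parameters are a single hidden weight a and a single output weight v. For a fixed input x\in\mathbb{R}, the parameter space splits into two regions on each of which the output is a polynomial of degree at most L+1=2 in (a,v):

N^{\operatorname{ReLU}}(x,(a,v)) = v\cdot\operatorname{ReLU}(ax) =
\begin{cases}
v\,a\,x, & (a,v)\in\mathcal{W}_{1}:=\{(a,v)\in\mathbb{R}^{2}:ax\geq 0\},\\
0, & (a,v)\in\mathcal{W}_{2}:=\{(a,v)\in\mathbb{R}^{2}:ax<0\}.
\end{cases}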

Lemma A.5 (Anthony et al. [1999], Sontag et al. [1998]).

Let N^{\operatorname{sgn}}:\mathbb{R}^{w_{0}}\times\mathbb{R}^{W}\to\mathbb{R}^{\ell} be a neural network function with \operatorname{sgn} activation and architecture \bm{w}=[w_{0},w_{1},\ldots,w_{L},\ell], where the final linear transformation T_{L+1} is taken to be the identity (Definition 2.4). Let U=w_{1}+\cdots+w_{L} denote the size of the neural network. Then for every t\in\mathbb{N} and any \mathbf{x}^{1},\ldots,\mathbf{x}^{t}\in\mathbb{R}^{w_{0}}, there exist subsets \mathcal{W}_{1},\ldots,\mathcal{W}_{Q} of \mathbb{R}^{W} with Q\leq\left(\frac{etU}{W'}\right)^{W'} whose union is all of \mathbb{R}^{W}, where W'=\sum_{i=1}^{L}(w_{i-1}+1)w_{i}, such that N^{\operatorname{sgn}}(\mathbf{x}^{j},\mathbf{w}) restricted to any \mathcal{W}_{i} is constant for all (i,j)\in[Q]\times[t].

Proof.

Fix t\in\mathbb{N} and \mathbf{x}^{1},\ldots,\mathbf{x}^{t}\in\mathbb{R}^{w_{0}}. Consider a neuron in the first hidden layer. The output of this neuron on the input \mathbf{x}^{j} is \operatorname{sgn}(\langle\mathbf{a},\mathbf{x}^{j}\rangle+b), where \mathbf{a}\in\mathbb{R}^{w_{0}}, b\in\mathbb{R} are the weights associated with this neuron. As a function of these weights, the argument of the sign is linear, i.e., a polynomial of degree 1. Applying Lemma A.2, there are at most \left(\frac{et}{w_{0}+1}\right)^{w_{0}+1} regions in the space (\mathbf{a},b)\in\mathbb{R}^{w_{0}}\times\mathbb{R} such that within each region, the output of this neuron is constant on \mathbf{x}^{1},\ldots,\mathbf{x}^{t}. Applying the same reasoning to each of the w_{1} neurons in the first layer, we obtain that the output of the first hidden layer (as a 0/1 vector in \mathbb{R}^{w_{1}}) is piecewise constant as a function of the parameters of the first layer, with at most \left(\frac{et}{w_{0}+1}\right)^{(w_{0}+1)w_{1}} pieces. For a fixed output of the first hidden layer (which is a vector in \mathbb{R}^{w_{1}}), we can apply the same reasoning and partition the space of weights of the second layer into \left(\frac{et}{w_{1}+1}\right)^{(w_{1}+1)w_{2}} regions where the output of the second hidden layer is constant. Applying this argument iteratively across the hidden layers, and using inequality (4) in Lemma A.1, we deduce that there exists a decomposition of \mathbb{R}^{W} with at most

\left(\frac{et}{w_{0}+1}\right)^{(w_{0}+1)w_{1}}\left(\frac{et}{w_{1}+1}\right)^{(w_{1}+1)w_{2}}\cdots\left(\frac{et}{w_{L-1}+1}\right)^{(w_{L-1}+1)w_{L}}
\leq \left(\left(\frac{et}{w_{0}+1}\right)^{\frac{(w_{0}+1)w_{1}}{W'}}\left(\frac{et}{w_{1}+1}\right)^{\frac{(w_{1}+1)w_{2}}{W'}}\cdots\left(\frac{et}{w_{L-1}+1}\right)^{\frac{(w_{L-1}+1)w_{L}}{W'}}\right)^{W'}
\leq \left(\frac{etw_{1}}{W'}+\cdots+\frac{etw_{L}}{W'}\right)^{W'}
= \left(\frac{etU}{W'}\right)^{W'}

regions, such that within each such region the output of the last hidden layer of the neural network is constant, as a function of the neural network parameters, for all the vectors \mathbf{x}^{1},\ldots,\mathbf{x}^{t}. ∎
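As an empirical illustration (not part of the proof), one can fix a handful of inputs to a tiny one-hidden-layer \operatorname{sgn} network, sample its hidden-layer weights at random, and count how many distinct 0/1 output patterns appear across those inputs; the count should stay below \left(\frac{etU}{W'}\right)^{W'}. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
w0, w1, t = 2, 2, 5                    # one hidden layer (L = 1) with w1 sgn neurons
X = rng.normal(size=(t, w0))           # fixed inputs x^1, ..., x^t

# Sample hidden-layer weights and record the 0/1 output pattern on all t inputs;
# within each region from Lemma A.5 this pattern is constant.
patterns = set()
for _ in range(100000):
    A = rng.normal(size=(w1, w0))
    b = rng.normal(size=w1)
    patterns.add(((X @ A.T + b) > 0).astype(np.int8).tobytes())

U = w1                                 # network size w_1 + ... + w_L
W_prime = (w0 + 1) * w1                # number of hidden-layer parameters
bound = (np.e * t * U / W_prime) ** W_prime
print(len(patterns), "<=", bound)      # sampling discovers a subset of the patterns
```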

Appendix B Proofs of main results

Proof of Theorem 2.5.

Define h:\mathcal{I}\times\mathbb{R}^{W}\to\mathbb{R} by

h(I,\mathbf{w}):=S(I,\varphi^{N^{\operatorname{sgn}},\operatorname{Sigmoid}}_{\mathbf{w}}(I)),

using the notation from Section 2.2. We wish to apply Lemma A.3 to the parameterized function class given by h with \mathcal{P}=\mathbb{R}^{W}. Accordingly, we need to find a function R:\mathbb{N}\to\mathbb{N} such that for any natural number t>W and any I_{1},\ldots,I_{t}, the function \mathbf{w}\mapsto(h(I_{1},\mathbf{w}),\ldots,h(I_{t},\mathbf{w})) is a piecewise polynomial function on \mathbb{R}^{W} with at most R(t) pieces.

We view the space of neural network parameters as a Cartesian product of the space \mathcal{W}' of all parameters of the neural network except for the last linear transformation T_{L+1}, and the space \mathbb{R}^{\ell\times w_{L}} of matrices representing this final linear transformation. Thus, we identify \mathbb{R}^{W} with \mathcal{W}'\times\mathbb{R}^{\ell\times w_{L}}.

By Lemma A.5, there exists a decomposition of \mathcal{W}' into at most \left(\frac{etU}{W'}\right)^{W'} regions, where W'=W-\ell w_{L} is the number of parameters determining \mathcal{W}', such that within each region, the output of the final hidden layer of the neural network is constant (as a function of the parameters in the region) for each input \operatorname{Enc}(I_{j}), j\in[t].

We fix the parameters \mathbf{w}\in\mathcal{W}' to be in one of these regions and let \mathbf{z}^{j} be the (constant) output of the final hidden layer corresponding to the input \operatorname{Enc}(I_{j}) for any parameter setting in this region. Let us consider the behaviour of \operatorname{Sigmoid}(A^{L+1}\mathbf{z}^{j}), i.e., the sigmoid activation applied to the final output of the neural network, as a function of the final set of parameters, encoded by the matrix A^{L+1}\in\mathbb{R}^{\ell\times w_{L}}. We follow the approach used in the proof of Theorem 8.11 in Anthony et al. [1999]. For each k\in[\ell],

(\operatorname{Sigmoid}(A^{L+1}\mathbf{z}^{j}))_{k}=\frac{1}{1+\exp\left(-\sum_{i=1}^{w_{L}}A^{L+1}_{ki}\mathbf{z}^{j}_{i}\right)}=\frac{\prod_{i=1}^{w_{L}}e^{-A^{L+1}_{ki}}}{\prod_{i=1}^{w_{L}}e^{-A^{L+1}_{ki}}+\prod_{i=1}^{w_{L}}\left(e^{-A^{L+1}_{ki}}\right)^{1+\mathbf{z}^{j}_{i}}}.

Setting \theta_{ki}=e^{-A^{L+1}_{ki}} for k\in[\ell] and i\in[w_{L}], we have

(\operatorname{Sigmoid}(A^{L+1}\mathbf{z}^{j}))_{k}=\frac{\prod_{i=1}^{w_{L}}\theta_{ki}}{\prod_{i=1}^{w_{L}}\theta_{ki}+\prod_{i=1}^{w_{L}}\theta_{ki}^{1+\mathbf{z}^{j}_{i}}}.

Note that the right-hand side above is a ratio of polynomials in the \theta_{ki} with degrees at most 2w_{L}.
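The algebraic step above can be verified symbolically in a small case. The following sketch (illustrative only) takes w_{L}=2 and a fixed hidden-layer output \mathbf{z}^{j}=(1,0), and checks that substituting \theta_{ki}=e^{-A^{L+1}_{ki}} into the rational expression recovers the sigmoid of the corresponding linear form:

```python
import sympy as sp

A1, A2 = sp.symbols('A1 A2', real=True)
z1, z2 = 1, 0                                   # a fixed 0/1 hidden-layer output
sigmoid = 1 / (1 + sp.exp(-(A1 * z1 + A2 * z2)))

t1, t2 = sp.symbols('theta1 theta2', positive=True)
rational = (t1 * t2) / (t1 * t2 + t1 ** (1 + z1) * t2 ** (1 + z2))

# Substituting theta_i = exp(-A_i) should recover the sigmoid expression.
assert rational.subs({t1: sp.exp(-A1), t2: sp.exp(-A2)}).equals(sigmoid)
```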

Next, as per the hypothesis of Theorem 2.5, let \psi^{j}_{1},\ldots,\psi^{j}_{\Gamma} be the polynomials on \mathbb{R}^{\ell}, each of degree at most \gamma, such that the function S(I_{j},\cdot) is a polynomial of degree at most \lambda within each of the regions where the signs of \psi^{j}_{1},\ldots,\psi^{j}_{\Gamma} are constant. Moreover, let T:[0,1]^{\ell}\to\mathcal{P} be the affine map T(\mathbf{u})_{k}=\eta_{k}+(\tau_{k}-\eta_{k})\mathbf{u}_{k}. We observe that for all A^{L+1} such that the functions

\psi^{j}_{1}(T(\operatorname{Sigmoid}(A^{L+1}\mathbf{z}^{j}))),\ldots,\psi^{j}_{\Gamma}(T(\operatorname{Sigmoid}(A^{L+1}\mathbf{z}^{j})))

have the same signs, h(I_{j},(\mathbf{w},A^{L+1}))=S(I_{j},\varphi^{N^{\operatorname{sgn}},\operatorname{Sigmoid}}_{\mathbf{w},A^{L+1}}(I_{j})) is a polynomial of degree at most \lambda. By the observations made above, the functions \psi^{j}_{1}(T(\operatorname{Sigmoid}(A^{L+1}\mathbf{z}^{j}))),\ldots,\psi^{j}_{\Gamma}(T(\operatorname{Sigmoid}(A^{L+1}\mathbf{z}^{j}))) are rational functions, i.e., ratios of polynomials, in the transformed parameters \theta_{ki}, and the numerators and denominators of these rational functions have degrees bounded by 2w_{L}\gamma. Since \operatorname{sgn}\left(\frac{P}{Q}\right)=\operatorname{sgn}(PQ) for any multivariate polynomials P, Q (whenever the denominator is nonzero), we can bound the total number of sign patterns of these functions using Lemma A.2, where the polynomials defining the regions have degree at most 3w_{L}\gamma in the transformed parameters \theta_{ki}. Considering all such functions for j\in[t] gives a total of t\Gamma rational functions. Thus, an application of Lemma A.2 gives a decomposition of \mathbb{R}^{\ell\times w_{L}} into at most

2\left(\frac{2e\cdot t\Gamma\cdot 3\gamma w_{L}}{\ell w_{L}}\right)^{\ell w_{L}}\leq 2\left(\frac{6et\gamma\Gamma}{\ell}\right)^{\ell w_{L}}

regions such that h(I_{j},(\mathbf{w},A^{L+1})) is a polynomial of degree at most \lambda within each region. Combined with the bound \left(\frac{etU}{W'}\right)^{W'} on the number of regions for \mathbf{w}\in\mathcal{W}', we obtain a decomposition of the full parameter space \mathcal{W}'\times\mathbb{R}^{\ell\times w_{L}} into at most

R(t):=\left(\frac{etU}{W'}\right)^{W'}\cdot 2\left(\frac{6et\gamma\Gamma}{\ell}\right)^{\ell w_{L}}

regions, such that within each region h(I_{j},(\mathbf{w},A^{L+1})), as a function of (\mathbf{w},A^{L+1}), is a polynomial of degree at most \lambda, for every j\in[t]. Moreover, within each such region, h(I_{j},(\mathbf{w},A^{L+1})) depends only on A^{L+1}, i.e., on at most \ell w_{L} of the coordinates. Applying Lemma A.3 with \ell'=\ell w_{L} and degree \lambda, \operatorname{Pdim}\left(\mathcal{F}^{S}_{N^{\operatorname{sgn}},\operatorname{Sigmoid}}\right) is bounded by the largest t\in\mathbb{N} such that

2^{t-1}\leq\left(\frac{etU}{W'}\right)^{W'}\cdot 2\left(\frac{6et\gamma\Gamma}{\ell}\right)^{\ell w_{L}}\cdot\left(\frac{2et(\lambda+1)}{\ell w_{L}}\right)^{\ell w_{L}}.

Taking logarithms on both sides and using \log 2\geq 1/2, this is bounded above by the largest t such that

\frac{1}{2}(t-2)\leq W'\log\left(\frac{etU}{W'}\right)+\ell w_{L}\log\left(\frac{6et\gamma\Gamma}{\ell}\right)+\ell w_{L}\log\left(\frac{2et(\lambda+1)}{\ell w_{L}}\right).

As we only need an upper bound on the pseudo-dimension, we can loosen the inequality above using inequality (3) in Lemma A.1:

\frac{1}{2}(t-2)\leq W'\left(\frac{etU/W'}{8eU}+\log(8U)\right)+\ell w_{L}\left(\frac{6et\gamma\Gamma/\ell}{48e\gamma\Gamma w_{L}}+\log(48\gamma\Gamma w_{L})\right)+\ell w_{L}\left(\frac{2et(\lambda+1)/(\ell w_{L})}{16e(\lambda+1)}+\log(16(\lambda+1))\right)
\leq \frac{1}{8}t+W'\log(8U)+\frac{1}{8}t+\ell w_{L}\log(48\gamma\Gamma w_{L})+\frac{1}{8}t+\ell w_{L}\log(16(\lambda+1))
\leq \frac{3}{8}t+W\log(8U)+\ell w_{L}\log\left(\frac{48\gamma\Gamma w_{L}\cdot 16(\lambda+1)}{8U}\right)
\leq \frac{3}{8}t+W\log(8U)+\ell w_{L}\log(96\gamma\Gamma(\lambda+1)w_{L}/U).

It follows that

\operatorname{Pdim}\left(\mathcal{F}^{S}_{N^{\operatorname{sgn}},T\circ\operatorname{Sigmoid}}\right)=\mathcal{O}\left(W\log U+\ell w_{L}\log\left(\gamma\Gamma(\lambda+1)\right)\right)=\mathcal{O}\left(W\log(U\gamma\Gamma(\lambda+1))\right). ∎

Proof of Theorem 2.6.

Define h:\mathcal{I}\times\mathbb{R}^{W}\to\mathbb{R} by

h(I,\mathbf{w}):=S(I,\varphi^{N^{\operatorname{ReLU}},\operatorname{CReLU}}_{\mathbf{w}}(I)),

using the notation from Section 2.2. We wish to apply Lemma A.3 to the parameterized function class given by h with \mathcal{P}=\mathbb{R}^{W}. Accordingly, we need to find a function R:\mathbb{N}\to\mathbb{N} such that for any natural number t>LW and any I_{1},\ldots,I_{t}, the function \mathbf{w}\mapsto(h(I_{1},\mathbf{w}),\ldots,h(I_{t},\mathbf{w})) is a piecewise polynomial function on \mathbb{R}^{W} with at most R(t) pieces.

Note that \varphi^{N^{\operatorname{ReLU}},\operatorname{CReLU}}_{\mathbf{w}}(I) can be seen as the output of a neural network with \operatorname{ReLU} activations and architecture [d,w_{1},\ldots,w_{L},2\ell,\ell], where the final linear function is the fixed function T(\mathbf{u})_{k}=(\tau_{k}-\eta_{k})\mathbf{u}_{k}. This is because \operatorname{CReLU}(x)=\operatorname{ReLU}(x)-\operatorname{ReLU}(x-1) can be simulated using two \operatorname{ReLU} neurons. Applying Lemma A.4, this implies that given t\in\mathbb{N} and I_{1},\ldots,I_{t}\in\mathcal{I}, there are

Q\leq 2^{L+1}\left(\frac{2et\cdot\sum_{i=1}^{L+1}(iw_{i})}{(L+1)W}\right)^{(L+1)W}\leq 2^{L+1}\left(\frac{2et(U+2\ell)}{W}\right)^{(L+1)W}

regions \mathcal{W}_{1},\ldots,\mathcal{W}_{Q} whose union is all of \mathbb{R}^{W}, such that \varphi^{N^{\operatorname{ReLU}},\operatorname{CReLU}}_{\mathbf{w}}(I_{j}) restricted to \mathbf{w}\in\mathcal{W}_{i} is a polynomial function of degree at most L+2 for all (i,j)\in[Q]\times[t].
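For concreteness, the identity \operatorname{CReLU}(x)=\operatorname{ReLU}(x)-\operatorname{ReLU}(x-1), i.e., the clipping of x to [0,1] by two \operatorname{ReLU} neurons, can be sanity-checked numerically (illustrative only):

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)
crelu = lambda x: relu(x) - relu(x - 1.0)   # CReLU simulated by two ReLU neurons

x = np.linspace(-2.0, 3.0, 101)
assert np.allclose(crelu(x), np.clip(x, 0.0, 1.0))
```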

Next, as per the hypothesis of Theorem 2.5, let \psi^{j}_{1},\ldots,\psi^{j}_{\Gamma} be the polynomials on \mathbb{R}^{\ell}, each of degree at most \gamma, such that the function S(I_{j},\cdot) is a polynomial of degree at most \lambda within each of the regions where the signs of \psi^{j}_{1},\ldots,\psi^{j}_{\Gamma} are constant. Thus, for all \mathbf{w}\in\mathbb{R}^{W} such that

\psi^{j}_{1}(\varphi^{N^{\operatorname{ReLU}},\operatorname{CReLU}}_{\mathbf{w}}(I_{j})),\ldots,\psi^{j}_{\Gamma}(\varphi^{N^{\operatorname{ReLU}},\operatorname{CReLU}}_{\mathbf{w}}(I_{j}))

have the same signs, h(I_{j},\mathbf{w})=S(I_{j},\varphi^{N^{\operatorname{ReLU}},\operatorname{CReLU}}_{\mathbf{w}}(I_{j})) is a polynomial of degree at most \lambda(L+2). The functions \psi^{j}_{1}(\varphi^{N^{\operatorname{ReLU}},\operatorname{CReLU}}_{\mathbf{w}}(I_{j})),\ldots,\psi^{j}_{\Gamma}(\varphi^{N^{\operatorname{ReLU}},\operatorname{CReLU}}_{\mathbf{w}}(I_{j})) are polynomials of degree at most \gamma(L+2). Considering all these polynomials for j\in[t], by Lemma A.2, each \mathcal{W}_{i}\subseteq\mathbb{R}^{W} from the decomposition above can be further decomposed into at most 2\left(\frac{2et\Gamma\gamma(L+2)}{W}\right)^{W} regions such that for all \mathbf{w} in such a region, h(I_{j},\mathbf{w}) is a polynomial function of \mathbf{w} of degree at most \lambda(L+2).

To summarize the arguments above, we have a decomposition of \mathbb{R}^{W} into at most

R(t):=2^{L+1}\left(\frac{2et(U+2\ell)}{W}\right)^{(L+1)W}\cdot 2\left(\frac{2et\Gamma\gamma(L+2)}{W}\right)^{W}

regions such that within each region, h(I_{j},\mathbf{w}) is a polynomial function of \mathbf{w} of degree at most \lambda(L+2). Applying Lemma A.3, \operatorname{Pdim}\left(\mathcal{F}^{S}_{N^{\operatorname{ReLU}},\operatorname{CReLU}}\right) is bounded by the largest t\in\mathbb{N} such that

2^{t-1}\leq 2^{L+1}\left(\frac{2et(U+2\ell)}{W}\right)^{(L+1)W}\cdot 2\left(\frac{2et\Gamma\gamma(L+2)}{W}\right)^{W}\cdot\left(\frac{2et(\lambda+1)(L+2)}{W}\right)^{W},

which, taking logarithms and using \log 2\geq 1/2, is bounded above by the largest t such that

\frac{1}{2}(t-1)\leq L+2+(L+1)W\log\left(\frac{2et(U+2\ell)}{W}\right)+W\log\left(\frac{2et\Gamma\gamma(L+2)}{W}\right)+W\log\left(\frac{2et(\lambda+1)(L+2)}{W}\right)
\leq L+2+\frac{1}{8}t+(L+1)W\log(16(U+2\ell))+\frac{1}{8}t+W\log(16\Gamma\gamma(L+2))+\frac{1}{8}t+W\log(16(\lambda+1)(L+2))
\leq L+2+\frac{3}{8}t+(L+1)W\log(16(U+2\ell))+W\log(\gamma\Gamma(\lambda+1))+2W\log(16(L+2)),

where inequality (3) in Lemma A.1 is applied in the second inequality. It follows that

\operatorname{Pdim}\left(\mathcal{F}^{S}_{N^{\operatorname{ReLU}},\operatorname{CReLU}}\right)=\mathcal{O}\left(LW\log(U+\ell)+W\log(\gamma\Gamma(\lambda+1))\right). ∎

Proof of Corollary 2.7.

We introduce an auxiliary function f:\mathbb{R}^{r}\rightarrow\{\mathbf{p}_{1},\dots,\mathbf{p}_{r}\} given by f(\mathbf{x})=\mathbf{p}_{\operatorname{arg\,max}_{i\in[r]}\mathbf{x}_{i}}, and let S':\mathcal{I}\times\mathbb{R}^{r}\rightarrow\mathbb{R} be defined by

S'(I,\mathbf{x}):=S(I,f(\mathbf{x})).

The space \mathbb{R}^{r} can be decomposed using the at most \frac{r(r-1)}{2} hyperplanes

\left\{\mathbf{x}\in\mathbb{R}^{r}:\mathbf{x}_{i}=\mathbf{x}_{j}\right\},\quad\forall(i,j)\in[r]\times[r],\ i\neq j.

Within each region of this decomposition, the index of the largest coordinate of \mathbf{x} is unchanged. Therefore, for any fixed I\in\mathcal{I}, the new score function S'(I,\cdot) is constant on each of these regions. A direct application of Theorem 2.5 and Theorem 2.6 to S' then yields the desired result. ∎
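The key observation that the index of the maximizing coordinate is constant on each cell of this hyperplane arrangement can be illustrated with a small numerical experiment (a sketch, not part of the proof):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
r = 4
X = rng.normal(size=(50000, r))
pairs = list(itertools.combinations(range(r), 2))

def cell(x):
    # Sign pattern of the pairwise comparisons x_i - x_j defining the hyperplanes.
    return tuple(int(np.sign(x[i] - x[j])) for i, j in pairs)

# Within each cell of the arrangement, argmax (and hence f and S') is constant.
argmax_by_cell = {}
for x in X:
    argmax_by_cell.setdefault(cell(x), set()).add(int(np.argmax(x)))
assert all(len(s) == 1 for s in argmax_by_cell.values())
```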

Proof of Proposition 3.7.

The proof of Proposition 3.7 is analogous to the proof of Theorem 4.1 in Balcan et al. [2021d]. For any fixed instance I\in\mathcal{I}, let the set of candidate cutting planes be \mathcal{C}=\{c_{1},\dots,c_{|\mathcal{C}|}\}. Comparing the overall scores of all pairs of cuts introduces the hyperplanes

\sum_{i=1}^{\ell}\bm{\mu}_{i}\operatorname{Score}_{i}(c_{j},I)=\sum_{i=1}^{\ell}\bm{\mu}_{i}\operatorname{Score}_{i}(c_{k},I),\quad\forall j,k\in[|\mathcal{C}|],\ j\neq k.

These at most \frac{|\mathcal{C}|(|\mathcal{C}|-1)}{2} hyperplanes decompose the \bm{\mu} space [0,1]^{\ell}. Within each region defined by these hyperplanes, the selected cut remains the same, so the resulting branch-and-cut tree size is constant. This proves that f_{S}(I,\bm{\mu}) is a piecewise constant function of \bm{\mu} for any fixed I. We then apply Theorem 2.5 and Theorem 2.6 to derive the pseudo-dimension bounds. ∎
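A toy numerical illustration of this piecewise-constant structure, using a hypothetical random score matrix rather than SCIP's actual scoring functions, is the following sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
num_cuts, ell = 5, 3
# Entry (j, i) plays the role of Score_i(c_j, I) for one fixed instance I;
# these are random stand-ins, not the scores computed by an actual solver.
scores = rng.uniform(size=(num_cuts, ell))

def selected_cut(mu):
    # Cut chosen by the weighted scoring rule sum_i mu_i * Score_i(c_j, I).
    return int(np.argmax(scores @ mu))

mu = rng.uniform(size=ell)
# Perturbations of mu small enough not to cross any comparison hyperplane
# leave the selected cut, and hence the branch-and-cut tree size, unchanged.
assert all(selected_cut(mu + 1e-9 * rng.normal(size=ell)) == selected_cut(mu)
           for _ in range(100))
```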