
Controlling Continuous Relaxation for
Combinatorial Optimization

Yuma Ichikawa
Fujitsu Limited, Kanagawa, Japan
Department of Basic Science, University of Tokyo
Abstract

Unsupervised learning (UL)-based solvers for combinatorial optimization (CO) train a neural network that generates a soft solution by directly optimizing the CO objective using a continuous relaxation strategy. These solvers offer several advantages over traditional methods and other learning-based methods, particularly for large-scale CO problems. However, UL-based solvers face two practical issues: (I) an optimization issue, where UL-based solvers are easily trapped at local optima, and (II) a rounding issue, where UL-based solvers require artificial post-learning rounding from the continuous space back to the original discrete space, undermining the robustness of the results. This study proposes a Continuous Relaxation Annealing (CRA) strategy, an effective rounding-free learning method for UL-based solvers. CRA introduces a penalty term that dynamically shifts from prioritizing continuous solutions, effectively smoothing the non-convexity of the objective function, to enforcing discreteness, eliminating artificial rounding. Experimental results demonstrate that CRA significantly enhances the performance of UL-based solvers, outperforming existing UL-based solvers and greedy algorithms in complex CO problems. Additionally, CRA effectively eliminates artificial rounding and accelerates the learning process.

1 Introduction

The objective of combinatorial optimization (CO) problems is to find the optimal solution from a discrete space, and these problems are fundamental in many real-world applications (Papadimitriou and Steiglitz, 1998). Most CO problems are NP-hard or NP-complete, making it challenging to solve large-scale problems within feasible computational time. Traditional methods frequently depend on heuristics to find approximate solutions; however, considerable insight into the specific problems is required. Alternatively, problems can be formulated as integer linear programs (ILP) and solved using ILP solvers. However, ILP lacks scalability when applied to problems with graph structures.

Recently, several studies have used machine learning methods to handle CO problems by learning heuristics. Most of these studies focus on supervised learning (SL)-based solvers (Hudson et al., 2021; Joshi et al., 2019; Gasse et al., 2019; Selsam et al., 2018; Khalil et al., 2016), which require optimal solutions to CO problems as supervision during training. However, obtaining optimal solutions is challenging in practice, and SL-based solvers often fail to generalize well (Yehuda et al., 2020). Reinforcement learning (RL)-based solvers (Yao et al., 2019; Chen and Tian, 2019; Yolcu and Póczos, 2019; Nazari et al., 2018; Khalil et al., 2017; Bello et al., 2016) avoid the need for optimal solutions but often suffer from notoriously unstable training due to poor gradient estimation and hard exploration (Mnih et al., 2015; Tang et al., 2017; Espeholt et al., 2018). Unsupervised learning (UL)-based solvers have recently attracted much attention (Schuetz et al., 2022a; Karalias and Loukas, 2020; Amizadeh et al., 2018). UL-based solvers follow a continuous relaxation approach, training a UL model to output a soft solution to the relaxed CO problem by directly optimizing a differentiable objective function, offering significantly stable and fast training even for large-scale CO problems. Notably, the physics-inspired GNN (PI-GNN) solver (Schuetz et al., 2022a) employs graph neural networks (GNNs) to automatically learn instance-specific heuristics and performs on par with or outperforms existing solvers for CO problems with millions of variables, without requiring optimal solutions. While these solvers offer advantages over traditional and other machine learning-based solvers, they face two practical issues. The first is an optimization issue: UL-based solvers are easily trapped at local optima. Due to this issue, Angelini and Ricci-Tersenghi (2023) demonstrated that the PI-GNN solver (Schuetz et al., 2022a) could not achieve results comparable to those of the degree-based greedy algorithm (DGA) (Angelini and Ricci-Tersenghi, 2019) on maximum independent set (MIS) problems in random regular graphs (RRGs). Wang and Li (2023) also pointed out the importance of using datasets or history, and of initializing the GNN with outputs from greedy solvers, to help the PI-GNN solver overcome optimization challenges. This issue is a crucial bottleneck to the applicability of this method across various real-world applications. The second issue relates to the inherent ambiguity of the continuous relaxation approach. This approach necessitates artificial rounding from the soft solution, which may include continuous values, back to the original discrete solution, potentially undermining the robustness of the results. While linear relaxation can provide an optimal solution for original discrete problems on bipartite graphs (Hoffman and Kruskal, 2010), it typically leads to solutions with $1/2$ values, a phenomenon known as half-integrality (Nemhauser and Trotter Jr, 1974), for which existing rounding methods (Schuetz et al., 2022b; Wang et al., 2022) completely lose their robustness. For NP-hard problems with graph structures, such as MIS and MaxCut, semidefinite programming (SDP) relaxations have been proposed as effective approximation methods (Lovász, 1979; Goemans and Williamson, 1995). However, these approaches rely on rounding techniques, such as spectral clustering (Von Luxburg, 2007), to transform relaxed solutions into feasible ones, which often fail to obtain optimal solutions.

To address these issues, we propose Continuous Relaxation Annealing (CRA). CRA introduces a penalty term to control the continuity and discreteness of the relaxed variables, with a parameter $\gamma$ regulating the intensity of this penalty term. When the parameter $\gamma$ is small, the relaxed variables tend to favor continuous solutions, whereas a large $\gamma$ biases them toward discrete values. This penalty term also effectively eliminates local optima. Moreover, a small $\gamma$ forces the loss function to approach a simple convex function, encouraging active exploration within the continuous space. CRA also includes an annealing process, where $\gamma$ is gradually increased until the relaxed variables approach discrete values, eliminating the artificial rounding from the continuous to the original discrete space after learning. In this study, the solver that applies CRA to the PI-GNN solver is referred to as the CRA-PI-GNN solver. We also demonstrate the benefits of CRA through experiments on benchmark CO problems, including MIS, maximum cut (MaxCut), and diverse bipartite matching (DBM) problems across graphs of varying sizes and degrees. The experimental results show that CRA significantly enhances the performance of the PI-GNN solver, outperforming the original PI-GNN solver, other state-of-the-art learning-based baselines, and greedy algorithms. This improvement is achieved by directly optimizing each instance without any history, e.g., previous optimal solutions or information from other instances. Additionally, these experiments indicate that CRA accelerates the learning process of the PI-GNN solver. Notably, these results overcome the limitations pointed out by Angelini and Ricci-Tersenghi (2023) and Wang and Li (2023), highlighting the further potential of UL-based solvers.

Notation

We use the shorthand expression $[N]=\{1,2,\ldots,N\}$, where $N\in\mathbb{N}$. $I_N\in\mathbb{R}^{N\times N}$ denotes the $N\times N$ identity matrix, $\bm{1}_N$ denotes the vector $(1,\ldots,1)^{\top}\in\mathbb{R}^{N}$, and $\bm{0}_N$ denotes the vector $(0,\ldots,0)^{\top}\in\mathbb{R}^{N}$. $G(V,E)$ represents an undirected graph, where $V$ is the set of nodes with cardinality $|V|=N$, and $E\subseteq V\times V$ denotes the set of edges. For a graph $G(V,E)$, $A_{ij}$ denotes the adjacency matrix, where $A_{ij}=0$ if the edge $(i,j)$ does not exist and $A_{ij}>0$ if it is present.

2 Background

Combinatorial optimization

The goal of this study is to solve the following CO problem.

\min_{\bm{x}\in\{0,1\}^{N}} f(\bm{x};C) \quad \mathrm{s.t.} \quad \bm{x}\in\mathcal{X}(C), \qquad (1)

where $C\in\mathcal{C}$ denotes instance-specific parameters, such as a graph $G=(V,E)$, and $\mathcal{C}$ represents the set of all possible instances. $f:\mathcal{X}\times\mathcal{C}\to\mathbb{R}$ denotes the cost function. Additionally, $\bm{x}=(x_i)_{1\le i\le N}\in\{0,1\}^{N}$ is the binary vector to be optimized, and $\mathcal{X}(C)\subseteq\{0,1\}^{N}$ denotes the feasible solution space, typically defined by the following equality and inequality constraints:

\mathcal{X}(C)=\{\bm{x}\in\{0,1\}^{N}\mid\forall i\in[I],\ g_i(\bm{x};C)\le 0,\ \forall j\in[J],\ h_j(\bm{x};C)=0\}, \qquad I,J\in\mathbb{N},

Here, for $i\in[I]$, $g_i:\{0,1\}^{N}\times\mathcal{C}\to\mathbb{R}$ denotes an inequality constraint, and for $j\in[J]$, $h_j:\{0,1\}^{N}\times\mathcal{C}\to\mathbb{R}$ denotes an equality constraint. Following UL-based solvers (Wang et al., 2022; Schuetz et al., 2022a; Karalias and Loukas, 2020), we reformulate the constrained problem into an equivalent unconstrained form using the penalty method (Smith et al., 1997):

\min_{\bm{x}} l(\bm{x};C,\bm{\lambda}), \qquad l(\bm{x};C,\bm{\lambda}) \coloneqq f(\bm{x};C) + \sum_{i=1}^{I+J} \lambda_i v_i(\bm{x};C). \qquad (2)

Here, for all $i\in[I+J]$, $v_i:\{0,1\}^{N}\times\mathcal{C}\to\mathbb{R}$ is a penalty term that increases when the corresponding constraint is violated. For example, the penalty terms can be defined as follows:

\forall i\in[I],\ v_i(\bm{x};C)=\max\bigl(0,\,g_i(\bm{x};C)\bigr), \qquad \forall j\in[J],\ v_{I+j}(\bm{x};C)=\bigl(h_j(\bm{x};C)\bigr)^{2},

and $\bm{\lambda}=(\lambda_i)_{1\le i\le I+J}\in\mathbb{R}^{I+J}$ denotes the penalty parameters that control the trade-off between constraint satisfaction and cost optimization. Note that, as $\bm{\lambda}$ increases, the penalty for constraint violations becomes more significant. In the following, we provide an example of this formulation.

Example: MIS problem

The MIS problem is a fundamental NP-hard problem (Karp, 2010), defined as follows. Given an undirected graph $G(V,E)$, an independent set (IS) is a subset of nodes $\mathcal{I}\subseteq V$ in which no two nodes are adjacent. The MIS problem aims to find the largest IS, denoted as $\mathcal{I}^{\ast}$. In this study, $\rho$ denotes the IS density, defined as $\rho=|\mathcal{I}|/|V|$. Following Schuetz et al. (2022a), a binary variable $x_i$ is assigned to each node $i\in V$. The MIS problem can then be formulated as follows:

f(\bm{x};G,\lambda) = -\sum_{i\in V} x_i + \lambda \sum_{(i,j)\in E} x_i x_j, \qquad (3)

where the first term maximizes the number of nodes assigned the value $1$, and the second term penalizes adjacent nodes that are both assigned $1$, weighted by the penalty parameter $\lambda$.
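As a concrete illustration, the relaxed form of Eq. (3) can be written in a few lines; the sketch below assumes PyTorch and a dense 0/1 adjacency matrix, and the function name and default penalty value are illustrative rather than taken from the paper.

```python
import torch

def mis_loss(p: torch.Tensor, adj: torch.Tensor, lam: float = 2.0) -> torch.Tensor:
    """Relaxed MIS objective f_hat(p; G, lambda) corresponding to Eq. (3).

    p   : (N,) relaxed variables in [0, 1]
    adj : (N, N) symmetric 0/1 adjacency matrix of an undirected graph
    lam : penalty parameter for edges whose endpoints are both selected
    """
    # First term: reward selecting nodes (minimize the negative sum).
    size_term = -p.sum()
    # Second term: penalize softly selecting both endpoints of an edge.
    # Each undirected edge appears twice in a symmetric matrix, hence the 0.5.
    edge_term = 0.5 * (p @ adj @ p)
    return size_term + lam * edge_term
```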

2.1 Unsupervised learning based solvers

Learning for CO problems involves training an algorithm $\mathcal{A}_{\bm{\theta}}(\cdot):\mathcal{C}\to\{0,1\}^{N}$ parameterized by a neural network (NN), where $\bm{\theta}$ denotes the parameters. For a given instance $C\in\mathcal{C}$, this algorithm generates a valid solution $\hat{\bm{x}}=\mathcal{A}_{\bm{\theta}}(C)\in\mathcal{X}(C)$ and aims to minimize $f(\hat{\bm{x}};C)$. Several approaches have been proposed to train $\mathcal{A}_{\bm{\theta}}$. This study focuses on UL-based solvers, which do not use a labeled solution $\bm{x}^{\ast}\in\mathrm{argmin}_{\bm{x}\in\mathcal{X}(C)} f(\bm{x};C)$ during training (Wang et al., 2022; Schuetz et al., 2022a; Karalias and Loukas, 2020; Amizadeh et al., 2018). In the following, we outline the details of UL-based solvers.

UL-based solvers employ a continuous relaxation strategy to train the NN. This strategy reformulates a CO problem into a continuous optimization problem by converting discrete variables into continuous ones. A typical example of continuous relaxation is expressed as follows:

\min_{\bm{p}} \hat{l}(\bm{p};C,\bm{\lambda}), \qquad \hat{l}(\bm{p};C,\bm{\lambda}) \coloneqq \hat{f}(\bm{p};C) + \sum_{i=1}^{I+J} \lambda_i \hat{v}_i(\bm{p};C),

where $\bm{p}=(p_i)_{1\le i\le N}\in[0,1]^{N}$ represents the relaxed continuous variables, in which each binary variable $x_i\in\{0,1\}$ is relaxed to a continuous counterpart $p_i\in[0,1]$, and $\hat{f}:[0,1]^{N}\times\mathcal{C}\to\mathbb{R}$ denotes the relaxed form of $f$ such that $\hat{f}(\bm{x};C)=f(\bm{x};C)$ for $\bm{x}\in\{0,1\}^{N}$. The relation between each constraint $v_i$ and its relaxation $\hat{v}_i$ is analogous, meaning that $\forall i\in[I+J],\ \hat{v}_i(\bm{x};C)=v_i(\bm{x};C)$ for $\bm{x}\in\{0,1\}^{N}$. Wang et al. (2022) and Schuetz et al. (2022a) formulated $\mathcal{A}_{\bm{\theta}}(C)$ as the relaxed continuous variables, defined as $\mathcal{A}_{\bm{\theta}}(\cdot):\mathcal{C}\to[0,1]^{N}$. In the following discussions, we denote $\mathcal{A}_{\bm{\theta}}$ as $\bm{p}_{\bm{\theta}}$ to make the parametrization of the relaxed variables explicit. Then, $\bm{p}_{\bm{\theta}}$ is optimized by directly minimizing the following label-independent function:

\hat{l}(\bm{\theta};C,\bm{\lambda}) \coloneqq \hat{f}(\bm{p}_{\bm{\theta}}(C);C) + \sum_{i=1}^{I+J} \lambda_i \hat{v}_i(\bm{p}_{\bm{\theta}}(C);C). \qquad (4)

After training, the relaxed solution $\bm{p}_{\bm{\theta}}$ is converted into discrete variables by artificial rounding, where $\forall i\in[N],\ x_i=\mathrm{int}(p_{\bm{\theta},i}(C))$ based on a threshold (Schuetz et al., 2022a), or alternatively by a greedy method (Wang et al., 2022). Two types of schemes for UL-based solvers have been developed based on this formulation.
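To make the training pipeline of Eq. (4) concrete, the following sketch parameterizes $\bm{p}_{\bm{\theta}}$ and directly minimizes the relaxed MIS loss from the previous sketch. It is a minimal, self-contained stand-in (a free embedding plus a linear head instead of a full GNN), so the class name, toy graph, and hyperparameters are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class RelaxedSolution(nn.Module):
    """Parameterizes the relaxed variables p_theta(C) in [0, 1]^N.

    The PI-GNN solver uses a graph neural network here; this stub replaces it
    with a learnable embedding and a linear head to keep the sketch self-contained.
    """
    def __init__(self, num_nodes: int, dim: int = 16):
        super().__init__()
        self.embed = nn.Embedding(num_nodes, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self) -> torch.Tensor:
        logits = self.head(self.embed.weight).squeeze(-1)  # (N,)
        return torch.sigmoid(logits)                       # relaxed variables in [0, 1]^N

# Direct, label-independent minimization of the relaxed loss (Eq. (4)).
N = 8
adj = torch.zeros(N, N)
for i, j in [(0, 1), (1, 2), (2, 3), (4, 5), (6, 7)]:      # a toy graph
    adj[i, j] = adj[j, i] = 1.0

model = RelaxedSolution(N)
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
for epoch in range(200):
    loss = mis_loss(model(), adj, lam=2.0)                 # relaxed l_hat(theta; C, lambda)
    opt.zero_grad()
    loss.backward()
    opt.step()

x = (model().detach() > 0.5).int()                         # artificial rounding after training
```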

(Type I) Learning generalized heuristics from history/data

One approach, proposed by Karalias and Loukas (2020), aims to automatically learn effective heuristics from a historical dataset of instances $\mathcal{D}=\{C_{\mu}\}_{\mu=1}^{P}$ and then apply these learned heuristics to a new instance $C^{\ast}$ through inference. Note that this method assumes that either the training dataset is easily obtainable or meaningful data augmentation is feasible. Specifically, given a set of training instances $\mathcal{D}=(C_{\mu})$, sampled independently and identically from a distribution $P(C)$, the goal is to minimize the average loss function $\min_{\bm{\theta}}\sum_{\mu=1}^{P} l(\bm{\theta};C_{\mu},\bm{\lambda})$. However, this method does not guarantee quality for a test instance $C^{\ast}$. Even if the training instances $\mathcal{D}$ are extensive and the test instance $C^{\ast}$ follows $P(C)$, low average performance $\mathbb{E}_{C\sim P(C)}[\hat{l}(\bm{\theta};C)]$ may not guarantee a low $\hat{l}(\bm{\theta};C^{\ast})$ for a specific $C^{\ast}$. To address this issue, Wang and Li (2023) introduced a meta-learning approach in which NNs aim to provide good initializations for future instances rather than direct solutions.

(Type II) Learning effective heuristics on a specific single instance

Another approach, known as the PI-GNN solver (Schuetz et al., 2022a, b), automatically learns instance-specific heuristics for a single instance using the instance parameters $C$ by directly applying Eq. (4). This approach addresses CO problems on graphs, where $C=G(V,E)$, and employs GNNs for the relaxed variables $\bm{p}_{\bm{\theta}}(G)$. Here, an $L$-layered GNN is trained to directly minimize $\hat{l}(\bm{\theta};C,\bm{\lambda})$, taking as input a graph $G$ and the embedding vectors on its nodes, and outputting the relaxed solution $\bm{p}_{\bm{\theta}}(G)\in[0,1]^{N}$. A detailed description of GNNs is provided in Appendix E.2. Note that this setting is applicable even when a training dataset $\mathcal{D}$ is difficult to obtain. The overparameterization of the relaxed variables is expected to smooth the objective function by introducing additional parameters into the optimization problem, similar to the kernel method. However, minimizing Eq. (4) for a single instance can be time-consuming compared to an inference process. Nonetheless, for large-scale CO problems, this approach has been reported to outperform other solvers in terms of both computational time and solution quality (Schuetz et al., 2022a, b).

Note that, while both UL-based solvers for multiple instances (Type I) and for individual instances (Type II) are valuable, this study focuses on advancing the latter: a UL-based solver for a single instance. Both types of solvers are applicable to cost functions that meet a particular requirement, owing to their reliance on gradient-based algorithms to minimize Eq. (4).

Assumption 2.1 (Differentiable cost function).

The relaxed loss function $\hat{l}(\bm{\theta};C,\bm{\lambda})$ and its partial derivative $\partial\hat{l}(\bm{\theta};C,\bm{\lambda})/\partial\bm{\theta}$ are accessible during the optimization process.

This requirement encompasses nonlinear cost functions and interactions beyond simple two-body terms, including many-body interactions.

3 Continuous relaxation annealing for UL-based solvers

In this section, we discuss the practical issues associated with UL-based solvers and then introduce continuous relaxation annealing (CRA) as a proposed solution.

3.1 Motivation: practical issues of UL-based solvers

UL-based solvers (Type II) (Schuetz et al., 2022a, b) are effective in addressing large-scale CO problems. However, these solvers present the following two practical issues, as highlighted in several recent studies (Wang and Li, 2023; Angelini and Ricci-Tersenghi, 2023). We also validate these issues numerically; see Appendix F.1 for detailed results.

(I) Ambiguity in rounding method after learning

UL-based solvers employ a continuous relaxation strategy to train NNs and then convert the relaxed continuous variables into discrete binary values through artificial rounding, as discussed in Section 2.1. This inherent ambiguity of the continuous relaxation strategy often results in discrepancies between the optimal solutions of the original discrete CO problem and those of the relaxed continuous one. Continuous relaxation expands the solution space, often producing continuous values that achieve a lower cost than the optimal binary solution. Indeed, while linear relaxation can provide an optimal solution for discrete problems on bipartite graphs (Hoffman and Kruskal, 2010), it typically results in solutions with $1/2$ values, a phenomenon known as half-integrality (Nemhauser and Trotter Jr, 1974). Existing rounding methods (Schuetz et al., 2022b; Wang et al., 2022) often lose robustness in these scenarios. In practice, the PI-GNN solver often outputs values near $1/2$, underscoring the limitations of current rounding techniques for UL-based solvers.

(II) Difficulty in optimizing NNs

Recently, Angelini and Ricci-Tersenghi (2023) demonstrated that the PI-GNN solver falls short of achieving results comparable to those of the degree-based greedy algorithm (DGA) (Angelini and Ricci-Tersenghi, 2019) when solving MIS problems on RRGs. Angelini and Ricci-Tersenghi (2023) further emphasized the importance of evaluating UL-based solvers on complex CO problems, where greedy algorithms typically perform worse. A representative example is the MIS problem on RRGs with a constant degree $d>16$, where a clustering transition in the solution space creates barriers that impede optimization. Moreover, Wang and Li (2023) emphasized the importance of using training/historical datasets, $\mathcal{D}=\{C_{\mu}\}_{1\le\mu\le P}$, which contain various graphs, and of initialization using outputs from greedy solvers, such as DGA and RGA for MIS problems. Their numerical analysis indicated that the PI-GNN solver tends to get trapped in local optima when optimized directly for a single instance without leveraging a training dataset $\mathcal{D}$. However, in a practical setting, systematic methods for generating or collecting training datasets $\mathcal{D}$ that effectively avoid local optima remain unclear. Additionally, training on instances that do not contribute to escaping local optima is time-consuming. Therefore, it is crucial to develop an effective UL-based solver that can operate on a single instance without relying on training data $\mathcal{D}$. Our numerical experiments, detailed in Appendix F.1, also confirmed this optimization issue: as problem complexity increases, the PI-GNN solver is often drawn into the trivial local optimum $\bm{p}_{\bm{\theta}}=\bm{0}_N$ in certain problems. This entrapment results in prolonged plateaus that significantly slow down the learning process and, in especially challenging cases, can render learning entirely infeasible.

3.2 Continuous relaxation annealing

Penalty term to control discreteness and continuity

To address these issues, we propose a penalty term to control the balance between discreteness and continuity in the relaxed variables, formulated as follows:

\hat{r}(\bm{p};C,\bm{\lambda},\gamma) = \hat{l}(\bm{p};C,\bm{\lambda}) + \gamma\Phi(\bm{p}), \qquad \Phi(\bm{p}) \coloneqq \sum_{i=1}^{N}\bigl(1-(2p_i-1)^{\alpha}\bigr), \qquad \alpha\in\{2n\mid n\in\mathbb{N}_{+}\}, \qquad (5)

where $\gamma\in\mathbb{R}$ is a penalty parameter and the even number $\alpha$ denotes a curve rate. When $\gamma$ is negative, i.e., $\gamma<0$, the relaxed variables tend to favor the continuous space, and the penalty term $\gamma\Phi(\bm{p})$ is convex, smoothing the non-convexity of the objective function $\hat{l}(\bm{p};C,\bm{\lambda})$. In contrast, when $\gamma$ is positive, i.e., $\gamma>0$, the relaxed variables are biased toward the discrete space, smoothing out continuous solutions into discrete ones. Formally, the following theorem holds as $\gamma$ approaches $\pm\infty$.

Theorem 3.1.

Assuming the objective function $\hat{l}(\bm{p};C)$ is bounded within the domain $[0,1]^{N}$, as $\gamma\to+\infty$, the relaxed solutions $\bm{p}^{\ast}\in\mathrm{argmin}_{\bm{p}}\hat{r}(\bm{p};C,\bm{\lambda},\gamma)$ converge to the original solutions $\bm{x}^{\ast}\in\mathrm{argmin}_{\bm{x}} l(\bm{x};C,\bm{\lambda})$. Moreover, as $\gamma\to-\infty$, the loss function $\hat{r}(\bm{p};C,\bm{\lambda},\gamma)$ becomes convex, and the relaxed solution $\bm{1}_N/2=\mathrm{argmin}_{\bm{p}}\hat{r}(\bm{p};C,\bm{\lambda},\gamma)$ is unique.

For the detailed proof, refer to Appendix B.1. Theorem 3.1 can be generalized to any concave function $\Phi(\bm{p};C)$ that has a unique maximum at $\bm{1}_N/2$ and achieves its global minimum for all $\bm{p}\in\{0,1\}^{N}$; an example is the entropy-based penalty $\Phi_{\mathrm{CE}}(\bm{p})=-\sum_{i=1}^{N}\bigl(p_i\log p_i+(1-p_i)\log(1-p_i)\bigr)$, introduced by Sun et al. (2022) and Sanokowski et al. (2024) for UL-based solvers (Type I). Additionally, the penalty term eliminates the stationary point $\bm{p}^{\ast}=\bm{0}_N$ described in Section 3.1, preventing convergence to a plateau. For UL-based solvers, the penalty term is expressed as follows:

\hat{r}(\bm{\theta};C,\bm{\lambda},\gamma) = \hat{l}(\bm{\theta};C,\bm{\lambda}) + \gamma\Phi(\bm{\theta};C), \qquad (6)

where $\Phi(\bm{\theta};C)\coloneqq\Phi(\bm{p}_{\bm{\theta}}(C))$. According to Theorem 3.1, setting a sufficiently large $\gamma$ causes the relaxed variables to approach nearly discrete values. This penalty term $\Phi(\bm{\theta};C)$ can also be generalized to the optimization of Potts variables, including coloring problems (Schuetz et al., 2022b), and to mixed-integer optimization; refer to Appendix C.1.
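In code, the penalty term of Eq. (5) applied to the parameterized output $\bm{p}_{\bm{\theta}}(C)$ amounts to a single expression; the sketch below is a minimal PyTorch version (the function name is ours), intended only to make the definition concrete.

```python
import torch

def cra_penalty(p: torch.Tensor, alpha: int = 2) -> torch.Tensor:
    """CRA penalty Phi(p) = sum_i (1 - (2 p_i - 1)^alpha) from Eq. (5).

    alpha must be a positive even integer. Phi attains its maximum N at
    p = (1/2) * 1_N and vanishes exactly on binary vectors p in {0, 1}^N.
    """
    assert alpha > 0 and alpha % 2 == 0, "alpha must be a positive even integer"
    return (1.0 - (2.0 * p - 1.0) ** alpha).sum()
```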

Annealing penalty term

Figure 1: Annealing strategy. When $\gamma<0$, it facilitates exploration by reducing the non-convexity of the objective function. As $\gamma$ increases, it promotes optimal discrete solutions by smoothing away suboptimal continuous ones.

We propose an annealing strategy that gradually anneals the penalty parameter $\gamma$ in Eq. (6). Initially, a negative value, i.e., $\gamma<0$, is chosen to leverage the properties above, facilitating broad exploration by smoothing the non-convexity of $\hat{l}(\bm{\theta};C,\bm{\lambda})$ and eliminating the stationary point $\bm{p}^{\ast}=\bm{0}_N$ to avoid the plateau, as discussed in Section 3.1. Subsequently, the penalty parameter $\gamma$ is gradually increased toward positive values, $\gamma>0$, with each update of the trainable parameters (one epoch), until the penalty term approaches zero, i.e., $\Phi(\bm{\theta};C)\approx 0$, automatically rounding the relaxed variables by smoothing out suboptimal continuous solutions oscillating between $0$ and $1$. A conceptual diagram of this annealing process is shown in Fig. 1.

Note that employing $\Phi_{\mathrm{CE}}(\bm{p})$ is infeasible for UL-based solvers when $\gamma>0$, as the gradient $\partial\Phi_{\mathrm{CE}}(\bm{p})/\partial p_i$ diverges to $\pm\infty$ at $0$ or $1$. Indeed, when $\gamma=0$, most relaxed variables typically approach binary values, with a relatively small number of variables oscillating between $0$ and $1$. This gradient divergence in $\Phi_{\mathrm{CE}}(\bm{p})$ makes learning infeasible without additional techniques, such as gradient clipping. In contrast, the gradient of the penalty term in Eq. (5), $\partial\Phi(\bm{p})/\partial p_i$, is bounded within $[-2\alpha,2\alpha]$ for any $\gamma$, preventing the gradient divergence seen in $\Phi_{\mathrm{CE}}(\bm{p})$. Additionally, by increasing $\alpha$, the absolute value of the gradient near $1/2$ becomes smaller, allowing control over the smoothing strength toward a discrete solution near $1/2$.
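For completeness, the bound stated above follows from a direct computation using only Eq. (5) and the definition of $\Phi_{\mathrm{CE}}$:

\frac{\partial \Phi(\bm{p})}{\partial p_i} = -2\alpha\,(2p_i-1)^{\alpha-1}, \qquad |2p_i-1|\le 1 \;\Longrightarrow\; \left|\frac{\partial \Phi(\bm{p})}{\partial p_i}\right| \le 2\alpha, \qquad\text{whereas}\qquad \left|\frac{\partial \Phi_{\mathrm{CE}}(\bm{p})}{\partial p_i}\right| = \left|\log\frac{p_i}{1-p_i}\right| \xrightarrow{\,p_i\to 0,1\,} \infty.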

We also propose an early stopping strategy that monitors both the loss function and the penalty term, halting the annealing and learning processes when the penalty term approaches zero, i.e., $\Phi(\bm{\theta};C)\approx 0$. Various annealing schedules can be considered; in this study, we employ the following schedule: $\gamma(\tau+1)\leftarrow\gamma(\tau)+\varepsilon$, where the scheduling rate $\varepsilon\in\mathbb{R}_{+}$ is a small constant and $\tau$ denotes the update iteration of the trainable parameters. We refer to the PI-GNN solver with this continuous relaxation annealing as the CRA-PI-GNN solver. Two additional hyperparameters are introduced: the initial scheduling value $\gamma(0)$ and the scheduling rate $\varepsilon$. Numerical experiments suggest that better solutions are obtained when $\gamma(0)$ is set to a small negative value and $\varepsilon$ is kept low. The ablation study is presented in Appendix F.5.
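Putting the pieces together, the sketch below combines the annealing schedule and the early stopping criterion with the relaxed loss. It reuses the illustrative `mis_loss`, `RelaxedSolution`, and `cra_penalty` helpers from the earlier sketches (including the toy graph `adj`), and the tolerance and patience values mirror those reported in Section 5.1 only as an example; none of this is the paper's exact implementation.

```python
import torch

model = RelaxedSolution(N)                      # reuses the toy graph `adj` from above
opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)

gamma, eps, alpha = -20.0, 1e-3, 2              # gamma(0), scheduling rate, curve rate
tol, patience, stall = 1e-5, 1000, 0

for epoch in range(50_000):
    p = model()
    loss = mis_loss(p, adj, lam=2.0) + gamma * cra_penalty(p, alpha)   # Eq. (6)
    opt.zero_grad()
    loss.backward()
    opt.step()

    gamma += eps                                # gamma(tau + 1) <- gamma(tau) + eps

    # Early stopping: once gamma > 0, halt when Phi(theta; C) stays (nearly) zero,
    # i.e. the relaxed variables have become effectively binary.
    phi = cra_penalty(model().detach(), alpha).item()
    stall = stall + 1 if phi < tol else 0
    if gamma > 0 and stall >= patience:
        break

x = (model().detach() > 0.5).int()              # rounding is now essentially a no-op
```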

4 Related Work

Previous works on UL-based solvers have addressed various problems, such as MaxCut problems (Yao et al., 2019) and traveling salesman problems (Hudson et al., 2021), using carefully tailored problem-specific objectives. Some studies have also explored constraint satisfaction problems (Amizadeh et al., 2018; Toenshoff et al., 2019), but applying these approaches to broader CO problems often requires problem-specific reductions. Karalias and Loukas (2020) proposed the Erdős Goes Neural (EGN) solver, a UL-based solver for general CO problems based on Erdős' probabilistic method. This solver generates solutions through an inference process using training instances. Subsequently, Wang et al. (2022) proposed an entry-wise concave continuous relaxation, broadening the EGN solver to a wider range of CO problems. In contrast, Schuetz et al. (2022a, b) proposed the PI-GNN solver, a UL-based solver for a single CO problem that automatically learns problem-specific heuristics during the training process. However, Angelini and Ricci-Tersenghi (2023) and Boettcher (2023) pointed out optimization difficulties, showing that the PI-GNN solver failed to achieve results comparable to those of greedy algorithms. Wang and Li (2023) also raised optimization issues with the PI-GNN solver, emphasizing the importance of learning from training data and history to overcome local optima. They then proposed the Meta-EGN solver, a meta-learning approach that updates NN parameters for individual CO problem instances. Furthermore, to address these optimization issues, Lin et al. (2023), Sun et al. (2022), and Sanokowski et al. (2024) proposed annealing strategies similar to simulated annealing (Kirkpatrick et al., 1983).

5 Experiments

We begin by evaluating the performance of the CRA-PI-GNN solver on the MIS and MaxCut benchmark problems across multiple graphs of varying sizes, demonstrating that CRA effectively overcomes optimization challenges without relying on data/history $\mathcal{D}$. We then extend the evaluation to DBM problems, showing the applicability to more practical CO problems. For the objective functions and detailed explanations, refer to Appendix E.1.

5.1 Experimental settings

Baseline methods

In all experiments, the baseline methods include the PI-GNN solver (Schuetz et al., 2022a) as the direct baseline of a UL-based solver for a single instance. For the MIS problems, we also consider the random greedy algorithm (RGA) and DGA (Angelini and Ricci-Tersenghi, 2019) as heuristic baselines. For the MaxCut problems, the RUN-CSP solver (Toenshoff et al., 2019) is considered as an additional baseline, and a standard greedy algorithm and an SDP-based approximation algorithm (Goemans and Williamson, 1995) are considered as additional classical baselines. The parameters for the Goemans-Williamson (GW) approximation are all set according to the settings in Schuetz et al. (2022b). The implementation uses the open-source CVXOPT solver with CVXPY (Mehta, 2019) as the modeling interface. Note that we do not consider UL-based solvers for learning generalized heuristics (Karalias and Loukas, 2020; Wang et al., 2022; Wang and Li, 2023), which rely on training instances $\mathcal{D}=\{C_{\mu}\}_{\mu=1}^{P}$; the primary objective of this study is to evaluate whether the CRA-PI-GNN solver can surpass the performance of both the PI-GNN solver and greedy algorithms. However, for the MIS problem, the EGN solver (Karalias and Loukas, 2020) and Meta-EGN solver (Wang and Li, 2023) are included to confirm that CRA can overcome the optimization issues without training instances.

Implementation

The objective of the numerical experiments is to compare the CRA-PI-GNN solver with the PI-GNN solver. Thus, we follow the same experimental configuration as in Schuetz et al. (2022a), employing a simple two-layer GCN and GraphSAGE (Hamilton et al., 2017) implemented with the Deep Graph Library (Wang et al., 2019); refer to Appendix D.1 for the detailed architectures.

We use the AdamW (Kingma and Ba, 2014) optimizer with a learning rate of $\eta=10^{-4}$ and weight decay of $10^{-2}$. The GNNs are trained for up to $5\times 10^{4}$ epochs with early stopping, which monitors the summarized loss function $\sum_{s=1}^{S}\hat{l}(P_{:,s})$ and the penalty term $\Phi(P;\gamma,\alpha)$ with a tolerance of $10^{-5}$ and patience of $10^{3}$ epochs; further details are provided in Appendix D.2. We set the initial scheduling value to $\gamma(0)=-20$ for the MIS and matching problems and to $\gamma(0)=-6$ for the MaxCut problems, with a scheduling rate of $\varepsilon=10^{-3}$ and a curve rate of $\alpha=2$ in Eq. (6). These values are not necessarily optimal, and refining them can lead to better solutions; refer to Appendix F.5 and Appendix F.6 for an ablation study of these parameters. Once the training process is complete, we apply a simple projection heuristic to map the obtained soft solutions back to discrete solutions: for all $i\in[N]$, $p_{\theta,i}$ is mapped to $0$ if $p_{\theta,i}\le 0.5$ and to $1$ if $p_{\theta,i}>0.5$. However, owing to the early stopping described in Section 3.2, for all benchmark CO problems the soft solutions of the CRA-PI-GNN solver at the end of training are $0$ or $1$ within 32-bit floating-point precision in PyTorch on GPU; the result is therefore robust to the choice of threshold, which we set to $0.5$ in our experiments. Additionally, no violations of the constraints were observed in our numerical experiments; thus, the results presented in the following are feasible solutions.
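The projection and the feasibility check described above amount to a thresholding step and an edge-violation count; the helpers below are an illustrative sketch (the function names are ours), again assuming PyTorch and a dense adjacency matrix for the MIS case.

```python
import torch

def project_to_binary(p: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Simple projection after training: p_i <= threshold -> 0, otherwise -> 1."""
    return (p > threshold).int()

def mis_violations(x: torch.Tensor, adj: torch.Tensor) -> int:
    """Number of edges with both endpoints selected; 0 means the MIS solution is feasible."""
    return int(round(0.5 * float(x.float() @ adj @ x.float())))
```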

Evaluation metrics

Following Karalias and Loukas (2020), we use the approximation ratio (ApR), defined as $\mathrm{ApR}=f(\bm{x};C)/f(\bm{x}^{\ast};C)$, where $\bm{x}^{\ast}$ is the optimal solution. For the MIS problems, we evaluate the ApRs using the theoretical optimal cost (Barbier et al., 2013) and report the independent set density $\rho$ relative to the theoretical results. For the MaxCut problems on RRGs, we adopt the cut ratio $\nu$ against the theoretical upper bound (Parisi, 1980; Dembo et al., 2017); see Appendix E.1 for details. All results for the MIS and MaxCut problems are summarized over 5 RRGs with different random seeds. For the MaxCut Gset problems, the ApR is calculated relative to the best known cost values. For the DBM problems, we calculate the ApR against the global optimum, obtained using the Gurobi 10.0.1 solver with default settings.

5.2 MIS problems

Degree dependency of solutions using CRA

Figure 2: Independent set density of the MIS problem on $d$-RRGs. Results for graphs with $N=10{,}000$ nodes (left) and $N=20{,}000$ nodes (right). The dashed lines represent the theoretical results (Barbier et al., 2013).

Figure 3: Cut ratio of the MaxCut problem on $d$-RRGs as a function of the degree $d$. Results for $N=10{,}000$ (left) and $N=20{,}000$ (right). The dashed lines represent the theoretical upper bounds (Parisi, 1980).

First, we compare the performance of the PI-GNN and CRA-PI-GNN solvers using GCN, as in Schuetz et al. (2022a). Fig. 2 shows the independent set density $\rho_d$ as a function of the degree $d$ obtained by the PI-GNN and CRA-PI-GNN solvers, compared with the theoretical results (Barbier et al., 2013). Across all degrees $d$, the CRA-PI-GNN solver outperforms the PI-GNN solver and approaches the theoretical results, whereas the PI-GNN solver fails to find valid solutions, especially for $d\ge 15$, as pointed out by previous studies (Angelini and Ricci-Tersenghi, 2023; Wang and Li, 2023).

Response to Angelini and Ricci-Tersenghi (2023) and Wang and Li (2023)

Table 1: ApR for MIS problems on RRGs with $10{,}000$ nodes and node degrees $d=20,100$. "CRA" denotes the CRA-PI-GNN solver.
Instance | Method | ApR
20-RRG | RGA | $0.776 \pm 0.001$
20-RRG | DGA | $0.891 \pm 0.001$
20-RRG | EGN | $0.775$ (2023)
20-RRG | Meta-EGN | $0.887$ (2023)
20-RRG | PI-GNN (GCN) | $0.000 \pm 0.000$
20-RRG | PI-GNN (SAGE) | $0.745 \pm 0.003$
20-RRG | CRA (GCN) | $0.937 \pm 0.002$
20-RRG | CRA (SAGE) | $\underline{\mathbf{0.963 \pm 0.001}}$
100-RRG | RGA | $0.663 \pm 0.001$
100-RRG | DGA | $0.848 \pm 0.002$
100-RRG | PI-GNN (GCN) | $0.000 \pm 0.000$
100-RRG | PI-GNN (SAGE) | $0.000 \pm 0.000$
100-RRG | CRA (GCN) | $0.855 \pm 0.004$
100-RRG | CRA (SAGE) | $\underline{\mathbf{0.924 \pm 0.001}}$
Figure 4: Computational runtime (in seconds) of the CRA-PI-GNN solver with the GraphSAGE and GCN architectures on 100-RRGs with varying numbers of nodes $N$. Error bars represent the standard deviations of the results.

MIS problems on RRGs with degree $d$ larger than $16$ are known to be hard (Barbier et al., 2013). As discussed in Section 3.1, Angelini and Ricci-Tersenghi (2023) and Wang and Li (2023) have raised optimization concerns about UL-based solvers. However, we call these claims into question by substantially outperforming the heuristics DGA and RGA for the MIS problems on graphs with $d=20,100$, without training/historical instances $\mathcal{D}=\{G^{\mu}\}_{\mu=1}^{P}$, as shown in Table 1. A comparison with sampling-based solvers, RL-based solvers, SL-based solvers, Gurobi, and MIS-specific solvers is presented in Appendix F.2.

Acceleration of learning speed

We also compare the learning curves of the PI-GNN and CRA-PI-GNN solvers to confirm that the CRA-PI-GNN solver does not become trapped in the plateau $\bm{p}_{\bm{\theta}}=\bm{0}_N$ discussed in Section 3.1. Fig. 5 shows the dynamics of the cost functions for the MIS problems with $N=10{,}000$ across $d=3,5,20,100$. Across all degrees, the CRA-PI-GNN solver achieves a better solution with fewer epochs than the PI-GNN solver. Specifically, the PI-GNN solver becomes significantly slower because it gets trapped in the plateau even for graphs with low degrees, such as $d=3,5$. In contrast, the CRA-PI-GNN solver effectively escapes plateaus through the smoothing effect of the penalty term when $\gamma<0$ and its automatic rounding when $\gamma>0$.

Computational scaling

Figure 5: Dynamics of the cost function for MIS problems on RRGs with $N=10{,}000$ nodes and varying degrees $d$, as a function of the number of parameter updates $N_{\mathrm{EPOCH}}$.

We next evaluate the computational scaling of the CRA-PI-GNN solver for MIS problems on large-scale RRGs with node degree $100$ in Fig. 4, following previous studies (Schuetz et al., 2022a; Wang and Li, 2023). Fig. 4 demonstrates a moderate super-linear scaling of the total computational time, approximately $\sim N^{1.4}$ for GCN and $\sim N^{1.7}$ for GraphSAGE. This is nearly identical to the scaling of the PI-GNN solver (Schuetz et al., 2022a) for problems on RRGs with lower degrees. It is important to note that the runtime of the CRA-PI-GNN solver depends heavily on the optimizer for the GNNs and the annealing rate $\varepsilon$; thus, this scaling remains largely unchanged for problems other than the MIS on 100-RRGs. Additionally, CRA demonstrates that the runtime remains nearly constant as graph order and density increase, indicating effective scalability with denser graphs, as presented in Appendix F.2.

5.3 MaxCut problem

Degree dependency of solutions

We first compare the performance of the PI-GNN and CRA-PI-GNN solvers with GCN, following Schuetz et al. (2022a). Fig. 3 shows the cut ratio $\nu_d$ as a function of the degree $d$ compared to the theoretical upper bound (Parisi, 1980; Dembo et al., 2017). Across all degrees $d$, the CRA-PI-GNN solver again outperforms the PI-GNN solver, approaching the theoretical upper bound. In contrast, the PI-GNN solver fails to find valid solutions for $d>20$, as in the case of the MIS problems in Section 5.2.
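The exact MaxCut objective used in the experiments is given in Appendix E.1; for reference, a minimal sketch of the standard QUBO-style relaxation of the cut value is shown below (illustrative only, assuming PyTorch and a dense 0/1 adjacency matrix, with the function name being ours).

```python
import torch

def maxcut_loss(p: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Relaxed MaxCut objective (negative cut value).

    In the binary case, an edge (i, j) is cut when x_i != x_j, which equals
    x_i + x_j - 2 x_i x_j; the same expression is used for relaxed p in [0, 1].
    """
    # 0.5 accounts for each undirected edge appearing twice in a symmetric matrix.
    cut = 0.5 * (adj * (p[:, None] + p[None, :] - 2.0 * p[:, None] * p[None, :])).sum()
    return -cut  # minimize the negative cut value
```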

Standard MaxCut benchmark test

Figure 6: ApR on DBM problems.

Following Schuetz et al. (2022a), we next conducted additional experiments on standard MaxCut benchmark instances from the publicly available Gset dataset (Ye, 2003), which is commonly used to evaluate MaxCut algorithms. Here, we provide benchmark results for seven distinct graphs with thousands of nodes, including Erdős-Rényi graphs with uniform edge probability, graphs in which the connectivity decays gradually from node $1$ to $N$, $4$-regular toroidal graphs, and a very large Gset instance with $N=10{,}000$ nodes. Table 2 shows that, across all problems, the CRA-PI-GNN solver outperforms the PI-GNN solver, the RUN-CSP solver, and the greedy algorithm. See Appendix 6 for results on all other Gset instances, where the CRA-PI-GNN solver consistently provides better results as well.

Table 2: ApR for MaxCut on Gset
GRAPH (NODES, EDGES) | GREEDY | SDP | RUN-CSP | PI-GNN | CRA
G14 ($800$, $4{,}694$) | $0.946$ | $0.970$ | $0.960$ | $0.988$ | $\underline{\mathbf{0.994}}$
G15 ($800$, $4{,}661$) | $0.939$ | $0.958$ | $0.960$ | $0.980$ | $\underline{\mathbf{0.992}}$
G22 ($2{,}000$, $19{,}990$) | $0.923$ | $0.77$ | $0.975$ | $0.987$ | $\underline{\mathbf{0.998}}$
G49 ($3{,}000$, $6{,}000$) | $\underline{\mathbf{1.000}}$ | $\underline{\mathbf{1.00}}$ | $\underline{\mathbf{1.000}}$ | $0.986$ | $\underline{\mathbf{1.000}}$
G50 ($3{,}000$, $6{,}000$) | $\underline{\mathbf{1.000}}$ | $\underline{\mathbf{1.000}}$ | $\underline{\mathbf{1.000}}$ | $0.990$ | $\underline{\mathbf{1.000}}$
G55 ($5{,}000$, $12{,}468$) | $0.892$ | - | $0.982$ | $0.983$ | $\underline{\mathbf{0.991}}$
G70 ($10{,}000$, $9{,}999$) | $0.886$ | - | $0.970$ | $0.982$ | $\underline{\mathbf{0.992}}$

5.4 Diverse bipartite matching

To evaluate the applicability of the CRA-PI-GNN solver to more practical problems that are not defined on graphs, we conducted experiments on DBM problems (Ferber et al., 2020; Mulamba et al., 2020; Mandi et al., 2022); refer to Appendix E.1 for details. This problem set consists of $27$ distinct instances with varying properties, and each instance comprises $100$ nodes representing scientific publications, divided into two groups $N_1$ and $N_2$ of $50$ nodes each. The optimization is formulated as follows:

l(\bm{x};C,M,\bm{\lambda}) = -\sum_{ij} C_{ij} x_{ij} + \lambda_1 \sum_{i} \mathrm{ReLU}\Bigl(\sum_{j} x_{ij}-1\Bigr) + \lambda_2 \sum_{j} \mathrm{ReLU}\Bigl(\sum_{i} x_{ij}-1\Bigr) + \lambda_3\,\mathrm{ReLU}\Bigl(p\sum_{ij} x_{ij} - \sum_{ij} M_{ij} x_{ij}\Bigr) + \lambda_4\,\mathrm{ReLU}\Bigl(q\sum_{ij} x_{ij} - \sum_{ij}(1-M_{ij}) x_{ij}\Bigr),

where $C\in\mathbb{R}^{N_1\times N_2}$ represents the likelihood of a link between each pair of nodes, and the indicator $M_{ij}$ is set to $0$ if articles $i\in N_1$ and $j\in N_2$ share the same subject field ($1$ otherwise). The parameters $p,q\in[0,1]$ represent the probability of pairs sharing their field and of unrelated pairs, respectively. As in Mandi et al. (2022), we explore two variations of this problem, with $p=q=25\%$ and $p=q=5\%$, referred to as Matching-1 and Matching-2, respectively. In this experiment, we set $\lambda_1=\lambda_2=10$ and $\lambda_3=\lambda_4=25$. Fig. 6 shows that the CRA-PI-GNN solver finds better solutions across all instances.
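For concreteness, the penalized DBM objective above translates directly into a differentiable loss over a relaxed assignment matrix. The sketch below mirrors the equation term by term and is illustrative only (the function name and default values are ours), assuming PyTorch tensors for $C$, $M$, and the relaxed $x$.

```python
import torch
import torch.nn.functional as F

def dbm_loss(x: torch.Tensor, C: torch.Tensor, M: torch.Tensor,
             lams=(10.0, 10.0, 25.0, 25.0), p: float = 0.25, q: float = 0.25) -> torch.Tensor:
    """Penalized DBM objective l(x; C, M, lambda) with relaxed x in [0, 1]^{N1 x N2}."""
    l1, l2, l3, l4 = lams
    cost = -(C * x).sum()                                      # maximize total link likelihood
    row_deg = l1 * F.relu(x.sum(dim=1) - 1.0).sum()            # each node in N1 matched at most once
    col_deg = l2 * F.relu(x.sum(dim=0) - 1.0).sum()            # each node in N2 matched at most once
    div_m = l3 * F.relu(p * x.sum() - (M * x).sum())           # fraction constraint tied to M
    div_1m = l4 * F.relu(q * x.sum() - ((1.0 - M) * x).sum())  # fraction constraint tied to 1 - M
    return cost + row_deg + col_deg + div_m + div_1m
```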

6 Conclusion

This study proposes the CRA strategy to address both the optimization and rounding issues in UL-based solvers. The CRA strategy introduces a penalty term that dynamically shifts from prioritizing continuous solutions, where the non-convexity of the objective function is effectively smoothed, to enforcing discreteness, thereby eliminating artificial rounding. Experimental results demonstrate that the CRA-PI-GNN solver significantly outperforms both the PI-GNN solver and greedy algorithms across various complex CO problems, including MIS, MaxCut, and DBM problems. The CRA approach not only enhances solution quality but also accelerates the learning process.

Limitation

In the numerical experiments, most hyperparameters were fixed to their default values, as outlined in Section 5.1, with minimal tuning. However, tuning may be necessary for certain problems or to further enhance performance.

References

  • Papadimitriou and Steiglitz [1998] Christos H Papadimitriou and Kenneth Steiglitz. Combinatorial optimization: algorithms and complexity. Courier Corporation, 1998.
  • Hudson et al. [2021] Benjamin Hudson, Qingbiao Li, Matthew Malencia, and Amanda Prorok. Graph neural network guided local search for the traveling salesperson problem. arXiv preprint arXiv:2110.05291, 2021.
  • Joshi et al. [2019] Chaitanya K Joshi, Thomas Laurent, and Xavier Bresson. An efficient graph convolutional network technique for the travelling salesman problem. arXiv preprint arXiv:1906.01227, 2019.
  • Gasse et al. [2019] Maxime Gasse, Didier Chételat, Nicola Ferroni, Laurent Charlin, and Andrea Lodi. Exact combinatorial optimization with graph convolutional neural networks. Advances in neural information processing systems, 32, 2019.
  • Selsam et al. [2018] Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, and David L Dill. Learning a sat solver from single-bit supervision. arXiv preprint arXiv:1802.03685, 2018.
  • Khalil et al. [2016] Elias Khalil, Pierre Le Bodic, Le Song, George Nemhauser, and Bistra Dilkina. Learning to branch in mixed integer programming. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
  • Yehuda et al. [2020] Gal Yehuda, Moshe Gabel, and Assaf Schuster. It’s not what machines can learn, it’s what we cannot teach. In International conference on machine learning, pages 10831–10841. PMLR, 2020.
  • Yao et al. [2019] Weichi Yao, Afonso S Bandeira, and Soledad Villar. Experimental performance of graph neural networks on random instances of max-cut. In Wavelets and Sparsity XVIII, volume 11138, pages 242–251. SPIE, 2019.
  • Chen and Tian [2019] Xinyun Chen and Yuandong Tian. Learning to perform local rewriting for combinatorial optimization. Advances in Neural Information Processing Systems, 32, 2019.
  • Yolcu and Póczos [2019] Emre Yolcu and Barnabás Póczos. Learning local search heuristics for boolean satisfiability. Advances in Neural Information Processing Systems, 32, 2019.
  • Nazari et al. [2018] Mohammadreza Nazari, Afshin Oroojlooy, Lawrence Snyder, and Martin Takác. Reinforcement learning for solving the vehicle routing problem. Advances in neural information processing systems, 31, 2018.
  • Khalil et al. [2017] Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. Advances in neural information processing systems, 30, 2017.
  • Bello et al. [2016] Irwan Bello, Hieu Pham, Quoc V Le, Mohammad Norouzi, and Samy Bengio. Neural combinatorial optimization with reinforcement learning. arXiv preprint arXiv:1611.09940, 2016.
  • Mnih et al. [2015] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. nature, 518(7540):529–533, 2015.
  • Tang et al. [2017] Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, OpenAI Xi Chen, Yan Duan, John Schulman, Filip DeTurck, and Pieter Abbeel. # exploration: A study of count-based exploration for deep reinforcement learning. Advances in neural information processing systems, 30, 2017.
  • Espeholt et al. [2018] Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. In International conference on machine learning, pages 1407–1416. PMLR, 2018.
  • Schuetz et al. [2022a] Martin JA Schuetz, J Kyle Brubaker, and Helmut G Katzgraber. Combinatorial optimization with physics-inspired graph neural networks. Nature Machine Intelligence, 4(4):367–377, 2022a.
  • Karalias and Loukas [2020] Nikolaos Karalias and Andreas Loukas. Erdos goes neural: an unsupervised learning framework for combinatorial optimization on graphs. Advances in Neural Information Processing Systems, 33:6659–6672, 2020.
  • Amizadeh et al. [2018] Saeed Amizadeh, Sergiy Matusevych, and Markus Weimer. Learning to solve circuit-sat: An unsupervised differentiable approach. In International Conference on Learning Representations, 2018.
  • Angelini and Ricci-Tersenghi [2023] Maria Chiara Angelini and Federico Ricci-Tersenghi. Modern graph neural networks do worse than classical greedy algorithms in solving combinatorial optimization problems like maximum independent set. Nature Machine Intelligence, 5(1):29–31, 2023.
  • Angelini and Ricci-Tersenghi [2019] Maria Chiara Angelini and Federico Ricci-Tersenghi. Monte carlo algorithms are very effective in finding the largest independent set in sparse random graphs. Physical Review E, 100(1):013302, 2019.
  • Wang and Li [2023] Haoyu Wang and Pan Li. Unsupervised learning for combinatorial optimization needs meta-learning. arXiv preprint arXiv:2301.03116, 2023.
  • Hoffman and Kruskal [2010] Alan J Hoffman and Joseph B Kruskal. Integral boundary points of convex polyhedra. 50 Years of Integer Programming 1958-2008: From the Early Years to the State-of-the-Art, pages 49–76, 2010.
  • Nemhauser and Trotter Jr [1974] George L Nemhauser and Leslie E Trotter Jr. Properties of vertex packing and independence system polyhedra. Mathematical programming, 6(1):48–61, 1974.
  • Schuetz et al. [2022b] Martin JA Schuetz, J Kyle Brubaker, Zhihuai Zhu, and Helmut G Katzgraber. Graph coloring with physics-inspired graph neural networks. Physical Review Research, 4(4):043131, 2022b.
  • Wang et al. [2022] Haoyu Peter Wang, Nan Wu, Hang Yang, Cong Hao, and Pan Li. Unsupervised learning for combinatorial optimization with principled objective relaxation. Advances in Neural Information Processing Systems, 35:31444–31458, 2022.
  • Lovász [1979] László Lovász. On the shannon capacity of a graph. IEEE Transactions on Information theory, 25(1):1–7, 1979.
  • Goemans and Williamson [1995] Michel X Goemans and David P Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145, 1995.
  • Von Luxburg [2007] Ulrike Von Luxburg. A tutorial on spectral clustering. Statistics and computing, 17:395–416, 2007.
  • Smith et al. [1997] Alice E Smith, David W Coit, Thomas Baeck, David Fogel, and Zbigniew Michalewicz. Penalty functions. Handbook of evolutionary computation, 97(1):C5, 1997.
  • Karp [2010] Richard M Karp. Reducibility among combinatorial problems. Springer, 2010.
  • Sun et al. [2022] Haoran Sun, Etash K Guha, and Hanjun Dai. Annealed training for combinatorial optimization on graphs. arXiv preprint arXiv:2207.11542, 2022.
  • Sanokowski et al. [2024] Sebastian Sanokowski, Wilhelm Berghammer, Sepp Hochreiter, and Sebastian Lehner. Variational annealing on graphs for combinatorial optimization. Advances in Neural Information Processing Systems, 36, 2024.
  • Toenshoff et al. [2019] Jan Toenshoff, Martin Ritzert, Hinrikus Wolf, and Martin Grohe. Run-csp: unsupervised learning of message passing networks for binary constraint satisfaction problems. CoRR, abs/1909.08387, 2019.
  • Boettcher [2023] Stefan Boettcher. Inability of a graph neural network heuristic to outperform greedy algorithms in solving combinatorial optimization problems. Nature Machine Intelligence, 5(1):24–25, 2023.
  • Lin et al. [2023] Xi Lin, Zhiyuan Yang, Xiaoyuan Zhang, and Qingfu Zhang. Continuation path learning for homotopy optimization. In International Conference on Machine Learning, pages 21288–21311. PMLR, 2023.
  • Kirkpatrick et al. [1983] Scott Kirkpatrick, C Daniel Gelatt Jr, and Mario P Vecchi. Optimization by simulated annealing. science, 220(4598):671–680, 1983.
  • Mehta [2019] Hermish Mehta. Cvx graph algorithms. https://github.com/hermish/cvx-graph-algorithms, 2019.
  • Hamilton et al. [2017] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017.
  • Wang et al. [2019] Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, et al. Deep graph library: A graph-centric, highly-performant package for graph neural networks. arXiv preprint arXiv:1909.01315, 2019.
  • Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Barbier et al. [2013] Jean Barbier, Florent Krzakala, Lenka Zdeborová, and Pan Zhang. The hard-core model on random graphs revisited. In Journal of Physics: Conference Series, volume 473, page 012021. IOP Publishing, 2013.
  • Parisi [1980] Giorgio Parisi. A sequence of approximated solutions to the sk model for spin glasses. Journal of Physics A: Mathematical and General, 13(4):L115, 1980.
  • Dembo et al. [2017] Amir Dembo, Andrea Montanari, and Subhabrata Sen. Extremal cuts of sparse random graphs. 2017.
  • Ye [2003] Y. Ye. The gset dataset. https://web.stanford.edu/~yyye/yyye/Gset/, 2003.
  • Ferber et al. [2020] Aaron Ferber, Bryan Wilder, Bistra Dilkina, and Milind Tambe. Mipaal: Mixed integer program as a layer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1504–1511, 2020.
  • Mulamba et al. [2020] Maxime Mulamba, Jayanta Mandi, Michelangelo Diligenti, Michele Lombardi, Victor Bucarey, and Tias Guns. Contrastive losses and solution caching for predict-and-optimize. arXiv preprint arXiv:2011.05354, 2020.
  • Mandi et al. [2022] Jayanta Mandi, Vıctor Bucarey, Maxime Mulamba Ke Tchomba, and Tias Guns. Decision-focused learning: through the lens of learning to rank. In International Conference on Machine Learning, pages 14935–14947. PMLR, 2022.
  • Bayati et al. [2010] Mohsen Bayati, David Gamarnik, and Prasad Tetali. Combinatorial approach to the interpolation method and scaling limits in sparse random graphs. In Proceedings of the forty-second ACM symposium on Theory of computing, pages 105–114, 2010.
  • Coja-Oghlan and Efthymiou [2015] Amin Coja-Oghlan and Charilaos Efthymiou. On independent sets in random graphs. Random Structures & Algorithms, 47(3):436–486, 2015.
  • Alidaee et al. [1994] Bahram Alidaee, Gary A Kochenberger, and Ahmad Ahmadian. 0-1 quadratic programming approach for optimum solutions of two scheduling problems. International Journal of Systems Science, 25(2):401–408, 1994.
  • Neven et al. [2008] Hartmut Neven, Geordie Rose, and William G Macready. Image recognition with an adiabatic quantum computer i. mapping to quadratic unconstrained binary optimization. arXiv preprint arXiv:0804.4457, 2008.
  • Deza and Laurent [1994] Michel Deza and Monique Laurent. Applications of cut polyhedra—ii. Journal of Computational and Applied Mathematics, 55(2):217–247, 1994.
  • Sen et al. [2008] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93–93, 2008.
  • Gilmer et al. [2017] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pages 1263–1272. PMLR, 2017.
  • Scarselli et al. [2008] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE transactions on neural networks, 20(1):61–80, 2008.
  • Qiu et al. [2022] Ruizhong Qiu, Zhiqing Sun, and Yiming Yang. DIMES: A differentiable meta solver for combinatorial optimization problems. In Advances in Neural Information Processing Systems 35, 2022.
  • Sun et al. [2023] Haoran Sun, Katayoon Goshvadi, Azade Nova, Dale Schuurmans, and Hanjun Dai. Revisiting sampling for combinatorial optimization. In International Conference on Machine Learning, pages 32859–32874. PMLR, 2023.
  • Goshvadi et al. [2023] Katayoon Goshvadi, Haoran Sun, Xingchao Liu, Azade Nova, Ruqi Zhang, Will Grathwohl, Dale Schuurmans, and Hanjun Dai. Discs: A benchmark for discrete sampling. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 79035–79066. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/f9ad87c1ebbae8a3555adb31dbcacf44-Paper-Datasets_and_Benchmarks.pdf.
  • Hoos and Stützle [2000] Holger H Hoos and Thomas Stützle. Satlib: An online resource for research on sat. Sat, 2000:283–292, 2000.

Appendix A Overview

This supplementary material provides extended explanations, implementation details, and additional results.

Appendix B Derivation

B.1 Proof of Theorem 3.1

First, we present three lemmas, and then we demonstrate Theorem 3.1 based on these lemmas.

Lemma B.1.

For any even natural number α{2nn+}\alpha\in\{2n\mid n\in{\mathbb{N}}_{+}\}, the function ϕ(p)=1(2p1)α\phi(p)=1-(2p-1)^{\alpha} defined on [0,1][0,1] achieves its maximum value of 11 when p=1/2p=1/2 and its minimum value of 0 when p=0p=0 or p=1p=1.

Proof.

The derivative of \phi(p) with respect to p is \nicefrac{{d\phi(p)}}{{dp}}=-2\alpha(2p-1)^{\alpha-1}, which vanishes only at p=1/2 because \alpha-1 is odd. Since \alpha is even, (2p-1)^{\alpha}\geq 0 with equality only at p=1/2, so \phi(p)=1-(2p-1)^{\alpha}\leq 1 attains its maximum value of 1 at p=1/2. Moreover, on [0,1] we have (2p-1)^{\alpha}\leq 1 with equality only at p=0 or p=1; because \alpha is even, \phi is symmetric about p=1/2, i.e., \phi(p)=\phi(1-p), and it attains its minimum value of 0 at p=0 or p=1. ∎
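As a quick sanity check of Lemma B.1, the extrema of \phi can also be verified symbolically. The following SymPy sketch (an illustrative check, not part of the original derivation) confirms that the only stationary point in (0,1) is p=1/2, the maximum there equals 1, and the boundary values are 0 for several even \alpha.

```python
import sympy as sp

p = sp.symbols("p", real=True)
for alpha in (2, 4, 6):  # even exponents alpha
    phi = 1 - (2 * p - 1) ** alpha
    stationary = sp.solve(sp.diff(phi, p), p)       # only p = 1/2
    maximum = phi.subs(p, sp.Rational(1, 2))        # equals 1
    boundary = (phi.subs(p, 0), phi.subs(p, 1))     # both equal 0
    print(alpha, stationary, maximum, boundary)
```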

Lemma B.2.

For any even natural number \alpha\in\{2n\mid n\in{\mathbb{N}}_{+}\}, if \gamma\to+\infty, minimizing the penalty term \gamma\Phi({\bm{p}})=\gamma\sum_{i=1}^{N}(1-(2p_{i}-1)^{\alpha})=\gamma\sum_{i=1}^{N}\phi(p_{i}) enforces that, for all i\in[N], p_{i} is either 0 or 1; if \gamma\to-\infty, the penalty term enforces {\bm{p}}={\bm{1}}_{N}/2.

Proof.

From Lemma B.1, as \gamma\to+\infty, the penalty term \gamma\Phi({\bm{p}}) is minimized when, for all i\in[N], p_{i}=0 or p_{i}=1. As \gamma\to-\infty, it is minimized when, for all i\in[N], p_{i}=1/2. ∎

Lemma B.3.

For any even natural number \alpha\in\{2n\mid n\in{\mathbb{N}}_{+}\}, \gamma\Phi({\bm{p}}) is a concave function when \gamma>0 and a convex function when \gamma<0.

Proof.

Note that \gamma\Phi({\bm{p}})=\gamma\sum_{i=1}^{N}\phi(p_{i})=\gamma\sum_{i=1}^{N}(1-(2p_{i}-1)^{\alpha}) is separable across its components p_{i}. Thus, it suffices to show that each \gamma\phi(p_{i}) is concave (or convex) in p_{i}, because a sum of concave functions is concave and a sum of convex functions is convex. We therefore consider the second derivative of \gamma\phi(p_{i}) with respect to p_{i}:

\gamma\frac{d^{2}\phi(p_{i})}{dp_{i}^{2}}=-4\gamma\alpha(\alpha-1)(2p_{i}-1)^{\alpha-2}.

Because \alpha-2 is even, (2p_{i}-1)^{\alpha-2}\geq 0 on [0,1]. Hence, if \gamma>0, the second derivative is non-positive for all p_{i}\in[0,1], so \gamma\Phi({\bm{p}}) is concave; if \gamma<0, the second derivative is non-negative, so \gamma\Phi({\bm{p}}) is convex. ∎

Combining Lemma B.1, Lemma B.2 and Lemma B.3, one can show the following theorem.

Theorem B.4.

Under the assumption that the objective function l^(𝐩;C)\hat{l}({\bm{p}};C) is bounded within the domain [0,1]N[0,1]^{N}, as γ+\gamma\to+\infty, the soft solutions 𝐩argmin𝐩r^(𝐩;C,𝛌,γ){\bm{p}}^{\ast}\in\mathrm{argmin}_{{\bm{p}}}\hat{r}({\bm{p}};C,{\bm{\lambda}},\gamma) converge to the original solutions 𝐱argmin𝐱l(𝐱;C,𝛌){\bm{x}}^{\ast}\in\mathrm{argmin}_{{\bm{x}}}l({\bm{x}};C,{\bm{\lambda}}). In addition, as γ\gamma\to-\infty, the loss function r^(𝐩;C,𝛌,γ)\hat{r}({\bm{p}};C,{\bm{\lambda}},\gamma) becomes convex, and the soft solution 𝟏N/2=argmin𝐩r^(𝐩,C,𝛌,γ)\nicefrac{{{\bm{1}}_{N}}}{{2}}=\mathrm{argmin}_{{\bm{p}}}\hat{r}({\bm{p}},C,{\bm{\lambda}},\gamma) is unique.

Proof.

As \gamma\to+\infty, the penalty term \gamma\Phi({\bm{p}}) dominates the loss function \hat{r}({\bm{p}};C,{\bm{\lambda}},\gamma). According to Lemma B.2, this penalty term forces the optimal solution {\bm{p}}^{\ast} to be a binary vector, i.e., p_{i}^{\ast}\in\{0,1\} for all i\in[N], because any non-binary value incurs an infinitely large penalty. This effectively restricts the feasible region to the vertices of the unit hypercube, which correspond to the binary vectors in \{0,1\}^{N}. Thus, as \gamma\to+\infty, the solutions of the relaxed problem converge to those of the original problem. As \gamma\to-\infty, the penalty term \gamma\Phi({\bm{p}}) again dominates the loss function \hat{r}({\bm{p}};C,{\bm{\lambda}},\gamma), which becomes convex by Lemma B.3. According to Lemma B.2, this penalty term forces the optimal solution to {\bm{p}}^{\ast}={\bm{1}}_{N}/2. ∎
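For concreteness, a minimal PyTorch sketch of the annealed loss \hat{r}({\bm{p}};C,{\bm{\lambda}},\gamma)=\hat{l}({\bm{p}};C,{\bm{\lambda}})+\gamma\Phi({\bm{p}}) considered in Theorem B.4 is given below. The function names and the linear \gamma update are illustrative assumptions; the actual schedule follows the main text.

```python
import torch

def penalty(p, alpha=2):
    # Phi(p) = sum_i (1 - (2 p_i - 1)^alpha): maximal at p_i = 1/2, zero at p_i in {0, 1}
    return (1.0 - (2.0 * p - 1.0) ** alpha).sum()

def annealed_loss(l_hat, p, gamma, alpha=2):
    # r_hat(p; C, lambda, gamma) = l_hat(p; C, lambda) + gamma * Phi(p)
    return l_hat(p) + gamma * penalty(p, alpha)

def update_gamma(gamma0, eps, epoch):
    # illustrative linear schedule: gamma starts negative (smoothing the landscape)
    # and grows toward positive values (enforcing discreteness); the paper's schedule may differ
    return gamma0 + eps * epoch
```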

The theorem also holds for the cross-entropy penalty given by

Φ(𝒑)=i=1N(pilog(pi)+(1pi)log(1pi))\Phi({\bm{p}})=\sum_{i=1}^{N}\left(p_{i}\log(p_{i})+(1-p_{i})\log(1-p_{i})\right) (7)

which is used in UL-based solvers that exploit datasets or history [Sun et al., 2022, Sanokowski et al., 2024], because this \Phi({\bm{p}}) similarly satisfies Lemma B.1, Lemma B.2, and Lemma B.3.

Corollary B.5.

Theorem B.4 holds for the following penalty term:

Φ(𝒑)=i=1N(pilog(pi)+(1pi)log(1pi)).\Phi({\bm{p}})=\sum_{i=1}^{N}\left(p_{i}\log(p_{i})+(1-p_{i})\log(1-p_{i})\right). (8)
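A direct implementation of the penalty in Eq. (8) only requires clamping to avoid log(0); the sketch below assumes a soft-solution tensor p with values in [0,1]^N.

```python
import torch

def entropy_penalty(p, eps=1e-12):
    # Eq. (8): sum_i [ p_i log p_i + (1 - p_i) log(1 - p_i) ]
    p = p.clamp(eps, 1.0 - eps)  # numerical stability at the boundary
    return (p * p.log() + (1.0 - p) * (1.0 - p).log()).sum()
```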

Appendix C Generalization of CRA

C.1 Generalization to Potts variable optimization

This section generalizes the penalty term \Phi({\bm{\theta}};C), introduced for binary variables, to K-state Potts variables, where each decision variable takes one of K values, i.e., \forall i\in[N],~x_{i}\in\{1,\ldots,K\}, and interactions are expressed through the Kronecker delta \delta(x_{i},x_{j}), which equals one whenever x_{i}=x_{j} and zero otherwise. For example, graph K-coloring problems can be expressed as

f({\bm{x}};G(V,E))=\sum_{(i,j)\in E}\delta(x_{i},x_{j}) (9)

For Potts variables, the output of the GNN is 𝒉𝜽L=P(𝜽;C)N×K{\bm{h}}_{{\bm{\theta}}}^{L}=P({\bm{\theta}};C)\in\mathbb{R}^{N\times K} and the penalty term can be generalized as follows.

Φ(𝜽;C)=i=1N(1k=1K(KPi,k(𝜽;C)1)α).\Phi({\bm{\theta}};C)=\sum_{i=1}^{N}\left(1-\sum_{k=1}^{K}\left(KP_{i,k}({\bm{\theta}};C)-1\right)^{\alpha}\right). (10)
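A minimal sketch of the generalized penalty in Eq. (10), assuming the GNN output P({\bm{\theta}};C) is an (N, K) matrix whose rows are probability vectors (e.g., obtained via a row-wise softmax), is as follows.

```python
import torch

def potts_penalty(P, alpha=2):
    # Eq. (10): sum_i ( 1 - sum_k (K * P_ik - 1)^alpha )
    K = P.shape[1]
    return (1.0 - ((K * P - 1.0) ** alpha).sum(dim=1)).sum()

# example usage: P = torch.softmax(gnn_output, dim=1) for a K-coloring instance
```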

Appendix D Additional implementation details

D.1 Architecture of GNNs

We describe the details of the GNN architectures used in all numerical experiments. The first convolutional layer takes H_{0}-dimensional node embedding vectors {\bm{h}}_{{\bm{\theta}}}^{0} for each node as input and yields H_{1}-dimensional feature vectors {\bm{h}}_{{\bm{\theta}}}^{1}, to which the ReLU function is applied as a component-wise nonlinear transformation. The second convolutional layer takes the H_{1}-dimensional feature vectors {\bm{h}}^{1}_{{\bm{\theta}}} as input and produces H_{2}-dimensional vectors {\bm{h}}^{2}_{{\bm{\theta}}}. Finally, a sigmoid function is applied to {\bm{h}}^{2}_{{\bm{\theta}}}, and the output is the soft solution {\bm{p}}_{{\bm{\theta}}}\in[0,1]^{N}. As in [Schuetz et al., 2022a], we set H_{0}=\mathrm{int}(N^{0.8}), H_{1}=\mathrm{int}(N^{0.8}/2), and H_{2}=1 for both GCN and GraphSAGE.
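A minimal sketch of this two-layer architecture, using DGL's SAGEConv for the GraphSAGE variant (GraphConv could be substituted for the GCN variant), is shown below; the use of a trainable embedding table for the input features is an assumption consistent with the description above.

```python
import torch
import torch.nn as nn
from dgl.nn import SAGEConv  # GraphConv can be swapped in for the GCN variant

class TwoLayerGNN(nn.Module):
    def __init__(self, n_nodes):
        super().__init__()
        h0 = int(n_nodes ** 0.8)        # H0 = int(N^0.8)
        h1 = int(n_nodes ** 0.8 / 2)    # H1 = int(N^0.8 / 2)
        self.embed = nn.Embedding(n_nodes, h0)                 # trainable node embeddings h^0
        self.conv1 = SAGEConv(h0, h1, aggregator_type="mean")  # first convolution
        self.conv2 = SAGEConv(h1, 1, aggregator_type="mean")   # second convolution, H2 = 1

    def forward(self, g):
        h = self.embed.weight                 # (N, H0) node features
        h = torch.relu(self.conv1(g, h))      # first layer + ReLU
        p = torch.sigmoid(self.conv2(g, h))   # soft solution p_theta in [0, 1]
        return p.squeeze(-1)                  # shape (N,)
```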

D.2 Training settings

For all numerical experiments, we use the AdamW optimizer [Kingma and Ba, 2014] with a learning rate of \eta=10^{-4} and a weight decay of 10^{-2}, and we train the GNNs for up to 10^{5} epochs with early stopping, using an absolute tolerance of 10^{-5} and a patience of 10^{3}. As discussed in Schuetz et al. [2022a], because the results depend on the initial values of the trainable parameters, the GNNs are initialized with five different random seeds for each instance, and the best solution among these runs is selected.
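The training procedure described above can be summarized by the following sketch; loss_fn stands for the relaxed objective of the instance at hand, and the names are illustrative rather than those of the original implementation.

```python
import torch

def train(model, g, loss_fn, max_epochs=100_000, tol=1e-5, patience=1_000):
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
    best, stall = float("inf"), 0
    for epoch in range(max_epochs):
        opt.zero_grad()
        loss = loss_fn(model(g))          # differentiable relaxed objective
        loss.backward()
        opt.step()
        if best - loss.item() > tol:      # early stopping on absolute improvement
            best, stall = loss.item(), 0
        else:
            stall += 1
            if stall >= patience:
                break
    return model
```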

Appendix E Experiment details

E.1 Benchmark problems

MIS problems

There are several theoretical results for MIS problems on RRGs with node degree d, where each node is connected to exactly d other nodes. The MIS problem is a fundamental NP-hard problem [Karp, 2010] defined as follows. Given an undirected graph G(V,E), an independent set (IS) is a subset of nodes {\mathcal{I}}\subseteq V in which no two nodes are adjacent. The MIS problem attempts to find the largest IS, denoted {\mathcal{I}}^{\ast}. In this study, \rho denotes the IS density, where \rho=|{\mathcal{I}}|/|V|. To formulate the problem, a binary variable x_{i} is assigned to each node i\in V. Then the MIS problem is formulated as follows:

f(𝒙;G,λ)=iVxi+λ(i,j)Exixj,f({\bm{x}};G,\lambda)=-\sum_{i\in V}x_{i}+\lambda\sum_{(i,j)\in E}x_{i}x_{j}, (11)

where the first term attempts to maximize the number of nodes assigned 1, and the second term penalizes adjacent nodes both marked 1 according to the penalty parameter \lambda. In our numerical experiments, we set \lambda=2 following Schuetz et al. [2022a]; as in that work, no constraint violations were observed. Several theoretical properties of MIS problems on RRGs are known. First, for every d, there exists a value \rho_{d}^{\ast}, depending only on the degree d, such that the independent set density |{\mathcal{I}}^{\ast}|/|V| converges to \rho_{d}^{\ast} with high probability as N approaches infinity [Bayati et al., 2010]. Second, a statistical mechanical analysis provides the typical MIS density \rho^{\mathrm{Theory}}_{d}, as shown in Fig. 3, and for d>16, the solution space of {\mathcal{I}} undergoes a clustering transition, which is associated with hardness in sampling [Barbier et al., 2013], because the clustering is likely to create relevant barriers that affect any algorithm searching for the MIS {\mathcal{I}}^{\ast}. Finally, this hardness is supported by analytical results in the large-d limit: while the maximum independent set is known to have density \rho^{\ast}_{d\to\infty}=2\log(d)/d, to the best of our knowledge, no known algorithm can find an independent set with density exceeding \rho^{\mathrm{alg}}_{d\to\infty}=\log(d)/d [Coja-Oghlan and Efthymiou, 2015].
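Replacing the binary variables x_{i} in Eq. (11) with the soft solution {\bm{p}}_{{\bm{\theta}}} yields the differentiable objective sketched below; the edge_index representation is an illustrative assumption.

```python
import torch

def mis_loss(p, edge_index, lam=2.0):
    # relaxed Eq. (11): -sum_i p_i + lambda * sum_{(i,j) in E} p_i p_j
    src, dst = edge_index               # (2, |E|) tensor, one entry per undirected edge
    reward = -p.sum()                   # encourages selecting nodes
    conflict = (p[src] * p[dst]).sum()  # penalizes adjacent selected nodes
    return reward + lam * conflict
```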

MaxCut problems

The MaxCut problem is also a fundamental NP-hard problem [Karp, 2010] with practical applications in machine scheduling [Alidaee et al., 1994], image recognition [Neven et al., 2008], and electronic circuit layout design [Deza and Laurent, 1994]. Given a graph G=(V,E), a cut set {\mathcal{C}}\subseteq E is defined as a subset of edges connecting two disjoint node sets V_{1} and V_{2} with V_{1}\cup V_{2}=V and V_{1}\cap V_{2}=\emptyset. The MaxCut problem aims to find the maximum cut set, denoted {\mathcal{C}}^{\ast}. Here, the cut ratio is defined as \nu=|{\mathcal{C}}|/|V|, where |{\mathcal{C}}| is the cardinality of the cut set. To formulate this problem, each node is assigned a binary variable, where x_{i}=1 indicates that node i belongs to V_{1}, and x_{i}=0 indicates that it belongs to V_{2}. Here, x_{i}+x_{j}-2x_{i}x_{j}=1 holds if and only if the edge (i,j)\in{\mathcal{C}}. As a result, we obtain the following:

f(𝒙;G)=i<jAij(2xixjxixj).f({\bm{x}};G)=\sum_{i<j}A_{ij}(2x_{i}x_{j}-x_{i}-x_{j}). (12)

This study also focuses on MaxCut problems on d-RRGs, for which several theoretical results have been established. Specifically, for each d, the maximum cut ratio is given by \nu_{d}^{\ast}\approx d/4+P_{\ast}\sqrt{d/4}+o(\sqrt{d}), where P_{\ast}=0.7632\ldots, with high probability as N approaches infinity [Parisi, 1980, Dembo et al., 2017]. Thus, we take \nu_{d}^{\mathrm{UB}}=d/4+P_{\ast}\sqrt{d/4} as an upper bound for the maximum cut ratio in the large-N limit.
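The corresponding relaxed MaxCut objective from Eq. (12) can be written analogously, again assuming each undirected edge appears once in the edge_index tensor.

```python
def maxcut_loss(p, edge_index):
    # relaxed Eq. (12): sum_{i<j} A_ij (2 p_i p_j - p_i - p_j); minimizing this maximizes the cut
    src, dst = edge_index
    return (2.0 * p[src] * p[dst] - p[src] - p[dst]).sum()
```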

DBM problems

Here, the topologies are taken from the Cora citation network [Sen et al., 2008], where each node has 1,4331{,}433 bag-of-words features, and each edge represents likelihood, as predicted by a machine learning model. Mandi et al. [2022] focused on disjoint topologies within the given topology, and they created 2727 distinct instances with varying properties. Each instance comprises 100100 nodes representing scientific publications, divided into two groups of 5050 nodes N1N_{1} and N2N_{2}. The optimization task is to find the maximum matching, where diversity constraints ensure connections among papers in the same field and between papers of different fields. This is formulated using a penalty method as follows.

l(𝒙;C,M,𝝀)=ijCijxij+λ1iReLU(jxij1)+λ2jReLU(ixij1)+λ3ReLU(pijxijijMijxij)+λ4ReLU(qijxijij(1Mij)xij),l({\bm{x}};C,M,{\bm{\lambda}})=-\textstyle\sum_{ij}C_{ij}x_{ij}+\lambda_{1}\textstyle\sum_{i}\mathrm{ReLU}\Big{(}\textstyle\sum_{j}x_{ij}-1\Big{)}+\lambda_{2}\textstyle\sum_{j}\mathrm{ReLU}\Big{(}\sum_{i}x_{ij}-1\Big{)}\\ +\lambda_{3}\mathrm{ReLU}\Big{(}p\textstyle\sum_{ij}x_{ij}-\textstyle\sum_{ij}M_{ij}x_{ij}\Big{)}+\lambda_{4}\mathrm{ReLU}\Big{(}q\sum_{ij}x_{ij}-\sum_{ij}(1-M_{ij})x_{ij}\Big{)}, (13)

where C\in{\mathbb{R}}^{N_{1}\times N_{2}} represents the likelihood of a link between each pair of nodes, and the indicator M_{ij} is set to 0 if articles i\in N_{1} and j\in N_{2} share the same subject field and to 1 otherwise. The parameters p,q\in[0,1] represent the proportions of pairs sharing the same field and of unrelated pairs, respectively. As in Mandi et al. [2022], we explore two variations of this problem, with p=q=25\% and p=q=5\%, referred to as Matching-1 and Matching-2, respectively. In this experiment, we set \lambda_{1}=\lambda_{2}=10 and \lambda_{3}=\lambda_{4}=25.
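A relaxed version of Eq. (13) can be implemented directly with ReLU penalties; the tensor shapes below are illustrative assumptions (x and M are N_{1}\times N_{2} matrices, with x holding soft matching variables in [0,1]).

```python
import torch
import torch.nn.functional as F

def dbm_loss(x, C, M, p=0.25, q=0.25, lams=(10.0, 10.0, 25.0, 25.0)):
    # relaxed Eq. (13)
    match = -(C * x).sum()                                 # maximize total link likelihood
    row = F.relu(x.sum(dim=1) - 1.0).sum()                 # each node in N1 matched at most once
    col = F.relu(x.sum(dim=0) - 1.0).sum()                 # each node in N2 matched at most once
    third = F.relu(p * x.sum() - (M * x).sum())            # third penalty term in Eq. (13)
    fourth = F.relu(q * x.sum() - ((1.0 - M) * x).sum())   # fourth penalty term in Eq. (13)
    return match + lams[0] * row + lams[1] * col + lams[2] * third + lams[3] * fourth
```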

E.2 GNNs

A GNN [Gilmer et al., 2017, Scarselli et al., 2008] is a neural network specialized for representation learning on graph-structured data. GNNs learn a vectorial representation of each node in two steps, i.e., the aggregate and combine steps. The aggregate step employs a permutation-invariant function to generate an aggregated node feature, and the combine step passes the aggregated node feature through a trainable layer to generate a node embedding; together, these steps constitute what is known as "message passing" [Gilmer et al., 2017]. Formally, for a given graph G=(V,E) with a node feature {\bm{h}}^{0}_{v}\in{\mathbb{R}}^{H_{0}} attached to each node v\in V, the GNN iterates the following two steps. First, the aggregate step at the l-th layer is defined as follows:

𝒂vl=Aggregate𝜽l({hul1,u𝒩v}),{\bm{a}}_{v}^{l}=\mathrm{Aggregate}_{{\bm{\theta}}}^{l}\left(\{h_{u}^{l-1},\forall u\in{\mathcal{N}}_{v}\}\right), (14)

where {\mathcal{N}}_{v}=\{u\in V\mid(v,u)\in E\} denotes the neighborhood of v\in V, {\bm{h}}_{u}^{l-1} is the feature of a neighboring node u, and {\bm{a}}_{v}^{l} is the aggregated feature of the neighborhood. Then, the combine step at the l-th layer is defined as follows:

𝒉vl=Combine𝜽l(𝒉vl1,𝒂vl),{\bm{h}}_{v}^{l}=\mathrm{Combine}^{l}_{{\bm{\theta}}}({\bm{h}}_{v}^{l-1},{\bm{a}}_{v}^{l}), (15)

where {\bm{h}}_{v}^{l}\in{\mathbb{R}}^{H_{l}} denotes the node representation at the l-th layer. Here, the total number of layers L and the intermediate dimensions H_{l} are hyperparameters determined empirically. Although numerous GNN architectures have been proposed to date, the most basic and widely used architecture is the GCN [Scarselli et al., 2008], which is given as follows:

𝒉vl=σ(Wlu𝒩(v)𝒉ul1|𝒩(v)|+Bl𝒉vl1),{\bm{h}}_{v}^{l}=\sigma\left(W^{l}\sum_{u\in{\mathcal{N}}(v)}\frac{{\bm{h}}_{u}^{l-1}}{|{\mathcal{N}}(v)|}+B^{l}{\bm{h}}_{v}^{l-1}\right), (16)

where WlW^{l} and BlB^{l} are trainable parameters, |𝒩(v)||{\mathcal{N}}(v)| serves as a normalization factor, and σ:HlHl\sigma:{\mathbb{R}}^{H_{l}}\to{\mathbb{R}}^{H_{l}} is some component-wise nonlinear activation function, e.g., the sigmoid or ReLU function.
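As an illustration, the GCN update in Eq. (16) can be written with a dense adjacency matrix as follows; this is a sketch for clarity, whereas practical implementations use sparse message passing as in DGL.

```python
import torch

def gcn_layer(h, adj, W, B, sigma=torch.relu):
    # Eq. (16): mean-aggregate neighbor features, then combine with the node's own feature
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)  # |N(v)|, clamped to avoid division by zero
    agg = (adj @ h) / deg                              # neighborhood mean
    return sigma(agg @ W.T + h @ B.T)                  # W^l, B^l are trainable parameters
```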

Appendix F Additional experiments

F.1 Numerical validation of practical issues presented in Section 3.1

In this section, we numerically examine the practical issues of the continuous relaxation approach, in particular the difficulty of optimization (issue (I)), as pointed out by previous studies [Wang and Li, 2023, Angelini and Ricci-Tersenghi, 2023], on the NP-hard MIS and MaxCut problems. To this end, we conducted numerical experiments using the PI-GNN solver for MIS and MaxCut problems on RRGs with higher degrees. To ensure experimental impartiality, we adhered to the original settings of the PI-GNN solver [Schuetz et al., 2022b]. Refer to Section E for the detailed experimental settings. Fig. 7 (top) shows the solutions obtained by the PI-GNN solver as a function of the degree d for the MIS and MaxCut problems with varying system sizes N. These results indicate that finding independent and cut sets becomes infeasible as the RRG becomes denser. In addition, to clarify the reasons for these failures, we analyzed the dynamics of the cost function for MIS problems with N=10{,}000, focusing on graphs with degrees d=5 and d=20, as depicted in Fig. 7 (bottom). For the d=5 case, the cost function passes over the plateau of \hat{l}({\bm{\theta}};G,{\bm{\lambda}})=0 with {\bm{p}}_{\theta}(G)={\bm{0}}_{N}, as shown in the histogram, eventually yielding a solution comparable to those reported by Schuetz et al. [2022a]. Conversely, in the d=20 case, the cost function remains stagnant on the plateau of \hat{l}({\bm{\theta}};G,{\bm{\lambda}})=0 with {\bm{p}}_{\theta}(G)={\bm{0}}_{N}, thereby failing to find any independent nodes. To interpret this phenomenon, we hypothesize that the representation capacity of the GNN is sufficiently large, which allows us to view the optimization of \hat{L}_{\mathrm{MIS}}({\bm{\theta}};G,\lambda) and \hat{L}_{\mathrm{MaxCut}}({\bm{\theta}};G) as a variational optimization problem with respect to {\bm{p}}_{\theta}. In this case, {\bm{p}}_{\theta}^{\ast}={\bm{0}}_{N} satisfies the first-order variational optimality conditions \delta\hat{l}_{\mathrm{MIS}}/\delta{\bm{p}}_{\theta}|_{{\bm{p}}_{\theta}={\bm{p}}^{\ast}}=\delta\hat{l}_{\mathrm{MaxCut}}/\delta{\bm{p}}_{\theta}|_{{\bm{p}}_{\theta}={\bm{p}}^{\ast}}={\bm{0}}_{N}, which suggests a potential reason for absorption into the plateau. However, this does not reveal the conditions for convergence to the fixed point {\bm{p}}^{\ast} during the early learning stage or the conditions for escaping from it. Thus, an extensive theoretical evaluation through stability analysis remains an important topic for future work.

Refer to caption
Figure 7: The top graph shows the independent set density for MIS problems (left) and the cut ratio for MaxCut problems (right) as a function of degree dd using the PI-GNN solver with varying system size NN. Each data point represents the average result of five different graph instances, with the error bars indicating the standard deviation of those results. The bottom graph shows the cost as a function of the number of parameter updates NEPOCHN_{\mathrm{EPOCH}}, for N=10000N=10000 MIS problems on 55-RRG and 2020-RRG. The histogram represents the relaxed vector distribution with varying numbers of parameter updates NEPOCHN_{\mathrm{EPOCH}}. Each point in the bottom-left plot is linked to the corresponding bottom-right histogram.

In summary, for UL-based solvers, optimizing {\bm{\theta}} can be challenging and unstable. In particular, the PI-GNN solver, a UL-based solver employing GNNs, fails to optimize {\bm{\theta}} because it becomes trapped in a local solution for complex CO problems on relatively dense graphs, where the performance of greedy algorithms also worsens. This issue can be a potential bottleneck for more practical and relatively dense problems, making it difficult to employ the PI-GNN solver confidently.

F.2 Additional results of MIS

Table 3: ApR and runtime are evaluated on three benchmarks provided by DIMES [Qiu et al., 2022]. The ApR is assessed relative to the results obtained by KaMIS. Runtime is reported as the total clock time, denoted in seconds (s), minutes (m), or hours (h). The runtime and solution quality are sourced from iSCO [Sun et al., 2023]. The baselines include solvers from the Operations Research (OR) community, as well as data-driven approaches utilizing Reinforcement Learning (RL), Supervised Learning (SL) combined with Tree Search (TS), Greedy decoding (G), or sampling (S). Methods that fail to produce results within 10 times the time limit of DIMES are marked as N/A.
Method Type ER-[700-800] ER-[9000-11000]
ApR Time ApR Time
KaMIS OR 1.000 52.13m 1.000 7.6h
Gurobi OR 0.922 50.00m N/A N/A
Intel SL+TS 0.865 20.00m N/A N/A
SL+G 0.777 6.06m 0.746 5.02m
DGL SL+TS 0.830 22.71m N/A N/A
LwD RL+S 0.918 6.33m 0.907 7.56m
DIMES RL+G 0.852 6.12m 0.841 5.21m
RL+S 0.937 12.01m 0.873 12.51m
iSCO fewer steps 0.998 1.38m 0.990 9.38m
more steps 1.006 5.56m 1.008 1.25h
CRA UL-based 0.928 47.30m 0.963 1.03h

We evaluate our method using the MIS benchmark dataset from recent studies [Goshvadi et al., 2023, Qiu et al., 2022], which includes graphs from SATLIB [Hoos and Stützle, 2000] and Erdős–Rényi graphs (ERGs) of varying sizes. Following Sun et al. [2023], our test set consists of 500500 SATLIB graphs, each containing between 403403 and 449449 clauses with up to 1,3471{,}347 nodes and 5,9785{,}978 edges, 128128 ERGs with 700700 to 800800 nodes each, and 1616 ERGs with 9,0009{,}000 to 11,00011{,}000 nodes each. We conducted numerical experiments on PQQA using four different configurations: parallel runs with S=100S=100 or S=1000S=1000 and shorter steps (3000 steps) or longer steps (30000 steps), similar to the approach in iSCO [Sun et al., 2023]. Table 3 presents the solution quality and runtime results. The results show that CRA, which optimizes the relaxed variables as an optimization of GNN parameters, takes extra time for smaller ER-[700-800] instances due to the smaller number of decision variables. However, for larger instances, CRA achieves results comparable to iSCO. Although limited space makes it difficult to present other benchmark results employed by iSCO, such as MaxCut and MaxClique, numerical experiments on these benchmarks also show that CRA is less effective for small problems. However, for larger problems, the results are comparable to or slightly inferior to those of iSCO.

We also investigated the relationship between the graph order and density and the solving time of the solver; the results are shown in Tables 4 and 5. The results demonstrate that the runtime remains nearly constant as the graph density increases, indicating effective scalability to denser graphs.

Table 4: ApR of the MIS problem on RRG(N,d)\mathrm{RRG}(N,d). All the results are averaged based on 5 RRG\mathrm{RRG}s with different random seeds.
Problem ApR (CRA) ApR (PI) Time (CRA) Time (PI)
RRG(1,000,10)\mathrm{RRG}(1{,}000,10) 0.95 0.78 108 (s) 98 (s)
RRG(1,000,20)\mathrm{RRG}(1{,}000,20) 0.95 0.56 103 (s) 92 (s)
RRG(1,000,30)\mathrm{RRG}(1{,}000,30) 0.94 0.00 102 (s) 88 (s)
RRG(1,000,40)\mathrm{RRG}(1{,}000,40) 0.93 0.00 101 (s) 82 (s)
RRG(1,000,50)\mathrm{RRG}(1{,}000,50) 0.92 0.00 102 (s) 82 (s)
RRG(1,000,60)\mathrm{RRG}(1{,}000,60) 0.91 0.00 101 (s) 91 (s)
RRG(1,000,70)\mathrm{RRG}(1{,}000,70) 0.91 0.00 101 (s) 86 (s)
RRG(1,000,80)\mathrm{RRG}(1{,}000,80) 0.91 0.00 102 (s) 93 (s)
RRG(5,000,10)\mathrm{RRG}(5{,}000,10) 0.93 0.77 436 (s) 287 (s)
RRG(5,000,20)\mathrm{RRG}(5{,}000,20) 0.95 0.74 413 (s) 280 (s)
RRG(5,000,30)\mathrm{RRG}(5{,}000,30) 0.95 0.00 419 (s) 283 (s)
RRG(5,000,40)\mathrm{RRG}(5{,}000,40) 0.94 0.00 429 (s) 293 (s)
RRG(5,000,50)\mathrm{RRG}(5{,}000,50) 0.94 0.00 418 (s) 324 (s)
RRG(5,000,60)\mathrm{RRG}(5{,}000,60) 0.93 0.00 321 (s) 302 (s)
RRG(5,000,70)\mathrm{RRG}(5{,}000,70) 0.92 0.00 321 (s) 325 (s)
RRG(5,000,80)\mathrm{RRG}(5{,}000,80) 0.92 0.00 330 (s) 305 (s)
Table 5: The ApR of the MIS problem on ERG(N,d)\mathrm{ERG}(N,d) is evaluated against the results of KaMIS. Due to time limitations, the maximum running time for KaMIS was constrained. The results below present the average ApRs and runtimes across five different random seeds.
Problem CRA (ApR) PI (ApR) Time (CRA) Time (PI) Time (KaMIS)
ERG(1,000,0.05)\mathrm{ERG}(1{,}000,0.05) 0.97 0.01 103 (s) 98 (s) 100 (s)
ERG(1,000,0.10)\mathrm{ERG}(1{,}000,0.10) 0.95 0.00 100 (s) 98 (s) 210 (s)
ERG(1,000,0.15)\mathrm{ERG}(1{,}000,0.15) 0.94 0.00 100 (s) 92 (s) 315 (s)
ERG(1,000,0.20)\mathrm{ERG}(1{,}000,0.20) 0.91 0.00 99 (s) 88 (s) 557 (s)
ERG(1,000,0.25)\mathrm{ERG}(1{,}000,0.25) 0.93 0.00 98 (s) 82 (s) 733 (s)
ERG(1,000,0.30)\mathrm{ERG}(1{,}000,0.30) 0.90 0.00 98 (s) 82 (s) 1000 (s)
ERG(1,000,0.35)\mathrm{ERG}(1{,}000,0.35) 0.92 0.00 99 (s) 91 (s) 1000 (s)
ERG(1,000,0.40)\mathrm{ERG}(1{,}000,0.40) 0.91 0.00 97 (s) 86 (s) 1000 (s)

F.3 Additional results of Gset

We conducted experiments on additional instances from the Gset collection to further validate that CRA enhances the PI-GNN results beyond what was previously achievable; the results are presented in Table 6.

Table 6: ApR for MaxCut on Gset
GRAPH (NODES, EDGES) GREEDY SDP RUN-CSP PI-GNN CRA
G1 (800800, 19,17619{,}176) 0.9420.942 0.9700.970 0.9790.979 0.9780.978 1.000\mathbf{1.000}
G2 (800800, 19,17619{,}176) 0.9510.951 0.9700.970 0.9810.981 0.9760.976 0.998\mathbf{0.998}
G3 (800800, 19,17619{,}176) 0.9450.945 0.9720.972 0.9820.982 0.9720.972 1.000\mathbf{1.000}
G4 (800800, 19,17619{,}176) 0.9490.949 0.9710.971 0.9800.980 0.9780.978 0.999\mathbf{0.999}
G5 (800800, 19,17619{,}176) 0.9490.949 0.9700.970 0.9800.980 0.9780.978 1.000\mathbf{1.000}
G14 (800800, 4,6944{,}694) 0.9460.946 0.9520.952 0.9560.956 0.9880.988 0.994\mathbf{0.994}
G15 (800800, 4,6614{,}661) 0.9390.939 0.9580.958 0.9520.952 0.9800.980 0.992\mathbf{0.992}
G16 (800800, 4,6724{,}672) 0.9480.948 0.9580.958 0.9530.953 0.9650.965 0.990\mathbf{0.990}
G17 (800800, 4,6674{,}667) 0.9460.946 - 0.9560.956 0.9670.967 0.990\mathbf{0.990}
G22 (2,0002{,}000, 19,99019{,}990) 0.9230.923 - 0.9720.972 0.9870.987 0.998\mathbf{0.998}
G23 (2,0002{,}000, 19,99019{,}990) 0.9270.927 - 0.9730.973 0.9680.968 0.997\mathbf{0.997}
G24 (2,0002{,}000, 19,99019{,}990) 0.9270.927 - 0.9730.973 0.9590.959 0.998\mathbf{0.998}
G25 (2,0002{,}000, 19,99019{,}990) 0.9290.929 - 0.9740.974 0.9740.974 0.998\mathbf{0.998}
G26 (2,0002{,}000, 19,99019{,}990) 0.9240.924 - 0.9740.974 0.9650.965 0.998\mathbf{0.998}
G35 (2,0002{,}000, 11,77811{,}778) 0.9420.942 - 0.9530.953 0.9680.968 0.990\mathbf{0.990}
G36 (2,0002{,}000, 11,76611{,}766) 0.9420.942 - 0.9530.953 0.9660.966 0.991\mathbf{0.991}
G37 (2,0002{,}000, 11,78511{,}785) 0.9460.946 - 0.9500.950 0.9660.966 0.997\mathbf{0.997}
G38 (2,0002{,}000, 11,77911{,}779) 0.9430.943 - 0.9490.949 0.9660.966 0.991\mathbf{0.991}
G43 (1,0001{,}000, 9,9909{,}990) 0.9280.928 0.9680.968 0.9760.976 0.9660.966 0.995\mathbf{0.995}
G44 (1,0001{,}000, 9,9909{,}990) 0.9200.920 0.9550.955 0.9780.978 0.9680.968 0.998\mathbf{0.998}
G45 (1,0001{,}000, 9,9909{,}990) 0.9300.930 0.9500.950 0.9790.979 0.9610.961 0.998\mathbf{0.998}
G46 (1,0001{,}000, 9,9909{,}990) 0.9300.930 0.9600.960 0.9760.976 0.9740.974 0.998\mathbf{0.998}
G47 (1,0001{,}000, 9,9909{,}990) 0.9310.931 0.9560.956 0.9760.976 0.9720.972 0.997\mathbf{0.997}
G48 (3,0003{,}000, 6,0006{,}000) 1.000\mathbf{1.000} 1.000\mathbf{1.000} 1.000\mathbf{1.000} 0.9120.912 1.000\mathbf{1.000}
G49 (3,0003{,}000, 6,0006{,}000) 1.000\mathbf{1.000} 1.000\mathbf{1.000} 1.000\mathbf{1.000} 0.9860.986 1.000\mathbf{1.000}
G50 (3,0003{,}000, 6,0006{,}000) 1.000\mathbf{1.000} 1.000\mathbf{1.000} 0.9990.999 0.9900.990 1.000\mathbf{1.000}
G51 (1,0001{,}000, 5,9095{,}909) 0.9490.949 0.9600.960 0.9540.954 0.9640.964 0.991\mathbf{0.991}
G52 (1,0001{,}000, 5,9165{,}916) 0.9440.944 0.9570.957 0.9550.955 0.9610.961 0.990\mathbf{0.990}
G53 (1,0001{,}000, 5,9145{,}914) 0.9450.945 0.9560.956 0.9500.950 0.9660.966 0.992\mathbf{0.992}
G54 (1,0001{,}000, 5,9165{,}916) 0.9000.900 0.9560.956 0.9520.952 0.9700.970 0.988\mathbf{0.988}
G55 (5,0005{,}000, 12,49812{,}498) 0.8920.892 - 0.9780.978 0.9830.983 0.991\mathbf{0.991}
G58 (5,0005{,}000, 29,57029{,}570) 0.9450.945 - 0.9800.980 0.9660.966 0.989\mathbf{0.989}
G60 (7,0007{,}000, 17,14817{,}148) 0.8890.889 - 0.9800.980 0.9450.945 0.991\mathbf{0.991}
G63 (7,0007{,}000, 41,45941{,}459) 0.9480.948 - 0.9470.947 0.9680.968 0.989\mathbf{0.989}
G70 (10,00010{,}000, 9,9999{,}999) 0.8860.886 - 0.9700.970 0.9820.982 0.992\mathbf{0.992}

F.4 Additional results of TSP

We conducted additional experiments on several TSP problems from the TSPLIB dataset (http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/), presenting results that illustrate the α\alpha-dependency.

The ApR is calculated as the ratio of the optimal value to the CRA result, reported as the average and standard deviation over 3 seeds. The “–” symbol for PI-GNN denotes cases in which most variables remained continuous and no solution satisfying the constraints was found within the maximum number of epochs. The same GNN and optimizer settings were used as in the main-text experiments in Section 5.1.

Table 7 shows that the CRA approach yielded solutions with ApRs close to or exceeding 0.9 across various instances. Notably, for the burma14 problem, our method identified the global optimal solution (3,323) multiple times. However, the obtained solution quality can vary depending on the specific GNN architecture and problem structure, indicating that a more comprehensive ablation study could provide valuable insights in future work.

Refer to caption
Figure 8: (Top) IS density of N=10{,}000 MIS problems on 100-RRG as a function of the initial scheduling value \gamma(0) and the scheduling rate \varepsilon, obtained by the CRA-PI-GNN solver using GraphSAGE (Left) and GCN (Right). The color of the heat map represents the average IS density over five different instances.
Refer to caption
Figure 9: (Left) Independent set density as a function of the curvature rate \alpha in Eq. (5). (Right) Cut ratio as a function of the curvature rate \alpha in Eq. (5). Error bars represent the standard deviations of the results.
Table 7: Comparison of ApR performance across different values of pp for various TSPLIB instances.
burma14 ulysses22 st70 gr96
ApR (p=2p=2) 0.91±0.080.91\pm 0.08 0.89±0.030.89\pm 0.03 0.96±0.010.96\pm 0.01 0.81±0.050.81\pm 0.05
ApR (p=4p=4) 0.98±0.100.98\pm 0.10 0.92±0.020.92\pm 0.02 0.85±0.030.85\pm 0.03 0.82±0.030.82\pm 0.03
ApR (p=6p=6) 0.97±0.140.97\pm 0.14 0.88±0.070.88\pm 0.07 0.88±0.040.88\pm 0.04 0.90±0.050.90\pm 0.05
ApR (p=8p=8) 0.99±0.060.99\pm 0.06 0.89±0.050.89\pm 0.05 0.80±0.020.80\pm 0.02 0.86±0.050.86\pm 0.05
ApR (PI) 0.736±1.210.736\pm 1.21
Optimum 3,323 7,013 675 55,209

F.5 Ablation over initial scheduling value and scheduling rate

We conducted an ablation study focusing on the initial scheduling value \gamma(0) and the scheduling rate \varepsilon. This numerical experiment was conducted under the configuration described in Sections 5 and E. Fig. 8 shows the IS density for MIS problems with N=10{,}000 on a 100-RRG as a function of the initial scheduling value \gamma(0) and the scheduling rate \varepsilon, using the CRA-PI-GNN solver with both GraphSAGE and GCN. As can be seen, smaller values of \gamma(0) and \varepsilon typically yield better solutions. However, the convergence time increases progressively as \gamma(0) and \varepsilon become smaller. In addition, GraphSAGE consistently produces better solutions even with relatively larger \gamma(0) and \varepsilon, which implies that the GNN architecture influences both the solution quality and the effective ranges of \gamma(0) and \varepsilon for the annealing process.

F.6 Ablation over curve rate

Next, we investigated the effect of varying the curvature rate \alpha in Eq. (5). Numerical experiments were performed on MIS problems with 10{,}000 nodes and degrees of 5 and 20, as well as MaxCut problems with 10{,}000 nodes and degrees of 5 and 35. The GraphSAGE architecture was employed, with the other parameters set as in Sections 5 and E. As shown in Fig. 9, we observed that \alpha=2 consistently yielded the best scores across these problems.