
Geometry Meets Vectors: Approximation
Algorithms for Multidimensional Packing

Arindam Khan    Eklavya Sharma    K. V. N. Sreenivas
Department of Computer Science and Automation, Indian Institute of Science, Bengaluru, India.
Abstract

We study the generalized multidimensional bin packing problem (GVBP) that generalizes both geometric packing and vector packing. Here, we are given $n$ rectangular items where the $i^{\textrm{th}}$ item has width $w(i)$, height $h(i)$, and $d$ nonnegative weights $v_{1}(i),v_{2}(i),\ldots,v_{d}(i)$. Our goal is to get an axis-parallel non-overlapping packing of the items into square bins so that for all $j\in[d]$, the sum of the $j^{\textrm{th}}$ weight of items in each bin is at most 1. This is a natural problem arising in logistics, resource allocation, and scheduling. Despite being well studied in practice, surprisingly, approximation algorithms for this problem have rarely been explored.

We first obtain two simple algorithms for GVBP having asymptotic approximation ratios $6(d+1)$ and $3(1+\ln(d+1)+\varepsilon)$. We then extend the Round-and-Approx (R&A) framework [3, 6] to wider classes of algorithms, and show how it can be adapted to GVBP. Using more sophisticated techniques, we obtain better approximation algorithms for GVBP, and we get further improvement by combining them with the R&A framework. This gives us an asymptotic approximation ratio of $2(1+\ln((d+4)/2))+\varepsilon$ for GVBP, which improves to $2.919+\varepsilon$ for the special case of $d=1$. We obtain further improvement when the items are allowed to be rotated. We also present algorithms for a generalization of GVBP where the items are high dimensional cuboids.

1 Introduction

Bin packing and knapsack problems are classical NP-hard optimization problems. Two classical generalizations of these problems, geometric packing and vector packing, have been well studied since the 1980s [14, 17]. Geometric packing considers the packing of rectangular items, whereas in vector packing, items are multidimensional vectors. Fig. 1 illustrates the difference between geometric packing and vector packing. However, in practice we often encounter a mixture of geometric and vector constraints. Consider the following airlines cargo problem [37]: We have boxes to load in an airline cargo container. In addition to the geometric constraint that all the boxes must fit within the container, we also have a constraint that the total weight of the loaded boxes is within a specified capacity. Thus, in this problem, three dimensions are geometric and the weight is a vector constraint.

Figure 1: Geometric & vector packing: We can pack rectangles $(0.8,0.2)$, $(0.4,0.4)$, $(0.4,0.4)$ into a unit geometric bin, but we cannot pack vectors $(0.8,0.2)$, $(0.4,0.4)$, $(0.4,0.4)$ into a unit vector bin.

Weight has been an important constraint to consider for packing in logistics and supply chain management, e.g., cranes and other equipment can be damaged by the bins being too heavy, or by a skewed weight distribution [1]. While the container loading problems mostly consider packing items inside a container, the container stowage planning problem considers the stacking of the containers onto and off cargo ships [36]. Even when different cargoes are packed into a fleet of aircraft for transport, one needs the individual cargoes to be not too heavy to ensure stability and less fuel consumption [2]. Similar problems find applications in vehicle routing with loading constraints [8]. Many practical heuristics [41, 43] have been proposed for these kinds of problems. Several companies (such as Driw, Boxify, Freightcom) and practical packages [46] have considered the problem. In many cases, we also want to ensure a limit on other attributes, such as the amount of magnetism, radioactivity, or toxicity. Each of these properties can be considered as additional vector dimensions.

Such multidimensional packing problems are also getting attention due to their connections with fair resource allocation [38]. In recent years, a considerable amount of research has focused on group fairness [27, 44] such that the algorithms are not biased towards (or against) some groups or categories. One such notion of fairness is restricted dominance [7], which upper bounds the number (or size) of items from a category. These different categories can be considered as dimensions. E.g., in a container packing problem for flood relief, one needs to ensure that the money spent on a container is fairly distributed among different types of items (such as medicine, food, garments). Hence, for each category, there is an upper bound on the value that can go into a container.

Formally, we are given $n$ items $I\coloneqq\{1,2,\dots,n\}$ that are ($d_{g}$, $d_{v}$)-dimensional, i.e., item $i$ is a $d_{g}$-dimensional cuboid of lengths $\ell_{1}(i),\ell_{2}(i),\ldots,\ell_{d_{g}}(i)$ and has $d_{v}$ non-negative weights $v_{1}(i),v_{2}(i),\ldots,v_{d_{v}}(i)$. A ($d_{g}$, $d_{v}$)-dimensional bin is a $d_{g}$-dimensional cuboid of length 1 in each geometric dimension and weight capacity 1 in each of the $d_{v}$ vector dimensions. A feasible packing of items into a bin is a packing where items are packed parallel to the axes without overlapping, and for all $j\in[d_{v}]$, the sum of the $j^{\textrm{th}}$ weight-dimension of the items in the bin is at most 1. In the ($d_{g}$, $d_{v}$) bin packing problem (BP), we have to feasibly pack all items into the minimum number of bins. In the ($d_{g}$, $d_{v}$) knapsack problem (KS), each item $i$ also has an associated nonnegative profit $p(i)$, and we have to feasibly pack a maximum-profit subset of the items into a single bin (also called ‘knapsack’). ($d_{g}$, $d_{v}$) packing problems generalize both $d_{g}$-dimensional geometric packing (when $d_{v}=0$) and $d_{v}$-dimensional vector packing (when $d_{g}=0$). Already for vector packing, if $d_{v}$ is part of the input, there is an approximation hardness of ${d_{v}}^{1-\varepsilon}$ unless NP=ZPP [5]. Thus, throughout the paper we assume both $d_{g},d_{v}$ to be constants.

1.1 Our Results

We study the first approximation algorithms for general ($d_{g}$, $d_{v}$) BP, with a focus on $d_{g}=2$. We give two simple algorithms for (2, $d$) BP, called \texttt{simple-pack} and \texttt{better-simple-pack}, having asymptotic approximation ratios (AARs) of $6(d+1)$ and $3(1+\ln(d+1))+\varepsilon$, respectively, for any $\varepsilon>0$. For $d=1$, \texttt{better-simple-pack}’s AAR improves to $\approx 4.216+\varepsilon$.

Next, we modify the Round-and-Approx (R&A) framework [6] so that it works for ($d_{g}$, $d_{v}$) BP. We combine R&A with the \texttt{simple-pack} algorithm to get an AAR of $2(1+\ln(3(d+1)))+\varepsilon$ for (2, $d$) BP. This improves upon the AAR of \texttt{better-simple-pack} for $d\geq 3$.

Next, we obtain a more sophisticated algorithm for (2, $d$) BP, called \texttt{cb-pack}, that fits into the R&A framework and has an even better AAR.

Theorem 1.

For any $\varepsilon>0$, there is a polynomial-time algorithm for (2, $d$) BP, called \texttt{cb-pack}, having AAR $2(1+\ln((d+4)/2))+\varepsilon$ (improves to $2(1+\ln((d+3)/2))+\varepsilon$ when items can be rotated by $90^{\circ}$). For $d=1$, the AAR improves to $2(1+\ln(19/12))+\varepsilon\approx 2.919+\varepsilon$ (further improves to $2(1+\ln(3/2))+\varepsilon\approx 2.811+\varepsilon$ when items can be rotated).

Table 1: Comparison of asymptotic approximation ratios (AARs) of algorithms for (2, $d$) BP.

Algorithm | (2, $d$) BP | (2, 1) BP
\texttt{simple-pack} | $6(d+1)$ | $12$
\texttt{better-simple-pack} | $3(1+\ln(d+1))+\varepsilon$ | $3(1+\ln(3/2))+\varepsilon\approx 4.216+\varepsilon$
\texttt{simple-pack} with R&A | $2(1+\ln(3(d+1)))+\varepsilon$ | $2(1+\ln 6)+\varepsilon\approx 5.5835+\varepsilon$
\texttt{cb-pack} (without rotation) | $2(1+\ln(\frac{d+4}{2}))+\varepsilon$ | $2(1+\ln(19/12))+\varepsilon\approx 2.919+\varepsilon$
\texttt{cb-pack} (with rotation) | $2(1+\ln(\frac{d+3}{2}))+\varepsilon$ | $2(1+\ln(3/2))+\varepsilon\approx 2.811+\varepsilon$

We also show how to extend \texttt{simple-pack} and \texttt{better-simple-pack} to ($d_{g}$, $d_{v}$) BP to obtain AARs $2b(d_{v}+1)$ and $b(1+\ln(d_{v}+1)+\varepsilon)$, respectively, where $b\coloneqq 4^{d_{g}}+2^{d_{g}}$. We also give a similar algorithm for ($d_{g}$, $d_{v}$) KS having approximation ratio $b(1+\varepsilon)$.

1.2 Related Work

The bin packing problem (BP) has been the cornerstone of approximation algorithms [26]. The standard performance measure for BP algorithms is the asymptotic approximation ratio (AAR). An asymptotic polynomial time approximation scheme (APTAS) for BP was given by Fernandez de la Vega and Lueker [17], using linear grouping. Note that 1-D BP can be considered as (1, 0) BP as well as (0, 1) BP. The present best approximation algorithm for 1-D BP returns a packing in at most $\operatorname*{opt}+O(\log\operatorname*{opt})$ bins, where $\operatorname*{opt}$ is the optimal number of bins [23]. The knapsack problem (KS) is one of Karp’s 21 NP-complete problems. Lawler gave an FPTAS [34] for KS. For surveys on BP and KS, we refer the readers to [13, 28].

In [17], a $(d+\varepsilon)$-asymptotic-approximation algorithm was given for $d$-dimensional vector bin packing ($d$-D VBP). Chekuri and Khanna gave an $O(\log d)$-approximation for $d$-D VBP [11]. The study of 2-D geometric bin packing (2-D GBP) was initiated by [14]. Caprara gave a $T_{\infty}^{d-1}$-asymptotic-approximation Harmonic-Decreasing-Height (HDH) algorithm for $d$-D GBP [9], where $T_{\infty}\approx 1.6901$. This is still the best known approximation for $d$-D GBP for $d\geq 3$. Both 2-D GBP and 2-D VBP were shown to be APX-hard [4, 45].

Bansal, Caprara and Sviridenko [3] introduced the Round-and-Approx (R&A) framework to obtain improved approximations for both 2-D GBP and $d$-D VBP. The R&A framework is a two-stage process. First, a (possibly exponential-sized) set-covering LP relaxation (called the configuration LP) is solved approximately. Then, a randomized rounding procedure is applied for a few steps to pack a subset of items, after which only a ‘small’ fraction of items (called the residual instance) remain unpacked. In the second step, the residual instance is packed using a subset-oblivious algorithm. Intuitively, given a random subset $S$ of $I$ where each element occurs with probability about $1/k$, a $\rho$-approximate subset-oblivious algorithm produces a packing of $S$ in approximately $\rho\operatorname*{opt}(I)/k$ bins. In the R&A framework, one can obtain a $(1+\ln\rho)$-approximation algorithm using a $\rho$-approximate subset-oblivious algorithm. Two algorithms, the 1-D BP APTAS [17] and HDH [9], were shown to be subset-oblivious based on various properties of dual-weighting functions. This led to an AAR of $(1+\ln(1.69))$ and $(1+\ln d)$ for 2-D GBP and $d$-D VBP, respectively. However, it was cumbersome to extend subset-obliviousness to wider classes of algorithms.

Bansal and Khan [6] later extended the R&A framework for 2-D GBP to rounding-based algorithms, where the large dimensions are rounded up to $O(1)$ values and the packing of items is container-based, i.e., each bin contains a constant number of rectangular regions called containers and items are packed in a special way into containers. For 2-D GBP, they used a 1.5-asymptotic-approximation algorithm [24] to obtain the present best $(1+\ln 1.5)\approx 1.405$-asymptotic-approximation. For $d$-D VBP, Bansal et al. [5] used the R&A framework combined with a multi-objective budgeted matching problem to obtain the present best AAR of $(0.81+o_{d}(1)+\ln d)$.

Multidimensional knapsack is also well-studied. For the $d$-D vector knapsack problem ($d$-D VKS), Frieze and Clarke gave a PTAS [18]. For 2-D geometric knapsack (GKS), Jansen and Zhang [25] gave a 2-approximation algorithm, while the present best approximation ratio is 1.89 [20]. It is not even known whether 2-D GKS is APX-hard or not. There are many other related important geometric packing problems, such as strip packing [21, 19] and maximum independent set of rectangles [31]. For surveys on multidimensional packing, see [12, 30].

1.3 Technical Contribution

One of our main contributions is the enhancement of the R&A framework [6] to wider applications.

First, the R&A framework now also works with ($d_{g}$, $d_{v}$)-dimensional items, unifying the approach for geometric and vector packing. To use R&A, we need to solve the configuration LP of the corresponding bin packing problem. All previous applications ($d$-D VBP and 2-D GBP) of R&A solved the configuration LP within a $(1+\varepsilon)$ factor using a $(1+O(\varepsilon))$-approximate solution to KS. Due to the unavailability of a PTAS for (2, $d$) KS, we had to adapt and use a different linear programming algorithm [40] that uses an $\eta$-approximation algorithm for KS to $(1+\varepsilon)\eta$-approximately solve the configuration LP of the corresponding BP problem, for any constants $1<\eta$, $0<\varepsilon<1$.

Second, we introduce more freedom in choosing the packing structure. Unlike the R&A framework in [6] that worked only for container-based packing, we allow either relaxing the packing structure to non-container-based (like in \texttt{simple-pack}) or imposing packing constraints in addition to being container-based (like in \texttt{cb-pack}). This generalization can help in finding better algorithms for other variants of BP.

Finally, we allow rounding items in ways other than rounding up, if we can find a suitable way of unrounding a packing of rounded items. In \texttt{cb-pack}, we round down the width and height of some items to 0, and in \texttt{simple-pack}, we round each (2, $d$)-dimensional item $i$ to an item of width 1, height $x$, and each vector dimension $x$, where $x$ is a value depending on the original dimensions of $i$. As was shown in [30], if the large coordinates of items are rounded to $O(1)$ types, we cannot obtain better than $d$- and $4/3$-approximation for $d$-D VBP and 2-D GBP, respectively. However, as we now allow rounding down, we may be able to use the R&A framework with algorithms having better approximation ratios.

We also fix a minor error in the R&A framework of [30] (see Section E.1 for details).

In [3], it was mentioned: “One obstacle against the use of R&A for other problems is the difficulty in deriving subset-oblivious algorithms (or proving that existing algorithms are subset oblivious).” We expect that our progress will help in understanding the power of R&A to extend it to other set-cover type problems, e.g. round-SAP and round-UFP [16].

Another major contribution of ours is the handling of the (2, $d$) BP problem. This problem presents additional challenges over pure geometric BP, and our algorithm \texttt{cb-pack} demonstrates how to circumvent those challenges. For example, in geometric packing, items having low total area can be packed into a small number of bins using the Next-Fit-Decreasing-Height (NFDH) algorithm (see [14] and Lemmas 9 and 10 in Appendix A). This need not be true when items have weights, since the geometric dimensions can be small while the vector dimensions may be large. To handle this, we divide the items into different classes based on density (i.e., weight/area). We use the facts that items of low density and low total area can be packed into a small number of bins, and that items of high density that fit into a bin have low total area. Also, in geometric packing, we can sometimes move items with similar geometric dimensions across different bins (like in linear grouping [17, 39]). Vector dimensions again create problems here, since the items can have very different weights. To handle this, we only move items of similar geometric dimensions and density. This leads to a more structured packing, and we show how to find such a near-optimal structured packing efficiently.

2 Preliminaries and Notation

A valid packing of a given set of ($d_{g}$, $d_{v}$)-dimensional items into a ($d_{g}$, $d_{v}$)-dimensional bin is an arrangement of the items in the bin such that: (i) All items are packed in an axis-parallel manner, i.e., each item has its faces parallel to the faces of the bin. Formally, to pack an item $i$ into a bin, we need to decide its position $(x_{1}(i),x_{2}(i),\ldots,x_{d_{g}}(i))$ in the bin, and the item will be packed in the cuboidal region $\prod_{j=1}^{d_{g}}[x_{j}(i),x_{j}(i)+\ell_{j}(i)]$. (ii) The items are non-overlapping, i.e., the interiors of any two items do not intersect. Formally, for any two items $i_{1}$ and $i_{2}$ in the same bin, the sets $\prod_{j=1}^{d_{g}}(x_{j}(i_{1}),x_{j}(i_{1})+\ell_{j}(i_{1}))$ and $\prod_{j=1}^{d_{g}}(x_{j}(i_{2}),x_{j}(i_{2})+\ell_{j}(i_{2}))$ do not intersect. (iii) All items are packed completely inside the bin, i.e., for each item $i$ and each $j\in[d_{g}]$, $x_{j}(i)\geq 0$ and $x_{j}(i)+\ell_{j}(i)\leq 1$. (iv) The total weight packed in each of the $d_{v}$ vector dimensions is at most one.

Let $\mathcal{P}$ be a minimization problem and let $\mathcal{I}$ denote the set of all input instances of $\mathcal{P}$. Consider a polynomial-time approximation algorithm $\mathcal{A}$ for $\mathcal{P}$. For any input $I\in\mathcal{I}$, let $\operatorname*{opt}(I)$ denote the cost of the optimal solution and let $\mathcal{A}(I)$ denote the cost of $\mathcal{A}$’s output on $I$. Then the quantity

\[ \rho_{\mathcal{A}}\coloneqq\sup_{I\in\mathcal{I}}\frac{\mathcal{A}(I)}{\operatorname*{opt}(I)} \]

is called the approximation ratio of $\mathcal{A}$, and the asymptotic approximation ratio (AAR) of $\mathcal{A}$ is defined as

\[ \rho_{\mathcal{A}}^{\infty}\coloneqq\lim_{z\to\infty}\sup_{I\in\mathcal{I}}\left\{\frac{\mathcal{A}(I)}{\operatorname*{opt}(I)}\;\Big{|}\;\operatorname*{opt}(I)=z\right\}. \]

Intuitively, AAR denotes the algorithm’s performance for inputs whose optimum value is large.

Let $[n]\coloneqq\{1,2,\ldots,n\}$. Let $\operatorname{poly}(n)$ be the set of polynomial and sub-polynomial functions of $n$. Define $v_{\max}$, $\operatorname{vol}$, and $\operatorname{span}$ as follows: $v_{\max}(i)\coloneqq\max_{j=1}^{d_{v}}v_{j}(i)$, $\operatorname{vol}(i)\coloneqq\prod_{j=1}^{d_{g}}\ell_{j}(i)$, $\operatorname{span}(i)\coloneqq\max(\operatorname{vol}(i),v_{\max}(i))$. Intuitively, $\operatorname{span}(i)$ is a measure of the largeness of item $i\in[n]$. For convenience, let $v_{0}(i)\coloneqq\operatorname{vol}(i)$. Assume w.l.o.g. that $\operatorname{vol}(i)=0$ implies $(\forall j\in[d_{g}],\ell_{j}(i)=0)$. For a set $I$ of items, given a function $f:I\mapsto\mathbb{R}$, for $S\subseteq I$, define $f(S)\coloneqq\sum_{i\in S}f(i)$. This means, e.g., $\operatorname{vol}(S)\coloneqq\sum_{i\in S}\operatorname{vol}(i)$. For any bin packing algorithm $\mathcal{A}$, let $\mathcal{A}(I)$ be the resulting bin packing of items $I$, and let $|\mathcal{A}(I)|$ be the number of bins in $\mathcal{A}(I)$. Define $\operatorname*{opt}(I)$ as the minimum number of bins needed to pack $I$.
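The following small Python sketch (using a hypothetical item representation with explicit length and weight lists, which is not part of the paper) illustrates how $\operatorname{vol}$, $v_{\max}$ and $\operatorname{span}$ are computed and summed over a set of items.

```python
from dataclasses import dataclass
from typing import List
import math

@dataclass
class Item:
    """A (d_g, d_v)-dimensional item: geometric lengths and vector weights, all in [0, 1]."""
    lengths: List[float]   # l_1(i), ..., l_{d_g}(i)
    weights: List[float]   # v_1(i), ..., v_{d_v}(i)

def vol(it: Item) -> float:
    # vol(i) = product of the geometric lengths
    return math.prod(it.lengths)

def vmax(it: Item) -> float:
    # vmax(i) = largest weight (0 if there are no vector dimensions)
    return max(it.weights, default=0.0)

def span(it: Item) -> float:
    # span(i) = max(vol(i), vmax(i)), the measure of largeness of item i
    return max(vol(it), vmax(it))

def span_sum(items: List[Item]) -> float:
    # span(S) = sum of span(i) over i in S
    return sum(span(it) for it in items)
```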

Lemma 2.

For ($d_{g}$, $d_{v}$) items $I$, $\left\lceil\operatorname{span}(I)\right\rceil\leq(d_{v}+1)\operatorname*{opt}(I)$.

Proof.

Let there be $m$ bins in an optimal packing of ($d_{g}$, $d_{v}$)-dimensional items $I$. In this optimal packing, let $J_{j}$ be the items in the $j^{\textrm{th}}$ bin. Then

\[ \left\lceil\operatorname{span}(I)\right\rceil = \left\lceil\sum_{k=1}^{m}\sum_{i\in J_{k}}\max_{j=0}^{d_{v}}v_{j}(i)\right\rceil \leq \left\lceil\sum_{k=1}^{m}\sum_{i\in J_{k}}\sum_{j=0}^{d_{v}}v_{j}(i)\right\rceil \leq \left\lceil\sum_{k=1}^{m}\sum_{j=0}^{d_{v}}1\right\rceil = (d_{v}+1)m. \qquad\qed \]

For $d_{g}=2$, let $w(i)\coloneqq\ell_{1}(i)$ and $h(i)\coloneqq\ell_{2}(i)$ be the width and height of item $i$, respectively. The area of item $i$ is $a(i)\coloneqq w(i)h(i)=\operatorname{vol}(i)$. The items in (2, 0) BP are called ‘rectangles’.

2.1 Configuration LP

For a bin packing instance $I$ containing $n$ items, a configuration of $I$ is a packing of a subset of items of $I$ into a bin (see Fig. 2). We denote the set of all configurations of $I$ by $\mathcal{C}$. The configuration matrix of $I$ is a matrix $A\in\{0,1\}^{n\times|\mathcal{C}|}$ where $A[i,C]$ is 1 if configuration $C$ contains item $i$ and 0 otherwise. To solve the bin packing problem, it is sufficient to decide the number of bins of each configuration. This gives us the following linear programming relaxation, called the configuration LP:

\[ \min_{x\in\mathbb{R}^{|\mathcal{C}|}}\sum_{C\in\mathcal{C}}x_{C}\quad\textrm{where}\quad Ax\geq\mathbf{1}\textrm{ and }x\geq 0 \]

A feasible solution $x$ to the configuration LP is a vector in $\mathbb{R}^{|\mathcal{C}|}$, and $|\mathcal{C}|$, the number of configurations, can be exponential in $n$. But if $|\operatorname*{support}(x)|\in\operatorname{poly}(n)$, then $x$ may have a polynomial-sized representation.
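As a concrete illustration (not taken from the paper), the following Python sketch enumerates all configurations of a tiny 1-D BP instance and solves its configuration LP with scipy; the instance, its sizes, and the use of scipy.optimize.linprog are our own assumptions for the example.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy 1-D bin packing instance (sizes in (0, 1]); here a configuration is any
# subset of items whose total size fits in one unit bin.
sizes = [0.6, 0.5, 0.4]
n = len(sizes)
configs = [S for r in range(1, n + 1)
           for S in itertools.combinations(range(n), r)
           if sum(sizes[i] for i in S) <= 1.0]

# Configuration matrix A: A[i][c] = 1 iff configuration c contains item i.
A = np.array([[1.0 if i in C else 0.0 for C in configs] for i in range(n)])

# Configuration LP:  min 1^T x  s.t.  A x >= 1, x >= 0  (rewritten as -A x <= -1).
res = linprog(c=np.ones(len(configs)), A_ub=-A, b_ub=-np.ones(n),
              bounds=[(0, None)] * len(configs))
print("configLP* =", res.fun)            # 2.0 for this instance
for C, xc in zip(configs, res.x):
    if xc > 1e-9:
        print(C, round(xc, 3))
```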

Figure 2: Top row contains four (2, 2)-dimensional items (histograms show vector dimensions). Three configurations are shown below it. Placing all items into one bin would violate both geometric and vector constraints. Placing the green item in configuration 3 violates vector constraints.

3 Simple Algorithms

In this section, we look at simple algorithms for (2, $d$) BP. They are based on Steinberg’s algorithm for rectangle packing [42], which has the following corollary:

Lemma 3 (Proof in Appendix B).

Let $I$ be a set of rectangles where $a(I)\leq 1$. Then there is an $O(n\log^{2}n/\log\log n)$-time algorithm to pack $I$ into 3 square bins of side length 1.

Let $I$ be a (2, $d$) BP instance. Let $\widehat{I}\coloneqq\{\operatorname{span}(i):i\in I\}$, i.e., $\widehat{I}$ is a 1-D BP instance. The algorithm $\texttt{simple-pack}(I)$ first runs the Next-Fit algorithm on $\widehat{I}$. Let $[\widehat{J}_{1},\widehat{J}_{2},\ldots,\widehat{J}_{m}]$ be the resulting bin packing of $\widehat{I}$ into $m$ bins. For each $\widehat{J}_{j}\subseteq\widehat{I}$, let $J_{j}$ be the corresponding items from $I$. Then $\forall k\in[d_{v}]$, $v_{k}(J_{j})\leq 1$ and $\operatorname{vol}(J_{j})\leq 1$. \texttt{simple-pack} then uses Steinberg’s algorithm to pack each $J_{j}$ into at most 3 bins, giving a packing of $I$ into at most $3m$ bins. By the property of Next-Fit (see Claim 8 in Appendix A), we get that $m\leq\left\lceil 2\operatorname{span}(I)\right\rceil$. By Lemma 2, we get $3\left\lceil 2\operatorname{span}(I)\right\rceil\leq 6(d+1)\operatorname*{opt}(I)$. This gives us the following theorem.

Theorem 4.

For (2, $d$) BP, \texttt{simple-pack} uses at most $3\left\lceil 2\operatorname{span}(I)\right\rceil$ bins, so \texttt{simple-pack} is a $6(d+1)$-approximation algorithm. \texttt{simple-pack} runs in $O(nd+n\log^{2}n/\log\log n)$ time.
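A minimal sketch of the grouping step of \texttt{simple-pack}, assuming the Item/span helpers from the earlier snippet; steinberg_pack is a named placeholder for Steinberg's algorithm (not implemented here), which packs each group into at most 3 unit bins.

```python
def next_fit(sizes):
    """Next-Fit for 1-D bin packing: greedily fill the current bin, open a new one on overflow.
    Returns a list of bins, each a list of item indices."""
    bins, current, load = [], [], 0.0
    for i, s in enumerate(sizes):
        if load + s > 1.0 and current:
            bins.append(current)
            current, load = [], 0.0
        current.append(i)
        load += s
    if current:
        bins.append(current)
    return bins

def simple_pack(items):
    """Sketch of simple-pack for (2, d) BP: run Next-Fit on the spans, so every group J_j has
    vol(J_j) <= 1 and v_k(J_j) <= 1, then pack each group geometrically with Steinberg's
    algorithm (placeholder call below), using at most 3 bins per group."""
    groups = next_fit([span(it) for it in items])
    bins = []
    for group in groups:
        bins.extend(steinberg_pack([items[i] for i in group]))  # placeholder subroutine
    return bins
```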

The algorithm \texttt{better-simple-pack} first computes $\widetilde{I}$, which is a $(d+1)$-D VBP instance obtained by replacing the geometric dimensions of each item $i\in I$ by a single vector dimension $a(i)$. It computes a bin packing of $\widetilde{I}$ using any algorithm $\mathcal{A}$. It then uses Steinberg’s algorithm to obtain a bin packing of $I$ into at most $3|\mathcal{A}(\widetilde{I})|$ bins.

Note that $\operatorname*{opt}(\widetilde{I})\leq\operatorname*{opt}(I)$. If $\mathcal{A}$ has AAR $\alpha$, then $|\mathcal{A}(\widetilde{I})|\leq\alpha\operatorname*{opt}(\widetilde{I})+O(1)$. Therefore, \texttt{better-simple-pack} has AAR $3\alpha$. The $(d+1)$-D VBP algorithm by [3] (parametrized by a constant $\varepsilon>0$) gives $\alpha=1+\ln(d+1)+\varepsilon$ and the algorithm by [5] gives $\alpha=1.5+\ln((d+2)/2)+\varepsilon$ (improves to $\alpha=1+\ln(1.5)+\varepsilon$ for $d=1$).
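A sketch of the instance transformation used by \texttt{better-simple-pack}, again with the hypothetical Item representation from above; the choice of VBP algorithm $\mathcal{A}$ and the per-group Steinberg repacking are not shown.

```python
def to_vector_instance(items):
    """Map each (2, d)-dimensional item to a (d+1)-dimensional vector whose first coordinate
    is the item's area a(i) = w(i) * h(i) and whose remaining coordinates are its d weights.
    Any (d+1)-D VBP algorithm can then be run on the result; each returned bin (a set of items
    with total area <= 1) is finally repacked into at most 3 bins via Steinberg's algorithm."""
    return [[vol(it)] + list(it.weights) for it in items]
```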

Using similar ideas, we can get an algorithm for (2, $d$) KS (see Appendix D).

Although \texttt{simple-pack} has a worse AAR than \texttt{better-simple-pack}, the number of bins used by \texttt{simple-pack} is upper-bounded in terms of $\operatorname{span}$, which is a useful property. Because of this, we will use it as a subroutine in other algorithms (like \texttt{cb-pack}).

The algorithms for (2, $d$) packing can be extended to ($d_{g}$, $d_{v}$) packing. We just need an algorithm for the following problem: given a set $J$ of $d_{g}$-dimensional cuboids where $\operatorname{vol}(J)\leq 1$, pack $J$ into a small number of bins. We used Steinberg’s algorithm when $d_{g}=2$. When $d_{g}=3$, we can use the algorithm of [15, Section 2] to pack $J$ into at most 9 bins. For $d_{g}>3$, we can use a variant of the $\mathrm{HDH}_{4}$ algorithm [10] to pack $J$ into at most $4^{d_{g}}+2^{d_{g}}$ bins (details in Appendix C). Therefore, \texttt{simple-pack} will use $b\left\lceil 2\operatorname{span}(I)\right\rceil$ bins, where $b\coloneqq 3$ when $d_{g}=2$, $b\coloneqq 9$ when $d_{g}=3$, and $b\coloneqq 4^{d_{g}}+2^{d_{g}}$ when $d_{g}>3$. Therefore, \texttt{simple-pack} is $2b(d_{v}+1)$-approximate. Similarly, \texttt{better-simple-pack} has AAR $b(1+\ln(d_{v}+1)+\varepsilon)$.

4 Round-and-Approx Framework

We enhance the R&A framework as a general outline for designing approximation algorithms for the generalized bin packing problem. We denote the algorithm for the R&A framework as $\texttt{rna-pack}(I,\beta,\varepsilon)$, which takes as input a set $I$ of ($d_{g}$, $d_{v}$)-dimensional items and parameters $\beta\geq 1$ and $\varepsilon\in(0,1)$. The steps of the algorithm are as follows (see Algorithm 1 for a more formal description).

Algorithm 1 $\texttt{rna-pack}(I,\beta,\varepsilon)$: Computes a bin packing of $I$. $I$ is a set of ($d_{g}$, $d_{v}$)-dimensional items and $\beta\geq 1$.
1: $\widehat{x}=\texttt{solve-config-lp}(I)$
2: repeat $T\coloneqq\left\lceil(\ln\beta)\lVert\widehat{x}\rVert_{1}\right\rceil$ times
3:     Select a configuration $C$ with probability $\widehat{x}_{C}/\lVert\widehat{x}\rVert_{1}$.
4:     Pack a bin according to $C$.
5: end repeat
6: Let $S$ be the unpacked items from $I$. // $S$ is called the set of residual items.
7: Initialize $J_{\mathrm{best}}$ to null.
8: for $(\widetilde{I},D)\in\texttt{round}(I)$ do // $\texttt{round}(I)$ outputs a set of pairs.
9:     $J_{D}=\texttt{simple-pack}(S\cap D)$
10:     Let $\pi$ be a bijection from $I-D$ to $\widetilde{I}$. Let $\widetilde{S}\coloneqq\{\pi(i):i\in S-D\}$.
11:     $\widetilde{J}=\texttt{complex-pack}(\widetilde{S})$
12:     $J=\texttt{unround}(\widetilde{J})$
13:     if $J_{\mathrm{best}}$ is null or $|J_{D}\cup J|<|J_{\mathrm{best}}|$ then
14:         $J_{\mathrm{best}}=J_{D}\cup J$
15:     end if
16: end for
17: Pack $S$ according to $J_{\mathrm{best}}$.
  1. Solve the configuration LP of $I$. Let $\widehat{x}$ be a $\mu$-asymptotic-approximate solution to the configuration LP. Note that each index of $\widehat{x}$ corresponds to a configuration. In all previous applications of the R&A framework, $\mu=1+\varepsilon$. However, in some of our applications, $\mu$ will be a large constant.

  2. Randomized rounding of the configuration LP: For $T\coloneqq\left\lceil(\ln\beta)\lVert\widehat{x}\rVert_{1}\right\rceil$ steps do the following: select a configuration $C$ with probability $\widehat{x}_{C}/\lVert\widehat{x}\rVert_{1}$. Pack $T$ bins, one according to each of these selected $T$ configurations. Let $S$ be the remaining items that are not packed, called the residual instance.

  3. Rounding of items: We define a subroutine \texttt{round} that takes items $I$ and parameter $\varepsilon$ as input\footnote{The input to \texttt{round} is $I$ instead of $S$ because $S$ is random and we want to round items deterministically, i.e., the rounding of each item $i\in S$ should not depend on which other items from $I$ lie in $S$. In fact, this is where the old R&A framework [30] introduced an error. See Section E.1 for details.}. It discards a set $D\subseteq I$ of items such that $\operatorname{span}(D)\leq\varepsilon\operatorname{span}(I)$ and then modifies each item in $I-D$ to get a set $\widetilde{I}$ of items. We say that the output of $\texttt{round}(I,\varepsilon)$ is $(\widetilde{I},D)$, where the items in $\widetilde{I}$ are called rounded items. Intuitively, after rounding, the items in $\widetilde{I}$ are of $O(1)$ types, which makes packing easier. Also, since $\operatorname{span}(D)$ is small, $D\cap S$ can be packed into a small number of bins using \texttt{simple-pack}.

     We impose some restrictions on \texttt{round}, which we denote as conditions C1 and C2 and which we describe in Section 4.2. Previous versions of R&A only allowed modifications where each item’s dimensions were rounded up. We do not have this restriction; we also allow rounding down some dimensions. We also allow \texttt{round} to output an $O(\operatorname{poly}(n))$-sized list of guesses of $(\widetilde{I},D)$.

  4. Pack rounded items: Let $\widetilde{S}$ be the rounded items corresponding to $S\setminus D$. Pack $\widetilde{S}$ into bins using any bin packing algorithm that satisfies ‘condition C3’, which we describe in Section 4.3. Let us name this algorithm \texttt{complex-pack}.

  5. Unrounding: Given a bin packing of $\widetilde{S}$, let \texttt{unround} be a subroutine that computes a bin packing of $S\setminus D$. \texttt{unround} is trivial in previous versions of R&A, because they only increase the dimensions of items during rounding. In our applications, we may round down items, so \texttt{unround} can be non-trivial. \texttt{unround} can be any algorithm that satisfies ‘condition C4’, which we describe in Section 4.4.

We can think of the R&A framework as a meta-algorithm, i.e., we give it the algorithms \texttt{round}, \texttt{complex-pack} and \texttt{unround} as inputs and it outputs the algorithm \texttt{rna-pack}. The R&A framework requires that \texttt{round}, \texttt{complex-pack} and \texttt{unround} satisfy four conditions C1, C2, C3, C4, which we describe in Sections 4.2, 4.3 and 4.4. Prospective users of the R&A framework need to design these three subroutines and prove that they satisfy these four conditions.

Intuitively, \texttt{rna-pack} first packs some items into $T$ bins by randomized rounding of $\widehat{x}$. We can prove that $\Pr[i\in S]\leq 1/\beta$, so $S$ contains a small fraction of the items in $I$ (see Lemma 5). We will then try to prove that if the rest of the algorithm ($\texttt{round}+\texttt{complex-pack}+\texttt{unround}$) packs $I$ into $m$ bins, then it will pack $S$ into roughly $m/\beta$ bins. This notion was referred to in [3] as subset-obliviousness. We will use subset-obliviousness to bound the AAR of \texttt{rna-pack}.

Lemma 5.

$\forall i\in I$, $\Pr(i\in S)\leq\exp\left(-\frac{T}{\lVert\widehat{x}\rVert_{1}}\right)\leq\frac{1}{\beta}$.

Proof.

Let $C_{1},C_{2},\ldots,C_{T}$ be the configurations chosen during randomized rounding (line 3 in Algorithm 1). Let $\mathcal{C}_{i}$ be the configurations that contain the element $i$.

\[ \Pr(i\in S) = \Pr\left(\bigwedge_{t=1}^{T}(C_{t}\not\in\mathcal{C}_{i})\right) = \prod_{t=1}^{T}\Pr(C_{t}\not\in\mathcal{C}_{i}) \qquad\text{(all $C_{t}$ are independent)} \]
\[ = \prod_{t=1}^{T}\left(1-\sum_{C\in\mathcal{C}_{i}}\Pr(C_{t}=C)\right) = \left(1-\sum_{C\in\mathcal{C}_{i}}\frac{\widehat{x}_{C}}{\lVert\widehat{x}\rVert_{1}}\right)^{T} \]
\[ \leq \left(1-\frac{1}{\lVert\widehat{x}\rVert_{1}}\right)^{T} \qquad\text{(constraint in the configuration LP for item $i$)} \]
\[ \leq \exp\left(-\frac{T}{\lVert\widehat{x}\rVert_{1}}\right) \leq \frac{1}{\beta} \qquad\qed \]
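The following Monte Carlo sketch (with a made-up fractional solution x_hat on three hand-picked configurations; none of this data comes from the paper) illustrates the randomized-rounding step and checks the bound of Lemma 5 empirically.

```python
import math, random

# Hypothetical instance with 4 items; each configuration is a set of item indices, and
# x_hat is assumed feasible for the configuration LP (every item is covered to total >= 1).
configs = [{0, 1}, {1, 2, 3}, {0, 2, 3}]
x_hat   = [0.8,     0.7,       0.6]
beta    = 2.0
norm1   = sum(x_hat)
T       = math.ceil(math.log(beta) * norm1)   # number of rounding steps

def residual(rng):
    """One run of randomized rounding: draw T configurations i.i.d. with Pr[C] = x_hat_C / ||x_hat||_1
    and return the residual set S of items left uncovered."""
    covered = set()
    for _ in range(T):
        r, acc = rng.random() * norm1, 0.0
        for C, xc in zip(configs, x_hat):
            acc += xc
            if r <= acc:
                covered |= C
                break
    return set(range(4)) - covered

rng, runs, miss = random.Random(0), 20000, [0] * 4
for _ in range(runs):
    for i in residual(rng):
        miss[i] += 1
for i in range(4):
    print(f"item {i}: Pr[i in S] ~ {miss[i] / runs:.3f}  (Lemma 5 bound: {1 / beta:.3f})")
```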

Section 4.6 shows how to break \texttt{simple-pack} into \texttt{round}, \texttt{complex-pack} and \texttt{unround} and use it with R&A.

4.1 Fractional Structured Packing

Let $(\widetilde{I},D)$ be an output of $\texttt{round}(I)$ and let $\widetilde{X}$ be an arbitrary subset of $\widetilde{I}$. Our analysis of \texttt{rna-pack} is based around a concept called fractional structured packing of $\widetilde{X}$. Note that the notion of fractional structured packing only appears in the analysis of \texttt{rna-pack}. It is not needed to describe any algorithm.

First, we discuss the notion of structured packing. Several types of packings are used in bin packing algorithms, such as container-based [24], shelf-based (e.g., the output of Caprara’s HDH algorithm [10]), guillotine-based [22], corridor-based [20], etc. A type of packing is said to be a structured packing if it satisfies downward closure, i.e., a structured packing remains structured even after removing some items from the packed bins. For example, Jansen and Prädel [24] showed that given any packing of a 2-D GBP instance into $m$ bins, we can slice some of the items and repack them into $(1.5+\varepsilon)m+O(1)$ bins such that the resulting packing is container-based. Container-based roughly means that in each bin, items are packed into rectangular regions called containers, and the containers’ heights and widths belong to a fixed set of $O(1)$ values. Hence, container-based is an example of a structured packing, as a container-based packing of bins remains container-based even if we remove some items from the packed bins. In fact, all the commonly used packing types are indeed structured packings. Also note that the set of all possible packings is trivially a structured packing. Our R&A framework gives algorithm designers the freedom to use or define structured packing in any way they want, as long as they satisfy the downward closure property. Typically, the choice of which definition of structured packing to use will depend on the ease of proving Conditions C2 and C3 for that definition. This helped us go beyond the R&A framework of Bansal and Khan [6], which only considered container-based packings of $\widetilde{I}$.

Intuitively, a fractional structured packing is one where we slice each item of $\widetilde{X}$ into pieces and then find a structured packing of the pieces. Define $\operatorname{fsopt}(\widetilde{X})$ as the number of bins used in the optimal fractional structured packing of $\widetilde{X}$. To analyze the AAR of \texttt{rna-pack}, we will bound the number of bins used to pack the residual instance $S$ in terms of $\operatorname{fsopt}(\widetilde{S})$, and then we will bound $\operatorname{fsopt}(\widetilde{S})$ in terms of $\operatorname*{opt}(I)$.

To define fractional structured packing, we first define what it means to slice an item. From a geometric perspective, slicing an item perpendicular to the $k^{\textrm{th}}$ dimension means cutting the item into 2 parts using a hyperplane perpendicular to the $k^{\textrm{th}}$ axis. The vector dimensions get split proportionately across the slices. E.g., for $d_{g}=2$, if $k=1$ for item $i$, then we slice $i$ using a vertical cut, and if $k=2$, we slice $i$ using a horizontal cut.

Definition 1 (Slicing an item).

Let $i$ be a ($d_{g}$, $d_{v}$)-dimensional item. Slicing $i$ perpendicular to geometric dimension $k$ with proportionality $\alpha$ (where $0<\alpha<1$) is the operation of replacing $i$ by two items $i_{1}$ and $i_{2}$ such that: (i) $\forall j\neq k,\ \ell_{j}(i)=\ell_{j}(i_{1})=\ell_{j}(i_{2})$, (ii) $\ell_{k}(i_{1})=\alpha\ell_{k}(i)$ and $\ell_{k}(i_{2})=(1-\alpha)\ell_{k}(i)$, (iii) $\forall j\in[d_{v}],\ v_{j}(i_{1})=\alpha v_{j}(i)\wedge v_{j}(i_{2})=(1-\alpha)v_{j}(i)$.
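A direct transcription of Definition 1, reusing the hypothetical Item class from Section 2: geometric dimension $k$ splits as $\alpha$ vs. $1-\alpha$, and every weight splits proportionately.

```python
def slice_item(it: Item, k: int, alpha: float):
    """Slice item `it` perpendicular to geometric dimension k (0-indexed) with proportionality alpha,
    returning the two resulting pieces as in Definition 1."""
    assert 0.0 < alpha < 1.0
    def piece(frac: float) -> Item:
        lengths = list(it.lengths)
        lengths[k] = frac * it.lengths[k]           # only dimension k changes
        weights = [frac * v for v in it.weights]    # weights split proportionately
        return Item(lengths=lengths, weights=weights)
    return piece(alpha), piece(1.0 - alpha)
```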

Definition 2 (Fractional packing).

Let $\widetilde{I}$ be ($d_{g}$, $d_{v}$)-dimensional items, where for each item $i\in\widetilde{I}$, we are given a set $X(i)$ of axes perpendicular to which we can repeatedly slice $i$ ($X(i)$ can be empty, which would mean that the item cannot be sliced). If we slice items as per their given axes and then pack the slices into bins, then the resulting packing is called a fractional bin packing.

See Fig. 3 for an example of a fractional packing.

Figure 3: Example of a fractional packing of two items into a bin.

4.2 Properties of \texttt{round}

Definition 3 (Density vector).

The density vector of a ($d_{g}$, $d_{v}$)-dimensional item $i$ is the vector $v_{\operatorname{span}}\coloneqq\left[v_{0}(i)/\operatorname{span}(i),\,v_{1}(i)/\operatorname{span}(i),\,\ldots,\,v_{d_{v}}(i)/\operatorname{span}(i)\right]$. Recall that $v_{0}(i)\coloneqq\operatorname{vol}(i)$ and note that $\|v_{\operatorname{span}}\|_{\infty}=1$.
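Continuing the same hypothetical representation, the density vector of Definition 3 can be computed as follows (assuming $\operatorname{span}(i)>0$).

```python
def density_vector(it: Item):
    """[vol(i), v_1(i), ..., v_{d_v}(i)] / span(i); by definition its largest entry equals 1."""
    s = span(it)
    return [vol(it) / s] + [v / s for v in it.weights]
```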

The subroutine $\texttt{round}(I)$ returns a set of pairs of the form $(\widetilde{I},D)$. Condition C1 is defined as the following constraints over each pair $(\widetilde{I},D)$:

  • C1.1. Small discard: $D\subseteq I$ and $\operatorname{span}(D)\leq\varepsilon\operatorname{span}(I)$.

  • C1.2. Bijection from $I-D$ to $\widetilde{I}$: Each item in $\widetilde{I}$ is obtained by modifying an item in $I-D$. Let $\pi$ be the bijection from $I-D$ to $\widetilde{I}$ that captures this relation.

  • C1.3. Homogeneity properties: \texttt{round} partitions items in $\widetilde{I}$ into a constant number of classes: $\widetilde{K}_{1},\widetilde{K}_{2},\ldots,\widetilde{K}_{q}$. These classes should satisfy the following properties, which we call homogeneity properties:

    • All items in a class have the same density vector.

    • For each class $\widetilde{K}_{j}$, we decide the set $X$ of axes perpendicular to which we can slice items in $\widetilde{K}_{j}$. If items in a class $\widetilde{K}_{j}$ are not allowed to be sliced perpendicular to dimension $k$, then all items in that class have the same length along dimension $k$. (For example, if $d_{g}=2$ and vertical cuts are forbidden, then all items have the same width.)

  • C1.4. Bounded expansion: Let $C$ be any configuration of $I$ and $\widetilde{K}$ be any one of the classes of $\widetilde{I}$. Let $\widetilde{C}\coloneqq\{\pi(i):i\in C-D\}$. Then we need to prove that $\operatorname{span}(\widetilde{K}\cap\widetilde{C})\leq c_{\max}$ for some constant $c_{\max}$. Let us call this result the ‘bounded expansion lemma’.

Intuitively, the homogeneity properties allow us to replace (a slice of) an item in a fractional packing by slices of other items of the same class. Thus, while trying to get a fractional packing, we can focus on the item classes, which are constant in number, instead of focusing on the $n$ items. Intuitively, the bounded expansion lemma ensures that we do not round up items too much.

Condition C2 (also called the structural theorem): For some constant $\rho>0$ and for some $(\widetilde{I},D)\in\texttt{round}(I)$, $\operatorname{fsopt}(\widetilde{I})\leq\rho\operatorname*{opt}(I)+O(1)$.

Intuitively, the structural theorem says that allowing slicing as per \texttt{round} and imposing a structure on the packing does not increase the minimum number of bins by too much. The AAR of \texttt{rna-pack} increases with $\rho$, so we want $\rho$ to be small.

4.3 \texttt{complex-pack}

Condition C3: For some constant $\alpha>0$ and for any $(\widetilde{I},D)\in\texttt{round}(I)$ and any $\widetilde{X}\subseteq\widetilde{I}$, $\texttt{complex-pack}(\widetilde{X})$ packs $\widetilde{X}$ into at most $\alpha\operatorname{fsopt}(\widetilde{X})+O(1)$ bins.

Condition C3 says that we can find a packing that is close to the optimal fractional structured packing. The AAR of \texttt{rna-pack} increases with $\alpha$, so we want $\alpha$ to be small.

4.4 \texttt{unround}

Condition C4: For some constant $\gamma>0$, if $\texttt{complex-pack}(\widetilde{S})$ outputs a packing of $\widetilde{S}$ into $m$ bins, then \texttt{unround} modifies that packing to get a packing of $S-D$ into $\gamma m+O(1)$ bins.

Intuitively, condition C4 says that unrounding does not increase the number of bins by too much. The AAR of \texttt{rna-pack} increases with $\gamma$, so a small $\gamma$ is desirable. If \texttt{round} only increases the dimensions of items, then unrounding is trivial and $\gamma=1$.

4.5 AAR of R&A

Recall that \texttt{simple-pack} is a $2b(d_{v}+1)$-approximation algorithm for ($d_{g}$, $d_{v}$) BP (see Section 3), where $b\coloneqq 3$ when $d_{g}=2$, $b\coloneqq 9$ when $d_{g}=3$, $b\coloneqq 4^{d_{g}}+2^{d_{g}}$ when $d_{g}>3$, and $b\coloneqq 1$ when $d_{g}=1$. Our key ingredient in the analysis of R&A is the following lemma. We give a sketch of the proof here and defer the full proof to Appendix E.

Lemma 6.

Let $\widetilde{S}$ be as computed by $\texttt{rna-pack}(I,\beta,\varepsilon)$. Then with high probability, we get

\[ \operatorname{fsopt}(\widetilde{S})\leq\operatorname{fsopt}(\widetilde{I})/\beta+2b\mu\varepsilon\operatorname*{opt}(I)+O(1/\varepsilon^{2}). \]
Proof sketch.

Our proof of Lemma 6 is inspired by the analysis in [30]. We prove it by analyzing the fractional structured configuration LP of $\widetilde{I}$.

Definition 4.

Let $(\widetilde{I},D)\in\texttt{round}(I)$. Suppose \texttt{round} partitioned $\widetilde{I}$ into classes $\widetilde{K}_{1},\widetilde{K}_{2},\ldots,\widetilde{K}_{q}$. Let $\mathcal{C}_{f}$ be the set of all structured configurations of items in $\widetilde{I}$ that allow items to be sliced as per \texttt{round}. For any $\widetilde{S}\subseteq\widetilde{I}$, the fractional structured configuration LP of $\widetilde{S}$, denoted as $\operatorname{fsconfigLP}(\widetilde{S})$, is

\[ \begin{array}{lll} \displaystyle\min_{x\in\mathbb{R}^{|\mathcal{C}_{f}|}} & \displaystyle\sum_{C\in\mathcal{C}_{f}}x_{C} & \\ \textrm{where} & \displaystyle\sum_{C\in\mathcal{C}_{f}}\operatorname{span}(C\cap\widetilde{K}_{j})\,x_{C}\;\geq\;\operatorname{span}(\widetilde{S}\cap\widetilde{K}_{j}) & \forall j\in[q] \\ & x_{C}\geq 0 & \forall C\in\mathcal{C}_{f} \end{array} \]

The integer version of this program is denoted as $\operatorname{fsconfigIP}(\widetilde{S})$. The optimal objective values of $\operatorname{fsconfigLP}(\widetilde{S})$ and $\operatorname{fsconfigIP}(\widetilde{S})$ are denoted as $\operatorname{fsconfigLP}^{*}(\widetilde{S})$ and $\operatorname{fsconfigIP}^{*}(\widetilde{S})$, respectively.

Intuitively, $\operatorname{fsconfigIP}$ is the same as the structured fractional bin packing problem because of the downward closure property, so $\operatorname{fsconfigIP}^{*}(\widetilde{S})=\operatorname{fsopt}(\widetilde{S})$\footnote{There is a caveat in the definition of $\operatorname{fsconfigIP}^{*}(\widetilde{S})$, because of which this isn’t completely true. See Lemma 13 in Appendix E for an explanation and a fix.}. By the homogeneity property (C1.3), the number of constraints in this LP is a constant $q$. So by the rank lemma\footnote{Rank Lemma: the number of non-zero variables in an extreme-point solution to an LP is at most the number of constraints. See Lemma 2.1.4 in [33].}, we can show that $|\operatorname{fsopt}(\widetilde{S})-\operatorname{fsconfigLP}^{*}(\widetilde{S})|\in O(1)$ (see Lemmas 13 and 14 in Appendix E). Now to prove Lemma 6, roughly, we need to show that $\operatorname{fsconfigLP}^{*}(\widetilde{S})\lessapprox\operatorname{fsconfigLP}^{*}(\widetilde{I})/\beta$.

The RHS in the $j^{\textrm{th}}$ constraint of $\operatorname{fsconfigLP}(\widetilde{S})$ is a random variable $\operatorname{span}(\widetilde{S}\cap\widetilde{K}_{j})$. The RHS in the $j^{\textrm{th}}$ constraint of $\operatorname{fsconfigLP}(\widetilde{I})$ is $\operatorname{span}(\widetilde{K}_{j})$. Note that $\operatorname*{\mathbb{E}}(\operatorname{span}(\widetilde{S}\cap\widetilde{K}_{j}))\leq\operatorname{span}(\widetilde{K}_{j})/\beta$ by Lemma 5. In fact, we can harness the randomness of $\widetilde{S}$, the bounded expansion property (C1.4), and a concentration inequality [35], to show that $\operatorname{span}(\widetilde{S}\cap\widetilde{K}_{j})\lessapprox\operatorname{span}(\widetilde{K}_{j})/\beta$. Therefore, if $x^{*}$ is an optimal solution to $\operatorname{fsconfigLP}(\widetilde{I})$, then $x^{*}/\beta$ is roughly a solution to $\operatorname{fsconfigLP}(\widetilde{S})$, which implies $\operatorname{fsconfigLP}^{*}(\widetilde{S})\lessapprox\operatorname{fsconfigLP}^{*}(\widetilde{I})/\beta$. ∎

Theorem 7.

With high probability, the number of bins used by $\texttt{rna-pack}(I,\beta,\varepsilon)$ is at most

\[ \left((\ln\beta)\mu+\frac{\gamma\alpha\rho}{\beta}+2b(d_{v}+1+\gamma\alpha\mu)\varepsilon\right)\operatorname*{opt}(I)+O(1/\varepsilon^{2}). \]
Proof.

Let $J_{\mathrm{LP}}$ be the set of bins packed in the randomized rounding of the configuration LP step (see line 4 in Algorithm 1), $J_{D}$ be the set of bins used to pack the discarded items $D\cap S$, $J$ be the set of bins used to pack the rest of the items $S\setminus D$, and $\widetilde{J}$ be the set of bins used by \texttt{complex-pack} to pack the items in $\widetilde{S}$.

Then $|J_{\mathrm{LP}}|\leq T=\lceil(\ln\beta)\lVert\widehat{x}\rVert_{1}\rceil\leq(\ln\beta)\mu\operatorname*{opt}(I)+O(1)$.

Now, we have $|J_{D}|\leq b\left\lceil 2\operatorname{span}(D)\right\rceil\leq 2b\varepsilon\operatorname{span}(I)+b\leq 2b(d_{v}+1)\varepsilon\operatorname*{opt}(I)+b$. The first inequality follows from the property of \texttt{simple-pack}, the second follows from C1.1 (Small Discard), and the last follows from Lemma 2. Finally,

\[ |J| \leq \gamma|\widetilde{J}|+O(1) \qquad\text{(property of \texttt{unround} (C4))} \]
\[ \leq \gamma\alpha\operatorname{fsopt}(\widetilde{S})+O(1) \qquad\text{(property of \texttt{complex-pack} (C3))} \]
\[ \leq \gamma\alpha\left(\operatorname{fsopt}(\widetilde{I})/\beta+2b\mu\varepsilon\operatorname*{opt}(I)\right)+O(1/\varepsilon^{2}) \qquad\text{(by Lemma 6)} \]
\[ \leq \gamma\alpha\left(\rho/\beta+2b\mu\varepsilon\right)\operatorname*{opt}(I)+O(1/\varepsilon^{2}) \]

Here, the last inequality follows from the structural theorem (C2), which says that $\exists(\widetilde{I},D)\in\texttt{round}(I)$ such that $\operatorname{fsopt}(\widetilde{I})\leq\rho\operatorname*{opt}(I)+O(1)$. Hence, the total number of bins is at most

\[ |J_{\mathrm{LP}}|+|J_{D}|+|J|\leq\left((\ln\beta)\mu+\frac{\gamma\alpha\rho}{\beta}+2b(d_{v}+1+\gamma\alpha\mu)\varepsilon\right)\operatorname*{opt}(I)+O(1/\varepsilon^{2}).\qquad\qed \]

The AAR of $\texttt{rna-pack}(I)$ is roughly $\mu\ln\beta+\gamma\alpha\rho/\beta$. This is minimized for $\beta=\gamma\alpha\rho/\mu$, and the minimum value is $\mu\left(1+\ln\left(\alpha\gamma\rho/\mu\right)\right)$. As we require $\beta\geq 1$, we get this AAR only when $\gamma\alpha\rho\geq\mu$. If $\mu\geq\gamma\alpha\rho$, the optimal $\beta$ is 1 and the AAR is roughly $\gamma\alpha\rho$.

4.6 Example: \texttt{simple-pack}

We will show how to use \texttt{simple-pack} with the R&A framework. Recall that \texttt{simple-pack} is a $2b(d_{v}+1)$-approximation algorithm for ($d_{g}$, $d_{v}$) BP (see Section 3), where $b\coloneqq 3$ when $d_{g}=2$, $b\coloneqq 9$ when $d_{g}=3$, $b\coloneqq 4^{d_{g}}+2^{d_{g}}$ when $d_{g}>3$, and $b\coloneqq 1$ when $d_{g}=1$. Using the R&A framework on \texttt{simple-pack} will improve its AAR from $2b(d_{v}+1)$ to $b(1+\ln(2(d_{v}+1)))+O(\varepsilon)$. To do this, we need to show how to implement \texttt{round}, \texttt{complex-pack} and \texttt{unround}.

  1. $\texttt{solve-config-lp}(I)$: Using the KS algorithm of Appendix D and the LP algorithm of [40], we get a $b(1+\varepsilon)$-approximate solution to $\operatorname{configLP}(I)$. Therefore, $\mu=b(1+\varepsilon)$.

  2. $\texttt{round}(I)$: returns just one pair: $(\widetilde{I},\{\})$, where $\widetilde{I}\coloneqq\{\pi(i):i\in I\}$ and $\pi(i)$ is an item having height ($d_{g}^{\textrm{th}}$ geometric dimension) equal to $\operatorname{span}(i)$, all other geometric dimensions equal to 1, and all vector dimensions equal to $\operatorname{span}(i)$. There is just one class in $\widetilde{I}$ and all items are allowed to be sliced perpendicular to the height, so the homogeneity properties are satisfied. Also, $c_{\max}=d_{v}+1$ by Lemma 2.

  3. Structural theorem: We take structured packing to be the set of all possible packings. Then $\operatorname{fsopt}(\widetilde{I})=\left\lceil\operatorname{span}(I)\right\rceil\leq(d_{v}+1)\operatorname*{opt}(I)$. Therefore, $\rho=d_{v}+1$.

  4. $\texttt{complex-pack}(\widetilde{S})$: We can treat $\widetilde{S}$ as the 1-D bin packing instance $\{\operatorname{span}(i):i\in S\}$ and pack it using Next-Fit. Therefore, $|\texttt{complex-pack}(\widetilde{S})|\leq 2\operatorname{span}(S)+1\leq 2\operatorname{fsopt}(\widetilde{S})+1$. Therefore, $\alpha=2$.

  5. $\texttt{unround}(\widetilde{J})$: For each bin in $\widetilde{J}$, we can pack the corresponding unrounded items into $b$ bins (using Steinberg’s algorithm or $\mathrm{HDH}_{4}$). Therefore, $\gamma=b$.

Therefore, we get an AAR of $\mu(1+\ln(\gamma\alpha\rho/\mu))+O(\varepsilon)\approx b(1+\ln(2(d_{v}+1)))+O(\varepsilon)$.

For $d_{g}=2$, we can slightly improve the AAR by using the $(2+\varepsilon)$-approximation algorithm of [32] for (2, $d_{v}$) KS. This gives us an AAR of $2(1+\ln(3(d_{v}+1)))+O(\varepsilon)$. This is better than the AAR of \texttt{better-simple-pack} for $d_{v}\geq 3$.
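A small numeric check (our own, not from the paper) of the AAR expression from Section 4.5, plugging in the constants derived above for \texttt{simple-pack} with R&A for $d_{g}=2$, $d_{v}=1$ and ignoring the $O(\varepsilon)$ and additive terms.

```python
import math

def rna_aar(mu, gamma, alpha, rho):
    """AAR of rna-pack up to O(eps) terms: minimize mu*ln(beta) + gamma*alpha*rho/beta
    over beta >= 1; the minimizer is beta = max(1, gamma*alpha*rho/mu)."""
    beta = max(1.0, gamma * alpha * rho / mu)
    return mu * math.log(beta) + gamma * alpha * rho / beta, beta

# mu ~ b = 3 (Steinberg-based knapsack), gamma = b = 3, alpha = 2, rho = d_v + 1 = 2:
print(rna_aar(mu=3, gamma=3, alpha=2, rho=2))  # ~ (7.16, 4.0), i.e. 3*(1 + ln 4)
# With the (2 + eps)-approximate (2, d) KS of [32], mu ~ 2:
print(rna_aar(mu=2, gamma=3, alpha=2, rho=2))  # ~ (5.58, 6.0), i.e. 2*(1 + ln 6), matching Table 1
```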

The above example is presented only to illustrate an easy use of the R&A framework. It doesn’t exploit the full power of the R&A framework. The algorithm \texttt{cb-pack}, which we outline in Section 5, uses more sophisticated subroutines \texttt{round}, \texttt{complex-pack} and \texttt{unround}, and uses a more intricate definition of fractional structured packing to get an even better AAR of $2(1+\ln(\frac{d+4}{2}))+\varepsilon$ (improves to $2(1+\ln(19/12))+\varepsilon\approx 2.919+\varepsilon$ for $d=1$).

5 Improved Approximation Algorithms

In this section, we give an overview of the \texttt{cb-pack} algorithm for (2, $d$) BP, which is inspired by the 1.5-asymptotic-approximation algorithm for 2-D GBP [39]. \texttt{cb-pack} is based on the following two-step procedure, common to many packing algorithms.

In the first step (structural step), we start with an optimal solution and, using some transformations, show the existence of a good structured solution. Our structural result roughly says that if items $I$ can be packed into $m$ bins, then we can round $I$ to get a new instance $\widetilde{I}$ such that $\operatorname{fsopt}(\widetilde{I})\leq\rho m+O(1)$ for some constant $\rho$, where $\operatorname{fsopt}(\widetilde{I})$ is the number of bins in the optimal structured fractional packing of $\widetilde{I}$. Roughly, the notion of structured packing that we use here, which we call compartmental packing, imposes the following additional constraints over the container-based packing of [39]:

  • An item $i$ is said to be dense iff $v_{\max}(i)/a(i)$ is above a certain threshold. If a bin contains dense items, then we reserve a sufficiently-large rectangular region exclusively for dense items, and dense items can only be packed into this region.

  • For a constant $\varepsilon$, for every $j\in[d]$, the sum of the $j^{\textrm{th}}$ weights of items in each bin is at most $1-\varepsilon$.

The proof of our structural result differs significantly from that of [39] because the presence of vector dimensions inhibits a straightforward application of their techniques.

In the second step (algorithmic step), we show how to find a near-optimal structured solution. 𝚌𝚋𝚙𝚊𝚌𝚔\operatorname{\mathtt{cb-pack}} first rounds II to I~\widetilde{I} and then uses a mixture of brute-force and LP to pack the items into containers, similar to [39] or [29]. This gives us the optimal structured fractional packing of I~\widetilde{I}. Then we convert that packing to a non-fractional packing of II with only a tiny increase in the number of bins. Intuitively, rounding is helpful here because it reduces the number of different types of items by partitioning the items into a constant number of classes such that items in each class are similar. This simplicity introduced by rounding enables us to find the optimal structured fractional packing efficiently.

Then we show that \operatorname{\mathtt{cb-pack}} fits into the R&A framework and gives an AAR of roughly 2(1+\ln(\rho/2)). In particular, we show in the appendix how components from \operatorname{\mathtt{cb-pack}} can be used as the R&A subroutines \operatorname{\mathtt{round}}, \operatorname{\mathtt{complex-pack}} and \operatorname{\mathtt{unround}}. To (approximately) solve the configuration LP, we use the linear programming algorithm from [40] and the (2+\varepsilon)-approximation algorithm for (2, d) KS from [32].

We now give a brief overview of some key ideas used in our structural result. Due to space limitations, the details of the structural result and the algorithm can be found in Appendix F. Like [39], we start with a packing of input II into mm bins, and transform it into a structured fractional packing of I~\widetilde{I} into ρm+O(1)\rho m+O(1) bins. We do this in three steps:

  1. We round up one geometric dimension of each item and pack the items into roughly \rho m+O(1) bins. We call these bins quarter-structured (see Sections F.2 and F.3).

  2. We round the remaining dimensions of items and partition them into classes such that they satisfy the homogeneity properties (see Section 4.2). We allow slicing and repack the items into almost the same number of bins. We call the resulting bin packing semi-structured (see Sections F.4 and F.5).

  3. Finally, we transform the packing into a compartmental packing (see the appendix). Compartmental packings have nice properties which help us to find them efficiently.

In steps 1, 2 and 3, [39] uses the Next-Fit-Decreasing-Height (NFDH) algorithm [14] to pack items of O(εm)O(\varepsilon m) area into O(εm)O(\varepsilon m) bins. This does not work when vector dimensions are present; an item of low area can have large weights. In step 2, [39] uses linear grouping, i.e., each item is moved in place of a geometrically larger item so that it can be rounded up. Vector dimensions make such cross-bin movement difficult, since that can violate bins’ weight capacities. [39] uses cross-bin movement in step 1 too.

Our first crucial observation is that most difficulties associated with vector dimensions disappear if items’ density is upper-bounded by a constant. Here density of item ii is defined as vmax(i)/a(i)v_{\max}(i)/a(i). Specifically, if items of bounded density (we call them non-dense items) have small area, then we can use 𝚜𝚒𝚖𝚙𝚕𝚎𝚙𝚊𝚌𝚔\operatorname{\mathtt{simple-pack}} to pack them into a small number of bins. Linear grouping can also be made to work for such items with some more effort. Therefore, our strategy is to segregate items as dense and non-dense. We reserve a thin rectangular region in bins for dense items, and the rest is reserved for non-dense items.

Furthermore, dense items in a bin must have low total area, due to their high density. If we reserve enough space for them in the bin, we can always pack them in their reserved region using NFDH (see Lemma 9 in Appendix A). Such a guarantee means that we can essentially ignore their geometric dimensions and simply treat them as vectors.

In step 2, we want to round up vector dimensions with only a marginal increase in the number of bins. To do this, we require each quarter-structured bin to be ε\varepsilon-slacked. ε\varepsilon-slackness roughly means that for a set JJ of items in a bin, j[d],vj(J)1ε\forall j\in[d],v_{j}(J)\leq 1-\varepsilon (see Section F.3 for a formal description). ε\varepsilon-slackness also helps us in designing the packing algorithm, because we can then use techniques from resource-augmented vector bin packing. Also, during the rounding step, we round down the weight of some dense items, and ε\varepsilon-slackness allows us to unround with no increase in the number of bins.

The observations above guide our definition of quarter-structured. Roughly, a packing is quarter-structured if items having large width have their width and xx-coordinate rounded to a multiple of ε12/4\varepsilon_{1}^{2}/4 and each bin is ε\varepsilon-slacked. We reserve a thin rectangular region of width ε1/2\varepsilon_{1}/2 for packing dense items (only if the bin contains dense items).

Figure 4: Example of classifying items as blue, red and green based on an \varepsilon-strip.

In step 1, [39] uses a standard cutting-strip argument: they create a strip of width \varepsilon_{1} next to an edge of the bin (see Fig. 4). Items lying completely inside the strip (called blue items) have small area and are packed separately using NFDH. Items intersecting the boundary of the strip (called red items) are removed. This creates an empty space of width \varepsilon_{1} in the bin. Using this empty space, items lying outside the strip (called green items) can then have their width and x-coordinate rounded to a multiple of \varepsilon_{1}^{2}/2. Their key idea is to pair up most bins so that red items from two bins can be rounded and packed together into a new bin. This is roughly why they get an AAR of 1.5+\varepsilon.

We use the cutting-strip argument too, but with some differences. We cannot freely mix red items from different bins if they have large weight, and we cannot simply pack blue items into a small number of bins. We also need bins to be slacked. So, we get a larger AAR of d+4+εd+4+\varepsilon. For d=1d=1, however, we allow mixing items using more sophisticated techniques, which improves the AAR to 19/6+ε19/6+\varepsilon. Also, we round green items to a multiple of ε12/4\varepsilon_{1}^{2}/4 instead of ε12/2\varepsilon_{1}^{2}/2, which leaves an empty strip of width ε1/2\varepsilon_{1}/2 in the bin even after rounding, and we reserve this space for dense items. This gives us a quarter-structured packing.

Based on the broad ideas above, we make more changes to the quarter-structured packing to get a compartmental packing.

Finally, in the algorithmic step, we use a combination of brute-force and LP to pack different types of items into different types of compartments and find a good compartmental packing efficiently.

Acknowledgements: We thank Nikhil Bansal and Thomas Rothvoss for their helpful comments.

References

  • [1] M. T. Alonso, R. Alvarez-Valdes, Manuel Iori, F. Parreño, and J. M. Tamarit. Mathematical models for multicontainer loading problems. Omega, 66:106–117, 2017. doi:10.1016/j.omega.2016.02.002.
  • [2] Samir V. Amiouny, John J. Bartholdi III, John H. Vande Vate, and Jixian Zhang. Balanced loading. Operations Research, 40(2):238–246, 1992. doi:10.1287/opre.40.2.238.
  • [3] Nikhil Bansal, Alberto Caprara, and Maxim Sviridenko. A new approximation method for set covering problems, with applications to multidimensional bin packing. SIAM Journal on Computing, 39(4):1256–1278, 2009. doi:10.1137/080736831.
  • [4] Nikhil Bansal, Jose R. Correa, Claire Kenyon, and Maxim Sviridenko. Bin packing in multiple dimensions: inapproximability results and approximation schemes. Mathematics of Operations Research, 31:31–49, 2006. doi:10.1287/moor.1050.0168.
  • [5] Nikhil Bansal, Marek Eliáš, and Arindam Khan. Improved approximation for vector bin packing. In ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1561–1579. SIAM, 2016. doi:10.1137/1.9781611974331.ch106.
  • [6] Nikhil Bansal and Arindam Khan. Improved approximation algorithm for two-dimensional bin packing. In SODA, pages 13–25, 2014. doi:10.1137/1.9781611973402.2.
  • [7] Suman Kalyan Bera, Deeparnab Chakrabarty, Nicolas Flores, and Maryam Negahbani. Fair algorithms for clustering. In Conference on Neural Information Processing Systems (NeurIPS), pages 4955–4966, 2019.
  • [8] Andreas Bortfeldt and Gerhard Wäscher. Constraints in container loading–a state-of-the-art review. European Journal of Operational Research, 229(1):1–20, 2013. doi:10.1016/j.ejor.2012.12.006.
  • [9] Alberto Caprara. Packing 2-dimensional bins in harmony. In FOCS, pages 490–499, 2002. doi:10.1109/SFCS.2002.1181973.
  • [10] Alberto Caprara. Packing d-dimensional bins in d stages. Mathematics of Operations Research, 33:203–215, 2008. doi:10.1287/moor.1070.0289.
  • [11] Chandra Chekuri and Sanjeev Khanna. On multidimensional packing problems. SIAM journal on computing, 33(4):837–851, 2004. doi:10.1137/S0097539799356265.
  • [12] Henrik I. Christensen, Arindam Khan, Sebastian Pokutta, and Prasad Tetali. Approximation and online algorithms for multidimensional bin packing: A survey. Computer Science Review, 24:63–79, 2017. doi:10.1016/j.cosrev.2016.12.001.
  • [13] Edward G. Coffman, János Csirik, Gábor Galambos, Silvano Martello, and Daniele Vigo. Bin packing approximation algorithms: Survey and classification. In Handbook of combinatorial optimization, pages 455–531. Springer New York, 2013. doi:10.1007/978-1-4419-7997-1_35.
  • [14] Edward G. Coffman, Michael R. Garey, David S. Johnson, and Robert E. Tarjan. Performance bounds for level-oriented two-dimensional packing algorithms. SIAM Journal on Computing, 9:808–826, 1980. doi:10.1137/0209062.
  • [15] Florian Diedrich, Rolf Harren, Klaus Jansen, Ralf Thöle, and Henning Thomas. Approximation algorithms for 3d orthogonal knapsack. Journal of Computer Science and Technology, 23(5):749, 2008. doi:10.1007/s11390-008-9170-7.
  • [16] Khaled M. Elbassioni, Naveen Garg, Divya Gupta, Amit Kumar, Vishal Narula, and Arindam Pal. Approximation algorithms for the unsplittable flow problem on paths and trees. In FSTTCS, volume 18 of Leibniz International Proceedings in Informatics (LIPIcs), pages 267–275, 2012. doi:10.4230/LIPIcs.FSTTCS.2012.267.
  • [17] Wenceslas Fernandez de la Vega and George S. Lueker. Bin packing can be solved within 1+ε1+\varepsilon in linear time. Combinatorica, 1:349–355, 1981. doi:10.1007/BF02579456.
  • [18] A. M. Frieze and M. R. B. Clarke. Approximation algorithms for the m-dimensional 0-1 knapsack problem: worst-case and probabilistic analyses. European Journal of Operational Research, 15:100–109, 1984.
  • [19] Waldo Gálvez, Fabrizio Grandoni, Afrouz Jabal Ameli, Klaus Jansen, Arindam Khan, and Malin Rau. A tight (3/2+ ε\varepsilon) approximation for skewed strip packing. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.APPROX/RANDOM.2020.44.
  • [20] Waldo Gálvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala, Arindam Khan, and Andreas Wiese. Approximating geometric knapsack via l-packings. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 260–271. IEEE, 2017. Full version available at http://www.dii.uchile.cl/~awiese/2DK_full_version.pdf. doi:10.1109/FOCS.2017.32.
  • [21] Waldo Gálvez, Fabrizio Grandoni, Salvatore Ingala, and Arindam Khan. Improved pseudo-polynomial-time approximation for strip packing. In FSTTCS, pages 9:1–9:14, 2016. doi:10.4230/LIPIcs.FSTTCS.2016.9.
  • [22] Paul C Gilmore and Ralph E Gomory. Multistage cutting stock problems of two and more dimensions. Operations research, 13(1):94–120, 1965. doi:10.1287/opre.13.1.94.
  • [23] Rebecca Hoberg and Thomas Rothvoss. A logarithmic additive integrality gap for bin packing. In SODA, pages 2616–2625, 2017. doi:10.1137/1.9781611974782.172.
  • [24] Klaus Jansen and Lars Prädel. New approximability results for two-dimensional bin packing. Algorithmica, 74(1):208–269, 2016. doi:10.1007/s00453-014-9943-z.
  • [25] Klaus Jansen and Guochuan Zhang. Maximizing the number of packed rectangles. In SWAT, pages 362–371, 2004. doi:10.1007/978-3-540-27810-8_31.
  • [26] D. S. Johnson. Approximation algorithms for combinatorial problems. In STOC, pages 38–49, 1973.
  • [27] Matthew Joseph, Michael J. Kearns, Jamie H. Morgenstern, and Aaron Roth. Fairness in learning: Classic and contextual bandits. In NeurIPS, pages 325–333, 2016.
  • [28] Hans Kellerer, Ulrich Pferschy, and David Pisinger. Knapsack problems. Springer, 2004.
  • [29] Claire Kenyon and Eric Rémila. Approximate strip packing. In 37th Annual Symposium on Foundations of Computer Science, FOCS ’96, Burlington, Vermont, USA, 14-16 October, 1996, pages 31–36, 1996. doi:10.1109/SFCS.1996.548461.
  • [30] Arindam Khan. Approximation algorithms for multidimensional bin packing. PhD thesis, Georgia Institute of Technology, Atlanta, GA, USA, 2016.
  • [31] Arindam Khan and Madhusudhan Reddy Pittu. On guillotine separability of squares and rectangles. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.APPROX/RANDOM.2020.47.
  • [32] Arindam Khan, Eklavya Sharma, and K. V. N. Sreenivas. Approximation algorithms for generalized multidimensional knapsack, 2021. arXiv:2102.05854.
  • [33] Lap Chi Lau, Ramamoorthi Ravi, and Mohit Singh. Iterative methods in combinatorial optimization, volume 46. Cambridge University Press, 2011.
  • [34] Eugene L Lawler. Fast approximation algorithms for knapsack problems. Mathematics of Operations Research, 4(4):339–356, 1979. doi:10.1287/moor.4.4.339.
  • [35] Colin McDiarmid. On the method of bounded differences. Surveys in combinatorics, 141(1):148–188, 1989.
  • [36] Maria Flavia Monaco, Marcello Sammarra, and Gregorio Sorrentino. The terminal-oriented ship stowage planning problem. EJOR, 239(1):256–265, 2014. doi:10.1016/j.ejor.2014.05.030.
  • [37] Célia Paquay, Michael Schyns, and Sabine Limbourg. A mixed integer programming formulation for the three-dimensional bin packing problem deriving from an air cargo application. International Transactions in Operational Research, 23(1-2):187–213, 2016. doi:10.1111/itor.12111.
  • [38] Deval Patel, Arindam Khan, and Anand Louis. Group fairness for knapsack problems. ArXiv, 2006.07832, 2020. arXiv:2006.07832.
  • [39] Lars Dennis Prädel. Approximation Algorithms for Geometric Packing Problems. PhD thesis, Kiel University, 2012. URL: https://macau.uni-kiel.de/servlets/MCRFileNodeServlet/dissertation_derivate_00004634/dissertation-praedel.pdf?AC=N.
  • [40] Eklavya Sharma. An approximation algorithm for covering linear programs and its application to bin-packing, 2020. arXiv:2011.11268.
  • [41] Knut Olav Brathaug Sørset. A heuristic approach to the three-dimensional bin packing problem with weight constraints. Master’s thesis, Høgskolen i Molde-Vitenskapelig høgskole i logistikk, 2019.
  • [42] A. Steinberg. A strip-packing algorithm with absolute performance bound 2. SIAM Journal on Computing, 26(2):401–409, 1997. doi:10.1137/S0097539793255801.
  • [43] Gregory S. Taylor, Yupo Chan, and Ghulam Rasool. A three-dimensional bin-packing model: exact multicriteria solution and computational complexity. Annals of Operations Research, 251(1-2):397–427, 2017. doi:10.1007/s10479-015-2048-5.
  • [44] Alan Tsang, Bryan Wilder, Eric Rice, Milind Tambe, and Yair Zick. Group-fairness in influence maximization. In IJCAI, pages 5997–6005, 2019.
  • [45] Gerhard J. Woeginger. There is no asymptotic PTAS for two-dimensional vector packing. Inf. Process. Lett., 64(6):293–297, 1997. doi:10.1016/S0020-0190(97)00179-8.
  • [46] Guang Yang. gbp: a bin packing problem solver, 2017. R package version 0.1.0.4. URL: https://CRAN.R-project.org/package=gbp.

Appendix A Some Simple Packing Algorithms

Claim 8 (Next-Fit).

Let I{i1,i2,,in}I\coloneqq\{i_{1},i_{2},\ldots,i_{n}\} be a 1D bin packing problem instance. The Next-Fit algorithm finds a packing of II into at most 2j=1nij\left\lceil 2\sum_{j=1}^{n}i_{j}\right\rceil bins.
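The following is a minimal Python sketch of the Next-Fit algorithm referenced in Claim 8; item sizes are assumed to lie in (0, 1].

    def next_fit(sizes):
        """Pack item sizes (each in (0, 1]) into unit-capacity bins using Next-Fit."""
        bins = []                 # each bin is a list of item sizes
        current, load = [], 0.0
        for s in sizes:
            if load + s > 1:      # the current bin cannot hold s: close it
                bins.append(current)
                current, load = [], 0.0
            current.append(s)
            load += s
        if current:
            bins.append(current)
        return bins

For example, next_fit([0.6, 0.5, 0.4, 0.3]) returns 3 bins, consistent with the bound of Claim 8, which here gives at most 4 bins.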

Lemma 9 (NFDH for strip packing [14]).

A set II of rectangles can be packed into a strip of height at most 2a(I)+maxiIh(i)2a(I)+max_{i\in I}h(i) using the Next-Fit Decreasing Height (NFDH) algorithm.

Lemma 10 (NFDH for small items [14]).

Let II be a set of rectangles where each rectangle has width at most δW\delta_{W} and height at most δH\delta_{H}. Let there be a rectangular bin of width WW and height HH. If a(I)(WδW)(HδH)a(I)\leq(W-\delta_{W})(H-\delta_{H}), then NFDH can pack II into the bin.
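For concreteness, here is a short Python sketch of NFDH as used in Lemmas 9 and 10; items are (width, height) pairs with dimensions in (0, 1], and the strip has width 1. This is a minimal sketch rather than the exact implementation analyzed in [14].

    def nfdh_strip(items):
        """Pack (width, height) rectangles into a width-1 strip using
        Next-Fit Decreasing Height; returns (placements, used height)."""
        items = sorted(items, key=lambda wh: wh[1], reverse=True)
        placements = []              # (x, y, w, h) for each item
        shelf_y, shelf_h, x = 0.0, 0.0, 0.0
        for (w, h) in items:
            if x + w > 1:            # item does not fit on the current shelf
                shelf_y += shelf_h   # open a new shelf above the current one
                shelf_h, x = 0.0, 0.0
            if shelf_h == 0.0:
                shelf_h = h          # the first (tallest) item defines the shelf height
            placements.append((x, shelf_y, w, h))
            x += w
        return placements, shelf_y + shelf_h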

Appendix B Omitted Proofs

Lemma 11 (Steinberg’s algorithm [42]).

Let II be a set of rectangles. Let wmaxmaxiIw(i)w_{\max}\coloneqq\max_{i\in I}w(i) and hmaxmaxiIh(i)h_{\max}\coloneqq\max_{i\in I}h(i). Consider a bin of width WW and height HH, where wmaxWw_{\max}\leq W and hmaxHh_{\max}\leq H. Then there is a O(nlog2n/loglogn)O(n\log^{2}n/\log\log n)-time algorithm to pack II into the bin if

2a(I)\leq WH-\max(2w_{\max}-W,0)\cdot\max(2h_{\max}-H,0)
Proof of Lemma 3.

Pack I into a bin of width 2 and height 1 using Steinberg’s algorithm (see Lemma 11). Then cut the bin vertically into 2 unit-sized squares. The items lying completely inside the left half can be packed into one unit-sized bin, and those lying completely inside the right half into another. The items which lie on the cutting line are stacked one over the other, so we can pack them into a third unit-sized bin. ∎
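The splitting step in this proof can be sketched as follows, assuming a placement (x, y, w, h) of each item inside the width-2, height-1 bin has already been computed by Steinberg’s algorithm (treated here as a black box).

    def split_two_wide_bin(placements):
        """Split a packing of a width-2, height-1 bin (a list of (x, y, w, h)
        tuples) into three unit bins, as in the proof of Lemma 3."""
        left, right, cut = [], [], []
        for (x, y, w, h) in placements:
            if x + w <= 1:
                left.append((x, y, w, h))        # already inside the left unit square
            elif x >= 1:
                right.append((x - 1, y, w, h))   # shift into a unit bin
            else:
                cut.append((w, h))               # crosses the cutting line x = 1
        # items crossing the line are pairwise disjoint in y, so their total
        # height is at most 1 and they can be stacked in a third bin
        stacked, y = [], 0.0
        for (w, h) in cut:
            stacked.append((0.0, y, w, h))
            y += h
        return left, right, stacked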

Appendix C The HDH4\mathrm{HDH}_{4} Algorithm

Lemma 12.

Let II be a set of dgd_{g}-dimensional cuboids. Then II can be packed into at most 4dg+2dgvol(I)4^{d_{g}}+2^{d_{g}}\operatorname{vol}(I) bins using a variant of HDH4\mathrm{HDH}_{4}.

Proof.

Define f4:[0,1][0,1]f_{4}:[0,1]\mapsto[0,1] and type4:[0,1][4]\operatorname{type}_{4}:[0,1]\mapsto[4] as

f_{4}(x)=\begin{cases}\frac{1}{q}&x\in\left(\frac{1}{q+1},\frac{1}{q}\right]\textrm{ for }q\in[3]\\ 2x&x\leq\frac{1}{4}\end{cases}\qquad\operatorname{type}_{4}(x)=\begin{cases}q&x\in\left(\frac{1}{q+1},\frac{1}{q}\right]\textrm{ for }q\in[3]\\ 4&x\leq\frac{1}{4}\end{cases}

Note that x\leq f_{4}(x)\leq 2x. Let i be a d_{g}-dimensional cuboid. Define f_{4}(i) as the cuboid of length f_{4}(\ell_{j}(i)) in the j^{\textrm{th}} dimension. Therefore, \operatorname{vol}(i)\leq\operatorname{vol}(f_{4}(i))\leq 2^{d_{g}}\operatorname{vol}(i). For a set I of cuboids, f_{4}(I)\coloneqq\{f_{4}(i):i\in I\}. Define \operatorname{type}_{4}(i) to be a d_{g}-length vector whose j^{\textrm{th}} component is \operatorname{type}_{4}(\ell_{j}(i)).

For a set II of cuboids, define I(k)I^{(k)} to be the set of items obtained by ignoring all dimensions other than the first kk. [10] (implicitly) defines a recursive algorithm HDH-unit-pack4[t](I,d)\textrm{HDH-unit-pack}_{4}^{[t]}(I,d). For a sequence II of ddD cuboids all having the same type4\operatorname{type}_{4} tt, where vol(f4((I{last(I)})(d)))<1\operatorname{vol}(f_{4}((I-\{\mathrm{last}(I)\})^{(d)}))<1, HDH-unit-pack4[t](I,d)\textrm{HDH-unit-pack}_{4}^{[t]}(I,d) returns a packing of I(d)I^{(d)} into a ddD bin. Here last(I)\mathrm{last}(I) is the last item in sequence II.

We now define a slight variant of the \mathrm{HDH}_{4} algorithm of [10]. This variant also works when rotation of items is allowed. First, partition the items by \operatorname{type}_{4}. The number of partitions is at most Q=4^{d_{g}}. Let I^{[q]} be the partition containing items of \operatorname{type}_{4} q. Order the items in I^{[q]} arbitrarily. Then repeatedly pick the smallest prefix J of I^{[q]} such that either J=I^{[q]} or \operatorname{vol}(f_{4}(J))\geq 1, and pack J into a bin using \textrm{HDH-unit-pack}_{4}^{[q]}(J,d_{g}).
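The outer loop of this variant can be sketched in Python as follows; hdh_unit_pack stands for the black-box subroutine HDH-unit-pack of [10], and the function names are illustrative.

    from math import prod

    def f4(x):
        # f_4: round a side length up to 1/q if x is in (1/(q+1), 1/q] for q in [3], else to 2x
        for q in (1, 2, 3):
            if 1.0 / (q + 1) < x <= 1.0 / q:
                return 1.0 / q
        return 2 * x

    def type4(x):
        for q in (1, 2, 3):
            if 1.0 / (q + 1) < x <= 1.0 / q:
                return q
        return 4

    def f4_vol(cuboids):
        # vol(f_4(J)) for a set J of cuboids, each given as a tuple of side lengths
        return sum(prod(f4(l) for l in c) for c in cuboids)

    def hdh4_variant(items, hdh_unit_pack):
        """items: list of cuboids (tuples of side lengths in (0, 1]).
        hdh_unit_pack: black box packing a chosen prefix into one bin."""
        groups = {}
        for c in items:
            groups.setdefault(tuple(type4(l) for l in c), []).append(c)
        bins = []
        for group in groups.values():
            prefix = []
            for c in group:
                prefix.append(c)
                if f4_vol(prefix) >= 1:              # smallest prefix with vol(f_4(J)) >= 1
                    bins.append(hdh_unit_pack(prefix))
                    prefix = []
            if prefix:                               # at most one partially-filled bin per type
                bins.append(hdh_unit_pack(prefix))
        return bins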

Suppose HDH4(I[q])\mathrm{HDH}_{4}(I^{[q]}) produces m[q]m^{[q]} bins. Let Bj[q]B_{j}^{[q]} be the jthj^{\textrm{th}} of these bins. Given the way we choose prefixes, vol(f4(Bj[q]))1\operatorname{vol}(f_{4}(B_{j}^{[q]}))\geq 1 for j[m[q]1]j\in[m^{[q]}-1], i.e. at most 1 bin is partially-filled.

vol(f4(I[q]))=j=1m[q]vol(f4(Bj[q]))(m[q]1)\operatorname{vol}(f_{4}(I^{[q]}))=\sum_{j=1}^{m^{[q]}}\operatorname{vol}(f_{4}(B_{j}^{[q]}))\geq(m^{[q]}-1)

Total number of bins used is

q=1Qm[q]q=1Q(1+vol(f4(I[q])))Q+vol(f4(I))=4dg+2dgvol(I)\sum_{q=1}^{Q}m^{[q]}\leq\sum_{q=1}^{Q}(1+\operatorname{vol}(f_{4}(I^{[q]})))\leq Q+\operatorname{vol}(f_{4}(I))=4^{d_{g}}+2^{d_{g}}\operatorname{vol}(I)\qed

Appendix D Algorithm for Knapsack

Let II be a set of (2, dd)-dimensional items. Let p(i)p(i) be the profit of item ii. We want to pack a max-profit subset of II into a bin. Let I^\widehat{I} be a set of (d+1)(d+1)-dimensional vectors obtained by replacing the geometric dimensions of each item ii by a single vector dimension a(i)a(i). Let 𝒜\mathcal{A} be a (d+1)(d+1)-D VKS algorithm having approximation ratio α1\alpha\geq 1. 𝒜\mathcal{A} gives us a packing of items J^I^\widehat{J}\subseteq\widehat{I} into a bin. Let JJ be the corresponding items in II. Then vol(J)1\operatorname{vol}(J)\leq 1 and k[d],vk(J)1\forall k\in[d],v_{k}(J)\leq 1. Use Steinberg’s algorithm to compute a packing of JJ into 3 bins: J1,J2,J3J_{1},J_{2},J_{3}. W.l.o.g., assume p(J1)p(J2)p(J3)p(J_{1})\geq p(J_{2})\geq p(J_{3}). Then output the packing J1J_{1} as the answer to the (2, dd) KS problem. Since any feasible solution to the (2, dd) KS instance (I,p)(I,p) also gives a feasible solution to the VKS instance (I^,p)(\widehat{I},p), we get opt(I^,p)opt(I,p)\operatorname*{opt}(\widehat{I},p)\geq\operatorname*{opt}(I,p). Since 𝒜\mathcal{A} is α\alpha-approximate, we get p(J)opt(I^,p)/αp(J)\geq\operatorname*{opt}(\widehat{I},p)/\alpha. Hence,

p(J1)p(J)3opt(I^,p)3αopt(I,p)3αp(J_{1})\geq\frac{p(J)}{3}\geq\frac{\operatorname*{opt}(\widehat{I},p)}{3\alpha}\geq\frac{\operatorname*{opt}(I,p)}{3\alpha}

Therefore, we get a 3α3\alpha-approximation algorithm for (2, dd) KS. Using the PTAS for (d+1)(d+1)-D VKS by Frieze and Clarke [18], we get α=1+ε\alpha=1+\varepsilon.
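A sketch of this reduction in Python; vks_solve (the (d+1)-D VKS black box) and steinberg_3_bins (the 3-bin packing of Lemma 3) are assumed subroutines that return item indices, and are not specified in the paper in this form.

    def ks_2d(items, profit, vks_solve, steinberg_3_bins):
        """items[i] = (w, h, v), where v is a tuple of d weights; profit[i] is item i's profit.
        vks_solve: (d+1)-D vector knapsack algorithm; returns indices of the chosen items.
        steinberg_3_bins: packs the chosen rectangles into 3 unit bins (see Lemma 3);
        returns three lists of item indices."""
        # replace the two geometric dimensions of item i by the single dimension a(i) = w * h
        vectors = [(w * h,) + tuple(v) for (w, h, v) in items]
        J = vks_solve(vectors, profit)
        J1, J2, J3 = steinberg_3_bins([(i, items[i][0], items[i][1]) for i in J])
        # return the most profitable of the three bins; this loses at most a factor of 3
        return max((J1, J2, J3), key=lambda B: sum(profit[i] for i in B))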

Appendix E Details of the R&A Framework

Lemma 13.

fsopt(S~)fsconfigIP(S~)fsopt(S~)+q\operatorname{fsopt}(\widetilde{S})\leq\operatorname{fsconfigIP}^{*}(\widetilde{S})\leq\operatorname{fsopt}(\widetilde{S})+q.

Proof.

Due to the downward closure property, changing inequality constraints to equality constraints doesn’t affect the optimum values of the above LP and IP. Therefore, fsconfigIP(S~)\operatorname{fsconfigIP}(\widetilde{S}) is equivalent to the fractional structured bin packing problem.

A problem with the above definition of \operatorname{fsconfigLP}(\widetilde{I}) is that the number of variables can be infinite if certain classes allow slicing. We circumvent this problem by discretizing the configurations: let \delta be the smallest dimension of any item, i.e., \delta\coloneqq\min_{i\in\widetilde{I}}\min\left(\min_{j=1}^{d_{g}}\ell_{j}(i),\min_{j=1}^{d_{v}}v_{j}(i)\right).

In any optimal integral solution to fsconfigLP(I~)\operatorname{fsconfigLP}(\widetilde{I}) that uses mm bins, we can slice out some items from each class in each bin so that the span\operatorname{span} of each class in each bin is a multiple of δdg/n\delta^{d_{g}}/n. In each class, the total size of sliced out items across all bins is at most δdg\delta^{d_{g}}. Therefore, for each class, slices of that class can fit into a single item of that class. If each such single item is packed in a separate bin, the total number of bins used is at most m+qm+q.

Therefore, we only need to consider configurations where either the span\operatorname{span} of each class is a multiple of δdg/n\delta^{d_{g}}/n or there is a single item in the configuration. This gives us a finite number of configurations and completes the proof. ∎

Lemma 14.

fsconfigLP(S~)fsconfigIP(S~)fsconfigLP(S~)+q\operatorname{fsconfigLP}^{*}(\widetilde{S})\leq\operatorname{fsconfigIP}^{*}(\widetilde{S})\leq\operatorname{fsconfigLP}^{*}(\widetilde{S})+q.

Proof.

By the rank lemma, the number of positive-valued variables in an extreme-point solution to a linear program is at most the number of constraints (other than the variable non-negativity constraints).

Thus, an optimal extreme-point solution to fsconfigLP(S~)\operatorname{fsconfigLP}(\widetilde{S}) has at most qq positive-valued variables. Rounding up those variables to the nearest integer will give us an integral solution and increase the objective value by at most qq. Hence, fsconfigIP(S~)fsconfigLP(S~)+q\operatorname{fsconfigIP}^{*}(\widetilde{S})\leq\operatorname{fsconfigLP}^{*}(\widetilde{S})+q. ∎
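A sketch of this rounding step; x maps each configuration to its value in an extreme-point solution of \operatorname{fsconfigLP}(\widetilde{S}), so at most q of its entries are positive.

    import math

    def round_up_extreme_point(x):
        """Round every positive coordinate of an extreme-point LP solution up to an
        integer; by the rank lemma this increases the objective by at most q."""
        return {C: math.ceil(xc) for C, xc in x.items() if xc > 0}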

Let configLP(I)\operatorname{configLP}(I) denote the configuration LP of items II and let configLP(I)\operatorname{configLP}^{*}(I) denote the optimal objective value of configLP(I)\operatorname{configLP}(I). Recall that 𝚜𝚒𝚖𝚙𝚕𝚎𝚙𝚊𝚌𝚔\operatorname{\mathtt{simple-pack}} is a 2b(dv+1)2b(d_{v}+1)-approximation algorithm for (dgd_{g}, dvd_{v}) BP (see Section 3), where b3b\coloneqq 3 when dg=2d_{g}=2, b9b\coloneqq 9 when dg=3d_{g}=3, b4dg+2dgb\coloneqq 4^{d_{g}}+2^{d_{g}} when dg>3d_{g}>3, and b1b\coloneqq 1 when dg=1d_{g}=1.

Lemma 15.

For a set II of (dgd_{g}, dvd_{v})-dimensional items, configLP(I)Θ(span(I))+O(1)\operatorname{configLP}^{*}(I)\in\Theta(\operatorname{span}(I))+O(1).

Proof.

Let AA be the configuration matrix of II. Let xx^{*} be the optimal solution to configLP(I)\operatorname{configLP}(I). In configLP(I)\operatorname{configLP}(I), the constraint for item ii gives us C𝒞A[i,C]xC1\sum_{C\in\mathcal{C}}A[i,C]x^{*}_{C}\geq 1. Multiplying each constraint by span(i)\operatorname{span}(i) and adding these constraints together, we get

span(I)\displaystyle\operatorname{span}(I) C𝒞iIspan(i)A[i,C]xC=C𝒞span(C)xC\displaystyle\leq\sum_{C\in\mathcal{C}}\sum_{i\in I}\operatorname{span}(i)A[i,C]x^{*}_{C}=\sum_{C\in\mathcal{C}}\operatorname{span}(C)x^{*}_{C}
(dv+1)C𝒞xC=(dv+1)configLP(I).\displaystyle\leq(d_{v}+1)\sum_{C\in\mathcal{C}}x^{*}_{C}=(d_{v}+1)\operatorname{configLP}^{*}(I).

Therefore, configLP(I)span(I)/(dv+1)\operatorname{configLP}^{*}(I)\geq\operatorname{span}(I)/(d_{v}+1). We also have

configLP(I)opt(I)|𝚜𝚒𝚖𝚙𝚕𝚎𝚙𝚊𝚌𝚔(I)|2bspan(I)+b.\operatorname{configLP}^{*}(I)\leq\operatorname*{opt}(I)\leq|\operatorname{\mathtt{simple-pack}}(I)|\leq 2b\operatorname{span}(I)+b.

Therefore, configLP(I)Θ(span(I))+O(1)\operatorname{configLP}^{*}(I)\in\Theta(\operatorname{span}(I))+O(1). ∎

Lemma 16 (Independent Bounded Difference Inequality [35]).

Let X\coloneqq[X_{1},X_{2},\ldots,X_{n}] be random variables with X_{j}\in A_{j}. Let \phi:\prod_{j=1}^{n}A_{j}\mapsto\mathbb{R} be a function such that \left\lvert\phi(x)-\phi(y)\right\rvert\leq c_{j} whenever vectors x and y differ only in the j^{\textrm{th}} coordinate. Then for any t\geq 0,

Pr[ϕ(X)𝔼(ϕ(X))t]exp(2t2j=1ncj2).\Pr[\phi(X)-\operatorname*{\mathbb{E}}(\phi(X))\geq t]\leq\exp\left(-\frac{2t^{2}}{\sum_{j=1}^{n}c_{j}^{2}}\right).
Lemma 17.

Let S~\widetilde{S} be as computed by 𝚛𝚗𝚊𝚙𝚊𝚌𝚔(I,β,ε)\operatorname{\mathtt{rna-pack}}(I,\beta,\varepsilon). Let ε(0,1)\varepsilon\in(0,1) be a constant. When span(I)\operatorname{span}(I) is large compared to 1/ε21/\varepsilon^{2}, we get that with high probability

fsconfigLP(S~)fsconfigLP(I~)β+2bμεopt(I)+O(1).\operatorname{fsconfigLP}^{*}(\widetilde{S})\leq\frac{\operatorname{fsconfigLP}^{*}(\widetilde{I})}{\beta}+2b\mu\varepsilon\operatorname*{opt}(I)+O(1).
Proof.

Let y𝒞Ty\in\mathcal{C}^{T} be the configurations chosen during randomized rounding. When viewed as a vector of length TT, all coordinates of yy are independent. Define uncovered(y)It=1Tyt\operatorname{uncovered}(y)\coloneqq I-\bigcup_{t=1}^{T}y_{t}.

Let K~1,K~2,,K~q\widetilde{K}_{1},\widetilde{K}_{2},\ldots,\widetilde{K}_{q} be the classes of I~\widetilde{I}. Let π\pi be the bijection from IDI-D to I~\widetilde{I}. For a set XIX\subseteq I, define I~[X]{π(i):iXD}\widetilde{I}[X]\coloneqq\{\pi(i):i\in X-D\}. For j[q]j\in[q], define ϕj𝒞T0\phi_{j}\in\mathcal{C}^{T}\mapsto\mathbb{R}_{\geq 0} as

ϕj(y)span(K~jI~[uncovered(y)]).\phi_{j}(y)\coloneqq\operatorname{span}\left(\widetilde{K}_{j}\cap\widetilde{I}[\operatorname{uncovered}(y)]\right).

For any set XIX\subseteq I, define gj(X)span(K~jI~[X])g_{j}(X)\coloneqq\operatorname{span}(\widetilde{K}_{j}\cap\widetilde{I}[X]). Then ϕj(y)=gj(uncovered(y))\phi_{j}(y)=g_{j}(\operatorname{uncovered}(y)) and gjg_{j} is a non-negative additive function.

Let y(1),y(2)𝒞Ty^{(1)},y^{(2)}\in\mathcal{C}^{T} such that y(1)y^{(1)} and y(2)y^{(2)} differ only in coordinate tt. Let y(1)t=C1y^{(1)}_{t}=C_{1} and y(2)t=C2y^{(2)}_{t}=C_{2}. Let S1=uncovered(y(1))S_{1}=\operatorname{uncovered}(y^{(1)}) and S2=uncovered(y(2))S_{2}=\operatorname{uncovered}(y^{(2)}).

It is easy to see (using Venn diagrams) that S1S2C2C1S_{1}-S_{2}\subseteq C_{2}-C_{1} and S2S1C1C2S_{2}-S_{1}\subseteq C_{1}-C_{2}.

|ϕj(y(1))ϕj(y(2))|\displaystyle\left\lvert\phi_{j}(y^{(1)})-\phi_{j}(y^{(2)})\right\rvert =|gj(S1)gj(S2)|\displaystyle=\left\lvert g_{j}(S_{1})-g_{j}(S_{2})\right\rvert
=|gj(S1S2)gj(S2S1)|\displaystyle=\left\lvert g_{j}(S_{1}-S_{2})-g_{j}(S_{2}-S_{1})\right\rvert (additivity of gjg_{j})
max(gj(S1S2),gj(S2S1))\displaystyle\leq\max(g_{j}(S_{1}-S_{2}),g_{j}(S_{2}-S_{1}))
max(gj(C2),gj(C1))\displaystyle\leq\max(g_{j}(C_{2}),g_{j}(C_{1}))
maxC𝒞span(K~jI~[C])cmax.\displaystyle\leq\max_{C\in\mathcal{C}}\operatorname{span}(\widetilde{K}_{j}\cap\widetilde{I}[C])\leq c_{\max}. (by bounded expansion lemma)
𝔼(ϕj(y))\displaystyle\operatorname*{\mathbb{E}}(\phi_{j}(y)) =𝔼(gj(S))\displaystyle=\operatorname*{\mathbb{E}}(g_{j}(S))
=iI~gj({i})Pr(iS)\displaystyle=\sum_{i\in\widetilde{I}}g_{j}(\{i\})\Pr(i\in S) (linearity of expectation and additivity of gjg_{j})
iI~gj({i})(1/β)\displaystyle\leq\sum_{i\in\widetilde{I}}g_{j}(\{i\})(1/\beta) (by Lemma 5)
=gj(I~)β=span(K~j)β.\displaystyle=\frac{g_{j}(\widetilde{I})}{\beta}=\frac{\operatorname{span}(\widetilde{K}_{j})}{\beta}.

j[q]\forall j\in[q], define QjQ_{j} as the smallest prefix of S~K~j\widetilde{S}\cap\widetilde{K}_{j} such that either Qj=S~K~jQ_{j}=\widetilde{S}\cap\widetilde{K}_{j} or span(Qj)εx^1/q\operatorname{span}(Q_{j})\geq\varepsilon\left\lVert\widehat{x}\right\rVert_{1}/q. Define Qj=1qQjQ\coloneqq\bigcup_{j=1}^{q}Q_{j}. Therefore,

span(Q)εx^1+qεμopt(I)+O(1).\operatorname{span}(Q)\leq\varepsilon\left\lVert\widehat{x}\right\rVert_{1}+q\leq\varepsilon\mu\operatorname*{opt}(I)+O(1).
fsconfigLP(S~)\displaystyle\operatorname{fsconfigLP}^{*}(\widetilde{S}) fsconfigLP(S~Q)+fsconfigLP(Q)\displaystyle\leq\operatorname{fsconfigLP}^{*}(\widetilde{S}-Q)+\operatorname{fsconfigLP}^{*}(Q)
fsconfigLP(S~Q)+b(2span(Q)+1)\displaystyle\leq\operatorname{fsconfigLP}^{*}(\widetilde{S}-Q)+b(2\operatorname{span}(Q)+1) (by Section 3)
fsconfigLP(S~Q)+2bμεopt(I)+O(1).\displaystyle\leq\operatorname{fsconfigLP}^{*}(\widetilde{S}-Q)+2b\mu\varepsilon\operatorname*{opt}(I)+O(1).

Now we will try to prove that with high probability, fsconfigLP(S~Q)fsconfigLP(I~)/β\operatorname{fsconfigLP}^{*}(\widetilde{S}-Q)\leq\operatorname{fsconfigLP}^{*}(\widetilde{I})/\beta.

If Qj=S~K~jQ_{j}=\widetilde{S}\cap\widetilde{K}_{j}, then span(K~j(S~Q))=0\operatorname{span}(\widetilde{K}_{j}\cap(\widetilde{S}-Q))=0. Otherwise,

Pr[span(K~j(S~Q))span(K~j)β]=Pr[span(K~jS~)span(K~j)βspan(Qj)]\displaystyle\Pr\left[\operatorname{span}(\widetilde{K}_{j}\cap(\widetilde{S}-Q))\geq\frac{\operatorname{span}(\widetilde{K}_{j})}{\beta}\right]=\Pr\left[\operatorname{span}(\widetilde{K}_{j}\cap\widetilde{S})-\frac{\operatorname{span}(\widetilde{K}_{j})}{\beta}\geq\operatorname{span}(Q_{j})\right]
Pr[ϕj(y)𝔼(ϕj(y))εqx^1]exp(2Tcmax2(εqx^1)2)\displaystyle\leq\Pr\left[\phi_{j}(y)-\operatorname*{\mathbb{E}}(\phi_{j}(y))\geq\frac{\varepsilon}{q}\left\lVert\widehat{x}\right\rVert_{1}\right]\leq\exp\left(-\frac{2}{Tc_{\max}^{2}}\left(\frac{\varepsilon}{q}\left\lVert\widehat{x}\right\rVert_{1}\right)^{2}\right) (Lemma 16)
exp(2ε2ln(β)cmax2q2x^1).\displaystyle\leq\exp\left(-\frac{2\varepsilon^{2}}{\ln(\beta)c_{\max}^{2}q^{2}}\left\lVert\widehat{x}\right\rVert_{1}\right).

Therefore, by union bound, we get

Pr[j=1q(span(K~j(S~Q))span(K~j)β)]qexp(2ε2ln(β)cmax2q2x^1).\Pr\left[\bigvee_{j=1}^{q}\left(\operatorname{span}(\widetilde{K}_{j}\cap(\widetilde{S}-Q))\geq\frac{\operatorname{span}(\widetilde{K}_{j})}{\beta}\right)\right]\leq q\exp\left(-\frac{2\varepsilon^{2}}{\ln(\beta)c_{\max}^{2}q^{2}}\left\lVert\widehat{x}\right\rVert_{1}\right).

Since configLP(I)x^1μconfigLP(I)+O(1)\operatorname{configLP}^{*}(I)\leq\left\lVert\widehat{x}\right\rVert_{1}\leq\mu\operatorname{configLP}^{*}(I)+O(1), and configLP(I)Θ(span(I))+O(1)\operatorname{configLP}^{*}(I)\in\Theta(\operatorname{span}(I))+O(1) (by Lemma 15), we get x^1Θ(span(I))+O(1)\left\lVert\widehat{x}\right\rVert_{1}\in\Theta(\operatorname{span}(I))+O(1). When span(I)\operatorname{span}(I) is very large compared to 1/ε21/\varepsilon^{2}, we get that with high probability, j[q]\forall j\in[q],

span(K~j(S~Q))span(K~j)β.\operatorname{span}(\widetilde{K}_{j}\cap(\widetilde{S}-Q))\leq\frac{\operatorname{span}(\widetilde{K}_{j})}{\beta}.

Let xx^{*} be the optimal solution to fsconfigLP(I~)\operatorname{fsconfigLP}(\widetilde{I}). Then with high probability, x/βx^{*}/\beta is a feasible solution to fsconfigLP(S~Q)\operatorname{fsconfigLP}(\widetilde{S}-Q). Therefore,

fsconfigLP(S~)\displaystyle\operatorname{fsconfigLP}^{*}(\widetilde{S}) fsconfigLP(S~Q)+2bμεopt(I)+O(1)\displaystyle\leq\operatorname{fsconfigLP}^{*}(\widetilde{S}-Q)+2b\mu\varepsilon\operatorname*{opt}(I)+O(1)
fsconfigLP(I~)/β+2bμεopt(I)+O(1).\displaystyle\leq\operatorname{fsconfigLP}^{*}(\widetilde{I})/\beta+2b\mu\varepsilon\operatorname*{opt}(I)+O(1).\qed

See Lemma 6.

Proof.

When span(I)\operatorname{span}(I) is very large compared to 1/ε21/\varepsilon^{2}, we get

fsopt(S~)\displaystyle\operatorname{fsopt}(\widetilde{S}) fsconfigIP(S~)+O(1)\displaystyle\leq\operatorname{fsconfigIP}^{*}(\widetilde{S})+O(1) (by Lemma 13)
fsconfigLP(S~)+O(1)\displaystyle\leq\operatorname{fsconfigLP}^{*}(\widetilde{S})+O(1) (by Lemma 14)
fsconfigLP(I~)/β+2bμεopt(I)+O(1)\displaystyle\leq\operatorname{fsconfigLP}^{*}(\widetilde{I})/\beta+2b\mu\varepsilon\operatorname*{opt}(I)+O(1) (by Lemma 17)
fsopt(I~)/β+2bμεopt(I)+O(1).\displaystyle\leq\operatorname{fsopt}(\widetilde{I})/\beta+2b\mu\varepsilon\operatorname*{opt}(I)+O(1). (by Lemma 13)

Otherwise, if span(I)O(1/ε2)\operatorname{span}(I)\in O(1/\varepsilon^{2}), we get

fsopt(S~)\displaystyle\operatorname{fsopt}(\widetilde{S}) ρopt(I)+O(1)\displaystyle\leq\rho\operatorname*{opt}(I)+O(1) (by structural theorem)
ρ|𝚜𝚒𝚖𝚙𝚕𝚎𝚙𝚊𝚌𝚔(I)|+O(1)\displaystyle\leq\rho|\operatorname{\mathtt{simple-pack}}(I)|+O(1)
Θ(span(I))+O(1)O(1/ε2).\displaystyle\leq\Theta(\operatorname{span}(I))+O(1)\leq O(1/\varepsilon^{2}). (by Section 3)

E.1 Error in Previous R&A Framework

Here we describe a minor error in the R&A framework of [30], and how it can be fixed.

We define (I~,D)(\widetilde{I},D) as an output of 𝚛𝚘𝚞𝚗𝚍(I)\operatorname{\mathtt{round}}(I) and for the residual instance SS, we define S~\widetilde{S} as the corresponding rounded items of SDS-D. Our proof of Lemma 6 relies on the fact that for any subset of rounded items, the span\operatorname{span} reduces by a factor of at least β\beta if we restrict our attention to the residual instance. Formally, this means that for any X~I~\widetilde{X}\subseteq\widetilde{I}, we have

𝔼(span(X~S~))=iX~span(i)Pr(iS~)span(X~)/β.\operatorname*{\mathbb{E}}(\operatorname{span}(\widetilde{X}\cap\widetilde{S}))=\sum_{i\in\widetilde{X}}\operatorname{span}(i)\Pr(i\in\widetilde{S})\leq\operatorname{span}(\widetilde{X})/\beta.

The equality follows from linearity of expectation and the fact that span(i)\operatorname{span}(i) is deterministic, i.e., it doesn’t depend on the randomness used in the randomized rounding of the configuration LP. This is because 𝚛𝚘𝚞𝚗𝚍\operatorname{\mathtt{round}} is not given any information about what SS is. The inequality follows from Lemma 5, which says that Pr(iS)1/β\Pr(i\in S)\leq 1/\beta.

The R&A framework of [30] used similar techniques in their analysis. In their algorithm, however, they round items differently. Specifically, they define a subroutine \operatorname{\mathtt{round}} and define \widetilde{I}\coloneqq\operatorname{\mathtt{round}}(I) and \widetilde{S}\coloneqq\operatorname{\mathtt{round}}(S). They, too, claim that for any subset of rounded items, the \operatorname{span} reduces by a factor of at least \beta if we restrict our attention to the residual instance. While their claim is correct for input-agnostic rounding (where items are rounded up to values from a constant-size collection chosen independently of the problem instance), the claim is incorrect for input-sensitive rounding (where the values are chosen based on the specific problem instance). In particular, the claim fails if \operatorname{\mathtt{round}} is not deterministic, as an item can then be rounded differently depending on the residual instance.

In fact, they use their R&A framework with the algorithm of Jansen and Prädel [24], which uses linear grouping (along with some other techniques) for rounding. Linear grouping rounds items in an input-sensitive way, i.e., the rounding of each item depends on the sizes of items in SS which is a random subset.

Appendix F Algorithm for (2, dd) Bin Packing

Here we will see a rounding-based algorithm for (2, d) bin packing, called \operatorname{\mathtt{cb-pack}} (named after ‘compartment-based packing’). \operatorname{\mathtt{cb-pack}} is parametrized by \varepsilon, where \varepsilon^{-1}\in 2\mathbb{Z} and \varepsilon\leq 1/8.

Claim 18.

If a set JJ of items fits into a bin, then vmax(J)dv_{\max}(J)\leq d and span(J)d+1\operatorname{span}(J)\leq d+1.

We will first remove some items from the bin, so that the remaining items can be classified into useful categories.

F.1 Classifying Items

Definition 5.

For constants ε2<ε1\varepsilon_{2}<\varepsilon_{1}, a bin packing instance II is called (ε2,ε1)(\varepsilon_{2},\varepsilon_{1})-non-medium iff iI\forall i\in I, (w(i)(ε2,ε1])(h(i)(ε2,ε1])(j[d],vj(i)(ε2,ε1])(w(i)\not\in(\varepsilon_{2},\varepsilon_{1}])\wedge(h(i)\not\in(\varepsilon_{2},\varepsilon_{1}])\wedge(\forall j\in[d],v_{j}(i)\not\in(\varepsilon_{2},\varepsilon_{1}]).

An (ε2,ε1)(\varepsilon_{2},\varepsilon_{1})-non-medium instance has useful properties. Therefore, we want to remove some items from the input instance II so that it becomes (ε2,ε1)(\varepsilon_{2},\varepsilon_{1})-non-medium and the removed items can be packed into a small number of bins.

Definition 6.

Let δ0,ε(0,1]\delta_{0},\varepsilon\in(0,1] be constants and let f:(0,1](0,1]f:(0,1]\mapsto(0,1] be a function such that x(0,1],f(x)<x\forall x\in(0,1],f(x)<x. Let T(d+2)/εT\coloneqq\left\lceil(d+2)/\varepsilon\right\rceil. For t[T]t\in[T], define δtf(δt1)\delta_{t}\coloneqq f(\delta_{t-1}) and define

Jt{iI:w(i)(δt,δt1]h(i)(δt,δt1](j=1dvj(i)(δt,δt1])}.J_{t}\coloneqq\left\{i\in I:w(i)\in(\delta_{t},\delta_{t-1}]\vee h(i)\in(\delta_{t},\delta_{t-1}]\vee\left(\bigvee_{j=1}^{d}v_{j}(i)\in(\delta_{t},\delta_{t-1}]\right)\right\}.

Define removemedium(I,ε,f,δ0)\operatorname{remove-medium}(I,\varepsilon,f,\delta_{0}) as the tuple (Jr,δr,δr1)(J_{r},\delta_{r},\delta_{r-1}), where rargmint=1Tspan(Jt)r\coloneqq\operatorname*{argmin}_{t=1}^{T}\operatorname{span}(J_{t}).

Lemma 19.

Let (Imed,ε2,ε1)removemedium(I,ε,f,δ0)(I_{\mathrm{med}},\varepsilon_{2},\varepsilon_{1})\coloneqq\operatorname{remove-medium}(I,\varepsilon,f,\delta_{0}). Then span(Imed)εspan(I)\operatorname{span}(I_{\mathrm{med}})\leq\varepsilon\operatorname{span}(I).

Proof.

Each item belongs to at most d+2d+2 sets JtJ_{t}. Therefore,

span(Imed)=mint=1Tspan(Jt)1Tt=1Tspan(Jt)d+2(d+2)/εspan(I)εspan(I).\operatorname{span}(I_{\mathrm{med}})=\min_{t=1}^{T}\operatorname{span}(J_{t})\leq\frac{1}{T}\sum_{t=1}^{T}\operatorname{span}(J_{t})\leq\frac{d+2}{\left\lceil(d+2)/\varepsilon\right\rceil}\operatorname{span}(I)\leq\varepsilon\operatorname{span}(I).\qed
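A Python sketch of \operatorname{remove-medium} under Definition 6; it assumes each item exposes fields w, h and a weight list v of length d (illustrative names), and that \operatorname{span}(i)=\max(a(i),v_{\max}(i)).

    import math

    def remove_medium(items, eps, f, delta0, d):
        """Return (J_r, delta_r, delta_{r-1}) as in Definition 6."""
        T = math.ceil((d + 2) / eps)
        deltas = [delta0]                       # thresholds delta_0 > delta_1 > ... > delta_T
        for _ in range(T):
            deltas.append(f(deltas[-1]))
        def span(i):
            return max(i.w * i.h, max(i.v))     # assumed: span(i) = max(a(i), v_max(i))
        best = None
        for t in range(1, T + 1):
            lo, hi = deltas[t], deltas[t - 1]
            J_t = [i for i in items
                   if lo < i.w <= hi or lo < i.h <= hi
                   or any(lo < vj <= hi for vj in i.v)]
            s = sum(span(i) for i in J_t)
            if best is None or s < best[0]:
                best = (s, J_t, deltas[t], deltas[t - 1])
        return best[1], best[2], best[3]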

By Definition 6, I-I_{\mathrm{med}} is (\varepsilon_{2},\varepsilon_{1})-non-medium. By Lemmas 19 and 4, I_{\mathrm{med}} can be packed into at most 6(d+1)\varepsilon\operatorname*{opt}(I)+3 bins using the \operatorname{\mathtt{simple-pack}} algorithm.

We will choose ff to be independent of II, so ε1\varepsilon_{1} and ε2\varepsilon_{2} are constants. Also note that ε2f(ε1)\varepsilon_{2}\coloneqq f(\varepsilon_{1}) and ε1δ0\varepsilon_{1}\leq\delta_{0}. We choose δ0min(1/(4d+1),2ε/3){\delta_{0}\coloneqq\min\left(1/(4d+1),2\varepsilon/3\right)}, so δ01\delta_{0}^{-1}\in\mathbb{Z}. We will choose ff later. For now, we will impose these conditions: f(x)εx2/2f(x)\leq\varepsilon x^{2}/2, and (x1f(x)1)(x^{-1}\in\mathbb{Z}\implies f(x)^{-1}\in\mathbb{Z}). The second condition implies that ε11,ε21\varepsilon_{1}^{-1},\varepsilon_{2}^{-1}\in\mathbb{Z}.

Henceforth, assume all (2, dd) bin packing instances to be (ε2,ε1)(\varepsilon_{2},\varepsilon_{1})-non-medium.

Definition 7.

We can classify item ii by its geometric dimensions as follows:

  • Big item: w(i)>ε1w(i)>\varepsilon_{1} and h(i)>ε1h(i)>\varepsilon_{1}.

  • Wide item: w(i)>ε1w(i)>\varepsilon_{1} and h(i)ε2h(i)\leq\varepsilon_{2}.

  • Tall item: w(i)ε2w(i)\leq\varepsilon_{2} and h(i)>ε1h(i)>\varepsilon_{1}.

  • Small item: w(i)ε2w(i)\leq\varepsilon_{2} and h(i)ε2h(i)\leq\varepsilon_{2}.

When rotation of items is allowed, assume w.l.o.g. that there are no tall items in II.

Definition 8 (Dense items).

Item ii is dense iff either a(i)=0a(i)=0 or vmax(i)/a(i)>1/ε12v_{\max}(i)/a(i)>1/\varepsilon_{1}^{2}.

Note that big items cannot be dense.

Definition 9 (Heavy and light items).

A dense item ii is said to be heavy in vector dimension jj iff vj(i)ε1v_{j}(i)\geq\varepsilon_{1}. Otherwise ii is said to be light in dimension jj. If ii is heavy in some dimension, then ii is said to be heavy, otherwise ii is light.
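The classification of Definitions 7–9 can be sketched as follows for a single (2, d) item; eps1 and eps2 stand for \varepsilon_{1} and \varepsilon_{2}, and the instance is assumed to be (\varepsilon_{2},\varepsilon_{1})-non-medium.

    def classify(w, h, v, eps1, eps2):
        """Classify a (2, d) item with width w, height h and weight vector v."""
        area = w * h
        vmax = max(v) if v else 0.0
        if w > eps1 and h > eps1:
            geom = "big"
        elif w > eps1 and h <= eps2:
            geom = "wide"
        elif w <= eps2 and h > eps1:
            geom = "tall"
        else:
            geom = "small"
        dense = (area == 0) or (vmax / area > 1 / eps1 ** 2)   # Definition 8
        heavy = dense and any(vj >= eps1 for vj in v)          # Definition 9
        return geom, dense, heavy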

F.2 Rounding One Side

In this subsection, we will show how a packing of II into bins can be modified to get a more structured packing where one of the geometric dimensions is rounded up.

Definition 10.

In a bin, assume a coordinate system where (0,0) is at the bottom-left corner and (1,1) is at the top-right corner. We define the following regions, called strips:

  • S(T)[0,1]×[1ε1,1]S^{(T)}\coloneqq[0,1]\times[1-\varepsilon_{1},1] and S(T)[0,1]×[1ε1/2,1]S^{(T^{\prime})}\coloneqq[0,1]\times[1-\varepsilon_{1}/2,1].

  • S(B)[0,1]×[0,ε1]S^{(B)}\coloneqq[0,1]\times[0,\varepsilon_{1}].

  • S(L)[0,ε1]×[0,1]S^{(L)}\coloneqq[0,\varepsilon_{1}]\times[0,1].

  • S(R)[1ε1,1]×[0,1]S^{(R)}\coloneqq[1-\varepsilon_{1},1]\times[0,1] and S(R)[1ε1/2,1]×[0,1]S^{(R^{\prime})}\coloneqq[1-\varepsilon_{1}/2,1]\times[0,1].

We say that an item intersects a strip iff a non-zero volume of that item lies inside the strip.

Property 11.

A bin is said to satisfy Property 11 iff both of these conditions hold:

  (a) The x-coordinate and width of all non-dense wide and big items are multiples of \varepsilon_{1}^{2}/4.

  (b) If the bin contains dense items, then dense items are packed inside S^{(R^{\prime})} and no non-dense item intersects S^{(R^{\prime})}.

Property 12.

A bin is said to satisfy Property 12 iff both of these conditions hold:

  (a) The y-coordinate and height of all non-dense tall and big items are multiples of \varepsilon_{1}^{2}/4.

  (b) If the bin contains dense items, then dense items are packed inside S^{(T^{\prime})} and no non-dense item intersects S^{(T^{\prime})}.

Equivalently, we can say that a bin satisfies Property 12 iff its mirror image about the line y=xy=x satisfies Property 11.

The main result of this subsection is the following:

Lemma 20.

Given a packing of items into a bin, we can round up the width of some wide and big non-dense items to a multiple of ε12/4\varepsilon_{1}^{2}/4 or round up the height of some tall and big non-dense items to a multiple of ε12/4\varepsilon_{1}^{2}/4 and get a packing into 2 bins and 2 boxes where:

  • Each bin satisfies either Property 11 or Property 12.

  • v1v_{1} of each box is at most 1/21/2.

  • One of the boxes has only dense items. Its larger dimension is 1 and its smaller dimension is δd2dε12+ε2\delta_{d}\coloneqq 2d\varepsilon_{1}^{2}+\varepsilon_{2}.

  • One of the boxes has only non-dense items. Its larger dimension is 1 and its smaller dimension is ε1+ε2\varepsilon_{1}+\varepsilon_{2}.

  • One of the boxes is horizontal, i.e., has width 1 and only contains wide and small items. The other box is vertical, i.e., has height 1 and only contains tall and small items.

Before proving Lemma 20, we first prove a few ancillary results.

For X{T,B}X\in\{T,B\}, the items lying completely inside S(X)S^{(X)} are either small or wide. Let C(X)C^{(X)} be the set of small and wide items that intersect S(X)S^{(X)}. For X{L,R}X\in\{L,R\}, the items lying completely inside S(X)S^{(X)} are either small or tall. Let C(X)C^{(X)} be the set of small and tall items that intersect S(X)S^{(X)}. Since 2ε1+ε212\varepsilon_{1}+\varepsilon_{2}\leq 1, we get C(T)C(B)=C(L)C(R)={}C^{(T)}\cap C^{(B)}=C^{(L)}\cap C^{(R)}=\{\}.

W.l.o.g., assume that v1(C(T))1/2v_{1}(C^{(T)})\leq 1/2 because v1(C(B)C(T))1v_{1}(C^{(B)}\cup C^{(T)})\leq 1 and if v1(C(T))>v1(C(B))v_{1}(C^{(T)})>v_{1}(C^{(B)}), then we can mirror-invert the bin along a horizontal axis. Similarly assume that v1(C(R))1/2v_{1}(C^{(R)})\leq 1/2.

Observation 21.

If a bin only contains tall and small items, it trivially satisfies Property 11a. If a bin only contains wide and small items, it trivially satisfies Property 12a.

Lemma 22.

Suppose we’re given a packing of items into a bin such that no item intersects S(R)S^{(R)}. Then we can increase the widths of all wide and big items to a multiple of ε12/4\varepsilon_{1}^{2}/4 and repack the items so that they satisfy Property 11a and no item intersects S(R)S^{(R^{\prime})}.

Proof.

Let yb(i)y_{b}(i) and yt(i)y_{t}(i) be the yy-coordinates of the bottom and top edge respectively of item ii. If an item jj intersects the strip [0,1]×[yb(i),yt(i)][0,1]\times[y_{b}(i),y_{t}(i)] and lies to the right of ii (i.e. the left edge of jj is to the right of the right edge of ii), we say that iimmji\prec_{\mathrm{imm}}j (see Fig. 5). Let \preceq denote the reflexive and transitive closure of the relation imm\prec_{\mathrm{imm}}. It is easy to see that \preceq is a partial ordering of II. Define iji\prec j as ijiji\preceq j\wedge i\neq j.

Figure 5: Items A, B, C and D in a bin. Here A\prec_{\mathrm{imm}}D but A\not\prec_{\mathrm{imm}}C. Also, A\prec_{\mathrm{imm}}B\prec_{\mathrm{imm}}C, so A\preceq C.

Define p_{w}(i) to be 1 if i is wide or big, and 0 otherwise. Also, define n_{w}(i)\coloneqq p_{w}(i)+\max_{j\prec i}n_{w}(j) (if there is no j\prec i, define \max_{j\prec i}n_{w}(j)\coloneqq 0). Intuitively, n_{w}(i) denotes the length of the largest chain of wide items preceding i. The x-coordinate of the right edge of item i is more than \varepsilon_{1}n_{w}(i). Therefore, n_{w}(i)<1/\varepsilon_{1}-1.

Transformation 13.

Move each item ii to the right by (nw(i)pw(i))ε12/2(n_{w}(i)-p_{w}(i))\varepsilon_{1}^{2}/2. Additionally, if ii is wide or big, move it further to the right so that the xx-coordinate of its left edge is a multiple of ε12/4\varepsilon_{1}^{2}/4, and increase its width so that it is a multiple of ε12/4\varepsilon_{1}^{2}/4.

On applying Transformation 13 to item ii, the xx-coordinate of its right edge increases by less than nw(i)ε12/2n_{w}(i)\varepsilon_{1}^{2}/2. Since nw(i)<1/ε11n_{w}(i)<1/\varepsilon_{1}-1, the increase is less than ε1/2\varepsilon_{1}/2. Therefore, ii will not intersect S(R)S^{(R^{\prime})} after this transformation. Also, after applying this transformation to all items, the bin satisfies Property 11a.

We will now prove that after applying Transformation 13 to all items, no items overlap. If ii and jj are not relatively ordered by \preceq, they cannot overlap because we only moved items rightwards. Now assume w.l.o.g. that iji\prec j. The xx-coordinate of the right edge of ii increases by less than nw(i)ε12/2n_{w}(i)\varepsilon_{1}^{2}/2. The xx-coordinate of the left edge of jj increases by at least (nw(j)pw(j))ε12/2(n_{w}(j)-p_{w}(j))\varepsilon_{1}^{2}/2. Since nw(i)maxijnw(i)=nw(j)pw(j)n_{w}(i)\leq\max_{i^{\prime}\prec j}n_{w}(i^{\prime})=n_{w}(j)-p_{w}(j), ii and jj don’t overlap. ∎
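A sketch of Transformation 13 as applied in the proof above; the placements (x, y, w, h) and the wide-or-big flags are assumed to be given, and floating-point rounding is used only for illustration.

    import math

    def apply_transformation_13(placements, wide_or_big, eps1):
        """placements: list of (x, y, w, h) tuples for the items in a bin;
        wide_or_big[i]: True iff item i is wide or big."""
        n = len(placements)
        nw = [0] * n
        # process items from left to right, so predecessors are handled first
        for i in sorted(range(n), key=lambda k: placements[k][0]):
            xi, yi, wi, hi = placements[i]
            pred_max = 0
            for j in range(n):
                xj, yj, wj, hj = placements[j]
                # j precedes i if j lies to i's left and their y-projections overlap
                if xj + wj <= xi and not (yj + hj <= yi or yi + hi <= yj):
                    pred_max = max(pred_max, nw[j])
            nw[i] = (1 if wide_or_big[i] else 0) + pred_max
        unit = eps1 ** 2 / 4
        result = []
        for i, (x, y, w, h) in enumerate(placements):
            pw = 1 if wide_or_big[i] else 0
            x += (nw[i] - pw) * eps1 ** 2 / 2          # shift right
            if pw:
                x = math.ceil(x / unit) * unit         # round x up to a multiple of eps1^2/4
                w = math.ceil(w / unit) * unit         # round the width up as well
            result.append((x, y, w, h))
        return result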

Lemma 23.

Let RR be a set of wide and small items that are dense and have total weight at most 1. They can be packed in polynomial time into a box of width 1 and height δd2dε12+ε2\delta_{d}\coloneqq 2d\varepsilon_{1}^{2}+\varepsilon_{2}.

Proof.

a(R)ε12vmax(R)dε12a(R)\leq\varepsilon_{1}^{2}v_{\max}(R)\leq d\varepsilon_{1}^{2}.
So by Lemma 9, height used by NFDH is less than 2a(R)+ε22dε12+ε22a(R)+\varepsilon_{2}\leq 2d\varepsilon_{1}^{2}+\varepsilon_{2}. ∎

We can get an analogous result for tall and small dense items.

Proof of Lemma 20.

Suppose the bin contains items JJ. Then we can use Lemma 23 to move dense wide items to box DWD_{W} and move dense tall and small items to box DHD_{H}. We will later repack one of DWD_{W} and DHD_{H} into a bin.

v1(DWDH)1v_{1}(D_{W}\cup D_{H})\leq 1. This gives us 2 cases: v1(DW)1/2v_{1}(D_{W})\leq 1/2 or v1(DH)1/2v_{1}(D_{H})\leq 1/2. The first case is the same as the second one with the coordinate axes swapped, so assume w.l.o.g. that v1(DW)1/2v_{1}(D_{W})\leq 1/2.

Move C(R)C^{(R)} to a box of height 1 and width ε1+ε21/2\varepsilon_{1}+\varepsilon_{2}\leq 1/2. C(R)C^{(R)} only has tall and small non-dense items. Also, v1(C(R))1/2v_{1}(C^{(R)})\leq 1/2.

Let I(R)I^{(R)} be the set of big and wide items that intersect S(R)S^{(R)}. Move I(R)I^{(R)} to a separate bin. The items in I(R)I^{(R)} are stacked on top of each other. Therefore, we can round their widths to a multiple of ε12/4\varepsilon_{1}^{2}/4. I(R)I^{(R)} doesn’t have dense items. Therefore, this new bin satisfies the desired properties.

Since we removed C(R)C^{(R)} and I(R)I^{(R)} from the bin, S(R)S^{(R)} is empty. By Lemma 22, we can round the xx-coordinate and width of big and wide items in the bin to a multiple of ε12/4\varepsilon_{1}^{2}/4 and then repack the items in the bin so that the bin satisfies Property 11a and S(R)S^{(R^{\prime})} is empty. Observe that

δd=2dε12+ε22dε12+εε122ε12(4d+1)ε1ε12.\delta_{d}=2d\varepsilon_{1}^{2}+\varepsilon_{2}\leq 2d\varepsilon_{1}^{2}+\frac{\varepsilon\varepsilon_{1}^{2}}{2}\leq\frac{\varepsilon_{1}}{2}(4d+1)\varepsilon_{1}\leq\frac{\varepsilon_{1}}{2}.

Since δdε1/2\delta_{d}\leq\varepsilon_{1}/2, pack DHD_{H} into S(R)S^{(R^{\prime})}. Now this bin also satisfies Property 11b. In total, we used 2 bins and 2 boxes (C(R)C^{(R)} and DWD_{W}). The dense box is horizontal and the non-dense box is vertical. Refer to Fig. 6 for an example.

Figure 6: A bin is split into 2 bins and 2 boxes. Then the widths and x-coordinates of big and wide non-dense items are rounded up to a multiple of \varepsilon_{1}^{2}/4. Dense items are shaded dark and non-dense items are shaded light.

Lemma 24.

When item rotation is allowed, given a packing of items into a bin, we can round up the width of some wide and big items to a multiple of ε12/4\varepsilon_{1}^{2}/4 and round up the height of some tall and big items to a multiple of ε12/4\varepsilon_{1}^{2}/4 and get a packing into 2 bins and 1 box where:

  • Each bin satisfies Property 11.

  • v1v_{1} of the box is at most 1/21/2.

  • The box has height 1 and width ε1+ε2\varepsilon_{1}+\varepsilon_{2}.

  • The box only contains tall and small non-dense items.

Proof sketch.

Suppose the bin contains items JJ. Move dense items to a vertical box DHD_{H} using Lemma 23. Move C(R)C^{(R)} to a box of height 1 and width ε1+ε2\varepsilon_{1}+\varepsilon_{2}. Move I(R)I^{(R)} to a new bin and round item widths to a multiple of ε12/4\varepsilon_{1}^{2}/4. Now S(R)S^{(R^{\prime})} is empty, so pack DHD_{H} into S(R)S^{(R^{\prime})}. The rest of the proof is similar to that of Lemma 20. ∎

F.3 Getting Slack in Weight of Bins

For a bin JJ, if j[d],vj(J)1ε\forall j\in[d],v_{j}(J)\leq 1-\varepsilon, then we can round up weights of items. Hence, we would like to have bins with (roughly) this property.

Definition 14.

A bin JJ is said to be ε\varepsilon-slacked iff at least one of these conditions holds:

  • j[d],vj(J)1ε\forall j\in[d],v_{j}(J)\leq 1-\varepsilon.

  • |J|=1|J|=1.

  • |J|=2|J|=2 and JJ only contains dense items and iJ,vmax(i)1/2\forall i\in J,v_{\max}(i)\leq 1/2.

A packing of items into multiple bins is said to be ε\varepsilon-slacked iff all bins in the packing are ε\varepsilon-slacked.

We say that a packing of items in a bin is quarter-structured iff the bin is \varepsilon-slacked and satisfies either Property 11 or Property 12. We would like to round up the width or height of each item in I and repack the items into bins such that each bin is quarter-structured.

Lemma 25.

Let D\subseteq[d] and let \delta\leq 1/4 be a parameter. Let I be a set of items where \forall j\in D,v_{j}(I)\leq V_{j}. Then we can partition I into at most \left\lvert D\right\rvert+1 disjoint subsets such that each subset I^{\prime} satisfies one of these properties:

  • |I|=1|I^{\prime}|=1.

  • jD,vj(I)(1δ)Vj\forall j\in D,v_{j}(I^{\prime})\leq(1-\delta)V_{j}.

Proof.

Let IL{iI:jD,vj(i)>(12δ)Vj}I_{L}\coloneqq\{i\in I:\exists j\in D,v_{j}(i)>(1-2\delta)V_{j}\}. Move each item in ILI_{L} to a new box. Let D{jD:vj(IL)>(12δ)Vj}D^{\prime}\coloneqq\{j\in D:v_{j}(I_{L})>(1-2\delta)V_{j}\}. Then |D||IL||D^{\prime}|\geq|I_{L}| and jD,vj(IIL)<2δVj(1δ)Vj\forall j\in D^{\prime},v_{j}(I-I_{L})<2\delta V_{j}\leq(1-\delta)V_{j}.

Order the items in IILI-I_{L} arbitrarily. For each jDDj\in D-D^{\prime}, find the smallest prefix PjP_{j} such that vj(Pj)δVjv_{j}(P_{j})\geq\delta V_{j}. Let iji_{j} be the last item in PjP_{j}. Then vj(Pjij)<δVjv_{j}(P_{j}-i_{j})<\delta V_{j}. Since we removed items from ILI_{L}, vj(ij)(12δ)Vjv_{j}(i_{j})\leq(1-2\delta)V_{j}. Therefore, vj(Pj)(1δ)Vjv_{j}(P_{j})\leq(1-\delta)V_{j}.

Now order these prefixes in non-decreasing order of cardinality. Let them be P_{1}^{\prime}, P_{2}^{\prime}, \ldots, P_{|D-D^{\prime}|}^{\prime}. The sets P_{1}^{\prime}, P_{2}^{\prime}-P_{1}^{\prime}, P_{3}^{\prime}-P_{2}^{\prime}, \ldots form a partition of P_{|D-D^{\prime}|}^{\prime}. Put each such set in a new box, if the set is not empty. The items which remain in the original box are Q\coloneqq I-I_{L}-P_{|D-D^{\prime}|}^{\prime}. \forall j\in D-D^{\prime}, Q\subseteq I-I_{L}-P_{j}. Since v_{j}(P_{j})\geq\delta V_{j}, we get that \forall j\in D-D^{\prime},v_{j}(Q)\leq(1-\delta)V_{j}.

Therefore, total number of boxes needed is at most 1+|IL|+|DD|1+|D|+|DD||D|+11+|I_{L}|+|D-D^{\prime}|\leq 1+|D^{\prime}|+|D-D^{\prime}|\leq|D|+1. ∎
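A sketch of the partitioning used in this proof; each item is represented by a dictionary mapping the dimensions in D to its weights, and V maps each dimension j in D to the bound V_j (an illustrative representation).

    def slack_partition(items, V, delta):
        """Partition items into groups so that each group is either a single item or
        has v_j(group) <= (1 - delta) * V[j] for every j in D (delta <= 1/4)."""
        D = list(V.keys())
        huge = [any(i[j] > (1 - 2 * delta) * V[j] for j in D) for i in items]
        I_L = [i for i, hg in zip(items, huge) if hg]           # each goes in its own group
        rest = [i for i, hg in zip(items, huge) if not hg]
        groups = [[i] for i in I_L]
        D_rest = [j for j in D if sum(i[j] for i in I_L) <= (1 - 2 * delta) * V[j]]
        prefix_len = {}                  # smallest prefix of weight >= delta * V[j] in dimension j
        for j in D_rest:
            total, k = 0.0, len(rest)
            for idx, i in enumerate(rest):
                total += i[j]
                if total >= delta * V[j]:
                    k = idx + 1
                    break
            prefix_len[j] = k
        prev = 0
        for c in sorted(set(prefix_len.values())):              # consecutive prefix differences
            if rest[prev:c]:
                groups.append(rest[prev:c])
            prev = c
        if rest[prev:]:                                         # the remaining items Q
            groups.append(rest[prev:])
        return groups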

Lemma 26.

Given a packing of items II into mm bins, we can round up the width of some non-dense wide and big items in II to a multiple of ε12/4\varepsilon_{1}^{2}/4 and round up the height of some non-dense tall and big items in II to a multiple of ε12/4\varepsilon_{1}^{2}/4 and repack the items into (d+4)m(d+4)m ε\varepsilon-slacked bins such that each bin satisfies either Property 11 or Property 12.

Proof.

Let B1,B2,,BmB_{1},B_{2},\ldots,B_{m} be a packing of items II into mm bins. For each bin BkB_{k}, we can use Lemma 20 to round up some items in BkB_{k} and split BkB_{k} into bins JkJ_{k} and KkK_{k} and boxes WkW_{k} and HkH_{k}. W.l.o.g., assume WkW_{k} is a horizontal box. Put each box in a new bin. Then WkW_{k} satisfies Property 12 and HkH_{k} satisfies Property 11.

Let Dk{j[d]:vj(Jk)>(1ε)}D_{k}\coloneqq\{j\in[d]:v_{j}(J_{k})>(1-\varepsilon)\}, Ek{j[d]:vj(Kk)>(1ε)}E_{k}\coloneqq\{j\in[d]:v_{j}(K_{k})>(1-\varepsilon)\}, Fk{j[d]:vj(Wk)>(1ε)}F_{k}\coloneqq\{j\in[d]:v_{j}(W_{k})>(1-\varepsilon)\} and Gk{j[d]:vj(Hk)>(1ε)}G_{k}\coloneqq\{j\in[d]:v_{j}(H_{k})>(1-\varepsilon)\}. DkD_{k}, EkE_{k}, FkF_{k} and GkG_{k} are pairwise disjoint and they are subsets of [d][d]. Now use Lemma 25 with parameter δ=ε\delta=\varepsilon on bin JkJ_{k} with dimensions DkD_{k}. This splits JkJ_{k} into |Dk|+1|D_{k}|+1 ε\varepsilon-slacked bins. Similarly, by splitting KkK_{k}, WkW_{k} and HkH_{k}, we get |Ek|+1|E_{k}|+1, |Fk|+1|F_{k}|+1 and |Gk|+1|G_{k}|+1 ε\varepsilon-slacked bins respectively.

The total number of bins from BkB_{k} is |Dk|+|Ek|+|Fk|+|Gk|+4d+4|D_{k}|+|E_{k}|+|F_{k}|+|G_{k}|+4\leq d+4. Therefore, we get a total of (d+4)m(d+4)m bins.

JkJ_{k}, KkK_{k}, WkW_{k} and HkH_{k} satisfy the desired properties except possibly ε\varepsilon-slackness. When we split a bin into multiple bins, the position of items relative to the bin isn’t changed. Therefore, the split bins continue to satisfy these properties. ∎

Lemma 27.

Given a packing of items II, if item rotations are allowed, we can round up the width of some non-dense wide and big items in II to a multiple of ε12/4\varepsilon_{1}^{2}/4 and round up the height of some non-dense tall and big items in II to a multiple of ε12/4\varepsilon_{1}^{2}/4 and repack II into (d+3)m(d+3)m ε\varepsilon-slacked bins such that each bin satisfies Property 11.

Proof sketch.

Use Lemma 24 on each bin in the packing to get 3 bins. The rest of the proof is similar to Lemma 26. ∎

We will now try to improve upon Lemmas 26 and 27 for the special case d=1d=1.

Lemma 28.

Let there be mm boxes of width 1 and height at most ε\varepsilon. Let the weight of each box be at most kεk\varepsilon in each dimension, where k{1,2}k\in\{1,2\}. Then we can pack these boxes into 1+mkε/(1kε)1+m\cdot k\varepsilon/(1-k\varepsilon) bins such that the resulting bins are kεk\varepsilon-slacked.

Proof.

We can pack 1/(kε)11/(k\varepsilon)-1 boxes in 1 bin with the resulting bin being kεk\varepsilon-slacked. This is because the sum of weights is at most 1kε1-k\varepsilon in each dimension and the total height of the boxes is at most 1/kε1/k-\varepsilon\leq 1. The total number of bins used is at most m/(1/(kε)1)\left\lceil m/(1/(k\varepsilon)-1)\right\rceil, which in turn is at most 1+mkε/(1kε)1+m\cdot k\varepsilon/(1-k\varepsilon). ∎

The above lemma can also be used for vertical boxes, i.e. height 1 and width at most ε\varepsilon.
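As a concrete check of the bound in Lemma 28: with k = 1, ε = 1/10 and m = 100, each bin receives 1/ε − 1 = 9 boxes, so ⌈100/9⌉ = 12 bins suffice, which is consistent with the bound 1 + 100·(1/10)/(9/10) ≈ 12.1.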

Lemma 29.

Let d=1d=1. Let there be mm boxes of width 1 and height at most ε\varepsilon containing only non-big non-dense items. Let the weight of each box be at most 1/21/2. Then we can pack these boxes into 2+m(1/2+ε/(1ε))2+m\left(1/2+\varepsilon/(1-\varepsilon)\right) bins such that the resulting bins are ε\varepsilon-slacked.

Proof.

Let ii be an item in the box. Boxes only have non-big items, so a(i)ε2a(i)\leq\varepsilon_{2}. Boxes only have non-dense items, so v1(i)a(i)/ε12ε/2v_{1}(i)\leq a(i)/\varepsilon_{1}^{2}\leq\varepsilon/2.

From each box, choose the smallest prefix SS for which v1(S)ε/2v_{1}(S)\geq\varepsilon/2. Then v1(S)εv_{1}(S)\leq\varepsilon. Remove SS and put it in a new box of the same dimensions.

This gives us mm boxes of weight at most (1ε)/2(1-\varepsilon)/2. We can pair them up and pack them into m/2m/2+1\left\lceil m/2\right\rceil\leq m/2+1 bins. Those bins will be ε\varepsilon-slacked.

We also get mm new boxes of weight at most ε\varepsilon. We can pack 1/ε11/\varepsilon-1 such boxes into a bin. This gives us at most 1+mε/(1ε)1+m\cdot\varepsilon/(1-\varepsilon) bins. These bins are ε\varepsilon-slacked.

The total number of bins used is (m2+1)+(1+mε/(1ε))=2+m(1/2+ε/(1ε))\left(\frac{m}{2}+1\right)+\left(1+m\cdot\varepsilon/(1-\varepsilon)\right)=2+m\left(1/2+\varepsilon/(1-\varepsilon)\right). ∎
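The repacking in this proof can be sketched in a few lines of Python; this is only an illustration under the lemma's assumptions (d = 1, each box given as a list of item weights, every item weight at most ε/2), and the names are ours.

    def repack_nondense_boxes(boxes, eps):
        """Sketch of Lemma 29: split off a prefix of weight >= eps/2 from each box,
        pair the trimmed boxes two per bin, and pack the prefixes 1/eps - 1 per bin."""
        trimmed, prefixes = [], []
        for box in boxes:
            pref, total = [], 0.0
            while box and total < eps / 2:      # smallest prefix of weight >= eps/2
                w = box.pop(0)
                pref.append(w)
                total += w
            prefixes.append(pref)               # weight in [eps/2, eps]
            trimmed.append(box)                 # remaining weight at most (1 - eps)/2
        bins = [trimmed[i:i + 2] for i in range(0, len(trimmed), 2)]   # pair up
        per_bin = max(int(1 / eps) - 1, 1)      # as in Lemma 28 with k = 1
        bins += [prefixes[i:i + per_bin] for i in range(0, len(prefixes), per_bin)]
        return bins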

Lemma 30.

Let d=1d=1. Let there be mm boxes of width 1 and height δd\delta_{d}. Suppose the boxes only have dense items, and each box has total weight at least ε\varepsilon and at most 1/21/2. Then we can pack the items of these boxes into at most 3+2m/33+2m/3 bins such that the resulting bins are ε\varepsilon-slacked.

Proof.

Let there be tt boxes that have an item of weight at least 1/22ε1/2-2\varepsilon. No item has weight more than half. Therefore, we can pair up these high-weight items into at most t/2+1t/2+1 ε\varepsilon-slacked bins. In each of these tt boxes, the remaining items have weight at most 2ε2\varepsilon. Since εδd\varepsilon\geq\delta_{d}, by Lemma 28, we can pack them into at most 1+2εt/(12ε)1+2\varepsilon t/(1-2\varepsilon) ε\varepsilon-slacked bins.

mtm-t boxes have all items of weight less than 1/22ε1/2-2\varepsilon. Each of these boxes has total weight between ε\varepsilon and 1/21/2. Each box can be split into 2 boxes as follows: Order the items in a box and find the smallest prefix of weight at least ε\varepsilon. Since there are no items of weight more than 1/22ε1/2-2\varepsilon, such a prefix has weight between ε\varepsilon and 1/2ε1/2-\varepsilon.

Arrange the mtm-t boxes into groups of at most 3 boxes each. Let C1C_{1}, C2C_{2}, C3C_{3} be these boxes in one such group. Split C1C_{1} and C2C_{2} into 2 boxes each by the above method. Let the resulting boxes be C1,C1,C2,C2C_{1}^{\prime},C_{1}^{\prime\prime},C_{2}^{\prime},C_{2}^{\prime\prime} respectively. Assume w.l.o.g. that v1(Cj)v1(Cj)v_{1}(C_{j}^{\prime})\leq v_{1}(C_{j}^{\prime\prime}) for j[2]j\in[2]. Pack C1,C2,C1C_{1}^{\prime},C_{2}^{\prime},C_{1}^{\prime\prime} into 1 bin. It has weight at most 1ε1-\varepsilon, so it is ε\varepsilon-slacked. Pack C2C_{2}^{\prime\prime} and C3C_{3} into 1 bin. It has weight at most 1ε1-\varepsilon, so it is ε\varepsilon-slacked. Therefore, we can convert a group of 3 boxes into 2 ε\varepsilon-slacked bins.

The total number of bins used is at most (1+2εt/(12ε))+2(mt)/33+2m/3t(232ε12ε)\left(1+2\varepsilon t/(1-2\varepsilon)\right)+2\left\lceil(m-t)/3\right\rceil\leq 3+2m/3-t\left(\frac{2}{3}-\frac{2\varepsilon}{1-2\varepsilon}\right), which is at most 3+2m/33+2m/3, assuming ε1/5\varepsilon\leq 1/5. ∎

The above lemma can also be used for vertical boxes, i.e. height 1 and width at most δd\delta_{d}.
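The group-of-three step in the proof of Lemma 30 can be made concrete as follows; this Python fragment is an illustration only, assuming each box is a list of item weights with total in [ε, 1/2] and no single weight of 1/2 − 2ε or more, and the function names are ours.

    def split_off_prefix(box, eps):
        """Split a box into a prefix of weight in [eps, 1/2 - eps] and the rest."""
        pref, total = [], 0.0
        while box and total < eps:
            w = box.pop(0)
            pref.append(w)
            total += w
        return pref, box

    def repack_three_boxes(C1, C2, C3, eps):
        """Convert three boxes into two eps-slacked bins, as in the proof of Lemma 30."""
        C1a, C1b = split_off_prefix(C1, eps)
        C2a, C2b = split_off_prefix(C2, eps)
        if sum(C1a) > sum(C1b):
            C1a, C1b = C1b, C1a                 # make C1a the lighter part of C1
        if sum(C2a) > sum(C2b):
            C2a, C2b = C2b, C2a
        bin1 = C1a + C2a + C1b                  # weight <= 1/2 + 1/4 <= 1 - eps
        bin2 = C2b + C3                         # weight <= (1/2 - eps) + 1/2 = 1 - eps
        return bin1, bin2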

Lemma 31.

Given a packing of items II into mm bins, when d=1d=1, we can round up the width of some non-dense wide and big items in II to a multiple of ε12/4\varepsilon_{1}^{2}/4 and round up the height of some non-dense tall and big items in II to a multiple of ε12/4\varepsilon_{1}^{2}/4 and repack II into (3+16+ε1ε)m+12\left(3+\frac{1}{6}+\frac{\varepsilon}{1-\varepsilon}\right)m+12 ε\varepsilon-slacked bins such that each bin satisfies either Property 11 or Property 12.

Proof.

Let B1,B2,,BmB_{1},B_{2},\ldots,B_{m} be a packing of items II into mm bins. For each bin BkB_{k}, we can use Lemma 20 to round up some items in BkB_{k} and split BkB_{k} into bins JkJ_{k} and KkK_{k} and boxes WkW_{k} and HkH_{k}, where WkW_{k} is a horizontal box and HkH_{k} is a vertical box, i.e. the width of WkW_{k} is 1 and the height of HkH_{k} is 1.

Classifying bins:

We will now classify the bins B1,B2,,BmB_{1},B_{2},\ldots,B_{m}.

Type 1: v1(Wk)εv_{1}(W_{k})\leq\varepsilon and v1(Hk)εv_{1}(H_{k})\leq\varepsilon:

Among JkJ_{k} and KkK_{k}, at most 1 bin will have weight more than 1ε1-\varepsilon. Use Lemma 25 to split it into 2 bins. So for each original bin of type 1, we get at most 3 ε\varepsilon-slacked bins and 2 boxes, one horizontal and one vertical, each of total weight at most ε\varepsilon.

Both JkJ_{k} and KkK_{k} satisfy either Property 11 or Property 12. When we split a bin into multiple bins, the position of items relative to the bin isn’t changed. Therefore, the bins continue to satisfy these properties.

Type 2: v1(Wk)>εv_{1}(W_{k})>\varepsilon and v1(Hk)εv_{1}(H_{k})\leq\varepsilon:

v1(Wk)>εv_{1}(W_{k})>\varepsilon implies that v1(Jk)1εv_{1}(J_{k})\leq 1-\varepsilon and v1(Kk)1εv_{1}(K_{k})\leq 1-\varepsilon, so JkJ_{k} and KkK_{k} are already ε\varepsilon-slacked. Pack WkW_{k} in a bin. Since v1(Wk)1/21εv_{1}(W_{k})\leq 1/2\leq 1-\varepsilon, WkW_{k} is ε\varepsilon-slacked. WkW_{k} satisfies Property 12. So for each original bin of type 2, we get at most 3 ε\varepsilon-slacked bins and 1 vertical box of weight at most ε\varepsilon.

Type 3: v1(Wk)εv_{1}(W_{k})\leq\varepsilon and v1(Hk)>εv_{1}(H_{k})>\varepsilon:

The analysis is similar to type 2. For each original bin of type 3, we get at most 3 ε\varepsilon-slacked bins and 1 horizontal box of weight at most ε\varepsilon.

Type 4: v1(Wk)>εv_{1}(W_{k})>\varepsilon and v1(Hk)>εv_{1}(H_{k})>\varepsilon:

v1(Wk)>εv_{1}(W_{k})>\varepsilon implies that v1(Jk)1εv_{1}(J_{k})\leq 1-\varepsilon and v1(Kk)1εv_{1}(K_{k})\leq 1-\varepsilon, so JkJ_{k} and KkK_{k} are already ε\varepsilon-slacked. So we have at most 2 ε\varepsilon-slacked bins and 2 boxes of weight at most 1/21/2.

Repacking boxes:

We will now try to pack the boxes into bins. Each of these bins packs some dense boxes and some non-dense boxes. If multiple dense boxes were packed in a bin, we can use Lemma 23 to repack them into a single dense box and move that box to an edge of the bin. Bins that only pack horizontal boxes satisfy Property 12. Bins that only pack vertical boxes satisfy Property 11.

Among B1,B2,,BmB_{1},B_{2},\ldots,B_{m}, let there be mkm_{k} bins of type kk.

The number of ε\varepsilon-slacked bins is at most 3m1+3m2+3m3+2m43mm43m_{1}+3m_{2}+3m_{3}+2m_{4}\leq 3m-m_{4}. We also have m1+m3m_{1}+m_{3} horizontal boxes and m1+m2m_{1}+m_{2} vertical boxes of weight at most ε\varepsilon each. Since δdε1/2ε\delta_{d}\leq\varepsilon_{1}/2\leq\varepsilon and ε1+ε22ε/3+2ε3/9ε\varepsilon_{1}+\varepsilon_{2}\leq 2\varepsilon/3+2\varepsilon^{3}/9\leq\varepsilon, each box has the smaller geometric dimension at most ε\varepsilon. By Lemma 28, the number of bins we need to pack them is at most 2+ε1ε(2m1+m2+m3)2+\frac{\varepsilon}{1-\varepsilon}(2m_{1}+m_{2}+m_{3}).

We have m4m_{4} horizontal boxes and m4m_{4} vertical boxes that each have weight between ε\varepsilon and 1/21/2. m4m_{4} of these are dense boxes and m4m_{4} are non-dense boxes.

The non-dense boxes don’t have big items. Since εε1+ε2\varepsilon\geq\varepsilon_{1}+\varepsilon_{2}, by Lemma 29, the number of bins needed to pack them is at most 4+(12+ε1ε)m44+\left(\frac{1}{2}+\frac{\varepsilon}{1-\varepsilon}\right)m_{4}.

By Lemma 30, we can pack the dense boxes into 6+2m4/36+2m_{4}/3 bins, where each bin is ε\varepsilon-slacked.

The total number of bins used is at most

(3mm4)+(2+ε1ε(2m1+m2+m3))+(4+(12+ε1ε)m4)+(6+23m4)\displaystyle(3m-m_{4})+\left(2+\frac{\varepsilon}{1-\varepsilon}(2m_{1}+m_{2}+m_{3})\right)+\left(4+\left(\frac{1}{2}+\frac{\varepsilon}{1-\varepsilon}\right)m_{4}\right)+\left(6+\frac{2}{3}m_{4}\right)
=12+(3+16+ε1ε)m+ε1εm1+m4m6\displaystyle=12+\left(3+\frac{1}{6}+\frac{\varepsilon}{1-\varepsilon}\right)m+\frac{\varepsilon}{1-\varepsilon}m_{1}+\frac{m_{4}-m}{6}
12+(3+16+ε1ε)m(16ε1ε)m1\displaystyle\leq 12+\left(3+\frac{1}{6}+\frac{\varepsilon}{1-\varepsilon}\right)m-\left(\frac{1}{6}-\frac{\varepsilon}{1-\varepsilon}\right)m_{1} (m4mm1m_{4}\leq m-m_{1})
12+(3+16+ε1ε)m.\displaystyle\leq 12+\left(3+\frac{1}{6}+\frac{\varepsilon}{1-\varepsilon}\right)m. (ε1/8\varepsilon\leq 1/8)
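(For illustration, with ε = 1/8 the multiplicative factor in Lemma 31 is 3 + 1/6 + 1/7 ≈ 3.31.)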

Lemma 32.

Given a packing of items II into mm bins, when d=1d=1 and item rotations are allowed, we can round up the width of some non-dense wide and big items in II to a multiple of ε12/4\varepsilon_{1}^{2}/4 and round up the height of some non-dense tall and big items in II to a multiple of ε12/4\varepsilon_{1}^{2}/4 and repack II into (3+ε1ε)m+1\left(3+\frac{\varepsilon}{1-\varepsilon}\right)m+1 ε\varepsilon-slacked bins such that each bin satisfies Property 11.

Proof sketch.

Using techniques from the proof of Lemma 31, we get at most 3m3m ε\varepsilon-slacked bins and at most mm vertical boxes, where each box has total weight at most ε\varepsilon. Using Lemma 28, we get the desired results. ∎

We can summarize Lemmas 26, 27, 31 and 32 as follows:

Theorem 33.

Given a packing of II into mm bins, we can round up the width of some non-dense wide and big items in II to a multiple of ε12/4\varepsilon_{1}^{2}/4 and round up the height of some non-dense tall and big items in II to a multiple of ε12/4\varepsilon_{1}^{2}/4 and repack II into at most am+bam+b ε\varepsilon-slacked bins such that each bin satisfies either Property 11 or Property 12. Here the values of aa and bb depend on the value of dd and whether item rotations are allowed. See Table 2.

Table 2: Values of aa and bb for Theorem 33.
Case aa bb Lemma
rotations forbidden d+4d+4 0 Lemma 26
rotations allowed d+3d+3 0 Lemma 27
rotations forbidden and d=1d=1 3+16+ε1ε{\displaystyle 3+\frac{1}{6}+\frac{\varepsilon}{1-\varepsilon}} 1212 Lemma 31
rotations allowed and d=1d=1 3+ε1ε{\displaystyle 3+\frac{\varepsilon}{1-\varepsilon}} 11 Lemma 32

F.4 Rounding Weights

Definition 15 (Weight classes).

Two items i1i_{1} and i2i_{2} are said to belong to the same weight class iff one of these conditions holds:

  • i1i_{1} and i2i_{2} are big and j[d],vj(i1)=vj(i2)\forall j\in[d],v_{j}(i_{1})=v_{j}(i_{2})

  • i1i_{1} and i2i_{2} are non-dense and wide and j[d],vj(i1)/h(i1)=vj(i2)/h(i2)\forall j\in[d],v_{j}(i_{1})/h(i_{1})=v_{j}(i_{2})/h(i_{2}).

  • i1i_{1} and i2i_{2} are non-dense and tall and j[d],vj(i1)/w(i1)=vj(i2)/w(i2)\forall j\in[d],v_{j}(i_{1})/w(i_{1})=v_{j}(i_{2})/w(i_{2}).

  • i1i_{1} and i2i_{2} are non-dense and small and j[d],vj(i1)/a(i1)=vj(i2)/a(i2)\forall j\in[d],v_{j}(i_{1})/a(i_{1})=v_{j}(i_{2})/a(i_{2}).

  • i1i_{1} and i2i_{2} are dense and light and j[d],vj(i1)/vmax(i1)=vj(i2)/vmax(i2)\forall j\in[d],v_{j}(i_{1})/v_{\max}(i_{1})=v_{j}(i_{2})/v_{\max}(i_{2}).

  • i1i_{1} and i2i_{2} are dense and heavy and j[d],vj(i1)=vj(i2)\forall j\in[d],v_{j}(i_{1})=v_{j}(i_{2}).

Big items that have the same geometric dimensions and the same weight class are identical. We will round the geometric dimensions of dense items to 0, so heavy items of the same weight class would be identical.

Definition 16 (Slicing and fractional packing).

Let II be a set of items. I^\widehat{I} is called a slicing of II iff I^\widehat{I} can be obtained from II by one or more of the following operations:

  • Horizontally slicing a non-dense wide item (i.e. if a wide item ii is sliced into items i1i_{1} and i2i_{2}, then w(i)=w(i1)=w(i2)w(i)=w(i_{1})=w(i_{2}) and h(i)=h(i1)+h(i2)h(i)=h(i_{1})+h(i_{2})).

  • Vertically slicing a non-dense tall item (i.e. if a tall item ii is sliced into items i1i_{1} and i2i_{2}, then h(i)=h(i1)=h(i2)h(i)=h(i_{1})=h(i_{2}) and w(i)=w(i1)+w(i2)w(i)=w(i_{1})+w(i_{2})).

  • Slicing a non-dense small item in one or both dimensions.

  • Slicing a light dense item of zero area.

A packing of I^\widehat{I} into bins is called a fractional packing of II.

Wide non-dense items of the same width and of the same weight class can be used interchangeably while trying to get a fractional packing. Similarly, tall non-dense items of the same height and of the same weight class are interchangeable, small non-dense items of the same weight class are interchangeable and light dense items of zero area and the same weight class are interchangeable.

We will now see how to round the weights of items so that they belong to a constant number of weight classes.

F.4.1 Rounding Weights of Non-Dense Items

Transformation 17.

Given an item ii, j[d]\forall j\in[d], do the following (a code sketch is given after the list):

  • If ii is big, round up vj(i)v_{j}(i) to a positive multiple of ε12ε/8\varepsilon_{1}^{2}\varepsilon/8.

  • If ii is wide and non-dense, round up vj(i)v_{j}(i) to a positive multiple of h(i)ε1ε/8h(i)\varepsilon_{1}\varepsilon/8.

  • If ii is tall and non-dense, round up vj(i)v_{j}(i) to a positive multiple of w(i)ε1ε/8w(i)\varepsilon_{1}\varepsilon/8.

  • If ii is small and non-dense, round up vj(i)v_{j}(i) to a positive multiple of a(i)ε/8a(i)\varepsilon/8.
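The following Python fragment is a minimal sketch of Transformation 17 acting on a single item; the item representation (fields w, h, a, v and a type tag) and the function names are ours and only serve to make the rounding explicit.

    import math

    def round_up(x, unit):
        """Round x up to a positive multiple of `unit`."""
        return max(math.ceil(x / unit), 1) * unit

    def round_nondense_weights(item, eps, eps1):
        """Sketch of Transformation 17: round every weight of `item` up to a
        positive multiple of the granularity determined by the item's type."""
        unit = {
            'big':   (eps1 ** 2) * eps / 8,
            'wide':  item['h'] * eps1 * eps / 8,
            'tall':  item['w'] * eps1 * eps / 8,
            'small': item['a'] * eps / 8,
        }[item['kind']]
        item['v'] = [round_up(vj, unit) for vj in item['v']]
        return item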

Lemma 34.

Transformation 17 is valid, i.e. for any item ii, j[d],vj(i)1\forall j\in[d],v_{j}(i)\leq 1 after the transformation.

Proof.

Since ε11,ε1\varepsilon_{1}^{-1},\varepsilon^{-1}\in\mathbb{Z}, the transformed weight of a big item can be at most 1.

Since ε2(1ε)ε12\varepsilon_{2}\leq(1-\varepsilon)\varepsilon_{1}^{2}, the weight of a non-dense wide/tall/small item is at most ε2/ε121ε\varepsilon_{2}/\varepsilon_{1}^{2}\leq 1-\varepsilon and rounding it up can only increase it by at most ε/8\varepsilon/8. ∎

Lemma 35.

Let a bin JJ be μ\mu-slacked, for ε/8με\varepsilon/8\leq\mu\leq\varepsilon. Then after applying Transformation 17, the bin will be (με/8)(\mu-\varepsilon/8)-slacked.

Proof.

If the bin contains a single item, it will remain μ\mu-slacked after the transformation.

Suppose the bin contains multiple items, then j[d],vj(J)1μ\forall j\in[d],v_{j}(J)\leq 1-\mu. Let there be pp big items. Let the total height of wide non-dense items be HH. Let the total width of tall non-dense items be WW. Let the total area of small non-dense items be AA.

The total area of big items is at least pε12p\varepsilon_{1}^{2}, of wide items is at least ε1H\varepsilon_{1}H and of tall items is at least ε1W\varepsilon_{1}W. Since the total area of items in JJ is at most 1,

ε12p+ε1H+ε1W+A1.\varepsilon_{1}^{2}p+\varepsilon_{1}H+\varepsilon_{1}W+A\leq 1.

The total increase in vj(J)v_{j}(J) is at most

ε8(ε12p+ε1H+ε1W+A)ε8.\frac{\varepsilon}{8}(\varepsilon_{1}^{2}p+\varepsilon_{1}H+\varepsilon_{1}W+A)\leq\frac{\varepsilon}{8}.

Therefore, the resulting bin is (με/8)(\mu-\varepsilon/8)-slacked. ∎

Observation 36.

Since we’re rounding up weights of non-dense items in Transformation 17, some items may not continue to satisfy the non-denseness property, i.e. vj(i)/a(i)v_{j}(i)/a(i) may exceed 1/ε121/\varepsilon_{1}^{2} for some items ii and some j[d]j\in[d].

However, this will not affect us much. Formally:

  (a) Big items will remain non-dense, since a(i)>ε12a(i)>\varepsilon_{1}^{2} and vmax(i)1v_{\max}(i)\leq 1.

  (b) Small items will remain non-dense, since vmax(i)/a(i)v_{\max}(i)/a(i) will be rounded up to a multiple of ε/8\varepsilon/8, and ε/8\varepsilon/8 divides 1/ε121/\varepsilon_{1}^{2}.

  (c) For wide items, vmax(i)/a(i)v_{\max}(i)/a(i) may rise to at most 1/ε12+ε/81/\varepsilon_{1}^{2}+\varepsilon/8. Furthermore, vmax(i)/h(i)v_{\max}(i)/h(i) will be rounded to a multiple of ε1ε/8\varepsilon_{1}\varepsilon/8, and ε1ε/8\varepsilon_{1}\varepsilon/8 divides 1/ε121/\varepsilon_{1}^{2}, so vmax(i)/h(i)v_{\max}(i)/h(i) will continue to be at most 1/ε121/\varepsilon_{1}^{2}.

  (d) For tall items, vmax(i)/a(i)v_{\max}(i)/a(i) may rise to at most 1/ε12+ε/81/\varepsilon_{1}^{2}+\varepsilon/8. Furthermore, vmax(i)/w(i)v_{\max}(i)/w(i) will be rounded to a multiple of ε1ε/8\varepsilon_{1}\varepsilon/8, and ε1ε/8\varepsilon_{1}\varepsilon/8 divides 1/ε121/\varepsilon_{1}^{2}, so vmax(i)/w(i)v_{\max}(i)/w(i) will continue to be at most 1/ε121/\varepsilon_{1}^{2}.

Even if vmax(i)/a(i)v_{\max}(i)/a(i) of some non-dense items exceeds 1/ε121/\varepsilon_{1}^{2} after Transformation 17, we will continue to consider them non-dense items.

Lemma 37.

Define nbwc(8/(ε12ε))d,nwwc(8/(ε13ε))d,nswc(8/(ε12ε))dn_{\mathrm{bwc}}\coloneqq\left(8/(\varepsilon_{1}^{2}\varepsilon)\right)^{d},n_{\mathrm{wwc}}\coloneqq\left(8/(\varepsilon_{1}^{3}\varepsilon)\right)^{d},n_{\mathrm{swc}}\coloneqq\left(8/(\varepsilon_{1}^{2}\varepsilon)\right)^{d}. After Transformation 17, the number of weight classes of big items is at most nbwcn_{\mathrm{bwc}}, of wide non-dense items is at most nwwcn_{\mathrm{wwc}}, of tall non-dense items is at most nwwcn_{\mathrm{wwc}} and of small non-dense items is at most nswcn_{\mathrm{swc}}.

F.4.2 Rounding Weights of Dense Items

Transformation 18.

For item ii, if ii is non-dense, do nothing. If ii is dense, set w(i)w(i) and h(i)h(i) to 0 and for each j[d]j\in[d], if vj(i)(ε/8d)vmax(i)v_{j}(i)\leq(\varepsilon/8d)v_{\max}(i), then set vj(i)v_{j}(i) to 0.

Since Transformation 18 rounds down weights, we need to prove that we can easily undo this transformation.

Lemma 38.

Let JJ be a set of items. Let JJ^{\prime} be the items obtained by applying Transformation 18 to JJ. Suppose we’re given a packing of JJ^{\prime} into a bin that satisfies Property 11b and is μ\mu-slacked, for some με/8\mu\geq\varepsilon/8.

Then there is a polynomial-time algorithm to convert the packing of JJ^{\prime} into a packing of JJ that satisfies Property 11b, is (με/8)(\mu-\varepsilon/8)-slacked, and the position of non-dense items in the packing of JJ is the same as the position of non-dense items in the packing of JJ^{\prime}.

(Analogous lemma holds for Property 12b)

Proof.

By Lemma 23, we can always pack dense items in JJ in polynomial time into a box of height 1 and width δd\delta_{d}. Since δdε1/2\delta_{d}\leq\varepsilon_{1}/2, this box fits in S(R)S^{(R^{\prime})}. Therefore, Property 11b is satisfied.

Now we will prove that the packing of JJ is (με/8)(\mu-\varepsilon/8)-slacked. There are 3 cases to consider:

Case 1: j[d],vj(J)1μ\forall j\in[d],v_{j}(J^{\prime})\leq 1-\mu:
For each j[d]j\in[d] and each dense item ii, reverting the transformation increases vj(i)v_{j}(i) by at most (ε/8d)vmax(i)(\varepsilon/8d)v_{\max}(i). So by Claim 18, we get

vj(J)vj(J)+ε8dvmax(J)vj(J)+ε81(με8).v_{j}(J)\leq v_{j}(J^{\prime})+\frac{\varepsilon}{8d}v_{\max}(J^{\prime})\leq v_{j}(J^{\prime})+\frac{\varepsilon}{8}\leq 1-\left(\mu-\frac{\varepsilon}{8}\right).

Therefore, JJ is (με/8)(\mu-\varepsilon/8)-slacked.

Case 2: |J|=1|J^{\prime}|=1:
Then |J|=1|J|=1, so JJ is μ\mu-slacked.

Case 3: |J|=2|J^{\prime}|=2 and JJ^{\prime} only has dense items and iJ,1/2μvmax(i)1/2\forall i\in J^{\prime},1/2-\mu\leq v_{\max}(i)\leq 1/2:
The 0-value dimensions of ii increase to at most (ε/8d)vmax(i)vmax(i)(\varepsilon/8d)v_{\max}(i)\leq v_{\max}(i), so vmax(i)v_{\max}(i) is unchanged when the transformation is reverted. So JJ is μ\mu-slacked. ∎

As the first step in rounding dense items, we apply Transformation 18 to II.

Since ε11/4d\varepsilon_{1}\leq 1/4d, we get ε2(ε/8d)ε1\varepsilon_{2}\leq(\varepsilon/8d)\varepsilon_{1}. Hence, Transformation 18 forces heavy items to be heavy in all non-zero dimensions.

Now we will use different transformations on heavy and light items.

Transformation 19.

For a dense item ii, if vj(i)>ε1v_{j}(i)>\varepsilon_{1}, round up vj(i)v_{j}(i) to a multiple of ε1ε/8\varepsilon_{1}\varepsilon/8.

Lemma 39.

Let JJ be a packing of items into a bin that is μ\mu-slacked, for some με/8\mu\geq\varepsilon/8. Let JJ^{\prime} be the packing obtained by applying Transformation 19 to dense items in JJ. Then JJ^{\prime} is a (με/8)(\mu-\varepsilon/8)-slacked packing.

Proof.

Case 1: j[d],vj(J)1μ\forall j\in[d],v_{j}(J^{\prime})\leq 1-\mu:
For each j[d]j\in[d], there are fewer than ε11\varepsilon_{1}^{-1} items ii in JJ such that vj(i)>ε1v_{j}(i)>\varepsilon_{1}. For each such item, vj(i)v_{j}(i) increases by less than ε1ε/8\varepsilon_{1}\varepsilon/8. Therefore, vj(J)<vj(J)+ε/8v_{j}(J^{\prime})<v_{j}(J)+\varepsilon/8, so JJ^{\prime} is (με/8)(\mu-\varepsilon/8)-slacked.

Case 2: |J|=1|J|=1:
|J|=1|J^{\prime}|=1, so JJ^{\prime} is μ\mu-slacked.

Case 3: |J|=2|J|=2 and JJ only has dense items and iJ,1/2μvmax(i)1/2\forall i\in J,1/2-\mu\leq v_{\max}(i)\leq 1/2:
Since ε11ε1\varepsilon_{1}^{-1}\varepsilon^{-1}\in\mathbb{Z}, 1/21/2 is a multiple of ε1ε/8\varepsilon_{1}\varepsilon/8. Therefore, for each item ii, vmax(i)v_{\max}(i) increases to at most 1/21/2. So JJ^{\prime} is μ\mu-slacked. ∎

Lemma 40.

The number of distinct heavy items after Transformation 19 is at most

nhwc(8ε(1ε11))d.n_{\mathrm{hwc}}\coloneqq\left(\frac{8}{\varepsilon}\left(\frac{1}{\varepsilon_{1}}-1\right)\right)^{d}.
Proof.

This is because large vector dimensions are rounded to at most 8/ε1ε8/ε8/\varepsilon_{1}\varepsilon-8/\varepsilon distinct values. ∎
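For illustration, with d = 1, ε = 1/8 and ε1 = 1/16 this bound is nhwc = (8/ε)(1/ε1 − 1) = 64·15 = 960.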

Transformation 20.

For a dense item ii, if vmax(i)ε2v_{\max}(i)\leq\varepsilon_{2}, then for each j[d]j\in[d], round up vj(i)/vmax(i)v_{j}(i)/v_{\max}(i) to a power of 1/(1+ε/8)1/(1+\varepsilon/8) if vj(i)>0v_{j}(i)>0.

Lemma 41.

Let JJ be a packing of items into a bin that is μ\mu-slacked, for some ε/8με\varepsilon/8\leq\mu\leq\varepsilon. Let JJ^{\prime} be the packing obtained by applying Transformation 20 to dense items in JJ. Then JJ^{\prime} is a (με/8)(\mu-\varepsilon/8)-slacked packing.

Proof.

Case 1: j[d],vj(J)1μ\forall j\in[d],v_{j}(J^{\prime})\leq 1-\mu:
For each j[d]j\in[d], vj(i)v_{j}(i) increases by a factor of at most 1+ε/81+\varepsilon/8. So,

vj(J)vj(J)(1+ε8)vj(J)+ε8.v_{j}(J^{\prime})\leq v_{j}(J)\left(1+\frac{\varepsilon}{8}\right)\leq v_{j}(J)+\frac{\varepsilon}{8}.

Therefore, JJ^{\prime} is (με/8)(\mu-\varepsilon/8)-slacked.

Case 2: |J|=1|J|=1:
vmax(i)1v_{\max}(i)\leq 1 after the transformation, so the packing is valid and |J|=1|J^{\prime}|=1. Therefore, JJ^{\prime} is μ\mu-slacked.

Case 3: |J|=2|J|=2 and JJ only has dense items and iJ,1/2μvmax(i)1/2\forall i\in J,1/2-\mu\leq v_{\max}(i)\leq 1/2:
Since ε21/2ε1/2μ\varepsilon_{2}\leq 1/2-\varepsilon\leq 1/2-\mu and the transformation only applies when vmax(i)ε2v_{\max}(i)\leq\varepsilon_{2}, JJ remains the same after the transformation. ∎

Lemma 42.

After Transformation 20, the number of distinct values of the weight vector of light items is at most

ln(8d/ε)ln(1+ε/8)d18+εεln(8dε)d1nlwc\left\lceil\frac{\ln(8d/\varepsilon)}{\ln(1+\varepsilon/8)}\right\rceil^{d-1}\leq\left\lceil\frac{8+\varepsilon}{\varepsilon}\ln\left(\frac{8d}{\varepsilon}\right)\right\rceil^{d-1}\coloneqq n_{\mathrm{lwc}}
Proof.

This is because vj(i)/vmax(i)v_{j}(i)/v_{\max}(i) is lower-bounded by ε/8d\varepsilon/8d because of Transformation 18. ∎

Transformation 21 (Weight-rounding).

For a set II of items, weight-rounding is the process of applying Transformations 17, 18, 19 and 20 to all items. A set II of items is said to be weight-rounded iff II is invariant under Transformations 17, 18, 19 and 20.
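For concreteness, a small Python sketch of the dense-item part of weight-rounding (Transformations 18, 19 and 20) acting on one weight vector is given below; it assumes the divisibility conditions on ε and ε1 used above, handles only the weights (the geometric dimensions are set to 0 separately), and all names are ours.

    import math

    def weight_round_dense(v, d, eps, eps1, eps2):
        """Sketch of Transformations 18-20 for one dense item with weight vector v."""
        vmax = max(v)
        # Transformation 18: zero out dimensions negligible relative to vmax.
        v = [0.0 if vj <= (eps / (8 * d)) * vmax else vj for vj in v]
        # Transformation 19: round weights above eps1 up to a multiple of eps1 * eps / 8.
        unit = eps1 * eps / 8
        v = [math.ceil(vj / unit) * unit if vj > eps1 else vj for vj in v]
        # Transformation 20: for light items (vmax <= eps2), round each nonzero ratio
        # v_j / vmax up to a power of 1 / (1 + eps/8).
        if vmax <= eps2:
            base = 1 + eps / 8
            def round_ratio(vj):
                if vj <= 0:
                    return 0.0
                k = max(math.floor(math.log(vmax / vj, base)), 0)  # largest k with base**k <= vmax/vj
                return vmax * base ** (-k)
            v = [round_ratio(vj) for vj in v]
        return v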

F.5 Rounding the Other Side

So far, for each item, we have seen how to round its weights and one of its geometric dimensions. In this subsection, we will see how to use linear grouping to round the other geometric dimension. We will show that after this operation, the items will belong to a constant number of homogeneous classes (see Condition C1.3 in Section 4.2).

F.5.1 Rounding Geometric Dimensions of Non-Dense Items

Transformation 22 (Linear grouping).

Suppose we are given a packing of items II into mm bins, where II is invariant under Transformation 17 and each bin satisfies Property 11a.

Partition the big items in II by their width and weight class. Partition the tall non-dense items in II by their weight class. The number of partitions is constant by Property 11a and Lemma 37. Let δlgεε1/(d+1)\delta_{\mathrm{lg}}\coloneqq\varepsilon\varepsilon_{1}/(d+1) (so δlg1\delta_{\mathrm{lg}}^{-1}\in\mathbb{Z}).

For each partition SS of big items, do the following:

  1.

    Order the items in SS in non-increasing order of height.

  2.

    Let kδlg|S|+1k\coloneqq\left\lfloor\delta_{\mathrm{lg}}|S|\right\rfloor+1. Let S1S_{1} be the first kk items, S2S_{2} be the next kk items, and so on, till STS_{T}, where T=|S|/k1/δlgT=\left\lceil|S|/k\right\rceil\leq 1/\delta_{\mathrm{lg}}. For t[T]t\in[T], StS_{t} is called the ttht^{\textrm{th}} linear group of SS. The first item in StS_{t} is called the leader of StS_{t}, denoted as leader(St)\operatorname{leader}(S_{t}).

  3.

    Increase the height of each item in S1S_{1} to h(leader(S1))h(\operatorname{leader}(S_{1})). Unpack the items S1{leader(S1)}S_{1}-\{\operatorname{leader}(S_{1})\}.

  4.

    For each t[T]{1}t\in[T]-\{1\} and each j[|St|]{1}j\in[|S_{t}|]-\{1\}, let ii be the jthj^{\textrm{th}} item in StS_{t} and let ii^{\prime} be the jthj^{\textrm{th}} item in St1S_{t-1}. Note that h(i)h(leader(St))h(i)h(i^{\prime})\geq h(\operatorname{leader}(S_{t}))\geq h(i). Increase h(i)h(i) to h(leader(St))h(\operatorname{leader}(S_{t})) and pack ii where ii^{\prime} was packed. Since ii has the same width and weights as ii^{\prime}, the geometric constraints are not violated and the total weights of the bin do not increase. The number of distinct heights in SS now becomes at most T1/δlgT\leq 1/\delta_{\mathrm{lg}}.

For each partition SS of tall items, do the following:

  1.

    Order the items in SS in non-increasing order of height and arrange them side-by-side on the real line, starting from the origin.

  2.

    Let the total width of SS be WW. Let StS_{t} be the items in the interval [(t1)δlgW,tδlgW][(t-1)\delta_{\mathrm{lg}}W,t\delta_{\mathrm{lg}}W]. Slice the items if they lie on the boundaries of the interval. StS_{t} is called the ttht^{\textrm{th}} linear group of SS. The first item in StS_{t} is called the leader of StS_{t}, denoted as leader(St)\operatorname{leader}(S_{t}).

  3.

    Increase the height of each item in S1S_{1} to h(leader(S1))h(\operatorname{leader}(S_{1})). Then unpack S1S_{1}.

  4.

    For each t[1/δlg]{1}t\in[1/\delta_{\mathrm{lg}}]-\{1\}, move the items in StS_{t} to the space occupied by St1S_{t-1} (items in StS_{t} may need to be sliced for this) and increase the height of each item iSti\in S_{t} to h(leader(St))h(\operatorname{leader}(S_{t})). This doesn’t violate geometric constraints since StS_{t} and St1S_{t-1} have the same total width and this doesn’t increase the total weights of the bin because all items in SS have the same weight class. The number of distinct heights in SS now becomes at most 1/δlg1/\delta_{\mathrm{lg}}.

We can extend the definition of this transformation to bins satisfying Property 12a by swapping vertical and horizontal directions. (Partition big items by height and weight class and partition wide items by weight class. Then round up the width of items in each partition using the above techniques.)
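A minimal Python sketch of the linear grouping of one partition S of big items follows (the tall-item case is analogous but slices items by width instead of counting them); items are represented as dicts with a mutable height field, and the names are ours.

    def linear_group_big(S, delta_lg):
        """Sketch of Transformation 22 on a partition S of big items of equal width
        and weight class: round heights up within groups of size floor(delta_lg*|S|)+1
        and return the items of the first group that must be unpacked."""
        S = sorted(S, key=lambda it: it['h'], reverse=True)   # non-increasing height
        k = int(delta_lg * len(S)) + 1                        # group size
        groups = [S[t:t + k] for t in range(0, len(S), k)]    # S_1, S_2, ..., S_T
        unpacked = groups[0][1:]                              # S_1 minus its leader
        for grp in groups:
            leader_height = grp[0]['h']                       # tallest item of the group
            for it in grp:
                it['h'] = leader_height                       # round heights up
        # Items of S_t (t >= 2) can now reuse the positions of the corresponding items
        # of S_{t-1}; the unpacked items are repacked separately (Lemma 44).
        return groups, unpacked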

Lemma 43.

Let JJ be a set of big and tall items and let μ[0,1)\mu\in[0,1) be a constant. Define span(i)max(w(i),min(vmax(i)1μ,1))\operatorname{span}^{\prime}(i)\coloneqq\max\left(w(i),\min\left(\frac{v_{\max}(i)}{1-\mu},1\right)\right). Then iJspan(i)1\sum_{i\in J}\operatorname{span}^{\prime}(i)\leq 1 implies JJ can be packed into a μ\mu-slacked bin where all items touch the bottom of the bin.

Proof.

iJw(i)iJspan(i)1\sum_{i\in J}w(i)\leq\sum_{i\in J}\operatorname{span}^{\prime}(i)\leq 1.

iJ,span(i)>0\forall i\in J,\operatorname{span}^{\prime}(i)>0. So if span(i)=1\operatorname{span}^{\prime}(i)=1 for some iJi\in J, then |J|=1|J|=1. So JJ can be packed into a bin, and the bin is μ\mu-slacked since |J|=1|J|=1.

Now let span(i)<1\operatorname{span}^{\prime}(i)<1 for all iJi\in J. So vmax(i)<1μv_{\max}(i)<1-\mu and j[d]\forall j\in[d],

vj(J)=iJvj(i)(1μ)iJvmax(i)1μ(1μ)iJspan(i)1μ.v_{j}(J)=\sum_{i\in J}v_{j}(i)\leq(1-\mu)\sum_{i\in J}\frac{v_{\max}(i)}{1-\mu}\leq(1-\mu)\sum_{i\in J}\operatorname{span}^{\prime}(i)\leq 1-\mu.

Therefore, JJ can be packed into a μ\mu-slacked bin. ∎

Lemma 44.

Suppose we are given a packing of items II into mm bins, where II is invariant under Transformation 17 and each bin satisfies Property 11a. Let UU be the items unpacked by linear grouping (Transformation 22). Then UU can be repacked into at most 2ε1εm+1\frac{2\varepsilon}{1-\varepsilon}m+1 ε\varepsilon-slacked bins that satisfy Property 11.

Proof.

Define span(i)max(w(i),min(vmax(i)1ε,1))\operatorname{span}^{\prime}(i)\coloneqq\max\left(w(i),\min\left(\frac{v_{\max}(i)}{1-\varepsilon},1\right)\right). Let KK be the set of big and tall non-dense items in II. For any JKJ\subseteq K, define span(J)iJspan(i)\operatorname{span}^{\prime}(J)\coloneqq\sum_{i\in J}\operatorname{span}^{\prime}(i).

Interpret each item iUi\in U as a 1D item of size span(i)\operatorname{span}^{\prime}(i). Order the items such that big items in UU appear before tall non-dense items in UU. Pack them on the bottom of new bins using the Next-Fit algorithm. By Lemma 43, they will require at most 2span(U)+12\operatorname{span}^{\prime}(U)+1 ε\varepsilon-slacked bins. These bins satisfy Property 11a since the width of all big items in UU is a multiple of ε12/4\varepsilon_{1}^{2}/4, and they satisfy Property 11b since UU only contains non-dense items.

In Transformation 22, we partitioned all big items in II by width and weight class. Let SIS\subseteq I be one such partition. Given the way we created the groups S1,S2,S_{1},S_{2},\ldots, we get |SU|δlg|S||S\cap U|\leq\left\lfloor\delta_{\mathrm{lg}}|S|\right\rfloor. Since all items in SS have the same width and weights, span(i)\operatorname{span}^{\prime}(i) is the same for each iSi\in S. Therefore,

span(SU)=span(i)|SU|span(i)δlg|S|δlgspan(S).\operatorname{span}^{\prime}(S\cap U)=\operatorname{span}^{\prime}(i)|S\cap U|\leq\operatorname{span}^{\prime}(i)\left\lfloor\delta_{\mathrm{lg}}|S|\right\rfloor\leq\delta_{\mathrm{lg}}\operatorname{span}^{\prime}(S).

In Transformation 22, we partitioned all tall non-dense items in II by weight class. Let SIS\subseteq I be one such partition. Given the way we created the groups S1,S2,S_{1},S_{2},\ldots, we get w(SU)=δlgw(S)w(S\cap U)=\delta_{\mathrm{lg}}w(S). All items in SS have the same weights-to-width ratio, which is at most 1/ε121/\varepsilon_{1}^{2} by Observation 36(d). Since ε2ε12(1ε)\varepsilon_{2}\leq\varepsilon_{1}^{2}(1-\varepsilon), we get vj(i)1εv_{j}(i)\leq 1-\varepsilon for all iSi\in S, so span(i)/w(i)\operatorname{span}^{\prime}(i)/w(i) is the same for each iSi\in S. Let that common ratio be α\alpha. Then,

span(SU)=αw(SU)αδlgw(S)=δlgspan(S).\operatorname{span}^{\prime}(S\cap U)=\alpha w(S\cap U)\leq\alpha\delta_{\mathrm{lg}}w(S)=\delta_{\mathrm{lg}}\operatorname{span}^{\prime}(S).

Summing over all partitions SS, we get

span(U)=Sspan(US)Sδlgspan(S)δlgspan(K).\operatorname{span}^{\prime}(U)=\sum_{S}\operatorname{span}^{\prime}(U\cap S)\leq\sum_{S}\delta_{\mathrm{lg}}\operatorname{span}^{\prime}(S)\leq\delta_{\mathrm{lg}}\operatorname{span}^{\prime}(K). (1)

For iKi\in K, we get

span(i)span(i)max(w(i),vmax(i)1ε)max(w(i)h(i),vmax(i))11εmax(w(i),vmax(i))max(w(i)ε1,vmax(i))1(1ε)ε1.\displaystyle\frac{\operatorname{span}^{\prime}(i)}{\operatorname{span}(i)}\leq\frac{\max\left(w(i),\frac{v_{\max}(i)}{1-\varepsilon}\right)}{\max\left(w(i)h(i),v_{\max}(i)\right)}\leq\frac{\frac{1}{1-\varepsilon}\max\left(w(i),v_{\max}(i)\right)}{\max\left(w(i)\varepsilon_{1},v_{\max}(i)\right)}\leq\frac{1}{(1-\varepsilon)\varepsilon_{1}}.

The last inequality follows because for big and tall items, h(i)ε1h(i)\geq\varepsilon_{1}.

The number of bins used to pack UU is

2span(U)+1\displaystyle 2\operatorname{span}^{\prime}(U)+1 2δlgspan(K)+12δlgε1(1ε)span(K)+1\displaystyle\leq 2\delta_{\mathrm{lg}}\operatorname{span}^{\prime}(K)+1\leq\frac{2\delta_{\mathrm{lg}}}{\varepsilon_{1}(1-\varepsilon)}\operatorname{span}(K)+1
2(d+1)δlgε1(1ε)m+1=2ε1εm+1.\displaystyle\leq\frac{2(d+1)\delta_{\mathrm{lg}}}{\varepsilon_{1}(1-\varepsilon)}m+1=\frac{2\varepsilon}{1-\varepsilon}m+1.

The first inequality follows from (1) and the third inequality follows from Claim 18. ∎
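The repacking of U in this proof is Next-Fit applied to the surrogate sizes span′(i); a short Python sketch, with our own item representation and assuming the big items of U are listed before the tall items:

    def next_fit_by_span(items, eps):
        """Pack items as 1-D pieces of size span'(i) = max(w(i), min(vmax(i)/(1-eps), 1))
        using Next-Fit; by Lemma 43 each resulting bin is eps-slacked and its items
        fit side by side on the bin's bottom."""
        bins, current, load = [], [], 0.0
        for it in items:
            s = max(it['w'], min(it['vmax'] / (1 - eps), 1.0))
            if current and load + s > 1:
                bins.append(current)
                current, load = [], 0.0
            current.append(it)
            load += s
        if current:
            bins.append(current)
        return bins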

Lemma 45.

Suppose we are given a packing of items II into mm bins, where II is weight-rounded, each bin is μ\mu-slacked for some με\mu\leq\varepsilon, and each bin satisfies either Property 11 or Property 12. Then after applying linear grouping (Transformation 22) to this packing of II, we get a packing of items I^\widehat{I} into mm^{\prime} bins, where all of the following hold:

  • I^\widehat{I} is a rounding-up of II and contains a constant number of homogeneous classes (see Condition C1.3 in Section 4.2).

  • Each bin in the packing of I^\widehat{I} is μ\mu-slacked and satisfies either Property 11 or Property 12.

  • m(1+2ε1ε)m+2{\displaystyle m^{\prime}\leq\left(1+\frac{2\varepsilon}{1-\varepsilon}\right)m+2}.

Proof sketch.

Follows from the definition of linear grouping and Lemma 44. Note that we apply linear grouping separately to bins satisfying Property 11 and bins satisfying Property 12. ∎

F.5.2 Coarse and Fine Partitioning

Our approach so far has been to start from an optimal packing of items and show how to modify it to obtain an approximately-optimal structured packing of a rounded instance. However, the rounding algorithm must round items without knowing the optimal packing. To design such an algorithm, we first need to introduce additional concepts: coarse partitioning and fine partitioning.

At a high level, our rounding algorithm first partitions the items by weight classes to get a coarse partitioning. It then further partitions the coarse partitions to get a fine partitioning. It then rounds up the geometric dimensions of items in each fine partition to make that partition homogeneous.

We will first formally define coarse and fine partitioning. We will then restate Theorem 33 and Lemma 45 using the language of fine partitioning. Then in Section F.6, we will see an algorithm for computing a fine partitioning of II.

  • Let B(I)B(I) be the set of big items in II.

  • Let W(I)W(I) be the set of wide non-dense items in II.

  • Let H(I)H(I) be the set of tall non-dense items in II.

  • Let S(I)S(I) be the set of small non-dense items in II.

  • Let Dl,1(I)D^{l,1}(I) be the set of light dense items in II that are either tall or small.

  • Let Dl,2(I)D^{l,2}(I) be the set of light dense wide items in II.

  • Let Dh,1(I)D^{h,1}(I) be the set of heavy dense items in II that are either tall or small.

  • Let Dh,2(I)D^{h,2}(I) be the set of heavy dense wide items in II.

When the set of items II is clear from context, we will use BB, WW, HH, SS, Dl,1D^{l,1}, Dl,2D^{l,2}, Dh,1D^{h,1}, Dh,2D^{h,2} to refer to these sets.

Definition 23 (Coarse partitioning).

Let II be a weight-rounded instance. Partition items II by their weight classes. Then for each partition containing dense items, split that partition into 2 partitions: one containing only tall and small items and the other containing only wide items. The resulting partitioning is called a coarse partitioning of II.

We number the coarse partitions in BB arbitrarily from 11 onwards. There will be at most nbwcn_{\mathrm{bwc}} such partitions by Lemma 37. Denote the pthp^{\textrm{th}} coarse partition by BpB_{p}.

Similarly, denote the pthp^{\textrm{th}} coarse partition

  • in WW by WpW_{p}, where p[nwwc]p\in[n_{\mathrm{wwc}}].

  • in HH by HpH_{p}, where p[nwwc]p\in[n_{\mathrm{wwc}}].

  • in SS by SpS_{p}, where p[nswc]p\in[n_{\mathrm{swc}}].

  • in Dl,1D^{l,1} by Dl,1pD^{l,1}_{p}, where p[nlwc]p\in[n_{\mathrm{lwc}}].

  • in Dl,2D^{l,2} by Dl,2pD^{l,2}_{p}, where p[nlwc]p\in[n_{\mathrm{lwc}}].

  • in Dh,1D^{h,1} by Dh,1pD^{h,1}_{p}, where p[nhwc]p\in[n_{\mathrm{hwc}}].

  • in Dh,2D^{h,2} by Dh,2pD^{h,2}_{p}, where p[nhwc]p\in[n_{\mathrm{hwc}}].

Observation 46.

There is a unique coarse partitioning of II. Furthermore, the unique coarse partitioning can be found in O(|I|)O(|I|) time.

In Theorem 33 and Lemma 45, the widths of wide and big items are rounded, but the two roundings are different: in Theorem 33, we round the widths of some items to multiples of ε12/4\varepsilon_{1}^{2}/4 so that the bin satisfies Property 11a, and in Lemma 45, we round the widths of items in bins satisfying Property 12a using linear grouping. To get a rounding algorithm, we have to guess whether the bin of a wide or big item will satisfy Property 11 or Property 12. We will capture these guesses in the fine partitioning.

Definition 24 (Fine partitioning).

Let Q[4ε1+1,4ε12]Q\coloneqq\mathbb{Z}\cap\left[\frac{4}{\varepsilon_{1}}+1,\frac{4}{\varepsilon_{1}^{2}}\right], R{1,2,,1δlg}R\coloneqq\left\{1,2,\ldots,\frac{1}{\delta_{\mathrm{lg}}}\right\}, Qq{x:(q1)ε124<xqε124}Q_{q}\coloneqq\left\{x\in\mathbb{R}:(q-1)\frac{\varepsilon_{1}^{2}}{4}<x\leq q\frac{\varepsilon_{1}^{2}}{4}\right\}.

Given a coarse partitioning of a set II of items, let (Bpw,Bph)(B_{p}^{w},B_{p}^{h}) be a partitioning of BpB_{p}, (Wpw,Wph)(W_{p}^{w},W_{p}^{h}) be a partitioning of WpW_{p} and (Hpw,Hph)(H_{p}^{w},H_{p}^{h}) be a partitioning of HpH_{p}.

  • BpwB_{p}^{w} is partitioned into sets {Bp,q,rw:qQ,rR}\{B_{p,q,r}^{w}:q\in Q,r\in R\} where iBwp,q,rw(i)Qqi\in B^{w}_{p,q,r}\implies w(i)\in Q_{q}.

  • BphB_{p}^{h} is partitioned into sets {Bp,q,rh:qQ,rR}\{B_{p,q,r}^{h}:q\in Q,r\in R\} where iBhp,q,rh(i)Qqi\in B^{h}_{p,q,r}\implies h(i)\in Q_{q}.

  • WpwW_{p}^{w} is partitioned into sets {Wp,qw:qQ}\{W_{p,q}^{w}:q\in Q\} where iWwp,qw(i)qε12/4i\in W^{w}_{p,q}\implies w(i)\leq q\varepsilon_{1}^{2}/4.

  • WphW_{p}^{h} is partitioned into sets {Wp,rh:rR}\{W_{p,r}^{h}:r\in R\}.

  • HpwH_{p}^{w} is partitioned into sets {Hp,rw:rR}\{H_{p,r}^{w}:r\in R\}.

  • HphH_{p}^{h} is partitioned into sets {Hp,qh:qQ}\{H_{p,q}^{h}:q\in Q\} where iHhp,qh(i)qε12/4i\in H^{h}_{p,q}\implies h(i)\leq q\varepsilon_{1}^{2}/4.

A fine partitioning of II is any partitioning of II into sets of the form Bwp,q,rB^{w}_{p,q,r}, Bhp,q,rB^{h}_{p,q,r}, Wwp,qW^{w}_{p,q}, Whp,rW^{h}_{p,r}, Hwp,rH^{w}_{p,r}, Hhp,qH^{h}_{p,q}, SpS_{p}, Dl,1pD^{l,1}_{p}, Dl,2pD^{l,2}_{p}, Dh,1pD^{h,1}_{p}, Dh,2pD^{h,2}_{p}.

Note that for a given set II of items, there can be multiple fine partitionings.

Given a fine partitioning, we use the ‘*’ character in superscript or subscript to denote the union of some partitions. For example, Bwp,,rqBwp,q,rB^{w}_{p,*,r}\coloneqq\bigcup_{q}B^{w}_{p,q,r} and Ww,p,qWwp,qW^{w}_{*,*}\coloneqq\bigcup_{p,q}W^{w}_{p,q}, and D,1pDl,1pDh,1pD^{*,1}_{p}\coloneqq D^{l,1}_{p}\cup D^{h,1}_{p}.

When item rotations are allowed, the fine partitioning includes information on which items to rotate, and we can assume w.l.o.g. that H(I)=D,2=Bh,,=Wh,=Hh,={}H(I)=D^{*,2}=B^{h}_{*,*,*}=W^{h}_{*,*}=H^{h}_{*,*}=\{\}.

Transformation 25.

Given a fine partitioning of II, execute the following operations:

  • iBw,q,Ww,q\forall i\in B^{w}_{*,q,*}\cup W^{w}_{*,q}, increase w(i)w(i) to qε12/4q\varepsilon_{1}^{2}/4.

  • iBh,q,Hh,q\forall i\in B^{h}_{*,q,*}\cup H^{h}_{*,q}, increase h(i)h(i) to qε12/4q\varepsilon_{1}^{2}/4.

  • iBwp,q,r\forall i\in B^{w}_{p,q,r}, increase h(i)h(i) to maxiBwp,q,rh(i)\max_{i\in B^{w}_{p,q,r}}h(i).

  • iBhp,q,r\forall i\in B^{h}_{p,q,r}, increase w(i)w(i) to maxiBhp,q,rw(i)\max_{i\in B^{h}_{p,q,r}}w(i).

  • iWhp,r\forall i\in W^{h}_{p,r}, increase w(i)w(i) to maxiWhp,rw(i)\max_{i\in W^{h}_{p,r}}w(i).

  • iHwp,r\forall i\in H^{w}_{p,r}, increase h(i)h(i) to maxiHwp,rh(i)\max_{i\in H^{w}_{p,r}}h(i).

The number of fine partitions is constant and after applying Transformation 25, each partition is homogeneous.

Definition 26 (Semi-structured packing of fine partitioning).

Suppose we are given a fine partitioning of items II. A packing of items JIJ\subseteq I into a bin is said to be ‘division-1 semi-structured’ with respect to the fine partitioning iff JJ doesn’t contain items from Bh,,B^{h}_{*,*,*}, Wh,W^{h}_{*,*}, Hh,H^{h}_{*,*} and D,2D^{*,2} and JJ satisfies Property 11.

A packing of items JIJ\subseteq I into a bin is said to be ‘division-2 semi-structured’ with respect to the fine partitioning iff JJ doesn’t contain items from Bw,,B^{w}_{*,*,*}, Ww,W^{w}_{*,*}, Hw,H^{w}_{*,*} and D,1D^{*,1} and JJ satisfies Property 12.

Packing of items into bins is called semi-structured iff each bin is either division-1 semi-structured or division-2 semi-structured.

Definition 27 (Balanced fine partitioning).

A fine partitioning is said to be balanced iff it satisfies all of the following conditions:

  • p,r,h(Whp,r)=δlgh(Whp,)\forall p,\forall r,h(W^{h}_{p,r})=\delta_{\mathrm{lg}}h(W^{h}_{p,*})

  • p,r,w(Hwp,r)=δlgw(Hwp,)\forall p,\forall r,w(H^{w}_{p,r})=\delta_{\mathrm{lg}}w(H^{w}_{p,*})

  • p,q\forall p,\forall q, the sets {Bwp,q,r:r}\{B^{w}_{p,q,r}:\forall r\} can be obtained from Bwp,q,B^{w}_{p,q,*} by ordering the items in Bwp,q,B^{w}_{p,q,*} in non-increasing order of height (breaking ties arbitrarily) and putting the first kk items in Bwp,q,1B^{w}_{p,q,1}, the next kk items in Bwp,q,2B^{w}_{p,q,2}, and so on, where kδlg|Bwp,q,|+1k\coloneqq\left\lfloor\delta_{\mathrm{lg}}|B^{w}_{p,q,*}|\right\rfloor+1.

  • p,q\forall p,\forall q, the sets {Bhp,q,r:r}\{B^{h}_{p,q,r}:\forall r\} can be obtained from Bhp,q,B^{h}_{p,q,*} by ordering the items in Bhp,q,B^{h}_{p,q,*} in non-increasing order of width (breaking ties arbitrarily) and putting the first kk items in Bhp,q,1B^{h}_{p,q,1}, the next kk items in Bhp,q,2B^{h}_{p,q,2}, and so on, where kδlg|Bhp,q,|+1k\coloneqq\left\lfloor\delta_{\mathrm{lg}}|B^{h}_{p,q,*}|\right\rfloor+1.

We now restate Theorem 33 and Lemma 45 in terms of fine partitioning.

Lemma 47.

Let II be a set of items and I^\widehat{I} be the items obtained by weight-rounding II. Then there exists a balanced fine partitioning of a slicing of I^\widehat{I} such that after applying Transformation 25 to I^\widehat{I}, there is a semi-structured (5ε/8)(5\varepsilon/8)-slacked fractional packing of I^\widehat{I} into (1+2ε1ε)(aopt(I)+b)+2\left(1+\frac{2\varepsilon}{1-\varepsilon}\right)(a\operatorname*{opt}(I)+b)+2 bins. Here aa and bb are as defined in Table 2 in Theorem 33.

Proof.

By Theorem 33, we can round up the width of some big and wide non-dense items in II to the nearest multiple of ε12/4\varepsilon_{1}^{2}/4 and round up the height of some big and tall non-dense items in II to the nearest multiple of ε12/4\varepsilon_{1}^{2}/4 and then pack II into aopt(I)+ba\operatorname*{opt}(I)+b ε\varepsilon-slacked bins such that each bin satisfies either Property 11 or Property 12. Let \mathcal{B} be such a bin packing of II. By Lemmas 35, 39 and 41, \mathcal{B} gives us a (5ε/8)(5\varepsilon/8)-slacked packing of I^\widehat{I}.

Call the bins in \mathcal{B} that satisfy Property 11 division-1 bins. Call the rest of the bins division-2 bins. The items whose width needs to be rounded up to a multiple of ε12/4\varepsilon_{1}^{2}/4 are the big and wide items in division-1 bins and the items whose height needs to be rounded up to a multiple of ε12/4\varepsilon_{1}^{2}/4 are the big and tall items in division-2 bins. No other items need to have their width or height rounded up in the packing \mathcal{B} produced by Theorem 33.

Let I^w\widehat{I}^{w} and I^h\widehat{I}^{h} be the items of I^\widehat{I} in division-1 bins and division-2 bins respectively.

We can compute the coarse partitioning of I^\widehat{I}. Define

Bpw\displaystyle B_{p}^{w} BpI^w\displaystyle\coloneqq B_{p}\cap\widehat{I}^{w} Wpw\displaystyle W_{p}^{w} WpI^w\displaystyle\coloneqq W_{p}\cap\widehat{I}^{w} Hpw\displaystyle H_{p}^{w} HpI^w\displaystyle\coloneqq H_{p}\cap\widehat{I}^{w}
Bph\displaystyle B_{p}^{h} BpI^h\displaystyle\coloneqq B_{p}\cap\widehat{I}^{h} Wph\displaystyle W_{p}^{h} WpI^h\displaystyle\coloneqq W_{p}\cap\widehat{I}^{h} Hph\displaystyle H_{p}^{h} HpI^h\displaystyle\coloneqq H_{p}\cap\widehat{I}^{h}

Define

  • Bwp,q{iBwp:(q1)ε12/4<w(i)qε12/4}B^{w}_{p,q}\coloneqq\{i\in B^{w}_{p}:(q-1)\varepsilon_{1}^{2}/4<w(i)\leq q\varepsilon_{1}^{2}/4\}.

  • Bhp,q{iBhp:(q1)ε12/4<h(i)qε12/4}B^{h}_{p,q}\coloneqq\{i\in B^{h}_{p}:(q-1)\varepsilon_{1}^{2}/4<h(i)\leq q\varepsilon_{1}^{2}/4\}.

  • Wwp,q{iWwp:(q1)ε12/4<w(i)qε12/4}W^{w}_{p,q}\coloneqq\{i\in W^{w}_{p}:(q-1)\varepsilon_{1}^{2}/4<w(i)\leq q\varepsilon_{1}^{2}/4\}.

  • Hhp,q{iHhp:(q1)ε12/4<h(i)qε12/4}H^{h}_{p,q}\coloneqq\{i\in H^{h}_{p}:(q-1)\varepsilon_{1}^{2}/4<h(i)\leq q\varepsilon_{1}^{2}/4\}.

Define

  • Bwp,q,rB^{w}_{p,q,r} as the rthr^{\textrm{th}} linear group of Bwp,qB^{w}_{p,q} (see Transformation 22).

  • Bhp,q,rB^{h}_{p,q,r} as the rthr^{\textrm{th}} linear group of Bhp,qB^{h}_{p,q}.

  • Whp,rW^{h}_{p,r} as the rthr^{\textrm{th}} linear group of WhpW^{h}_{p}.

  • Hwp,rH^{w}_{p,r} as the rthr^{\textrm{th}} linear group of HwpH^{w}_{p}.

This is how we get a fine partitioning of a slicing of I^\widehat{I}.

By Lemma 45, on applying Transformation 25 to I^\widehat{I}, the resulting instance can be sliced and packed into at most (1+2ε1ε)(aopt(I)+b)+2\left(1+\frac{2\varepsilon}{1-\varepsilon}\right)(a\operatorname*{opt}(I)+b)+2 (5ε/8)(5\varepsilon/8)-slacked bins. ∎

F.6 Rounding Algorithm

Let II be a set of weight-rounded items. To use Lemma 47 to get an approximately-optimal packing of items II, we would like to iterate over all balanced fine partitionings of slicings of II. However, even if we don’t consider slicings, doing that will take exponential time, since for each big, wide and tall item, we need to decide whether to designate it as a division-1 item or a division-2 item.

We can get around this problem by iterating over a polynomial-sized set 𝒮Π\mathcal{S}_{\Pi} of fine partitionings such that each balanced fine partitioning 𝒫\mathcal{P} of a slicing of II is ‘close to’ a fine partitioning 𝒫^\widehat{\mathcal{P}} in 𝒮Π\mathcal{S}_{\Pi}. We will now define what we mean by ‘close to’.

Definition 28 (Predecessor of a set of items).

Let I1I_{1} and I2I_{2} be sets of items. Interpret each item iI1i\in I_{1} as a bin whose geometric dimensions are the same as that of ii and whose weight capacities are the same as the weights of ii. I2I_{2} is said to be a predecessor of I1I_{1} (denoted as I2I1I_{2}\preceq I_{1}) iff I2I_{2} can be sliced and packed into I1I_{1}.

We will design a polynomial-time algorithm 𝚒𝚝𝚎𝚛𝚏𝚒𝚗𝚎𝚙𝚊𝚛𝚝𝚜\operatorname{\mathtt{iter-fine-parts}} that takes as input a weight-rounded set II of items and outputs a set 𝒮Π\mathcal{S}_{\Pi} of pairs such that for each balanced fine partitioning 𝒫\mathcal{P} of a slicing of II, there exists a pair (D,𝒫^)𝒮Π(D,\widehat{\mathcal{P}})\in\mathcal{S}_{\Pi} such that all of these conditions hold:

  • 𝒫^\widehat{\mathcal{P}} is a fine partitioning of IDI-D.

  • After applying Transformation 25, each partition in 𝒫^\widehat{\mathcal{P}} is a predecessor of the corresponding partition in 𝒫\mathcal{P}.

  • DD is a set of non-dense items (called discarded items) such that span(D)\operatorname{span}(D) is small compared to span(I)\operatorname{span}(I).

F.6.1 Big Items

Let 𝚙𝚊𝚛𝚝𝚋𝚒𝚐\operatorname{\mathtt{part-big}} be an algorithm that takes a coarse partition BpB_{p} of big items as input, and outputs multiple fine partitionings of BpB_{p}. We can use 𝚙𝚊𝚛𝚝𝚋𝚒𝚐\operatorname{\mathtt{part-big}} as a subroutine in 𝚒𝚝𝚎𝚛𝚏𝚒𝚗𝚎𝚙𝚊𝚛𝚝𝚜\operatorname{\mathtt{iter-fine-parts}}.

To design 𝚙𝚊𝚛𝚝𝚋𝚒𝚐\operatorname{\mathtt{part-big}}, we will guess the cardinality of sets Bwp,q,rB^{w}_{p,q,r} and Bhp,q,rB^{h}_{p,q,r}. We will then guess the maximum height in Bwp,q,rB^{w}_{p,q,r} and the maximum width in Bhp,q,rB^{h}_{p,q,r}. Then for each guess, we will solve a max-flow problem to check if items in BpB_{p} can be assigned to these sets. We skip the details here since this approach is similar to that of Prädel (see Section 3.3.1 in [39]).

Formally 𝚙𝚊𝚛𝚝𝚋𝚒𝚐(Bp)\operatorname{\mathtt{part-big}}(B_{p}) outputs a set of pairs of the form ({},𝒫^)(\{\},\widehat{\mathcal{P}}), where 𝒫^\widehat{\mathcal{P}} is supposed to be a fine partitioning of BpB_{p}.

Claim 48.

𝚙𝚊𝚛𝚝𝚋𝚒𝚐(Bp)\operatorname{\mathtt{part-big}}(B_{p}) generates O(n2|Q|(1/δlg+1)1)=O(n8(d+1)/εε13)O\left(n^{2|Q|(1/\delta_{\mathrm{lg}}+1)-1}\right)=O\left(n^{8(d+1)/\varepsilon\varepsilon_{1}^{3}}\right) values, where n|Bp|n\coloneqq|B_{p}|, and the running time per value is O(n2/εε1)O(n^{2}/\varepsilon\varepsilon_{1}).

Claim 49.

Let 𝒫{Bwp,q,r:q,r}{Bhp,q,r:q,r}\mathcal{P}\coloneqq\{B^{w}_{p,q,r}:\forall q,\forall r\}\cup\{B^{h}_{p,q,r}:\forall q,\forall r\} be a balanced fine partitioning of BpB_{p}. Then there is an output ({},𝒫^)(\{\},\widehat{\mathcal{P}}) of 𝚙𝚊𝚛𝚝𝚋𝚒𝚐(Bp)\operatorname{\mathtt{part-big}}(B_{p}) where 𝒫^{B^wp,q,r:q,r}{B^hp,q,r:q,r}\widehat{\mathcal{P}}\coloneqq\{\widehat{B}^{w}_{p,q,r}:\forall q,\forall r\}\cup\{\widehat{B}^{h}_{p,q,r}:\forall q,\forall r\} such that 𝒫^\widehat{\mathcal{P}} is a fine partitioning of BpB_{p} and after applying Transformation 25,

q,r,B^wp,q,rBwp,q,r and B^hp,q,rBhp,q,r.\forall q,\forall r,\widehat{B}^{w}_{p,q,r}\preceq B^{w}_{p,q,r}\textrm{ and }\widehat{B}^{h}_{p,q,r}\preceq B^{h}_{p,q,r}.

𝚙𝚊𝚛𝚝𝚋𝚒𝚐\operatorname{\mathtt{part-big}} for the rotational case is similar to the non-rotational case: When rotations are allowed, assume w.l.o.g. that Bh,,={}B^{h}_{*,*,*}=\{\}. We will guess the cardinality and maximum height in sets Bwp,q,rB^{w}_{p,q,r}. Then for each guess, we will solve a max-flow problem to check if items in BpB_{p} can be assigned to these sets, possibly after rotating some items.

F.6.2 Wide and Tall Items

Let 𝚙𝚊𝚛𝚝𝚠𝚒𝚍𝚎\operatorname{\mathtt{part-wide}} be an algorithm that takes a coarse partition WpW_{p} of wide items as input, and outputs multiple fine partitionings of WpW_{p}. We can use 𝚙𝚊𝚛𝚝𝚠𝚒𝚍𝚎\operatorname{\mathtt{part-wide}} as a subroutine in 𝚒𝚝𝚎𝚛𝚏𝚒𝚗𝚎𝚙𝚊𝚛𝚝𝚜\operatorname{\mathtt{iter-fine-parts}}.

Let 𝒫{Wp,qw:q}{Wp,rh:r}\mathcal{P}\coloneqq\{W_{p,q}^{w}:\forall q\}\cup\{W_{p,r}^{h}:\forall r\} be a balanced fine partitioning of a slicing of WpW_{p}. We want 𝚙𝚊𝚛𝚝𝚠𝚒𝚍𝚎\operatorname{\mathtt{part-wide}} to find a fine partitioning 𝒫^{W^p,qw:q}{W^p,rh:r}\widehat{\mathcal{P}}\coloneqq\{\widehat{W}_{p,q}^{w}:\forall q\}\cup\{\widehat{W}_{p,r}^{h}:\forall r\} of a large subset of WpW_{p} such that after applying Transformation 25 to 𝒫\mathcal{P} and 𝒫^\widehat{\mathcal{P}}, every fine partition in 𝒫^\widehat{\mathcal{P}} is a predecessor of the corresponding fine partition in 𝒫\mathcal{P}.

For any JWpJ\subseteq W_{p}, define h(J)iJh(i)h(J)\coloneqq\sum_{i\in J}h(i) and w(J)maxiJw(i)w(J)\coloneqq\max_{i\in J}w(i). We will create a rectangular box for each fine partition and then try to pack a large subset of WpW_{p} into these boxes. Let Q[4ε1+1,4ε12]Q\coloneqq\mathbb{Z}\cap\left[\frac{4}{\varepsilon_{1}}+1,\frac{4}{\varepsilon_{1}^{2}}\right]. For each qQq\in Q, let swqs^{w}_{q} be a box of width qε12/4q\varepsilon_{1}^{2}/4 and height h(Wwp,q)h(W^{w}_{p,q}). For each r[1/δlg]r\in[1/\delta_{\mathrm{lg}}], let shrs^{h}_{r} be a box of width w(Whp,r)w(W^{h}_{p,r}) and height h(Whp,r)h(W^{h}_{p,r}). Since we don’t know Whp,rW^{h}_{p,r} and Wwp,qW^{w}_{p,q}, we will guess the value of w(Whp,r)w(W^{h}_{p,r}) and we will guess very close lower bounds on h(Wwp,q)h(W^{w}_{p,q}) and h(Whp,r)h(W^{h}_{p,r}). We will then try to pack most of the items from WpW_{p} into these boxes.

Let W^wp,q\widehat{W}^{w}_{p,q} be the items packed into swqs^{w}_{q} and let W^hp,r\widehat{W}^{h}_{p,r} be the items packed into shrs^{h}_{r}. Then 𝒫^{W^p,qw:q}{W^p,rh:r}\widehat{\mathcal{P}}\coloneqq\{\widehat{W}_{p,q}^{w}:\forall q\}\cup\{\widehat{W}_{p,r}^{h}:\forall r\} is a fine partitioning of a large subset of WpW_{p}. After applying Transformation 25 to 𝒫\mathcal{P} and 𝒫^\widehat{\mathcal{P}}, each item in Wwp,qW^{w}_{p,q} and W^wp,q\widehat{W}^{w}_{p,q} has width qε12/4q\varepsilon_{1}^{2}/4. Since h(W^wp,q)h(swq)h(Wwp,q)h(\widehat{W}^{w}_{p,q})\leq h(s^{w}_{q})\leq h(W^{w}_{p,q}), we get W^wp,qWwp,q\widehat{W}^{w}_{p,q}\preceq W^{w}_{p,q} after Transformation 25. We can similarly prove that W^hp,rWhp,r\widehat{W}^{h}_{p,r}\preceq W^{h}_{p,r}. Therefore, 𝒫^\widehat{\mathcal{P}} is a suitable fine partitioning.

The details on how to approximately guess the size of boxes and how to pack a large subset of items into boxes can be deduced from Section 3.3.1 of [39].

Formally, 𝚙𝚊𝚛𝚝𝚠𝚒𝚍𝚎(Wp)\operatorname{\mathtt{part-wide}}(W_{p}) outputs a set of pairs of the form (D,𝒫^)(D,\widehat{\mathcal{P}}), where items in DD are called discarded items and 𝒫^\widehat{\mathcal{P}} is supposed to be a fine partitioning of WpDW_{p}-D.

Claim 50.

For every output (D,𝒫^)(D,\widehat{\mathcal{P}}) of 𝚙𝚊𝚛𝚝𝚠𝚒𝚍𝚎(Wp)\operatorname{\mathtt{part-wide}}(W_{p}),

h(D)(3ε2)(d+1εε1+4ε124ε1).h(D)\leq(3\varepsilon_{2})\left(\frac{d+1}{\varepsilon\varepsilon_{1}}+\frac{4}{\varepsilon_{1}^{2}}-\frac{4}{\varepsilon_{1}}\right).
Claim 51.

Let 𝒫{Wp,qw:q}{Wp,rh:r}\mathcal{P}\coloneqq\{W_{p,q}^{w}:\forall q\}\cup\{W_{p,r}^{h}:\forall r\} be a balanced fine partitioning of a slicing of WpW_{p}. Then for some output (D,𝒫^)(D,\widehat{\mathcal{P}}) of 𝚙𝚊𝚛𝚝𝚠𝚒𝚍𝚎(Wp)\operatorname{\mathtt{part-wide}}(W_{p}), 𝒫^\widehat{\mathcal{P}} is a fine partitioning of WpDW_{p}-D and after applying Transformation 25 to 𝒫\mathcal{P} and 𝒫^\widehat{\mathcal{P}},

(q,W^wp,qWwp,q) and (r,W^hp,rWhp,r).(\forall q,\widehat{W}^{w}_{p,q}\preceq W^{w}_{p,q})\textrm{ and }(\forall r,\widehat{W}^{h}_{p,r}\preceq W^{h}_{p,r}).
Claim 52.

Let there be nn items in WpW_{p}. Let nq4/ε124/ε1n_{q}\coloneqq 4/\varepsilon_{1}^{2}-4/\varepsilon_{1}. Then 𝚙𝚊𝚛𝚝𝚠𝚒𝚍𝚎(Wp)\operatorname{\mathtt{part-wide}}(W_{p}) outputs at most δlgnnq+1+1/δlg\delta_{\mathrm{lg}}n^{n_{q}+1+1/\delta_{\mathrm{lg}}} distinct values. The running time per value is O(nlogn)O(n\log n).

𝚙𝚊𝚛𝚝𝚠𝚒𝚍𝚎\operatorname{\mathtt{part-wide}} can analogously be used for sub-partitioning coarse partitions of tall items.

When item rotations are allowed, 𝚙𝚊𝚛𝚝𝚠𝚒𝚍𝚎(Wp)\operatorname{\mathtt{part-wide}}(W_{p}) gives us Hwp,rH^{w}_{p,r} instead of Whp,rW^{h}_{p,r}.

F.6.3 Rounding Algorithm

Based on 𝚙𝚊𝚛𝚝𝚋𝚒𝚐\operatorname{\mathtt{part-big}} and 𝚙𝚊𝚛𝚝𝚠𝚒𝚍𝚎\operatorname{\mathtt{part-wide}}, we define an algorithm 𝚒𝚝𝚎𝚛𝚏𝚒𝚗𝚎𝚙𝚊𝚛𝚝𝚜\operatorname{\mathtt{iter-fine-parts}} (Algorithm 2) that returns a set of fine partitionings. We then use 𝚒𝚝𝚎𝚛𝚏𝚒𝚗𝚎𝚙𝚊𝚛𝚝𝚜\operatorname{\mathtt{iter-fine-parts}} to design the algorithm 𝚛𝚘𝚞𝚗𝚍\operatorname{\mathtt{round}} (Algorithm 3).

Algorithm 2 \operatorname{\mathtt{iter-fine-parts}}(I): I is a set of weight-rounded items. Returns a set of pairs of the form (D,\widehat{\mathcal{P}}), where D is a subset of items to discard and \widehat{\mathcal{P}} is a fine partitioning of I-D.
1: \texttt{outputs}=\{\}
2: \{B_{p}:\forall p\}\cup\{W_{p}:\forall p\}\cup\{H_{p}:\forall p\}\cup\textrm{(small and dense partitions)}=\operatorname{coarse-partition}(I)
3: \texttt{iters}={\displaystyle\operatorname*{\prod}_{p=1}^{n_{\mathrm{bwc}}}\operatorname{\mathtt{part-big}}(B_{p})\times\operatorname*{\prod}_{p=1}^{n_{\mathrm{wwc}}}\operatorname{\mathtt{part-wide}}(W_{p})\times\operatorname*{\prod}_{p=1}^{n_{\mathrm{wwc}}}\operatorname{\mathtt{part-wide}}(H_{p})}
4: for \mathcal{L}\in\texttt{iters} do // \mathcal{L} is a list of pairs
5:     \widehat{\mathcal{P}}=\textrm{(small and dense partitions)}
6:     D=\{\}
7:     for (D_{j},\widehat{\mathcal{P}}_{j})\in\mathcal{L} do
8:         D=D\cup D_{j}
9:         Include the partitions of \widehat{\mathcal{P}}_{j} into \widehat{\mathcal{P}}.
10:     end for
11:     \operatorname{\mathtt{outputs.add}}((D,\widehat{\mathcal{P}}))
12: end for
13: return outputs
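
The enumeration in Algorithm 2 is simply a Cartesian product of the per-partition outputs. The following minimal Python sketch illustrates this structure; the callables coarse_partition, part_big, and part_wide are hypothetical stand-ins for the subroutines described above, not an actual implementation.

from itertools import product

def iter_fine_parts(items, coarse_partition, part_big, part_wide):
    # Coarse partitioning: big, wide, and tall coarse partitions, plus the
    # partitions of small and dense items (which are already fine).
    big_parts, wide_parts, tall_parts, small_dense_parts = coarse_partition(items)

    # Each factor is a list of (discarded items, fine partitioning) pairs
    # returned by part-big or part-wide for one coarse partition.
    factors = ([part_big(B) for B in big_parts]
               + [part_wide(W) for W in wide_parts]
               + [part_wide(H) for H in tall_parts])

    outputs = []
    for choice in product(*factors):  # one (D_j, P_j) pair per coarse partition
        discarded = []
        fine_partitioning = list(small_dense_parts)
        for D_j, P_j in choice:
            discarded.extend(D_j)
            fine_partitioning.extend(P_j)
        outputs.append((discarded, fine_partitioning))
    return outputs
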
Algorithm 3 \operatorname{\mathtt{round}}(I,\varepsilon): Returns a set of pairs of the form (\widetilde{I},D^{\prime}), where D^{\prime} is a subset of items to discard and \widetilde{I} is a rounding of I-D^{\prime}.
1: \texttt{outputs}=\{\}
2: \delta_{0}\coloneqq{\displaystyle\min\left(\frac{1}{4d+1},\frac{2\varepsilon}{3}\right)}
3: (I_{\mathrm{med}},\varepsilon_{2},\varepsilon_{1})=\operatorname{remove-medium}(I,\varepsilon,f,\delta_{0}) // f will be described later
4: // Assume \varepsilon and \varepsilon_{1} are passed as parameters to all subroutines and transformations.
5: Let \widehat{I} be the weight-rounding (Transformation 21) of I-I_{\mathrm{med}}.
6: for (D,\widehat{\mathcal{P}})\in\operatorname{\mathtt{iter-fine-parts}}(\widehat{I}) do
7:     Let \widetilde{I} be the instance obtained by applying Transformation 25 to \widehat{I}-D based on the fine partitioning \widehat{\mathcal{P}}.
8:     \operatorname{\mathtt{outputs.add}}((\widetilde{I},D\cup I_{\mathrm{med}})).
9: end for
10: return outputs
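
The following minimal Python sketch mirrors the control flow of Algorithm 3. All helper callables (remove_medium, weight_round, iter_fine_parts, apply_transformation_25) are hypothetical stand-ins for the subroutines and transformations defined in the paper.

def round_items(items, eps, d, f, remove_medium, weight_round,
                iter_fine_parts, apply_transformation_25):
    delta_0 = min(1.0 / (4 * d + 1), 2 * eps / 3)
    I_med, eps_2, eps_1 = remove_medium(items, eps, f, delta_0)

    # Weight-rounding (Transformation 21) of the non-medium items.
    I_hat = weight_round([i for i in items if i not in I_med])

    outputs = []
    for D, fine_partitioning in iter_fine_parts(I_hat):
        remaining = [i for i in I_hat if i not in D]
        # Transformation 25, guided by the guessed fine partitioning.
        I_tilde = apply_transformation_25(remaining, fine_partitioning)
        outputs.append((I_tilde, list(D) + list(I_med)))
    return outputs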

Assume that in an output (\widetilde{I},D) of \operatorname{\mathtt{round}}(I,\varepsilon), the knowledge of the associated fine partitioning is implicitly present in \widetilde{I}.

Lemma 53 (Polynomial-time).

The total number of outputs of \operatorname{\mathtt{round}}(I) is O(n^{\gamma}), where there are n items in I and

\gamma\coloneqq n_{\mathrm{bwc}}\frac{8(d+1)}{\varepsilon\varepsilon_{1}^{3}}+2n_{\mathrm{wwc}}\left(\frac{4}{\varepsilon_{1}^{2}}+\frac{d+1}{\varepsilon\varepsilon_{1}}\right).

The time for each output is at most O(n^{2}/\varepsilon\varepsilon_{1}).

Proof.

The number of outputs of \operatorname{\mathtt{round}}(I) equals the number of outputs of \operatorname{\mathtt{iter-fine-parts}}(\widehat{I}), which is the product, over all coarse partitions, of the number of outputs of the calls to \operatorname{\mathtt{part-big}} and \operatorname{\mathtt{part-wide}}. The bound then follows from Claims 48 and 52. ∎

Lemma 54 (Low discard).

Let (\widetilde{I},D^{\prime}) be an output of \operatorname{\mathtt{round}}(I,\varepsilon). Then

\operatorname{span}(D^{\prime}) \leq\varepsilon\operatorname{span}(I)+\frac{6\varepsilon_{2}n_{\mathrm{wwc}}}{\varepsilon_{1}^{2}}\left(\frac{d+1}{\varepsilon\varepsilon_{1}}+\frac{4}{\varepsilon_{1}^{2}}-\frac{4}{\varepsilon_{1}}\right)
\leq\varepsilon\operatorname{span}(I)+6(d+5)\frac{\varepsilon_{2}n_{\mathrm{wwc}}}{\varepsilon_{1}^{4}}.
Proof.

Let D^{\prime}=D\cup I_{\mathrm{med}}. By Lemma 19, \operatorname{span}(I_{\mathrm{med}})\leq\varepsilon\operatorname{span}(I). Let D_{1} be the wide items in D and D_{2} be the tall items in D. Since all items in D come from \operatorname{\mathtt{part-wide}}, D only contains non-dense items and D=D_{1}\cup D_{2}. Let

\lambda\coloneqq n_{\mathrm{wwc}}(3\varepsilon_{2})\left(\frac{d+1}{\varepsilon\varepsilon_{1}}+\frac{4}{\varepsilon_{1}^{2}}-\frac{4}{\varepsilon_{1}}\right).

By Claim 50 (summed over the n_{\mathrm{wwc}} coarse partitions of wide items, and by its analogue for tall items), h(D_{1})\leq\lambda and w(D_{2})\leq\lambda.

\operatorname{span}(D_{1})=\sum_{i\in D_{1}}\max(a(i),v_{\max}(i))\leq\sum_{i\in D_{1}}\max\left(h(i),\frac{h(i)}{\varepsilon_{1}^{2}}\right)\leq\sum_{i\in D_{1}}\frac{h(i)}{\varepsilon_{1}^{2}}\leq\frac{\lambda}{\varepsilon_{1}^{2}}.

Similarly, \operatorname{span}(D_{2})\leq\lambda/\varepsilon_{1}^{2}. Therefore, \operatorname{span}(D^{\prime})\leq\operatorname{span}(I_{\mathrm{med}})+\operatorname{span}(D_{1})+\operatorname{span}(D_{2})\leq\varepsilon\operatorname{span}(I)+2\lambda/\varepsilon_{1}^{2}, which is the first claimed bound; the second bound follows since \varepsilon_{1}\leq\varepsilon. ∎

Lemma 55 (Homogeneity).

Let (\widetilde{I},D) be an output of \operatorname{\mathtt{round}}(I,\varepsilon). Then the number of types of items in \widetilde{I} is at most a constant:

\frac{8(d+1)n_{\mathrm{bwc}}}{\varepsilon\varepsilon_{1}^{3}}+2n_{\mathrm{wwc}}\left(\frac{d+1}{\varepsilon\varepsilon_{1}}+\frac{4}{\varepsilon_{1}^{2}}\right)+n_{\mathrm{swc}}+2(n_{\mathrm{lwc}}+n_{\mathrm{hwc}}).
Proof.

After applying Transformation 25 to \widehat{I}-D, all non-dense items in each fine partition have:

  • the same weight class.

  • the same width, if the fine partition contains only wide or big items.

  • the same height, if the fine partition contains only tall or big items.

From the definition of fine partitioning, the number of different partitions of non-dense items is at most

\frac{8n_{\mathrm{bwc}}}{\varepsilon_{1}^{2}\delta_{\mathrm{lg}}}+2n_{\mathrm{wwc}}\left(\frac{1}{\delta_{\mathrm{lg}}}+\frac{4}{\varepsilon_{1}^{2}}\right)+n_{\mathrm{swc}}.

In each division, there are n_{\mathrm{hwc}} distinct heavy dense items and n_{\mathrm{lwc}} weight classes of light dense items, by Lemmas 40 and 42. ∎

Table 3: Upper bound on the number of different types of items

no. of types
Division-1 big items (Bw,,): 4(d+1)n