Geometry Meets Vectors: Approximation
Algorithms for Multidimensional Packing
Abstract
We study the generalized multidimensional bin packing problem (GVBP), which generalizes both geometric packing and vector packing. Here, we are given rectangular items where the item has width , height , and nonnegative weights . Our goal is to find an axis-parallel non-overlapping packing of the items into square bins so that, in each vector dimension, the total weight of the items in each bin is at most 1. This is a natural problem arising in logistics, resource allocation, and scheduling. Despite the problem being well studied in practice, approximation algorithms for it have surprisingly rarely been explored.
We first obtain two simple algorithms for GVBP having asymptotic approximation ratios and . We then extend the Round-and-Approx (R&A) framework [3, 6] to wider classes of algorithms, and show how it can be adapted to GVBP. Using more sophisticated techniques, we obtain better approximation algorithms for GVBP, and we get further improvement by combining them with the R&A framework. This gives us an asymptotic approximation ratio of for GVBP, which improves to for the special case of . We obtain further improvement when the items are allowed to be rotated. We also present algorithms for a generalization of GVBP where the items are high dimensional cuboids.
1 Introduction
Bin packing and knapsack problems are classical NP-hard optimization problems. Two classical generalizations of these problems, geometric packing and vector packing, have been well studied since the 1980s [14, 17]. Geometric packing considers the packing of rectangular items, whereas in vector packing, items are multidimensional vectors. Fig. 1 illustrates the difference between geometric packing and vector packing. However, in practice we often encounter a mixture of geometric and vector constraints. Consider the following airline cargo problem [37]: We have boxes to load in an airline cargo container. In addition to the geometric constraint that all the boxes must fit within the container, we also have the constraint that the total weight of the loaded boxes is within a specified capacity. Thus, in this problem, three dimensions are geometric and the weight is a vector constraint.
Weight has been an important constraint to consider for packing in logistics and supply chain management; e.g., cranes and other equipment can be damaged by bins that are too heavy or by a skewed weight distribution [1]. While container loading problems mostly consider packing items inside a container, the container stowage planning problem considers the stacking of the containers onto and off cargo ships [36]. Even when different cargoes are packed into a fleet of aircraft for transport, one needs the individual cargoes to be not too heavy, to ensure stability and lower fuel consumption [2]. Similar problems find applications in vehicle routing with loading constraints [8]. Many practical heuristics [41, 43] have been proposed for these kinds of problems. Several companies (such as Driw, Boxify, Freightcom) and practical packages [46] have considered the problem. In many cases, we also want to ensure a limit on other attributes, such as the amount of magnetism, radioactivity, or toxicity. Each of these properties can be considered as an additional vector dimension.
Such multidimensional packing problems are also getting attention due to their connections with fair resource allocation [38]. In recent years, a considerable amount of research has focused on group fairness [27, 44] such that the algorithms are not biased towards (or against) some groups or categories. One such notion of fairness is restricted dominance [7], which upper bounds the number (or size) of items from a category. These different categories can be considered as dimensions. E.g., in a container packing problem for flood relief, one needs to ensure that the money spent on a container is fairly distributed among different types of items (such as medicine, food, garments). Hence, for each category, there is an upper bound on the value that can go into a container.
Formally, we are given items that are (, )-dimensional, i.e., item is a -dimensional cuboid of lengths and has non-negative weights . A (, )-dimensional bin is a -dimensional cuboid of length 1 in each geometric dimension and weight capacity 1 in each of the vector dimensions. A feasible packing of items into a bin is a packing where items are packed parallel to the axes without overlapping, and in each vector dimension, the total weight of the items in the bin is at most 1. In the (, ) bin packing problem (BP), we have to feasibly pack all items into the minimum number of bins. In the (, ) knapsack problem (KS), each item also has an associated nonnegative profit , and we have to feasibly pack a maximum-profit subset of the items into a single bin (also called a ‘knapsack’). (, ) packing problems generalize both -dimensional geometric packing (when ) and -dimensional vector packing (when ). Already for vector packing, if is part of the input, there is an approximation hardness of unless NP=ZPP [5]. Thus, throughout the paper we assume both to be constants.
1.1 Our Results
We study the first approximation algorithms for general (, ) BP, with a focus on . We give two simple algorithms for (2, ) BP, called and , having asymptotic approximation ratios (AARs) of and , respectively, for any . For , ’s AAR improves to .
Next, we modify the Round-and-Approx (R&A) framework [6] so that it works for (, ) BP. We combine R&A with the algorithm to get an AAR of for (2, ) BP. This improves upon the AAR of for .
Next, we obtain a more sophisticated algorithm for (2, ) BP, called , that fits into the R&A framework and has an even better AAR.
Theorem 1.
For any , there is a polynomial-time algorithm for (2, ) BP, called , having AAR (improves to when items can be rotated by ). For , the AAR improves to (further improves to when items can be rotated).
|  | (2, ) BP | (2, 1) BP |
| --- | --- | --- |
| with R&A |  |  |
| (without rotation) |  |  |
| (with rotation) |  |  |
We also show how to extend and to (, ) BP to obtain AARs and , respectively, where . We also give a similar algorithm for (, ) KS having approximation ratio .
1.2 Related Work
The bin packing problem (BP) has been the cornerstone of approximation algorithms [26]. The standard performance measure for BP algorithms is the asymptotic approximation ratio (AAR). An asymptotic polynomial time approximation scheme (APTAS) for BP was given by Fernandez de la Vega and Lueker [17], using linear grouping. Note that 1-D BP can be considered as (, ) BP as well as (, ) BP. The present best approximation algorithm for 1-D BP returns a packing in at most bins, where is the optimal number of bins [23]. The knapsack problem (KS) is one of Karp’s 21 NP-complete problems. Lawler gave an FPTAS [34] for KS. For surveys on BP and KS, we refer the readers to [13, 28].
In [17], a -asymptotic approximation algorithm was given for the -dimensional vector bin packing (-D VBP). Chekuri and Khanna gave a approximation for -D VBP [11]. The study of 2-D geometric bin packing (2-D GBP) was initiated by [14]. Caprara gave a -asymptotic approximation Harmonic-Decreasing-Height (HDH) algorithm for -D GBP [9], where . This is still the best known approximation for -D GBP for . Both 2-D GBP and 2-D VBP were shown to be APX-hard [4, 45].
Bansal, Caprara and Sviridenko [3] introduced the Round-and-Approx (R&A) framework to obtain improved approximations for both 2-D GBP and -D VBP. The R&A framework is a two-stage process. First, a (possibly exponential-sized) set covering LP relaxation (called configuration LP) is solved approximately. Then, a randomized rounding procedure is applied for a few steps to pack a subset of items, after which only a ‘small’ fraction of items (called the residual instance) remains unpacked. In the second step, the residual instance is packed using a subset-oblivious algorithm. Intuitively, given a random subset of where each element occurs with probability about , a -approximate subset-oblivious algorithm produces a packing of in approximately bins. In the R&A framework, one can obtain a -approximation algorithm using a -approximate subset-oblivious algorithm. Two algorithms, the 1-D BP APTAS [17] and HDH [9], were shown to be subset-oblivious based on various properties of dual-weighting functions. This led to an AAR of and for 2-D GBP and -D VBP, respectively. However, it was cumbersome to extend subset-obliviousness to wider classes of algorithms.
Bansal and Khan [6] later extended the R&A framework for 2-D GBP to rounding-based algorithms, where the large dimensions are rounded up to values and the packing of items is container-based, i.e. each bin contains a constant number of rectangular regions called containers and items are packed in a special way into containers. For 2-D GBP, they used a 1.5-asymptotic-approximation algorithm [24] to obtain the present best -asymptotic-approximation. For -D VBP, Bansal et al. [5] used the R&A framework combined with a multi-objective budgeted matching problem, to obtain the present best AAR of .
Multidimensional knapsack is also well-studied. For -D vector knapsack (-D VKS), Frieze and Clarke gave a PTAS [18]. For 2-D geometric knapsack (GKS), Jansen and Zhang [25] gave a 2-approximation algorithm, while the present best approximation ratio is 1.89 [20]. It is not even known whether 2-D GKS is APX-hard or not. There are many other related important geometric packing problems, such as strip packing [21, 19] and maximum independent set of rectangles [31]. For surveys on multidimensional packing, see [12, 30].
1.3 Technical Contribution
One of our main contributions is the enhancement of R&A framework [6] to wider applications.
First, the R&A framework now also works with (, )-dimensional items, unifying the approach for geometric and vector packing. To use R&A, we need to solve the configuration LP of the corresponding bin packing problem. All previous applications (d-D VBP and 2-D GBP) of R&A solved the configuration LP within factor using a -approximate solution to KS. Due to the unavailability of a PTAS for (2, ) KS, we had to adapt and use a different linear programming algorithm [40] that uses an -approximation algorithm for KS to -approximately solve the configuration LP of the corresponding BP problem, for any constants , .
Second, we introduce more freedom in choosing the packing structure. Unlike the R&A framework in [6] that worked only for container-based packing, we allow either relaxing the packing structure to non-container-based (like in ) or imposing packing constraints in addition to being container-based (like in ). This generalization can help in finding better algorithms for other variants of BP.
Finally, we allow rounding items in ways other than rounding up, if we can find a suitable way of unrounding a packing of rounded items. In , we round down the width and height of some items to 0, and in , we round each (2, )-dimensional item to an item of width 1, height and each vector dimension , where is a value depending on the original dimensions of . As shown in [30], if the large coordinates of items are rounded to types, we cannot obtain better than and -approximations for -D VBP and 2-D GBP, respectively. However, as we now allow rounding down, we may be able to use the R&A framework with algorithms having better approximation ratios.
We also fix a minor error in the R&A framework of [30] (see Section E.1 for details).
In [3], it was mentioned: “One obstacle against the use of R&A for other problems is the difficulty in deriving subset-oblivious algorithms (or proving that existing algorithms are subset oblivious).” We expect that our progress will help in understanding the power of R&A to extend it to other set-cover type problems, e.g. round-SAP and round-UFP [16].
Another major contribution of ours is the handling of the (2, ) BP problem. This problem presents additional challenges over pure geometric BP, and our algorithm demonstrates how to circumvent those challenges. For example, in geometric packing, items having low total area can be packed into a small number of bins using the Next-Fit-Decreasing-Height (NFDH) algorithm (see [14] and Lemmas 9 and 10 in Appendix A). This need not be true when items have weights, since the geometric dimensions can be small but the vector dimensions may be large. To handle this, we divide the items into different classes based on density (i.e., weight/area). We use the facts that items of low density and low total area can be packed into a small number of bins, and items of high density that fit into a bin have low total area. Also, in geometric packing, we can sometimes move items with similar geometric dimensions across different bins (like in linear grouping [17, 39]). Vector dimensions again create problems here, since the items can have very different weights. To handle this, we only move items of similar geometric dimensions and density. This leads to a more structured packing and we show how to find such a near-optimal structured packing efficiently.
2 Preliminaries and Notation
A valid packing of a given set of (, )-dimensional items into a (, )-dimensional bin is an arrangement of the items in the bin such that: (i) All items are packed in an axis-parallel manner, i.e., each item has its faces parallel to the faces of the bin. Formally, to pack an item into a bin, we need to decide its position in the bin, and the item will be packed in the cuboidal region . (ii) The items are non-overlapping, i.e., the interiors of any two items do not intersect. Formally, for any two items and in the same bin, the sets and do not intersect. (iii) All items are packed completely inside the bin, i.e., for each item and each , and . (iv) The total weight packed in each of the vector dimensions is at most one.
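For concreteness, these conditions can be checked mechanically. The sketch below (Python; field names are illustrative and restricted to two geometric dimensions, with `x`, `y` the position of an item's bottom-left corner, `w`, `h` its geometric lengths, and `v` its weights) verifies conditions (i)-(iv) for a proposed placement of items in a single bin.

```python
from itertools import combinations

def is_valid_packing(placed, d_v, eps=1e-9):
    """Check conditions (i)-(iv) for a proposed packing of one bin, restricted to
    two geometric dimensions.  `placed` is a list of dicts with illustrative keys:
    'x', 'y' (bottom-left corner), 'w', 'h' (geometric lengths), 'v' (weights)."""
    # (i) is implicit: every item is represented as an axis-parallel rectangle.
    # (iii) every item lies completely inside the unit bin
    for p in placed:
        if p['x'] < -eps or p['y'] < -eps:
            return False
        if p['x'] + p['w'] > 1 + eps or p['y'] + p['h'] > 1 + eps:
            return False
    # (ii) interiors of any two items do not intersect
    for p, q in combinations(placed, 2):
        x_overlap = min(p['x'] + p['w'], q['x'] + q['w']) - max(p['x'], q['x'])
        y_overlap = min(p['y'] + p['h'], q['y'] + q['h']) - max(p['y'], q['y'])
        if x_overlap > eps and y_overlap > eps:
            return False
    # (iv) total weight in each vector dimension is at most 1
    return all(sum(p['v'][j] for p in placed) <= 1 + eps for j in range(d_v))
```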
Let be a minimization problem and let denote the set of all input instances of . Consider a polynomial-time approximation algorithm for . For any input , let denote the cost of the optimal solution and let denote the cost of ’s output on . Then the quantity
is called the approximation ratio of and the asymptotic approximation ratio (AAR) of is defined as
Intuitively, AAR denotes the algorithm’s performance for inputs whose optimum value is large.
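For concreteness, the two quantities can be written as follows, using $\mathcal{A}(I)$ for the algorithm's cost and $\mathrm{opt}(I)$ for the optimal cost on input $I$ (a standard formulation consistent with the definitions above):

```latex
\rho_{\mathcal{A}} \;=\; \sup_{I \in \mathcal{I}} \frac{\mathcal{A}(I)}{\mathrm{opt}(I)},
\qquad\qquad
\rho_{\mathcal{A}}^{\infty} \;=\; \limsup_{m \to \infty}\;
  \sup_{I \in \mathcal{I}:\, \mathrm{opt}(I) = m} \frac{\mathcal{A}(I)}{\mathrm{opt}(I)}.
```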
Let . Let be the set of polynomial and sub-polynomial functions of . Define , , and as follows: , , . is, intuitively, the measure of largeness of item . For convenience, let . Assume w.l.o.g. that implies . For a set of items, given a function , for , define . This means, e.g., . For any bin packing algorithm , let be the resulting bin packing of items , and let be the number of bins in . Define as the minimum number of bins needed to pack .
Lemma 2.
For (, ) items , .
Proof.
Let there be bins in an optimal packing of (, )-dimensional items . In this optimal packing, let be the items in the bin. Then
For , let , be the width and height of item , respectively. The area of item is . The items in (, ) BP are called ‘rectangles’.
2.1 Configuration LP
For a bin packing instance containing items, a configuration of is a packing of a subset of items of into a bin (see Fig. 2). We denote the set of all configurations of by . The configuration matrix of is a matrix where is 1 if configuration contains item and 0 otherwise. To solve the bin packing problem, it is sufficient to decide the number of bins of each configuration. This gives us the following linear programming relaxation, called a configuration LP:
A feasible solution to the configuration LP is a vector in , and , the number of configurations, can be exponential in . But if , then may have a polynomial-sized representation.
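To make the LP concrete, here is a toy sketch (Python with SciPy; configurations are enumerated by brute force, which is only feasible for tiny 1-D instances and is not how the configuration LP is solved in this paper).

```python
from itertools import chain, combinations

import numpy as np
from scipy.optimize import linprog

def configuration_lp_1d(sizes):
    """Toy illustration: enumerate every feasible configuration of a 1-D BP
    instance (exponential, so only for tiny examples) and solve the covering LP
    min sum_C x_C  subject to  A x >= 1, x >= 0."""
    n = len(sizes)
    subsets = chain.from_iterable(combinations(range(n), k) for k in range(1, n + 1))
    configs = [S for S in subsets if sum(sizes[i] for i in S) <= 1]
    A = np.zeros((n, len(configs)))              # configuration matrix
    for c, S in enumerate(configs):
        for i in S:
            A[i, c] = 1
    res = linprog(c=np.ones(len(configs)),       # minimize the (fractional) number of bins
                  A_ub=-A, b_ub=-np.ones(n),     # A x >= 1  rewritten as  -A x <= -1
                  bounds=[(0, None)] * len(configs), method="highs")
    return res.fun, configs, res.x

value, _, _ = configuration_lp_1d([0.6, 0.6, 0.4, 0.4])
print(value)   # 2.0: no configuration can contain both items of size 0.6
```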
3 Simple Algorithms
In this section, we look at simple algorithms for (2, ) BP. They are based on Steinberg’s algorithm for rectangle packing [42], which has the following corollary:
Lemma 3 (Proof in Appendix B).
Let be a set of rectangles where . Then there is a -time algorithm to pack into 3 square bins of side length 1.
Let be a (2, ) BP instance. Let , i.e., is a 1-D BP instance. The algorithm first runs the Next-Fit algorithm on . Let be the resulting bin packing of into bins. For each , let be the corresponding items from . Then and . then uses Steinberg’s algorithm to pack each into at most 3 bins, giving a packing of into at most bins. By the property of Next-Fit (see Claim 8 in Appendix A), we get that . By Lemma 2, we get . This gives us the following theorem.
Theorem 4.
For (2, ) BP, uses at most bins, so is a -approximation algorithm. runs in time.
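The outline just described and bounded in Theorem 4 can be sketched as follows. This is a sketch only: the concrete 1-D size assigned to each item is taken here to be the maximum of the item's area and its weights (an assumption, since the exact choice is elided above), `next_fit` is the 1-D Next-Fit algorithm of Appendix A, and `steinberg_pack_3_bins` is a hypothetical subroutine realizing Lemma 3.

```python
def simple_pack(items, next_fit, steinberg_pack_3_bins):
    """Sketch of the first simple algorithm: reduce to a 1-D BP instance, pack it
    with Next-Fit, then turn each 1-D bin into at most 3 unit bins via the
    corollary of Steinberg's algorithm (Lemma 3).

    items: list of (w, h, v) with v a tuple of weights.
    next_fit: the 1-D Next-Fit algorithm, returning groups of item indices.
    steinberg_pack_3_bins: hypothetical subroutine realizing Lemma 3.
    Assumption: the 1-D size of an item is max(area, weights)."""
    sizes = [max([w * h, *v]) for (w, h, v) in items]
    bins = []
    for group in next_fit(sizes):                 # each group has total 1-D size <= 1,
        rects = [items[i] for i in group]         # hence total area <= 1 and weights <= 1
        bins.extend(steinberg_pack_3_bins(rects))
    return bins
```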
The algorithm first computes , which is a -D VBP instance obtained by replacing the geometric dimensions of each item by a single vector dimension . It computes a bin packing of using any algorithm . It then uses Steinberg’s algorithm to obtain a bin packing of into at most bins.
Note that . If has AAR , then . Therefore, has AAR . The -D VBP algorithm by [3] (parametrized by a constant ) gives and the algorithm by [5] gives (improves to for ).
Using similar ideas, we can get an algorithm for (2, ) KS (see Appendix D).
Although has a worse AAR than , the number of bins used by is upper-bounded in terms of , which is a useful property. Because of this, we will use it as a subroutine in other algorithms (like ).
The algorithms for (2, ) packing can be extended to (, ) packing. We just need an algorithm for the following problem: given a set of -dimensional cuboids where , pack into a small number of bins. We used Steinberg’s algorithm when . When , we can use the algorithm of [15, Section 2] to pack into at most 9 bins. For , we can use a variant of the algorithm [10] to pack into at most bins (details in Appendix C). Therefore, will use bins, where when , when , and when . Therefore, is -approximate. Similarly, has AAR .
4 Round-and-Approx Framework
We enhance the R&A framework as a general outline for designing approximation algorithms for the generalized bin packing problem. We denote the algorithm for the R&A framework as , which takes as input a set of (, )-dimensional items and parameters and . The steps of the algorithm are as follows (see Algorithm 1 for a more formal description).
1. Solve the configuration LP of . Let be a -asymptotic-approximate solution to the configuration LP. Note that each index of corresponds to a configuration. In all previous applications of the R&A framework, . However, in some of our applications we will have to be a large constant.
2. Randomized rounding of the configuration LP (see the sketch below): For steps do the following: select a configuration with probability . Pack bins according to each of these selected configurations. Let be the remaining items that are not packed, called the residual instance.
3. Rounding of items: We define a subroutine that takes items and parameter as input. (The input to the rounding subroutine is the full item set rather than the residual instance , because is random and we want to round items deterministically, i.e., the rounding of each item should not depend on which other items from lie in . In fact, this is where the old R&A framework [30] introduced an error; see Section E.1 for details.) It discards a set of items such that and then modifies each item in to get a set of items. We say that the output of is , where items in are called rounded items. Intuitively, after rounding, the items in are of types, which makes packing easier. Also, since is small, can be packed into a small number of bins using .
   We impose some restrictions on , which we denote as conditions C1 and C2, that we describe in Section 4.2. Previous versions of R&A only allowed modifications where each item’s dimensions were rounded up. We do not have this restriction; we also allow rounding down some dimensions. We also allow to output a -sized list of guesses of .
4. Pack rounded items: Let be the rounded items corresponding to . Pack into bins using any bin packing algorithm that satisfies ‘condition C3’, which we describe in Section 4.3. Let us name this algorithm .
5. Unrounding: Given a bin packing of , let be a subroutine that computes a bin packing of . is trivial in previous versions of R&A, because they only increase dimensions of items during rounding. In our applications, we may round down items, so can be non-trivial. can be any algorithm that satisfies ‘condition C4’, which we describe in Section 4.4.
We can think of the R&A framework as a meta-algorithm, i.e., we give it the algorithms , and as inputs and it outputs the algorithm . The R&A framework requires that , and satisfy four conditions C1, C2, C3, C4, which we describe in Sections 4.2, 4.3 and 4.4. Prospective users of the R&A framework need to design these three subroutines and prove that they satisfy these four conditions.
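As an illustration of the meta-algorithm, the randomized-rounding step (step 2 above) can be sketched as follows. The representation of configurations and the number of rounds `T` are left as parameters here, since their exact forms are given only in the formal description (Algorithm 1).

```python
import random

def randomized_round(x, num_items, T, seed=0):
    """Sketch of the randomized-rounding step: sample T configurations with
    probability proportional to their LP value, pack one bin per sample, and
    return the residual instance (items covered by no sampled configuration).
    x: dict mapping each configuration (a frozenset of item indices) to its
    value in the (approximate) configuration-LP solution."""
    rng = random.Random(seed)
    configs = list(x)
    sampled = rng.choices(configs, weights=[x[C] for C in configs], k=T)
    covered = set().union(*sampled) if sampled else set()
    residual = [i for i in range(num_items) if i not in covered]
    return sampled, residual
```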
Intuitively, first packs some items into bins by randomized rounding of . We can prove that , so contains a small fraction of the items in (see Lemma 5). We will then try to prove that if the rest of the algorithm () packs into bins, then it will pack into roughly bins. This notion was referred to in [3] as subset-obliviousness. We will use subset-obliviousness to bound the AAR of .
Lemma 5.
, .
Proof.
Let be the configurations chosen during randomized rounding (line 3 in Algorithm 1). Let be the configurations that contain the element .
The probability that a fixed item remains in is the product, over the independently chosen configurations, of the probabilities that each chosen configuration does not contain it; bounding this product using the configuration LP constraint for that item yields the claimed bound. ∎
Section 4.6 shows how to break into , and and use it with R&A.
4.1 Fractional Structured Packing
Let be an output of and let be an arbitrary subset of . Our analysis of is based around a concept called fractional structured packing of . Note that the notion of fractional structured packing only appears in the analysis of . It is not needed to describe any algorithm.
First, we discuss the notion of structured packing. Several types of packings are used in bin packing algorithms, such as container-based [24], shelf-based (e.g., the output of Caprara’s HDH algorithm [10]), guillotine-based [22], corridor-based [20], etc. A type of packing is said to be a structured packing if it satisfies downward closure, i.e., a structured packing remains structured even after removing some items from the packed bins. For example, Jansen and Prädel [24] showed that given any packing of a 2-D GBP instance into bins, we can slice some of the items and repack them into bins such that the resulting packing is container-based. Container-based roughly means that in each bin, items are packed into rectangular regions called containers, and containers’ heights and widths belong to a fixed set of values. Hence, container-based is an example of a structured packing, as a container-based packing of bins remains a container-based packing, even if we remove some items from the packed bins. In fact, all the commonly used packing types are indeed structured packings. Also note that the set of all possible packings is trivially a structured packing. Our R&A framework gives algorithm designers the freedom to use or define the structured packing in any way they want, as long as it satisfies the downward closure property. Typically, the choice of which definition of structured packing to use will depend on the ease of proving Conditions C2 and C3 for that definition. This helped us to go beyond the R&A framework of Bansal and Khan [6], which only considered container-based packings of .
Intuitively, a fractional structured packing is one where we slice each item of into pieces and then find a structured packing of the pieces. Define as the number of bins used in the optimal fractional structured packing of . To analyze the AAR of , we will bound the number of bins used to pack the residual instance in terms of , and then we will bound in terms of .
To define fractional structured packing, we first define what it means to slice an item. From a geometric perspective, slicing an item perpendicular to the dimension means cutting the item into 2 parts using a hyperplane perpendicular to the axis. The vector dimensions get split proportionately across the slices. E.g., for , if for item , then we slice using a vertical cut, and if , we slice using a horizontal cut.
Definition 1 (Slicing an item).
Let be a (, )-dimensional item. Slicing perpendicular to geometric dimension with proportionality (where ) is the operation of replacing by two items and such that: (i) , (ii) and , (iii) .
Definition 2 (Fractional packing).
Let be (, )-dimensional items, where for each item , we are given a set of axes perpendicular to which we can repeatedly slice ( can be empty, which would mean that the item cannot be sliced). If we slice items as per their given axes and then pack the slices into bins, then the resulting packing is called a fractional bin packing.
See Fig. 3 for an example of a fractional packing.
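A minimal sketch of the slicing operation of Definition 1, using an illustrative item representation (a list of geometric lengths and a list of weights): consistent with the description above, the length in the chosen dimension and every weight are split in the ratio of the proportionality, while the other geometric lengths are unchanged.

```python
def slice_item(lengths, weights, k, alpha):
    """Slice an item perpendicular to geometric dimension k with proportionality
    alpha in (0, 1): the length in dimension k and every vector dimension are
    split in the ratio alpha : (1 - alpha); other geometric lengths are kept."""
    assert 0 < alpha < 1
    piece1 = (lengths[:k] + [alpha * lengths[k]] + lengths[k + 1:],
              [alpha * v for v in weights])
    piece2 = (lengths[:k] + [(1 - alpha) * lengths[k]] + lengths[k + 1:],
              [(1 - alpha) * v for v in weights])
    return piece1, piece2

# A (2, 1)-dimensional item of width 0.6, height 0.4 and weight 0.5, sliced by a
# vertical cut (k = 0) with alpha = 0.25: widths 0.15 and 0.45, same height,
# weights 0.125 and 0.375.
print(slice_item([0.6, 0.4], [0.5], k=0, alpha=0.25))
```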
4.2 Properties of
Definition 3 (Density vector).
The density vector of a (, )-dimensional item is the vector . Recall that and note that .
The subroutine returns a set of pairs of the form . Condition C1 is defined as the following constraints over each pair :
- C1.1. Small discard: and .
- C1.2. Bijection from to : Each item in is obtained by modifying an item in . Let be the bijection from to that captures this relation.
- C1.3. Homogeneity properties: partitions items in into a constant number of classes: . These classes should satisfy the following properties, which we call homogeneity properties:
  - All items in a class have the same density vector.
  - For each class , we decide the set of axes perpendicular to which we can slice items in . If items in a class are not allowed to be sliced perpendicular to dimension , then all items in that class have the same length along dimension . (For example, if and vertical cuts are forbidden, then all items have the same width.)
- C1.4. Bounded expansion: Let be any configuration of and be any one of the classes of . Let . Then we need to prove that for some constant . Let us call this result the ‘bounded expansion lemma’.
Intuitively, the homogeneity properties allow us to replace (a slice of) an item in a fractional packing by slices of other items of the same class. Thus, while trying to get a fractional packing, we can focus on the item classes, which are constant in number, instead of focusing on the items. Intuitively, the bounded expansion lemma ensures that we do not round up items too much.
Condition C2 (also called structural theorem): For some constant and for some , .
Intuitively, the structural theorem says that allowing slicing as per and imposing a structure on the packing does not increase the minimum number of bins by too much. The AAR of increases with , so we want to be small.
4.3
Condition C3: For some constant and for any and any , packs into at most bins.
Condition C3 says that we can find a packing that is close to the optimal fractional structured packing. The AAR of increases with , so we want to be small.
4.4
Condition C4: For some constant , if outputs a packing of into bins, then modifies that packing to get a packing of into bins.
Intuitively, condition C4 says that unrounding does not increase the number of bins by too much. The AAR of increases with , so a small is desirable. If only increases the dimensions of items, then unrounding is trivial and .
4.5 AAR of R&A
Recall that is a -approximation algorithm for (, ) BP (see Section 3), where when , when , when , and when . Our key ingredient in the analysis of R&A is the following lemma. We give a sketch of the proof here and defer the full proof to Appendix E.
Lemma 6.
Let be as computed by . Then with high probability, we get
Proof sketch.
Our proof of Lemma 6 is inspired by the analysis in [30]. We prove it by analyzing the fractional structured configuration LP of .
Definition 4.
Let . Suppose is partitioned into classes . Let be the set of all structured configurations of items in that allow items to be sliced as per . For any , the fractional structured configuration LP of , denoted as , is
The integer version of this program is denoted as . The optimal objective values of and are denoted as and , respectively.
Intuitively, is the same as the structured fractional bin packing problem because of the downward closure property, so . (There is a caveat in the definition of , because of which this isn’t completely true; see Lemma 13 in Appendix E for an explanation and a fix.) By the homogeneity property (C1.3), the number of constraints in this LP is a constant . So by the rank lemma (the number of non-zero variables in an extreme-point solution to an LP is at most the number of constraints; see Lemma 2.1.4 in [33]), we can show that (see Lemmas 13 and 14 in Appendix E). Now, to prove Lemma 6, we roughly need to show that .
The RHS in the constraint of is a random variable . The RHS in the constraint of is . Note that by Lemma 5. In fact, we can harness the randomness of , the bounded expansion property (C1.4), and a concentration inequality [35], to show that . Therefore, if is an optimal solution to , then is roughly a solution to , which implies . ∎
Theorem 7.
With high probability, the number of bins used by is at most
Proof.
Let be the set of bins packed in the randomized rounding of configuration LP step (see line 4 in Algorithm 1 in Appendix E), be the set of bins used to pack the discarded items , be the set of bins used to pack the rest of the items , and be the set of bins used by to pack items in .
Then .
Now, we have . The first inequality follows from the property of , the second follows from C1.1 (Small Discard) and the last follows from Lemma 2. Finally,
(The omitted derivation applies the property of the unrounding subroutine (C4), the property of the packing subroutine (C3), and Lemma 6.)
Here, the last inequality follows from the structural theorem (C2), which says that such that . Hence, the total number of bins is at most
The AAR of is roughly . This is minimized for and the minimum value is . As we require , we get this AAR only when . If , the optimal is 1 and the AAR is roughly .
4.6 Example:
We will show how to use with the R&A framework. Recall that is a -approximation algorithm for (, ) BP (see Section 3), where when , when , when , and when . Using the R&A framework on will improve its AAR from to . To do this, we need to show how to implement , and .
1. Solving the configuration LP: Using the KS algorithm of Appendix D and the LP algorithm of [40], we get a -approximate solution to . Therefore, .
2. The rounding subroutine (sketched below): returns just one pair: , where and is an item having height ( geometric dimension) equal to , all other geometric dimensions equal to 1, and all vector dimensions equal to . There is just one class in and all items are allowed to be sliced perpendicular to the height, so the homogeneity properties are satisfied. Also, by Lemma 2.
3. Structural theorem: We take structured packing to be the set of all possible packings. Then . Therefore, .
4. Packing the rounded items: We can treat as the 1-D bin packing instance and pack it using Next-Fit. Therefore, . Therefore, .
5. Unrounding: For each bin in , we can pack the corresponding unrounded items into bins (using Steinberg’s algorithm or ). Therefore, .
Therefore, we get an AAR of .
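For concreteness, the rounding subroutine of item 2 above can be sketched as follows. The exact rounded value is not spelled out above, so, as an assumption, we take it to be the maximum of an item's area and its weights; every rounded item then has width 1 and height and weights all equal to this value, so the rounded instance behaves like a 1-D instance.

```python
def round_for_simple_pack(items):
    """Sketch of the rounding subroutine of item 2: nothing is discarded, and
    every item is replaced by an item of width 1 whose height and weights all
    equal a single value s.  Assumption: s = max(area, weights); the exact value
    is not spelled out in the text above."""
    rounded = []
    for (w, h, v) in items:
        s = max([w * h, *v])
        rounded.append((1.0, s, tuple(s for _ in v)))
    return rounded, []    # (rounded items, empty discard set)
```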
For , we can slightly improve the AAR by using the -approximation algorithm of [32] for (2, ) KS. This gives us an AAR of . This is better than the AAR of for .
The above example is presented only to illustrate an easy use of the R&A framework. It doesn’t exploit the full power of the R&A framework. The algorithm , which we outline in Section 5, uses more sophisticated subroutines , and , and uses a more intricate definition of fractional structured packing to get an even better AAR of (improves to for ).
5 Improved Approximation Algorithms
In this section, we give an overview of the algorithm for (2, ) BP, which is inspired by the 1.5-asymptotic-approximation algorithm for 2-D GBP [39]. is based on the following two-step procedure, as is common in many packing algorithms.
In the first step (structural step), we start with an optimal solution and using some transformation show the existence of a good structured solution. Our structural result roughly says that if items can be packed into bins, then we can round to get a new instance such that for some constant , where is the number of bins in the optimal structured fractional packing of . Roughly, the notion of structured packing that we use here, which we call compartmental packing, imposes the following additional constraints over the container-based packing of [39]:
- An item is said to be dense iff is above a certain threshold. If a bin contains dense items, then we reserve a sufficiently-large rectangular region exclusively for dense items, and dense items can only be packed into this region.
- For a constant , for every , the sum of the weights of items in each bin is at most .
The proof of our structural result differs significantly from that of [39] because the presence of vector dimensions inhibits a straightforward application of their techniques.
In the second step (algorithmic step), we show how to find a near-optimal structured solution. first rounds to and then uses a mixture of brute-force and LP to pack the items into containers, similar to [39] or [29]. This gives us the optimal structured fractional packing of . Then we convert that packing to a non-fractional packing of with only a tiny increase in the number of bins. Intuitively, rounding is helpful here because it reduces the number of different types of items by partitioning the items into a constant number of classes such that items in each class are similar. This simplicity introduced by rounding enables us to find the optimal structured fractional packing efficiently.
Then we show that fits into the R&A framework and gives an AAR of roughly . In particular, we show in the appendix how components from can be used as the R&A subroutines , and . To (approximately) solve the configuration LP, we use the linear programming algorithm from [40] and the -approximation algorithm for (2, ) KS from [32].
We now give a brief overview of some key ideas used in our structural result. Due to space limitations, the details of the structural result and the algorithm can be found in Appendix F. Like [39], we start with a packing of input into bins, and transform it into a structured fractional packing of into bins. We do this in three steps:
1. We round up one geometric dimension of each item and pack the items into roughly bins. We call these bins quarter-structured (see Sections F.2 and F.3).
2. We round the remaining dimensions of items and partition them into classes such that they satisfy the homogeneity properties (see Section 4.2). We allow slicing and repack the items into almost the same number of bins. We call the resulting bin packing semi-structured (see Sections F.4 and F.5).
3. Finally, we transform the packing into a compartmental packing (see the appendix). Compartmental packings have nice properties which help us to find them efficiently.
In steps 1, 2 and 3, [39] uses the Next-Fit-Decreasing-Height (NFDH) algorithm [14] to pack items of area into bins. This does not work when vector dimensions are present; an item of low area can have large weights. In step 2, [39] uses linear grouping, i.e., each item is moved in place of a geometrically larger item so that it can be rounded up. Vector dimensions make such cross-bin movement difficult, since that can violate bins’ weight capacities. [39] uses cross-bin movement in step 1 too.
Our first crucial observation is that most difficulties associated with vector dimensions disappear if items’ density is upper-bounded by a constant. Here density of item is defined as . Specifically, if items of bounded density (we call them non-dense items) have small area, then we can use to pack them into a small number of bins. Linear grouping can also be made to work for such items with some more effort. Therefore, our strategy is to segregate items as dense and non-dense. We reserve a thin rectangular region in bins for dense items, and the rest is reserved for non-dense items.
Furthermore, dense items in a bin must have low total area, due to their high density. If we reserve enough space for them in the bin, we can always pack them in their reserved region using NFDH (see Lemma 9 in Appendix A). Such a guarantee means that we can essentially ignore their geometric dimensions and simply treat them as vectors.
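For illustration, the segregation step just described can be written as follows; the density threshold is left as a parameter here (its exact value is fixed only in the formal description), and density is taken as the largest weight divided by the area, one reading of 'weight/area' from Section 1.3.

```python
def split_by_density(items, threshold):
    """Sketch of the dense/non-dense segregation.  Density is taken here as the
    largest weight divided by the area; the actual Definition 8 uses two
    threshold conditions whose values are fixed only in the formal description."""
    dense, non_dense = [], []
    for (w, h, v) in items:
        density = max(v) / (w * h) if v else 0.0
        (dense if density > threshold else non_dense).append((w, h, v))
    return dense, non_dense
```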
In step 2, we want to round up vector dimensions with only a marginal increase in the number of bins. To do this, we require each quarter-structured bin to be -slacked. -slackness roughly means that for a set of items in a bin, (see Section F.3 for a formal description). -slackness also helps us in designing the packing algorithm, because we can then use techniques from resource-augmented vector bin packing. Also, during the rounding step, we round down the weight of some dense items, and -slackness allows us to unround with no increase in the number of bins.
The observations above guide our definition of quarter-structured. Roughly, a packing is quarter-structured if items having large width have their width and -coordinate rounded to a multiple of and each bin is -slacked. We reserve a thin rectangular region of width for packing dense items (only if the bin contains dense items).
In step 1, [39] uses a standard cutting-strip argument: They create a strip of width next to an edge of the bin (see Fig. 4). Items lying completely inside the strip (called blue items), have small area and are packed separately using NFDH. Items intersecting the boundary of the strip (called red items), are removed. This creates an empty space of width in the bin. Using this empty space, items lying outside the strip (called green items), can then have their width and -coordinate rounded to a multiple of . Their key idea is how to pair up most bins so that red items from two bins can be rounded and packed together into a new bin. This is roughly why they get an AAR of .
We use the cutting-strip argument too, but with some differences. We cannot freely mix red items from different bins if they have large weight, and we cannot simply pack blue items into a small number of bins. We also need bins to be slacked. So, we get a larger AAR of . For , however, we allow mixing items using more sophisticated techniques, which improves the AAR to . Also, we round green items to a multiple of instead of , which leaves an empty strip of width in the bin even after rounding, and we reserve this space for dense items. This gives us a quarter-structured packing.
Based on the broad ideas above, we make more changes to the quarter-structured packing to get a compartmental packing.
Finally, in the algorithmic step, we use a combination of brute-force and LP to pack different types of items into different types of compartments and find a good compartmental packing efficiently.
Acknowledgements: We thank Nikhil Bansal and Thomas Rothvoss for their helpful comments.
References
- [1] M. T. Alonso, R. Alvarez-Valdes, Manuel Iori, F. Parreño, and J. M. Tamarit. Mathematical models for multicontainer loading problems. Omega, 66:106–117, 2017. doi:10.1016/j.omega.2016.02.002.
- [2] Samir V. Amiouny, John J. Bartholdi III, John H. Vande Vate, and Jixian Zhang. Balanced loading. Operations Research, 40(2):238–246, 1992. doi:10.1287/opre.40.2.238.
- [3] Nikhil Bansal, Alberto Caprara, and Maxim Sviridenko. A new approximation method for set covering problems, with applications to multidimensional bin packing. SIAM Journal on Computing, 39(4):1256–1278, 2009. doi:10.1137/080736831.
- [4] Nikhil Bansal, Jose R. Correa, Claire Kenyon, and Maxim Sviridenko. Bin packing in multiple dimensions: inapproximability results and approximation schemes. Mathematics of Operations Research, 31:31–49, 2006. doi:10.1287/moor.1050.0168.
- [5] Nikhil Bansal, Marek Eliáš, and Arindam Khan. Improved approximation for vector bin packing. In ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1561–1579. SIAM, 2016. doi:10.1137/1.9781611974331.ch106.
- [6] Nikhil Bansal and Arindam Khan. Improved approximation algorithm for two-dimensional bin packing. In SODA, pages 13–25, 2014. doi:10.1137/1.9781611973402.2.
- [7] Suman Kalyan Bera, Deeparnab Chakrabarty, Nicolas Flores, and Maryam Negahbani. Fair algorithms for clustering. In Conference on Neural Information Processing Systems (NeurIPS), pages 4955–4966, 2019.
- [8] Andreas Bortfeldt and Gerhard Wäscher. Constraints in container loading–a state-of-the-art review. European Journal of Operational Research, 229(1):1–20, 2013. doi:10.1016/j.ejor.2012.12.006.
- [9] Alberto Caprara. Packing 2-dimensional bins in harmony. In FOCS, pages 490–499, 2002. doi:10.1109/SFCS.2002.1181973.
- [10] Alberto Caprara. Packing -dimensional bins in stages. Mathematics of Operations Research, 33:203–215, 2008. doi:10.1287/moor.1070.0289.
- [11] Chandra Chekuri and Sanjeev Khanna. On multidimensional packing problems. SIAM journal on computing, 33(4):837–851, 2004. doi:10.1137/S0097539799356265.
- [12] Henrik I. Christensen, Arindam Khan, Sebastian Pokutta, and Prasad Tetali. Approximation and online algorithms for multidimensional bin packing: A survey. Computer Science Review, 24:63–79, 2017. doi:10.1016/j.cosrev.2016.12.001.
- [13] Edward G. Coffman, János Csirik, Gábor Galambos, Silvano Martello, and Daniele Vigo. Bin packing approximation algorithms: Survey and classification. In Handbook of combinatorial optimization, pages 455–531. Springer New York, 2013. doi:10.1007/978-1-4419-7997-1_35.
- [14] Edward G. Coffman, Michael R. Garey, David S. Johnson, and Robert E. Tarjan. Performance bounds for level-oriented two-dimensional packing algorithms. SIAM Journal on Computing, 9:808–826, 1980. doi:10.1137/0209062.
- [15] Florian Diedrich, Rolf Harren, Klaus Jansen, Ralf Thöle, and Henning Thomas. Approximation algorithms for 3d orthogonal knapsack. Journal of Computer Science and Technology, 23(5):749, 2008. doi:10.1007/s11390-008-9170-7.
- [16] Khaled M. Elbassioni, Naveen Garg, Divya Gupta, Amit Kumar, Vishal Narula, and Arindam Pal. Approximation algorithms for the unsplittable flow problem on paths and trees. In FSTTCS, volume 18 of Leibniz International Proceedings in Informatics (LIPIcs), pages 267–275, 2012. doi:10.4230/LIPIcs.FSTTCS.2012.267.
- [17] Wenceslas Fernandez de la Vega and George S. Lueker. Bin packing can be solved within in linear time. Combinatorica, 1:349–355, 1981. doi:10.1007/BF02579456.
- [18] A. M. Frieze and M. R. B. Clarke. Approximation algorithms for the -dimensional knapsack problem: worst-case and probabilistic analyses. EJOR, 15:100–109, 1984.
- [19] Waldo Gálvez, Fabrizio Grandoni, Afrouz Jabal Ameli, Klaus Jansen, Arindam Khan, and Malin Rau. A tight (3/2+ ) approximation for skewed strip packing. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.APPROX/RANDOM.2020.44.
- [20] Waldo Gálvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala, Arindam Khan, and Andreas Wiese. Approximating geometric knapsack via l-packings. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 260–271. IEEE, 2017. Full version available at http://www.dii.uchile.cl/~awiese/2DK_full_version.pdf. doi:10.1109/FOCS.2017.32.
- [21] Waldo Gálvez, Fabrizio Grandoni, Salvatore Ingala, and Arindam Khan. Improved pseudo-polynomial-time approximation for strip packing. In FSTTCS, pages 9:1–9:14, 2016. doi:10.4230/LIPIcs.FSTTCS.2016.9.
- [22] Paul C Gilmore and Ralph E Gomory. Multistage cutting stock problems of two and more dimensions. Operations research, 13(1):94–120, 1965. doi:10.1287/opre.13.1.94.
- [23] Rebecca Hoberg and Thomas Rothvoss. A logarithmic additive integrality gap for bin packing. In SODA, pages 2616–2625, 2017. doi:10.1137/1.9781611974782.172.
- [24] Klaus Jansen and Lars Prädel. New approximability results for two-dimensional bin packing. Algorithmica, 74(1):208–269, 2016. doi:10.1007/s00453-014-9943-z.
- [25] Klaus Jansen and Guochuan Zhang. Maximizing the number of packed rectangles. In SWAT, pages 362–371, 2004. doi:10.1007/978-3-540-27810-8_31.
- [26] D. S. Johnson. Approximation algorithms for combinatorial problems. In STOC, pages 38–49, 1973.
- [27] Matthew Joseph, Michael J. Kearns, Jamie H. Morgenstern, and Aaron Roth. Fairness in learning: Classic and contextual bandits. In NeurIPS, pages 325–333, 2016.
- [28] Hans Kellerer, Ulrich Pferschy, and David Pisinger. Knapsack problems. Springer, 2004.
- [29] Claire Kenyon and Eric Rémila. Approximate strip packing. In 37th Annual Symposium on Foundations of Computer Science, FOCS ’96, Burlington, Vermont, USA, 14-16 October, 1996, pages 31–36, 1996. doi:10.1109/SFCS.1996.548461.
- [30] Arindam Khan. Approximation algorithms for multidimensional bin packing. PhD thesis, Georgia Institute of Technology, Atlanta, GA, USA, 2016.
- [31] Arindam Khan and Madhusudhan Reddy Pittu. On guillotine separability of squares and rectangles. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPIcs.APPROX/RANDOM.2020.47.
- [32] Arindam Khan, Eklavya Sharma, and K. V. N. Sreenivas. Approximation algorithms for generalized multidimensional knapsack, 2021. arXiv:2102.05854.
- [33] Lap Chi Lau, Ramamoorthi Ravi, and Mohit Singh. Iterative methods in combinatorial optimization, volume 46. Cambridge University Press, 2011.
- [34] Eugene L Lawler. Fast approximation algorithms for knapsack problems. Mathematics of Operations Research, 4(4):339–356, 1979. doi:10.1287/moor.4.4.339.
- [35] Colin McDiarmid. On the method of bounded differences. Surveys in combinatorics, 141(1):148–188, 1989.
- [36] Maria Flavia Monaco, Marcello Sammarra, and Gregorio Sorrentino. The terminal-oriented ship stowage planning problem. EJOR, 239(1):256–265, 2014. doi:10.1016/j.ejor.2014.05.030.
- [37] Célia Paquay, Michael Schyns, and Sabine Limbourg. A mixed integer programming formulation for the three-dimensional bin packing problem deriving from an air cargo application. International Transactions in Operational Research, 23(1-2):187–213, 2016. doi:10.1111/itor.12111.
- [38] Deval Patel, Arindam Khan, and Anand Louis. Group fairness for knapsack problems. ArXiv, 2006.07832, 2020. arXiv:2006.07832.
- [39] Lars Dennis Prädel. Approximation Algorithms for Geometric Packing Problems. PhD thesis, Kiel University, 2012. URL: https://macau.uni-kiel.de/servlets/MCRFileNodeServlet/dissertation_derivate_00004634/dissertation-praedel.pdf?AC=N.
- [40] Eklavya Sharma. An approximation algorithm for covering linear programs and its application to bin-packing, 2020. arXiv:2011.11268.
- [41] Knut Olav Brathaug Sørset. A heuristic approach to the three-dimensional bin packing problem with weight constraints. Master’s thesis, Høgskolen i Molde-Vitenskapelig høgskole i logistikk, 2019.
- [42] A. Steinberg. A strip-packing algorithm with absolute performance bound 2. SIAM Journal on Computing, 26(2):401–409, 1997. doi:10.1137/S0097539793255801.
- [43] Gregory S. Taylor, Yupo Chan, and Ghulam Rasool. A three-dimensional bin-packing model: exact multicriteria solution and computational complexity. Annals of Operations Research, 251(1-2):397–427, 2017. doi:10.1007/s10479-015-2048-5.
- [44] Alan Tsang, Bryan Wilder, Eric Rice, Milind Tambe, and Yair Zick. Group-fairness in influence maximization. In IJCAI, pages 5997–6005, 2019.
- [45] Gerhard J. Woeginger. There is no asymptotic PTAS for two-dimensional vector packing. Inf. Process. Lett., 64(6):293–297, 1997. doi:10.1016/S0020-0190(97)00179-8.
- [46] Guang Yang. gbp: a bin packing problem solver, 2017. R package version 0.1.0.4. URL: https://CRAN.R-project.org/package=gbp.
Appendix A Some Simple Packing Algorithms
Claim 8 (Next-Fit).
Let be a 1D bin packing problem instance. The Next-Fit algorithm finds a packing of into at most bins.
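For reference, a minimal implementation of the Next-Fit algorithm underlying Claim 8 (the standard greedy):

```python
def next_fit(sizes):
    """Next-Fit for 1-D bin packing: keep one open bin and close it as soon as
    the next item does not fit.  Returns a list of bins (lists of indices)."""
    bins, current, load = [], [], 0.0
    for i, s in enumerate(sizes):
        if current and load + s > 1.0:    # close the current bin
            bins.append(current)
            current, load = [], 0.0
        current.append(i)
        load += s
    if current:
        bins.append(current)
    return bins

print(next_fit([0.5, 0.6, 0.2, 0.9, 0.1]))   # [[0], [1, 2], [3, 4]]
```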
Lemma 9 (NFDH for strip packing [14]).
A set of rectangles can be packed into a strip of height at most using the Next-Fit Decreasing Height (NFDH) algorithm.
Lemma 10 (NFDH for small items [14]).
Let be a set of rectangles where each rectangle has width at most and height at most . Let there be a rectangular bin of width and height . If , then NFDH can pack into the bin.
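For reference, a compact implementation of NFDH for strip packing, as used in Lemmas 9 and 10: sort rectangles by non-increasing height and fill shelves greedily from left to right; the strip height is the sum of the shelf heights.

```python
def nfdh_strip(rects, strip_width=1.0):
    """Next-Fit Decreasing Height for strip packing.  rects: list of (w, h) with
    w <= strip_width.  Returns (placements, strip_height), where each placement
    is (x, y, w, h) with (x, y) the bottom-left corner of the rectangle."""
    order = sorted(rects, key=lambda r: r[1], reverse=True)  # non-increasing height
    placements, shelf_y, shelf_h, x = [], 0.0, 0.0, 0.0
    for (w, h) in order:
        if x + w > strip_width:       # open a new shelf on top of the current one
            shelf_y += shelf_h
            x, shelf_h = 0.0, 0.0
        if shelf_h == 0.0:
            shelf_h = h               # the first (tallest) item sets the shelf height
        placements.append((x, shelf_y, w, h))
        x += w
    return placements, shelf_y + shelf_h
```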
Appendix B Omitted Proofs
Lemma 11 (Steinberg’s algorithm [42]).
Let be a set of rectangles. Let and . Consider a bin of width and height , where and . Then there is a -time algorithm to pack into the bin if
Proof of Lemma 3.
Pack into a bin of width 2 and height 1 using Steinberg’s algorithm (see Lemma 11). Then cut the bin vertically into 2 unit-sized squares. The items which lie completely inside the left half can be packed into a unit-sized bin. The items which lie completely inside the right half can be packed into a unit-sized bin. The items which lie on the cutting line are stacked one-over-the-other, so we can pack them into a unit-sized bin. ∎
Appendix C The Algorithm
Lemma 12.
Let be a set of -dimensional cuboids. Then can be packed into at most bins using a variant of .
Proof.
Define and as
Note that . Let be a -dimensional cuboid. Define as the cuboid of length in the dimension. Therefore, . For a set of cuboids, . Define to be a -length vector whose component is .
For a set of cuboids, define to be the set of items obtained by ignoring all dimensions other than the first . [10] (implicitly) defines a recursive algorithm . For a sequence of D cuboids all having the same , where , returns a packing of into a D bin. Here is the last item in sequence .
We now define a slight variation of the algorithm in [10]. This variant also works when rotation of items is allowed. First, partition the items by . The number of partitions is at most . Let be the partition containing items of . Order the items in arbitrarily. Then repeatedly pick the smallest prefix of such that either or , and pack into a bin using .
Suppose produces bins. Let be the of these bins. Given the way we choose prefixes, for , i.e., at most one bin is partially filled.
Total number of bins used is
Appendix D Algorithm for Knapsack
Let be a set of (2, )-dimensional items. Let be the profit of item . We want to pack a max-profit subset of into a bin. Let be a set of -dimensional vectors obtained by replacing the geometric dimensions of each item by a single vector dimension . Let be a -D VKS algorithm having approximation ratio . gives us a packing of items into a bin. Let be the corresponding items in . Then and . Use Steinberg’s algorithm to compute a packing of into 3 bins: . W.l.o.g., assume . Then output the packing as the answer to the (2, ) KS problem. Since any feasible solution to the (2, ) KS instance also gives a feasible solution to the VKS instance , we get . Since is -approximate, we get . Hence,
Therefore, we get a -approximation algorithm for (2, ) KS. Using the PTAS for -D VKS by Frieze and Clarke [18], we get .
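A sketch of this reduction; `vector_knapsack` stands in for the d-D VKS algorithm (e.g., the PTAS of [18]) and `steinberg_pack_3_bins` for the routine of Lemma 3, both hypothetical interfaces used only for illustration.

```python
def knapsack_2d(items, profits, vector_knapsack, steinberg_pack_3_bins):
    """Sketch of the (2, d) knapsack algorithm described above: solve the derived
    vector knapsack instance (the two geometric dimensions are replaced by the
    area), split the selected rectangles into at most 3 unit bins via Lemma 3,
    and return the most profitable of those bins.
    vector_knapsack and steinberg_pack_3_bins are hypothetical interfaces."""
    vectors = [(w * h,) + tuple(v) for (w, h, v) in items]   # derived VKS instance
    chosen = vector_knapsack(vectors, profits)               # indices of selected items
    bins = steinberg_pack_3_bins([items[i] for i in chosen]) # lists of positions into `chosen`
    best = max(bins, key=lambda b: sum(profits[chosen[i]] for i in b))
    return [chosen[i] for i in best]
```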
Appendix E Details of the R&A Framework
Lemma 13.
.
Proof.
Due to the downward closure property, changing inequality constraints to equality constraints doesn’t affect the optimum values of the above LP and IP. Therefore, is equivalent to the fractional structured bin packing problem.
A problem with the above definition of is that the number of variables can be infinite if certain classes allow slicing. We circumvent this problem by discretizing the configurations: Let be the smallest dimension of any item, i.e. .
In any optimal integral solution to that uses bins, we can slice out some items from each class in each bin so that the of each class in each bin is a multiple of . In each class, the total size of sliced out items across all bins is at most . Therefore, for each class, slices of that class can fit into a single item of that class. If each such single item is packed in a separate bin, the total number of bins used is at most .
Therefore, we only need to consider configurations where either the of each class is a multiple of or there is a single item in the configuration. This gives us a finite number of configurations and completes the proof. ∎
Lemma 14.
.
Proof.
By rank lemma, the number of positive-valued variables in an extreme-point solution to a linear program is at most the number of constraints (other than the variable non-negativity constraints).
Thus, an optimal extreme-point solution to has at most positive-valued variables. Rounding up those variables to the nearest integer will give us an integral solution and increase the objective value by at most . Hence, . ∎
Let denote the configuration LP of items and let denote the optimal objective value of . Recall that is a -approximation algorithm for (, ) BP (see Section 3), where when , when , when , and when .
Lemma 15.
For a set of (, )-dimensional items, .
Proof.
Let be the configuration matrix of . Let be the optimal solution to . In , the constraint for item gives us . Multiplying each constraint by and adding these constraints together, we get
Therefore, . We also have
Therefore, . ∎
Lemma 16 (Independent Bounded Difference Inequality [35]).
Let be random variables with . Let be a function such that whenever vectors and differ only in the coordinate. Then for any ,
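For reference, the standard form of this inequality [35], written here in generic notation: if $X_1, \ldots, X_n$ are independent random variables with $X_k$ taking values in a set $A_k$, and $f$ satisfies $|f(x) - f(x')| \le c_k$ whenever $x$ and $x'$ differ only in the $k$-th coordinate, then for every $t \ge 0$,

```latex
\Pr\bigl[\, f(X_1, \ldots, X_n) - \mathbb{E}\, f(X_1, \ldots, X_n) \ge t \,\bigr]
\;\le\; \exp\!\left( \frac{-2t^2}{\sum_{k=1}^{n} c_k^2} \right).
```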
Lemma 17.
Let be as computed by . Let be a constant. When is large compared to , we get that with high probability
Proof.
Let be the configurations chosen during randomized rounding. When viewed as a vector of length , all coordinates of are independent. Define .
Let be the classes of . Let be the bijection from to . For a set , define . For , define as
For any set , define . Then and is a non-negative additive function.
Let such that and differ only in coordinate . Let and . Let and .
It is easy to see (using Venn diagrams) that and .
(Two omitted calculations follow: the first bounds the change in the function value when a single coordinate changes, using additivity and the bounded expansion lemma (C1.4); the second bounds its expectation, using linearity of expectation, additivity, and Lemma 5.)
For each , define as the smallest prefix of such that either or . Define . Therefore,
(The last bound follows from the guarantees of Section 3.)
Now we will try to prove that with high probability, .
Lemma 6 (restated).
Proof.
E.1 Error in Previous R&A Framework
Here we describe a minor error in the R&A framework of [30], and how it can be fixed.
We define as an output of and for the residual instance , we define as the corresponding rounded items of . Our proof of Lemma 6 relies on the fact that for any subset of rounded items, the reduces by a factor of at least if we restrict our attention to the residual instance. Formally, this means that for any , we have
The equality follows from linearity of expectation and the fact that is deterministic, i.e., it doesn’t depend on the randomness used in the randomized rounding of the configuration LP. This is because is not given any information about what is. The inequality follows from Lemma 5, which says that .
The R&A framework of [30] used similar techniques in their analysis. In their algorithm, however, they round items differently. Specifically, they define a subroutine and define and . They, too, claim that for any subset of rounded items, the reduces by a factor of at least if we restrict our attention to the residual instance. While their claim is correct for input-agnostic rounding (where items are rounded up to values from some constant-size collection chosen independently of the problem instance), the claim is incorrect for input-sensitive rounding (where the values are chosen based on the specific problem instance). So the claim is incorrect if is not deterministic, as then an item can be rounded differently depending on the residual instance.
In fact, they use their R&A framework with the algorithm of Jansen and Prädel [24], which uses linear grouping (along with some other techniques) for rounding. Linear grouping rounds items in an input-sensitive way, i.e., the rounding of each item depends on the sizes of items in , which is a random subset.
Appendix F Algorithm for (2, ) Bin Packing
Here we will see a rounding-based algorithm for (2, ) bin packing, called (named after ‘compartment-based packing’). will be parametrized by a parameter , where and .
Claim 18.
If a set of items fits into a bin, then and .
We will first remove some items from the bin, so that the remaining items can be classified into useful categories.
F.1 Classifying Items
Definition 5.
For constants , a bin packing instance is called -non-medium iff , .
An -non-medium instance has useful properties. Therefore, we want to remove some items from the input instance so that it becomes -non-medium and the removed items can be packed into a small number of bins.
Definition 6.
Let be constants and let be a function such that . Let . For , define and define
Define as the tuple , where .
Lemma 19.
Let . Then .
Proof.
Each item belongs to at most sets . Therefore,
By Definition 6, is -non-medium. By Lemmas 19 and 4, can be packed into at most bins using the algorithm.
We will choose to be independent of , so and are constants. Also note that and . We choose , so . We will choose later. For now, we will impose these conditions: , and . The second condition implies that .
Henceforth, assume all (2, ) bin packing instances to be -non-medium.
Definition 7.
We can classify item by its geometric dimensions as follows:
- Big item: and .
- Wide item: and .
- Tall item: and .
- Small item: and .
When rotation of items is allowed, assume w.l.o.g. that there are no tall items in .
Definition 8 (Dense items).
Item is dense iff either or .
Note that big items cannot be dense.
Definition 9 (Heavy and light items).
A dense item is said to be heavy in vector dimension iff . Otherwise is said to be light in dimension . If is heavy in some dimension, then is said to be heavy, otherwise is light.
F.2 Rounding One Side
In this subsection, we will show how a packing of into bins can be modified to get a more structured packing where one of the geometric dimensions is rounded up.
Definition 10.
In a bin, assume a coordinate system where is at the bottom left and is on the top right. We define the following regions, called strips:
- and .
- .
- .
- and .
We say that an item intersects a strip iff a non-zero volume of that item lies inside the strip.
Property 11.
A bin is said to satisfy Property 11 iff both of these conditions hold:
(a) The -coordinate and width of all non-dense wide and big items are multiples of .
(b) If the bin contains dense items, then dense items are packed inside and no non-dense item intersects .
Property 12.
A bin is said to satisfy Property 12 iff both of these conditions hold:
(a) The -coordinate and height of all non-dense tall and big items are multiples of .
(b) If the bin contains dense items, then dense items are packed inside and no non-dense item intersects .
Equivalently, we can say that a bin satisfies Property 12 iff its mirror image about the line satisfies Property 11.
The main result of this subsection is the following:
Lemma 20.
Given a packing of items into a bin, we can round up the width of some wide and big non-dense items to a multiple of or round up the height of some tall and big non-dense items to a multiple of and get a packing into 2 bins and 2 boxes where:
- Each bin satisfies either Property 11 or Property 12.
- of each box is at most .
- One of the boxes has only dense items. Its larger dimension is 1 and its smaller dimension is .
- One of the boxes has only non-dense items. Its larger dimension is 1 and its smaller dimension is .
- One of the boxes is horizontal, i.e., has width 1 and only contains wide and small items. The other box is vertical, i.e., has height 1 and only contains tall and small items.
Before proving Lemma 20, we first prove a few ancillary results.
For , the items lying completely inside are either small or wide. Let be the set of small and wide items that intersect . For , the items lying completely inside are either small or tall. Let be the set of small and tall items that intersect . Since , we get .
W.l.o.g., assume that because and if , then we can mirror-invert the bin along a horizontal axis. Similarly assume that .
Observation 21.
If a bin only contains tall and small items, it trivially satisfies Property 11a. If a bin only contains wide and small items, it trivially satisfies Property 12a.
Lemma 22.
Suppose we’re given a packing of items into a bin such that no item intersects . Then we can increase the widths of all wide and big items to a multiple of and repack the items so that they satisfy Property 11a and no item intersects .
Proof.
Let and be the -coordinates of the bottom and top edge respectively of item . If an item intersects the strip and lies to the right of (i.e. the left edge of is to the right of the right edge of ), we say that (see Fig. 5). Let denote the reflexive and transitive closure of the relation . It is easy to see that is a partial ordering of . Define as .
Define to be 1 if it is wide or big and to be 0 if it is neither wide nor big. Also, define (if there is no , define ). Intuitively, denotes the length of the largest chain of wide items preceding . The -coordinate of the right edge of item is more than . Therefore, .
Transformation 13.
Move each item to the right by . Additionally, if is wide or big, move it further to the right so that the -coordinate of its left edge is a multiple of , and increase its width so that it is a multiple of .
On applying Transformation 13 to item , the -coordinate of its right edge increases by less than . Since , the increase is less than . Therefore, will not intersect after this transformation. Also, after applying this transformation to all items, the bin satisfies Property 11a.
We will now prove that after applying Transformation 13 to all items, no items overlap. If and are not relatively ordered by , they cannot overlap because we only moved items rightwards. Now assume w.l.o.g. that . The -coordinate of the right edge of increases by less than . The -coordinate of the left edge of increases by at least . Since , and don’t overlap. ∎
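For concreteness, the following is a minimal Python sketch of Transformation 13. The field names (`x`, `w`, `wide_or_big`, `chain`) are illustrative, and the assumption that the rightward shift equals the chain length times the rounding granularity `delta` is ours; the exact expressions are not fixed here.

```python
import math

def apply_transformation_13(items, delta):
    """Hedged sketch of Transformation 13 (field names are illustrative).

    Each item is a dict with keys 'x' (x-coordinate of the left edge),
    'w' (width), 'wide_or_big' (bool), and 'chain' (length of the longest
    chain of wide/big items preceding it, as in the proof of Lemma 22).
    """
    for it in items:
        # Shift the item rightwards in proportion to its chain length.
        it['x'] += it['chain'] * delta
        if it['wide_or_big']:
            # Align the left edge to a multiple of delta ...
            it['x'] = math.ceil(it['x'] / delta) * delta
            # ... and round the width up to a multiple of delta.
            it['w'] = math.ceil(it['w'] / delta) * delta
    return items
```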
Lemma 23.
Let be a set of wide and small items that are dense and have total weight at most 1. They can be packed in polynomial time into a box of width 1 and height .
Proof.
So by Lemma 9, the height used by NFDH is less than .
∎
We can get an analogous result for tall and small dense items.
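As an illustration, here is a minimal sketch of the NFDH (Next-Fit Decreasing Height) shelf packing invoked in the proof of Lemma 23; it only tracks the height used, and the weight bound of the lemma is argued separately in the text.

```python
def nfdh_height(items, box_width=1.0):
    """Next-Fit Decreasing Height sketch: pack (width, height) rectangles
    onto shelves of width `box_width` and return the total height used."""
    items = sorted(items, key=lambda wh: wh[1], reverse=True)  # by height, descending
    total_height = 0.0   # height of all closed shelves
    shelf_height = 0.0   # height of the currently open shelf
    shelf_used = 0.0     # horizontal space used on the open shelf
    for w, h in items:
        if shelf_used > 0.0 and shelf_used + w > box_width:
            total_height += shelf_height      # close the current shelf
            shelf_used = 0.0
        if shelf_used == 0.0:
            shelf_height = h                  # first (tallest) item opens the shelf
        shelf_used += w
    return total_height + shelf_height
```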
Proof of Lemma 20.
Suppose the bin contains items . Then we can use Lemma 23 to move dense wide items to box and move dense tall and small items to box . We will later repack one of and into a bin.
. This gives us two cases: or . The first case is the same as the second with the coordinate axes swapped, so assume w.l.o.g. that .
Move to a box of height 1 and width . only has tall and small non-dense items. Also, .
Let be the set of big and wide items that intersect . Move to a separate bin. The items in are stacked on top of each other. Therefore, we can round their widths to a multiple of . doesn’t have dense items. Therefore, this new bin satisfies the desired properties.
Since we removed and from the bin, is empty. By Lemma 22, we can round the -coordinate and width of big and wide items in the bin to a multiple of and then repack the items in the bin so that the bin satisfies Property 11a and is empty. Observe that
Since , pack into . Now this bin also satisfies Property 11b. In total, we used 2 bins and 2 boxes ( and ). The dense box is horizontal and the non-dense box is vertical. Refer to Fig. 6 for an example.
∎
Lemma 24.
When item rotation is allowed, given a packing of items into a bin, we can round up the width of some wide and big items to a multiple of and round up the height of some tall and big items to a multiple of and get a packing into 2 bins and 1 box where:
- Each bin satisfies Property 11.
- of the box is at most .
- The box has height 1 and width .
- The box only contains tall and small non-dense items.
F.3 Getting Slack in Weight of Bins
For a bin , if , then we can round up weights of items. Hence, we would like to have bins with (roughly) this property.
Definition 14.
A bin is said to be -slacked iff at least one of these conditions holds:
- .
- .
- and only contains dense items and .
A packing of items into multiple bins is said to be -slacked iff all bins in the packing are -slacked.
We say that packing of items in a bin is quarter-structured iff the bin is -slacked and satisfies either Property 11 or Property 12. We would like to round up the width or height of each item in and repack the items into bins such that each bin is quarter-structured.
Lemma 25.
Let and we have a parameter . Let be a set of items where . Then we can partition into at most disjoint subsets such that each subset satisfies one of these properties:
- .
- .
Proof.
Let . Move each item in to a new box. Let . Then and .
Order the items in arbitrarily. For each , find the smallest prefix such that . Let be the last item in . Then . Since we removed items from , . Therefore, .
Now order these prefixes in non-decreasing order of cardinality. Let them be , , , . The sets , , , form a partition of . Put each such set in a new box, if the set is not empty. The items which remain in the original box are . . Since , we get that .
Therefore, the total number of boxes needed is at most . ∎
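A minimal sketch of the prefix-cutting idea used in this proof follows. Here `weight` and `threshold` are stand-ins for the weight function and the slack parameter of the lemma, and the final reordering of prefixes by cardinality is omitted.

```python
def cut_prefixes(items, weight, threshold):
    """Hedged sketch of the prefix-splitting step of Lemma 25: repeatedly
    take the smallest prefix whose total weight exceeds `threshold` and
    move it to a new box; the leftover suffix stays in the original box."""
    boxes, current = [], []
    for it in items:                       # items in an arbitrary fixed order
        current.append(it)
        if weight(current) > threshold:    # smallest prefix exceeding the threshold
            boxes.append(current)
            current = []
    leftover = current                     # remains in the original box
    return boxes, leftover
```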
Lemma 26.
Given a packing of items into bins, we can round up the width of some non-dense wide and big items in to a multiple of and round up the height of some non-dense tall and big items in to a multiple of and repack the items into -slacked bins such that each bin satisfies either Property 11 or Property 12.
Proof.
Let be a packing of items into bins. For each bin , we can use Lemma 20 to round up some items in and split into bins and and boxes and . W.l.o.g., assume is a horizontal box. Put each box in a new bin. Then satisfies Property 12 and satisfies Property 11.
Let , , and . , , and are pairwise disjoint and they are subsets of . Now use Lemma 25 with parameter on bin with dimensions . This splits into -slacked bins. Similarly, by splitting , and , we get , and -slacked bins respectively.
The total number of bins from is . Therefore, we get a total of bins.
, , and satisfy the desired properties except possibly -slackness. When we split a bin into multiple bins, the position of items relative to the bin isn’t changed. Therefore, the split bins continue to satisfy these properties. ∎
Lemma 27.
Given a packing of items , if item rotations are allowed, we can round up the width of some non-dense wide and big items in to a multiple of and round up the height of some non-dense tall and big items in to a multiple of and repack into -slacked bins such that each bin satisfies Property 11.
Proof sketch.
Lemma 28.
Let there be boxes of width 1 and height at most . Let the weight of each box be at most in each dimension, where . Then we can pack these boxes into bins such that the resulting bins are -slacked.
Proof.
We can pack boxes in 1 bin with the resulting bin being -slacked. This is because the sum of weights is at most in each dimension and the total height of the boxes is at most . The total number of bins used is at most which in turn is at most . ∎
The above lemma can also be used for vertical boxes, i.e. height 1 and width at most .
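A minimal sketch of the greedy stacking in the proof of Lemma 28 is given below; `boxes_per_bin` stands in for the constant number of shallow boxes that provably fit, both geometrically and weight-wise, into one bin.

```python
def stack_boxes(boxes, boxes_per_bin):
    """Hedged sketch of Lemma 28: stack a fixed number of width-1,
    small-height boxes into each bin, in the given order."""
    return [boxes[i:i + boxes_per_bin]          # one bin per consecutive group
            for i in range(0, len(boxes), boxes_per_bin)]
```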
Lemma 29.
Let . Let there be boxes of width 1 and height at most containing only non-big non-dense items. Let the weight of each box be at most . Then we can pack these boxes into bins such that the resulting bins are -slacked.
Proof.
Let be an item in the box. Boxes only have non-big items, so . Boxes only have non-dense items, so .
From each box, choose the smallest prefix for which . Then . Remove and put it in a new box of the same dimensions.
This gives us boxes of weight at most . We can pair them up and pack them into bins. Those bins will be -slacked.
We also get new boxes of weight at most . We can pack such boxes into a bin. This gives us at most bins. These bins are -slacked.
Total number of bins used is . ∎
Lemma 30.
Let . Let there be boxes of width 1 and height . Suppose the boxes only have dense items, and each box has total weight at least and at most . Then we can pack the items of these boxes into at most bins such that the resulting bins are -slacked.
Proof.
Let there be boxes that have an item of weight at least . No item has weight more than half. Therefore, we can pair up these high-weight items into at most -slacked bins. In each of these boxes, the remaining items have weight at most . Since , by Lemma 28, we can pack them into number of -slacked bins.
boxes have all items of weight less than . Each of these boxes has total weight between and . Each box can be split into 2 boxes as follows: Order the items in a box and find the smallest prefix of weight at least . Since there are no items of weight more than , such a prefix has weight between and .
Arrange the boxes into groups of at most 3 boxes each. Let , , be these boxes in one such group. Split and into 2 boxes each by the above method. Let the resulting boxes be respectively. Assume w.l.o.g. that for . Pack into 1 bin. It has weight at most , so it is -slacked. Pack and into 1 bin. It has weight at most , so it is -slacked. Therefore, we can convert a group of 3 boxes into 2 -slacked bins.
The total number of bins used is at most , which is at most , assuming . ∎
The above lemma can also be used for vertical boxes, i.e. height 1 and width at most .
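The proof of Lemma 30 splits each all-light box at a weight threshold and regroups boxes in triples. The sketch below illustrates that split-and-regroup pattern with illustrative names (`weight`, `threshold`); the exact way the halves are combined depends on the weight bounds of the lemma, so the combination shown is only one possibility.

```python
def split_at(box, weight, threshold):
    """Smallest prefix of the box with weight above `threshold`, plus the
    remaining suffix (each box is a list of items)."""
    prefix = []
    for idx, item in enumerate(box):
        prefix.append(item)
        if weight(prefix) > threshold:
            return prefix, box[idx + 1:]
    return prefix, []

def regroup_triples(boxes, weight, threshold):
    """Hedged sketch of the regrouping in Lemma 30: every group of three
    all-light boxes is turned into two bins by splitting two of the boxes."""
    bins = []
    for i in range(0, len(boxes), 3):
        group = boxes[i:i + 3]
        if len(group) < 3:
            bins.extend(list(b) for b in group)   # leftover boxes: one bin each
            continue
        b1, b2, b3 = group
        p2, s2 = split_at(b2, weight, threshold)
        p3, s3 = split_at(b3, weight, threshold)
        bins.append(b1 + s2)             # one possible combination of halves
        bins.append(p2 + p3 + s3)
    return bins
```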
Lemma 31.
Given a packing of items into bins, when , we can round up the width of some non-dense wide and big items in to a multiple of and round up the height of some non-dense tall and big items in to a multiple of and repack into -slacked bins such that each bin satisfies either Property 11 or Property 12.
Proof.
Let be a packing of items into bins. For each bin , we can use Lemma 20 to round up some items in and split into bins and and boxes and , where is a horizontal box and is a vertical box, i.e. the width of is 1 and the height of is 1.
Classifying bins:
We will now classify the bins .
Type 1: and :
Among and , at most 1 bin will have weight more than . Use Lemma 25 to split it into 2 bins. So for each original bin of type 1, we get at most 3 -slacked bins and 2 boxes, one horizontal and one vertical, each of total weight at most .
Both and satisfy either Property 11 or Property 12. When we split a bin into multiple bins, the position of items relative to the bin isn’t changed. Therefore, the bins continue to satisfy these properties.
Type 2: and :
implies that and , so and are already -slacked. Pack in a bin. Since , is -slacked. satisfies Property 12. So for each original bin of type 2, we get at most 3 -slacked bins and 1 vertical box of weight at most .
Type 3: and :
The analysis is similar to type 2. For each original bin of type 3, we get at most 3 -slacked bins and 1 horizontal box of weight at most .
Type 4: and :
implies that and , so and are already -slacked. So we have at most 2 -slacked bins and 2 boxes of weight at most .
Repacking boxes:
We will now try to pack the boxes into bins. Each of these bins packs some dense boxes and some non-dense boxes. If multiple dense boxes were packed in a bin, we can use Lemma 23 to repack them into a single dense box and move that box to an edge of the bin. Bins that only pack horizontal boxes satisfy Property 12. Bins that only pack vertical boxes satisfy Property 11.
Among , let there be bins of type .
The number of -slacked bins is at most . We also have horizontal boxes and vertical boxes of weight at most each. Since and , each box has the smaller geometric dimension at most . By Lemma 28, the number of bins we need to pack them is at most .
We have horizontal boxes and vertical boxes that each have weight between and . of these are dense boxes and are non-dense boxes.
The non-dense boxes don’t have big items. Since , by Lemma 29, the number of bins needed to pack them is at most .
By Lemma 30, we can pack the dense boxes into bins, where each bin is -slacked.
The total number of bins used is at most .
∎
Lemma 32.
Given a packing of items into bins, when and item rotations are allowed, we can round up the width of some non-dense wide and big items in to a multiple of and round up the height of some non-dense tall and big items in to a multiple of and repack into -slacked bins such that each bin satisfies Property 11.
Proof sketch.
Theorem 33.
Given a packing of into bins, we can round up the width of some non-dense wide and big items in to a multiple of and round up the height of some non-dense tall and big items in to a multiple of and repack into at most -slacked bins such that each bin satisfies either Property 11 or Property 12. Here the values of and depend on the value of and whether item rotations are allowed. See Table 2.
Table 2:

| Case | Lemma | | |
|---|---|---|---|
| rotations forbidden | Lemma 26 | | |
| rotations allowed | Lemma 27 | | |
| rotations forbidden and | Lemma 31 | | |
| rotations allowed and | Lemma 32 | | |
F.4 Rounding Weights
Definition 15 (Weight classes).
Big items that have the same geometric dimensions and the same weight class are identical. We will round the geometric dimensions of dense items to 0, so heavy items of the same weight class would be identical.
Definition 16 (Slicing and fractional packing).
Let be a set of items. is called a slicing of iff can be obtained from by one or more of the following operations:
- Horizontally slicing a non-dense wide item (i.e. if a wide item is sliced into items and , then and ).
- Vertically slicing a non-dense tall item (i.e. if a tall item is sliced into items and , then and ).
- Slicing a non-dense small item in one or both dimensions.
- Slicing a light dense item of zero area.
A packing of into bins is called a fractional packing of .
Wide non-dense items of the same width and of the same weight class can be used interchangeably while trying to get a fractional packing. Similarly, tall non-dense items of the same height and of the same weight class are interchangeable, small non-dense items of the same weight class are interchangeable and light dense items of zero area and the same weight class are interchangeable.
We will now see how to round the weights of items so that they belong to a constant number of weight classes.
F.4.1 Rounding Weights of Non-Dense Items
Transformation 17.
Given an item , ,
- If is big, round up to a positive multiple of .
- If is wide and non-dense, round up to a positive multiple of .
- If is tall and non-dense, round up to a positive multiple of .
- If is small and non-dense, round up to a positive multiple of .
Lemma 34.
Transformation 17 is valid, i.e. for any item , after the transformation.
Proof.
Since , the transformed weight of a big item can be at most 1.
Since , the weight of a non-dense wide/tall/small item is at most and rounding it up can only increase it by at most . ∎
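A minimal sketch of Transformation 17's rounding rule is given below; the per-category rounding units in `grids` are placeholders for the actual values, which are functions of the rounding parameters and are not fixed here.

```python
import math

def round_up_to_multiple(value, grid):
    """Round `value` up to the smallest positive multiple of `grid`."""
    return max(grid, math.ceil(value / grid) * grid)

def round_item_weights(item, grids):
    """Hedged sketch of Transformation 17. `item['kind']` is one of
    'big', 'wide', 'tall', 'small' (the latter three non-dense), and
    `grids[kind]` is the rounding unit assumed for that kind."""
    unit = grids[item['kind']]
    item['weights'] = [round_up_to_multiple(v, unit) for v in item['weights']]
    return item
```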
Lemma 35.
Let a bin be -slacked, for . Then after applying Transformation 17, the bin will be -slacked.
Proof.
If the bin contains a single item, it will remain -slacked after the transformation.
Suppose the bin contains multiple items, then . Let there be big items. Let the total height of wide non-dense items be . Let the total width of tall non-dense items be . Let the total area of small non-dense items be .
The total area of big items is at least , of wide items is at least and of tall items is at least . Since the total area of items in is at most 1,
The total increase in is at most
Therefore, the resulting bin is -slacked. ∎
Observation 36.
Since we’re rounding up weights of non-dense items in Transformation 17, some items may not continue to satisfy the non-denseness property, i.e. may exceed for some items and some .
However, this will not affect us much. Formally:
(a) Big items will remain non-dense, since and .
(b) Small items will remain non-dense, since will be rounded up to a multiple of , and divides .
(c) For wide items, may rise to at most . Furthermore, will be rounded to a multiple of , and divides , so will continue to be at most .
(d) For tall items, may rise to at most . Furthermore, will be rounded to a multiple of , and divides , so will continue to be at most .
Even if of some non-dense items exceeds after Transformation 17, we will continue to consider them non-dense items.
Lemma 37.
Define . After Transformation 17, the number of weight classes of big items is at most , of wide non-dense items is at most , of tall non-dense items is at most and of small non-dense items is at most .
F.4.2 Rounding Weights of Dense Items
Transformation 18.
For item , if is non-dense, do nothing. If is dense, set and to 0 and for each , if , then set to 0.
Since Transformation 18 rounds down weights, we need to prove that we can easily undo this transformation.
Lemma 38.
Let be a set of items. Let be the items obtained by applying Transformation 18 to . Suppose we’re given a packing of into a bin that satisfies Property 11b and is -slacked, for some .
Then there is a polynomial-time algorithm to convert the packing of into a packing of that satisfies Property 11b, is -slacked, and the position of non-dense items in the packing of is the same as the position of non-dense items in the packing of .
(Analogous lemma holds for Property 12b)
Proof.
By Lemma 23, we can always pack dense items in in polynomial time into a box of height 1 and width . Since , this box fits in . Therefore, Property 11b is satisfied.
Now we will prove that the packing of is -slacked. There are 3 cases to consider:
Case 1: : For each and each dense item , reverting the transformation increases by at most . So by Claim 18, we get . Therefore, is -slacked.

Case 2: : Then , so is -slacked.

Case 3: and only has dense items and : The 0-value dimensions of increase to , so remains the same across this transformation. So is -slacked.
∎
As the first step in rounding dense items, we apply Transformation 18 to .
Since , we get . Hence, Transformation 18 forces heavy items to be heavy in all non-zero dimensions.
Now we will use different transformations on heavy and light items.
Transformation 19.
For a dense item , if , round up to a multiple of .
Lemma 39.
Let be a packing of items into a bin that is -slacked, for some . Let be the packing obtained by applying Transformation 19 to dense items in . Then is a -slacked packing.
Proof.
Case 1: : For each , there are fewer than items in such that . For each such item, increases by less than . Therefore, , and so is -slacked.

Case 2: : , so is -slacked.

Case 3: and only has dense items and : , so is a multiple of . Therefore, for each item , increases to at most . So is -slacked.
∎
Lemma 40.
The number of distinct heavy items after Transformation 19 is at most
Proof.
This is because large vector dimensions are rounded to at most distinct values. ∎
Transformation 20.
For a dense item , if , then for each , round up to a power of if .
Lemma 41.
Let be a packing of items into a bin that is -slacked, for some . Let be the packing obtained by applying Transformation 20 to dense items in . Then is a -slacked packing.
Proof.
Case 1: : For each , increases by a factor of at most . So, . Therefore, is -slacked.

Case 2: : after the transformation, so the packing is valid and . Therefore, is -slacked.

Case 3: and only has dense items and : Since and the transformation only applies when , remains the same after the transformation.
∎
Lemma 42.
After Transformation 20, the number of distinct values of the weight vector of light items is at most
Proof.
This is because is lower-bounded by because of Transformation 18. ∎
Transformation 21 (Weight-rounding).
For a set of items, weight-rounding is the process of applying Transformations 17, 18, 19 and 20 to all items. A set of items is said to be weight-rounded iff is invariant under Transformations 17, 18, 19 and 20.
F.5 Rounding the Other Side
So far, we have seen how to round the weights of each item and one of its geometric dimensions. In this subsection, we show how to use linear grouping to round the other geometric dimension. We will show that after this operation, the items belong to a constant number of homogeneous classes (see Condition C1.3 in Section 4.2).
F.5.1 Rounding Geometric Dimensions of Non-Dense Items
Transformation 22 (Linear grouping).
Suppose we are given a packing of items into bins, where is invariant under Transformation 17 and each bin satisfies Property 11a.
Partition the big items in by their width and weight class. Partition the tall non-dense items in by their weight class. The number of partitions is constant by Property 11a and Lemma 37. Let (so ).
For each partition of big items, do the following:
1. Order the items in in non-increasing order of height.
2. Let . Let be the first items, be the next items, and so on, till , where . For , is called the linear group of . The first item in is called the leader of , denoted as .
3. Increase the height of each item in to . Unpack the items .
4. For each and each , let be the item in and let be the item in . Note that . Increase to and pack where was packed. Since has the same width and weights as , the geometric constraints are not violated and the total weights of the bin do not increase. The number of distinct heights in now becomes at most .
For each partition of tall items, do the following:
1. Order the items in in non-increasing order of height and arrange them side-by-side on the real line, starting from the origin.
2. Let the total width of be . Let be the items in the interval . Slice the items if they lie on the boundaries of the interval. is called the linear group of . The first item in is called the leader of , denoted as .
3. Increase the height of each item in to . Then unpack .
4. For each , move the items in to the space occupied by (items in may need to be sliced for this) and increase the height of each item to . This doesn't violate geometric constraints since and have the same total width and this doesn't increase the total weights of the bin because all items in have the same weight class. The number of distinct heights in now becomes at most .
We can extend the definition of this transformation to bins satisfying Property 12a by swapping vertical and horizontal directions. (Partition big items by height and weight class and partition wide items by weight class. Then round up the width of items in each partition using the above techniques.)
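A minimal Python sketch of the linear-grouping step for a single partition of big items is shown below. The number of groups `k`, the item fields, and the way positions are recorded are illustrative assumptions; the first group is unpacked, and each item of a later group inherits the height and slot of the corresponding item of the previous group.

```python
def linear_grouping(partition, k):
    """Hedged sketch of Transformation 22 for one partition of big items
    with equal width and weight class; `k` stands in for the number of
    linear groups.

    Returns (repacked, unpacked): `unpacked` is the first group, which is
    removed and repacked separately (its cost is bounded in Lemma 44);
    every item of group j+1 is rounded up to the height of the
    corresponding item of group j and placed in that item's old position.
    """
    items = sorted(partition, key=lambda it: it['height'], reverse=True)
    size = -(-len(items) // k)                      # ceil: items per group
    groups = [items[i:i + size] for i in range(0, len(items), size)]
    unpacked = groups[0]
    repacked = []
    for j in range(len(groups) - 1):
        for prev, cur in zip(groups[j], groups[j + 1]):
            # cur inherits prev's (larger) height and prev's old position.
            repacked.append(dict(cur, height=prev['height'],
                                 position=prev.get('position')))
    return repacked, unpacked
```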
Lemma 43.
Let be a set of big and tall items and let be a constant. Define . Then implies can be packed into a -slacked bin where all items touch the bottom of the bin.
Proof.
If for some , then . So can be packed into a bin, and the bin is -slacked since .
Now let for all . So and ,
Therefore, can be packed into a -slacked bin. ∎
Lemma 44.
Suppose we are given a packing of items into bins, where is invariant under Transformation 17 and each bin satisfies Property 11a. Let be the items unpacked by linear grouping (Transformation 22). Then can be repacked into number of -slacked bins that satisfy Property 11.
Proof.
Define . Let be the set of big and tall non-dense items in . For any , define .
Interpret each item as a 1D item of size . Order the items such that big items in appear before tall non-dense items in . Pack them on the bottom of new bins using the Next-Fit algorithm. By Lemma 43, they will require at most -slacked bins. These bins satisfy Property 11a since the width of all big items in is a multiple of , and they satisfy Property 11b since only contains non-dense items.
In Transformation 22, we partitioned all big items in by width and weight class. Let be one such partition. Given the way we created the groups , we get . Since all items in have the same width and weights, is the same for each . Therefore,
In Transformation 22, we partitioned all tall non-dense items in by weight class. Let be one such partition. Given the way we created the groups , we get . All items in have the same weights-to-width ratio, which is at most by 36d. Since , we get for all , so is the same for each . Let that common ratio be . Then,
Summing over all partitions , we get
(1)
For , we get
The last inequality follows because for big and tall items, .
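The repacking above treats each unpacked item as a 1D item and packs these 1D items on the bottoms of new bins with Next-Fit. A minimal Next-Fit sketch follows; the size function and bin capacity are left as parameters since the exact expressions depend on quantities defined earlier in the proof.

```python
def next_fit(sizes, capacity=1.0):
    """Next-Fit for 1D bin packing: keep a single open bin and close it
    as soon as the next item does not fit. Returns a list of bins."""
    bins, current, used = [], [], 0.0
    for s in sizes:
        if current and used + s > capacity:
            bins.append(current)          # close the open bin
            current, used = [], 0.0
        current.append(s)
        used += s
    if current:
        bins.append(current)
    return bins
```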
Lemma 45.
Suppose we are given a packing of items into bins, where is weight-rounded, each bin is -slacked for some , and each bin satisfies either Property 11 or Property 12. Then after applying linear grouping (Transformation 22) to this packing of , we get a packing of items into bins, where all of the following hold:
- is a rounding-up of and contains a constant number of homogeneous classes (see Condition C1.3 in Section 4.2).
- Each bin in the packing of is -slacked and satisfies either Property 11 or Property 12.
- .
Proof sketch.
Follows from the definition of linear grouping and Lemma 44. Note that we apply linear grouping separately to bins satisfying Property 11 and bins satisfying Property 12. ∎
F.5.2 Coarse and Fine Partitioning
Our approach so far has been to start from an optimal packing of items and show how to modify it to obtain an approximately-optimal structured packing of a rounded instance. However, the rounding algorithm must round items without knowing the optimal packing. To design such an algorithm, we first need to introduce additional concepts: coarse partitioning and fine partitioning.
At a high level, our rounding algorithm first partitions the items by weight classes to get a coarse partitioning. It then further partitions the coarse partitions to get a fine partitioning. It then rounds up the geometric dimensions of items in each fine partition to make that partition homogeneous.
We will first formally define coarse and fine partitioning. We will then restate Theorem 33 and Lemma 45 using the language of fine partitioning. Then, in Section F.6, we will see an algorithm for computing a fine partitioning of .
- Let be the set of big items in .
- Let be the set of wide non-dense items in .
- Let be the set of tall non-dense items in .
- Let be the set of small non-dense items in .
- Let be the set of light dense items in that are either tall or small.
- Let be the set of light dense wide items in .
- Let be the set of heavy dense items in that are either tall or small.
- Let be the set of heavy dense wide items in .
When the set of items is clear from context, we will use , , , , , , , to refer to these sets.
Definition 23 (Coarse partitioning).
Let be a weight-rounded instance. Partition items by their weight classes. Then for each partition containing dense items, split that partition into 2 partitions: one containing only tall and small items and the other containing only wide items. The resulting partitioning is called a coarse partitioning of .
We number the coarse partitions in arbitrarily from onwards. There will be at most such partitions by Lemma 37. Denote the coarse partition by .
Similarly, denote the coarse partition
- in by , where .
- in by , where .
- in by , where .
- in by , where .
- in by , where .
- in by , where .
- in by , where .
Observation 46.
There is a unique coarse partitioning of . Furthermore, the unique coarse partitioning can be found in time.
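A sketch of how the coarse partitioning of Definition 23 can be computed in one pass is given below; the field names (`weight_class`, `dense`, `wide`) are illustrative.

```python
from collections import defaultdict

def coarse_partitioning(items):
    """Hedged sketch of Definition 23: group items by weight class, then
    split each group containing dense items into a wide part and a
    tall-or-small part."""
    by_class = defaultdict(list)
    for it in items:
        by_class[it['weight_class']].append(it)

    partitions = []
    for group in by_class.values():
        if any(it['dense'] for it in group):
            wide = [it for it in group if it['wide']]
            rest = [it for it in group if not it['wide']]   # tall or small
            partitions.extend(p for p in (wide, rest) if p)
        else:
            partitions.append(group)
    return partitions
```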
In Theorem 33 and Lemma 45, widths of wide and big items are rounded. The rounding is different in the two: in Theorem 33, we round the widths of some items to multiples of so that the bin satisfies Property 11a, and in Lemma 45, we round the widths of items in bins satisfying Property 12a using linear grouping. To get a rounding algorithm, we have to guess whether the bin of a wide or big item will satisfy Property 11 or Property 12. We will capture these guesses in the fine partitioning.
Definition 24 (Fine partitioning).
Let , , .
Given a coarse partitioning of a set of items, let be a partitioning of , be a partitioning of and be a partitioning of .
- is partitioned into sets where .
- is partitioned into sets where .
- is partitioned into sets where .
- is partitioned into sets .
- is partitioned into sets .
- is partitioned into sets where .
A fine partitioning of is any partitioning of into sets of the form , , , , , , , , , , .
Note that for a given set of items, there can be multiple fine partitionings.
Given a fine partitioning, we use the ‘’ character in superscript or subscript to denote the union of some partitions. For example, and , and .
When item rotations are allowed, the fine partitioning includes information on which items to rotate, and we can assume w.l.o.g. that .
Transformation 25.
Given a fine partitioning of , execute the following operations:
- , increase to .
- , increase to .
- , increase to .
- , increase to .
- , increase to .
- , increase to .
The number of fine partitions is constant and after applying Transformation 25, each partition is homogeneous.
Definition 26 (Semi-structured packing of fine partitioning).
Suppose we are given a fine partitioning of items . A packing of items into a bin is said to be ‘division-1 semi-structured’ with respect to the fine partitioning iff doesn’t contain items from , , and and satisfies Property 11.
A packing of items into a bin is said to be ‘division-2 semi-structured’ with respect to the fine partitioning iff doesn’t contain items from , , and and satisfies Property 12.
Packing of items into bins is called semi-structured iff each bin is either division-1 semi-structured or division-2 semi-structured.
Definition 27 (Balanced fine partitioning).
A fine partitioning is said to be balanced iff it satisfies all of the following conditions:
-
-
- , the sets can be obtained from by ordering the items in in non-increasing order of height (breaking ties arbitrarily) and putting the first items in , the next items in , and so on, where .
- , the sets can be obtained from by ordering the items in in non-increasing order of width (breaking ties arbitrarily) and putting the first items in , the next items in , and so on, where .
We now restate Theorem 33 and Lemma 45 in terms of fine partitioning.
Lemma 47.
Let be a set of items and be the items obtained by weight-rounding . Then there exists a balanced fine partitioning of a slicing of such that after applying Transformation 25 to , there is a semi-structured -slacked fractional packing of into bins. Here and are as defined in Table 2 in Theorem 33.
Proof.
By Theorem 33, we can round up the width of some big and wide non-dense items in to the nearest multiple of and round up the height of some big and tall non-dense items in to the nearest multiple of and then pack into -slacked bins such that each bin satisfies either Property 11 or Property 12. Let be such a bin packing of . By Lemmas 35, 39 and 41, gives us a -slacked packing of .
Call the bins in that satisfy Property 11 division-1 bins. Call the rest of the bins division-2 bins. The items whose width needs to be rounded up to a multiple of are the big and wide items in division-1 bins and the items whose height needs to be rounded up to a multiple of are the big and tall items in division-2 bins. No other items need to have their width or height rounded up in the packing produced by Theorem 33.
Let and be the items of in division-1 bins and division-2 bins respectively.
We can compute the coarse partitioning of . Define
Define
- .
- .
- .
- .
Define
- as the linear group of (see Transformation 22).
- as the linear group of .
- as the linear group of .
- as the linear group of .
This is how we get a fine partitioning of a slicing of .
As per Lemma 45, on applying Transformation 25 to , the resulting instance can be sliced and packed into number of -slacked bins. ∎
F.6 Rounding Algorithm
Let be a set of weight-rounded items. To use Lemma 47 to get an approximately-optimal packing of items , we would like to iterate over all balanced fine partitionings of slicings of . However, even if we don’t consider slicings, doing that will take exponential time, since for each big, wide and tall item, we need to decide whether to designate it as a division-1 item or a division-2 item.
We can get around this problem by iterating over a polynomial-sized set of fine partitionings such that each balanced fine partitioning of a slicing of is ‘close to’ a fine partitioning in . We will now define what we mean by ‘close to’.
Definition 28 (Predecessor of a set of items).
Let and be sets of items. Interpret each item as a bin whose geometric dimensions are the same as that of and whose weight capacities are the same as the weights of . is said to be a predecessor of (denoted as ) iff can be sliced and packed into .
We will design a polynomial-time algorithm that takes as input a weight-rounded set of items and outputs a set of pairs such that for each balanced fine partitioning of a slicing of , there exists a pair such that all of these conditions hold:
- is a fine partitioning of .
- After applying Transformation 25, each partition in is a predecessor of the corresponding partition in .
- is a set of non-dense items (called discarded items) such that is small compared to .
F.6.1 Big Items
Let be an algorithm that takes a coarse partition of big items as input, and outputs multiple fine partitionings of . We can use as a subroutine in .
To design , we will guess the cardinality of sets and . We will then guess the maximum height in and the maximum width in . Then for each guess, we will solve a max-flow problem to check if items in can be assigned to these sets. We skip the details here since this approach is similar to that of Prädel (see Section 3.3.1 in [39]).
Formally outputs a set of pairs of the form , where is supposed to be a fine partitioning of .
Claim 48.
generates values, where , and the running time per value is .
Claim 49.
Let be a balanced fine partitioning of . Then there is an output of where such that is a fine partitioning of and after applying Transformation 25,
for the rotational case is similar to the non-rotational case: When rotations are allowed, assume w.l.o.g. that . We will guess the cardinality and maximum height in sets . Then for each guess, we will solve a max-flow problem to check if items in can be assigned to these sets, possibly after rotating some items.
F.6.2 Wide and Tall Items
Let be an algorithm that takes a coarse partition of wide items as input, and outputs multiple fine partitionings of . We can use as a subroutine in .
Let be a balanced fine partitioning of a slicing of . We want to find a fine partitioning of a large subset of such that after applying Transformation 25 to and , every fine partition in is a predecessor of the corresponding fine partition in .
For any , define and . We will create a rectangular box for each fine partition and then try to pack a large subset of into these boxes. Let . For each , let be a box of width and height . For each , let be a box of width and height . Since we don’t know and , we will guess the value of and we will guess very close lower bounds on and . We will then try to pack most of the items from into these boxes.
Let be the items packed into and let be the items packed into . Then is a fine partitioning of a large subset of . After applying Transformation 25 to and , each item in and has width . Since , we get after Transformation 25. We can similarly prove that . Therefore, is a suitable fine partitioning.
The details on how to approximately guess the size of boxes and how to pack a large subset of items into boxes can be deduced from section 3.3.1 of [39].
Formally, outputs a set of pairs of the form , where items in are called discarded items and is supposed to be a fine partitioning of .
Claim 50.
For every output of ,
Claim 51.
Let be a balanced fine partitioning of a slicing of . Then for some output of , is a fine partitioning of and after applying Transformation 25 to and ,
Claim 52.
Let there be items in . Let . Then outputs at most distinct values. The running time per value is .
can analogously be used for sub-partitioning coarse partitions of tall items.
When item rotations are allowed, gives us instead of .
F.6.3 Rounding Algorithm
Based on and , we define an algorithm (Algorithm 2) that returns a set of fine partitionings. We then use to design the algorithm (Algorithm 3).
Assume that in an output of , the knowledge of the associated fine partitioning is implicitly present in .
Lemma 53 (Polynomial-time).
The total number of outputs of is , where there are items in and
The time for each output is at most .
Lemma 54 (Low discard).
Let be an output of . Then
Proof.
Lemma 55 (Homogeneity).
Let be an output of . Then the number of types of items in is at most a constant:
Proof.
After applying Transformation 25 to , all non-dense items in each fine partition have:
- the same weight class.
- the same width, if the fine partition contains only wide or big items.
- the same height, if the fine partition contains only tall or big items.
From the definition of fine partitioning, the number of different partitions of non-dense items is at most
In each division, there are distinct heavy dense items and weight classes in light dense items by Lemmas 40 and 42. ∎