
Diamonds are Forever in the Blockchain:
Geometric Polyhedral Point-Set Pattern Matching

Gill Barequet, Dept. of Computer Science, Technion—Israel Inst. of Technology, [email protected]
Shion Fukuzawa, Michael T. Goodrich, Martha C. Osegueda, and Evrim Ozel, Dept. of Computer Science, University of California, Irvine, {fukuzaws,goodrich,mosegued,eozel}@uci.edu
David M. Mount, Dept. of Computer Science, University of Maryland, College Park, [email protected]
Abstract

Motivated by blockchain technology for supply-chain tracing of ethically sourced diamonds, we study geometric polyhedral point-set pattern matching as minimum-width polyhedral annulus problems under translations and rotations. We provide two $(1+\varepsilon)$-approximation schemes under translations with $O(\varepsilon^{-d}n)$ time for $d$ dimensions and $O(n\log\varepsilon^{-1}+\varepsilon^{-2})$ time for two dimensions, and we give an $O(f^{d-1}\varepsilon^{1-2d}n)$-time algorithm when also allowing for rotations, parameterized on $f$, which we define as the slimness of the point set.

1 Introduction

A notable recent application of computational geometry is tracking supply chains for natural diamonds, where the industry and customers are strongly motivated to prefer ethically sourced provenance (e.g., to avoid so-called “blood diamonds”). For example, the Tracr system employs a blockchain for tracing the supply chain of a diamond from its being mined as a rough diamond to a customer purchasing a polished diamond [23]. (See Figure 1.)

Figure 1: Blockchain transactions in a diamond supply chain, providing provenance, traceability, and authenticity of an ethically-sourced diamond.

Essential steps in the Tracr blockchain supply-chain process require methods to match point sets against geometric shapes, e.g., to guarantee that a diamond has not been replaced with one of questionable provenance [23]. Currently, the Tracr system uses standard machine-learning techniques to perform the shape-matching steps. We believe, however, that better accuracy can be achieved using computational geometry approaches. In particular, motivated by the Tracr application, in this paper we are interested in efficient methods for matching point sets against geometric shapes, such as polyhedra. Formalizing this problem, we study the problem of finding the best translation and/or rotation of the boundary of a convex polytope, $P$ (e.g., defining a polished diamond shape), to match a set of $n$ points in $d$-dimensional ($d\geq 3$) space, where the point set is a “good” sample of the boundary of a polytope that is purported to be $P$. Since there may be small inaccuracies in the sampling process, our aim is to compute a minimum-width polyhedral annulus determined by $P$ that contains the sampled points. In the interest of optimizing running time, rather than seeking an exact solution, we seek an approximate solution that deviates from the true solution by a predefined quantity $\varepsilon>0$.

Related Work.

We are not familiar with any previous work on the problems we study in this paper. Nevertheless, there is considerable prior work on the general area of matching a geometric shape to a set of points, especially in the plane. For example, Barequet, Bose, Dickerson, and Goodrich [12] give solutions to several constrained polygon annulus placement problems for offset and scaled polygons, including an algorithm for finding the translation for the minimum offset of an $m$-vertex polygon that contains a set of $n$ points in $O(n\log^{2}n+m)$ time. Barequet, Dickerson, and Scharf [13] study the problem of covering a maximum number of $n$ points with an $m$-vertex polygon (not just its boundary) under translations, rotations, and/or scaling, giving, e.g., an algorithm running in time $O(n^{3}m^{4}\log(nm))$ for the general problem. There has also been work on finding a minimum-width annulus for rectangles and squares, e.g., see [19, 9, 11, 21].

Chan [15] presents a $(1+\varepsilon)$-approximation method that finds a minimum-width spherical annulus of $n$ points in $d$ dimensions in $O(n\log(1/\varepsilon)+1/\varepsilon^{O(1)})$ time, and Agarwal, Har-Peled, and Varadarajan [1] improve this to $O(n+1/\varepsilon^{O(d^{2})})$ time via coresets [2, 22, 24, 3]. A line of work has considered computing the spherical annulus under stronger assumptions on the point samples. Most notably, Devillers and Ramos [17] combine various definitions of “minimum quality assumptions” by Mehlhorn, Shermer, and Yap [20] and Bose and Morin [14], show that under these assumptions the spherical annulus can be computed in linear time for $d=2$, and present empirical evidence for higher dimensions. Arya, da Fonseca, and Mount [6] show how to find an $\varepsilon$-approximation of the width of $n$ points in $O(n\log(1/\varepsilon)+1/\varepsilon^{(d-1)/2+\alpha})$ time, for a constant $\alpha>0$. Bae [10] shows how to find a min-width $d$-dimensional hypercubic shell in $O(n^{\lfloor d/2\rfloor}\log^{d-1}n)$ expected time.

Our Results.

Given a set of $n$ points in $\mathbf{R}^{d}$, we provide an $O(\varepsilon^{-d}n)$-time $(1+\varepsilon)$-approximate polytope-matching algorithm under translations for $d\geq 3$, and an $O(n\log\varepsilon^{-1}+\varepsilon^{-2})$-time algorithm for $d=2$; we also provide an $O(f^{d-1}\varepsilon^{1-2d}n)$-time algorithm when allowing for rotations, where the complexity of the polytope is constant and the rotation algorithm is parameterized by $f$, which we define as the slimness of the point set.

The paper is organized as follows. In Section 2, we lay the groundwork for this work by providing some necessary definitions. In Section 3, we approximate the MWA under translations only; we provide a constant-factor approximation scheme and a $(1+\varepsilon)$-approximation scheme, and describe how to improve the running time in two dimensions. In Section 4, we consider the MWA under rotations.

2 Preliminaries

Following previous convention [4, 7, 8, 18, 5], we say that a point set $S$ is a $\delta$-uniform sample of a surface $\Sigma\subset\mathbb{R}^{d}$ if for every point $p\in\Sigma$ there exists a point $q\in S$ such that $d(p,q)\leq\delta$. Let $C\subset\mathbb{R}^{d}$ be a closed, convex polyhedron containing the origin in its interior. Given $C$ and $x\in\mathbb{R}^{d}$, define $x+C=\{x+y:y\in C\}$ (the translation of $C$ by $x$), and for $r\in\mathbb{R}$, define $rC=\{ry:y\in C\}$. A placement of $C$ is a pair $(x,r)$, where $x\in\mathbb{R}^{d}$ and $r\in\mathbb{R}^{\geq 0}$, representing the translated and scaled copy $x+rC$. We refer to $x$ and $r$ as the center and radius of the placement, respectively. Two placements are concentric if they share the same center.

Let $C$ be any closed convex body in $\mathbb{R}^{d}$ containing the origin in its interior. The convex distance function induced by $C$ is the function $d_{C}:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{\geq 0}$, where

$d_{C}(p,q)=\min\{r : r\geq 0 \text{ and } q\in p+rC\}.$

Thus, the convex distance between $p$ and $q$ is determined by the minimum-radius placement of $C$ centered at $p$ that contains $q$ (see Figure 2). When $C$ is centrally symmetric, this defines a metric, but for general $C$ the function $d_{C}$ may not be symmetric. We call the original shape $C$ the unit ball $U_{C}$ under the distance function $d_{C}$. Note that $d_{C}(a,c)=d_{C}(a,b)+d_{C}(b,c)$ when $a$, $b$, and $c$ are collinear and appear in that order.
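As a concrete illustration, if the unit ball $C$ is represented as an intersection of half-planes $a_i\cdot x\leq b_i$ with the origin in its interior (so every $b_i>0$), then $d_C(p,q)$ is simply the largest ratio $a_i\cdot(q-p)/b_i$. The following minimal sketch (the names and the half-plane representation are ours, not from the paper) computes this:

```python
# Convex distance d_C(p, q): smallest r >= 0 with q in p + rC, where C is
# given by half-planes a_i . x <= b_i containing the origin (every b_i > 0).
def convex_distance(A, b, p, q):
    """A: list of facet normals a_i, b: offsets b_i, p, q: points in R^d."""
    scales = (sum(ai * (qi - pi) for ai, qi, pi in zip(row, q, p)) / bi
              for row, bi in zip(A, b))
    return max(0.0, max(scales))

# Example: C = unit square [-1, 1]^2, a centrally symmetric case in which
# d_C is the L-infinity metric.
A = [(1, 0), (-1, 0), (0, 1), (0, -1)]
b = [1, 1, 1, 1]
print(convex_distance(A, b, (0, 0), (3, 2)))  # → 3.0
```

For a non-symmetric $C$ (facets at different distances from the origin), the same formula yields the asymmetric distance discussed above.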

Define an annulus for $C$ to be the set-theoretic difference of two concentric placements, $(p+RC)\setminus(p+rC)$, for $0\leq r\leq R$. The width of the annulus is $R-r$. Given a $\delta$-uniform sample of points, $S$, there are three placements of $C$ we are interested in:

\bullet Minimum enclosing ball (MinBall): A placement of $C$ of the smallest radius that contains all of the points in $S$.

\bullet Maximum enclosed ball (MaxBall): A placement of $C$ of the largest radius, centered within the convex hull of $S$, that contains no points of $S$.

\bullet Minimum width annulus (MWA): A placement of an annulus for $C$ of minimum width that contains all of the points in $S$.

Note that, following the definition of the MaxBall, we require that the center of the MWA also lie within the convex hull of $S$. For each of the above placements, we also refer to parameterized versions, for example MinBall($p$), MaxBall($p$), or MWA($p$). These respectively refer to the minimum enclosing ball, maximum enclosed ball, or minimum width annulus centered at the point $p$.

Further, we use $|{\rm MinBall}(p)|$ and $|{\rm MaxBall}(p)|$ to denote the radius of MinBall($p$) and MaxBall($p$), respectively, and we use $|{\rm MWA}(p)|$ to denote the width of MWA($p$).

The ratio, $F$, of the MinBall over the MaxBall of $S\subset\mathbb{R}^{d}$ under the distance function $d_{C}$ defines the fatness of $S$ under $d_{C}$, such that $F:=|{\rm MinBall}|/|{\rm MaxBall}|$. Also, we define the concentric fatness as the ratio of the MinBall and MaxBall centered at the center of the MWA, such that $F_{c}:=|{\rm MinBall}(c_{opt})|/|{\rm MaxBall}(c_{opt})|$, where $c_{opt}$ is the center of the MWA. Conversely, we define the slimness, $f$, as the ratio of MinBall($c_{opt}$) over the MWA, i.e., $f:=|{\rm MinBall}(c_{opt})|/|{\rm MWA}|$; note that $f^{-1}=1-F_{c}^{-1}$.

Figure 2: Left: a visual representation of a polyhedral distance function and the distance between two points. Center: the MinBall under $d_{C}$ containing all points in $S$, centered at $c$. Right: the MWA of $S$ with all points within MinBall($c$) $\setminus$ MaxBall($c$).
Remark 1

In order for a $\delta$-uniform sample to represent the surface, $\Sigma$, with sufficient accuracy for a meaningful MWA, we assume that the sample contains at least one point between corresponding facets of the MWA, where corresponding facets are facets of the MinBall and MaxBall representing the same facet of $U_{C}$. Therefore, in the remainder of the paper, we assume we have a $\delta$-uniform sample and that $\delta$ is small enough to guarantee this condition even for the smallest facets.

In practice, it would be easy to determine a small enough $\delta$ before sampling $\Sigma$, since only sufficiently slim surfaces would benefit from finding the MWA, and very fat surfaces would yield an increasingly noisy MaxBall. One easy approach would be setting $\delta$ to the size of the smallest facet of the MinBall, scaled down by an arbitrary constant larger than the maximum expected fatness, such as 100. This example imposes a very generous bound on fatness, since it would allow the inner shell to be 1% of the size of the outer shell; in practice, a single-digit constant would often suffice.

Also, note that, for a given center point $c$, MWA($c$) is uniquely defined as the annulus centered at $c$ with inner radius $\min_{p\in S}d_{C}(c,p)$ and outer radius $\max_{p\in S}d_{C}(c,p)$. Further, let us assume that the reference polytope defining our polyhedral distance function has $m$ facets, where $m$ is a fixed constant, since the sample size is expected to be much larger than $m$. Thus, $d_{C}$ can be calculated in $O(m)$ time; hence, MWA($c$) can be found in $O(mn)$ time, which is $O(n)$ under our assumption.
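Since MWA($c$) only requires the minimum and maximum convex distance from $c$ to the sample, the $O(mn)$-time computation is a single pass over the points. A hedged sketch, assuming the same half-plane representation $a_i\cdot x\leq b_i$ for $C$ as above (our naming, not the paper's pseudocode):

```python
# MWA(c): width, inner radius, and outer radius of the annulus for C
# centered at c, computed in one O(mn)-time pass over the n sample points.
def mwa_at_center(A, b, c, points):
    def dist(p):  # convex distance d_C(c, p) for half-planes a_i . x <= b_i
        return max(0.0, max(sum(ai * (pi - ci) for ai, pi, ci in zip(row, p, c)) / bi
                            for row, bi in zip(A, b)))
    radii = [dist(p) for p in points]
    r, R = min(radii), max(radii)
    return R - r, r, R

# Example: unit-square distance; samples at L-infinity radii 1 and 2.
A = [(1, 0), (-1, 0), (0, 1), (0, -1)]
b = [1, 1, 1, 1]
print(mwa_at_center(A, b, (0, 0), [(1, 0), (0, -2)]))  # → (1.0, 1.0, 2.0)
```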

3 Approximating the Minimum Width Annulus

Let us first describe how to find a constant-factor approximation of the MWA under translations. Note that, by assumption, the center $c$ of our approximation lies within the convex hull of $S$. Let us denote the center, outer radius, inner radius, and width of the optimal MWA by $c_{opt}$, $R_{opt}$, $r_{opt}$, and $w_{opt}$, respectively.

We begin with Lemma 3.1, where we prove that $c_{opt}$ is within a certain distance from the center of the MinBall, $c$, providing a search region for $c_{opt}$. In Lemma 3.3, we bound the width achieved by a center-point that is sufficiently close to $c_{opt}$. We then use this in Lemma 3.5 to prove that $|{\rm MWA}(c)|$ achieves a constant-factor approximation.

Lemma 3.1.

The center of the MWA, $c_{opt}$, is within distance $w_{opt}$ of the center of the MinBall, $c$. That is, $d_{C}(c,c_{opt})\leq w_{opt}$.

Proof 3.2.

Recall our assumption from Remark 1. Since at least one sample point lies on each facet, the MinBall cannot shrink past any facet of MaxBall($c_{opt}$).

Suppose for contradiction that $d_{C}(c,c_{opt})>w_{opt}$. Let $s$ be the point where a ray projected from $c$ through $c_{opt}$ intersects the boundary of MaxBall($c_{opt}$), and let $R$ denote the radius of the MinBall. Observe that $R$ must be large enough for the MinBall to contain $s$, and therefore $R\geq d_{C}(c,s)$.

$R \geq d_{C}(c,c_{opt})+d_{C}(c_{opt},s)$ (by collinearity)
$\;\;\, > w_{opt}+d_{C}(c_{opt},s)$ (by assumption)
$\;\;\, = w_{opt}+r_{opt}$ (since $s$ lies on the boundary of MaxBall($c_{opt}$)).

Thus, since $w_{opt}+r_{opt}=R_{opt}$, we find $R>R_{opt}$, which is a contradiction, since $R$ must be the smallest MinBall radius across all possible centers. Therefore, $d_{C}(c,c_{opt})$ cannot be larger than $w_{opt}$.

Lemma 3.1 helps us constrain the region within which $c_{opt}$ must be contained. Let us now reason about how a given center point, $c$, would serve as an approximation. For convenience, let us define $R:=|{\rm MinBall}(c)|$ and $r:=|{\rm MaxBall}(c)|$ as the radii of the MinBall and MaxBall centered at $c$, respectively.

Lemma 3.3.

Suppose $c$ is an arbitrary center-point in our search region, and the two directed distances between $c$ and $c_{opt}$ are at most $t$, i.e., $t\geq\max\{d_{C}(c,c_{opt}),d_{C}(c_{opt},c)\}$. Then, we have that $|{\rm MWA}(c)|\leq w_{opt}+2t$.

Proof 3.4.

Since all sample points must be contained within the MWA, MWA($c$) cannot expand past the furthest point of the MWA from $c$ under $d_{C}$, nor contract past the closest. Let us now define these two points and use them to bound the radii of MinBall($c$) and MaxBall($c$).

Let $p$ be the point where the ray from $c$ through $c_{opt}$ intersects the boundary of MinBall($c_{opt}$); MinBall($c$) cannot extend further than $p$:

$d_{C}(c,p)=d_{C}(c,c_{opt})+d_{C}(c_{opt},p)\leq t+d_{C}(c_{opt},p)$, so $R\leq R_{opt}+t$.

Conversely, let $q$ be the point where the ray projected from $c_{opt}$ through $c$ intersects the boundary of MaxBall($c_{opt}$); MaxBall($c$) cannot collapse further than $q$:

$d_{C}(c,q)=d_{C}(c_{opt},q)-d_{C}(c_{opt},c)\geq d_{C}(c_{opt},q)-t$, so $r\geq r_{opt}-t$.

Combining these bounds with the fact that $|{\rm MWA}(c)|=R-r$, we find that $|{\rm MWA}(c)|\leq w_{opt}+2t$.

For simplicity, let us consider two points $a,b$ to be $t$-close (under $C$) whenever $t\geq\max\{d_{C}(a,b),d_{C}(b,a)\}$.

Lemma 3.5.

If $c$ is the center of the MinBall, then MWA($c$) is a constant-factor approximation of the MWA, that is, $|{\rm MWA}(c)|\leq b\,|{\rm MWA}|$, for some constant $b\geq 1$, under translations.

Proof 3.6.

From Lemma 3.1, we have that $d_{C}(c,c_{opt})\leq w_{opt}$. If $c$ and $c_{opt}$ are $w_{opt}$-close, then we can directly apply Lemma 3.3 to find $r\geq r_{opt}-w_{opt}$ and $R\leq R_{opt}$, so $|{\rm MWA}(c)|\leq R_{opt}-(r_{opt}-w_{opt})=2w_{opt}$, a 2-approximation. If $d_{C}$ is a metric, then $d_{C}(c_{opt},c)=d_{C}(c,c_{opt})$ and this is always the case. Otherwise, $d_{C}(c_{opt},c)$ may exceed $w_{opt}$, and we bound it via Euclidean distances. Let $u:=c-c_{opt}$, and define unit vectors with respect to $d_{C}$ and $d_{\overline{C}}$, such that

$\widehat{u}_{C}=\dfrac{u}{d_{C}(c_{opt},c)}, \qquad \widehat{u}_{\overline{C}}=\dfrac{\overline{u}}{d_{C}(c,c_{opt})}$

$\|\widehat{u}_{C}\|\,d_{C}(c_{opt},c) = \|u\| = \|\widehat{u}_{\overline{C}}\|\,d_{C}(c,c_{opt})$

$d_{C}(c_{opt},c)\leq\dfrac{\|\widehat{u}_{\overline{C}}\|}{\|\widehat{u}_{C}\|}\,w_{opt}$ (from Lemma 3.1).

Under any convex distance function, $\|\widehat{u}_{\overline{C}}\|/\|\widehat{u}_{C}\|$ is bounded from above by $A=\max_{v\in\mathbb{R}^{d}}\|\widehat{v}_{\overline{C}}\|/\|\widehat{v}_{C}\|$, which corresponds to finding the direction, $v$, of the largest asymmetry in $U_{C}$. Thus, by Lemma 3.3, $|{\rm MWA}(c)|\leq(A+1)w_{opt}$. Under our (fixed) polyhedral distance function, $A$ is constant; hence, MWA($c$) is a constant-factor approximation.

$(1+\varepsilon)$-approximation.

Let us now describe how to compute a (1+ε)(1+\varepsilon)-approximation of MWA{\rm MWA}.

We begin with Lemma 3.7, which defines how close to $c_{opt}$ is sufficient for a $(1+\varepsilon)$-approximation. In Theorem 3.9, we define a grid of candidate center-points so that any point in the search region has a gridpoint sufficiently close to it.

Lemma 3.7.

Suppose $c_{opt}$ and $c$ are $(\varepsilon w/(2b))$-close, where $w=|{\rm MWA}(c_{M})|$, $c_{M}$ is the center of the MinBall, and $b$ is the constant from Lemma 3.5. Then, MWA($c$) is a $(1+\varepsilon)$-approximation of the MWA under translations.

Proof 3.8.

It suffices to show that the width of our approximation exceeds the optimal width by a factor of at most $(1+\varepsilon)$.

Assuming $c$ and $c_{opt}$ are $t$-close, and using Lemma 3.3, we require that $w_{opt}+2t\leq(1+\varepsilon)w_{opt}$, i.e., $t\leq\varepsilon w_{opt}/2$.

Let us then choose $t\leq\varepsilon w/(2b)$, knowing that $w\leq b\,w_{opt}$ from Lemma 3.5, which is sufficient for achieving a $(1+\varepsilon)$-approximation.

Knowing how close our approximation’s center must be, we can now present a (1+ε)(1+\varepsilon)-approximation algorithm to find a center satisfying this condition.

Theorem 3.9.

One can achieve a $(1+\varepsilon)$-approximation of the MWA under translations in $O(\varepsilon^{-d}n)$ time.

Proof 3.10.

The MinBall can be computed in $O(n)$ time [16]. By Lemma 3.1, we have that $d_{C}(c,c_{opt})\leq w_{opt}$, where $c$ is the MinBall center. This implies that $c_{opt}$ must lie within the placement $c+w_{opt}C$, or, more generously, in $P$, defined as $c+wC$. Furthermore, from Lemma 3.7, we know that being $(\varepsilon w/(2b))$-close to $c_{opt}$ suffices for a $(1+\varepsilon)$-approximation. Therefore, overlaying a grid $G$ that covers $P$, such that any point $p\in P$ is $(\varepsilon w/(2b))$-close to a gridpoint, guarantees the existence of a point $g\in G$ for which MWA($g$) is a $(1+\varepsilon)$-approximation.

Since $P$ and $(\varepsilon w/(2b))$-closeness are both defined under $d_{C}$, we translate this to a cubic grid for simplicity. Let $Q$ be the smallest cube enclosing $P$ and $q$ be the largest cube enclosed by $(\varepsilon w/(2b))C$. Let us now define a grid, $G$, spanning $Q$ with cells the size of $q$.

This grid, $G$, has $O(Fb/\varepsilon)$ gridpoints per direction and $O(F^{d}b^{d}\varepsilon^{-d})$ gridpoints in total, where $F$ corresponds to the fatness of $C$ under the cubic distance function.

Let us define the cubic distance function, $d_{q}$, with unit cube $U_{q}=q\cdot(2b)/(\varepsilon w)$, such that $U_{q}$ is the largest cube enclosed by $C$.

The grid $G$ guarantees that for every point $p$ there exists a gridpoint $g\in G$ such that $d_{q}(p,g)\leq\varepsilon w/(2b)$. Since the unit cube is contained within the unit polyhedron, we have that $d_{C}(a,b)\leq d_{q}(a,b)$ for all $a,b$; and since $d_{q}$ defines a metric, $p$ must also be $(\varepsilon w/(2b))$-close under $d_{C}$. Finding the gridpoint providing the $(1+\varepsilon)$-approximation takes $O(F^{d}b^{d}\varepsilon^{-d}n)$ time, which, under a fixed $d_{C}$, is $O(\varepsilon^{-d}n)$ time. (For metrics, the MinBall provides a 2-approximation, so $b=2$. For non-metrics, we can remove this constant by first running this algorithm with $\varepsilon=1$ to find a 2-approximation in linear time, and then using this approximation for gridding in the main step.)
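The proof's grid search can be sketched as follows, where `c_min`, `w`, `b`, and `mwa_width` stand for the MinBall center, the width of the constant-factor approximation, the constant of Lemma 3.5, and the $O(n)$-time width computation of Section 2. This is an illustrative simplification (a plain cubic grid of half-width $w$ around the MinBall center), not the paper's exact construction:

```python
import itertools
import math

# Grid search sketch for Theorem 3.9 (translations only): overlay a cubic
# grid of spacing eps*w/(2b) on a box around the MinBall center and return
# the gridpoint minimizing the annulus width.
def grid_search(c_min, w, eps, b, mwa_width, dim=2):
    step = eps * w / (2.0 * b)          # spacing for (eps*w/(2b))-closeness
    k = math.ceil(w / step)             # cells per axis, in each direction
    best = (float("inf"), None)
    for idx in itertools.product(range(-k, k + 1), repeat=dim):
        g = tuple(ci + step * i for ci, i in zip(c_min, idx))
        best = min(best, (mwa_width(g), g))
    return best

# Toy width function with a unique optimum at the gridpoint (0.25, -0.5):
width, center = grid_search((0.0, 0.0), 1.0, 0.5, 2,
                            lambda g: abs(g[0] - 0.25) + abs(g[1] + 0.5) + 1.0)
print(width, center)  # → 1.0 (0.25, -0.5)
```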

Faster grid-search in two dimensions.

The algorithm of Theorem 3.9 recalculates the MWA at every gridpoint. However, small movements along the grid should not affect the MWA much. We use this insight to speed up MWA recalculations in two dimensions.

Let us first define the contributing edge of a sample point, $p\in S$, as the edge of $C+g$ intersected by the ray emanating from a gridpoint, $g$, towards $p$. Under this center-point, $p$ will only directly affect the placement of its contributing edge. Observe that, given the vectors $\overrightarrow{v}\in C$ directed from the center towards each vertex, the planar subdivision created by rays for each $\overrightarrow{v}$ originating from $g$ separates points by their contributing edge. For any two gridpoints, $g_{1}$ and $g_{2}$, and rays projected from them parallel to $\overrightarrow{v}$, any points between these two rays contribute to different edges under $g_{1}$ and $g_{2}$. We denote this region as the vertex slab of vertex $v$, and the regions outside of it as edge slabs. Points within an edge slab contribute to the same edge under both gridpoints, so maintaining the constraints they impose on the MWA can be achieved with only the two extreme points per edge slab. If we consider vertex slabs for all $g\in G$, we must be able to quickly calculate the strictest constraints imposed by points in a subset of vertex slabs. An example of the planar subdivision for two points is shown in Figure 3.

Figure 3: Planar subdivision defining vertex slabs (red) and edge slabs (blue) for two candidate center-points, and showing membership of some sample points.

Given a grid $G$, we write $g_{i,j}\in G$ for the gridpoint at index $(i,j)$. Consider the set of all grid lines $L_{v}$ defined by rays parallel to $\overrightarrow{v}$ starting at each gridpoint. $L_{v}$ defines a planar subdivision corresponding to the edge slabs between gridpoints. Before attempting to identify the extreme points for each edge slab, we first need a quick way to identify the slab in $L_{v}$ that contains a given sample point, $p$.

Lemma 3.11.

For a specific vector $\overrightarrow{v}$ and an $m\times m$ grid, we can identify which slab contains a sample point, $p$, in $O(\log m)$ time with $O(m^{2})$-time preprocessing.

Proof 3.12.

Consider the orthogonal projection of the grid lines in $L_{v}$ onto a line $\overrightarrow{v_{\bot}}$ perpendicular to $\overrightarrow{v}$; the order in which these lines appear along $\overrightarrow{v_{\bot}}$ defines the possible slabs that could contain $p$ (see Figure 4(a)). We can project a given grid line $l\in L_{v}$ onto $\overrightarrow{v_{\bot}}$ in constant time. With the grid lines in sorted order, we can perform a binary search through the $m^{2}$ projections in $O(\log m)$ time to identify the slab containing $p$.

Using general sorting algorithms, we could sort the grid lines in $O(m^{2}\log m)$ time. However, since these lines belong to a grid, we can exploit the uniformity to sort them in only $O(m^{2})$ time. Consider the two basis vectors defining gridpoint positions, $\hat{\imath}=g_{(1,0)}-g_{(0,0)}$ and $\hat{\jmath}=g_{(0,1)}-g_{(0,0)}$, and the lengths of their orthogonal projections onto $\overrightarrow{v_{\bot}}$, $|\hat{\imath}_{\bot}|$ and $|\hat{\jmath}_{\bot}|$. Without loss of generality, assume that $|\hat{\imath}_{\bot}|\geq|\hat{\jmath}_{\bot}|$, in which case grid lines originating from adjacent gridpoints in the same row must be exactly $|\hat{\imath}_{\bot}|$ apart. In addition, any region $|\hat{\imath}_{\bot}|$ wide that does not start at a grid line must contain at most a single line from each row. Furthermore, since lines in the same row are always $|\hat{\imath}_{\bot}|$ apart, they must appear in the same order in each region.

We can therefore initially split $\overrightarrow{v_{\bot}}$ into regions $|\hat{\imath}_{\bot}|$ wide; bucketing the grid lines $l\in L_{v}$ into their regions takes $O(m^{2})$ time. We can then sort the $m$ lines in a region containing one from every row in $O(m\log m)$ time. Since each region has the same order, we can place the lines in the other regions by following the order found in our sorted region, for $O(m^{2})$ total preprocessing time.
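The query step of Lemma 3.11 can be sketched as follows. For brevity, this illustrative version (names ours) sorts the projected grid lines directly, which costs $O(m^{2}\log m)$; the lemma's $O(m^{2})$ bucketing trick exploiting grid uniformity would replace the call to `sorted`:

```python
import bisect
import math

# Slab location: project each grid line parallel to v onto a perpendicular
# direction v_perp, sort the projections once, then locate the slab of any
# sample point by binary search in O(log m) time.
def build_slab_locator(gridpoints, v):
    norm = math.hypot(v[0], v[1])
    v_perp = (-v[1] / norm, v[0] / norm)   # unit vector perpendicular to v
    offsets = sorted(gx * v_perp[0] + gy * v_perp[1] for gx, gy in gridpoints)
    def locate(p):
        # slab index = rank of p's projection among the line projections
        return bisect.bisect_right(offsets, p[0] * v_perp[0] + p[1] * v_perp[1])
    return locate

locate = build_slab_locator([(0, 0), (1, 0), (2, 0)], (0, 1))
print(locate((0.5, 3.0)))  # → 2
```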

(a) A demonstration of the point-location problem with the subdivision, $L_{v}$, and a visualization of the gridpoint and sample-point projections onto $\overrightarrow{v_{\bot}}$.
(b) Finding the extreme points (red) under $\overrightarrow{e}_{L}$ in subdivision $L_{v}$ for each region (solid) and for all regions to its left (dashed).
Figure 4: A visual representation of the projections involved while point locating within the vertex slabs and while finding the extreme points in each slab.

Recall that points to the left of a given line $l\in L_{v}$, i.e., all points belonging to slabs to the left of $l$, contribute to the edge to the left of $v$. We can therefore isolate the points in these slabs causing the largest potential change in the MWA.

Lemma 3.13.

For a vertex $v\in C$ and grid line $l\in L_{v}$ through gridpoint $g$, let $l_{L}$ and $l_{R}$ refer to the slabs of the subdivision imposed by $L_{v}$ immediately to the left and right of $l$, respectively. Assuming $l_{L}$ maintains the points to the left of $l$ imposing the strictest constraints on MWA($g$), and $l_{R}$ those to the right, one can calculate MWA($g$) in $O(1)$ time.

Proof 3.14.

Finding $\min_{p\in S}d_{C}(g,p)$ and $\max_{p\in S}d_{C}(g,p)$ can now be achieved by optimizing only over the set of points in $\{l_{L}\cup l_{R},\,\forall v\in C\}$ and the extreme points of the edge slabs. This set contains two points per vertex and two points per edge, yielding a constant number of points. Thus, MWA($g$) can be found in constant time.

Theorem 3.15.

A $(1+\varepsilon)$-approximation of the MWA in two dimensions can be found in $O(n\log\varepsilon^{-1}+\varepsilon^{-2})$ time under translations.

Proof 3.16.

For each vertex, $v$, we use Lemma 3.11 to identify the slab of every sample point. For each slab, we maintain only the two extreme points for each of the edges incident on $\overrightarrow{v}$. Let $\overrightarrow{e}_{L}\in C$ denote the vector describing the edge incident on $\overrightarrow{v}$ from the left, and likewise $\overrightarrow{e}_{R}\in C$ for the edge incident from the right. For each slab, we maintain only the points which, when projected in the relevant direction, $\overrightarrow{e}$, cause the furthest and closest intersections with the boundary (shown for $\overrightarrow{e}_{L}$ in Figure 4(b)).

With a left-to-right pass, we update each slab's extreme points relative to $\overrightarrow{e}_{L}$ so that it maintains the extreme points for itself and all slabs to its left. With a right-to-left pass, we do the same for $\overrightarrow{e}_{R}$, maintaining the extreme points for each slab and all slabs to its right.
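The two passes amount to prefix- and suffix-extreme computations over the per-slab extremes. A minimal sketch (ours, simplified to one scalar extreme per slab; the actual algorithm keeps two extreme points per incident edge):

```python
# Two sweep passes: given one extreme value per slab, a left-to-right pass
# yields prefix extremes (the strictest constraint over a slab and all slabs
# to its left), and a right-to-left pass yields suffix extremes.
def prefix_suffix_extremes(slab_max):
    left = slab_max[:]
    for i in range(1, len(left)):
        left[i] = max(left[i], left[i - 1])
    right = slab_max[:]
    for i in range(len(right) - 2, -1, -1):
        right[i] = max(right[i], right[i + 1])
    return left, right

print(prefix_suffix_extremes([3, 1, 4, 1, 5]))  # → ([3, 3, 4, 4, 5], [5, 5, 5, 5, 5])
```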

Thus, for each vertex, we create the slabs in $O(\varepsilon^{-2})$ time, place every sample point in its slab in $O(n\log\varepsilon^{-1})$ time, and maintain only the extreme points per slab in constant time per sample point. With $O(\varepsilon^{-2})$ time to update each slab after processing the sample points, we can update the slabs so that they hold the extreme points across all slabs to their left or right (relative to $\overrightarrow{e}_{L}$ and $\overrightarrow{e}_{R}$, respectively).

For each edge slab, finding the extreme points is much simpler, since finding $\min d_{C}(g,p)$ and $\max d_{C}(g,p)$ is always based on the same contributing facet for all points within the same edge slab.

Thus, after finding the extreme points in both vertex slabs, we can calculate MWA($g$) in constant time, as described in Lemma 3.13. Taking $O(\varepsilon^{-2})$ time to find $\min_{g\in G}{\rm MWA}(g)$, which by Theorem 3.9 provides a $(1+\varepsilon)$-approximation of the minimum width annulus, and accounting for the $O(n\log\varepsilon^{-1})$ preprocessing time completes the proof of the claimed time bound.

4 Approximating MWA allowing rotations

In this section, we consider rotations. As with Lemma 3.7, our goal is to find the maximum tolerable rotation sufficient for a $(1+\varepsilon)$-approximation. Observe that, when centered at the global optimum, the solution found under both rotation and translation is at least as good as the solution found through rotation alone (i.e., under a fixed center). We will therefore first prove the necessary bounds for a $(1+\varepsilon)$-approximation under rotation only, with the understanding that they remain valid when also allowing for translation.

Consider the polyhedral cone around $\overrightarrow{v}$, and define the bottleneck angle as the narrowest angle between a point on the surface of the polyhedral cone and $\overrightarrow{v}$. Let $\theta$ be the smallest bottleneck angle across all $\overrightarrow{v}\in C$. Let MWA$_{\alpha}(c)$ denote the MWA centered at $c$ where $C$ has been rotated by angle $\alpha$, and similarly for MinBall$_{\alpha}(c)$ and MaxBall$_{\alpha}(c)$.

Lemma 4.1.

Rotating by $\alpha$ causes MinBall$_{\alpha}(c)$ to grow by at most a factor of $\frac{\sin(\pi-\theta-\alpha)}{\sin\theta}$ (and MaxBall$_{\alpha}(c)$ to shrink by the reciprocal).

Proof 4.2.

Similarly to Lemma 3.3, all sample points must be contained within MinBall($c$). MinBall$_{\alpha}(c)$ need only expand to the furthest point of MinBall($c$) under the new, rotated distance function. Let us now consider the triangle formed by $c$, the vertex $v$ of the original MinBall, $v_{0}$, and the rotated vertex, $v_{\alpha}$ (shown in Figure 5(a)). Since our calculations concern a single vertex direction, we can work with Euclidean distances. The quantity $|v_{0}-c|$ defines the radius $r_{1}$ of the original polyhedron, and $r_{2}=|v_{\alpha}-c|$ the radius of the rotated one. With $\gamma=\pi-\theta-\alpha$ as the remaining angle in our triangle, the sine rule gives

$\dfrac{r_{2}}{r_{1}}=\dfrac{\sin\gamma}{\sin\theta}=\dfrac{\sin(\pi-\theta-\alpha)}{\sin\theta}.$

Observe that θ\theta is the angle maximizing this scale difference. This holds for rotating by α\alpha in any direction about v\overrightarrow{v} (as shown in Figure 5(b)); since the rotation direction need not coincide with that of the bottleneck angle θ\theta, the scaled polyhedron might not touch the original.

For MaxBallα(c){\rm MaxBall}_{\alpha}(c) to be contained within MaxBall(c){\rm MaxBall}(c), the same argument holds after swapping the roles of the scaled and original polyhedra. In this case, θ\theta minimizes r1/r2r_{1}/r_{2}.

Refer to caption
(a) A demonstration of the scale increase necessary for a polyhedron rotated by α\alpha to contain the original.
Refer to caption
(b) A rotation by α\alpha in an arbitrary direction about v\overrightarrow{v}.
Figure 5: Visual representations for the effect of rotating by α\alpha, demonstrating the scale increase and demonstrating how a rotation by α\alpha is defined for higher dimensions.
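As a sanity check on the sine-rule derivation above, the following sketch reconstructs the triangle numerically; the values of θ\theta and α\alpha are arbitrary assumptions, not values from the paper.

```python
import math

theta = 1.0    # assumed bottleneck angle (radians)
alpha = 0.2    # assumed rotation angle
gamma = math.pi - theta - alpha   # remaining angle of the triangle

r1 = 1.0                                      # |v0 - c|, original radius
r2 = r1 * math.sin(gamma) / math.sin(theta)   # rotated radius per Lemma 4.1

# Reconstruct the triangle (angle alpha at c between sides r1 and r2):
# the law of cosines gives the side |v0 - v_alpha|, and the law of
# sines then recovers the angle at the rotated vertex v_alpha,
# which should equal the bottleneck angle theta.
d = math.sqrt(r1 * r1 + r2 * r2 - 2 * r1 * r2 * math.cos(alpha))
angle_at_v_alpha = math.asin(r1 * math.sin(alpha) / d)
```

The recovered angle agrees with θ\theta up to floating-point error, confirming the ratio r2/r1=sinγ/sinθr_{2}/r_{1}=\sin\gamma/\sin\theta.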

Let us now determine the largest rotation from the optimal orientation that still achieves a (1+ε)(1+\varepsilon)-approximation.

Lemma 4.3.

Given a center cc, we have that MWAα(c){\rm MWA}_{\alpha}(c) is a (1+ε)(1+\varepsilon)-approximation when α\alpha is smaller than

arcsin(sinθ2f(1+ε±(1+ε)2+4f(f1)))θ.\displaystyle\arcsin\left(\frac{\sin\theta}{2f}\left(1{+}\varepsilon\pm\sqrt{(1{+}\varepsilon)^{2}+4f(f-1)}\right)\right)-\theta.
Proof 4.4.

Define ff as the ratio of the radius of MinBall(copt){\rm MinBall}(c_{opt}) to woptw_{opt} (i.e., fwopt=|MinBall(copt)|fw_{opt}=|{\rm MinBall}(c_{opt})|). Note that ff corresponds to the slimness of SS under dCd_{C} over all rotations of CC. Using Lemma 4.1, we know that

|MWAα(c)|sinγsinθ|MinBall(copt)|sinθsinγ|MaxBall(copt)|\displaystyle|\text{MWA}_{\alpha}(c)|\leq\frac{\sin{\gamma}}{\sin{\theta}}|{\rm MinBall}(c_{opt})|-\frac{\sin{\theta}}{\sin{\gamma}}|{\rm MaxBall}(c_{opt})|
sinγsinθfwoptsinθsinγ(f1)wopt(1+ε)wopt\displaystyle\frac{\sin{\gamma}}{\sin{\theta}}fw_{opt}-\frac{\sin{\theta}}{\sin{\gamma}}(f{-}1)w_{opt}\leq(1{+}\varepsilon)w_{opt} (1)
sinγsinθfsinθsinγ(f1)(1+ε).\displaystyle\frac{\sin{\gamma}}{\sin{\theta}}f-\frac{\sin{\theta}}{\sin{\gamma}}(f{-}1)\leq(1{+}\varepsilon). (2)

For a (1+ε)(1+\varepsilon)-approximation we require |MWAα(c)|(1+ε)wopt|{\rm MWA}_{\alpha}(c)|\leq(1{+}\varepsilon)w_{opt}, which imposes the right side of Relation (1); its left side follows from the definition of ff, and Relation (2) follows by cancellation of woptw_{opt}. Since θ\theta is constant, we can rearrange the above into a quadratic equation and solve for sinγ\sin\gamma.

sinγ=sinθ2f(1+ε±(1+ε)2+4f(f1)).\displaystyle\sin{\gamma}=\frac{\sin\theta}{2f}\left(1{+}\varepsilon\pm\sqrt{(1{+}\varepsilon)^{2}+4f(f{-}1)}\right). (3)

However, arcsin\arcsin returns an angle of at most π/2\pi/2, whereas γ=πθα\gamma=\pi-\theta-\alpha is obtuse; the value returned is therefore πγ=θ+α\pi-\gamma=\theta+\alpha, and subtracting θ\theta proves this lemma’s titular bound, achieving a (1+ε)(1+\varepsilon)-approximation.
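To see that Equation (3) indeed solves Relation (2) with equality, one can substitute the ++ root back in numerically; the sketch below uses arbitrarily chosen values of ff, ε\varepsilon, and θ\theta.

```python
import math

f, eps, theta = 2.0, 0.1, 0.5   # assumed slimness, accuracy, bottleneck angle
t = math.sin(theta)

# Equation (3), taking the + root:
s = (t / (2 * f)) * (1 + eps + math.sqrt((1 + eps) ** 2 + 4 * f * (f - 1)))

# Relation (2) should hold with equality: (s/t)*f - (t/s)*(f-1) = 1 + eps
lhs = (s / t) * f - (t / s) * (f - 1)
```

Since the quadratic fs2(1+ε)ts(f1)t2=0fs^{2}-(1{+}\varepsilon)ts-(f{-}1)t^{2}=0 is solved exactly, the left side matches 1+ε1{+}\varepsilon up to floating-point error.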

Let us now establish a simpler, looser lower bound that will prove helpful when developing algorithms.

Lemma 4.5.

The angular deflection tolerable for a (1+ε)(1+\varepsilon)-approximation is larger than θε/(2f){\theta\varepsilon}/(2f).

Proof 4.6.

Observe that γ\gamma is of the form arcsin(ksinθ)\arcsin(k\sin\theta) and thus, in order for α=γθ\alpha=\gamma{-}\theta to be positive, we must have θ<π/2\theta<\pi/2 and k>1k>1. We will prove this is the case.

k=1+ε2f+(1+ε2f)21f+1\displaystyle k=\frac{1{+}\varepsilon}{2f}+\sqrt{\left(\frac{1{+}\varepsilon}{2f}\right)^{2}-\frac{1}{f}+1} (4)
14f21f+1=|112f|\displaystyle\sqrt{\frac{1}{4f^{2}}-\frac{1}{f}+1}=\left|1-\frac{1}{2f}\right| (5)
k>1+ε2f+|112f|=1+ε2f.\displaystyle k>\frac{1{+}\varepsilon}{2f}+\left|1-\frac{1}{2f}\right|=1+\frac{\varepsilon}{2f}.\hskip 11.38092pt (6)

Equation (4) follows from Equation (3) after expanding. Relation (6) follows after using Equation (5) as a lower bound for the square root term in Equation (4), since ε>0\varepsilon>0 and f>1f>1. Because arcsin(x)/x\arcsin(x)/x is nondecreasing on (0,1](0,1], we have kθ=karcsin(sinθ)arcsin(ksinθ)k\theta=k\cdot\arcsin(\sin\theta)\leq\arcsin(k\sin\theta) for k1k\geq 1, so α=arcsin(ksinθ)θ(k1)θ>θε2f\alpha=\arcsin(k\sin\theta)-\theta\geq(k-1)\theta>\frac{\theta\varepsilon}{2f}, thus proving that the bound from Lemma 4.3 is greater than θε2f\frac{\theta\varepsilon}{2f}.
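A quick numerical sweep supports the bound of Lemma 4.5; this is an illustrative sketch whose parameter ranges (and the fixed θ=0.3\theta=0.3) are chosen arbitrarily.

```python
import math

def tolerable_deflection(f, eps, theta):
    """Deflection bound of Lemma 4.3: arcsin(k*sin(theta)) - theta,
    where k is the + root coefficient from Equation (3)/(4)."""
    k = (1 + eps) / (2 * f) + math.sqrt(((1 + eps) / (2 * f)) ** 2 - 1 / f + 1)
    return math.asin(k * math.sin(theta)) - theta
```

For every tested pair (f,ε)(f,\varepsilon), the deflection exceeds the claimed θε/(2f)\theta\varepsilon/(2f).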

Lemma 4.7.

Assume we have an O(g(n))O(g(n))-time algorithm that computes the optimal minimum-width annulus under translation for a fixed rotation of CC. Then we can find a (1+ε)(1+\varepsilon)-approximation of the MWA{\rm MWA} under rotations and translations in O(fd1ε1dg(n))O(f^{d-1}\varepsilon^{1-d}g(n)) time.

Proof 4.8.

In dd dimensions, an orientation is specified by d1d{-}1 angles. Let us evenly divide the unit circle into kk directions along each angular coordinate, and define the grid of directions as the collection of all possible combinations. For each grid direction, rotate CC accordingly and calculate the MWA in O(g(n))O(g(n)) time. The optimal orientation must lie within some (d1)(d{-}1)-dimensional cube formed by 2d12^{d-1} adjacent grid directions.

Therefore, as long as the cube’s diagonal is smaller than θεf\frac{\theta\varepsilon}{f}, there exists a grid direction within θε2f\frac{\theta\varepsilon}{2f} of the optimal orientation, which implies a (1+ε)(1+\varepsilon)-approximation by Lemma 4.5. Thus, we can achieve a (1+ε)(1+\varepsilon)-approximation in time O(g(n)(2πfd1θε)d1)O\left(g(n)\cdot\left(\frac{2\pi f\sqrt{d-1}}{\theta\varepsilon}\right)^{d-1}\right), where dd and θ\theta are constant under a fixed distance function dCd_{C}.
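The grid search of Lemma 4.7 can be sketched as follows for d=2d{=}2, where the rotation grid is a single circle of directions. Here `mwa_fixed` stands in for the assumed O(g(n))O(g(n)) fixed-orientation black box; the function name and interface are our own illustration, not the paper’s.

```python
import math

def approx_mwa_under_rotation(points, mwa_fixed, eps, f, theta):
    """Try every grid rotation and keep the best fixed-orientation MWA.

    The angular spacing theta*eps/(2*f) guarantees some grid angle lies
    within the tolerable deflection (Lemma 4.5) of the optimal one.
    """
    step = theta * eps / (2 * f)
    k = max(1, math.ceil(2 * math.pi / step))   # number of grid directions
    best = math.inf
    for i in range(k):
        a = 2 * math.pi * i / k
        # Rotating C by a is equivalent to rotating the points by -a;
        # for this sketch we simply rotate the point set.
        rotated = [(x * math.cos(a) - y * math.sin(a),
                    x * math.sin(a) + y * math.cos(a)) for x, y in points]
        best = min(best, mwa_fixed(rotated))
    return best
```

As a toy stand-in, taking `mwa_fixed` to be the vertical spread of the points recovers a near-zero width for collinear input, since some grid angle nearly aligns the points with the horizontal axis.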

With a fixed center, Lemma 4.7 can be used to approximate MWA{\rm MWA} under rotations in O(nfd1ε1d)O(nf^{d-1}\varepsilon^{1-d}) time.

Theorem 4.9.

One can find a (1+ε)(1+\varepsilon)-approximation of MWA{\rm MWA} under rotations and translations in O(fd1ε12dn)O(f^{d-1}\varepsilon^{1-2d}n) time for d3d{\geq}3, and O(fnε1logε1+fε3)O(fn\varepsilon^{-1}\log\varepsilon^{-1}+f\varepsilon^{-3}) time for d=2d{=}2.

Proof 4.10.

Consider using an approximation algorithm (from Theorem 3.9 or 3.15) instead of an exact algorithm as in Lemma 4.7. Let us define (1+ξ)(1+\xi) as the approximation ratio required of the subroutines in order to achieve an overall approximation ratio of (1+ε)(1+\varepsilon), such that (1+ξ)2=1+ε(1+\xi)^{2}=1+\varepsilon. Since ξ=1+ε1\xi=\sqrt{1+\varepsilon}-1 and 0<ε<10<\varepsilon<1, ξ\xi must be larger than (21)ε(\sqrt{2}-1)\cdot\varepsilon, and thus we can always pick a value for ξ\xi that is O(ε)O(\varepsilon) and achieves the desired approximation. Thus, following Lemma 4.7 with the (1+(21)ε)(1+(\sqrt{2}-1)\cdot\varepsilon)-approximation algorithm from Theorem 3.9 as the subroutine yields a (1+ε)(1+\varepsilon)-approximation in O(fd1ε1dεdn)O(f^{d-1}\varepsilon^{1-d}\cdot\varepsilon^{-d}n) time. Alternatively, in two dimensions, we can instead use the algorithm from Theorem 3.15 to obtain a (1+ε)(1+\varepsilon)-approximation in O(fnε1logε1+fε3)O(fn\varepsilon^{-1}\log\varepsilon^{-1}+f\varepsilon^{-3}) time.
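The claim that ξ=1+ε1\xi=\sqrt{1+\varepsilon}-1 stays within a constant factor of ε\varepsilon can be checked numerically; the sketch below sweeps 0<ε<10<\varepsilon<1.

```python
import math

# xi is the per-subroutine accuracy so that (1 + xi)^2 = 1 + eps.
# Since xi/eps = 1/(sqrt(1+eps) + 1) is decreasing and equals
# sqrt(2) - 1 at eps = 1, xi >= (sqrt(2) - 1) * eps on (0, 1),
# so choosing xi = O(eps) suffices for the composed approximation.
checks = []
for i in range(1, 100):
    eps = i / 100
    xi = math.sqrt(1 + eps) - 1
    checks.append(xi >= (math.sqrt(2) - 1) * eps)
```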

References

  • [1] P. K. Agarwal, S. Har-Peled, and K. R. Varadarajan. Approximating extent measures of points. J. ACM, 51(4):606–635, 2004.
  • [2] P. K. Agarwal, S. Har-Peled, K. R. Varadarajan, et al. Geometric approximation via coresets. Combinatorial and Computational Geometry, 52(1), 2005.
  • [3] P. K. Agarwal, S. Har-Peled, and H. Yu. Robust shape fitting via peeling and grating coresets. Discrete & Computational Geometry, 39(1):38–58, 2008.
  • [4] N. Amenta, D. Attali, and O. Devillers. Size of Delaunay triangulation for points distributed over lower-dimensional polyhedra: a tight bound. Neural Information Processing Systems (NeurIPS): Topological Learning, 2007.
  • [5] N. Amenta and M. Bern. Surface reconstruction by Voronoi filtering. Discrete & Computational Geometry, 22(4):481–504, 1999.
  • [6] S. Arya, G. D. da Fonseca, and D. M. Mount. Approximate convex intersection detection with applications to width and Minkowski sums. In 26th European Symposium on Algorithms (ESA). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2018.
  • [7] D. Attali and J.-D. Boissonnat. Complexity of the delaunay triangulation of points on polyhedral surfaces. Discrete & Computational Geometry, 30(3):437–452, 2003.
  • [8] D. Attali and J.-D. Boissonnat. A linear bound on the complexity of the Delaunay triangulation of points on polyhedral surfaces. Discrete & Computational Geometry, 31(3):369–384, 2004.
  • [9] S. W. Bae. Computing a minimum-width square annulus in arbitrary orientation. Theoretical Computer Science, 718:2–13, 2018.
  • [10] S. W. Bae. Computing a minimum-width cubic and hypercubic shell. Operations Research Letters, 47(5):398–405, 2019.
  • [11] S. W. Bae. On the minimum-area rectangular and square annulus problem. Computational Geometry, 92:101697, 2021.
  • [12] G. Barequet, P. Bose, M. T. Dickerson, and M. T. Goodrich. Optimizing a constrained convex polygonal annulus. J. Discrete Algorithms, 3(1):1–26, 2005.
  • [13] G. Barequet, M. T. Dickerson, and Y. Scharf. Covering points with a polygon. Computational Geometry, 39(3):143–162, 2008.
  • [14] P. Bose and P. Morin. Testing the Quality of Manufactured Disks and Cylinders. In G. Goos, J. Hartmanis, J. van Leeuwen, K.-Y. Chwa, and O. H. Ibarra, editors, Algorithms and Computation, volume 1533, pages 130–138. Springer Berlin Heidelberg, Berlin, Heidelberg, 1998.
  • [15] T. M. Chan. Approximating the diameter, width, smallest enclosing cylinder, and minimum-width annulus. In 16th Symposium on Computational Geometry (SoCG), pages 300–309, 2000.
  • [16] S. Das, A. Nandy, and S. Sarvottamananda. Linear time algorithm for 1-Center in d\mathfrak{{R}}^{d} under convex polyhedral distance function. In D. Zhu and S. Bereg, editors, Frontiers in Algorithmics, volume 9711, pages 41–52. Springer, 2016.
  • [17] O. Devillers and P. A. Ramos. Computing roundness is easy if the set is almost round. International Journal of Computational Geometry & Applications, 12(3):229–248, 2002.
  • [18] J. Erickson. Nice point sets can have nasty Delaunay triangulations. In 17th Symposium on Computational Geometry (SoCG), pages 96–105, 2001.
  • [19] O. N. Gluchshenko, H. W. Hamacher, and A. Tamir. An optimal O(nlogn)O(n\log n) algorithm for finding an enclosing planar rectilinear annulus of minimum width. Operations Research Letters, 37(3):168–170, 2009.
  • [20] K. Mehlhorn, T. C. Shermer, and C. K. Yap. A complete roundness classification procedure. In 13th Symposium on Computational Geometry (SoCG), pages 129–138, 1997.
  • [21] J. Mukherjee, P. R. Sinha Mahapatra, A. Karmakar, and S. Das. Minimum-width rectangular annulus. Theoretical Computer Science, 508:74–80, 2013.
  • [22] J. M. Phillips. Coresets and sketches. In Handbook of Discrete and Computational Geometry, pages 1269–1288. Chapman and Hall/CRC, 2017.
  • [23] U. Thakker, R. Patel, S. Tanwar, N. Kumar, and H. Song. Blockchain for diamond industry: Opportunities and challenges. IEEE Internet of Things Journal, pages 1–1, 2020.
  • [24] H. Yu, P. K. Agarwal, R. Poreddy, and K. R. Varadarajan. Practical methods for shape fitting and kinetic data structures using coresets. Algorithmica, 52(3):378–402, 2008.