
Dynamic geometric set cover and hitting set

Pankaj K. Agarwal (Department of Computer Science, Duke University, USA)
[email protected]
   Hsien-Chih Chang (Department of Computer Science, Duke University, USA)
[email protected]
   Subhash Suri (Department of Computer Science, University of California at Santa Barbara, USA)
[email protected]
   Allen Xiao (Department of Computer Science, Duke University, USA)
[email protected]
   Jie Xue (Department of Computer Science, University of California at Santa Barbara, USA)
[email protected]
Abstract

We investigate dynamic versions of geometric set cover and hitting set where points and ranges may be inserted or deleted, and we want to efficiently maintain an (approximately) optimal solution for the current problem instance. While their static versions have been extensively studied in the past, surprisingly little is known about dynamic geometric set cover and hitting set. For instance, even for the most basic case of one-dimensional interval set cover and hitting set, no nontrivial results were known. The main contribution of our paper is two frameworks that lead to efficient data structures for dynamically maintaining set covers and hitting sets in \mathbb{R}^{1} and \mathbb{R}^{2}. The first framework uses bootstrapping and gives a (1+\varepsilon)-approximate data structure for dynamic interval set cover in \mathbb{R}^{1} with O(n^{\alpha}/\varepsilon) amortized update time for any constant \alpha>0; in \mathbb{R}^{2}, this method gives O(1)-approximate data structures for unit-square (and quadrant) set cover and hitting set with O(n^{1/2+\alpha}) amortized update time. The second framework uses local modification, and leads to a (1+\varepsilon)-approximate data structure for dynamic interval hitting set in \mathbb{R}^{1} with \widetilde{O}(1/\varepsilon) amortized update time; in \mathbb{R}^{2}, it gives O(1)-approximate data structures for unit-square (and quadrant) set cover and hitting set in the partially dynamic settings with \widetilde{O}(1) amortized update time.

1 Introduction

Given a pair (S,\mathcal{R}) where S is a set of points and \mathcal{R} is a collection of geometric ranges in a Euclidean space, the geometric set cover (resp., hitting set) problem is to find the smallest number of ranges in \mathcal{R} (resp., points in S) that cover all points in S (resp., hit all ranges in \mathcal{R}). Geometric set cover and hitting set are classical geometric optimization problems, with numerous applications in databases, sensor networks, VLSI design, etc.

In many applications, the problem instance can change over time and re-computing a new solution after each change is too costly. In these situations, a dynamic algorithm that can update the solution after a change more efficiently than constructing the entire new solution from scratch is highly desirable. This motivates the main problem studied in our paper: dynamically maintaining geometric set covers and hitting sets under insertion and deletion of points and ranges.

Although (static) geometric set cover and hitting set have been extensively studied over the years, their dynamic variants remain surprisingly unexplored. For example, even for the most fundamental case, dynamic interval set cover and hitting set in one dimension, no nontrivial results were previously known. In this paper, we propose two algorithmic frameworks for these problems, which lead to efficient data structures for dynamic set cover and hitting set for intervals in \mathbb{R}^{1} and unit squares and quadrants in \mathbb{R}^{2}. We believe that our approaches can be extended to solve dynamic set cover and hitting set in other geometric settings, or more generally, other dynamic problems in computational geometry.

1.1 Related work

The set cover and hitting set problems in the general setting are well-known to be NP-complete [18]. A simple greedy algorithm achieves an O(\log n)-approximation [11, 19, 21], which is tight under appropriate complexity-theoretic assumptions [13, 20]. In many geometric settings, the problems remain NP-hard or even hard to approximate [5, 22, 23]. However, by exploiting the geometric nature of the problems, efficient algorithms with better approximation factors can be obtained. For example, Mustafa and Ray [25] showed the existence of polynomial-time approximation schemes (PTAS) for halfspace hitting set in \mathbb{R}^{3} and disk hitting set. There is also a PTAS for unit-square set cover given by Erlebach and van Leeuwen [14]. Agarwal and Pan [2] proposed approximation algorithms with near-linear running time for the set cover and hitting set problems for halfspaces in \mathbb{R}^{3}, disks in \mathbb{R}^{2}, and orthogonal rectangles.

Dynamic problems have received considerable attention in recent years [3, 6, 7, 9, 10, 17, 26, 27]. In particular, dynamic set cover in the general setting has been studied in [1, 8, 16]. All these results were achieved in the partially dynamic setting where ranges are fixed and only points are dynamic. Gupta et al. [16] showed that an O(\log n)-approximation can be maintained using O(f\log n) amortized update time and an O(f^{3})-approximation can be maintained using O(f^{2}) amortized update time, where f is the maximum number of ranges that a point belongs to. Bhattacharya et al. [8] gave an O(f^{2})-approximation data structure for dynamic set cover with O(f\log n) amortized update time. Abboud et al. [1] proved that one can maintain a (1+\varepsilon)f-approximation using O(f^{2}\log n/\varepsilon^{5}) amortized update time.

In geometric settings, only the dynamic hitting set problem has been considered [15]. Ganjugunte [15] studied two different dynamic settings: (i) only the range set \mathcal{R} is dynamic, and (ii) \mathcal{R} is dynamic and S is semi-dynamic (i.e., insertion-only). Ganjugunte [15] showed that, for pseudo-disks in \mathbb{R}^{2}, dynamic hitting set in setting (i) can be solved using O(\gamma(n)\log^{4}n) amortized update time with approximation factor O(\log^{2}n), and that in setting (ii) can be solved using O(\gamma(n)\sqrt{n}\log^{4}n) amortized update time with approximation factor O(\log^{6}n/\log\log n), where \gamma(n) is the time for finding a point in X contained in a query pseudo-trapezoid (see [15] for details). To the best of our knowledge, dynamic geometric hitting set in the fully dynamic setting (where both points and ranges can be inserted or deleted), as well as dynamic geometric set cover, has not been studied before.

1.2 Our results

Let (S,\mathcal{R}) be a dynamic geometric set cover (resp., hitting set) instance. We are interested in designing efficient data structures to maintain an (approximately) optimal solution for (S,\mathcal{R}). This goal can be formalized in various ways, resulting in different variants of the problem. A natural variant is to maintain the number \mathsf{opt}, the size of an optimal set cover (resp., hitting set) for (S,\mathcal{R}), or an approximation of \mathsf{opt}. However, in many applications, maintaining only the optimum is not satisfactory, and one may hope to maintain a “real” set cover (resp., hitting set) for the dynamic instance. Therefore, in this paper, we formulate the problem as follows. We require the data structure to store (implicitly), after each update, a solution for the current problem instance satisfying some quality requirement, such that certain information about the solution can be queried efficiently. For example, one can ask how large the solution is, whether a specific element of \mathcal{R} (resp., S) is used in the solution, what the entire solution is, etc. We will make this more precise shortly.

In dynamic settings, it is usually more natural and convenient to consider a multiset solution for the set cover (resp., hitting set) instance. That is, we allow the solution to be a multiset of elements of \mathcal{R} (resp., S) that cover all points in S (resp., hit all ranges in \mathcal{R}), and the quality of the solution is also evaluated in terms of the multiset cardinality. In the static problems, one can always efficiently remove the duplicates in a multiset solution to obtain an (ordinary) set cover or hitting set with even better quality (i.e., smaller cardinality), hence computing a multiset solution is essentially equivalent to computing an ordinary solution. However, in dynamic settings, the update time has to be sublinear, and in general this is not sufficient for detecting and removing duplicates. Therefore, in this paper, we mainly focus on multiset solutions (though some of our data structures can maintain an ordinary set cover or hitting set). Unless explicitly mentioned otherwise, solutions for set cover and hitting set always refer to multiset solutions hereafter.

Precisely, we require a dynamic set cover (resp., hitting set) data structure to store implicitly, after each update, a set cover \mathcal{R}^{\prime} (resp., a hitting set S^{\prime}) for the current instance (S,\mathcal{R}) such that the following queries are supported.

  • Size query: reporting the (multiset) size of \mathcal{R}^{\prime} (resp., S^{\prime}).

  • Membership query: reporting, for a given range R\in\mathcal{R} (resp., a given point a\in S), the number of copies of R (resp., a) contained in \mathcal{R}^{\prime} (resp., S^{\prime}).

  • Reporting query: reporting all the elements in \mathcal{R}^{\prime} (resp., S^{\prime}).

We require the size query to be answered in O(1) time, a membership query to be answered in O(\log|\mathcal{R}^{\prime}|) time (resp., O(\log|S^{\prime}|) time), and the reporting query to be answered in O(|\mathcal{R}^{\prime}|) time (resp., O(|S^{\prime}|) time); this is the best one can expect in the pointer machine model.
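For concreteness, here is a minimal sketch (ours, not from the paper) of a multiset-solution container meeting these query interfaces; the hash-map membership lookup shown is O(1) expected time, while a balanced search tree keyed on the elements would match the stated O(\log) bound exactly.

```python
class MultisetSolution:
    """Stores a multiset solution and supports the three queries above."""
    def __init__(self):
        self.mult = {}       # element -> number of copies
        self.total = 0       # multiset cardinality

    def insert(self, x, k=1):
        self.mult[x] = self.mult.get(x, 0) + k
        self.total += k

    def size(self):          # size query: O(1)
        return self.total

    def membership(self, x): # membership query
        return self.mult.get(x, 0)

    def report(self):        # reporting query: O(multiset size)
        for x, k in self.mult.items():
            for _ in range(k):
                yield x
```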

We say that a set cover (resp., hitting set) instance is fully dynamic if insertions and deletions on both points and ranges are allowed, and partially dynamic if only the points (resp., ranges) can be inserted and deleted. This paper mainly focuses on the fully dynamic setting, while some results are achieved in the partially dynamic setting. Thus, unless explicitly mentioned otherwise, problems are always considered in the fully dynamic setting.

The main contribution of this paper is two frameworks for designing dynamic geometric set cover and hitting set data structures, leading to efficient data structures in \mathbb{R}^{1} and \mathbb{R}^{2} (see Table 1). The first framework is based on bootstrapping and results in efficient (approximate) dynamic data structures for interval set cover and quadrant/unit-square set cover and hitting set (see the first three rows of Table 1 for detailed bounds). The second framework is based on local modification and results in efficient (approximate) dynamic data structures for interval hitting set, as well as quadrant/unit-square set cover and hitting set in the partially dynamic setting (see the last three rows of Table 1 for detailed bounds).

Framework | Problem | Range | Approx. factor | Update time | Setting
Bootstrapping | SC | Interval | 1+\varepsilon | \widetilde{O}(n^{\alpha}/\varepsilon) | Fully dynamic
Bootstrapping | SC & HS | Quadrant | O(1) | \widetilde{O}(n^{1/2+\alpha}) | Fully dynamic
Bootstrapping | SC & HS | Unit square | O(1) | \widetilde{O}(n^{1/2+\alpha}) | Fully dynamic
Local modification | HS | Interval | 1+\varepsilon | \widetilde{O}(1/\varepsilon) | Fully dynamic
Local modification | SC & HS | Quadrant | O(1) | \widetilde{O}(1) | Partially dynamic
Local modification | SC & HS | Unit square | O(1) | \widetilde{O}(1) | Partially dynamic
Table 1: Summary of our results for dynamic geometric set cover and hitting set (SC = set cover and HS = hitting set). All update times are amortized. The notation \widetilde{O}(\cdot) hides logarithmic factors, n is the size of the current instance, and \alpha>0 is any small constant. All data structures can be constructed in \widetilde{O}(n_0) time, where n_0 is the size of the initial instance.
Organization.

The rest of the paper is organized as follows. Section 2 gives the preliminaries required for the paper and Section 3 gives an overview of our two frameworks. Our first framework (bootstrapping) and second framework (local modification) for designing efficient dynamic geometric set cover and hitting set data structures are presented in Section 4 and Section 5, respectively. To make the paper more readable, the proofs of the technical lemmas and some details are deferred to the appendix.

2 Preliminaries

In this section, we introduce the basic notions used throughout the paper.

Multi-sets and disjoint union.

A multi-set is a set in which elements can have multiple copies. The multiplicity of an element a in a multi-set A is the number of copies of a in A. For two multi-sets A and B, we use A\sqcup B to denote the disjoint union of A and B, in which the multiplicity of an element a is the sum of the multiplicities of a in A and B.
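As a quick illustration (ours, not from the paper), Python's collections.Counter implements exactly this disjoint union: the + operator adds multiplicities.

```python
from collections import Counter

# Two multisets of intervals, by name; multiplicities add under disjoint union.
A = Counter({"I1": 2, "I2": 1})
B = Counter({"I2": 3})
print(A + B)  # Counter({'I2': 4, 'I1': 2}), i.e., the multiset A ⊔ B
```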

Basic data structures.

A data structure built on a dataset (e.g., a point set, a range set, or a set cover or hitting set instance) of size n is basic if it can be constructed in \widetilde{O}(n) time and can be dynamized with \widetilde{O}(1) update time (with a slight abuse of terminology, we sometimes also use "basic data structures" to denote the dynamized version of such data structures).

Output-sensitive algorithms.

In some set cover and hitting set problems, if the problem instance is properly stored in some data structure, it is possible to compute an (approximately) optimal solution in sub-linear time. An output-sensitive algorithm for a set cover or hitting set problem refers to an algorithm that can compute an (approximately) optimal solution in \widetilde{O}(\mathsf{out}) time (where \mathsf{out} is the size of the output solution), by using some basic data structure built on the problem instance.

3 An overview of our two frameworks

The basic idea of our first framework is bootstrapping. Namely, we begin with a simple, inefficient dynamic set cover or hitting set data structure (e.g., one that re-computes a solution after each update), and repeatedly use the current data structure to obtain an improved one. The main challenge here is to design the bootstrapping procedure: how to use a given data structure to construct a new data structure with improved update time. We achieve this by using output-sensitive algorithms and carefully partitioning the problem instances into sub-instances.

Our second framework, based on local modification, is much simpler. Namely, we construct a new solution by slightly modifying the previous one after each update, and re-compute a new solution periodically using an output-sensitive algorithm. This framework applies to problems that are stable, in the sense that the optimum of a dynamic instance does not change significantly under updates.

4 First framework: Bootstrapping

In this section, we present our first framework for dynamic geometric set cover and hitting set, which is based on bootstrapping and results in data structures with sublinear update time for dynamic interval set cover and dynamic quadrant and unit-square set cover (resp., hitting set).

4.1 Warm-up: 1D set cover for intervals

As a warm-up, we first study the 1D problem: dynamic interval set cover. First, we observe that interval set cover admits a simple exact output-sensitive algorithm. Indeed, interval set cover can be solved using the greedy algorithm that repeatedly picks the leftmost uncovered point and covers it using the interval that contains it and has the rightmost right endpoint, and the algorithm can easily be made output-sensitive if we store the points and intervals in binary search trees.

Lemma 1.

Interval set cover admits an exact output-sensitive algorithm.

This algorithm will play an important role in the design of our data structure.
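To illustrate, here is a minimal static sketch of this greedy algorithm (ours, not the paper's implementation): sorted lists plus binary search stand in for the binary search trees, and a prefix maximum stands in for the "rightmost right endpoint" query.

```python
import bisect

def interval_set_cover(points, intervals):
    """points: sorted list of coordinates; intervals: list of (l, r).
    Returns a minimum-size list of covering intervals, or None if infeasible."""
    intervals = sorted(intervals)
    lefts = [l for l, _ in intervals]
    # best[k]: among intervals[0..k], the one with the rightmost right endpoint
    best = []
    for iv in intervals:
        best.append(iv if not best or iv[1] > best[-1][1] else best[-1])

    solution, i = [], 0
    while i < len(points):
        p = points[i]  # leftmost uncovered point
        k = bisect.bisect_right(lefts, p) - 1  # intervals with left endpoint <= p
        if k < 0 or best[k][1] < p:
            return None  # no interval contains p
        solution.append(best[k])
        # all points up to the chosen right endpoint are now covered
        i = bisect.bisect_right(points, best[k][1], lo=i)
    return solution

# Example: three greedy picks cover five points.
print(interval_set_cover([1, 2, 4, 6, 9], [(0, 2), (1, 5), (3, 7), (8, 10)]))
```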

4.1.1 Bootstrapping

As mentioned before, our data structure is designed using bootstrapping. Specifically, we prove the following bootstrapping theorem, which is the technical heart of our result. The theorem roughly states that given a dynamic interval set cover data structure, one can obtain another dynamic interval set cover data structure with improved update time.

Theorem 2.

Let \alpha\in[0,1] be a number. If there exists a (1+\varepsilon)-approximate dynamic interval set cover data structure \mathcal{D}_{\textnormal{old}} with \widetilde{O}(n^{\alpha}/\varepsilon^{1-\alpha}) amortized update time and \widetilde{O}(n_0) construction time for any \varepsilon>0, then there exists a (1+\varepsilon)-approximate dynamic interval set cover data structure \mathcal{D}_{\textnormal{new}} with \widetilde{O}(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}) amortized update time and \widetilde{O}(n_0) construction time for any \varepsilon>0, where \alpha^{\prime}=\alpha/(1+\alpha). Here n (resp., n_0) denotes the size of the current (resp., initial) problem instance.

Assuming the existence of \mathcal{D}_{\text{old}} as in the theorem, we are going to design the improved data structure \mathcal{D}_{\text{new}}. Let (S,\mathcal{I}) be a dynamic interval set cover instance, and \varepsilon>0 be the approximation factor. We denote by n (resp., n_0) the size of the current (resp., initial) (S,\mathcal{I}).

The construction of \mathcal{D}_{\text{new}}.

Initially, |S|+|\mathcal{I}|=n_0. Essentially, our data structure \mathcal{D}_{\text{new}} consists of two parts. (At the implementation level, we may need some additional supporting data structures, which are very simple; for simplicity of exposition, we mention them when discussing the implementation details.) The first part is the basic data structure \mathcal{A} required for the output-sensitive algorithm of Lemma 1. The second part is a family of \mathcal{D}_{\text{old}} data structures defined as follows. Let f be a function to be determined shortly. We partition the real line \mathbb{R} into r=\lceil n_0/f(n_0,\varepsilon)\rceil connected portions (i.e., intervals) J_1,\dots,J_r such that each portion J_i contains O(f(n_0,\varepsilon)) points in S and O(f(n_0,\varepsilon)) endpoints of the intervals in \mathcal{I}. Define S_i=S\cap J_i and define \mathcal{I}_i\subseteq\mathcal{I} as the sub-collection consisting of the intervals that "partially intersect" J_i, i.e., \mathcal{I}_i=\{I\in\mathcal{I}:J_i\cap I\neq\emptyset\text{ and }J_i\nsubseteq I\}. When the instance (S,\mathcal{I}) is updated, the partition J_1,\dots,J_r remains unchanged, but the S_i's and \mathcal{I}_i's change along with S and \mathcal{I}. We view each (S_i,\mathcal{I}_i) as a dynamic interval set cover instance, and let \mathcal{D}_{\text{old}}^{(i)} be the data structure \mathcal{D}_{\text{old}} built on (S_i,\mathcal{I}_i) for the approximation parameter \tilde{\varepsilon}=\varepsilon/2. Thus, \mathcal{D}_{\text{old}}^{(i)} maintains a (1+\tilde{\varepsilon})-approximate optimal set cover for (S_i,\mathcal{I}_i). The second part of \mathcal{D}_{\text{new}} consists of the data structures \mathcal{D}_{\text{old}}^{(1)},\dots,\mathcal{D}_{\text{old}}^{(r)}.

Update and reconstruction.

After an operation on (S,\mathcal{I}), we update the basic data structure \mathcal{A}. Also, we update the data structure \mathcal{D}_{\text{old}}^{(i)} if the instance (S_i,\mathcal{I}_i) changes due to the operation. Note that an operation on S changes exactly one S_i and an operation on \mathcal{I} changes at most two \mathcal{I}_i's (because an interval can belong to at most two \mathcal{I}_i's). Thus, we in fact only need to update at most two \mathcal{D}_{\text{old}}^{(i)}'s. Besides the update, we also reconstruct the entire data structure \mathcal{D}_{\text{new}} periodically (as is the case for many dynamic data structures). Specifically, the first reconstruction of \mathcal{D}_{\text{new}} happens after processing f(n_0,\varepsilon) operations. The reconstruction is the same as the initial construction of \mathcal{D}_{\text{new}}, except that n_0 is replaced with n_1, the size of (S,\mathcal{I}) at the time of reconstruction. Then the second reconstruction happens after processing f(n_1,\varepsilon) operations since the first reconstruction, and so forth.

Maintaining a solution.

We now discuss how to maintain a (1+\varepsilon)-approximate optimal set cover \mathcal{I}_{\text{appx}} for (S,\mathcal{I}). Let \mathsf{opt} denote the optimum (i.e., the size of an optimal set cover) of the current (S,\mathcal{I}). Set \delta=\min\{(6+2\varepsilon)\cdot r/\varepsilon,n\}. If \mathsf{opt}\leq\delta, then the output-sensitive algorithm can compute an optimal set cover for (S,\mathcal{I}) in \widetilde{O}(\delta) time. Thus, we simulate the output-sensitive algorithm within that amount of time. If the algorithm successfully computes a solution, we use it as our \mathcal{I}_{\text{appx}}. Otherwise, we construct \mathcal{I}_{\text{appx}} as follows. For i\in\{1,\dots,r\}, we say J_i is coverable if there exists I\in\mathcal{I} such that J_i\subseteq I, and uncoverable otherwise. Let P=\{i:J_i\text{ is coverable}\} and P^{\prime}=\{i:J_i\text{ is uncoverable}\}. We try to use the intervals in \mathcal{I} to "cover" all coverable portions. That is, for each i\in P, we find an interval in \mathcal{I} that contains J_i, and denote by \mathcal{I}^{*} the collection of these intervals. Then we consider the uncoverable portions. If for some i\in P^{\prime}, the data structure \mathcal{D}_{\text{old}}^{(i)} tells us that the current (S_i,\mathcal{I}_i) does not have a set cover, then we immediately make a no-solution decision, i.e., decide that the current (S,\mathcal{I}) has no feasible set cover, and continue to the next operation. Otherwise, for every i\in P^{\prime}, the data structure \mathcal{D}_{\text{old}}^{(i)} maintains a (1+\tilde{\varepsilon})-approximate optimal set cover \mathcal{I}_i^{*} for (S_i,\mathcal{I}_i). We then define \mathcal{I}_{\text{appx}}=\mathcal{I}^{*}\sqcup\left(\bigsqcup_{i\in P^{\prime}}\mathcal{I}_i^{*}\right).
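The following sketch (ours, with hypothetical helpers `simulate`, `containing_interval`, and `DisjointUnion`) summarizes this maintenance logic; `simulate` runs the output-sensitive algorithm of Lemma 1 but gives up once its \widetilde{O}(\delta) time budget is exhausted.

```python
def maintain_solution(instance, portions, D_old, delta):
    sol = simulate(instance, budget=delta)   # output-sensitive algorithm
    if sol is not None:                      # succeeded, so opt <= delta
        return sol
    cover, pieces = [], []
    for i, J in enumerate(portions):
        I = containing_interval(J)           # some interval with J ⊆ I, if any
        if I is not None:                    # J_i is coverable
            cover.append(I)
        else:                                # J_i is uncoverable
            if not D_old[i].has_solution():
                return "no solution"         # the whole instance has no set cover
            pieces.append(D_old[i])          # I_i^* stays implicit in D_old^(i)
    # I_appx = I^* ⊔ (⊔_{i in P'} I_i^*), stored implicitly
    return DisjointUnion(cover, pieces)
```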

Later we will prove that \mathcal{I}_{\text{appx}} is always a (1+\varepsilon)-approximate optimal set cover for (S,\mathcal{I}). Before this, let us consider how to store \mathcal{I}_{\text{appx}} properly to support the size, membership, and reporting queries in the required query times. If \mathcal{I}_{\text{appx}} is computed by the output-sensitive algorithm, then the size of \mathcal{I}_{\text{appx}} is at most \delta, and we have all the elements of \mathcal{I}_{\text{appx}} in hand. In this case, it is not difficult to build a data structure on \mathcal{I}_{\text{appx}} to support the desired queries. On the other hand, if \mathcal{I}_{\text{appx}} is defined as the disjoint union of \mathcal{I}^{*} and the \mathcal{I}_i^{*}'s, the size of \mathcal{I}_{\text{appx}} might be very large and thus we are not able to explicitly extract all elements of \mathcal{I}_{\text{appx}}. Fortunately, in this case, each \mathcal{I}_i^{*} is already maintained in the data structure \mathcal{D}_{\text{old}}^{(i)}. Therefore, we actually only need to compute P, P^{\prime}, and \mathcal{I}^{*}; with these in hand, one can already build a data structure to support the desired queries for \mathcal{I}_{\text{appx}}. We defer the detailed discussion to Appendix B.

Correctness.

We now prove the correctness of our data structure \mathcal{D}_{\text{new}}. We first show the correctness of the no-solution decision.

Lemma 3.

\mathcal{D}_{\textnormal{new}} makes a no-solution decision iff the current (S,\mathcal{I}) has no set cover.

Next, we show that the solution \mathcal{I}_{\text{appx}} maintained by \mathcal{D}_{\text{new}} is truly a (1+\varepsilon)-approximate optimal set cover for (S,\mathcal{I}). If \mathcal{I}_{\text{appx}} is computed by the output-sensitive algorithm, then it is an optimal set cover for (S,\mathcal{I}). Otherwise, \mathsf{opt}>\delta=\min\{(6+2\varepsilon)\cdot r/\varepsilon,n\}, i.e., either \mathsf{opt}>(6+2\varepsilon)\cdot r/\varepsilon or \mathsf{opt}>n. If \mathsf{opt}>n, then the current (S,\mathcal{I}) has no set cover (i.e., \mathsf{opt}=\infty) and thus \mathcal{D}_{\text{new}} makes a no-solution decision by Lemma 3. So assume \mathsf{opt}>(6+2\varepsilon)\cdot r/\varepsilon. In this case, \mathcal{I}_{\text{appx}}=\mathcal{I}^{*}\sqcup(\bigsqcup_{i\in P^{\prime}}\mathcal{I}_i^{*}). For each i\in P^{\prime}, let \mathsf{opt}_i be the optimum of the instance (S_i,\mathcal{I}_i). Then we have |\mathcal{I}_i^{*}|\leq(1+\tilde{\varepsilon})\cdot\mathsf{opt}_i for all i\in P^{\prime}, where \tilde{\varepsilon}=\varepsilon/2. Since |\mathcal{I}^{*}|\leq r, we have

|\mathcal{I}_{\text{appx}}|=|\mathcal{I}^{*}|+\sum_{i\in P^{\prime}}|\mathcal{I}_i^{*}|\leq r+\left(1+\frac{\varepsilon}{2}\right)\sum_{i\in P^{\prime}}\mathsf{opt}_i. (1)

Let \mathcal{I}_{\text{opt}} be an optimal set cover for (S,\mathcal{I}). We observe that for i\in P^{\prime}, \mathcal{I}_{\text{opt}}\cap\mathcal{I}_i is a set cover for (S_i,\mathcal{I}_i), because J_i is uncoverable (so the points in S_i cannot be covered by any interval in \mathcal{I}\setminus\mathcal{I}_i). It immediately follows that \mathsf{opt}_i\leq|\mathcal{I}_{\text{opt}}\cap\mathcal{I}_i| for all i\in P^{\prime}. Therefore, we have

\sum_{i\in P^{\prime}}\mathsf{opt}_i\leq\sum_{i\in P^{\prime}}|\mathcal{I}_{\text{opt}}\cap\mathcal{I}_i|. (2)

The right-hand side of the above inequality can be larger than |\mathcal{I}_{\text{opt}}|, as some intervals in \mathcal{I}_{\text{opt}} can belong to two \mathcal{I}_i's. The following lemma bounds the number of such intervals.

Lemma 4.

There are at most 2r intervals in \mathcal{I}_{\textnormal{opt}} that belong to exactly two \mathcal{I}_i's.

The above lemma immediately implies

\sum_{i\in P^{\prime}}|\mathcal{I}_{\text{opt}}\cap\mathcal{I}_i|\leq|\mathcal{I}_{\text{opt}}|+2r=\mathsf{opt}+2r. (3)

Combining Inequalities 1, 2, and 3, we deduce that

|\mathcal{I}_{\text{appx}}| \leq r+\left(1+\frac{\varepsilon}{2}\right)\sum_{i\in P^{\prime}}\mathsf{opt}_i
\leq r+\left(1+\frac{\varepsilon}{2}\right)\sum_{i\in P^{\prime}}|\mathcal{I}_{\text{opt}}\cap\mathcal{I}_i|
\leq r+\left(1+\frac{\varepsilon}{2}\right)\cdot(\mathsf{opt}+2r)=(3+\varepsilon)\cdot r+\left(1+\frac{\varepsilon}{2}\right)\cdot\mathsf{opt}
<\frac{\varepsilon}{2}\cdot\mathsf{opt}+\left(1+\frac{\varepsilon}{2}\right)\cdot\mathsf{opt}=(1+\varepsilon)\cdot\mathsf{opt},

where the last inequality follows from the assumption \mathsf{opt}>(6+2\varepsilon)\cdot r/\varepsilon.

Time analysis.

We briefly discuss the update and construction time of \mathcal{D}_{\text{new}}; a detailed analysis can be found in Appendix B. Since \mathcal{D}_{\text{new}} is reconstructed periodically, it suffices to consider the first period (i.e., the period before the first reconstruction). The construction of \mathcal{D}_{\text{new}} can easily be done in \widetilde{O}(n_0) time. The update time of \mathcal{D}_{\text{new}} consists of the time for updating the data structures \mathcal{A} and \mathcal{D}_{\text{old}}^{(1)},\dots,\mathcal{D}_{\text{old}}^{(r)}, the time for maintaining the solution, and the time for reconstruction. Since the period consists of f(n_0,\varepsilon) operations, the size of each (S_i,\mathcal{I}_i) is always bounded by O(f(n_0,\varepsilon)) during the period. As argued before, we only need to update at most two \mathcal{D}_{\text{old}}^{(i)}'s after each operation. Thus, updating the \mathcal{D}_{\text{old}} data structures takes \widetilde{O}(f(n_0,\varepsilon)^{\alpha}/\varepsilon^{1-\alpha}) amortized time. Maintaining the solution can be done in \widetilde{O}(\delta+r) time, with a careful implementation. The time for reconstruction is bounded by \widetilde{O}(n_0+f(n_0,\varepsilon)); we amortize it over the f(n_0,\varepsilon) operations in the period, and the amortized time cost is then \widetilde{O}(n_0/f(n_0,\varepsilon)), i.e., \widetilde{O}(r). In total, the amortized update time of \mathcal{D}_{\text{new}} (during the first period) is \widetilde{O}(f(n_0,\varepsilon)^{\alpha}/\varepsilon^{1-\alpha}+\delta+r). If we set f(n,\varepsilon)=\min\{n^{1-\alpha^{\prime}}/\varepsilon^{\alpha^{\prime}},n/2\}, where \alpha^{\prime} is as defined in Theorem 2, a careful calculation (see Appendix B) shows that the amortized update time becomes \widetilde{O}(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}).
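For intuition, here is the calculation in the simplest regime (our sketch, assuming \varepsilon\leq 1, f(n_0,\varepsilon)=n_0^{1-\alpha^{\prime}}/\varepsilon^{\alpha^{\prime}}, and n=\Theta(n_0) during the period):

\frac{f(n_0,\varepsilon)^{\alpha}}{\varepsilon^{1-\alpha}} = \frac{n_0^{\alpha(1-\alpha^{\prime})}}{\varepsilon^{\alpha\alpha^{\prime}+1-\alpha}} = \frac{n_0^{\alpha^{\prime}}}{\varepsilon^{1-\alpha^{\prime}}}, \qquad r = \left\lceil\frac{n_0}{f(n_0,\varepsilon)}\right\rceil = O\!\left(n_0^{\alpha^{\prime}}\varepsilon^{\alpha^{\prime}}\right) = O\!\left(\frac{n_0^{\alpha^{\prime}}}{\varepsilon^{1-\alpha^{\prime}}}\right), \qquad \delta = O\!\left(\frac{r}{\varepsilon}\right) = O\!\left(\frac{n_0^{\alpha^{\prime}}}{\varepsilon^{1-\alpha^{\prime}}}\right),

where the exponent identities \alpha(1-\alpha^{\prime})=\alpha^{\prime} and \alpha\alpha^{\prime}+1-\alpha=1-\alpha^{\prime} follow from \alpha^{\prime}=\alpha/(1+\alpha).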

4.1.2 Putting everything together

With the bootstrapping theorem in hand, we are now able to design our dynamic interval set cover data structure. The starting point is a "trivial" data structure, which simply uses the output-sensitive algorithm of Lemma 1 to re-compute an optimal interval set cover after each update. Clearly, the update time of this data structure is \widetilde{O}(n) and the construction time is \widetilde{O}(n_0). Thus, there exists a (1+\varepsilon)-approximate dynamic interval set cover data structure with \widetilde{O}(n^{\alpha_0}/\varepsilon^{1-\alpha_0}) amortized update time for \alpha_0=1 and \widetilde{O}(n_0) construction time. Define \alpha_i=\alpha_{i-1}/(1+\alpha_{i-1}) for i\geq 1. By applying Theorem 2 i times for a constant i\geq 1, we see the existence of a (1+\varepsilon)-approximate dynamic interval set cover data structure with \widetilde{O}(n^{\alpha_i}/\varepsilon^{1-\alpha_i}) amortized update time and \widetilde{O}(n_0) construction time. One can easily verify that \alpha_i=1/(i+1) for all i\geq 0, as shown below. Therefore, for any constant \alpha>0, there is an index i\geq 0 satisfying \alpha_i<\alpha, and hence \widetilde{O}(n^{\alpha_i}/\varepsilon^{1-\alpha_i})=O(n^{\alpha}/\varepsilon). We finally conclude the following.
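Indeed, the closed form follows by induction: the base case is \alpha_0=1=1/(0+1), and if \alpha_{i-1}=1/i, then

\alpha_i=\frac{\alpha_{i-1}}{1+\alpha_{i-1}}=\frac{1/i}{1+1/i}=\frac{1}{i+1}.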

Theorem 5.

For a given approximation factor \varepsilon>0 and any constant \alpha>0, there exists a (1+\varepsilon)-approximate dynamic interval set cover data structure \mathcal{D} with O(n^{\alpha}/\varepsilon) amortized update time and \widetilde{O}(n_0) construction time.

4.2 2D set cover and hitting set for quadrants and unit squares

In this section, we present our bootstrapping framework for 2D dynamic set cover and hitting set. Our framework works for quadrants and unit squares.

We first show that dynamic unit-square set cover, dynamic unit-square hitting set, and dynamic quadrant hitting set can all be reduced to dynamic quadrant set cover.

Lemma 6.

Suppose there exists a c-approximate dynamic quadrant set cover data structure with f(n) amortized update time and \widetilde{O}(n_0) construction time, where f is an increasing function. Then there exist O(c)-approximate dynamic unit-square set cover, dynamic unit-square hitting set, and dynamic quadrant hitting set data structures with \widetilde{O}(f(n)) amortized update time and \widetilde{O}(n_0) construction time.

Now it suffices to consider dynamic quadrant set cover. In order to bootstrap, we need an output-sensitive algorithm for quadrant set cover, analogous to the one in Lemma 1 for intervals. Designing such an algorithm is considerably more difficult than in the 1D case, and we defer it to Section 4.2.2. Before that, let us first discuss the bootstrapping procedure, assuming the existence of a \mu-approximate output-sensitive algorithm for quadrant set cover.

4.2.1 Bootstrapping

We prove the following bootstrapping theorem, which is the technical heart of our result.

Theorem 7.

Assume quadrant set cover admits a \mu-approximate output-sensitive algorithm for some constant \mu\geq 1. Then we have the following result.
(*) Let \alpha\in[0,1] be a number. If there exists a (\mu+\varepsilon)-approximate dynamic quadrant set cover data structure \mathcal{D}_{\textnormal{old}} with \widetilde{O}(n^{\alpha}/\varepsilon^{1-\alpha}) amortized update time and \widetilde{O}(n_0) construction time for any \varepsilon>0, then there exists a (\mu+\varepsilon)-approximate dynamic quadrant set cover data structure \mathcal{D}_{\textnormal{new}} with \widetilde{O}(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}) amortized update time and \widetilde{O}(n_0) construction time for any \varepsilon>0, where \alpha^{\prime}=2\alpha/(1+2\alpha). Here n (resp., n_0) denotes the size of the current (resp., initial) problem instance.

Assuming the existence of \mathcal{D}_{\text{old}} as in the theorem, we are going to design the improved data structure \mathcal{D}_{\text{new}}. Let (S,\mathcal{Q}) be a dynamic quadrant set cover instance. As before, we denote by n (resp., n_0) the size of the current (resp., initial) (S,\mathcal{Q}).

Figure 1: The r\times r grid. Note that the cells may have different sizes.
Figure 2: A quadrant Q that left intersects \Box_{i,j}.
The construction of \mathcal{D}_{\text{new}}.

Initially, |S|+|\mathcal{Q}|=n_0. Essentially, our data structure \mathcal{D}_{\text{new}} consists of two parts. The first part is the data structure \mathcal{A} required for the \mu-approximate output-sensitive algorithm. The second part is a family of \mathcal{D}_{\text{old}} data structures defined as follows. Let f be a function to be determined shortly. We use an orthogonal grid to partition the plane \mathbb{R}^{2} into r\times r cells for r=\lceil n_0/f(n_0,\varepsilon)\rceil such that each row (resp., column) of the grid contains O(f(n_0,\varepsilon)) points in S and O(f(n_0,\varepsilon)) vertices of the quadrants in \mathcal{Q} (see Figure 1 for an illustration). Denote by \Box_{i,j} the cell in the i-th row and j-th column. Define S_{i,j}=S\cap\Box_{i,j}. Also, we need to define a sub-collection \mathcal{Q}_{i,j}\subseteq\mathcal{Q}. Recall that in the 1D case, we defined \mathcal{I}_i as the sub-collection of intervals in \mathcal{I} that partially intersect the portion J_i. However, for technical reasons, here we cannot simply define \mathcal{Q}_{i,j} as the sub-collection of quadrants in \mathcal{Q} that partially intersect \Box_{i,j}. Instead, we define \mathcal{Q}_{i,j} as follows. We include in \mathcal{Q}_{i,j} all the quadrants in \mathcal{Q} whose vertices lie in \Box_{i,j}. Besides, we also include in \mathcal{Q}_{i,j} the following (at most) four special quadrants. We say a quadrant Q left intersects \Box_{i,j} if Q partially intersects \Box_{i,j} and contains the left edge of \Box_{i,j} (see Figure 2 for an illustration); similarly, we define "right intersects", "top intersects", and "bottom intersects". Among a collection of quadrants, the leftmost/rightmost/topmost/bottommost quadrant refers to the quadrant whose vertex is the leftmost/rightmost/topmost/bottommost. We include in \mathcal{Q}_{i,j} the rightmost quadrant in \mathcal{Q} that left intersects \Box_{i,j}, the leftmost quadrant in \mathcal{Q} that right intersects \Box_{i,j}, the bottommost quadrant in \mathcal{Q} that top intersects \Box_{i,j}, and the topmost quadrant in \mathcal{Q} that bottom intersects \Box_{i,j} (if these quadrants exist). When the instance (S,\mathcal{Q}) is updated, the grid remains unchanged, but the S_{i,j}'s and \mathcal{Q}_{i,j}'s change along with S and \mathcal{Q}. We view each (S_{i,j},\mathcal{Q}_{i,j}) as a dynamic quadrant set cover instance, and let \mathcal{D}_{\text{old}}^{(i,j)} be the data structure \mathcal{D}_{\text{old}} built on (S_{i,j},\mathcal{Q}_{i,j}) for the approximation factor \tilde{\varepsilon}=\varepsilon/2. The second part of \mathcal{D}_{\text{new}} consists of the data structures \mathcal{D}_{\text{old}}^{(i,j)} for i,j\in\{1,\dots,r\}.
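The grid itself is easy to build from sorted coordinates; here is a brief sketch (our illustration, with hypothetical inputs) of choosing the cutting lines so that every row and column meets the O(f(n_0,\varepsilon)) requirement.

```python
def grid_lines(coords, f):
    """coords: x- (or y-) coordinates of all points of S and all quadrant
    vertices. Returns cut positions so each slab contains at most f of them."""
    coords = sorted(coords)
    return [coords[k] for k in range(f, len(coords), f)]

# Vertical cuts from the x-coordinates and horizontal cuts from the
# y-coordinates together induce the r x r grid of cells of Figure 1.
```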

Update and reconstruction.

After each operation on (S,\mathcal{Q}), we update the data structure \mathcal{A}. Also, if some (S_{i,j},\mathcal{Q}_{i,j}) changes, we update the data structure \mathcal{D}_{\text{old}}^{(i,j)}. Note that an operation on S changes exactly one S_{i,j}, and an operation on \mathcal{Q} may only change the \mathcal{Q}_{i,j}'s in one row and one column (specifically, if the vertex of the inserted/deleted quadrant lies in \Box_{i,j}, then only \mathcal{Q}_{i,1},\dots,\mathcal{Q}_{i,r},\mathcal{Q}_{1,j},\dots,\mathcal{Q}_{r,j} may change). Thus, we in fact only need to update the \mathcal{D}_{\text{old}}^{(i,j)}'s in one row and one column. Besides the update, we also reconstruct the entire data structure \mathcal{D}_{\text{new}} periodically, where the (first) reconstruction happens after processing f(n_0,\varepsilon) operations. This part is exactly the same as in our 1D data structure.

Maintaining a solution.

We now discuss how to maintain a (\mu+\varepsilon)-approximate optimal set cover \mathcal{Q}_{\text{appx}} for (S,\mathcal{Q}). Let \mathsf{opt} denote the optimum of the current (S,\mathcal{Q}). Set \delta=\min\{(8\mu+4\varepsilon+2)\cdot r^{2}/\varepsilon,n\}. If \mathsf{opt}\leq\delta, then the output-sensitive algorithm can compute a \mu-approximate optimal set cover for (S,\mathcal{Q}) in \widetilde{O}(\mu\delta) time. Thus, we simulate the output-sensitive algorithm within that amount of time. If the algorithm successfully computes a solution, we use it as our \mathcal{Q}_{\text{appx}}. Otherwise, we construct \mathcal{Q}_{\text{appx}} as follows. We say the cell \Box_{i,j} is coverable if there exists Q\in\mathcal{Q} that contains \Box_{i,j}, and uncoverable otherwise. Let P=\{(i,j):\Box_{i,j}\text{ is coverable}\} and P^{\prime}=\{(i,j):\Box_{i,j}\text{ is uncoverable}\}. We try to use the quadrants in \mathcal{Q} to "cover" all coverable cells. That is, for each (i,j)\in P, we find a quadrant in \mathcal{Q} that contains \Box_{i,j}, and denote by \mathcal{Q}^{*} the set of all these quadrants. Then we consider the uncoverable cells. If for some (i,j)\in P^{\prime}, the data structure \mathcal{D}_{\text{old}}^{(i,j)} tells us that the instance (S_{i,j},\mathcal{Q}_{i,j}) has no set cover, then we immediately make a no-solution decision, i.e., decide that the current (S,\mathcal{Q}) has no feasible set cover, and continue to the next operation. Otherwise, for each (i,j)\in P^{\prime}, the data structure \mathcal{D}_{\text{old}}^{(i,j)} maintains a (\mu+\tilde{\varepsilon})-approximate optimal set cover \mathcal{Q}_{i,j}^{*} for (S_{i,j},\mathcal{Q}_{i,j}). We then define \mathcal{Q}_{\text{appx}}=\mathcal{Q}^{*}\sqcup\left(\bigsqcup_{(i,j)\in P^{\prime}}\mathcal{Q}_{i,j}^{*}\right).

We will see later that \mathcal{Q}_{\text{appx}} is always a (\mu+\varepsilon)-approximate optimal set cover for (S,\mathcal{Q}). Before this, let us briefly discuss how to store \mathcal{Q}_{\text{appx}} to support the desired queries. If \mathcal{Q}_{\text{appx}} is computed by the output-sensitive algorithm, then we have all the elements of \mathcal{Q}_{\text{appx}} in hand and can easily store them in a data structure to support the queries. Otherwise, \mathcal{Q}_{\text{appx}} is defined as the disjoint union of \mathcal{Q}^{*} and the \mathcal{Q}_{i,j}^{*}'s. In this case, the size and reporting queries can be handled in the same way as in the 1D problem, by taking advantage of the fact that \mathcal{Q}_{i,j}^{*} is maintained in \mathcal{D}_{\text{old}}^{(i,j)}. However, the situation for the membership query is more complicated, because now a quadrant in \mathcal{Q} may belong to many \mathcal{Q}_{i,j}^{*}'s. This issue can be handled by collecting all special quadrants in \mathcal{Q}_{\text{appx}} and building on them a data structure that supports the membership query. We defer the detailed discussion to Appendix C.

Correctness.

We now prove the correctness of our data structure \mathcal{D}_{\text{new}}. First, we show that the no-solution decision made by our data structure is correct.

Lemma 8.

\mathcal{D}_{\textnormal{new}} makes a no-solution decision iff the current (S,\mathcal{Q}) has no set cover.

Next, we show that the solution \mathcal{Q}_{\text{appx}} maintained by \mathcal{D}_{\text{new}} is truly a (\mu+\varepsilon)-approximate optimal set cover for (S,\mathcal{Q}). If \mathcal{Q}_{\text{appx}} is computed by the output-sensitive algorithm, then it is a \mu-approximate optimal set cover for (S,\mathcal{Q}). Otherwise, \mathsf{opt}>\delta=\min\{(8\mu+4\varepsilon+2)\cdot r^{2}/\varepsilon,n\}, i.e., either \mathsf{opt}>(8\mu+4\varepsilon+2)\cdot r^{2}/\varepsilon or \mathsf{opt}>n. If \mathsf{opt}>n, then (S,\mathcal{Q}) has no set cover (i.e., \mathsf{opt}=\infty) and \mathcal{D}_{\text{new}} makes a no-solution decision by Lemma 8. So assume \mathsf{opt}>(8\mu+4\varepsilon+2)\cdot r^{2}/\varepsilon. In this case, \mathcal{Q}_{\text{appx}}=\mathcal{Q}^{*}\sqcup(\bigsqcup_{(i,j)\in P^{\prime}}\mathcal{Q}_{i,j}^{*}). For each (i,j)\in P^{\prime}, let \mathsf{opt}_{i,j} be the optimum of (S_{i,j},\mathcal{Q}_{i,j}). Then we have |\mathcal{Q}_{i,j}^{*}|\leq(\mu+\tilde{\varepsilon})\cdot\mathsf{opt}_{i,j} for all (i,j)\in P^{\prime}, where \tilde{\varepsilon}=\varepsilon/2. Since |\mathcal{Q}^{*}|\leq r^{2}, we have

|\mathcal{Q}_{\text{appx}}|=|\mathcal{Q}^{*}|+\sum_{(i,j)\in P^{\prime}}|\mathcal{Q}_{i,j}^{*}|\leq r^{2}+\left(\mu+\frac{\varepsilon}{2}\right)\sum_{(i,j)\in P^{\prime}}\mathsf{opt}_{i,j}. (4)

Let \mathcal{Q}_{i,j}^{\prime}\subseteq\mathcal{Q}_{i,j} consist of the non-special quadrants, i.e., those whose vertices are in \Box_{i,j}.

Lemma 9.

We have |\mathcal{Q}_{\textnormal{opt}}\cap\mathcal{Q}_{i,j}^{\prime}|+4\geq\mathsf{opt}_{i,j} for all (i,j)\in P^{\prime}, and in particular,

\mathsf{opt}+4r^{2}=|\mathcal{Q}_{\textnormal{opt}}|+4r^{2}\geq\sum_{(i,j)\in P^{\prime}}\mathsf{opt}_{i,j}. (5)

Using Inequalities 4 and 5, we deduce that

|\mathcal{Q}_{\text{appx}}| \leq r^{2}+\left(\mu+\frac{\varepsilon}{2}\right)\sum_{(i,j)\in P^{\prime}}\mathsf{opt}_{i,j}
\leq r^{2}+\left(\mu+\frac{\varepsilon}{2}\right)(\mathsf{opt}+4r^{2})
\leq(4\mu+2\varepsilon+1)\cdot r^{2}+\left(\mu+\frac{\varepsilon}{2}\right)\cdot\mathsf{opt}
<\frac{\varepsilon}{2}\cdot\mathsf{opt}+\left(\mu+\frac{\varepsilon}{2}\right)\cdot\mathsf{opt}=(\mu+\varepsilon)\cdot\mathsf{opt},

where the last inequality follows from the fact that \mathsf{opt}>(8\mu+4\varepsilon+2)\cdot r^{2}/\varepsilon.

Time analysis.

We briefly discuss the update and construction time of \mathcal{D}_{\text{new}}; a detailed analysis can be found in Appendix C. It suffices to consider the first period (i.e., the period before the first reconstruction). We first observe the following fact.

Lemma 10.

At any time in the first period, we have \sum_{k=1}^{r}(|S_{i,k}|+|\mathcal{Q}_{i,k}|)=O(f(n_0,\varepsilon)+r) for all i\in\{1,\dots,r\} and \sum_{k=1}^{r}(|S_{k,j}|+|\mathcal{Q}_{k,j}|)=O(f(n_0,\varepsilon)+r) for all j\in\{1,\dots,r\}.

The above lemma implies that the sum of the sizes of all (S_{i,j},\mathcal{Q}_{i,j}) is O(n_0+r^{2}) at any time in the first period. Therefore, constructing \mathcal{D}_{\text{new}} can easily be done in \widetilde{O}(n_0+r^{2}) time. The update time of \mathcal{D}_{\text{new}} consists of the time for reconstruction, the time for updating \mathcal{A} and the \mathcal{D}_{\text{old}}^{(i,j)}'s, and the time for maintaining the solution. Using almost the same analysis as in the 1D problem, we can show that the reconstruction takes \widetilde{O}(r+r^{2}/f(n_0,\varepsilon)) amortized time and maintaining the solution can be done in \widetilde{O}(\delta+r^{2}) time, with a careful implementation. The time for updating the \mathcal{D}_{\text{old}} data structures requires a different analysis. Let m_{i,j} denote the current size of (S_{i,j},\mathcal{Q}_{i,j}). As argued before, we in fact only need to update the \mathcal{D}_{\text{old}} data structures in one row and one column (say the i-th row and j-th column). Hence, updating the \mathcal{D}_{\text{old}} data structures takes \widetilde{O}(\sum_{k=1}^{r}m_{i,k}^{\alpha}/\varepsilon^{1-\alpha}+\sum_{k=1}^{r}m_{k,j}^{\alpha}/\varepsilon^{1-\alpha}) amortized time. Lemma 10 implies that \sum_{k=1}^{r}m_{i,k}=O(f(n_0,\varepsilon)+r) and \sum_{k=1}^{r}m_{k,j}=O(f(n_0,\varepsilon)+r). Since \alpha\leq 1, by Hölder's inequality and Lemma 10,

\sum_{k=1}^{r}m_{i,k}^{\alpha}\leq\left(\frac{\sum_{k=1}^{r}m_{i,k}}{r}\right)^{\alpha}\cdot r=O(r^{1-\alpha}\cdot(f(n_0,\varepsilon)+r)^{\alpha})

and similarly \sum_{k=1}^{r}m_{k,j}^{\alpha}=O(r^{1-\alpha}\cdot(f(n_0,\varepsilon)+r)^{\alpha}). It follows that updating the \mathcal{D}_{\text{old}} data structures takes \widetilde{O}(r^{1-\alpha}\cdot(f(n_0,\varepsilon)+r)^{\alpha}/\varepsilon^{1-\alpha}) amortized time. In total, the amortized update time of \mathcal{D}_{\text{new}} (during the first period) is \widetilde{O}(r^{1-\alpha}\cdot(f(n_0,\varepsilon)+r)^{\alpha}+\delta+r^{2}). If we set f(n,\varepsilon)=\min\{n^{1-\alpha^{\prime}/2}/(\sqrt{\varepsilon})^{\alpha^{\prime}},n/2\}, where \alpha^{\prime} is as defined in Theorem 7, a careful calculation (see Appendix C) shows that the amortized update time becomes \widetilde{O}(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}).

4.2.2 An output-sensitive quadrant set cover algorithm

We propose an O(1)-approximate output-sensitive algorithm for quadrant set cover, which is needed for applying Theorem 7. Let (S,\mathcal{Q}) be a quadrant set cover instance of size n, and \mathsf{opt} be its optimum. Our goal is to compute an O(1)-approximate optimal set cover for (S,\mathcal{Q}) in \widetilde{O}(\mathsf{opt}) time, using some basic data structure built on (S,\mathcal{Q}).

For simplicity, let us assume that (S,\mathcal{Q}) has a set cover; how to handle the no-solution case is discussed in Appendix D.1. There are four types of quadrants in \mathcal{Q}: southeast, southwest, northeast, and northwest; we denote by \mathcal{Q}^{\text{SE}},\mathcal{Q}^{\text{SW}},\mathcal{Q}^{\text{NE}},\mathcal{Q}^{\text{NW}}\subseteq\mathcal{Q} the sub-collections of these types of quadrants, respectively. Let U^{\text{SE}} denote the union of the quadrants in \mathcal{Q}^{\text{SE}}, and define U^{\text{SW}},U^{\text{NE}},U^{\text{NW}} similarly. Since (S,\mathcal{Q}) has a set cover, we have S=(S\cap U^{\text{SE}})\cup(S\cap U^{\text{SW}})\cup(S\cap U^{\text{NE}})\cup(S\cap U^{\text{NW}}). Therefore, if we can compute O(1)-approximate optimal set covers for (S\cap U^{\text{SE}},\mathcal{Q}), (S\cap U^{\text{SW}},\mathcal{Q}), (S\cap U^{\text{NE}},\mathcal{Q}), and (S\cap U^{\text{NW}},\mathcal{Q}), then the union of these four set covers is an O(1)-approximate optimal set cover for (S,\mathcal{Q}).

Figure 3: Illustrating the curve \gamma and the point \sigma.

With this observation, it now suffices to show how to compute an O(1)-approximate optimal set cover for (S\cap U^{\text{SE}},\mathcal{Q}) in \widetilde{O}(\mathsf{opt}^{\text{SE}}) time, where \mathsf{opt}^{\text{SE}} is the optimum of (S\cap U^{\text{SE}},\mathcal{Q}). The main challenge is to guarantee the running time and approximation ratio simultaneously. We begin by introducing some notation. Let \gamma denote the boundary of U^{\text{SE}}, which is an orthogonal staircase curve from bottom-left to top-right. If \gamma\cap U^{\text{SW}}\neq\emptyset, then \gamma\cap U^{\text{SW}} is a connected portion of \gamma that contains the bottom-left end of \gamma. Define \sigma as the "endpoint" of \gamma\cap U^{\text{SW}}, i.e., the point on \gamma\cap U^{\text{SW}} that is closest to the top-right end of \gamma. See Figure 3 for an illustration. If \gamma\cap U^{\text{SW}}=\emptyset, we define \sigma as the bottom-left end of \gamma (which is a point whose y-coordinate equals -\infty). For a number \tilde{y}\in\mathbb{R}, we define \phi(\tilde{y}) as the leftmost point in S\cap U^{\text{SE}} whose y-coordinate is greater than \tilde{y}; we say \phi(\tilde{y}) does not exist if no point in S\cap U^{\text{SE}} has y-coordinate greater than \tilde{y}. For a point a\in\mathbb{R}^{2} and a collection \mathcal{P} of quadrants, we define \varPhi_{\rightarrow}(a,\mathcal{P}) and \varPhi_{\uparrow}(a,\mathcal{P}) as the rightmost and topmost quadrants in \mathcal{P} that contain a, respectively. For a quadrant Q, we denote by x(Q) and y(Q) the x- and y-coordinates of the vertex of Q, respectively.

To get some intuition, let us consider a very simple case, where \mathcal{Q} only consists of southeast quadrants. In this case, one can compute an optimal set cover for (S\cap U^{\text{SE}},\mathcal{Q}) using a greedy algorithm similar to the 1D interval set cover algorithm: repeatedly pick the leftmost uncovered point in S\cap U^{\text{SE}} and cover it using the topmost (southeast) quadrant containing it. Using the notation defined above, we can describe this algorithm as follows. Set \mathcal{Q}_{\text{ans}}\leftarrow\emptyset and \tilde{y}\leftarrow-\infty initially, and repeatedly do a\leftarrow\phi(\tilde{y}), Q\leftarrow\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}), \mathcal{Q}_{\text{ans}}\leftarrow\mathcal{Q}_{\text{ans}}\cup\{Q\}, \tilde{y}\leftarrow y(Q) until \phi(\tilde{y}) does not exist. Eventually, \mathcal{Q}_{\text{ans}} is the set cover we want.

Now we try to extend this algorithm to the general case. However, the situation here becomes much more complicated, since we may have three other types of quadrants in 𝒬\mathcal{Q}, which have to be carefully dealt with in order to guarantee the correctness. But the intuition remains the same: we still construct the solution in a greedy manner. The following procedure describes our algorithm.

  1. \mathcal{Q}_{\text{ans}}\leftarrow\emptyset. \tilde{y}\leftarrow-\infty. If \phi(\tilde{y}) does not exist, then go to Step 6.

  2. \mathcal{Q}_{\text{ans}}\leftarrow\{\varPhi_{\rightarrow}(\sigma,\mathcal{Q}^{\text{SW}}),\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}})\}. \tilde{y}\leftarrow y(\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}})). If \phi(\tilde{y}) exists, then a\leftarrow\phi(\tilde{y}), else go to Step 6.

  3. If a\in U^{\text{NE}}, then \mathcal{Q}_{\text{ans}}\leftarrow\mathcal{Q}_{\text{ans}}\cup\{\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{NE}}),\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}})\} and go to Step 6.

  4. If a\in U^{\text{NW}}, then \mathcal{Q}_{\text{ans}}\leftarrow\mathcal{Q}_{\text{ans}}\cup\{\varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}),\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}})\} and Q\leftarrow\varPhi_{\uparrow}(v,\mathcal{Q}^{\text{SE}}), where v is the vertex of \varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}); otherwise Q\leftarrow\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}).

  5. \mathcal{Q}_{\text{ans}}\leftarrow\mathcal{Q}_{\text{ans}}\cup\{Q\}. \tilde{y}\leftarrow y(Q). If \phi(\tilde{y}) exists, then a\leftarrow\phi(\tilde{y}) and go to Step 3.

  6. Output \mathcal{Q}_{\text{ans}}.
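For concreteness, the procedure can be transcribed as follows (our sketch). The oracles `phi`, `Phi_right`, `Phi_up`, `in_NE`, and `in_NW`, as well as the accessors `y` and `vertex`, are assumptions standing in for the binary-search-tree and range-tree queries discussed below; existence checks (e.g., when \gamma\cap U^{\text{SW}}=\emptyset) are folded into a `None` result.

```python
def cover_SE(phi, Phi_right, Phi_up, in_NE, in_NW, sigma,
             Q_SW, Q_SE, Q_NE, Q_NW, y, vertex):
    ans = set()
    if phi(float("-inf")) is None:                 # Step 1
        return ans
    q0 = Phi_up(sigma, Q_SE)                       # Step 2
    ans = {Phi_right(sigma, Q_SW), q0}
    ans.discard(None)                              # no SW quadrant contains sigma
    a = phi(y(q0))
    while a is not None:
        if in_NE(a):                               # Step 3
            ans |= {Phi_up(a, Q_NE), Phi_up(a, Q_SE)}
            break
        if in_NW(a):                               # Step 4
            q_nw = Phi_right(a, Q_NW)
            ans |= {q_nw, Phi_up(a, Q_SE)}
            q = Phi_up(vertex(q_nw), Q_SE)
        else:
            q = Phi_up(a, Q_SE)
        ans.add(q)                                 # Step 5
        a = phi(y(q))
    return ans                                     # Step 6
```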

The following lemma proves the correctness of our algorithm.

Lemma 11.

\mathcal{Q}_{\textnormal{ans}} covers all points in S\cap U^{\textnormal{SE}}, and |\mathcal{Q}_{\textnormal{ans}}|=O(\mathsf{opt}^{\textnormal{SE}}).

The remaining task is to show how to run our algorithm in \widetilde{O}(\mathsf{opt}^{\text{SE}}) time using basic data structures. It is clear that our algorithm terminates in O(\mathsf{opt}^{\text{SE}}) steps, since we include at least one quadrant into \mathcal{Q}_{\text{ans}} in each iteration of the loop Steps 3–5, and eventually |\mathcal{Q}_{\text{ans}}|=O(\mathsf{opt}^{\text{SE}}) by Lemma 11. Thus, it suffices to show that each step can be done in \widetilde{O}(1) time. In every step of our algorithm, all work can be done in constant time except the tasks of computing the point \sigma, testing whether a\in U^{\text{NE}} and a\in U^{\text{NW}} for a given point a, computing the quadrants \varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{SW}}),\varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}),\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}),\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{NE}}) for a given point a, and computing \phi(\tilde{y}) for a given number \tilde{y}. All these tasks except the computation of \phi(\cdot) can easily be done in \widetilde{O}(1) time by storing the quadrants in binary search trees. Computing \phi(\cdot) in \widetilde{O}(1) time is more difficult, and we achieve this by properly using range trees built on both S and \mathcal{Q}^{\text{SE}}. The details are presented in Appendix D.2.

Using the above algorithm, we can compute O(1)-approximate optimal set covers for (S\cap U^{\text{SE}},\mathcal{Q}), (S\cap U^{\text{SW}},\mathcal{Q}), (S\cap U^{\text{NE}},\mathcal{Q}), and (S\cap U^{\text{NW}},\mathcal{Q}). As argued before, the union of these four set covers, denoted by \mathcal{Q}^{*}, is an O(1)-approximate optimal set cover for (S,\mathcal{Q}).

Theorem 12.

Quadrant set cover admits an O(1)-approximate output-sensitive algorithm.

4.2.3 Putting everything together

With the bootstrapping theorem in hand, we are now able to design our dynamic quadrant set cover data structure. Again, the starting point is a "trivial" data structure which uses the output-sensitive algorithm of Theorem 12 to re-compute an optimal quadrant set cover after each update. Clearly, the update time of this data structure is \widetilde{O}(n) and the construction time is \widetilde{O}(n_0). Let \mu=O(1) be the approximation ratio of the output-sensitive algorithm. The trivial data structure implies the existence of a (\mu+\varepsilon)-approximate dynamic quadrant set cover data structure with \widetilde{O}(n^{\alpha_0}/\varepsilon^{1-\alpha_0}) amortized update time for \alpha_0=1 and \widetilde{O}(n_0) construction time. Define \alpha_i=2\alpha_{i-1}/(1+2\alpha_{i-1}) for i\geq 1. By applying Theorem 7 i times for a constant i\geq 1, we see the existence of a (\mu+\varepsilon)-approximate dynamic quadrant set cover data structure with \widetilde{O}(n^{\alpha_i}/\varepsilon^{1-\alpha_i}) amortized update time and \widetilde{O}(n_0) construction time. One can easily verify that \alpha_i=2^{i}/(2^{i+1}-1) for all i\geq 0, as shown below. Therefore, for any constant \alpha>0, there is an index i\geq 0 satisfying \alpha_i<1/2+\alpha, and hence \widetilde{O}(n^{\alpha_i}/\varepsilon^{1-\alpha_i})=O(n^{1/2+\alpha}/\varepsilon). Setting \varepsilon to be any constant, we finally conclude the following.
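As before, the closed form follows by induction: the base case is \alpha_0=1=2^{0}/(2^{1}-1), and assuming \alpha_{i-1}=2^{i-1}/(2^{i}-1),

\alpha_i=\frac{2\alpha_{i-1}}{1+2\alpha_{i-1}}=\frac{2^{i}/(2^{i}-1)}{(2^{i+1}-1)/(2^{i}-1)}=\frac{2^{i}}{2^{i+1}-1},

so \alpha_i decreases toward 1/2 as i grows.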

Theorem 13.

For any constant α>0\alpha>0, there exists an O(1)O(1)-approximate dynamic quadrant set cover data structure with O(n1/2+α)O(n^{1/2+\alpha}) amortized update time and O~(n0)\widetilde{O}(n_{0}) construction time.
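The exponent sequence in this argument can be checked mechanically. The following snippet is our own illustration (not part of any data structure); it iterates the recurrence from the proof and verifies the closed form:

    # Sanity check (ours): iterate the bootstrapping recurrence
    # alpha_i = 2*alpha_{i-1} / (1 + 2*alpha_{i-1}), alpha_0 = 1, and
    # verify the closed form alpha_i = 2^i / (2^{i+1} - 1) stated above.
    from fractions import Fraction

    alpha = Fraction(1)  # alpha_0 = 1, from the trivial data structure
    for i in range(12):
        assert alpha == Fraction(2**i, 2**(i + 1) - 1)
        alpha = 2 * alpha / (1 + 2 * alpha)  # one application of Theorem 7

    # alpha_i decreases toward 1/2, so for any constant alpha > 0 some
    # alpha_i drops below 1/2 + alpha, giving the O(n^{1/2+alpha}) bound.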

By the reduction of Lemma 6, we have the following corollary.

Corollary 14.

For any constant α>0\alpha>0, there exist O(1)O(1)-approximate dynamic unit-square set cover, dynamic unit-square hitting set, and dynamic quadrant hitting set data structures with O(n1/2+α)O(n^{1/2+\alpha}) amortized update time and O~(n0)\widetilde{O}(n_{0}) construction time.

5 Second framework: Local modification

In this section, we give our second framework for dynamic geometric set cover and hitting set, which results in poly-logarithmic update time data structures for dynamic interval hitting set, as well as dynamic quadrant and unit-square set cover (resp., hitting set) in the partially dynamic setting. This framework applies to stable dynamic geometric set cover and hitting set problems, in which each operation cannot change the optimum of the problem instance significantly. One can easily see that any dynamic set cover (resp., hitting set) problem in the partially dynamic setting is stable. Interestingly, we will see that the dynamic interval hitting set problem, even in the fully dynamic setting, is stable.

The basic idea behind this framework is quite simple, and we call it local modification. Namely, at the beginning, we compute a good solution SS^{*} for the problem instance using an output-sensitive algorithm, and after each operation, we slightly modify SS^{*}, by adding/removing a constant number of elements, to guarantee that SS^{*} is a feasible solution for the current instance. After processing many operations, we re-compute a new SS^{*} and repeat this procedure. The stability of the problem can guarantee that SS^{*} is always a good approximation for the optimal solution.

Next, we formally define the notion of stability. The quasi-optimum of a set cover instance (S,)(S,\mathcal{R}) is the size of an optimal set cover for (S,)(S^{\prime},\mathcal{R}), where SSS^{\prime}\subseteq S consists of the points that can be covered by \mathcal{R}. The quasi-optimum of a hitting set instance is defined in a similar way. For a dynamic set cover (resp., hitting set) instance Π\varPi, we use 𝗈𝗉𝗍i(Π)\mathsf{opt}_{i}(\varPi) to denote the quasi-optimum of Π\varPi at time ii (i.e., after the ii-th operation).

Definition 15.

A dynamic set cover (resp., hitting set) instance Π\varPi is stable if |𝗈𝗉𝗍i(Π)𝗈𝗉𝗍i+1(Π)|1|\mathsf{opt}_{i}(\varPi)-\mathsf{opt}_{i+1}(\varPi)|\leq 1 for all i0i\geq 0. A dynamic set cover (resp., hitting set) problem is stable if any instance of the problem is stable.

Lemma 16.

Any dynamic set cover (resp., hitting set) problem in the partially dynamic setting is stable. The dynamic interval hitting set problem is stable.

5.1 Dynamic interval hitting set

We show how to use the idea of local modification to obtain a (1+ε)(1+\varepsilon)-approximate dynamic interval hitting set data structure 𝒟\mathcal{D} with O~(1/ε)\widetilde{O}(1/\varepsilon) amortized update time.

Similarly to interval set cover, the interval hitting set problem can also be solved using a simple greedy algorithm, which can be performed in O~(𝗈𝗉𝗍)\widetilde{O}(\mathsf{opt}) time if we properly store the points and intervals in basic data structures (e.g., binary search trees). We omit the proof of the following lemma, as it is almost the same as the proof of Lemma 1.

Lemma 17.

The interval hitting set problem admits an exact output-sensitive algorithm.

Let (S,)(S,\mathcal{I}) be a dynamic interval hitting set instance. We denote by nn (resp., n0n_{0}) the size of the current (resp., initial) (S,)(S,\mathcal{I}), and by 𝗈𝗉𝗍\mathsf{opt} the optimum of the current (S,)(S,\mathcal{I}).

The data structure.

Our data structure 𝒟\mathcal{D} basically consists of three parts. The first part is the basic data structure 𝒜\mathcal{A} built on (S,)(S,\mathcal{I}) required for the output-sensitive algorithm. The second part is the following dynamic data structure \mathcal{B} which can tell us whether the current instance has a hitting set.

Lemma 18.

There exists a dynamic data structure \mathcal{B} built on (S,)(S,\mathcal{I}) with O~(1)\widetilde{O}(1) update time and O~(n0)\widetilde{O}(n_{0}) construction time that can indicate whether the current (S,)(S,\mathcal{I}) has a hitting set or not.

The third part consists of the following two basic data structures 𝒞1\mathcal{C}_{1} and 𝒞2\mathcal{C}_{2}.

Lemma 19.

One can store SS in a basic data structure 𝒞1\mathcal{C}_{1} such that given a point aa, the rightmost (resp., leftmost) point in SS to the left (resp., right) of aa can be reported in O~(1)\widetilde{O}(1) time (if they exist).

Lemma 20.

One can store SS in a basic data structure 𝒞2\mathcal{C}_{2} such that given an interval II, a point aSa\in S that is contained in II can be reported in O~(1)\widetilde{O}(1) time (if it exists).

Maintaining a solution.

Let (S0,0)(S_{0},\mathcal{I}_{0}) be the initial (S,)(S,\mathcal{I}), and define 00\mathcal{I}_{0}^{\prime}\subseteq\mathcal{I}_{0} as the sub-collection consisting of all intervals that are hit by S0S_{0}. In the construction of 𝒟\mathcal{D}, after building the data structures 𝒜\mathcal{A} and \mathcal{B}, we compute an optimal hitting set SS^{*} for (S0,0)(S_{0},\mathcal{I}_{0}^{\prime}). Set 𝖼𝗇𝗍=0\mathsf{cnt}=0 and 𝗈𝗉𝗍=|S|\mathsf{opt}^{\sim}=|S^{*}| initially. The data structure 𝒟\mathcal{D} handles the operations as follows. After each operation, 𝒟\mathcal{D} first increments 𝖼𝗇𝗍\mathsf{cnt} by 1 and queries the data structure \mathcal{B} to see whether the current (S,)(S,\mathcal{I}) has a hitting set. Whenever 𝖼𝗇𝗍ε𝗈𝗉𝗍/(2+ε)\mathsf{cnt}\geq\varepsilon\cdot\mathsf{opt}^{\sim}/(2+\varepsilon) and the current (S,)(S,\mathcal{I}) has a hitting set, we re-compute an optimal hitting set SS^{*} for the current (S,)(S,\mathcal{I}) using the output-sensitive algorithm and reset 𝖼𝗇𝗍=0\mathsf{cnt}=0 and 𝗈𝗉𝗍=|S|\mathsf{opt}^{\sim}=|S^{*}|. Otherwise, we apply local modification on SS^{*} (i.e., “slightly” modify SS^{*}) as follows.

  • If the operation inserts a point aa to SS, then include aa in SS^{*}. If the operation deletes a point aa from SS, then remove aa from SS^{*} (if aSa\in S^{*}) and include in SS^{*} the rightmost point in SS to the left of aa and the leftmost point in SS to the right of aa (if they exist), which can be found using the data structure 𝒞1\mathcal{C}_{1} of Lemma 19.

  • If the operation inserts an interval II to \mathcal{I}, then include in SS^{*} an arbitrary point in SS that hits II (if it exists), which can be found using the data structure 𝒞2\mathcal{C}_{2} of Lemma 20. If the operation deletes an interval from \mathcal{I}, keep SS^{*} unchanged.

Note that we do local modification on SS^{*} no matter whether the current (S,)(S,\mathcal{I}) has a hitting set or not. After this, our data structure 𝒟\mathcal{D} uses SS^{*} as the solution for the current (S,)(S,\mathcal{I}), if the current (S,)(S,\mathcal{I}) has a hitting set. In order to support the desired queries for SS^{*}, we can simply store SS^{*} in a (dynamic) binary search tree. When we re-compute SS^{*}, we re-build the binary search tree that stores SS^{*}. For each local modification on SS^{*}, we do insertion/deletion on the binary search tree according to the change of SS^{*}.
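For concreteness, the following is a minimal sketch of this maintenance procedure, under simplifying assumptions: plain sorted lists play the roles of the basic data structures 𝒜, 𝒞1, and 𝒞2, the feasibility test of Lemma 18 is omitted, the re-computation is a linear-time version of the greedy algorithm of Lemma 17, and all names are ours.

    import bisect

    class DynamicIntervalHittingSet:
        """Sketch (ours) of the local-modification data structure D."""

        def __init__(self, points, intervals, eps):
            self.S = sorted(points)   # stands in for C1 (and C2)
            self.I = set(intervals)   # intervals as (left, right) pairs
            self.eps = eps
            self._recompute()

        def _recompute(self):
            # Greedy exact hitting set on the hittable intervals
            # (Lemma 17); O~(opt) with the proper trees, linear here.
            sol, last = set(), float("-inf")
            for l, r in sorted(self.I, key=lambda iv: iv[1]):
                if l <= last:
                    continue          # already hit by the last chosen point
                k = bisect.bisect_right(self.S, r) - 1
                if k >= 0 and self.S[k] >= l:
                    last = self.S[k]  # rightmost point of S inside [l, r]
                    sol.add(last)
                # unhittable intervals are skipped (quasi-optimum)
            self.sol, self.cnt, self.opt_sim = sol, 0, len(sol)

        def _bump(self):
            # Re-compute once cnt >= eps * opt~ / (2 + eps); the check
            # that the instance has a hitting set (via B) is omitted.
            self.cnt += 1
            if self.cnt >= self.eps * self.opt_sim / (2 + self.eps):
                self._recompute()
                return True
            return False

        def insert_point(self, a):
            bisect.insort(self.S, a)
            if not self._bump():
                self.sol.add(a)

        def delete_point(self, a):
            self.S.pop(bisect.bisect_left(self.S, a))
            if not self._bump():
                self.sol.discard(a)
                j = bisect.bisect_left(self.S, a)
                if j > 0:
                    self.sol.add(self.S[j - 1])  # rightmost point left of a
                if j < len(self.S):
                    self.sol.add(self.S[j])      # leftmost point right of a

        def insert_interval(self, l, r):
            self.I.add((l, r))
            if not self._bump():
                j = bisect.bisect_left(self.S, l)
                if j < len(self.S) and self.S[j] <= r:
                    self.sol.add(self.S[j])      # a point of S hitting [l, r]

        def delete_interval(self, l, r):
            self.I.discard((l, r))
            self._bump()                         # solution remains feasible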

Correctness.

To see the correctness of our data structure, we have to show that SS^{*} is always a (1+ε)(1+\varepsilon)-approximate optimal hitting set for the current (S,)(S,\mathcal{I}), whenever (S,)(S,\mathcal{I}) has a hitting set. To this end, we observe some simple facts. First, the counter 𝖼𝗇𝗍\mathsf{cnt} is exactly the number of the operations processed after the last re-computation of SS^{*}, and the number 𝗈𝗉𝗍\mathsf{opt}^{\sim} is the quasi-optimum at the last re-computation of SS^{*}. Second, each local modification increases the size of SS^{*} by at most 1. Based on these two facts and the stability of dynamic interval hitting set (Lemma 16), we can easily bound the size of SS^{*}.

Lemma 21.

We have |S|(1+ε)𝗈𝗉𝗍|S^{*}|\leq(1+\varepsilon)\cdot\mathsf{opt} at any time.

With the above lemma in hand, it now suffices to show that SS^{*} is a feasible hitting set.

Lemma 22.

Whenever the current (S,)(S,\mathcal{I}) has a hitting set, SS^{*} is a hitting set for (S,)(S,\mathcal{I}).

Time analysis.

The construction time of 𝒟\mathcal{D} is clearly O~(n0)\widetilde{O}(n_{0}). Specifically, when constructing 𝒟\mathcal{D}, we need to build the data structures 𝒜,,𝒞1,𝒞2\mathcal{A},\mathcal{B},\mathcal{C}_{1},\mathcal{C}_{2}, all of which can be built in O~(n0)\widetilde{O}(n_{0}) time. Also, we need to compute an optimal hitting set for (S0,0)(S_{0},\mathcal{I}_{0}^{\prime}) using the output-sensitive algorithm, which can also be done in O~(n0)\widetilde{O}(n_{0}) time. We show that the amortized update time of 𝒟\mathcal{D} is O~(1/ε)\widetilde{O}(1/\varepsilon). After each operation, we need to update the data structures 𝒜,,𝒞1,𝒞2\mathcal{A},\mathcal{B},\mathcal{C}_{1},\mathcal{C}_{2}, which takes O~(1)\widetilde{O}(1) time. If we do local modification on SS^{*}, then it takes O~(1)\widetilde{O}(1) time. Indeed, in each local modification, there are O(1)O(1) elements to be added to or removed from SS^{*}. These elements can be found in O~(1)\widetilde{O}(1) time using the data structures 𝒞1\mathcal{C}_{1} and 𝒞2\mathcal{C}_{2}. After these elements are found, we need to modify the binary search tree that stores SS^{*} for supporting the desired queries, which can also be done in O~(1)\widetilde{O}(1) time. In the case where we re-compute a new SS^{*}, it takes O~(𝗈𝗉𝗍)\widetilde{O}(\mathsf{opt}) time. However, we can amortize this time over the operations in between the last re-computation of SS^{*} and the current one. Let 𝗈𝗉𝗍\mathsf{opt}^{\sim} be the quasi-optimum of (S,)(S,\mathcal{I}) at the last re-computation. Suppose there are tt operations in between the two re-computations. Then tε𝗈𝗉𝗍/(2+ε)t\geq\varepsilon\cdot\mathsf{opt}^{\sim}/(2+\varepsilon). By the stability of dynamic interval hitting set (Lemma 16), we have 𝗈𝗉𝗍𝗈𝗉𝗍+t\mathsf{opt}\leq\mathsf{opt}^{\sim}+t, because 𝗈𝗉𝗍\mathsf{opt} is also the quasi-optimum of the current (S,)(S,\mathcal{I}). It follows that

𝗈𝗉𝗍t𝗈𝗉𝗍+tt2+εε+1=O(1/ε).\frac{\mathsf{opt}}{t}\leq\frac{\mathsf{opt}^{\sim}+t}{t}\leq\frac{2+\varepsilon}{\varepsilon}+1=O(1/\varepsilon).

Thus, the amortized time for the current re-computation is O~(1/ε)\widetilde{O}(1/\varepsilon). We conclude the following.

Theorem 23.

For any given approximation factor ε>0\varepsilon>0, there is a (1+ε)(1+\varepsilon)-approximate dynamic interval hitting set data structure 𝒟\mathcal{D} with O~(1/ε)\widetilde{O}(1/\varepsilon) amortized update time and O~(n0)\widetilde{O}(n_{0}) construction time.

5.2 Dynamic set cover and hitting set in the partially dynamic settings

The idea of local modification can also be applied to dynamic geometric set cover (resp., hitting set) problems in the partially dynamic setting (which are stable by Lemma 16), as long as we have an output-sensitive algorithm for the problem and an efficient dynamic data structure that can indicate whether the current instance has a feasible solution or not.

As an example, let us consider dynamic quadrant set cover in the partially dynamic setting. Let (S,𝒬)(S,\mathcal{Q}) be a dynamic quadrant set cover instance with only point updates. As before, we denote by nn (resp., n0n_{0}) the size of the current (resp., initial) (S,𝒬)(S,\mathcal{Q}), and by 𝗈𝗉𝗍\mathsf{opt} the optimum of the current (S,𝒬)(S,\mathcal{Q}).

The data structure.

Our data structure 𝒟\mathcal{D} consists of three parts. The first part is the basic data structure 𝒜\mathcal{A} required for performing the output-sensitive algorithm of Theorem 12. The second part is the following dynamic data structure \mathcal{B} which can tell us whether the current instance has a set cover.

Lemma 24.

There exists a dynamic data structure \mathcal{B} built on (S,𝒬)(S,\mathcal{Q}) with O~(1)\widetilde{O}(1) update time and O~(n0)\widetilde{O}(n_{0}) construction time that can indicate whether the current (S,𝒬)(S,\mathcal{Q}) has a set cover or not.

The third part is a simple (static) data structure 𝒞\mathcal{C} built on 𝒬\mathcal{Q} that can report in O~(1)\widetilde{O}(1) time, for a given point a2a\in\mathbb{R}^{2}, a quadrant in 𝒬\mathcal{Q} that contains aa (if it exists); this can be done using a standard orthogonal range-search data structure, e.g., range tree, whose construction time is O~(|𝒬|)\widetilde{O}(|\mathcal{Q}|).

Maintaining a solution.

Let S0S_{0} be the initial SS, and define S0S0S_{0}^{\prime}\subseteq S_{0} as the subset consisting of the points covered by 𝒬\mathcal{Q}. At the beginning, we compute a μ\mu-approximate optimal set cover 𝒬\mathcal{Q}^{*} for (S0,𝒬)(S_{0}^{\prime},\mathcal{Q}), where μ\mu is the approximation ratio of the output-sensitive algorithm. This can be done as follows. We first compute S0S_{0}^{\prime} by testing whether each point in S0S_{0} is covered by some quadrant in 𝒬\mathcal{Q} using the data structure 𝒞\mathcal{C}, and then build on (S0,𝒬)(S_{0}^{\prime},\mathcal{Q}) the basic data structure required for the output-sensitive algorithm and perform the output-sensitive algorithm on (S0,𝒬)(S_{0}^{\prime},\mathcal{Q}). Set 𝖼𝗇𝗍=0\mathsf{cnt}=0 and 𝗈𝗉𝗍=|𝒬|\mathsf{opt}^{\sim}=|\mathcal{Q}^{*}|. After each operation, 𝒟\mathcal{D} first increments 𝖼𝗇𝗍\mathsf{cnt} by 1 and checks whether the current (S,𝒬)(S,\mathcal{Q}) has a set cover or not using the data structure \mathcal{B}. Whenever 𝖼𝗇𝗍(ε/μ)𝗈𝗉𝗍/(2+ε)\mathsf{cnt}\geq(\varepsilon/\mu)\cdot\mathsf{opt}^{\sim}/(2+\varepsilon) and the current (S,𝒬)(S,\mathcal{Q}) has a set cover, we re-compute a μ\mu-approximate optimal set cover 𝒬\mathcal{Q}^{*} for the current (S,𝒬)(S,\mathcal{Q}) using the output-sensitive algorithm, and reset 𝖼𝗇𝗍=0\mathsf{cnt}=0 and 𝗈𝗉𝗍=|𝒬|\mathsf{opt}^{\sim}=|\mathcal{Q}^{*}|. Otherwise, we apply local modification on 𝒬\mathcal{Q}^{*} as follows. If the operation inserts a point aa to SS, then include in 𝒬\mathcal{Q}^{*} an arbitrary quadrant in 𝒬\mathcal{Q} that contains aa (if it exists), which can be found using the data structure 𝒞\mathcal{C}. If the operation deletes a point aa from SS, then keep 𝒬\mathcal{Q}^{*} unchanged. After this, our data structure 𝒟\mathcal{D} uses 𝒬\mathcal{Q}^{*} as the solution for the current (S,𝒬)(S,\mathcal{Q}). As in the last section, we can store 𝒬\mathcal{Q}^{*} in a (dynamic) binary search tree to support the desired queries (by fixing some order on the family of quadrants).
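The sketch below (ours) records this update logic, treating the output-sensitive algorithm of Theorem 12, the feasibility structure \mathcal{B} of Lemma 24, and the range tree 𝒞 as black-box callables:

    class PartiallyDynamicQuadrantSetCover:
        """Sketch (ours) of the local-modification scheme for point updates."""

        def __init__(self, compute_cover, has_cover, covering_quadrant,
                     mu, eps):
            self.compute_cover = compute_cover          # Theorem 12, mu-approx
            self.has_cover = has_cover                  # structure B (Lemma 24)
            self.covering_quadrant = covering_quadrant  # range tree C
            self.mu, self.eps = mu, eps
            self._recompute()

        def _recompute(self):
            self.sol = set(self.compute_cover())
            self.cnt, self.opt_sim = 0, len(self.sol)

        def update(self, op, a):
            self.cnt += 1
            threshold = (self.eps / self.mu) * self.opt_sim / (2 + self.eps)
            if self.cnt >= threshold and self.has_cover():
                self._recompute()
                return
            if op == "insert":
                q = self.covering_quadrant(a)  # None if a is uncoverable
                if q is not None:
                    self.sol.add(q)
            # on a point deletion, the maintained cover stays unchanged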

Correctness.

Using the same argument as in the last section, we can show that |𝒬|(μ+ε)𝗈𝗉𝗍|\mathcal{Q}^{*}|\leq(\mu+\varepsilon)\cdot\mathsf{opt} at any time. Furthermore, it is also easy to see that 𝒬\mathcal{Q}^{*} is always a feasible set cover.

Lemma 25.

Whenever the current (S,𝒬)(S,\mathcal{Q}) has a set cover, 𝒬\mathcal{Q}^{*} is a set cover for (S,𝒬)(S,\mathcal{Q}).

Time analysis.

Using the same analysis as in the last section, we can show that the construction time of the data structure 𝒟\mathcal{D} is O~(n0)\widetilde{O}(n_{0}), and the amortized update time is O~(1/ε)\widetilde{O}(1/\varepsilon). Setting ε\varepsilon to be any constant, we conclude the following.

Theorem 26.

There exists an O(1)O(1)-approximate partially dynamic quadrant set cover data structure with O~(1)\widetilde{O}(1) amortized update time and O~(n0)\widetilde{O}(n_{0}) construction time.

Using the reduction of Lemma 6, we also have the following result. Note that although the reduction of Lemma 6 is described for the fully dynamic setting, it equally reduces dynamic unit-square set cover (resp., hitting set) and dynamic quadrant hitting set in the partially dynamic setting to dynamic quadrant set cover in the partially dynamic setting.

Corollary 27.

There exist O(1)O(1)-approximate partially dynamic unit-square set cover, partially dynamic unit-square hitting set, and partially dynamic quadrant hitting set data structures with O~(1)\widetilde{O}(1) amortized update time and O~(n0)\widetilde{O}(n_{0}) construction time.


Appendix A Missing proofs

In this section, we give the proofs of the technical lemmas in Section 4.1 and Section 4.2 that were omitted from the main body of the paper.

A.1 Proof of Lemma 1

Consider an interval set cover instance (S,)(S,\mathcal{I}) of size nn.

Lemma 28.

One can store SS in a basic data structure that can report in O~(1)\widetilde{O}(1) time, for a query point qq\in\mathbb{R}, the leftmost point in SS that is to the right of qq (if it exists).

Proof.

We simply store SS in a binary search tree. Given a query point qq\in\mathbb{R}, one can find the leftmost point in SS to the right of qq by searching in the tree in O~(1)\widetilde{O}(1) time. Clearly, the tree can be built in O~(|S|)\widetilde{O}(|S|) time and dynamized with O~(1)\widetilde{O}(1) update time, and hence it is basic. ∎

Lemma 29.

One can store \mathcal{I} in a basic data structure that can report in O~(1)\widetilde{O}(1) time, for a query point qq\in\mathbb{R}, the interval in \mathcal{I} with the rightmost right endpoint that contains qq (if it exists).

Proof.

We store \mathcal{I} in a binary search tree 𝒯\mathcal{T} where the key of each interval is its left endpoint. We augment each node 𝐮𝒯\mathbf{u}\in\mathcal{T} with an additional field which stores the interval in the subtree rooted at 𝐮\mathbf{u} that has the rightmost right endpoint. Given a query point qq\in\mathbb{R}, we look for the interval with the rightmost right endpoint among the intervals in 𝒯\mathcal{T} whose keys are smaller than or equal to qq. With the augmented fields, this interval can be found in O~(1)\widetilde{O}(1) time. If this interval contains qq, then we report it; otherwise, no interval in \mathcal{I} contains qq. Clearly, 𝒯\mathcal{T} can be built in O~(||)\widetilde{O}(|\mathcal{I}|) time and dynamized with O~(1)\widetilde{O}(1) update time, and hence it is basic. ∎

We store SS in the data structure of Lemma 28 and \mathcal{I} in the data structure of Lemma 29.

Recall the greedy algorithm for the static interval set cover problem. The algorithm computes an optimal set cover opt\mathcal{I}_{\text{opt}} for (S,)(S,\mathcal{I}) by repeatedly picking the leftmost uncovered point and covering it with the interval that contains it and has the rightmost right endpoint. Formally, the procedure is presented below.

  1. qq\leftarrow-\infty and opt\mathcal{I}_{\text{opt}}\leftarrow\emptyset.

  2. Find the leftmost point aSa\in S to the right of qq; if no such point exists, return opt\mathcal{I}_{\text{opt}}.

  3. optopt{Ia}\mathcal{I}_{\text{opt}}\leftarrow\mathcal{I}_{\text{opt}}\cup\{I_{a}\}, where IaI_{a} is the interval in \mathcal{I} covering aa with the rightmost right endpoint.

  4. qq\leftarrow the right endpoint of IaI_{a}, then go to Step 2.

With the data structures of Lemma 28 and Lemma 29 in hand, the greedy algorithm can be performed in O~(𝗈𝗉𝗍)\widetilde{O}(\mathsf{opt}) time, where 𝗈𝗉𝗍\mathsf{opt} is the optimum of (S,)(S,\mathcal{I}). Indeed, the data structure of Lemma 28 can be used to implement Step 2 and the data structure of Lemma 29 can be used to implement Step 3, both in O~(1)\widetilde{O}(1) time. Since the algorithm terminates after O(𝗈𝗉𝗍)O(\mathsf{opt}) iterations, the total time cost is O~(𝗈𝗉𝗍)\widetilde{O}(\mathsf{opt}).
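For reference, here is a self-contained transcription of the greedy procedure (ours); the linear scan in Step 3 stands in for the augmented tree of Lemma 29, which would answer the same query in O~(1) time.

    import bisect

    def greedy_interval_set_cover(points, intervals):
        """Optimal interval set cover, or None if some point is uncoverable.
        Intervals are (left, right) pairs; Steps 1-4 as above (ours)."""
        S = sorted(points)
        cover, q = [], float("-inf")
        while True:
            i = bisect.bisect_right(S, q)  # Step 2: leftmost point right of q
            if i == len(S):
                return cover               # all points covered
            a = S[i]
            # Step 3: among intervals covering a, take the one with the
            # rightmost right endpoint (Lemma 29 answers this in O~(1) time)
            best = max((iv for iv in intervals if iv[0] <= a <= iv[1]),
                       key=lambda iv: iv[1], default=None)
            if best is None:
                return None                # point a cannot be covered
            cover.append(best)
            q = best[1]                    # Step 4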

A.2 Proof of Lemma 3

Our algorithm makes a no-solution decision iff (Si,i)(S_{i},\mathcal{I}_{i}) has no set cover for some iPi\in P^{\prime}. If (Si,i)(S_{i},\mathcal{I}_{i}) has a set cover for all iPi\in P^{\prime}, then (S,)(S,\mathcal{I}) clearly has a set cover, because the portions JiJ_{i} for iPi\in P are coverable. If (Si,i)(S_{i},\mathcal{I}_{i}) has no set cover for some iPi\in P^{\prime}, then (S,)(S,\mathcal{I}) has no set cover, because JiJ_{i} is uncoverable and thus the points in SiS_{i} can only be covered by the intervals in i\mathcal{I}_{i}. Therefore, the no-solution decision of our data structure is correct.

A.3 Proof of Lemma 4

Suppose the portions J1,,JrJ_{1},\dots,J_{r} are sorted from left to right. Let sis_{i} be the separation point of JiJ_{i} and Ji+1J_{i+1}. Observe that an interval IoptI\in\mathcal{I}_{\text{opt}} belongs to exactly two i\mathcal{I}_{i}’s only if II contains one of the separation points s1,,sr1s_{1},\dots,s_{r-1}. We claim that for each sis_{i}, at most two intervals in opt\mathcal{I}_{\text{opt}} contain sis_{i}. Assume there are three intervals I,I,I+I^{-},I,I^{+} that contain sis_{i}. Without loss of generality, assume that II^{-} (resp., I+I^{+}) has the leftmost left endpoint (resp., the rightmost right endpoint) among I,I,I+I^{-},I,I^{+}. Then one can easily see that III+I\subseteq I^{-}\cup I^{+}. Therefore, opt\{I}\mathcal{I}_{\text{opt}}\backslash\{I\} is also a set cover for (S,)(S,\mathcal{I}), contradicting the optimality of opt\mathcal{I}_{\text{opt}}. Thus, at most two intervals in opt\mathcal{I}_{\text{opt}} contain sis_{i}. It follows that there are at most 2(r1)2(r-1) intervals in opt\mathcal{I}_{\text{opt}} that contain some separation point, and only these intervals can belong to exactly two i\mathcal{I}_{i}’s, which proves the lemma.

A.4 Proof of Lemma 6

We first observe that unit-square set cover and hitting set are equivalent. A unit square ZZ contains a point a2a\in\mathbb{R}^{2} iff the unit square centered at aa contains the center of ZZ. Therefore, given a unit-square set cover instance (S,𝒵)(S,\mathcal{Z}), if we replace each point aSa\in S with a unit square ZaZ_{a} centered at aa and replace each unit square Z𝒵Z\in\mathcal{Z} with the center cZc_{Z} of ZZ, then (S,𝒵)(S,\mathcal{Z}) is reduced to an equivalent unit-square hitting set instance ({cZ:Z𝒵},{Za:aS})(\{c_{Z}:Z\in\mathcal{Z}\},\{Z_{a}:a\in S\}). In the same way, one can also reduce a unit-square hitting set instance to an equivalent unit-square set cover instance. Thus, we only need to consider unit-square set cover.
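The symmetry underlying this reduction is easy to check numerically; the snippet below (ours) verifies it for axis-parallel unit squares represented by their centers.

    import random

    def contains(center, point):
        # A unit square (side length 1, axis-parallel) given by its center
        # contains a point iff both coordinates differ by at most 1/2.
        return (abs(point[0] - center[0]) <= 0.5 and
                abs(point[1] - center[1]) <= 0.5)

    for _ in range(1000):
        c = (random.uniform(0, 3), random.uniform(0, 3))  # center of Z
        a = (random.uniform(0, 3), random.uniform(0, 3))  # point of S
        # Z contains a  iff  the unit square centered at a contains c:
        assert contains(c, a) == contains(a, c)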

In order to reduce dynamic unit-square set cover to dynamic quadrant set cover, we apply a grid technique. To this end, let us first consider a static unit-square set cover instance (S,𝒵)(S,\mathcal{Z}). We build a grid Γ\varGamma on the plane consisting of square cells of side-length 1. For each cell \Box of the grid, we define S=SS_{\Box}=S\cap\Box and 𝒵={Z𝒵:Z}\mathcal{Z}_{\Box}=\{Z\in\mathcal{Z}:Z\cap\Box\neq\emptyset\}. We call a cell nonempty if SS_{\Box}\neq\emptyset or 𝒵\mathcal{Z}_{\Box}\neq\emptyset, and truly nonempty if SS_{\Box}\neq\emptyset. Let Ψ\varPsi be the set of all truly nonempty cells. It is clear that (S,𝒵)(S,\mathcal{Z}) has a set cover iff (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) has a set cover for all Ψ\Box\in\varPsi, since the points in SS_{\Box} can only be covered using the unit squares in 𝒵\mathcal{Z}_{\Box}. Let 𝒵𝒵\mathcal{Z}_{\Box}^{*}\subseteq\mathcal{Z}_{\Box} be a cc-approximate optimal set cover for (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}). We claim that Ψ𝒵\bigsqcup_{\Box\in\varPsi}\mathcal{Z}_{\Box}^{*} is a 4c4c-approximate optimal set cover for (S,𝒵)(S,\mathcal{Z}). Consider an optimal set cover 𝒵\mathcal{Z}^{*} for (S,𝒵)(S,\mathcal{Z}). We have |𝒵|c|𝒵𝒵||\mathcal{Z}_{\Box}^{*}|\leq c\cdot|\mathcal{Z}^{*}\cap\mathcal{Z}_{\Box}|, because 𝒵𝒵\mathcal{Z}^{*}\cap\mathcal{Z}_{\Box} is a set cover for (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}). On the other hand, we have Ψ|𝒵𝒵|4|𝒵|\sum_{\Box\in\varPsi}|\mathcal{Z}^{*}\cap\mathcal{Z}_{\Box}|\leq 4|\mathcal{Z}^{*}|, since any unit square can intersect at most four cells of Γ\varGamma. It follows that

|Ψ𝒵|=Ψ|𝒵|Ψc|𝒵𝒵|4c|𝒵|.\left|\bigsqcup_{\Box\in\varPsi}\mathcal{Z}_{\Box}^{*}\right|=\sum_{\Box\in\varPsi}|\mathcal{Z}_{\Box}^{*}|\leq\sum_{\Box\in\varPsi}c\cdot|\mathcal{Z}^{*}\cap\mathcal{Z}_{\Box}|\leq 4c\cdot|\mathcal{Z}^{*}|.

In this way, we reduce the instance (S,𝒵)(S,\mathcal{Z}) to the instances (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) for Ψ\Box\in\varPsi. At first glance, each (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) is still a unit-square set cover problem. However, it is in fact a quadrant set cover problem. Indeed, a cell \Box is a square of side-length 1, hence the intersection of any unit square ZZ and \Box is equal to the intersection of a quadrant QZQ_{Z} and \Box, where QZQ_{Z} is the quadrant obtained by removing the two edges of ZZ outside \Box. Since the points in SS_{\Box} are all contained in \Box, the quadrant QZQ_{Z} covers the same set of points in SS_{\Box} as the unit square ZZ. Therefore, (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) is equivalent to the quadrant set cover instance (S,{QZ:Z𝒵})(S_{\Box},\{Q_{Z}:Z\in\mathcal{Z}_{\Box}\}).
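A minimal sketch (ours) of this static reduction: each point goes to the cell containing it, and each unit square is recorded, in each of the at most four cells it meets, as the quadrant whose vertex is the corner of the square lying in that cell; degenerate boundary cases are ignored for simplicity.

    import math
    from collections import defaultdict

    def grid_reduce(points, square_centers):
        """Split a unit-square instance into per-cell quadrant instances."""
        cells_S = defaultdict(list)   # cell -> points of S in that cell
        cells_Q = defaultdict(list)   # cell -> list of (type, vertex)
        for a in points:
            cells_S[(math.floor(a[0]), math.floor(a[1]))].append(a)
        for cx, cy in square_centers:
            x1, y1 = cx - 0.5, cy - 0.5  # Z = [x1, x1+1] x [y1, y1+1]
            # Inside a cell, Z behaves like the quadrant whose vertex is
            # the corner of Z in that cell, opening toward Z's interior.
            for vertex, kind in (((x1, y1), "NE"), ((x1 + 1, y1), "NW"),
                                 ((x1, y1 + 1), "SE"),
                                 ((x1 + 1, y1 + 1), "SW")):
                cell = (math.floor(vertex[0]), math.floor(vertex[1]))
                cells_Q[cell].append((kind, vertex))
        return cells_S, cells_Q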

The above observation gives us a reduction from dynamic unit-square set cover to dynamic quadrant set cover. Suppose we have a cc-approximate dynamic quadrant set cover data structure 𝒟\mathcal{D} with f(n)f(n) amortized update time and O~(n0)\widetilde{O}(n_{0}) construction time, where ff is an increasing function. We are going to design a 4c4c-approximate dynamic unit-square set cover data structure 𝒟\mathcal{D}^{\prime}. Let (S,𝒵)(S,\mathcal{Z}) be a dynamic unit-square set cover instance. When constructing 𝒟\mathcal{D}^{\prime}, we build the grid Γ\varGamma described above and compute (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) for all nonempty cells. The grid Γ\varGamma is always fixed, while the instances (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) change along with the change of SS and 𝒵\mathcal{Z}. As argued before, we can regard each (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) as a dynamic quadrant set cover instance. We then construct a data structure 𝒟\mathcal{D}_{\Box} for each (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}), which is the dynamic quadrant set cover data structure 𝒟\mathcal{D} built on (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}). If (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) has a set cover, then 𝒟\mathcal{D}_{\Box} maintains a cc-approximate optimal set cover for (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}), which we denote by 𝒵\mathcal{Z}_{\Box}^{*}. Furthermore, we also need two data structures to maintain the nonempty and truly nonempty cells of Γ\varGamma, respectively. We store the nonempty cells in a (dynamic) binary search tree 𝒯1\mathcal{T}_{1} (by fixing some total order of the cells of Γ\varGamma). Each node 𝐮𝒯1\mathbf{u}\in\mathcal{T}_{1} has a pointer pointing to the data structure 𝒟\mathcal{D}_{\Box} where \Box is the cell corresponding to 𝐮\mathbf{u}. Similarly, we store all the truly nonempty cells in a (dynamic) binary search tree 𝒯2\mathcal{T}_{2}, with a pointer at each node pointing to the data structure 𝒟\mathcal{D}_{\Box} for the corresponding cell \Box. The data structures 𝒟\mathcal{D}_{\Box} and the binary search trees 𝒯1,𝒯2\mathcal{T}_{1},\mathcal{T}_{2} form our dynamic unit-square set cover data structure 𝒟\mathcal{D}^{\prime}. Besides this, 𝒟\mathcal{D}^{\prime} also maintains two numbers mm and ss, where mm is the number of the truly nonempty cells \Box such that (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) has no set cover and ss is the sum of |𝒵||\mathcal{Z}_{\Box}^{*}| over all truly nonempty cells \Box such that (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) has a set cover. The construction time of 𝒟\mathcal{D}^{\prime} is O~(n0)\widetilde{O}(n_{0}). Indeed, finding the nonempty (and truly nonempty) cells and building the binary search trees 𝒯1,𝒯2\mathcal{T}_{1},\mathcal{T}_{2} can be easily done in O~(n0)\widetilde{O}(n_{0}) time. Furthermore, the sum of the sizes of all (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) is O~(n0)\widetilde{O}(n_{0}), since each point in SS lies in exactly one cell and each unit-square in 𝒵\mathcal{Z} intersects at most four cells. Therefore, computing the instances (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) can be done in O~(n0)\widetilde{O}(n_{0}) time. The time for building each data structure 𝒟\mathcal{D}_{\Box} is near-linear in the size of (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}), thus building all 𝒟\mathcal{D}_{\Box} takes O~(n0)\widetilde{O}(n_{0}) time.

Next, we consider how to handle updates on (S,𝒵)(S,\mathcal{Z}). For an update on (S,𝒵)(S,\mathcal{Z}), there are (at most) four cells \Box for which (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}) changes, and we call them involved cells. For each involved cell \Box, we do the following. First, if \Box is previously empty (resp., not truly nonempty) and becomes nonempty (resp., truly nonempty) after the update, we insert a new node to the binary search tree 𝒯1\mathcal{T}_{1} (resp., 𝒯2\mathcal{T}_{2}) corresponding to \Box. Conversely, if \Box is previously nonempty (resp., truly nonempty) and becomes empty (resp., not truly nonempty) after the update, we delete the node corresponding to \Box from 𝒯1\mathcal{T}_{1} (resp., 𝒯2\mathcal{T}_{2}). After this, we need to update the data structure 𝒟\mathcal{D}_{\Box}. If \Box is previously empty and becomes nonempty after the update, we create a new data structure 𝒟\mathcal{D}_{\Box} for \Box, otherwise we find the data structure 𝒟\mathcal{D}_{\Box} (which can be done by searching the node 𝐮\mathbf{u} corresponding to \Box in 𝒯1\mathcal{T}_{1} and using the pointer at 𝐮\mathbf{u}) and update it. We observe that all the above work can be done in O~(f(n))\widetilde{O}(f(n)) amortized time. Indeed, updating the binary search trees 𝒯1\mathcal{T}_{1} and 𝒯2\mathcal{T}_{2} takes logarithmic time. By assumption, the amortized update time of 𝒟\mathcal{D}_{\Box} is f(n)f(n_{\Box}), where nn_{\Box} is the size of the current (S,𝒵)(S_{\Box},\mathcal{Z}_{\Box}), which is smaller than or equal to f(n)f(n) as ff is an increasing function. Since there are at most four involved cells, the amortized update time of 𝒟\mathcal{D}^{\prime} is O~(f(n))\widetilde{O}(f(n)).

At any point, if m>0m>0, then 𝒟\mathcal{D}^{\prime} directly decides that the current (S,𝒵)(S,\mathcal{Z}) has no set cover. Otherwise, 𝒟\mathcal{D}^{\prime} uses 𝒵appx=Ψ𝒵\mathcal{Z}_{\text{appx}}=\bigsqcup_{\Box\in\varPsi}\mathcal{Z}_{\Box}^{*} as the set cover solution for the current (S,𝒵)(S,\mathcal{Z}), where Ψ\varPsi is the set of the truly nonempty cells. By our previous observation, 𝒵appx\mathcal{Z}_{\text{appx}} is a 4c4c-approximate optimal set cover for (S,𝒵)(S,\mathcal{Z}). Supporting the desired queries for 𝒵appx\mathcal{Z}_{\text{appx}} is quite easy. First, the size of 𝒵appx\mathcal{Z}_{\text{appx}} is just the number ss maintained by 𝒟\mathcal{D}^{\prime}, hence the size query for 𝒵appx\mathcal{Z}_{\text{appx}} can be answered in O(1)O(1) time. To answer a membership query, let Z𝒵Z\in\mathcal{Z} be the query element. We want to know the multiplicity of ZZ in 𝒵appx\mathcal{Z}_{\text{appx}}. There are at most four cells of Γ\varGamma that intersect ZZ. For each such cell \Box, we search in the binary search tree 𝒯2\mathcal{T}_{2}. If we cannot find a node of 𝒯2\mathcal{T}_{2} corresponding to \Box, then Ψ\Box\notin\varPsi. Otherwise, the node 𝐮𝒯2\mathbf{u}\in\mathcal{T}_{2} corresponding to \Box gives us a pointer to the data structure 𝒟\mathcal{D}_{\Box}, which can report the multiplicity of ZZ in 𝒵\mathcal{Z}_{\Box}^{*} in O(log|𝒵|)O(\log|\mathcal{Z}_{\Box}^{*}|) time. The sum of all these multiplicities is just the multiplicity of ZZ in 𝒵appx\mathcal{Z}_{\text{appx}}. The time cost for answering the membership query is O(log|Ψ|+log|𝒵appx|)O(\log|\varPsi|+\log|\mathcal{Z}_{\text{appx}}|), since the size of 𝒯2\mathcal{T}_{2} is |Ψ||\varPsi|. Note that |Ψ||𝒵appx||\varPsi|\leq|\mathcal{Z}_{\text{appx}}|, since 𝒵\mathcal{Z}_{\Box}^{*}\neq\emptyset for all Ψ\Box\in\varPsi (for otherwise 𝒵\mathcal{Z}_{\Box}^{*} is not a set cover for SS_{\Box}). Thus, a membership query takes O(log|𝒵appx|)O(\log|\mathcal{Z}_{\text{appx}}|) time. To answer the reporting query, we do a traversal in 𝒯2\mathcal{T}_{2}. For each node 𝐮𝒯2\mathbf{u}\in\mathcal{T}_{2}, we use the data structure 𝒟\mathcal{D}_{\Box} to report the elements in 𝒵\mathcal{Z}_{\Box}^{*} in O(|𝒵|)O(|\mathcal{Z}_{\Box}^{*}|) time, where Ψ\Box\in\varPsi is the truly nonempty cell corresponding to 𝐮\mathbf{u}. The time cost is O(|Ψ|+|𝒵appx|)O(|\varPsi|+|\mathcal{Z}_{\text{appx}}|). Again, since |Ψ||𝒵appx||\varPsi|\leq|\mathcal{Z}_{\text{appx}}|, the reporting query can be answered in O(|𝒵appx|)O(|\mathcal{Z}_{\text{appx}}|) time. To summarize, 𝒟\mathcal{D}^{\prime} is a 4c4c-approximate dynamic unit-square set cover data structure with O~(f(n))\widetilde{O}(f(n)) amortized update time and O~(n0)\widetilde{O}(n_{0}) construction time.

Finally, we show how to reduce dynamic quadrant hitting set to dynamic quadrant set cover. Let us first consider a static quadrant hitting set instance (S,𝒬)(S,\mathcal{Q}). There are four types of quadrants in 𝒬\mathcal{Q}, southeast, southwest, northeast, and northwest; we denote by 𝒬SE,𝒬SW,𝒬NE,𝒬NW𝒬\mathcal{Q}^{\text{SE}},\mathcal{Q}^{\text{SW}},\mathcal{Q}^{\text{NE}},\mathcal{Q}^{\text{NW}}\subseteq\mathcal{Q} the sub-collections consisting of these types of quadrants, respectively. Let S1,S2,S3,S4S_{1},S_{2},S_{3},S_{4} be cc-approximate optimal hitting sets for the instances (S,𝒬SE),(S,𝒬SW),(S,𝒬NE),(S,𝒬NW)(S,\mathcal{Q}^{\text{SE}}),(S,\mathcal{Q}^{\text{SW}}),(S,\mathcal{Q}^{\text{NE}}),(S,\mathcal{Q}^{\text{NW}}), respectively. We claim that i=14Si\bigcup_{i=1}^{4}S_{i} is a 4c4c-approximate hitting set for (S,𝒬)(S,\mathcal{Q}). Indeed, we have |S1|c𝗈𝗉𝗍SEc𝗈𝗉𝗍|S_{1}|\leq c\cdot\mathsf{opt}^{SE}\leq c\cdot\mathsf{opt}, where 𝗈𝗉𝗍SE\mathsf{opt}^{SE} is the optimum of (S,𝒬SE)(S,\mathcal{Q}^{\text{SE}}) and 𝗈𝗉𝗍\mathsf{opt} is the optimum of (S,𝒬)(S,\mathcal{Q}). Similarly, we can show that |Si|c𝗈𝗉𝗍|S_{i}|\leq c\cdot\mathsf{opt} for all i{1,,4}i\in\{1,\dots,4\}. Thus, |i=14Si|i=14|Si|4c𝗈𝗉𝗍|\bigcup_{i=1}^{4}S_{i}|\leq\sum_{i=1}^{4}|S_{i}|\leq 4c\cdot\mathsf{opt}. Therefore, if we can solve dynamic quadrant hitting set for a single type of quadrants with an approximation factor cc, we are able to solve the general dynamic quadrant hitting set problem with an approximation factor 4c4c.

It is easy to see that dynamic quadrant hitting set for a single type of quadrants is equivalent to dynamic quadrant set cover for a single type of quadrants. To see this, let us consider southeast quadrants. Note that a point aa is contained in a southeast quadrant QQ iff the northwest quadrant whose vertex is aa (denoted by QaQ_{a}) contains the vertex vQv_{Q} of QQ. Therefore, given a quadrant hitting set instance (S,𝒬SE)(S,\mathcal{Q}^{\text{SE}}) where 𝒬SE\mathcal{Q}^{\text{SE}} only contains southeast quadrants, if we replace each point aSa\in S with the northwest quadrant QaQ_{a} and replace each southeast quadrant Q𝒬SEQ\in\mathcal{Q}^{\text{SE}} with its vertex vQv_{Q}, then (S,𝒬SE)(S,\mathcal{Q}^{\text{SE}}) is reduced to an equivalent quadrant set cover instance ({vQ:Q𝒬SE},{Qa:aS})(\{v_{Q}:Q\in\mathcal{Q}^{\text{SE}}\},\{Q_{a}:a\in S\}). To summarize, if we can solve dynamic quadrant set cover with an approximation factor cc, then we can also solve dynamic quadrant hitting set for a single type of quadrants with an approximation factor cc and in turn solve the general dynamic quadrant hitting set problem with an approximation factor 4c4c. This completes the proof.
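As a supplementary note, the point-quadrant duality used in the last step can be checked mechanically; a quick verification (ours) for southeast quadrants:

    def in_se_quadrant(v, a):
        # SE quadrant with vertex v = {(x, y) : x >= v.x, y <= v.y}
        return a[0] >= v[0] and a[1] <= v[1]

    def in_nw_quadrant(v, a):
        # NW quadrant with vertex v = {(x, y) : x <= v.x, y >= v.y}
        return a[0] <= v[0] and a[1] >= v[1]

    # a lies in the SE quadrant with vertex v_Q iff the NW quadrant with
    # vertex a contains v_Q, so (S, Q^SE) as a hitting set instance equals
    # ({v_Q : Q in Q^SE}, {Q_a : a in S}) as a set cover instance.
    assert in_se_quadrant((1, 2), (3, 0)) == in_nw_quadrant((3, 0), (1, 2))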

A.5 Proof of Lemma 8

Our algorithm makes a no-solution decision iff (Si,j,𝒬i,j)(S_{i,j},\mathcal{Q}_{i,j}) has no set cover for some (i,j)P(i,j)\in P^{\prime}. If (Si,j,𝒬i,j)(S_{i,j},\mathcal{Q}_{i,j}) has a set cover for all (i,j)P(i,j)\in P^{\prime}, then (S,𝒬)(S,\mathcal{Q}) clearly has a set cover, because the cells i,j\Box_{i,j} for (i,j)P(i,j)\in P are coverable. Suppose (Si,j,𝒬i,j)(S_{i,j},\mathcal{Q}_{i,j}) has no set cover for some (i,j)P(i,j)\in P^{\prime}. Then there exists a point aSi,ja\in S_{i,j} that is not covered by any quadrant in 𝒬i,j\mathcal{Q}_{i,j}. We claim that aa is not covered by any quadrant in 𝒬\mathcal{Q}. Consider a quadrant Q𝒬Q\in\mathcal{Q}. If QQ does not intersect i,j\Box_{i,j}, then aQa\notin Q. Otherwise, QQ must partially intersect i,j\Box_{i,j}, because i,j\Box_{i,j} is uncoverable. If the vertex of QQ lies in i,j\Box_{i,j}, then Q𝒬i,jQ\in\mathcal{Q}_{i,j} and thus aQa\notin Q. The remaining case is that QQ partially intersects i,j\Box_{i,j} and contains one edge of i,j\Box_{i,j}. Without loss of generality, assume QQ left intersects i,j\Box_{i,j}. The rightmost quadrant QQ^{\prime} that left intersects i,j\Box_{i,j} is contained in 𝒬i,j\mathcal{Q}_{i,j}. Since Qi,jQi,jQ\cap\Box_{i,j}\subseteq Q^{\prime}\cap\Box_{i,j} and aQa\notin Q^{\prime}, we have aQa\notin Q. It follows that aa is not covered by 𝒬\mathcal{Q} and hence (S,𝒬)(S,\mathcal{Q}) has no set cover. Therefore, the no-solution decision of our data structure is correct.

A.6 Proof of Lemma 9

Fix (i,j)P(i,j)\in P^{\prime}. Let 𝒬𝒬opt\mathcal{Q}^{\sim}\subseteq\mathcal{Q}_{\textnormal{opt}} consist of all quadrants in 𝒬opt\mathcal{Q}_{\textnormal{opt}} that intersect i,j\Box_{i,j}. Then 𝒬\mathcal{Q}^{\sim} covers all points in Si,jS_{i,j}, because the points in Si,jS_{i,j} can only be covered by quadrants that intersect i,j\Box_{i,j}. Since i,j\Box_{i,j} is uncoverable by 𝒬\mathcal{Q}, all quadrants in 𝒬\mathcal{Q}^{\sim} must partially intersect i,j\Box_{i,j}. It follows that a quadrant in 𝒬\mathcal{Q}^{\sim} either has its vertex in i,j\Box_{i,j} or contains one edge of i,j\Box_{i,j}. We claim that if a point ai,ja\in\Box_{i,j} is covered by 𝒬\mathcal{Q}^{\sim}, then aa is covered by either 𝒬𝒬i,j\mathcal{Q}^{\sim}\cap\mathcal{Q}_{i,j}^{\prime} or a special quadrant in 𝒬i,j\mathcal{Q}_{i,j}. Let Q𝒬Q\in\mathcal{Q}^{\sim} be a quadrant such that aQa\in Q. If Q𝒬i,jQ\in\mathcal{Q}_{i,j}^{\prime}, we are done. Otherwise, QQ must contain one edge of i,j\Box_{i,j}, say the left edge. In this case, 𝒬i,j\mathcal{Q}_{i,j} should contain a special quadrant Q0Q_{0}, which is the rightmost quadrant that left intersects i,j\Box_{i,j}. It is clear that Qi,jQ0i,jQ\cap\Box_{i,j}\subseteq Q_{0}\cap\Box_{i,j}, which implies that aQ0a\in Q_{0}. From this claim, it directly follows that all points in Si,jS_{i,j} are covered by (𝒬𝒬i,j)𝒬i,j′′(\mathcal{Q}^{\sim}\cap\mathcal{Q}_{i,j}^{\prime})\cup\mathcal{Q}_{i,j}^{\prime\prime}, where 𝒬i,j′′𝒬i,j\mathcal{Q}_{i,j}^{\prime\prime}\subseteq\mathcal{Q}_{i,j} consists of the special quadrants. Since (𝒬𝒬i,j)𝒬i,j′′𝒬i,j(\mathcal{Q}^{\sim}\cap\mathcal{Q}_{i,j}^{\prime})\cup\mathcal{Q}_{i,j}^{\prime\prime}\subseteq\mathcal{Q}_{i,j} and (𝒬𝒬i,j)𝒬i,j′′(\mathcal{Q}^{\sim}\cap\mathcal{Q}_{i,j}^{\prime})\cup\mathcal{Q}_{i,j}^{\prime\prime} covers Si,jS_{i,j}, we have |(𝒬𝒬i,j)𝒬i,j′′|𝗈𝗉𝗍i,j|(\mathcal{Q}^{\sim}\cap\mathcal{Q}_{i,j}^{\prime})\cup\mathcal{Q}_{i,j}^{\prime\prime}|\leq\mathsf{opt}_{i,j}. Note that 𝒬𝒬i,j=𝒬𝗈𝗉𝗍𝒬i,j\mathcal{Q}^{\sim}\cap\mathcal{Q}_{i,j}^{\prime}=\mathcal{Q}_{\mathsf{opt}}\cap\mathcal{Q}_{i,j}^{\prime} and |𝒬i,j′′|4|\mathcal{Q}_{i,j}^{\prime\prime}|\leq 4. So we have

|𝒬opt𝒬i,j|+4=|𝒬𝒬i,j|+4|(𝒬𝒬i,j)𝒬i,j′′|𝗈𝗉𝗍i,j.|\mathcal{Q}_{\textnormal{opt}}\cap\mathcal{Q}_{i,j}^{\prime}|+4=|\mathcal{Q}^{\sim}\cap\mathcal{Q}_{i,j}^{\prime}|+4\geq|(\mathcal{Q}^{\sim}\cap\mathcal{Q}_{i,j}^{\prime})\cup\mathcal{Q}_{i,j}^{\prime\prime}|\geq\mathsf{opt}_{i,j}.

Equation 5 then follows from the equation

𝗈𝗉𝗍=|𝒬opt|=i=1rj=1r|𝒬opt𝒬i,j|(i,j)P|𝒬opt𝒬i,j|.\mathsf{opt}=|\mathcal{Q}_{\textnormal{opt}}|=\sum_{i=1}^{r}\sum_{j=1}^{r}|\mathcal{Q}_{\textnormal{opt}}\cap\mathcal{Q}_{i,j}^{\prime}|\geq\sum_{(i,j)\in P^{\prime}}|\mathcal{Q}_{\textnormal{opt}}\cap\mathcal{Q}_{i,j}^{\prime}|.

A.7 Proof of Lemma 10

It suffices to show k=1r(|Si,k|+|𝒬i,k|)=O(f(n0,ε)+r)\sum_{k=1}^{r}(|S_{i,k}|+|\mathcal{Q}_{i,k}|)=O(f(n_{0},\varepsilon)+r) for all i{1,,r}i\in\{1,\dots,r\} at any time in the first period. Fix i{1,,r}i\in\{1,\dots,r\}. Initially, the ii-th row of the grid contains O(f(n0,ε))O(f(n_{0},\varepsilon)) points in SS and O(f(n0,ε))O(f(n_{0},\varepsilon)) vertices of the quadrants in 𝒬\mathcal{Q}. Since the period only consists of f(n0,ε)f(n_{0},\varepsilon) operations, the ii-th row always contains O(f(n0,ε))O(f(n_{0},\varepsilon)) points in SS and O(f(n0,ε))O(f(n_{0},\varepsilon)) vertices of the quadrants in 𝒬\mathcal{Q} during the entire period. Thus, at any time in the period, k=1r|Si,k|=O(f(n0,ε))\sum_{k=1}^{r}|S_{i,k}|=O(f(n_{0},\varepsilon)) and the total number of the non-special quadrants in 𝒬i,1,,𝒬i,r\mathcal{Q}_{i,1},\dots,\mathcal{Q}_{i,r} is bounded by O(f(n0,ε))O(f(n_{0},\varepsilon)). The number of the special quadrants in 𝒬i,1,,𝒬i,r\mathcal{Q}_{i,1},\dots,\mathcal{Q}_{i,r} is clearly O(r)O(r), since each 𝒬i,k\mathcal{Q}_{i,k} contains at most four special quadrants. Thus, the equation k=1r(|Si,k|+|𝒬i,k|)=O(f(n0,ε)+r)\sum_{k=1}^{r}(|S_{i,k}|+|\mathcal{Q}_{i,k}|)=O(f(n_{0},\varepsilon)+r) holds at any time in the period.

A.8 Proof of Lemma 11

We first prove an invariant of our algorithm: whenever aa is updated in the algorithm, the following two properties hold.

  1. aa is set to be the leftmost point in SUSES\cap U^{\text{SE}} that is not covered by 𝒬ans\mathcal{Q}_{\text{ans}}.

  2. Any point in SUSES\cap U^{\text{SE}} above aa is not covered by 𝒬ans\mathcal{Q}_{\text{ans}}.

The first update of aa happens in Step 2. At this time, 𝒬ans={Φ(σ,𝒬SW),Φ(σ,𝒬SE)}\mathcal{Q}_{\text{ans}}=\{\varPhi_{\rightarrow}(\sigma,\mathcal{Q}^{\text{SW}}),\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}})\}, y~=y(Φ(σ,𝒬SE))\tilde{y}=y(\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}})), and aa is set to be ϕ(y~)\phi(\tilde{y}), i.e., the leftmost point in SUSES\cap U^{\text{SE}} whose yy-coordinate is larger than y~\tilde{y}. In order to prove the invariant, we only need to show that (1) any point in SUSES\cap U^{\text{SE}} strictly to the left of aa is covered by 𝒬ans\mathcal{Q}_{\text{ans}} and (2) any point in SUSES\cap U^{\text{SE}} above aa (including aa itself) is not covered by 𝒬ans\mathcal{Q}_{\text{ans}}. Let bSUSEb\in S\cap U^{\text{SE}} be a point strictly to the left of aa. First, the yy-coordinate of bb is at most y~\tilde{y}, since aa is the leftmost point in SUSES\cap U^{\text{SE}} whose yy-coordinate is larger than y~\tilde{y}. Therefore, if bb is to the right of σ\sigma, then bΦ(σ,𝒬SE)b\in\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}}). Now assume bb is strictly to the left of σ\sigma. Because σ\sigma is on the boundary γ\gamma of USEU^{\text{SE}}, bb must be below σ\sigma. Thus, bΦ(σ,𝒬SW)b\in\varPhi_{\rightarrow}(\sigma,\mathcal{Q}^{\text{SW}}). We see that 𝒬ans\mathcal{Q}_{\text{ans}} covers bb. Let aSUSEa^{\prime}\in S\cap U^{\text{SE}} be a point above aa. Then the yy-coordinate of aa^{\prime} is greater than y~\tilde{y}, which implies aΦ(σ,𝒬SE)a^{\prime}\notin\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}}). Furthermore, aa^{\prime} must be strictly to the right of σ\sigma. Indeed, if aa^{\prime} is to the left of σ\sigma, then aa^{\prime} is below σ\sigma (as argued above) and hence the yy-coordinate of aa^{\prime} is at most y~\tilde{y}, resulting in a contradiction. Now we see aa^{\prime} is strictly above σ\sigma and strictly to the right of σ\sigma. Note that σ\sigma is on the boundary of USWU^{\text{SW}}, which implies aUSWa^{\prime}\notin U^{\text{SW}} and in particular aΦ(σ,𝒬SW)a^{\prime}\notin\varPhi_{\rightarrow}(\sigma,\mathcal{Q}^{\text{SW}}). So aa^{\prime} is not covered by 𝒬ans\mathcal{Q}_{\text{ans}}. The invariant holds when we update aa in Step 2.

In the rest of the procedure, aa is only updated in Step 5. Note that Step 3–5 form a loop in which we include a constant number of quadrants to cover aa, see Figure 4. It suffices to show that if the two properties hold at the beginning of an iteration of the loop (before Step 3), then they also hold after we update aa in Step 5 of that iteration. Suppose we are at the beginning of an iteration, and the two properties hold. If aUNEa\in U^{\text{NE}}, then we go directly from Step 3 to Step 6 and the algorithm terminates. Otherwise, we go to Step 4. Here we need to distinguish two cases: aUNWa\in U^{\text{NW}} and aUNWa\notin U^{\text{NW}}.

Figure 4: Different options for covering aa, depending on what other quadrant types cover aa. The shaded region contains all remaining uncovered points (top-left figure). If aUNEa\in U^{\text{NE}}, then the entire shaded region can be covered by two quadrants (top-right figure). Otherwise (bottom figures), some progress can be made using up to three quadrants from 𝒬SE\mathcal{Q}^{\text{SE}} and 𝒬NW\mathcal{Q}^{\text{NW}}. After choosing these quadrants, the remaining uncovered points of SUSES\cap U^{\text{SE}} are not covered by any quadrant of 𝒬SE\mathcal{Q}^{\text{SE}}, 𝒬NE\mathcal{Q}^{\text{NE}}, and 𝒬NW\mathcal{Q}^{\text{NW}} that also covers aa. Note that aUSWa\not\in U^{\text{SW}}.

[Case 1] We first consider the case aUNWa\notin U^{\text{NW}}. In this case, we only include in 𝒬ans\mathcal{Q}_{\text{ans}} a new quadrant Q=Φ(a,𝒬SE)Q=\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}). Note that after including QQ, 𝒬ans\mathcal{Q}_{\text{ans}} covers all the points in SUSES\cap U^{\text{SE}} whose yy-coordinates are at most y(Q)y(Q). To see this, consider a point bSUSEb\in S\cap U^{\text{SE}} whose yy-coordinate is at most y(Q)y(Q). If bb is strictly to the left of aa, then bb is covered by 𝒬ans\mathcal{Q}_{\text{ans}} even before we include QQ, since aa is the leftmost point in SUSES\cap U^{\text{SE}} that is not covered by 𝒬ans\mathcal{Q}_{\text{ans}} at the beginning of this iteration (by our assumption). Otherwise, bb is to the right of aa and is hence covered by QQ. On the other hand, we also notice that after including QQ, 𝒬ans\mathcal{Q}_{\text{ans}} does not cover any of the points in SUSES\cap U^{\text{SE}} whose yy-coordinates are greater than y(Q)y(Q). To see this, consider a point bSUSEb\in S\cap U^{\text{SE}} whose yy-coordinate is greater than y(Q)y(Q). Then bQb\notin Q and bb is strictly above aa (since aQa\in Q). By our assumption, at the beginning of this iteration (when QQ is not included in 𝒬ans\mathcal{Q}_{\text{ans}}), 𝒬ans\mathcal{Q}_{\text{ans}} does not cover any point above aa, and in particular does not cover bb. Since bQb\notin Q, we see that, after we include QQ in 𝒬ans\mathcal{Q}_{\text{ans}}, the new 𝒬ans\mathcal{Q}_{\text{ans}} still does not cover bb. To summarize, the new 𝒬ans\mathcal{Q}_{\text{ans}} covers all points in SUSES\cap U^{\text{SE}} whose yy-coordinates are at most y(Q)y(Q) and none of the points in SUSES\cap U^{\text{SE}} whose yy-coordinates are greater than y(Q)y(Q). Therefore, when we set aa to be ϕ(y(Q))\phi(y(Q)) in Step 5, aa is the leftmost point in SUSES\cap U^{\text{SE}} that is not covered by 𝒬ans\mathcal{Q}_{\text{ans}} and any point in SUSES\cap U^{\text{SE}} above aa is not covered by 𝒬ans\mathcal{Q}_{\text{ans}}.

[Case 2] Next, we consider the case aUNWa\in U^{\text{NW}}. In this case, we include in 𝒬ans\mathcal{Q}_{\text{ans}} three new quadrants: Φ(a,𝒬NW)\varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}), Φ(a,𝒬SE)\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}), and Q=Φ(v,𝒬SE)Q=\varPhi_{\uparrow}(v,\mathcal{Q}^{\text{SE}}) where vv is the vertex of Φ(a,𝒬NW)\varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}). We first show that after including these three new quadrants, 𝒬ans\mathcal{Q}_{\text{ans}} covers all the points in SUSES\cap U^{\text{SE}} whose yy-coordinates are at most y(Q)y(Q). Consider a point bSUSEb\in S\cap U^{\text{SE}} whose yy-coordinate is at most y(Q)y(Q). If the xx-coordinate of bb is at least x(Q)x(Q), then bQb\in Q. Otherwise, bb is strictly to the left of vv (because vQv\in Q). Thus, if bb is above vv, then bΦ(a,𝒬NW)b\in\varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}). It now suffices to consider the case when bb is strictly below vv. In this case, bb is strictly below aa, since vv is the vertex of Φ(a,𝒬NW)\varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}). If bb is to the right of aa, then bΦ(a,𝒬SE)b\in\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}). If bb is strictly to the left of aa, then bb is covered by 𝒬ans\mathcal{Q}_{\text{ans}} even before we include the three new quadrants, since aa is the leftmost point in SUSES\cap U^{\text{SE}} that is not covered by 𝒬ans\mathcal{Q}_{\text{ans}} at the beginning of this iteration (by our assumption). In any case, bb is covered by 𝒬ans\mathcal{Q}_{\text{ans}}. On the other hand, we notice that after including these three new quadrants, 𝒬ans\mathcal{Q}_{\text{ans}} does not cover any of the points in SUSES\cap U^{\text{SE}} whose yy-coordinates are greater than y(Q)y(Q). To see this, consider a point bSUSEb\in S\cap U^{\text{SE}} whose yy-coordinate is greater than y(Q)y(Q). We first establish some obvious facts. Since the NW quadrant Φ(a,𝒬NW)\varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}) and the SE quadrant Φ(a,𝒬SE)\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}) both contain aa, we have vΦ(a,𝒬SE)v\in\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}). It follows that y(Φ(a,𝒬SE))y(Q)y(\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}))\leq y(Q), by the definition of QQ. Therefore, bb is strictly above aa, since the yy-coordinate of aa is at most y(Φ(a,𝒬SE))y(\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}})). By our assumption, at the beginning of this iteration (when the three new quadrants are not included in 𝒬ans\mathcal{Q}_{\text{ans}}), 𝒬ans\mathcal{Q}_{\text{ans}} does not cover any point above aa, and in particular does not cover bb. Also, the fact y(Φ(a,𝒬SE))y(Q)y(\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}))\leq y(Q) implies bΦ(a,𝒬SE)b\notin\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}). To see bΦ(a,𝒬NW)b\notin\varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}), let Q𝒬SEQ^{\prime}\in\mathcal{Q}^{\text{SE}} be a quadrant that contains bb (such a quadrant exists as bUSEb\in U^{\text{SE}}). Then y(Q)>y(Q)y(Q^{\prime})>y(Q). By the definition of QQ, we have vQv\notin Q^{\prime} and thus QΦ(a,𝒬NW)=Q^{\prime}\cap\varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}})=\emptyset, which implies bΦ(a,𝒬NW)b\notin\varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}). Finally, it is clear that bQb\notin Q. So after we include the three new quadrants in 𝒬ans\mathcal{Q}_{\text{ans}}, the new 𝒬ans\mathcal{Q}_{\text{ans}} still does not cover bb. To summarize, the new 𝒬ans\mathcal{Q}_{\text{ans}} covers all points in SUSES\cap U^{\text{SE}} whose yy-coordinates are at most y(Q)y(Q) and none of the points in SUSES\cap U^{\text{SE}} whose yy-coordinates are greater than y(Q)y(Q). Therefore, when we set aa to be ϕ(y(Q))\phi(y(Q)) in Step 5, aa is the leftmost point in SUSES\cap U^{\text{SE}} that is not covered by 𝒬ans\mathcal{Q}_{\text{ans}} and any point in SUSES\cap U^{\text{SE}} above aa is not covered by 𝒬ans\mathcal{Q}_{\text{ans}}.

Combining the discussions for the two cases, the invariant is proved. With the invariant in hand, we now prove the lemma. For convenience, we denote by aia_{i} the point aa in the ii-th iteration (before the update of Step 5) of the loop Step 3–5. The invariant of our algorithm guarantees that aia_{i} is the leftmost point in SUSES\cap U^{\text{SE}} that is not covered by 𝒬ans\mathcal{Q}_{\text{ans}} at the beginning of the ii-th iteration. Furthermore, in the ii-th iteration, we always add the quadrant Φ(ai,𝒬SE)\varPhi_{\uparrow}(a_{i},\mathcal{Q}^{\text{SE}}) to 𝒬ans\mathcal{Q}_{\text{ans}}, hence aia_{i} is covered by 𝒬ans\mathcal{Q}_{\text{ans}} at the end of the ii-th iteration. This implies that our algorithm always terminates, because 𝒬ans\mathcal{Q}_{\text{ans}} covers at least one more point in SUSES\cap U^{\text{SE}} in each iteration.

We first show (the eventual) 𝒬ans\mathcal{Q}_{\text{ans}} covers all points in SUSES\cap U^{\text{SE}}. If we go to Step 6 from Step 1, then SUSE=S\cap U^{\text{SE}}=\emptyset and nothing needs to be proved. If we go to Step 6 from Step 2, we know that the yy-coordinates of all points in SUSES\cap U^{\text{SE}} are at most y(Φ(σ,𝒬SE))y(\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}})). Consider a point aSUSEa\in S\cap U^{\text{SE}}. If aa is to the right of σ\sigma, then aΦ(σ,𝒬SE)a\in\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}}), as the yy-coordinate of aa is at most y(Φ(σ,𝒬SE))y(\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}})). If aa is to the left of σ\sigma, then aΦ(σ,𝒬SW)a\in\varPhi_{\rightarrow}(\sigma,\mathcal{Q}^{\text{SW}}), because any point in USEU^{\text{SE}} to the left of σ\sigma is covered by Φ(σ,𝒬SW)\varPhi_{\rightarrow}(\sigma,\mathcal{Q}^{\text{SW}}). Thus, aa is covered by 𝒬ans\mathcal{Q}_{\text{ans}}, implying that all points in SUSES\cap U^{\text{SE}} are covered by 𝒬ans\mathcal{Q}_{\text{ans}}. The third case is that we go to Step 6 from Step 3. By the invariant, we know that aa is the leftmost point in SUSES\cap U^{\text{SE}} that is not covered by 𝒬ans\mathcal{Q}_{\text{ans}} just before Step 3. When we include the two quadrants Φ(a,𝒬NE)\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{NE}}) and Φ(a,𝒬SE)\varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}) in 𝒬ans\mathcal{Q}_{\text{ans}}, any point to the right of aa is covered by 𝒬ans\mathcal{Q}_{\text{ans}}. Thus, all points in SUSES\cap U^{\text{SE}} are covered by 𝒬ans\mathcal{Q}_{\text{ans}} when we go from Step 3 to Step 6. The remaining case is that we go to Step 6 from Step 5. This case happens only when ϕ(y~)\phi(\tilde{y}) does not exist, or equivalently, the yy-coordinate of every point in SUSES\cap U^{\text{SE}} is at most y(Q)y(Q). Recall that when proving the invariant, we showed that after we include QQ in 𝒬ans\mathcal{Q}_{\text{ans}} in Step 5, 𝒬ans\mathcal{Q}_{\text{ans}} covers all the points in SUSES\cap U^{\text{SE}} whose yy-coordinates are at most y(Q)y(Q). Hence, all points in SUSES\cap U^{\text{SE}} are covered by 𝒬ans\mathcal{Q}_{\text{ans}} when we go from Step 5 to Step 6.

We then show that |𝒬ans|=O(𝗈𝗉𝗍SE)|\mathcal{Q}_{\text{ans}}|=O(\mathsf{opt}^{\text{SE}}). To this end, we notice that in each iteration of the loop Step 3–5, we include in 𝒬ans\mathcal{Q}_{\text{ans}} a constant number of quadrants. Suppose we do kk iterations in total during the algorithm. It suffices to show that k=O(𝗈𝗉𝗍SE)k=O(\mathsf{opt}^{\text{SE}}). We claim that any quadrant Q𝒬Q\in\mathcal{Q} contains at most one of a1,,aka_{1},\dots,a_{k}. First, we observe that any quadrant in 𝒬NE\mathcal{Q}^{\text{NE}} may only contain aka_{k}. Indeed, if aiUNEa_{i}\in U^{\text{NE}} for some i<ki<k, then in the ii-th iteration, the algorithm goes from Step 3 to Step 6 and terminates, which contradicts the assumption that the algorithm does kk iterations. Next, we show that any quadrant in 𝒬SW\mathcal{Q}^{\text{SW}} does not contain aia_{i} (or equivalently, aiUSWa_{i}\notin U^{\text{SW}}) for all i{1,,k}i\in\{1,\dots,k\}. Let i{1,,k}i\in\{1,\dots,k\}. By the invariant we proved before, aia_{i} is the leftmost point in SUSES\cap U^{\text{SE}} that is not covered by 𝒬ans\mathcal{Q}_{\text{ans}} at the beginning of the ii-th iteration. Since we include Φ(σ,𝒬SW)\varPhi_{\rightarrow}(\sigma,\mathcal{Q}^{\text{SW}}) and Φ(σ,𝒬SE)\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}}) in 𝒬ans\mathcal{Q}_{\text{ans}} in Step 2, we know that aiΦ(σ,𝒬SW)a_{i}\notin\varPhi_{\rightarrow}(\sigma,\mathcal{Q}^{\text{SW}}) and aiΦ(σ,𝒬SE)a_{i}\notin\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}}). Thus, aia_{i} must be strictly above σ\sigma, since any point below σ\sigma is covered by either Φ(σ,𝒬SW)\varPhi_{\rightarrow}(\sigma,\mathcal{Q}^{\text{SW}}) or Φ(σ,𝒬SE)\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}}). It follows that aia_{i} is to the right of σ\sigma, because aiUSEa_{i}\in U^{\text{SE}} and σ\sigma is on the boundary γ\gamma of USEU^{\text{SE}}. In fact, aia_{i} is strictly to the right of σ\sigma, since any point in SUSES\cap U^{\text{SE}} that has the same xx-coordinate as σ\sigma is covered by Φ(σ,𝒬SE)\varPhi_{\uparrow}(\sigma,\mathcal{Q}^{\text{SE}}). Because σ\sigma is also on the boundary of USWU^{\text{SW}}, given the fact that aia_{i} is strictly above σ\sigma and strictly to the right of σ\sigma, we see that aiUSWa_{i}\notin U^{\text{SW}}.

Now it suffices to verify that any quadrant in 𝒬SE\mathcal{Q}^{\text{SE}} or 𝒬NW\mathcal{Q}^{\text{NW}} contains at most one of a1,,aka_{1},\dots,a_{k}. Let i,j{1,,k}i,j\in\{1,\dots,k\} be indices such that i<ji<j. We want to show that no quadrant in 𝒬SE\mathcal{Q}^{\text{SE}} or 𝒬NW\mathcal{Q}^{\text{NW}} contains both aia_{i} and aja_{j}. Since aja_{j} is not covered by 𝒬ans\mathcal{Q}_{\text{ans}} at the beginning of the jj-th iteration, it is also not covered by 𝒬ans\mathcal{Q}_{\text{ans}} at any point before the jj-th iteration and, in particular, at the beginning of the ii-th iteration. This implies that aja_{j} is to the right of aia_{i}. In the ii-th iteration, we always add the quadrant Φ(ai,𝒬SE)\varPhi_{\uparrow}(a_{i},\mathcal{Q}^{\text{SE}}) to 𝒬ans\mathcal{Q}_{\text{ans}}. We have ajΦ(ai,𝒬SE)a_{j}\notin\varPhi_{\uparrow}(a_{i},\mathcal{Q}^{\text{SE}}), since aja_{j} is not covered by 𝒬ans\mathcal{Q}_{\text{ans}} at the end of the ii-th iteration. Thus, aja_{j} is strictly above aia_{i}. Now let Q𝒬SEQ\in\mathcal{Q}^{\text{SE}} be a quadrant that contains aia_{i}. Note that a point to the right of aia_{i} is contained in QQ only if it is contained in Φ(ai,𝒬SE)\varPhi_{\uparrow}(a_{i},\mathcal{Q}^{\text{SE}}), simply by the definition of Φ\varPhi_{\uparrow}. Because ajΦ(ai,𝒬SE)a_{j}\notin\varPhi_{\uparrow}(a_{i},\mathcal{Q}^{\text{SE}}), we have ajQa_{j}\notin Q. Consequently, no quadrant in 𝒬SE\mathcal{Q}^{\text{SE}} contains both aia_{i} and aja_{j}. Next, we show that no quadrant in 𝒬NW\mathcal{Q}^{\text{NW}} contains both aia_{i} and aja_{j}. If aiUNWa_{i}\notin U^{\text{NW}}, we are done. So assume aiUNWa_{i}\in U^{\text{NW}}. In this case, we add the quadrant Φ(ai,𝒬NW)\varPhi_{\rightarrow}(a_{i},\mathcal{Q}^{\text{NW}}) to 𝒬ans\mathcal{Q}_{\text{ans}} in Step 4 of the ii-th iteration. Again, we have ajΦ(ai,𝒬NW)a_{j}\notin\varPhi_{\rightarrow}(a_{i},\mathcal{Q}^{\text{NW}}), because aja_{j} is not covered by 𝒬ans\mathcal{Q}_{\text{ans}} at the end of the ii-th iteration. Let Q𝒬NWQ\in\mathcal{Q}^{\text{NW}} be a quadrant that contains aia_{i}. Note that a point above aia_{i} is contained in QQ only if it is contained in Φ(ai,𝒬NW)\varPhi_{\rightarrow}(a_{i},\mathcal{Q}^{\text{NW}}), simply by the definition of Φ\varPhi_{\rightarrow}. Because ajΦ(ai,𝒬NW)a_{j}\notin\varPhi_{\rightarrow}(a_{i},\mathcal{Q}^{\text{NW}}), we have ajQa_{j}\notin Q. Consequently, no quadrant in 𝒬NW\mathcal{Q}^{\text{NW}} contains both aia_{i} and aja_{j}.

To summarize, we showed that any quadrant Q𝒬Q\in\mathcal{Q} contains at most one of a1,,aka_{1},\dots,a_{k}. Therefore, covering all of a1,,aka_{1},\dots,a_{k} requires at least kk quadrants in 𝒬\mathcal{Q}. This implies k𝗈𝗉𝗍SEk\leq\mathsf{opt}^{\text{SE}}, because a1,,akSUSEa_{1},\dots,a_{k}\in S\cap U^{\text{SE}}. As a result, |𝒬ans|=O(k)=O(𝗈𝗉𝗍SE)|\mathcal{Q}_{\text{ans}}|=O(k)=O(\mathsf{opt}^{\text{SE}}).
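The proof above interacts with the quadrants only through the primitives Φ↑ and Φ→. As an illustration, the following Python sketch implements these primitives under our reading of their definitions in Section 4.2.2 (Φ↑(a,𝒬SE) is the quadrant in 𝒬SE containing a whose vertex is highest, and Φ→(a,𝒬NW) is the quadrant in 𝒬NW containing a whose vertex is rightmost); the brute-force scans stand in for the O~(1)-time structures used by the actual algorithm, and all names are ours.

```python
# Hedged sketch of the Phi primitives used in the proof above.
# Quadrants are represented by their vertices; brute force replaces
# the O~(1)-time data structures of the real algorithm.

def in_se(v, a):   # SE quadrant with vertex v = {(x, y): x >= v[0], y <= v[1]}
    return a[0] >= v[0] and a[1] <= v[1]

def in_nw(v, a):   # NW quadrant with vertex v = {(x, y): x <= v[0], y >= v[1]}
    return a[0] <= v[0] and a[1] >= v[1]

def phi_up(a, se_vertices):
    """Phi_up(a, Q^SE): SE quadrant containing a with the highest vertex, or None."""
    cands = [v for v in se_vertices if in_se(v, a)]
    return max(cands, key=lambda v: v[1]) if cands else None

def phi_right(a, nw_vertices):
    """Phi_right(a, Q^NW): NW quadrant containing a with the rightmost vertex, or None."""
    cands = [v for v in nw_vertices if in_nw(v, a)]
    return max(cands, key=lambda v: v[0]) if cands else None
```

With these conventions, the vertex v of Φ→(a,𝒬NW) indeed lies in Φ↑(a,𝒬SE), as used in Case 2 above.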

A.9 Proof of Lemma 16

Let Π=(S,)\varPi=(S,\mathcal{R}) be a dynamic set cover instance with only point updates, and i0i\geq 0 be a number. We claim that |𝗈𝗉𝗍i(Π)𝗈𝗉𝗍i+1(Π)|1|\mathsf{opt}_{i}(\varPi)-\mathsf{opt}_{i+1}(\varPi)|\leq 1. Denote by SiS_{i} and Si+1S_{i+1} the point set SS at times ii and i+1i+1, respectively. Also, let SiSiS_{i}^{\prime}\subseteq S_{i} and Si+1Si+1S_{i+1}^{\prime}\subseteq S_{i+1} consist of the points that are covered by \mathcal{R}. Suppose the (i+1)(i+1)-th operation inserts a point aa to SS, so Si+1=Si{a}S_{i+1}=S_{i}\cup\{a\}. If aa is not covered by \mathcal{R}, then Si+1=SiS_{i+1}^{\prime}=S_{i}^{\prime} and 𝗈𝗉𝗍i+1(Π)=𝗈𝗉𝗍i(Π)\mathsf{opt}_{i+1}(\varPi)=\mathsf{opt}_{i}(\varPi). If aa is covered by \mathcal{R}, then Si+1=Si{a}S_{i+1}^{\prime}=S_{i}^{\prime}\cup\{a\}. Let RR\in\mathcal{R} be a range that covers aa. An optimal set cover for (Si,)(S_{i}^{\prime},\mathcal{R}) together with RR is a set cover for (Si+1,)(S_{i+1}^{\prime},\mathcal{R}); conversely, any set cover for (Si+1,)(S_{i+1}^{\prime},\mathcal{R}) is also a set cover for (Si,)(S_{i}^{\prime},\mathcal{R}). It follows that 𝗈𝗉𝗍i(Π)𝗈𝗉𝗍i+1(Π)𝗈𝗉𝗍i(Π)+1\mathsf{opt}_{i}(\varPi)\leq\mathsf{opt}_{i+1}(\varPi)\leq\mathsf{opt}_{i}(\varPi)+1. The case where the (i+1)(i+1)-th operation deletes a point from SS is symmetric. This shows that dynamic set cover in the partially dynamic setting is stable. The same argument can also be applied to show that dynamic hitting set in the partially dynamic setting is stable.

Next, we consider the dynamic interval hitting set problem (in the fully dynamic setting). Let Π=(S,)\varPi=(S,\mathcal{I}) be a dynamic interval hitting set instance, and i0i\geq 0 be a number. Denote by SiS_{i} and i\mathcal{I}_{i} the point set SS and the interval collection \mathcal{I} at time ii. Let ii\mathcal{I}_{i}^{\prime}\subseteq\mathcal{I}_{i} consist of the intervals that are hit by SiS_{i}. We want to show that |𝗈𝗉𝗍i(Π)𝗈𝗉𝗍i+1(Π)|1|\mathsf{opt}_{i}(\varPi)-\mathsf{opt}_{i+1}(\varPi)|\leq 1. We distinguish two cases.

[Case 1] The (i+1)(i+1)-th operation happens on \mathcal{I}. Suppose the (i+1)(i+1)-th operation is an insertion on \mathcal{I}, and let II be the interval inserted. If II is not hit by SiS_{i}, then (Si+1,i+1)=(Si,i)(S_{i+1},\mathcal{I}_{i+1}^{\prime})=(S_{i},\mathcal{I}_{i}^{\prime}) and 𝗈𝗉𝗍i+1(Π)=𝗈𝗉𝗍i(Π)\mathsf{opt}_{i+1}(\varPi)=\mathsf{opt}_{i}(\varPi). If II is hit by SiS_{i}, then (Si+1,i+1)=(Si,i{I})(S_{i+1},\mathcal{I}_{i+1}^{\prime})=(S_{i},\mathcal{I}_{i}^{\prime}\cup\{I\}). In this case, we have 𝗈𝗉𝗍i(Π)𝗈𝗉𝗍i+1(Π)𝗈𝗉𝗍i(Π)+1\mathsf{opt}_{i}(\varPi)\leq\mathsf{opt}_{i+1}(\varPi)\leq\mathsf{opt}_{i}(\varPi)+1, since an optimal hitting set for (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}) together with a point hitting II is a hitting set for (Si+1,i+1)(S_{i+1},\mathcal{I}_{i+1}^{\prime}). Thus, |𝗈𝗉𝗍i(Π)𝗈𝗉𝗍i+1(Π)|1|\mathsf{opt}_{i}(\varPi)-\mathsf{opt}_{i+1}(\varPi)|\leq 1. The case where the (i+1)(i+1)-th operation is a deletion on \mathcal{I} is symmetric.

[Case 2] The (i+1)(i+1)-th operation happens on SS. Suppose the (i+1)(i+1)-th operation is an insertion on SS, and let aa be the point inserted. Then (Si+1,i+1)=(Si{a},i𝒥)(S_{i+1},\mathcal{I}_{i+1}^{\prime})=(S_{i}\cup\{a\},\mathcal{I}_{i}^{\prime}\cup\mathcal{J}) where 𝒥={I:aI}\mathcal{J}=\{I\in\mathcal{I}:a\in I\}. It is clear that 𝗈𝗉𝗍i+1(Π)𝗈𝗉𝗍i(Π)+1\mathsf{opt}_{i+1}(\varPi)\leq\mathsf{opt}_{i}(\varPi)+1, because an optimal hitting set for (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}) together with the point aa is a hitting set for (Si+1,i+1)(S_{i+1},\mathcal{I}_{i+1}^{\prime}). It suffices to show that 𝗈𝗉𝗍i(Π)𝗈𝗉𝗍i+1(Π)+1\mathsf{opt}_{i}(\varPi)\leq\mathsf{opt}_{i+1}(\varPi)+1. Let SSi+1S^{*}\subseteq S_{i+1} be an optimal hitting set for (Si+1,i+1)(S_{i+1},\mathcal{I}_{i+1}^{\prime}). If aSa\notin S^{*}, then SS^{*} is also a hitting set for (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}) and we are done. Otherwise, let aSia^{-}\in S_{i} (resp., a+Sia^{+}\in S_{i}) be the rightmost (resp., leftmost) point to the left (resp., right) of aa; in the case where aa is the leftmost (resp., rightmost) point in SiS_{i}, let aa^{-} (resp., a+a^{+}) be an arbitrary point in SiS_{i}. We claim that (S\{a}){a,a+}(S^{*}\backslash\{a\})\cup\{a^{-},a^{+}\} is a hitting set for (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}). Consider an interval IiI\in\mathcal{I}_{i}^{\prime}. If I𝒥I\notin\mathcal{J}, then II is hit by some point in S\{a}S^{*}\backslash\{a\}. Otherwise, I𝒥I\in\mathcal{J}, and thus II is hit by aa. Since IiI\in\mathcal{I}_{i}^{\prime}, II is also hit by some point in SiS_{i}, say bb. If bb is to the left of aa, then II must be hit by aa^{-}. On the other hand, if bb is to the right of aa, II must be hit by a+a^{+}. Thus, in any case, II is hit by (S\{a}){a,a+}(S^{*}\backslash\{a\})\cup\{a^{-},a^{+}\}. It follows that 𝗈𝗉𝗍i(Π)|(S\{a}){a,a+}|𝗈𝗉𝗍i+1(Π)+1\mathsf{opt}_{i}(\varPi)\leq|(S^{*}\backslash\{a\})\cup\{a^{-},a^{+}\}|\leq\mathsf{opt}_{i+1}(\varPi)+1. The case where the (i+1)(i+1)-th operation is a deletion on SS is symmetric.

A.10 Proof of Lemma 18

For convenience, we include in SS two dummy points x=x^{-}=-\infty and x+=+x^{+}=+\infty. We assume these two dummy points are always in SS. Since xx^{-} and x+x^{+} do not hit any interval, including them in SS is safe.

First, we store \mathcal{I} in a binary search tree 𝒯1\mathcal{T}_{1} where the key of an interval is its left endpoint. We augment each node 𝐮𝒯1\mathbf{u}\in\mathcal{T}_{1} with an additional field which stores the interval in the subtree rooted at 𝐮\mathbf{u} with the leftmost right endpoint. We notice that using 𝒯1\mathcal{T}_{1}, we can decide in O~(1)\widetilde{O}(1) time, for two given numbers a,a+a,a^{+}\in\mathbb{R}, whether there is an interval in \mathcal{I} both of whose endpoints are in the open interval (a,a+)(a,a^{+}). Specifically, we first look for the interval in 𝒯1\mathcal{T}_{1} whose key is greater than aa and whose right endpoint is leftmost. With the augmented fields, this interval can be found in O(logn)O(\log n) time. If the two endpoints of this interval are contained in (a,a+)(a,a^{+}), then we return “yes”; otherwise, no interval in \mathcal{I} has two endpoints in (a,a+)(a,a^{+}) and we return “no”. Clearly, 𝒯1\mathcal{T}_{1} can be constructed in O~(||)\widetilde{O}(|\mathcal{I}|) time and dynamized with O~(1)\widetilde{O}(1) update time, and hence it is basic.
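As an illustration, the following static Python sketch realizes the same query with a sorted array and suffix minima of right endpoints in place of the augmented binary search tree (the class and method names are ours):

```python
import bisect

class GapIntervalQuery:
    """Static sketch of T1: does some interval lie strictly inside a gap?"""
    def __init__(self, intervals):            # intervals: list of (left, right)
        self.ivals = sorted(intervals)         # sorted by left endpoint (the key)
        self.lefts = [l for l, _ in self.ivals]
        # suffix_min[i] = min right endpoint among intervals i..end; these
        # values play the role of the augmented subtree fields.
        self.suffix_min = [0.0] * len(self.ivals)
        m = float('inf')
        for i in range(len(self.ivals) - 1, -1, -1):
            m = min(m, self.ivals[i][1])
            self.suffix_min[i] = m

    def has_interval_inside(self, a, a_plus):
        """Is there an interval with both endpoints in the open range (a, a_plus)?"""
        i = bisect.bisect_right(self.lefts, a)  # first interval with left endpoint > a
        return i < len(self.ivals) and self.suffix_min[i] < a_plus
```

The correctness mirrors the argument above: among the intervals with left endpoint greater than a, it suffices to test the one with the leftmost right endpoint.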

We then store SS in a standard range tree 𝒯2\mathcal{T}_{2}. The points in SS are stored at the leaves of 𝒯2\mathcal{T}_{2}. Each node 𝐮𝒯2\mathbf{u}\in\mathcal{T}_{2} corresponds to a canonical subset S(𝐮)S(\mathbf{u}) of SS consisting of the points stored in the subtree rooted at 𝐮\mathbf{u}. At each internal node 𝐮𝒯2\mathbf{u}\in\mathcal{T}_{2}, we store the leftmost and rightmost points in S(𝐮)S(\mathbf{u}) as well as a separation point s𝐮s_{\mathbf{u}}\in\mathbb{R} such that all points in the left (resp., right) subtree of 𝐮\mathbf{u} are to the left (resp., right) of s𝐮s_{\mathbf{u}}. At each leaf 𝐮\mathbf{u} of 𝒯2\mathcal{T}_{2} that does not correspond to the point x+x^{+}, we maintain an additional field σ(𝐮)\sigma(\mathbf{u}) indicating whether there exists an interval in \mathcal{I} both of whose endpoints are in the open interval (a𝐮,a𝐮+)(a_{\mathbf{u}},a_{\mathbf{u}}^{+}) where a𝐮Sa_{\mathbf{u}}\in S is the point corresponding to 𝐮\mathbf{u} and a𝐮+Sa_{\mathbf{u}}^{+}\in S is the leftmost point that is to the right of a𝐮a_{\mathbf{u}}; we set σ(𝐮)=1\sigma(\mathbf{u})=1 if such an interval exists and set σ(𝐮)=0\sigma(\mathbf{u})=0 otherwise. Note that using the binary search tree 𝒯1\mathcal{T}_{1}, σ(𝐮)\sigma(\mathbf{u}) can be computed in O~(1)\widetilde{O}(1) time for each leaf 𝐮\mathbf{u} of 𝒯2\mathcal{T}_{2}. Indeed, we only need to find a𝐮+a_{\mathbf{u}}^{+}, which can be done in O~(1)\widetilde{O}(1) time by a walk in 𝒯2\mathcal{T}_{2}, and query 𝒯1\mathcal{T}_{1} to see whether there is an interval in \mathcal{I} both of whose endpoints are in the open interval (a𝐮,a𝐮+)(a_{\mathbf{u}},a_{\mathbf{u}}^{+}). Besides, we also maintain a counter σ\sigma^{*} that is the total number of leaves whose σ\sigma-values are equal to 1. It is easy to see that (S,)(S,\mathcal{I}) has a hitting set iff σ=0\sigma^{*}=0.

It is clear that 𝒯2\mathcal{T}_{2} can be constructed in O~(|S|)\widetilde{O}(|S|) time. We show that when (S,)(S,\mathcal{I}) changes, we can maintain 𝒯2\mathcal{T}_{2} with the σ\sigma-fields in O~(1)\widetilde{O}(1) time, by using 𝒯1\mathcal{T}_{1}. Suppose a point aa\in\mathbb{R} is inserted to SS. We first need to compute the σ\sigma-value of the leaf corresponding to aa using 𝒯1\mathcal{T}_{1}. Let bSb\in S be the rightmost point to the left of aa. Due to the insertion of aa, the σ\sigma-value of the leaf 𝐮\mathbf{u} of 𝒯2\mathcal{T}_{2} corresponding to bb may also change. So we need to find 𝐮\mathbf{u}, which can be done in O~(1)\widetilde{O}(1) time by a walk in 𝒯2\mathcal{T}_{2}, and update σ(𝐮)\sigma(\mathbf{u}) using 𝒯1\mathcal{T}_{1}. The σ\sigma-values of all the other leaves remain unchanged. A deletion of a point from SS is handled similarly. Now suppose an interval II is inserted to or deleted from \mathcal{I}. We find in O~(1)\widetilde{O}(1) time the leaf 𝐮\mathbf{u} of 𝒯2\mathcal{T}_{2} corresponding to the rightmost point in SS that is to the left of the left endpoint of II. Note that the insertion/deletion of II does not change the σ\sigma-values of the leaves other than 𝐮\mathbf{u}. So it suffices to re-compute σ(𝐮)\sigma(\mathbf{u}) using 𝒯1\mathcal{T}_{1}. The time cost for all the cases above is O~(1)\widetilde{O}(1). The rotations of 𝒯2\mathcal{T}_{2} (for self-balancing) do not change the σ\sigma-fields. It follows that the counter σ\sigma^{*} can also be maintained in O~(1)\widetilde{O}(1) time.
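Concretely, the update of the σ-fields under a point insertion amounts to two queries of the kind supported by 𝒯1; a minimal sketch building on the GapIntervalQuery class above (all names are ours, and the dummy points x−, x+ are assumed to be present in the sorted list):

```python
import bisect

def insert_point(points, sigma, t1, a):
    """Insert point a into the sorted list `points` and update the 0/1
    sigma-values (one per point, for the gap to its right neighbor).
    `t1` is a GapIntervalQuery over the current intervals; `sigma` maps
    each point to its sigma-value. Illustration only."""
    i = bisect.bisect_left(points, a)
    left, right = points[i - 1], points[i]   # neighbors of a (dummies guarantee both exist)
    points.insert(i, a)
    # sigma-value of the new leaf: the gap (a, right)
    sigma[a] = 1 if t1.has_interval_inside(a, right) else 0
    # the left neighbor's gap shrinks from (left, right) to (left, a)
    sigma[left] = 1 if t1.has_interval_inside(left, a) else 0
    # the counter sigma^* of the lemma is sum(sigma.values())
```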

The dynamic data structure \mathcal{B} in the lemma just consists of (the dynamic versions of) the binary search tree 𝒯1\mathcal{T}_{1} and the range tree 𝒯2\mathcal{T}_{2}. The update time of \mathcal{B} is O~(1)\widetilde{O}(1) and the construction time is O~(n0)\widetilde{O}(n_{0}). To indicate whether the current (S,)(S,\mathcal{I}) has a hitting set or not, we simply check whether σ=0\sigma^{*}=0 or not.

A.11 Proof of Lemma 19

We can simply store SS in a binary search tree. Then the rightmost (resp., leftmost) point in SS to the left (resp., right) of a given point aa can be reported in O~(1)\widetilde{O}(1) time by searching in the tree. Clearly, the binary search tree can be constructed in O~(|S|)\widetilde{O}(|S|) time and dynamized with O~(1)\widetilde{O}(1) update time, and thus it is basic.
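For illustration, both neighbor queries can be written as a single binary search over a sorted list (a sketch; a balanced binary search tree additionally gives O~(1)-time updates):

```python
import bisect

def neighbors(points, a):
    """Rightmost point of the sorted list `points` strictly to the left of a,
    and leftmost point strictly to the right of a (None if absent)."""
    i = bisect.bisect_left(points, a)    # points[:i] < a
    j = bisect.bisect_right(points, a)   # points[j:] > a
    left = points[i - 1] if i > 0 else None
    right = points[j] if j < len(points) else None
    return left, right
```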

A.12 Proof of Lemma 20

We can simply store SS in a binary search tree. Then a point aSa\in S contained in a given interval II can be reported in O~(1)\widetilde{O}(1) time by searching in the tree. Clearly, the binary search tree can be constructed in O~(|S|)\widetilde{O}(|S|) time and dynamized with O~(1)\widetilde{O}(1) update time, and thus it is basic.
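Similarly, the stabbing query of Lemma 20 is one binary search (a sketch under the same conventions as above):

```python
import bisect

def point_hitting(points, interval):
    """Some point of the sorted list `points` contained in the closed
    interval, or None if the interval is not hit."""
    lo, hi = interval
    i = bisect.bisect_left(points, lo)   # leftmost point >= lo
    return points[i] if i < len(points) and points[i] <= hi else None
```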

A.13 Proof of Lemma 21

We use 𝗈𝗉𝗍\mathsf{opt}^{\prime} to denote the quasi-optimum of the current (S,)(S,\mathcal{I}). It suffices to show |S|(1+ε)𝗈𝗉𝗍|S^{*}|\leq(1+\varepsilon)\cdot\mathsf{opt}^{\prime} at any time, since we always have 𝗈𝗉𝗍𝗈𝗉𝗍\mathsf{opt}^{\prime}\leq\mathsf{opt}.

Initially, SS^{*} is an optimal hitting set for (S0,0)(S_{0},\mathcal{I}_{0}^{\prime}), so we have |S|=𝗈𝗉𝗍|S^{*}|=\mathsf{opt}^{\prime} at that time. If the current SS^{*} is obtained by re-computing using the output-sensitive algorithm, then |S|=𝗈𝗉𝗍=𝗈𝗉𝗍|S^{*}|=\mathsf{opt}^{\prime}=\mathsf{opt}, as we only do re-computation when the current (S,)(S,\mathcal{I}) has a hitting set. Suppose SS^{*} is obtained by local modification. Consider the last re-computation of SS^{*}; we use S1S_{1}^{*} to denote the SS^{*} at that point and use 𝗈𝗉𝗍1\mathsf{opt}_{1}^{\prime} to denote the quasi-optimum of (S,)(S,\mathcal{I}) at that point. Then we have |S1|=𝗈𝗉𝗍1|S_{1}^{*}|=\mathsf{opt}_{1}^{\prime}. As argued before, 𝖼𝗇𝗍\mathsf{cnt} is the number of operations processed after the last re-computation of SS^{*}. By the stability of dynamic interval hitting set (Lemma 16), we have |𝗈𝗉𝗍𝗈𝗉𝗍1|𝖼𝗇𝗍|\mathsf{opt}^{\prime}-\mathsf{opt}_{1}^{\prime}|\leq\mathsf{cnt}, implying 𝗈𝗉𝗍1𝗈𝗉𝗍+𝖼𝗇𝗍\mathsf{opt}_{1}^{\prime}\leq\mathsf{opt}^{\prime}+\mathsf{cnt}. Furthermore, by the fact that the size of SS^{*} either increases by 1 or stays unchanged after each local modification, we have |S||S1|𝖼𝗇𝗍|S^{*}|-|S_{1}^{*}|\leq\mathsf{cnt}. It follows that

|S||S1|+𝖼𝗇𝗍=𝗈𝗉𝗍1+𝖼𝗇𝗍𝗈𝗉𝗍+2𝖼𝗇𝗍.|S^{*}|\leq|S_{1}^{*}|+\mathsf{cnt}=\mathsf{opt}_{1}^{\prime}+\mathsf{cnt}\leq\mathsf{opt}^{\prime}+2\cdot\mathsf{cnt}.

Note that 𝖼𝗇𝗍ε𝗈𝗉𝗍1/(2+ε)\mathsf{cnt}\leq\varepsilon\cdot\mathsf{opt}_{1}^{\prime}/(2+\varepsilon), for otherwise we would have re-computed SS^{*}. This implies 𝖼𝗇𝗍(ε/2)𝗈𝗉𝗍\mathsf{cnt}\leq(\varepsilon/2)\cdot\mathsf{opt}^{\prime} and hence |S|(1+ε)𝗈𝗉𝗍|S^{*}|\leq(1+\varepsilon)\cdot\mathsf{opt}^{\prime}.
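The argument suggests the following control loop around the two subroutines. We give a schematic Python sketch in which recompute and locally_modify are placeholders for the output-sensitive algorithm and the local modification of Lemma 22, not part of the paper's interface; the no-solution bookkeeping of Lemma 22 is omitted for brevity:

```python
def process_operation(state, op, eps, recompute, locally_modify):
    """Schematic control loop for the local-modification framework.
    `state` carries the current solution S*, the counter cnt, and opt1,
    the quasi-optimum recorded at the last re-computation."""
    locally_modify(state, op)                 # O~(1) time; |S*| grows by at most 1
    state.cnt += 1
    if state.cnt > eps * state.opt1 / (2 + eps):
        state.solution = recompute(state)     # exact re-computation
        state.opt1 = len(state.solution)
        state.cnt = 0
    # Lemma 21: at all times, |state.solution| <= (1 + eps) * quasi-optimum
```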

A.14 Proof of Lemma 22

Let (Si,i)(S_{i},\mathcal{I}_{i}) denote the instance (S,)(S,\mathcal{I}) at time ii, and SiS_{i}^{*} be the point set SS^{*} at time ii. Define ii\mathcal{I}_{i}^{\prime}\subseteq\mathcal{I}_{i} as the sub-collection consisting of the intervals that are hit by SiS_{i}. Then (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}) always has a hitting set. Furthermore, if (Si,i)(S_{i},\mathcal{I}_{i}) has a hitting set, then (Si,i)=(Si,i)(S_{i},\mathcal{I}_{i}^{\prime})=(S_{i},\mathcal{I}_{i}). So it suffices to show that SiS_{i}^{*} is always a hitting set for (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}). We prove this by induction. It is clear that S0S_{0}^{*} is a hitting set for (S0,0)(S_{0},\mathcal{I}_{0}^{\prime}). Assume Si1S_{i-1}^{*} is a hitting set for (Si1,i1)(S_{i-1},\mathcal{I}_{i-1}^{\prime}) and we show that SiS_{i}^{*} is a hitting set for (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}). If SiS_{i}^{*} is obtained by re-computing, then (Si,i)=(Si,i)(S_{i},\mathcal{I}_{i}^{\prime})=(S_{i},\mathcal{I}_{i}), since we only re-compute SS^{*} when the current (S,)(S,\mathcal{I}) has a hitting set. In this case, SiS_{i}^{*} is clearly a hitting set for both (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}) and (Si,i)(S_{i},\mathcal{I}_{i}). So suppose SiS_{i}^{*} is obtained by local modification.

We consider different cases separately according to the ii-th operation. If the ii-th operation inserts a point aa to SS, then Si=Si1{a}S_{i}=S_{i-1}\cup\{a\} and i=i1\mathcal{I}_{i}=\mathcal{I}_{i-1}. In this case, Si=Si1{a}S_{i}^{*}=S_{i-1}^{*}\cup\{a\} and i=i1𝒥\mathcal{I}_{i}^{\prime}=\mathcal{I}_{i-1}^{\prime}\cup\mathcal{J}, where 𝒥={Ii:aI}\mathcal{J}=\{I\in\mathcal{I}_{i}:a\in I\}. The intervals in i1\mathcal{I}_{i-1}^{\prime} are hit by Si1S_{i-1}^{*} and the intervals in 𝒥\mathcal{J} are hit by the point aa. Hence, SiS_{i}^{*} is a hitting set for (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}). If the ii-th operation deletes a point aa from SS, then Si=Si1\{a}S_{i}=S_{i-1}\backslash\{a\} and i1=i\mathcal{I}_{i-1}=\mathcal{I}_{i}. In this case, SiS_{i}^{*} is obtained from Si1\{a}S_{i-1}^{*}\backslash\{a\} by adding the rightmost point aa^{-} in SiS_{i} to the left of aa and the leftmost point a+a^{+} in SiS_{i} to the right of aa (if they exist). Consider an interval IiI\in\mathcal{I}_{i}^{\prime}. We want to show that II is hit by SiS_{i}^{*}. Note that ii1\mathcal{I}_{i}^{\prime}\subseteq\mathcal{I}_{i-1}^{\prime}. Thus, Ii1I\in\mathcal{I}_{i-1}^{\prime} and II is hit by Si1S_{i-1}^{*}. If aIa\notin I, then II is hit by Si1\{a}S_{i-1}^{*}\backslash\{a\} and hence hit by SiS_{i}^{*}. Otherwise, aIa\in I. Since IiI\in\mathcal{I}_{i}^{\prime} and aSia\notin S_{i}, II must be hit by some point bSib\in S_{i} different from aa. If bb is to the left of aa, then II must be hit by aa^{-} (as aa^{-} is in between bb and aa). On the other hand, if bb is to the right of aa, II must be hit by a+a^{+} (as a+a^{+} is in between aa and bb). Therefore, II is hit by SiS_{i}^{*}. If the ii-th operation inserts an interval II to \mathcal{I}, then Si=Si1S_{i}=S_{i-1} and i=i1{I}\mathcal{I}_{i}=\mathcal{I}_{i-1}\cup\{I\}. In this case, SiS_{i}^{*} is obtained from Si1S_{i-1}^{*} by adding an arbitrary point aSia\in S_{i} that hits II (if II is hit by SiS_{i}). If II is not hit by SiS_{i}, then (Si,i)=(Si1,i1)(S_{i},\mathcal{I}_{i}^{\prime})=(S_{i-1},\mathcal{I}_{i-1}^{\prime}) and Si=Si1S_{i}^{*}=S_{i-1}^{*}, thus SiS_{i}^{*} is a hitting set for (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}). If II is hit by SiS_{i}, then i=i1{I}\mathcal{I}_{i}^{\prime}=\mathcal{I}_{i-1}^{\prime}\cup\{I\} and Si=Si1{a}S_{i}^{*}=S_{i-1}^{*}\cup\{a\}. The intervals in i1\mathcal{I}_{i-1}^{\prime} are hit by Si1S_{i-1}^{*} by our induction hypothesis and II is hit by the point aa. Hence, SiS_{i}^{*} is a hitting set for (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}). If the ii-th operation deletes an interval II from \mathcal{I}, then Si=Si1S_{i}=S_{i-1} and i=i1\{I}\mathcal{I}_{i}=\mathcal{I}_{i-1}\backslash\{I\}. In this case, Si=Si1S_{i}^{*}=S_{i-1}^{*}. Note that ii1\mathcal{I}_{i}^{\prime}\subseteq\mathcal{I}_{i-1}^{\prime}, which implies that SiS_{i}^{*} is a hitting set for (Si,i)(S_{i},\mathcal{I}_{i}^{\prime}).
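Two of the four cases (point deletion and interval insertion) involve nontrivial choices; a minimal Python sketch of these local modifications, with S kept as a sorted list and S* as a set (names ours):

```python
import bisect

def modify_on_point_delete(points, hitting_set, a):
    """Local modification when point a is deleted from S: replace a in the
    hitting set by its two neighbors in S, as in the proof above.
    `points` is the sorted list of S *after* the deletion. Sketch only."""
    if a in hitting_set:
        hitting_set.discard(a)
        i = bisect.bisect_left(points, a)
        if i > 0:
            hitting_set.add(points[i - 1])   # a^-: rightmost point left of a
        if i < len(points):
            hitting_set.add(points[i])       # a^+: leftmost point right of a

def modify_on_interval_insert(points, hitting_set, interval):
    """Local modification when interval I is inserted: add a point of S
    hitting I, if one exists (we pick the leftmost such point)."""
    lo, hi = interval
    i = bisect.bisect_left(points, lo)
    if i < len(points) and points[i] <= hi:
        hitting_set.add(points[i])

# Point insertions simply add the new point to the hitting set; interval
# deletions change nothing.
```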

A.15 Proof of Lemma 24

Note that 𝒬\mathcal{Q} is static. We simply store 𝒬\mathcal{Q} in a static data structure 0\mathcal{B}_{0} that can decide in O~(1)\widetilde{O}(1) time, for a given point a2a\in\mathbb{R}^{2}, whether there exists a quadrant in 𝒬\mathcal{Q} that covers aa; this can be done using a standard orthogonal stabbing data structure with O~(|𝒬|)\widetilde{O}(|\mathcal{Q}|) construction time. Then our data structure \mathcal{B} simply maintains the number n~\tilde{n} of the points in SS that are not covered by 𝒬\mathcal{Q}. Initially, n~\tilde{n} can be computed in O~(n0)\widetilde{O}(n_{0}) time by considering every point in SS and using 0\mathcal{B}_{0} to check whether it is covered by 𝒬\mathcal{Q} in O~(1)\widetilde{O}(1) time. After an operation on SS, we can update n~\tilde{n} in O~(1)\widetilde{O}(1) time by checking whether the inserted/deleted point is covered by 𝒬\mathcal{Q} using 0\mathcal{B}_{0}.
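In code, the whole data structure ℬ is a counter; a sketch in which covers is a callable standing in for the O~(1)-time stabbing query of ℬ0:

```python
class UncoveredCounter:
    """Maintain the number of points of S not covered by the static
    quadrant set Q. `covers` is a callable point -> bool standing in for
    the stabbing structure B0; a sketch of Lemma 24's data structure."""
    def __init__(self, points, covers):
        self.covers = covers
        self.n_uncovered = sum(1 for a in points if not covers(a))

    def insert(self, a):
        self.n_uncovered += 0 if self.covers(a) else 1

    def delete(self, a):
        self.n_uncovered -= 0 if self.covers(a) else 1

    def has_set_cover(self):
        # (S, Q) has a set cover iff no point of S is uncovered
        return self.n_uncovered == 0
```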

A.16 Proof of Lemma 25

Let SiS_{i} denote the set SS at time ii and 𝒬i\mathcal{Q}_{i}^{*} the set 𝒬\mathcal{Q}^{*} at time ii. Define SiSiS_{i}^{\prime}\subseteq S_{i} as the subset consisting of the points that are covered by 𝒬\mathcal{Q}. Then (Si,𝒬)(S_{i}^{\prime},\mathcal{Q}) always has a set cover. Furthermore, if (Si,𝒬)(S_{i},\mathcal{Q}) has a set cover, then (Si,𝒬)=(Si,𝒬)(S_{i}^{\prime},\mathcal{Q})=(S_{i},\mathcal{Q}). So it suffices to show that 𝒬i\mathcal{Q}_{i}^{*} is always a set cover for (Si,𝒬)(S_{i}^{\prime},\mathcal{Q}). We prove this by induction. It is clear that 𝒬0\mathcal{Q}_{0}^{*} is a set cover for (S0,𝒬)(S_{0}^{\prime},\mathcal{Q}). Assume 𝒬i1\mathcal{Q}_{i-1}^{*} is a set cover for (Si1,𝒬)(S_{i-1}^{\prime},\mathcal{Q}) and we show that 𝒬i\mathcal{Q}_{i}^{*} is a set cover for (Si,𝒬)(S_{i}^{\prime},\mathcal{Q}). If 𝒬i\mathcal{Q}_{i}^{*} is obtained by re-computing, then (Si,𝒬)=(Si,𝒬)(S_{i}^{\prime},\mathcal{Q})=(S_{i},\mathcal{Q}), since we only re-compute 𝒬\mathcal{Q}^{*} when the current (S,𝒬)(S,\mathcal{Q}) has a set cover. In this case, 𝒬i\mathcal{Q}_{i}^{*} is clearly a set cover for both (Si,𝒬)(S_{i}^{\prime},\mathcal{Q}) and (Si,𝒬)(S_{i},\mathcal{Q}). So suppose 𝒬i\mathcal{Q}_{i}^{*} is obtained by local modification. If the ii-th operation inserts a point aa to SS, then Si=Si1{a}S_{i}=S_{i-1}\cup\{a\}. In this case, 𝒬i\mathcal{Q}_{i}^{*} is obtained by including in 𝒬i1\mathcal{Q}_{i-1}^{*} an arbitrary quadrant Q𝒬Q\in\mathcal{Q} that contains aa (if such a quadrant exists). If aa is not covered by 𝒬\mathcal{Q}, then Si=Si1S_{i}^{\prime}=S_{i-1}^{\prime} and 𝒬i=𝒬i1\mathcal{Q}_{i}^{*}=\mathcal{Q}_{i-1}^{*}, thus 𝒬i\mathcal{Q}_{i}^{*} is a set cover for (Si,𝒬)(S_{i}^{\prime},\mathcal{Q}). If aa is covered by 𝒬\mathcal{Q}, then Si=Si1{a}S_{i}^{\prime}=S_{i-1}^{\prime}\cup\{a\} and 𝒬i=𝒬i1{Q}\mathcal{Q}_{i}^{*}=\mathcal{Q}_{i-1}^{*}\cup\{Q\}. The points in Si1S_{i-1}^{\prime} are covered by 𝒬i1\mathcal{Q}_{i-1}^{*} by our induction hypothesis and the point aa is covered by QQ. Hence, 𝒬i\mathcal{Q}_{i}^{*} is a set cover for (Si,𝒬)(S_{i}^{\prime},\mathcal{Q}). If the ii-th operation deletes a point aa from SS, then Si=Si1\{a}S_{i}=S_{i-1}\backslash\{a\}. In this case, 𝒬i=𝒬i1\mathcal{Q}_{i}^{*}=\mathcal{Q}_{i-1}^{*}. Note that SiSi1S_{i}^{\prime}\subseteq S_{i-1}^{\prime}, which implies that 𝒬i\mathcal{Q}_{i}^{*} is a set cover for (Si,𝒬)(S_{i}^{\prime},\mathcal{Q}).

Appendix B Implementation details and detailed time analysis for the dynamic interval set cover data structure

We present the implementation details of our dynamic interval set cover data structure 𝒟new\mathcal{D}_{\text{new}} in Section 4.1 as well as a detailed time analysis. Assume the function ff we choose satisfies two properties: (1) f(m,ε)m/2f(m,\varepsilon)\leq m/2 for any mm, and (2) f(Θ(m),ε)=Θ(f(m,ε))f(\Theta(m),\varepsilon)=\Theta(f(m,\varepsilon)).

First, we discuss how to construct 𝒟new\mathcal{D}_{\text{new}}. Constructing the data structure 𝒜\mathcal{A} takes O~(n0)\widetilde{O}(n_{0}) time, as it is basic. The portions J1,,JrJ_{1},\dots,J_{r} can be computed in O~(n0)\widetilde{O}(n_{0}) time by sorting the points in SS and the endpoints of the intervals in \mathcal{I}. Once J1,,JrJ_{1},\dots,J_{r} are computed, we build a (static) point location data structure 1\mathcal{B}_{1} which can report in O(logr)O(\log r) time, for a given point aa\in\mathbb{R}, the portion JiJ_{i} that contains aa. Clearly, 1\mathcal{B}_{1} can be constructed in O~(r)\widetilde{O}(r) time. With 1\mathcal{B}_{1} in hand, we can determine in O~(1)\widetilde{O}(1) time, for each point aSa\in S (resp., each interval II\in\mathcal{I}), the portion JiJ_{i} that contains aa (resp., the two JiJ_{i}’s that contain the endpoints of II). By doing this for all points in SS and all intervals in \mathcal{I}, we obtain all SiS_{i} and all i\mathcal{I}_{i} in O~(n0+r)\widetilde{O}(n_{0}+r) time. After this, we can build the data structures 𝒟old(i)\mathcal{D}_{\text{old}}^{(i)}’s. Constructing each 𝒟old(i)\mathcal{D}_{\text{old}}^{(i)} takes O~(f(n0,ε))\widetilde{O}(f(n_{0},\varepsilon)) time since |Si|+|i|=O(f(n0,ε))|S_{i}|+|\mathcal{I}_{i}|=O(f(n_{0},\varepsilon)). Hence, the time for constructing all 𝒟old(1),,𝒟old(r)\mathcal{D}_{\text{old}}^{(1)},\dots,\mathcal{D}_{\text{old}}^{(r)} is O~(n0)\widetilde{O}(n_{0}).

The support data structure 1\mathcal{B}_{1} will be used later in the implementation of the update procedure of 𝒟new\mathcal{D}_{\text{new}} (we do not need to update 1\mathcal{B}_{1} since it is static). Besides, we need another support data structure 2\mathcal{B}_{2} defined as follows.

Lemma 30.

One can store \mathcal{I} in a basic data structure 2\mathcal{B}_{2} such that given an interval JJ, an interval in \mathcal{I} that contains JJ can be reported in O~(1)\widetilde{O}(1) time (if it exists).

Proof.

We store \mathcal{I} in a binary search tree 𝒯\mathcal{T} where the key of each interval is its left endpoint. We augment each node 𝐮𝒯\mathbf{u}\in\mathcal{T} with an additional field which stores the interval in the subtree rooted at 𝐮\mathbf{u} that has the rightmost right endpoint. Given a query interval JJ, we first look for the interval in 𝒯\mathcal{T} whose key is smaller than or equal to the left endpoint of JJ and whose right endpoint is rightmost. With the augmented fields, this interval can be found in O~(1)\widetilde{O}(1) time. If this interval contains JJ, then we report it; otherwise, no interval in \mathcal{I} contains JJ. Clearly, 𝒯\mathcal{T} can be built in O~(n0)\widetilde{O}(n_{0}) time and dynamized with O~(1)\widetilde{O}(1) update time, and thus 𝒯\mathcal{T} is basic. ∎
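A static analogue of 𝒯 is again a sorted array, now with prefix maxima of right endpoints in place of the augmented fields (a sketch; names ours):

```python
import bisect

class ContainingIntervalQuery:
    """Static sketch of B2 (Lemma 30): report an interval containing J."""
    def __init__(self, intervals):            # intervals: list of (left, right)
        self.ivals = sorted(intervals)         # sorted by left endpoint (the key)
        self.lefts = [l for l, _ in self.ivals]
        self.prefix_max = []                   # interval with rightmost right endpoint so far
        best = None
        for iv in self.ivals:
            best = iv if best is None or iv[1] > best[1] else best
            self.prefix_max.append(best)

    def containing(self, J):
        """Some interval of the collection containing J, or None."""
        l, r = J
        i = bisect.bisect_right(self.lefts, l)  # intervals with left endpoint <= l
        if i == 0:
            return None
        cand = self.prefix_max[i - 1]
        return cand if cand[1] >= r else None
```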

Since 2\mathcal{B}_{2} is basic, it can be constructed in O~(n0)\widetilde{O}(n_{0}) time and updated in O~(1)\widetilde{O}(1) time. We conclude that the construction time of 𝒟new\mathcal{D}_{\text{new}} is O~(n0)\widetilde{O}(n_{0}).

Next, we consider how to implement the update procedure of 𝒟new\mathcal{D}_{\text{new}}. After each operation, we need to update the data structure 𝒜\mathcal{A} and the support data structure 2\mathcal{B}_{2}, which can be done in O~(1)\widetilde{O}(1) time since they are basic. Also, if some (Si,i)(S_{i},\mathcal{I}_{i}) changes, we need to update the data structure 𝒟old(i)\mathcal{D}_{\text{old}}^{(i)}. An operation on SS changes exactly one SiS_{i} and an operation on \mathcal{I} changes at most two i\mathcal{I}_{i}’s. Thus, we only need to update at most two 𝒟old(i)\mathcal{D}_{\text{old}}^{(i)}’s, and by using 1\mathcal{B}_{1} we can find these 𝒟old(i)\mathcal{D}_{\text{old}}^{(i)}’s in O~(1)\widetilde{O}(1) time. Note that the size of each (Si,i)(S_{i},\mathcal{I}_{i}) is bounded by O(f(n0,ε))O(f(n_{0},\varepsilon)) during the first period (i.e., before the first reconstruction), because the period only consists of f(n0,ε)f(n_{0},\varepsilon) operations. Thus, updating the 𝒟old\mathcal{D}_{\text{old}} data structures takes O~(fα(n0,ε)/ε1α)\widetilde{O}(f^{\alpha}(n_{0},\varepsilon)/\varepsilon^{1-\alpha}) amortized time.

Then we discuss the maintenance of the solution. The time for simulating the output-sensitive algorithm is O~(δ)\widetilde{O}(\delta), i.e., O~(min{r/ε,n})\widetilde{O}(\min\{r/\varepsilon,n\}). If the algorithm gives the solution appx\mathcal{I}_{\text{appx}}, we compute |appx||\mathcal{I}_{\text{appx}}| and store appx\mathcal{I}_{\text{appx}} in a binary search tree; by doing this, we can answer the size, membership, and reporting queries for appx\mathcal{I}_{\text{appx}} in the required query times. This step takes O~(δ)\widetilde{O}(\delta) time, i.e., O~(min{r/ε,n})\widetilde{O}(\min\{r/\varepsilon,n\}) time, since |appx|δ|\mathcal{I}_{\text{appx}}|\leq\delta in this case. If the output-sensitive algorithm fails, we compute the sets PP and PP^{\prime}. This can be done in O~(r)\widetilde{O}(r) time by using the support data structure 2\mathcal{B}_{2}. After this, we compute \mathcal{I}^{*}, which again takes O~(r)\widetilde{O}(r) time by using 2\mathcal{B}_{2}; specifically, we consider each iPi\in P and use 2\mathcal{B}_{2} to find an interval in \mathcal{I} that contains JiJ_{i}. We have appx=(iPi)\mathcal{I}_{\text{appx}}=\mathcal{I}^{*}\sqcup(\bigsqcup_{i\in P^{\prime}}\mathcal{I}_{i}^{*}). To support the size query for appx\mathcal{I}_{\text{appx}} in O(1)O(1) time, we need to compute |appx|=||+iP|i||\mathcal{I}_{\text{appx}}|=|\mathcal{I}^{*}|+\sum_{i\in P^{\prime}}|\mathcal{I}_{i}^{*}|. This can be done in O(r)O(r) time, because we can query 𝒟old(i)\mathcal{D}_{\text{old}}^{(i)} to obtain |i||\mathcal{I}_{i}^{*}| in O(1)O(1) time. In order to support the membership and reporting queries, we store \mathcal{I}^{*} in a binary search tree 𝒯\mathcal{T}. Also, we store the set PP^{\prime}, and for each iPi\in P^{\prime} we store a pointer pointing to the data structure 𝒟old(i)\mathcal{D}_{\text{old}}^{(i)}. Consider a membership query II\in\mathcal{I}. Using the binary search tree 𝒯\mathcal{T}, we can obtain the multiplicity of II in \mathcal{I}^{*} in O(log||)O(\log|\mathcal{I}^{*}|) time. Then we use 1\mathcal{B}_{1} to find in O(logr)O(\log r) time the (at most) two i\mathcal{I}_{i}’s that contain II (say IiI\in\mathcal{I}_{i} and IjI\in\mathcal{I}_{j}). By querying 𝒟old(i)\mathcal{D}_{\text{old}}^{(i)} and 𝒟old(j)\mathcal{D}_{\text{old}}^{(j)}, we know the multiplicities of II in i\mathcal{I}_{i}^{*} and j\mathcal{I}_{j}^{*}, which takes O(log|i|+log|j|)O(\log|\mathcal{I}_{i}^{*}|+\log|\mathcal{I}_{j}^{*}|) time. The sum of these multiplicities is just the multiplicity of II in appx\mathcal{I}_{\text{appx}}. The time for answering the query is O(log||+log|i|+log|j|+logr)O(\log|\mathcal{I}^{*}|+\log|\mathcal{I}_{i}^{*}|+\log|\mathcal{I}_{j}^{*}|+\log r). Note that log||+log|i|+log|j|=O(log|appx|)\log|\mathcal{I}^{*}|+\log|\mathcal{I}_{i}^{*}|+\log|\mathcal{I}_{j}^{*}|=O(\log|\mathcal{I}_{\text{appx}}|). Furthermore, because |appx|>δ|\mathcal{I}_{\text{appx}}|>\delta (as the output-sensitive algorithm fails), we have |appx|=Ω(r)|\mathcal{I}_{\text{appx}}|=\Omega(r). Thus, the time cost for a membership query is O(log|appx|)O(\log|\mathcal{I}_{\text{appx}}|). Finally, consider the reporting query. We first use the binary search tree 𝒯\mathcal{T} to report the elements in \mathcal{I}^{*}, using O(||)O(|\mathcal{I}^{*}|) time. Then for each iPi\in P^{\prime}, we query the data structure 𝒟old(i)\mathcal{D}_{\text{old}}^{(i)} to report the elements in i\mathcal{I}_{i}^{*}. The total time cost is O(||+iP|i|+|P|)O(|\mathcal{I}^{*}|+\sum_{i\in P^{\prime}}|\mathcal{I}_{i}^{*}|+|P^{\prime}|).
Since |P|r|P^{\prime}|\leq r and |appx|=Ω(r)|\mathcal{I}_{\text{appx}}|=\Omega(r) as argued before, the time for answering the reporting query is O(|appx|)O(|\mathcal{I}_{\text{appx}}|). The above work for storing appx\mathcal{I}_{\text{appx}} takes O~(r)\widetilde{O}(r) time, since ||=O(r)|\mathcal{I}^{*}|=O(r) and |P|=O(r)|P^{\prime}|=O(r). To summarize, maintaining the solution takes O~(min{r/ε,n}+r)\widetilde{O}(\min\{r/\varepsilon,n\}+r) time.

After processing f(n0,ε)f(n_{0},\varepsilon) operations, we need to reconstruct the entire data structure 𝒟new\mathcal{D}_{\text{new}}. The reconstruction is the same as the initial construction, except that n0n_{0} is replaced with n1n_{1}, the size of (S,)(S,\mathcal{I}) at the time of reconstruction. Thus, the reconstruction takes O~(n1)\widetilde{O}(n_{1}) time. We amortize the time cost over all the f(n0,ε)f(n_{0},\varepsilon) operations in the period. Since n1n0+f(n0,ε)n_{1}\leq n_{0}+f(n_{0},\varepsilon), the amortized time for reconstruction is O~(n0/f(n0,ε))\widetilde{O}(n_{0}/f(n_{0},\varepsilon)), i.e., O~(r)\widetilde{O}(r).

Combining the time for updating the 𝒟old\mathcal{D}_{\text{old}} data structures, the time for maintaining the solution, and the time for reconstruction, we see that the amortized update time of 𝒟new\mathcal{D}_{\text{new}} is O~(fα(n0,ε)/ε1α+min{r/ε,n}+r)\widetilde{O}(f^{\alpha}(n_{0},\varepsilon)/\varepsilon^{1-\alpha}+\min\{r/\varepsilon,n\}+r) during the first period (since 𝒟new\mathcal{D}_{\text{new}} is reconstructed periodically, it suffices to analyze the update time in the first period). By property (1) of ff, we have n=Θ(n0)n=\Theta(n_{0}) at any time in the period, i.e., the size of (S,)(S,\mathcal{I}) is Θ(n0)\Theta(n_{0}) at any time in the period. By property (2) of ff, we further have f(n,ε)=Θ(f(n0,ε))f(n,\varepsilon)=\Theta(f(n_{0},\varepsilon)) at any time in the period. It follows that the amortized update time of 𝒟new\mathcal{D}_{\text{new}} is O~(fα(n,ε)/ε1α+min{n/(f(n,ε)ε),n}+n/f(n,ε))\widetilde{O}(f^{\alpha}(n,\varepsilon)/\varepsilon^{1-\alpha}+\min\{n/(f(n,\varepsilon)\cdot\varepsilon),n\}+n/f(n,\varepsilon)) during the period. To minimize the time complexity while guaranteeing the two conditions of ff, we set f(n,ε)=min{n1α/εα,n/2}f(n,\varepsilon)=\min\{n^{1-\alpha^{\prime}}/\varepsilon^{\alpha^{\prime}},n/2\} where α\alpha^{\prime} is as defined in Theorem 2, i.e., α=α/(1+α)\alpha^{\prime}=\alpha/(1+\alpha). The following lemma shows that our choice of ff makes the time bound O~(nα/ε1α)\widetilde{O}(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}).

Lemma 31.

When f(n,ε)=min{n1α/εα,n/2}f(n,\varepsilon)=\min\{n^{1-\alpha^{\prime}}/\varepsilon^{\alpha^{\prime}},n/2\}, we have

fα(n,ε)ε1α+min{nf(n,ε)ε,n}+nf(n,ε)=O(nαε1α).\frac{f^{\alpha}(n,\varepsilon)}{\varepsilon^{1-\alpha}}+\min\left\{\frac{n}{f(n,\varepsilon)\cdot\varepsilon},n\right\}+\frac{n}{f(n,\varepsilon)}=O\left(\frac{n^{\alpha^{\prime}}}{\varepsilon^{1-\alpha^{\prime}}}\right).
Proof.

If f(n,ε)=n1α/εαf(n,\varepsilon)=n^{1-\alpha^{\prime}}/\varepsilon^{\alpha^{\prime}}, then one can easily verify the equation in the lemma via a direct computation (by bounding each of the three terms on the left-hand side). It suffices to verify the equation for the case f(n,ε)=n/2f(n,\varepsilon)=n/2. In this case, we have n1α/εαn/2n^{1-\alpha^{\prime}}/\varepsilon^{\alpha^{\prime}}\geq n/2, implying that n=O(1/ε)n=O(1/\varepsilon). It follows that nα/ε1α=O(nα/ε1α)n^{\alpha}/\varepsilon^{1-\alpha}=O(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}). So the first term in the left-hand side is bounded by O(nα/ε1α)O(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}). The second term is O(min{1/ε,n})O(\min\{1/\varepsilon,n\}), which is bounded by O(nα/ε1α)O(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}). The third term is clearly O(1)O(1). This proves the equation in the lemma. ∎
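For the reader's convenience, the direct computation for the case f(n,ε)=n^{1−α′}/ε^{α′} mentioned in the proof is the following; it uses the identities α(1−α′)=α′ and αα′+1−α=1−α′ (both immediate from α′=α/(1+α)), and ε≤1 for the last bound.

```latex
\frac{f^{\alpha}(n,\varepsilon)}{\varepsilon^{1-\alpha}}
  = \frac{n^{\alpha(1-\alpha')}}{\varepsilon^{\alpha\alpha'+1-\alpha}}
  = \frac{n^{\alpha'}}{\varepsilon^{1-\alpha'}},
\qquad
\frac{n}{f(n,\varepsilon)\cdot\varepsilon}
  = \frac{n^{\alpha'}\varepsilon^{\alpha'}}{\varepsilon}
  = \frac{n^{\alpha'}}{\varepsilon^{1-\alpha'}},
\qquad
\frac{n}{f(n,\varepsilon)}
  = n^{\alpha'}\varepsilon^{\alpha'}
  \le \frac{n^{\alpha'}}{\varepsilon^{1-\alpha'}}.
```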

Appendix C Implementation details and detailed time analysis for the dynamic quadrant set cover data structure

We present the implementation details of our dynamic quadrant set cover data structure 𝒟new\mathcal{D}_{\text{new}} in Section 4.2 as well as a detailed time analysis. Since we are interested in the asymptotic bounds, we may assume that the approximation factor ε\varepsilon is sufficiently small, say ε<1\varepsilon<1. Assume the function ff we choose satisfies two properties: (1) m/2f(m,ε)m/2\sqrt{m}/2\leq f(m,\varepsilon)\leq m/2 for any mm, and (2) f(Θ(m),ε)=Θ(f(m,ε))f(\Theta(m),\varepsilon)=\Theta(f(m,\varepsilon)). Note that property (1) implies that r=n0/f(n0,ε)=O(f(n0,ε))r=\lceil n_{0}/f(n_{0},\varepsilon)\rceil=O(f(n_{0},\varepsilon)) and r2=O(n0)r^{2}=O(n_{0}).

First, we discuss how to construct 𝒟new\mathcal{D}_{\text{new}}. Constructing the data structure 𝒜\mathcal{A} takes O~(n0)\widetilde{O}(n_{0}) time, as it is basic. The grid can be built in O~(n0)\widetilde{O}(n_{0}) time by sorting the points in SS and the vertices of the quadrants in 𝒬\mathcal{Q}. Once the grid is computed, we build a (static) point location data structure 1\mathcal{B}_{1} which can report in O(logr)O(\log r) time, for a given point a2a\in\mathbb{R}^{2}, the grid cell that contains aa. Since the grid has r2r^{2} cells, 1\mathcal{B}_{1} can be built in O~(r2)\widetilde{O}(r^{2}) time. With 1\mathcal{B}_{1} in hand, we can determine in O~(1)\widetilde{O}(1) time, for each point aSa\in S (resp., each quadrant Q𝒬Q\in\mathcal{Q}), the cell that contains aa (resp., the vertex of QQ). By doing this for all points in SS and all quadrants in 𝒬\mathcal{Q}, we obtain all Si,jS_{i,j} and the non-special quadrants in all 𝒬i,j\mathcal{Q}_{i,j} in O~(n0+r2)\widetilde{O}(n_{0}+r^{2}) time, i.e., O~(n0)\widetilde{O}(n_{0}) time. In order to compute the special quadrants, we need the following support data structure 2\mathcal{B}_{2}.

Lemma 32.

One can store 𝒬\mathcal{Q} in a basic data structure 2\mathcal{B}_{2} such that given an orthogonal rectangle \Box, the leftmost (resp., rightmost) quadrant in 𝒬\mathcal{Q} that right (resp., left) intersects \Box and the topmost (resp., bottommost) quadrant in 𝒬\mathcal{Q} that bottom (resp., top) intersects \Box can be reported in O~(1)\widetilde{O}(1) time (if they exist).

Proof.

It suffices to show how to compute the leftmost quadrant in 𝒬\mathcal{Q} that right intersects a given orthogonal rectangle \Box. Note that only southeast and northeast quadrants can right intersect a rectangle, and without loss of generality, it suffices to show how to find the leftmost southeast quadrant in 𝒬\mathcal{Q} that right intersects \Box. Let 𝒬SE𝒬\mathcal{Q}^{\text{SE}}\subseteq\mathcal{Q} consist of the southeast quadrants. We store the vertices of the quadrants in 𝒬SE\mathcal{Q}^{\text{SE}} in a basic 3-sided range-minimum data structure (Lemma 36) with O~(1)\widetilde{O}(1) query time, by setting the weight of each vertex to be its xx-coordinate. Let =[x1,x2]×[y1,y2]\Box=[x_{1},x_{2}]\times[y_{1},y_{2}] be a given orthogonal rectangle. Observe that a quadrant in 𝒬SE\mathcal{Q}^{\text{SE}} right intersects \Box iff its vertex lies in the 3-sided rectangle (x1,x2]×[y2,)(x_{1},x_{2}]\times[y_{2},\infty). Thus, the leftmost quadrant in 𝒬SE\mathcal{Q}^{\text{SE}} that right intersects \Box corresponds to the lightest vertex (i.e., the vertex with the smallest weight) in (x1,x2]×[y2,)(x_{1},x_{2}]\times[y_{2},\infty), which can be reported by the 3-sided range-minimum data structure in O~(1)\widetilde{O}(1) time. ∎

Using the support data structure 2\mathcal{B}_{2}, we can find the special quadrants in all 𝒬i,j\mathcal{Q}_{i,j} in O~(r2)\widetilde{O}(r^{2}) time. After this, we can build the data structures 𝒟old(i,j)\mathcal{D}_{\text{old}}^{(i,j)}’s. Constructing each 𝒟old(i,j)\mathcal{D}_{\text{old}}^{(i,j)} takes O~(|Si,j|+|𝒬i,j|)\widetilde{O}(|S_{i,j}|+|\mathcal{Q}_{i,j}|) time. By Lemma 10, the time for constructing all 𝒟old(i,j)\mathcal{D}_{\text{old}}^{(i,j)}’s is O~(n0+r2)\widetilde{O}(n_{0}+r^{2}). Therefore, the entire construction time of 𝒟new\mathcal{D}_{\text{new}} is O~(n0+r2)\widetilde{O}(n_{0}+r^{2}), i.e., O~(n0)\widetilde{O}(n_{0}).

The support data structures 1\mathcal{B}_{1} and 2\mathcal{B}_{2} will be used later in the implementation of the update procedure of 𝒟new\mathcal{D}_{\text{new}}. Thus, 2\mathcal{B}_{2} will be updated after each operation (while 1\mathcal{B}_{1} is static). Besides 1\mathcal{B}_{1} and 2\mathcal{B}_{2}, we need another support data structure 3\mathcal{B}_{3} defined as follows.

Lemma 33.

One can store 𝒬\mathcal{Q} in a basic data structure 3\mathcal{B}_{3} such that given an orthogonal rectangle \Box, a quadrant Q𝒬Q\in\mathcal{Q} that contains \Box can be reported in O~(1)\widetilde{O}(1) time (if it exists).

Proof.

It suffices to consider the southeast quadrants. Let 𝒬SE𝒬\mathcal{Q}^{\text{SE}}\subseteq\mathcal{Q} consist of the southeast quadrants. We store 𝒬SE\mathcal{Q}^{\text{SE}} in a binary search tree 𝒯\mathcal{T} where the key of a quadrant is the xx-coordinate of its vertex. We augment each node 𝐮𝒯\mathbf{u}\in\mathcal{T} with an additional field which stores the topmost quadrant in the subtree rooted at 𝐮\mathbf{u}. Given an orthogonal rectangle =[x1,x2]×[y1,y2]\Box=[x_{1},x_{2}]\times[y_{1},y_{2}], we first look for the topmost quadrant QQ whose key is smaller than or equal to x1x_{1}. With the augmented fields, QQ can be found in O~(1)\widetilde{O}(1) time using 𝒯\mathcal{T}. If QQ contains \Box, then we report QQ; otherwise, no quadrant in 𝒬SE\mathcal{Q}^{\text{SE}} contains \Box (because a southeast quadrant contains \Box iff the xx-coordinate of its vertex is smaller than or equal to x1x_{1} and the yy-coordinate of its vertex is greater than or equal to y2y_{2}). Clearly, 𝒯\mathcal{T} can be built in O~(n0)\widetilde{O}(n_{0}) time and dynamized with O~(1)\widetilde{O}(1) update time, and thus 𝒯\mathcal{T} is basic. ∎
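A static sketch of the southeast case, using the containment criterion stated in the proof (vertex x-coordinate at most x1 and vertex y-coordinate at least y2), with prefix maxima in place of the augmented fields (names ours):

```python
import bisect

class ContainingQuadrantQuery:
    """Static sketch of B3 (Lemma 33), for southeast quadrants only."""
    def __init__(self, se_vertices):            # vertices as (x, y) pairs
        self.verts = sorted(se_vertices)         # sorted by x-coordinate (the key)
        self.xs = [v[0] for v in self.verts]
        self.prefix_topmost = []                 # topmost vertex among verts[0..i]
        best = None
        for v in self.verts:
            best = v if best is None or v[1] > best[1] else best
            self.prefix_topmost.append(best)

    def containing(self, rect):
        """A SE quadrant (given by its vertex) containing rect, or None."""
        x1, x2, y1, y2 = rect
        i = bisect.bisect_right(self.xs, x1)     # vertices with x-coordinate <= x1
        if i == 0:
            return None
        q = self.prefix_topmost[i - 1]           # topmost such quadrant
        return q if q[1] >= y2 else None
```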

Next, we consider how to implement the update procedure of 𝒟new\mathcal{D}_{\text{new}}. We first discuss the update of the 𝒟old\mathcal{D}_{\text{old}} data structures. If the operation is an insertion or deletion on SS, then we use 1\mathcal{B}_{1} to find the cell i,j\Box_{i,j} that contains the inserted/deleted point and update the data structure 𝒟old(i,j)\mathcal{D}_{\text{old}}^{(i,j)}. The situation is more complicated when the operation happens on 𝒬\mathcal{Q}. Suppose the operation inserts a quadrant QQ to 𝒬\mathcal{Q}. Without loss of generality, assume QQ is a southeast quadrant. Using 1\mathcal{B}_{1}, we can find the cell i,j\Box_{i,j} that contains the vertex of QQ. We then update the data structure 𝒟old(i,j)\mathcal{D}_{\text{old}}^{(i,j)} by inserting QQ to 𝒬i,j\mathcal{Q}_{i,j}. Besides, the insertion of QQ may also change 𝒬i,k\mathcal{Q}_{i,k} for k{j+1,,r}k\in\{j+1,\dots,r\} and 𝒬k,j\mathcal{Q}_{k,j} for k{i+1,,r}k\in\{i+1,\dots,r\}. Fix an index k{j+1,,r}k\in\{j+1,\dots,r\}. The quadrant QQ bottom intersects i,k\Box_{i,k}. We use the support data structure 2\mathcal{B}_{2} to find the topmost quadrant QQ^{\prime} in 𝒬\mathcal{Q} (before the insertion of QQ) that bottom intersects i,k\Box_{i,k}. Then QQ^{\prime} is a special quadrant in 𝒬i,k\mathcal{Q}_{i,k}. If the yy-coordinate of the vertex of QQ is greater than the yy-coordinate of the vertex of QQ^{\prime}, then we delete QQ^{\prime} from 𝒬i,k\mathcal{Q}_{i,k} and insert QQ to 𝒬i,k\mathcal{Q}_{i,k}, and we update 𝒟old(i,k)\mathcal{D}_{\text{old}}^{(i,k)} twice for these two operations. The data structures 𝒟old(k,j)\mathcal{D}_{\text{old}}^{(k,j)}’s can be updated similarly. Updating each 𝒟old(i,k)\mathcal{D}_{\text{old}}^{(i,k)} takes O~(mi,kα/ε1α)\widetilde{O}(m_{i,k}^{\alpha}/\varepsilon^{1-\alpha}) amortized time, where mi,km_{i,k} is the size of the current (Si,k,𝒬i,k)(S_{i,k},\mathcal{Q}_{i,k}), and updating each 𝒟old(k,j)\mathcal{D}_{\text{old}}^{(k,j)} takes O~(mk,jα/ε1α)\widetilde{O}(m_{k,j}^{\alpha}/\varepsilon^{1-\alpha}) amortized time. Therefore, the total amortized time cost for updating these 𝒟old\mathcal{D}_{\text{old}} data structures is bounded by O~(k=1rmi,kα/ε1α+k=1rmk,jα/ε1α)\widetilde{O}(\sum_{k=1}^{r}m_{i,k}^{\alpha}/\varepsilon^{1-\alpha}+\sum_{k=1}^{r}m_{k,j}^{\alpha}/\varepsilon^{1-\alpha}). By Lemma 10 and the fact r=O(f(n0,ε))r=O(f(n_{0},\varepsilon)), we have k=1rmi,k=O(f(n0,ε))\sum_{k=1}^{r}m_{i,k}=O(f(n_{0},\varepsilon)) and k=1rmk,j=O(f(n0,ε))\sum_{k=1}^{r}m_{k,j}=O(f(n_{0},\varepsilon)) at any time of the first period. Since α1\alpha\leq 1, by Hölder’s inequality and Lemma 10,

k=1rmi,kα(k=1rmi,kr)αr=O(r1αfα(n0,ε))\sum_{k=1}^{r}m_{i,k}^{\alpha}\leq\left(\frac{\sum_{k=1}^{r}m_{i,k}}{r}\right)^{\alpha}\cdot r=O(r^{1-\alpha}\cdot f^{\alpha}(n_{0},\varepsilon))

and similarly k=1rmk,jα=O(r1αfα(n0,ε))\sum_{k=1}^{r}m_{k,j}^{\alpha}=O(r^{1-\alpha}\cdot f^{\alpha}(n_{0},\varepsilon)). It follows that updating the 𝒟old\mathcal{D}_{\text{old}} data structures takes O~(r1αfα(n0,ε)/ε1α)\widetilde{O}(r^{1-\alpha}\cdot f^{\alpha}(n_{0},\varepsilon)/\varepsilon^{1-\alpha}) amortized time. Updating the data structure 𝒜\mathcal{A} and the support data structures 2\mathcal{B}_{2} and 3\mathcal{B}_{3} can be done in O~(1)\widetilde{O}(1) time since they are basic.

Then we discuss the maintenance of the solution. The time for simulating the output-sensitive algorithm is O~(μδ)\widetilde{O}(\mu\cdot\delta), i.e., O~(min{r2/ε,n})\widetilde{O}(\min\{r^{2}/\varepsilon,n\}). If the algorithm gives the solution 𝒬appx\mathcal{Q}_{\text{appx}}, we compute |𝒬appx||\mathcal{Q}_{\text{appx}}| and store 𝒬appx\mathcal{Q}_{\text{appx}} in a binary search tree; by doing this, we can answer the size, membership, and reporting queries for 𝒬appx\mathcal{Q}_{\text{appx}} in the required query times. This step takes O~(μδ)\widetilde{O}(\mu\cdot\delta) time, i.e., O~(min{r2/ε,n})\widetilde{O}(\min\{r^{2}/\varepsilon,n\}) time, since |𝒬appx|μδ|\mathcal{Q}_{\text{appx}}|\leq\mu\cdot\delta in this case. If the output-sensitive algorithm fails, we compute the sets PP and PP^{\prime}. This can be done in O~(r2)\widetilde{O}(r^{2}) time by using the support data structure 3\mathcal{B}_{3}. After this, we compute 𝒬\mathcal{Q}^{*}, which again takes O~(r2)\widetilde{O}(r^{2}) time by using 3\mathcal{B}_{3}; specifically, we consider each (i,j)P(i,j)\in P and use 3\mathcal{B}_{3} to find a quadrant in 𝒬\mathcal{Q} that contains i,j\Box_{i,j}. We have 𝒬appx=𝒬((i,j)P𝒬i,j)\mathcal{Q}_{\text{appx}}=\mathcal{Q}^{*}\sqcup(\bigsqcup_{(i,j)\in P^{\prime}}\mathcal{Q}_{i,j}^{*}). To support the size query for 𝒬appx\mathcal{Q}_{\text{appx}} in O(1)O(1) time, we need to compute |𝒬appx|=|𝒬|+(i,j)P|𝒬i,j||\mathcal{Q}_{\text{appx}}|=|\mathcal{Q}^{*}|+\sum_{(i,j)\in P^{\prime}}|\mathcal{Q}_{i,j}^{*}|. This can be done in O(r2)O(r^{2}) time, because we can query 𝒟old(i,j)\mathcal{D}_{\text{old}}^{(i,j)} to obtain |𝒬i,j||\mathcal{Q}_{i,j}^{*}| in O(1)O(1) time. To support the reporting query in O(|𝒬appx|)O(|\mathcal{Q}_{\text{appx}}|) time, we only need to store 𝒬\mathcal{Q}^{*} and PP^{\prime}, and store at each (i,j)P(i,j)\in P^{\prime} a pointer pointing to the data structure 𝒟old(i,j)\mathcal{D}_{\text{old}}^{(i,j)}. In this way, we can report the quadrants in 𝒬\mathcal{Q}^{*} and for each (i,j)P(i,j)\in P^{\prime}, report the quadrants in 𝒬i,j\mathcal{Q}_{i,j}^{*} in O(|𝒬i,j|)O(|\mathcal{Q}_{i,j}^{*}|) time by querying 𝒟old(i,j)\mathcal{D}_{\text{old}}^{(i,j)}. Supporting the membership query in O(log|𝒬appx|)O(\log|\mathcal{Q}_{\text{appx}}|) time is more difficult, since a quadrant may belong to many 𝒬i,j\mathcal{Q}_{i,j}^{*}’s. To handle this issue, the idea is to collect all the special quadrants in the 𝒬i,j\mathcal{Q}_{i,j}^{*}’s. Specifically, let 𝒫i,j𝒬i,j\mathcal{P}_{i,j}^{*}\subseteq\mathcal{Q}_{i,j}^{*} consist of the (at most) four special quadrants. We can compute 𝒫i,j\mathcal{P}_{i,j}^{*} for each (i,j)P(i,j)\in P^{\prime} in O~(1)\widetilde{O}(1) time by first finding the (at most) four special quadrants in 𝒬i,j\mathcal{Q}_{i,j} using 2\mathcal{B}_{2} and computing the multiplicity of each special quadrant in 𝒬i,j\mathcal{Q}_{i,j}^{*} by querying 𝒟old(i,j)\mathcal{D}_{\text{old}}^{(i,j)}. We then store 𝒬((i,j)P𝒫i,j)\mathcal{Q}^{*}\sqcup(\bigsqcup_{(i,j)\in P^{\prime}}\mathcal{P}_{i,j}^{*}) in a binary search tree 𝒯\mathcal{T}. Given a query quadrant Q𝒬Q\in\mathcal{Q}, we first use 𝒯\mathcal{T} to compute the multiplicity p1p_{1} of QQ in 𝒬((i,j)P𝒫i,j)\mathcal{Q}^{*}\sqcup(\bigsqcup_{(i,j)\in P^{\prime}}\mathcal{P}_{i,j}^{*}) in O(log|𝒬appx|)O(\log|\mathcal{Q}_{\text{appx}}|) time. 
Then we use 1\mathcal{B}_{1} to find the cell i,j\Box_{i,j} that contains the vertex of QQ in O(logr)O(\log r) time and query 𝒟old(i,j)\mathcal{D}_{\text{old}}^{(i,j)} to find the multiplicity p2p_{2} of QQ in 𝒬i,j\mathcal{Q}_{i,j}^{*} in O(log|𝒬i,j|)O(\log|\mathcal{Q}_{i,j}^{*}|) time. One can easily verify that p1+p2p_{1}+p_{2} is the multiplicity of QQ in 𝒬appx\mathcal{Q}_{\text{appx}}. The query takes O(log|𝒬appx|+logr+log|𝒬i,j|)O(\log|\mathcal{Q}_{\text{appx}}|+\log r+\log|\mathcal{Q}_{i,j}^{*}|) time. Note that |𝒬i,j||𝒬appx||\mathcal{Q}_{i,j}^{*}|\leq|\mathcal{Q}_{\text{appx}}| and |𝒬appx|=Ω(r)|\mathcal{Q}_{\text{appx}}|=\Omega(r), where the latter follows from the fact that |𝒬appx|>δ|\mathcal{Q}_{\text{appx}}|>\delta (as the output-sensitive algorithm fails). Therefore, we can support the membership query in O(log|𝒬appx|)O(\log|\mathcal{Q}_{\text{appx}}|) time. The above work for storing 𝒬appx\mathcal{Q}_{\text{appx}} takes O~(r2)\widetilde{O}(r^{2}) time, since |𝒬|=O(r2)|\mathcal{Q}^{*}|=O(r^{2}) and |P|=O(r2)|P^{\prime}|=O(r^{2}). To summarize, maintaining the solution takes O~(min{r2/ε,n}+r2)\widetilde{O}(\min\{r^{2}/\varepsilon,n\}+r^{2}) time.

After processing f(n0,ε)f(n_{0},\varepsilon) operations, we need to reconstruct the entire data structure 𝒟new\mathcal{D}_{\text{new}}. The reconstruction is the same as the initial construction, except that n0n_{0} is replaced with n1n_{1}, the size of (S,𝒬)(S,\mathcal{Q}) at the time of reconstruction. Thus, the reconstruction takes O~(n1)\widetilde{O}(n_{1}) time. We amortize the time cost over all the f(n0,ε)f(n_{0},\varepsilon) operations in the period. Since n1n0+f(n0,ε)n_{1}\leq n_{0}+f(n_{0},\varepsilon), the amortized time for reconstruction is O~(n0/f(n0,ε))\widetilde{O}(n_{0}/f(n_{0},\varepsilon)), i.e., O~(r)\widetilde{O}(r).

Combining the time for updating the 𝒟old\mathcal{D}_{\text{old}} data structures, the time for maintaining the solution, and the time for reconstruction, we see that the amortized update time of 𝒟new\mathcal{D}_{\text{new}} is O~(r1αfα(n0,ε)/ε1α+min{r2/ε,n}+r2)\widetilde{O}(r^{1-\alpha}\cdot f^{\alpha}(n_{0},\varepsilon)/\varepsilon^{1-\alpha}+\min\{r^{2}/\varepsilon,n\}+r^{2}) during the first period (since 𝒟new\mathcal{D}_{\text{new}} is reconstructed periodically, it suffices to analyze the update time in the first period). By property (1) of ff, we have n=Θ(n0)n=\Theta(n_{0}) at any time in the period, i.e., the size of (S,𝒬)(S,\mathcal{Q}) is Θ(n0)\Theta(n_{0}) at any time in the period. By property (2) of ff, we further have f(n,ε)=Θ(f(n0,ε))f(n,\varepsilon)=\Theta(f(n_{0},\varepsilon)) at any time in the period. It follows that the amortized update time of 𝒟new\mathcal{D}_{\text{new}} is O~(n1α/(f12α(n,ε)ε1α)+min{n2/(f2(n,ε)ε),n}+n2/f2(n,ε))\widetilde{O}(n^{1-\alpha}/(f^{1-2\alpha}(n,\varepsilon)\cdot\varepsilon^{1-\alpha})+\min\{n^{2}/(f^{2}(n,\varepsilon)\cdot\varepsilon),n\}+n^{2}/f^{2}(n,\varepsilon)). To minimize the time complexity while guaranteeing the two conditions of ff, we set f(n,ε)=min{n1α/2/(ε)α,n/2}f(n,\varepsilon)=\min\{n^{1-\alpha^{\prime}/2}/(\sqrt{\varepsilon})^{\alpha^{\prime}},n/2\} where α\alpha^{\prime} is as defined in Theorem 7, i.e., α=2α/(1+2α)\alpha^{\prime}=2\alpha/(1+2\alpha). Note that by doing this we have f(n,ε)nf(n,\varepsilon)\geq\sqrt{n} because of our assumption ε<1\varepsilon<1. The following lemma shows that our choice of ff makes the time bound O~(nα/ε1α)\widetilde{O}(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}).

Lemma 34.

When f(n,ε)=min{n1α/2/(ε)α,n/2}f(n,\varepsilon)=\min\{n^{1-\alpha^{\prime}/2}/(\sqrt{\varepsilon})^{\alpha^{\prime}},n/2\}, we have

n1αf12α(n,ε)ε1α+min{n2f2(n,ε)ε,n}+n2f2(n,ε)=O(nαε1α).\frac{n^{1-\alpha}}{f^{1-2\alpha}(n,\varepsilon)\cdot\varepsilon^{1-\alpha}}+\min\left\{\frac{n^{2}}{f^{2}(n,\varepsilon)\cdot\varepsilon},n\right\}+\frac{n^{2}}{f^{2}(n,\varepsilon)}=O\left(\frac{n^{\alpha^{\prime}}}{\varepsilon^{1-\alpha^{\prime}}}\right).
Proof.

If f(n,ε)=n1α/2/(ε)αf(n,\varepsilon)=n^{1-\alpha^{\prime}/2}/(\sqrt{\varepsilon})^{\alpha^{\prime}}, then one can easily verify the equation in the lemma via a direct computation (by bounding each of the three terms on the left-hand side). It suffices to verify the equation for the case f(n,ε)=n/2f(n,\varepsilon)=n/2. In this case, we have n1α/2/(ε)αn/2n^{1-\alpha^{\prime}/2}/(\sqrt{\varepsilon})^{\alpha^{\prime}}\geq n/2, implying that n=O(1/ε)n=O(1/\varepsilon). It follows that nα/ε1α=O(nα/ε1α)n^{\alpha}/\varepsilon^{1-\alpha}=O(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}). So the first term in the left-hand side is bounded by O(nα/ε1α)O(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}). The second term is O(min{1/ε,n})O(\min\{1/\varepsilon,n\}), which is bounded by O(nα/ε1α)O(n^{\alpha^{\prime}}/\varepsilon^{1-\alpha^{\prime}}). The third term is clearly O(1)O(1). This proves the equation in the lemma. ∎
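Analogously to Lemma 31, the direct computation for the case f(n,ε)=n^{1−α′/2}/(√ε)^{α′} goes as follows; here f²(n,ε)=n^{2−α′}/ε^{α′}, the two exponent identities (1−α)−(1−α′/2)(1−2α)=α′ and α′(1−2α)/2−(1−α)=−(1−α′) follow by direct expansion from α′=2α/(1+2α), and the last bound uses ε<1.

```latex
\frac{n^{1-\alpha}}{f^{1-2\alpha}(n,\varepsilon)\cdot\varepsilon^{1-\alpha}}
  = n^{(1-\alpha)-(1-\alpha'/2)(1-2\alpha)}\cdot
    \varepsilon^{\alpha'(1-2\alpha)/2-(1-\alpha)}
  = \frac{n^{\alpha'}}{\varepsilon^{1-\alpha'}},
\qquad
\frac{n^{2}}{f^{2}(n,\varepsilon)\cdot\varepsilon}
  = \frac{n^{\alpha'}}{\varepsilon^{1-\alpha'}},
\qquad
\frac{n^{2}}{f^{2}(n,\varepsilon)}
  = n^{\alpha'}\varepsilon^{\alpha'}
  \le \frac{n^{\alpha'}}{\varepsilon^{1-\alpha'}}.
```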

Appendix D Missing details in the output-sensitive quadrant set cover algorithm

D.1 Handling the no-solution case

Let U=Q𝒬QU=\bigcup_{Q\in\mathcal{Q}}Q. Observe that whether or not (S,𝒬)(S,\mathcal{Q}) has a set cover, the algorithm in Section 4.2.2 gives an O(1)O(1)-approximate optimal set cover 𝒬\mathcal{Q}^{*} for (SU,𝒬)(S\cap U,\mathcal{Q}). Thus, in order to handle the no-solution case, we only need to check whether 𝒬\mathcal{Q}^{*} covers all points in SS, after 𝒬\mathcal{Q}^{*} is computed. Define U=Q𝒬QU^{*}=\bigcup_{Q\in\mathcal{Q}^{*}}Q. Note that 𝒬\mathcal{Q}^{*} is a set cover for SS iff 2\U\mathbb{R}^{2}\backslash U^{*} does not contain any point in SS. The area 2\U\mathbb{R}^{2}\backslash U^{*} is a rectilinear domain (not necessarily connected). Since UU^{*} is the union of O(|𝒬|)O(|\mathcal{Q}^{*}|) quadrants, the complexity of UU^{*} is O(|𝒬|)O(|\mathcal{Q}^{*}|), and so is the complexity of 2\U\mathbb{R}^{2}\backslash U^{*}. Furthermore, it is easy to compute 2\U\mathbb{R}^{2}\backslash U^{*} and decompose it into O(|𝒬|)O(|\mathcal{Q}^{*}|) rectangles in O~(|𝒬|)\widetilde{O}(|\mathcal{Q}^{*}|) time, given 𝒬\mathcal{Q}^{*} in hand. We then only need to test for each such rectangle RR whether RR contains any point in SS or not, which can be done via an orthogonal range-emptiness query on SS; there are existing basic data structures that support orthogonal range-emptiness queries in O~(1)\widetilde{O}(1) time [24].

D.2 Implementing the algorithm using basic data structures

In this section, we show that the output-sensitive quadrant set cover algorithm can be performed in O~(𝗈𝗉𝗍)\widetilde{O}(\mathsf{opt}) time by storing the instance (S,𝒬)(S,\mathcal{Q}) in some basic data structure. As argued in Section 4.2.2, it suffices to show how to implement the following operations in O~(1)\widetilde{O}(1) time using basic data structures (please refer to Section 4.2.2 for the notations).

  • Computing the point \sigma.

  • Given a point a\in\mathbb{R}^{2}, testing whether a\in U^{\text{NE}} and a\in U^{\text{NW}}.

  • Given a point a\in\mathbb{R}^{2}, computing the quadrants \varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{SW}}), \varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}), \varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}), and \varPhi_{\uparrow}(a,\mathcal{Q}^{\text{NE}}).

  • Given a number \tilde{y}\in\mathbb{R}, computing the point \phi(\tilde{y}).

Computing \sigma.

Recall that \gamma is the boundary of U^{\text{SE}}, which is a staircase curve from bottom-left to top-right. The region U^{\text{SW}} contains the bottom-left end of \gamma (assuming \gamma\cap U^{\text{SW}}\neq\emptyset). The point \sigma is the “endpoint” of \gamma\cap U^{\text{SW}}, i.e., the point on \gamma contained in U^{\text{SW}} that is closest to the top-right end of \gamma. We define the height of U^{\text{SW}} at x\in\mathbb{R}, denoted by \mathsf{ht}_{x}(U^{\text{SW}}), as the largest number y\in\mathbb{R} such that the point (x,y) is contained in U^{\text{SW}}; similarly, we define the height of U^{\text{SE}} at x\in\mathbb{R}, denoted by \mathsf{ht}_{x}(U^{\text{SE}}). We say U^{\text{SW}} is higher than U^{\text{SE}} at x\in\mathbb{R} if \mathsf{ht}_{x}(U^{\text{SW}})\geq\mathsf{ht}_{x}(U^{\text{SE}}); otherwise, we say U^{\text{SW}} is lower than U^{\text{SE}} at x. It is easy to see the following three facts about the point \sigma.

  1. The x-coordinate of \sigma, denoted by x_{\sigma}, is equal to x(Q) for some Q\in\mathcal{Q}^{\text{SE}}\cup\mathcal{Q}^{\text{SW}}; recall that x(Q) is the x-coordinate of the vertex of Q.

  2. U^{\text{SW}} is higher than U^{\text{SE}} at all x<x_{\sigma}, and U^{\text{SW}} is lower than U^{\text{SE}} at all x>x_{\sigma}.

  3. The coordinates of \sigma are (x_{\sigma},\min\{\mathsf{ht}_{x_{\sigma}}(U^{\text{SW}}),\mathsf{ht}_{x_{\sigma}}(U^{\text{SE}})\}).

Based on these facts, we compute \sigma as follows. First, we need a basic data structure built on \mathcal{Q}^{\text{SW}} (resp., \mathcal{Q}^{\text{SE}}) that can compute the height function in \widetilde{O}(1) time.

Lemma 35.

One can store \mathcal{Q}^{\textnormal{SW}} (resp., \mathcal{Q}^{\textnormal{SE}}) in a basic data structure which can report \mathsf{ht}_{x}(U^{\textnormal{SW}}) (resp., \mathsf{ht}_{x}(U^{\textnormal{SE}})) for a given number x\in\mathbb{R} in \widetilde{O}(1) time.

Proof.

It suffices to consider \mathcal{Q}^{\text{SW}}. We store \mathcal{Q}^{\text{SW}} in a standard binary search tree \mathcal{T}, using x(Q) as the key of each quadrant Q\in\mathcal{Q}^{\text{SW}}. At each node \mathbf{u}\in\mathcal{T}, we store a field Y(\mathbf{u}), the maximum of y(Q) over all quadrants Q in the subtree rooted at \mathbf{u}. Clearly, \mathcal{T} can be constructed in \widetilde{O}(n) time, where n=|\mathcal{Q}^{\text{SW}}|, and can be dynamized with \widetilde{O}(1) update time to support insertions and deletions on \mathcal{Q}^{\text{SW}}; hence it is a basic data structure. Next, we consider how to compute \mathsf{ht}_{x}(U^{\text{SW}}) for a given x\in\mathbb{R} using \mathcal{T}. One can easily see that \mathsf{ht}_{x}(U^{\text{SW}})=\max\{y(Q):Q\in\mathcal{Q}^{\text{SW}}\text{ with }x(Q)\geq x\}. In other words, \mathsf{ht}_{x}(U^{\text{SW}}) is the maximum of y(Q) over all Q\in\mathcal{Q}^{\text{SW}} corresponding to the nodes in \mathcal{T} whose keys are at least x. Therefore, using the field Y(\mathbf{u}), \mathsf{ht}_{x}(U^{\text{SW}}) can be computed in O(\log n) time via a top-down walk in \mathcal{T}. The walk begins at the root of \mathcal{T}, and we set \mathsf{ht}=-\infty initially. If the key of the current node is smaller than x, then we go to its right child. Otherwise, we update \mathsf{ht} as \mathsf{ht}\leftarrow\max\{\mathsf{ht},y(Q),Y(\mathbf{r})\}, where Q\in\mathcal{Q}^{\text{SW}} is the quadrant corresponding to the current node and \mathbf{r} is the right child of the current node (if the current node has no right child, we update \mathsf{ht} as \mathsf{ht}\leftarrow\max\{\mathsf{ht},y(Q)\}), and go to the left child of the current node. When the walk ends at a leaf, the number \mathsf{ht} is equal to \mathsf{ht}_{x}(U^{\text{SW}}). The walk takes O(\log n) time, hence \mathsf{ht}_{x}(U^{\text{SW}}) can be computed in \widetilde{O}(1) time using \mathcal{T}. This completes the proof of the lemma. ∎
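As an illustration, here is a minimal static Python sketch of the query of Lemma 35 (the class name SWHeight and the representation of quadrants by their vertices are ours): since \mathsf{ht}_{x}(U^{\text{SW}}) is the maximum of y(Q) over all x(Q)\geq x, a sorted array of the keys together with suffix maxima answers a query by binary search; the tree \mathcal{T} in the proof is a dynamization of exactly this information.

import bisect

class SWHeight:
    def __init__(self, quadrants):                # quadrants: list of vertices (x(Q), y(Q))
        quadrants = sorted(quadrants)             # sort by x(Q)
        self.xs = [x for x, _ in quadrants]
        suffix = []
        best = float('-inf')
        for _, y in reversed(quadrants):
            best = max(best, y)
            suffix.append(best)
        self.suffix_max = suffix[::-1]            # suffix_max[i] = max y(Q) over xs[i:]

    def ht(self, x):
        # ht_x(U^SW) = max{ y(Q) : x(Q) >= x }, or -inf if no such quadrant
        i = bisect.bisect_left(self.xs, x)        # first index with x(Q) >= x
        return self.suffix_max[i] if i < len(self.xs) else float('-inf')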

With the above lemma in hand, we may now assume that the height functions \mathsf{ht}_{x}(U^{\text{SW}}) and \mathsf{ht}_{x}(U^{\text{SE}}) can be computed in \widetilde{O}(1) time for any x\in\mathbb{R}. In particular, we can test in \widetilde{O}(1) time whether U^{\text{SW}} is higher or lower than U^{\text{SE}} at any x\in\mathbb{R}. We then build a binary search tree \mathcal{T} on \mathcal{Q}^{\text{SW}}\cup\mathcal{Q}^{\text{SE}}, using x(Q) as the key of each quadrant Q\in\mathcal{Q}^{\text{SW}}\cup\mathcal{Q}^{\text{SE}}. Clearly, \mathcal{T} can be constructed in \widetilde{O}(n) time, where n=|\mathcal{Q}^{\text{SW}}\cup\mathcal{Q}^{\text{SE}}|, and can be dynamized with \widetilde{O}(1) update time; hence it is a basic data structure.

We observe that, using \mathcal{T}, we can determine in \widetilde{O}(1) time for any given number p\in\mathbb{R} which one of the following three is true: (1) p<x_{\sigma}, (2) p=x_{\sigma}, (3) p>x_{\sigma}. To see this, consider a given number p\in\mathbb{R}. We first search in \mathcal{T} to see whether p=x(Q) for some Q\in\mathcal{Q}^{\text{SW}}\cup\mathcal{Q}^{\text{SE}}. If not, we know p\neq x_{\sigma}, because x_{\sigma}=x(Q) for some Q\in\mathcal{Q}^{\text{SW}}\cup\mathcal{Q}^{\text{SE}} (fact 1 above). In this case, we can decide whether p<x_{\sigma} or p>x_{\sigma} by simply testing whether U^{\text{SW}} is higher or lower than U^{\text{SE}} at p: if U^{\text{SW}} is higher than U^{\text{SE}} at p, then p<x_{\sigma}, otherwise p>x_{\sigma}, because U^{\text{SW}} is higher (resp., lower) than U^{\text{SE}} at all x<x_{\sigma} (resp., x>x_{\sigma}) by fact 2. The remaining case is that p=x(Q) for some Q\in\mathcal{Q}^{\text{SW}}\cup\mathcal{Q}^{\text{SE}}. In this case, we also test whether U^{\text{SW}} is higher or lower than U^{\text{SE}} at p; by doing so, we can decide whether p\leq x_{\sigma} or p\geq x_{\sigma}. Suppose p\leq x_{\sigma} (the case p\geq x_{\sigma} is symmetric). To see whether p<x_{\sigma} or p=x_{\sigma}, we search in \mathcal{T} for the smallest key p^{\prime} that is larger than p, and let \tilde{p}\in\mathbb{R} be any number with p<\tilde{p}<p^{\prime}. If U^{\text{SW}} is higher than U^{\text{SE}} at \tilde{p}, then we have p<\tilde{p}\leq x_{\sigma}. If U^{\text{SW}} is lower than U^{\text{SE}} at \tilde{p}, then we know p\leq x_{\sigma}\leq\tilde{p}. Note that no number in (p,\tilde{p}] is equal to x(Q) for any Q\in\mathcal{Q}^{\text{SW}}\cup\mathcal{Q}^{\text{SE}}, due to the choice of p^{\prime} and the inequality p<\tilde{p}<p^{\prime}. Therefore, x_{\sigma}\notin(p,\tilde{p}], which implies p=x_{\sigma}. To summarize, we can determine in \widetilde{O}(1) time for any p\in\mathbb{R} which one of the following three is true: (1) p<x_{\sigma}, (2) p=x_{\sigma}, (3) p>x_{\sigma}.

This allows us to compute x_{\sigma} via binary search. Specifically, we perform a top-down walk from the root of \mathcal{T}. If the key of the current node is equal to x_{\sigma}, we are done. Otherwise, if the key of the current node is smaller (resp., larger) than x_{\sigma}, we go to its right (resp., left) child, because the keys in its left (resp., right) subtree are all smaller (resp., larger) than x_{\sigma}. The walk is guaranteed to reach a node whose key is x_{\sigma}, since x_{\sigma}=x(Q) for some Q\in\mathcal{Q}^{\text{SW}}\cup\mathcal{Q}^{\text{SE}}, i.e., x_{\sigma} is the key of some node in \mathcal{T}. In this way, we can compute x_{\sigma} in \widetilde{O}(1) time. As observed before, the coordinates of \sigma are (x_{\sigma},\min\{\mathsf{ht}_{x_{\sigma}}(U^{\text{SW}}),\mathsf{ht}_{x_{\sigma}}(U^{\text{SE}})\}). Thus, once we know x_{\sigma}, it suffices to compute \mathsf{ht}_{x_{\sigma}}(U^{\text{SW}}) and \mathsf{ht}_{x_{\sigma}}(U^{\text{SE}}), which takes \widetilde{O}(1) time by Lemma 35. We conclude that \sigma can be computed in \widetilde{O}(1) time (by properly storing \mathcal{Q}^{\text{SW}} and \mathcal{Q}^{\text{SE}} in basic data structures).
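The search just described can be summarized by the following Python sketch, assuming height oracles ht_sw and ht_se as in Lemma 35 and the sorted list of keys (all names ours); classify implements the trichotomy test and find_x_sigma the binary search.

def classify(keys, i, ht_sw, ht_se):
    # Decide whether p = keys[i] satisfies p < x_sigma (-1), p == x_sigma (0),
    # or p > x_sigma (+1), probing a point strictly between consecutive keys.
    p = keys[i]
    if ht_sw(p) >= ht_se(p):                      # U^SW higher at p, hence p <= x_sigma
        nxt = keys[i + 1] if i + 1 < len(keys) else p + 1.0
        probe = (p + nxt) / 2.0                   # some p~ with p < p~ < next key
        # higher at p~ forces p < p~ <= x_sigma; lower at p~ leaves no key in (p, p~],
        # hence x_sigma = p
        return -1 if ht_sw(probe) >= ht_se(probe) else 0
    else:                                         # U^SW lower at p, hence p >= x_sigma
        prv = keys[i - 1] if i > 0 else p - 1.0
        probe = (prv + p) / 2.0
        return 1 if ht_sw(probe) < ht_se(probe) else 0

def find_x_sigma(keys, ht_sw, ht_se):
    # keys: sorted x(Q)-values over Q in Q^SW and Q^SE; by fact 1, x_sigma is one of them
    lo, hi = 0, len(keys) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        c = classify(keys, mid, ht_sw, ht_se)
        if c == 0:
            return keys[mid]
        lo, hi = (mid + 1, hi) if c < 0 else (lo, mid - 1)
    return None                                   # unreachable when x_sigma exists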

Testing whether a\in U^{\text{NE}} and a\in U^{\text{NW}}.

It suffices to consider how to test whether a\in U^{\text{NE}}. We store \mathcal{Q}^{\text{NE}} in a binary search tree \mathcal{T}, using y(Q) as the key of each quadrant Q\in\mathcal{Q}^{\text{NE}}. We augment each node \mathbf{u}\in\mathcal{T} with a field which stores the leftmost quadrant in the subtree rooted at \mathbf{u}. Clearly, \mathcal{T} can be constructed in \widetilde{O}(n) time, where n=|\mathcal{Q}^{\text{NE}}|, and can be dynamized with \widetilde{O}(1) update time; hence it is a basic data structure. Given a point a\in\mathbb{R}^{2}, we first look for the leftmost quadrant Q in \mathcal{T} whose key is smaller than or equal to the y-coordinate of a. With the augmented fields, Q can be found in \widetilde{O}(1) time. If a\in Q, then we know a\in U^{\text{NE}}. Otherwise, we claim that a\notin U^{\text{NE}}. Indeed, a quadrant in \mathcal{T} contains a only if its key is smaller than or equal to the y-coordinate of a. Since Q is the leftmost one among such quadrants and a\notin Q, we know that a is not contained in any quadrant in \mathcal{T}, i.e., a\notin U^{\text{NE}}. We conclude that testing whether a\in U^{\text{NE}} and a\in U^{\text{NW}} for a given point a\in\mathbb{R}^{2} can be done in \widetilde{O}(1) time (by properly storing \mathcal{Q}^{\text{NE}} and \mathcal{Q}^{\text{NW}} in basic data structures).
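A static Python sketch of this test (class name ours; northeast quadrants \{(x,y):x\geq x(Q),y\geq y(Q)\} represented by their vertices): the leftmost quadrant with key at most y_{a} is exposed by prefix minima over the keys sorted by y(Q), which is what the augmented tree \mathcal{T} maintains dynamically.

import bisect

class NEMembership:
    def __init__(self, quadrants):                # quadrants: list of vertices (x(Q), y(Q))
        qs = sorted(quadrants, key=lambda q: q[1])
        self.ys = [y for _, y in qs]
        self.prefix_min_x = []
        best = float('inf')
        for x, _ in qs:
            best = min(best, x)
            self.prefix_min_x.append(best)        # min x(Q) over the keys ys[0..i]

    def contains(self, a):
        # a in U^NE iff some Q has y(Q) <= y_a and x(Q) <= x_a, i.e., iff the
        # leftmost quadrant with key <= y_a reaches a
        xa, ya = a
        i = bisect.bisect_right(self.ys, ya) - 1  # last index with y(Q) <= y_a
        return i >= 0 and self.prefix_min_x[i] <= xa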

Computing \varPhi.

Recall that for a point a\in\mathbb{R}^{2} and a collection \mathcal{P} of quadrants, \varPhi_{\rightarrow}(a,\mathcal{P}) and \varPhi_{\uparrow}(a,\mathcal{P}) denote the rightmost and topmost quadrants in \mathcal{P} that contain a, respectively. We want to compute \varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{SW}}), \varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}), \varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}), and \varPhi_{\uparrow}(a,\mathcal{Q}^{\text{NE}}) in \widetilde{O}(1) time for a given point a\in\mathbb{R}^{2}, using basic data structures. Here we only consider how to compute \varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{SW}}); the other three can be computed in the same way. We store \mathcal{Q}^{\text{SW}} in a binary search tree \mathcal{T}, using y(Q) as the key of each quadrant Q\in\mathcal{Q}^{\text{SW}}. At each node \mathbf{u}\in\mathcal{T}, we store a field that is the rightmost quadrant in the subtree rooted at \mathbf{u}. Clearly, \mathcal{T} can be constructed in \widetilde{O}(n) time, where n=|\mathcal{Q}^{\text{SW}}|, and can be dynamized with \widetilde{O}(1) update time; hence it is a basic data structure. Given a point a\in\mathbb{R}^{2}, we first look for the rightmost quadrant Q in \mathcal{T} whose key is greater than or equal to the y-coordinate y_{a} of a. With the augmented fields, Q can be found in \widetilde{O}(1) time. If Q contains a, then Q is the rightmost quadrant in \mathcal{T} (i.e., in \mathcal{Q}^{\text{SW}}) that contains a, because any quadrant Q^{\prime}\in\mathcal{Q}^{\text{SW}} that contains a must satisfy y(Q^{\prime})\geq y_{a}. Otherwise, no quadrant in \mathcal{Q}^{\text{SW}} contains a. We conclude that computing \varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{SW}}), \varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{NW}}), \varPhi_{\uparrow}(a,\mathcal{Q}^{\text{SE}}), and \varPhi_{\uparrow}(a,\mathcal{Q}^{\text{NE}}) for a given point a\in\mathbb{R}^{2} can be done in \widetilde{O}(1) time (by properly storing \mathcal{Q}^{\text{SW}}, \mathcal{Q}^{\text{NW}}, \mathcal{Q}^{\text{SE}}, \mathcal{Q}^{\text{NE}} in basic data structures).
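A static Python sketch for \varPhi_{\rightarrow}(a,\mathcal{Q}^{\text{SW}}) (class name ours; southwest quadrants \{(x,y):x\leq x(Q),y\leq y(Q)\} represented by their vertices), mirroring the augmented tree with suffix maxima over the keys sorted by y(Q):

import bisect

class SWRightmost:
    def __init__(self, quadrants):                 # quadrants: list of vertices (x(Q), y(Q))
        qs = sorted(quadrants, key=lambda q: q[1])
        self.ys = [y for _, y in qs]
        self.suffix = [None] * len(qs)             # suffix[i] = vertex with max x among qs[i:]
        best = None
        for i in range(len(qs) - 1, -1, -1):
            if best is None or qs[i][0] > best[0]:
                best = qs[i]
            self.suffix[i] = best

    def query(self, a):
        # returns the rightmost quadrant of Q^SW containing a, or None if there is none
        xa, ya = a
        i = bisect.bisect_left(self.ys, ya)        # first index with y(Q) >= y_a
        if i == len(self.ys):
            return None
        q = self.suffix[i]
        return q if q[0] >= xa else None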

Computing \phi(\tilde{y}).

Recall that for a number \tilde{y}\in\mathbb{R}, \phi(\tilde{y}) is the leftmost point in S\cap U^{\text{SE}} whose y-coordinate is greater than \tilde{y}. We want to store S and \mathcal{Q}^{\text{SE}} in some basic data structure such that \phi(\tilde{y}) can be computed in \widetilde{O}(1) time for any given \tilde{y}\in\mathbb{R}. For simplicity of exposition, let us make a general-position assumption: the points in S and the vertices of the quadrants in \mathcal{Q}^{\text{SE}} have distinct y-coordinates. The first thing we need is a data structure that supports the so-called 3-sided range-minimum query. A 3-sided range-minimum query on a set of weighted points in \mathbb{R}^{2} asks for the lightest point (i.e., the point with the smallest weight) contained in a given 3-sided query rectangle R=[x_{0},\infty)\times[y_{1},y_{2}].

Lemma 36.

There exists a basic data structure that supports 3-sided range-minimum queries in \widetilde{O}(1) time.

Proof.

A standard range tree can answer static 3-sided range-minimum queries in \widetilde{O}(1) time and can be constructed in \widetilde{O}(n) time, where n is the size of the dataset. Since range-minimum queries are decomposable, the approach of [4] can be applied to dynamize the static data structure with \widetilde{O}(1) update time, paying an extra logarithmic factor in the query time. This gives us a basic data structure that supports 3-sided range-minimum queries in \widetilde{O}(1) time. ∎
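A minimal static Python sketch of such a structure (class name ours), specialized to the use below where each point's weight is its x-coordinate, so a query with R=[x_{0},\infty)\times[y_{1},y_{2}] returns the leftmost point of the set in R: a segment tree over the points sorted by y, each node storing its points sorted by x. The dynamization of [4] is omitted.

import bisect

class ThreeSidedLeftmost:
    def __init__(self, points):                        # points: list of (x, y)
        self.pts = sorted(points, key=lambda p: p[1])  # leaves ordered by y
        self.ys = [y for _, y in self.pts]
        self.n = n = len(self.pts)
        self.tree = [[] for _ in range(2 * n)]
        for i, p in enumerate(self.pts):
            self.tree[n + i] = [p]
        for i in range(n - 1, 0, -1):                  # each node: its points sorted by x
            self.tree[i] = sorted(self.tree[2 * i] + self.tree[2 * i + 1])

    def query(self, x0, y1, y2):
        # returns the leftmost point in [x0, inf) x [y1, y2], or None
        lo = bisect.bisect_left(self.ys, y1) + self.n
        hi = bisect.bisect_right(self.ys, y2) + self.n
        best = None
        while lo < hi:                                 # standard iterative segment-tree walk
            if lo & 1:
                best = self._pick(self.tree[lo], x0, best)
                lo += 1
            if hi & 1:
                hi -= 1
                best = self._pick(self.tree[hi], x0, best)
            lo //= 2
            hi //= 2
        return best

    @staticmethod
    def _pick(pts, x0, best):
        i = bisect.bisect_left(pts, (x0, float('-inf')))   # first point with x >= x0
        if i < len(pts) and (best is None or pts[i] < best):
            return pts[i]
        return best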

We store S in the basic 3-sided range-minimum data structure \mathcal{A} of the above lemma, setting the weight of each point in S to be its x-coordinate. Besides \mathcal{A}, we need a (1D) range tree \mathcal{T} built on S\cup V(\mathcal{Q}^{\text{SE}}) for y-coordinates, where V(\mathcal{Q}^{\text{SE}}) is the set of the vertices of the quadrants in \mathcal{Q}^{\text{SE}}. By the definition of a range tree, the points in S\cup V(\mathcal{Q}^{\text{SE}}) correspond one-to-one to the leaves of \mathcal{T}, where the left-right order of the leaves corresponds to the increasing order of the y-coordinates of the points. The canonical subset of each node \mathbf{u}\in\mathcal{T} refers to the subset of S\cup V(\mathcal{Q}^{\text{SE}}) consisting of the points stored in the subtree rooted at \mathbf{u}. For a node \mathbf{u}\in\mathcal{T}, we denote by S(\mathbf{u}) the set of the points in S that are contained in the canonical subset of \mathbf{u}, and denote by \mathcal{Q}(\mathbf{u}) the collection of the quadrants in \mathcal{Q}^{\text{SE}} whose vertices are contained in the canonical subset of \mathbf{u}. Also, we write U(\mathbf{u})=\bigcup_{Q\in\mathcal{Q}(\mathbf{u})}Q. At each node \mathbf{u}\in\mathcal{T}, we store the following four fields.

  • y^{-}(\mathbf{u}): the y-coordinate of the bottommost point in the canonical subset of \mathbf{u}.

  • y^{+}(\mathbf{u}): the y-coordinate of the topmost point in the canonical subset of \mathbf{u}.

  • L(\mathbf{u}): the leftmost quadrant in \mathcal{Q}(\mathbf{u}).

  • a(\mathbf{u}): the leftmost point in S(\mathbf{u}) that is contained in U(\mathbf{u}).

Let \mathbf{u}\in\mathcal{T} be a node and \mathbf{l},\mathbf{r} be its left and right children, respectively. It is clear that y^{-}(\mathbf{u}), y^{+}(\mathbf{u}), L(\mathbf{u}) can be computed in O(1) time knowing the y^{-}(\cdot), y^{+}(\cdot), and L(\cdot) fields of \mathbf{l} and \mathbf{r}. We claim that a(\mathbf{u}) can be computed in \widetilde{O}(1) time based on the information stored at \mathbf{u}, \mathbf{l}, \mathbf{r}, and the 3-sided range-minimum data structure \mathcal{A}. By definition, a(\mathbf{u}) is the leftmost point in S(\mathbf{u}) that is contained in U(\mathbf{u}). Since S(\mathbf{u})=S(\mathbf{l})\cup S(\mathbf{r}), it suffices to compute the leftmost point in S(\mathbf{l}) contained in U(\mathbf{u}) and the leftmost point in S(\mathbf{r}) contained in U(\mathbf{u}). Note that no point in S(\mathbf{r}) is contained in U(\mathbf{l}), since any quadrant in \mathcal{Q}(\mathbf{l}) is “below” any point in S(\mathbf{r}). It follows that the leftmost point in S(\mathbf{r}) contained in U(\mathbf{u}) is just a(\mathbf{r}). To compute the leftmost point in S(\mathbf{l}) contained in U(\mathbf{u}), we only need to compute the leftmost point in S(\mathbf{l}) contained in U(\mathbf{l}) and the leftmost point in S(\mathbf{l}) contained in U(\mathbf{r}), because U(\mathbf{u})=U(\mathbf{l})\cup U(\mathbf{r}). The leftmost point in S(\mathbf{l}) contained in U(\mathbf{l}) is just a(\mathbf{l}). To compute the leftmost point in S(\mathbf{l}) contained in U(\mathbf{r}), we observe that a point in S(\mathbf{l}) is contained in U(\mathbf{r}) iff it is contained in the quadrant L(\mathbf{r}). Thus, the leftmost point in S(\mathbf{l}) contained in U(\mathbf{r}) is just the leftmost point in S(\mathbf{l}) contained in L(\mathbf{r}). Note that S(\mathbf{l}) is exactly the set of the points in S that lie in the strip P=\mathbb{R}\times[y^{-}(\mathbf{l}),y^{+}(\mathbf{l})], i.e., S(\mathbf{l})=S\cap P. Hence, we can query the data structure \mathcal{A} with the 3-sided rectangle P\cap L(\mathbf{r}) to obtain the leftmost point in S(\mathbf{l}) contained in L(\mathbf{r}), i.e., the leftmost point in S(\mathbf{l}) contained in U(\mathbf{r}), which takes \widetilde{O}(1) time.

We conclude that, using the 3-sided range-minimum data structure \mathcal{A}, all the fields of a node \mathbf{u} can be computed in \widetilde{O}(1) time based on the information stored at \mathbf{u} and its children (see the sketch of the merge step below). Therefore, the range tree \mathcal{T} with the augmented fields can be dynamized with \widetilde{O}(1) update time using the standard technique for dynamizing augmented trees [12]. The construction time of \mathcal{T} is clearly \widetilde{O}(n), where n=|S|+|\mathcal{Q}^{\text{SE}}|; thus \mathcal{T} is a basic data structure.
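The merge step for a(\mathbf{u}) just described can be sketched in Python as follows; the child records l and r carry the four fields (as hypothetical attributes y_minus, y_plus, L, a), southeast quadrants \{(x,y):x\geq x(Q),y\leq y(Q)\} are represented by their vertices, and A is a 3-sided structure as above returning the leftmost point in [x_{0},\infty)\times[y_{1},y_{2}] or None.

def merge_a(l, r, A):
    # a(u) = leftmost point of S(u) = S(l) ∪ S(r) contained in U(u) = U(l) ∪ U(r)
    cands = [p for p in (l.a, r.a) if p is not None]    # a(l) and a(r) cover two subcases
    if r.L is not None:
        xL, yL = r.L                                    # vertex of the leftmost quadrant L(r)
        # leftmost point of S(l) in U(r) = leftmost point of S(l) in L(r); since
        # S(l) = S ∩ P with P = R x [y^-(l), y^+(l)], query A with P ∩ L(r)
        p = A.query(xL, l.y_minus, min(l.y_plus, yL))
        if p is not None:
            cands.append(p)
    return min(cands, key=lambda p: p[0], default=None)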

Next, we consider how to use \mathcal{T} to compute \phi(\tilde{y}) in \widetilde{O}(1) time for a given \tilde{y}\in\mathbb{R}. We first find the t=O(\log n) canonical nodes \mathbf{u}_{1},\dots,\mathbf{u}_{t}\in\mathcal{T} corresponding to the range [\tilde{y},\infty). Suppose \mathbf{u}_{1},\dots,\mathbf{u}_{t} are sorted from left to right in \mathcal{T}. By the property of canonical nodes, the canonical subsets of \mathbf{u}_{1},\dots,\mathbf{u}_{t} are disjoint and their union is the subset of S\cup V(\mathcal{Q}^{\text{SE}}) consisting of the points whose y-coordinates are in the range [\tilde{y},\infty). The point \phi(\tilde{y}) we are looking for is just the leftmost point in \bigcup_{i=1}^{t}S(\mathbf{u}_{i}) that is contained in U^{\text{SE}}. Note that \phi(\tilde{y}) is not contained in any southeast quadrant Q with y(Q)<\tilde{y}. Thus, \phi(\tilde{y}) is the leftmost point in \bigcup_{i=1}^{t}S(\mathbf{u}_{i}) that is contained in \bigcup_{i=1}^{t}U(\mathbf{u}_{i}). To compute \phi(\tilde{y}), it suffices to know the leftmost point in S(\mathbf{u}_{i}) that is contained in U(\mathbf{u}_{j}) for all i,j\in\{1,\dots,t\}. If i>j, then no point in S(\mathbf{u}_{i}) is contained in U(\mathbf{u}_{j}). If i=j, then the leftmost point in S(\mathbf{u}_{i}) contained in U(\mathbf{u}_{j}) is just a(\mathbf{u}_{i})=a(\mathbf{u}_{j}). If i<j, then a point in S(\mathbf{u}_{i}) is contained in U(\mathbf{u}_{j}) iff it is contained in the quadrant L(\mathbf{u}_{j}). Note that S(\mathbf{u}_{i}) is exactly the set of the points in S that lie in the strip P_{i}=\mathbb{R}\times[y^{-}(\mathbf{u}_{i}),y^{+}(\mathbf{u}_{i})], i.e., S(\mathbf{u}_{i})=S\cap P_{i}. Therefore, the leftmost point in S(\mathbf{u}_{i}) that is contained in L(\mathbf{u}_{j}) can be computed in \widetilde{O}(1) time by querying the data structure \mathcal{A} with the 3-sided rectangle P_{i}\cap L(\mathbf{u}_{j}).

To summarize, the leftmost point in S(\mathbf{u}_{i}) that is contained in U(\mathbf{u}_{j}) can be computed in \widetilde{O}(1) time for any i,j\in\{1,\dots,t\}. Since t=O(\log n), computing these points for all i,j\in\{1,\dots,t\} takes \widetilde{O}(1) time. Once we have these points, the leftmost one among them is just the leftmost point in \bigcup_{i=1}^{t}S(\mathbf{u}_{i}) that is contained in \bigcup_{i=1}^{t}U(\mathbf{u}_{i}), i.e., the point \phi(\tilde{y}). We conclude that \phi(\tilde{y}) can be computed in \widetilde{O}(1) time (by properly storing S and \mathcal{Q}^{\text{SE}} in some basic data structure).
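Finally, the query itself can be sketched as follows (names as in the previous sketch; canonical_nodes are \mathbf{u}_{1},\dots,\mathbf{u}_{t} from left to right). The double loop issues O(\log^{2}n) 3-sided queries, i.e., \widetilde{O}(1) time in total.

def phi(canonical_nodes, A):
    best = None
    for i, u in enumerate(canonical_nodes):
        cands = [u.a]                                   # case i == j
        for v in canonical_nodes[i + 1:]:               # case i < j: membership in U(u_j)
            if v.L is not None:                         # reduces to membership in L(u_j)
                xL, yL = v.L
                cands.append(A.query(xL, u.y_minus, min(u.y_plus, yL)))
        for p in cands:
            if p is not None and (best is None or p[0] < best[0]):
                best = p
    return best                                         # the point phi(y~), or None if none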