
Locally Defined Independence Systems on Graphs

Yuki Amano [email protected]
Abstract

The maximization problem for independence systems defined on graphs is a generalization of combinatorial optimization problems such as the maximum $b$-matching, the unweighted MAX-SAT, the matchoid, and the maximum timed matching problems. In this paper, we consider the problem under the local oracle model in order to investigate the global approximability of the problem by using the local approximability. We first analyze two simple algorithms, FixedOrder and Greedy, for the maximization under this model, and show that neither of them has a constant approximation ratio. Here algorithms FixedOrder and Greedy apply local oracles with fixed and greedy orders of vertices, respectively. We then propose two approximation algorithms for $k$-degenerate graphs, whose approximation ratios are $\alpha+2k-2$ and $\alpha k$, where $\alpha$ is the approximation ratio of the local oracles. The second one can be generalized to the hypergraph setting. We also propose an $(\alpha+k)$-approximation algorithm for bipartite graphs in which the local independence systems on one side of the vertices are $k$-systems with independence oracles.

keywords:
Independence system, Approximation algorithm, Local oracle model, Degeneracy

Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606-8502, Japan

1 Introduction

The maximization problem for independence systems is one of the most fundamental combinatorial optimization problems [1, 2, 3]. An independence system is a pair $(E,\mathcal{I})$ of a finite set $E$ and a family $\mathcal{I}\subseteq 2^{E}$ that satisfies

$\mathcal{I}$ contains the empty set, i.e., $\emptyset\in\mathcal{I}$, and (1)
$J\in\mathcal{I}$ implies $I\in\mathcal{I}$ for any $I\subseteq J\subseteq E$. (2)

Here a member II in \mathcal{I} is called an independent set. The property (2) means that \mathcal{I} is downward closed. The maximization problem for an independence system is to find an independent set with the maximum cardinality. This problem includes, as a special case, the maximum independent set of a graph, the maximum matching, the maximum set packing and the matroid (intersection) problems [4, 1, 3].
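
To make the definitions concrete, the following Python sketch (illustrative only, not part of the formal development; the explicit family and the brute-force search are our own assumptions) tests membership in an explicitly listed independence system and solves the maximization problem by exhaustive search.

from itertools import combinations

def max_independent_set(ground_set, family):
    """Brute-force maximization over an explicitly listed independence system:
    return a largest member of the family (exponential time, for illustration only)."""
    members = {frozenset(I) for I in family}
    for r in range(len(ground_set), 0, -1):          # try sizes from large to small
        for cand in combinations(ground_set, r):
            if frozenset(cand) in members:
                return set(cand)
    return set()                                      # the empty set is always independent

# Toy example: matchings of the path a-b-c with edges e1=(a,b) and e2=(b,c).
E = ["e1", "e2"]
family = [set(), {"e1"}, {"e2"}]                      # contains the empty set, downward closed
print(max_independent_set(E, family))                 # {'e1'}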

In this paper, we consider the following independence systems defined on graphs. Let $G=(V,E)$ be a graph with a vertex set $V$ and an edge set $E$. For a vertex $v$ in $V$, let $E_{v}$ denote the set of edges incident to $v$. In our problem setting, each vertex $v$ has a local independence system $(E_{v},\mathcal{I}_{v})$, i.e., $\mathcal{I}_{v}\subseteq 2^{E_{v}}$, and we consider the independence system $(E,\mathcal{I})$ defined by

$\mathcal{I}=\{I\subseteq E\mid I\cap E_{v}\in\mathcal{I}_{v}\ \text{for all}\ v\in V\}.$ (3)

Namely, (E,)(E,\mathcal{I}) is obtained by concatenating local independence systems (Ev,v)(E_{v},\mathcal{I}_{v}), and is called an independence system defined on a graph GG. We assume without loss of generality that {e}\{e\}\in\mathcal{I} holds for any eEe\in E, since otherwise the underlying graph GG can be replaced by G=(V,E{e})G^{\prime}=(V,E\setminus\{e\}). In this paper, we consider the maximization problem for it, i.e., for a given graph G=(V,E)G=(V,E) with local independence systems (Ev,v)(E_{v},\mathcal{I}_{v}), our problem is described as

maximize $|I|$
subject to $I\cap E_{v}\in\mathcal{I}_{v}$ for all $v\in V$,
$I\subseteq E$. (4)

Note that any independence system (E,)(E,\mathcal{I}) is viewed as an independence system defined on a star. As an example of the problem (4), for b:V+b:V\to\mathbb{Z}_{+}, let v={IEv|I|b(v)}\mathcal{I}_{v}=\{I\subseteq E_{v}\mid|I|\leq b(v)\}, i.e., v\mathcal{I}_{v} is b(v)b(v)-bounded for every vv in VV. Then (3) denotes the family of bb-matchings in GG. In particular, if b1b\equiv 1, it corresponds to the family of matchings in GG [5]. More generally, if (Ev,v)(E_{v},\mathcal{I}_{v}) is a matroid for every vv in VV, then the family (3) is a so-called matchoid [6].
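
The following sketch (our own illustration, assuming edges are encoded as 2-element frozensets) encodes the family (3): a set of edges is globally independent exactly when every local restriction is independent, and the $b$-matching case simply bounds the number of selected edges at each vertex.

def incident_edges(v, edges):
    """E_v: the edges incident to vertex v (edges are 2-element frozensets)."""
    return {e for e in edges if v in e}

def is_globally_independent(I, vertices, edges, local_test):
    """Membership test for the family (3): I is independent iff I ∩ E_v is
    locally independent at every vertex v."""
    return all(local_test(v, I & incident_edges(v, edges)) for v in vertices)

# b-matching: the local system of v accepts at most b(v) incident edges.
def b_matching_local_test(b):
    return lambda v, local_set: len(local_set) <= b[v]

vertices = {"u", "v", "w"}
edges = {frozenset(p) for p in [("u", "v"), ("v", "w"), ("u", "w")]}
b = {"u": 1, "v": 2, "w": 1}
I = {frozenset(("u", "v")), frozenset(("v", "w"))}
print(is_globally_independent(I, vertices, edges, b_matching_local_test(b)))  # True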

The maximum bb-matching and matchoid problems are well-studied combinatorial optimization problems [5, 6]. It is known that the maximum matchoid problem is NP-hard [7] and it is 3/23/2-approximable [8, 9].

The unweighted maximum satisfiability problem (MAX-SAT) is another example of the problem (4). The unweighted MAX-SAT is the problem to find a variable assignment that maximizes the number of the satisfied clauses in a given conjunctive normal form (CNF). For a variable set XX, let φ=cCc\varphi=\bigwedge_{c\in C}c be a CNF, where CC denotes a set of clauses with variables in XX. Define a bipartite graph G=(V=XC,E)G=(V=X\cup C,E) and local independence systems (Ev,v)(E_{v},\mathcal{I}_{v}) by

$E=\{(x,c)\in X\times C\mid x\ \text{is contained in}\ c\}$
$\mathcal{I}_{v}=\begin{cases}\{I\subseteq E_{v}\mid\text{either}\ I\subseteq C_{+}(v)\ \text{or}\ I\subseteq C_{-}(v)\}&\text{if}\ v\in X\\ \{I\subseteq E_{v}\mid|I|\leq 1\}&\text{if}\ v\in C,\end{cases}$

where, for a variable $x\in X$, $C_{+}(x)$ and $C_{-}(x)$ respectively denote the sets of clauses in $C$ that contain the literals $x$ and $\overline{x}$. Therefore the unweighted MAX-SAT is an example of the problem (4). As is well known, the unweighted MAX-SAT is NP-hard and approximable within a factor of $1.255$ [10].
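
The reduction can be sketched as follows (illustrative code; the clause encoding as a dict from variables to literal signs is our own assumption).

def maxsat_edges(clauses):
    """Edges (x, c) of the reduction: variable x occurs in clause number c."""
    return {(x, c) for c, clause in enumerate(clauses) for x in clause}

def is_independent_maxsat(I, clauses):
    """Membership test induced by the local systems of the reduction."""
    # Clause side: at most one chosen edge (satisfied literal) per clause.
    for c in range(len(clauses)):
        if sum(1 for (_, cc) in I if cc == c) > 1:
            return False
    # Variable side: chosen edges of x must all lie in C_+(x) or all in C_-(x).
    for x in {x for clause in clauses for x in clause}:
        signs = {clauses[c][x] for (xx, c) in I if xx == x}
        if len(signs) > 1:
            return False
    return True

# phi = (x1 or not x2) and (x2)
clauses = [{"x1": True, "x2": False}, {"x2": True}]
print(maxsat_edges(clauses))                                   # the three edges of the instance
print(is_independent_maxsat({("x1", 0), ("x2", 1)}, clauses))  # True: both clauses satisfied
print(is_independent_maxsat({("x2", 0), ("x2", 1)}, clauses))  # False: x2 would need both signs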

(Edge-)temporal graphs [11] were introduced to model dynamic network topologies in which the edge set varies with time. Namely, a temporal graph is a graph $G=(V,E)$ with given time labels $L_{e}\subseteq T$ for all $e$ in $E$, where $T$ is a given finite set of time labels. For a temporal graph, a subset $M$ of $E$ is a timed matching if $L_{e}$ and $L_{f}$ are disjoint for any adjacent pair of edges $e$ and $f$ in $M$. By defining local independence systems $(E_{v},\mathcal{I}_{v})$ by $\mathcal{I}_{v}=\{I\subseteq E_{v}\mid L_{e}\cap L_{f}=\emptyset\ \text{for all distinct}\ e,f\in I\}$, the maximum timed matching problem can be formulated as the problem (4). It is known that the maximum timed matching problem is NP-hard even if the given graph $G$ is bipartite, and is solvable in polynomial time if $G$ is a tree and every $L_{e}$ is represented as an interval for a given order of time labels [12].
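
For illustration, a direct membership test for timed matchings under this definition may be sketched as follows (the encodings of edges and labels are our own assumptions).

def is_timed_matching(M, labels):
    """M is a timed matching iff any two adjacent edges of M have disjoint time labels."""
    edges = list(M)
    for i, e in enumerate(edges):
        for f in edges[i + 1:]:
            if e & f and not labels[e].isdisjoint(labels[f]):   # e & f: shared endpoint
                return False
    return True

e1, e2, e3 = frozenset(("u", "v")), frozenset(("v", "w")), frozenset(("u", "w"))
labels = {e1: {1, 2}, e2: {3}, e3: {2, 3}}
print(is_timed_matching({e1, e2}, labels))  # True: shared vertex v, but {1,2} and {3} are disjoint
print(is_timed_matching({e2, e3}, labels))  # False: shared vertex w, labels {3} and {2,3} intersect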

In this paper, we consider the problem (4) by making use of local oracles $\mathcal{A}_{v}$ for each $v$ in $V$. The oracle $\mathcal{A}_{v}$ computes an $\alpha$-approximate independent set $I$ of $(F,\mathcal{I}_{v}[F])$ for a given set $F\subseteq E_{v}$, where $\mathcal{I}_{v}[F]$ is the restriction of $\mathcal{I}_{v}$ to $F$, i.e., $\mathcal{I}_{v}[F]=\{I\cap F\mid I\in\mathcal{I}_{v}\}$. That is, the oracle $\mathcal{A}_{v}:2^{E_{v}}\to 2^{E_{v}}$ satisfies

$\mathcal{A}_{v}(F)\in\mathcal{I}_{v}[F]$ (5)
$\alpha\,|\mathcal{A}_{v}(F)|\geq\max_{J\in\mathcal{I}_{v}[F]}|J|.$ (6)

We call $\mathcal{A}_{v}$ an $\alpha$-approximation local oracle. It is also called an exact local oracle if $\alpha=1$. In this paper, we assume the monotonicity of $\mathcal{A}_{v}$, i.e., $|\mathcal{A}_{v}(S)|\leq|\mathcal{A}_{v}(T)|$ holds for any subsets $S\subseteq T\subseteq E_{v}$, which is a natural assumption on the oracle since it deals with an independence system. We study this oracle model to investigate the global approximability of the problem (4) by using the local approximability. The oracle model was used in [13] for the maximum coverage problem with group constraints, where the oracle model is regarded as a generalization of greedy algorithms. It is shown there that the maximum coverage problem is $(\alpha+1)$-approximable in this oracle model. We remark here that a greedy algorithm for the maximum cardinality matching problem can be viewed as a $2$-approximation algorithm under the exact local oracle model.
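
As a concrete, deliberately naive instance of this model, the sketch below builds an exact local oracle ($\alpha=1$) from an explicitly listed local family; it is monotone because enlarging $F$ can only enlarge the optimum of the restriction.

def exact_local_oracle(local_family):
    """Exact (alpha = 1) local oracle for a local system given as an explicit family:
    on input F ⊆ E_v it returns a maximum member of the restriction
    I_v[F] = { I ∩ F : I in I_v }."""
    members = [frozenset(I) for I in local_family]

    def oracle(F):
        F = frozenset(F)
        return max((I & F for I in members), key=len)

    return oracle

# Local system of a vertex with incident edges {a, b, c}: a and b may be combined,
# while c can only be taken alone.
A_v = exact_local_oracle([set(), {"a"}, {"b"}, {"c"}, {"a", "b"}])
print(A_v({"a", "c"}))        # frozenset({'a'}) -- an optimal answer of the restriction
print(A_v({"a", "b", "c"}))   # frozenset({'a', 'b'})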

For the above reductions of the maximum matchoid problem and the unweighted MAX-SAT, their local maximization problems for (Ev,v)(E_{v},\mathcal{I}_{v}) can be solved exactly, while their global maximization problems are NP-hard. This means that the problem (4) seems to be intractable, even if exact local oracles are given.

In this paper, we first propose two natural algorithms for the problem (4), where the first one applies local oracles 𝒜v\mathcal{A}_{v} in the order of the vertices vv that is fixed in advance, while the second one applies local oracles in the greedy order of vertices v1,,vnv_{1},\dots,v_{n}, where n=|V|n=|V| and

viargmax{|𝒜v(EvF(i))|vV{v1,,vi1}}\displaystyle v_{i}\in\arg\max\{|\mathcal{A}_{v}(E_{v}\cap F^{(i)})|\mid v\in V\setminus\{v_{1},\dots,v_{i-1}\}\} fori=1,,n.\displaystyle\text{for}\ i=1,\dots,n.

Here the subset F(i)EF^{(i)}\subseteq E is a set of available edges during the ii-th iteration.

We show that the first algorithm guarantees an approximation ratio (α+n2)(\alpha+n-2), and the second algorithm guarantees an approximation ratio ρ(α,n)\rho(\alpha,n), where ρ\rho is the function of α\alpha and nn defined as

α+2α12α(n1)12\displaystyle\alpha+\frac{2\alpha-1}{2\alpha}(n-1)-\frac{1}{2}\quad if(α1)(n1)α(α+1)\displaystyle\text{if}\ (\alpha-1)(n-1)\geq\alpha(\alpha+1)
α+αα+1(n1)\displaystyle\alpha+\frac{\alpha}{\alpha+1}(n-1)\quad ifα(α1)(n1)<α(α+1)\displaystyle\text{if}\ \alpha\leq(\alpha-1)(n-1)<\alpha(\alpha+1)
n2\displaystyle\frac{n}{2}\quad if(α1)(n1)<α.\displaystyle\text{if}\ (\alpha-1)(n-1)<\alpha.

We also show that both of approximation ratios are almost tight for these algorithms.

We then consider two subclasses of the problem (4). We provide two approximation algorithms for $k$-degenerate graphs, whose approximation ratios are $\alpha+2k-2$ and $\alpha k$. Here, a graph is $k$-degenerate if every subgraph has a vertex of degree at most $k$. This implies, for example, that the algorithms find an $\alpha$-approximate independent set for the problem if a given graph is a tree. This is best possible, because the local maximization problem is not approximable within any factor $c<\alpha$. We also show that the second algorithm can be generalized to the hypergraph setting.

We next provide an (α+k)(\alpha+k)-approximation algorithm for the problem when a given graph is bipartite and local independence systems for one side are all kk-systems with independence oracles. Here an independence system (E,)(E,\mathcal{I}) is called a kk-system if for any subset FEF\subseteq E, any two maximal independent sets II and JJ in [F]\mathcal{I}[F] satisfy k|I||J|k|I|\geq|J|, and its independence oracle is to decide if a given subset JEJ\subseteq E belongs to \mathcal{I} or not.

The rest of the paper is organized as follows. In Section 2, we describe two natural algorithms for the problem (4) and analyze their approximation ratios. Section 3 provides approximation algorithms for the problem in which a given graph $G$ has bounded degeneracy. Section 4 provides an approximation algorithm for the problem in which a given graph $G$ is bipartite and all the local independence systems on one side of the vertices are $k$-systems. Section 5 defines independence systems on hypergraphs and generalizes our algorithms to the hypergraph case.

2 Local subpartitions and greedy algorithms

In this section, we first define local subpartitions of edges and analyze two natural algorithms for the problem (4). Local subpartitions of edges are used for constructing and analyzing approximation algorithms for the problem.

Definition 1.

For a graph G=(V,E)G=(V,E) and subsets PvEvP_{v}\subseteq E_{v} for all vVv\in V, the collection 𝒫={PvvV}\mathcal{P}=\{P_{v}\mid v\in V\} is called a local subpartition if PuPv=P_{u}\cap P_{v}=\emptyset for all distinct pairs uu and vv of vertices. The set R=E(P𝒫P)R=E\setminus(\bigcup_{P\in\mathcal{P}}P) is called the residual edge set of 𝒫\mathcal{P}.

By definition, for a local subpartition 𝒫={PvvV}\mathcal{P}=\{P_{v}\mid v\in V\}, the set PvP_{v} may be empty for some vv in VV. Let 𝒫={PvvV}\mathcal{P}=\{P_{v}\mid v\in V\} be a local subpartition of EE, and for every vVv\in V, let Ivv[Pv]I_{v}\in\mathcal{I}_{v}[P_{v}] be a subset of PvP_{v} which is independent in v\mathcal{I}_{v}. Then the union I=vVIvI=\bigcup_{v\in V}I_{v} may not be an independent set of \mathcal{I}. If it is independent, we have the following lemma, where we recall that 𝒜v(vV)\mathcal{A}_{v}\ (v\in V) denotes an α\alpha-approximate local oracle.

Lemma 2.

For a local subpartition $\mathcal{P}=\{P_{v}\mid v\in V\}$, let $R$ be the residual edge set of $\mathcal{P}$. If $I=\bigcup_{v\in V}\mathcal{A}_{v}(P_{v})$ is an independent set in $\mathcal{I}$, then it guarantees an approximation ratio of $\alpha+\max_{J\in\mathcal{I}[R]}|J|/|I|$ for the problem (4).

Proof.

For an independent set KK\in\mathcal{I}, the set I=vV𝒜v(Pv)I=\bigcup_{v\in V}\mathcal{A}_{v}(P_{v}) satisfies the following inequalities:

α|I|\displaystyle\alpha\,|I| =αvV|𝒜v(Pv)|vV|KPv|=|K||KR|.\displaystyle=\alpha\sum_{v\in V}|\mathcal{A}_{v}(P_{v})|\geq\sum_{v\in V}|K\cap P_{v}|=|K|-|K\cap R|.

This implies

|K|(α+|KR|/|I|)|I|(α+maxJ[R]|J|/|I|)|I|,\displaystyle|K|\leq(\alpha+|K\cap R|/|I|)\,|I|\leq(\alpha+\max_{J\in\mathcal{I}[R]}|J|/|I|)\,|I|,

which completes the proof. ∎

All the algorithms proposed in this paper construct local subpartitions of edges, and Lemma 2 is used to analyze their approximation ratios.

Let us then see the following two simple algorithms which cannot guarantee any constant approximation ratio, even if exact local oracles are available. The first one called FixedOrder makes use of local oracles 𝒜v\mathcal{A}_{v} in the order of the vertices vVv\in V that is fixed in advance.

\fname@algorithm FixedOrder
/* (v1,,vnv_{1},\dots,v_{n}) is a given vertex order. */
F:=EF:=E.
for v=v1,,vnv=v_{1},\dots,v_{n} do
     Pv:=EvFP_{v}:=E_{v}\cap F.
     Rv:=((uV:(u,v)𝒜v(Pv)Eu)Pv)FR_{v}:=\left(\left({\bigcup_{\begin{subarray}{c}u\in V:\\ (u,{v})\in\mathcal{A}_{v}(P_{v})\end{subarray}}}E_{u}\right)\setminus P_{v}\right)\cap F.
     F:=F(PvRv)F:=F\setminus(P_{v}\cup R_{v}).
end for
I:=vV𝒜v(Pv)I:=\bigcup_{v\in V}\mathcal{A}_{v}(P_{v}).
R:=vVRvR:=\bigcup_{v\in V}R_{v}.
Output II and halt.
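
A runnable Python sketch of FixedOrder is given below (our own transcription of the pseudocode; edges are encoded as 2-element frozensets, and oracles[v] may be any local oracle, e.g. the exact oracle sketched earlier).

def fixed_order(vertices, edges, oracles, order):
    """Algorithm FixedOrder: process vertices in the given fixed order, take the
    local oracle's answer on the still-available incident edges, and block all
    remaining edges touching the other endpoints of the chosen edges."""
    incident = {v: {e for e in edges if v in e} for v in vertices}
    available = set(edges)                       # the set F of the pseudocode
    chosen = set()                               # the output I
    for v in order:
        P_v = incident[v] & available
        I_v = set(oracles[v](P_v))
        chosen |= I_v
        blocked = set()                          # edges incident to the other endpoints
        for e in I_v:
            for u in e - {v}:
                blocked |= incident[u]
        R_v = (blocked - P_v) & available
        available -= P_v | R_v
    return chosen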

Algorithm FixedOrder constructs a local subpartition 𝒫={PvvV}\mathcal{P}=\{P_{v}\mid v\in V\} and the residual edge set RR of 𝒫\mathcal{P}, which will be proven in the next theorem. In order to ensure that I=vV𝒜v(Pv)I=\bigcup_{v\in V}\mathcal{A}_{v}(P_{v}) is independent in Lemma 2, the algorithm maintains FF as a candidate edge set during the iteration. The following theorem provides the approximation ratio of the algorithm, which is almost tight for the algorithm. In fact, it is tight if α\alpha is an integer.

Theorem 3.

Algorithm FixedOrder computes an $(\alpha+n-2)$-approximate solution for the problem (4) under the approximate local oracle model, and the approximation ratio of the algorithm is at least $\lfloor\alpha\rfloor+n-2$.

Proof.

For a vertex vVv\in V, let PvP_{v} and RvR_{v} denote the edge sets computed in the algorithm, and let I=vV𝒜v(Pv)I=\bigcup_{v\in V}\mathcal{A}_{v}(P_{v}) and R=vVRvR=\bigcup_{v\in V}R_{v}. Then we have PvEvP_{v}\subseteq E_{v} for any vVv\in V and PviEvj=P_{v_{i}}\cap E_{v_{j}}=\emptyset if j<ij<i. These imply that 𝒫={PvvV}\mathcal{P}=\{P_{v}\mid v\in V\} is a local subpartition of EE. Note that RuPv=R_{u}\cap P_{v}=\emptyset for any uu and vv in VV, which implies that RPv=R\cap P_{v}=\emptyset for any vv in VV. Since Pvi=Evij<i(PvjRvj)P_{v_{i}}=E_{v_{i}}\setminus\bigcup_{j<i}(P_{v_{j}}\cup R_{v_{j}}) also holds for any viVv_{i}\in V, RR is the residual edge set of 𝒫\mathcal{P}. Moreover, we have

|R|vV|Rv|vVuV:(u,v)𝒜v(Pv)|EuPv|(n2)|I|.\displaystyle|R|\leq\sum_{v\in V}|R_{v}|\leq\sum_{v\in V}\sum_{\begin{subarray}{c}u\in V:\\ (u,v)\in\mathcal{A}_{v}(P_{v})\end{subarray}}|E_{u}\setminus P_{v}|\leq(n-2)|I|. (7)

For any vertex viVv_{i}\in V, if an edge (vi,vj)(v_{i},v_{j}) with i>ji>j is contained in 𝒜vj(Pvj)\mathcal{A}_{v_{j}}(P_{v_{j}}), then PvlEvi=P_{v_{l}}\cap E_{v_{i}}=\emptyset holds for all l>jl>j. This implies that |IEv|1|I\cap E_{v}|\leq 1 or IEv=𝒜v(Pv)I\cap E_{v}=\mathcal{A}_{v}(P_{v}) for all vVv\in V, which concludes that II\in\mathcal{I}. Therefore by Lemma 2, II guarantees an approximation ratio of α+n2\alpha+n-2 for the problem (4).

We next show a lower bound of the approximation ratio of the algorithm. Define a graph G=(V,E)G=(V,E) by

V\displaystyle V ={s,t}{vii=1,,n2}\displaystyle=\{s,t\}\cup\{v_{i}\mid i=1,\ldots,n-2\}
E\displaystyle E =EsEt,\displaystyle=E_{s}\cup E_{t},

where

Es\displaystyle E_{s} ={(s,t)}{(s,vi)i=1,,α1}and\displaystyle=\{(s,t)\}\cup\{(s,v_{i})\mid i=1,\ldots,\lfloor\alpha\rfloor-1\}\ \text{and}
Et\displaystyle E_{t} ={(s,t)}{(t,vi)i=1,,n2}.\displaystyle=\{(s,t)\}\cup\{(t,v_{i})\mid i=1,\ldots,n-2\}.

For every vertex vVv\in V, let v=2Ev\mathcal{I}_{v}=2^{E_{v}}. By definition, EE is an optimal solution for this instance of the problem (4). On the other hand, if the algorithm first chooses a vertex ss and the local oracle 𝒜s\mathcal{A}_{s} returns {(s,t)}\{(s,t)\}, then the algorithm outputs I={(s,t)}I=\{(s,t)\}. Since |E|=α+n2|E|=\lfloor\alpha\rfloor+n-2 and |I|=1|I|=1, α+n2\lfloor\alpha\rfloor+n-2 is a lower bound of the approximation ratio of the algorithm. ∎

The second algorithm called Greedy makes use of local oracles 𝒜v\mathcal{A}_{v} in a greedy order of vertices v1,,vnv_{1},\dots,v_{n}, where

viargmax{|𝒜v(EvF(i))|vV{v1,,vi1}}\displaystyle v_{i}\in\arg\max\{|\mathcal{A}_{v}(E_{v}\cap F^{(i)})|\mid v\in V\setminus\{v_{1},\dots,v_{i-1}\}\} fori=1,,n.\displaystyle\text{for}\ i=1,\dots,n.

Here the subset F(i)EF^{(i)}\subseteq E is the candidate edge set in the ii-th round of Greedy.

\fname@algorithm Greedy
F:=EF:=E.
W:=VW:=V.
while WW\not=\emptyset do
     vargmaxwW|𝒜w(EwF)|v\in\arg\max_{w\in W}|\mathcal{A}_{w}(E_{w}\cap F)|.
     W:=W{v}W:=W\setminus\{v\}.
     Pv:=EvFP_{v}:=E_{v}\cap F.
     Rv:=((uV:(u,v)𝒜v(Pv)Eu)Pv)FR_{v}:=\left(\left(\bigcup_{\begin{subarray}{c}u\in V:\\ (u,v)\in\mathcal{A}_{v}(P_{v})\end{subarray}}E_{u}\right)\setminus P_{v}\right)\cap F.
     F:=F(PvRv)F:=F\setminus(P_{v}\cup R_{v}).
end while
I:=vV𝒜v(Pv)I:=\bigcup_{v\in V}\mathcal{A}_{v}(P_{v})
R:=vVRvR:=\bigcup_{v\in V}R_{v}.
Output II and halt.
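
Greedy admits an analogous sketch under the same encoding assumptions as FixedOrder; the only difference is that the next vertex is chosen by the size of its oracle answer on the remaining edges.

def greedy(vertices, edges, oracles):
    """Algorithm Greedy: repeatedly pick the vertex whose local oracle returns the
    largest independent set among the currently available edges."""
    incident = {v: {e for e in edges if v in e} for v in vertices}
    available = set(edges)
    waiting = set(vertices)
    chosen = set()
    while waiting:
        v = max(waiting, key=lambda w: len(oracles[w](incident[w] & available)))
        waiting.remove(v)
        P_v = incident[v] & available
        I_v = set(oracles[v](P_v))
        chosen |= I_v
        blocked = set()
        for e in I_v:
            for u in e - {v}:
                blocked |= incident[u]
        R_v = (blocked - P_v) & available
        available -= P_v | R_v
    return chosen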

For the analysis of Greedy, we use the following lemma which is slightly different from Lemma 2.

Lemma 4.

Let 𝒫={PvvV}\mathcal{P}=\{P_{v}\mid v\in V\} be a local subpartition of EE, and for the residual RR of 𝒫\mathcal{P}, let {RvvV}\{R_{v}\mid v\ \in V\} be a partition of RR that satisfies Rv=R_{v}=\emptyset for all vertices vv with Pv=P_{v}=\emptyset. If I=vV𝒜v(Pv)I=\bigcup_{v\in V}\mathcal{A}_{v}(P_{v}) is an independent set in \mathcal{I}, then it guarantees approximation ratio β\beta for the problem (4), where

β=maxvV:PvmaxJ[PvRv]|J||𝒜v(Pv)|.\displaystyle\beta=\max_{v\in V:P_{v}\not=\emptyset}\ \max_{J\in\mathcal{I}[P_{v}\cup R_{v}]}\frac{|J|}{|\mathcal{A}_{v}(P_{v})|}.
Proof.

For an independent set KK\in\mathcal{I}, I=vV𝒜v(Pv)I=\bigcup_{v\in V}\mathcal{A}_{v}(P_{v}) satisfies the following inequalities:

|K|vVmaxJ[PvRv]|J|βvV:Pv|𝒜v(Pv)|=β|I|,\displaystyle|K|\leq\sum_{v\in V}\max_{J\in\mathcal{I}[P_{v}\cup R_{v}]}|J|\leq\beta\sum_{v\in V:P_{v}\not=\emptyset}|\mathcal{A}_{v}(P_{v})|=\beta\,|I|,

which completes the proof. ∎

The following theorem provides the approximation ratio of Greedy, which is almost tight for the algorithm. In fact, it is tight if (α1)(n1)<α(\alpha-1)(n-1)<\alpha.

Theorem 5.

Algorithm Greedy computes a ρ(α,n)\rho(\alpha,n)-approximate solution for the problem (4), where ρ\rho is the function of α\alpha and nn defined as

α+2α12α(n1)12\displaystyle\alpha+\frac{2\alpha-1}{2\alpha}(n-1)-\frac{1}{2} if(α1)(n1)α(α+1)\displaystyle\text{if}\ (\alpha-1)(n-1)\geq\alpha(\alpha+1) (8)
α+αα+1(n1)\displaystyle\alpha+\frac{\alpha}{\alpha+1}(n-1) ifα(α1)(n1)<α(α+1)\displaystyle\text{if}\ \alpha\leq(\alpha-1)(n-1)<\alpha(\alpha+1) (9)
n2\displaystyle\frac{n}{2} if(α1)(n1)<α.\displaystyle\text{if}\ (\alpha-1)(n-1)<\alpha. (10)

Moreover, the approximation ratio is at least

(8)α2\displaystyle(\ref{lemeq:upperbounds1})-\frac{\alpha}{2} if(α1)(n1)α(α+1)\displaystyle\text{if}\ (\alpha-1)(n-1)\geq\alpha(\alpha+1)
(9)(32α12)\displaystyle(\ref{lemeq:upperbounds2})-\left(\frac{3}{2}\alpha-\frac{1}{2}\right) ifα(α1)(n1)<α(α+1)\displaystyle\text{if}\ \alpha\leq(\alpha-1)(n-1)<\alpha(\alpha+1)
(10)\displaystyle(\ref{lemeq:upperbounds3}) if(α1)(n1)<α.\displaystyle\text{if}\ (\alpha-1)(n-1)<\alpha.
Proof.

Let v1,,vnv_{1},\ldots,v_{n} be the order of vertices chosen in the while-loop of the algorithm. For vVv\in V, let PvP_{v} and RvR_{v} be the edge sets computed in the algorithm, and let R=vVRvR=\bigcup_{v\in V}R_{v}. Then, similarly to Algorithm FixedOrder, 𝒫={PvvV}\mathcal{P}=\{P_{v}\mid v\in V\} is a local subpartition of EE with the residual RR, and II is independent in \mathcal{I}. Since vv is chosen from argmaxwW|𝒜w(EwF)|\arg\max_{w\in W}|\mathcal{A}_{w}(E_{w}\cap F)| for each while-loop of Algorithm Greedy, there is an index ss such that (1)(1) PviP_{v_{i}}\not=\emptyset for all isi\leq s and (2)(2) Pvi,Rvi=P_{v_{i}},R_{v_{i}}=\emptyset for all i>si>s. For isi\leq s, define Fi=PviRviF_{i}=P_{v_{i}}\cup R_{v_{i}}.

Then, by Lemma 4, it is enough to show that

maxJ[Fi]|J||𝒜vi(Pvi)|ρ(α,n)fori=1,,s.\displaystyle\frac{\max_{J\in\mathcal{I}[F_{i}]}|J|}{|\mathcal{A}_{v_{i}}(P_{v_{i}})|}\leq\rho(\alpha,n)\ \text{for}\ i=1,\dots,s. (11)

Let tt be an index that attains the maximum in the left-hand side of (11), and let gt=maxJ[Ft]|J|g_{t}=\max_{J\in\mathcal{I}[F_{t}]}|J|.

Let H=(W,Ft)H=(W,F_{t}) be a graph induced by FtF_{t}, i.e., W={v,uV(v,u)Ft}W=\{v,u\in V\mid(v,u)\in F_{t}\}. Let ν=|W|\nu=|W| and x=|𝒜vt(Pvt)|x=|\mathcal{A}_{v_{t}}(P_{v_{t}})|. Note that

FtEvtuW:(u,vt)𝒜v(Pvt)Eu,\displaystyle F_{t}\subseteq E_{v_{t}}\cup\bigcup_{u\in W:\ (u,v_{t})\in\mathcal{A}_{v}(P_{v_{t}})}E_{u}, (12)
maxJ[FtEv]|J|α|𝒜v(FtEv)|αx\displaystyle\max_{J\in\mathcal{I}[F_{t}\cap E_{v}]}|J|\leq\lfloor\alpha\,|\mathcal{A}_{v}(F_{t}\cap E_{v})|\rfloor\leq\lfloor\alpha\,x\rfloor forvW,\displaystyle\text{for}\ v\in W, (13)

and vtv_{t} is chosen greedily. In order to estimate gtg_{t}, we consider subgraphs of HH with degree at most αx\lfloor\alpha x\rfloor. We separately deal with the following four cases.

Case 1: (α1)(n1)<α.\displaystyle\ (\alpha-1)(n-1)<\alpha.
Case 2: (α1)(n1)αandαxν1.\displaystyle\ (\alpha-1)(n-1)\geq\alpha\ \text{and}\ \alpha x\geq\nu-1.
Case 3: (α1)(n1)αandνx1αx<ν1.\displaystyle\ (\alpha-1)(n-1)\geq\alpha\ \text{and}\ \nu-x-1\leq\alpha x<\nu-1.
Case 4: (α1)(n1)αandαx<νx1.\displaystyle\ (\alpha-1)(n-1)\geq\alpha\ \text{and}\ \alpha x<\nu-x-1.

Case 1 is to prove (10), while Cases 2,3, and 4 are to prove (8) and (9).

Case 1. Note that (α1)(n1)<α(\alpha-1)(n-1)<\alpha is equivalent to the condition α(n2)<n1\alpha(n-2)<n-1. Thus if x<ν1(n1)x<\nu-1\ (\leq n-1), then we have αx=x\lfloor\alpha x\rfloor=x, and hence it follows from (12) and (13) that

gtx(νx1)+12((x+1)xx(νx1))=νx2,\displaystyle g_{t}\leq x(\nu-x-1)+\frac{1}{2}((x+1)x-x(\nu-x-1))=\frac{\nu x}{2},

which implies that

gtxν2n2.\displaystyle\frac{g_{t}}{x}\leq\frac{\nu}{2}\leq\frac{n}{2}.

On the other hand, if xν1x\geq\nu-1, then we have

gtx12ν(ν1)xν2n2.\displaystyle\frac{g_{t}}{x}\leq\frac{\frac{1}{2}\nu(\nu-1)}{x}\leq\frac{\nu}{2}\leq\frac{n}{2}.

In either case, (10) is proved.

Case 2. Since αxν1\alpha x\geq\nu-1, (12) implies that

gt(x+1)(νx1)+12(x+1)x12x2+(ν32)x+ν1.\displaystyle g_{t}\leq(x+1)(\nu-x-1)+\frac{1}{2}(x+1)x\leq-\frac{1}{2}x^{2}+\left(\nu-\frac{3}{2}\right)x+\nu-1.

Thus,

gtx\displaystyle\frac{g_{t}}{x} 12x+(ν32)+ν1x\displaystyle\leq-\frac{1}{2}x+\left(\nu-\frac{3}{2}\right)+\frac{\nu-1}{x}
α+2α12α(ν1)12\displaystyle\leq\alpha+\frac{2\alpha-1}{2\alpha}(\nu-1)-\frac{1}{2}
α+2α12α(n1)12,\displaystyle\leq\alpha+\frac{2\alpha-1}{2\alpha}(n-1)-\frac{1}{2},

where the second inequality is obtained by substituting x=ν1αx=\frac{\nu-1}{\alpha}.

Case 3. Let LL be an independent set in [Ft]\mathcal{I}[F_{t}]. Then we have |LEv|αx|L\cap E_{v}|\leq\lfloor\alpha x\rfloor for any vVv\in V, since x=𝒜vt(Pvt)x=\mathcal{A}_{v_{t}}(P_{v_{t}}) and local oracle 𝒜v\mathcal{A}_{v} is α\alpha-approximate. Thus if αxx+1\alpha x\geq x+1, it follows from (12) that

gt\displaystyle g_{t} (x+1)(νx1)+12(x+1)(αx(νx1))\displaystyle\leq(x+1)(\nu-x-1)+\frac{1}{2}(x+1)(\lfloor\alpha x\rfloor-(\nu-x-1))
12{(α1)x2+(α+ν2)x+ν1},\displaystyle\leq\frac{1}{2}\left\{(\alpha-1)x^{2}+(\alpha+\nu-2)x+\nu-1\right\},

which implies that

gtx\displaystyle\frac{g_{t}}{x} 12((α1)x+α+ν2+ν1x).\displaystyle\leq\frac{1}{2}\left((\alpha-1)x+\alpha+\nu-2+\frac{\nu-1}{x}\right). (14)

Since $\frac{\nu-1}{\alpha+1}\leq x\leq\frac{\nu-1}{\alpha}$, the right-hand side of (14) takes its maximum at $x=\frac{\nu-1}{\alpha+1}$ or $x=\frac{\nu-1}{\alpha}$. Namely, we have

gtxmax{α+αα+1(n1),α+2α12α(n1)12}.\displaystyle\frac{g_{t}}{x}\leq\max\left\{\alpha+\frac{\alpha}{\alpha+1}(n-1),\ \alpha+\frac{2\alpha-1}{2\alpha}(n-1)-\frac{1}{2}\right\}. (15)

On the other hand, if αx<x+1\alpha x<x+1, i.e., αx=x\lfloor\alpha x\rfloor=x, then we have

gtx(νx1)+12((x+1)xx(νx1))=νx2,\displaystyle g_{t}\leq x(\nu-x-1)+\frac{1}{2}((x+1)x-x(\nu-x-1))=\frac{\nu x}{2},

which implies that gtxν2n2\frac{g_{t}}{x}\leq\frac{\nu}{2}\leq\frac{n}{2}. Since n2α+2α12α(n1)12\frac{n}{2}\leq\alpha+\frac{2\alpha-1}{2\alpha}(n-1)-\frac{1}{2} and n2α+αα+1(n1)\frac{n}{2}\leq\alpha+\frac{\alpha}{\alpha+1}(n-1) for α1\alpha\geq 1, (15) is obtained in Case 3.

Case 4. In this case, we have gt(x+1)αxg_{t}\leq(x+1)\lfloor\alpha x\rfloor and

gtxα+αxα+αα+1(n1),\displaystyle\frac{g_{t}}{x}\leq\alpha+\alpha x\leq\alpha+\frac{\alpha}{\alpha+1}(n-1),

where the second inequality is obtained from α+αx\alpha+\alpha x by substituting x=ν1α+1x=\frac{\nu-1}{\alpha+1}.

In summarizing Cases 2, 3, and 4, we obtain inequality (15). Since

α+2α12α(n1)12α+αα+1(n1)\displaystyle\alpha+\frac{2\alpha-1}{2\alpha}(n-1)-\frac{1}{2}\geq\alpha+\frac{\alpha}{\alpha+1}(n-1)

if and only if (α1)(n1)α(α+1)(\alpha-1)(n-1)\geq\alpha(\alpha+1), (8) and (9) are obtained.

We next show lower bounds of the approximation ratio of the algorithm. Let (E,)(E,\mathcal{I}) be an independence system on a complete graph Kn=(V,E)K_{n}=(V,E) with I=2EI=2^{E}, and we assume that 𝒜v(Ev)=Ev\mathcal{A}_{v}(E_{v})=E_{v} for some vVv\in V. Then Algorithm Greedy chooses a vertex vVv\in V with 𝒜v(Ev)=Ev\mathcal{A}_{v}(E_{v})=E_{v} and sets Pv=EvP_{v}=E_{v} and Rv=EEvR_{v}=E\setminus E_{v} at the first iteration of the while-loop. Since F=F=\emptyset after the first iteration of the while-loop, the algorithm outputs I=EvI=E_{v}, and this implies that n2\frac{n}{2} is a lower bound of the approximation ratio of the algorithm. Note that (10) is tight if (α1)(n1)<α(\alpha-1)(n-1)<\alpha. Moreover, if (α1)(n1)<α(α+1)(\alpha-1)(n-1)<\alpha(\alpha+1) holds, we have

(9)(10)=α12+12(α+1)(α1)(n1)32α12,\displaystyle(\ref{lemeq:upperbounds2})-(\ref{lemeq:upperbounds3})=\alpha-\frac{1}{2}+\frac{1}{2(\alpha+1)}(\alpha-1)(n-1)\leq\frac{3}{2}\alpha-\frac{1}{2},

which proves the tightness of (9). We consider a lower bound of the algorithm if (α1)(n1)α(α+1)(\alpha-1)(n-1)\geq\alpha(\alpha+1) holds. Define a graph G=(V=UW,E)G=(V=U\cup W,E) by

U\displaystyle U ={vii=1,,n1α+1}\displaystyle=\{v_{i}\mid i=1,\dots,\left\lceil\frac{n-1}{\alpha}\right\rceil+1\}
W\displaystyle W ={vii=n1α+2,,n}\displaystyle=\{v_{i}\mid i=\left\lceil\frac{n-1}{\alpha}\right\rceil+2,\dots,n\}
Ev\displaystyle E_{v} ={{(v,u)uV{v}}ifvU{(v,u)uU}ifvW.\displaystyle=\begin{cases}\{(v,u)\mid u\in V\setminus\{v\}\}&\text{if}\ v\in U\\ \{(v,u)\mid u\in U\}&\text{if}\ v\in W.\end{cases}

Let (E,)(E,\mathcal{I}) be an independence system on GG with EE\in\mathcal{I}, and we assume that every local oracle returns an independent set of size n1α\left\lceil\frac{n-1}{\alpha}\right\rceil. Note that αn1αn1\alpha\left\lceil\frac{n-1}{\alpha}\right\rceil\geq n-1 and (α1)n1α1(\alpha-1)\left\lceil\frac{n-1}{\alpha}\right\rceil\geq 1 hold, since (α1)(n1)α(α+1)(\alpha-1)(n-1)\geq\alpha(\alpha+1). If Algorithm Greedy chooses v1Uv_{1}\in U with 𝒜v1(Ev1)={(v1,u)uU{v1}}\mathcal{A}_{v_{1}}(E_{v_{1}})=\{(v_{1},u)\mid u\in U\setminus\{v_{1}\}\} at the first iteration of the while-loop, then Rv1=EEv1R_{v_{1}}=E\setminus E_{v_{1}} holds and the algorithm outputs I=𝒜v1(Ev1)I=\mathcal{A}_{v_{1}}(E_{v_{1}}). Similarly to the analysis of an upper bound of Case 2, we have

g1n1α\displaystyle\frac{g_{1}}{\left\lceil\frac{n-1}{\alpha}\right\rceil} =12n1α+(n32)+n1n1α\displaystyle=-\frac{1}{2}\left\lceil\frac{n-1}{\alpha}\right\rceil+\left(n-\frac{3}{2}\right)+\frac{n-1}{\left\lceil\frac{n-1}{\alpha}\right\rceil}
12(n1α+1)+(n32)+n1n1α+1\displaystyle\geq-\frac{1}{2}\left(\frac{n-1}{\alpha}+1\right)+\left(n-\frac{3}{2}\right)+\frac{n-1}{\frac{n-1}{\alpha}+1}
=(α+2α12α(n1)12)(12+α2n1+α)\displaystyle=\left(\alpha+\frac{2\alpha-1}{2\alpha}(n-1)-\frac{1}{2}\right)-\left(\frac{1}{2}+\frac{\alpha^{2}}{n-1+\alpha}\right)
(α+2α12α(n1)12)α2,\displaystyle\geq\left(\alpha+\frac{2\alpha-1}{2\alpha}(n-1)-\frac{1}{2}\right)-\frac{\alpha}{2},

where the first and second inequalities follow from n1α<n1α+1\left\lceil\frac{n-1}{\alpha}\right\rceil<\frac{n-1}{\alpha}+1 and (α1)(n1)α(α+1)(\alpha-1)(n-1)\geq\alpha(\alpha+1), respectively. This proves the tightness of (8). ∎

3 Approximation algorithms based on local oracles

In this section, we present approximation algorithms for the problem (4). Our first algorithm, called OrderedApprox, makes use of a linear order $\prec$ of the vertices $V$. Different from FixedOrder and Greedy in Section 2, the algorithm tries to minimize the size of $R$. In fact, we have $R=\emptyset$ if $G$ is a tree. More precisely, for each $v\in V$, we initialize $P_{v}=D_{v}$, where $D_{v}$ is the set of downward edges of $v$, i.e., $D_{v}=\{(u,v)\in E_{v}\mid u\prec v\}$, and compute $\mathcal{A}_{v}(P_{v})$ and $\mathcal{A}_{v}(P_{v}\cup\{e\})$ for each $e\in U_{v}$, where $U_{v}$ is the set of upward edges of $v$, i.e., $U_{v}=\{(u,v)\in E_{v}\mid v\prec u\}$. Based on these oracle outputs and their sizes, we consider four cases; in each case we update $P_{v}$ and construct $I_{v}=\mathcal{A}_{v}(P_{v})$ and $R_{v}$ accordingly. Here $P_{v}$ and $R=\bigcup_{v\in V}R_{v}$ correspond to those in Lemma 2. We then modify $P_{w}$ for vertices $w$ with $v\prec w$ in such a way that the set $I=\mathcal{A}_{w}(P_{w})\cup\bigcup_{u\in V:u\preceq v}I_{u}$ is an independent set in $\mathcal{I}$. The second algorithm first decomposes the edge set $E$ into forests $E_{1},\ldots,E_{\gamma}$, applies OrderedApprox to each forest $E_{i}$ to compute an independent set $I_{i}$, and chooses a maximum independent set among them.

We first define the upward and downward edge sets with respect to a linear order.

Definition 6.

For a graph G=(V,E)G=(V,E), let \prec be a linear order of vertices VV. For a vertex vv, let Uv={(v,w)Evvw}U_{v}=\{(v,w)\in E_{v}\mid v\prec w\} and Dv={(v,w)Evwv}D_{v}=\{(v,w)\in E_{v}\mid w\prec v\} be the sets of upward and downward edges incident to vv, respectively. We define the width of GG (with respect to a linear order \prec) as maxvV|Uv|\max_{v\in V}|U_{v}|.

The minimum width of the graph G=(V,E)G=(V,E) among all linear order of vertices VV is called the degeneracy of GG, and GG is called kk-degenerate if kk is at least the minimum width of GG [14, 15, 16]. Note that a linear order of vertices VV certifying that GG is kk-degenerate can be obtained by repeatedly choosing vertices with minimum degree in the remaining graph, i.e.,

viargmin{degG[V{v1,,vi1}](v)vV{v1,,vi1}}\displaystyle v_{i}\in\arg\min\{\deg_{G[V\setminus\{v_{1},\dots,v_{i-1}\}]}(v)\mid v\in V\setminus\{v_{1},\dots,v_{i-1}\}\} fori=1,,n,\displaystyle\text{for}\ i=1,\dots,n,

where $\deg_{H}(v)$ denotes the degree of vertex $v$ in the graph $H$ and $G[W]$ denotes the subgraph of $G$ induced by a vertex subset $W$. Therefore, such a linear order can be computed in linear time. It is also known that the degeneracy of a graph is at most its tree-width.
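
The following sketch (our own) implements this minimum-degree elimination order and reports the width it attains; such an order can then be supplied to the algorithms of this section.

def min_degree_order(vertices, edges):
    """Greedy minimum-degree order: v_i has minimum degree among the vertices not
    chosen yet, so each v_i has at most k upward edges, where k is the largest
    degree observed at the moment of a choice."""
    remaining = set(vertices)
    incident = {v: [e for e in edges if v in e] for v in vertices}

    def live_degree(v):
        return sum(1 for e in incident[v] if (e - {v}) <= remaining)

    order, width = [], 0
    while remaining:
        v = min(remaining, key=live_degree)
        width = max(width, live_degree(v))
        order.append(v)
        remaining.remove(v)
    return order, width

# The path a-b-c-d is 1-degenerate, so the produced order has width 1.
V = ["a", "b", "c", "d"]
E = [frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "d")]]
print(min_degree_order(V, E))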

\fname@algorithm OrderedApprox(G,,)(G,\mathcal{I},\prec)
1:An independence system (E,)(E,\mathcal{I}) defined on a graph G=(V,E)G=(V,E) and a linear order v1v2vnv_{1}\prec v_{2}\prec\ldots\prec v_{n} of vertices VV.
2:An independent set in \mathcal{I}.
3:X:=X:=\emptyset.
4:Pv:=DvP_{v}:=D_{v} for vVv\in V.
5:for i=1,,ni=1,\ldots,n do
6:     Bvi:={eUvie𝒜vi(Pvi{e}),|𝒜vi(Pvi{e})|>|𝒜vi(Pvi)|}B_{v_{i}}:=\{e\in U_{v_{i}}\mid e\in\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\}),\ |\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\})|>|\mathcal{A}_{v_{i}}(P_{v_{i}})|\}.
7:     if Uvi=U_{v_{i}}=\emptyset then
8:         Ivi:=𝒜vi(Pvi)I_{v_{i}}:=\mathcal{A}_{v_{i}}(P_{v_{i}}).
9:         Rvi:=R_{v_{i}}:=\emptyset.
10:     else if UviU_{v_{i}}\not=\emptyset, PviP_{v_{i}}\not=\emptyset, and Bvi=B_{v_{i}}=\emptyset then
11:         Choose an edge e=(vi,vj)Uvie=(v_{i},v_{j})\in U_{v_{i}} arbitrarily.
12:         if e𝒜vi(Pvi{e})e\not\in\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\}) then
13:              Ivi:=𝒜vi(Pvi{e})I_{v_{i}}:=\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\}).
14:         else (i.e., |𝒜vi(Pvi{e})|=|𝒜vi(Pvi)||\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\})|=|\mathcal{A}_{v_{i}}(P_{v_{i}})|)
15:              Ivi:=𝒜vi(Pvi)I_{v_{i}}:=\mathcal{A}_{v_{i}}(P_{v_{i}})./* |Ivi|=|𝒜vi(Pvi{e})||I_{v_{i}}|=|\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\})| */
16:         end if
17:         Pvi:=Pvi{e}P_{v_{i}}:=P_{v_{i}}\cup\{e\}.
18:         Rvi:=Uvi{e}R_{v_{i}}:=U_{v_{i}}\setminus\{e\}.
19:         Pvj:=Pvj{e}P_{v_{j}}:=P_{v_{j}}\setminus\{e\}.
20:     else if UviU_{v_{i}}\not=\emptyset, PviP_{v_{i}}\not=\emptyset, and BviB_{v_{i}}\not=\emptyset then
21:         Choose an edge bBvib\in B_{v_{i}} arbitrarily.
22:         Ivi:=𝒜vi(Pvi{b}){b}I_{v_{i}}:=\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{b\})\setminus\{b\}. /* IviPviI_{v_{i}}\subseteq P_{v_{i}} and |Ivi|=|𝒜vi(Pvi)||I_{v_{i}}|=|\mathcal{A}_{v_{i}}(P_{v_{i}})| */
23:         Rvi:=Uvi{b}R_{v_{i}}:=U_{v_{i}}\setminus\{b\}.
24:     else (i.e., UviU_{v_{i}}\not=\emptyset and Pvi=P_{v_{i}}=\emptyset)
25:         Ivi:=I_{v_{i}}:=\emptyset.
26:         Rvi:=R_{v_{i}}:=\emptyset. /* RviR_{v_{i}} might be updated in line 31 */
27:         X:=X{vi}X:=X\cup\{v_{i}\}.
28:     end if
29:     for (vl,vi)Ivi(v_{l},v_{i})\in I_{v_{i}} do
30:         if vlXv_{l}\in X then
31:              Rvl:={(vl,vj)Uvlj>i}R_{v_{l}}:=\{(v_{l},v_{j})\in U_{v_{l}}\mid j>i\}.
32:              X:=X{vl}X:=X\setminus\{v_{l}\}.
33:         end if
34:     end for
35:     for j=i+1,,nj=i+1,\ldots,n do
36:         Pvj:=Pvj(liRvl)P_{v_{j}}:=P_{v_{j}}\setminus(\bigcup_{l\leq i}R_{v_{l}}).
37:     end for
38:end for
39:Output I=vVIvI=\bigcup_{v\in V}I_{v} and halt.

Algorithm OrderedApprox first initializes Pv=DvP_{v}=D_{v} for all vVv\in V and for each ii-th iteration of the for-loop, computes an edge set BviUviB_{v_{i}}\subseteq U_{v_{i}} by

Bvi:={eUvie𝒜vi(Pvi{e}),|𝒜vi(Pvi{e})|>|𝒜vi(Pvi)|}.\displaystyle B_{v_{i}}:=\{e\in U_{v_{i}}\mid e\in\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\}),\ |\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\})|>|\mathcal{A}_{v_{i}}(P_{v_{i}})|\}.

It separately treats the following four cases as in the description of the algorithm.

Case 1: Uvi=\displaystyle\ U_{v_{i}}=\emptyset
Case 2: Uvi,Pvi,andBvi=\displaystyle\ U_{v_{i}}\not=\emptyset,\ P_{v_{i}}\not=\emptyset,\ \text{and}\ B_{v_{i}}=\emptyset
Case 3: Uvi,Pvi,andBvi\displaystyle\ U_{v_{i}}\not=\emptyset,\ P_{v_{i}}\not=\emptyset,\ \text{and}\ B_{v_{i}}\not=\emptyset
Case 4: UviandPvi=\displaystyle\ U_{v_{i}}\not=\emptyset\ \text{and}\ P_{v_{i}}=\emptyset

For all the cases, we show the following two lemmas.

Lemma 7.

Algorithm OrderedApprox satisfies the following three conditions

(i)\displaystyle\hskip 10.0pt\rm{({i})} IvjPvjEvjwith|Ivj|=|𝒜vj(Pvj)|for all ji,\displaystyle\ I_{v_{j}}\subseteq P_{v_{j}}\subseteq E_{v_{j}}\ \text{with}\ |I_{v_{j}}|=|\mathcal{A}_{v_{j}}(P_{v_{j}})|\ \text{for all }\ j\leq i,
(ii)\displaystyle\rm{({ii})} RvjUvjfor all ji,and\displaystyle\ R_{v_{j}}\subseteq U_{v_{j}}\ \text{for all }\ j\leq i,\ \text{and}
(iii)\displaystyle\rm{({iii})} 𝒫={PvvV}is a local subpartition ofEwith residualR=jiRvj\displaystyle\ \mathcal{P}=\{P_{v}\mid v\in V\}\ \text{is a local subpartition of}\ E\ \text{with residual}\ R=\bigcup_{j\leq i}R_{v_{j}}

at the end of the ii-th iteration of the for-loop.

Proof.

Since IvjI_{v_{j}} and PvjP_{v_{j}} are never modified after the jj-th iteration of the for-loop in Line 5, it is enough to show that IviPviEviI_{v_{i}}\subseteq P_{v_{i}}\subseteq E_{v_{i}} and |Ivi|=|𝒜vi(Pvi)||I_{v_{i}}|=|\mathcal{A}_{v_{i}}(P_{v_{i}})| at the end of the ii-th iteration of the for-loop to prove Condition (i). We can see that PviP_{v_{i}} is initialized to DviD_{v_{i}}, and PviP_{v_{i}} is modified in Lines 17, 19, and 36. In Lines 19 and 36, no edge is added to PviP_{v_{i}}, and in Line 17, some edge eUvie\in U_{v_{i}} is added to PviP_{v_{i}}. These show that PviEviP_{v_{i}}\subseteq E_{v_{i}}. Moreover, since IviI_{v_{i}} is constructed in Lines 8, 13, 15, 22, and 25, we have IviPviI_{v_{i}}\subseteq P_{v_{i}} and |Ivi|=|𝒜vi(Pvi)||I_{v_{i}}|=|\mathcal{A}_{v_{i}}(P_{v_{i}})|, which implies Condition (i).

Since RvjR_{v_{j}} is never modified after the jj-th iteration for all the cases except for Case 4, Condition (ii) is satisfied if Case 4 is not satisfied in the jj-th iteration of the for-loop. On the other hand, if Case 4 is satisfied in the jj-th iteration of the for-loop, then RvjR_{v_{j}} might be updated in Line 31, which again satisfies RvjUvjR_{v_{j}}\subseteq U_{v_{j}}. This implies Condition (ii).

Let us finally show Condition (iii). Before the first iteration of the for-loop, 𝒫={Pv=DvvV}\mathcal{P}=\{P_{v}=D_{v}\mid v\in V\} is a local subpartition of EE with the residual R=R=\emptyset. Assuming that Condition (iii) is satisfied in the beginning of the ii-th iteration, we show that Condition (iii) is satisfied at the end of the ii-th iteration. Note that PviP_{v_{i}} is modified in Lines 17, 19, and 36. In Lines 19 and 36, no edge is added to PvjP_{v_{j}} for any jj. If some edge e=(vi,vj)Uvie=(v_{i},v_{j})\in U_{v_{i}} is added to PviP_{v_{i}} in Line 17, ee is removed from PvjP_{v_{j}}. These imply that 𝒫={PvvV}\mathcal{P}=\{P_{v}\mid v\in V\} is a subpartition of EE. Moreover, in any iteration, no edge is added to Pv𝒫Pv\bigcup_{P_{v}\in\mathcal{P}}P_{v}, and when RvR_{v} is constructed in Lines 9, 18, 23, 26, and 31, all the edges in such an RvR_{v} are deleted from the corresponding PvjP_{v_{j}} in Line 36. Thus Condition (iii) is satisfied at the end of the ii-th iteration, which completes the proof of the lemma. ∎

Lemma 8.

Algorithm OrderedApprox satisfies that

Ij<iIvjfor any independent setI[Pvi]\displaystyle I\cup\bigcup_{j<i}I_{v_{j}}\in\mathcal{I}\ \text{for any independent set}\ I\in\mathcal{I}[P_{v_{i}}] (16)

in the beginning of the ii-th iteration of the for-loop.

Proof.

For an index jj, consider the end of the jj-th iteration of the for-loop in Line 5. We have Ivj{e}vjI_{v_{j}}\cup\{e\}\in\mathcal{I}_{v_{j}} for any eUvj(PvjRvj)e\in U_{v_{j}}\setminus(P_{v_{j}}\cup R_{v_{j}}). If the jj-th iteration of the for-loop falls into Cases 1, 2, or 3, then Uvj(k>jPvk)=Uvj(PvjRvj)U_{v_{j}}\cap(\bigcup_{k>j}P_{v_{k}})=U_{v_{j}}\setminus(P_{v_{j}}\cup R_{v_{j}}) contains at most one edge. Otherwise (i.e., Case 4), we have Uvjk>jPvkU_{v_{j}}\subseteq\bigcup_{k>j}P_{v_{k}}, and RvjR_{v_{j}} will be updated to {(vj,vl)Uvjl>k}\{(v_{j},v_{l})\in U_{v_{j}}\mid l>k\} once (vj,vk)Uvj(v_{j},v_{k})\in U_{v_{j}} is chosen by IvkI_{v_{k}} in the kk-th iteration for some k>jk>j. Therefore, (16) is satisfied in the beginning of the ii-th iteration of the for-loop. ∎

Theorem 9.

Algorithm OrderedApprox computes an (α+2γ2)(\alpha+2\gamma-2)-approximate independent set II in \mathcal{I} in polynomial time, where γ\gamma is the width of a given graph GG with respect to a given linear order of vertices VV.

Proof.

For any vv in VV, let IvI_{v}, PvP_{v}, and RvR_{v} be the sets obtained by Algorithm OrderedApprox. By Lemma 8, I=vVIvI=\bigcup_{v\in V}I_{v} is an independent set in \mathcal{I}. Let us first consider the size of RviR_{v_{i}}. If the ii-th iteration of the for-loop falls into Case 1, then we have Rvi=R_{v_{i}}=\emptyset. If it falls into Cases 2 or 3, then we have |Rvi|=|Uvi|1γ1|R_{v_{i}}|=|U_{v_{i}}|-1\leq\gamma-1 and IviI_{v_{i}}\not=\emptyset, which implies |Rvi|(γ1)|Ivi||R_{v_{i}}|\leq(\gamma-1)|I_{v_{i}}|. In Case 4, viv_{i} is added to XX and RviR_{v_{i}} is set to Rvi=R_{v_{i}}=\emptyset at the end of the ii-th iteration. Note that RviR_{v_{i}} might be updated to {(vi,vk)Uvkk>j}\{(v_{i},v_{k})\in U_{v_{k}}\mid k>j\} if (vi,vj)Uvi(v_{i},v_{j})\in U_{v_{i}} is chosen by IvjI_{v_{j}} in the jj-th iteration for some j>ij>i. In either case, we have |Rvi|γ1|R_{v_{i}}|\leq\gamma-1, and there exists an edge (vi,vj)(v_{i},v_{j}) in UviIvjU_{v_{i}}\cap I_{v_{j}} if RviR_{v_{i}}\not=\emptyset. For p=1,2,3p=1,2,3, and 44, let VpV_{p} denote the set of vertices viv_{i} such that the ii-th iteration of the for-loop falls into Case pp. Then we have

vV1|Rv|\displaystyle\sum_{v\in V_{1}}|R_{v}| =0,\displaystyle=0,
vV2V3|Rv|\displaystyle\sum_{v\in V_{2}\cup V_{3}}|R_{v}| (γ1)vV2V3|Iv|(γ1)|I|,and\displaystyle\leq(\gamma-1)\sum_{v\in V_{2}\cup V_{3}}|I_{v}|\leq(\gamma-1)|I|,\ \text{and}
vV4|Rv|\displaystyle\sum_{v\in V_{4}}|R_{v}| (γ1)|I|.\displaystyle\leq(\gamma-1)|I|.

Therefore, we obtain the following inequality

|R|=vV|Rv|2(γ1)|I|.\displaystyle|R|=\sum_{v\in V}|R_{v}|\leq 2(\gamma-1)|I|.

By Lemma 7, 𝒫={PvvV}\mathcal{P}=\{P_{v}\mid v\in V\} is a local subpartition of EE with the residual R=vVRvR=\bigcup_{v\in V}R_{v}. Thus, by applying Lemma 2 to this II, we can see that II is an (α+2γ2)(\alpha+2\gamma-2)-approximate independent set in \mathcal{I}. ∎

Since a linear order \prec of vertices VV representing the degeneracy of G=(V,E)G=(V,E) can be computed in linear time, we have the following corollary.

Corollary 10.

Algorithm OrderedApprox computes an (α+2k2)(\alpha+2k-2)-approximate independent set II in \mathcal{I} in polynomial time, if a given graph GG is kk-degenerate.

For a graph G=(V,E)G=(V,E) with the width γ\gamma, the second algorithm, called DecomApprox, first decomposes edge set EE into forests E1,,EγE_{1},\ldots,E_{\gamma}, for each EiE_{i}, applies OrderedApprox to compute an independent set IiI_{i}, and chooses a maximum independent set among them.

\fname@algorithm DecomApprox(G,,)(G,\mathcal{I},\prec)
An independence system (E,)(E,\mathcal{I}) defined on a graph G=(V,E)G=(V,E) and a linear order v1v2vnv_{1}\prec v_{2}\prec\dots\prec v_{n} of vertices VV, where γ\gamma is the width of graph GG with respect to \prec.
An independent set in \mathcal{I}.
for i=1,,γi=1,\dots,\gamma do
     Ei:=E_{i}:=\emptyset.
end for
for each vv in VV do
     for each eie_{i} in Uv={e1,e2,,e|Uv|}U_{v}=\{e_{1},e_{2},\dots,e_{|U_{v}|}\} do
         Ei:=Ei{ei}E_{i}:=E_{i}\cup\{e_{i}\}.
     end for
end for
for i=1,,γi=1,\dots,\gamma do
     Ii=OrderedApprox(G[Ei],[Ei],)I_{i}={\rm{\texttt{OrderedApprox}}}(G[E_{i}],\mathcal{I}[E_{i}],\prec).
end for
Iargmax{|Ii|i=1,,γ}I\in\arg\max\{|I_{i}|\mid i=1,\dots,\gamma\}.
Output II and halt.
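
The decomposition step of DecomApprox can be sketched as follows (our own helper, under the same edge encoding as before): distributing the upward edges of each vertex one per part yields $\gamma$ parts, each containing at most one upward edge per vertex with respect to the same order, and hence each part is a forest.

def forest_decomposition(vertices, edges, order):
    """Split the edge set into parts E_1, ..., E_gamma as in DecomApprox: the i-th
    upward edge of each vertex goes to part i, so every part contains at most one
    upward edge per vertex and is therefore a forest."""
    position = {v: i for i, v in enumerate(order)}

    def upward(v):
        return [e for e in edges if v in e
                and all(position[w] > position[v] for w in e - {v})]

    gamma = max((len(upward(v)) for v in vertices), default=0)   # width of the order
    parts = [set() for _ in range(gamma)]
    for v in vertices:
        for i, e in enumerate(upward(v)):
            parts[i].add(e)
    return parts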

Note that $G_{i}=(V,E_{i})$ is $1$-degenerate for any $i=1,\dots,\gamma$. Thus we have the following theorem.

Theorem 11.

Algorithm DecomApprox computes an αγ\alpha\gamma-approximate independent set II in \mathcal{I} in polynomial time, where γ\gamma is the width of a given graph GG with respect to a given linear order of vertices VV.

Proof.

Since Gi=(V,Ei)G_{i}=(V,E_{i}) is a forest (i.e., GiG_{i} has the width 11), by Theorem 9, IiI_{i} is an α\alpha-approximate independent set of [Ei]\mathcal{I}[E_{i}]. Furthermore, any independent set JJ in \mathcal{I} satisfies that

|J|=i=1γ|JEi|i=1γα|Ii|αγmaxi=1,,γ|Ii|,\displaystyle|J|=\sum_{i=1}^{\gamma}|J\cap E_{i}|\leq\sum_{i=1}^{\gamma}\alpha|I_{i}|\leq\alpha\gamma\max_{i=1,\ldots,\gamma}|I_{i}|,

which completes the proof. ∎

Note that $\alpha\gamma<\alpha+2\gamma-2$ holds if and only if $\alpha<2$ and $\gamma\geq 2$. Therefore, an independent set $I$ provided by Algorithm DecomApprox has a better approximation guarantee than the one provided by Algorithm OrderedApprox when $\alpha<2$ and $\gamma\geq 2$. By applying Theorem 11 to graphs with degeneracy $k$, we have the following corollary.

Corollary 12.

Algorithm DecomApprox computes an αk\alpha k-approximate independent set II in \mathcal{I} in polynomial time if a given graph GG is kk-degenerate.

4 Approximation for bipartite graph

We note that Algorithms OrderedApprox and DecomApprox do not provide an independent set with a small approximation ratio if a given graph does not have small degeneracy. Such examples include complete graphs and complete bipartite graphs.

In this section, we consider an approximation algorithm for the problem (4) where the input graph is bipartite and its degeneracy might not be bounded, and analyze the approximation ratio of the algorithm when all the local independence systems on one side of the vertices are $k$-systems with independence oracles. Namely, let $(E,\mathcal{I})$ be an independence system defined on a bipartite graph $G=(V_{1}\cup V_{2},E)$. We consider the case where $(E_{v},\mathcal{I}_{v})$ is a $k$-system for every $v\in V_{2}$.

Definition 13.

For a positive kk\in\mathbb{R}, an independence system (E,)(E,\mathcal{I}) is called a kk-system if any subset FEF\subseteq E satisfies

k|I||J|for any two maximal independent setsIandJin[F].\displaystyle k|I|\geq|J|\ \text{for any two maximal independent sets}\ I\ \text{and}\ J\ \text{in}\ \mathcal{I}[F]. (17)

Note that any independence system $(E,\mathcal{I})$ is an $|E|$-system and that an independence system is a $1$-system if and only if it is a matroid. By definition, matchoids are independence systems whose local independence systems are all $1$-systems, and hence the families of $b$-matchings are also independence systems whose local independence systems are all $1$-systems. Moreover, the families of timed matchings are independence systems whose local independence systems $(E_{v},\mathcal{I}_{v})$ are all $k$-systems if every time label $L_{e}$ with $e\in E_{v}$ is disjoint from $L_{f}$ for all but at most $k$ edges $f\in E_{v}$.
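
On small explicit families, the $k$-system property (17) can be checked directly by brute force (a sketch, exponential time; it uses the fact that for a downward-closed family the restriction $\mathcal{I}[F]$ consists of the members contained in $F$).

from itertools import combinations

def is_k_system(ground_set, family, k):
    """Check property (17): in every restriction, any two maximal independent sets
    have sizes within a factor of k of each other."""
    members = {frozenset(I) for I in family}

    def maximal_sizes(F):
        inside = [I for I in members if I <= F]              # I[F] for a downward-closed family
        return [len(I) for I in inside if not any(I < J for J in inside)]

    for r in range(len(ground_set) + 1):
        for F in combinations(ground_set, r):
            sizes = maximal_sizes(frozenset(F))
            if sizes and k * min(sizes) < max(sizes):
                return False
    return True

# All subsets of size at most 1 of {a, b, c} form a matroid, hence a 1-system.
ground = ["a", "b", "c"]
fam = [set(), {"a"}, {"b"}, {"c"}]
print(is_k_system(ground, fam, 1))  # True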

\fname@algorithm BipartiteApprox(G,)(G,\mathcal{I})
1:An independence system (E,)(E,\mathcal{I}) defined on a bipartite graph G=(V1V2,E)G=(V_{1}\cup V_{2},E), where V1={v1,,vn1}V_{1}=\{v_{1},\dots,v_{n_{1}}\} and (Ev,v)(E_{v},\mathcal{I}_{v}) is a kk-system with an independence oracle for every vV2v\in V_{2}.
2:An independent set in \mathcal{I}.
3:Pv:=EvP_{v}:=E_{v} for vV1v\in V_{1}.
4:Rv:=R_{v}:=\emptyset and Jv:=J_{v}:=\emptyset for vV2v\in V_{2}.
5:for i=1,,n1(=|V1|)i=1,\ldots,n_{1}(=|V_{1}|) do
6:     Ivi:=𝒜vi(Pvi)I_{v_{i}}:=\mathcal{A}_{v_{i}}(P_{v_{i}}).
7:     for (w,vi)Ivi(w,v_{i})\in I_{v_{i}} do
8:         Jw:=Jw{(w,vi)}J_{w}:=J_{w}\cup\{(w,v_{i})\}
9:         Rw:=Rw{(w,vj)Ewj>iandJw{(w,vj)}w}R_{w}:=R_{w}\cup\{(w,v_{j})\in E_{w}\mid j>i\ \text{and}\ J_{w}\cup\{(w,v_{j})\}\not\in\mathcal{I}_{w}\}.
10:     end for
11:     for j=i+1,,n1j=i+1,\ldots,n_{1} do
12:         Pvj=Pvj(vV2Rv)P_{v_{j}}=P_{v_{j}}\setminus(\bigcup_{v\in V_{2}}R_{v}).
13:     end for
14:end for
15:R=vV2RvR=\bigcup_{v\in V_{2}}R_{v}.
16:Output I=vV1IvI=\bigcup_{v\in V_{1}}I_{v} and halt.

Our algorithm, called BipartiteApprox, can be regarded as a variant of Algorithm OrderedApprox with a linear order $\prec$ such that $w\prec v$ holds for any $v\in V_{1}$ and $w\in V_{2}$. Note that in this order all the vertices $w\in V_{2}$ fall into Case 4 of the for-loop in Line 5 of OrderedApprox, and hence $X=V_{2}$ holds after the iteration of the last vertex in $V_{2}$. Different from OrderedApprox, Algorithm BipartiteApprox updates $R_{w}$ for $w\in V_{2}\ (=X)$ more carefully.

Algorithm BipartiteApprox calls the local oracles $\mathcal{A}_{v}$ in an arbitrary order of $v\in V_{1}$. More precisely, for each $v\in V_{1}$ and $w\in V_{2}$, we initialize $P_{v}=E_{v}$ and $R_{w}=\emptyset$, update $P_{v}$ and $R_{w}$ accordingly, and compute $I_{v}=\mathcal{A}_{v}(P_{v})$. Here $P_{v}$ and $R=\bigcup_{w\in V_{2}}R_{w}$ correspond to those in Lemma 2.
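
A runnable sketch of BipartiteApprox under our own encoding assumptions is given below (edges are pairs $(w,v)$ with $w\in V_{2}$ and $v\in V_{1}$; oracles[v] is the local oracle of $v\in V_{1}$, and independent(w, S) is the independence oracle of the $k$-system of $w\in V_{2}$).

def bipartite_approx(V1, V2, edges, oracles, independent):
    """Process the vertices of V1 in order; for each chosen edge (w, v) record it in
    J_w and block every later edge of w whose addition would make J_w dependent."""
    E1 = {v: {e for e in edges if e[1] == v} for v in V1}   # E_v for v in V1
    E2 = {w: {e for e in edges if e[0] == w} for w in V2}   # E_w for w in V2
    order = list(V1)
    pos = {v: i for i, v in enumerate(order)}
    P = {v: set(E1[v]) for v in V1}
    J = {w: set() for w in V2}
    R = {w: set() for w in V2}
    chosen = set()
    for i, v in enumerate(order):
        I_v = set(oracles[v](P[v]))
        chosen |= I_v
        for e in I_v:
            w = e[0]
            J[w].add(e)
            R[w] |= {f for f in E2[w]
                     if pos[f[1]] > i and not independent(w, J[w] | {f})}
        blocked = set().union(*R.values()) if R else set()
        for u in order[i + 1:]:
            P[u] -= blocked
    return chosen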

Lemma 14.

Algorithm BipartiteApprox satisfies the following five conditions

(i)\displaystyle\hskip 15.0pt\rm{({i})} PvjEvjwithIvj=𝒜vj(Pvj)for all vjV1withji,\displaystyle\ P_{v_{j}}\subseteq E_{v_{j}}\ \text{with}\ I_{v_{j}}=\mathcal{A}_{v_{j}}(P_{v_{j}})\ \text{for all }\ v_{j}\in V_{1}\ \text{with}\ j\leq i,
(ii)\displaystyle\rm{({ii})} RvEvfor all vV2,\displaystyle\ R_{v}\subseteq E_{v}\ \text{for all }\ v\in V_{2},
(iii)\displaystyle\rm{({iii})} {JvEvvV2}is a partition ofjiIvj,\displaystyle\ \{J_{v}\subseteq E_{v}\mid v\in V_{2}\}\ \text{is a partition of}\ \bigcup_{j\leq i}I_{v_{j}},
(iv)\displaystyle\rm{({iv})} Jvis a maximal independent set in[JvRv]for allvV2,and\displaystyle\ J_{v}\ \text{is a maximal independent set in}\ \mathcal{I}[J_{v}\cup R_{v}]\ \text{for all}\ v\in V_{2},\ \text{and}
(v)\displaystyle\rm{({v})} 𝒫={PvvV1}{Pw=wV2}is a local subpartition ofE\displaystyle\ \mathcal{P}=\{P_{v}\mid v\in V_{1}\}\cup\{P_{w}=\emptyset\mid w\in V_{2}\}\ \text{is a local subpartition of}\ E
with residualR=vV2Rv\displaystyle\ \text{with residual}\ R=\bigcup_{v\in V_{2}}R_{v}

at the end of the ii-th iteration of the for-loop in Line 5.

Proof.

Since IvjI_{v_{j}} and PvjP_{v_{j}} are never modified after the jj-th iteration of the for-loop in Line 5, it is enough to show that PviEviP_{v_{i}}\subseteq E_{v_{i}} and Ivi=𝒜vi(Pvi)I_{v_{i}}=\mathcal{A}_{v_{i}}(P_{v_{i}}) at the end of the ii-th iteration to prove Condition (i). We can see that PviP_{v_{i}} is initialized to EviE_{v_{i}}, PviP_{v_{i}} is modified without adding edge in Line 12, and Ivi=𝒜vi(Pvi)I_{v_{i}}=\mathcal{A}_{v_{i}}(P_{v_{i}}) holds in Line 6. This implies the desired property.

For wV2w\in V_{2}, we initialize Jw,Rw=J_{w},R_{w}=\emptyset, and they are respectively updated in Lines 8 and 9 with adding edges in EwE_{w}, which satisfies Jw,RwEwJ_{w},R_{w}\subseteq E_{w}. This implies Condition (ii). Moreover, during the jj-th iteration, any (w,vj)Ivj(w,v_{j})\in I_{v_{j}} is added to JwJ_{w}, which implies Condition (iii). It is not difficult to see that Condition (iv) is satisfied since JwJ_{w} and RwR_{w} are updated in Lines 8 and 9.

Let us finally show Condition (v). Since GG is a bipartite graph, before the first iteration of the for-loop, {Pv=EvvV1}\{P_{v}=E_{v}\mid v\in V_{1}\} is a partition of EE. Since PvjP_{v_{j}} is modified without adding edge in Line 12, {PvvV1}\{P_{v}\mid v\in V_{1}\} is a subpartition of EE. Moreover, RwR_{w} is initialized to Rw=R_{w}=\emptyset for wV2w\in V_{2}, and when some edge e=(w,vj)e=(w,v_{j}) is added to RwR_{w} in Line 9, all such edges ee are deleted from the corresponding PvjP_{v_{j}} in Line 12. Thus Condition (v) is satisfied at the end of the ii-th iteration, which completes the proof of the lemma. ∎

Lemma 15.

Algorithm BipartiteApprox satisfies that j<iIvj\bigcup_{j<i}I_{v_{j}}\in\mathcal{I} in the beginning of the ii-th iteration of the for-loop in Line 5.

Proof.

We prove the statement in Lemma 15 by the induction on ii. If i=1i=1, the statement clearly holds since \emptyset\in\mathcal{I}. Assume that the statement holds when i=li=l and consider the case i=l+1i=l+1.

For $v\in V_{1}$ and $q=l,l+1$, let $P^{(q)}_{v}$ denote the set $P_{v}$ in the beginning of the $q$-th iteration of the for-loop in Line 5. Note that for $v\in V_{1}$, $I_{v}$ is never modified after it is computed in Line 6. Similarly, for $w\in V_{2}$ and $q=l,l+1$, let $J^{(q)}_{w}$ and $R^{(q)}_{w}$ respectively denote the sets $J_{w}$ and $R_{w}$ in the beginning of the $q$-th iteration, and let $J^{(q)}=\bigcup_{w\in V_{2}}J^{(q)}_{w}$.

By Lemma 14 (iii), we have

j<l+1Ivj=Ivlj<lIvj=IvlwV2Jw(l).\displaystyle\bigcup_{j<l+1}I_{v_{j}}=I_{v_{l}}\cup\bigcup_{j<l}I_{v_{j}}=I_{v_{l}}\cup\bigcup_{w\in V_{2}}J^{(l)}_{w}.

Note that Jw(l){e}wJ^{(l)}_{w}\cup\{e\}\in\mathcal{I}_{w} for any wV2w\in V_{2} and eEwRw(l)e\in E_{w}\setminus R^{(l)}_{w}. Since IvlPvl(l)E(wV2Rw(l))I_{v_{l}}\subseteq P^{(l)}_{v_{l}}\subseteq E\setminus(\bigcup_{w\in V_{2}}R^{(l)}_{w}) by Lemma 14 (i) and (v),

IvlwV2Jw(l),\displaystyle I_{v_{l}}\cup\bigcup_{w\in V_{2}}J^{(l)}_{w}\in\mathcal{I},

which completes the inductive argument. ∎

Theorem 16.

Let (E,)(E,\mathcal{I}) be an independence system defined on a bipartite graph G=(V1V2,E)G=(V_{1}\cup V_{2},E) such that (Ev,v)(E_{v},\mathcal{I}_{v}) is a kk-system with an independence oracle for every vV2v\in V_{2}. Then Algorithm BipartiteApprox computes an (α+k)(\alpha+k)-approximate independent set II in \mathcal{I} in polynomial time.

Proof.

For a vertex $v\in V_{1}$, let $I_{v}$ and $P_{v}$ be the sets obtained by Algorithm BipartiteApprox, and for a vertex $w\in V_{2}$, let $J_{w}$ and $R_{w}$ be the sets obtained by the algorithm. By Lemma 15, $I=\bigcup_{v\in V_{1}}I_{v}$ is an independent set in $\mathcal{I}$. By Lemma 14, $\mathcal{P}=\{P_{v}\mid v\in V_{1}\}$ is a local subpartition of $E$ with the residual $R=\bigcup_{w\in V_{2}}R_{w}$. Let us analyze $\max_{L\in\mathcal{I}[R]}|L|/|I|$ to apply Lemma 2. By Lemma 14 (iv), $J_{w}$ is a maximal independent set in $\mathcal{I}[J_{w}\cup R_{w}]$ for $w\in V_{2}$. Since every local independence system $(E_{w},\mathcal{I}_{w})$ is a $k$-system for $w\in V_{2}$,

|L|=wV2|LRw|kwV2|Jw|=k|I|\displaystyle|L|=\sum_{w\in V_{2}}|L\cap R_{w}|\leq k\sum_{w\in V_{2}}|J_{w}|=k|I|

for any L[R]L\in\mathcal{I}[R]. Thus, by applying Lemma 2 to this II, we can conclude that II is an (α+k)(\alpha+k)-approximate independent set in \mathcal{I}. ∎

Before concluding this section, we remark that independence systems on graphs are $2k$-systems if all local independence systems are $k$-systems, as shown in the following lemma. Since the maximization problem for a $k$-system $(E,\mathcal{I})$ is approximable within ratio $k$ by computing a maximal independent set in $\mathcal{I}$, Algorithm BipartiteApprox improves the ratio $2k$ to $\alpha+k$ for independence systems on bipartite graphs $G=(V_{1}\cup V_{2},E)$ whenever all local independence systems are $k$-systems and $\alpha$-approximation local oracles for the vertices in $V_{1}$ are given with $\alpha<k$.

Lemma 17.

An independence system (E,)(E,\mathcal{I}) on graph G=(V,E)G=(V,E) is a 2k2k-system if all local independence systems (Ev,v)(E_{v},\mathcal{I}_{v}) are kk-systems.

Proof.

For a subset FEF\subseteq E, let II be a maximal independent set in [F]\mathcal{I}[F]. For each vertex vVv\in V, let Iv=IEvI_{v}=I\cap E_{v}, and define SvS_{v} and TvT_{v} by

Sv\displaystyle S_{v} ={(u,v)FIIv{(u,v)}v}\displaystyle=\{(u,v)\in F\setminus I\mid I_{v}\cup\{(u,v)\}\in\mathcal{I}_{v}\}
Tv\displaystyle T_{v} ={(u,v)FIIu{(u,v)}u}.\displaystyle=\{(u,v)\in F\setminus I\mid I_{u}\cup\{(u,v)\}\in\mathcal{I}_{u}\}.

Since II is a maximal independent set in [F]\mathcal{I}[F], SvTv=S_{v}\cap T_{v}=\emptyset for all vVv\in V, and 𝒮={SvvV}\mathcal{S}=\{S_{v}\mid v\in V\} and 𝒯={TvvV}\mathcal{T}=\{T_{v}\mid v\in V\} are both subpartitions of FF with vVSv=vVTv\bigcup_{v\in V}S_{v}=\bigcup_{v\in V}T_{v}. Moreover, IvI_{v} is a maximal independent set in [Iv(EvSv)]\mathcal{I}[I_{v}\cup(E_{v}\setminus S_{v})] and [IvTv]\mathcal{I}[I_{v}\cup T_{v}]. Therefore, for any independent set JJ in [F]\mathcal{I}[F], we have

|J|\displaystyle|J| =12vV|JEv|\displaystyle=\frac{1}{2}\sum_{v\in V}|J\cap E_{v}|
=12vV|JSv|+12vV|J(EvSv)|\displaystyle=\frac{1}{2}\sum_{v\in V}|J\cap S_{v}|+\frac{1}{2}\sum_{v\in V}|J\cap(E_{v}\setminus S_{v})|
=12vV|JTv|+12vV|J(EvSv)|\displaystyle=\frac{1}{2}\sum_{v\in V}|J\cap T_{v}|+\frac{1}{2}\sum_{v\in V}|J\cap(E_{v}\setminus S_{v})|
12vVk|Iv|+12vVk|Iv|\displaystyle\leq\frac{1}{2}\sum_{v\in V}k|I_{v}|+\frac{1}{2}\sum_{v\in V}k|I_{v}|
=2k|I|.\displaystyle=2k|I|.

Here the second equality follows from the fact that $\mathcal{S}$ is a subpartition of $F$, the third equality follows from $\bigcup_{v\in V}S_{v}=\bigcup_{v\in V}T_{v}$, and the inequality follows from the fact that all the local independence systems are $k$-systems. ∎

5 Hypergraph generalization of the problem

In this section we consider independence systems defined on hypergraphs and present two algorithms. The first algorithm corresponds to Algorithm OrderedApprox for the problem (4) whose input graph is a forest, and the second algorithm corresponds to Algorithm DecomApprox for the problem (4).

Definition 18.

Let G=(V,E)G=(V,E) be a hypergraph with a vertex set VV and a hyperedge set E2VE\subseteq 2^{V}. For each vertex vv in VV, let (Ev,v)(E_{v},\mathcal{I}_{v}) be an independence system on the set EvE_{v} of hyperedges incident to vv, i.e., Ev={eEve}E_{v}=\{e\in E\mid v\in e\}, and let ={IEIEvvfor allvV}\mathcal{I}=\{I\subseteq E\mid I\cap E_{v}\in\mathcal{I}_{v}\ \text{for all}\ v\in V\}. We say that (E,)(E,\mathcal{I}) is an independence system defined on a hypergraph GG.
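As a concrete reading of Definition 18, the following minimal Python sketch (our own illustration; the instance, names, and local predicates are assumptions, not from the paper) represents each local independence system by a membership predicate on EvE_{v} and tests global independence by checking IEvvI\cap E_{v}\in\mathcal{I}_{v} at every vertex.

# Illustrative sketch of Definition 18 (instance and names are ours):
# a small hypergraph with one local membership predicate per vertex.
V = [0, 1, 2, 3]
E = [frozenset({0, 1, 2}), frozenset({1, 2, 3}), frozenset({0, 3})]

def E_of(v):
    # hyperedges incident to v
    return {e for e in E if v in e}

# example local systems: at most one incident hyperedge per vertex
# (hypergraph matching); any downward-closed predicate would do here
local_indep = {v: (lambda S: len(S) <= 1) for v in V}

def globally_independent(I):
    # I ⊆ E is independent iff I ∩ E_v ∈ I_v for every vertex v
    return all(local_indep[v](I & E_of(v)) for v in V)

print(globally_independent({E[0]}))        # True: a single hyperedge
print(globally_independent({E[0], E[1]}))  # False: vertices 1 and 2 see two edges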

The generalization includes the maximum matching problem for hypergraphs as a special case. Similarly to the graph case, we make use of local oracles 𝒜v\mathcal{A}_{v} for each vv in VV, which satisfy (5) and (6). In order to generalize the idea of Algorithm OrderedApprox to the hypergraph case, we define the upward and downward edge sets with respect to a linear order of vertices VV.

Definition 19.

For a hypergraph G=(V,E)G=(V,E), let \prec be a linear order of vertices VV. For a vertex vv, let Uv={eEvvwfor somewe}U_{v}=\{e\in E_{v}\mid v\prec w\ \text{for some}\ w\in e\} and Dv={eEvwvfor allwe{v}}D_{v}=\{e\in E_{v}\mid w\prec v\ \text{for all}\ w\in e\setminus\{v\}\} be the sets of upward and downward edges incident to vv, respectively. We define the width of GG (with respect to a linear order \prec) as maxvV|Uv|\max_{v\in V}|U_{v}|.

For a hypergraph G=(V,E)G=(V,E) and a vertex set WVW\subseteq V, G[W]=(W,E[W])G[W]=(W,E[W]) denotes the simple subhypergraph of GG induced by WW, where E[W]={eWeE,|eW|2}E[W]=\{e\cap W\mid e\in E,\ |e\cap W|\geq 2\}. Note that for WVW\subseteq V, G[W]G[W] is the subgraph of GG induced by WW if GG is a graph. Similarly to the graph case, a hypergraph GG of width kk satisfies that G[W]G[W] has a vertex of degree at most kk for all WVW\subseteq V. Therefore, a linear order certifying that a hypergraph GG has width kk can be computed in linear time.
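The following minimal Python sketch (our own illustration; the instance is an assumption, not from the paper) computes the upward and downward edge sets and the width of a small hypergraph with respect to a given linear order, as in Definition 19.

# Illustration of Definition 19 (instance and names are ours): upward and
# downward edge sets and the width of a hypergraph w.r.t. a linear order.
order = [0, 1, 2, 3]                      # the linear order of the vertices
pos = {v: i for i, v in enumerate(order)}
E = [frozenset({0, 1, 2}), frozenset({1, 3}), frozenset({2, 3})]

def upward(v):
    # U_v: incident hyperedges containing some vertex after v
    return {e for e in E if v in e and any(pos[w] > pos[v] for w in e)}

def downward(v):
    # D_v: incident hyperedges whose other vertices all precede v
    return {e for e in E if v in e and all(pos[w] < pos[v] for w in e - {v})}

width = max(len(upward(v)) for v in order)
print({v: (len(upward(v)), len(downward(v))) for v in order})
print("width =", width)  # vertex 1 has two upward edges, so the width is 2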

Algorithm OrderedApprox for hypergraphs works similarly to its counterpart for graphs. Different from the graph case, the algorithm updates PvP_{v} and RvR_{v} to take into account that GG is a hypergraph. These differences appear in Lines 4, 14, and 28 in the description of the algorithm.

Algorithm OrderedApprox(G,,)(G,\mathcal{I},\prec)
1:An independence system (E,)(E,\mathcal{I}) defined on a hypergraph G=(V,E)G=(V,E) and a linear order v1v2vnv_{1}\prec v_{2}\prec\ldots\prec v_{n} of vertices VV.
2:An independent set in \mathcal{I}.
3:X:=X:=\emptyset.
4:Pv:=DvP_{v}:=D_{v} for vVv\in V.
5:for i=1,,ni=1,\ldots,n do
6:     Bvi={eUvi(l<iRvl)|e𝒜vi(Pvi{e})and|𝒜vi(Pvi{e})|>|𝒜vi(Pvi)|}.B_{v_{i}}=\left\{e\in U_{v_{i}}\setminus\left(\bigcup_{l<i}R_{v_{l}}\right)\middle|\begin{array}[]{l}e\in\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\})\ \text{and}\\ |\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\})|>|\mathcal{A}_{v_{i}}(P_{v_{i}})|\end{array}\right\}.
7:     if Uvi=U_{v_{i}}=\emptyset then
8:         Ivi:=𝒜vi(Pvi)I_{v_{i}}:=\mathcal{A}_{v_{i}}(P_{v_{i}}).
9:         Rvi:=R_{v_{i}}:=\emptyset.
10:     else if UviU_{v_{i}}\not=\emptyset, PviP_{v_{i}}\not=\emptyset, and Bvi=B_{v_{i}}=\emptyset then
11:         Choose an edge eUvie\in U_{v_{i}} arbitrarily.
12:         if e𝒜vi(Pvi{e})e\not\in\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\}) then
13:              Ivi:=𝒜vi(Pvi{e})I_{v_{i}}:=\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\}).
14:         else (i.e., |𝒜vi(Pvi{e})|=|𝒜vi(Pvi)||\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\})|=|\mathcal{A}_{v_{i}}(P_{v_{i}})|)
15:              Ivi:=𝒜vi(Pvi)I_{v_{i}}:=\mathcal{A}_{v_{i}}(P_{v_{i}})./* |Ivi|=|𝒜vi(Pvi{e})||I_{v_{i}}|=|\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\})| */
16:         end if
17:         Pvi:=Pvi{e}P_{v_{i}}:=P_{v_{i}}\cup\{e\}.
18:         Rvi:=Uvi{e}R_{v_{i}}:=U_{v_{i}}\setminus\{e\}.
19:         for vje{vi}v_{j}\in e\setminus\{v_{i}\} do
20:              Pvj:=Pvj{e}P_{v_{j}}:=P_{v_{j}}\setminus\{e\}.
21:         end for
22:     else if UviU_{v_{i}}\not=\emptyset, PviP_{v_{i}}\not=\emptyset, and BviB_{v_{i}}\not=\emptyset then
23:         Choose an edge bBvib\in B_{v_{i}} arbitrarily.
24:         Ivi:=𝒜vi(Pvi{b}){b}I_{v_{i}}:=\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{b\})\setminus\{b\}. /* IviPviI_{v_{i}}\subseteq P_{v_{i}} and |Ivi|=|𝒜vi(Pvi)||I_{v_{i}}|=|\mathcal{A}_{v_{i}}(P_{v_{i}})| */
25:         Rvi:=Uvi{b}R_{v_{i}}:=U_{v_{i}}\setminus\{b\}.
26:     else (i.e., UviU_{v_{i}}\not=\emptyset and Pvi=P_{v_{i}}=\emptyset)
27:         Ivi:=I_{v_{i}}:=\emptyset.
28:         Rvi:=R_{v_{i}}:=\emptyset. /* RviR_{v_{i}} might be updated in line 34 */
29:         X:=X{vi}X:=X\cup\{v_{i}\}.
30:     end if
31:     for eIvie\in I_{v_{i}} do
32:         for vje{vi}v_{j}\in e\setminus\{v_{i}\} do
33:              if vjXv_{j}\in X then
34:                  Rvj:={eUvj(l<iRvl)viufor someue}R_{v_{j}}:=\{e\in U_{v_{j}}\setminus(\bigcup_{l<i}R_{v_{l}})\mid v_{i}\prec u\ \text{for some}\ u\in e\}.
35:                  X:=X{vj}X:=X\setminus\{v_{j}\}.
36:              end if
37:         end for
38:     end for
39:     for j=i+1,,nj=i+1,\ldots,n do
40:         Pvj:=Pvj(liRvl)P_{v_{j}}:=P_{v_{j}}\setminus(\bigcup_{l\leq i}R_{v_{l}}).
41:     end for
42:end for
43:R=vVRvR=\bigcup_{v\in V}R_{v}.
44:Output I=vVIvI=\bigcup_{v\in V}I_{v} and halt.

For each viVv_{i}\in V, define a set BviUviB_{v_{i}}\subseteq U_{v_{i}} as follows

Bvi={eUvi(l<iRvl)|e𝒜vi(Pvi{e})and|𝒜vi(Pvi{e})|>|𝒜vi(Pvi)|}.\displaystyle B_{v_{i}}=\left\{e\in U_{v_{i}}\setminus\left(\bigcup_{l<i}R_{v_{l}}\right)\middle|\begin{array}[]{l}e\in\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\})\ \text{and}\\ |\mathcal{A}_{v_{i}}(P_{v_{i}}\cup\{e\})|>|\mathcal{A}_{v_{i}}(P_{v_{i}})|\end{array}\right\}.

The algorithm treats the following four cases separately, as in its description.

Case 1: Uvi=\displaystyle\ U_{v_{i}}=\emptyset
Case 2: Uvi,Pvi,andBvi=\displaystyle\ U_{v_{i}}\not=\emptyset,\ P_{v_{i}}\not=\emptyset,\ \text{and}\ B_{v_{i}}=\emptyset
Case 3: Uvi,Pvi,andBvi\displaystyle\ U_{v_{i}}\not=\emptyset,\ P_{v_{i}}\not=\emptyset,\ \text{and}\ B_{v_{i}}\not=\emptyset
Case 4: UviandPvi=\displaystyle\ U_{v_{i}}\not=\emptyset\ \text{and}\ P_{v_{i}}=\emptyset
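To make the case distinction concrete, the following minimal Python sketch (our own; the oracle interface and all names are assumptions, not from the paper) computes BviB_{v_{i}} from local oracle calls and classifies the ii-th iteration into one of the four cases above.

# Illustrative sketch (not from the paper): computing B_{v_i} with a local
# oracle and deciding which of Cases 1-4 the i-th iteration falls into.
# `oracle(F)` is assumed to return an independent subset of F, as a local
# alpha-approximation oracle would.

def compute_B(oracle, U_i, P_i, earlier_residuals):
    # B_{v_i}: upward edges outside earlier residuals whose addition to
    # P_{v_i} appears in the oracle's answer and strictly enlarges it
    base = len(oracle(P_i))
    return {e for e in U_i - earlier_residuals
            if e in oracle(P_i | {e}) and len(oracle(P_i | {e})) > base}

def classify_case(U_i, P_i, B_i):
    # the four cases treated separately by the algorithm
    if not U_i:
        return 1
    if P_i and not B_i:
        return 2
    if P_i and B_i:
        return 3
    return 4  # U_i nonempty and P_i empty

# toy usage with an exact oracle for the local 1-system |S| <= 1
oracle = lambda F: {sorted(F)[0]} if F else set()
U_i, P_i = {(1, 2), (1, 3)}, {(0, 1)}
B_i = compute_B(oracle, U_i, P_i, set())
print(B_i, classify_case(U_i, P_i, B_i))  # set() and Case 2 here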

We show the following two lemmas, which respectively correspond to Lemmas 7 and 8 for the graph case.

Lemma 20.

Algorithm OrderedApprox satisfies the following three conditions

(i)\displaystyle\hskip 15.0pt\rm{({i})} IvjPvjEvjwith|Ivj|=|𝒜vj(Pvj)|for all ji,\displaystyle\ I_{v_{j}}\subseteq P_{v_{j}}\subseteq E_{v_{j}}\ \text{with}\ |I_{v_{j}}|=|\mathcal{A}_{v_{j}}(P_{v_{j}})|\ \text{for all }\ j\leq i,
(ii)\displaystyle\rm{({ii})} RvjUvjfor all ji,and\displaystyle\ R_{v_{j}}\subseteq U_{v_{j}}\ \text{for all }\ j\leq i,\ \text{and}
(iii)\displaystyle\rm{({iii})} 𝒫={PvvV}is a local subpartition ofEwith residualR=jiRvj\displaystyle\ \mathcal{P}=\{P_{v}\mid v\in V\}\ \text{is a local subpartition of}\ E\ \text{with residual}\ R=\bigcup_{j\leq i}R_{v_{j}}

at the end of the ii-th iteration of the for-loop.

Proof.

The proof is analogous to that of Lemma 7 for the graph case. ∎

Different from the graph case, the following lemma requires that ef={v}e\cap f=\{v\} holds for every vVv\in V and for e,fPve,f\in P_{v}, i.e., 𝒜v(Pv)\mathcal{A}_{v}(P_{v})\in\mathcal{I}. Note that if maxeE|e|2\max_{e\in E}|e|\leq 2, the required condition is satisfied.

Lemma 21.

Algorithm OrderedApprox satisfies that in the beginning of the ii-th iteration of the for-loop,

Ij<iIvjfor any independent setI[Pvi]\displaystyle I\cup\bigcup_{j<i}I_{v_{j}}\in\mathcal{I}\ \text{for any independent set}\ I\in\mathcal{I}[P_{v_{i}}] (18)

if ef={vj}e\cap f=\{v_{j}\} holds for jij\leq i and for e,fPvje,f\in P_{v_{j}}.

Proof.

For any j<ij<i, consider the end of the jj-th iteration of the for-loop in Line 5. We have Ivj{e}vjI_{v_{j}}\cup\{e\}\in\mathcal{I}_{v_{j}} for any eUvj(PvjRvj)e\in U_{v_{j}}\setminus(P_{v_{j}}\cup R_{v_{j}}). If the jj-th iteration of the for-loop falls into Case 1, 2, or 3, then Uvj(k>jPvk)=Uvj(PvjRvj)U_{v_{j}}\cap(\bigcup_{k>j}P_{v_{k}})=U_{v_{j}}\setminus(P_{v_{j}}\cup R_{v_{j}}) contains at most one edge. Otherwise we have Uvj(k>jPvk)(k<jRvk)U_{v_{j}}\subseteq(\bigcup_{k>j}P_{v_{k}})\cup(\bigcup_{k<j}R_{v_{k}}), and RvjR_{v_{j}} will be updated to {eUvj(l<kRvl)vkufor someue}\{e\in U_{v_{j}}\setminus(\bigcup_{l<k}R_{v_{l}})\mid v_{k}\prec u\ \text{for some}\ u\in e\} if an edge eUvje\in U_{v_{j}} with vkev_{k}\in e is added to IvkI_{v_{k}} in the kk-th iteration for some k>jk>j. Therefore, (18) is satisfied in the beginning of the ii-th iteration of the for-loop. ∎

Lemma 22.

Algorithm OrderedApprox computes an (α+δ(γ1))(\alpha+\delta(\gamma-1))-approximate independent set II in \mathcal{I} in polynomial time if a given hypergraph GG satisfies that ef={v}e\cap f=\{v\} holds for every vVv\in V and for e,fDve,f\in D_{v} with respect to a given linear order \prec of vertices VV, where γ\gamma is the width of GG with respect to \prec, and δ=maxeE|e|\delta=\max_{e\in E}|e|.

Proof.

For any vv in VV, let IvI_{v}, PvP_{v}, and RvR_{v} be the sets obtained by Algorithm OrderedApprox. By Lemma 21, I=vVIvI=\bigcup_{v\in V}I_{v} is an independent set in \mathcal{I}. Let us first consider the size of RviR_{v_{i}}. If the ii-th iteration of the for-loop falls into Case 1, then we have Rvi=R_{v_{i}}=\emptyset. If it falls into Case 2 or 3, then we have |Rvi|=|Uvi|1γ1|R_{v_{i}}|=|U_{v_{i}}|-1\leq\gamma-1 and IviI_{v_{i}}\not=\emptyset, which implies |Rvi|(γ1)|Ivi||R_{v_{i}}|\leq(\gamma-1)|I_{v_{i}}|. Otherwise, viv_{i} is added to XX and RviR_{v_{i}} is set to Rvi=R_{v_{i}}=\emptyset at the end of the ii-th iteration. Note that RviR_{v_{i}} might be updated to {eUvi(l<jRvl)vjufor someue}\{e\in U_{v_{i}}\setminus(\bigcup_{l<j}R_{v_{l}})\mid v_{j}\prec u\ \text{for some}\ u\in e\} if an edge eUvie\in U_{v_{i}} is added to IvjI_{v_{j}} in the jj-th iteration for some j>ij>i. In either case, we have |Rvi|γ1|R_{v_{i}}|\leq\gamma-1, and there exists an edge ee in UviIvjU_{v_{i}}\cap I_{v_{j}} if RviR_{v_{i}}\not=\emptyset. For p=1,2,3p=1,2,3, and 44, let VpV_{p} denote the set of vertices viv_{i} such that the ii-th iteration of the for-loop falls into Case pp. Since each edge eIe\in I belongs to UvU_{v} for at most δ1\delta-1 vertices vv, at most (δ1)|I|(\delta-1)|I| vertices in V4V_{4} have nonempty residuals. Then we have

vV1|Rv|\displaystyle\sum_{v\in V_{1}}|R_{v}| =0,\displaystyle=0,
vV2V3|Rv|\displaystyle\sum_{v\in V_{2}\cup V_{3}}|R_{v}| (γ1)vV2V3|Iv|(γ1)|I|,and\displaystyle\leq(\gamma-1)\sum_{v\in V_{2}\cup V_{3}}|I_{v}|\leq(\gamma-1)|I|,\ \text{and}
vV4|Rv|\displaystyle\sum_{v\in V_{4}}|R_{v}| (δ1)(γ1)|I|.\displaystyle\leq(\delta-1)(\gamma-1)|I|.

Therefore, we obtain the following inequality

|R|=vV|Rv|δ(γ1)|I|.\displaystyle|R|=\sum_{v\in V}|R_{v}|\leq\delta(\gamma-1)|I|.

By Lemma 20, 𝒫={PvvV}\mathcal{P}=\{P_{v}\mid v\in V\} is a local subpartition of EE with the residual R=vVRvR=\bigcup_{v\in V}R_{v}. Note that Lemma 2 still holds even for the hypergraph case. Thus, we can see that II is an (α+δ(γ1))(\alpha+\delta(\gamma-1))-approximate independent set in \mathcal{I}. ∎
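As a quick sanity check (a remark of ours, not part of the original argument), specializing the bound of Lemma 22 to the graph case δ=2\delta=2 gives

α+δ(γ1)=α+2(γ1)=α+2γ2,\displaystyle\alpha+\delta(\gamma-1)=\alpha+2(\gamma-1)=\alpha+2\gamma-2,

so for graphs the additive loss over α\alpha is 2(γ1)2(\gamma-1), and each unit increase of δ\delta adds another γ1\gamma-1 to the ratio.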

We have the following corollary. Note that a hypergraph of width 11 satisfies the condition of Lemma 22: if two distinct downward edges of a vertex vv shared another vertex uu, then both would be upward edges of uu, contradicting maxvV|Uv|1\max_{v\in V}|U_{v}|\leq 1.

Corollary 23.

Algorithm OrderedApprox computes an α\alpha-approximate independent set II in \mathcal{I} in polynomial time if GG is a hypergraph of width 11.

We can also show that Algorithm DecomApprox can be generalized to the hypergraph case. Namely, for a hypergraph G=(V,E)G=(V,E) of width γ\gamma, Algorithm DecomApprox first decomposes the edge set EE into subsets Q𝒬Q\in\mathcal{Q} such that each G[Q]=(V,Q)G[Q]=(V,Q) is a hypergraph of width 11, then applies OrderedApprox to each QQ to compute an independent set IQI_{Q}, and outputs a maximum one among them; a small illustrative sketch of the decomposition step is given after the pseudocode below.

Algorithm DecomApprox(G,,)(G,\mathcal{I},\prec)
1:An independence system (E,)(E,\mathcal{I}) defined on a hypergraph G=(V,E)G=(V,E) and a linear order v1v2vnv_{1}\prec v_{2}\prec\dots\prec v_{n} of vertices VV, where γ\gamma is the width of hypergraph GG with respect to \prec.
2:An independent set in \mathcal{I}
3:𝒬:=\mathcal{Q}:=\emptyset
4:for i=1,,ni=1,\dots,n do
5:     Uvi:={eUvivivfor allve{vi}}U_{v_{i}}^{\prime}:=\{e\in U_{v_{i}}\mid v_{i}\prec v\ \text{for all}\ v\in e\setminus\{v_{i}\}\}.
6:     for each eUvie\in U_{v_{i}}^{\prime} do
7:         u:=maxvevu:=\max_{v\in e}v.
8:         𝒬e:={Q𝒬QUv=for allve{u}}\mathcal{Q}_{e}:=\{Q^{\prime}\in\mathcal{Q}\mid Q^{\prime}\cap U_{v}=\emptyset\ \text{for all}\ v\in e\setminus\{u\}\}.
9:         if 𝒬e\mathcal{Q}_{e}\not=\emptyset then
10:              Choose Q𝒬eQ\in\mathcal{Q}_{e} arbitrarily.
11:         else
12:              Q:=Q:=\emptyset.
13:              𝒬:=𝒬{Q}\mathcal{Q}:=\mathcal{Q}\cup\{Q\}.
14:         end if
15:         Q:=Q{e}Q:=Q\cup\{e\}.
16:     end for
17:end for
18:for Q𝒬Q\in\mathcal{Q} do
19:     IQ:=OrderedApprox(G[Q],[Q],)I_{Q}:=\texttt{OrderedApprox}(G[Q],\mathcal{I}[Q],\prec).
20:end for
21:Choose Iargmax{|IQ|Q𝒬}I\in\arg\max\{|I_{Q}|\mid Q\in\mathcal{Q}\} arbitrarily.
22:Output II and halt.
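The decomposition step of DecomApprox (the first for-loop above) can be read as a greedy assignment of edges to width-11 classes. The following minimal Python sketch (our own illustration; the instance and all names are assumptions, not from the paper) partitions the edge set of a small hypergraph into classes QQ with |QUv|1|Q\cap U_{v}|\leq 1 for every vertex vv, following the pseudocode above.

# Illustrative sketch (not from the paper) of the decomposition step of
# DecomApprox: greedily partition E into classes Q, each of width 1.

def decompose(order, edges):
    pos = {v: i for i, v in enumerate(order)}
    def U(v):  # upward edges of v
        return {e for e in edges if v in e and any(pos[w] > pos[v] for w in e)}
    classes = []  # the family Q of edge classes
    # process edges grouped by their minimum vertex, following the order
    for e in sorted(edges, key=lambda e: min(pos[v] for v in e)):
        u = max(e, key=lambda v: pos[v])   # the last vertex of e
        blocked = e - {u}                  # e is an upward edge for these
        for Q in classes:
            if all(not (Q & U(v)) for v in blocked):
                Q.add(e)                   # Q stays of width 1
                break
        else:
            classes.append({e})            # open a new class
    return classes

order = [0, 1, 2, 3]
E = [frozenset({0, 1, 2}), frozenset({1, 3}), frozenset({2, 3}), frozenset({0, 3})]
for Q in decompose(order, E):
    print(sorted(map(sorted, Q)))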

Thus we have the following theorem.

Theorem 24.

Algorithm DecomApprox computes an (α+α(δ1)(γ1))(\alpha+\alpha(\delta-1)(\gamma-1))-approximate independent set II in \mathcal{I} in polynomial time, where γ\gamma is the width of a given hypergraph GG with respect to a given linear order of vertices VV, and δ=maxeE|e|\delta=\max_{e\in E}|e|.

Proof.

Let 𝒬\mathcal{Q} and {UvvV}\{U_{v}^{\prime}\mid v\in V\} be families of subsets of EE obtained by the algorithm. Note that 𝒬\mathcal{Q} is a partition of EE since for any vVv\in V and eUve\in U_{v}^{\prime}, eQe\in Q for some Q𝒬Q\in\mathcal{Q}, and {UvvV}\{U_{v}^{\prime}\mid v\in V\} is a partition of EE. Moreover, for any Q𝒬Q\in\mathcal{Q}, G[Q]=(V,Q)G[Q]=(V,Q) is a hypergraph of width 11, i.e., |QUv|1|Q\cap U_{v}|\leq 1 holds for any vVv\in V, since for any eQe\in Q and for any ve{maxueu}v\in e\setminus\{\max_{u\in e}u\}, (Q{e})Uv=(Q\setminus\{e\})\cap U_{v}=\emptyset holds. By Corollary 23, IQI_{Q} is an α\alpha-approximate independent set of [Q]\mathcal{I}[Q] for every Q𝒬Q\in\mathcal{Q}. Furthermore, any independent set JJ in \mathcal{I} satisfies that

|J|=Q𝒬|JQ|Q𝒬α|IQ|α|𝒬|maxQ𝒬|IQ|=α|𝒬||I|.\displaystyle|J|=\sum_{Q\in\mathcal{Q}}|J\cap Q|\leq\sum_{Q\in\mathcal{Q}}\alpha|I_{Q}|\leq\alpha|\mathcal{Q}|\max_{Q\in\mathcal{Q}}|I_{Q}|=\alpha|\mathcal{Q}||I|.

For any eEe\in E, let We={wV{minueu}eUw}W_{e}=\{w\in V\setminus\{\min_{u\in e}u\}\mid e\in U_{w}\}. Since δ=maxeE|e|\delta=\max_{e\in E}|e| and γ=maxvV|Uv|\gamma=\max_{v\in V}|U_{v}| hold, we have |We|δ2|W_{e}|\leq\delta-2 and |Uw{e}|γ1|U_{w}\setminus\{e\}|\leq\gamma-1 for any wWew\in W_{e}. Moreover, a new set is added to 𝒬\mathcal{Q} only when 𝒬e=\mathcal{Q}_{e}=\emptyset, i.e., only when every set in 𝒬\mathcal{Q} already contains an edge of Uw{e}U_{w}\setminus\{e\} for some we{maxueu}w\in e\setminus\{\max_{u\in e}u\}, and each edge belongs to at most one set of 𝒬\mathcal{Q}. Then we have

|𝒬|(δ2)(γ1)+γ=(δ1)(γ1)+1,\displaystyle|\mathcal{Q}|\leq(\delta-2)(\gamma-1)+\gamma=(\delta-1)(\gamma-1)+1,

which completes the proof. ∎

We also have the following corollary.

Corollary 25.

Algorithm DecomApprox computes an (α+α(δ1)(k1))(\alpha+\alpha(\delta-1)(k-1))-approximate independent set II in \mathcal{I} in polynomial time if GG is a hypergraph of width kk, where δ=maxeE|e|\delta=\max_{e\in E}|e|.
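As a concrete instance (our illustration), for a 33-uniform hypergraph of width 22, i.e., δ=3\delta=3 and k=2k=2, Corollary 25 yields the ratio

α+α(δ1)(k1)=α+2α=3α.\displaystyle\alpha+\alpha(\delta-1)(k-1)=\alpha+2\alpha=3\alpha.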

Acknowledgments

I am especially grateful to Kazuhisa Makino for valuable discussions and careful proofreading of the manuscript. I am also grateful to Yusuke Kobayashi, Akitoshi Kawamura, and Satoru Fujishige for their constructive suggestions. This work was supported by the joint project of Kyoto University and Toyota Motor Corporation, titled "Advanced Mathematical Science for Mobility Society".

References

  • [1] D. Welsh, L. M. Society, Matroid Theory, L.M.S. monographs, Academic Press, 1976.
  • [2] W. Cook, W. Cunningham, W. Pulleyblank, A. Schrijver, Combinatorial Optimization, A Wiley-Interscience publication, Wiley, 1997.
  • [3] A. Schrijver, Combinatorial optimization: polyhedra and efficiency, Vol. 24, Springer, 2003.
  • [4] J. G. Oxley, Matroid theory, Vol. 3, Oxford University Press, USA, 2006.
  • [5] M. D. Plummer, L. Lovász, Matching theory, Elsevier Science Ltd., 1986.
  • [6] T. Jenkyns, Matchoids: a generalization of matchings and matroids., Ph.D. thesis, University of Waterloo (1975).
  • [7] L. Lovász, The matroid matching problem, Algebraic methods in graph theory 2 (1978) 495–517.
  • [8] T. Fujito, A 2/32/3-approximation of the matroid matching, in: Algorithms and Computation: 4th International Symposium, ISAAC’93, Hong Kong, December 15-17, 1993. Proceedings, Vol. 762, Springer Science & Business Media, 1993, p. 185.
  • [9] J. Lee, M. Sviridenko, J. Vondrák, Matroid matching: the power of local search, SIAM Journal on Computing 42 (1) (2013) 357–379.
  • [10] A. Avidor, I. Berkovitch, U. Zwick, Improved approximation algorithms for MAX NAE-SAT and MAX SAT, Approximation and Online Algorithms (2005) 27–40.
  • [11] V. Kostakos, Temporal graphs, Physica A: Statistical Mechanics and its Applications 388 (6) (2009) 1007–1023.
  • [12] S. Mandal, A. Gupta, 0-1 timed matching in bipartite temporal graphs, in: Conference on Algorithms and Discrete Applied Mathematics, Springer, 2020, pp. 331–346.
  • [13] C. Chekuri, A. Kumar, Maximum coverage problem with group budget constraints and applications, in: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, Springer, 2004, pp. 72–83.
  • [14] D. R. Lick, A. T. White, kk-degenerate graphs, Canadian Journal of Mathematics 22 (5) (1970) 1082–1096. doi:10.4153/CJM-1970-125-1.
  • [15] E. C. Freuder, A sufficient condition for backtrack-free search, Journal of the ACM (JACM) 29 (1) (1982) 24–32.
  • [16] D. W. Matula, L. L. Beck, Smallest-last ordering and clustering and graph coloring algorithms, Journal of the ACM (JACM) 30 (3) (1983) 417–427.