
Matroid Intersection under Restricted Oracles

Kristóf Bérczi (ELKH-ELTE Egerváry Research Group and MTA-ELTE Momentum Matroid Optimization Research Group, Department of Operations Research, Eötvös Loránd University, Budapest, Hungary. Email: [email protected], [email protected])    Tamás Király (same affiliation)    Yutaro Yamaguchi (Department of Information and Physical Sciences, Graduate School of Information Science and Technology, Osaka University, Osaka, Japan. Email: [email protected])    Yu Yokoi (Principles of Informatics Research Division, National Institute of Informatics, Tokyo, Japan. Email: [email protected])
Abstract

Matroid intersection is one of the most powerful frameworks of matroid theory that generalizes various problems in combinatorial optimization. Edmonds’ fundamental theorem provides a min-max characterization for the unweighted setting, while Frank’s weight-splitting theorem provides one for the weighted case. Several efficient algorithms were developed for these problems, all relying on the usage of one of the conventional oracles for both matroids.

In the present paper, we consider the tractability of the matroid intersection problem under restricted oracles. In particular, we focus on the rank sum, common independence, and maximum rank oracles. We give a strongly polynomial-time algorithm for weighted matroid intersection under the rank sum oracle. In the common independence oracle model, we prove that the unweighted matroid intersection problem is tractable when one of the matroids is a partition matroid, and that even the weighted case is solvable when one of the matroids is an elementary split matroid. Finally, we show that the common independence and maximum rank oracles together are strong enough to realize the steps of our algorithm under the rank sum oracle.


Keywords: Matroid intersection, Tractability, Rank sum oracle, Common independence oracle, Maximum rank oracle

1 Introduction

A cornerstone of matroid theory is the efficient solvability of the matroid intersection problem introduced by Edmonds [9]. Efficient algorithms for weighted matroid intersection were developed subsequently by Edmonds [10], by Lawler [21, 23], and by Iri and Tomizawa [15]. The min-max duality theorem of Edmonds [9] for the unweighted matroid intersection problem was generalized by Frank [11] to the weighted case. These results not only provide a well-established framework that includes various tractable combinatorial optimization problems such as bipartite matching and arborescence packing, but in certain cases they are also unavoidable in solving natural optimization problems that seem to be unrelated to matroids. A beautiful example is the problem of computing a cheapest rooted $k$-connected spanning subgraph of a digraph [12]. This is a pure graph optimization problem, and yet the only known polynomial-time algorithm is based on the recognition that minimal rooted $k$-connected subgraphs of a digraph form the common bases of two matroids, and therefore a weighted matroid intersection algorithm can be applied.

In order to design matroid algorithms and to analyze their complexity, it should be clarified how matroids are given. As the number of bases can be exponential in the size of the ground set, defining a matroid in an explicit form is inefficient. Rather than giving a matroid as an explicit input, it is usually assumed that one of the standard oracles is available, and the complexity of the algorithm is measured by the number of oracle calls and other elementary steps. Another way to define a matroid is to give an explicit linear representation, but this restricts the scope of the algorithm to linear matroids for which such an explicit representation is known.

For both the unweighted and weighted problems, a variety of efficient algorithms have been developed; see e.g. [9, 10, 21, 1, 22, 15, 8, 11, 5, 14]. A common feature of these algorithms, and also of all previous studies on matroid intersection, is that they assume the availability of one of the standard oracles for both matroids; e.g., one can ask for the rank of a subset in each of them. Our main contribution is to show that this assumption is not necessary for the tractability of matroid intersection, not even in the weighted setting.

One motivation for studying restricted oracles comes from polymatroid matching, a framework introduced by Lawler [23] as a common generalization of matroid intersection and non-bipartite matching. In [26], Edmonds’ theorem was deduced from polymatroid matching using a sophisticated argument. The main point is that when the matroid intersection problem is formulated as a polymatroid matching problem, only the rank sum function of the two matroids is used rather than the two rank functions. Although the polymatroid matching problem cannot be solved in polynomial time in general [25, 16], the hardness was shown through special instances that seem to be far from matroid intersection. This suggests that matroid intersection might still be tractable when only the sum of the rank functions is available.

We mention that another natural oracle to consider is the minimum rank function, which answers the smaller value among the two ranks of a subset. It follows from the polyhedral results of Edmonds [9] that this oracle suffices to describe the convex hull of common independent sets. In an unpublished manuscript, Bárász [2] gave a polynomial-time algorithm for unweighted matroid intersection under the minimum rank oracle. We will present additional results about this oracle in a separate paper [3].

Our results

Our goal is to settle the tractability of the weighted matroid intersection problem under restricted oracles. In particular, we will focus on three different oracles: rank sum, common independence, and maximum rank oracles.

The study of the rank sum oracle is motivated by the connection to polymatroid matching discussed above. The difficulty in designing an efficient algorithm is that the usual augmenting path approach cannot be applied directly, since the exchangeability graphs are not determined by the rank sum oracle. Still, we are able to give a strongly polynomial-time algorithm for the weighted matroid intersection problem by emulating the Bellman–Ford algorithm without explicitly knowing the underlying graph.

Theorem 1.1.

There exists a strongly polynomial-time algorithm for the weighted matroid intersection problem in the rank sum oracle model.

It is not difficult to see that a common independence oracle can be constructed with the help of a rank sum oracle. Therefore, any algorithm that is based on the usage of a common independence oracle immediately translates into an algorithm that uses a rank sum oracle. Nevertheless, the reverse implication does not hold, hence the common independence oracle is strictly weaker. We show that unweighted matroid intersection remains tractable under the common independence oracle model when one of the matroids is a partition matroid.

Theorem 1.2.

There exists a strongly polynomial-time algorithm for the unweighted matroid intersection problem in the common independence oracle model when one of the matroids is a partition matroid with all-one upper bound on the partition classes.

Although the complexity of the problem in general remains an intriguing open question even for the unweighted setting, this seemingly simple case already includes matchings in bipartite graphs and arborescences.

Recently, Joswig and Schröter [17] introduced the notion of split matroids, a class with distinguished structural properties that generalizes paving matroids. Bérczi, Király, Schwarcz, Yamaguchi and Yokoi [4] showed that every split matroid can be obtained as the direct sum of a so-called elementary split matroid and uniform matroids, and provided a hypergraph characterization of elementary split matroids. By relying on this characterization, we show that even weighted matroid intersection is tractable in the common independence oracle model when one of the matroids is from this class.

Theorem 1.3.

There exists a strongly polynomial-time algorithm for the weighted matroid intersection problem in the common independence oracle model when one of the matroids is an elementary split matroid.

We will see that the maximum rank oracle does not carry too much information on its own. However, when combined with the common independence oracle, they are strong enough to mimic our algorithm for the rank sum case.

Theorem 1.4.

There exists a strongly polynomial-time algorithm for the weighted matroid intersection problem when both the common independence and the maximum rank oracles are available.

Organization

The rest of the paper is organized as follows. Basic definitions and notation are introduced in Section 2, together with some fundamental results on matroid intersection. Section 3 describes the relation between different oracles. We present our strongly polynomial algorithm under the rank sum oracle in Section 4. The common independence oracle case when one of the matroids is a partition matroid or an elementary split matroid, as well as the combination of the common independence and maximum rank oracles, is discussed in Section 5.

2 Preliminaries

For the basics on matroids and the matroid intersection problem, we refer the reader to [27, 29]. Throughout the paper, for $i=1,2$, let $\mathbf{M}_i=(E,\mathcal{I}_i)$ be loopless matroids on the same finite ground set $E$ of size $n$ (the assumption that the matroids are loopless is not restrictive, as loops can be easily detected by any of the oracles considered), whose independent set families, rank functions, and closure operators are denoted by $\mathcal{I}_i$, $r_i$, and $\mathrm{cl}_i$, respectively; that is, $r_i(X)=\max\{\,|Y| \mid Y\subseteq X,\ Y\in\mathcal{I}_i\,\}$ and $\mathrm{cl}_i(X)=\{\,e\in E \mid r_i(X\cup\{e\})=r_i(X)\,\}$ for each $X\subseteq E$. For two sets $X,Y\subseteq E$, we denote their symmetric difference by $X\triangle Y=(X\setminus Y)\cup(Y\setminus X)$. The $k$-truncation of a matroid $\mathbf{M}=(S,\mathcal{I})$ is the matroid $(\mathbf{M})_k=(S,\mathcal{I}^{\leq k})$ with $\mathcal{I}^{\leq k}=\{\,X\in\mathcal{I} \mid |X|\leq k\,\}$. For $I\in\mathcal{I}_i$ and $x\in\mathrm{cl}_i(I)\setminus I$, the fundamental circuit of $x$ with respect to $I$ in $\mathbf{M}_i$ is denoted by $C_i(I,x)=\{\,y\in I \mid I+x-y\in\mathcal{I}_i\,\}$.
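For illustration, the following is a minimal Python sketch (with names of our own choosing) of how the rank, the closure, and the set $C_i(I,x)$ can be evaluated when a standard independence oracle for a single matroid is available; this is the classical setting, not the restricted oracle models studied below.

from typing import Callable, FrozenSet, Iterable, Set

IndepOracle = Callable[[FrozenSet], bool]   # X -> True iff X is independent

def rank(indep: IndepOracle, X: Iterable) -> int:
    # r(X): grow an independent subset of X greedily.
    Y: Set = set()
    for e in X:
        if indep(frozenset(Y | {e})):
            Y.add(e)
    return len(Y)

def closure(indep: IndepOracle, ground: Set, X: Set) -> Set:
    # cl(X) = { e in E : r(X + e) = r(X) }.
    rX = rank(indep, X)
    return {e for e in ground if rank(indep, X | {e}) == rX}

def exchange_set(indep: IndepOracle, I: Set, x) -> Set:
    # C(I, x) = { y in I : I + x - y is independent }, as defined above.
    return {y for y in I if indep(frozenset((I - {y}) | {x}))}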

We consider four oracles for matroid intersection. Given a set $X\subseteq E$ as input, a rank sum oracle (Sum) answers the sum ${r_{\mathrm{sum}}}(X)\coloneqq r_1(X)+r_2(X)$ of the ranks of $X$, a minimum rank oracle (Min) answers the minimum ${r_{\mathrm{min}}}(X)\coloneqq\min\{r_1(X),r_2(X)\}$ of the ranks of $X$, a maximum rank oracle (Max) answers the maximum ${r_{\mathrm{max}}}(X)\coloneqq\max\{r_1(X),r_2(X)\}$ of the ranks of $X$, and a common independence oracle (CI) answers “Yes” if $X\in\mathcal{I}_1\cap\mathcal{I}_2$ and “No” otherwise.
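As a concrete illustration, the following sketch (our own naming, assuming the two rank functions are held explicitly, e.g., when building test instances) shows how each of the four oracles answers a query; under the restricted models, only the combined answer is revealed to the algorithm.

from typing import Callable, FrozenSet

RankFn = Callable[[FrozenSet], int]

def make_oracles(r1: RankFn, r2: RankFn):
    # Sum, Min, Max answer a number; CI answers yes/no.
    def rank_sum(X: FrozenSet) -> int:
        return r1(X) + r2(X)
    def rank_min(X: FrozenSet) -> int:
        return min(r1(X), r2(X))
    def rank_max(X: FrozenSet) -> int:
        return max(r1(X), r2(X))
    def common_independent(X: FrozenSet) -> bool:
        return r1(X) == len(X) and r2(X) == len(X)
    return rank_sum, rank_min, rank_max, common_independent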

Let us first overview some basic results on unweighted matroid intersection. In [9], Edmonds gave the following characterization for the maximum cardinality of a common independent set of two matroids.

Theorem 2.1 (Edmonds [9]).

Given two matroids $\mathbf{M}_1=(E,\mathcal{I}_1)$ and $\mathbf{M}_2=(E,\mathcal{I}_2)$ on a common ground set $E$, the maximum cardinality of a common independent set of $\mathbf{M}_1$ and $\mathbf{M}_2$ is equal to

$$\min\{\, r_1(Z)+r_2(E\setminus Z) \mid Z\subseteq E \,\}.$$

The notion of exchangeability graphs plays a central role in any matroid intersection algorithm.

Definition 2.2 (Exchangeability Graphs).

Assume that $I\in\mathcal{I}_1\cap\mathcal{I}_2$. The exchangeability graph corresponding to $I$ is a bipartite digraph $D[I]=(E\setminus I, I; A[I])$ defined as follows. Set

$$S_I \coloneqq \{\, s\in E\setminus I \mid I+s\in\mathcal{I}_1 \,\}, \qquad T_I \coloneqq \{\, t\in E\setminus I \mid I+t\in\mathcal{I}_2 \,\},$$

where elements in $S_I$ and in $T_I$ are called sources and sinks, respectively. We then define the set $A[I]\coloneqq A_1[I]\cup A_2[I]$ of exchangeability arcs, where

$$\begin{aligned}
A_1[I] &\coloneqq \{\,(y,x) \mid x\in E\setminus I,\ y\in I,\ I+x-y\in\mathcal{I}_1\,\}\\
&= \{\,(y,s) \mid s\in S_I,\ y\in I\,\}\cup\{\,(y,x) \mid x\in E\setminus(I\cup S_I),\ y\in C_1(I,x)\,\},\\
A_2[I] &\coloneqq \{\,(x,y) \mid x\in E\setminus I,\ y\in I,\ I+x-y\in\mathcal{I}_2\,\}\\
&= \{\,(t,y) \mid t\in T_I,\ y\in I\,\}\cup\{\,(x,y) \mid x\in E\setminus(I\cup T_I),\ y\in C_2(I,x)\,\}.
\end{aligned}$$

Note that $S_I$ and $A_1[I]$ depend only on $\mathcal{I}_1$, and $T_I$ and $A_2[I]$ depend only on $\mathcal{I}_2$.
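When full independence oracles for both matroids are available (the classical setting, not the restricted models studied here), the exchangeability graph can be built directly from the definition. A minimal sketch, with our own naming:

from typing import Callable, FrozenSet, Set, Tuple

IndepOracle = Callable[[FrozenSet], bool]

def exchangeability_graph(E: Set, indep1: IndepOracle, indep2: IndepOracle,
                          I: Set) -> Tuple[Set, Set, Set]:
    # Returns (S_I, T_I, A[I]) for a common independent set I.
    out = E - I
    S = {s for s in out if indep1(frozenset(I | {s}))}
    T = {t for t in out if indep2(frozenset(I | {t}))}
    A1 = {(y, x) for x in out for y in I if indep1(frozenset((I - {y}) | {x}))}
    A2 = {(x, y) for x in out for y in I if indep2(frozenset((I - {y}) | {x}))}
    return S, T, A1 | A2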

Brualdi [6] observed that the set Ai[I]A_{i}[I] satisfies the following property for i=1,2i=1,2.

Lemma 2.3.

Let $I\in\mathcal{I}_i$ and let $Z\subseteq E$ satisfy $|I\triangle Z|=|I|$ and $I\triangle Z\in\mathcal{I}_i$. Then $A_i[I]$ contains a perfect matching on $Z$ (i.e., a set of vertex-disjoint arcs whose tails and heads constitute $Z$).

Krogdahl [18, 19, 20] proved a partial converse to the above lemma.

Lemma 2.4 (Unique Perfect Matching Lemma).

Let $I\in\mathcal{I}_i$ and let $Z\subseteq E$ satisfy $|I\triangle Z|=|I|$. If $A_i[I]$ contains a unique perfect matching on $Z$, then $I\triangle Z\in\mathcal{I}_i$.

Finally, let us recall that a standard algorithm for finding a maximum-cardinality common independent set is driven by the following subroutine, Algorithm 1 (see [29, §41.2]).

For any digraph $D=(E,A)$, a path in $D$ is a sequence $P=e_1e_2\cdots e_\ell$ of distinct vertices such that $(e_i,e_{i+1})\in A$ for each $i=1,2,\dots,\ell-1$; we call $P$ an $e_1$–$e_\ell$ path or an $X$–$Y$ path for sets $X\ni e_1$ and $Y\ni e_\ell$ to emphasize the end vertices, and define $\ell$ as its length. A cycle in $D$ is a sequence $e_1e_2\cdots e_\ell e_1$ such that $e_1e_2\cdots e_\ell$ is a path and $(e_\ell,e_1)\in A$. We often identify a path or a cycle with its vertex set $\{e_1,e_2,\dots,e_\ell\}$.

Input: A finite set $E$, oracle access to $\mathcal{I}_1$ and $\mathcal{I}_2$, and a common independent set $I\in\mathcal{I}_1\cap\mathcal{I}_2$.
Output: A common independent set $J\in\mathcal{I}_1\cap\mathcal{I}_2$ with $|J|=|I|+1$ if one exists, or a subset $Z\subseteq E$ with $r_1(Z)+r_2(E\setminus Z)=|I|$ otherwise.

1. Construct the exchangeability graph $D[I]$ with source set $S_I$ and sink set $T_I$.
2. If some $t\in T_I$ is reachable from some $s\in S_I$, then find a shortest $S_I$–$T_I$ path $P$ in $D[I]$, and return $J=I\triangle P$. Otherwise, return $Z=\{\,e\in E \mid e\ \text{can reach some}\ t\in T_I\ \text{in}\ D[I]\,\}$.

Algorithm 1 (Augment$[E,\mathcal{I}_1,\mathcal{I}_2,I]$).
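A minimal sketch of Augment, assuming the exchangeability graph has already been built (e.g., as in the sketch above) and is passed in as an arc set; the names are ours.

from collections import deque
from typing import Optional, Set, Tuple

def augment(E: Set, I: Set, S: Set, T: Set,
            arcs: Set) -> Tuple[Optional[Set], Optional[Set]]:
    # Multi-source BFS from S_I; returns (J, None) with |J| = |I| + 1 if an
    # S_I - T_I path exists, and (None, Z) with r1(Z) + r2(E \ Z) = |I| otherwise.
    succ = {e: [] for e in E}
    for (u, v) in arcs:
        succ[u].append(v)
    parent = {s: None for s in S}
    queue = deque(S)
    while queue:
        u = queue.popleft()
        if u in T:                          # shortest S_I - T_I path found
            P = []
            while u is not None:
                P.append(u)
                u = parent[u]
            return I ^ set(P), None
        for v in succ[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    # No sink reachable: Z = elements that can reach T_I (computed on the reverse graph).
    pred = {e: [] for e in E}
    for (u, v) in arcs:
        pred[v].append(u)
    Z, stack = set(T), list(T)
    while stack:
        v = stack.pop()
        for u in pred[v]:
            if u not in Z:
                Z.add(u)
                stack.append(u)
    return None, Z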

Now we turn to the weighted setting. For a weight function $w\colon E\to\mathbb{R}$ and a subset $X\subseteq E$, define $w(X)\coloneqq\sum_{e\in X}w(e)$. For a family $\mathcal{F}\subseteq 2^E$, a subset $X\subseteq E$ is $w$-maximal in $\mathcal{F}$ if $X\in\arg\max\{\,w(Y)\mid Y\in\mathcal{F}\,\}$. We define $\mathcal{I}_i^k\coloneqq\{\,X\in\mathcal{I}_i\mid |X|=k\,\}$ for $i=1,2$ and $k=0,1,\dots,n$.

One approach to solve the weighted matroid intersection problem is via augmentation along cheapest paths in the exchangeability graph, as shown in Algorithm 2 (see [29, §41.3]), where the cost function $c\colon E\to\mathbb{R}$ is defined on the vertex set as follows:

$$c(e)\coloneqq\begin{cases} w(e) & \text{if } e\in I,\\ -w(e) & \text{if } e\in E\setminus I.\end{cases} \qquad (2.1)$$

For each path (or cycle) $P$ in $D[I]$, we define the cost of $P$ as $c(P)\coloneqq\sum_{e\in P}c(e)$.

Input: A finite set $E$, a weight function $w\colon E\to\mathbb{R}$, oracle access to $\mathcal{I}_1$ and $\mathcal{I}_2$, and a $w$-maximal set $I\in\mathcal{I}_1^k\cap\mathcal{I}_2^k$ for some $k\in\{0,1,\dots,n-1\}$.
Output: A $w$-maximal set $J\in\mathcal{I}_1^{k+1}\cap\mathcal{I}_2^{k+1}$ if one exists, or a subset $Z\subseteq E$ with $r_1(Z)+r_2(E\setminus Z)=|I|$ otherwise.

1. Construct the exchangeability graph $D[I]$ with source set $S_I$ and sink set $T_I$. In addition, define the cost function $c\colon E\to\mathbb{R}$ by (2.1).
2. If some $t\in T_I$ is reachable from some $s\in S_I$, then find a shortest cheapest $S_I$–$T_I$ path $P$ in $D[I]$ (i.e., the cost $c(P)$ is minimum, and subject to this, the length of $P$ is minimum), and return $J=I\triangle P$. Otherwise, return $Z=\{\,e\in E \mid e\ \text{can reach some}\ t\in T_I\ \text{in}\ D[I]\,\}$.

Algorithm 2 (CheapestPathAugment$[E,w,\mathcal{I}_1,\mathcal{I}_2,I]$).
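For Step 2, a shortest cheapest path can be found, for instance, by Bellman–Ford-type relaxation with lexicographic labels (cost first, then length). The following is a minimal sketch with our own naming, on a vertex-cost digraph given explicitly; it relies on the absence of negative-cost cycles (Lemma 2.5 below).

from typing import Dict, Hashable, List, Optional, Set

def shortest_cheapest_path(V: Set, arcs: Set, c: Dict, S: Set, T: Set) -> Optional[List]:
    # Label every vertex with (cost, length); costs live on the vertices.
    INF = (float("inf"), float("inf"))
    label = {v: INF for v in V}
    pred: Dict[Hashable, Optional[Hashable]] = {}
    for s in S:
        label[s] = (c[s], 1)
        pred[s] = None
    for _ in range(len(V) - 1):                # enough rounds when no negative cycles exist
        for (u, v) in arcs:
            cand = (label[u][0] + c[v], label[u][1] + 1)
            if cand < label[v]:                # lexicographic: cost, then length
                label[v] = cand
                pred[v] = u
    best = min(T, key=lambda t: label[t], default=None)
    if best is None or label[best] == INF:
        return None
    path, v = [], best
    while v is not None:
        path.append(v)
        v = pred[v]
    return list(reversed(path))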

The next lemma characterizes $w$-maximal common independent sets in $\mathcal{I}_1^k\cap\mathcal{I}_2^k$.

Lemma 2.5 (cf. [29, Theorem 41.5]).

A common independent set $I\in\mathcal{I}_1^k\cap\mathcal{I}_2^k$ is $w$-maximal if and only if $D[I]$ contains no negative-cost cycle with respect to the cost function $c$ defined in (2.1).

3 Polynomial Reducibility of Oracles

The aim of this section is to clarify the relation between the oracles for matroid intersection, which in turn clarifies the relation between the corresponding variants of the problem under the restricted oracles. For a single matroid, although there are many different types of oracles that are often used, many of these conventional oracles have the same computational power. More precisely, we say that an oracle $\mathcal{O}_1$ is polynomially reducible to another oracle $\mathcal{O}_2$ if $\mathcal{O}_1$ can be simulated using a polynomial number of calls to $\mathcal{O}_2$, measured in terms of the size of the ground set. Two oracles are polynomially equivalent if they are polynomially reducible to each other. In this sense, the rank, independence, strong basis, circuit-finding, spanning, and port oracles are polynomially equivalent [28, 13, 7].

As defined in Section 2, we consider four types of oracles for matroid intersection: Sum, Min, Max, and CI. As it turns out, Max is not very useful on its own, but it provides a powerful tool when combined with any of the other three oracles. We use a ‘+’ sign to denote access to two of the oracles; e.g., Min+Max means that for a set $X\subseteq E$ the oracle tells both ${r_{\mathrm{min}}}(X)$ and ${r_{\mathrm{max}}}(X)$.

In what follows, we discuss the reducibility of the oracles one by one. In order to keep the presentation clear, some of the ideas appear multiple times. For an overview of the results, see Figure 1. Observe that, by ${r_{\mathrm{sum}}}(X)={r_{\mathrm{min}}}(X)+{r_{\mathrm{max}}}(X)$, any combination of at least two of Min, Max, and Sum is clearly equivalent. This immediately implies that each of Min, Sum, and Max is reducible to each of Min+Max, Min+Sum, and Sum+Max.

Figure 1: Hierarchy of oracles; directed arcs represent polynomial reducibility. Grey boxes denote oracles for which strongly polynomial-time algorithms are given in the present paper.
Theorem 3.1.

CI is not polynomially reducible to Max, but it is polynomially reducible to Min and Sum.

Proof.

If $\mathbf{M}_1$ is the free matroid, then Max always answers $|X|$, independently of the choice of $\mathbf{M}_2$. Thus it is impossible to decide whether $X\subseteq E$ is a common independent set relying solely on Max.

To see the second half, observe that for a set $X\subseteq E$, CI answers “Yes” if and only if $X$ is a common independent set of the two matroids, that is, $r_1(X)=r_2(X)=|X|$. By the subcardinality of the rank functions, this is equivalent to ${r_{\mathrm{min}}}(X)=|X|$ and to ${r_{\mathrm{sum}}}(X)=2|X|$. As these conditions can be checked by Min and Sum, respectively, the theorem follows. ∎
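For illustration, the reductions in the second half of the proof amount to a single oracle call each; a minimal sketch with our own naming:

from typing import Callable, FrozenSet

def ci_from_sum(rank_sum: Callable[[FrozenSet], int], X: FrozenSet) -> bool:
    # X is a common independent set iff r1(X) + r2(X) = 2|X|.
    return rank_sum(X) == 2 * len(X)

def ci_from_min(rank_min: Callable[[FrozenSet], int], X: FrozenSet) -> bool:
    # X is a common independent set iff min(r1(X), r2(X)) = |X|.
    return rank_min(X) == len(X)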

Theorem 3.2.

Min is not polynomially reducible to any of Sum, CI, Max, or CI+Max.

Proof.

We define two instances of the matroid intersection problem on the same ground set $E=\{a,b,c,d\}$ as follows. For $i=1,2$, let $\mathbf{M}_i$ be the graphic matroid of the graph $G_i$ in Figure 2(a), and let $\mathbf{M}'_i$ be the graphic matroid of the graph $G'_i$ in Figure 2(b). Consider the maximum-cardinality common independent set problem for $\mathbf{M}_1$ and $\mathbf{M}_2$, and for $\mathbf{M}'_1$ and $\mathbf{M}'_2$. For any subset $X$ of $E$, both Sum and CI give the same answer in the two instances, thus it is not possible to distinguish them from each other using either of these oracles. However, ${r_{\mathrm{min}}}(E)$ is $2$ in one of them and $3$ in the other, showing that Min cannot be reduced to Sum or CI.

Now take the $3$-truncation of these graphic matroids, and define $\mathbf{N}_1=(\mathbf{M}_1)_3$, $\mathbf{N}_2=(\mathbf{M}_2)_3$, $\mathbf{N}'_1=(\mathbf{M}'_1)_3$, and $\mathbf{N}'_2=(\mathbf{M}'_2)_3$. Consider the maximum-cardinality common independent set problem for $\mathbf{N}_1$ and $\mathbf{N}_2$, and for $\mathbf{N}'_1$ and $\mathbf{N}'_2$. By this slight change in the definitions, both CI and Max give the same answer in the two instances for any subset $X\subseteq E$, thus it is not possible to distinguish them from each other using a combination of these two oracles. However, ${r_{\mathrm{min}}}(E)$ is $2$ in one of them and $3$ in the other, showing that Min cannot be reduced to CI+Max.

The case when $\mathbf{M}_1$ is the free matroid shows again that ${r_{\mathrm{min}}}(X)$ cannot be determined relying solely on Max. ∎

(a) The graphs $G_1$ and $G_2$ defining $\mathbf{M}_1$ and $\mathbf{M}_2$.
(b) The graphs $G'_1$ and $G'_2$ defining $\mathbf{M}'_1$ and $\mathbf{M}'_2$.
Figure 2: Illustration of Theorems 3.2, 3.3, and 3.4.
Theorem 3.3.

Sum is not polynomially reducible to any of Min, CI, Max, or CI+Max.

Proof.

If $\mathbf{M}_1$ is a uniform matroid of rank $1$, then ${r_{\mathrm{min}}}(X)=0$ if $X=\emptyset$ and $1$ otherwise, while CI answers “Yes” if $|X|\leq 1$ and “No” otherwise. These answers are independent of the choice of $\mathbf{M}_2$, therefore we cannot determine ${r_{\mathrm{sum}}}(X)$ with their help.

The case when $\mathbf{M}_1$ is the free matroid shows again that ${r_{\mathrm{sum}}}(X)$ cannot be determined relying solely on Max.

Consider the same two instances of the matroid intersection problem defined by the $3$-truncations of the graphic matroids of Figure 2 as in the proof of Theorem 3.2. Recall that both CI and Max give the same answer in the two instances for any subset $X\subseteq E$, thus it is not possible to distinguish them from each other using a combination of these two oracles. However, ${r_{\mathrm{sum}}}(E)$ is $5$ in one of them and $6$ in the other, showing that Sum cannot be reduced to CI+Max. ∎

Theorem 3.4.

Max is not polynomially reducible to any of Min, Sum, or CI.

Proof.

The case when $\mathbf{M}_1$ is a uniform matroid of rank $1$ shows again that ${r_{\mathrm{max}}}(X)$ cannot be determined relying solely on Min or CI.

Consider the same two instances of the matroid intersection problem defined by Figure 2 as in the proof of Theorem 3.2. Recall that for any subset $X$ of $E$, Sum gives the same answer in both instances, thus it is not possible to distinguish them from each other using Sum. However, ${r_{\mathrm{max}}}(E)$ is $4$ in one of them and $3$ in the other, showing that Max cannot be reduced to Sum. ∎

4 Matroid Intersection under Rank Sum Oracle

In Algorithm 2, we assume that we are given the independence oracles of the matroids $\mathbf{M}_1$ and $\mathbf{M}_2$, which are polynomially equivalent to oracle access to the rank functions $r_1$ and $r_2$, respectively. In this section, we consider the solvability of the weighted matroid intersection problem when only the rank sum function ${r_{\mathrm{sum}}}\colon 2^E\to\mathbb{Z}_{\geq 0}$ is available, that is, for any set $X\subseteq E$ the oracle answers the value ${r_{\mathrm{sum}}}(X)\coloneqq r_1(X)+r_2(X)$. Note that a subset $I\subseteq E$ belongs to $\mathcal{I}_1\cap\mathcal{I}_2$ if and only if ${r_{\mathrm{sum}}}(I)=2|I|$. However, when ${r_{\mathrm{sum}}}(I)<2|I|$, we cannot decide whether $I\in\mathcal{I}_i$ or not for each $i=1,2$.

The matroid intersection problem under this oracle model is exactly a special case of the polymatroid matching problem, which is equivalent to the so-called matroid matching (or parity) problem [26, §11.1]. While a min–max duality theorem was given by Lovász [24] for the case when the matroids in question admit linear representations (for matroid intersection considered as a special case, the two matroids are necessarily representable over the same field), such a good characterization is not known in general, even in the matroid intersection case.

In what follows, we provide an algorithm for the weighted matroid intersection problem with the rank sum oracle. We do so by emulating Algorithm 2, i.e., CheapestPathAugment$[E,w,\mathcal{I}_1,\mathcal{I}_2,I]$.

4.1 Searching a Shortest Cheapest Path

Take any $k\in\{0,1,\dots,n-1\}$ and let $I$ be a $w$-maximal set in $\mathcal{I}_1^k\cap\mathcal{I}_2^k$. To emulate Algorithm 2, we want to find a shortest cheapest $S_I$–$T_I$ path in $D[I]=(E\setminus I, I; A_1[I]\cup A_2[I])$ with respect to the vertex cost $c\colon E\to\mathbb{R}$ defined in Algorithm 2. With only the rank sum oracle, however, we cannot determine the sets $A_1[I]$, $A_2[I]$, $S_I$, and $T_I$, and hence cannot simply emulate Algorithm 2.

We show that, despite this difficulty, we can compute a shortest cheapest $S_I$–$T_I$ path in $D[I]$. We first make some observations. Let $D'[I]=(E\setminus I, I; A'_1[I]\cup A'_2[I])$ be the subgraph of $D[I]$ obtained from $D[I]$ by removing the arcs entering $S_I$ and those leaving $T_I$, i.e.,

$$\begin{aligned}
A'_1[I] &\coloneqq \{\,(y,x) \mid x\in E\setminus(I\cup S_I),\ y\in C_1(I,x)\,\},\\
A'_2[I] &\coloneqq \{\,(x,y) \mid x\in E\setminus(I\cup T_I),\ y\in C_2(I,x)\,\}.
\end{aligned}$$

Note that each element in $S_I\cap T_I$ is an isolated vertex in $D'[I]$. Recall that $D[I]$ has no negative-cost cycle with respect to $c$ by Lemma 2.5. This fact implies that finding a shortest cheapest $S_I$–$T_I$ path in $D[I]$ is equivalent to finding one in $D'[I]$.

Lemma 4.1.

Any shortest cheapest $S_I$–$T_I$ path in $D'[I]$ is a shortest cheapest $S_I$–$T_I$ path in $D[I]$.

Proof.

It is sufficient to show that any shortest cheapest $S_I$–$T_I$ path in $D[I]$ is contained in $D'[I]$. Suppose, to the contrary, that a shortest cheapest $S_I$–$T_I$ path $P$ uses some arc $(y,s^*)\in A_1[I]\setminus A'_1[I]$ or $(t^*,y)\in A_2[I]\setminus A'_2[I]$, and let $s\in S_I$ and $t\in T_I$ be its end vertices. If $P$ uses $(y,s^*)\in A_1[I]\setminus A'_1[I]$, let $P(s,y)$ and $P(s^*,t)$ be the subpaths of $P$ from $s$ to $y$ and from $s^*$ to $t$, respectively. Since $(y,s)\in A_1[I]$, the path $P(s,y)$ can be extended to a cycle with the same vertex set in $D[I]$, and hence $c(P(s,y))$ is nonnegative by Lemma 2.5. Then $c(P)=c(P(s,y))+c(P(s^*,t))\geq c(P(s^*,t))$ and $|P(s^*,t)|<|P|$, which contradicts the fact that $P$ is a shortest cheapest $S_I$–$T_I$ path in $D[I]$. If $P$ uses $(t^*,y)\in A_2[I]\setminus A'_2[I]$, we can similarly show that $P(s,t^*)$ is an $S_I$–$T_I$ path that is at least as cheap as $P$ and shorter than $P$. ∎

While we cannot determine $S_I$ and $T_I$ separately, we can determine their union as $S_I\cup T_I=\{\,s\in E\setminus I\mid {r_{\mathrm{sum}}}(I+s)\geq 2|I|+1\,\}$. We now provide a search algorithm using the rank sum oracle; its description is given as Algorithm 3. For any $s\in S_I$ (resp., $s\in T_I$), it emulates the Bellman–Ford algorithm in $D'[I]$ (resp., in the inverse of $D'[I]$) rooted at $s$ without knowing $D'[I]$ explicitly. Since there is no negative-cost cycle in $D'[I]$ by Lemma 2.5, the algorithm finds shortest cheapest paths from $s$ to all reachable vertices, and it returns a shortest cheapest $s$–$T_I$ path (resp., $s$–$S_I$ path) if one exists. By applying this search algorithm to every $s\in S_I\cup T_I$, we obtain a shortest cheapest $S_I$–$T_I$ path in $D'[I]$, which is also a shortest cheapest path in $D[I]$ by Lemma 4.1.

For each $e\in E$, the algorithm maintains a sequence $P_e$ of distinct elements of $E$, which is initialized to a virtual token $\mathsf{null}$ with $c(\mathsf{null})\coloneqq\infty$. We will show that $P_e$ is an $s$–$e$ path in $D'[I]$ unless $P_e=\mathsf{null}$. For a sequence $P_e$ and an element $e'\in E\setminus P_e$, we denote by $P_e+e'$ the sequence obtained by appending $e'$ to $P_e$.

Input: A finite set $E$, a weight function $w\colon E\to\mathbb{R}$ (which defines $c\colon E\to\mathbb{R}$ by (2.1)), oracle access to ${r_{\mathrm{sum}}}\colon 2^E\to\mathbb{Z}_{\geq 0}$, a $w$-maximal set $I\in\mathcal{I}_1^k\cap\mathcal{I}_2^k$ for some $k\in\{0,1,\dots,n-1\}$, and $s\in S_I\cup T_I$.
Output: For $s\in S_I$ (resp., $s\in T_I$), a shortest cheapest $s$–$T_I$ path (resp., $s$–$S_I$ path) in $D'[I]$ (resp., in the inverse of $D'[I]$) with respect to $c$ if one exists, or a message “No” otherwise.

1. Set $P_s\leftarrow s$, and $P_e\leftarrow\mathsf{null}$ for each $e\in E-s$.
2. For $\ell=1,2,\dots,n-1$, do the following.
   If $\ell$ is odd: For each $y\in I$, do the following. Let $x\in E\setminus I$ minimize $c(P_x)$ subject to $P_x\neq\mathsf{null}$, $y\not\in P_x$, and
   $$(*)\quad {r_{\mathrm{sum}}}(I\triangle P_x)=2|I|+1,\qquad {r_{\mathrm{sum}}}(I\triangle(P_x+y))=2|I|.$$
   If $c(P_x+y)<c(P_y)$, update $P_y\leftarrow P_x+y$.
   If $\ell$ is even: For each $x\in E\setminus I$, do the following. Let $y\in I$ minimize $c(P_y)$ subject to $P_y\neq\mathsf{null}$, $x\not\in P_y$, and
   $$(**)\quad [\,{r_{\mathrm{sum}}}(I+x)=2|I|,\ {r_{\mathrm{sum}}}(I\triangle(P_y+x))=2|I|+1\,]\ \text{or}\ [\,{r_{\mathrm{sum}}}(I+x)=2|I|+1,\ {r_{\mathrm{sum}}}(I\triangle(P_y+x))=2|I|+2\,].$$
   If $c(P_y+x)<c(P_x)$, update $P_x\leftarrow P_y+x$.
3. Let $t\in E\setminus I$ minimize $c(P_t)$ subject to ${r_{\mathrm{sum}}}(I\triangle P_t)=2|I|+2$. Return $P_t$ if $c(P_t)\neq\infty$, and otherwise return “No”.

Algorithm 3 (EmulatingBellmanFord$[E,c,{r_{\mathrm{sum}}},I,s]$).
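A minimal sketch of the search in Algorithm 3, assuming the rank sum oracle is given as a callable on frozensets; the naming is ours, ties are broken arbitrarily, and no low-level optimizations are attempted.

from typing import Callable, Dict, FrozenSet, List, Optional, Set

def emulating_bellman_ford(E: Set, w: Dict, rsum: Callable[[FrozenSet], int],
                           I: Set, s) -> Optional[List]:
    def c(e):                                   # cost (2.1)
        return w[e] if e in I else -w[e]
    def cost(P):                                # c(P), with c(null) = +infinity
        return float("inf") if P is None else sum(c(e) for e in P)
    def sym(P):                                 # I triangle (set of P)
        return frozenset(I ^ set(P))

    P: Dict = {e: None for e in E}              # P_e <- null
    P[s] = [s]                                  # P_s <- s
    two_k = 2 * len(I)

    for ell in range(1, len(E)):                # ell = 1, ..., n - 1
        if ell % 2 == 1:                        # odd: try to extend paths to some y in I
            for y in I:
                best = None
                for x in E - I:
                    if P[x] is None or y in P[x]:
                        continue
                    if rsum(sym(P[x])) == two_k + 1 and rsum(sym(P[x] + [y])) == two_k:
                        if best is None or cost(P[x]) < cost(P[best]):
                            best = x            # condition (*) holds
                if best is not None and cost(P[best] + [y]) < cost(P[y]):
                    P[y] = P[best] + [y]
        else:                                   # even: try to extend paths to some x outside I
            for x in E - I:
                best = None
                for y in I:
                    if P[y] is None or x in P[y]:
                        continue
                    rx, rnew = rsum(frozenset(I | {x})), rsum(sym(P[y] + [x]))
                    if (rx, rnew) in ((two_k, two_k + 1), (two_k + 1, two_k + 2)):
                        if best is None or cost(P[y]) < cost(P[best]):
                            best = y            # condition (**) holds
                if best is not None and cost(P[best] + [x]) < cost(P[x]):
                    P[x] = P[best] + [x]

    # Step 3: among x outside I with rsum(I triangle P_x) = 2|I| + 2, return the cheapest.
    cands = [x for x in E - I if P[x] is not None and rsum(sym(P[x])) == two_k + 2]
    if not cands:
        return None                             # "No"
    return P[min(cands, key=lambda x: cost(P[x]))]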

4.2 Correctness of the Search

The following lemma shows an important property of Algorithm 3, where we may assume that $s$ belongs to $S_I=\{\,s\in E\setminus I\mid I+s\in\mathcal{I}_1\,\}$ by symmetry. (For $s\in T_I=\{\,s\in E\setminus I\mid I+s\in\mathcal{I}_2\,\}$, replace $D'[I]$ with its inverse in the arguments.)

Lemma 4.2.

Let $s\in S_I$ and take any $\ell=1,2,\dots,n-1$. For any $e\in E$, just after the $\ell$th updating process of Step 2 of Algorithm 3, the sequence $P_e$ is a shortest cheapest $s$–$e$ path in $D'[I]$ subject to $|P_e|\leq\ell+1$, where $P_e=\mathsf{null}$ means that there is no such path.

Proof.

We use induction on $\ell$. Note that the statement holds if $\ell=0$. We show the statement for any $\ell>0$ assuming that it holds for $\ell-1$. We use the following two claims.

Claim 4.3.

Suppose that, for any $e$, $P_e$ is a shortest cheapest $s$–$e$ path in $D'[I]$ subject to $|P_e|\leq\ell$. Then, for any $y\in I$ and $x\in E\setminus I$ such that [$P_x\neq\mathsf{null}$, $y\not\in P_x$, and $c(P_x+y)<c(P_y)$], condition $(*)$ holds if and only if $(x,y)\in A'_2[I]$.

Claim 4.4.

Suppose that, for any $e$, $P_e$ is a shortest cheapest $s$–$e$ path in $D'[I]$ subject to $|P_e|\leq\ell$. Then, for any $x\in E\setminus I$ and $y\in I$ such that [$P_y\neq\mathsf{null}$, $x\not\in P_y$, and $c(P_y+x)<c(P_x)$], condition $(**)$ holds if and only if $(y,x)\in A'_1[I]$.

We postpone the proofs of these claims and complete the proof of the lemma relying on them. For any $e\in E$, let $P^{\ell-1}_e$ and $P^{\ell}_e$ be the sequence $P_e$ just after the $(\ell-1)$st and the $\ell$th process, respectively. By induction, $P^{\ell-1}_e$ is a shortest cheapest $s$–$e$ path in $D'[I]$ subject to $|P_e|\leq\ell$. Let $P^*_e$ be any shortest cheapest $s$–$e$ path in $D'[I]$ subject to $|P^*_e|\leq\ell+1$.

By the “only if” parts of Claims 4.3 and 4.4, $P^{\ell}_e$ is an $s$–$e$ path in $D'[I]$ with $|P^{\ell}_e|\leq\ell+1$, and hence $c(P^*_e)\leq c(P^{\ell}_e)$. Also, $c(P^{\ell}_e)\leq c(P^{\ell-1}_e)$ by the algorithm. If $c(P^*_e)=c(P^{\ell-1}_e)$, then $P^{\ell}_e=P^{\ell-1}_e$ and the statement immediately follows. Otherwise, $c(P^*_e)<c(P^{\ell-1}_e)$. This implies $|P^*_e|=\ell+1$. Then $e\in I$ if $\ell$ is odd and $e\in E\setminus I$ if $\ell$ is even (recall that $D'[I]$ is a bipartite digraph between $E\setminus I$ and $I$). Let $e'$ be the second last element in $P^*_e$ and let $P^*_{e'}:=P^*_e-e$ (i.e., delete $e$ from $P^*_e$). Then $P^*_{e'}$ is an $s$–$e'$ path with $|P^*_{e'}|=\ell$, and hence $c(P^{\ell-1}_{e'})\leq c(P^*_{e'})$ by the induction hypothesis. So $c(P^{\ell-1}_{e'}+e)\leq c(P^*_e)<c(P^{\ell-1}_e)$. If $\ell$ is odd, then $(e',e)\in A'_2[I]$, and hence $(*)$ holds with $x:=e'$ and $y:=e$ by Claim 4.3. If $\ell$ is even, then $(e',e)\in A'_1[I]$, and hence $(**)$ holds with $y:=e'$ and $x:=e$ by Claim 4.4. In either case, we obtain $c(P^{\ell}_e)\leq c(P^*_{e'}+e)=c(P^*_e)$ and $|P^{\ell}_e|\leq\ell+1=|P^*_e|$. Hence $P^{\ell}_e$ is a shortest cheapest $s$–$e$ path subject to $|P_e|\leq\ell+1$. ∎

In what follows, we prove Claims 4.3 and 4.4. First, we need the following lemma.

Lemma 4.5.

Let $s\in S_I$. At any moment of the algorithm, the following conditions hold.

(a) Any $y\in I$ with $P_y\neq\mathsf{null}$ satisfies $|I\triangle P_y|=|I|$, $r_1(I\triangle P_y)=|I|$, and $r_2(I\triangle P_y)=|I|$.

(b) Any $x\in E\setminus I$ with $P_x\neq\mathsf{null}$ satisfies $|I\triangle P_x|=|I|+1$, $r_1(I\triangle P_x)=|I|+1$, and $r_2(I\triangle P_x)\geq|I|$. Moreover, it satisfies $r_2(I\triangle P_x)=|I|+1$ if and only if $x\in T_I$.

(c) For each $e\in E$ with $P_e\neq\mathsf{null}$, we have $(P_e-s)\cap S_I=\emptyset$ and $(P_e-e)\cap T_I=\emptyset$.

Proof.

By the algorithm, for each $e\in E$ with $P_e\neq\mathsf{null}$, the sequence $P_e$ starts with $s\in E\setminus I$ and uses elements of $E\setminus I$ and $I$ alternately. Then $|I\triangle P_y|=|I|$ for any $y\in I$ with $P_y\neq\mathsf{null}$, and $|I\triangle P_x|=|I|+1$ for any $x\in E\setminus I$ with $P_x\neq\mathsf{null}$. For any $y\in I$, after $P_y$ is updated, it satisfies ${r_{\mathrm{sum}}}(I\triangle P_y)=2|I|$ by the condition $(*)$ for the update. Then (a) follows.

For any $e\in E\setminus I$ with $P_e\neq\mathsf{null}$, any $x'\in(P_e\setminus I)-e$ has some succeeding element $y'\in I$ in $P_e$, and $(*)$ holds for $x'$ and $y'$. Hence ${r_{\mathrm{sum}}}(I\triangle P_{x'})=2|I|+1$. If $x'=s\in S_I$, this immediately implies $x'\not\in T_I$. If $x'\neq s$, then $x'$ has some preceding element $y''$ in $P_{x'}$, and $(**)$ for $y''$ and $x'$ implies ${r_{\mathrm{sum}}}(I+x')=2|I|$ (as ${r_{\mathrm{sum}}}(I\triangle P_{x'})\neq 2|I|+2$), and hence $x'\not\in S_I\cup T_I$. Thus, $s\in S_I\setminus T_I$ and any $x'\in(P_e\setminus I)-s-e$ satisfies $x'\not\in S_I\cup T_I$.

For any $x\in(E\setminus I)-s$ with $P_x\neq\mathsf{null}$, by $(**)$ for $x$ and its preceding element $y$, we have $[\,{r_{\mathrm{sum}}}(I+x)=2|I|,\ {r_{\mathrm{sum}}}(I\triangle P_x)=2|I|+1\,]$ or $[\,{r_{\mathrm{sum}}}(I+x)=2|I|+1,\ {r_{\mathrm{sum}}}(I\triangle P_x)=2|I|+2\,]$. In the former case, ${r_{\mathrm{sum}}}(I+x)=2|I|$ implies $x\not\in S_I\cup T_I$, and hence $P_x\cap T_I=\emptyset$. This implies $r_2(I\triangle P_x)\leq r_2(I\cup P_x)=|I|$, and then ${r_{\mathrm{sum}}}(I\triangle P_x)=2|I|+1$ implies $r_1(I\triangle P_x)=|I|+1$ and $r_2(I\triangle P_x)=|I|$. In the latter case, ${r_{\mathrm{sum}}}(I\triangle P_x)=2|I|+2$ implies $r_1(I\triangle P_x)=r_2(I\triangle P_x)=|I|+1$. Since any $x'\in(P_x\setminus I)-x$ satisfies $x'\not\in T_I$ (as seen in the previous paragraph), we must have $x\in T_I$, and then $x\not\in S_I$ follows from ${r_{\mathrm{sum}}}(I+x)=2|I|+1$. Thus, (b) and (c) are shown. ∎

Now we are ready to show Claims 4.3 and 4.4. We sometimes denote by $V(P)$ the set of elements of a sequence $P$ to emphasize that we focus on the set rather than the sequence.

Proof of Claim 4.3.

By Lemma 4.5(b), ${r_{\mathrm{sum}}}(I\triangle P_x)=2|I|+1$ is equivalent to $x\not\in T_I$. Lemma 4.5(b) also implies $r_1(I\triangle P_x)=|I|+1=|I\triangle P_x|$, and hence $r_1(I\triangle(P_x+y))=r_1((I\triangle P_x)-y)=|I|$. Therefore, ${r_{\mathrm{sum}}}(I\triangle(P_x+y))=2|I|$ is equivalent to $r_2(I\triangle(P_x+y))=|I|=|I\triangle(P_x+y)|$, i.e., $I\triangle(P_x+y)\in\mathcal{I}_2$. Then, the condition $(*)$ is equivalent to

$$(*)'\quad x\not\in T_I,\qquad I\triangle(P_x+y)\in\mathcal{I}_2.$$

We show that $(*)'$ holds if and only if $(x,y)\in A'_2[I]$. By the induction hypothesis, $P_x$ is an $s$–$x$ path in $D'[I]$, and hence it uses arcs of $A'_2[I]$ and $A'_1[I]$ alternately. Let $N_1$ and $N_2$ be the sets of those arcs of $A'_1[I]$ and $A'_2[I]$, respectively. Since $x\in E\setminus I$, $N_1$ forms a matching that covers $V(P_x)-s$ and $N_2$ forms a matching that covers $V(P_x)-x$.

To show the “if” part, suppose $(x,y)\in A'_2[I]$. Then $x\not\in T_I$ by the definition of $A'_2[I]$. Also, $N'_2:=N_2+(x,y)\subseteq A'_2[I]$ forms a perfect matching on $V(P_x+y)$. Suppose conversely that $I\triangle(P_x+y)\not\in\mathcal{I}_2$. Then Lemma 2.4 implies that $A_2[I]$ contains some other perfect matching $N''_2$ on $V(P_x+y)$. We see that $N''_2\subseteq A'_2[I]$ because $(P_x+y)\cap T_I=\emptyset$ by Lemma 4.5(c) and $x\not\in T_I$. Thus, $N'_2$, $N''_2$, and $N_1$ are all contained in $D'[I]$. Consider the digraph $D=(V(P_x+y),A)$ whose arc set $A$ consists of the arcs in $N'_2$, $N''_2$, and two copies of $N_1$, where we take multiplicities into account, i.e., each arc in $(N'_2\cap N''_2)\cup N_1$ is taken twice (in parallel). Since $N''_2\neq N'_2$, there exists an arc in $N''_2$ whose head precedes its tail on the path $P_x+y$. Then $D$ contains at least one directed cycle. The indegree and outdegree of each vertex in $D$ are given as $(d^{\rm in}(s),d^{\rm out}(s))=(0,2)$, $(d^{\rm in}(y),d^{\rm out}(y))=(2,0)$, and $(d^{\rm in}(e),d^{\rm out}(e))=(2,2)$ for all other vertices $e$. Then $A$ can be decomposed into the arc sets of two $s$–$y$ paths and one or more cycles. Let $P_1$ and $P_2$ be those $s$–$y$ paths and let $\mathcal{Q}$ be the set of those cycles (where paths and cycles are sequences of vertices). Then each vertex in $V(P_x+y)$ is used exactly twice in this decomposition, and hence

$$c(P_1)+c(P_2)+\sum_{Q\in\mathcal{Q}}c(Q)=2c(P_x+y).$$

Since $D'[I]$ has no negative-cost cycle, this implies $c(P_1)+c(P_2)\leq 2c(P_x+y)$. Also, $\mathcal{Q}\neq\emptyset$ implies $V(P_1)\subsetneq V(P_x+y)$ or $V(P_2)\subsetneq V(P_x+y)$. In case $V(P_1)=V(P_x+y)$, we have $V(P_2)\subsetneq V(P_x+y)$, which implies $|P_2|<|P_x+y|\leq\ell+1$ because $|P_x|\leq\ell$ holds by induction. Also, $V(P_1)=V(P_x+y)$ implies $c(P_2)\leq c(P_x+y)$, where $c(P_x+y)<c(P_y)$ by assumption. Thus, $P_2$ is an $s$–$y$ path in $D'[I]$ with $c(P_2)<c(P_y)$ and $|P_2|\leq\ell$, which contradicts the induction hypothesis that $P_y$ is a shortest cheapest $s$–$y$ path subject to $|P_y|\leq\ell$. The case $V(P_2)=V(P_x+y)$ is similar. In case $V(P_1)\subsetneq V(P_x+y)$ and $V(P_2)\subsetneq V(P_x+y)$, both $P_1$ and $P_2$ satisfy $|P_i|\leq\ell$, and at least one of them, say $P_i$, satisfies $c(P_i)\leq c(P_x+y)<c(P_y)$, which again contradicts the induction hypothesis on $P_y$.

We next show the “only if” part. Let $(*)'$ hold. By $I\triangle(P_x+y)\in\mathcal{I}_2$, Lemma 2.3 implies that $A_2[I]$ contains a perfect matching $N'_2$ on $V(P_x+y)$. Also, $N'_2\subseteq A'_2[I]$ because $(P_x+y)\cap T_I=\emptyset$ by Lemma 4.5(c) and $x\not\in T_I$. Thus, $N_2$, $N'_2$, and $N_1$ are all contained in $D'[I]$. Consider the digraph $D^*=(V(P_x+y),A^*)$ whose arc set $A^*$ consists of the arcs in $N_2$, $N'_2$, and two copies of $N_1$, where we take multiplicities into account as before. Suppose, for contradiction, that $(x,y)\not\in A'_2[I]$. Then $(x,y)\not\in N'_2$. Since $N'_2$ covers $V(P_x+y)$, it has an arc whose tail is $x$ and whose head precedes $x$ on the path $P_x+y$. Then $D^*$ contains at least one directed cycle. Note that $(d^{\rm in}(s),d^{\rm out}(s))=(0,2)$, $(d^{\rm in}(x),d^{\rm out}(x))=(2,1)$, $(d^{\rm in}(y),d^{\rm out}(y))=(1,0)$, and $(d^{\rm in}(e),d^{\rm out}(e))=(2,2)$ for all other vertices $e$ in $D^*$. Then $A^*$ can be decomposed into the arc sets of one $s$–$x$ path, one $s$–$y$ path, and one or more cycles. Let $R_x$, $R_y$, and $\mathcal{Q}'$ be that $s$–$x$ path, that $s$–$y$ path, and the set of those cycles, respectively. Then each vertex in $V(P_x+y)-y$ is used twice and $y$ is used once in this decomposition, and hence

$$c(R_x)+c(R_y)+\sum_{Q\in\mathcal{Q}'}c(Q)=c(P_x)+c(P_x+y).$$

Since $D'[I]$ has no negative-cost cycle, $c(R_x)+c(R_y)\leq c(P_x)+c(P_x+y)$. Also, $\mathcal{Q}'\neq\emptyset$ implies $V(R_x)\subsetneq V(P_x)$ or $V(R_y)\subsetneq V(P_x+y)$. If $V(R_x)=V(P_x)$, then $V(R_y)\subsetneq V(P_x+y)$, which implies $|R_y|<|P_x+y|\leq\ell+1$. Also, $V(R_x)=V(P_x)$ implies $c(R_y)\leq c(P_x+y)<c(P_y)$. Thus, $R_y$ is an $s$–$y$ path with $c(R_y)<c(P_y)$ and $|R_y|\leq\ell$, which contradicts the induction hypothesis on $P_y$. In case $V(R_y)=V(P_x+y)$, we have $V(R_x)\subsetneq V(P_x)$, which implies $|R_x|<|P_x|$. Also, $V(R_y)=V(P_x+y)$ implies $c(R_x)\leq c(P_x)$. Thus, $R_x$ is an $s$–$x$ path with $c(R_x)\leq c(P_x)$ and $|R_x|<|P_x|$, which contradicts the induction hypothesis on $P_x$. In case $V(R_x)\subsetneq V(P_x)$ and $V(R_y)\subsetneq V(P_x+y)$, we have $|R_x|<|P_x|$ and $|R_y|\leq\ell$. Also, we have $c(R_x)\leq c(P_x)$ or $c(R_y)\leq c(P_x+y)<c(P_y)$, and each of these yields a contradiction. ∎

Proof of Claim 4.4.

Since $I\triangle(P_y+x)=(I\triangle P_y)+x$, Lemma 4.5(a) implies $|I\triangle(P_y+x)|=|I|+1$, $r_1(I\triangle(P_y+x))\geq|I|$, and $r_2(I\triangle(P_y+x))\geq|I|$. Also, by Lemma 4.5(c), we have $P_y\cap T_I=\emptyset$ (which implies $\mathrm{cl}_2(I)=\mathrm{cl}_2(I\triangle P_y)$), and hence $r_2(I\triangle(P_y+x))=|I|+1$ holds if and only if $x\in T_I$. If $x\not\in T_I$ (resp., $x\in T_I$), then ${r_{\mathrm{sum}}}(I\triangle(P_y+x))=2|I|+1$ (resp., ${r_{\mathrm{sum}}}(I\triangle(P_y+x))=2|I|+2$) is equivalent to $r_1(I\triangle(P_y+x))=|I|+1$, and also ${r_{\mathrm{sum}}}(I+x)=2|I|$ (resp., ${r_{\mathrm{sum}}}(I+x)=2|I|+1$) is equivalent to $x\not\in S_I$. Therefore, $(**)$ is equivalent to

$$(**)'\quad x\not\in S_I,\qquad I\triangle(P_y+x)\in\mathcal{I}_1.$$

We show that $(**)'$ holds if and only if $(y,x)\in A'_1[I]$. By the induction hypothesis, $P_y$ is an $s$–$y$ path in $D'[I]$, and hence it uses arcs of $A'_2[I]$ and $A'_1[I]$ alternately. Let $N_1$ and $N_2$ be the sets of those arcs of $A'_1[I]$ and $A'_2[I]$, respectively. Then $N_1$ forms a matching that covers $V(P_y)-y-s$ and $N_2$ forms a matching that covers $V(P_y)$.

To show the “if” part, suppose $(y,x)\in A'_1[I]$. Then $x\not\in S_I$ by the definition of $A'_1[I]$. Also, $N'_1:=N_1+(y,x)\subseteq A'_1[I]$ forms a perfect matching on $V(P_y+x)-s$. Suppose conversely that $I\triangle(P_y+x)\not\in\mathcal{I}_1$. This implies $I\triangle(P_y+x-s)\not\in\mathcal{I}_1$, as follows. Lemma 4.5(c) implies $(P_y+x)\cap S_I=\{s\}$, and hence $\mathrm{cl}_1(I\triangle(P_y+x-s))\subseteq\mathrm{cl}_1(I)$. If $\mathrm{cl}_1(I\triangle(P_y+x-s))=\mathrm{cl}_1(I)$, then $\mathrm{cl}_1(I\triangle(P_y+x))=\mathrm{cl}_1(I+s)$, and $I\triangle(P_y+x)\in\mathcal{I}_1$ as $I+s\in\mathcal{I}_1$, a contradiction. Thus, we have $\mathrm{cl}_1(I\triangle(P_y+x-s))\subsetneq\mathrm{cl}_1(I)$, which implies $I\triangle(P_y+x-s)\not\in\mathcal{I}_1$. Then Lemma 2.4 implies that $A_1[I]$ contains some other perfect matching $N''_1$ on $V(P_y+x)-s$. We see that $N''_1\subseteq A'_1[I]$ because $(P_y+x-s)\cap S_I=\emptyset$ by Lemma 4.5(c) and $x\not\in S_I$. Thus, $N'_1$, $N''_1$, and $N_2$ are all contained in $D'[I]$. Consider the digraph $D=(V(P_y+x),A)$ whose arc set $A$ consists of the arcs in $N'_1$, $N''_1$, and two copies of $N_2$, where we take multiplicities into account as before. Similarly to the proof of Claim 4.3, we see that there exists an $s$–$x$ path $P$ in $D'[I]$ with $c(P)<c(P_x)$ and $|P|\leq\ell$, which contradicts the induction hypothesis on $P_x$.

We next show the “only if” part. Let $(**)'$ hold. By $I\triangle(P_y+x-s)\subseteq I\triangle(P_y+x)\in\mathcal{I}_1$, Lemma 2.3 implies that $A_1[I]$ contains a perfect matching $N'_1$ on $V(P_y+x)-s$. Also, $N'_1\subseteq A'_1[I]$ because $(P_y+x-s)\cap S_I=\emptyset$ by Lemma 4.5(c) and $x\not\in S_I$. Thus, $N_1$, $N'_1$, and $N_2$ are all contained in $D'[I]$. Consider the digraph $D^*=(V(P_y+x),A^*)$ whose arc set $A^*$ consists of the arcs in $N_1$, $N'_1$, and two copies of $N_2$, where we take multiplicities into account as before. Then, similarly to the proof of Claim 4.3, we see that there exists an $s$–$y$ or $s$–$x$ path whose existence contradicts the induction hypothesis on $P_y$ or $P_x$, respectively. ∎

Lemma 4.6.

The output of EmulatingBellmanFord$[E,c,{r_{\mathrm{sum}}},I,s]$ is always correct.

Proof.

For any $e\in E$, a shortest cheapest $s$–$e$ path $P$ satisfies $|P|\leq n$. Then, after the $(n-1)$st updating process, $P_e$ is indeed a shortest cheapest $s$–$e$ path by Lemma 4.2. Also, when some path is returned, it is a shortest cheapest $s$–$T_I$ path by Lemma 4.5(b). ∎

4.3 Matroid Intersection Algorithm under Rank Sum Oracle

Using Algorithm 3 as a subroutine, we can emulate CheapestPathAugment$[E,w,\mathcal{I}_1,\mathcal{I}_2,I]$ as shown in Algorithm 4.

Input: A finite set $E$, a weight function $w\colon E\to\mathbb{R}$, oracle access to ${r_{\mathrm{sum}}}\colon 2^E\to\mathbb{Z}_{\geq 0}$, and a $w$-maximal set $I\in\mathcal{I}_1^k\cap\mathcal{I}_2^k$ for some $k=0,1,\dots,n-1$.
Output: A $w$-maximal set $J\in\mathcal{I}_1^{k+1}\cap\mathcal{I}_2^{k+1}$ if one exists, or a message “No”.

1. Determine the set $S_I\cup T_I=\{\,s\in E\setminus I\mid {r_{\mathrm{sum}}}(I+s)\geq 2|I|+1\,\}$. Define a cost function $c\colon E\to\mathbb{R}$ by (2.1).
2. For each $s\in S_I\cup T_I$, apply EmulatingBellmanFord$[E,c,{r_{\mathrm{sum}}},I,s]$.
3. If some path is returned in Step 2, then let $P$ be a shortest cheapest one among all returned paths, and return $J=I\triangle P$. Otherwise, return the message “No”.

Algorithm 4 (CheapestPathAugmentRankSum$[E,w,\mathcal{I}_1,\mathcal{I}_2,I]$).
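A minimal sketch of Algorithm 4, with the search routine (e.g., the emulating_bellman_ford sketch above) passed in as a parameter so that the snippet stays self-contained; the naming is ours.

from typing import Callable, Dict, FrozenSet, List, Optional, Set

def cheapest_path_augment_rank_sum(E: Set, w: Dict,
                                   rsum: Callable[[FrozenSet], int],
                                   I: Set,
                                   search: Callable[..., Optional[List]]) -> Optional[Set]:
    # search is expected to behave like EmulatingBellmanFord.
    def c(e):
        return w[e] if e in I else -w[e]
    two_k = 2 * len(I)
    roots = [s for s in E - I if rsum(frozenset(I | {s})) >= two_k + 1]   # S_I union T_I
    paths = [P for s in roots if (P := search(E, w, rsum, I, s)) is not None]
    if not paths:
        return None                                                       # "No"
    best = min(paths, key=lambda P: (sum(c(e) for e in P), len(P)))       # cheapest, then shortest
    return I ^ set(best)                                                  # J = I triangle P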

The following lemma and the resulting proof of Theorem 1.1 conclude the section.

Lemma 4.7.

The output of CheapestPathAugmentRankSum$[E,w,\mathcal{I}_1,\mathcal{I}_2,I]$ is always correct.

Proof.

The correctness of Algorithm 4 immediately follows from Lemmas 4.1 and 4.6. ∎

With Lemma 4.7 at hand, we can complete the proof of Theorem 1.1.

Proof of Theorem 1.1.

Starting from $I=\emptyset$, the size of the common independent set can be gradually increased using CheapestPathAugmentRankSum$[E,w,\mathcal{I}_1,\mathcal{I}_2,I]$ until $I$ becomes a common independent set of maximum cardinality. The correctness of the algorithm follows from Lemma 4.7. Note that, if we are asked to find a maximum-weight common independent set, then it suffices to output one with maximum weight among the obtained $w$-maximal common independent sets. ∎
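The overall weighted algorithm is then the standard driver loop; a minimal sketch assuming an augmentation routine with the behavior of Algorithm 4 (e.g., the sketch above), with names of our own choosing:

from typing import Callable, Optional, Set

def max_weight_common_independent_set(E: Set, w, rsum,
                                      augment: Callable[..., Optional[Set]]) -> Set:
    # Repeated augmentation from the empty set; every intermediate I is w-maximal
    # among common independent sets of its size, so the best of them is returned.
    best, I = set(), set()
    while I is not None:
        if sum(w[e] for e in I) >= sum(w[e] for e in best):
            best = set(I)
        I = augment(E, w, rsum, I)
    return best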

5 Matroid Intersection under Common Independence Oracle

As discussed in Section 3, the common independence oracle is strictly weaker than the minimum rank and the rank sum oracles. As weighted matroid intersection turned out to be tractable for the rank sum oracle, the complexity of the problem under the common independence oracle is especially interesting.

In what follows, we present an algorithm for the unweighted matroid intersection problem when one of the matroids is a partition matroid, and an algorithm for the weighted matroid intersection problem when one of the matroids is an elementary split matroid. We also show that the common independence oracle, when complemented with the maximum rank oracle, is strong enough to design an algorithm similar to that for the rank sum case.

5.1 Intersection with Partition Matroid

The aim of this section is to show that the unweighted matroid intersection problem is tractable under the common independence oracle when $\mathbf{M}_1$ is a partition matroid with all-one upper bounds on the partition classes, that is, when $\mathcal{I}_1$ is represented as $\mathcal{I}_1=\{\,I\subseteq E\mid |I\cap E_i|\leq 1\ \text{for}\ i=1,\dots,q\,\}$ for some partition $E=E_1\cup\dots\cup E_q$. We will provide an algorithm that emulates Algorithm 1, i.e., Augment$[E,\mathcal{I}_1,\mathcal{I}_2,I]$, using only the common independence oracle.
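For concreteness, a common independence oracle for this special case can be assembled as follows when building test instances (our own naming; under the model studied here, only the combined yes/no answer is available to the algorithm).

from typing import Callable, FrozenSet, List, Set

def make_ci_oracle(classes: List[Set], indep2: Callable[[FrozenSet], bool]):
    # M1 is the partition matroid with all-one upper bounds on the given classes.
    def common_independent(X: FrozenSet) -> bool:
        in_partition_matroid = all(len(X & E_i) <= 1 for E_i in classes)
        return in_partition_matroid and indep2(X)
    return common_independent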

Take any $k=0,1,\dots,n-1$ and let $I\in\mathcal{I}_1^k\cap\mathcal{I}_2^k$. To emulate Algorithm 1, we want to find a shortest $S_I$–$T_I$ path in the exchangeability graph $D[I]=(E\setminus I,I;A_1[I]\cup A_2[I])$. With only the common independence oracle, however, we cannot construct $D[I]$, and we cannot even determine $S_I$ or $T_I$.

Note that a shortest $S_I$–$T_I$ path in $D[I]$ never uses arcs entering sources or leaving sinks. Therefore, finding a shortest $S_I$–$T_I$ path in $D[I]$ is equivalent to finding one in $D'[I]$, where $D'[I]$ is the subgraph of $D[I]$ obtained by removing those arcs from $D[I]$ (this subgraph was also used in Section 4). We now provide a search procedure, described as Algorithm 5, that will be used as a subroutine of our augmenting procedure. If a given element $s$ belongs to $S_I$, this search algorithm works like breadth-first search in $D'[I]$ rooted at $s$, and returns a shortest $s$–$T_I$ path or certifies that no such path exists.

In Algorithm 5, for each $y\in I$, a sequence $P_y$ of distinct elements is defined. In our analysis, $P_y$ will turn out to be a shortest $s$–$y$ path in $D'[I]$. We use the notation $P_y+x$ to denote the sequence obtained by appending an element $x$ to $P_y$.

Input : Oracle access to 12{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} where 1{\mathcal{I}}_{1} is the independent set family of a partition matroid, a common independent set I1k2kI\in{\mathcal{I}}^{k}_{1}\cap{\mathcal{I}}^{k}_{2}, and an element sEIs\in E\setminus I.
Output : A sequence PP with IP12I\triangle P\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and |IP|=k+1|I\triangle P|=k+1 if one exists, or a message “No” otherwise. In particular, if sSIs\in S_{I} and D[I]D^{\prime}[I] has an ssTIT_{I} path, then a shortest ssTIT_{I} path is returned.
1. If I+s12I+s\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}, halt and return ss.
2. For each yIy\in I, set PysyP_{y}\leftarrow sy if I+sy12I+s-y\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}, and Py𝗇𝗎𝗅𝗅P_{y}\leftarrow{\mathsf{null}} otherwise.
3. For =1,2,\ell=1,2,\dots, do the following.
  (i) If there is no yIy\in I with |Py|=2|P_{y}|=2\ell, halt and return “No”.
  (ii) If there exist yIy^{\prime}\in I and xEIx\in E\setminus I such that |Py|=2|P_{y^{\prime}}|=2\ell, xPyx\not\in P_{y^{\prime}}, {y,x}12\{y^{\prime},x\}\not\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}, and I(Py+x)12I\triangle(P_{y^{\prime}}+x)\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}, then halt and return Py+xP_{y^{\prime}}+x.
  (iii) For each yIy\in I with Py=𝗇𝗎𝗅𝗅P_{y}={\mathsf{null}}, if there exist yIy^{\prime}\in I and xEIx\in E\setminus I such that |Py|=2|P_{y^{\prime}}|=2\ell, xPyx\not\in P_{y^{\prime}}, {y,x}12\{y^{\prime},x\}\not\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}, and I(Py+x+y)12I\triangle(P_{y^{\prime}}+x+y)\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}, then set PyPy+x+yP_{y}\leftarrow P_{y^{\prime}}+x+y.
Algorithm 5 (EmulatingBFS[E,12,I,s][E,{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2},I,s]).
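To make the oracle-access pattern of Algorithm 5 concrete, here is a Python sketch of EmulatingBFS; the predicate common_independent is a hypothetical stand-in for the common independence oracle, and the list-based representation of the sequences P_y is our own choice.

```python
def emulating_bfs(E, common_independent, I, s):
    """Sketch of EmulatingBFS (Algorithm 5) using only a common independence oracle.

    E: iterable of all elements; I: current common independent set (a set);
    s: an element of E outside I.  common_independent(X) is assumed to decide
    whether X lies in I_1 cap I_2.  Returns a list P such that the symmetric
    difference of I and P is common independent of size |I| + 1, or None ("No").
    """
    I = set(I)
    # Step 1: s alone can be added.
    if common_independent(I | {s}):
        return [s]
    # Step 2: candidate paths of length 2.
    P = {y: ([s, y] if common_independent((I | {s}) - {y}) else None) for y in I}
    outside = [x for x in E if x not in I]
    ell = 1
    while True:
        frontier = [yp for yp in I if P[yp] is not None and len(P[yp]) == 2 * ell]
        # Step 3(i): nothing at distance 2*ell, so no augmenting path exists.
        if not frontier:
            return None
        # Step 3(ii): try to finish a path at a sink.
        for yp in frontier:
            for x in outside:
                if x in P[yp]:
                    continue
                if (not common_independent({yp, x})
                        and common_independent(I ^ set(P[yp] + [x]))):
                    return P[yp] + [x]
        # Step 3(iii): extend paths by two elements to reach new vertices of I.
        for y in I:
            if P[y] is not None:
                continue
            for yp in frontier:
                extended = False
                for x in outside:
                    if x in P[yp]:
                        continue
                    if (not common_independent({yp, x})
                            and common_independent(I ^ set(P[yp] + [x, y]))):
                        P[y] = P[yp] + [x, y]
                        extended = True
                        break
                if extended:
                    break
        ell += 1
```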

By the algorithm, it is clear that the output is either a sequence PP with IP12I\triangle P\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and |IP|=k+1|I\triangle P|=k+1 or a message “No”. Also, if sSITIs\in S_{I}\cap T_{I}, we see that ss itself is a shortest ssTIT_{I} path and is returned at Step 1. Therefore, we assume sSITIs\in S_{I}\setminus T_{I} and show that a shortest ssTIT_{I} path is returned if such a path exists. We denote by dist(s,TI)\operatorname{dist}(s,T_{I}) the length (i.e., the number of vertices) of a shortest ssTIT_{I} path in D[I]D^{\prime}[I] and by dist(s,y)\operatorname{dist}(s,y) the length of a shortest ssyy path in D[I]D^{\prime}[I] for each yIy\in I. Note that dist(s,TI)\operatorname{dist}(s,T_{I}) is odd and dist(s,y)\operatorname{dist}(s,y) is even for any yIy\in I.

Lemma 5.1.

Let sSITIs\in S_{I}\setminus T_{I}. The following hold for any yIy\in I and any =1,2,\ell=1,2,\dots.

  (a) PyP_{y} is defined in Step 2 if and only if dist(s,y)=2\operatorname{dist}(s,y)=2. If defined, it is a shortest ssyy path in D[I]D^{\prime}[I].

  (b) A sequence is returned in the \ellth process of Step 3(ii) if and only if dist(s,TI)=2+1\operatorname{dist}(s,T_{I})=2\ell+1. The returned sequence is a shortest ssTIT_{I} path in D[I]D^{\prime}[I].

  (c) PyP_{y} is defined in the \ellth process of Step 3(iii) if and only if dist(s,y)=2+2<dist(s,TI)\operatorname{dist}(s,y)=2\ell+2<\operatorname{dist}(s,T_{I}). If defined, it is a shortest ssyy path in D[I]D^{\prime}[I].

Proof.

For any yIy\in I, dist(s,y)=2\operatorname{dist}(s,y)=2 means (s,y)A2[I](s,y)\in A^{\prime}_{2}[I], which is equivalent to I+sy2I+s-y\in{\mathcal{I}}_{2} as sTIs\not\in T_{I}. Since sSIs\in S_{I} implies I+sy1I+s-y\in{\mathcal{I}}_{1}, it follows that I+sy12I+s-y\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} holds if and only if (s,y)A2[I](s,y)\in A^{\prime}_{2}[I]. When (s,y)A2[I](s,y)\in A^{\prime}_{2}[I], clearly sysy is a shortest ssyy path. Thus, (a) is shown.

We show (b) and (c) by induction on \ell. Suppose that they hold for 1,2,,11,2,\dots,\ell-1 and that we are at the beginning of the \ellth process of Step 3. Then dist(s,TI)2+1\operatorname{dist}(s,T_{I})\geq 2\ell+1 because otherwise the algorithm would have halted earlier. Take any yIy^{\prime}\in I with |Py|=2|P_{y^{\prime}}|=2\ell. Then

  • PyP_{y^{\prime}} is a shortest ssyy^{\prime} path in D[I]D^{\prime}[I] by (a) and induction for (c);

  • cl2(I)=cl2(IPy)\mathrm{cl}_{2}(I)=\mathrm{cl}_{2}(I\triangle P_{y^{\prime}}) because

    • |I|=|IPy||I|=|I\triangle P_{y^{\prime}}| and IPy2I\triangle P_{y^{\prime}}\in{\mathcal{I}}_{2} hold by the algorithm (Steps 2 and 3(iii)), and

    • PyTI=P_{y^{\prime}}\cap T_{I}=\emptyset follows from dist(s,TI)2+1\operatorname{dist}(s,T_{I})\geq 2\ell+1.

As PyP_{y^{\prime}} is a path in D[I]D^{\prime}[I], it uses arcs in A2[I]A^{\prime}_{2}[I] and A1[I]A^{\prime}_{1}[I] alternately. Let N1N_{1} and N2N_{2} be the sets of those arcs of A1[I]A^{\prime}_{1}[I] and A2[I]A^{\prime}_{2}[I], respectively. Since sEIs\in E\setminus I and yIy^{\prime}\in I, the arc sets N1N_{1} and N2N_{2} form matchings that cover V(Py)syV(P_{y^{\prime}})-s-y^{\prime} and V(Py)V(P_{y^{\prime}}), respectively (recall that we denote by V(P)V(P) the set of elements in a sequence PP to emphasize that we focus on the set rather than the sequence).

Take any xEIx\in E\setminus I with xPyx\not\in P_{y^{\prime}}. The following claim completes the proof of (b).

Claim 5.2.

(y,x)A1[I](y^{\prime},x)\in A^{\prime}_{1}[I] and xTIx\in T_{I} if and only if {y,x}12\{y^{\prime},x\}\not\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and I(Py+x)12I\triangle(P_{y^{\prime}}+x)\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}.

Proof.

For the “only if” part, suppose (y,x)A1[I](y^{\prime},x)\in A^{\prime}_{1}[I] and xTIx\in T_{I}. As 𝐌𝟏\bf{M}_{1} is a partition matroid with all-one upper bounds, (y,x)A1[I](y^{\prime},x)\in A^{\prime}_{1}[I] means that {y,x}\{y^{\prime},x\} is a circuit in 𝐌𝟏\bf{M}_{1}, and hence {y,x}12\{y^{\prime},x\}\not\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}. Since sSIs\in S_{I} and N1+(y,x)N_{1}+(y^{\prime},x) forms a matching that covers V(Py+x)sV(P_{y^{\prime}}+x)-s, we have I(Py+x)1I\triangle(P_{y^{\prime}}+x)\in{\mathcal{I}}_{1}. (In I(Py+x)I\triangle(P_{y^{\prime}}+x), each element in I(Py+xs)I\cap(P_{y^{\prime}}+x-s) is replaced by another element in the same partition class and ss comes from a partition class whose element is not used in II.) Also, cl2(I)=cl2(IPy)\mathrm{cl}_{2}(I)=\mathrm{cl}_{2}(I\triangle P_{y^{\prime}}) and xTIx\in T_{I} imply I(Py+x)=(IPy)+x2I\triangle(P_{y^{\prime}}+x)=(I\triangle P_{y^{\prime}})+x\in{\mathcal{I}}_{2}. Thus, the “only if” part is shown.

For the “if” part, suppose {y,x}12\{y^{\prime},x\}\not\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and I(Py+x)12I\triangle(P_{y^{\prime}}+x)\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}. As cl2(I)=cl2(IPy)\mathrm{cl}_{2}(I)=\mathrm{cl}_{2}(I\triangle P_{y^{\prime}}) holds, I(Py+x)2I\triangle(P_{y^{\prime}}+x)\in{\mathcal{I}}_{2} implies xTIx\in T_{I}. Then {y,x}2\{y^{\prime},x\}\in{\mathcal{I}}_{2}, and hence {y,x}12\{y^{\prime},x\}\not\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} implies that {y,x}\{y^{\prime},x\} is a circuit in 𝐌𝟏\bf{M}_{1}. Thus (y,x)A1[I](y^{\prime},x)\in A^{\prime}_{1}[I]. ∎

Suppose that we are at the beginning of the \ellth process of Step 3(iii). Take yy^{\prime} and xx as before and take any yIy\in I such that PyP_{y} is undefined. Then dist(s,y)>2\operatorname{dist}(s,y)>2\ell by (a) and induction for (c). Also (y,x)A1[I](y^{\prime},x)\in A^{\prime}_{1}[I] implies xTIx\not\in T_{I}, since otherwise the algorithm would have halted at Step 3(ii). The following claim completes the proof of (c).

Claim 5.3.

Assume that (y,x)A1[I](y^{\prime},x)\in A^{\prime}_{1}[I] implies xTIx\not\in T_{I}. Then (y,x)A1[I](y^{\prime},x)\in A^{\prime}_{1}[I] and (x,y)A2[I](x,y)\in A^{\prime}_{2}[I] if and only if {y,x}12\{y^{\prime},x\}\not\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and I(Py+x+y)12I\triangle(P_{y^{\prime}}+x+y)\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}.

Proof.

For the “only if” part, suppose (y,x)A1[I](y^{\prime},x)\in A^{\prime}_{1}[I] and (x,y)A2[I](x,y)\in A^{\prime}_{2}[I]. Similarly to the proof of Claim 5.2, (y,x)A1[I](y^{\prime},x)\in A^{\prime}_{1}[I] implies {y,x}12\{y^{\prime},x\}\not\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and I(Py+x)1I\triangle(P_{y^{\prime}}+x)\in{\mathcal{I}}_{1}, and hence I(Py+x+y)=(I(Py+x))y1I\triangle(P_{y^{\prime}}+x+y)=(I\triangle(P_{y^{\prime}}+x))-y\in{\mathcal{I}}_{1}. Suppose, to the contrary, I(Py+x+y)2I\triangle(P_{y^{\prime}}+x+y)\not\in{\mathcal{I}}_{2}. Since N2+(x,y)A2[I]N_{2}+(x,y)\subseteq A^{\prime}_{2}[I] forms a perfect matching on V(Py+x+y)V(P_{y^{\prime}}+x+y), Lemma 2.4 implies that there exists some other perfect matching N2A2[I]N^{\prime}_{2}\subseteq A_{2}[I] on V(Py+x+y)V(P_{y^{\prime}}+x+y). This N2N^{\prime}_{2} is included in A2[I]A^{\prime}_{2}[I] because (Py+x+y)TI=(P_{y^{\prime}}+x+y)\cap T_{I}=\emptyset follows from PyTI=P_{y^{\prime}}\cap T_{I}=\emptyset and xTIx\not\in T_{I}. Then, D[I]D^{\prime}[I] contains an ssyy path with arcs in N1N2N_{1}\cup N^{\prime}_{2} and length at most 22\ell, which contradicts dist(s,y)>2\operatorname{dist}(s,y)>2\ell.

For the “if” part, suppose {y,x}12\{y^{\prime},x\}\not\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and I(Py+x+y)12I\triangle(P_{y^{\prime}}+x+y)\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}. By Lemma 2.3, I(Py+x+y)2I\triangle(P_{y^{\prime}}+x+y)\in{\mathcal{I}}_{2} implies that A2[I]A_{2}[I] contains a perfect matching N2′′N_{2}^{\prime\prime} on V(Py+x+y)V(P_{y^{\prime}}+x+y). Suppose, to the contrary, that (x,y)N2′′(x,y)\not\in N^{\prime\prime}_{2}. Then (x,y)N2′′(x^{*},y)\in N_{2}^{\prime\prime} for some xPyx^{*}\in P_{y^{\prime}}, and PyTI=P_{y^{\prime}}\cap T_{I}=\emptyset implies (x,y)A2[I](x^{*},y)\in A^{\prime}_{2}[I]. Hence D[I]D^{\prime}[I] has an ssyy path with length at most 22\ell, a contradiction. Thus, (x,y)N2′′A2[I](x,y)\in N^{\prime\prime}_{2}\subseteq A_{2}[I], which implies xTIx\in T_{I} or (x,y)A2[I](x,y)\in A^{\prime}_{2}[I], where the latter implies y\in C_{2}(I,x) and hence C_{2}(I,x)\not\subseteq\{y^{\prime},x\}. Thus, in both cases, {y,x}2\{y^{\prime},x\}\in{\mathcal{I}}_{2}. Therefore, {y,x}12\{y^{\prime},x\}\not\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} implies {y,x}1\{y^{\prime},x\}\not\in{\mathcal{I}}_{1}, and hence yC1(I,x)y^{\prime}\in C_{1}(I,x), and (y,x)A1[I](y^{\prime},x)\in A^{\prime}_{1}[I] follows. By assumption, we then have xTIx\not\in T_{I}, and hence (x,y)A2[I](x,y)\in A^{\prime}_{2}[I]. ∎

Thus, both (b) and (c) hold for \ell. ∎

Lemma 5.1 establishes the correctness of Algorithm 5, stated as the following lemma.

Lemma 5.4.

The output of EmulatingBFS[E,12,I,s][E,{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2},I,s] is always correct.

Proof.

If dist(s,TI)=1\operatorname{dist}(s,T_{I})=1, i.e., if sSITIs\in S_{I}\cap T_{I}, then the algorithm correctly returns ss at Step 1. If dist(s,TI)=2+1>1\operatorname{dist}(s,T_{I})=2\ell+1>1, then sSITIs\in S_{I}\setminus T_{I}, and hence Lemma 5.1(b) implies that a shortest ssTIT_{I} path is returned in the \ellth process of Step 3(ii). ∎

Using EmulatingBFS (Algorithm 5) as a subroutine, we design a procedure that emulates Augment[E,1,2,I][E,{\mathcal{I}}_{1},{\mathcal{I}}_{2},I] with the common independence oracle.

Input : Oracle access to 12{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} where 1{\mathcal{I}}_{1} is the independent set family of a partition matroid, and a common independent set I1k2kI\in{\mathcal{I}}^{k}_{1}\cap{\mathcal{I}}^{k}_{2}.
Output : A common independent set J1k+12k+1J\in{\mathcal{I}}^{k+1}_{1}\cap{\mathcal{I}}^{k+1}_{2} if one exists, or a message “No” otherwise.
1. If I+x12I+x\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} for some xEIx\in E\setminus I, then halt and return J=I+xJ=I+x.
2. For each sEIs\in E\setminus I, perform EmulatingBFS[E,12,I,s][E,{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2},I,s]. If some sequence PP is returned, then return J=IPJ=I\triangle P. Otherwise return the message “No”.
Algorithm 6 (AugmentCommonIndependencePartition[E,12,I][E,{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2},I]).
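The following Python sketch mirrors Algorithm 6, again with common_independent as a hypothetical stand-in for the common independence oracle; it reuses the emulating_bfs sketch given after Algorithm 5.

```python
def augment_common_independence_partition(E, common_independent, I):
    """Sketch of Algorithm 6: one augmentation step with the common independence oracle only.

    Assumes I_1 is a partition matroid with all-one upper bounds and reuses the
    emulating_bfs sketch given after Algorithm 5.  Returns a common independent set of
    size |I| + 1, or None (the answer "No").
    """
    I = set(I)
    outside = [x for x in E if x not in I]
    # Step 1: a single new element already augments I.
    for x in outside:
        if common_independent(I | {x}):
            return I | {x}
    # Step 2: run the emulated BFS from every element outside I.
    for s in outside:
        P = emulating_bfs(E, common_independent, I, s)
        if P is not None:
            return I ^ set(P)  # the symmetric difference of I and P
    return None
```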
Lemma 5.5.

The output of AugmentCommonIndependencePartition[E,12,I][E,{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2},I] (Algorithm 6) is always correct.

Proof.

The output in Step 1 is clearly correct. As Step 2 returns some sequence PP only if IPI\triangle P is indeed a common independent set of size k+1k+1, it suffices to show that if there exists a common independent set JJ of size k+1k+1, then EmulatingBFS[E,12,I,s][E,{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2},I,s] returns a sequence for some sEIs\in E\setminus I. By the correctness of Augment[E,1,2,I][E,{\mathcal{I}}_{1},{\mathcal{I}}_{2},I], the existence of such JJ implies that D[I]D[I] has some SIS_{I}TIT_{I} path, and so does D[I]D^{\prime}[I]. Then D[I]D^{\prime}[I] contains an ssTIT_{I} path for some sSIs\in S_{I}. For such ss, EmulatingBFS[E,12,I,s][E,{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2},I,s] returns a shortest ssTIT_{I} path in D[I]D^{\prime}[I]. ∎

Lemma 5.5 completes the proof of Theorem 1.2.

Proof of Theorem 1.2.

Starting from II\coloneqq\emptyset, the size of the common independent set can be gradually increased using AugmentCommonIndependencePartition[E,12,I][E,{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2},I] until II becomes a common independent set of maximum cardinality. The correctness of the algorithm follows by Lemma 5.5. ∎

5.2 Intersection with Elementary Split Matroid

Motivated by the study of matroid polytopes from a tropical geometry point of view, Joswig and Schröter [17] introduced the notion of split matroids. This class not only generalizes paving matroids, but is also closed under duality and taking minors. Bérczi, Király, Schwarcz, Yamaguchi and Yokoi [4] later observed that every split matroid can be obtained as the direct sum of a so-called elementary split matroid and uniform matroids. Elementary split matroids capture all the nice properties of connected split matroids, and the class is closed not only under duality and taking minors but also under truncation. Motivated by representations of paving matroids by hypergraphs, they provided the following hypergraph characterization of elementary split matroids.

Let EE be a ground set of size at least rr, ={H1,,Hq}{\mathcal{H}}=\{H_{1},\dots,H_{q}\} be a (possibly empty) collection of subsets of EE (called hyperedges), and r,r1,,rqr,r_{1},\dots,r_{q} be nonnegative integers satisfying

|HiHj|\displaystyle|H_{i}\cap H_{j}| ri+rjr\displaystyle\leq r_{i}+r_{j}-r for 1i<jq1\leq i<j\leq q, (H1)
|EHi|+ri\displaystyle|E\setminus H_{i}|+r_{i} r\displaystyle\geq r for i=1,,qi=1,\dots,q. (H2)

Then ={XE|X|r,|XHi|rifor 1iq}{\mathcal{I}}=\{\,X\subseteq E\mid|X|\leq r,\ |X\cap H_{i}|\leq r_{i}\ \text{for $1\leq i\leq q$}\,\} forms the family of independent sets of a rank-rr matroid MM with rank function rM(Z)=min{r,|Z|,min1iq{|ZHi|+ri}}r_{M}(Z)=\min\big{\{}r,|Z|,\min_{1\leq i\leq q}\{|Z\setminus H_{i}|+r_{i}\}\big{\}}. Matroids that can be obtained in this form are called elementary split matroids.
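The hypergraph representation directly yields an independence test and the rank formula above; the following sketch is our own illustration, assuming the hyperedges are given as Python sets together with the bounds r, r_1, …, r_q satisfying (H1) and (H2).

```python
def es_independent(X, r, hyperedges):
    """Independence test in an elementary split matroid given by r and pairs (H_i, r_i)."""
    return len(X) <= r and all(len(X & H) <= ri for H, ri in hyperedges)

def es_rank(Z, r, hyperedges):
    """Rank formula r_M(Z) = min{ r, |Z|, min_i (|Z \\ H_i| + r_i) }."""
    return min([r, len(Z)] + [len(Z - H) + ri for H, ri in hyperedges])
```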

We call a set FEF\subseteq E HiH_{i}-tight or tight with respect to HiH_{i} if |FHi|=ri|F\cap H_{i}|=r_{i}. The following lemma shows that an independent set of size less than rr cannot be tight with respect to two different hyperedges.

Lemma 5.6.

Let MM be an elementary split matroid with representation ={H1,,Hq}{\mathcal{H}}=\{H_{1},\dots,H_{q}\} and r,r1,,rqr,r_{1},\dots,r_{q}, and let FF be a set of size less than rr. Then FF is tight with respect to at most one of the hyperedges.

Proof.

Suppose to the contrary that FF is both HiH_{i}- and HjH_{j}-tight. Then we get

|HiHj|\displaystyle|H_{i}\cap H_{j}|{} |FHiHj|=|FHi|+|FHj||F(HiHj)|\displaystyle{}\geq|F\cap H_{i}\cap H_{j}|=|F\cap H_{i}|+|F\cap H_{j}|-|F\cap(H_{i}\cup H_{j})|
ri+rj|F|>ri+rjr,\displaystyle{}\geq r_{i}+r_{j}-|F|>r_{i}+r_{j}-r,

contradicting (H1). ∎

Now we show that the weighted matroid intersection problem is tractable under the common independence oracle when 𝐌1{\mathbf{M}}_{1} is an elementary split matroid, that is, when 1{\mathcal{I}}_{1} can be represented as 1={XE|X|r,|XHi|rifor 1iq}{\mathcal{I}}_{1}=\{\,X\subseteq E\mid|X|\leq r,\ |X\cap H_{i}|\leq r_{i}\ \text{for $1\leq i\leq q$}\,\} for some (possibly empty) hypergraph ={H1,,Hq}{\mathcal{H}}=\{H_{1},\dots,H_{q}\} and nonnegative integers r,r1,,rqr,r_{1},\dots,r_{q} satisfying (H1) and (H2). The proof is based on observing that the exchangeability graph has a special structure.

Proof of Theorem 1.3.

Suppose that we have oracle access to the common independent set family 12{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} of two matroids on EE, where 1{\mathcal{I}}_{1} is the independent set family of an elementary split matroid. Consider a ww-maximal set I1k2kI\in{\mathcal{I}}_{1}^{k}\cap{\mathcal{I}}_{2}^{k} for some k{0,1,,n1}k\in\{0,1,\dots,n-1\}. According to Algorithm 2 and Lemma 4.1, a ww-maximal set J1k+12k+1J\in{\mathcal{I}}_{1}^{k+1}\cap{\mathcal{I}}_{2}^{k+1}, if one exists, can be obtained in the form J=IPJ=I\triangle P where PP is a shortest cheapest SIS_{I}–TIT_{I} path in D[I]D^{\prime}[I].

Claim 5.7.

If 1k+12k+1{\mathcal{I}}_{1}^{k+1}\cap{\mathcal{I}}_{2}^{k+1}\neq\emptyset, then there exists a shortest cheapest SIS_{I}TIT_{I} path in D[I]D^{\prime}[I] of length at most 33.

Proof.

As 1k+12k+1{\mathcal{I}}_{1}^{k+1}\cap{\mathcal{I}}_{2}^{k+1}\neq\emptyset, there necessarily exists an SIS_{I}–TIT_{I} path in D[I]D^{\prime}[I]; let P=e1e2eP=e_{1}e_{2}\cdots e_{\ell} be a shortest cheapest one. As II is not a basis of 𝐌1{\mathbf{M}}_{1}, observe that, for xEIx\in E\setminus I, we have I+x1I+x\notin{\mathcal{I}}_{1} if and only if there exists a hyperedge HiH_{i} such that II is HiH_{i}-tight and xHix\in H_{i}. By Lemma 5.6, II is tight with respect to at most one of the hyperedges. If II is tight with respect to no hyperedge, then S_{I}=E\setminus I and PP has length 11, so we may assume that II is HiH_{i}-tight for exactly one index ii. Hence we get that E(SII)HiE\setminus(S_{I}\cup I)\subseteq H_{i}. This also implies that A1[I]={(y,x)xHiI,yHiI}A^{\prime}_{1}[I]=\{\,(y,x)\mid x\in H_{i}\setminus I,~{}y\in H_{i}\cap I\,\}.

Assume that the length of the path is more than 3. Then, by the above observation, both (e_{2},e_{\ell}) and (e_{\ell-1},e_{3}) are arcs of D^{\prime}[I]. Therefore C\coloneqq e_{3}e_{4}\cdots e_{\ell-1}e_{3} is a cycle in D^{\prime}[I], and P^{\prime}\coloneqq e_{1}e_{2}e_{\ell} is an S_{I}–T_{I} path in D^{\prime}[I]. By Lemma 2.5, the cost of C is nonnegative, and hence the cost of P^{\prime} is at most the cost of P; as P^{\prime} is shorter than P, this contradicts the choice of P. ∎

By Claim 5.7, a ww-maximal member of 1k+12k+1{\mathcal{I}}_{1}^{k+1}\cap{\mathcal{I}}_{2}^{k+1}, if one exists, can be found by checking every set II^{\prime} of size k+1k+1 with |II|3|I\triangle I^{\prime}|\leq 3. This concludes the proof of the theorem. ∎
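As an illustration of this last step (our own sketch, not part of the original algorithm description), one augmentation can be performed by brute force over all candidates I′ with symmetric difference at most 3 from I, using only the common independence oracle and a weight map w.

```python
from itertools import combinations

def augment_elementary_split(E, w, common_independent, I):
    """One augmentation step from the proof of Theorem 1.3 (our brute-force sketch).

    By Claim 5.7 it suffices to inspect every set I' of size |I| + 1 whose symmetric
    difference with I has size at most 3, i.e. I' = I + x or I' = I - y + x + z, and
    to keep a w-maximal common independent candidate.  w maps elements to weights.
    """
    I = set(I)
    outside = [x for x in E if x not in I]
    best = None

    def consider(J):
        nonlocal best
        if common_independent(J):
            if best is None or sum(w[e] for e in J) > sum(w[e] for e in best):
                best = J

    for x in outside:                      # candidates I + x
        consider(I | {x})
    for x, z in combinations(outside, 2):  # candidates I - y + x + z
        for y in I:
            consider((I - {y}) | {x, z})
    return best  # None means no common independent set of size |I| + 1 exists
```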

5.3 Complemented with Maximum Rank Oracle

When access is given to both a common independence and a maximum rank oracle, every step of Algorithms 3 and 4, i.e., EmulatingBellmanFord and CheapestPathAugmentRankSum, can be emulated, and hence the weighted matroid intersection problem can be solved as in Section 4.

Proof of Theorem 1.4.

Suppose that we have oracle access to the common independent set family 12{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and the maximum rank function rmax{r_{\mathrm{max}}} of two matroids on EE, instead of access to rsum{r_{\mathrm{sum}}}. In EmulatingBellmanFord and CheapestPathAugmentRankSum, we ask the rank sum oracle the following types of questions:

  (a) whether rsum(I+x)=2|I|{r_{\mathrm{sum}}}(I^{\prime}+x)=2|I^{\prime}| or not for I12I^{\prime}\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and xEIx\in E\setminus I^{\prime},

  (b) whether rsum(I+x)=2|I|+1{r_{\mathrm{sum}}}(I^{\prime}+x)=2|I^{\prime}|+1 or not for I12I^{\prime}\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and xEIx\in E\setminus I^{\prime},

  (c) whether rsum(I+x)=2|I|+2{r_{\mathrm{sum}}}(I^{\prime}+x)=2|I^{\prime}|+2 or not for I12I^{\prime}\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and xEIx\in E\setminus I^{\prime}, and

  (d) whether rsum(I)=2|I|{r_{\mathrm{sum}}}(I^{\prime})=2|I^{\prime}| or not for IEI^{\prime}\subseteq E.

These questions can be tested using the common independence and the maximum rank oracles together as follows. The answer to (a) is “Yes” if and only if I+x12I^{\prime}+x\notin{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and rmax(I+x)=|I|{r_{\mathrm{max}}}(I^{\prime}+x)=|I^{\prime}|, the answer to (b) is “Yes” if and only if I+x12I^{\prime}+x\not\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2} and rmax(I+x)=|I|+1{r_{\mathrm{max}}}(I^{\prime}+x)=|I^{\prime}|+1, the answer to (c) is “Yes” if and only if I+x12I^{\prime}+x\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}, and the answer to (d) is “Yes” if and only if I12I^{\prime}\in{\mathcal{I}}_{1}\cap{\mathcal{I}}_{2}. ∎
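The translation above can be summarized in a short sketch (ours; common_independent and r_max stand for the two assumed oracles):

```python
def rank_sum_query(common_independent, r_max, I, x=None, offset=0):
    """Answer the rank sum queries (a)-(d) via the common independence and max rank oracles.

    With x given, decides whether r_sum(I + x) == 2|I| + offset for offset in {0, 1, 2}
    (queries (a), (b), (c)); with x omitted, decides whether r_sum(I) == 2|I| (query (d)).
    I is assumed to be common independent for queries (a)-(c).
    """
    I = set(I)
    if x is None:                                              # query (d)
        return common_independent(I)
    J = I | {x}
    if offset == 2:                                            # query (c)
        return common_independent(J)
    if offset == 1:                                            # query (b)
        return (not common_independent(J)) and r_max(J) == len(I) + 1
    return (not common_independent(J)) and r_max(J) == len(I)  # query (a)
```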

6 Concluding Remarks

In this paper, we have shown the tractability of unweighted/weighted matroid intersection problems under several restricted oracles. The rank sum oracle, or the combination of the common independence oracle and the maximum rank oracle, is enough to solve the weighted matroid intersection problem in polynomial time (Theorems 1.1 and 1.4). Also, if one matroid is restricted to a partition matroid with all-one upper bound or to an elementary split matroid, the unweighted or weighted problem, respectively, can be solved using only the common independence oracle (Theorems 1.2 and 1.3).

The following two major questions remain open; our subsequent paper [3] tackles the first one.

Question 6.1.

Is there a strongly polynomial-time algorithm for the weighted matroid intersection problem in the minimum rank oracle model? Or can we show the hardness?

Question 6.2.

Is there a strongly polynomial-time algorithm for the unweighted/weighted matroid intersection problem in the common independence oracle model? Or can we show the hardness?

Acknowledgments

We are grateful to Yuni Iwamasa and Taihei Oki for initial discussions on the problem. We would like to thank the reviewers, who read this paper and provided positive and helpful comments.

The work was supported by the Lendület Programme of the Hungarian Academy of Sciences – grant number LP2021-1/2021 and by the Hungarian National Research, Development and Innovation Office – NKFIH, grant numbers FK128673 and TKP2020-NKA-06. Yutaro Yamaguchi was supported by JSPS KAKENHI Grant Numbers 20K19743 and 20H00605, and by Overseas Research Program in Graduate School of Information Science and Technology, Osaka University. Yu Yokoi was supported by JST PRESTO Grant Number JPMJPR212B.

References

  • [1] M. Aigner and T. A. Dowling. Matching theory for combinatorial geometries. Transactions of the American Mathematical Society, 158(1):231–245, 1971.
  • [2] M. Bárász. Matroid intersection for the min-rank oracle. Technical Report QP-2006-03, Egerváry Research Group, Budapest, 2006. http://www.cs.elte.hu/egres/.
  • [3] M. Bárász, K. Bérczi, T. Király, Y. Yamaguchi, and Y. Yokoi. Matroid intersection under minimum rank oracle. In preparation.
  • [4] K. Bérczi, T. Király, T. Schwarcz, Y. Yamaguchi, and Y. Yokoi. Hypergraph characterization of split matroids. Journal of Combinatorial Theory, Series A, 194:105697, 2023.
  • [5] C. Brezovec, G. Cornuéjols, and F. Glover. Two algorithms for weighted matroid intersection. Mathematical Programming, 36(1):39–53, 1986.
  • [6] R. A. Brualdi. Comments on bases in dependence structures. Bulletin of the Australian Mathematical Society, 1(2):161–167, 1969.
  • [7] C. R. Coullard and L. Hellerstein. Independence and port oracles for matroids, with an application to computational learning theory. Combinatorica, 16(2):189–208, 1996.
  • [8] W. H. Cunningham. Improved bounds for matroid partition and intersection algorithms. SIAM Journal on Computing, 15(4):948–957, 1986.
  • [9] J. Edmonds. Submodular functions, matroids, and certain polyhedra. In Combinatorial Structures and Their Applications, pages 69–87. Gordon and Breach, 1970. (Also in Combinatorial Optimization — Eureka, You Shrink!, pages 11–26, Springer, 2003.).
  • [10] J. Edmonds. Matroid intersection. Annals of Discrete Mathematics, 4:39–49, 1979.
  • [11] A. Frank. A weighted matroid intersection algorithm. Journal of Algorithms, 2(4):328–336, 1981.
  • [12] A. Frank. Rooted kk-connections in digraphs. Discrete Applied Mathematics, 157(6):1242–1254, 2009.
  • [13] D. Hausmann and B. Korte. Algorithmic versus axiomatic definitions of matroids. In Mathematical Programming at Oberwolfach, pages 98–111. Springer, 1981.
  • [14] C.-C. Huang, N. Kakimura, and N. Kamiyama. Exact and approximation algorithms for weighted matroid intersection. Mathematical Programming, 177(1-2):85–112, 2019.
  • [15] M. Iri and N. Tomizawa. An algorithm for finding an optimal “independent assignment”. Journal of the Operations Research Society of Japan, 19(1):32–57, 1976.
  • [16] P. M. Jensen and B. Korte. Complexity of matroid property algorithms. SIAM Journal on Computing, 11(1):184–190, 1982.
  • [17] M. Joswig and B. Schröter. Matroids from hypersimplex splits. Journal of Combinatorial Theory, Series A, 151:254–284, 2017.
  • [18] S. Krogdahl. A combinatorial base for some optimal matroid intersection algorithms. Technical Report STAN-CS-74-468, Computer Science Department, Stanford University, Stanford, CA, U.S., 1974.
  • [19] S. Krogdahl. A combinatorial proof for a weighted matroid intersection algorithm. Technical Report Computer Science Report 17, Institute of Mathematical and Physical Sciences, University of Tromso, Tromso, Norway, 1976.
  • [20] S. Krogdahl. The dependence graph for bases in matroids. Discrete Mathematics, 19(1):47–59, 1977.
  • [21] E. L. Lawler. Optimal matroid intersections. In Combinatorial Structures and Their Applications, pages 233–234. Gordon and Breach, 1970.
  • [22] E. L. Lawler. Matroid intersection algorithms. Mathematical Programming, 9(1):31–56, 1975.
  • [23] E. L. Lawler. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart and Winston, 1976.
  • [24] L. Lovász. Matroid matching and some applications. Journal of Combinatorial Theory, Series B, 28(2):208–236, 1980.
  • [25] L. Lovász. The matroid matching problem. In Algebraic Methods in Graph Theory II, pages 495–517. North-Holland, 1981.
  • [26] L. Lovász and M. D. Plummer. Matching Theory. American Mathematical Society, 2009.
  • [27] J. Oxley. Matroid Theory. Oxford University Press, 2011.
  • [28] G. Robinson and D. Welsh. The computational complexity of matroid properties. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 87, pages 29–45. Cambridge University Press, 1980.
  • [29] A. Schrijver. Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003.