
Random Intersection Chains

Qiuqiang Lin ([email protected])
School of Mathematical Sciences
Zhejiang University
Hangzhou 310027, China

Chuanhou Gao ([email protected])
School of Mathematical Sciences
Zhejiang University
Hangzhou 310027, China
Abstract

Interactions between several features sometimes play an important role in prediction tasks. But taking all the interactions into consideration leads to an extremely heavy computational burden. For categorical features, the situation is more complicated, since the input will be extremely high-dimensional and sparse if one-hot encoding is applied. Inspired by association rule mining, we propose a method that selects interactions of categorical features, called Random Intersection Chains. It uses random intersections to detect frequent patterns, then selects the most meaningful ones among them. At first a number of chains are generated, in which each node is the intersection of the previous node and a randomly chosen observation. The frequency of the patterns in the tail nodes is estimated by maximum likelihood estimation, then the patterns with the largest estimated frequency are selected. After that, their confidence is calculated by Bayes' theorem. The most confident patterns are finally returned by Random Intersection Chains. We show that if the number and length of the chains are appropriately chosen, the patterns in the tail nodes are indeed the most frequent ones in the data set. We analyze the computational complexity of the proposed algorithm and prove the convergence of the estimators. The results of a series of experiments verify the efficiency and effectiveness of the algorithm.

Keywords: categorical feature, association rule mining, interaction, intersection

1 Introduction

For practical applications of machine learning algorithms, usually not only the original features but also their interactions play an important role. However, taking all the interactions into consideration leads to an extremely high-dimensional input. For example, even if only the product of two numerical features is considered, there are O(p^2) such interactions for p main effects. As the size of data is growing rapidly nowadays, adding all pairwise interactions into the input may be computationally infeasible, let alone higher-order interactions. What's more, adding all the interactions without selection is possibly harmful to the prediction model, since too much redundant information is brought in.

It is a well-established practice among statisticians to fit models on interactions as well as original features. For example, Bien et al. (2013) add a set of convex constraints to the lasso that honour the hierarchy restriction; Hao and Zhang (2014) tackle the difficulty by forward-selection-based procedures; Hao et al. (2018) consider two-stage LASSO and a new regularization method named RAMP to compute a hierarchy-preserving regularization solution path efficiently; Agrawal et al. (2019) propose to speed up inference in Bayesian linear regression with pairwise interactions by using a Gaussian process and a kernel interaction trick. However, these methods are based on the hierarchy assumption, that is, an interaction is assumed useful only if its lower-order components are also useful. So the theoretical analyses lose efficacy, and the practical performance may be unsatisfactory, in cases where the assumption does not hold.

There is also some work free of the hierarchy assumption. Thanei et al. (2018) propose the xyz algorithm, where the underlying idea is to transform interaction search into a closest pair problem which can be solved efficiently in subquadratic time. Instead of the hierarchy principle, Yu et al. (2019) come up with the reluctant principle, which says that one should prefer main effects over interactions given similar prediction performance.

The above-mentioned work mainly aims to select pairwise interactions. A drawback of these methods is that potentially informative higher-order interactions are overlooked. Random intersection trees (Shah and Meinshausen, 2014) gets over this difficulty by starting with a maximal interaction that includes all variables, and then gradually removing variables if they fail to appear in randomly chosen observations of a class of interest. Another approach is Backtracking (Shah, 2016). It can be incorporated into many existing high-dimensional methods based on penalty functions, and works by building increasing sets of candidate interactions iteratively.

An alternative approach for higher-order interaction selection is extracting interactions from rules. Decision trees, such as ID3 (Quinlan, 1986), C4.5 (Quinlan, 1993) and CART (Breiman et al., 1984), are widely-used models to generate comprehensive rules. They work by partitioning the input space according to whether the features satisfy some conditions, and then assigning a constant to each region. After the tree is built, each path connecting the root node and a leaf node can be transformed into a decision rule by combining the split decisions. The predictions of the leaf nodes are discarded and only the splits are used in the decision rules. RuleFit (Friedman and Popescu, 2008) uses these decision rules as binary features, then fits a linear model on them. According to Qu et al. (2016), however, the exploration ability of tree-based models is restricted for high-dimensional categorical features due to the low usage rate of categorical features.

Besides tree-based models, there is another family of algorithms that generate rules based on the data, namely association rule mining. For a categorical feature X and one of its optional values x, we can treat the pattern "X=x" as an item. In this way we can transform an observation in the data set into a record of items, and association rule mining algorithms can be applied. Mining association rules between items from a large database is one of the most important and well-researched topics of data mining. Let I = {i_1, i_2, ..., i_m} be a set of items; a subset of I is called an itemset. Association rule mining aims to extract rules of the form "X → Y", where X ⊂ I, Y ⊂ I and X ∩ Y = ∅. X is called the antecedent and Y the consequent, and the rule means that X implies Y. The support of an itemset X is the number of records that contain X. For an association rule "X → Y", its support is defined as the fraction of records in the database that contain X ∪ Y, and its confidence is the number of cases in which the rule is correct relative to the number of cases in which it is applicable, or equivalently, support(X ∪ Y)/support(X).

The association rule mining problem was first stated by Agrawal et al. (1993), and further studied by many researchers. Apriori (Agrawal and Srikant, 1994), FP-growth (Han et al., 2000) and H-mine (Pei et al., 2001) are some well-known algorithms for association rule mining. If the antecedents are meaningful for the target, it seems reasonable to use them as features for another classification model rather than as classifiers themselves, just like how RuleFit makes use of decision rules.

Although the common association rule mining algorithms are much faster than a brute-force search, many of them are not suitable for "big data" since they have to go through the whole database multiple times. On the contrary, Random Intersection Trees gets over this difficulty by regarding the intersection of a set of random samples as a frequent pattern. To be more specific, it generates M trees of depth D, in which each node but the leaf nodes has B child nodes, where B follows a pre-specified distribution. The root node contains the items in a uniformly chosen observation from the database, and each node except the root consists of the intersection of its parent node and a uniformly chosen observation. The patterns in the leaf nodes are finally used as interactions. One of the shortcomings of Random Intersection Trees is that the selection is relatively crude, because the subpatterns of the found frequent patterns are neglected, while they are actually more frequent. What's more, Random Intersection Trees aims to find the patterns that are frequent in a class of interest but infrequent in other classes, which are exactly "confident rules". But precise quantities for "frequency" or "confidence" are not provided, which makes it difficult to make a more careful comparison among different patterns. Another problem is that Random Intersection Trees can only deal with binary features. A categorical feature of high cardinality should be one-hot encoded before applying the algorithm, which may result in a very high-dimensional and sparse input.

Inspired by the idea of Random Intersection Trees, we suggest a method that can select useful interactions of categorical features for classification tasks, called Random Intersection Chains. The road map of Random Intersection Chains is listed below.

  1. Generate chains for different classes separately by random intersections;
  2. Calculate the frequency of the patterns in the tail nodes as well as their subpatterns by maximum likelihood estimation;
  3. Select the most frequent patterns;
  4. Calculate the confidence of the most frequent patterns by Bayes' theorem;
  5. Select the most confident patterns.

Our main contributions in this paper can be summarized as follows: (1) an interaction selection method for categorical features in classification tasks, named Random Intersection Chains, is proposed; (2) we show that Random Intersection Chains can find all the frequent patterns while leaving out the infrequent ones if the parameters are appropriately chosen; (3) the computational complexity, including space complexity and time complexity, is analyzed; (4) we prove that the estimated frequency and confidence converge to their true values; (5) a series of experiments is conducted to verify the effectiveness and efficiency of Random Intersection Chains.

The rest of the paper is organized as follows. In Section 2, a brief introduction of some related concepts is given. In Section 3 we introduce Random Intersection Chains, our algorithm for interactive feature selection, in detail. This is followed by an analysis of computational complexity in Section 4. In Section 5, we theoretically analyze the convergence of the estimated frequency and confidence. In Section 6 we report the results of a series of experiments to verify the effectiveness of the algorithm. Finally, the paper is concluded in Section 7. The proofs of some main results are relegated to the Appendix.

2 Preliminaries

In this paper, we consider the classification task involving high-dimensional categorical predictors. Usually the categorical features have a number of optional values. A common approach to deal with such features is one-hot encoding, which transforms a categorical feature to a large number of binary variables. But this method will make the input extremely high-dimensional and sparse. To avoid this difficulty, label encoding is adopted in this paper, which maps each category to a specific integer. For example, there may be three categories, namely “red”, “green” and “blue”, in a feature representing colors. Then we replace “red”, “green” and “blue” with 1, 2 and 3, respectively. It’s worth noting that these integers can only be used to check whether two values are identical or different, while their numerical relationships should be ignored.
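
As a concrete illustration of label encoding, the following minimal sketch (toy data, not the paper's code) maps each category of a feature to an integer in order of first appearance.

```python
# Minimal label-encoding sketch on toy data. Only equality of the resulting
# integers is meaningful; their numerical order should be ignored.
colors = ["red", "green", "blue", "green", "red"]

# Build the category-to-integer mapping in order of first appearance.
mapping = {}
for c in colors:
    if c not in mapping:
        mapping[c] = len(mapping) + 1   # e.g. red -> 1, green -> 2, blue -> 3

encoded = [mapping[c] for c in colors]
print(encoded)  # [1, 2, 3, 2, 1]
```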

Suppose C_1, C_2, ..., C_p are p categorical features, and C is the set of classification labels. The given data set is of the form D = {X, y}, where X ∈ ℕ^{N×p} contains the records of N observations and y ∈ C^N indicates the labels of these observations. The i-th row of X and the i-th component of y are denoted by X_i and y_i, respectively. Suppose X_i = [c_1, c_2, ..., c_p] is an observation in the data set; then it can be viewed from two aspects. First, if we treat c_1, c_2, ..., c_p as integers, X_i is naturally a vector of dimension p. Alternatively, c_1, c_2, ..., c_p can be seen as items, so that X_i is a record consisting of p items. Therefore, X can be regarded as a data set for machine learning algorithms, or as a database for data mining algorithms.

For a variable C_j and one of its possible values c_j, we use "C_j = c_j" to represent 1_{C_j = c_j}, a binary feature that indicates whether the value of variable C_j is c_j. "C_j = c_j" also stands for an item that only appears in the records where the value of variable C_j is c_j. Similarly, for {j_1, j_2, ..., j_k} ⊆ {1, 2, ..., p}, suppose C_{j_1}, C_{j_2}, ..., C_{j_k} are k variables and c_{j_1}, c_{j_2}, ..., c_{j_k} are corresponding possible values. Then a pattern s = "C_{j_1} = c_{j_1}, C_{j_2} = c_{j_2}, ..., C_{j_k} = c_{j_k}" can be comprehended as a logical expression, a binary feature 1_{s ⊆ X}, or an itemset containing k items. We call such an expression a "k-order interaction". This definition coincides with the term "interaction" used by Shah and Meinshausen (2014), and reduces to the latter if c_j = 1 for all j ∈ {j_1, j_2, ..., j_k}. Also, the interaction defined here is a non-additive interaction (Friedman and Popescu, 2008; Sorokina et al., 2008; Tsang et al., 2017), since it cannot be represented by a linear combination of lower-order interactions. In this paper, we use the terms "interaction", "itemset" and "pattern" interchangeably to describe such expressions. When there is no ambiguity about the classification label c, we also refer to the expression "s → c" as a "rule".

The frequency of an interaction s for class c is defined as the ratio of the records containing s with label c to all the records with label c, and is denoted by

p_s^{(c)} = \mathbb{P}_N(s \subseteq X \mid Y = c) \coloneqq \frac{1}{|I^{(c)}|} \sum_{i \in I^{(c)}} \mathbbm{1}_{\{s \subseteq X_i\}},   (1)

where I^(c) is the set of observations in class c. The confidence of an interaction s for class c is defined as the ratio of the records containing s with label c to all the records containing s, and is denoted by

q_s^{(c)} = \mathbb{P}_N(Y = c \mid s \subseteq X) \coloneqq \frac{1}{|I_s|} \sum_{i \in I^{(c)}} \mathbbm{1}_{\{s \subseteq X_i\}},   (2)

where I_s is the set of observations containing interaction s. An interaction is said to be frequent if its frequency is large, and confident if its confidence is large for a class.
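
To make the two definitions concrete, the sketch below computes the empirical frequency and confidence of Equations (1) and (2) directly on a toy data set; the data layout and names are illustrative assumptions, not code from the paper.

```python
def frequency(X, y, s, c):
    """Fraction of class-c records that contain pattern s.

    X: list of observations, each a dict {feature_index: category}.
    s: pattern, a dict {feature_index: category}."""
    in_class = [x for x, label in zip(X, y) if label == c]
    contains = sum(all(x.get(j) == v for j, v in s.items()) for x in in_class)
    return contains / len(in_class)

def confidence(X, y, s, c):
    """Fraction of records containing pattern s that have label c."""
    with_s = [label for x, label in zip(X, y)
              if all(x.get(j) == v for j, v in s.items())]
    return sum(label == c for label in with_s) / len(with_s)

X = [{0: 1, 1: 2}, {0: 1, 1: 2}, {0: 1, 1: 3}, {0: 2, 1: 2}]
y = [1, 0, 1, 0]
s = {0: 1, 1: 2}               # pattern "C_0 = 1, C_1 = 2"
print(frequency(X, y, s, 1))   # 0.5
print(confidence(X, y, s, 1))  # 0.5
```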

The main goal of this paper is to efficiently detect the interactions that are both frequent and confident for some classes. Then these interactions are used as the input for a succeeding classification model. Since irrelevant predictors in the original input are dropped and useful high-order interactions are explicitly added, interaction detection is likely to be beneficial for the prediction performance.

3 Random Intersection Chains

In this section we give a naive version of Random Intersection Chains at first, and show that there exist appropriate parameters for it to find all the frequent patterns while the infrequent ones are left out. Then a modification is provided to prevent the exponential computational burden for selecting the most frequent subpatterns from a given pattern.

3.1 A Basic Algorithm

Drawing inspiration from association rule mining and Random Intersection Trees, we suggest an algorithm that can efficiently detect useful interactions, named Random Intersection Chains. Like other data mining algorithms, frequent itemsets are discovered first and then confident rules are generated. But instead of scanning the complete database, we adopt random intersections to mine the frequent itemsets; their frequency and confidence are then calculated with the assistance of maximum likelihood estimation and Bayes' theorem.

The first node of a chain, called the head node, contains the items in a randomly chosen instance. Every other node in the chain contains the intersection of its previous node and a new randomly chosen instance. We repeatedly choose random instances until the length of the chain reaches a pre-defined threshold, or the number of items in the last node (named the tail node) is sufficiently small. Finally the itemsets in the tail nodes as well as their subsets are regarded as frequent itemsets. For example, if we want to generate a chain consisting of three nodes, and the chosen instances are [c_1, c_2, c_3], [c_1, c_2', c_3], [c_1, c_2, c_3'], where c_1 ≠ c_1', c_2 ≠ c_2' and c_3 ≠ c_3', then the chain is [C_1=c_1, C_2=c_2, C_3=c_3] → [C_1=c_1, C_3=c_3] → [C_1=c_1]. The procedure for generating M such chains of length D can be seen directly from Algorithm 1.

Algorithm 1 GenerateChain: generate chains by intersection
Input: {(X_i, y_i)}_{i∈I^(c)} (observations in class c); D (length of a chain); M (number of chains)
Output: chains for class c
1: for m = 1 to M do
2:   Draw a random observation X_{i_1} from the observations
3:   S_{1,m}^(c) ← X_{i_1}
4:   for d = 2 to D do
5:     Draw a random observation X_{i_d} from the observations
6:     S_{d,m}^(c) ← S_{d-1,m}^(c) ∩ X_{i_d}
7:   end for
8: end for
9: return {{S_{d,m}^(c)}_{d=1}^D}_{m=1}^M
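
The following Python sketch mirrors Algorithm 1 under the assumption that each observation is stored as a dict mapping feature indices to category codes; the function and variable names are ours, not from the paper.

```python
import random

def generate_chains(X_class, D, M, seed=None):
    """Sketch of Algorithm 1: returns M chains, each a list of D nested itemsets
    (dicts {feature_index: category}) built from the class-c observations."""
    rng = random.Random(seed)
    chains = []
    for _ in range(M):
        node = dict(rng.choice(X_class))     # head node: a random observation
        chain = [node]
        for _ in range(1, D):
            x = rng.choice(X_class)          # new random observation
            node = {j: v for j, v in node.items() if x.get(j) == v}  # intersection
            chain.append(node)
        chains.append(chain)
    return chains

# Usage on a toy class-c subsample:
X_c = [{0: 1, 1: 2, 2: 3}, {0: 1, 1: 4, 2: 3}, {0: 1, 1: 2, 2: 5}]
chains = generate_chains(X_c, D=3, M=2, seed=0)
```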

The larger frequency a pattern has, the more likely it is to appear in a uniformly chosen instance. Therefore, it is reasonable to assume that the patterns in the tail nodes are more frequent than others. After detecting the frequent patterns, Shah and Meinshausen (2014) adopt an estimator based on min-wise hashing to obtain their frequency. Though this estimator enjoys reduced variance compared to one obtained by subsampling, it seems somewhat redundant and unnatural because it is independent of Random Intersection Trees. Random Intersection Chains, however, can estimate the frequency on their own.

For a pattern s with frequency p_s^(c), denote the number of its appearances in the m-th chain for class c by k_{s,m}^(c). The likelihood of observing this chain is

\mathbb{P}(k_{s,m} \mid p_s) = \begin{cases} p_s^{k_{s,m}}(1 - p_s), & \text{if } k_{s,m} < D \\ p_s^{k_{s,m}}, & \text{if } k_{s,m} = D \end{cases},   (3)

where we omit the superscript "(c)" to keep the notation uncluttered. The likelihood of observing M chains is then given in Equation 4,

\mathbb{P}(\{k_{s,m}\}_{m=1}^{M} \mid p_s) = \prod_{m: k_{s,m} < D} p_s^{k_{s,m}}(1 - p_s) \cdot \prod_{m: k_{s,m} = D} p_s^{k_{s,m}} = p_s^{K_s}(1 - p_s)^{I_s},   (4)

where K_s = \sum_{m=1}^M k_{s,m} and I_s = \sum_{m=1}^M \mathbbm{1}_{\{k_{s,m} < D\}}. Thus the log-likelihood is

\log \mathbb{P}(\{k_{s,m}\}_{m=1}^{M} \mid p_s) = K_s \log p_s + I_s \log(1 - p_s).   (5)

Setting the derivative of Equation 5 with respect to p_s equal to zero and rearranging, we obtain

\hat{p}_s = \frac{K_s}{K_s + I_s} = \frac{\bar{k}_s}{\bar{k}_s + \bar{\chi}_s},   (6)

where \bar{k}_s = \frac{1}{M}\sum_{m=1}^M k_{s,m} and \bar{\chi}_s = \frac{1}{M}\sum_{m=1}^M \mathbbm{1}_{\{k_{s,m} < D\}}.

Algorithm 2 estimates the frequency of a pattern by maximum likelihood estimation based on Random Intersection Chains. Algorithm 3 provides an estimator of confidence by Bayes’ theorem once the frequency is available.

Algorithm 2 Frequency: estimate the frequency according to chains
Input: s (a pattern); {{S_{d,m}^(c)}_{d=1}^D}_{m=1}^M (the chains)
Output: p̂_s^(c) (the frequency)
1: for m = 1 to M do
2:   k_{s,m}^(c) ← arg max_d {s ⊆ S_{d,m}^(c)}
3:   χ_{s,m}^(c) ← 1{k_{s,m}^(c) < D}
4: end for
5: p̂_s^(c) ← k̄_s^(c) / (k̄_s^(c) + χ̄_s^(c)) = Σ_{m=1}^M k_{s,m}^(c) / Σ_{m=1}^M [k_{s,m}^(c) + χ_{s,m}^(c)]
6: return p̂_s^(c)

Algorithm 3 Confidence: estimate the confidence by Bayes' theorem
Input: {p̂_s^(c)}_{c∈C} (frequency); {p^(c)}_{c∈C} (prior probabilities)
Output: q̂_s^(c) (the confidence)
1: q̂_s^(c) ← p̂_s^(c) p^(c) / Σ_{c'∈C} p̂_s^(c') p^(c')
2: return q̂_s^(c)
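
For illustration, the sketch below implements the estimators of Algorithms 2 and 3 on chains in the dict-based representation assumed in the earlier sketch; the function names are ours.

```python
def estimate_frequency(s, chains, D):
    """Algorithm 2 sketch: maximum-likelihood frequency estimate of pattern s
    from the chains of one class (chains as produced by generate_chains)."""
    K, I = 0, 0
    for chain in chains:
        # number of nodes of this chain that contain s
        k = sum(all(node.get(j) == v for j, v in s.items()) for node in chain)
        K += k
        I += 1 if k < D else 0
    return K / (K + I) if K + I > 0 else 0.0

def estimate_confidence(p_hat, priors, c):
    """Algorithm 3 sketch: Bayes' theorem with estimated per-class frequencies.

    p_hat: dict {class: estimated frequency of s}, priors: dict {class: p^(c)}."""
    denom = sum(p_hat[cp] * priors[cp] for cp in priors)
    return p_hat[c] * priors[c] / denom if denom > 0 else 0.0
```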

After the chains are generated, we estimate the frequency of the patterns in the tail nodes. Then the confidence of these patterns is calculated, after which the confident patterns are returned as the useful interactions. We formally describe a basic version of Random Intersection Chains in Algorithm 4, which combines the characteristic of both random intersections and association rule mining.

Algorithm 4 Random Intersection Chains
Input: {(X_i, y_i)}_{i=1}^N (database); D (length of a chain); M (number of chains); ξ (threshold of confidence)
Output: {L^(c)}_{c∈C} (returned patterns)
1: p^(c) ← |I^(c)|/N, for c ∈ C
2: S^(c) ← ∅, for c ∈ C
3: for all c ∈ C do
4:   {{S_{d,m}^(c)}_{d=1}^D}_{m=1}^M ← GenerateChain({(X_i, y_i)}_{i∈I^(c)}, D, M)
5:   S^(c) ← ∪_{m=1}^M {s | s ⊆ S_{D,m}^(c)}
6: end for
7: for all c ∈ C do
8:   L^(c) ← ∅
9:   for all s ∈ S^(c) do
10:     p̂_s^(c') ← Frequency(s, {{S_{d,m}^(c')}_{d=1}^D}_{m=1}^M), for all c' ∈ C
11:     q̂_s^(c) ← Confidence({p̂_s^(c')}_{c'∈C}, {p^(c')}_{c'∈C})
12:     if q̂_s^(c) ≥ ξ then
13:       L^(c) ← L^(c) ∪ {s}
14:     end if
15:   end for
16: end for
17: return {L^(c)}_{c∈C}

We now explain Algorithm 4 line by line. There are three parameters in the algorithm, namely D, M and ξ, among which the first two are inherited from random intersections and the last is from association rule mining. D represents the length of a chain, M stands for the number of chains and ξ is the threshold of confidence.

Line 1 calculates the proportion of each class, where I^(c) represents the indices of observations in class c and C is the set of class labels. These proportions will be used later to calculate the confidence. Line 2 initializes the set of frequent patterns, which is now an empty set, for each class. For every class, M chains are generated at Line 4, then the patterns in the tail nodes as well as all their subpatterns are added to the sets created at Line 2. The frequency of a pattern in every class is calculated at Line 10, after which the confidence is calculated at Line 11. If the confidence of a pattern is larger than the pre-defined threshold, this pattern will be included in the resulting set at Line 13. Finally the resulting set is returned at Line 17.

The longer a chain is, the harder it is for a pattern to appear in its tail node. Conversely, the more chains there are, the more likely a pattern is to be observed in at least one of their tail nodes. By adjusting D and M carefully, we can control which patterns will be considered "frequent". As proven in Theorem 1, there actually exist choices of parameters such that the returned set contains frequent patterns with arbitrarily high probability, while including the infrequent ones with probability lower than any given threshold.

Theorem 1

Given η_1, η_2 ∈ (0,1], for any θ ∈ (0,1], there exist choices of M and D such that the set L^(c) returned by Algorithm 4 contains s with probability at least 1−η_1 if P(s ⊆ X | Y = c) ≥ θ, and with probability at most η_2 if P(s ⊆ X | Y = c) < θ.

From the proof of Theorem 1, it follows that M and D meet the requirements if

\frac{\log(1-p_{1}^{D})}{\log(1-p_{2}^{D})}\geq\frac{a}{b},
M\geq\frac{a}{\log[1/(1-p_{1}^{D})]},

where

p_1 = \min\{p_s : p_s \geq \theta\},
p_2 = \max\{p_s : p_s < \theta\},
a = \log \eta_1^{-1},
b = \log(1 - \eta_2)^{-1}.

So we have Corollary 2.

Corollary 2

M and D meet the requirements in Theorem 1 if M ≥ M* and D ≥ D*, where

D^* = \left\lceil \max\left\{ \frac{\log(b+1) - \log a}{\log(1/p_1)}, \frac{\log 2 + \log a - \log b}{\log(1/p_2) - \log(1/p_1)} \right\} \right\rceil,   (7)
M^* = \left\lceil \frac{a}{\log[1/(1 - p_1^{D^*})]} \right\rceil.   (8)

Usually η_1 and η_2 are small, thus a is large and b is small. Assume a ≥ b+1; then the first term in the braces of Equation 7 is no greater than 0. If a ≥ b/2, the second term in the braces of Equation 7 is no less than 0. In this case, Corollary 3 holds.

Corollary 3

If a ≥ max{b+1, b/2}, then M and D meet the requirements in Theorem 1 if M ≥ M* and D ≥ D*, where

D^* = \left\lceil \frac{\log 2 + \log a - \log b}{\log(1/p_2) - \log(1/p_1)} \right\rceil,   (9)
M^* = \left\lceil \frac{a}{\log[1/(1 - p_1^{D^*})]} \right\rceil.   (10)
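
As a worked example of Corollary 3, the following sketch computes D* and M* for given p_1, p_2, η_1 and η_2 (assuming p_2 < p_1 and a ≥ max{b+1, b/2}); the numbers in the usage line are arbitrary.

```python
import math

def choose_parameters(p1, p2, eta1, eta2):
    """Sketch of the parameter choice in Corollary 3."""
    a = math.log(1.0 / eta1)
    b = math.log(1.0 / (1.0 - eta2))
    D = math.ceil((math.log(2) + math.log(a) - math.log(b))
                  / (math.log(1.0 / p2) - math.log(1.0 / p1)))
    M = math.ceil(a / math.log(1.0 / (1.0 - p1 ** D)))
    return D, M

# e.g. frequent patterns have frequency at least 0.3, infrequent below 0.1
print(choose_parameters(p1=0.3, p2=0.1, eta1=0.05, eta2=0.05))
```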

Compared with Random Intersection Trees, it is more convenient to apply Random Intersection Chains on multi-class classification tasks. The former is originally designed for binary classification, and detects the interesting patterns for one class at a time. But it’s unable to answer which patterns are the most useful ones among different classes. On the contrary, since Random Intersection Chains not only detects the frequent patterns but also estimates their frequency and confidence, we can directly compare the frequency or confidence of patterns in different classes. Thus we can select the patterns from all the classes simultaneously.

3.2 Random Intersection Chains with Priority Queues

Because it avoids scanning the complete database, the naive version of Random Intersection Chains is more efficient than traditional methods on large data sets. Another advantage is that it can extract high-order interactions without discovering all lower-order interactions beforehand. However, this characteristic is also where a drawback of Random Intersection Chains comes from.

It is obvious that any subpattern of a frequent pattern is also frequent. Since the pattern in the tail node of a chain may contain many components, we have in fact obtained a huge number of frequent patterns. For example, if an interaction in a tail node contains k components, then every combination of these components is frequent, so there are indeed 2^k − 1 frequent patterns. It is computationally infeasible to calculate the frequency and confidence of every frequent pattern in this sense, and even enumerating all these subpatterns is impossible due to the exponential complexity.

Random Intersection Trees (Shah and Meinshausen, 2014) gets over this difficulty by giving priority to the largest frequent patterns. That is, the authors only consider the patterns whose every superset is infrequent. They limit their attention to the patterns of the leaf nodes, but ignore their subpatterns. This may not be a satisfactory solution, since a subpattern is more frequent than the complete pattern, and sometimes can also have higher confidence. Overlooking them may lead to the missing of informative interactions. Fortunately, by taking advantage of a data structure named priority queue, we find an approach that selects the most frequent patterns from the power set of a given interaction in polynomial time.

A priority queue is a data structure for maintaining a set of elements, each with an associated value called a key (Cormen et al., 2009). In a priority queue, an element with high priority is served before an element with low priority. Like many implementations, we prefer the element enqueued earlier if two elements have the same priority. This may seem arbitrary, but is meaningful later in this work. In this paper, we give priority to elements with larger keys, so what we used is exactly a max-priority queue. A series of operations should be supported by such a queue, and the following are required in our method.

  • INSERT(S, x, key): inserts the element x with key into the set S, which is equivalent to the operation S = S ∪ {x};

  • EXTRACT-MAX(S): removes and returns the element with the largest key in S;

  • COPY(S): returns a copy of S.

We add a new attribute size to a priority queue, which indicates its maximum capacity. In other words, we will discard an element if its key is not among the S.size largest, which can be done by removing the element with the smallest key after inserting a new element into a full queue. If S.size is zero, then this priority queue will always be empty.
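
A bounded max-priority queue with the INSERT, EXTRACT-MAX and COPY operations described above can be sketched as follows; this is one possible implementation (built on Python's heapq), not the one used in the paper.

```python
import heapq

class BoundedMaxPQ:
    """Bounded max-priority queue: keeps at most `size` elements with the
    largest keys; ties are broken in favour of the element inserted earlier."""

    def __init__(self, size):
        self.size = size
        self._heap = []      # min-heap of (key, -insertion_order, element)
        self._next = 0

    def insert(self, element, key):
        if self.size <= 0:
            return
        heapq.heappush(self._heap, (key, -self._next, element))
        self._next += 1
        if len(self._heap) > self.size:
            heapq.heappop(self._heap)   # drop the smallest key (latest on ties)

    def extract_max(self):
        item = max(self._heap)          # largest key, earliest inserted on ties
        self._heap.remove(item)
        heapq.heapify(self._heap)
        return item[2]

    def copy(self):
        clone = BoundedMaxPQ(self.size)
        clone._heap = list(self._heap)
        clone._next = self._next
        return clone

    def __len__(self):
        return len(self._heap)

    def __contains__(self, element):
        return any(e == element for _, _, e in self._heap)
```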

Algorithm 5 Random intersection chains with priority queue
Input: {(X_i, y_i)}_{i=1}^N (database); D (length of a chain); M (number of chains); d_freq (number of frequent patterns); d_conf (number of confident patterns)
Output: {L^(c)}_{c∈C} (returned patterns)
1: p^(c) ← |I^(c)|/N, for c ∈ C
2: for all c ∈ C do
3:   {{S_{d,m}^(c)}_{d=1}^D}_{m=1}^M ← GenerateChain({(X_i, y_i)}_{i∈I^(c)}, D, M)
4:   Initialize S^(c) as an empty priority queue of size d_freq
5:   for m = 1 to M do
6:     InsertFreqSubset(S^(c), S_{D,m}^(c), {{S_{d,m}^(c)}_{d=1}^D}_{m=1}^M)
7:   end for
8: end for
9: for all c ∈ C do
10:   Initialize L^(c) as an empty priority queue of size d_conf
11:   for all s ∈ S^(c) do
12:     INSERT(L^(c), s, q̂_s^(c))
13:   end for
14: end for
15: return {L^(c)}_{c∈C}

With the priority queues defined above, we come up with Algorithm 5. The main difference between Algorithm 4 and Algorithm 5 lies in the approach to identifying which patterns are frequent or confident. In Algorithm 4, all the patterns in the tail nodes and their subpatterns are considered to be frequent, which may result in heavy computation. On the contrary, Algorithm 5 only takes the d_freq most frequent patterns into consideration. What's more, Algorithm 5 returns the d_conf most confident patterns, while Algorithm 4 identifies confident patterns by a pre-defined threshold.

Algorithm 6 InsertFreqSubset: add the frequent subsets of an itemset
Input: S^(c) (a priority queue); s (an itemset); {{S_{d,m}^(c)}_{d=1}^D}_{m=1}^M (chains)
1: for all x ∈ s do
2:   p̂_x^(c) ← Frequency({x}, {{S_{d,m}^(c)}_{d=1}^D}_{m=1}^M)
3:   INSERT(S^(c), {x}, p̂_x^(c))
4: end for
5: for k = 2 to |s| do
6:   A ← COPY(S^(c))
7:   while |A| > 1 do
8:     a ← EXTRACT-MAX(A)
9:     A.size ← A.size − 1
10:     if |a| = 1 and a ∈ S^(c) then
11:       B ← COPY(A)
12:       while |B| > 0 do
13:         b ← EXTRACT-MAX(B)
14:         B.size ← B.size − 1
15:         if |b| = k−1 and b ∈ S^(c) and a ∩ b = ∅ then
16:           p̂_{a∪b}^(c) ← Frequency(a ∪ b, {{S_{d,m}^(c)}_{d=1}^D}_{m=1}^M)
17:           INSERT(S^(c), a ∪ b, p̂_{a∪b}^(c))
18:           INSERT(A, a ∪ b, p̂_{a∪b}^(c))
19:           INSERT(B, a ∪ b, p̂_{a∪b}^(c))
20:         end if
21:       end while
22:     end if
23:   end while
24: end for

The most important improvement of Algorithm 5 lies in Line 6, where the procedure named "InsertFreqSubset" is used to select the most frequent subsets from the power set of a given itemset. The procedure selects frequent itemsets in a level-wise manner. We know that an itemset cannot be more frequent than any of its subsets, so if an itemset fails to be in the priority queue, so do its supersets. An itemset s consisting of more than one item can be uniquely represented as s = {a} ∪ (s \ {a}), where a is the most frequent item in s (if several items have the same frequency, choose the one enqueued earliest, which coincides with the implementation of our priority queue). {a} is by definition at least as frequent as the other singleton subsets of s \ {a}, and thus at least as frequent as s \ {a}. So if we extract the elements from the priority queue in order, s \ {a} occurs later than {a}. This sheds some light on the search for frequent k-order itemsets. We first take care of the 1-order itemsets in the priority queue. For each 1-order itemset, we only pay attention to the (k−1)-order itemsets that are extracted later than it. If the 1-order itemset and a (k−1)-order itemset are disjoint, then their union is a candidate frequent k-order itemset. We need only calculate the frequency of these k-order itemsets.

The detailed InsertFreqSubset procedure is given in Algorithm 6. Frequent 1-order itemsets are selected in Lines 1-4. This is followed by a for-loop, where the loop index variable k represents the size of the candidate itemsets. Since extracting the elements from a queue changes the queue, we make a copy of the queue at Lines 6 and 11, then apply the EXTRACT-MAX operation on the replica to prevent unwanted changes. The outer while-loop aims to find 1-order itemsets in the priority queue. For each 1-order itemset, the inner while-loop is conducted to find its frequent k-order supersets. A (k−1)-order itemset in the remaining queue is identified at Line 15, then a candidate k-order itemset is generated by combining the 1- and (k−1)-order itemsets, and its frequency is estimated at Line 16. After that the candidate itemset is inserted into the resulting queue S^(c) at Line 17.
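
The level-wise search can be sketched as below, assuming the BoundedMaxPQ class from the previous sketch and a frequency oracle freq (for example a wrapper around the estimate_frequency function above); the capacity-shrinking and re-insertion steps of Lines 9, 14, 18 and 19 are omitted, since they only serve to prune candidates earlier.

```python
def insert_freq_subsets(queue, s, freq):
    """Simplified sketch of Algorithm 6: keep the most frequent subsets of the
    itemset s (a frozenset of items) in the bounded max-priority queue."""
    # Level 1: all singleton subsets of s (Lines 1-4).
    for item in s:
        sub = frozenset([item])
        queue.insert(sub, freq(sub))
    # Level k: combine a frequent singleton with a frequent (k-1)-itemset
    # extracted later than it (Lines 5-24, without the pruning tricks).
    for k in range(2, len(s) + 1):
        A = queue.copy()
        while len(A) > 1:
            a = A.extract_max()
            if len(a) == 1 and a in queue:
                B = A.copy()
                while len(B) > 0:
                    b = B.extract_max()
                    if len(b) == k - 1 and b in queue and not (a & b):
                        cand = a | b
                        queue.insert(cand, freq(cand))
    return queue
```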

The changes of the queue capacity at Lines 9 and 14, as well as the INSERT operations at Lines 18 and 19, have no effect on the resulting queue. But they squeeze out the infrequent 1- or (k−1)-order itemsets as soon as possible, which reduces the number of candidates and speeds up the algorithm.

4 Computational Complexity

In this section, we analyze the computational complexity of generating a chain and of selecting the most frequent subsets of a given itemset, from which we can see that the proposed algorithms are very efficient in some sense.

4.1 Complexity of Chain Generation

Most intuitively, a chain can be represented by recording the itemset at each node. But this not only wastes space, but also makes it troublesome to generate or check a chain. For example, to compute the intersection, Shah and Meinshausen (2014) check whether each component of the current interaction is in the new observation. Every such check is O(log p) even if a binary search is adopted. If most of the components are sufficiently frequent, the size of the interactions will stay close to p, so the time complexity of an intersection can be near O(p log p). The total time needed to generate a chain of length D is then near O(D p log p), and the memory required to store this chain is near O(Dp).

It is worth noting that the itemsets in a chain are nested. That is to say, the itemset at a node (except the head node) must be a subset of its previous node. So rather than record the chain as a series of ordered itemsets, we can view it from the perspective of items. A chain can be represented by two p-dimensional vectors, where the first is a copy of the first randomly chosen observation, and the other records how many times the corresponding item occurs in the chain. For instance, the chain [C_1=c_1, C_2=c_2, C_3=c_3] → [C_1=c_1, C_3=c_3] → [C_1=c_1] can be represented by {[C_1=c_1, C_2=c_2, C_3=c_3], [3, 1, 2]}, or simply {[c_1, c_2, c_3], [3, 1, 2]}. The memory needed to store a chain is thus O(p), independent of its length. The memory required to store M chains is the same as that for 2M observations. For large data sets, the number of observations N can be huge, so an additional 2M observations usually have little influence on the storage requirement. An item is in the i-th node if and only if it occurs at least i times. When adding a new node to the chain, we need only pay attention to the items whose number of occurrences equals the current length of the chain. If these items also appear in the new randomly chosen observation, the intersection can be computed simply by adding one to their number of occurrences. In this way, the time complexity of adding a node is O(p), and generating a chain of length D has a time complexity of O(pD). Making use of this representation, we have Theorem 4 and Theorem 5.
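
The compact representation and the O(p) intersection update can be sketched as follows, assuming observations are stored as label-encoded lists of length p; names are ours.

```python
import random

def generate_chain_compact(X_class, D, seed=None):
    """Sketch of the compact chain representation: a chain is (head, counts),
    where head is the first sampled observation and counts[j] is the number of
    nodes whose itemset still contains the item "C_j = head[j]"."""
    rng = random.Random(seed)
    head = list(rng.choice(X_class))
    counts = [1] * len(head)
    for length in range(1, D):
        x = rng.choice(X_class)
        for j in range(len(head)):
            # only items that are "alive" (present in the current tail node,
            # i.e. counts[j] == length) can survive into the next node
            if counts[j] == length and head[j] == x[j]:
                counts[j] += 1
    return head, counts

# The d-th node contains item "C_j = head[j]" iff counts[j] >= d, so the tail
# node is {j: head[j] for j in range(len(head)) if counts[j] == D}.
X_c = [[1, 2, 3], [1, 4, 3], [1, 2, 5]]
head, counts = generate_chain_compact(X_c, D=4, seed=0)
```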

Theorem 4

The space complexity of Algorithm 1 is O(p|C|M). If M and D are chosen as M* and D* in Corollary 3, then the return meets the requirements in Theorem 1, while the space complexity is O(p|C|⌈a/log[1/(1−p_1^{D*})]⌉).

Theorem 5

The time complexity of Algorithm 1 is O(p|C|MD). If M and D are chosen as M* and D* in Corollary 3, then the return meets the requirements in Theorem 1, while the time complexity is O(p|C|⌈a/log[1/(1−p_1^{D*})]⌉ D*).

From Theorem 4 and Theorem 5 we can conclude that Random Intersection Chains will be efficient when p_1 is large and p_2 is small. In the ideal situation where p_1 approaches 1 and p_2 tends to 0, M* and D* are near 1, and both the space and time complexity of chain generation are linear in the number of features p and the number of different labels in C.

4.2 Complexity of Subset Selection

As stated earlier in Section 3.2, finding a frequent pattern s means we have actually found O(2^{|s|}) frequent patterns, since every subpattern of s is frequent. It is not realistic to take all of them into consideration, or even to perform a traversal over them. Algorithm 6 solves this problem with the help of priority queues. We provide Theorem 6 to guarantee the validity and time complexity of Algorithm 6.

Theorem 6

If the input priority queue of Algorithm 6 is empty, then it contains the d_freq most frequent subsets of s when the algorithm ends, and the number of frequency calculations is O(|s| d_freq^2).

If the input priority queue of Algorithm 6 is not empty, e.g. S^(c) = S' ≠ ∅, then during the running of the algorithm, itemsets with small frequency in S' will be squeezed out by the subsets of the input itemset s, while itemsets with large frequency in S' occupy a position in the priority queue from beginning to end. So at the end of the algorithm, the priority queue S^(c) contains the d_freq most frequent itemsets in S' ∪ Pow(s). As a result, after the for-loop in Lines 5-7 of Algorithm 5, S^(c) contains the d_freq most frequent itemsets found by the chains for class c.

5 Convergence Analysis

The previous section analyzes the computational complexity of Random Intersection Chains, from which we can see that it is efficient in some sense. Another main concern about Random Intersection Chains is how effective it is. In other words, since the frequency and confidence are estimated on the basis of random intersections, one may wonder how well they approximate their true values. We find that these estimators have some good properties. To illustrate this point, the asymptotic behaviors of p̂_s^(c) and q̂_s^(c) are given in Theorem 7 and Theorem 8. The derivations are given in the appendix and are mainly based on the multivariate delta method.

Theorem 7

The estimator p̂_s^(c) calculated by Algorithm 2 satisfies:

\sqrt{M}\,[\hat{p}_s^{(c)} - p_s^{(c)}] \stackrel{d}{\longrightarrow} n\!\left(0, \frac{p_s^{(c)}(1 - p_s^{(c)})^2}{1 - (p_s^{(c)})^D}\right).   (11)
Theorem 8

The estimator q̂_s^(c) calculated by Algorithm 3 satisfies:

\sqrt{M}\,[\hat{q}_s^{(c)} - q_s^{(c)}] \stackrel{d}{\longrightarrow} n(0, \tau^2),   (12)

where

\tau^2 = \left[\frac{p_s^{(c)} p^{(c)}}{p_s^2}\right]^2 \sum_{c' \in C} \frac{[1 - p_s^{(c')}]^2 p_s^{(c')}}{1 - (p_s^{(c')})^D}\,(p^{(c')})^2 + \left[\frac{p^{(c)}}{p_s^2}\right]^2 \frac{[1 - p_s^{(c)}]^2 p_s^{(c)}}{1 - (p_s^{(c)})^D}\, p_s (p_s - 2 p_s^{(c)} p^{(c)}).

From Theorem 7 we can see that p̂_s^(c) converges to an unbiased estimator in distribution as M goes to infinity. The limiting estimator multiplied by √M has variance p_s^(c)(1 − p_s^(c))^2 / (1 − (p_s^(c))^D). This variance is monotonically decreasing in D. Remember that the space needed for the chains is independent of D, so if time permits, setting D as large as possible seems a good choice. The variance tends to 0 if p_s^(c) is close to either 0 or 1, which means the estimator is more accurate if the itemset is extremely frequent or extremely infrequent.
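
For intuition, the asymptotic standard deviation of p̂_s^(c) implied by Theorem 7 is sqrt(p(1−p)^2/(1−p^D))/sqrt(M), which the small sketch below evaluates for illustrative values of p, D and M.

```python
import math

def asymptotic_std(p, D, M):
    """Approximate standard deviation of the frequency estimator from
    Theorem 7: sqrt(p * (1 - p)**2 / (1 - p**D)) / sqrt(M)."""
    return math.sqrt(p * (1.0 - p) ** 2 / (1.0 - p ** D)) / math.sqrt(M)

# e.g. a pattern with frequency 0.4, chains of length 10, 1000 chains:
print(asymptotic_std(0.4, D=10, M=1000))   # about 0.012
```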

Theorem 8 leads to some similar results. q̂_s^(c) also converges to an unbiased estimator in distribution as M goes to infinity, but the variance of the limiting estimator multiplied by √M is more complex. Still, this variance is also monotonically decreasing in D, which makes setting a larger D more appealing. In general, a large p_s (meaning s is frequent in the whole database), a small p^(c) (meaning c is a minor class) and extremely large or small p_s^(c') (meaning s is either very frequent or very infrequent for each class) lead to a relatively small variance.

6 Numerical Studies

To verify the efficiency and effectiveness of Random Intersection Chains, we give the results of several numerical examples. We first conduct a series of experiments on two benchmark data sets for click-through rate (CTR) prediction, which aims to illustrate the efficiency, consistency and effectiveness of Random Intersection Chains on large-scale data sets. We also adopt the data sets used by Shah and Meinshausen (2014), from whose experimental results two conclusions can be obtained: (1) Random Intersection Chains can find almost all meaningful patterns for an ideal data set; (2) rather than acting as classifiers themselves, the detected patterns can lead to a better result if they serve as input features for another classification model. We also show that Random Intersection Chains helps to find interactions of numerical features if they are first transformed to discrete ones, by comparing it to some existing interactive feature selection algorithms on another two UCI data sets.

According to the discussion in Section 5, longer chains lead to better estimations. So we set D = 100,000, and introduce the maximum order of interaction K as an additional parameter. A chain stops growing if either its length is larger than D, or the number of items in its tail node is no larger than K.

6.1 Click-Through Rate Prediction

Click-through rate (CTR) prediction is an important application of machine learning algorithms, which aims to predict the ratio of clicks to impressions of a specific link. The input features are associated with either a user or an item, and many of them are categorical and of high cardinality. The label indicates the clicks of a user on an item. Usually few items are clicked by a user, which makes the data unbalanced.

We conduct experiments on two public real-world datasets, named Criteo and Avazu. Criteo data set consists of a portion of Criteo’s traffic over a period of 7 days. There are 45 million users’ clicking records on displayed ads in the data, and the rows are chronologically ordered. It contains 26 categorical features and 13 numerical features. Avazu data set contains the records of whether a displayed mobile ad is clicked by a user or not. Click-through data of 10 days, ordered chronologically, is provided. It has 23 features, all of which are categorical. And the total number of samples is above 40 million.

If one-hot encoding is applied, there will be 998,960 binary features for the Criteo data set and 1,544,428 features for the Avazu data set, which is obviously unacceptable. We first unify all the uncommon categories into a single category "others". A category is "uncommon" if the number of its occurrences is less than 10 for Criteo or 5 for Avazu. Then the categorical features are label encoded. As for numerical features, a value z is transformed to (log z)^2 if z > 2. Finally, each data set is divided into 10 parts, where 8 parts are used for training, 1 part for validation and 1 part for test. This procedure is the same as in the work of Song et al. (2019), which is also adopted by Song et al. (2020) and Tsang et al. (2020).
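
A possible preprocessing sketch along these lines is given below; the data frame, column lists and thresholds are assumptions for illustration, not the pipeline released with the cited work.

```python
import numpy as np
import pandas as pd

def preprocess(df, cat_cols, num_cols, min_count=10):
    """Merge rare categories, label-encode categorical columns and apply the
    (log z)^2 transform to numerical columns."""
    df = df.copy()
    for col in cat_cols:
        # merge uncommon categories into a single "others" category
        counts = df[col].value_counts()
        rare = counts[counts < min_count].index
        df[col] = df[col].where(~df[col].isin(rare), "others")
        # label encoding: map each remaining category to an integer
        df[col] = df[col].astype("category").cat.codes
    for col in num_cols:
        # z -> (log z)^2 for z > 2
        z = df[col].astype(float).copy()
        mask = z > 2
        z.loc[mask] = np.log(z.loc[mask]) ** 2
        df[col] = z
    return df
```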

As analyzed in Section 4 and Section 5, the more chains are generated, the more time and memory are needed, but the more accurate the estimations of frequency and confidence become. We apply Random Intersection Chains on both data sets with M from 100 to 15,000 for each part of the training set. The running time is shown in Figure 1(a). As for memory requirements, the additional space cost caused by Random Intersection Chains is at most the same as 2 × (15,000 × 8) = 240,000 observations, which is very small compared with the original 40 million observations. To show the consistency of Random Intersection Chains, we adopt the Jaccard-index as the criterion of similarity between two sets S and S', which is defined as

{\rm J}(S, S') = \frac{|S \cap S'|}{|S \cup S'|}.   (13)
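
Equation (13) translates directly into code; the sets in the usage line are toy examples.

```python
def jaccard(S, S_prime):
    """Jaccard index of Equation (13) between two sets of interactions."""
    S, S_prime = set(S), set(S_prime)
    return len(S & S_prime) / len(S | S_prime)

print(jaccard({("a", 1), ("b", 2)}, {("a", 1), ("c", 3)}))  # 0.333...
```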

We calculate the Jaccard-index between the interactions found with M = 15,000 and the interactions found with smaller values of M, and the results are exhibited in Figure 1(b). As can be seen from Figure 1(a), the running time is linear in the number of chains, and it is relatively small considering the rather large size of the data sets. Figure 1(b) indicates that the returns of Random Intersection Chains are very similar for large M, which verifies the consistency.

Figure 1: Running time and Jaccard-index for Random Intersection Chains with different numbers of chains on the Criteo and Avazu data sets. (a) Running time of random intersection chains; (b) Jaccard-index of the found interactions.

One of the advantages of the interactions defined in this paper is their interpretability. Since the Avazu data set contains non-anonymized features, we list the 10 most confident interactions with their estimated and accurate frequency and confidence in Table 1, where we only list the names of the features and omit the specific value of each feature to keep the notation uncluttered. We can conclude that "banner_pos" plays an important role in advertising. This observation coincides with the intuition that an advertisement is more likely to be clicked if it is exhibited in a good position. Many interactions have a feature associated with "app" and a feature about "device", which indicates the relationship between an item and a user. Also the estimations of frequency and confidence are quite good: the RMSE of the estimations is 1.2×10^-3, 1.9×10^-3 and 1.1×10^-3 for Frequency(-), Frequency(+) and Confidence, respectively. What's more, the corresponding Pearson correlation coefficients are 0.9987, 0.9977 and 0.9879, which means the numerical order of frequency or confidence is well preserved. Thus the patterns found by Random Intersection Chains are likely to be the most frequent and confident ones.

Table 1: Ten most confident interactions for Avazu. Frequency(-) stands for the frequency in the negative class and Frequency(+) for the frequency in the positive class. "Est." represents the estimated values and "True" the accurate values in the data set.

interaction | Frequency(-) Est. | Frequency(-) True | Frequency(+) Est. | Frequency(+) True | Confidence Est. | Confidence True
device_id, C20 | 0.3607 | 0.3610 | 0.4495 | 0.4518 | 0.2032 | 0.2038
banner_pos, app_domain, app_category, device_conn_type | 0.3646 | 0.3633 | 0.4486 | 0.4497 | 0.2011 | 0.2020
banner_pos, app_category, device_conn_type | 0.3646 | 0.3633 | 0.4486 | 0.4497 | 0.2011 | 0.2020
banner_pos, app_domain, app_category | 0.3835 | 0.3821 | 0.4717 | 0.4735 | 0.2010 | 0.2022
banner_pos, app_category | 0.3835 | 0.3821 | 0.4717 | 0.4735 | 0.2010 | 0.2022
banner_pos, app_id, app_domain | 0.3759 | 0.3748 | 0.4604 | 0.4619 | 0.2003 | 0.2014
banner_pos, app_id | 0.3759 | 0.3748 | 0.4604 | 0.4619 | 0.2003 | 0.2014
banner_pos, app_domain, device_conn_type | 0.3684 | 0.3669 | 0.4498 | 0.4510 | 0.1998 | 0.2009
device_type, device_conn_type, C20 | 0.3690 | 0.3684 | 0.4502 | 0.4537 | 0.1997 | 0.2012
banner_pos, app_domain | 0.3882 | 0.3867 | 0.4731 | 0.4751 | 0.1995 | 0.2008

Finally, the interactions are used as binary features, based on which several popular CTR prediction models are trained. We adopt 5 models, namely Wide&Deep (Cheng et al., 2016), DeepFM (Guo et al., 2017), xDeepFM (Lian et al., 2018), Deep&Cross (Wang et al., 2017) and AutoInt (Song et al., 2019), to test whether adding the interactions to the input is helpful. The results are shown in Table 2, where the performance of GLIDER, an interaction detection method proposed by Tsang et al. (2020), is also presented as a comparison. We can see that in most cases, adding the interactions found by Random Intersection Chains leads to a significant improvement. In fact, an improvement as small as 0.001 is considered desirable for the Criteo data set (Cheng et al., 2016; Guo et al., 2017; Wang et al., 2017; Song et al., 2019; Tsang et al., 2020), and Random Intersection Chains lives up to this expectation. Perhaps due to the better learning rate we used, our baseline is better than GLIDER's. Despite the difference between baselines, Random Intersection Chains makes an improvement comparable to GLIDER. But according to Tsang et al. (2020), it takes several hours and more than 150 GB of memory to perform GLIDER, while the requirements of Random Intersection Chains are much lower.

Table 2: CTR prediction performance on two benchmark data sets. "+RIC" means adding the interactions found by Random Intersection Chains to the input. All experiments were repeated 5 times, and the means are provided with standard deviations in parentheses. The results which yield a p-value less than 0.05 are shown in bold. Rows with * are reported by Tsang et al. (2020). "+GLIDER" means the inclusion of interactions found by GLIDER.
Model | Criteo AUC | Criteo logloss | Avazu AUC | Avazu logloss
Wide&Deep | 0.8087(4e-4) | 0.4427(3e-4) | 0.7826(2e-4) | 0.3781(1e-4)
+RIC | 0.8097(3e-4) | 0.4418(2e-4) | 0.7828(2e-4) | 0.3779(1e-4)
Wide&Deep* | 0.8069(5e-4) | 0.4446(4e-4) | 0.7794(3e-4) | 0.3804(2e-4)
+GLIDER* | 0.8080(3e-4) | 0.4436(3e-4) | 0.7795(1e-4) | 0.3802(9e-5)
DeepFM | 0.8087(4e-4) | 0.4427(3e-4) | 0.7819(3e-4) | 0.3785(2e-4)
+RIC | 0.8097(3e-4) | 0.4418(2e-4) | 0.7826(4e-4) | 0.3781(2e-4)
DeepFM* | 0.8079(3e-4) | 0.4436(2e-4) | 0.7792(3e-4) | 0.3804(9e-5)
+GLIDER* | 0.8097(2e-4) | 0.4420(2e-4) | 0.7795(2e-4) | 0.3802(2e-4)
Deep&Cross | 0.8084(7e-4) | 0.4430(7e-4) | 0.7824(2e-4) | 0.3782(1e-4)
+RIC | 0.8096(6e-4) | 0.4419(5e-4) | 0.7835(8e-4) | 0.3776(4e-4)
Deep&Cross* | 0.8076(2e-4) | 0.4438(2e-4) | 0.7791(2e-4) | 0.3805(1e-4)
+GLIDER* | 0.8086(3e-4) | 0.4428(2e-4) | 0.7792(2e-4) | 0.3803(9e-5)
xDeepFM | 0.8082(5e-4) | 0.4432(4e-4) | 0.7824(4e-4) | 0.3782(2e-4)
+RIC | 0.8103(2e-4) | 0.4414(2e-4) | 0.7825(4e-4) | 0.3781(2e-4)
xDeepFM* | 0.8084(2e-4) | 0.4433(2e-4) | 0.7785(3e-4) | 0.3808(2e-4)
+GLIDER* | 0.8097(3e-4) | 0.4421(3e-4) | 0.7787(4e-4) | 0.3806(1e-4)
AutoInt | 0.8077(3e-4) | 0.4436(3e-4) | 0.7788(2e-4) | 0.3804(1e-4)
+RIC | 0.8090(4e-4) | 0.4425(4e-4) | 0.7795(3e-4) | 0.3802(1e-4)
AutoInt* | 0.8083 | 0.4434 | 0.7774 | 0.3811
+GLIDER* | 0.8090(2e-4) | 0.4426(2e-4) | 0.7773(1e-4) | 0.3811(5e-5)

6.2 Tic-Tac-Toe Endgame Data

Tic-Tac-Toe endgame data set (Matheus and Rendell, 1989; Aha et al., 1991) encodes the complete set of possible board configurations at the end of tic-tac-toe games. There are 958 instances and 9 categorical features in the data set. The possible values for each feature are “x”, “o” and “b”, which stand for “black”, “white” and “blank”, respectively. There are 8 possible ways to win for both players (3 horizontal lines, 3 vertical lines and 2 diagonal lines). Our target is to learn these rules that determine which player wins the game.

This is an ideal data set to test the effectiveness of Random Intersection Chains, since all the features are categorical and the label of an ending game is completely determined by some rules; an accurate prediction can be made if all the interesting rules are found. So we make an effort to extract all 16 valid rules (8 winning configurations for each player), with different values of d_freq.

We set K = 4 and d_conf = 10, so only the 10 most confident patterns for each player are kept. The range of d_freq we adopted is [20, 1000], and the number of found interesting rules is given in Figure 2. At the beginning, more interesting patterns are found as d_freq grows. Actually, for d_freq larger than 400, all the interesting patterns corresponding to x's victory are found, while there is one missing pattern for "o". We check the list of found patterns and find that the missing pattern is "a3=o, b3=o, c3=o". Its support is 32/958 = 0.0334, indeed a very small value. When d_freq is too large, not only does the missing pattern remain absent, but some previously found patterns also disappear. This is because some uncommon patterns have high confidence coincidentally. For example, "a1=x, b1=o, b3=b" occurs 30 times in the database, and all the instances containing this pattern happen to be in the positive class. Since the number of retained patterns is limited, these occasional patterns squeeze out the interesting ones.

The results indicate that the size of the priority queue influences the extracted patterns in two ways. On the one hand, a small d_freq may lead to the neglect of some meaningful patterns. On the other hand, if it is too large, some occasionally confident patterns will be chosen, which can be treated as an "overfitting" phenomenon of Random Intersection Chains.

Figure 2: Number of valid rules in confident rules.

6.3 Reuters RCV1 Text Data

The Reuters RCV1 text data contains the tf-idf (term frequency-inverse document frequency) weighted presence of 47,148 word-stems in each document (Lewis et al., 2004).

Lewis et al. (2004) used a data set consisting of 23,149 documents as the training set. Like the processing approach adopted by Shah and Meinshausen (2014), we only consider word-stems appearing in at least 100 documents and the topics that contain at least 200 documents. This leaves 2484 word-stems as predictor variables and 52 topics as prediction targets. Also, tf-idf is transformed to a binary version, using 1 or 0 to represent whether it is positive. In this work, we split the original training data into a smaller training set and a test set. The first 13,149 instances are used for training and the rest for testing.

For each topic c, the target is to predict whether a document belongs to this topic. Setting d_{\rm freq}=400, d_{\rm conf}=200, K=4 and M=300, we apply Random Intersection Chains on the training set. Then we evaluate the interactions in two different ways. The first method, labeled “Best-Rule”, classifies instances directly by the best rule. The “best” rule is defined as the most confident one among the rules with support larger than p^{(c)}/10, which is the same criterion as used by Shah and Meinshausen (2014). The other method, called “Rules+LR”, treats the rules as selected features for a linear model. As a comparison, we also fit a linear model on all 2484 features, labeled “LR”. The precision and recall of the models are given in Figure 3, and the best rules found by Random Intersection Chains are shown on the right side of the figure.
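
The two evaluation schemes can be summarized by the following sketch (our own stand-in, not the exact pipeline). It assumes a dense binarized document-term matrix X_bin, a 0/1 topic indicator y, rules given as (word-stem indices, support, confidence) tuples, and a logistic model playing the role of the linear classifier; all names are ours.

import numpy as np
from sklearn.linear_model import LogisticRegression

def best_rule(rules, p_c):
    """'Best-Rule': the most confident rule among those with support larger than p_c / 10."""
    eligible = [r for r in rules if r[1] > p_c / 10]
    return max(eligible, key=lambda r: r[2]) if eligible else None

def rule_features(X_bin, rules):
    """'Rules+LR': one binary column per rule, equal to 1 iff the document
    contains every word-stem appearing in the rule."""
    cols = [X_bin[:, list(idx)].all(axis=1) for idx, _, _ in rules]
    return np.column_stack(cols).astype(int)

# p_c = y_train.mean()                              # empirical frequency of topic c
# rule = best_rule(rules, p_c)                      # classify by this single rule, or
# Z_train = rule_features(X_train_bin, rules)       # use the selected rules as input features
# clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)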

Figure 3: Precision and recall on the test data ((a) precision, (b) recall).

We can see that “Best-Rule” sometimes gives good precision or recall, but its performance is usually the worst among the three models. This is not surprising, because the information in a single rule is limited, and it is unreasonable to expect one rule to be both general and reliable on a complicated data set. “Rules+LR” yields similar precision but generally lower recall compared with “LR”. At first glance, it may seem that Random Intersection Chains brings few benefits. Note, however, that the input dimension of the linear model in “Rules+LR” is only 200, whereas it is 2484 in “LR”. The selection thus greatly reduces the input size and the computational burden, while the precision and recall only fall slightly. The interpretability is also enhanced, since far fewer features are used.

6.4 Other UCI Data

We also apply Random Intersection Chains to the two data sets used by Shah (2016). The first is the Communities and Crime Unnormalized data set, “CCU” for short, which contains crime statistics for the year 1995 obtained from FBI data, together with national census data from 1990. We take violent crimes per capita as the response, which makes this a regression task. We process the data in the same way as Shah (2016), which leads to a data set consisting of 1903 observations and 101 features.

The second data set is “ISOLET”, which consists of 617 features based on the speech waveforms generated from utterances of each letter of the English alphabet. We consider classification on the notoriously challenging E-set consisting of the letters “B”, “C”, “D”, “E”, “G”, “P”, “T”, “V” and “Z”. This leaves 2700 observations spread equally among the 9 classes.

We use l_{1}-penalised linear regression as the base regression procedure, and penalised multinomial regression for the classification example. The regularization coefficient is determined by 5-fold cross-validation. To evaluate the procedures, we randomly select 2/3 of the instances for training and the rest for testing, and repeat this procedure 200 times for each data set. Mean squared error is used as the criterion for the regression model and misclassification rate for the classification task. All the settings are exactly the same as those used by Shah (2016), except that we use an l_{2} regularizer to penalise the multinomial regression instead of a group Lasso, because we do not know how Shah (2016) grouped the features.
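
A condensed sketch of this evaluation protocol is given below (our own stand-in implementation, with scikit-learn's LassoCV and LogisticRegressionCV playing the role of the penalised base procedures; all function names are ours).

import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

def evaluate_regression(X, y, repeats=200, seed=0):
    """CCU: l1-penalised linear regression, penalty chosen by 5-fold CV."""
    rng = np.random.RandomState(seed)
    errs = []
    for _ in range(repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=2 / 3, random_state=rng)
        model = LassoCV(cv=5).fit(X_tr, y_tr)
        errs.append(mean_squared_error(y_te, model.predict(X_te)))
    return np.mean(errs), np.std(errs) / np.sqrt(repeats)

def evaluate_classification(X, y, repeats=200, seed=0):
    """ISOLET: l2-penalised multinomial regression (multinomial by default with lbfgs)."""
    rng = np.random.RandomState(seed)
    errs = []
    for _ in range(repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=2 / 3, stratify=y, random_state=rng)
        model = LogisticRegressionCV(cv=5, penalty="l2", max_iter=2000).fit(X_tr, y_tr)
        errs.append(np.mean(model.predict(X_te) != y_te))
    return np.mean(errs), np.std(errs) / np.sqrt(repeats)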

Since the inputs of these two data sets are numerical and the CCU data set corresponds to a regression task, Random Intersection Chains cannot be applied directly. To handle this difficulty, the continuous features and response are first transformed to a discrete version. The response of the CCU data set is split into 5 categories by quantiles, and every continuous feature is then split into 5 intervals according to information gain. Setting d_{\rm freq}=2500, d_{\rm conf}=1225 and K=5, M=300, we add the product X_{i_{1}}X_{i_{2}}\cdots X_{i_{k}} as an interactive feature to the input whenever the resulting rule set contains a rule of the form (X_{i_{1}}=x_{i_{1}}, X_{i_{2}}=x_{i_{2}}, …, X_{i_{k}}=x_{i_{k}}) for some x_{i_{1}}, x_{i_{2}}, …, x_{i_{k}}; a rough sketch of this preprocessing is given below. The results of models with and without the rules found by Random Intersection Chains are shown in Table 3, labeled “RIC” and “Main”. We also list the results reported by Shah (2016), including the base procedures (“Main*”), iterated Lasso fits (“Iterated*”), Lasso following marginal screening for interactions (“Screening*”), Backtracking, Random Forests (Breiman, 2001), hierNet (Bien et al., 2013) and MARS (Friedman, 1991). For the CCU data set, our base model outperforms the one used by Shah (2016), which may be caused by a better penalty parameter; Random Intersection Chains leads to comparable or better results than the existing algorithms. As for the ISOLET data set, our base model is not as good as that of Shah (2016). This is not surprising, since we simply use an l_{2} regularizer whereas Shah (2016) adopted a group Lasso penalty. Nevertheless, Random Intersection Chains can run on this data set and yields a clear improvement, while some existing methods such as Screening, hierNet and MARS are inapplicable. We regard this as evidence of our method's efficiency.
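
The following is a rough reconstruction of this preprocessing (ours, not the exact code): the response is binned by quantiles, each continuous feature is cut into at most 5 intervals by an entropy-criterion tree, which serves here as a stand-in for the information-gain splitting, and one product feature is appended for every returned rule.

import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def discretize(X, y_continuous, n_bins=5):
    """Bin the response into n_bins quantile-based categories, then cut each
    continuous feature into at most n_bins intervals against that target."""
    y_cat = pd.qcut(y_continuous, q=n_bins, labels=False, duplicates="drop")
    X_cat = np.empty_like(X, dtype=int)
    for j in range(X.shape[1]):
        tree = DecisionTreeClassifier(criterion="entropy", max_leaf_nodes=n_bins)
        tree.fit(X[:, [j]], y_cat)
        X_cat[:, j] = tree.apply(X[:, [j]])   # leaf index serves as the interval label
    return X_cat, y_cat

def add_rule_products(X, rules):
    """For every rule (i_1, ..., i_k) returned by Random Intersection Chains,
    append the product of the corresponding original continuous features."""
    products = [np.prod(X[:, list(rule)], axis=1, keepdims=True) for rule in rules]
    return np.hstack([X] + products)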

Table 3: Results on CCU and ISOLET (mean squared error for CCU, misclassification rate for ISOLET), with standard errors in parentheses. Rows marked * are reported by Shah (2016).
method Communities and crime ISOLET
Main 0.404(5.7\times 10^{-3}) 0.0730(5.5\times 10^{-4})
RIC 0.369(6.3\times 10^{-3}) 0.0665(5.3\times 10^{-4})
Main* 0.414(6.5\times 10^{-3}) 0.0641(4.7\times 10^{-4})
Iterated* 0.384(5.9\times 10^{-3}) 0.0641(4.7\times 10^{-4})
Screening* 0.390(7.8\times 10^{-3}) -
Backtracking* 0.365(3.7\times 10^{-3}) 0.0563(4.5\times 10^{-4})
Random Forest* 0.356(2.4\times 10^{-3}) 0.0837(6.0\times 10^{-4})
hierNet* 0.373(4.7\times 10^{-3}) -
MARS* 5580.586(3.1\times 10^{3}) -

7 Conclusion

Drawing on ideas from association rule mining and random intersections, we propose Random Intersection Chains to discover meaningful interactions of categorical features. A number of chains are generated by random intersections, and the patterns in the tail nodes are regarded as frequent patterns. The frequency and confidence of these patterns are then estimated by maximum likelihood estimation and Bayes' theorem, and the most confident patterns are finally selected. An efficient algorithm that selects the most frequent subpatterns of a given pattern in polynomial time is also provided.

We prove that there exist appropriate parameters that keep all the frequent patterns while excluding the infrequent ones. The time and space complexities are analyzed, showing that the algorithm is both time- and memory-efficient. The asymptotic behavior of the estimators is also guaranteed: as the number of chains goes to infinity, the estimated frequency and confidence converge to their true values.

As a supplement, a series of experiments is conducted to verify the effectiveness of Random Intersection Chains. By applying the algorithm to two CTR prediction data sets, we show that it detects the most frequent and confident patterns efficiently and consistently, and the prediction results verify that adding these interactions is beneficial for CTR prediction. The ability to detect useful patterns is further tested on the Tic-Tac-Toe data, where almost all the meaningful rules are found if the parameters are appropriately chosen. The experiments on the Reuters RCV1 text data show that the found patterns can not only serve as a classifier themselves, but also act as input features for another model. We also compare our algorithm with other interaction detection methods on several UCI data sets with continuous features or response; the results show that Random Intersection Chains can help once the features or response are transformed into categorical ones beforehand.

One limitation of Random Intersection Chains is that it cannot be applied directly to numerical features or a numerical response; we are working on extending its application domain to these cases. Another difficulty lies in the choice of parameters. Different parameter settings influence the prediction performance, but tuning the parameters by grid search is time-consuming. We hope to find a better approach to choosing the parameters.


Acknowledgments

This work was partially supported by the National Natural Science Foundation of China under grants 11671418 and 12071428 and by the Zhejiang Provincial Natural Science Foundation of China under grant LZ20A010002.

Appendix A.

In this appendix we give the proofs omitted earlier in the paper.

Proof of Theorem 1. To keep the notation uncluttered, we omit the superscript “(c)” on the probabilities. For a pattern s, we write p_{s} for the probability that s occurs, conditioned on Y=c, and define

p1=min{ps:psθ},p_{1}=\min\{p_{s}:p_{s}\geq\theta\},
p2=max{ps:ps<θ}.p_{2}=\max\{p_{s}:p_{s}<\theta\}.

For a chain of length DD,

(sSD,1(c))=psD.\mathbb{P}(s\subseteq S_{D,1}^{(c)})=p_{s}^{D}.

And for MM chains,

(sSD,M(c))=1[1(sSD,1(c))]M=1[1psD]Mg(ps;D,M).\mathbb{P}(s\subseteq S_{D,M}^{(c)})=1-[1-\mathbb{P}(s\subseteq S_{D,1}^{(c)})]^{M}=1-[1-p_{s}^{D}]^{M}\eqqcolon g(p_{s};D,M).

We can see that g(p_{s};D,M) is monotonically increasing in p_{s} and in M, and decreasing in D. For p_{s}\geq\theta, if M\geq\frac{\log\eta_{1}}{\log(1-p_{1}^{D})}, then

(SSD,M(c))\displaystyle\mathbb{P}(S\subseteq S_{D,M}^{(c)}) =g(ps;D,M)\displaystyle=g(p_{s};D,M)
g(p1;D,logη1log(1p1D))\displaystyle\geq g(p_{1};D,\frac{{\log}\eta_{1}}{\log(1-p_{1}^{D})})
=1[1p1D]logη1log(1p1D)\displaystyle=1-[1-p_{1}^{D}]^{\frac{{\log}\eta_{1}}{\log(1-p_{1}^{D})}}
=1η1log(1p1D)log(1p1D)\displaystyle=1-\eta_{1}^{\frac{{\log}(1-p_{1}^{D})}{{\log}(1-p_{1}^{D})}}
=1η1.\displaystyle=1-\eta_{1}.

Define

M(D)=logη1log(1p1D),M^{*}(D)=\lceil\frac{\log\eta_{1}}{\log(1-p_{1}^{D})}\rceil,
M¯(D)=logη1log(1p1D)+1.\bar{M}(D)=\frac{\log\eta_{1}}{\log(1-p_{1}^{D})}+1.

Thus M¯(D)M(D)logη1log(1p1D)\bar{M}(D)\geq M^{*}(D)\geq\frac{\log\eta_{1}}{\log(1-p_{1}^{D})}. Then for psθp_{s}\geq\theta, we have (SSD,M(c))1η1\mathbb{P}(S\subseteq S_{D,M}^{(c)})\geq 1-\eta_{1} if MM(D)M\geq M^{*}(D).

Next we give the conditions for S(c)S^{(c)} containing ss with probability at most η2\eta_{2} if P(sX|Y=c)<θP(s\subseteq X|Y=c)<\theta. Fixing M=M(D)M=M^{*}(D), for ps<θp_{s}<\theta we have

(SSD,M(c))\displaystyle\mathbb{P}(S\subseteq S_{D,M^{*}}^{(c)}) =g(ps;D,M)\displaystyle=g(p_{s};D,M^{*})
<g(p2;D,M¯)\displaystyle<g(p_{2};D,\bar{M})
=1[1p2D]log(η1)log(1p1D)+1\displaystyle=1-[1-p_{2}^{D}]^{\frac{{\log}(\eta_{1})}{\log(1-p_{1}^{D})}+1}
=1η1log(1p2D)log(1p1D)(1p2D).\displaystyle=1-\eta_{1}^{\frac{{\log}(1-p_{2}^{D})}{{\log}(1-p_{1}^{D})}}(1-p_{2}^{D}).

Define

f(D)=log(1p2D)log(1p1D).f(D)=\frac{{\log}(1-p_{2}^{D})}{{\log}(1-p_{1}^{D})}.

Taking the derivative of f with respect to D, we have

\displaystyle f^{\prime}(D)	=\frac{1}{[\log(1-p_{1}^{D})]^{2}}\left[\frac{-p_{2}^{D}\log p_{2}}{1-p_{2}^{D}}\log(1-p_{1}^{D})+\frac{p_{1}^{D}\log p_{1}}{1-p_{1}^{D}}\log(1-p_{2}^{D})\right]
\displaystyle=\frac{\log(1-p_{1}^{D})\log(1-p_{2}^{D})}{D\,[\log(1-p_{1}^{D})]^{2}}\,[f_{1}(p_{1}^{D})-f_{1}(p_{2}^{D})],

where

f1(x)=xlogx(1x)log(1x).f_{1}(x)=\frac{x\log x}{(1-x)\log(1-x)}.

So the corresponding derivative is

f1(x)=(1+logxx)log(1x)+xlogx[(1x)log(1x)]2.f_{1}^{\prime}(x)=\frac{(1+\log x-x)\log(1-x)+x\log x}{[(1-x)\log(1-x)]^{2}}.

Denote the numerator as f2(x)f_{2}(x), and take the derivative, then we have

f2(x)=(1+logxx)log(1x)+xlogx,f_{2}(x)=(1+\log x-x)\log(1-x)+x\log x,
f2(x)=(1x)2log(1x)x2logxx(1x).f_{2}^{\prime}(x)=\frac{(1-x)^{2}\log(1-x)-x^{2}\log x}{x(1-x)}.

Again denoting the numerator as f3(x)f_{3}(x) and taking the derivative, we have

f3(x)=(1x)2log(1x)x2logx,f_{3}(x)=(1-x)^{2}\log(1-x)-x^{2}\log x,
f3(x)=2(1x)log(1x)2xlogx1.f_{3}^{\prime}(x)=-2(1-x)\log(1-x)-2x\log x-1.

Denoting f4(x)=f3(x)f_{4}(x)=f_{3}^{\prime}(x), we have

f4(x)=2log(1x)2logx=2log(1x1).f_{4}^{\prime}(x)=2\log(1-x)-2\log x=2\log(\frac{1}{x}-1).

Therefore, for x\in(0,1) we have f_{4}^{\prime}(x)>0 on (0,\frac{1}{2}) and f_{4}^{\prime}(x)<0 on (\frac{1}{2},1), so f_{4} attains its maximum at x=\frac{1}{2}. Since \lim_{x\to 0}f_{4}(x)=\lim_{x\to 1}f_{4}(x)=-1<0 while f_{4}(\frac{1}{2})=2\log 2-1>0, the derivative f_{3}^{\prime}=f_{4} is negative near both endpoints and positive on an interval around \frac{1}{2}, so f_{3} first decreases, then increases, then decreases on (0,1). Combining this with

\lim_{x\to 0}f_{3}(x)=\lim_{x\to 1}f_{3}(x)=f_{3}(\tfrac{1}{2})=0

(indeed f_{3}(1-x)=-f_{3}(x)), we obtain f_{3}(x)\leq 0 on (0,\frac{1}{2}] and f_{3}(x)\geq 0 on [\frac{1}{2},1). Hence f_{2}^{\prime}(x)=\frac{f_{3}(x)}{x(1-x)} is nonpositive on (0,\frac{1}{2}] and nonnegative on [\frac{1}{2},1), so f_{2}(x)\leq\max\{\lim_{t\to 0}f_{2}(t),\lim_{t\to 1}f_{2}(t)\}=0 for all x\in(0,1), and therefore f_{1}^{\prime}(x)\leq 0.

Noticing that 0\leq p_{2}<p_{1}\leq 1, we have f_{1}(p_{1}^{D})<f_{1}(p_{2}^{D}), and thus f^{\prime}(D)<0, so g(p_{2};D,\bar{M}) is a monotonically decreasing function of D. Extending the domain of f to the real numbers and applying L'Hôpital's rule,

limxf(x)\displaystyle\lim_{x\to\infty}f(x) =limx(p2xlogp21p2x/p1xlogp11p1x)\displaystyle=\lim_{x\to\infty}{(\frac{-p_{2}^{x}\log p_{2}}{1-p_{2}^{x}}/\frac{-p_{1}^{x}\log p_{1}}{1-p_{1}^{x}})} (14)
=logp2logp1limx(p2p1)xlimx1p1x1p2x\displaystyle=\frac{\log p_{2}}{\log p_{1}}\lim_{x\to\infty}(\frac{p_{2}}{p_{1}})^{x}\lim_{x\to\infty}\frac{1-p_{1}^{x}}{1-p_{2}^{x}}
=logp2logp101=0.\displaystyle=\frac{\log p_{2}}{\log p_{1}}\cdot 0\cdot 1=0.

Then, by Heine's theorem,

limDf(D)=0.\lim_{D\to\infty}f(D)=0.

Thus we have

limDg(p2;D,M¯)=1limDη1f(D)(1p2D)=0.\lim_{D\to\infty}g(p_{2};D,\bar{M})=1-\lim_{D\to\infty}\eta_{1}^{f(D)}(1-p_{2}^{D})=0.

So for any η2(0,1)\eta_{2}\in(0,1), there exists DD^{*}\in\mathbb{N} such that (SSD,M(c))η2\mathbb{P}(S\subseteq S_{D,M^{*}}^{(c)})\leq\eta_{2} if DDD\geq D^{*}.  

Proof of Corollary 2. From the proof of Theorem 1, it follows that there exists an appropriate choice of M^{*} and D^{*}, where D^{*} is any feasible value of D subject to

1[1p2D]log(η1)log(1p1D)+1η2\displaystyle 1-[1-p_{2}^{D}]^{\frac{{\log}(\eta_{1})}{\log(1-p_{1}^{D})}+1}\leq\eta_{2}
\displaystyle\Leftrightarrow (1p2D)η1log(1p2D)log(1p1D)1η2\displaystyle(1-p_{2}^{D})\eta_{1}^{\frac{\log(1-p_{2}^{D})}{\log(1-p_{1}^{D})}}\geq 1-\eta_{2}
\displaystyle\Leftrightarrow logη1log(1p1D)log(1η2)log(1p2D)1.\displaystyle\frac{\log\eta_{1}}{\log(1-p_{1}^{D})}\leq\frac{\log(1-\eta_{2})}{\log(1-p_{2}^{D})}-1.

The inequality x1+xlog(1+x)x\frac{x}{1+x}\leq\log(1+x)\leq x holds for x>1x>-1. Substituting p1D1p1D\frac{p_{1}^{D}}{1-p_{1}^{D}} for xx in the first inequality and rearranging, we have

1log(1p1D)11p1D.\frac{1}{\log(1-p_{1}^{D})^{-1}}\leq\frac{1}{p_{1}^{D}}. (15)

Similarly taking the place of xx by p2D1p2D\frac{p_{2}^{D}}{1-p_{2}^{D}} in the second inequality and rearranging, we have

1p2D11log(1p2D)1.\frac{1}{p_{2}^{D}}-1\leq\frac{1}{\log(1-p_{2}^{D})^{-1}}. (16)

Taking advantage of Inequalities 15 and 16, if

logη11p1D(1p2D1)log(1η2)11,\frac{\log\eta_{1}^{-1}}{p_{1}^{D}}\leq(\frac{1}{p_{2}^{D}}-1)\log(1-\eta_{2})^{-1}-1, (17)

then

logη11log(1p1D)1logη11p1D(1p2D1)log(1η2)11log(1η2)1log(1p2D)11,\frac{\log\eta_{1}^{-1}}{\log(1-p_{1}^{D})^{-1}}\leq\frac{\log\eta_{1}^{-1}}{p_{1}^{D}}\leq(\frac{1}{p_{2}^{D}}-1)\log(1-\eta_{2})^{-1}-1\leq\frac{\log(1-\eta_{2})^{-1}}{\log(1-p_{2}^{D})^{-1}}-1,

where the first, second and third inequalities are obtained from Inequalities 15, 17 and 16, respectively. So we only need to ensure Inequality 17. Denoting a=\log\eta_{1}^{-1} and b=\log(1-\eta_{2})^{-1}, we have

Inequality17\displaystyle{\rm Inequality}~{}\ref{eq:condition_d} ap1Db(p2D1)1\displaystyle\Leftrightarrow ap_{1}^{-D}\leq b(p_{2}^{-D}-1)-1
ap1D[ba(p1p2)D1]b+1\displaystyle\Leftrightarrow ap_{1}^{-D}\left[\frac{b}{a}\left(\frac{p_{1}}{p_{2}}\right)^{D}-1\right]\geq b+1
{ap1Db+1ba(p1p2)D2\displaystyle\Leftarrow\begin{cases}ap_{1}^{-D}\geq b+1\\ \frac{b}{a}\left(\frac{p_{1}}{p_{2}}\right)^{D}\geq 2\end{cases}
{Dlog(b+1)log(a)logp11D1Dlog2+logalogblogp21logp11D2.\displaystyle\Leftarrow\begin{cases}D\geq\frac{\log(b+1)-log(a)}{\log p_{1}^{-1}}\eqqcolon D_{1}\\ D\geq\frac{\log 2+\log a-\log b}{\log p_{2}^{-1}-\log p_{1}^{-1}}\eqqcolon D_{2}\end{cases}.

So D^{*}=\lceil\max\{D_{1},D_{2}\}\rceil and M^{*}=\left\lceil\frac{\log\eta_{1}}{\log(1-p_{1}^{D^{*}})}\right\rceil satisfy the conditions in Theorem 1.  
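
To make Corollary 2 concrete, the following small helper (ours; the example values are arbitrary) evaluates D_{1}, D_{2}, D^{*} and M^{*} from the expressions above.

import math

def chain_parameters(p1, p2, eta1, eta2):
    """D* = ceil(max{D1, D2}) and M* = ceil(log(eta1) / log(1 - p1^{D*}))."""
    a = math.log(1 / eta1)            # a = log(eta1^{-1})
    b = math.log(1 / (1 - eta2))      # b = log((1 - eta2)^{-1})
    D1 = (math.log(b + 1) - math.log(a)) / math.log(1 / p1)
    D2 = (math.log(2) + math.log(a) - math.log(b)) / (math.log(1 / p2) - math.log(1 / p1))
    D_star = math.ceil(max(D1, D2))
    M_star = math.ceil(math.log(eta1) / math.log(1 - p1 ** D_star))
    return D_star, M_star

# chain_parameters(p1=0.5, p2=0.1, eta1=0.05, eta2=0.05)  # -> (3, 23)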

Proof of Theorem 4. As stated in Section 4, we can represent a chain by two vectors of dimension p, so the memory needed for each chain is 2p. For every class in C, M chains are generated, so the entire space complexity is 2p|C|M.  

Proof of Theorem 5. For the d-th node of the m-th chain, we need to check whether each element of S_{d-1,m} occurs in X_{i_{d}}, which takes O(p) time. So the time complexity of generating a chain is O(pD), and the total time complexity is O(p|C|MD).  
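
One plausible encoding behind the counts in the proofs of Theorems 4 and 5 is sketched below (our illustration, not the reference implementation): the tail node is stored as a length-p value vector plus a length-p mask of the features still present, so each chain uses 2p memory and each intersection step touches the p features once.

import numpy as np

def grow_chain(X_class, D, rng):
    """X_class: (n, p) array of categorical codes for one class; returns the tail node."""
    n, p = X_class.shape
    values = X_class[rng.integers(n)].copy()   # head node: a randomly chosen observation
    active = np.ones(p, dtype=bool)            # features still present in the pattern
    for _ in range(D - 1):
        x = X_class[rng.integers(n)]           # a fresh random observation
        active &= (values == x)                # intersection: O(p) per node, O(pD) per chain
    return values, active

# rng = np.random.default_rng(0)
# values, active = grow_chain(X_class, D=10, rng=rng)
# tail_pattern = {j: values[j] for j in np.flatnonzero(active)}   # the surviving pattern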

Proof of Theorem 6. Denote

Sk={s:|s|k,ss},S_{k}=\{s^{\prime}:|s^{\prime}|\leq k,s^{\prime}\subseteq s\},
Fk={thedfreqmostfrequentpatternsinSk}.F_{k}=\{{\rm the}~{}d_{\rm freq}~{}{\rm most~{}frequent~{}patterns~{}in}~{}S_{k}\}.

We prove the first part of this theorem by a loop invariant: at the beginning of the i-th iteration of Lines 5-24, the priority queue S^{(c)} contains the patterns in F_{i}.

Initialization: Before the loop in Lines 5-24, the loop invariant is true, since every element of S was inserted into S^{(c)} in Lines 1-4. By the property of the priority queue, S^{(c)} stores the d_{\rm freq} most frequent elements, each as a 1-order pattern, i.e., the patterns in F_{1}.

Maintenance: Consider the i-th iteration. By definition we have (F_{i+1}\setminus F_{i})\subseteq S_{i+1} and (F_{i+1}\cap S_{i})\subseteq F_{i}. If F_{i+1}=F_{i}, no pattern needs to be inserted into the priority queue, so S^{(c)} remains unchanged and still satisfies the loop invariant. Otherwise, for any pattern s^{\prime}\in(F_{i+1}\setminus F_{i}), we can write s^{\prime}=a\cup(s^{\prime}\setminus a), where a is the most frequent 1-order subpattern of s^{\prime}. Since (s^{\prime}\setminus a) is at least as frequent as s^{\prime}, it belongs to F_{i+1}. Noticing that (s^{\prime}\setminus a) is an i-order pattern, it belongs to S_{i}, and thus also to (F_{i+1}\cap S_{i})\subseteq F_{i}. By the definition of a, its frequency is no smaller than that of any 1-order subpattern of (s^{\prime}\setminus a), and thus no smaller than that of (s^{\prime}\setminus a) itself. According to the property of the priority queue, patterns with larger frequency are extracted earlier. So when a is extracted from the priority queue A at Line 8, (s^{\prime}\setminus a) is still in A, and it will be extracted from the queue B later at Line 13. Then a and (s^{\prime}\setminus a) satisfy the conditions in Line 14, so s^{\prime} will be inserted into S^{(c)} at Line 17. Since no pattern in F_{i+1}\setminus F_{i} is missed by the algorithm, and S^{(c)} contains all the patterns in F_{i} at the beginning of the iteration by assumption, we conclude that S^{(c)} contains every pattern in F_{i+1}. The loop invariant remains true.

Termination: The loop ends after the |s|-th iteration, so S^{(c)} contains all the patterns in F_{|s|}, which is exactly what we need.

As for the computational complexity, frequencies only need to be calculated in Lines 2 and 16. Line 2 is clearly executed |s| times. Line 16 is executed at most d_{\rm freq} times due to the loop at Line 12; this loop is in turn entered at most d_{\rm freq} times due to Line 7; and all of this is repeated |s| times due to Line 5. Thus there are at most O(|s|d_{\rm freq}^{2}) frequency computations overall.  

To prove Theorems 7 and 8, we need to know the expectations and variances of k_{m,s}^{(c)} and \chi_{m,s}^{(c)}, as well as their covariance. So we first give Lemmas 9, 10 and 11. To simplify the notation, we omit the superscript “(c)” unless otherwise stated.

Lemma 9

The expectation and variance of k_{m,s} in Algorithm 2 are

E[km,s]=ps(1psD)1ps,{\rm E}[k_{m,s}]=\frac{p_{s}(1-p_{s}^{D})}{1-p_{s}},
Var[km,s]=ps(2D+1)psD+1+(2D+1)psD+2ps2D+2(1ps)2.{\rm Var}[k_{m,s}]=\frac{p_{s}-(2D+1)p_{s}^{D+1}+(2D+1)p_{s}^{D+2}-p_{s}^{2D+2}}{(1-p_{s})^{2}}.

Proof.

E[km,s]\displaystyle{\rm E}[k_{m,s}] =d=0D(km,s=d)d\displaystyle=\sum_{d=0}^{D}{\mathbb{P}(k_{m,s}=d)\cdot d}
=d=0D1psd(1ps)d+psDD\displaystyle=\sum_{d=0}^{D-1}p_{s}^{d}(1-p_{s})d+p_{s}^{D}D
=psDpsD+(D1)psD+11ps+DpsD\displaystyle=\frac{p_{s}-Dp_{s}^{D}+(D-1)p_{s}^{D+1}}{1-p_{s}}+Dp_{s}^{D}
=ps(1psD)1ps,\displaystyle=\frac{p_{s}(1-p_{s}^{D})}{1-p_{s}},
E[km,s2]\displaystyle{\rm E}[k_{m,s}^{2}] =d=0D(km,s=d)d2\displaystyle=\sum_{d=0}^{D}{\mathbb{P}(k_{m,s}=d)\cdot d^{2}}
=d=0D1psd(1ps)d2+psDD2\displaystyle=\sum_{d=0}^{D-1}p_{s}^{d}(1-p_{s})d^{2}+p_{s}^{D}D^{2}
=ps+ps2(2D+1)psD+1+(2D1)psD+2(1ps)2,\displaystyle=\frac{p_{s}+p_{s}^{2}-(2D+1)p_{s}^{D+1}+(2D-1)p_{s}^{D+2}}{(1-p_{s})^{2}},
Var[km,s]\displaystyle{\rm Var}[k_{m,s}] =E[km,s2](E[km,s])2\displaystyle={\rm E}[k_{m,s}^{2}]-({\rm E}[k_{m,s}])^{2}
=ps(2D+1)psD+1+(2D+1)psD+2ps2D+2(1ps)2.\displaystyle=\frac{p_{s}-(2D+1)p_{s}^{D+1}+(2D+1)p_{s}^{D+2}-p_{s}^{2D+2}}{(1-p_{s})^{2}}.

  

Lemma 10

The expectation and variance of \chi_{m,s} in Algorithm 2 are

{\rm E}[\chi_{m,s}]=1-p_{s}^{D},
{\rm Var}[\chi_{m,s}]=p_{s}^{D}(1-p_{s}^{D}).

Proof.

{\rm E}[\chi_{m,s}]={\rm E}[\chi_{m,s}^{2}]=\mathbb{P}(k_{m,s}<D)=1-p_{s}^{D},
{\rm Var}[\chi_{m,s}]={\rm E}[\chi_{m,s}^{2}]-({\rm E}[\chi_{m,s}])^{2}=p_{s}^{D}(1-p_{s}^{D}).

  

Lemma 11

The covariance of km,sk_{m,s} and χm,s\chi_{m,s} in Algorithm 2 is

Cov[km,s,χm,s]=DpsD+(D+1)psD+1ps2D+11ps.{\rm Cov}[k_{m,s},\chi_{m,s}]=\frac{-Dp_{s}^{D}+(D+1)p_{s}^{D+1}-p_{s}^{2D+1}}{1-p_{s}}.

Proof.

E[km,sχm,s]=d=0D1psd(1ps)d=psDpsD+(D1)psD+11ps,{\rm E}[k_{m,s}\chi_{m,s}]=\sum_{d=0}^{D-1}p_{s}^{d}(1-p_{s})d=\frac{p_{s}-Dp_{s}^{D}+(D-1)p_{s}^{D+1}}{1-p_{s}},
Cov[km,s,χm,s]=E[km,sχm,s]E[km,s]E[χm,s]=DpsD+(D+1)psD+1ps2D+11ps.{\rm Cov}[k_{m,s},\chi_{m,s}]={\rm E}[k_{m,s}\chi_{m,s}]-{\rm E}[k_{m,s}]{\rm E}[\chi_{m,s}]=\frac{-Dp_{s}^{D}+(D+1)p_{s}^{D+1}-p_{s}^{2D+1}}{1-p_{s}}.

  
Proof of Theorem 7. We need to determine the asymptotic distribution of the estimator built from k_{m,s}^{(c)} and \chi_{m,s}^{(c)}.

Define

g(k,χ)=kk+χ,g(k,\chi)=\frac{k}{k+\chi},
gkgk|Ekm,s,Eχm,s=(1ps)21psD,g_{k}^{\prime}\coloneqq\left.\frac{\partial g}{\partial k}\right|_{{\rm E}k_{m,s},{\rm E}\chi_{m,s}}=\frac{(1-p_{s})^{2}}{1-p_{s}^{D}},
gχgχ|Ekm,s,Eχm,s=ps(1ps)1psD,g_{\chi}^{\prime}\coloneqq\left.\frac{\partial g}{\partial\chi}\right|_{{\rm E}k_{m,s},{\rm E}\chi_{m,s}}=\frac{-p_{s}(1-p_{s})}{1-p_{s}^{D}},
\tau^{2}\coloneqq(g_{k}^{\prime})^{2}{\rm Var}[k_{m,s}]+(g_{\chi}^{\prime})^{2}{\rm Var}[\chi_{m,s}]+2g_{k}^{\prime}g_{\chi}^{\prime}{\rm Cov}[k_{m,s},\chi_{m,s}]=\frac{p_{s}(1-p_{s})^{2}}{1-p_{s}^{D}}.

By the Multivariate Delta Method,

M(p^sps)=M(g(k¯s,χ¯s)ps)n(0,τ2)\sqrt{M}(\hat{p}_{s}-p_{s})=\sqrt{M}(g(\bar{k}_{s},\bar{\chi}_{s})-p_{s})\to n(0,\tau^{2})

in distribution.  
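
As a quick Monte Carlo sanity check of Lemma 9 and of the asymptotic variance derived above (our own simulation, with arbitrary example values), one can draw k_{m,s} from the truncated geometric distribution used in the proof of Lemma 9 and compare the empirical moments of k_{m,s} and of the estimator \hat{p}_{s}=\bar{k}_{s}/(\bar{k}_{s}+\bar{\chi}_{s}) with the closed-form expressions.

import numpy as np

def simulate(p_s, D, M, n_rep=2000, seed=0):
    """k_{m,s}: number of consecutive occurrences of the pattern (prob. p_s each), capped at D."""
    rng = np.random.default_rng(seed)
    draws = rng.random((n_rep, M, D)) < p_s
    k = np.argmin(draws, axis=2)          # index of the first miss ...
    k[draws.all(axis=2)] = D              # ... or D if the pattern survived the whole chain
    chi = (k < D).astype(float)           # indicator that the chain broke before depth D
    p_hat = k.mean(axis=1) / (k.mean(axis=1) + chi.mean(axis=1))
    return k, p_hat

p_s, D, M = 0.6, 8, 500
k, p_hat = simulate(p_s, D, M)
print(k.mean(), p_s * (1 - p_s**D) / (1 - p_s))              # empirical vs. Lemma 9 mean
print(M * p_hat.var(), p_s * (1 - p_s)**2 / (1 - p_s**D))    # empirical vs. tau^2 above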

Proof of Theorem 8. Without loss of generality, we take q^s(1)\hat{q}_{s}^{(1)} for example. From the definition of q^s(1)\hat{q}_{s}^{(1)}, we have

q^s(1)=p^s(1)p(1)cCp^s(c)p(c)=k¯s(1)k¯s(1)+χ¯s(1)p(1)cCk¯s(c)k¯s(c)+χ¯s(c)p(c)h({k¯s(c),χ¯s(c)}cC),\hat{q}_{s}^{(1)}=\frac{\hat{p}_{s}^{(1)}p^{(1)}}{\sum_{c\in C}\hat{p}_{s}^{(c)}p^{(c)}}=\frac{\frac{\bar{k}_{s}^{(1)}}{\bar{k}_{s}^{(1)}+\bar{\chi}_{s}^{(1)}}p^{(1)}}{\sum_{c\in C}\frac{\bar{k}_{s}^{(c)}}{\bar{k}_{s}^{(c)}+\bar{\chi}_{s}^{(c)}}p^{(c)}}\eqqcolon h(\{\bar{k}_{s}^{(c)},\bar{\chi}_{s}^{(c)}\}_{c\in C}),

The partial derivatives are

hk¯s(c)\displaystyle\frac{\partial h}{\partial\bar{k}_{s}^{(c)}} ={χ¯s(1)(k¯s(1)+χ¯s(1))2p(1)cCk¯s(c)k¯s(c)+χ¯s(c)p(c)χ¯s(1)(k¯s(1)+χ¯s(1))2p(1)k¯s(1)k¯s(1)+χ¯s(1)p(1)(cCk¯s(c)k¯s(c)+χ¯s(c)p(c))2,if c=1χ¯s(c)(k¯s(c)+χ¯s(c))2p(c)k¯s(1)k¯s(1)+χ¯s(1)p(1)(cCk¯s(c)k¯s(c)+χ¯s(c)p(c))2,if c1,\displaystyle=\begin{cases}\frac{\frac{\bar{\chi}_{s}^{(1)}}{\left(\bar{k}_{s}^{(1)}+\bar{\chi}_{s}^{(1)}\right)^{2}}p^{(1)}\sum_{c\in C}\frac{\bar{k}_{s}^{(c)}}{\bar{k}_{s}^{(c)}+\bar{\chi}_{s}^{(c)}}p^{(c)}-\frac{\bar{\chi}_{s}^{(1)}}{\left(\bar{k}_{s}^{(1)}+\bar{\chi}_{s}^{(1)}\right)^{2}}p^{(1)}\frac{\bar{k}_{s}^{(1)}}{\bar{k}_{s}^{(1)}+\bar{\chi}_{s}^{(1)}}p^{(1)}}{\left(\sum_{c\in C}\frac{\bar{k}_{s}^{(c)}}{\bar{k}_{s}^{(c)}+\bar{\chi}_{s}^{(c)}}p^{(c)}\right)^{2}},\hfil\mbox{if~{}}c=1\\ \frac{-\frac{\bar{\chi}_{s}^{(c)}}{\left(\bar{k}_{s}^{(c)}+\bar{\chi}_{s}^{(c)}\right)^{2}}p^{(c)}\frac{\bar{k}_{s}^{(1)}}{\bar{k}_{s}^{(1)}+\bar{\chi}_{s}^{(1)}}p^{(1)}}{\left(\sum_{c\in C}\frac{\bar{k}_{s}^{(c)}}{\bar{k}_{s}^{(c)}+\bar{\chi}_{s}^{(c)}}p^{(c)}\right)^{2}},\hfill\mbox{if~{}}c\neq 1\\ \end{cases},
hχ¯s(c)\displaystyle\frac{\partial h}{\partial\bar{\chi}_{s}^{(c)}} ={k¯s(1)(k¯s(1)+χ¯s(1))2p(1)cCk¯s(c)k¯s(c)+χ¯s(c)p(c)+k¯s(1)(k¯s(1)+χ¯s(1))2p(1)k¯s(1)k¯s(1)+χ¯s(1)p(1)(cCk¯s(c)k¯s(c)+χ¯s(c)p(c))2,if c=1k¯s(c)(k¯s(c)+χ¯s(c))2p(c)k¯s(1)k¯s(1)+χ¯s(1)p(1)(cCk¯s(c)k¯s(c)+χ¯s(c)p(c))2,if c1.\displaystyle=\begin{cases}\frac{\frac{-\bar{k}_{s}^{(1)}}{\left(\bar{k}_{s}^{(1)}+\bar{\chi}_{s}^{(1)}\right)^{2}}p^{(1)}\sum_{c\in C}\frac{\bar{k}_{s}^{(c)}}{\bar{k}_{s}^{(c)}+\bar{\chi}_{s}^{(c)}}p^{(c)}+\frac{\bar{k}_{s}^{(1)}}{\left(\bar{k}_{s}^{(1)}+\bar{\chi}_{s}^{(1)}\right)^{2}}p^{(1)}\frac{\bar{k}_{s}^{(1)}}{\bar{k}_{s}^{(1)}+\bar{\chi}_{s}^{(1)}}p^{(1)}}{\left(\sum_{c\in C}\frac{\bar{k}_{s}^{(c)}}{\bar{k}_{s}^{(c)}+\bar{\chi}_{s}^{(c)}}p^{(c)}\right)^{2}},\hfil\mbox{if~{}}c=1\\ \frac{\frac{\bar{k}_{s}^{(c)}}{\left(\bar{k}_{s}^{(c)}+\bar{\chi}_{s}^{(c)}\right)^{2}}p^{(c)}\frac{\bar{k}_{s}^{(1)}}{\bar{k}_{s}^{(1)}+\bar{\chi}_{s}^{(1)}}p^{(1)}}{\left(\sum_{c\in C}\frac{\bar{k}_{s}^{(c)}}{\bar{k}_{s}^{(c)}+\bar{\chi}_{s}^{(c)}}p^{(c)}\right)^{2}},\hfill\mbox{if~{}}c\neq 1\end{cases}.

Substitute the variables with their expectations, and denote ps=cCps(c)p(c)p_{s}=\sum_{c\in C}p_{s}^{(c)}p^{(c)} as the marginal probability of ss, then

hk¯s(c)hk¯s(c)(μ)\displaystyle h_{\bar{k}_{s}^{(c)}}^{\prime}\coloneqq\frac{\partial h}{\partial\bar{k}_{s}^{(c)}}(\mu) ={1ps2[(1ps(1))21ps(1)Dp(1)ps(1ps(1))21ps(1)Dp(1)ps(1)p(1)],if c=11ps2[(1ps(c))21ps(c)Dp(c)ps(1)p(1)],if c1,\displaystyle=\begin{cases}\frac{1}{p_{s}^{2}}\left[\frac{(1-p_{s}^{(1)})^{2}}{1-p_{s}^{(1)D}}p^{(1)}p_{s}-\frac{(1-p_{s}^{(1)})^{2}}{1-p_{s}^{(1)D}}p^{(1)}p_{s}^{(1)}p^{(1)}\right],\hfil\mbox{if~{}}c=1\\ \frac{1}{p_{s}^{2}}\left[-\frac{(1-p_{s}^{(c)})^{2}}{1-p_{s}^{(c)D}}p^{(c)}p_{s}^{(1)}p^{(1)}\right],\hfill\mbox{if~{}}c\neq 1\\ \end{cases},
hχ¯s(c)hχ¯s(c)(μ)\displaystyle h_{\bar{\chi}_{s}^{(c)}}^{\prime}\coloneqq\frac{\partial h}{\partial\bar{\chi}_{s}^{(c)}}(\mu) ={1ps2[(1ps(1))ps(1)1ps(1)Dp(1)ps+(1ps(1))ps(1)1ps(1)Dp(1)ps(1)p(1)],if c=11ps2[(1ps(c))ps(c)1ps(c)Dp(c)ps(1)p(1)],if c1,\displaystyle=\begin{cases}\frac{1}{p_{s}^{2}}\left[-\frac{(1-p_{s}^{(1)})p_{s}^{(1)}}{1-p_{s}^{(1)D}}p^{(1)}p_{s}+\frac{(1-p_{s}^{(1)})p_{s}^{(1)}}{1-p_{s}^{(1)D}}p^{(1)}p_{s}^{(1)}p^{(1)}\right],\hfil\mbox{if~{}}c=1\\ \frac{1}{p_{s}^{2}}\left[\frac{(1-p_{s}^{(c)})p_{s}^{(c)}}{1-p_{s}^{(c)D}}p^{(c)}p_{s}^{(1)}p^{(1)}\right],\hfill\mbox{if~{}}c\neq 1\end{cases},

where μ\mu represents the corresponding expectations of all the variables in {k¯s(c),χ¯s(c)}cC\{\bar{k}_{s}^{(c)},\bar{\chi}_{s}^{(c)}\}_{c\in C}. Denote

A(c)\displaystyle A^{(c)} =1ps2(1ps(1))21ps(1)Dp(1)ps,\displaystyle=\frac{1}{p_{s}^{2}}\frac{(1-p_{s}^{(1)})^{2}}{1-p_{s}^{(1)D}}p^{(1)}p_{s}, (18)
B(c)\displaystyle B^{(c)} =1ps2(1ps(c))21ps(c)Dp(c)ps(1)p(1),\displaystyle=\frac{1}{p_{s}^{2}}\frac{(1-p_{s}^{(c)})^{2}}{1-p_{s}^{(c)D}}p^{(c)}p_{s}^{(1)}p^{(1)},
C(c)\displaystyle C^{(c)} =1ps2(1ps(1))ps(1)1ps(1)Dp(1)ps,\displaystyle=\frac{1}{p_{s}^{2}}\frac{(1-p_{s}^{(1)})p_{s}^{(1)}}{1-p_{s}^{(1)D}}p^{(1)}p_{s},
D(c)\displaystyle D^{(c)} =1ps2(1ps(c))ps(c)1ps(c)Dp(c)ps(1)p(1).\displaystyle=\frac{1}{p_{s}^{2}}\frac{(1-p_{s}^{(c)})p_{s}^{(c)}}{1-p_{s}^{(c)D}}p^{(c)}p_{s}^{(1)}p^{(1)}.

Then

hk¯s(c)\displaystyle h_{\bar{k}_{s}^{(c)}}^{\prime} ={A(c)B(c),if c=1B(c),if c1,\displaystyle=\begin{cases}A^{(c)}-B^{(c)},\hfil\mbox{if~{}}c=1\\ -B^{(c)},\hfill\mbox{if~{}}c\neq 1\\ \end{cases}, (19)
hχ¯s(c)\displaystyle h_{\bar{\chi}_{s}^{(c)}}^{\prime} ={C(c)+D(c),if c=1D(c),if c1.\displaystyle=\begin{cases}-C^{(c)}+D^{(c)},\hfil\mbox{if~{}}c=1\\ D^{(c)},\hfill\mbox{if~{}}c\neq 1\end{cases}.

Since the chains for different classes are independent, for all ccc\neq c^{\prime} we have

Cov[k¯s(c),k¯s(c)]=Cov[χ¯s(c),χ¯s(c)]=Cov[k¯s(c),χ¯s(c)]=0.{\rm Cov}[\bar{k}_{s}^{(c)},\bar{k}_{s}^{(c^{\prime})}]={\rm Cov}[\bar{\chi}_{s}^{(c)},\bar{\chi}_{s}^{(c^{\prime})}]={\rm Cov}[\bar{k}_{s}^{(c)},\bar{\chi}_{s}^{(c^{\prime})}]=0.

Define

τ2\displaystyle\tau^{2} =cCcC[hk¯s(c)hk¯s(c)Cov[k¯s(c),k¯s(c)]+hχ¯s(c)hχ¯s(c)Cov[χ¯s(c),χ¯s(c)]\displaystyle=\sum_{c\in C}\sum_{c^{\prime}\in C}\left[h_{\bar{k}_{s}^{(c)}}^{\prime}h_{\bar{k}_{s}^{(c^{\prime})}}^{\prime}{\rm Cov}[\bar{k}_{s}^{(c)},\bar{k}_{s}^{(c^{\prime})}]+h_{\bar{\chi}_{s}^{(c)}}^{\prime}h_{\bar{\chi}_{s}^{(c^{\prime})}}^{\prime}{\rm Cov}[\bar{\chi}_{s}^{(c)},\bar{\chi}_{s}^{(c^{\prime})}]\right. (20)
+2hk¯s(c)hχ¯s(c)Cov[k¯s(c),χ¯s(c)]]\displaystyle\quad\left.+2h_{\bar{k}_{s}^{(c)}}^{\prime}h_{\bar{\chi}_{s}^{(c^{\prime})}}^{\prime}{\rm Cov}[\bar{k}_{s}^{(c)},\bar{\chi}_{s}^{(c^{\prime})}]\right]
=cC[hk¯s(c)2Var[k¯s(c)]+hχ¯s(c)2Var[χ¯s(c)]+2hk¯s(c)hχ¯s(c)Cov[k¯s(c),χ¯s(c)]]\displaystyle=\sum_{c\in C}\left[h_{\bar{k}_{s}^{(c)}}^{\prime 2}{\rm Var}[\bar{k}_{s}^{(c)}]+h_{\bar{\chi}_{s}^{(c)}}^{\prime 2}{\rm Var}[\bar{\chi}_{s}^{(c)}]+2h_{\bar{k}_{s}^{(c)}}^{\prime}h_{\bar{\chi}_{s}^{(c)}}^{\prime}{\rm Cov}[\bar{k}_{s}^{(c)},\bar{\chi}_{s}^{(c)}]\right]
=cC[B(c)2Var[k¯s(c)]+D(c)2Var[χ¯s(c)]2B(c)D(c)Cov[k¯s(c),χ¯s(c)]]\displaystyle=\sum_{c\in C}\left[B^{(c)2}{\rm Var}[\bar{k}_{s}^{(c)}]+D^{(c)2}{\rm Var}[\bar{\chi}_{s}^{(c)}]-2B^{(c)}D^{(c)}{\rm Cov}[\bar{k}_{s}^{(c)},\bar{\chi}_{s}^{(c)}]\right]
+[A(A2B(1))Var[k¯s(1)]+C(C2D(1))Var[χ¯s(c)]\displaystyle\quad+\left[A(A-2B^{(1)}){\rm Var}[\bar{k}_{s}^{(1)}]+C(C-2D^{(1)}){\rm Var}[\bar{\chi}_{s}^{(c)}]\right.
+2(AD(1)+B(1)CAC)Cov[k¯s(1),χ¯s(1)]]\displaystyle\quad\left.+2(AD^{(1)}+B^{(1)}C-AC){\rm Cov}[\bar{k}_{s}^{(1)},\bar{\chi}_{s}^{(1)}]\right]
=[ps(1)p(1)ps2]2cC[1ps(c)]2ps(c)1ps(c)Dp(c)2+[p(1)ps2]2[1ps(1)]2ps(1)1ps(1)Dps(ps2ps(1)p(1)).\displaystyle=\left[\frac{p_{s}^{(1)}p^{(1)}}{p_{s}^{2}}\right]^{2}\sum_{c\in C}\frac{[1-p_{s}^{(c)}]^{2}p_{s}^{(c)}}{1-p_{s}^{(c)D}}p^{(c)2}+\left[\frac{p^{(1)}}{p_{s}^{2}}\right]^{2}\frac{[1-p_{s}^{(1)}]^{2}p_{s}^{(1)}}{1-p_{s}^{(1)D}}p_{s}(p_{s}-2p_{s}^{(1)}p^{(1)}).

According to the Multivariate Delta Method, noticing that \left.h\right|_{\mu}=q_{s}^{(1)}, we can conclude that

\sqrt{M}[\hat{q}_{s}^{(1)}-q_{s}^{(1)}]=\sqrt{M}[h(\{\bar{k}_{s}^{(c)},\bar{\chi}_{s}^{(c)}\}_{c\in C})-q_{s}^{(1)}]\to n(0,\tau^{2})

in distribution.  


References

  • Agrawal and Srikant (1994) R. Agrawal and R. Srikant. Fast algorithms for mining association rules. In Proc. of 20th Intl. Conf. on VLDB, pages 487–499, 1994.
  • Agrawal et al. (1993) R. Agrawal, T. Imieliński, and A. Swami. Mining association rules between sets of items in large databases. SIGMOD Rec., 22(2):207–216, June 1993. ISSN 0163-5808. doi: 10.1145/170036.170072.
  • Agrawal et al. (2019) R. Agrawal, J. H. Huggins, B. L. Trippe, and T. Broderick. The kernel interaction trick: Fast bayesian discovery of pairwise interactions in high dimensions. In ICML 2019, volume 2019-June, pages 199 – 214, Long Beach, CA, United states, 2019.
  • Aha et al. (1991) D. W. Aha, D. Kibler, and M. K. Albert. Instance-based learning algorithms. Machine Learning, 6(1):37–66, Jan. 1991. ISSN 0885-6125. doi: 10.1023/A:1022689900470. URL https://doi.org/10.1023/A:1022689900470.
  • Bien et al. (2013) J. Bien, J. Taylor, and R. Tibshirani. A lasso for hierarchical interactions. The Annals of Statistics, 41(3):1111–1141, 06 2013. doi: 10.1214/13-AOS1096.
  • Breiman (2001) L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
  • Breiman et al. (1984) L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth, 01 1984.
  • Cheng et al. (2016) H.-T. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado, W. Chai, M. Ispir, R. Anil, Z. Haque, L. Hong, V. Jain, X. Liu, and H. Shah. Wide & deep learning for recommender systems. In DLRS’16, page 7–10, 2016.
  • Cormen et al. (2009) T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms, Third Edition. The MIT Press, 3rd edition, 2009. ISBN 0262033844.
  • Friedman (1991) J. Friedman. Multivariate adaptive regression splines. Ann. Statist., 19:1–61, Mar 1991. doi: 10.1214/aos/1176347973.
  • Friedman and Popescu (2008) J. H. Friedman and B. E. Popescu. Predictive learning via rule ensembles. The Annals of Applied Statistics, 2(3):916 – 954, 2008. doi: 10.1214/07-AOAS148. URL https://doi.org/10.1214/07-AOAS148.
  • Guo et al. (2017) H. Guo, R. Tang, Y. Ye, Z. Li, and X. He. Deepfm: A factorization-machine based neural network for ctr prediction. In IJCAI’17, page 1725–1731, 2017.
  • Han et al. (2000) J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. SIGMOD Rec., 29(2):1–12, May 2000. ISSN 0163-5808. doi: 10.1145/335191.335372. URL https://doi.org/10.1145/335191.335372.
  • Hao and Zhang (2014) N. Hao and H. H. Zhang. Interaction screening for ultrahigh-dimensional data. Journal of the American Statistical Association, 109(507):1285–1301, 2014. doi: 10.1080/01621459.2014.881741. PMID: 25386043.
  • Hao et al. (2018) N. Hao, Y. Feng, and H. H. Zhang. Model selection for high-dimensional quadratic regression via regularization. Journal of the American Statistical Association, 113(522):615–625, 2018. doi: 10.1080/01621459.2016.1264956.
  • Jian Pei et al. (2001) J. Pei, J. Han, H. Lu, S. Nishio, S. Tang, and D. Yang. H-mine: Hyper-structure mining of frequent patterns in large databases. In Proceedings of the 2001 IEEE International Conference on Data Mining, pages 441–448, 2001. doi: 10.1109/ICDM.2001.989550.
  • Lewis et al. (2004) D. Lewis, Y. Yang, T. Russell-Rose, and F. Li. Rcv1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397, 04 2004.
  • Lian et al. (2018) J. Lian, X. Zhou, F. Zhang, Z. Chen, X. Xie, and G. Sun. Xdeepfm: Combining explicit and implicit feature interactions for recommender systems. In SIGKDD, page 1754–1763, 2018.
  • Matheus and Rendell (1989) C. J. Matheus and L. A. Rendell. Constructive induction on decision trees. In Proceedings of the 11th International Joint Conference on Artificial Intelligence - Volume 1, IJCAI’89, page 645–650, San Francisco, CA, USA, 1989. Morgan Kaufmann Publishers Inc.
  • Qu et al. (2016) Y. Qu, H. Cai, K. Ren, W. Zhang, Y. Yu, Y. Wen, and J. Wang. Product-based neural networks for user response prediction. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pages 1149–1154, 2016. doi: 10.1109/ICDM.2016.0151.
  • Quinlan (1986) J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81–106, 1986.
  • Quinlan (1993) J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993. ISBN 1-55860-238-0.
  • Shah (2016) R. D. Shah. Modelling interactions in high-dimensional data with backtracking. Journal of Machine Learning Research, 17(1):7225–7255, Jan. 2016. ISSN 1532-4435.
  • Shah and Meinshausen (2014) R. D. Shah and N. Meinshausen. Random intersection trees. Journal of Machine Learning Research, 15(1):629–654, 1 2014. ISSN 1532-4435.
  • Song et al. (2020) Q. Song, D. Cheng, H. Zhou, J. Yang, Y. Tian, and X. Hu. Towards automated neural interaction discovery for click-through rate prediction. In SIGKDD, page 945–955, 2020.
  • Song et al. (2019) W. Song, C. Shi, Z. Xiao, Z. Duan, Y. Xu, M. Zhang, and J. Tang. Autoint: Automatic feature interaction learning via self-attentive neural networks. In CIKM’19, 2019.
  • Sorokina et al. (2008) D. Sorokina, R. Caruana, M. Riedewald, and D. Fink. Detecting statistical interactions with additive groves of trees. In ICML, page 1000–1007, 2008.
  • Thanei et al. (2018) G.-A. Thanei, N. Meinshausen, and R. Shah. The xyz algorithm for fast interaction search in high-dimensional data. Journal of Machine Learning Research, 19:1–42, 10 2018.
  • Tsang et al. (2017) M. Tsang, D. Cheng, and Y. Liu. Detecting statistical interactions from neural network weights. In ICLR, 05 2017.
  • Tsang et al. (2020) M. Tsang, D. Cheng, H. Liu, X. Feng, E. Zhou, and Y. Liu. Feature interaction interpretability: A case for explaining ad-recommendation systems via neural interaction detection. In ICLR, 2020.
  • Wang et al. (2017) R. Wang, B. Fu, G. Fu, and M. Wang. Deep & cross network for ad click predictions. In ADKDD, 2017.
  • Yu et al. (2019) G. Yu, J. Bien, and R. Tibshirani. Reluctant interaction modeling. 07 2019.