Random Intersection Chains
Abstract
Interactions between several features sometimes play an important role in prediction tasks. But taking all the interactions into consideration leads to an extremely heavy computational burden. For categorical features, the situation is more complicated, since the input will be extremely high-dimensional and sparse if one-hot encoding is applied. Inspired by association rule mining, we propose a method that selects interactions of categorical features, called Random Intersection Chains. It uses random intersections to detect frequent patterns, then selects the most meaningful ones among them. At first a number of chains are generated, in which each node is the intersection of the previous node and a randomly chosen observation. The frequency of the patterns in the tail nodes is estimated by maximum likelihood estimation, then the patterns with the largest estimated frequencies are selected. After that, their confidence is calculated by Bayes’ theorem. The most confident patterns are finally returned by Random Intersection Chains. We show that if the number and length of the chains are appropriately chosen, the patterns in the tail nodes are indeed the most frequent ones in the data set. We analyze the computational complexity of the proposed algorithm and prove the convergence of the estimators. The results of a series of experiments verify the efficiency and effectiveness of the algorithm.
Keywords: categorical feature, association rule mining, interaction, intersection
1 Introduction
For practical applications of machine learning algorithms, usually not only the original features but also their interactions play an important role. However, taking all the interactions into consideration leads to an extremely high-dimensional input. For example, even if only products of pairs of numerical features are considered, there are already $p(p-1)/2$ such interactions for $p$ main effects. As the size of data grows rapidly nowadays, adding all pairwise interactions to the input may be computationally infeasible, let alone higher-order interactions. What’s more, adding all the interactions without selection is possibly harmful to the prediction model, since too much redundant information is brought in.
It is a well-established practice among statisticians to fit models on interactions as well as the original features. For example, Bien et al. (2013) add a set of convex constraints to the lasso that honour the hierarchy restriction; Hao and Zhang (2014) tackle the difficulty by forward-selection-based procedures; Hao et al. (2018) consider a two-stage LASSO and a new regularization method named RAMP to compute a hierarchy-preserving regularization solution path efficiently; Agrawal et al. (2019) propose to speed up inference in Bayesian linear regression with pairwise interactions by using a Gaussian process and a kernel interaction trick. However, these methods are based on the hierarchy assumption, that is, an interaction is useful only if its lower-order components are also useful. So the theoretical analyses lose their validity, and the practical performance may be unsatisfactory, when the assumption does not hold.
There is also some work free of the hierarchy assumption. Thanei et al. (2018) propose the xyz algorithm, where the underlying idea is to transform interaction search into a closest pair problem which can be solved efficiently in subquadratic time. Instead of the hierarchy principle, Yu et al. (2019) come up with the reluctant principle, which says that one should prefer main effects over interactions given similar prediction performance.
The above-mentioned work mainly aims to select pairwise interactions. A drawback of these methods is that potentially informative higher-order interactions are overlooked. Random Intersection Trees (Shah and Meinshausen, 2014) gets over this difficulty by starting with a maximal interaction that includes all variables, and then gradually removing variables if they fail to appear in randomly chosen observations of a class of interest. Another approach is Backtracking (Shah, 2016). It can be incorporated into many existing high-dimensional methods based on penalty functions, and works by iteratively building increasing sets of candidate interactions.
An alternative approach for higher-order interaction selection is extracting interactions from rules. Decision trees, such as ID3 (Quinlan, 1986), C4.5 (Quinlan, 1993) and CART (Breiman et al., 1984), are widely-used models to generate comprehensive rules. They work by partitioning the input space according to whether the features satisfy some conditions, and then assigning a constant to each region. After the tree is built, each path connecting the root node and a leaf node can be transformed into a decision rule by combining the split decisions. The predictions of the leaf nodes are discarded and only the splits are used in the decision rules. RuleFit (Friedman and Popescu, 2008) uses these decision rules as binary features, then fits a linear model on them. According to Qu et al. (2016), however, the exploration ability of tree-based models is restricted for high-dimensional categorical features due to the low usage rate of categorical features.
Besides tree-based models, there is another family of algorithms that generate rules from data, namely association rule mining. For a categorical feature $X_j$ and one of its optional values $v$, we can treat the pattern “$X_j=v$” as an item. In this way we can transform an observation in the data set into a record of items, and association rule mining algorithms can then be applied. Mining association rules between items in a large database is one of the most important and well-researched topics of data mining. Let $I$ be a set of items; a subset of $I$ is called an itemset. Association rule mining aims to extract rules of the form “$X \Rightarrow Y$”, where $X \subset I$, $Y \subset I$ and $X \cap Y = \emptyset$. Calling $X$ the antecedent and $Y$ the consequent, the rule means that $X$ implies $Y$. The support of an itemset $Z$ is the number of records that contain $Z$. For an association rule “$X \Rightarrow Y$”, its support is defined as the fraction of records that contain $X \cup Y$ among all records in the database, and its confidence is the number of cases in which the rule is correct relative to the number of cases in which it is applicable, or equivalently, support($X \cup Y$)/support($X$).
The association rule mining problem was first stated by Agrawal et al. (1993), and further studied by many researchers. Apriori (Agrawal and Srikant, 1994), FP-growth (Han et al., 2000) and H-mine (Pei et al., 2001) are some well-known algorithms for association rule mining. If the antecedents are meaningful for the target, it seems reasonable to use them as features for another classification model rather than as a classifier themselves, just like how RuleFit makes use of decision rules.
Although common association rule mining algorithms are much faster than a brute-force search, many of them are not suitable for “big data” since they have to go through the whole database multiple times. On the contrary, Random Intersection Trees gets over this difficulty by regarding the intersection of a set of random samples as a frequent pattern. To be more specific, it generates trees of a given depth, in which each node except the leaf nodes has several child nodes, whose number follows a pre-specified distribution. The root node contains the items in an observation chosen uniformly from the database, and each node except the root consists of the intersection of its parent node and a uniformly chosen observation. The patterns in the leaf nodes are finally used as interactions. One shortcoming of Random Intersection Trees is that the selection is relatively crude, because the subpatterns of the found frequent patterns are neglected, although they are actually more frequent. What’s more, Random Intersection Trees aims to find the patterns that are frequent in a class of interest but infrequent in other classes, which are exactly “confident rules”. But precise quantities for “frequency” or “confidence” are not provided, which makes it difficult to compare different patterns carefully. Another problem is that Random Intersection Trees can only deal with binary features. A categorical feature of high cardinality should be one-hot encoded before applying the algorithm, which may result in a very high-dimensional and sparse input.
Inspired by the idea of Random Intersection Trees, we suggest a method that can select useful interactions of categorical features for classification tasks, called Random Intersection Chains. The road map of Random Intersection Chains is listed below.
1. Generate chains for different classes separately by random intersections;
2. Calculate the frequency of the patterns in the tail nodes, as well as their subpatterns, by maximum likelihood estimation;
3. Select the most frequent patterns;
4. Calculate the confidence of the most frequent patterns by Bayes’ theorem;
5. Select the most confident patterns.
Our main contributions in this paper can be summarized as follows: (1) an interaction selection method for categorical features in classification tasks, named Random Intersection Chains, is proposed; (2) we show that Random Intersection Chains can find all the frequent patterns while leaving out the infrequent ones if the parameters are appropriately chosen; (3) the computational complexity, including space complexity and time complexity, is analyzed; (4) we prove that the estimated frequency and confidence converge to their true values; (5) a series of experiments is conducted to verify the effectiveness and efficiency of Random Intersection Chains.
The rest of the paper is organized as follows. In Section 2, a brief introduction to some related concepts is given. In Section 3 we introduce Random Intersection Chains, our algorithm for interactive feature selection, in detail. This is followed by an analysis of computational complexity in Section 4. In Section 5, we theoretically analyze the convergence of the estimated frequency and confidence. In Section 6 we report the results of a series of experiments to verify the effectiveness of the algorithm. Finally the paper is concluded in Section 7. The proofs of some main results are relegated to the Appendix.
2 Preliminaries
In this paper, we consider the classification task involving high-dimensional categorical predictors. Usually the categorical features have a number of optional values. A common approach to deal with such features is one-hot encoding, which transforms a categorical feature to a large number of binary variables. But this method will make the input extremely high-dimensional and sparse. To avoid this difficulty, label encoding is adopted in this paper, which maps each category to a specific integer. For example, there may be three categories, namely “red”, “green” and “blue”, in a feature representing colors. Then we replace “red”, “green” and “blue” with 1, 2 and 3, respectively. It’s worth noting that these integers can only be used to check whether two values are identical or different, while their numerical relationships should be ignored.
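To make the encoding concrete, the following is a minimal sketch (not the authors' code) of label encoding a single categorical column; the function name and return format are illustrative.

```python
# A minimal sketch of label encoding: each category of a feature is mapped to an
# integer identifier, whose numerical order carries no meaning.
def label_encode(column):
    """Map each distinct category in `column` to an integer starting from 1."""
    mapping = {}
    encoded = []
    for value in column:
        if value not in mapping:
            mapping[value] = len(mapping) + 1
        encoded.append(mapping[value])
    return encoded, mapping

colors = ["red", "green", "blue", "green", "red"]
codes, mapping = label_encode(colors)
print(codes)    # [1, 2, 3, 2, 1]
print(mapping)  # {'red': 1, 'green': 2, 'blue': 3}
```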
Suppose $X_1, X_2, \ldots, X_p$ are categorical features, and $\mathcal{C}$ is the set of classification labels. The given data set is in the form of $(X, y)$, where $X$ contains the records of $n$ observations and $y$ indicates the labels of these observations. The $i$-th row of $X$ and the $i$-th component of $y$ are denoted by $x_i$ and $y_i$, respectively. An observation $x_i$ in the data set can be viewed from two aspects. If we treat the encoded categories as integers, $x_i$ is naturally a vector of dimension $p$. Alternatively, each component of $x_i$ can be seen as an item, so that $x_i$ is a record consisting of $p$ items. Therefore, $(X, y)$ can be regarded as a data set for machine learning algorithms, or a database for data mining algorithms.
For a variable $X_j$ and one of its possible values $v$, we use “$X_j=v$” to represent $\mathbb{1}(X_j=v)$, a binary feature that indicates whether the value of variable $X_j$ is $v$. “$X_j=v$” also stands for an item that only appears in the records where the value of variable $X_j$ is $v$. Similarly, for $k>1$, suppose $X_{j_1}, \ldots, X_{j_k}$ are $k$ variables and $v_1, \ldots, v_k$ are corresponding possible values. Then a pattern $S$ = “$X_{j_1}=v_1, \ldots, X_{j_k}=v_k$” can be comprehended as a logical expression, a binary feature $\prod_{l=1}^{k}\mathbb{1}(X_{j_l}=v_l)$, or an itemset containing $k$ items. We call such an expression a “$k$-order interaction”. This definition coincides with the term “interaction” used by Shah and Meinshausen (2014), and reduces to the latter when all the features are binary. Also the interaction defined here is a non-additive interaction (Friedman and Popescu, 2008; Sorokina et al., 2008; Tsang et al., 2017), since it cannot be represented by a linear combination of lower-order interactions. In this paper, we use the terms “interaction”, “itemset” and “pattern” interchangeably to describe such expressions. When there is no ambiguity about the classification label $c$, we also refer to “$S \Rightarrow c$” simply as a “rule”.
The frequency of an interaction $S$ for class $c$ is defined as the ratio of the records containing $S$ with label $c$ to all the records with label $c$, and is denoted by

$$\mathrm{freq}_c(S) = \frac{|\{i \in I_c : S \subseteq x_i\}|}{|I_c|}, \qquad (1)$$

where $I_c$ is the set of observations in class $c$. The confidence of an interaction $S$ for class $c$ is defined as the ratio of the records containing $S$ with label $c$ to all the records containing $S$, and is denoted by

$$\mathrm{conf}_c(S) = \frac{|\{i \in I_S : y_i = c\}|}{|I_S|}, \qquad (2)$$

where $I_S$ is the set of observations containing the interaction $S$. An interaction is said to be frequent if its frequency is large, and confident if its confidence is large for a class.
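As a concrete illustration of Equations (1) and (2), the following sketch computes the frequency and confidence of a pattern by brute force on a toy data layout (lists of category codes plus a label list); the function names and interfaces are our own choices, not part of the proposed algorithm.

```python
# Brute-force computation of frequency (Eq. 1) and confidence (Eq. 2) of a pattern S.
def frequency(X, y, S, c):
    """freq_c(S): fraction of class-c observations that contain the pattern S.

    X: list of observations (lists of category codes), y: labels,
    S: dict {feature index: value}, c: class label.
    """
    in_class = [x for x, label in zip(X, y) if label == c]
    contains = [x for x in in_class if all(x[j] == v for j, v in S.items())]
    return len(contains) / len(in_class) if in_class else 0.0

def confidence(X, y, S, c):
    """conf(S -> c): fraction of observations containing S whose label is c."""
    with_S = [label for x, label in zip(X, y) if all(x[j] == v for j, v in S.items())]
    return sum(1 for label in with_S if label == c) / len(with_S) if with_S else 0.0
```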
The main goal of this paper is to efficiently detect the interactions that are both frequent and confident for some classes. Then these interactions are used as the input for a succeeding classification model. Since irrelevant predictors in the original input are dropped and useful high-order interactions are explicitly added, interaction detection is likely to be beneficial for the prediction performance.
3 Random Intersection Chains
In this section we give a naive version of Random Intersection Chains at first, and show that there exist appropriate parameters for it to find all the frequent patterns while the infrequent ones are left out. Then a modification is provided to prevent the exponential computational burden for selecting the most frequent subpatterns from a given pattern.
3.1 A Basic Algorithm
Drawing inspiration from association rule mining and Random Intersection Trees, we suggest an algorithm that can efficiently detect useful interactions, named Random Intersection Chains. Like other data mining algorithms, it first discovers frequent itemsets and then generates confident rules. But instead of scanning the complete database, we adopt random intersections to mine the frequent itemsets; their frequency and confidence are then calculated with the assistance of maximum likelihood estimation and Bayes’ theorem.
The first node of a chain, called the head node, contains the items in a randomly chosen instance. Every other node in the chain contains the intersection of its previous node and a newly chosen random instance. We repeatedly choose random instances until the length of the chain reaches a pre-defined threshold, or the number of items in the last node (named the tail node) is sufficiently small. Finally the itemsets in the tail nodes, as well as their subsets, are regarded as frequent itemsets. For example, if we want to generate a chain consisting of three nodes, and the chosen instances are $x_{i_1}$, $x_{i_2}$ and $x_{i_3}$, then the chain is $x_{i_1} \to (x_{i_1} \cap x_{i_2}) \to (x_{i_1} \cap x_{i_2} \cap x_{i_3})$. The procedure of generating $K$ such chains of length $D$ can be seen directly from Algorithm 1.
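The following is a small sketch of how one such chain could be generated, assuming each observation is encoded as a set of (feature, value) items; the function name and the simplified stopping rule are illustrative, not the authors' implementation of Algorithm 1.

```python
import random

# A sketch of generating one random intersection chain: the head node is a random
# observation and every later node intersects the previous node with a freshly
# drawn observation.
def generate_chain(observations, max_length):
    head = set(random.choice(observations))
    chain = [head]
    while len(chain) < max_length and chain[-1]:
        new_obs = set(random.choice(observations))
        chain.append(chain[-1] & new_obs)   # intersection with a new random observation
    return chain

# Example: observations encoded as sets of (feature index, value) pairs.
obs = [{(0, 1), (1, 2), (2, 3)}, {(0, 1), (1, 2), (2, 1)}, {(0, 2), (1, 2), (2, 3)}]
print(generate_chain(obs, max_length=3))
```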
The larger frequency a pattern has, the more likely it is to appear in a uniformly chosen instance. Therefore, it’s reasonable to assume that the patterns in the tail nodes are more frequent than others. After detecting the frequent patterns, Shah and Meinshausen (2014) adopt an estimator based on min-wise hashing to obtain their frequency. Though this estimator enjoys reduced variance compared to the one that would be obtained using subsampling, it seems somewhat redundant and unnatural because it is independent of Random Intersection Trees. Random Intersection Chains, however, can estimate the frequency by themselves.
For a pattern with frequency $p$, denote the number of nodes containing it in the $j$-th chain for class $c$ by $M_j$. Since the pattern appears in the first $M_j$ nodes and then, unless the chain ends, is lost at the next intersection, the likelihood of observing this chain is

$$L_j(p) = p^{M_j}(1-p)^{\mathbb{1}(M_j < D)}, \qquad (3)$$

where we omit the superscript “$c$” to keep the notation uncluttered. The likelihood of observing $K$ chains is then as shown in Equation 4,

$$L(p) = \prod_{j=1}^{K} p^{M_j}(1-p)^{\mathbb{1}(M_j < D)} = p^{M}(1-p)^{K_0}, \qquad (4)$$

where $M = \sum_{j=1}^{K} M_j$ and $K_0 = \#\{j : M_j < D\}$. Thus the log-likelihood is

$$\log L(p) = M \log p + K_0 \log(1-p). \qquad (5)$$

Setting the derivative of Equation 5 with respect to $p$ equal to zero and rearranging, we obtain

$$\hat{p} = \frac{M}{M + K_0}, \qquad (6)$$

where $M$ and $K_0$ are defined as above.
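Following the reconstruction of Equation 6 above, the maximum likelihood estimate has a simple closed form; the names D, M and K0 below follow our notation, and the sketch is purely illustrative.

```python
# Closed-form maximum likelihood estimate of a pattern's frequency from K chains:
# M is the total number of nodes containing the pattern, K0 the number of chains
# in which it vanished before the last node.
def estimate_frequency(counts, D):
    """counts[j] = number of nodes of the j-th chain containing the pattern."""
    M = sum(counts)
    K0 = sum(1 for m in counts if m < D)
    if M + K0 == 0:
        return 0.0
    return M / (M + K0)

print(estimate_frequency([3, 2, 5, 0], D=5))  # the pattern survived one full chain
```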
Algorithm 2 estimates the frequency of a pattern by maximum likelihood estimation based on Random Intersection Chains. Algorithm 3 provides an estimator of confidence by Bayes’ theorem once the frequency is available.
After the chains are generated, we estimate the frequency of the patterns in the tail nodes. Then the confidence of these patterns is calculated, after which the confident patterns are returned as the useful interactions. We formally describe a basic version of Random Intersection Chains in Algorithm 4, which combines the characteristic of both random intersections and association rule mining.
We now explain Algorithm 4 line by line. There are three parameters in the algorithm: $D$, the length of a chain; $K$, the number of chains; and a threshold on the confidence. The first two are inherited from random intersections, and the last one comes from association rule mining.
Line 1 calculates the proportion of each class, where $I_c$ represents the indices of the observations in class $c$ and $\mathcal{C}$ is the set of class labels. These proportions will be used later to calculate the confidence. Line 2 initializes the set of frequent patterns, which is initially empty, for each class. For every class, $K$ chains are generated at Line 4, and the patterns in the tail nodes, as well as all their subpatterns, are added to the sets created at Line 2. The frequency of a pattern in every class is calculated at Line 10, after which the confidence is calculated at Line 11. If the confidence of a pattern is larger than the pre-defined threshold, the pattern is included in the resulting set at Line 13. Finally the resulting set is returned at Line 17.
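The confidence computation at Line 11 combines the class proportions with the per-class frequency estimates through Bayes' theorem. A minimal sketch of this step, with our own function name and dictionary-based interface, is given below.

```python
# Confidence via Bayes' theorem: P(Y=c | S) is proportional to P(Y=c) * freq_c(S).
def confidence_from_frequencies(class_priors, freqs):
    """class_priors[c] = P(Y=c); freqs[c] = estimated freq_c(S).

    Returns {c: estimated conf(S -> c)} for every class c.
    """
    joint = {c: class_priors[c] * freqs[c] for c in class_priors}
    total = sum(joint.values())          # P(S appears), by the law of total probability
    return {c: (joint[c] / total if total > 0 else 0.0) for c in joint}

print(confidence_from_frequencies({0: 0.8, 1: 0.2}, {0: 0.05, 1: 0.45}))
```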
The longer a chain is, the harder it is for a pattern to appear in its tail node. Conversely, the more chains there are, the more likely a pattern is to be observed in at least one of their tail nodes. By adjusting $D$ and $K$ carefully, we can control which patterns will be considered as “frequent”. As proven in Theorem 1, there actually exist choices of the parameters such that the returned set contains frequent patterns with arbitrarily high probability while including the infrequent ones with probability lower than any given threshold.
Theorem 1
Given $0 < \theta_1 < \theta_0 < 1$, for any $\delta_0, \delta_1 \in (0,1)$, there exist choices of $D$ and $K$ such that the set returned by Algorithm 4 contains a pattern $S$ with probability at least $1-\delta_0$ if $\mathrm{freq}_c(S) \geq \theta_0$, and with probability at most $\delta_1$ if $\mathrm{freq}_c(S) \leq \theta_1$.
So we have Corollary 2.
Corollary 2
Usually and are small, thus is large and is small. Assume , then the first item in the braces of Equation 7 is no greater than 0. If , the second item in the braces of Equation 7 is no less than 0. In this case, Corollary 3 holds.
Corollary 3
Compared with Random Intersection Trees, it is more convenient to apply Random Intersection Chains on multi-class classification tasks. The former is originally designed for binary classification, and detects the interesting patterns for one class at a time. But it’s unable to answer which patterns are the most useful ones among different classes. On the contrary, since Random Intersection Chains not only detects the frequent patterns but also estimates their frequency and confidence, we can directly compare the frequency or confidence of patterns in different classes. Thus we can select the patterns from all the classes simultaneously.
3.2 Random Intersection Chains with Priority Queues
Thanks to abandoning the scan of the complete database, the naive version of Random Intersection Chains is more efficient than traditional methods on large data sets. Another advantage is that it can extract high-order interactions without discovering all lower-order interactions beforehand. However, this characteristic is also where a drawback of Random Intersection Chains comes from.
It is obvious that any subpattern of a frequent pattern is also frequent. Since the pattern in the tail node of a chain may contain many components, we have in fact obtained a huge number of frequent patterns. For example, if an interaction in a tail node contains $k$ components, then every combination of these components is frequent, so there are $2^k$ frequent patterns indeed. It is computationally infeasible to calculate the frequency and confidence of every frequent pattern in this sense, and even enumerating all these subpatterns is impossible due to the exponential complexity.
Random Intersection Trees (Shah and Meinshausen, 2014) gets over this difficulty by giving priority to the largest frequent patterns. That is, the authors only consider the patterns whose every superset is infrequent. They limit their attention to the patterns in the leaf nodes, but ignore their subpatterns. This may not be a satisfactory solution, since a subpattern is more frequent than the complete pattern, and sometimes also has higher confidence. Overlooking them may lead to missing informative interactions. Fortunately, by taking advantage of a data structure named priority queue, we find an approach that selects the most frequent patterns from the power set of a given interaction in polynomial time.
A priority queue is a data structure for maintaining a set of elements, each with an associated value called a key (Cormen et al., 2009). In a priority queue, an element with high priority is served before an element with low priority. Like many implementations, we prefer the element enqueued earlier if two elements have the same priority. This may seem arbitrary, but it becomes meaningful later in this work. In this paper, we give priority to elements with larger keys, so what we use is exactly a max-priority queue. A series of operations should be supported by such a queue, and the following are required in our method.
- INSERT($Q$, $x$): inserts the element $x$ into the set $Q$, which is equivalent to the operation $Q \leftarrow Q \cup \{x\}$;
- EXTRACT-MAX($Q$): removes and returns the element of $Q$ with the largest key;
- COPY($Q$): returns a copy of $Q$.
We add a new attribute to the priority queue, its maximum capacity $m$. In other words, we discard an element if its key is not among the $m$ largest, which can be done by removing the element with the smallest key after inserting a new element into a full queue. If $m$ is zero, the priority queue will always be empty.
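A minimal sketch of such a capacity-bounded max-priority queue, written with Python's heapq module, is given below; it illustrates the data structure described above rather than the authors' implementation.

```python
import heapq
import itertools

# A capacity-bounded max-priority queue: at most `capacity` elements are kept;
# inserting into a full queue discards the element with the smallest key.
# Ties are broken in favour of the element enqueued earlier, as in the text.
class BoundedMaxPriorityQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []                       # min-heap of (key, -order, element)
        self._order = itertools.count()

    def insert(self, element, key):
        if self.capacity == 0:
            return
        entry = (key, -next(self._order), element)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        else:                                     # full: keep the `capacity` largest keys;
            heapq.heappushpop(self._heap, entry)  # among ties, the later arrival is dropped

    def extract_max(self):
        entry = max(self._heap)               # largest key; earliest arrival among ties
        self._heap.remove(entry)
        heapq.heapify(self._heap)             # O(n), acceptable for a sketch
        return entry[2], entry[0]

    def copy(self):
        clone = BoundedMaxPriorityQueue(self.capacity)
        clone._heap = list(self._heap)        # tie-break counter restarts; fine for a sketch
        return clone

q = BoundedMaxPriorityQueue(capacity=2)
for pattern, freq in [("a=1", 0.30), ("b=2", 0.10), ("c=3", 0.25)]:
    q.insert(pattern, freq)
print(q.extract_max())                        # ('a=1', 0.3); 'b=2' was squeezed out
```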
With the priority queues defined above, we come up with Algorithm 5. The main difference between Algorithm 4 and Algorithm 5 lies in the way of identifying which patterns are frequent or confident. In Algorithm 4, all the patterns in the tail nodes and their subpatterns are considered to be frequent, which may result in heavy computation. On the contrary, Algorithm 5 only takes the most frequent patterns into consideration. What’s more, Algorithm 5 returns the most confident patterns, while Algorithm 4 identifies confident patterns by a pre-defined threshold.
The most important improvement of Algorithm 5 lies in Line 6, where the subroutine named “InsertFreqSubset” is used to select the most frequent subsets from the power set of a given itemset. The algorithm works by selecting frequent itemsets level-wise. We know that an itemset cannot be more frequent than any of its subsets, so if an itemset fails to be in the priority queue, so do its supersets. An itemset $S$ consisting of more than one item can be uniquely represented by $\{i^*\} \cup (S \setminus \{i^*\})$, where $i^*$ is the most frequent item in $S$ (if several items have the same frequency, choose the one enqueued earliest, which coincides with the implementation of our priority queue). $\{i^*\}$ is by definition more frequent than the other singleton subsets of $S$, and thus more frequent than $S \setminus \{i^*\}$. So if we extract the elements from the priority queue in order, $S \setminus \{i^*\}$ occurs later than $\{i^*\}$. This sheds some light on the search for frequent $k$-order itemsets. We first take care of the 1-order itemsets in the priority queue. For each 1-order itemset, we only pay attention to the $(k-1)$-order itemsets that are extracted later than it. If the 1-order itemset and a $(k-1)$-order itemset are disjoint, their union is a candidate frequent $k$-order itemset. We need only calculate the frequency of these $k$-order candidates.
The detailed InsertFreqSubset algorithm is given in Algorithm 6. Frequent 1-order itemsets are selected in Lines 1-4. This is followed by a for-loop, whose loop index $k$ represents the size of the candidate itemsets. Since extracting the elements from a queue changes the queue, we make a copy of the queue at Line 5 and Line 11, and apply the EXTRACT-MAX operation on the replica to prevent unwanted changes. The outer while-loop aims to find 1-order itemsets in the priority queue. For a 1-order itemset, the inner while-loop is conducted to find its frequent $k$-order supersets. A $(k-1)$-order itemset in the remaining queue is caught at Line 15, then a candidate $k$-order itemset is generated by combining the 1-order and $(k-1)$-order itemsets, and its frequency is estimated at Line 16. After that, the candidate itemset is inserted into the resulting queue at Line 17.
The changes of the queue capacity at Lines 9 and 14, as well as the INSERT operations at Lines 18 and 19, have no effect on the resulting queue. But they squeeze out the infrequent 1-order or $(k-1)$-order itemsets as soon as possible, which reduces the number of candidates and speeds up the algorithm.
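To illustrate the underlying idea, the following is a simplified sketch of selecting the most frequent subsets of a given itemset level by level; it keeps only the pruning principle (a k-order candidate is formed from a surviving singleton and a surviving (k-1)-order subset) and omits the extraction-order bookkeeping of Algorithm 6. The frequency oracle in the example is hypothetical.

```python
# Simplified level-wise selection of the `m` most frequent subsets of an itemset S.
def most_frequent_subsets(S, freq, m):
    """S: tuple of items; freq: callable mapping an itemset (tuple) to its frequency."""
    top = {}                                   # best subsets found so far: itemset -> frequency

    def keep(itemset):
        if itemset in top:
            return
        top[itemset] = freq(itemset)
        if len(top) > m:                       # evict the least frequent itemset
            del top[min(top, key=top.get)]

    for item in S:                             # level 1: singletons
        keep((item,))
    for k in range(2, len(S) + 1):             # level k: combine survivors of levels 1 and k-1
        singles = [s for s in top if len(s) == 1]
        prev = [s for s in top if len(s) == k - 1]
        candidates = {tuple(sorted(set(sub) | {i}))
                      for (i,) in singles for sub in prev if i not in sub}
        for cand in candidates:
            keep(cand)
    return sorted(top, key=top.get, reverse=True)

# Example with a toy frequency oracle (hypothetical values, monotone in the subset relation).
base = {"x1=2": 0.6, "x3=1": 0.5, "x7=4": 0.2}
toy_freq = lambda itemset: round(min(base[i] for i in itemset) * 0.8 ** (len(itemset) - 1), 3)
print(most_frequent_subsets(("x1=2", "x3=1", "x7=4"), toy_freq, m=4))
```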
4 Computational Complexity
In this section, we analyze the computational complexity of generating a chain and of selecting the most frequent subsets of a given itemset, from which we can see that the proposed algorithms are efficient.
4.1 Complexity of Chain Generation
Most intuitively, a chain can be represented by recording the itemset at each node. But this not only wastes space, it also makes generating or checking a chain troublesome. For example, to compute an intersection, Shah and Meinshausen (2014) check whether each component of the current interaction is in the new observation. Every such check costs $O(\log p)$ even if binary search is adopted. If most of the components are sufficiently frequent, the size of the interactions stays close to $p$, so the time complexity of one intersection can be near $O(p \log p)$. The total time needed to generate a chain of length $D$ is then near $O(Dp \log p)$, and the memory required to store the chain is near $O(Dp)$.
It’s worth noting that the itemsets in a chain are nested: the itemset in a node (except the head node) must be a subset of that in its previous node. So rather than recording the chain as a series of ordered itemsets, we can view it from the perspective of items. A chain can be represented by two $p$-dimensional vectors, where the first is a copy of the first randomly chosen observation, and the other records how many times each corresponding item occurs in the chain. For instance, a chain whose head node is $\{A, B, C\}$ and whose later nodes are $\{A, C\}$ and $\{A\}$ can be represented by $\{(A, B, C), [3, 1, 2]\}$. The memory to store a chain is thus $O(p)$, independent of its length. The memory required to store $K$ chains is the same as that for $2K$ observations. For large data sets the number of observations can be huge, so these additional vectors usually have little influence on the storage requirement. An item is in the $d$-th node if and only if it occurs at least $d$ times. When adding a new node to the chain, we need only pay attention to the items whose number of occurrences equals the current length of the chain; if such an item also appears in the newly chosen observation, the intersection is computed simply by adding one to its number of occurrences. In this way, the time complexity of adding a node is $O(p)$, and generating a chain of length $D$ has a time complexity of $O(Dp)$. Making use of this representation, we have Theorem 4 and Theorem 5.
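Before stating the theorems, here is a small sketch of this compact representation and its per-node update; the encoding of observations as lists of category codes is illustrative.

```python
import random

# A chain is stored as the first sampled observation plus a vector of counts, where
# counts[j] records in how many consecutive nodes the j-th item has survived.
# Adding a node touches each feature once, so the per-node cost is linear in p.
def generate_chain_compact(X, max_length):
    """X: list of observations, each a list of p category codes."""
    head = list(random.choice(X))
    counts = [1] * len(head)                 # every item of the head is in node 1
    length = 1
    while length < max_length and max(counts) == length:   # stop if the tail node is empty
        new_obs = random.choice(X)
        for j in range(len(head)):
            # an item survives into the new node iff it was in the previous node
            # (counts[j] == length) and also appears in the newly drawn observation
            if counts[j] == length and new_obs[j] == head[j]:
                counts[j] += 1
        length += 1
    return head, counts, length

head, counts, length = generate_chain_compact([[1, 2, 3], [1, 2, 1], [2, 2, 3]], max_length=3)
tail_items = [(j, head[j]) for j in range(len(head)) if counts[j] == length]
print(head, counts, tail_items)
```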
Theorem 4
Theorem 5
From Theorem 4 and Theorem 5 we can conclude that Random Intersection Chains will be efficient when is large and is small. For the ideal situation where approaches 1 and tends to 0, and are near 1, both the space and time complexity of chain generation are linear with the number of features or the number of different labels in .
4.2 Complexity of Subset Selection
As stated earlier in Section 3.2, finding a frequent $k$-order pattern $S$ means we have actually found $2^k$ frequent patterns, since every subpattern of $S$ is frequent. It is not realistic to take all of them into consideration, nor even to perform a traversal. Algorithm 6 solves this problem with the help of priority queues. We provide Theorem 6 to guarantee the validity and time complexity of Algorithm 6.
Theorem 6
If the input priority queue of Algorithm 6 is empty, then it contains the most frequent subsets of when the algorithm ends, and the number of frequency calculation is .
If the input priority queue of Algorithm 6 is not empty, then during the run of the algorithm, itemsets with small frequency in the queue will be squeezed out by the subsets of the input itemset, while itemsets with large frequency in the queue occupy a position from beginning to end. So at the end of the algorithm, the priority queue contains the most frequent itemsets among the original queue contents and the subsets of the input itemset. As a result, after the for-loop in Lines 5-7 of Algorithm 5, the queue contains the most frequent itemsets found by the chains for class $c$.
5 Convergence Analysis
The previous section analyzed the computational complexity of Random Intersection Chains, from which we can see that it is efficient in some sense. Another main concern about Random Intersection Chains is how effective it is. In other words, since the frequency and confidence are estimated on the basis of random intersections, one may wonder how well they approximate their true values. We find that these estimators have some good properties. To illustrate this point, the asymptotic behaviors of the estimated frequency and confidence are given in Theorem 7 and Theorem 8. The derivations, mainly based on the multivariate delta method, are given in the appendix.
From Theorem 7 we can see that the estimated frequency converges in distribution to an unbiased estimator as the number of chains $K$ goes to infinity. The limiting estimator multiplied by $\sqrt{K}$ has a variance that is monotonically decreasing in the chain length $D$. Remember that the space needed for the chains is independent of $D$. So if time permits, setting $D$ as large as possible seems a good choice. The variance tends to 0 if the frequency is close to either 0 or 1, which means the estimator is more accurate if the itemset is extremely frequent or extremely infrequent.
Theorem 8 leads to some similar results. The estimated confidence also converges in distribution to an unbiased estimator as $K$ goes to infinity, but the variance of the limiting estimator multiplied by $\sqrt{K}$ is more complex. In any case, this variance is also monotonically decreasing in $D$, which makes setting $D$ larger more appealing. In general, a large overall frequency of the pattern (meaning it is frequent in the whole database), a small class proportion (meaning the class of interest is a minor class) and an extremely large or small per-class frequency (meaning the pattern is either very frequent or very infrequent in each class) lead to a relatively small variance.
6 Numerical Studies
To verify the efficiency and effectiveness of Random Intersection Chains, we present the results of several numerical examples. We first conduct a series of experiments on two benchmark data sets for click-through rate (CTR) prediction, which illustrate the efficiency, consistency and effectiveness of Random Intersection Chains on large-scale data sets. We also adopt the data sets used by Shah and Meinshausen (2014), from whose experimental results two conclusions can be drawn: (1) Random Intersection Chains can find almost all meaningful patterns for an ideal data set; (2) rather than acting as a classifier themselves, the detected patterns lead to a better result if they serve as input features for another classification model. We also show that Random Intersection Chains helps to find interactions of numerical features if they are discretized first, by comparing it to some existing interactive feature selection algorithms on another two UCI data sets.
According to the discussion in Section 5, longer chains lead to better estimations, so we set the maximal chain length $D$ to 100,000 and introduce the maximum order of interaction as an additional parameter. A chain stops growing if either its length exceeds $D$, or the number of items in its tail node is no larger than the maximum order.
6.1 Click-Through Rate Prediction
Click-through rate (CTR) prediction is an important application of machine learning algorithms, which aims to predict the ratio of clicks to impressions of a specific link. The input features are associated with either a user or an item, and many of them are categorical and of high cardinality. The label indicates whether a user clicks an item. Usually few items are clicked by a user, which makes the data unbalanced.
We conduct experiments on two public real-world datasets, named Criteo and Avazu. Criteo data set consists of a portion of Criteo’s traffic over a period of 7 days. There are 45 million users’ clicking records on displayed ads in the data, and the rows are chronologically ordered. It contains 26 categorical features and 13 numerical features. Avazu data set contains the records of whether a displayed mobile ad is clicked by a user or not. Click-through data of 10 days, ordered chronologically, is provided. It has 23 features, all of which are categorical. And the total number of samples is above 40 million.
If one-hot encoding were applied, there would be 998,960 binary features for the Criteo data set and 1,544,428 features for the Avazu data set, which is obviously unacceptable. We first unify all the uncommon categories into a single category “others”, where a category is “uncommon” if the number of its occurrences is less than 10 for Criteo or 5 for Avazu. Then the categorical features are label encoded. As for numerical features, large values are compressed by a logarithmic transformation. Finally, each data set is divided into 10 parts, of which 8 parts are used for training, 1 part for validation and 1 part for testing. This procedure is the same as in the work of Song et al. (2019), which is also adopted by Song et al. (2020) and Tsang et al. (2020).
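A sketch of this preprocessing is given below; the merging threshold follows the text, while the exact numeric transform used for the benchmarks is not spelled out here, so the log-squared compression in the sketch is an assumption borrowed from common practice on these data sets.

```python
import math
from collections import Counter

# Merge uncommon categories into "others" and label-encode the result.
def preprocess_categorical(column, min_count):
    counts = Counter(column)
    merged = [v if counts[v] >= min_count else "others" for v in column]
    mapping = {v: i for i, v in enumerate(dict.fromkeys(merged), start=1)}
    return [mapping[v] for v in merged]

def preprocess_numerical(value):
    # ASSUMED transform: leave small values as-is, compress large ones logarithmically
    return value if value <= 2 else int(math.log(value) ** 2)

print(preprocess_categorical(["a", "a", "b", "c", "a", "b"], min_count=2))  # "c" -> "others"
print(preprocess_numerical(1000))
```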
As analyzed in Section 4 and Section 5, the more chains are generated, the more time and memory are needed, but the more accurate the estimations of frequency and confidence become. We apply Random Intersection Chains to both data sets with the number of chains $K$ ranging from 100 to 15,000 for each part of the training set. The running time is shown in Figure 1(a). As for memory, the additional space cost caused by Random Intersection Chains is at most that of $2K$ observations, which is very small compared with the original 40 million observations. To show the consistency of Random Intersection Chains, we adopt the Jaccard-index as the criterion of similarity between two sets $A$ and $B$, which is defined as
$$J(A, B) = \frac{|A \cap B|}{|A \cup B|}. \qquad (13)$$
We calculate the Jaccard-index between the interactions found with $K$=15,000 and the interactions found with smaller values of $K$; the results are exhibited in Figure 1(b). As can be seen from Figure 1(a), the running time is linear in the number of chains, and it is relatively small considering the rather large size of the data sets. Figure 1(b) indicates that the returns of Random Intersection Chains are very similar for large $K$, which verifies the consistency.
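For completeness, a one-line implementation of the Jaccard-index in Equation 13:

```python
# Jaccard-index between two sets of detected interactions.
def jaccard_index(A, B):
    A, B = set(A), set(B)
    if not A and not B:
        return 1.0          # convention for two empty sets
    return len(A & B) / len(A | B)

print(jaccard_index({"f1=3,f2=7", "f4=1"}, {"f1=3,f2=7", "f5=2"}))  # 1/3
```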
[Figure 1: (a) running time of chain generation versus the number of chains $K$; (b) Jaccard-index between the interactions found with $K$=15,000 and those found with smaller $K$, for Criteo and Avazu.]
One of the advantages of the interactions defined in this paper is their interpretability. Since the Avazu data set contains non-anonymized features, we list the 10 most confident interactions with their estimated and true frequency and confidence in Table 1, where we only list the names of the features and omit the specific value of each feature to keep the notation uncluttered. We can conclude that “banner_pos” plays an important role in advertising. This observation coincides with the intuition that an advertisement is more likely to be clicked if it is exhibited in a good position. Many interactions contain a feature associated with “app” and a feature about “device”, which indicates the relationship between an item and a user. The estimations of frequency and confidence are also quite accurate: the RMSEs of the estimations for Frequency(−), Frequency(+) and Confidence are small, and the corresponding Pearson correlation coefficients are 0.9987, 0.9977 and 0.9879, which means the numerical order of frequency or confidence is well preserved. Thus the patterns found by Random Intersection Chains are likely to be the most frequent and confident ones.
Interaction | Frequency(−) Est. | Frequency(−) True | Frequency(+) Est. | Frequency(+) True | Confidence Est. | Confidence True
---|---|---|---|---|---|---
device_id, C20 | 0.3607 | 0.3610 | 0.4495 | 0.4518 | 0.2032 | 0.2038 |
banner_pos,app_domain,app_category,device_conn_type | 0.3646 | 0.3633 | 0.4486 | 0.4497 | 0.2011 | 0.2020 |
banner_pos, app_category, device_conn_type | 0.3646 | 0.3633 | 0.4486 | 0.4497 | 0.2011 | 0.2020 |
banner_pos, app_domain, app_category | 0.3835 | 0.3821 | 0.4717 | 0.4735 | 0.2010 | 0.2022 |
banner_pos, app_category | 0.3835 | 0.3821 | 0.4717 | 0.4735 | 0.2010 | 0.2022 |
banner_pos, app_id, app_domain | 0.3759 | 0.3748 | 0.4604 | 0.4619 | 0.2003 | 0.2014 |
banner_pos, app_id | 0.3759 | 0.3748 | 0.4604 | 0.4619 | 0.2003 | 0.2014 |
banner_pos, app_domain, device_conn_type | 0.3684 | 0.3669 | 0.4498 | 0.4510 | 0.1998 | 0.2009 |
device_type, device_conn_type, C20 | 0.3690 | 0.3684 | 0.4502 | 0.4537 | 0.1997 | 0.2012 |
banner_pos, app_domain | 0.3882 | 0.3867 | 0.4731 | 0.4751 | 0.1995 | 0.2008 |
Finally, the interactions are used as binary features, based on which several popular CTR prediction models are trained. We adopt 5 models, namely Wide&Deep (Cheng et al., 2016), DeepFM (Guo et al., 2017), xDeepFM (Lian et al., 2018), Deep&Cross (Wang et al., 2017) and AutoInt (Song et al., 2019), to test whether adding the interactions to the input is helpful. The results are shown in Table 2, where the performance of GLIDER, an interaction detection method proposed by Tsang et al. (2020), is also presented as a comparison. We can see that in most cases, adding the interactions found by Random Intersection Chains leads to a significant improvement. In fact, an improvement as small as 0.001 is already desirable for the Criteo data set (Cheng et al., 2016; Guo et al., 2017; Wang et al., 2017; Song et al., 2019; Tsang et al., 2020), and Random Intersection Chains lives up to this expectation. Perhaps due to the better learning rate we used, our baselines are better than those of GLIDER. Despite the difference between the baselines, Random Intersection Chains achieves an improvement comparable to GLIDER’s. But according to Tsang et al. (2020), it takes several hours and more than 150 GB of memory to perform GLIDER, while the requirements of Random Intersection Chains are much lower.
Model | Criteo AUC | Criteo logloss | Avazu AUC | Avazu logloss
---|---|---|---|---
Wide&Deep | 0.8087(4e-4) | 0.4427(3e-4) | 0.7826(2e-4) | 0.3781(1e-4) |
+RIC | 0.8097(3e-4) | 0.4418(2e-4) | 0.7828(2e-4) | 0.3779(1e-4) |
Wide&Deep* | 0.8069(5e-4) | 0.4446(4e-4) | 0.7794(3e-4) | 0.3804(2e-4) |
+GLIDER* | 0.8080(3e-4) | 0.4436(3e-4) | 0.7795(1e-4) | 0.3802(9e-5) |
DeepFM | 0.8087(4e-4) | 0.4427(3e-4) | 0.7819(3e-4) | 0.3785(2e-4) |
+RIC | 0.8097(3e-4) | 0.4418(2e-4) | 0.7826(4e-4) | 0.3781(2e-4) |
DeepFM* | 0.8079(3e-4) | 0.4436(2e-4) | 0.7792(3e-4) | 0.3804(9e-5) |
+GLIDER* | 0.8097(2e-4) | 0.4420(2e-4) | 0.7795(2e-4) | 0.3802(2e-4) |
Deep&Cross | 0.8084(7e-4) | 0.4430(7e-4) | 0.7824(2e-4) | 0.3782(1e-4) |
+RIC | 0.8096(6e-4) | 0.4419(5e-4) | 0.7835(8e-4) | 0.3776(4e-4) |
Deep&Cross* | 0.8076(2e-4) | 0.4438(2e-4) | 0.7791(2e-4) | 0.3805(1e-4) |
+GLIDER* | 0.8086(3e-4) | 0.4428(2e-4) | 0.7792(2e-4) | 0.3803(9e-5) |
xDeepFM | 0.8082(5e-4) | 0.4432(4e-4) | 0.7824(4e-4) | 0.3782(2e-4) |
+RIC | 0.8103(2e-4) | 0.4414(2e-4) | 0.7825(4e-4) | 0.3781(2e-4) |
xDeepFM* | 0.8084(2e-4) | 0.4433(2e-4) | 0.7785(3e-4) | 0.3808(2e-4) |
+GLIDER* | 0.8097(3e-4) | 0.4421(3e-4) | 0.7787(4e-4) | 0.3806(1e-4) |
AutoInt | 0.8077(3e-4) | 0.4436(3e-4) | 0.7788(2e-4) | 0.3804(1e-4) |
+RIC | 0.8090(4e-4) | 0.4425(4e-4) | 0.7795(3e-4) | 0.3802(1e-4) |
AutoInt* | 0.8083 | 0.4434 | 0.7774 | 0.3811 |
+GLIDER* | 0.8090(2e-4) | 0.4426(2e-4) | 0.7773(1e-4) | 0.3811(5e-5) |
6.2 Tic-Tac-Toe Endgame Data
The Tic-Tac-Toe endgame data set (Matheus and Rendell, 1989; Aha et al., 1991) encodes the complete set of possible board configurations at the end of tic-tac-toe games. There are 958 instances and 9 categorical features in the data set. The possible values of each feature are “x”, “o” and “b”, which indicate that the square is taken by player x, taken by player o, or blank, respectively. There are 8 possible ways to win for each player (3 horizontal lines, 3 vertical lines and 2 diagonal lines). Our target is to learn the rules that determine which player wins the game.
This is an ideal data set to test the effectiveness of Random Intersection Chains, since all the features are categorical and the label is intrinsically determined by a set of rules. Because the result of an ended game is completely determined by these rules, we can make an accurate prediction if we find all the interesting rules. So we make an effort to extract all 16 valid rules (8 winning configurations for each player) under different parameter settings.
We set the maximum interaction order to 4 and keep only the 10 most confident patterns for each player. The capacity of the priority queue ranges over [20, 1000], and the number of interesting rules found is given in Figure 2. At the beginning, more interesting patterns are found as the capacity grows. For capacities larger than 400, all the interesting patterns corresponding to x’s victory are found, while one pattern is still missing for “o”. We check the list of the found patterns, and find that the missing pattern is “a3=o, b3=o, c3=o”. Its support is 32/958=0.0334, indeed a very small value. When the capacity is too large, not only does the missing pattern not occur, but some already found patterns disappear as well. This is because some uncommon patterns have high confidence coincidentally. For example, “a1=x, b1=o, b3=b” occurs 30 times in the database, and all the instances containing this pattern happen to be in the positive class. Since the number of retained patterns is limited, these occasional patterns squeeze out the interesting ones.
The results indicate that the size of the priority queue influences the extracted patterns in two ways. On the one hand, a small queue may lead to the neglect of some meaningful patterns. On the other hand, if it is too large, some occasionally confident patterns will be chosen, which can be seen as an “overfitting” phenomenon of Random Intersection Chains.
[Figure 2: number of interesting rules found under different priority-queue capacities.]
6.3 Reuters RCV1 Text Data
The Reuters RCV1 text data contains the tf-idf (term frequency-inverse document frequency) weighted presence of 47,148 word-stems in each document (Lewis et al., 2004).
Lewis et al. (2004) used a data set consisting of 23,149 documents as the training set. Like the processing approach adopted by Shah and Meinshausen (2014), we only consider word-stems appearing in at least 100 documents and the topics that contain at least 200 documents. This leaves 2484 word-stems as predictor variables and 52 topics as prediction targets. Also, tf-idf is transformed to a binary version, using 1 or 0 to represent whether it is positive. In this work, we split the original training data into a smaller training set and a test set. The first 13,149 instances are used for training and the rest for testing.
For each topic, the target is to predict whether a document belongs to this topic. Setting =400, =200 and =4, =300, we apply Random Intersection Chains to the training set. Then we evaluate the interactions in two different ways. The first method, labeled “Best-Rule”, classifies the instances by the best rule directly, where the “best” rule is defined as the most confident one among the rules with sufficiently large support, the same criterion as used by Shah and Meinshausen (2014). The other method treats the rules as selected features for a linear model, and is called “Rules+LR”. We also fit a linear model on all 2484 features as a comparison, labeled “LR”. The precision and recall of the models are given in Figure 3, where the best rules found by Random Intersection Chains are shown on the right side of the figure.
[Figure 3: precision and recall of “Best-Rule”, “Rules+LR” and “LR” on the Reuters RCV1 topics, together with the best rules found by Random Intersection Chains.]
We can see that “Best-Rule” sometimes gives good precision or recall, but its performance is usually the worst among the three models. This is not surprising, because the information in a single rule is limited, and it is unreasonable to expect a rule to be both general and reliable on a complicated data set. “Rules+LR” yields similar precision but generally lower recall compared with “LR”. At first glance, it seems Random Intersection Chains brings few benefits. But note that the input dimension of the linear model in “Rules+LR” is 200, while it is 2484 in “LR”. Random Intersection Chains thus rapidly reduces the data size and the computational burden, while the precision and recall only drop slightly. Also, the interpretability is enhanced since fewer features are used.
6.4 Other UCI Data
We also apply Random Intersection Chains on the two data sets used by Shah (2016). The first is Communities and Crime Unnormalized Data Set, “CCU” for short, which contains crime statistics for the year 1995 obtained from FBI data, and national census data from 1990. We take violent crimes per capita as our response, which makes it a regression task. We process the data in the same way as Shah (2016). This leads to a data set consisting of 1903 observations and 101 features.
The second data set is “ISOLET”, which consists of 617 features based on the speech waveforms generated from utterances of each letter of the English alphabet. We consider classification on the notoriously challenging E-set consisting of the letters “B”, “C”, “D”, “E”, “G”, “P”, “T”, “V” and “Z”. And finally we have 2700 observations spread equally among 9 classes.
We use $\ell_1$-penalised linear regression as the base regression procedure, and $\ell_1$-penalised multinomial regression for the classification example. The regularization coefficient is determined by 5-fold cross-validation. To evaluate the procedures, we randomly select 2/3 of the instances for training and the rest for testing. This procedure is repeated 200 times for each of the data sets. Mean squared error is used as the criterion for the regression model and misclassification rate for the classification task. All the settings are exactly the same as those used by Shah (2016), except that we use an $\ell_1$-regularizer to penalise the multinomial regression instead of a group Lasso, because we do not know how Shah (2016) grouped the features.
Since the inputs of these two data sets are numerical and the CCU data set corresponds to a regression task, Random Intersection Chains cannot be applied directly. To handle this difficulty, the continuous features and response are transformed to a discrete version. The response of the CCU data set is split into 5 categories by quantiles, and all the continuous features are then split into 5 intervals according to information gain. Setting =2500, =1225 and the remaining parameters accordingly, we add the product $X_{j_1}X_{j_2}\cdots X_{j_k}$ as an interactive feature to the input if there is a rule of the form ($X_{j_1}=v_1$, $X_{j_2}=v_2$, …, $X_{j_k}=v_k$) for some values $v_1, v_2, \ldots, v_k$ in the resulting rule sets. The results of the models with and without the rules found by Random Intersection Chains are shown in Table 3, labeled “RIC” and “Main”. We also list the results reported by Shah (2016), including base procedures (“Main*”), iterated Lasso fits (“Iterated”), Lasso following marginal screening for interactions (“Screening”), Backtracking, Random Forests (Breiman, 2001), hierNet (Bien et al., 2013) and MARS (Friedman, 1991). For the CCU data set, our base model outperforms the one used by Shah (2016), which may be caused by a better penalty parameter. Random Intersection Chains leads to comparable or better results than the existing algorithms. As for the ISOLET data set, the result of our base model is not as good as the one used by Shah (2016). This is not surprising, since we simply use an $\ell_1$-regularizer while Shah (2016) adopted a group Lasso to penalise the model. But we can see that Random Intersection Chains can run on this data set and leads to a good improvement, while some existing methods such as Screening, hierNet and MARS are inapplicable. We think this could be evidence of our method’s efficiency.
Method | ERROR (Communities and Crime) | ERROR (ISOLET)
---|---|---
Main | ) | |
RIC | ) | ) |
Main* | ) | ) |
Iterate* | ) | ) |
Screening* | ) | - |
Backtracking* | ) | ) |
Random Forest* | ) | ) |
hierNet* | ) | - |
MARS* | ) | - |
7 Conclusion
Based on ideas from association rule mining and random intersections, we propose Random Intersection Chains to discover meaningful categorical interactions. A number of chains are generated by intersections, and the patterns in the tail nodes are regarded as frequent patterns. Then the frequency and confidence of these patterns are estimated by maximum likelihood estimation and Bayes’ theorem. Finally the most confident patterns are selected. An efficient algorithm for selecting the most frequent subpatterns of a given pattern in polynomial time is also provided.
We prove that there exist appropriate parameters that keep all the frequent patterns while excluding the infrequent ones. The time and space complexities are analyzed, showing that the algorithm is both time- and memory-efficient. The asymptotic behavior of the estimators is also guaranteed: when the number of chains goes to infinity, the estimated frequency and confidence converge to their true values.
As a supplement, a series of experiments is conducted to verify the effectiveness of Random Intersection Chains. By applying the algorithm to two CTR prediction data sets, we show that it detects the most frequent and confident patterns efficiently and consistently, and the prediction results verify that adding these interactions is beneficial for CTR prediction. The ability to detect useful patterns is further tested on the Tic-Tac-Toe data, where almost all the meaningful rules are found if the parameters are appropriately chosen. The experiments on the Reuters RCV1 text data show that the found patterns can not only serve as a classifier themselves, but also be used as input features for another model. We also compare our algorithm with some other interaction detection methods on several UCI data sets with continuous features or response. The results show that Random Intersection Chains can help if the features or response are transformed into categorical ones beforehand.
One limitation of Random Intersection Chains is that it cannot be applied directly to numerical features or responses; we are trying to extend its application domain to these cases. Another difficulty lies in the choice of parameters. Different parameter settings influence the prediction performance, but tuning the parameters by grid search is time-consuming. We hope to find a better approach to choosing the parameters.
Acknowledgments
This work was partially supported by the National Natural Science Foundation of China under grants 11671418 and 12071428 and by the Zhejiang Provincial Natural Science Foundation of China under grant LZ20A010002.
Appendix A.
In this appendix we give the proofs omitted earlier in the paper.
Proof of Theorem 1. To keep the notation uncluttered, we omit the superscript “” on the probabilities. For a pattern , we use the notation for the probability of ’s occurrence conditioned on . And define
For a chain of length ,
And for chains,
We can see that is monotone increasing with the increasing of and , and the decreasing of . For , if , then
Define
Thus . Then for , we have if .
Next we give the conditions for containing with probability at most if . Fixing , for we have
Define
Take the derivative of , then we have
where
So the corresponding derivative is
Denote the numerator as , and take the derivative, then we have
Again denoting the numerator as and taking the derivative, we have
Denoting , we have
Therefore for ,
Noticing that , we have , and thus . So is a monotone decreasing function of . Extend the domain of to real numbers, according to rule,
(14) | ||||
Then according to Heine theorem,
Thus we have
So for any , there exists such that if .
Proof of Corollary 2. From the proof of Theorem 1, it follows that there exist a choice and , where is a feasible solution of subject to
The inequality holds for . Substituting for in the first inequality and rearranging, we have
(15) |
Similarly taking the place of by in the second inequality and rearranging, we have
(16) |
Taking advantage of Inequalities 15 and 16, if
(17) |
then
where the first, second and third inequality is obtained from Inequalities 15, 17 and 16, respectively. So we need only to ensure Inequality 17. Denote ,
So are subject to the conditions in Theorem 1.
Proof of Theorem 4. As stated in Section 4, we can represent a chain by two vectors of dimension $p$. Thus for every chain, the memory needed is $O(p)$. For every class in $\mathcal{C}$, $K$ chains are generated. The entire space complexity is therefore $O(|\mathcal{C}|Kp)$.
Proof of Theorem 5. For the -th node of the -th chain, we need to check whether the element in occurs in , the time complexity is . So the time complexity of generating a chain is . And the total time complexity is .
Proof of Theorem 6. Denote
We prove the first part of this theorem by the loop invariant: at the beginning of the th iteration of Line 5-24, the priority queue contains the patterns in .
Initialization: Before the loop in Line 5-24, this loop invariant is true since every element in was inserted into in Line 1-4. By the property of priority queue, stores the most frequent elements, each as an 1-order pattern.
Maintenance: During the -th iteration, by definition we have and . If , no patterns can be inserted into the priority queue, thus keeps unchanged, and satisfies the loop invariant. Otherwise, for any pattern , we can write , where is the most frequent 1-order subpattern of . Since is more frequent than , it belongs to . Noticing is an -order pattern, it belongs , thus it also belongs to . By the definition of , its frequency is larger than any 1-order subpattern of , and thus also larger than . According to the property of priority queue, patterns with larger frequency are extracted earlier. So when is extracted from the priority queue at Line 8, is still in . And will be extracted from the queue later at Line 13. and satisfy the conditions in Line 14 and thus will be inserted into at Line 17. Since every pattern in will not be missed by the algorithm, and contains all the patterns in at the beginning of iteration by the assumption, we can conclude that contains every pattern in . The loop invariant is still true.
Termination: When the loop ends, it is the -th iteration, thus contains all patterns in , which is exactly what we need.
As for computational complexity, we only need to calculate the frequency in Line 2 and 16. It can be seen directly that Line 2 repeats times. Line 16 repeats at most times due to Line 12, which will as a whole repeats at most times due to Line 7, and finally repeats times due to Line 5 in total. Thus there are at most computations of frequency overall.
To prove Theorems 7 and 8, we need to know the expectation and variance of the quantities involved, as well as their covariance. So we give Lemmas 9, 10 and 11 first. To simplify the notation, we omit the superscript unless otherwise stated.
Lemma 9
Proof.
Lemma 10
Proof.
Lemma 11
The covariance of and in Algorithm 2 is
Define
By the Multivariate Delta Method,
in distribution.
Proof of Theorem 8. Without loss of generality, we take for example. From the definition of , we have
The partial derivatives are
Substitute the variables with their expectations, and denote as the marginal probability of , then
where represents the corresponding expectations of all the variables in . Denote
(18) | ||||
Then
(19) | ||||
Since the chains for different classes are independent, for all we have
Define
(20) | ||||
According to the Multivariate Delta Method, noticing that , we can conclude that
in distribution.
References
- Agrawal and Srikant (1994) R. Agrawal and R. Srikant. Fast algorithms for mining association rules. In Proc. of 20th Intl. Conf. on VLDB, pages 487–499, 1994.
- Agrawal et al. (1993) R. Agrawal, T. Imieliński, and A. Swami. Mining association rules between sets of items in large databases. SIGMOD Rec., 22(2):207–216, June 1993. ISSN 0163-5808. doi: 10.1145/170036.170072.
- Agrawal et al. (2019) R. Agrawal, J. H. Huggins, B. L. Trippe, and T. Broderick. The kernel interaction trick: Fast bayesian discovery of pairwise interactions in high dimensions. In ICML 2019, volume 2019-June, pages 199 – 214, Long Beach, CA, United states, 2019.
- Aha et al. (1991) D. W. Aha, D. Kibler, and M. K. Albert. Instance-based learning algorithms. Machine Learning, 6(1):37–66, Jan. 1991. ISSN 0885-6125. doi: 10.1023/A:1022689900470. URL https://doi.org/10.1023/A:1022689900470.
- Bien et al. (2013) J. Bien, J. Taylor, and R. Tibshirani. A lasso for hierarchical interactions. The Annals of Statistics, 41(3):1111–1141, 06 2013. doi: 10.1214/13-AOS1096.
- Breiman (2001) L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
- Breiman et al. (1984) L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth, 01 1984.
- Cheng et al. (2016) H.-T. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado, W. Chai, M. Ispir, R. Anil, Z. Haque, L. Hong, V. Jain, X. Liu, and H. Shah. Wide & deep learning for recommender systems. In DLRS’16, page 7–10, 2016.
- Cormen et al. (2009) T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms, Third Edition. The MIT Press, 3rd edition, 2009. ISBN 0262033844.
- Friedman (1991) J. Friedman. Multivariate adaptive regression splines. Ann. Statist., 19:1–61, Mar 1991. doi: 10.1214/aos/1176347973.
- Friedman and Popescu (2008) J. H. Friedman and B. E. Popescu. Predictive learning via rule ensembles. The Annals of Applied Statistics, 2(3):916 – 954, 2008. doi: 10.1214/07-AOAS148. URL https://doi.org/10.1214/07-AOAS148.
- Guo et al. (2017) H. Guo, R. Tang, Y. Ye, Z. Li, and X. He. Deepfm: A factorization-machine based neural network for ctr prediction. In IJCAI’17, page 1725–1731, 2017.
- Han et al. (2000) J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. SIGMOD Rec., 29(2):1–12, May 2000. ISSN 0163-5808. doi: 10.1145/335191.335372. URL https://doi.org/10.1145/335191.335372.
- Hao and Zhang (2014) N. Hao and H. H. Zhang. Interaction screening for ultrahigh-dimensional data. Journal of the American Statistical Association, 109(507):1285–1301, 2014. doi: 10.1080/01621459.2014.881741. PMID: 25386043.
- Hao et al. (2018) N. Hao, Y. Feng, and H. H. Zhang. Model selection for high-dimensional quadratic regression via regularization. Journal of the American Statistical Association, 113(522):615–625, 2018. doi: 10.1080/01621459.2016.1264956.
- Pei et al. (2001) J. Pei, J. Han, H. Lu, S. Nishio, S. Tang, and D. Yang. H-mine: hyper-structure mining of frequent patterns in large databases. In Proceedings 2001 IEEE International Conference on Data Mining, pages 441–448, 2001. doi: 10.1109/ICDM.2001.989550.
- Lewis et al. (2004) D. Lewis, Y. Yang, T. Russell-Rose, and F. Li. Rcv1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397, 04 2004.
- Lian et al. (2018) J. Lian, X. Zhou, F. Zhang, Z. Chen, X. Xie, and G. Sun. Xdeepfm: Combining explicit and implicit feature interactions for recommender systems. In SIGKDD, page 1754–1763, 2018.
- Matheus and Rendell (1989) C. J. Matheus and L. A. Rendell. Constructive induction on decision trees. In Proceedings of the 11th International Joint Conference on Artificial Intelligence - Volume 1, IJCAI’89, page 645–650, San Francisco, CA, USA, 1989. Morgan Kaufmann Publishers Inc.
- Qu et al. (2016) Y. Qu, H. Cai, K. Ren, W. Zhang, Y. Yu, Y. Wen, and J. Wang. Product-based neural networks for user response prediction. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pages 1149–1154, 2016. doi: 10.1109/ICDM.2016.0151.
- Quinlan (1986) J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81–106, 1986.
- Quinlan (1993) R. Quinlan. C4.5: Programs for Machine Learning, volume 1. 01 1993. ISBN 1-55860-238-0.
- Shah (2016) R. D. Shah. Modelling interactions in high-dimensional data with backtracking. Journal of Machine Learning Research, 17(1):7225–7255, Jan. 2016. ISSN 1532-4435.
- Shah and Meinshausen (2014) R. D. Shah and N. Meinshausen. Random intersection trees. Journal of Machine Learning Research, 15(1):629–654, 1 2014. ISSN 1532-4435.
- Song et al. (2020) Q. Song, D. Cheng, H. Zhou, J. Yang, Y. Tian, and X. Hu. Towards automated neural interaction discovery for click-through rate prediction. In SIGKDD, page 945–955, 2020.
- Song et al. (2019) W. Song, C. Shi, Z. Xiao, Z. Duan, Y. Xu, M. Zhang, and J. Tang. Autoint: Automatic feature interaction learning via self-attentive neural networks. In CIKM’19, 2019.
- Sorokina et al. (2008) D. Sorokina, R. Caruana, M. Riedewald, and D. Fink. Detecting statistical interactions with additive groves of trees. In ICML, page 1000–1007, 2008.
- Thanei et al. (2018) G.-A. Thanei, N. Meinshausen, and R. Shah. The xyz algorithm for fast interaction search in high-dimensional data. Journal of Machine Learning Research, 19:1–42, 10 2018.
- Tsang et al. (2017) M. Tsang, D. Cheng, and Y. Liu. Detecting statistical interactions from neural network weights. In ICLR, 05 2017.
- Tsang et al. (2020) M. Tsang, D. Cheng, H. Liu, X. Feng, E. Zhou, and Y. Liu. Feature interaction interpretability: A case for explaining ad-recommendation systems via neural interaction detection. In ICLR, 2020.
- Wang et al. (2017) R. Wang, B. Fu, G. Fu, and M. Wang. Deep & cross network for ad click predictions. In ADKDD, 2017.
- Yu et al. (2019) G. Yu, J. Bien, and R. Tibshirani. Reluctant interaction modeling. 07 2019.