A local geometry of hyperedges in hypergraphs, and its applications to social networks
Abstract.
In many real-world datasets arising from social networks, there are hidden higher-order relations among data points which cannot be captured by graph modeling. It is natural to use the more general notion of hypergraphs to model such social networks. In this paper, we introduce a new local geometry of hyperedges in hypergraphs which allows one to capture higher-order relations among data points. Furthermore, based on this new geometry, we introduce a new methodology, the nearest neighbors method in hypergraphs, for analyzing datasets arising from sociology.
1. Introduction
One of the challenges in the modern age is to classify data arising from many sources. For example, following the rapid development of several areas in mathematics, the large number of publications in mathematics creates a tremendous amount of data, which carries useful information such as relationships (or collaborations) among authors and their publications, and their influence on the development of mathematics. Analyzing such data is often a difficult and far from straightforward task because of the extremely fast growth of the data itself and of the relations within it.
One of the main problems in machine learning is the weight (respectively, label) prediction problem, in which each instance is associated with a set of weights (respectively, labels), and the aim is to propose predictive models for predicting weights (respectively, labels) of unobserved data. These learning problems have important applications in many areas such as protein function classification [4], text categorization [17], and semantic scene classification [2], to cite a few. For networks which can be modeled as graphs, many approaches to the prediction problem have been proposed (see, for example, [7], [5], [6], [13], [14], [15]).
In real-world datasets, for example in social networks, it often occurs that higher-order relations among data points are present which cannot be captured by graph modeling, since graphs only encode binary relations. So it is natural to consider the weight (respectively, label) prediction problem for a generalization of graphs: hypergraphs. Recall that a hypergraph is a pair H = (V, E), where V is the set of data points (called vertices of H), and E is a subset of the power set of V which represents higher-order relations among data points. Each element of E is called a hyperedge. Since a hyperedge can consist of any collection of data points, it captures higher-order correlation information among these data points. Recent trends in learning problems have focused on hypergraph datasets (see, for example, [1], [16]).
In this paper, we propose a geometric approach to the weight (respectively, label) prediction problem on hypergraph data, which we apply to the analysis of networks arising from sociology. Traditionally, for datasets which can be modeled as graphs, there is a standard method, nearest neighbors, which applies to the weight (respectively, label) prediction problem. The classical nearest neighbors method relies on Euclidean metrics. In contrast with many traditional geometric approaches based on Euclidean metrics (or distances), we propose a new nearest neighbors methodology for hypergraphs using metrics modulo equivalence relations, a weaker notion than that of usual metrics which seems natural for learning from data. An equivalence relation encodes the correlation among data points, and a metric modulo such an equivalence relation measures how different data points are with respect to that correlation information. In this work, we suggest that equivalence relations are a natural mathematical notion for modeling correlation information in datasets, and that metrics modulo equivalence relations are more suitable for learning correlation in datasets.
The structure of our paper is as follows. In Subsection 2.1, we introduce a notion of metric spaces with respect to equivalence relations. In Subsection 2.2, based on the notion of metric spaces with respect to equivalence relation, we propose our methodology for studying weight (respectively, label) prediction problem. In Subsection 2.3, we construct a class of metrics modulo equivalence relations on the set of hyperedges of a hypergraph which can be combined with the methodology proposed in Subsection 2.2 for studying prediction problems. In Section 3, we apply our methodology to real world datasets from sociology, and report our experimental results.
2. A class of metrics modulo certain equivalence relations for the set of hyperedges of a hypergraph, and methodologies
In the weight (respectively, label) prediction problem, one is given a dataset D consisting of data points such that there is a subset, say D_0, each of whose elements is associated with a weight (respectively, label). The aim is to propose a predictive model for predicting the weight (respectively, label) of each element in the complement D \ D_0. In this paper, we propose a new approach to the weight (respectively, label) prediction problem, using insights from metric geometry, for datasets which can be modeled as hypergraphs.
A hypergraph is a pair H = (V, E), where V is the set of points, called vertices of H, and E is a subset of the power set of V, each of whose elements is called a hyperedge of H. Each hyperedge is thus a subset of V. If a hyperedge e has exactly k elements, e is called a k-hyperedge of H. In this paper, we are interested in the weight (respectively, label) prediction problem for hypergraphs, and so the following notion is natural for modeling such datasets.
Definition 2.1.
(incomplete hypergraphs)
Let H = (V, E) be a hypergraph, and let W be either an interval in ℝ or a set of labels {1, …, m} for some positive integer m. H is called an incomplete hypergraph with respect to W if there exists a map f : E_0 → W for some subset E_0 of E.
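As an illustration, an incomplete hypergraph can be sketched as a small data structure. The following is a minimal Python sketch (the class and method names are our own, not from the paper), in which hyperedges are frozensets of vertices and the partial map f is stored as a dictionary defined only on E_0.

```python
from dataclasses import dataclass, field

@dataclass
class IncompleteHypergraph:
    """A hypergraph H = (V, E) whose weights/labels are known only on E_0 ⊆ E."""
    vertices: set
    hyperedges: list                              # list of frozensets of vertices
    values: dict = field(default_factory=dict)    # the partial map f: hyperedge -> weight/label

    def observed(self):
        # E_0: hyperedges whose weight/label is known
        return [e for e in self.hyperedges if e in self.values]

    def unobserved(self):
        # E \ E_0: hyperedges whose weight/label must be predicted
        return [e for e in self.hyperedges if e not in self.values]
```

A weight prediction problem then asks for values of f on `unobserved()` given its values on `observed()`.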
Remark 2.2.
In this paper, we consider two types of incomplete hypergraphs which are specified in the following.
(i) In the weight prediction problem, W is an interval I in ℝ, and a subset E_0 of the set of hyperedges E is given such that there is a map w : E_0 → I. In this case w is the map f in Definition 2.1. Each value w(e) for e ∈ E_0 is called the weight of e. The weight prediction problem asks for a predictive model of w, that is, a map ŵ from E to I for which ŵ(e) is close to w(e) for every e ∈ E_0.
(ii) In the label prediction problem, W is a set of labels {1, …, m} for some positive integer m, and a subset E_0 of the set of hyperedges E is given such that there is a map ℓ : E_0 → {1, …, m}. In this case ℓ is the map f in Definition 2.1. Each value ℓ(e) for e ∈ E_0 is called the label of e. The label prediction problem asks for a predictive model of ℓ, that is, a map ℓ̂ from E to {1, …, m} for which ℓ̂(e) = ℓ(e) for every e ∈ E_0.
In the classical weight (respectively, label) prediction problem for datasets which can be modeled as graphs, one can apply the classical nearest neighbors method. The main idea of the classical nearest neighbors method is that a dataset D is equipped with a Euclidean metric (or distance), say ρ. For each x ∈ D, let x_1, …, x_k be the k nearest points to x with respect to the metric ρ, such that ρ(x, x_1) ≤ ρ(x, x_2) ≤ ⋯ ≤ ρ(x, x_k). Then the nearest neighbors method expects that the weight (respectively, label) of x should be in proximity with the weights (respectively, labels) of the x_i for 1 ≤ i ≤ k.
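For concreteness, the classical kNN predictor just described can be sketched as follows; this is a minimal illustration (names are ours) assuming points are tuples in Euclidean space and weights are stored in a dictionary.

```python
import math

def knn_predict(points, values, x, k):
    """Classical k nearest neighbors: average the values of the k points
    closest to x under the Euclidean metric."""
    nearest = sorted(points, key=lambda p: math.dist(p, x))[:k]
    return sum(values[p] for p in nearest) / k
```

For label prediction one would instead take a majority vote over `nearest`.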
Our main contribution in this paper is to introduce modified nearest neighbors methods for learning hypergraphs, especially for the weight (respectively, label) prediction problem. In order to do that, we introduce a class of metrics modulo equivalence relations on the set of hyperedges of a hypergraph. Note that in contrast with a usual metric, a metric modulo an equivalence relation allows d(x, y) = 0 as long as x ∼ y (which means that x is equivalent to y with respect to certain properties, for example, similar weights or labels). Thus such a metric is more suitable for studying the weight (respectively, label) prediction problem, since two distinct points in a dataset may have the same weight (respectively, label). In this case, the equivalence relation captures the fact that certain distinct points in datasets share similar properties with respect to the weight (respectively, label) map, and encodes the information that these points are expected to have the same weight (respectively, label).
We first introduce a notion of metric spaces modulo equivalence relations which is used in many places in this paper.
2.1. Metric spaces modulo equivalence relations
We present in this subsection a notion of metric spaces modulo equivalence relations. We also explain why this notion is suitable for studying geometric structures of hypergraphs. We first recall the notion of an equivalence relation on a set.
Definition 2.3.
(Equivalence relation)
Let S be a set. An equivalence relation on S, denoted by ∼, is a subset of S × S such that the following are true:
(i) (Reflexivity) (x, x) ∈ ∼ for every x ∈ S.
(ii) (Symmetry) (x, y) ∈ ∼ if and only if (y, x) ∈ ∼.
(iii) (Transitivity) if (x, y) ∈ ∼ and (y, z) ∈ ∼, then (x, z) ∈ ∼.
When (x, y) ∈ ∼, we say that x is ∼-equivalent to y. Throughout this paper, in order to signify this relation, we write x ∼ y whenever (x, y) ∈ ∼.
An equivalence relation on a set provides a way to identify similar elements of the set. Conversely, if one can measure how similar the elements of a set are, then one can use this measurement to introduce an equivalence relation on the set.
Reflexivity in Definition 2.3 says that an element of S should naturally be considered similar to itself. If an element x is similar to an element y, then y should naturally be considered similar to x, which is exactly the symmetry condition in Definition 2.3. For the last condition, transitivity, if an element x is similar to an element y and y is similar to an element z, it is natural to view x as similar to z.
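On a finite set, the three axioms can be checked mechanically. The following sketch (an illustrative helper of our own, not from the paper) verifies reflexivity, symmetry, and transitivity of a candidate relation given as a boolean predicate.

```python
def is_equivalence(S, rel):
    """Check reflexivity, symmetry, and transitivity of rel on a finite set S."""
    pairs = {(x, y) for x in S for y in S if rel(x, y)}
    reflexive = all((x, x) in pairs for x in S)
    symmetric = all((y, x) in pairs for (x, y) in pairs)
    transitive = all((x, z) in pairs
                     for (x, y) in pairs for (y2, z) in pairs if y == y2)
    return reflexive and symmetric and transitive
```

For example, "same parity" on the integers is an equivalence relation, while "less than or equal to" fails symmetry.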
An equivalence relation is a weaker notion than the identity relation, and it is more suitable and natural for studying hypergraph data from the metric-geometry viewpoint. The main aim and intuition behind our paper is that in order to study machine learning questions on a hypergraph H = (V, E), it is not necessary to construct a metric on the hypergraph, i.e., a mapping which satisfies conditions similar to those of the usual absolute value on ℝ, for example, d(x, y) = 0 if and only if x = y. Instead one only needs to construct a metric up to an equivalence relation "∼", i.e., a mapping which satisfies all the conditions of a metric except that the identity relation is replaced by the weaker condition that d(x, y) = 0 if and only if x ∼ y. It turns out that such a metric exists naturally in a very general class of hypergraphs, and can be exploited to introduce several modified methodologies in machine learning which apply to the weight prediction problem and the multi-label classification problem. In particular, in this paper we introduce a class of metrics for the set of hyperedges of a hypergraph which will be used to provide a modified kNN method for predicting labels of hyperedges. To the best of the authors' knowledge, this paper is the first to realize the existence of such metrics for the set of hyperedges of a hypergraph. In order to obtain such metrics, we introduce a notion of neighbors of hyperedges in a hypergraph which captures a topological feature of the hyperedges. It seems to the authors that this notion of neighbors of hyperedges has never been introduced before in the literature.
We now give the notion of a metric modulo an equivalence relation on a set.
Definition 2.4.
(Metric modulo an equivalence relation)
Let S be a set, and let ∼ be an equivalence relation on S. A mapping d : S × S → ℝ is said to be a metric on S modulo the equivalence relation ∼ if the following conditions are satisfied:
(i) d(x, y) ≥ 0 for all x, y ∈ S.
(ii) d(x, y) = 0 if and only if x ∼ y.
(iii) (Symmetry) d(x, y) = d(y, x) for all x, y ∈ S.
(iv) (Triangle inequality) d(x, z) ≤ d(x, y) + d(y, z) for any x, y, z ∈ S.
A metric modulo an equivalence relation on a set behaves almost like a metric. The only difference between a metric modulo an equivalence relation and a metric on a set is that condition (ii) in Definition 2.4 replaces the stronger condition that d(x, y) = 0 if and only if x = y. But in studying some properties, say P_1, …, P_h, associated with a given dataset, where each P_i denotes a property of interest, it is often the case that distinct elements of the dataset share exactly the same properties P_1, …, P_h. In this case, it is natural to view the distance between these distinct elements as zero, since they are considered equivalent with respect to the properties P_1, …, P_h. Note that if one uses a usual metric on this dataset, then one cannot identify similarities among distinct elements sharing the same properties. Thus it is more natural to use a metric modulo an equivalence relation on the dataset to study its structure with respect to a given set of properties. From this point of view, one can apply this idea to a variety of problems such as weight prediction or multi-label classification, as shown in this paper.
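On a finite set, the four conditions of Definition 2.4 can be verified by brute force. The following sketch (an illustrative helper, not from the paper) checks whether a mapping d is a metric modulo a given equivalence relation.

```python
from itertools import product

def is_metric_modulo(S, d, equiv):
    """Check that d is a metric on the finite set S modulo `equiv`:
    nonnegativity, d(x, y) = 0 iff x ~ y, symmetry, triangle inequality."""
    for x, y in product(S, repeat=2):
        if d(x, y) < 0:
            return False
        if (d(x, y) == 0) != equiv(x, y):   # condition (ii)
            return False
        if d(x, y) != d(y, x):              # symmetry
            return False
    for x, y, z in product(S, repeat=3):
        if d(x, z) > d(x, y) + d(y, z) + 1e-12:  # triangle inequality
            return False
    return True
```

For instance, d(x, y) = |x mod 2 − y mod 2| is a metric on the integers modulo the "same parity" equivalence, whereas the usual metric |x − y| fails condition (ii) for that equivalence.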
2.2. Methodology
In this subsection, we introduce our methodology for the weight (respectively, label) prediction problem for hypergraph datasets. Throughout this subsection, let H = (V, E) be a hypergraph. The main contributions of our work are the constructions of several special metric structures on the set of hyperedges E. The main motivation for equipping E with such metric structures is that we want to modify the classical k nearest neighbors (kNN) method (which only works if the underlying space is Euclidean) and apply it to the weight (respectively, label) prediction problem for elements of E. In this work, we introduce two different paths toward this modification of the classical kNN method: the first, called modified kNN, directly uses the metrics we construct on E; the second, called embedded kNN, does not use the metrics directly but instead makes use of the local geometry around each hyperedge in E, which is also introduced in this work. In the following, we describe our modified and embedded kNN methods in detail.
2.2.1. Modified kNN methods
Assume that there is a metric modulo an equivalence relation ∼, denoted by d, on E.
Weight Prediction Problem.
In the weight prediction problem, we assume that there is a subset E_0 of E such that there is a map w : E_0 → I for some interval I in ℝ. The value w(e) is called the weight of e. The aim of the weight prediction problem is to predict the possible values of w(e) for e ∈ E \ E_0. We propose a modified kNN method for predicting weights of elements of E \ E_0 as follows.
Fix a positive integer k. Take an arbitrary element e ∈ E \ E_0. Let D(e) denote the set of all distances from e to elements of E_0, i.e., D(e) = {d(e, h) : h ∈ E_0}.
Note that the set D(e) consists of finitely many distinct positive real numbers, say d_1 < d_2 < ⋯ < d_r for some positive integer r.
In experimental analysis, we always choose k such that k ≤ r.
We choose only the k smallest values in D(e), say d_1, …, d_k, and let N_k(e) denote the set of all elements h ∈ E_0 such that there exists an integer i with 1 ≤ i ≤ k for which d(e, h) = d_i, i.e.,
(1) N_k(e) = {h ∈ E_0 : d(e, h) ≤ d_k}.
We propose a predictive model for w, denoted by ŵ, as follows:
ŵ(e) = (1 / #N_k(e)) Σ_{h ∈ N_k(e)} w(h),
where #N_k(e) denotes the number of elements in N_k(e).
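The modified kNN rule above, which selects the k smallest distinct distance values and averages the weights of all observed hyperedges within the k-th value, can be sketched as follows. The function and argument names are ours, and the metric d is passed in abstractly, so any metric modulo an equivalence relation can be plugged in.

```python
def modified_knn_weight(e, observed, weight, d, k):
    """Modified kNN: collect the k smallest *distinct* distance values from e
    to observed hyperedges, take every observed hyperedge within the k-th
    distinct value (the set N_k(e) of Eq. (1)), and average their weights."""
    distinct = sorted({d(e, h) for h in observed})
    cutoff = distinct[min(k, len(distinct)) - 1]
    Nk = [h for h in observed if d(e, h) <= cutoff]
    return sum(weight[h] for h in Nk) / len(Nk)
```

Note that, unlike classical kNN, N_k(e) may contain more than k elements when several observed hyperedges realize the same distance.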
In experimental analysis, we perform the modified kNN method introduced above with d replaced by the metrics modulo certain equivalence relations which we construct on the set of hyperedges in Subsection 2.3. In Table 1, we list the metrics we use in the modified kNN methods.
The metric |
---|
the first metric, defined in (2.11) |
the second metric, defined in (2.13) |
Label Prediction Problem
In the label prediction problem, we assume that there is a subset, say E_0, of E such that there is a map ℓ : E_0 → {1, …, m} for some integer m. The value ℓ(e) is called the label of e. The aim of the label prediction problem is to predict the possible values of ℓ(e) for e ∈ E \ E_0. We propose a modified kNN method for predicting labels of elements of E \ E_0 as follows.
For each e ∈ E \ E_0, we define the set N_k(e) as in (1).
We propose a predictive model for ℓ, denoted by ℓ̂, as follows. Let L(e) be the multiset of labels ℓ(h) for h ∈ N_k(e). If L(e) has a mode, say ℓ_0, we set ℓ̂(e) = ℓ_0; otherwise we let ℓ̂(e) be the closest integer to the average of the labels in L(e).
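The mode-or-average rule can be sketched as follows. We interpret "there is a mode" as the multiset having a unique most frequent label; this reading, and the function name, are ours rather than details stated explicitly in the text.

```python
from collections import Counter

def modified_knn_label(labels_in_Nk):
    """Predict a label from the multiset of labels in N_k(e):
    use the mode if it is unique; otherwise round the average."""
    top = Counter(labels_in_Nk).most_common()
    if len(top) == 1 or top[0][1] > top[1][1]:   # a unique most frequent label
        return top[0][0]
    # tie: fall back to the closest integer to the average label
    return round(sum(labels_in_Nk) / len(labels_in_Nk))
```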
2.2.2. Embedded kNN methods
In this subsection, we introduce our embedded kNN methods. In the construction of each metric in this work, we need to introduce a local geometry at each element of the set of hyperedges E. The local geometry at a hyperedge has to encode the local hypergraph structure around that hyperedge with respect to certain properties of hypergraphs that we want to investigate. In this work, we take the viewpoint that the local geometry at a hyperedge e is a set of hyperedges in E which we consider as sharing similar features with e with respect to certain properties. We introduce two different types of local geometry on the set of hyperedges, which we call type I and type II neighborhoods of hyperedges, respectively.
Table 2 lists types of neighborhoods for hyperedges introduced in this paper.
Types of neighborhoods |
---|
Type I neighborhoods, defined in (2.10) |
Type II neighborhoods, defined in (2.12) |
We first consider label prediction problem.
Label Prediction Problem
Let E_0 be a subset of E for which each hyperedge in E_0 is equipped with a label in {1, …, m}, i.e., there is a map ℓ : E_0 → {1, …, m}.
In this case, let N(e) denote either the type I neighborhood or the type II neighborhood of a hyperedge e from Table 2. We define the map φ : E → ℝ^m which represents each hyperedge as a point in ℝ^m as follows. For each e ∈ E,
φ(e) = (c_1(e), …, c_m(e)),
where for each 1 ≤ i ≤ m, c_i(e) denotes the number of hyperedges h ∈ N(e) such that ℓ(h) = i. We call φ the feature map of E. Note that the map φ is well-defined since, by our notion of neighborhoods of hyperedges, each hyperedge h in the neighborhood N(e) belongs to the set E_0, and thus one can associate to h its label ℓ(h).
Under the map φ, the set of hyperedges E can be represented as a subset, say P, of ℝ^m. For the label prediction problem, we view the label of the point φ(e) for a given hyperedge e as the same as the label of e. In order to predict the label of a hyperedge e, we perform the classical kNN method on the set P to predict the label of φ(e), which we view as the label of the hyperedge e.
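The feature map sending a hyperedge to its vector of label counts over a neighborhood can be sketched as follows; the neighborhood is passed in as a function, labels are assumed to be the integers 1, …, m, and the names are ours.

```python
def feature_map(e, neighborhood, label, m):
    """Represent hyperedge e as a point in R^m: the i-th coordinate counts
    the neighbors of e carrying label i."""
    vec = [0] * m
    for h in neighborhood(e):
        vec[label[h] - 1] += 1   # labels are 1-based, list indices 0-based
    return vec
```

The classical kNN of the previous subsection can then be run on these vectors.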
Weight Prediction Problem
In this case, let E_0 be a subset of E for which each hyperedge in E_0 is equipped with a weight in an interval I = [a, b] in ℝ, i.e., there is a map w : E_0 → I. For an integer m, we associate a map ℓ to w as follows. We first divide the interval I into m sub-intervals I_1, …, I_m of equal length such that I_1 = [a_1, a_2), I_2 = [a_2, a_3), …, I_m = [a_m, a_{m+1}], where a_1 = a, a_{m+1} = b, and a_{i+1} − a_i = (b − a)/m for each 1 ≤ i ≤ m. For each e ∈ E_0, we let ℓ(e) = m if w(e) = b, and ℓ(e) = i if i is the unique integer in {1, …, m} such that w(e) ∈ I_i. Thus one obtains the map ℓ : E_0 → {1, …, m}.
Repeating the same arguments as in the label prediction problem, one obtains the feature map φ : E → ℝ^m defined by the label counts over neighborhoods as above.
Under the map φ, the set of hyperedges E can be represented as a subset, say P, of ℝ^m. For the weight prediction problem, we view the weight of the point φ(e) for a given hyperedge e as the same as the weight of e. In order to predict the weight of a hyperedge e, we perform the classical kNN method on the set P to predict the weight of φ(e), which we view as the weight of the hyperedge e.
In our experimental analysis, for simplicity, we always use the same fixed value of m.
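The binning of weights into m equal-length sub-intervals, with the right endpoint assigned to the last bin, can be sketched as follows (an illustrative helper under the conventions described above).

```python
def weight_to_label(w, a, b, m):
    """Bin a weight w in [a, b] into one of m equal-length sub-intervals;
    all bins are half-open except the last, which includes b."""
    if w == b:
        return m
    return int((w - a) / ((b - a) / m)) + 1   # 1-based bin index
```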
2.3. A class of metrics for the set of hyperedges of a hypergraph
In this subsection, we introduce a class of metrics modulo certain equivalence relations on the set of hyperedges of a hypergraph. We consider the incomplete hypergraphs introduced in Definition 2.1, which are suitable for studying the weight (respectively, label) prediction problem for the set of hyperedges.
For the rest of this section, we fix an incomplete hypergraph H = (V, E) such that there is a subset E_0 of E and a map f : E_0 → W, where W is either an interval I in ℝ or a set of labels {1, …, m} for some integer m. In the weight prediction problem, f is the weight map w and W is the interval I, whereas in the label prediction problem, f is the label map ℓ and W = {1, …, m}.
We first introduce a notion of neighborhoods of hyperedges in H. For each e ∈ E, denote by |e| the size of e, i.e., the number of vertices in e. Let S denote the set consisting of the sizes of all hyperedges in E, i.e., S = {|e| : e ∈ E}. Let α be a mapping defined on S which will be used to control the sizes of neighbors of hyperedges.
Definition 2.5.
(α-neighborhoods)
Let e be a hyperedge in E. The α-neighborhood of e, denoted N_α(e), consists of those hyperedges h ∈ E_0 whose overlap with e is large enough relative to the threshold ⌊α(|e|)⌋, where e ∩ h is the set of vertices that e and h have in common, and ⌊·⌋ denotes the floor function.
For the construction of a metric on an incomplete hypergraph, for each hyperedge e we only focus on the α-neighbors h of e such that the differences between the values f(h) and the predicted value of f(e) are very small; these differences can be controlled using a tuning parameter. The smaller these differences become, the more precise the predicted value of f(e) is.
We now introduce a notion of neighborhoods of hyperedges depending on this tuning parameter.
Definition 2.6.
Let be a tuning parameter. For each vertex , set
Set
For each , set
that is, it equals the number of elements in the corresponding set.
We now introduce an equivalence relation on the set of hyperedges E which allows us to identify certain hyperedges in E. Note that if two hyperedges, say e_1 and e_2, satisfy the equality above, then it is natural to view e_1 and e_2 as similar hyperedges in the weight (respectively, label) prediction problem, since their neighborhood structures around the average mean of weights (respectively, labels) are very similar.
Hence it is natural to define a binary relation on E as follows: two hyperedges e_1 and e_2 are equivalent, denoted by e_1 ∼ e_2, if the equality above holds for them. One obtains the following.
Proposition 2.7.
The binary relation "∼" is an equivalence relation.
Example 2.8.
Figure 1 shows an example of a hypergraph that indicates which hyperedges are ∼-equivalent.
In this example, . The weights of are respectively. Let for each . Then , which indicates .
Now we define a mapping d on pairs of hyperedges as follows. For hyperedges e_1, e_2 ∈ E, define
(2)
Theorem 2.9.
d is a metric modulo the equivalence relation ∼.
Proof.
It is clear that d(e_1, e_2) ≥ 0, which shows that (i) in Definition 2.4 holds.
Assume that d(e_1, e_2) = 0 for some e_1, e_2 ∈ E. Then it follows from (2) that e_1 and e_2 satisfy the defining condition of the relation, which implies that e_1 ∼ e_2. Thus (ii) in Definition 2.4 holds.
It is obvious that d(e_1, e_2) = d(e_2, e_1), and thus (iii) in Definition 2.4 holds.
Let e_1, e_2, e_3 be hyperedges in E. We see that d(e_1, e_3) ≤ d(e_1, e_2) + d(e_2, e_3).
Therefore (iv) in Definition 2.4 holds, and thus d is a metric modulo the equivalence relation ∼.
∎
Depending on the range of values of α, one can obtain different types of neighborhoods of hyperedges, and different versions of the metric d. The two distinguished cases which we study in this paper are as follows.
Definition 2.10.
(Type I neighborhoods of hyperedges)
Fix, for each hyperedge e, a choice of α that places no restriction on the sizes of neighbors. Following the construction of the neighborhood N_α(e) with this choice of α, one obtains a new type of neighborhood of e, which we call the Type I neighborhood of e.
Definition 2.11.
(A first metric on the set of hyperedges)
With the same choice of α as in Definition 2.10 for each hyperedge e, following the construction of the metric d with α replaced by this choice, one obtains a metric modulo the equivalence relation ∼, which we call the first metric on E.
The key feature of this first metric is that it does not control the sizes of neighbors of hyperedges. It therefore incorporates all weight information from the neighborhoods of hyperedges in its definition, without filtering which weights are likely to be important for prediction.
Definition 2.12.
(Type II neighborhoods of hyperedges)
Fix, for each hyperedge e, a choice of α that scales with the size of e. Following the construction of the neighborhood N_α(e) with this choice of α, one obtains a new type of neighborhood of e, which we call the Type II neighborhood of e.
Definition 2.13.
(A second metric on the set of hyperedges)
Let α be chosen for each hyperedge e using a constant c in a fixed interval, as in Definition 2.12. Following the construction of the metric d with α replaced by this choice, one obtains a metric modulo the equivalence relation ∼, which we call the second metric on E.
3. Experimental analysis
In this section, we apply the methodology proposed in Subsection 2.2 combined with the class of metrics in Subsection 2.3 to construct predictive models for weight (respectively, label) prediction problem for hypergraph datasets.
We are not aware of any hypergraph datasets containing weights (respectively, labels) of hyperedges. So in our experimental analysis, we use datasets of weighted bipartite networks, and transform them into hypergraphs.
Let G = (A, B, E_G) be a bipartite network, where A and B are disjoint sets of vertices and E_G is the set of edges. Each edge in E_G is uniquely formed by a pair of vertices (a, b), where a ∈ A and b ∈ B. The hypergraph H = (V, E) induced by the bipartite graph G is defined as follows.
(i) By exchanging the notation between A and B if necessary, one may, without loss of generality, let V = A; that is, the set of vertices of H is defined to be one of the two vertex classes of G.
(ii) For each b ∈ B, let h(b) denote the set of all vertices a ∈ A such that (a, b) ∈ E_G. We define the set of hyperedges of H as E = {h(b) : b ∈ B}.
In this case, each hyperedge in E uniquely corresponds to a vertex in B, and thus E is in bijection with B.
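The induced-hypergraph construction can be sketched as follows: each vertex b of B becomes the hyperedge of all its neighbors in A, so hyperedges are in bijection with B. The function and variable names are ours, for illustration.

```python
def bipartite_to_hypergraph(A, B, edges):
    """Turn a bipartite graph (A, B, edges) into a hypergraph on vertex set A:
    each b in B yields the hyperedge h(b) of all a in A adjacent to b."""
    hyperedges = {}
    for a, b in edges:                    # edges are pairs (a, b), a in A, b in B
        hyperedges.setdefault(b, set()).add(a)
    return A, {b: frozenset(s) for b, s in hyperedges.items()}
```

In the rating networks below, A would be the users and B the items, so each hyperedge collects all users who rated a given item.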
In our experimental analysis, we use two real-world datasets from social networks: the Epinions network and the MovieLens network. Both are bipartite rating networks in which one set of vertices consists of users and the other set consists of items. The weight of an edge represents a rating score from a user to an item. In the hypergraphs induced by these two networks, the set of vertices is the set of users, and each hyperedge is formed by the set of all users that rate the same item. Therefore, each hyperedge corresponds to an item in the bipartite graph. An example of transforming a bipartite graph into a hypergraph is shown in Figures 2 and 3. Descriptions of the two datasets are as follows.
• Epinions. This dataset was collected by Paolo Massa in a multi-week crawl (November/December 2003) from the Epinions.com website (the dataset is available at http://www.trustlet.org/downloaded_epinions.html; see [9]). In Epinions, the vertices are users and reviews; each user rates the helpfulness of a review on a numeric scale ranging from totally not helpful to totally helpful.
• MovieLens. This dataset contains movie ratings collected from https://movielens.org/ (see [10]). In this dataset, the vertices are users and movies; each edge connects a user with a movie and represents a rating score, given as an integer within a fixed range.
Table 3 contains descriptions of the hypergraphs used in our computation. The weights of hyperedges are computed using the edge weights from the bipartite network datasets and the notion of goodness introduced in [7]. In both hypergraphs, each hyperedge corresponds to an item in the bipartite graph, and the weight of each hyperedge, computed using goodness, indicates how good the item is. In order to associate each hyperedge with a label, we divide the range of weights, which is the closed interval from the minimum weight to the maximum weight, into m intervals of equal length. Then the label of a hyperedge is i if its weight belongs to the i-th interval. The set of labels consists of the integers from 1 to m.
According to the definitions of neighborhoods, one needs to select the tuning parameters. In our computation, we set the value of the tuning parameter based on the neighborhood of each hyperedge: for each hyperedge e, it equals the standard deviation of the weights of all hyperedges that belong to the neighborhood of e. The choices of k for kNN range over small integers, and several values of the constant c are tested. The performance of weight prediction is evaluated using the mean absolute error (MAE) and the root mean squared error (RMSE). The performance of label prediction is evaluated using the error rate, computed as the proportion of incorrect predictions. We select the parameter values that correspond to the smallest MAE or error rate.
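The three evaluation measures can be sketched directly from their definitions (MAE, RMSE, and error rate as the proportion of incorrect predictions):

```python
import math

def mae(true, pred):
    """Mean absolute error between true and predicted values."""
    return sum(abs(t - p) for t, p in zip(true, pred)) / len(true)

def rmse(true, pred):
    """Root mean squared error between true and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(true, pred)) / len(true))

def error_rate(true, pred):
    """Proportion of incorrectly predicted labels."""
    return sum(t != p for t, p in zip(true, pred)) / len(true)
```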
Figures 4 and 5 present the results of predicting weights of hyperedges on the Epinions dataset. In this example, the smallest MAE and RMSE values are obtained at a particular setting of the tuning parameter. In general, using the second type of neighborhood leads to better prediction results than using the first type of neighborhood.
Tables 4 and 5 present the results of predicting weights of hyperedges using the modified kNN method and the embedded kNN method. In our experimental analysis, for simplicity, we use a fixed setting for the computation of the neighborhood of each hyperedge in the embedded kNN method, where the neighborhood is either the type I or the type II neighborhood. Each cell reports a pair of numbers (mean absolute error (MAE), root mean squared error (RMSE)).
Table 6 presents the error rates of predicting labels of hyperedges using the modified kNN methods. Table 7 presents the error rates of predicting labels of hyperedges using the embedded kNN methods.
Figure 2. A subgraph of the Epinions network; one vertex class consists of users and the other of books.
Figure 3. The hypergraph induced by the bipartite subgraph in Figure 2.
Dataset | number of hyperedges | number of vertices | size of hyperedges
---|---|---|---
Epinions | 33774 | 32610 |
MovieLens | 7498 | 75106 |
Metric | ||
---|---|---|
Epinions | (0.236, 0.301) | (0.234, 0.299) |
MovieLens | (0.172, 0.217) | (0.184, 0.232) |
Neighborhood | ||
---|---|---|
Epinions | (0.234, 0.298) | (0.232, 0.296) |
MovieLens | (0.171, 0.217) | (0.183, 0.231) |
Dataset | Epinions | MovieLens | ||
---|---|---|---|---|
Metric | ||||
2 labels | 0.182 (k=1) | 0.176 (k=1) | 0.250 (k=1) | 0.250 (k=4) |
3 labels | 0.316 (k=3) | 0.306 (k=2) | 0.296 (k=1) | 0.296 (k=1)
4 labels | 0.406 (k=4) | 0.412 (k=1) | 0.303 (k=6) | 0.303 (k=4) |
5 labels | 0.507 (k=3) | 0.512 (k=4) | 0.535 (k=5) | 0.532 (k=1) |
Dataset | Epinions | MovieLens | ||
---|---|---|---|---|
Neighborhood | ||||
2 labels | 0.182(k=2) | 0.176 (k=2) | 0.252 (k=19) | 0.249 (k=13) |
3 labels | 0.323 (k=2) | 0.319 (k=5) | 0.295 (k=10) | 0.296 (k=3) |
4 labels | 0.406 (k=10) | 0.412 (k=7) | 0.304 (k=15) | 0.303 (k=10) |
5 labels | 0.511 (k=5) | 0.520 (k=2) | 0.519 (k=15) | 0.527 (k=1) |
References
- [1] S. Agarwal, K. Branson, and S. Belongie, Higher order learning with graphs. In ICML, pages 17–24, 2006.
- [2] M. R. Boutell, J. Luo, X. Shen, and C. M. Brown, Learning multi-label scene classification. Pattern Recognition, 37(9):1757–1771, 2004.
- [3] G. Chen, J. Zhang, F. Wang, C. Zhang, and Y. Gao, Efficient multi-label classification with hypergraph regularization, Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, vol. 0, pp. 1658–1665, 2009.
- [4] A. Elisseeff and J. Weston, A kernel method for multi-labelled classification, In Advances in Neural Information Processing Systems 14, pages 681–687, 2001.
- [5] F. Kang, R. Jin, and R. Sukthankar, Correlated label propagation with application to multi-label learning. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1719–1726, 2006.
- [6] H. Kazawa, T. Izumitani, H. Taira, and E. Maeda, Maximal margin labeling for multi-topic text categorization. In Advances in Neural Information Processing Systems 17, pages 649–656. 2005.
- [7] Srijan Kumar, Francesca Spezzano, V. S Subrahmanian, and Christos Faloutsos, Edge Weight Prediction in Weighted Signed Networks. 2016 IEEE 16th International Conference on Data Mining (ICDM) (2016): 221–230.
- [8] Dong Li, Zhiming Xu, Sheng Li, and Xin Sun, Link prediction in social networks based on hypergraph, Proceedings of the 22nd International Conference on World Wide Web, Pages 41-42
- [9] P. Massa and P. Avesani, Trust-aware Recommender Systems. Proceedings of the 2007 ACM Conference on Recommender Systems (2007): 17–24.
- [10] F. Maxwell Harper and Joseph A. Konstan, The MovieLens Datasets: History and Context, ACM Transactions on Interactive Intelligent Systems 5, 4: 19:1-19:19.
- [11] J. Silva and R. Willett, Hypergraph-based anomaly detection of high-dimensional co-occurrences, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, pp. 563–569, 2009.
- [12] Z. Tian, T. Hwang, and R. Kuang, A hypergraph-based learning algorithm for classifying gene expression and arrayCGH data with prior knowledge, Bioinformatics, Volume 25, Issue 21, 1 November 2009, Pages 2831–2838.
- [13] A. Torralba, K. P. Murphy, and W. T. Freeman, Sharing visual features for multiclass and multiview object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(5):854–869, 2007.
- [14] N. Ueda and K. Saito, Parametric mixture models for multi-labeled text. In Advances in Neural Information Processing Systems 16, pages 721–728. 2003.
- [15] M.-L. Zhang and Z.-H. Zhou. Multilabel neural networks with applications to functional genomics and text categorization. IEEE Transactions on Knowledge and Data Engineering, 18(10):1338–1351, 2006.
- [16] Dengyong Zhou, Jiayuan Huang, and Bernhard Schölkopf, Learning with hypergraphs: Clustering, classification, and embedding, In NIPS, 2006.
- [17] S. Yu, K. Yu, V. Tresp, and H.-P. Kriegel, Multi-output regularized feature projection, IEEE Transactions on Knowledge and Data Engineering, 18(12):1600–1613, 2006.