Wasserstein k-means with sparse simplex projection
Abstract
This paper proposes a faster Wasserstein k-means algorithm for histogram data that reduces the Wasserstein distance computations by exploiting sparse simplex projection. We shrink data samples, centroids, and the ground cost matrix, which leads to a considerable reduction of the computations used to solve optimal transport problems without any loss of clustering quality. Furthermore, we dynamically reduce the computational complexity by removing lower-valued histogram elements and harnessing sparse simplex projection while keeping the degradation of clustering quality small. We designate this proposed algorithm as sparse simplex projection based Wasserstein k-means, or SSPW k-means. Numerical evaluations conducted in comparison with the standard Wasserstein k-means algorithm demonstrate the effectiveness of the proposed SSPW k-means for real-world datasets. (This paper has been accepted at ICPR 2020 [1].)
1 Introduction
The k-means clustering algorithm is a simple yet powerful algorithm that has been widely used in various application areas including, for example, computer vision, statistics, robotics, bioinformatics, machine learning, signal processing, and medical image engineering. The standard Euclidean k-means algorithm proposed by Lloyd [2] minimizes the total squared distance between all data samples and their assigned cluster centroids. This algorithm consists of two iterative steps, the assignment step and the update step, when clustering n data samples into k clusters. Because the assignment step requires a calculation cost of O(ndk) for d-dimensional data, it becomes prohibitively expensive when n is large. For this issue, many authors have proposed efficient approaches [3, 4, 5, 6, 7, 8, 9, 10]. It should also be noted that the k-means algorithm is equivalent to non-negative matrix factorization (NMF) [11], and NMF has been applied to many fields [12, 13, 14, 15]. Regarding another direction of research in the k-means clustering algorithm, some efforts have achieved improvements of the clustering quality. Some of them address the point that the k-means clustering algorithm ignores the latent structure of data samples, such as histogram data [10]. One approach in this category achieves better performance than the standard algorithm in terms of clustering quality by exploiting the Wasserstein distance [16], a metric between probability distributions of histograms which respects the latent geometry of the underlying space. This is called the Wasserstein k-means algorithm hereinafter. The Wasserstein distance, which originates from optimal transport theory [17], has many favorable properties [18]. Therefore, it has been applied to a wide variety of problems in machine learning [19], statistical learning, and signal processing. Nevertheless, calculation of the Wasserstein distance requires a much higher computational cost than that of the standard Euclidean k-means algorithm because it requires solving an optimization problem at every iteration, which makes the Wasserstein k-means infeasible for larger-scale data.
To this end, this paper presents a faster Wasserstein k-means algorithm for histogram data that reduces the Wasserstein distance computations by exploiting sparse simplex projection. More concretely, we shrink data samples, centroids, and the ground cost matrix, which drastically reduces the computations needed to solve the linear programming optimization problems arising in optimal transport, without loss of clustering quality. Furthermore, we dynamically reduce the computational complexity by removing lower-valued histogram elements and harnessing the sparse simplex projection while keeping the degradation of the clustering quality small. We designate this proposed algorithm as sparse simplex projection based Wasserstein k-means, or SSPW k-means. Numerical evaluations are then conducted against the standard Wasserstein k-means algorithm to demonstrate the effectiveness of the proposed SSPW k-means on real-world datasets.
The source code will be made available at https://github.com/hiroyuki-kasai.
2 Preliminaries
This section first presents the notation used in the remainder of this paper. It then introduces the optimal transport problem, followed by an explanation of the Wasserstein k-means algorithm.
2.1 Notation
We denote scalars with lower-case letters, vectors with bold lower-case letters, and matrices with bold-face capitals (e.g., A). The i-th element of a vector a and the element at the (i, j) position of a matrix A are represented as a_i and A_{ij}, respectively. 1_d is used for the d-dimensional vector of ones. The superscript ⊤ stands for the matrix transpose. The operator [·]_+ outputs its argument when it is nonnegative, and 0 otherwise. Given a set S, the complement S^c is defined with respect to the full index set, and the cardinality of S is |S|. The support set of a vector w is supp(w) = {i : w_i ≠ 0}. [w]_S extracts the elements of w indexed by S, of which the size is |S|. R_+^{m×n} represents the set of nonnegative matrices of size m × n. The unit simplex, simply called the simplex, is denoted by Δ_d, which is the subset of R^d comprising all nonnegative vectors whose sum is 1. ⌊·⌋ represents the floor function, which outputs the greatest integer less than or equal to its argument.
2.2 Euclidean k-means and speed-up methods
The goal of the standard Euclidean k-means algorithm is to partition the given data samples {x_1, ..., x_n} into k separated groups, i.e., clusters, such that the following k-means cost function is minimized:
f(C) = \sum_{i=1}^{n} \min_{1 \le j \le k} \| x_i - c_j \|_2^2,   (1)
where C = {c_1, ..., c_k} is a set of cluster centers, designated as centroids hereinafter. Also, ||·||_2^2 represents the squared Euclidean distance. Consequently, our aim is to find the cluster centroids C. It is noteworthy that the problem defined in (1) is non-convex and is known to be NP-hard [20, 21]. Therefore, many efforts have been undertaken to seek a locally optimal solution of (1). Among them, the most popular yet simple and powerful algorithm is the one proposed by Lloyd [2], called Lloyd's k-means. This algorithm consists of two iterative steps: the first step, called the assignment step, finds the closest cluster centroid c_j for each data sample x_i; the second step, called the update step, then updates the cluster centroids C. These two steps continue until the clusters do not change anymore. The algorithm is presented in Algorithm 1.
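For concreteness, the following is a minimal sketch of Lloyd's two-step iteration. It is a generic illustration under standard assumptions (rows of X are the data samples); the function name, initialization, and stopping rule are illustrative and not the reference implementation used in this paper.

```python
import numpy as np

def lloyd_kmeans(X, k, max_iter=100, seed=0):
    """Minimal Lloyd's k-means on the rows of X (n x d)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()  # initial centroids
    assign = None
    for _ in range(max_iter):
        # Assignment step: squared Euclidean distance to every centroid, O(ndk).
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        new_assign = d2.argmin(axis=1)
        if assign is not None and np.array_equal(new_assign, assign):
            break  # clusters did not change
        assign = new_assign
        # Update step: each centroid becomes the mean of its assigned samples.
        for j in range(k):
            members = X[assign == j]
            if len(members) > 0:
                C[j] = members.mean(axis=0)
    return C, assign
```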
Although Lloyd's k-means is simple and powerful, the obtained result is not guaranteed to reach a global optimum because the problem is not convex. For that reason, several efforts have been made to find good initializations of Lloyd's algorithm [22, 23, 24]. Apart from the lack of a global guarantee, Lloyd's algorithm suffers from high computational complexity. The highest complexity comes from the assignment step, where the Euclidean distances between all the data samples and all cluster centroids must be calculated. This cost is O(ndk) per iteration, which is infeasible for numerous data samples or clusters, and is also problematic for higher dimension d. For this particular problem, many authors have proposed efficient approaches. One category of approaches is to generate a hierarchical tree structure of the data samples and to make the assignment step more efficient [3, 4, 5]. However, this also entails high costs to maintain the structure, and it becomes prohibitively expensive for higher-dimensional data samples. Another category is to reduce redundant distance calculations at the assignment step. For example, if the centroid assigned to a data sample x_i moves closer to x_i in successive iterations, then any centroid that did not change cannot become the closest one to x_i, and the distance calculations to the corresponding centroids can be omitted. In addition, the triangle inequality is useful to skip such calculations. This category includes, for example, [6, 7, 8, 9, 10]. However, it remains unclear whether these approaches are applicable to the Wasserstein k-means described later in Section 2.4.
2.3 Optimal transport [17]
Let Ω be an arbitrary space, d(·, ·) a metric on that space, and P(Ω) the set of Borel probability measures on Ω. For any point x ∈ Ω, δ_x is the Dirac unit mass on x. Here, we consider two families X = {x_1, ..., x_m} and Y = {y_1, ..., y_n} of points in Ω, where the measures μ and ν respectively satisfy μ = Σ_{i=1}^{m} a_i δ_{x_i} and ν = Σ_{j=1}^{n} b_j δ_{y_j} with weight vectors a ∈ Δ_m and b ∈ Δ_n. The ground cost matrix, simply denoted C, represents the distances between the elements of X and Y raised to the power p, i.e., C_{ij} = d(x_i, y_j)^p. The transportation polytope U(a, b) of a and b is a feasible set defined as the set of nonnegative m × n matrices such that their row and column marginals are respectively equal to a and b. It is formally defined as

U(a, b) := \{ T \in \mathbb{R}_+^{m \times n} : T 1_n = a, \; T^\top 1_m = b \}.
Consequently, the optimal transport matrix T^* is defined as a solution of the following optimal transport problem [17]:

T^* = \arg\min_{T \in U(a, b)} \langle T, C \rangle = \arg\min_{T \in U(a, b)} \sum_{i,j} T_{ij} C_{ij}.   (2)
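To make the LP structure of (2) explicit, the following is a small sketch that solves it with an off-the-shelf LP solver (SciPy's linprog here; the solver choice and function names are illustrative and differ from the Mosek-based setup used in the experiments of Section 4).

```python
import numpy as np
from scipy.optimize import linprog

def optimal_transport_lp(a, b, C):
    """Solve min_{T in U(a,b)} <T, C> as a linear program.
    a: (m,) source weights, b: (n,) target weights, C: (m, n) ground cost."""
    m, n = C.shape
    # Row-marginal constraints: sum_j T_ij = a_i.
    A_row = np.zeros((m, m * n))
    for i in range(m):
        A_row[i, i * n:(i + 1) * n] = 1.0
    # Column-marginal constraints: sum_i T_ij = b_j.
    A_col = np.zeros((n, m * n))
    for j in range(n):
        A_col[j, j::n] = 1.0
    A_eq = np.vstack([A_row, A_col])
    b_eq = np.concatenate([a, b])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    T = res.x.reshape(m, n)
    return T, res.fun  # optimal plan and transport cost <T*, C>
```

When C contains the p-th powers of the ground distances, the returned objective value equals W_p^p(μ, ν) in the sense of Definition 1 below.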
2.4 Wasserstein distance and Wasserstein k-means [16]
The Wasserstein k-means algorithm replaces the distance metric in (1) with the p-Wasserstein distance W_p, which is formally defined as shown below.
Definition 1 (p-Wasserstein distance). For p ∈ [1, ∞) and two probability measures μ = Σ_{i=1}^{m} a_i δ_{x_i} and ν = Σ_{j=1}^{n} b_j δ_{y_j}, the Wasserstein distance of order p is defined as

W_p(μ, ν) := \Big( \min_{T \in U(a, b)} \langle T, C \rangle \Big)^{1/p},

where C_{ij} = d(x_i, y_j)^p.
In addition, the calculation in Step 2 of Algorithm 1, i.e., the update step, is replaced with the Wasserstein barycenter calculation, which is defined as presented below.
Definition 2 (Wasserstein barycenter [25]). A Wasserstein barycenter of measures {ν_1, ..., ν_N} ⊂ P(Ω) is a minimizer of f over P(Ω), where

f(μ) := \frac{1}{N} \sum_{i=1}^{N} W_p^p(μ, ν_i).
We assume a fixed order p hereinafter. The overall algorithm is summarized in Algorithm 2.
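A schematic sketch of this Wasserstein k-means loop is given below. It reuses optimal_transport_lp from the previous sketch, and barycenter_fn stands in for any Wasserstein barycenter solver such as [25]; that function is assumed here, not provided, and the structure is an illustration rather than the paper's implementation.

```python
import numpy as np

def wasserstein_kmeans(H, k, C, barycenter_fn, max_iter=20, seed=0):
    """Schematic Wasserstein k-means for histograms H (n x d) on a fixed grid
    with ground cost C (d x d). barycenter_fn(list_of_histograms) -> centroid."""
    rng = np.random.default_rng(seed)
    centroids = [H[i] for i in rng.choice(len(H), size=k, replace=False)]
    assign = None
    for _ in range(max_iter):
        # Assignment step: one LP (optimal transport problem) per sample/centroid pair.
        dist = np.array([[optimal_transport_lp(h, c, C)[1] for c in centroids] for h in H])
        new_assign = dist.argmin(axis=1)
        if assign is not None and np.array_equal(new_assign, assign):
            break
        assign = new_assign
        # Update step: replace each centroid by a Wasserstein barycenter of its members.
        for j in range(k):
            members = [H[i] for i in range(len(H)) if assign[i] == j]
            if members:
                centroids[j] = barycenter_fn(members)
    return centroids, assign
```

The assignment step above is exactly where the LP cost dominates, which is the issue addressed by the proposed method in Section 3.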
The computation of Wasserstein barycenters has recently gained increasing attention in the literature because it has a wide range of applications. Many fast algorithms have been proposed, including, for example, [26, 27, 28, 16, 29, 30]. Although they are effective for the update step of the Wasserstein k-means algorithm, they do not directly tackle the computational issue at the assignment step, which this paper particularly addresses. [31] proposes a mini-batch algorithm for the Wasserstein k-means. However, it is not clear whether it gives better performance than the original Wasserstein k-means algorithm due to the lack of comprehensive experiments. Regarding another direction, the direct calculation of a Wasserstein barycenter with sparse support may give better performance on the problem of interest in this paper. However, as suggested and proved in [32], finding such a sparse barycenter is a hard problem, even in dimension 2 and for only 3 measures.
3 Sparse simplex projection-based Wasserstein k-means: SSPW k-means
This section first describes the motivation of the proposed method and then elaborates the algorithm of the proposed sparse simplex projection based Wasserstein k-means.
3.1 Motivation
The optimal transport problem in (2) is a linear programming (LP) problem with linear equality constraints. An LP problem is an optimization problem with a linear objective function and linear equality and inequality constraints, and it is solvable using general LP solvers such as the simplex method, the interior-point method, and their variants. The computational complexity of those methods, however, scales at best cubically in the size of the input data. Here, if the measures are supported on n bins, then the number of unknowns in T is n², and the computational complexities of the solvers are at best O(n³ log n) [33, 34]. To alleviate this difficulty, an entropy-regularized algorithm, known as the Sinkhorn algorithm, has been proposed [33]. This algorithm regularizes the Wasserstein metric by the entropy of the transport plan and enables much faster numerical schemes with quadratic complexity O(n²) per iteration. It is nevertheless difficult to obtain a high-accuracy solution. In addition, it can become unstable if the regularization parameter is reduced to a small value to improve the accuracy.
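For reference, a minimal sketch of the Sinkhorn iteration mentioned above is shown below; it is illustrative only, and the instability for a small regularization parameter discussed in the text appears as numerical under/overflow in the Gibbs kernel.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.05, num_iter=500):
    """Entropy-regularized optimal transport via Sinkhorn iterations.
    a, b: histograms (float arrays), C: ground cost matrix."""
    K = np.exp(-C / reg)          # Gibbs kernel; tiny entries when reg is small
    u = np.ones_like(a)
    for _ in range(num_iter):
        v = b / (K.T @ u)         # scale columns to match the marginal b
        u = a / (K @ v)           # scale rows to match the marginal a
    T = u[:, None] * K * v[None, :]
    return T, np.sum(T * C)       # regularized plan and its transport cost
```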
This paper instead works directly with the original LP problem in (2) and attempts to reduce the size of the measures. More specifically, we adopt the following two approaches: (i) we make the input data samples and the centroids sparser than the original ones by projecting them onto a sparser simplex, and (ii) we shrink the projected data sample, the projected centroid, and the corresponding ground cost matrix by removing their zero elements. Noteworthy points are that the first approach speeds up the computation while keeping the degradation of the clustering quality as small as possible, and that the second approach yields no degradation at all.
3.2 Sparse simplex projection
We first consider the sparse simplex projection, whose sparsity is governed by the control parameter γ^t described in Section 3.4. It projects the i-th data sample x_i and the j-th centroid c_j, respectively, into a sparse x̂_i and a sparse ĉ_j on the simplex Δ_d. For this purpose, we exploit the projection method proposed in [35], which is called GSHP.
More concretely, denoting the original x_i or c_j as w, and the projected x̂_i or ĉ_j as ŵ, ŵ is generated as

\hat{w} : \; \mathrm{supp}(\hat{w}) = S, \quad [\hat{w}]_{S^c} = 0,

where κ = ⌊γ^t d⌋ is the number of retained elements, and where S is defined as S = supp(H_κ(w)), where H_κ(·) is the operator that keeps the κ largest elements of w and sets the rest to zero. Additionally, the i-th element of ŵ for i ∈ S is defined as

[\hat{w}]_i = [w]_i + \beta,

where β is defined as

\beta = \frac{1}{|S|} \Big( 1 - \sum_{j \in S} [w]_j \Big).

The computational complexity of this projection is dominated by selecting the κ largest elements of w.
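The following is a minimal sketch of such a GSHP-style sparse simplex projection, assuming the input already lies on the simplex (nonnegative entries summing to one); the function and variable names are illustrative.

```python
import numpy as np

def sparse_simplex_projection(w, kappa):
    """GSHP-style sparse simplex projection of a histogram w on the simplex:
    keep the kappa largest elements, redistribute the removed mass uniformly
    over the kept support, and set the remaining elements to zero."""
    S = np.argpartition(w, -kappa)[-kappa:]   # support of the kappa largest elements
    w_hat = np.zeros_like(w)
    w_hat[S] = w[S] + (1.0 - w[S].sum()) / kappa
    return w_hat
```

Because the input is nonnegative and sums to one, the redistributed mass is nonnegative, so the output stays on the simplex while having at most kappa nonzero elements.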
TABLE 1: Results for the 1-D histogram evaluation (COIL-100).

| method | shrink oper. | projection | Purity [%] (γ = 0.7 / 0.8) | NMI [%] (γ = 0.7 / 0.8) | Accuracy [%] (γ = 0.7 / 0.8) | Time [sec] (γ = 0.7 / 0.8) |
|---|---|---|---|---|---|---|
| k-means | – | – | 38.0 | 41.2 | 35.4 | – |
| baseline | – | – | 59.2 | 65.5 | 58.2 | 6.51 |
| shrink | ✓ | – | – | – | – | 5.01 |
| FIX | ✓ | sample | 56.5 / 57.7 | 65.2 / 64.4 | 55.0 / 55.9 | 2.50 / 3.28 |
| FIX | ✓ | centroid | 61.3 / 60.9 | 66.0 / 66.6 | 59.1 / 59.7 | 2.73 / 3.60 |
| FIX | ✓ | sample & centroid | 60.7 / 57.3 | 66.8 / 65.5 | 59.0 / 55.8 | 1.62 / 2.40 |
| DEC | ✓ | sample | 59.5 / 59.7 | 65.0 / 65.6 | 58.4 / 58.6 | 2.62 / 3.43 |
| DEC | ✓ | centroid | 60.7 / 59.1 | 66.7 / 65.5 | 59.7 / 58.0 | 4.15 / 4.00 |
| DEC | ✓ | sample & centroid | 60.0 / 59.5 | 66.0 / 65.4 | 59.0 / 58.5 | 2.83 / 3.45 |
| INC | ✓ | sample | 58.4 / 58.5 | 65.0 / 65.5 | 57.0 / 57.3 | 6.20 / 6.65 |
| INC | ✓ | centroid | 63.5 / 61.2 | 68.1 / 66.8 | 62.5 / 59.9 | 6.27 / 6.76 |
| INC | ✓ | sample & centroid | 58.3 / 57.2 | 65.4 / 64.5 | 56.5 / 55.9 | 5.12 / 5.82 |
Fig. 1: Differences from the baseline method on the COIL-100 dataset: (a) Purity, (b) NMI, (c) Accuracy, (d) Computation time (speed-up ratio against the baseline).
3.3 Shrinking operations according to zero elements
The sparse simplex projection in the previous section increases the number of zeros in the centroids and data samples. We reduce the computational complexity by harnessing this sparse structure. The vector shrinking operator removes the zero elements from the projected sample x̂_i and the projected centroid ĉ_j, and generates the shrunk vectors x̃_i and c̃_j, respectively. It should be noted that this operation does not produce any degradation in terms of the solution of the LP optimization problem. More specifically, x̃_i can be calculated as

\tilde{x}_i = [\hat{x}_i]_{\mathrm{supp}(\hat{x}_i)},

where supp(x̂_i) is the support set of x̂_i. Similarly to this, we calculate

\tilde{c}_j = [\hat{c}_j]_{\mathrm{supp}(\hat{c}_j)},

where supp(ĉ_j) is the support set of ĉ_j.
Accordingly, we must also shrink the ground cost matrix based on the change of the sizes of x̃_i and c̃_j. For this purpose, we newly introduce the matrix shrinking operator, which removes from C the rows and columns corresponding to the zero elements of x̂_i and ĉ_j, respectively. Consequently, C is compressed into the shrunk ground cost matrix

\tilde{C} = [C]_{\mathrm{supp}(\hat{x}_i), \, \mathrm{supp}(\hat{c}_j)}.

It should also be noted that the size of C̃ is |supp(x̂_i)| × |supp(ĉ_j)|. Those sizes change according to the control parameter of the sparse ratio γ^t, which is described in the next subsection.
3.4 Control parameter of sparse ratio
As described in Section 3.2, the control parameter of the sparsity is denoted as γ^t, where t represents the iteration number. We also propose three control algorithms, which are denoted respectively as 'FIX' (fixed), 'DEC' (decrease), and 'INC' (increase), and which are formulated in terms of the minimum value γ_min and the maximum number of iterations. The overall algorithm of the proposed SSPW k-means is presented in Algorithm 3.
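As a rough illustration of the three control algorithms, the following sketch assumes a linear interpolation between γ_min and 1 for DEC and INC, while FIX keeps γ^t at γ_min; the interpolation form and the rule κ = ⌊γ^t d⌋ are assumptions for illustration only.

```python
import math

def sparsity_ratio(mode, t, max_iter, gamma_min):
    """Illustrative control schedules for the sparsity ratio gamma^t."""
    if mode == "FIX":
        return gamma_min                                   # constant ratio
    if mode == "DEC":
        return 1.0 - (1.0 - gamma_min) * t / max_iter      # dense -> sparse
    if mode == "INC":
        return gamma_min + (1.0 - gamma_min) * t / max_iter  # sparse -> dense
    raise ValueError(mode)

def num_kept_elements(mode, t, max_iter, gamma_min, d):
    # Assumed rule: keep kappa = floor(gamma^t * d) elements of a d-bin histogram.
    return math.floor(sparsity_ratio(mode, t, max_iter, gamma_min) * d)
```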
TABLE 2: Results for the 2-D histogram evaluation (USPS).

| method | shrink oper. | projection | Purity [%] (γ = 0.5 / 0.6 / 0.7 / 0.8) | NMI [%] (γ = 0.5 / 0.6 / 0.7 / 0.8) | Accuracy [%] (γ = 0.5 / 0.6 / 0.7 / 0.8) | Time [sec] (γ = 0.5 / 0.6 / 0.7 / 0.8) |
|---|---|---|---|---|---|---|
| k-means | – | – | 65.5 | 65.8 | 64.1 | – |
| baseline | – | – | 66.6 | 65.0 | 65.5 | 8.29 |
| shrink | ✓ | – | – | – | – | 7.02 |
| FIX | ✓ | sample | 67.8 / 67.0 / 66.9 / 66.8 | 66.2 / 65.5 / 65.9 / 65.2 | 66.4 / 65.8 / 65.8 / 65.8 | 3.11 / 3.25 / 3.99 / 5.04 |
| FIX | ✓ | centroid | 66.0 / 67.2 / 66.2 / 66.4 | 64.6 / 65.4 / 65.0 / 64.9 | 65.0 / 66.1 / 65.1 / 65.3 | 1.87 / 2.98 / 3.91 / 4.60 |
| FIX | ✓ | sample & centroid | 68.1 / 67.2 / 66.3 / 66.6 | 66.3 / 65.6 / 65.0 / 65.0 | 67.0 / 65.9 / 65.2 / 65.5 | 0.98 / 1.68 / 2.48 / 3.46 |
| DEC | ✓ | sample | 66.8 / 66.8 / 66.8 / 66.8 | 65.4 / 65.4 / 65.3 / 65.3 | 65.7 / 65.7 / 65.7 / 65.7 | 4.12 / 4.45 / 5.65 / 6.29 |
| DEC | ✓ | centroid | 67.0 / 67.0 / 66.8 / 66.6 | 65.3 / 65.3 / 65.3 / 65.0 | 65.8 / 65.8 / 65.7 / 65.5 | 5.22 / 5.82 / 6.13 / 7.37 |
| DEC | ✓ | sample & centroid | 66.8 / 66.8 / 66.8 / 66.8 | 65.4 / 65.4 / 65.4 / 65.3 | 65.7 / 65.7 / 65.7 / 65.7 | 3.51 / 3.91 / 4.89 / 5.62 |
| INC | ✓ | sample | 66.3 / 65.7 / 66.9 / 66.9 | 65.0 / 64.8 / 65.4 / 65.2 | 65.1 / 64.6 / 65.7 / 65.9 | 9.74 / 10.47 / 11.29 / 12.04 |
| INC | ✓ | centroid | 66.8 / 66.4 / 66.3 / 66.2 | 65.3 / 65.0 / 64.9 / 65.0 | 65.6 / 65.3 / 65.2 / 65.1 | 7.11 / 6.64 / 6.97 / 6.63 |
| INC | ✓ | sample & centroid | 66.8 / 65.8 / 66.3 / 66.5 | 65.4 / 64.8 / 65.2 / 64.9 | 65.5 / 64.7 / 65.2 / 65.5 | 6.06 / 7.05 / 7.48 / 9.71 |
Fig. 2: Differences from the baseline method on the USPS dataset: (a) Purity, (b) NMI, (c) Accuracy, (d) Computation time (speed-up ratio against the baseline).
Fig. 3: Convergence of Accuracy in terms of computation time (DEC algorithm): (a) projection of sample, (b) projection of centroid, (c) projection of centroid and sample.
4 Numerical evaluation
We conducted numerical experiments with respect to computational efficiency and clustering quality on real-world datasets to demonstrate the effectiveness of the proposed SSPW k-means. Regarding the Wasserstein barycenter algorithm, we use the one proposed in [25]. The initial centroids are calculated using the Euclidean k-means with litekmeans (http://www.cad.zju.edu.cn/home/dengcai/Data/code/litekmeans.m). linprog of Mosek [36] (https://www.mosek.com/) is used to solve the LP problem. Throughout the following experiments, the standard Wasserstein k-means described in Section 2.4 is referred to as the baseline method. The Euclidean k-means algorithm is also compared to evaluate the clustering quality metrics, although its computational cost is much lower than that of the Wasserstein variants. The experiments were performed on a quad-core Intel Core i5 computer at 3.6 GHz, with 64 GB DDR2 2667 MHz RAM. All the codes are implemented in MATLAB.
4.1 1-D histogram evaluation
We first conducted a preliminary experiment using the COIL-100 object image dataset (http://www1.cs.columbia.edu/CAVE/software/softlib/coil-100.php) [37], which includes 7,200 images of 100 objects. From this dataset, we randomly select a subset of classes and images per class. We first convert the pixel information into intensity values, and then generate a one-dimensional histogram over the intensity bins, removing the intensity of zero.
The averaged clustering performances on Purity, NMI, and Accuracy over multiple runs are summarized in TABLE 1. For ease of comparison, the difference values against the baseline method are presented in Fig. 1, of which panels (a), (b), and (c) respectively show the results of Purity, NMI, and Accuracy. Here, positive values represent improvements over the baseline method. Additionally, Fig. 1(d) presents the speed-up ratio of the computation time against that of the baseline method, where values greater than 1 indicate faster computation than the baseline. The figures summarize the results in terms of the combinations of the different control algorithms and different γ. They also show the differences among the different projection types, i.e., the projection of the centroid, the data sample, and both the centroid and the data sample. From Fig. 1, one can find that the case with the centroid projection stably outperforms the other cases. Especially, the INC algorithm with the centroid projection stably outperforms the others. However, it requires a greater number of computations, and its effectiveness in terms of computational complexity reduction is lower. On the other hand, the FIX algorithm engenders comparable improvements while keeping the computation time much lower.
4.2 2-D histogram evaluation
In this experiment, comprehensive evaluations have been conducted. We investigate a wider range of γ than that of the earlier experiment, i.e., γ ∈ {0.5, 0.6, 0.7, 0.8}. For this purpose, we used the USPS handwritten digit dataset (http://www.gaussianprocess.org/gpml/data/), which includes handwritten single digits between 0 and 9, each of which consists of a 16 × 16 pixel image. Pixel values are normalized. From this dataset, we randomly select a subset of images per class. We first convert the pixel information into intensity values, and then generate a two-dimensional histogram for which the sum of intensities of all pixels is equal to one.
The averaged clustering qualities over multiple runs are presented in TABLE 2. As shown there, the FIX algorithm outperforms the others in terms of both the clustering quality and the computation time. Similarly to the earlier experiment, for ease of comparison, the difference values against the baseline method are also shown in Fig. 2. It is surprising that the proposed algorithm keeps the clustering quality across all the metrics higher than the baseline method does, even while reducing the computation time. Addressing individual algorithms, although the INC algorithm exhibits degradation in some cases, the DEC algorithm outperforms the baseline method in all settings. Furthermore, unexpectedly, the performances of the algorithms with lower γ, i.e., γ ∈ {0.5, 0.6}, are comparable with or better than those with γ ∈ {0.7, 0.8}.
Second, the computation time under the same settings is presented in Fig. 2(d). From this result, it is apparent that the computation time is approximately proportional to γ in each control algorithm. In other words, the case with γ = 0.5 requires the lowest computation time among all. Moreover, with regard to the different projection types, the case in which the sparse simplex projection is performed onto both the centroid and the data sample requires the shortest computation time, as expected.
4.3 Convergence performance
This subsection investigates the convergence performance of the proposed algorithm. The same settings as those of the earlier experiments are used. We first address the convergence speed in terms of the computing time with respect to the different projection types as well as different γ. For this purpose, we examine the DEC algorithm; the two remaining algorithms behave very similarly. Fig. 3 shows the convergence of Accuracy in terms of computation time for the first instance of the runs in the earlier experiment. Fig. 3(a)–Fig. 3(c) respectively show the results obtained when the projection is performed onto the data sample, the centroid, and both the centroid and the data sample. Regarding the different γ, smaller γ indicates faster convergence. Apart from that, the convergence behaviors among the three projection types are similar. However, the case with projection of both the centroid and the data sample yields the fastest results among all. This is also demonstrated explicitly in Fig. 5.
We also address performance differences in terms of the different control algorithms. Fig. 5 presents the convergence speeds obtained when projecting both the centroid and the data sample, where the plots explicitly show individual iterations. As might be readily apparent, the FIX control algorithm and the INC control algorithm require much less time than the others because these two algorithms process much smaller centroids and data samples than the others at the beginning of the iterations. However, the computation time of the INC control algorithm increases gradually as the iterations proceed, whereas that of the DEC control algorithm decreases gradually. Overall, the FIX control algorithm outperforms the other algorithms.
4.4 Performance comparison on different sparsity ratios
Finally, we compared performances on different sparsity ratios γ under the same configuration as the earlier experiment. We specifically address the FIX algorithm and the case where the projection is performed onto both the centroid and the sample. Regarding the time comparison, we use the same speed-up metric against the baseline method as in the earlier experiment; values greater than 1 indicate a speed-up against the baseline method. For this experiment, we re-conducted the same simulation; therefore, the obtained values are slightly different from those in the earlier experiment due to randomness.
The results are summarized in TABLE 3. Fig. 6 also presents the results for ease of comparison. From both results, the cases with γ around 0.3 yield higher performances than the other cases with respect to the clustering quality as well as the computational efficiency. Surprisingly, we obtained around 4 points higher values in the clustering quality metrics together with a more than 20-fold speed-up. On the other hand, γ = 0.2 results in the worst performance. Therefore, the optimization of γ to give the best performance is an important topic for future research.
TABLE 3: Performance comparison on different sparsity ratios (FIX algorithm, 2-D histogram evaluation).

| method | γ | Purity [%] | NMI [%] | Accuracy [%] | Time [sec] |
|---|---|---|---|---|---|
| baseline | 0.00 | 69.7 | 69.6 | 66.9 | 7.98 |
| shrink | – | – | – | – | 6.77 |
| FIX with shrink & projection of sample & centroid | 0.90 | 69.9 | 69.7 | 67.1 | 4.68 |
| | 0.80 | 69.9 | 69.7 | 67.1 | 3.14 |
| | 0.70 | 69.9 | 69.3 | 67.0 | 2.34 |
| | 0.60 | 70.4 | 70.0 | 67.7 | 1.54 |
| | 0.50 | 71.8 | 71.8 | 69.4 | 0.88 |
| | 0.40 | 72.4 | 72.4 | 69.6 | 0.56 |
| | 0.30 | 73.9 | 72.7 | 71.4 | 0.35 |
| | 0.20 | 68.8 | 67.6 | 65.8 | 0.23 |
4.5 Discussions
From the exhaustive experiments above, it can be seen that the proposed SSPW k-means algorithm dynamically reduces the computational complexity by removing lower-valued histogram elements and harnessing the sparse simplex projection while keeping the degradation of the clustering quality small. Although the performance of each setting of the proposed algorithm depends on the dataset, we conclude that the FIX algorithm with the projection of both the centroid and the sample under lower sparsity ratios is the best option because it stably gives better clustering quality with the fastest computing times among all the settings. In fact, this setting gives performances comparable to the best one in TABLE 1, and outperforms the others in TABLE 2. It also yields around 4 points higher values in the clustering quality metrics with a more than 20-fold speed-up, as seen in TABLE 3.
5 Conclusions
This paper proposed a faster Wasserstein k-means algorithm by reducing the computational complexity of the Wasserstein distance computations, exploiting sparse simplex projection operations. Numerical evaluations demonstrated the effectiveness of the proposed SSPW k-means algorithm. As a future avenue, we would like to extend the proposed approach to a stochastic setting [38].
References
- [1] T. Fukunaga and H. Kasai. Wasserstein k-means with sparse simplex projection. In 25th International Conference on Pattern Recognition (ICPR), 2020.
- [2] S. Lloyd. Least squares quantization in PCM. IEEE Trans. Inf. Theory, 28(2):129–137, 1982.
- [3] K. Alsabti, S. Ranka, and V. Singh. An efficient k-means clustering algorithm. In IPPS/SPDP Workshop on High Performance Data Mining, 1998.
- [4] D. Pelleg and A. Moore. Accelerating exact k-means algorithms with geometric reasoning. In Proc. 5th ACM SIGKDD Int. Conf. Knowl. Disc. Data Mining (KDD), 1999.
- [5] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu. An efficient k-means clustering algorithm: analysis and implementation. IEEE Trans. Pattern Anal. Mach. Intell., 24(7):881–892, 2002.
- [6] C. Elkan. Using the triangle inequality to accelerate k-means. In ICML, 2003.
- [7] G. Hamerly. Making k-means even faster. In SIAM Int. Conf. Data Mining (SDM), 2010.
- [8] J. Drake and G. Hamerly. Accelerated k-means with adaptive distance bounds. In NIPS Work. Optim., 2012.
- [9] Y. Ding, Y. Zhao, X. Shen, M. Musuvathi, and T. Mytkowicz. Yinyang k-means: A drop-in replacement of the classic k-means with consistent speedup. In ICML, 2015.
- [10] T. Bottesch, T. Buhler, and M. Kachele. Speeding up k-means by approximating euclidean distances via block vectors. In ICML, 2016.
- [11] C. Ding, X. He, and H. D. Simon. On the equivalence of nonnegative matrix factorization and spectral clustering. In Proceedings of the 2005 SIAM International Conference on Data Mining (SDM), 2005.
- [12] S. Innami and H. Kasai. NMF-based environmental sound source separation using time-variant gain features. Computers & Mathematics with Applications, 64(5):1333–1342, 2012.
- [13] H. Kasai. Stochastic variance reduced multiplicative update for nonnegative matrix factorization. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.
- [14] H. Kasai. Accelerated stochastic multiplicative update with gradient averaging for nonnegative matrix factorization. In 26th European Signal Processing Conference (EUSIPCO), 2018.
- [15] R. Hashimoto and H. Kasai. Sequential semi-orthogonal multi-level NMF with negative residual reduction for network embedding. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020.
- [16] J. Ye, P. Wu, J. Z. Wang, and J. Li. Fast discrete distribution clustering using Wasserstein barycenter with sparse support. IEEE Trans. Signal Process., 65(9):2317–2332, 2017.
- [17] G. Peyré and M. Cuturi. Computational optimal transport. Foundations and Trends in Machine Learning, 11(5-6):355–607, 2019.
- [18] C. Villani. Optimal transport: Old and new. Springer, 2008.
- [19] H. Kasai. Multi-view Wasserstein discriminant analysis with entropic regularized Wasserstein distance. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020.
- [20] S. Dasgupta. The hardness of k-means clustering. Technical Report CS2007-0890, UC San Diego, 2007.
- [21] M. Mahajan, P. Nimbhorkar, and K. Varadarajan. The planar k-means problem is NP-hard. In Proc. 3rd Int. Work. Alg. Comput. (WALCOM), 2009.
- [22] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Proc. 18th Ann. ACM-SIAM Symp. Discr. Alg. (SODA), 2007.
- [23] R. Ostrovsky, Y. Rabani, L. J. Schulman, and C. Swamy. The effectiveness of Lloyd-type methods for the k-means problem. In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2006.
- [24] B. Bahmani, B. Moseley, A. Vattani, R. Kumar, and S. Vassilvitskii. Scalable k-means++. In Proc. VLDB Endow, 2012.
- [25] J.-D. Benamou, G. Carlier, M. Cuturi, L. Nenna, and G. Peyré. Iterative Bregman projections for regularized transportation problems. SIAM Journal on Scientific Computing, 37(2):A1111–A1138, 2015.
- [26] M. Cuturi and A. Doucet. Fast computation of Wasserstein barycenters. In ICML, 2014.
- [27] N. Bonneel, J. Rabin, and G. Peyré. Sliced and Radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 51:22–45, 2015.
- [28] E. Anderes, S. Borgwardt, and J. Miller. Discrete Wasserstein barycenters: Optimal transport for discrete data. Mathematical Methods of Operations Research, 84:389–409, 2016.
- [29] S. Claici, E. Chien, and J. Solomon. Stochastic Wasserstein barycenters. In ICML, 2018.
- [30] G. Puccetti, L. Rüschendorf, and S. Vanduffel. On the computation of Wasserstein barycenters. Journal of Multivariate Analysis, 176:1–16, 2020.
- [31] M. Staib and S. Jegelka. Wasserstein k-means++ for cloud regime histogram clustering. In Proceedings of the Seventh International Workshop on Climate Informatics (CI), 2017.
- [32] S. Borgwardt and S. Patterson. On the computational complexity of finding a sparse Wasserstein barycenter. arXiv preprint arXiv:1910.07568, 2019.
- [33] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Annual Conference on Neural Information Processing Systems (NIPS), 2013.
- [34] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision, 40(2):99–121, 2000.
- [35] A. Kyrillidis, S. Becker, V. Cevher, and C. Koch. Sparse projections onto the simplex. In ICML, 2013.
- [36] E. D. Andersen, C. Roos, and T. Terlaky. High Performance Optimization, volume 33, chapter The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm. Springer, Boston, MA, 2000.
- [37] S. A. Nene, S. K. Nayar, and H. Murase. Columbia University Image Library (COIL-100). Technical Report CUCS-006-96, Columbia University, 1996.
- [38] H. Kasai. SGDLibrary: A MATLAB library for stochastic optimization algorithms. Journal of Machine Learning Research (JMLR), 18, 2018.