
Learning nonnegative matrix factorizations from compressed data

Abraar Chaudhry and Elizaveta Rebrova
Abstract.

We propose a flexible and theoretically supported framework for scalable nonnegative matrix factorization. The goal is to find nonnegative low-rank components directly from compressed measurements, accessing the original data only once or twice. We consider compression through randomized sketching methods that can be adapted to the data, or can be oblivious. We formulate optimization problems that only depend on the compressed data, but which can recover a nonnegative factorization which closely approximates the original matrix. The defined problems can be approached with a variety of algorithms, and in particular, we discuss variations of the popular multiplicative updates method for these compressed problems. We demonstrate the success of our approaches empirically and validate their performance in real-world applications.

Princeton University, Department of Operations Research and Financial Engineering. Emails: azc@alumni.princeton.edu, elre@princeton.edu. The authors acknowledge partial support by NSF DMS-2309685 and DMS-2108479.

1. Introduction

Low-rank approximations are arguably the main tool for simplifying and interpreting large, complex datasets. Methods based on the singular value decomposition of the data matrix deliver deterministic, useful results via polynomial-time algorithms. However, for nonnegative data, spatial localization and interpretability of the features can be boosted by additionally making the factors element-wise nonnegative [23]. In the standard form, given a nonnegative matrix $\mathbf{X}\in\mathbb{R}^{m\times n}$ and a target rank $r$, the Nonnegative Matrix Factorization (NMF) problem is the task of finding matrices $\mathbf{U}\in\mathbb{R}^{m\times r}$ and $\mathbf{V}\in\mathbb{R}^{n\times r}$ solving the problem

\min_{\mathbf{U},\mathbf{V}\geq 0}\|\mathbf{X}-\mathbf{U}\mathbf{V}^{T}\|_{F}^{2}. \qquad (1.1)

From the component matrices $\mathbf{U}$ and $\mathbf{V}$, one can obtain a soft clustering of the data with additional interpretability of the components compared to the SVD results [23]. NMF has become a standard analysis tool across many application areas, such as topic modeling and text mining [2, 40], image processing [23], hyperspectral unmixing [18, 9], genomics [34], and others. It is amenable to numerous extensions, such as incorporating semi-supervision [24], tree-like hierarchy of the clusters [13, 21], or additional information about the domain [40, 12].

The problem (1.1) is in general NP-hard [38] to solve and it never possesses a unique solution. Despite these challenges, several iterative algorithms have been developed to solve the NMF problem, including multiplicative updates (MU), alternating least squares (ALS), alternating nonnegative least squares (ANLS), and hierarchical alternating least squares (HALS). See [11, Section 3] for a discussion of these methods.
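
For reference, a minimal sketch of the classical Lee-Seung multiplicative updates for (1.1) is given below; the NumPy implementation and the toy example are ours, not taken from the paper.

import numpy as np

def nmf_mu(X, r, n_iter=500, eps=1e-10, seed=0):
    """Lee-Seung multiplicative updates for min_{U,V >= 0} ||X - U V^T||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, r))
    V = rng.random((n, r))
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)    # multiplicative step in U
        V *= (X.T @ U) / (V @ (U.T @ U) + eps)  # multiplicative step in V
    return U, V

# toy check on a matrix with exact nonnegative rank r
rng = np.random.default_rng(1)
U0, V0 = rng.random((60, 5)), rng.random((40, 5))
X = U0 @ V0.T
U, V = nmf_mu(X, r=5)
print(np.linalg.norm(X - U @ V.T) / np.linalg.norm(X))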

When the size of the data matrix $\mathbf{X}$ is large, it is challenging or impossible to store it, and even harder to use it within the optimization algorithms searching for the NMF decomposition. Note that the resulting factors $\mathbf{U}$ and $\mathbf{V}$ collectively contain only $r(n+m)$ entries, which is much less than the total of $nm$ entries in $\mathbf{X}$ if $r\ll\min\{n,m\}$. Thus, it can be possible to store and process the resulting factors even when processing the whole data is challenging.

This motivates us to apply a sketch-and-solve approach to the NMF problem. This means that we will first compute a linear function of our input matrix, $\mathcal{L}(\mathbf{X})$, known as a sketch, and then, second, find a good factorization $\mathbf{U}\mathbf{V}^T$ based only on the sketch and the linear function $\mathcal{L}$, known as the sketching map. If the size of the sketch is much smaller than $\mathbf{X}$, the second task may be significantly more practical when there are memory limitations. The practice of using sketches is also applicable to matrix factorization problems in other settings, such as when different entries of $\mathbf{X}$ are revealed over time. In certain streaming applications, it has been shown that there is little benefit to considering nonlinear sketching functions as opposed to linear sketches [25].

A standard and well-developed application of the linear sketch-and-solve approach is for the simpler problem of linear regression [33, 5, 29]. Wide classes of random matrices, defined obliviously to the underlying data, can be used as linear sketching operators, and deterministic conditions related to the geometry preservation properties of the sketching operators were formulated [26, 43]. Another prominent example of the sketch-and-solve approach is randomized SVD algorithms. To find a low-rank factorization of a given matrix from its sketched measurements, the sketch should retain spectral properties of the matrix rather than being data oblivious. In [14], a simple procedure of forming refined data-adapted sketches via just one pass over the data (a randomized rangefinder algorithm) was proposed.

In this work, we develop the theory showing why and how these data-adapted sketches are useful in finding nonnegative low-rank components, and we also consider the cases when random sketches, independent of the data, can be used.

One way to sketch a structured object (in our case, a matrix) is to vectorize it and apply a random linear map to the resulting vector. This includes standard compressive sensing approaches for low-rank matrix recovery, such as in [3, 7]. Another way (which is the focus of this work) is to consider sketches that take the form of left and right matrix products with $\mathbf{X}$, e.g., $\mathbf{A}\mathbf{X}$ or $\mathbf{X}\mathbf{B}$ for appropriately sized matrices $\mathbf{A}$ and $\mathbf{B}$. Column and row sketching have been used successfully for matrix factorization and approximation problems [37, 8, 44], and its higher-order analogue, modewise sketching, was used to speed up tensor low-rank fitting [19, 15]. An advantage of this approach is the compactness of the sketching matrices $\mathbf{A}\in\mathbb{R}^{k\times m}$ and $\mathbf{B}\in\mathbb{R}^{n\times k}$, which together contain $k(n+m)$ elements, compared to the $kmn$ entries in a linear sketch applied to a vectorization of the matrix $\mathbf{X}$ in $\mathbb{R}^{mn}$. Another advantage is that the matrix structure of the data is preserved throughout the sketching, which is crucial in this work for integrating the compressed matrices within learning algorithms, such as the multiplicative updates algorithm.
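
To make the storage comparison concrete, here is a small NumPy illustration (ours) of forming left and right sketches $\mathbf{A}\mathbf{X}$ and $\mathbf{X}\mathbf{B}$; the dimensions and Gaussian scaling are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
m, n, k = 5000, 4000, 50
X = rng.random((m, n))                          # full data; may be too large to keep in practice
A = rng.standard_normal((k, m)) / np.sqrt(m)    # left sketching matrix, k x m
B = rng.standard_normal((n, k)) / np.sqrt(n)    # right sketching matrix, n x k

AX, XB = A @ X, X @ B                           # row and column sketches kept downstream
print(X.size, AX.size + XB.size)                # m*n entries vs. k*(m+n) entries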

1.1. Contributions and outline

The idea of making the NMF problem scalable through randomized sketching has been considered before. In Section 2, we review relevant related work. What was missing in the prior works is the crucial link between the algorithmic outcomes of compressed problems and their fitness for the original problem. Establishing such a connection is challenging, partially due to the limited theoretical guarantees for the convergence of NMF algorithms (which are essentially inevitable due to the NP-hardness of the problem). We approach this challenge in the following way: (1) we define the compressed problems such that we can provably compare their optimal solutions with the optimal solutions to the uncompressed problem, and (2) we propose efficient algorithms for these compressed problems. Due to (1), this also means getting good solutions for the original problem.

In Section 3, we formulate optimization problems which depend only on sketches, but whose optimal solutions approximately solve the original NMF problem. In addition, these problems are formulated to be amenable to efficient solvers. We propose three types of such problems: (I) for two-sided row and column sketching, (II) for one-sided sketching with orthogonal data-adapted measurement matrices, and (III) with approximately orthogonal (e.g., random) data-oblivious measurement matrices.

The proposed compressed problem with row and column sketching is as follows

(I)\quad \tilde{\mathbf{U}},\tilde{\mathbf{V}}=\operatorname*{arg\,min}_{\mathbf{U},\mathbf{V}\geq 0}\; \|\mathbf{A}_{1}(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\|_{F}^{2}+\|(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\mathbf{A}_{2}\|_{F}^{2}
\qquad\qquad +\lambda_{1}\|P_{\mathbf{A}_{2}^{T}\mathbf{X}^{T}}^{\perp}\mathbf{U}\mathbf{V}^{T}\|_{F}^{2}+\lambda_{2}\|\mathbf{U}\mathbf{V}^{T}P_{\mathbf{A}_{1}\mathbf{X}}^{\perp}\|_{F}^{2}\quad\text{(in Theorem 3.1).}

Theorem 3.1 guarantees that if $\mathbf{X}$ has an exact nonnegative factorization of rank $r$, then the solution to the problem above is also exact, $\mathbf{X}=\tilde{\mathbf{U}}\tilde{\mathbf{V}}^T$, as long as the sketching dimension is at least $r$. Crucially, the matrices $\mathbf{A}_1$ and $\mathbf{A}_2$ can be generic. We explain how to solve this problem in the sketched space despite the inclusion of the regularizer terms involving orthogonal projections. Empirically, as shown in Section 5, this problem can be employed in a simplified form with $\lambda_1=\lambda_2=0$, and it is suitable for data with approximately low nonnegative rank: if the sketching matrices are generic (in particular, not designed to approximate the range of the data), the two-sided method should be employed for tight recovery.

One-sided sketching can be more convenient for some types of data and is also more compact. Indeed, iteratively solving the two-sided problem requires storing and using both $\mathbf{A}_1$ and $\mathbf{A}_2$, whereas sketching on one side only takes half the space for the same amount of compression. The proposed one-sided compressed problem formulations are

(II)\quad \tilde{\mathbf{U}},\tilde{\mathbf{V}}=\operatorname*{arg\,min}_{\mathbf{U},\mathbf{V}\geq 0}\left[\|\mathbf{A}(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\mathbf{U}\mathbf{V}^{T}\|^{2}_{F}\right]\quad\text{(in Theorem 3.4), or}
(III)\quad \tilde{\mathbf{U}},\tilde{\mathbf{V}}=\operatorname*{arg\,min}_{\mathbf{U},\mathbf{V}\geq 0}\left[\|\mathbf{A}(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\|^{2}_{F}+\lambda\|\mathbf{U}\mathbf{V}^{T}\|^{2}_{F}\right]\quad\text{(in Theorem 3.10).}

So, what is required of the sketching matrices for them to work successfully within one-sided compression, (II) or (III)? Theorem 3.4 is stated for sketching matrices with orthonormal rows: in this case, the regularizer in the form $P^{\perp}_{\mathbf{A}}\mathbf{U}\mathbf{V}^{T}=(\mathbf{I}-\mathbf{A}^T\mathbf{A})\mathbf{U}\mathbf{V}^{T}$ can be conveniently incorporated into efficient solvers. Otherwise, we can use the simplified regularizer without the projection operator, Theorem 3.10, where the resulting loss depends on $\mathbf{A}$ being approximately orthogonal (to the extent easily satisfied by generic random matrices, as described in Remark 3.11). We note that in the one-sided cases, nonzero regularization $\lambda$ is crucial both theoretically and empirically (Figure 5).
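
As an illustration of how the regularizer folds into the sketched space when $\mathbf{A}$ has orthonormal rows, the following sketch (ours; the function name and interface are assumptions) evaluates the objective of (II) using only $\mathbf{A}$ and the sketch $\mathbf{A}\mathbf{X}$, never the full $\mathbf{X}$.

import numpy as np

def compressed_obj_II(AX, A, U, V, lam):
    """Objective (II): ||A(X - U V^T)||_F^2 + lam * ||P_A^perp U V^T||_F^2.

    For A with orthonormal rows, ||P_A^perp U V^T||_F^2 =
    ||U V^T||_F^2 - ||A U V^T||_F^2, so only small Gram matrices are needed.
    """
    AU = A @ U                                      # k x r
    fit = np.linalg.norm(AX - AU @ V.T, "fro") ** 2
    GV = V.T @ V                                    # r x r Gram matrix
    norm_UV = np.trace((U.T @ U) @ GV)              # ||U V^T||_F^2
    norm_AUV = np.trace((AU.T @ AU) @ GV)           # ||A U V^T||_F^2
    return fit + lam * (norm_UV - norm_AUV)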

Informally, both Theorems 3.4 and 3.10 state that when we find $\tilde{\mathbf{U}}$ and $\tilde{\mathbf{V}}^T$ solving the compressed problems stated above, the error $\|\mathbf{X}-\tilde{\mathbf{U}}\tilde{\mathbf{V}}^T\|^2_F$ depends on (a) how well an existing (unknown) solution of rank $r$ fits the uncompressed problem, $\|\mathbf{X}-\mathbf{U}\mathbf{V}^T\|_F^2$, (b) how well the sketching matrix approximates the range of the data, $\|P^{\perp}_{\mathbf{A}}\mathbf{X}\|^2_F$, and (c) how close to orthonormal the rows of $\mathbf{A}$ are. In particular, this explains and quantifies how orthogonal data-adapted measurements (e.g., constructed via the randomized rangefinder algorithm [14]) are useful in finding nonnegative low-rank components. By Corollary 3.6, in this case, the solution of the compressed problem above is exact for an original $\mathbf{X}$ that admits an exact rank-$r$ decomposition, provided the sketch size $k$ slightly oversamples $r$. Compared to that, data-oblivious one-sided measurements incur additional loss, both theoretically and empirically; however, they can be advantageous for other reasons. For example, they can be re-generated when needed without any access to the data, and they do not require an additional pass over the data to form them.

In Section 4, we move from comparing the optimal values to solving the three compressed problems above. We formulate new variants of the multiplicative updates (MU) algorithm for each of them and show that the losses are nonincreasing during their iterations in Corollaries 4.3, 4.4, and 4.5, respectively. These corollaries follow from a more general result, Theorem 4.1, formulated for a generic loss function with sketching. We also briefly discuss using the projected gradient descent method on our compressed objectives.

One special technical challenge for using MU on the compressed objectives is that random sketching violates nonnegativity of the compressed components, which is the property that ensures the validity of the step sizes used in the MU algorithm. To address this challenge, we further generalize the defined compressed objective functions to include small shifts of the form $\sigma\|\mathbf{1}_m^T(\mathbf{X}-\mathbf{U}\mathbf{V}^T)\|^2$, where $\mathbf{1}_m$ is the vector of all ones in $\mathbb{R}^m$. This results in corrections of the form $\mathbf{A}^T\mathbf{A}+\sigma\mathbf{1}_m\mathbf{1}_m^T$ that restore the required nonnegativity inside the recovery algorithms (the relevant theory is developed in Subsection 3.4).

In Section 5, we give examples on real and synthetic datasets, in particular showing successful performance of the proposed methods using about $5\%$ of the initial data. Finally, in Section 6, we conclude with some directions for future research.

2. Related work on scalable NMF

A number of randomized methods were proposed to improve the scalability of NMF, most of them in the form of heuristics.

First, several works propose an iterative random sketching approach, which requires sampling new random sketching matrices as the algorithm progresses. Such works include the methods involving random projection streams [45, 46] that allow for Johnson-Lindenstrauss-type sketching matrices (random and oblivious to the data) but require multiple sketches and passes over the initial data. Similarly, in [28, 27], compressed and distributed versions of different descent methods for NMF use different compression matrices across iterations, as opposed to a single compression. Our focus is on the setting that requires one or two passes over the original data in the preprocessing stage, while the algorithm that searches for nonnegative factors works solely on the compressed data.

In [35], the factor matrices are additionally assumed to be sparse and are recovered with compressed sensing techniques. We do not make additional assumptions on the structure of the factor matrices.

Data-adapted sketching with randomized SVD techniques was used in [6] in the context of the hierarchical alternating least squares algorithm, although no theoretical justification of the proposed methods was given. Recently, these ideas were extended to a symmetric variant of the NMF problem in [16]. Additionally, in [49], a randomized SVD approximation is integrated into alternating multiplicative updates in a way that saves space, also without theoretical guarantees.

The two works most closely related to ours are [42] and [36]. Both of these papers derive compressed objective functions and seek to apply semi-NMF methods to iteratively improve a nonnegative factorization. A semi-NMF is a factorization in which one factor is entrywise nonnegative and the other factor is not constrained. Both papers apply the semi-NMF multiplicative updates from [4], and the latter also considers other update methods, including updates based on the alternating direction method of multipliers (ADMM). Although the updates of [4] are guaranteed not to increase their corresponding semi-NMF objectives, neither [42] nor [36] shows whether these guarantees can be extended to their NMF objectives. So, neither the validity of the derived objective functions nor the convergence of the proposed iterative methods on the original NMF problem was theoretically justified. A crucial motivation of this work is to create a connection between the algorithms working on the compressed problems and their performance with respect to the solution of the original problem. We achieve this with new variants of the standard NMF algorithms (different from those in [36, 42]) for the newly formulated compressed objective functions. We also provide a numerical comparison between the methods in Section 5.

2.1. Notations

Throughout the text, matrices and vectors are denoted with bold letters. We denote the Frobenius matrix norm by $\|\cdot\|_F$ and the spectral (operator) norm by $\|\cdot\|$. Writing $\mathbf{X}\in\mathbb{R}^{m\times n}_+$ means that the matrix is element-wise nonnegative; the same is denoted by $\mathbf{X}\geq 0$ when the size of the matrix is clear from the context. Element-wise positive and negative parts of vectors and matrices are denoted by $(\cdot)_+=\max(\cdot,0)$ and $(\cdot)_-=-\min(\cdot,0)$, respectively. Further, $\circ$ denotes element-wise multiplication and $/$ denotes element-wise division. $P_{\mathbf{Z}}$ is the linear projection operator onto the column space of a tall matrix $\mathbf{Z}$; the projection onto the orthogonal complement is $P_{\mathbf{Z}}^{\perp}:=\mathbf{I}-P_{\mathbf{Z}}$.

3. Compressed problems with reliable solutions

We formulate optimization problems analogous to (1.1) which do not require storing the entire data matrix $\mathbf{X}$ and instead use sketched measurements. This is achieved by the use of carefully chosen regularization terms. In this section, we prove that the formulated problems are guaranteed to approximately solve the original NMF problem. In the next section, we show that the standard NMF solvers are easily extendable to these new regularized objective functions.

3.1. Two-sided compression

First, we show that if a matrix has an exact low-rank nonnegative matrix factorization, then one can recover an exact factorization using linear measurements on both sides.

Theorem 3.1.

Suppose $\mathbf{X}$ has an exact nonnegative factorization $\mathbf{X}=\mathbf{U}_0\mathbf{V}_0^T$, where $\mathbf{U}_0\in\mathbb{R}_+^{m\times r}$, $\mathbf{V}_0\in\mathbb{R}_+^{n\times r}$ and they are both full-rank, $r\leq\min\{n,m\}$. Let $\mathbf{A}_1$ and $\mathbf{A}_2$ be matrices of sizes $r\times m$ and $n\times r$, respectively, such that $\mathbf{A}_1\mathbf{X}\mathbf{A}_2$ is invertible (note that this condition holds generically, i.e., for all but a (Lebesgue) measure-zero set of matrices). For any $\lambda_1,\lambda_2>0$, let

\tilde{\mathbf{U}},\tilde{\mathbf{V}}=\operatorname*{arg\,min}_{\mathbf{U},\mathbf{V}\geq 0}\; \|\mathbf{A}_{1}(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\|_{F}^{2}+\|(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\mathbf{A}_{2}\|_{F}^{2} \qquad (3.1)
\qquad\qquad +\lambda_{1}\|P_{\mathbf{A}_{2}^{T}\mathbf{X}^{T}}^{\perp}\mathbf{U}\mathbf{V}^{T}\|_{F}^{2}+\lambda_{2}\|\mathbf{U}\mathbf{V}^{T}P_{\mathbf{A}_{1}\mathbf{X}}^{\perp}\|_{F}^{2},

where $(\mathbf{U},\mathbf{V}\geq 0)$ means $(\mathbf{U}\in\mathbb{R}_+^{m\times r},\mathbf{V}\in\mathbb{R}_+^{n\times r})$. Then $\mathbf{X}=\tilde{\mathbf{U}}\tilde{\mathbf{V}}^T$.

Remark 3.2.

We can similarly take the sketching matrices $\mathbf{A}_1$ and $\mathbf{A}_2$ of any sizes $k_1\times m$ and $n\times k_2$, respectively, with $k_1,k_2\geq r$.

Remark 3.3 (Implementation considerations).

This and further objective functions are formulated to allow for memory-efficient computations. For example, in the above objective function (3.1), one need not store $\mathbf{X}$ and can store $\mathbf{A}_1\mathbf{X}$ and $\mathbf{X}\mathbf{A}_2$ instead. Likewise, one does not need to store or compute $P_{\mathbf{A}_2^T\mathbf{X}^T}^{\perp}$, which is an $m\times m$ matrix, since one can instead compute

\|P_{\mathbf{A}_{2}^{T}\mathbf{X}^{T}}^{\perp}\mathbf{U}\mathbf{V}^{T}\|_{F}^{2} = \mathrm{Tr}(\mathbf{V}\mathbf{U}^{T}P_{\mathbf{A}_{2}^{T}\mathbf{X}^{T}}^{\perp}\mathbf{U}\mathbf{V}^{T}) = \mathrm{Tr}(\mathbf{V}\mathbf{U}^{T}\mathbf{U}\mathbf{V}^{T})-\mathrm{Tr}(\mathbf{V}\mathbf{U}^{T}\mathbf{Q}_{2}\mathbf{Q}_{2}^{T}\mathbf{U}\mathbf{V}^{T}),

where $\mathbf{Q}_2$ is an $m\times r$ matrix whose columns form an orthonormal basis of the columns of $\mathbf{X}\mathbf{A}_2$, so that $P_{\mathbf{A}_2^T\mathbf{X}^T}=\mathbf{Q}_2\mathbf{Q}_2^T$. One can do a similar trick to compute $P_{\mathbf{A}_1\mathbf{X}}^{\perp}$ in terms of an analogous matrix $\mathbf{Q}_1$ (efficient ways to find orthogonal bases of the column/row span are well-known; see, e.g., the discussion in [14], Section 4.1). Thus, one can compute the objective function of (3.1) with total storage cost $3r(n+m)$ by storing the matrices $\mathbf{U},\mathbf{V},\mathbf{A}_1\mathbf{X},\mathbf{X}\mathbf{A}_2,\mathbf{Q}_1,\mathbf{Q}_2$. This and similar considerations are crucial in Section 4, when we study computationally efficient iterative algorithms that solve the optimization problems on compressed data defined here.

The proof of Theorem 3.1 is loosely inspired by the row-column matrix sensing argument from [8].

Proof.

First, we show that the matrices $\mathbf{U}_0,\mathbf{V}_0$ such that $\mathbf{X}=\mathbf{U}_0\mathbf{V}_0^T$ are not only feasible for the problem (3.1) but also give zero objective value. Indeed, since $\mathbf{A}_1\mathbf{X}\mathbf{A}_2=\mathbf{A}_1\mathbf{U}_0\mathbf{V}_0^T\mathbf{A}_2$ is invertible, it follows that the square matrices $\mathbf{A}_1\mathbf{U}_0$ and $\mathbf{V}_0^T\mathbf{A}_2$ are also invertible. So, from $\mathbf{X}\mathbf{A}_2=\mathbf{U}_0\mathbf{V}_0^T\mathbf{A}_2$ we can then compute $\mathbf{U}_0=\mathbf{X}\mathbf{A}_2(\mathbf{V}_0^T\mathbf{A}_2)^{-1}$ and similarly $\mathbf{V}_0^T=(\mathbf{A}_1\mathbf{U}_0)^{-1}\mathbf{A}_1\mathbf{X}$. Then we have

\mathbf{X}=\mathbf{U}_{0}\mathbf{V}_{0}^{T}=\mathbf{X}\mathbf{A}_{2}(\mathbf{V}_{0}^{T}\mathbf{A}_{2})^{-1}(\mathbf{A}_{1}\mathbf{U}_{0})^{-1}\mathbf{A}_{1}\mathbf{X}=\mathbf{X}\mathbf{A}_{2}(\mathbf{A}_{1}\mathbf{X}\mathbf{A}_{2})^{-1}\mathbf{A}_{1}\mathbf{X}=:\mathbf{Y}.

This implies

\mathbf{X}P_{\mathbf{A}_{1}\mathbf{X}}^{\perp} = \mathbf{Y}P_{\mathbf{A}_{1}\mathbf{X}}^{\perp} = \mathbf{X}\mathbf{A}_{2}(\mathbf{A}_{1}\mathbf{X}\mathbf{A}_{2})^{-1}\mathbf{A}_{1}\mathbf{X}\,\big(\mathbf{I}-(\mathbf{A}_{1}\mathbf{X})^{T}(\mathbf{A}_{1}\mathbf{X}(\mathbf{A}_{1}\mathbf{X})^{T})^{-1}\mathbf{A}_{1}\mathbf{X}\big)=\mathbf{0},

and similarly $P_{\mathbf{A}_2^T\mathbf{X}^T}^{\perp}\mathbf{X}=\mathbf{0}$, so the matrices $\mathbf{U}_0,\mathbf{V}_0$ give zero objective value.

Then, since $\tilde{\mathbf{U}},\tilde{\mathbf{V}}$ are optimal for (3.1), they must also result in objective value $0$ and all four summands in (3.1) vanish:

\mathbf{A}_{1}(\mathbf{X}-\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T})=(\mathbf{X}-\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T})\mathbf{A}_{2}=\mathbf{0}, \qquad (3.2)

and

P_{\mathbf{A}_{2}^{T}\mathbf{X}^{T}}^{\perp}\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T}=\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T}P_{\mathbf{A}_{1}\mathbf{X}}^{\perp}=\mathbf{0}. \qquad (3.3)

By (3.3), we can write

\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T}=\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T}P_{\mathbf{A}_{1}\mathbf{X}}=P_{\mathbf{A}_{2}^{T}\mathbf{X}^{T}}\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T}P_{\mathbf{A}_{1}\mathbf{X}}=\mathbf{X}\mathbf{A}_{2}\mathbf{M}\mathbf{A}_{1}\mathbf{X}, \qquad (3.4)

where the matrix

\mathbf{M}:=\big((\mathbf{X}\mathbf{A}_{2})^{T}\mathbf{X}\mathbf{A}_{2}\big)^{-1}(\mathbf{X}\mathbf{A}_{2})^{T}\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T}(\mathbf{A}_{1}\mathbf{X})^{T}\big(\mathbf{A}_{1}\mathbf{X}(\mathbf{A}_{1}\mathbf{X})^{T}\big)^{-1}.

Now,

\mathbf{A}_{1}\mathbf{X}\mathbf{A}_{2}\overset{(3.2)}{=}\mathbf{A}_{1}\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T}\mathbf{A}_{2}\overset{(3.4)}{=}\mathbf{A}_{1}(\mathbf{X}\mathbf{A}_{2}\mathbf{M}\mathbf{A}_{1}\mathbf{X})\mathbf{A}_{2}=(\mathbf{A}_{1}\mathbf{X}\mathbf{A}_{2})\mathbf{M}(\mathbf{A}_{1}\mathbf{X}\mathbf{A}_{2}),

and since $\mathbf{A}_1\mathbf{X}\mathbf{A}_2$ is invertible, we have $\mathbf{M}=(\mathbf{A}_1\mathbf{X}\mathbf{A}_2)^{-1}$. Thus, (3.4) implies

\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T}=\mathbf{X}\mathbf{A}_{2}(\mathbf{A}_{1}\mathbf{X}\mathbf{A}_{2})^{-1}\mathbf{A}_{1}\mathbf{X}=\mathbf{Y}=\mathbf{X}.

∎

3.2. One-sided compression: orthogonal sketching matrices

Note that the described method requires measurements on both sides (taking either $\mathbf{A}_1$ or $\mathbf{A}_2$ to be the identity matrix results in the necessity to work with the full matrix $\mathbf{X}$). Now, we will show that it can be enough to measure the matrix on one side only.

Theorem 3.4.

(Orthogonal $\mathbf{A}$) Let $\mathbf{X}\in\mathbb{R}_+^{m\times n}$ be any matrix and let $\mathbf{A}\in\mathbb{R}^{k\times m}$ be a matrix with orthonormal rows (i.e., $\mathbf{A}\mathbf{A}^T=\mathbf{I}$). Let $\mathbf{U}_0\in\mathbb{R}_+^{m\times r}$, $\mathbf{V}_0\in\mathbb{R}_+^{n\times r}$ give a solution to the original NMF problem (1.1) of rank $r$ and $\mathbf{X}_0=\mathbf{U}_0\mathbf{V}_0^T$. If $\tilde{\mathbf{U}},\tilde{\mathbf{V}}$ solve a compressed NMF problem with the same rank $r$, that is,

\tilde{\mathbf{U}},\tilde{\mathbf{V}}=\operatorname*{arg\,min}_{\mathbf{U},\mathbf{V}\geq 0}\left[\|\mathbf{A}(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\mathbf{U}\mathbf{V}^{T}\|^{2}_{F}\right], \qquad (3.5)

where $\lambda>0$, $P_{\mathbf{A}}^{\perp}:=\mathbf{I}-\mathbf{A}^T\mathbf{A}$, and $(\mathbf{U},\mathbf{V}\geq 0)$ means $(\mathbf{U}\in\mathbb{R}_+^{m\times r},\mathbf{V}\in\mathbb{R}_+^{n\times r})$. Then $\tilde{\mathbf{X}}:=\tilde{\mathbf{U}}\tilde{\mathbf{V}}^T$ satisfies

\frac{\|\mathbf{X}-\tilde{\mathbf{X}}\|^{2}_{F}}{\|\mathbf{X}\|^{2}_{F}}\leq c_{\lambda}\left[\frac{\|\mathbf{X}-\mathbf{X}_{0}\|^{2}_{F}}{\|\mathbf{X}\|^{2}_{F}}+\frac{\|P^{\perp}_{\mathbf{A}}\mathbf{X}\|^{2}_{F}}{\|\mathbf{X}\|^{2}_{F}}\right], \qquad (3.6)

where $c_{\lambda}=\max(2/\lambda,6,2\lambda+2)$.

We note that this and further results do not require the data matrix to have an exact nonnegative rank $r$ as in Theorem 3.1 (although if it does, the first term in the bound on the loss vanishes). Before we start the proof, let us recall a simple corollary of the Pythagorean theorem and the triangle inequality, to be used several times below: for any matrices $\mathbf{X},\mathbf{Y}$ and a projection operator $P_{\mathbf{A}}$,

\|\mathbf{X}-\mathbf{Y}\|^{2}_{F}\leq\|P_{\mathbf{A}}(\mathbf{X}-\mathbf{Y})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\mathbf{Y}\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}. \qquad (3.7)

Indeed, this follows from

\|\mathbf{X}-\mathbf{Y}\|^{2}_{F} = \|P_{\mathbf{A}}(\mathbf{X}-\mathbf{Y})\|^{2}_{F}+\|P_{\mathbf{A}}^{\perp}(\mathbf{X}-\mathbf{Y})\|^{2}_{F}
\leq \|P_{\mathbf{A}}(\mathbf{X}-\mathbf{Y})\|^{2}_{F}+\big(\|P_{\mathbf{A}}^{\perp}\mathbf{Y}\|_{F}+\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|_{F}\big)^{2}
\leq \|P_{\mathbf{A}}(\mathbf{X}-\mathbf{Y})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\mathbf{Y}\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}.
Proof of Theorem 3.4.

First, note that

\|\mathbf{A}(\mathbf{X}-\tilde{\mathbf{X}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\tilde{\mathbf{X}}\|^{2}_{F}\leq\|\mathbf{A}(\mathbf{X}-\mathbf{X}_{0})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\mathbf{X}_{0}\|^{2}_{F}, \qquad (3.8)

since $\tilde{\mathbf{U}},\tilde{\mathbf{V}}$ minimize the objective of (3.5) over all nonnegative matrices of the appropriate sizes. Now, since $\mathbf{A}\mathbf{A}^T=\mathbf{I}$, observe that for any matrix $\mathbf{M}\in\mathbb{R}^{m\times n}$,

\|\mathbf{A}\mathbf{M}\|_{F}=\|\mathbf{A}^{T}\mathbf{A}\mathbf{M}\|_{F}=\|P_{\mathbf{A}}\mathbf{M}\|_{F}.

So, using inequality (3.7) for the matrices $\mathbf{X}$ and $\tilde{\mathbf{X}}$, we can estimate

\|\mathbf{X}-\tilde{\mathbf{X}}\|^{2}_{F} \leq \|\mathbf{A}(\mathbf{X}-\tilde{\mathbf{X}})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\tilde{\mathbf{X}}\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}
\leq c_{1}\big(\|\mathbf{A}(\mathbf{X}-\tilde{\mathbf{X}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\tilde{\mathbf{X}}\|^{2}_{F}\big)+2\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}
\overset{(3.8)}{\leq} c_{1}\big(\|\mathbf{A}(\mathbf{X}-\mathbf{X}_{0})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\mathbf{X}_{0}\|^{2}_{F}\big)+2\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}

for $c_1=\max(2/\lambda,1)$. Using inequality (3.7) for the matrices $\mathbf{X}$ and $\mathbf{X}_0$, we can estimate the term in parentheses as

\|\mathbf{A}(\mathbf{X}-\mathbf{X}_{0})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\mathbf{X}_{0}\|^{2}_{F} \leq \|\mathbf{A}(\mathbf{X}-\mathbf{X}_{0})\|^{2}_{F}+2\lambda\|P_{\mathbf{A}}^{\perp}(\mathbf{X}-\mathbf{X}_{0})\|^{2}_{F}+2\lambda\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}
\leq c_{2}\|\mathbf{X}-\mathbf{X}_{0}\|^{2}_{F}+2\lambda\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}

for $c_2=\max(2\lambda,1)$. Combining the estimates and regrouping, we get

\|\mathbf{X}-\tilde{\mathbf{X}}\|^{2}_{F}\leq c_{1}c_{2}\|\mathbf{X}-\mathbf{X}_{0}\|^{2}_{F}+(2\lambda c_{1}+2)\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}.

With $c_{\lambda}=\max(c_{1}c_{2},2\lambda c_{1}+2)=\max(2/\lambda,6,2\lambda+2)$, this concludes the proof of Theorem 3.4. ∎

Theorem 3.4 shows that the solution to the compressed NMF problem (3.5) will work for the original uncompressed problem (1.1) as long as the terms $\|\mathbf{X}-\mathbf{X}_0\|^2_F$ and $\|P^{\perp}_{\mathbf{A}}\mathbf{X}\|^2_F$ are small. Luckily, with just one more pass over the original data one can get such sketching matrices using the standard approaches of randomized linear algebra, such as those in [14].

Theorem 3.5 ("Randomized rangefinder algorithm loss", [14]).

Let $r,k$ be integers such that $r\geq 2$ and $r+2\leq k\leq\min\{m,n\}$. Let $\mathbf{X}$ be an $m\times n$ matrix and $\mathbf{S}$ be an $n\times k$ standard Gaussian matrix. Then

\mathbb{E}\|\mathbf{X}-P_{\mathbf{S}^{T}\mathbf{X}^{T}}\mathbf{X}\|_{F}\leq\left(1+\frac{r}{k-r-1}\right)^{\frac{1}{2}}\left(\sum_{j>r}\sigma_{j}^{2}(\mathbf{X})\right)^{\frac{1}{2}},

where $\sigma_j(\mathbf{X})$ is the $j$-th largest singular value of $\mathbf{X}$.

Corollary 3.6 (Data-adapted one-sided sketches).

Suppose the matrix $\mathbf{X}$ has an approximate nonnegative factorization, that is, there exist $\mathbf{U}_0\in\mathbb{R}_+^{m\times r}$, $\mathbf{V}_0\in\mathbb{R}_+^{n\times r}$ so that $\mathbf{X}_0=\mathbf{U}_0\mathbf{V}_0^T$ satisfies $\|\mathbf{X}-\mathbf{X}_0\|_F\leq\varepsilon\|\mathbf{X}\|_F$.

Take $k$ such that $2r+1\leq k\leq\min\{m,n\}$. Form a sketch $\mathbf{X}\mathbf{S}$, where $\mathbf{S}$ is an $n\times k$ standard Gaussian matrix; find $\mathbf{Q}$, an orthonormal basis of the column space of $\mathbf{X}\mathbf{S}$; and take the sketching matrix $\mathbf{A}:=\mathbf{Q}^T$. If $\tilde{\mathbf{U}}\in\mathbb{R}_+^{m\times r},\tilde{\mathbf{V}}\in\mathbb{R}_+^{n\times r}$ solve the compressed NMF problem (3.5) with this $\mathbf{A}$ and some $\lambda>0$, then

\frac{\mathbb{E}\|\mathbf{X}-\tilde{\mathbf{X}}\|_{F}}{\|\mathbf{X}\|_{F}}\leq\sqrt{2}\,c_{\lambda}\,\varepsilon, \qquad (3.9)

where $c_{\lambda}$ is the constant from (3.6).

Proof.

By Theorem 3.5 and the approximate low-rankness of $\mathbf{X}$, we have

\mathbb{E}\|\mathbf{X}-P_{\mathbf{A}}\mathbf{X}\|_{F}\leq\sqrt{1+\frac{r}{k-r-1}}\sqrt{\sum_{j>r}\sigma_{j}^{2}(\mathbf{X})}\leq\sqrt{2}\|\mathbf{X}-\mathbf{X}_{0}\|_{F}\leq\sqrt{2}\varepsilon\|\mathbf{X}\|_{F},

using that $k\geq 2r+1$ in the second inequality. Combining this with Theorem 3.4, we have

\mathbb{E}\|\mathbf{X}-\tilde{\mathbf{X}}\|_{F}\leq c\big(\|\mathbf{X}-\mathbf{X}_{0}\|_{F}+\mathbb{E}\|\mathbf{X}-P_{\mathbf{A}}\mathbf{X}\|_{F}\big)\leq c(1+\sqrt{2})\varepsilon\|\mathbf{X}\|_{F},

where $c=\max(2/\lambda,6,2\lambda+2)$. ∎

Remark 3.7.

A high probability deviation bound for the loss $\|\mathbf{X}-\mathbf{Q}\mathbf{Q}^T\mathbf{X}\|_F$ is also known [14, Theorem 10.7]. It implies a high probability estimate for (3.9) in a straightforward way. Instead of Gaussian initial sketching, one can employ the subsampled random Fourier transform [14] or other cost-efficient matrices [31, 10].

It is easy to see that the oversampling condition $k>2r$ can be relaxed to any $k>r+1$ by suitably increasing the constant factor $\sqrt{2}$. Notwithstanding this factor, we see that if $\mathbf{X}$ has an exact NMF decomposition of rank $r$ and $k>r+1$, then the error of the optimal solution to the compressed problem must also be zero, comparable with the result of Theorem 3.1.
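
The following short sketch (ours) illustrates the data-adapted pipeline of Corollary 3.6: one pass over $\mathbf{X}$ builds the rangefinder sketch $\mathbf{X}\mathbf{S}$, a QR factorization gives $\mathbf{A}=\mathbf{Q}^T$, and a second pass forms $\mathbf{A}\mathbf{X}$, after which the compressed problem (3.5) can be attacked without touching $\mathbf{X}$ again; the sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)
m, n, r = 2000, 1500, 10
k = 2 * r + 1                                   # oversampling as in Corollary 3.6
U0, V0 = rng.random((m, r)), rng.random((n, r))
X = U0 @ V0.T                                   # exact nonnegative rank r (for the demo)

S = rng.standard_normal((n, k))                 # Gaussian test matrix
Q, _ = np.linalg.qr(X @ S)                      # orthonormal basis of range(X S), pass 1
A = Q.T                                         # k x m sketching matrix with A A^T = I
AX = A @ X                                      # pass 2; only A and AX are kept

# relative rangefinder loss ||P_A^perp X||_F / ||X||_F, essentially zero here
print(np.linalg.norm(X - Q @ AX) / np.linalg.norm(X))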

3.3. One-sided compression: nonorthogonal sketching matrices

The orthogonality assumption on $\mathbf{A}$ can be relaxed to having approximately orthogonal rows, such as those of appropriately normalized random matrices. This case is more than a straightforward extension of Theorem 3.4 because of the following computational challenge: if the sketching matrix $\mathbf{A}$ does not have orthonormal rows, the orthogonal projection operator $P^{\perp}_{\mathbf{A}}$ no longer has the nicely decomposable form $\mathbf{I}-\mathbf{A}^T\mathbf{A}$. Theorem 3.10 below shows how to avoid having projection matrices in the regularizer term.

Definition 3.8 (Approximately orthogonal matrices).

For a positive constant $\varepsilon<1$, we call a matrix $\mathbf{A}\in\mathbb{R}^{k\times m}$ $\varepsilon$-approximately orthogonal if its singular values lie in the interval $[1-\varepsilon,1+\varepsilon]$.
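
A quick numerical check (ours) of this definition for a generic random matrix, anticipating Remark 3.11: a $k\times m$ matrix with i.i.d. $N(0,1/m)$ entries has all singular values close to $1$ when $k\ll m$.

import numpy as np

rng = np.random.default_rng(0)
k, m = 100, 10000
A = rng.standard_normal((k, m)) / np.sqrt(m)   # i.i.d. N(0, 1/m) entries
s = np.linalg.svd(A, compute_uv=False)
print(s.min(), s.max())   # both close to 1: A is eps-approximately orthogonal for small eps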

The convenience of the definition above stems from the following simple observation.

Lemma 3.9.

If the matrix $\mathbf{A}\in\mathbb{R}^{k\times m}$ is $\varepsilon$-approximately orthogonal, then for any matrix $\mathbf{M}\in\mathbb{R}^{m\times n}$, we have

(1-\varepsilon)\|P_{\mathbf{A}}\mathbf{M}\|_{F}\leq\|\mathbf{A}\mathbf{M}\|_{F}\leq(1+\varepsilon)\|P_{\mathbf{A}}\mathbf{M}\|_{F}. \qquad (3.10)
Proof.

For a positive semidefinite matrix $\mathbf{Z}$, let $\sqrt{\mathbf{Z}}$ denote the unique positive semidefinite matrix such that $(\sqrt{\mathbf{Z}})^2=\mathbf{Z}$. Then, if $\mathbf{A}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T$ is a compact SVD decomposition of $\mathbf{A}$, we have $\sqrt{\mathbf{A}^T\mathbf{A}}=\mathbf{V}\boldsymbol{\Sigma}\mathbf{V}^T$ and $P_{\mathbf{A}}=\mathbf{V}\mathbf{V}^T$. This implies $\|P_{\mathbf{A}}\mathbf{M}\|_F=\|\mathbf{V}^T\mathbf{M}\|_F$, $\|\mathbf{A}\mathbf{M}\|_F=\|\sqrt{\mathbf{A}^T\mathbf{A}}\,\mathbf{M}\|_F=\|\boldsymbol{\Sigma}\mathbf{V}^T\mathbf{M}\|_F$, and

(1-\varepsilon)\|P_{\mathbf{A}}\mathbf{M}\|_{F} \leq (1-\|\mathbf{I}-\boldsymbol{\Sigma}\|)\|\mathbf{V}^{T}\mathbf{M}\|_{F}\leq\|\mathbf{V}^{T}\mathbf{M}\|_{F}-\|(\mathbf{I}-\boldsymbol{\Sigma})\mathbf{V}^{T}\mathbf{M}\|_{F}
\leq\|\boldsymbol{\Sigma}\mathbf{V}^{T}\mathbf{M}\|_{F}\leq\|\boldsymbol{\Sigma}\|\|\mathbf{V}^{T}\mathbf{M}\|_{F}\leq(1+\varepsilon)\|P_{\mathbf{A}}\mathbf{M}\|_{F}.

∎

The next theorem justifies solving a compressed NMF problem with a simplified regularization term:

Theorem 3.10.

(Approximately orthogonal $\mathbf{A}$) Let $\mathbf{X}\in\mathbb{R}_+^{m\times n}$ be any matrix and let $\mathbf{A}\in\mathbb{R}^{k\times m}$ be $\varepsilon$-approximately orthogonal, with $\varepsilon\leq 0.5$. Let $\mathbf{U}_0\in\mathbb{R}_+^{m\times r}$, $\mathbf{V}_0\in\mathbb{R}_+^{n\times r}$ give a solution to the original NMF problem (1.1) of rank $r$ and $\mathbf{X}_0=\mathbf{U}_0\mathbf{V}_0^T$. Suppose $\tilde{\mathbf{U}},\tilde{\mathbf{V}}$ solve the following compressed NMF problem with the same rank $r$:

\tilde{\mathbf{U}},\tilde{\mathbf{V}}=\operatorname*{arg\,min}_{\mathbf{U},\mathbf{V}\geq 0}\left[\|\mathbf{A}(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\|^{2}_{F}+\lambda\|\mathbf{U}\mathbf{V}^{T}\|^{2}_{F}\right]. \qquad (3.11)

Then $\tilde{\mathbf{X}}:=\tilde{\mathbf{U}}\tilde{\mathbf{V}}^T$ satisfies

\frac{\|\mathbf{X}-(1+\lambda)\tilde{\mathbf{X}}\|^{2}_{F}}{\|\mathbf{X}\|^{2}_{F}}\leq c\left[\frac{\|\mathbf{X}-\mathbf{X}_{0}\|^{2}_{F}}{\|\mathbf{X}\|^{2}_{F}}+\frac{\|P^{\perp}_{\mathbf{A}}\mathbf{X}\|^{2}_{F}}{\|\mathbf{X}\|^{2}_{F}}+\varepsilon^{2}\right], \qquad (3.12)

where $c=\max(4+\frac{5}{4\lambda},6,48\lambda)$.

Proof.

By optimality, for any matrix $\mathbf{Y}=\mathbf{U}\mathbf{V}^T$ with nonnegative $\mathbf{U}$ and $\mathbf{V}$ of the appropriate sizes (to be taken as a scaled version of $\mathbf{X}_0$, as specified below), we have

\|\mathbf{A}(\mathbf{X}-\tilde{\mathbf{X}})\|^{2}_{F}+\lambda\|\tilde{\mathbf{X}}\|^{2}_{F}\leq\|\mathbf{A}(\mathbf{X}-\mathbf{Y})\|^{2}_{F}+\lambda\|\mathbf{Y}\|^{2}_{F}.

Approximate orthogonality in the form of (3.10), applied to the matrices $\mathbf{M}=\mathbf{X}-\tilde{\mathbf{X}}$ and $\mathbf{M}=\mathbf{X}-\mathbf{Y}$, allows us to orthogonalize this inequality:

(1-\varepsilon)^{2}\|\mathbf{Q}(\mathbf{X}-\tilde{\mathbf{X}})\|^{2}_{F}+\lambda\|\tilde{\mathbf{X}}\|^{2}_{F}\leq(1+\varepsilon)^{2}\|\mathbf{Q}(\mathbf{X}-\mathbf{Y})\|^{2}_{F}+\lambda\|\mathbf{Y}\|^{2}_{F},

where $\mathbf{Q}$ denotes the $k\times m$ matrix with orthonormal rows such that $P_{\mathbf{A}}=\mathbf{Q}^T\mathbf{Q}$. Indeed, this implies $\|P_{\mathbf{A}}\mathbf{M}\|_F=\|\mathbf{Q}^T\mathbf{Q}\mathbf{M}\|_F=\|\mathbf{Q}\mathbf{M}\|_F$ for any $\mathbf{M}\in\mathbb{R}^{m\times n}$. So,

\|\mathbf{Q}(\mathbf{X}-\tilde{\mathbf{X}})\|^{2}_{F}+\delta\|\tilde{\mathbf{X}}\|^{2}_{F}\leq\|\mathbf{Q}(\mathbf{X}-\mathbf{Y})\|^{2}_{F}+\delta\|\mathbf{Y}\|^{2}_{F}+3\varepsilon^{2}\|\mathbf{Q}(\mathbf{X}-\mathbf{Y})\|^{2}_{F}, \qquad (3.13)

with $\delta=\lambda/(1-\varepsilon)^2$.

We will further rearrange the optimality condition using the following identity, based on completion of the square on both sides of (3.13): for any matrices $\mathbf{M}_1$, $\mathbf{M}_2$ of appropriate size,

\|\mathbf{M}_{1}-\mathbf{M}_{2}\|_{F}^{2}+\delta\|\mathbf{M}_{2}\|_{F}^{2} = \|\mathbf{M}_{1}\|_{F}^{2}+(1+\delta)\|\mathbf{M}_{2}\|_{F}^{2}-2\langle\mathbf{M}_{1},\mathbf{M}_{2}\rangle
= \frac{\delta}{1+\delta}\|\mathbf{M}_{1}\|_{F}^{2}+\frac{1}{1+\delta}\|\mathbf{M}_{1}-(1+\delta)\mathbf{M}_{2}\|^{2}_{F}.

Using this identity for $\mathbf{M}_1=\mathbf{Q}\mathbf{X}$ and $\mathbf{M}_2=\mathbf{Q}\tilde{\mathbf{X}}$ on the left and $\mathbf{M}_2=\mathbf{Q}\mathbf{Y}$ on the right of (3.13), we obtain

\frac{\delta}{1+\delta}\|\mathbf{Q}\mathbf{X}\|_{F}^{2}+\frac{1}{1+\delta}\|\mathbf{Q}(\mathbf{X}-(1+\delta)\tilde{\mathbf{X}})\|^{2}_{F}+\delta\|P_{\mathbf{A}}^{\perp}\tilde{\mathbf{X}}\|^{2}_{F}
\leq\frac{\delta}{1+\delta}\|\mathbf{Q}\mathbf{X}\|_{F}^{2}+\frac{1}{1+\delta}\|\mathbf{Q}(\mathbf{X}-(1+\delta)\mathbf{Y})\|^{2}_{F}+\delta\|P_{\mathbf{A}}^{\perp}\mathbf{Y}\|^{2}_{F}+3\varepsilon^{2}\|\mathbf{Q}(\mathbf{X}-\mathbf{Y})\|^{2}_{F}.

Cancelling common terms and letting $\mathbf{Y}:=\mathbf{X}_0/(1+\delta)$, we have

\frac{1}{1+\delta}\|\mathbf{Q}(\mathbf{X}-(1+\delta)\tilde{\mathbf{X}})\|^{2}_{F}+\frac{\delta}{(1+\delta)^{2}}\|P_{\mathbf{A}}^{\perp}(1+\delta)\tilde{\mathbf{X}}\|^{2}_{F}
\leq\frac{1}{1+\delta}\|\mathbf{Q}(\mathbf{X}-\mathbf{X}_{0})\|^{2}_{F}+\delta\left\|P_{\mathbf{A}}^{\perp}\frac{\mathbf{X}_{0}}{1+\delta}\right\|^{2}_{F}+3\varepsilon^{2}\left\|\mathbf{Q}\left(\mathbf{X}-\frac{\mathbf{X}_{0}}{1+\delta}\right)\right\|^{2}_{F}=:\mathbf{W}.

To estimate the loss on the uncompressed problem, we use (3.7) with the matrices $\mathbf{X}$, $(1+\delta)\tilde{\mathbf{X}}$ and $\mathbf{Q}$ to get

\|\mathbf{X}-(1+\delta)\tilde{\mathbf{X}}\|^{2}_{F}\leq\|\mathbf{Q}(\mathbf{X}-(1+\delta)\tilde{\mathbf{X}})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}(1+\delta)\tilde{\mathbf{X}}\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}
\leq\frac{2(1+\delta)^{2}}{\delta}\mathbf{W}+2\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}
\leq\frac{2(1+\delta)}{\delta}\|\mathbf{Q}(\mathbf{X}-\mathbf{X}_{0})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\mathbf{X}_{0}\|^{2}_{F}+\frac{6\varepsilon^{2}}{\delta}\|\mathbf{Q}((1+\delta)\mathbf{X}-\mathbf{X}_{0})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}
\leq\left(\frac{2+12\varepsilon^{2}}{\delta}+2\right)\|\mathbf{Q}(\mathbf{X}-\mathbf{X}_{0})\|^{2}_{F}+4\|P_{\mathbf{A}}^{\perp}(\mathbf{X}-\mathbf{X}_{0})\|^{2}_{F}+6\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}+12\varepsilon^{2}\delta\|\mathbf{Q}\mathbf{X}\|_{F}^{2}
\leq\left(4+\frac{5}{4\lambda}\right)\|\mathbf{X}-\mathbf{X}_{0}\|^{2}_{F}+6\|P_{\mathbf{A}}^{\perp}\mathbf{X}\|^{2}_{F}+\frac{12\varepsilon^{2}\lambda}{(1-\varepsilon)^{2}}\|\mathbf{X}\|_{F}^{2},

using that $\delta=\frac{\lambda}{(1-\varepsilon)^{2}}$ and $\varepsilon\leq\frac{1}{2}$. ∎

Remark 3.11.

We conclude the section with a discussion of the result of Theorem 3.10. We note that:

  • Theorem 3.10 shows it is possible to regularize the compressed NMF problem without the projection operator and to find $(1+\lambda)$-rescaled factors. Note that the rescaling does not affect the learned nonnegative low-rank structure.

  • The property (3.10), $\|P_{\mathbf{A}}\mathbf{M}\|^2_F\sim\|\mathbf{A}\mathbf{M}\|^2_F$, is significantly more relaxed than the standard geometry preservation properties of the form $\|\mathbf{M}\|^2_F\sim\|\mathbf{A}\mathbf{M}\|^2_F$, such as the Johnson-Lindenstrauss, oblivious subspace embedding, or restricted isometry properties. The latter won't be satisfied for, e.g., a random Gaussian matrix $\mathbf{A}$ and arbitrary nonnegative rank-$r$ matrices $\mathbf{M}$ (as needed within Theorem 3.10), unless there is no compression and $k\geq m$.

  • The approximate orthogonality property (3.10) is not hard to satisfy with generic random matrices. For example, an i.i.d. Gaussian matrix $\mathbf{A}\in\mathbb{R}^{k\times m}$ with each entry having mean $0$ and variance $\frac{1}{m}$ is $\varepsilon$-approximately orthogonal with probability $1-2\exp(-\varepsilon^2 m/8)$ as soon as $k\leq m\varepsilon^2/4$ (by [41, Corollary 5.35]).

  • While it is easy to guarantee approximate orthogonality with generic matrices $\mathbf{A}$ (not learned from $\mathbf{X}$), the term $P_{\mathbf{A}}^{\perp}\mathbf{X}$ is still part of the error bound. So, data-oblivious one-sided compression in general is not expected to result in exact recovery even if the data matrix $\mathbf{X}$ admits an exact nonnegative factorization.

3.4. Nonnegativity in compression

In the next section, we discuss approaches to solve compressed nonnegative matrix factorization problems. In particular, we consider variations of the multiplicative updates algorithm to iteratively minimize the objective functions that we have formulated in this section. The multiplicative updates algorithm is valid due to the fact that the matrices involved in the iterative process are nonnegative. This convenient property is destroyed by sketching unless $\mathbf{A}^T\mathbf{A}$ is an element-wise nonnegative matrix. While this is not expected to hold either for approximately orthonormal random sketches or for the data-adapted sketching matrices coming from the randomized rangefinder algorithm, to overcome this issue it suffices to add some extra penalty terms taking the form

\sigma\|\mathbf{1}_{m}^{T}(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\|^{2}\quad\text{and/or}\quad\sigma\|(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\mathbf{1}_{n}\|^{2}, \qquad (3.14)

where $\mathbf{1}_m$ is the vector of all ones in $\mathbb{R}^m$ (and $\mathbf{1}_n$ the analogous vector in $\mathbb{R}^n$).
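
A small illustration (ours) of why such a shift helps: the Gram matrix $\mathbf{A}^T\mathbf{A}$ of a random sketch typically has negative entries, breaking the nonnegativity used by multiplicative updates, while the shifted matrix $\mathbf{A}^T\mathbf{A}+\sigma\mathbf{1}_m\mathbf{1}_m^T$ is entrywise nonnegative once $\sigma$ exceeds the magnitude of its most negative entry.

import numpy as np

rng = np.random.default_rng(0)
k, m = 50, 300
A = rng.standard_normal((k, m)) / np.sqrt(m)
G = A.T @ A                                      # m x m Gram matrix of the sketch
print(G.min() < 0)                               # True: negative entries break plain MU

sigma = -G.min()                                 # any sigma >= |most negative entry| works
ones = np.ones((m, 1))
G_shifted = G + sigma * (ones @ ones.T)          # A^T A + sigma * 1 1^T
print(G_shifted.min() >= 0)                      # True: nonnegativity is restored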

Corollary 3.12.

Suppose $\mathbf{X}$ has an exact nonnegative factorization $\mathbf{X}=\mathbf{U}_0\mathbf{V}_0^T$, where $\mathbf{U}_0\in\mathbb{R}_+^{m\times k}$, $\mathbf{V}_0\in\mathbb{R}_+^{n\times k}$ and they are both full-rank, $k\leq\min\{n,m\}$. Let $\mathbf{A}_1$ and $\mathbf{A}_2$ be generic random matrices of sizes $k\times m$ and $n\times k$, respectively. If for some $\lambda_1,\lambda_2>0$ and $\sigma_1,\sigma_2\geq 0$

\tilde{\mathbf{U}},\tilde{\mathbf{V}}=\operatorname*{arg\,min}_{\mathbf{U},\mathbf{V}\geq 0}\, L(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})+\sigma_{1}\|\mathbf{1}_{m}^{T}(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\|^{2}+\sigma_{2}\|(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\mathbf{1}_{n}\|^{2}, \qquad (3.15)

where $L(\mathbf{X}-\mathbf{U}\mathbf{V}^{T}):=\|\mathbf{A}_{1}(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\|_{F}^{2}+\|(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\mathbf{A}_{2}\|_{F}^{2}+\lambda_{1}\|P_{\mathbf{X}\mathbf{A}_{2}}^{\perp}\mathbf{U}\mathbf{V}^{T}\|_{F}^{2}+\lambda_{2}\|\mathbf{U}\mathbf{V}^{T}P_{\mathbf{A}_{1}\mathbf{X}}^{\perp}\|_{F}^{2}$, then $\mathbf{X}=\tilde{\mathbf{U}}\tilde{\mathbf{V}}^T$.

Proof.

Note that

\min_{\mathbf{U},\mathbf{V}\geq 0}\, L(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})+\sigma_{1}\|\mathbf{1}_{m}^{T}(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\|^{2}
\leq\min_{\substack{\mathbf{U},\mathbf{V}\geq 0\\ \mathbf{1}_{m}^{T}\mathbf{X}=\mathbf{1}_{m}^{T}\mathbf{U}\mathbf{V}^{T}}} L(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})+\sigma_{1}\|\mathbf{1}_{m}^{T}(\mathbf{X}-\mathbf{U}\mathbf{V}^{T})\|^{2}
=\min_{\substack{\mathbf{U},\mathbf{V}\geq 0\\ \mathbf{1}_{m}^{T}\mathbf{X}=\mathbf{1}_{m}^{T}\mathbf{U}\mathbf{V}^{T}}} L(\mathbf{X}-\mathbf{U}\mathbf{V}^{T}),

and similarly for adding the term ฯƒ2โ€‹โ€–(๐—โˆ’๐”๐•T)โ€‹๐Ÿnโ€–F2\sigma_{2}\|({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\mathbf{1}_{n}\|^{2}_{F}. Then the statement follows directly from Theorem 3.1. โˆŽ

When we do not assume that {\mathbf{X}} has nonnegative rank exactly k, adding a regularizer of the form (3.14) is still possible under an additional condition, which essentially requires that the column sums of {\mathbf{X}} and {\mathbf{U}}{\mathbf{V}}^{T} approximately match when {\mathbf{U}},{\mathbf{V}} are optimal for (1.1).

Corollary 3.13.

Let ๐—โˆˆโ„+mร—n{\mathbf{X}}\in{\mathbb{R}}_{+}^{m\times n} be a nonnegative matrix and ๐€โˆˆโ„kร—m{\mathbf{A}}\in{\mathbb{R}}^{k\times m} is a matrix with orthogonal rows. Let ๐”0โˆˆโ„+mร—r{\mathbf{U}}_{0}\in{\mathbb{R}}_{+}^{m\times r}, ๐•0โˆˆโ„+rร—n{\mathbf{V}}_{0}\in{\mathbb{R}}_{+}^{r\times n} give a solution to the original NMF problem (1.1) of rank rr and ๐—0=๐”0โ€‹๐•0T{\mathbf{X}}_{0}={\mathbf{U}}_{0}{\mathbf{V}}_{0}^{T}. Additionally assume that โ€–๐ŸTโ€‹(๐—0โˆ’๐—)/๐ŸTโ€‹๐—โ€–โ‰คฮต<1\left\|\mathbf{1}^{T}({\mathbf{X}}_{0}-{\mathbf{X}})/\mathbf{1}^{T}{\mathbf{X}}\right\|\leq\varepsilon<1, where // denotes element-wise division and ๐Ÿโˆˆโ„m\mathbf{1}\in{\mathbb{R}}^{m} is the vector of all ones. If ฮป,ฯƒ>0,\lambda,\sigma>0,

๐”~,๐•~=argโ€‹min๐”,๐•โ‰ฅ0โกโ€–๐€โ€‹(๐—โˆ’๐”๐•T)โ€–F2+ฮปโ€‹โ€–P๐€โŸ‚โ€‹๐”๐•Tโ€–F2+ฯƒโ€‹โ€–๐ŸTโ€‹(๐—โˆ’๐”๐•T)โ€–2,\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}}=\operatorname*{arg\,min}_{{\mathbf{U}},{\mathbf{V}}\geq 0}\left\|{\mathbf{A}}\left({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}\right)\right\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}+\sigma\|\mathbf{1}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}, (3.16)

then ๐—~:=๐”~โ€‹๐•~T\tilde{{\mathbf{X}}}:=\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T} satisfies

โ€–๐—โˆ’๐—~โ€–F2โ€–๐—โ€–F2โ‰คcฮปโ€‹[โ€–๐—โˆ’๐—0โ€–F2โ€–๐—โ€–F2+โ€–๐—โˆ’P๐€โ€‹๐—โ€–F2โ€–๐—โ€–F2+ฮต2].\frac{\|{\mathbf{X}}-\tilde{{\mathbf{X}}}\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}\leq c_{\lambda}\left[\frac{\|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}+\frac{\|{\mathbf{X}}-P_{\mathbf{A}}{\mathbf{X}}\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}+\varepsilon^{2}\right].

For example, if \lambda,\varepsilon\in(0,1/2), one can take c_{\lambda}=\max\{6,8/\lambda\}.

Proof.

Let ๐ƒโˆˆโ„nร—n{\mathbf{D}}\in{\mathbb{R}}^{n\times n} be the diagonal matrix with nonzero entries given by (๐ŸTโ€‹๐—/๐ŸTโ€‹๐—0)(\mathbf{1}^{T}{\mathbf{X}}/\mathbf{1}^{T}{\mathbf{X}}_{0}), and so ๐ŸTโ€‹๐—0โ€‹๐ƒ=๐ŸTโ€‹๐—\mathbf{1}^{T}{\mathbf{X}}_{0}{\mathbf{D}}=\mathbf{1}^{T}{\mathbf{X}}. Note that ๐—0โ€‹๐ƒ{\mathbf{X}}_{0}{\mathbf{D}} is some (not necessarily optimal) solution to (3.16) and so we have

โ€–P๐€โ€‹(๐—โˆ’๐—~)โ€–F2+ฮปโ€‹โ€–P๐€โŸ‚โ€‹๐—~โ€–F2\displaystyle\|P_{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F} โ‰คโ€–P๐€โ€‹(๐—โˆ’๐—~)โ€–F2+ฮปโ€‹โ€–P๐€โŸ‚โ€‹๐—~โ€–F2+ฯƒโ€‹โ€–๐ŸTโ€‹(๐—โˆ’๐—~)โ€–2\displaystyle\leq\|P_{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F}+\sigma\|\mathbf{1}^{T}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}
โ‰คโ€–P๐€โ€‹(๐—โˆ’๐—0โ€‹๐ƒ)โ€–F2+ฮปโ€‹โ€–P๐€โŸ‚โ€‹๐—0โ€‹๐ƒโ€–F2.\displaystyle\leq\|P_{\mathbf{A}}({\mathbf{X}}-{\mathbf{X}}_{0}{\mathbf{D}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}_{0}{\mathbf{D}}\|^{2}_{F}. (3.17)

Then, for c1=maxโก(2/ฮป,1),c_{1}=\max(2/\lambda,1),

โ€–๐—โˆ’๐—~โ€–F2\displaystyle\|{\mathbf{X}}-\tilde{{\mathbf{X}}}\|^{2}_{F} โ‰ค(โ€‹3.7โ€‹)โ€‹โ€–P๐€โ€‹(๐—โˆ’๐—~)โ€–F2+2โ€‹โ€–P๐€โŸ‚โ€‹๐—~โ€–F2+2โ€‹โ€–P๐€โŸ‚โ€‹๐—โ€–F2\displaystyle\overset{\eqref{eq:triangle}}{\leq}\|P_{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
โ‰คc1โ€‹(โ€–P๐€โ€‹(๐—โˆ’๐—~)โ€–F2+ฮปโ€‹โ€–P๐€โŸ‚โ€‹๐—~โ€–F2)+2โ€‹โ€–P๐€โŸ‚โ€‹๐—โ€–F2\displaystyle\leq c_{1}\left(\|P_{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F}\right)+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
โ‰ค(โ€‹3.4โ€‹)โ€‹c1โ€‹(โ€–P๐€โ€‹(๐—โˆ’๐—0โ€‹๐ƒ)โ€–F2+ฮปโ€‹โ€–P๐€โŸ‚โ€‹๐—0โ€‹๐ƒโ€–F2)+2โ€‹โ€–P๐€โŸ‚โ€‹๐—โ€–F2\displaystyle\overset{\eqref{eq:shifted-optimality}}{\leq}c_{1}\left(\|P_{\mathbf{A}}({\mathbf{X}}-{\mathbf{X}}_{0}{\mathbf{D}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}_{0}{\mathbf{D}}\|^{2}_{F}\right)+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
โ‰คc1โ€‹โ€–P๐€โ€‹(๐—โˆ’๐—0โ€‹๐ƒ)โ€–F2+2โ€‹ฮปโ€‹c1โ€‹โ€–P๐€โŸ‚โ€‹(๐—โˆ’๐—0โ€‹๐ƒ)โ€–F2+(2โ€‹ฮปโ€‹c1+2)โ€‹โ€–P๐€โŸ‚โ€‹๐—โ€–F2\displaystyle\leq c_{1}\|P_{\mathbf{A}}({\mathbf{X}}-{\mathbf{X}}_{0}{\mathbf{D}})\|^{2}_{F}+2\lambda c_{1}\|P_{\mathbf{A}}^{\perp}({\mathbf{X}}-{\mathbf{X}}_{0}{\mathbf{D}})\|^{2}_{F}+(2\lambda c_{1}+2)\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
โ‰คc2=maxโก(2โ€‹ฮป,1)โ€‹c1โ€‹c2โ€‹โ€–๐—โˆ’๐—0โ€‹๐ƒโ€–F2+(2โ€‹ฮปโ€‹c1+2)โ€‹โ€–P๐€โŸ‚โ€‹๐—โ€–F2\displaystyle\overset{c_{2}=\max(2\lambda,1)}{\leq}c_{1}c_{2}\|{\mathbf{X}}-{\mathbf{X}}_{0}{\mathbf{D}}\|^{2}_{F}+(2\lambda c_{1}+2)\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
โ‰ค2โ€‹c1โ€‹c2โ€‹โ€–๐—โˆ’๐—0โ€–F2+2โ€‹c1โ€‹c2โ€‹โ€–๐—0โ€–F2โ€‹โ€–๐ˆโˆ’๐ƒโ€–2+(2โ€‹ฮปโ€‹c1+2)โ€‹โ€–P๐€โŸ‚โ€‹๐—โ€–F2\displaystyle\leq 2c_{1}c_{2}\|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F}+2c_{1}c_{2}\|{\mathbf{X}}_{0}\|^{2}_{F}\|{\mathbf{I}}-{\mathbf{D}}\|^{2}+(2\lambda c_{1}+2)\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
โ‰ค2โ€‹c1โ€‹c2โ€‹(1+2โ€‹ฮต2)โ€‹โ€–๐—โˆ’๐—0โ€–F2+4โ€‹c1โ€‹c2โ€‹ฮต2โ€‹โ€–๐—โ€–F2+(2โ€‹ฮปโ€‹c1+2)โ€‹โ€–P๐€โŸ‚โ€‹๐—โ€–F2.\displaystyle\leq 2c_{1}c_{2}(1+2\varepsilon^{2})\|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F}+4c_{1}c_{2}\varepsilon^{2}\|{\mathbf{X}}\|^{2}_{F}+(2\lambda c_{1}+2)\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}.

โˆŽ

4. Methods that solve compressed problems

In this section we define iterative methods that solve the formulated optimization problems directly, without accessing the original data or forming any matrices of the original, uncompressed size.

4.1. General convergence for sketched multiplicative updates

Multiplicative updates has been one of the most popular algorithms for NMF since its introduction in [22]. In this section we show how to modify the classical multiplicative updates algorithm for the various objectives derived in the earlier sections.

To this end, we prove a general theorem on multiplicative updates for objectives with multiple minimization terms. We will see in Section 4.2 that imposing the nonnegativity conditions on the sums in (4.2), rather than the stronger conditions on the individual terms, matters: it allows realistic assumptions on our regularization terms, which involve orthogonal projection operators.

Theorem 4.1.

Consider an objective function in the generic form

argโ€‹min๐”,๐•โ‰ฅ0โก12โ€‹โˆ‘i=1sโ€–๐€iโ€‹(๐—i(๐€)โˆ’๐”๐•T)โ€–F2+12โ€‹โˆ‘j=1tโ€–(๐—j(๐)โˆ’๐”๐•T)โ€‹๐jโ€–F2,\operatorname*{arg\,min}_{{\mathbf{U}},{\mathbf{V}}\geq 0}\frac{1}{2}\sum_{i=1}^{s}\|{\mathbf{A}}_{i}({\mathbf{X}}^{({\mathbf{A}})}_{i}-{\mathbf{U}}{\mathbf{V}}^{T})\|_{F}^{2}+\frac{1}{2}\sum_{j=1}^{t}\|({\mathbf{X}}^{({\mathbf{B}})}_{j}-{\mathbf{U}}{\mathbf{V}}^{T}){\mathbf{B}}_{j}\|_{F}^{2}, (4.1)

where all {\mathbf{X}}_{i}^{({\mathbf{A}})},{\mathbf{X}}_{j}^{({\mathbf{B}})}\in{\mathbb{R}}^{m\times n}, {\mathbf{A}}_{i}\in{\mathbb{R}}^{k\times m}, {\mathbf{B}}_{j}\in{\mathbb{R}}^{n\times k}, and ({\mathbf{U}},{\mathbf{V}}\geq 0) means ({\mathbf{U}}\in{\mathbb{R}}_{+}^{m\times r},{\mathbf{V}}\in{\mathbb{R}}_{+}^{n\times r}). If all six matrices

{๐”,๐•,(โˆ‘i=1s๐€iTโ€‹๐€i),(โˆ‘j=1t๐jโ€‹๐jT),(โˆ‘i=1s๐€iTโ€‹๐€iโ€‹๐—i(๐€)),(โˆ‘j=1t๐—j(๐)โ€‹๐jโ€‹๐jT)}\displaystyle\left\{{\mathbf{U}},{\mathbf{V}},(\sum_{i=1}^{s}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}),(\sum_{j=1}^{t}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}),(\sum_{i=1}^{s}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i}),(\sum_{j=1}^{t}{\mathbf{X}}^{({\mathbf{B}})}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T})\right\} (4.2)

are entry-wise nonnegative, then the objective (4.1) is nonincreasing under the updates

๐”\displaystyle{\mathbf{U}} โ†๐”โˆ˜โˆ‘i=1s๐€iTโ€‹๐€iโ€‹๐—i(๐€)โ€‹๐•+โˆ‘j=1t๐—j(๐)โ€‹๐jโ€‹๐jTโ€‹๐•โˆ‘i=1s๐€iTโ€‹๐€iโ€‹๐”๐•Tโ€‹๐•+โˆ‘j=1t๐”๐•Tโ€‹๐jโ€‹๐jTโ€‹๐•,\displaystyle\leftarrow{\mathbf{U}}\circ\frac{\sum_{i=1}^{s}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i}{\mathbf{V}}+\sum_{j=1}^{t}{\mathbf{X}}^{({\mathbf{B}})}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}}}{\sum_{i=1}^{s}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\sum_{j=1}^{t}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}}},
๐•\displaystyle{\mathbf{V}} โ†๐•โˆ˜โˆ‘i=1s(๐—i(๐€))Tโ€‹๐€iTโ€‹๐€iโ€‹๐”+โˆ‘j=1t๐jโ€‹๐jTโ€‹(๐—j(๐))Tโ€‹๐”โˆ‘i=1s๐•๐”Tโ€‹๐€iTโ€‹๐€iโ€‹๐”+โˆ‘j=1t๐jโ€‹๐jTโ€‹๐•๐”Tโ€‹๐”.\displaystyle\leftarrow{\mathbf{V}}\circ\frac{\sum_{i=1}^{s}({\mathbf{X}}^{({\mathbf{A}})}_{i})^{T}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{U}}+\sum_{j=1}^{t}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}({\mathbf{X}}^{({\mathbf{B}})}_{j})^{T}{\mathbf{U}}}{\sum_{i=1}^{s}{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{U}}+\sum_{j=1}^{t}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{U}}}.
Remark 4.2.

(Implementation considerations) Note that we can compute the matrix ๐€iTโ€‹๐€iโ€‹๐”๐•Tโ€‹๐•{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}} by the multiplication order given by ๐€iTโ€‹((๐€iโ€‹๐”)โ€‹(๐•Tโ€‹๐•)){\mathbf{A}}_{i}^{T}(({\mathbf{A}}_{i}{\mathbf{U}})({\mathbf{V}}^{T}{\mathbf{V}})). This order never requires us to store a matrix of size larger than maxโก(n,m)ร—maxโก(r,k)\max(n,m)\times\max(r,k). Similar procedures can be used for other terms appearing in the updates of Theorem 4.1.
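As a minimal illustration of this remark, the following NumPy sketch (the function and variable names are ours; the experiments in Section 5 use a JAX implementation of the same idea) evaluates the term {\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}} while keeping every intermediate of size at most \max(n,m)\times\max(r,k):

import numpy as np

def denominator_term(A, U, V):
    # Evaluates A.T @ A @ U @ V.T @ V in the order A.T @ ((A @ U) @ (V.T @ V)),
    # so that no m x m or m x n matrix is ever formed.
    G = V.T @ V            # (r, r)
    AU = A @ U             # (k, r)
    return A.T @ (AU @ G)  # (m, r)

A naive left-to-right evaluation would materialize the m\times m matrix {\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}, which is exactly what we want to avoid.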

The proof of the theorem follows the standard majorization approach used to justify the multiplicative updates algorithm for the NMF problem (e.g., [22]); the nonnegativity assumptions ensure its validity for the sketched problem.

Proof of Theorem 4.1.

Let us consider the step updating the matrix ๐”โˆˆโ„+mร—r{\mathbf{U}}\in\mathbb{R}^{m\times r}_{+}. Let ๐ฎโˆˆโ„+mโ€‹r{\mathbf{u}}\in{\mathbb{R}}^{mr}_{+} be the vectorization of ๐”{\mathbf{U}} denoted as ๐ฎ=vecโ€‹(๐”){\mathbf{u}}=\text{vec}({\mathbf{U}}). Define

{\mathbf{y}}^{(1)}_{i}=\text{vec}({\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i}),\quad{\mathbf{W}}^{(1)}_{i}=({\mathbf{V}}\otimes{\mathbf{A}}_{i})\quad\text{ for }i=1,\ldots,s,
๐ฒj(2)=vecโ€‹(๐—j(B)โ€‹๐j),๐–j(2)=(๐jTโ€‹๐•โŠ—๐ˆmร—m) for โ€‹j=1,โ€ฆ,t,\displaystyle{\mathbf{y}}^{(2)}_{j}=\text{vec}({\mathbf{X}}^{(B)}_{j}{\mathbf{B}}_{j}),\quad{\mathbf{W}}^{(2)}_{j}=({\mathbf{B}}_{j}^{T}{\mathbf{V}}\otimes{\mathbf{I}}_{m\times m})\quad\text{ for }j=1,\ldots,t,

where \otimes denotes the matrix Kronecker product. Define {\mathbf{y}}\in{\mathbb{R}}^{k(sn+tm)} to be all of the vectors {\mathbf{y}}^{(1)}_{i} and {\mathbf{y}}^{(2)}_{j} stacked together vertically. Similarly, {\mathbf{W}}\in{\mathbb{R}}^{k(sn+tm)\times mr} is the vertical stack of all of the matrices {\mathbf{W}}^{(1)}_{i} and {\mathbf{W}}^{(2)}_{j}.

Using the mixed Kronecker matrix-vector product property (๐Œ1โŠ—๐Œ2)โ€‹vecโ€‹(๐”)=vecโ€‹(๐Œ2โ€‹๐”๐Œ1T)({\mathbf{M}}_{1}\otimes{\mathbf{M}}_{2})\text{vec}({\mathbf{U}})=\text{vec}({\mathbf{M}}_{2}{\mathbf{U}}{\mathbf{M}}_{1}^{T}) that holds for any appropriately sized matrices ๐”,๐Œ1,๐Œ2{\mathbf{U}},{\mathbf{M}}_{1},{\mathbf{M}}_{2}, we can rewrite the objective function (4.1) as

Fโ€‹(๐ฎ)=12โ€‹โ€–๐ฒโˆ’๐–๐ฎโ€–2.F({\mathbf{u}})=\frac{1}{2}\|{\mathbf{y}}-{\mathbf{W}}{\mathbf{u}}\|^{2}.

Step 1: Define quadratic majorizing function. Consider the function

Gโ€‹(๐ฎโ€ฒ,๐ฎ)=Fโ€‹(๐ฎ)+(๐ฎโ€ฒโˆ’๐ฎ)Tโ€‹โˆ‡Fโ€‹(๐ฎ)+12โ€‹(๐ฎโ€ฒโˆ’๐ฎ)Tโ€‹๐Š๐ฎโ€‹(๐ฎโ€ฒโˆ’๐ฎ),G({\mathbf{u}}^{\prime},{\mathbf{u}})=F({\mathbf{u}})+({\mathbf{u}}^{\prime}-{\mathbf{u}})^{T}\nabla F({\mathbf{u}})+\frac{1}{2}({\mathbf{u}}^{\prime}-{\mathbf{u}})^{T}{\mathbf{K}}_{\mathbf{u}}({\mathbf{u}}^{\prime}-{\mathbf{u}}), (4.3)

where the matrix ๐Š๐ฎ{\mathbf{K}}_{\mathbf{u}} is a diagonal matrix with the diagonal (๐–Tโ€‹๐–๐ฎ)/๐ฎ({\mathbf{W}}^{T}{\mathbf{W}}{\mathbf{u}})/{\mathbf{u}}, recall that // represents elementwise division. We claim that GG majorizes FF, i.e. Gโ€‹(๐ฎ,๐ฎ)=Fโ€‹(๐ฎ)G({\mathbf{u}},{\mathbf{u}})=F({\mathbf{u}}) and Gโ€‹(๐ฎโ€ฒ,๐ฎ)โ‰ฅFโ€‹(๐ฎโ€ฒ)G({\mathbf{u}}^{\prime},{\mathbf{u}})\geq F({\mathbf{u}}^{\prime}). It is clear that Gโ€‹(๐ฎ,๐ฎ)=Fโ€‹(๐ฎ)G({\mathbf{u}},{\mathbf{u}})=F({\mathbf{u}}). We can write

Gโ€‹(๐ฎโ€ฒ,๐ฎ)โˆ’Fโ€‹(๐ฎโ€ฒ)=12โ€‹(๐ฎโ€ฒโˆ’๐ฎ)Tโ€‹(๐Š๐ฎโˆ’๐–Tโ€‹๐–)โ€‹(๐ฎโ€ฒโˆ’๐ฎ)G({\mathbf{u}}^{\prime},{\mathbf{u}})-F({\mathbf{u}}^{\prime})=\frac{1}{2}({\mathbf{u}}^{\prime}-{\mathbf{u}})^{T}({\mathbf{K}}_{\mathbf{u}}-{\mathbf{W}}^{T}{\mathbf{W}})({\mathbf{u}}^{\prime}-{\mathbf{u}})

from the comparison of (4.3) with the Taylor expansion of the quadratic function F({\mathbf{u}}^{\prime}) at {\mathbf{u}}. So, to check that G({\mathbf{u}}^{\prime},{\mathbf{u}})\geq F({\mathbf{u}}^{\prime}), it is sufficient to show that the matrix {\mathbf{K}}_{\mathbf{u}}-{\mathbf{W}}^{T}{\mathbf{W}} is positive semidefinite. Equivalently, it is sufficient to show that

๐Œ=(๐Šuโˆ’๐–Tโ€‹๐–)โˆ˜๐ฎ๐ฎT{\mathbf{M}}=({\mathbf{K}}_{u}-{\mathbf{W}}^{T}{\mathbf{W}})\circ{\mathbf{u}}{\mathbf{u}}^{T}

is positive semidefinite. Indeed, for any vector {\mathbf{z}} of the appropriate size, {\mathbf{z}}^{T}{\mathbf{M}}{\mathbf{z}}=({\mathbf{z}}\circ{\mathbf{u}})^{T}({\mathbf{K}}_{\mathbf{u}}-{\mathbf{W}}^{T}{\mathbf{W}})({\mathbf{z}}\circ{\mathbf{u}}); recall that \circ denotes the element-wise product.


Step 2: Matrix {\mathbf{M}} is positive semidefinite. We check positive semidefiniteness of the matrix {\mathbf{M}} directly. Consider any {\mathbf{v}}\in{\mathbb{R}}^{mr}. Then

๐ฏTโ€‹๐Œ๐ฏ\displaystyle{\mathbf{v}}^{T}{\mathbf{M}}{\mathbf{v}} =โˆ‘iโ€‹j๐ฏiโ€‹๐Œiโ€‹jโ€‹๐ฏj\displaystyle=\sum_{ij}{\mathbf{v}}_{i}{\mathbf{M}}_{ij}{\mathbf{v}}_{j}
=โˆ‘iโ€‹j๐ฎiโ€‹(๐–Tโ€‹๐–)iโ€‹jโ€‹๐ฎjโ€‹๐ฏi2โˆ’๐ฏiโ€‹๐ฎiโ€‹(๐–Tโ€‹๐–)iโ€‹jโ€‹๐ฎjโ€‹๐ฏj\displaystyle=\sum_{ij}{\mathbf{u}}_{i}({\mathbf{W}}^{T}{\mathbf{W}})_{ij}{\mathbf{u}}_{j}{\mathbf{v}}_{i}^{2}-{\mathbf{v}}_{i}{\mathbf{u}}_{i}({\mathbf{W}}^{T}{\mathbf{W}})_{ij}{\mathbf{u}}_{j}{\mathbf{v}}_{j}
=โˆ‘iโ€‹j(๐–Tโ€‹๐–)iโ€‹jโ€‹๐ฎiโ€‹๐ฎjโ€‹(0.5โ€‹(๐ฏi2+๐ฏj2)โˆ’๐ฏiโ€‹๐ฏj)\displaystyle=\sum_{ij}({\mathbf{W}}^{T}{\mathbf{W}})_{ij}{\mathbf{u}}_{i}{\mathbf{u}}_{j}(0.5({\mathbf{v}}_{i}^{2}+{\mathbf{v}}_{j}^{2})-{\mathbf{v}}_{i}{\mathbf{v}}_{j})
=12โ€‹โˆ‘iโ€‹j(๐–Tโ€‹๐–)iโ€‹jโ€‹๐ฎiโ€‹๐ฎjโ€‹(๐ฏiโˆ’๐ฏj)2.\displaystyle=\frac{1}{2}\sum_{ij}({\mathbf{W}}^{T}{\mathbf{W}})_{ij}{\mathbf{u}}_{i}{\mathbf{u}}_{j}({\mathbf{v}}_{i}-{\mathbf{v}}_{j})^{2}.

Now observe

๐–Tโ€‹๐–\displaystyle{\mathbf{W}}^{T}{\mathbf{W}} =โˆ‘i๐–iTโ€‹๐–i+โˆ‘j๐–jTโ€‹๐–j\displaystyle=\sum_{i}{\mathbf{W}}_{i}^{T}{\mathbf{W}}_{i}+\sum_{j}{\mathbf{W}}_{j}^{T}{\mathbf{W}}_{j}
=โˆ‘i(๐•โŠ—๐€i)Tโ€‹(๐•โŠ—๐€i)+โˆ‘j(๐jTโ€‹๐•โŠ—๐ˆ)Tโ€‹(๐jTโ€‹๐•โŠ—๐ˆ)\displaystyle=\sum_{i}({\mathbf{V}}\otimes{\mathbf{A}}_{i})^{T}({\mathbf{V}}\otimes{\mathbf{A}}_{i})+\sum_{j}({\mathbf{B}}_{j}^{T}{\mathbf{V}}\otimes{\mathbf{I}})^{T}({\mathbf{B}}_{j}^{T}{\mathbf{V}}\otimes{\mathbf{I}})
=โˆ‘i(๐•Tโ€‹๐•)โŠ—(๐€iTโ€‹๐€i)+โˆ‘j(๐•Tโ€‹๐jโ€‹๐jTโ€‹๐•)โŠ—๐ˆ\displaystyle=\sum_{i}({\mathbf{V}}^{T}{\mathbf{V}})\otimes({\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i})+\sum_{j}({\mathbf{V}}^{T}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}})\otimes{\mathbf{I}}
=(๐•Tโ€‹๐•)โŠ—(โˆ‘i๐€iTโ€‹๐€i)+๐•Tโ€‹(โˆ‘j๐jโ€‹๐jT)โ€‹๐•โŠ—๐ˆ.\displaystyle=({\mathbf{V}}^{T}{\mathbf{V}})\otimes\big{(}\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}\big{)}+{\mathbf{V}}^{T}\big{(}\sum_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}\big{)}{\mathbf{V}}\otimes{\mathbf{I}}.

Thus, by the entry-wise nonnegativity of ๐•{\mathbf{V}}, โˆ‘i๐€iTโ€‹๐€i\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}, and โˆ‘j๐jโ€‹๐jT\sum_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}, the matrix ๐–Tโ€‹๐–{\mathbf{W}}^{T}{\mathbf{W}} is also entrywise nonnegative. Since ๐”{\mathbf{U}} is also nonnegative,

๐ฏTโ€‹๐Œ๐ฏ=12โ€‹โˆ‘iโ€‹j(๐–Tโ€‹๐–)iโ€‹jโ€‹๐ฎiโ€‹๐ฎjโ€‹(๐ฏiโˆ’๐ฏj)2โ‰ฅ0.{\mathbf{v}}^{T}{\mathbf{M}}{\mathbf{v}}=\frac{1}{2}\sum_{ij}({\mathbf{W}}^{T}{\mathbf{W}})_{ij}{\mathbf{u}}_{i}{\mathbf{u}}_{j}({\mathbf{v}}_{i}-{\mathbf{v}}_{j})^{2}\geq 0.

This shows that ๐Œ{\mathbf{M}} is positive semidefinite, and therefore, GG majorizes FF.


Step 3: The updates minimize the majorizing function. To conclude, observe that

argโกmin๐ฎโ€ฒโกGโ€‹(๐ฎโ€ฒ,๐ฎ)=๐ฎโˆ’๐Š๐ฎโˆ’1โ€‹โˆ‡Fโ€‹(๐ฎ)=๐ฎโˆ˜๐–Tโ€‹๐ฒ๐–Tโ€‹๐–๐ฎ.\arg\min_{{\mathbf{u}}^{\prime}}G({\mathbf{u}}^{\prime},{\mathbf{u}})={\mathbf{u}}-{\mathbf{K}}_{\mathbf{u}}^{-1}\nabla F({\mathbf{u}})={\mathbf{u}}\circ\frac{{\mathbf{W}}^{T}{\mathbf{y}}}{{\mathbf{W}}^{T}{\mathbf{W}}{\mathbf{u}}}. (4.4)

In matrix form, this corresponds exactly to the update for ๐”{\mathbf{U}} given by

๐”โˆ˜โˆ‘i๐€iTโ€‹๐€iโ€‹๐—i(๐€)โ€‹๐•+โˆ‘j๐—j(๐)โ€‹๐jโ€‹๐jTโ€‹๐•โˆ‘i๐€iTโ€‹๐€iโ€‹๐”๐•Tโ€‹๐•+โˆ‘j๐”๐•Tโ€‹๐jโ€‹๐jTโ€‹๐•.{\mathbf{U}}\circ\frac{\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i}{\mathbf{V}}+\sum_{j}{\mathbf{X}}^{({\mathbf{B}})}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}}}{\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\sum_{j}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}}}.

From the majorization property, we have

Fโ€‹(๐ฎโˆ˜๐–Tโ€‹๐ฒ๐–Tโ€‹๐–๐ฎ)โ‰คGโ€‹(๐ฎโˆ˜๐–Tโ€‹๐ฒ๐–Tโ€‹๐–๐ฎ,๐ฎ)โ€‹โ‰ค(โ€‹4.4โ€‹)โ€‹Gโ€‹(๐ฎ,๐ฎ)=Fโ€‹(๐ฎ)F\big{(}{\mathbf{u}}\circ\frac{{\mathbf{W}}^{T}{\mathbf{y}}}{{\mathbf{W}}^{T}{\mathbf{W}}{\mathbf{u}}}\big{)}\leq G\big{(}{\mathbf{u}}\circ\frac{{\mathbf{W}}^{T}{\mathbf{y}}}{{\mathbf{W}}^{T}{\mathbf{W}}{\mathbf{u}}},{\mathbf{u}}\big{)}\overset{\eqref{eq:min-g}}{\leq}G({\mathbf{u}},{\mathbf{u}})=F({\mathbf{u}})

and thus, the iterates do not increase the objective.

The updates for {\mathbf{V}} also do not increase the objective by a similar argument. The conditions on (\sum_{i=1}^{s}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i}) and (\sum_{j=1}^{t}{\mathbf{X}}^{({\mathbf{B}})}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}) ensure that the matrices {\mathbf{U}} and {\mathbf{V}} remain nonnegative throughout the updates. ∎

4.2. Multiplicative updates for solving regularized compressed problems

Now, we demonstrate how the general framework (4.1) applies to the compressed problems from Section 3.

Corollary 4.3 (Two-sided updates).

Let ๐—โˆˆโ„+mร—n{\mathbf{X}}\in{\mathbb{R}}_{+}^{m\times n} be a nonnegative matrix and ๐€1โˆˆโ„kร—m{\mathbf{A}}_{1}\in{\mathbb{R}}^{k\times m}, ๐€2โˆˆโ„nร—k{\mathbf{A}}_{2}\in{\mathbb{R}}^{n\times k} are generic conpression matrices such that ๐€1โ€‹๐—๐€2{\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2} is invertible. Form ๐1โˆˆโ„mร—r{\mathbf{Q}}_{1}\in{\mathbb{R}}^{m\times r} and ๐2โˆˆโ„nร—r{\mathbf{Q}}_{2}\in{\mathbb{R}}^{n\times r}, the matrices which columns form orthonormal bases of the column space of ๐—๐€2{\mathbf{X}}{\mathbf{A}}_{2} and the row space of ๐€1โ€‹๐—{\mathbf{A}}_{1}{\mathbf{X}} respectively. For any ฮป1,ฮป2โ‰ฅ0\lambda_{1},\lambda_{2}\geq 0, if

ฯƒ1\displaystyle\sigma_{1} โ‰ฅmaxโก{(๐€1Tโ€‹๐€1)โˆ’},maxโก{(๐€1Tโ€‹๐€1+ฮป1โ€‹(๐ˆโˆ’๐1โ€‹๐1T))โˆ’},\displaystyle\geq\max\{\left({\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}\right)_{-}\},\max\{\left({\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}+\lambda_{1}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T})\right)_{-}\},
ฯƒ2\displaystyle\sigma_{2} โ‰ฅmaxโก{(๐€2โ€‹๐€2T)โˆ’},maxโก{(๐€2โ€‹๐€2T+ฮป2โ€‹(๐ˆโˆ’๐2โ€‹๐2T))โˆ’},\displaystyle\geq\max\{\left({\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}\right)_{-}\},\max\{\left({\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}+\lambda_{2}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T})\right)_{-}\},

where max\max is taken entry-wise, then the objective

โ€–๐€1โ€‹(๐—โˆ’๐”๐•T)โ€–F2\displaystyle\|{\mathbf{A}}_{1}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F} +โ€–(๐—โˆ’๐”๐•T)โ€‹๐€2โ€–F2\displaystyle+\|({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}){\mathbf{A}}_{2}\|^{2}_{F}
+ฮป1โ€‹โ€–P๐—๐€2โŸ‚โ€‹๐”๐•Tโ€–F2+ฮป2โ€‹โ€–๐”๐•Tโ€‹P๐€1โ€‹๐—โŸ‚โ€–F2\displaystyle+\lambda_{1}\|P_{{\mathbf{X}}{\mathbf{A}}_{2}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}+\lambda_{2}\|{\mathbf{U}}{\mathbf{V}}^{T}P_{{\mathbf{A}}_{1}{\mathbf{X}}}^{\perp}\|^{2}_{F}
+ฯƒ1โ€‹โ€–๐ŸmTโ€‹(๐—โˆ’๐”๐•T)โ€–2+ฯƒ2โ€‹โ€–(๐—โˆ’๐”๐•T)โ€‹๐Ÿnโ€–2,\displaystyle\quad\quad\quad\quad+\sigma_{1}\|\mathbf{1}_{m}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}+\sigma_{2}\|({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\mathbf{1}_{n}\|^{2},

is nonincreasing under the updates

๐”\displaystyle{\mathbf{U}} โ†๐”โˆ˜๐€1,ฯƒโ€‹๐—๐•+๐—๐€2,ฯƒโ€‹๐•(๐€1,ฯƒ+ฮป1โ€‹(๐ˆโˆ’๐1โ€‹๐1T))โ€‹๐”๐•Tโ€‹๐•+๐”๐•Tโ€‹(๐€2,ฯƒ+ฮป2โ€‹(๐ˆโˆ’๐2โ€‹๐2T))โ€‹๐•\displaystyle\leftarrow{\mathbf{U}}\circ\frac{{\mathbf{A}}_{1,\sigma}{\mathbf{X}}{\mathbf{V}}+{\mathbf{X}}{\mathbf{A}}_{2,\sigma}{\mathbf{V}}}{({\mathbf{A}}_{1,\sigma}+\lambda_{1}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T})){\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+{\mathbf{U}}{\mathbf{V}}^{T}({\mathbf{A}}_{2,\sigma}+\lambda_{2}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T})){\mathbf{V}}}
๐•\displaystyle{\mathbf{V}} โ†๐•โˆ˜๐—Tโ€‹๐€1,ฯƒโ€‹๐”+๐€2,ฯƒโ€‹๐—Tโ€‹๐”๐•๐”Tโ€‹(๐€1,ฯƒ+ฮป1โ€‹(๐ˆโˆ’๐1โ€‹๐1T))โ€‹๐”+(๐€2,ฯƒ+ฮป2โ€‹(๐ˆโˆ’๐2โ€‹๐2T))โ€‹๐•๐”Tโ€‹๐”,\displaystyle\leftarrow{\mathbf{V}}\circ\frac{{\mathbf{X}}^{T}{\mathbf{A}}_{1,\sigma}{\mathbf{U}}+{\mathbf{A}}_{2,\sigma}{\mathbf{X}}^{T}{\mathbf{U}}}{{\mathbf{V}}{\mathbf{U}}^{T}({\mathbf{A}}_{1,\sigma}+\lambda_{1}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T})){\mathbf{U}}+({\mathbf{A}}_{2,\sigma}+\lambda_{2}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T})){\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{U}}},

where

๐€1,ฯƒ:=๐€1Tโ€‹๐€1+ฯƒ1โ€‹๐Ÿmโ€‹๐ŸmTโ€‹ and โ€‹๐€2,ฯƒ:=๐€2โ€‹๐€2T+ฯƒ2โ€‹๐Ÿnโ€‹๐ŸnT.{\mathbf{A}}_{1,\sigma}:={\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}+\sigma_{1}\mathbf{1}_{m}\mathbf{1}_{m}^{T}\;\text{ and }\;{\mathbf{A}}_{2,\sigma}:={\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}+\sigma_{2}\mathbf{1}_{n}\mathbf{1}_{n}^{T}.

Note that Theorem 3.1 and Corollary 3.12 claim that the optimal solution of the objective above is also an optimal solution of the uncompressed NMF problem if {\mathbf{X}} admits an exact nonnegative factorization of rank at most k.

Proof.

Consider the setting of Theorem 4.1, with the matrices

\{{\mathbf{A}}_{i}\}_{i=1,2,3}=\{{\mathbf{A}}_{1},\sqrt{\sigma_{1}}\mathbf{1}_{m}^{T},\sqrt{\lambda_{1}}P_{{\mathbf{X}}{\mathbf{A}}_{2}}^{\perp}\}=\{{\mathbf{A}}_{1},\sqrt{\sigma_{1}}\mathbf{1}_{m}^{T},\sqrt{\lambda_{1}}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T})\},
\{{\mathbf{B}}_{j}\}_{j=1,2,3}=\{{\mathbf{A}}_{2},\sqrt{\sigma_{2}}\mathbf{1}_{n},\sqrt{\lambda_{2}}P_{{\mathbf{A}}_{1}{\mathbf{X}}}^{\perp}\}=\{{\mathbf{A}}_{2},\sqrt{\sigma_{2}}\mathbf{1}_{n},\sqrt{\lambda_{2}}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T})\},
{๐—i(๐€)}i=1,2,3={๐—j(๐)}j=1,2,3={๐—,๐—,๐ŸŽ}.\displaystyle\{{\mathbf{X}}^{({\mathbf{A}})}_{i}\}_{i=1,2,3}=\{{\mathbf{X}}^{({\mathbf{B}})}_{j}\}_{j=1,2,3}=\{{\mathbf{X}},{\mathbf{X}},\mathbf{0}\}.

Clearly, we have that the matrices ๐—i(๐€){\mathbf{X}}^{({\mathbf{A}})}_{i} and ๐—i(๐){\mathbf{X}}^{({\mathbf{B}})}_{i} are nonnegative. Then to apply Theorem 4.1, we must check that โˆ‘i๐€iTโ€‹๐€i\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}, โˆ‘i๐€iTโ€‹๐€iโ€‹๐—i(๐€)\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i}, โˆ‘j๐jโ€‹๐jT\sum_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T} and โˆ‘j๐—j(๐)โ€‹๐jโ€‹๐jT\sum_{j}{\mathbf{X}}^{({\mathbf{B}})}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T} are entry-wise nonnegative. First, we calculate

โˆ‘i๐€iTโ€‹๐€i\displaystyle\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i} =๐€1Tโ€‹๐€1+ฯƒ1โ€‹๐Ÿmโ€‹๐ŸmT+ฮป1โ€‹(๐ˆโˆ’๐1โ€‹๐1T),\displaystyle={\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}+\sigma_{1}\mathbf{1}_{m}\mathbf{1}_{m}^{T}+\lambda_{1}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T}),
โˆ‘j๐jโ€‹๐jT\displaystyle\sum_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T} =๐€2โ€‹๐€2T+ฯƒ2โ€‹๐Ÿnโ€‹๐ŸnT+ฮป2โ€‹(๐ˆโˆ’๐2โ€‹๐2T).\displaystyle={\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}+\sigma_{2}\mathbf{1}_{n}\mathbf{1}_{n}^{T}+\lambda_{2}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T}).

Thus to ensure entry-wise nonnegativity of both sums, we need

ฯƒ1โ‰ฅmaxโก{(๐€1Tโ€‹๐€1+ฮป1โ€‹(๐ˆโˆ’๐1โ€‹๐1T))โˆ’},ฯƒ2โ‰ฅmaxโก{(๐€2โ€‹๐€2T+ฮป2โ€‹(๐ˆโˆ’๐2โ€‹๐2T))โˆ’}.\sigma_{1}\geq\max\{\left({\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}+\lambda_{1}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T})\right)_{-}\},\;\sigma_{2}\geq\max\{\left({\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}+\lambda_{2}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T})\right)_{-}\}.

Similarly, for

โˆ‘i๐€iTโ€‹๐€iโ€‹๐—i\displaystyle\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}_{i} =(๐€1Tโ€‹๐€1+ฯƒ1โ€‹๐Ÿmโ€‹๐ŸmT)โ€‹๐—,\displaystyle=({\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}+\sigma_{1}\mathbf{1}_{m}\mathbf{1}_{m}^{T}){\mathbf{X}},
โˆ‘j๐—jโ€‹๐jโ€‹๐jT\displaystyle\sum_{j}{\mathbf{X}}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T} =๐—โ€‹(๐€2โ€‹๐€2T+ฯƒ2โ€‹๐Ÿnโ€‹๐ŸnT).\displaystyle={\mathbf{X}}({\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}+\sigma_{2}\mathbf{1}_{n}\mathbf{1}_{n}^{T}).

we need to ensure ฯƒ1โ‰ฅmaxโก{(๐€1Tโ€‹๐€1)โˆ’}\sigma_{1}\geq\max\{\left({\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}\right)_{-}\} and ฯƒ2โ‰ฅmaxโก{(๐€2โ€‹๐€2T)โˆ’}\sigma_{2}\geq\max\{\left({\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}\right)_{-}\}. โˆŽ
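In practice, the nonnegativity correction parameters can be computed directly from the compression matrices. The following NumPy sketch computes the minimal thresholds of Corollary 4.3 (the function names are ours, and {\mathbf{Q}}_{1},{\mathbf{Q}}_{2} are as in the statement of the corollary):

import numpy as np

def neg_part_max(M):
    # Largest entry of the negative part (M)_-; zero if M is entry-wise nonnegative.
    return max(0.0, float((-M).max()))

def sigma_thresholds(A1, A2, Q1, Q2, lam1, lam2):
    # Minimal sigma_1 and sigma_2 satisfying the conditions of Corollary 4.3.
    # The m x m and n x n Gram matrices are formed here for clarity; in practice
    # their most negative entries can be found blockwise.
    M1 = A1.T @ A1
    M2 = A2 @ A2.T
    P1 = np.eye(Q1.shape[0]) - Q1 @ Q1.T
    P2 = np.eye(Q2.shape[0]) - Q2 @ Q2.T
    sigma1 = max(neg_part_max(M1), neg_part_max(M1 + lam1 * P1))
    sigma2 = max(neg_part_max(M2), neg_part_max(M2 + lam2 * P2))
    return sigma1, sigma2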

Corollary 4.4 (One-sided updates for orthogonal ๐€{\mathbf{A}}).

If ๐—โˆˆโ„+mร—n{\mathbf{X}}\in{\mathbb{R}}_{+}^{m\times n} and sketching matrix ๐€โˆˆโ„kร—m{\mathbf{A}}\in{\mathbb{R}}^{k\times m} has orthogonal rows, ฮปโˆˆ[0,1]\lambda\in[0,1] and the nonnegativity correction term ฯƒโ‰ฅmaxโก{(๐€Tโ€‹๐€)โˆ’}\sigma\geq\max\{\left({\mathbf{A}}^{T}{\mathbf{A}}\right)_{-}\}, then the objective

โ€–๐€โ€‹(๐—โˆ’๐”๐•T)โ€–F2+ฮปโ€‹โ€–P๐€โŸ‚โ€‹๐”๐•Tโ€–F2+ฯƒโ€‹โ€–๐ŸTโ€‹(๐—โˆ’๐”๐•T)โ€–2\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}+\sigma\|\mathbf{1}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2} (4.5)

with respect to the variables ๐”โˆˆโ„+mร—r,๐•โˆˆโ„+nร—r{\mathbf{U}}\in{\mathbb{R}}_{+}^{m\times r},{\mathbf{V}}\in{\mathbb{R}}_{+}^{n\times r} is nonincreasing under the updates

๐”\displaystyle{\mathbf{U}} โ†๐”โˆ˜๐€Tโ€‹๐€๐—๐•+ฯƒโ€‹๐Ÿ๐ŸTโ€‹๐—๐•(1โˆ’ฮป)โ€‹๐€Tโ€‹๐€๐”๐•Tโ€‹๐•+ฯƒโ€‹๐Ÿ๐ŸTโ€‹๐”๐•Tโ€‹๐•+ฮปโ€‹๐”๐•Tโ€‹๐•\displaystyle\leftarrow{\mathbf{U}}\circ\frac{{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{X}}{\mathbf{V}}+\sigma\mathbf{1}\mathbf{1}^{T}{\mathbf{X}}{\mathbf{V}}}{(1-\lambda){\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\sigma\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\lambda{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}} (4.6)
๐•\displaystyle{\mathbf{V}} โ†๐•โˆ˜๐—Tโ€‹๐€Tโ€‹๐€๐”+ฯƒโ€‹๐—Tโ€‹๐Ÿ๐ŸTโ€‹๐”(1โˆ’ฮป)โ€‹๐•๐”Tโ€‹๐€Tโ€‹๐€๐”+ฯƒโ€‹๐•๐”Tโ€‹๐Ÿ๐ŸTโ€‹๐”+ฮปโ€‹๐•๐”Tโ€‹๐”.\displaystyle\leftarrow{\mathbf{V}}\circ\frac{{\mathbf{X}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}+\sigma{\mathbf{X}}^{T}\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}}{(1-\lambda){\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}+\sigma{\mathbf{V}}{\mathbf{U}}^{T}\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}+\lambda{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{U}}}.

Here, ๐Ÿ=(1,โ€ฆ,1)โˆˆโ„m\mathbf{1}=(1,\ldots,1)\in{\mathbb{R}}^{m}. Note that Corollary 3.13 claims that the optimal solution for (4.5) results in a good solution for the uncompressed NMF problem as long as the original NMF error min๐”,๐•โ‰ฅ0โกโ€–๐—โˆ’๐”๐•โ€–F2\min\limits_{{\mathbf{U}},{\mathbf{V}}\geq 0}\|{\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}\|^{2}_{F} and โ€–P๐€โŸ‚โ€‹๐—โ€–F2\|P^{\perp}_{\mathbf{A}}{\mathbf{X}}\|^{2}_{F} term are small.

Proof.

In the language of Theorem 4.1, โ€œ๐—i{\mathbf{X}}_{i}โ€ matrices are {๐—,๐ŸŽ,๐—}\{{\mathbf{X}},\mathbf{0},{\mathbf{X}}\} and โ€œ๐€i{\mathbf{A}}_{i}โ€ matrices are {๐€,ฮปโ€‹(๐ˆโˆ’๐€Tโ€‹๐€),ฯƒโ€‹๐ŸT}\{{\mathbf{A}},\sqrt{\lambda}({\mathbf{I}}-{\mathbf{A}}^{T}{\mathbf{A}}),\sqrt{\sigma}\mathbf{1}^{T}\} for i=1,2,3i=1,2,3 respectively. One can see that

โˆ‘i=13๐€iTโ€‹๐€i=(1โˆ’ฮป)โ€‹๐€Tโ€‹๐€+ฮปโ€‹๐ˆ+ฯƒโ€‹๐Ÿ๐ŸTโ€‹ and โ€‹โˆ‘i=13๐€iTโ€‹๐€iโ€‹๐—i=๐€Tโ€‹๐€๐—+ฯƒโ€‹๐Ÿ๐ŸTโ€‹๐—.\sum_{i=1}^{3}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}=(1-\lambda){\mathbf{A}}^{T}{\mathbf{A}}+\lambda{\mathbf{I}}+\sigma\mathbf{1}\mathbf{1}^{T}\;\text{ and }\;\sum_{i=1}^{3}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}_{i}={\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{X}}+\sigma\mathbf{1}\mathbf{1}^{T}{\mathbf{X}}.

These matrices are entry-wise nonnegative if ฯƒโ‰ฅmaxโก{(๐€Tโ€‹๐€)โˆ’}\sigma\geq\max\{\left({\mathbf{A}}^{T}{\mathbf{A}}\right)_{-}\} and ฮปโˆˆ[0,1]\lambda\in[0,1]. Theorem 4.1 applies to justify that the objective (4.5) is nonincreasing under the updates (4.6). โˆŽ
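For illustration, here is a minimal NumPy sketch of one iteration of the updates (4.6); this is our own simplified reference implementation, not the JAX code used in Section 5. The only stored quantities derived from the data are the sketch Y = A X and the column sums 1^T X, evaluated in the memory-frugal order of Remark 4.2, and eps is a small constant we add to avoid division by zero:

import numpy as np

def one_sided_mu_step(U, V, A, Y, x_colsum, lam, sigma, eps=1e-12):
    # One iteration of the updates (4.6).
    # A: (k, m) with orthonormal rows, U: (m, r), V: (n, r),
    # Y = A @ X: (k, n), x_colsum = X.sum(axis=0): (n,). X itself is never needed.
    m = U.shape[0]
    ones_m = np.ones(m)
    AU = A @ U                         # (k, r)
    VtV = V.T @ V                      # (r, r)
    u_sum = U.sum(axis=0)              # (r,)  = 1^T U
    num_U = A.T @ (Y @ V) + sigma * np.outer(ones_m, x_colsum @ V)
    den_U = ((1 - lam) * (A.T @ (AU @ VtV))
             + sigma * np.outer(ones_m, u_sum @ VtV)
             + lam * (U @ VtV))
    U = U * num_U / (den_U + eps)
    AU = A @ U                         # recompute with the updated U
    u_sum = U.sum(axis=0)
    num_V = Y.T @ AU + sigma * np.outer(x_colsum, u_sum)
    den_V = ((1 - lam) * (V @ (AU.T @ AU))
             + sigma * (V @ np.outer(u_sum, u_sum))
             + lam * (V @ (U.T @ U)))
    V = V * num_V / (den_V + eps)
    return U, V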

The proof of the next corollary for not necessarily orthogonal sketching matrices is similar to the one above and is omitted for brevity.

Corollary 4.5 (One-sided updates for nonorthogonal sketching matrices).

If ๐—โˆˆโ„+mร—n{\mathbf{X}}\in{\mathbb{R}}_{+}^{m\times n} and ๐€โˆˆโ„kร—m{\mathbf{A}}\in{\mathbb{R}}^{k\times m} is an arbitrary matrix, ฮปโˆˆ[0,1]\lambda\in[0,1] and the nonnegativity correction term ฯƒโ‰ฅmaxโก((๐€Tโ€‹๐€)โˆ’)\sigma\geq\max(({\mathbf{A}}^{T}{\mathbf{A}})_{-}), then the objective

โ€–๐€โ€‹(๐—โˆ’๐”๐•T)โ€–F2+ฮปโ€‹โ€–๐”๐•Tโ€–F2+ฯƒโ€‹โ€–๐ŸTโ€‹(๐—โˆ’๐”๐•T)โ€–2\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F}+\lambda\|{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}+\sigma\|\mathbf{1}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}

with respect to the variables ๐”โˆˆโ„+mร—r,๐•โˆˆโ„+nร—r{\mathbf{U}}\in{\mathbb{R}}_{+}^{m\times r},{\mathbf{V}}\in{\mathbb{R}}_{+}^{n\times r} is nonincreasing under the updates

๐”\displaystyle{\mathbf{U}} โ†๐”โˆ˜๐€Tโ€‹๐€๐—๐•+ฯƒโ€‹๐Ÿ๐ŸTโ€‹๐—๐•๐€Tโ€‹๐€๐”๐•Tโ€‹๐•+ฯƒโ€‹๐Ÿ๐ŸTโ€‹๐”๐•Tโ€‹๐•+ฮปโ€‹๐”๐•Tโ€‹๐•\displaystyle\leftarrow{\mathbf{U}}\circ\frac{{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{X}}{\mathbf{V}}+\sigma\mathbf{1}\mathbf{1}^{T}{\mathbf{X}}{\mathbf{V}}}{{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\sigma\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\lambda{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}}
๐•\displaystyle{\mathbf{V}} โ†๐•โˆ˜๐—Tโ€‹๐€Tโ€‹๐€๐”+ฯƒโ€‹๐—Tโ€‹๐Ÿ๐ŸTโ€‹๐”๐•๐”Tโ€‹๐€Tโ€‹๐€๐”+ฯƒโ€‹๐•๐”Tโ€‹๐Ÿ๐ŸTโ€‹๐”+ฮปโ€‹๐•๐”Tโ€‹๐”.\displaystyle\leftarrow{\mathbf{V}}\circ\frac{{\mathbf{X}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}+\sigma{\mathbf{X}}^{T}\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}}{{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}+\sigma{\mathbf{V}}{\mathbf{U}}^{T}\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}+\lambda{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{U}}}.

Here, ๐Ÿ\mathbf{1} is a vector of all ones in โ„m{\mathbb{R}}^{m}.

4.3. Solving compressed problems with projected gradient descent

Multiplicative updates is a popular approach for finding nonnegative factorizations due to its simplicity and good convergence properties. However, other standard methods, such as alternating nonnegative least squares, hierarchical alternating least squares, and projected gradient descent, can also be run on the compressed problems. For comparison and as an additional example, we consider the projected gradient descent (GD) method on compressed data.

For an arbitrary loss function L({\mathbf{U}},{\mathbf{V}}), nonnegative projected GD can be defined as

๐”\displaystyle{\mathbf{U}} โ†(๐”โˆ’ฮฑโ€‹โˆ‡๐”Lโ€‹(๐”,๐•))+\displaystyle\leftarrow({\mathbf{U}}-\alpha\nabla_{\mathbf{U}}L({\mathbf{U}},{\mathbf{V}}))_{+}
๐•\displaystyle{\mathbf{V}} โ†(๐•โˆ’ฮฑโ€‹โˆ‡๐•Lโ€‹(๐”,๐•))+,\displaystyle\leftarrow({\mathbf{V}}-\alpha\nabla_{\mathbf{V}}L({\mathbf{U}},{\mathbf{V}}))_{+},

where \alpha is the step size. For concreteness, we give an example of the updates for one of our formulated objective functions. The projected gradient descent updates for the objective \|{\mathbf{A}}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F} (as in Theorem 3.4) are

{\mathbf{U}}\leftarrow\left({\mathbf{U}}-\alpha\left((1-\lambda){\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\lambda{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}-{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{X}}{\mathbf{V}}\right)\right)_{+} (4.7)
{\mathbf{V}}\leftarrow\left({\mathbf{V}}-\alpha\left((1-\lambda){\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}+\lambda{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{U}}-{\mathbf{X}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}\right)\right)_{+}.
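A corresponding NumPy sketch of one step of (4.7), under the same naming assumptions as the MU sketch above (Y = A X is the stored sketch, A has orthonormal rows, and alpha is the step size; this is our illustration rather than the paper's JAX implementation):

import numpy as np

def projected_gd_step(U, V, A, Y, lam, alpha):
    # One projected gradient step for (4.7); X itself is never formed, only Y = A @ X.
    AU = A @ U                      # (k, r)
    VtV = V.T @ V                   # (r, r)
    grad_U = (1 - lam) * (A.T @ (AU @ VtV)) + lam * (U @ VtV) - A.T @ (Y @ V)
    U = np.maximum(U - alpha * grad_U, 0.0)   # projection onto the nonnegative orthant
    AU = A @ U
    grad_V = (1 - lam) * (V @ (AU.T @ AU)) + lam * (V @ (U.T @ U)) - Y.T @ AU
    V = np.maximum(V - alpha * grad_V, 0.0)
    return U, V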

We can similarly derive updates for our other objective functions. A disadvantage of this method is that it comes with no guarantee of convergence or even of a nonincreasing objective. Empirically, we see (Figure 3 below) that on some datasets projected GD shows competitive performance, and that its convergence is indeed not monotonic, unlike the convergence of the regularized compressed MU algorithm.

5. Experiments

We experiment with three datasets coming from various domains. The 20 Newsgroups dataset ("20News") [30] is a standard dataset for text classification and topic modeling tasks. It is a collection of articles divided into a total of 20 subtopics of the general topics of religion, sales, science, sports, tech and politics. The Olivetti faces dataset ("Faces") [32] is a standard image dataset containing grayscale facial images. It is often used in the literature as an example for different factorization methods including NMF [47, 48]. Finally, we construct a synthetic dataset with nonnegative rank exactly 20. Specifically, we let {\mathbf{U}} and {\mathbf{V}} be 1000\times 20 matrices whose entries are i.i.d. standard lognormal random variables and define {\mathbf{X}}={\mathbf{U}}{\mathbf{V}}^{T}. The dimensions of the datasets are reported in Table 1.

Dataset n m
Synthetic 1000 1000
Faces [32] 400 4096
20News [30] 11314 101322
Table 1. Dimensions of all datasets studied.
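For reference, the synthetic dataset can be generated with a few lines (a minimal sketch; the random seed is our arbitrary choice for reproducibility):

import numpy as np

rng = np.random.default_rng(0)             # arbitrary seed
U_true = rng.lognormal(size=(1000, 20))    # standard lognormal entries
V_true = rng.lognormal(size=(1000, 20))
X = U_true @ V_true.T                      # 1000 x 1000, nonnegative rank at most 20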

All experiments were run on a cluster with four 2.6 GHz Intel Skylake CPU cores and an NVIDIA V100 GPU. Compressed methods were implemented using the JAX library to take advantage of GPU acceleration. The uncompressed methods that we compare against were not implemented to use GPU acceleration, since in applications the full data matrix may be too large to store on a GPU. In our case, the 20News data matrix was too large to store as a dense matrix on our GPU.

To measure the convergence of the methods, besides interpretability of the topics or images, we use the relative error metric โ€–๐—โˆ’๐”๐•Tโ€–F/โ€–๐—โ€–F\|{\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}/\|{\mathbf{X}}\|_{F} and the scale invariant cosine similarity metric (normalized dot product metric) between ๐”๐•T{\mathbf{U}}{\mathbf{V}}^{T} and ๐—{\mathbf{X}}, defined as โŸจ๐—,๐”๐•TโŸฉ/โ€–๐—โ€–Fโ€‹โ€–๐”๐•Tโ€–F\langle{\mathbf{X}},{\mathbf{U}}{\mathbf{V}}^{T}\rangle/\|{\mathbf{X}}\|_{F}\|{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}.
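Both metrics can be computed directly from the factors; a trivial NumPy sketch (for very large {\mathbf{X}}, the product {\mathbf{U}}{\mathbf{V}}^{T} should be formed blockwise):

import numpy as np

def relative_error(X, U, V):
    # ||X - U V^T||_F / ||X||_F
    return np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)

def cosine_similarity(X, U, V):
    # <X, U V^T> / (||X||_F ||U V^T||_F)
    W = U @ V.T
    return float(np.sum(X * W) / (np.linalg.norm(X) * np.linalg.norm(W)))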

5.1. Synthetic data: exact recovery from compressed measurements is achievable

The synthetic data has nonnegative rank exactly k=20. In this case, the theoretical results suggest the possibility of exact recovery from the compressed data in two scenarios: (a) if two-sided compression is employed (Theorem 3.1), or (b) if the compression matrix is such that the projection of the data onto its orthogonal complement P_{{\mathbf{A}}}^{\perp}{\mathbf{X}} is zero, for example, if the compression matrix is learned via the randomized rangefinder algorithm (one-sided data-adapted compression, Corollary 3.6). Two-sided compression requires twice as much memory for the same sketch size but works with generic (data-oblivious) measurements, which can be advantageous because they do not require access to the data or another pass over it.

Figure 1. NMF recovery of the synthetic data with MU from full data (Uncompressed); from compressed data with one-sided data-adapted sketches using only 4% of the original memory (One-sided), and with two-sided Gaussian sketches using 8% of the original memory (Two-sided). All three methods achieve recovery within 10^{-3} relative error.

In Figure 1, we compare nonnegative factors found by MU from the full data {\mathbf{X}}, from the one-sided data-adapted sketches, and from the two-sided oblivious (i.i.d. Gaussian) linear measurements. We take the target rank r=20 and the sketch size k=20 (so, the sketching matrices have the shape 20\times 1000). In this case, the matrix {\mathbf{X}} has one million elements, whereas the total number of elements in {\mathbf{A}} and {\mathbf{X}}{\mathbf{A}} combined is only forty thousand. This represents a memory ratio of 4% for the one-sided method. For the two-sided method, the total number of elements in {\mathbf{A}}_{1}, {\mathbf{A}}_{2}, {\mathbf{A}}_{1}{\mathbf{X}}, and {\mathbf{X}}{\mathbf{A}}_{2} is eighty thousand, representing a memory ratio of 8%.

We employ the (compressed) MU algorithm as in Corollary 4.4 with \lambda=0.1 and as in Corollary 4.3 with \lambda=0. The parameter \sigma is chosen to be minimal so that the entries of {\mathbf{A}}^{T}{\mathbf{A}} are at least -\sigma, and similarly for {\mathbf{A}}_{1}, {\mathbf{A}}_{2}, \sigma_{1} and \sigma_{2} in the two-sided case. We see that the algorithm achieves a near exact fit, eventually reducing the relative error below 10^{-3}, which was our stopping criterion in this case. The one-sided method and the oblivious two-sided method appear to converge at a rate similar to the uncompressed method, albeit after a "burn-in" phase.

5.2. Synthetic data: effect of compression size

In Figure 2, we compare the compressed multiplicative updates for different amounts of compression. The target rank is r=20 and the sketch size k varies. We employ the compressed MU algorithm as in Corollaries 4.3, 4.4, and 4.5. We choose the compression matrix {\mathbf{A}} to be (a) a random Gaussian matrix, (b) a random matrix with orthogonal rows, or (c) a matrix obtained via the randomized SVD procedure (3.5). For the two-sided method (d), we choose both compression matrices to be random Gaussian matrices and \lambda=0. For the one-sided compression, we take \lambda=0.1. We report the cosine similarity metric.

We show that on an "easy" synthetic problem oblivious one-sided measurements can be used: the compressed MU algorithm results in a good, although not perfect, recovery (Figure 2 (a,b)), with the limiting loss depending on the amount of compression. The one-sided data-adapted compression and the two-sided random compression attain exact reconstruction at different rates depending on the amount of compression (Figure 2 (c,d)). Not surprisingly, in every case less compression leads to faster convergence or a better limiting fit.

Figure 2 (panels (a)-(d)). NMF recovery of the synthetic data with MU. MU on the uncompressed data achieves limiting similarity 1 after a few (but slow) iterations; only the limiting level is shown as "Uncompressed". Displays (c,d) show that MU on data-adapted and random two-sided sketched data also tends to the limiting similarity 1. Across all methods, less compression (larger k) improves convergence.

5.3. Real-world data: performance comparison

To showcase the proposed methods, we include recovery from two-sided and one-sided sketches. For the two-sided method, we solve a simplified version of the two-sided objective (3.15) with \lambda=0. We take {\mathbf{A}}_{1} and {\mathbf{A}}_{2} to be random Gaussian matrices in the oblivious case, or choose them according to the randomized rangefinder procedure as in Corollary 3.6 in the adaptive case. For the data-adapted one-sided method, we take \lambda=0.1 and solve the problem (4.5). Finally, we include the recovery via projected gradient descent (GD), as described in Section 4.3, with a step size of \alpha=0.001.
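For the data-adapted case, the one-sided sketching matrix can be formed in a single pass over the data with the basic randomized rangefinder of [14]; a minimal NumPy sketch (our own simplified version, without the oversampling and power iterations discussed in [14]):

import numpy as np

def data_adapted_sketch(X, k, rng):
    # Basic randomized rangefinder: returns A with orthonormal rows such that
    # P_A X = A.T @ A @ X approximates X.
    Omega = rng.standard_normal((X.shape[1], k))   # n x k Gaussian test matrix
    Q, _ = np.linalg.qr(X @ Omega)                 # m x k orthonormal basis of range(X @ Omega)
    return Q.T                                     # the k x m sketching matrix A

In the adaptive two-sided case, an analogous construction applied to {\mathbf{X}}^{T} can be used to obtain the second sketching matrix.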

We also compare our proposed methods with the "NMF with random projections" approach proposed in [42] and [36]. These works adapt the updates of [4] to the compressed NMF setting, resulting in the updates:

{\mathbf{U}}\leftarrow{\mathbf{U}}\circ\sqrt{\frac{({\mathbf{Y}}_{2}{\mathbf{A}}_{2}^{T}{\mathbf{V}})_{+}+({\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}{\mathbf{V}})_{-}}{({\mathbf{Y}}_{2}{\mathbf{A}}_{2}^{T}{\mathbf{V}})_{-}+({\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}{\mathbf{V}})_{+}}} (5.1)

and similarly for {\mathbf{V}}; see equations (3.28) and (3.29) of [42]. The work [42] proposes to use the updates (5.1) with {\mathbf{A}}_{1} and {\mathbf{A}}_{2} chosen to be Gaussian matrices; we denote these updates "WL" in the legends. The work [36] also proposed using the randomized rangefinder procedure [14], as in our Corollary 3.6, to choose the matrices {\mathbf{A}}_{1} and {\mathbf{A}}_{2}; we denote this method "TS" in the legends. Observe that these iterations are approximately two times faster than those of MU.

Figure 3 (panels (a)-(d)). (a,c): recovery from random Gaussian measurements, averaged over 5 runs; our two-sided MU methods lead to better convergence than WL [42]. (b,d): data-adapted methods with sketching matrices obtained with the randomized rangefinder algorithm (as in Corollary 3.6); our two-sided multiplicative updates perform slightly better than TS [36] after enough iterations.

In Figure 3 (a,b), we study the performance of compressed nonnegative matrix factorization for the Faces dataset with target rank r=6 and sketch size k=20. In terms of memory, the data matrix {\mathbf{X}} contains 1638400 elements, whereas the total number of elements in {\mathbf{A}} and {\mathbf{X}}{\mathbf{A}} is 89920, representing a memory ratio of approximately 5% in the one-sided case and 10% for the two-sided compression. On the Faces dataset, our methods, especially the data-adapted versions, attain almost the quality of MU on the uncompressed data while using only 5% of the memory (10% in the two-sided case).

In Figure 3 (c,d), we study the performance of compressed nonnegative matrix factorization for the 20News dataset with target rank r=20 and sketch size k=100. 20News is a "hard" dataset for NMF: even on the full uncompressed data, NMF achieves only about 0.2 cosine similarity (which can nevertheless be enough for meaningful topic modeling), while our compressed MU from data-adapted measurements achieves higher than 0.17 cosine similarity using only 2% of the memory required for the uncompressed MU in the one-sided case. Indeed, the number of elements in {\mathbf{X}} (including zeros) is 1146357108. The total number of elements in {\mathbf{Y}}_{1}, {\mathbf{Y}}_{2}, \mathbf{1}^{T}{\mathbf{X}}, {\mathbf{X}}\mathbf{1}, {\mathbf{A}}_{1}, {\mathbf{A}}_{2}, {\mathbf{U}}, and {\mathbf{V}} is 25005192. On the 20News dataset, our compressed MU from data-adapted measurements attains 85% of the similarity using only 2% of the memory (4% in the two-sided case) compared to the uncompressed NMF problem.

Since it might be infeasible to even run the uncompressed problem from a memory perspective, we do not specifically focus on time performance here. However, we note that while the uncompressed NMF typically requires fewer iterations to learn the factorization, the iterations themselves are considerably slower. In Figure 3 (c,d), it would take several hours to run the uncompressed experiment for 60,000 iterations, while the other methods take at most several minutes, so we show only the first 10^{3} iterations of uncompressed NMF. For the Faces dataset (Figure 3 (b)), it took 7 sec to run compressed MU and 6 sec to run GD for 60,000 iterations, and 10 times fewer iterations would have been enough for approximately the same quality. The uncompressed method took 8 sec for the plotted 10^{3} iterations, so at least 8 min would be required to run it for 60,000 iterations.

5.4. Real-world data: interpretable low rank decompositions

An important property of nonnegative low-rank factorizations is the interpretability of the learned components. In Figure 4, we briefly demonstrate that the components learned from the compressed data are also interpretable. That is, we show the columns of the fitted {\mathbf{V}} matrix reshaped back to 64\times 64 images, in the same setup as in Figure 3 (b), for the one-sided data-adapted measurements.

Figure 4 (panels (a)-(c)). Six "representative" faces from the Faces dataset learned from the compressed dataset of size ~5% of the initial data. A data-adapted compression matrix {\mathbf{A}} is used.

5.5. Real-world data: choosing regularization parameter ฮป\lambda

We have chosen the regularization parameter \lambda=0.1 for the one-sided experiments above. Here, we demonstrate that it is important, empirically as well as theoretically, to add a nonzero regularization term in the one-sided compression case. In Figure 5, we consider the compressed MU algorithm from one-sided data-adapted measurements for the 20News data with k=100 and r=20. We can see that regularization can have a beneficial effect on performance, and \lambda=0 compromises the convergence. At the same time, a too large \lambda can slow down the convergence or result in a slightly worse limiting loss.

Figure 5. Effect of the regularization parameter \lambda on the MU algorithm (4.6). The 20News dataset is compressed with data-adapted one-sided measurements; \sigma is chosen to be minimal so that the entries of {\mathbf{A}}^{T}{\mathbf{A}} are at least -\sigma. The absence of regularization compromises convergence, and too strong regularization results in a higher loss.

6. Conclusion and future directions

In this paper, we propose several formulations of the NMF problem that (a) work using only compressed initial data, where the compression is done with linear maps (sketches) that access the initial data only once or twice; (b) have optimal solutions that are provably compatible with the optimal solutions of the uncompressed NMF problem; and (c) are supplemented with memory-efficient algorithms that solve the compressed problems without returning to the initial large data or forming any matrices of the original size. The convergence of these algorithms is proved in the form standard for NMF-related algorithms, that is, by showing that the objective function (which we provably connect to the original uncompressed objective) does not increase under the updates. We supplement the theoretical results with experiments showing comparable nonnegative factorization performance using only ~5% of the initial data on synthetic, text, and image datasets.

There are multiple avenues for future work stemming from our approach. For the two-sided measurements, we currently do not have a theorem that holds for data matrices with approximately low nonnegative rank, as in the one-sided case. The experimental evidence clearly suggests that similar, more general results should hold in the two-sided case. Also, it would be interesting to explain theoretically why the two-sided compression is less sensitive to the regularization (in practice, we take \lambda=0 in the two-sided experiments, which significantly simplifies the updates from Corollary 4.3).

Then, it is important to study scalable versions of other nonnegative factorization algorithms besides multiplicative updates and projected gradient descent. We focused on multiplicative updates because of their relative popularity and simplicity, but other methods may be better adapted to sketched problems. A related question is to obtain theoretical guarantees for the methods proposed in [36], which empirically show comparable performance and typically faster convergence than our proposed algorithms.

Further, it is natural and meaningful to extend the framework to compressed versions of higher-order (tensor) problems. It has been recently shown [20, 39] that nonnegative tensor factorizations result in more interpretable decompositions for naturally high-order data, such as temporal or multi-agent data, than their matrix NMF counterparts. At the same time, tensor methods are even more computationally demanding and would benefit from scalable approximations. Scalable versions of other NMF-based algorithms, such as semi-supervised [1] or sparse NMF [20, 17], are also of interest.

References

  • [1] M. Ahn, R. Grotheer, J. Haddock, L. Kassab, A. Kryshchenko, K. Leonard, S. Li, A. Madushani, T. Merkh, D. Needell, E. Sizikova, and C. Wang. Semi-supervised nonnegative matrix factorization models for topic modeling in learning tasks. In Proc. 53rd Asilomar Conference on Signals, Systems and Computers, 2020.
  • [2] Y. Chen, H. Zhang, R. Liu, Z. Ye, and J. Lin. Experimental explorations on short text topic mining between LDA and NMF based schemes. Knowledge-Based Systems, 163:1โ€“13, 2019.
  • [3] M. A. Davenport and J. Romberg. An overview of low-rank matrix recovery from incomplete observations. IEEE Journal of Selected Topics in Signal Processing, 10(4):608โ€“622, 2016.
  • [4] C. H. Ding, T. Li, and M. I. Jordan. Convex and semi-nonnegative matrix factorizations. IEEE transactions on pattern analysis and machine intelligence, 32(1):45โ€“55, 2008.
  • [5] P. Drineas, M. W. Mahoney, S. Muthukrishnan, and T. Sarlรณs. Faster least squares approximation. Numerische mathematik, 117(2):219โ€“249, 2011.
  • [6] N. B. Erichson, A. Mendible, S. Wihlborn, and J. N. Kutz. Randomized nonnegative matrix factorization. Pattern Recognition Letters, 104:1โ€“7, 2018.
  • [7] H. Fawzi and P. A. Parrilo. Lower bounds on nonnegative rank via nonnegative nuclear norms. Mathematical Programming, 153:41โ€“66, 2015.
  • [8] M. Fazel, E. Candes, B. Recht, and P. Parrilo. Compressed sensing and robust recovery of low rank matrices. In 2008 42nd Asilomar Conference on Signals, Systems and Computers, pages 1043โ€“1047. IEEE, 2008.
  • [9] X.-R. Feng, H.-C. Li, R. Wang, Q. Du, X. Jia, and A. Plaza. Hyperspectral unmixing based on nonnegative matrix factorization: A comprehensive review. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 15:4414โ€“4436, 2022.
  • [10] M. Ghashami, E. Liberty, J. M. Phillips, and D. P. Woodruff. Frequent directions: Simple and deterministic matrix sketching. SIAM Journal on Computing, 45(5):1762โ€“1792, 2016.
  • [11] N. Gillis. The why and how of nonnegative matrix factorization. Regularization, Optimization, Kernels, and Support Vector Machines, 12, 2014.
  • [12] N. Gillis and F. Glineur. Nonnegative factorization and the maximum edge biclique problem. arXiv preprint arXiv:0810.4225, 2008.
  • [13] R. Grotheer, L. Huang, Y. Huang, A. Kryshchenko, O. Kryshchenko, P. Li, X. Li, E. Rebrova, K. Ha, and D. Needell. COVID-19 literature topic-based search via hierarchical NMF. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020. Association for Computational Linguistics, 2020.
  • [14] N. Halko, P.-G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review, 53(2):217โ€“288, 2011.
  • [15] C. A. Haselby, M. A. Iwen, D. Needell, M. Perlmutter, and E. Rebrova. Modewise operators, the tensor restricted isometry property, and low-rank tensor recovery. Applied and Computational Harmonic Analysis, 66:161โ€“192, 2023.
  • [16] K. Hayashi, S. G. Aksoy, G. Ballard, and H. Park. Randomized algorithms for symmetric nonnegative matrix factorization. arXiv preprint arXiv:2402.08134, 2024.
  • [17] P. O. Hoyer. Non-negative matrix factorization with sparseness constraints. Journal of machine learning research, 5(9), 2004.
  • [18] R. Huang, X. Li, and L. Zhao. Spectralโ€“spatial robust nonnegative matrix factorization for hyperspectral unmixing. IEEE Transactions on Geoscience and Remote Sensing, 57(10):8235โ€“8254, 2019.
  • [19] M. A. Iwen, D. Needell, E. Rebrova, and A. Zare. Lower memory oblivious (tensor) subspace embeddings with fewer random bits: modewise methods for least squares. SIAM Journal on Matrix Analysis and Applications, 42(1):376โ€“416, 2021.
  • [20] L. Kassab, A. Kryshchenko, H. Lyu, D. Molitor, D. Needell, E. Rebrova, and J. Yuan. Sparseness-constrained nonnegative tensor factorization for detecting topics at different time scales. Frontiers Applied Mathematics and Statistics, accepted.
  • [21] D. Kuang and H. Park. Fast rank-2 nonnegative matrix factorization for hierarchical document clustering. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 739–747, 2013.
  • [22] D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. Advances in neural information processing systems, 13, 2000.
  • [23] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–791, 1999.
  • [24] H. Lee, J. Yoo, and S. Choi. Semi-supervised nonnegative matrix factorization. IEEE Signal Processing Letters, 17(1):4โ€“7, 2009.
  • [25] Y. Li, H. L. Nguyen, and D. P. Woodruff. Turnstile streaming algorithms might as well be linear sketches. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pages 174โ€“183, 2014.
  • [26] M. W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends® in Machine Learning, 3(2):123–224, 2011.
  • [27] Y. Qian, C. Tan, D. Ding, H. Li, and N. Mamoulis. Fast and secure distributed nonnegative matrix factorization. IEEE Transactions on Knowledge and Data Engineering, 2020.
  • [28] Y. Qian, C. Tan, N. Mamoulis, and D. W. Cheung. DSANLS: Accelerating distributed nonnegative matrix factorization via sketching. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 450โ€“458, 2018.
  • [29] G. Raskutti and M. W. Mahoney. A statistical perspective on randomized sketching for ordinary least-squares. Journal of Machine Learning Research, 17(213):1โ€“31, 2016.
  • [30] J. Rennie. 20 Newsgroups, 2008.
  • [31] A. K. Saibaba and A. Miedlar. Randomized low-rank approximations beyond Gaussian random matrices. arXiv preprint arXiv:2308.05814, 2023.
  • [32] F. Samaria and A. Harter. Parameterisation of a stochastic model for human face identification. In Proceedings of 1994 IEEE workshop on applications of computer vision, pages 138โ€“142. IEEE, 1994.
  • [33] T. Sarlos. Improved approximation algorithms for large matrices via random projections. In 2006 47th annual IEEE symposium on foundations of computer science (FOCSโ€™06), pages 143โ€“152. IEEE, 2006.
  • [34] C. Shao and T. Hรถfer. Robust classification of single-cell transcriptome data by nonnegative matrix factorization. Bioinformatics, 33(2):235โ€“242, 2017.
  • [35] V. Sharan, K. S. Tai, P. Bailis, and G. Valiant. Compressed factorization: Fast and accurate low-rank factorization of compressively-sensed data. In International Conference on Machine Learning, pages 5690โ€“5700. PMLR, 2019.
  • [36] M. Tepper and G. Sapiro. Compressed nonnegative matrix factorization is fast and accurate. IEEE Transactions on Signal Processing, 64(9):2269โ€“2283, 2016.
  • [37] J. A. Tropp, A. Yurtsever, M. Udell, and V. Cevher. Practical sketching algorithms for low-rank matrix approximation. SIAM Journal on Matrix Analysis and Applications, 38(4):1454โ€“1485, 2017.
  • [38] S. A. Vavasis. On the complexity of nonnegative matrix factorization. SIAM Journal on Optimization, 20(3):1364โ€“1377, 2010.
  • [39] J. Vendrow, J. Haddock, and D. Needell. Neural nonnegative CP decomposition for hierarchical tensor analysis. In 2021 55th Asilomar Conference on Signals, Systems, and Computers, pages 1340โ€“1347. IEEE, 2021.
  • [40] J. Vendrow, J. Haddock, E. Rebrova, and D. Needell. On a guided nonnegative matrix factorization. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3265–3269. IEEE, 2021.
  • [41] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing: Theory and Applications. Cambridge University Press, 2011.
  • [42] F. Wang and P. Li. Efficient nonnegative matrix factorization with random projections. In Proceedings of the 2010 SIAM International Conference on Data Mining, pages 281โ€“292. SIAM, 2010.
  • [43] D. P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trendsยฎ in Theoretical Computer Science, 10(1โ€“2):1โ€“157, 2014.
  • [44] F. Woolfe, E. Liberty, V. Rokhlin, and M. Tygert. A fast randomized algorithm for the approximation of matrices. Applied and Computational Harmonic Analysis, 25(3):335โ€“366, 2008.
  • [45] F. Yahaya, M. Puigt, G. Delmaire, and G. Roussel. Gaussian compression stream: Principle and preliminary results, 2020.
  • [46] F. Yahaya, M. Puigt, G. Delmaire, and G. Roussel. Random projection streams for (weighted) nonnegative matrix factorization. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3280โ€“3284. IEEE, 2021.
  • [47] T. Zhang, B. Fang, Y. Y. Tang, G. He, and J. Wen. Topology preserving non-negative matrix factorization for face recognition. IEEE Transactions on Image Processing, 17(4):574โ€“584, 2008.
  • [48] Y. Zhao, H. Wang, and J. Pei. Deep non-negative matrix factorization architecture based on underlying basis images learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(6):1897โ€“1913, 2019.
  • [49] G. Zhou, A. Cichocki, and S. Xie. Fast nonnegative matrix/tensor factorization based on low-rank approximation. IEEE Transactions on Signal Processing, 60(6):2928โ€“2940, 2012.