
Eric F. Lock

Division of Biostatistics and Health Data Science, School of Public Health, University of Minnesota, Minneapolis, MN 55455, USA

Empirical Bayes Linked Matrix Decomposition

Abstract

Data for several applications in diverse fields can be represented as multiple matrices that are linked across rows or columns. This is particularly common in molecular biomedical research, in which multiple molecular “omics” technologies may capture different feature sets (e.g., corresponding to rows in a matrix) and/or different sample populations (corresponding to columns). This has motivated a large body of work on integrative matrix factorization approaches that identify and decompose low-dimensional signal that is shared across multiple matrices or specific to a given matrix. We propose an empirical variational Bayesian approach to this problem that has several advantages over existing techniques, including the flexibility to accommodate shared signal over any number of row or column sets (i.e., bidimensional integration), an intuitive model-based objective function that yields appropriate shrinkage for the inferred signals, and a relatively efficient estimation algorithm with no tuning parameters. A general result establishes conditions for the uniqueness of the underlying decomposition for a broad family of methods that includes the proposed approach. For scenarios with missing data, we describe an associated iterative imputation approach that is novel for the single-matrix context and a powerful approach for “blockwise” imputation (in which an entire row or column is missing) in various linked matrix contexts. Extensive simulations show that the method performs very well under different scenarios with respect to recovering underlying low-rank signal, accurately decomposing shared and specific signals, and accurately imputing missing data. The approach is applied to gene expression and miRNA data from breast cancer tissue and normal breast tissue, for which it gives an informative decomposition of variation and outperforms alternative strategies for missing data imputation.

keywords:
Data integration, dimension reduction, low-rank factorization, missing data imputation, variational Bayes

1 Introduction

Low-rank matrix factorization techniques are widely used for many machine learning tasks such as data compression and dimension reduction, denoising to approximate underlying signal, and matrix completion. Often, instead of a single matrix, the data for a given application takes the form of multiple matrices that are linked by rows and/or columns. For example, in Section 9 we consider data from The Cancer Genome Atlas, in which gene expression (mRNA) and microRNA (miRNA) data are available from different platforms for breast cancer tumor samples and normal breast tissue samples from unrelated individuals. Thus, the data can be represented as four matrices: (1) mRNA for cancer tissues, (2) mRNA for normal tissues, (3) miRNA for cancer tissues, and (4) miRNA for normal tissues; these matrices share both rows (here, molecular features) and columns (here, samples), giving a bidimensionally linked structure. Furthermore, these data have column-wise missingness, in which molecular data for one of the platforms (mRNA or miRNA) is entirely missing for some samples in each of the cancer and control cohorts. We are interested in imputing the missing data, as well as identifying patterns of variation that are shared or specific to the different data sources and sample cohorts; for these tasks, a low-rank factorization approach that integrates all of the matrices together while accounting for their differences is attractive.

There is a growing and dynamic literature on multi-matrix low-rank factorization approaches. The majority of existing methods assume that just one dimension, rows or columns, is shared across matrices, i.e., unidimensional integration. For example, the Joint and Individual Variation Explained (JIVE) approach [12, 19] decomposes low-rank structure that is shared across matrices from variation that is specific to each matrix. Several related approaches yield a similar decomposition using different strategies or assumptions [3, 23, 28, 26], while other approaches such as structural learning and integrative decomposition (SLIDE) [6] allow for structure that is partially shared across any subset of matrices. Such an approach is efficient and useful for interpretation, as for many applications identifying shared, partially shared, and specific underlying structure is of interest.

The bidimensional linked matrix factorization (BIDIFAC) approach [20, 13] was developed to accommodate bidimensionally linked integration scenarios, by identifying structure that is shared or specific across both row sets and column sets. The objective function for BIDIFAC involves a structured nuclear norm penalty, which approximates the data as a decomposition of low-rank modules that are shared across row or column subsets. This approach is attractive in the bidimensionally linked context because it has favorable convergence properties and generally leads to a uniquely-defined decomposition [13]; methods for unidimensional integration without shrinkage penalties typically require orthogonality [12, 3, 6, 29] or other constraints to assure uniqueness, which do not extend to the bidimensionally linked context. Furthermore, the application of the structured nuclear norm penalty for unidimensional integration (termed UNIFAC) has been shown to outperform alternative approaches with respect to accurately decomposing shared and specific structures [13] and imputing missing data [20] under certain scenarios. However, a drawback is that it tends to over-penalize the estimated structure, and is thus biased toward an over-conservative solution with respect to the underlying low-rank structure [29].

For a single matrix the tendency of the nuclear norm to over-shrink low-rank signal has been well-documented, and in contrast low-rank approximations without any further shrinkage tend to overfit [9]. Empirical Bayes approaches to low-rank factorization are an attractive compromise, in which the optimal levels of shrinkage are determined empirically within a model-based framework [27]. Under a variational Bayes approach, the true posterior distribution for a Bayesian model is approximated by minimizing the free energy (which equals the Kullback-Leibler divergence to the posterior up to an additive constant) under simpler distributional assumptions [4]. In particular, an empirical variational Bayes approach to low-rank matrix factorization under an assumption of normally distributed factor matrices has desirable theoretical properties, and the resulting approximation for a scenario without missing data can be efficiently computed via a closed-form shrinkage formula on the observed singular values of the matrix [18, 17].

In this article, we describe an empirical variational Bayesian approach to identify and decompose shared, partially shared, or specific low-rank structure in the context of unidimensionally or bidimensionally linked matrices. Moreover, we describe an accompanying iterative imputation approach analogous to an expectation-maximization (EM) algorithm that can handle entry-wise missing or row/column-wise missing values. The proposed approach is free of any tuning parameters, and is shown to share desirable properties of the BIDIFAC approach while addressing the issue of over-shrinkage.

2 Notation and Setting

Throughout this article bold uppercase characters ('$\mathbf{X}$') correspond to matrices and bold lowercase characters ('$\mathbf{x}$') correspond to vectors. Square brackets index the entries within a vector or matrix, e.g., '$\mathbf{X}[m,n]$'. The singular value decomposition (SVD) of a matrix $\mathbf{X}: M \times N$ is given by $\mathbf{U}_{\mathbf{X}} \mathbf{D}_{\mathbf{X}} \mathbf{V}_{\mathbf{X}}^{T}$, where the diagonal entries of $\mathbf{D}_{\mathbf{X}}$, $\mathbf{D}_{\mathbf{X}}[r,r]$, are the singular values of $\mathbf{X}$ ordered from largest to smallest for $r=1,\ldots,\min(M,N)$. The Frobenius norm is given by $||\cdot||_{F}$, so that $||\mathbf{X}||_{F}^{2}$ is the sum of the squared entries in $\mathbf{X}$; the nuclear norm is given by $||\cdot||_{*}$, which is the sum of the singular values in a matrix:

𝐗=r=1min(M,N)𝐃𝐗[r,r].||\mathbf{X}||_{*}=\sum_{r=1}^{\mbox{min}(M,N)}\mathbf{D}_{\mathbf{X}}[r,r].
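As a small illustration of this notation, the two norms could be computed as follows (a minimal numpy sketch, not from the paper):

```python
import numpy as np

# Minimal sketch: the squared Frobenius norm is the sum of squared entries,
# and the nuclear norm is the sum of singular values.
X = np.arange(12, dtype=float).reshape(4, 3)
frob_sq = np.sum(X**2)                                # ||X||_F^2
nuclear = np.sum(np.linalg.svd(X, compute_uv=False))  # ||X||_*
```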

We consider the setting of a collection of matrices $\mathbf{X}_{ij}: M_{i} \times N_{j}$ for $i=1,\ldots,I$ and $j=1,\ldots,J$. The column sets $N_{1},\ldots,N_{J}$ and row sets $M_{1},\ldots,M_{I}$ are consistent across the matrices. The subscript '$\bullet$' denotes concatenation across row or column sets, and the full data $\mathbf{X}_{\bullet\bullet}$ can be represented as a single $M \times N$ matrix where $M = M_{1}+\ldots+M_{I}$ and $N = N_{1}+\ldots+N_{J}$:

\mathbf{X}_{\bullet\bullet}=\left[\begin{array}{cccc}\mathbf{X}_{11}&\mathbf{X}_{12}&\ldots&\mathbf{X}_{1J}\\ \vdots&\vdots&&\vdots\\ \mathbf{X}_{I1}&\mathbf{X}_{I2}&\ldots&\mathbf{X}_{IJ}\end{array}\right]\;\text{ where }\mathbf{X}_{ij}\text{ are }M_{i}\times N_{j}. \quad (4)

As in our motivating application in Section 9, the $I$ row sets may correspond to features from different high-dimensional sources and the $J$ column sets may correspond to different sample cohorts.

3 Single-matrix results

Here we review some established results for a single matrix that are critical to what follows. Propositions 1 and 2 describe solutions to the least squares matrix approximation problem under a rank-constrained or nuclear norm penalized criterion, respectively.

Proposition 1.

For $\mathbf{X}: M \times N$, the minimizer of the least squares objective $||\mathbf{X}-\mathbf{S}||_{F}^{2}$ under the constraint $\text{rank}(\mathbf{S}) = R \leq \min(M,N)$ is given by $\mathbf{S}=\mathbf{U}_{\mathbf{X}}\mathbf{D}_{\mathbf{S}}\mathbf{V}_{\mathbf{X}}^{T}$ where $\mathbf{D}_{\mathbf{S}}$ is diagonal with $\mathbf{D}_{\mathbf{S}}[r,r]=\mathbf{D}_{\mathbf{X}}[r,r]$ for $r=1,\ldots,R$ and $\mathbf{D}_{\mathbf{S}}[r,r]=0$ for $r>R$.

Proposition 2.

For $\mathbf{X}: M \times N$, the minimizer of the nuclear norm penalized objective $\frac{1}{2}||\mathbf{X}-\mathbf{S}||_{F}^{2}+\lambda||\mathbf{S}||_{*}$ is given by $\mathbf{S}=\mathbf{U}_{\mathbf{X}}\mathbf{D}_{\mathbf{S}}\mathbf{V}_{\mathbf{X}}^{T}$ where $\mathbf{D}_{\mathbf{S}}$ is diagonal with $\mathbf{D}_{\mathbf{S}}[r,r]=\max(\mathbf{D}_{\mathbf{X}}[r,r]-\lambda,0)$ for all $r$.

Proposition 1 is well-known, and a proof of Proposition 2 can be found in Mazumder et al. [15]. Because of how they operate on the singular values of $\mathbf{X}$, Proposition 1 represents a hard-thresholding and Proposition 2 a soft-thresholding approach to low-rank approximation. Proposition 3 below establishes that the soft-thresholding operator also solves an objective with $L_{2}$-penalties on the factor components of $\mathbf{S}$, while Corollary 4 further establishes that it is the posterior mode of a Bayesian model with normal priors on the factor components.
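To make the contrast concrete, the following minimal numpy sketch applies the two operators of Propositions 1 and 2 to the singular values of a matrix (function names are ours, not from the paper):

```python
import numpy as np

def svd_hard_threshold(X, R):
    # Proposition 1: keep only the top-R singular values (hard thresholding).
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    d_thr = np.where(np.arange(len(d)) < R, d, 0.0)
    return U @ np.diag(d_thr) @ Vt

def svd_soft_threshold(X, lam):
    # Proposition 2: subtract lambda from every singular value, truncating at 0.
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    d_thr = np.maximum(d - lam, 0.0)
    return U @ np.diag(d_thr) @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 40))
S_hard = svd_hard_threshold(X, R=5)
S_soft = svd_soft_threshold(X, lam=2.0)
```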

Proposition 3.

For $\mathbf{X}: M \times N$,

min𝐔:M×H,𝐕:N×H𝐗𝐔𝐕TF2+λ(𝐔F2+𝐕F2)=min𝐒:rank(𝐒)H𝐗𝐒F2+2λ𝐒\displaystyle\underset{\mathbf{U}:M\times H,\mathbf{V}:N\times H}{\mbox{min}}||\mathbf{X}-\mathbf{U}\mathbf{V}^{T}||_{F}^{2}+\lambda(||\mathbf{U}||_{F}^{2}+||\mathbf{V}||_{F}^{2})=\underset{\mathbf{S}:\text{rank}(\mathbf{S})\leq H}{\mbox{min}}||\mathbf{X}-\mathbf{S}||_{F}^{2}+2\lambda||\mathbf{S}||_{*} (5)

Moreover, $\widehat{\mathbf{S}}=\widehat{\mathbf{U}}\widehat{\mathbf{V}}^{T}$, where $\widehat{\mathbf{S}}$ solves the right-hand side of (5) and $\{\widehat{\mathbf{U}},\widehat{\mathbf{V}}\}$ solves the left-hand side of (5).

Corollary 4.

For $\mathbf{X}: M \times N$, if $\mathbf{U}: M \times H$ and $\mathbf{V}: N \times H$ have independent Normal$(0,\sigma^{2}/\lambda)$ entries, and $\mathbf{X}=\mathbf{U}\mathbf{V}^{T}+\mathbf{E}$ where $\mathbf{E}$ has independent Normal$(0,\sigma^{2})$ entries, then the mode of the posterior distribution $p(\mathbf{U},\mathbf{V}\mid\mathbf{X})$ satisfies $\widehat{\mathbf{S}}=\widehat{\mathbf{U}}\widehat{\mathbf{V}}^{T}$, where $\widehat{\mathbf{S}}$ solves the right-hand side of (5).

A proof of Proposition 3 can be found in Mazumder et al. [15], and Corollary 4 follows by noting that the log of the posterior density for the specified model is proportional to the negative of the left-hand side of (5). While the results hold for general $H$, in what follows we set $H$ as the largest possible rank $H=\min(M,N)$; the actual estimated rank will tend to be smaller due to singular value thresholding (as in Proposition 2). The distribution of the singular values of the error matrix $\mathbf{E}$ has been well-studied, including the following results on the first singular value under general conditions from Rudelson and Vershynin [21]:

Proposition 5.

If $\mathbf{E}: M \times N$ has independent entries with mean $0$, variance $\sigma^{2}$ and finite fourth moment, then $\mathbf{D}_{\mathbf{E}}[1,1]\approx\sigma(\sqrt{M}+\sqrt{N})$ as $M,N\rightarrow\infty$; further, if the entries are Gaussian, $\mathbb{E}(\mathbf{D}_{\mathbf{E}}[1,1])\leq\sigma(\sqrt{M}+\sqrt{N})$ for any $M,N$.

These results can motivate the choice of the nuclear norm penalty $\lambda$. A conservative approach, if the error variance $\sigma^{2}$ is known, is to select $\lambda=\sigma(\sqrt{M}+\sqrt{N})$; per Propositions 5 and 2, this will tend to shrink to $0$ any singular values in $\mathbf{X}$ that result from error with no true low-rank signal. Another approach is to adopt an empirical Bayes estimate for $\lambda$ based on the model in Corollary 4. However, a general limitation of the nuclear norm approach to low-rank matrix reconstruction is the use of a universal threshold on all singular values; in practice larger singular values should be penalized less, as they capture a relatively lower ratio of error and higher signal [22]. An analogous interpretation is that the entries of $\mathbf{U}$ and $\mathbf{V}$ should not have identical variance across columns, and this can be relaxed by a model in which they have column-specific variances: $\mathbf{U}[\cdot,r]\sim\text{Normal}(\mathbf{0},\tau^{2}_{\mathbf{U},r}\mathbf{I})$ and $\mathbf{V}[\cdot,r]\sim\text{Normal}(\mathbf{0},\tau^{2}_{\mathbf{V},r}\mathbf{I})$ for $r=1,\ldots,H$. (For most scenarios $\tau^{2}_{\mathbf{U},r}$ and $\tau^{2}_{\mathbf{V},r}$ are not independently identifiable, as the level of signal is determined by their product.)
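A quick numerical check of this conservative choice (a hedged sketch with illustrative values only): for a pure-noise matrix, the largest singular value lands close to $\sigma(\sqrt{M}+\sqrt{N})$, so using that value as $\lambda$ shrinks noise-only singular values to zero.

```python
import numpy as np

# Simulate a pure-noise matrix and compare its top singular value to the threshold.
rng = np.random.default_rng(1)
M, N, sigma = 500, 300, 2.0
E = rng.normal(scale=sigma, size=(M, N))
top_sv = np.linalg.svd(E, compute_uv=False)[0]
lam = sigma * (np.sqrt(M) + np.sqrt(N))
print(f"largest singular value: {top_sv:.1f}, conservative threshold: {lam:.1f}")
```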

For given $\boldsymbol{\tau}=\{\tau_{\mathbf{U},r}\tau_{\mathbf{V},r}\}_{r=1}^{H}$, the approximation resulting from the posterior mode for $p(\mathbf{U},\mathbf{V}\mid\mathbf{X},\boldsymbol{\tau})$ is given by thresholding the singular values of $\mathbf{X}$, analogous to Corollary 4. To determine $\boldsymbol{\tau}$, directly maximizing the joint likelihood $p(\mathbf{X},\mathbf{U},\mathbf{V}\mid\boldsymbol{\tau})$ over $\mathbf{U},\mathbf{V}$ and $\boldsymbol{\tau}$ degenerates to $\boldsymbol{\tau}\rightarrow\mathbf{0}$ at the global solution, though it can have a reasonable local mode that is a singular value thresholding operator [16]. Further, a marginal empirical Bayes estimator that optimizes $p(\mathbf{X}\mid\boldsymbol{\tau})$ (effectively integrating over $\mathbf{U}$ and $\mathbf{V}$) is difficult to obtain computationally or analytically. As an alternative, one can use an empirical variational Bayes (EVB) approach. In general, for a variational Bayes approach [1], the true posterior of a Bayesian model with parameters $\Theta$ and data $\mathbf{y}$, $p(\Theta\mid\mathbf{y})$, is approximated by another distribution $q(\Theta)$ under simplifying assumptions to minimize the free energy

F(q)=Eqlogq(Θ)p(Θ,𝐲)=Eqlogq(Θ)p(Θ𝐲)logp(𝐲),F(q)=E_{q}\mbox{log}\frac{q(\Theta)}{p(\Theta,\mathbf{y})}=E_{q}\mbox{log}\frac{q(\Theta)}{p(\Theta\mid\mathbf{y})}-\mbox{log}\;p(\mathbf{y}),

where $E_{q}(\cdot)$ is the expected value with respect to the distribution defined by $q(\cdot)$. In the context of low-rank approximation, a useful simplifying assumption for $q(\cdot)$ is that the left and right factor matrices are independent, $q(\mathbf{U},\mathbf{V})=q_{u}(\mathbf{U})q_{v}(\mathbf{V})$. Thus, for hyperparameters $\boldsymbol{\tau}$ and $\sigma$ the free energy to minimize is

F(qσ,𝝉)=Eqlogqu(𝐔)qv(𝐕)p(𝐗𝐔,𝐕,σ)p(𝐔𝝉𝐔)p(𝐕𝝉𝐕),\displaystyle F(q\mid\sigma,\mbox{\boldmath{${\tau}$}})=E_{q}\mbox{log}\frac{q_{u}(\mathbf{U})q_{v}(\mathbf{V})}{p(\mathbf{X}\mid\mathbf{U},\mathbf{V},\sigma)p(\mathbf{U}\mid\mbox{\boldmath{${\tau}$}}_{\mathbf{U}})p(\mathbf{V}\mid\mbox{\boldmath{${\tau}$}}_{\mathbf{V}})}, (6)

and a point estimate for the low-rank structure $\hat{\mathbf{S}}$ can be obtained via its expected value under $q(\cdot)$:

𝐒^=Eq(𝐔𝐕T).\displaystyle\hat{\mathbf{S}}=E_{q}(\mathbf{U}\mathbf{V}^{T}). (7)

Under an empirical variational Bayes approach, $\boldsymbol{\tau}$ and $\sigma$ are also estimated by minimizing the free energy in (6). For fully observed $\mathbf{X}$, Nakajima et al. [18] showed that the global solution to the empirical variational Bayes objective can be obtained in closed form as a singular value thresholding operator; their result for fixed $\sigma$ is given in Theorem 6, and for estimation of $\sigma$ in Theorem 7.

Theorem 6.

The approximation $\hat{\mathbf{S}}$ in (7) obtained by minimizing (6) over $q$ and $\boldsymbol{\tau}$ is $\hat{\mathbf{S}}=\mathbf{U}_{\mathbf{X}}\mathbf{D}_{\mathbf{S}}\mathbf{V}_{\mathbf{X}}^{T}$ where $\mathbf{D}_{\mathbf{S}}$ is diagonal with

\mathbf{D}_{\mathbf{S}}[r,r]=0\text{ if }\mathbf{D}_{\mathbf{X}}[r,r]<\sigma\sqrt{M+N+\sqrt{MN}(\kappa+1/\kappa)}

and

𝐃𝐒[r,r]=(𝐃𝐗[r,r]2(M+N)σ2+(𝐃𝐗[r,r]2(M+N)σ2)24MNσ4)/(2𝐃𝐗[r,r]) otherwise.\mathbf{D}_{\mathbf{S}}[r,r]=\left(\mathbf{D}_{\mathbf{X}}[r,r]^{2}-(M+N)\sigma^{2}+\sqrt{(\mathbf{D}_{\mathbf{X}}[r,r]^{2}-(M+N)\sigma^{2})^{2}-4MN\sigma^{4}}\right)/(2\mathbf{D}_{\mathbf{X}}[r,r])\text{ otherwise.}

The value $\kappa$ is given as the zero-cross point of the function

f(x)=x\sqrt{N/M}\,\log\left(x\sqrt{M/N}+1\right)+x\sqrt{M/N}\,\log\left(x\sqrt{N/M}+1\right)-1.
Theorem 7.

Assume without loss of generality that $M>N$ and let $\alpha=N/M$. The estimate $\hat{\sigma}$ that minimizes (6) is the global minimizer of

Ψ(σ)=r=1Nψ1(𝐃X[r,r]2Mσ2),\Psi(\sigma)=\sum_{r=1}^{N}\psi_{1}\left(\frac{\mathbf{D}_{X}[r,r]^{2}}{M\sigma^{2}}\right),

where

\psi_{1}(x)=x-\log x+1_{\{x>c\}}\psi_{2}(x),
\psi_{2}(x)=\log(\sqrt{\alpha}\psi_{3}(x)+1)+\alpha\log(\psi_{3}(x)/\sqrt{\alpha}+1)-\alpha\psi_{3}(x),

and

ψ3(x)=12α(x1α+(x(1+α))24α).\psi_{3}(x)=\frac{1}{2\sqrt{\alpha}}\left(x-1-\alpha+\sqrt{(x-(1+\alpha))^{2}-4\alpha}\right).

Together, Theorems 6 and 7 provide a powerful and efficient way to approximate low-rank signal without any tuning parameters. The only substantial computing requirement is obtaining the SVD of $\mathbf{X}$, as the other steps involve univariate functions that are trivial to solve.
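The following is a rough numpy/scipy sketch of the Theorem 6 shrinkage for a given noise level $\sigma$ (which could, for example, be estimated as in Theorem 7); the function name is ours, and the zero-cross point $\kappa$ is found numerically rather than analytically:

```python
import numpy as np
from scipy.optimize import brentq

def evb_shrink_singular_values(d, M, N, sigma):
    # Hedged sketch of the EVB shrinkage in Theorem 6 (Nakajima et al.).
    # d: array of singular values of X; sigma: noise standard deviation (assumed known).
    def f(x):
        return (x * np.sqrt(N / M) * np.log(x * np.sqrt(M / N) + 1)
                + x * np.sqrt(M / N) * np.log(x * np.sqrt(N / M) + 1) - 1)
    kappa = brentq(f, 1e-12, 1e6)  # zero-cross point of f
    thresh = sigma * np.sqrt(M + N + np.sqrt(M * N) * (kappa + 1 / kappa))
    d = np.asarray(d, dtype=float)
    out = np.zeros_like(d)
    keep = d >= thresh  # singular values below the threshold are set to zero
    term = d[keep] ** 2 - (M + N) * sigma ** 2
    out[keep] = (term + np.sqrt(term ** 2 - 4 * M * N * sigma ** 4)) / (2 * d[keep])
    return out

# Example: shrink the singular values of a noisy low-rank matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 150)) + rng.normal(size=(200, 150))
U, dvals, Vt = np.linalg.svd(X, full_matrices=False)
S_hat = U @ np.diag(evb_shrink_singular_values(dvals, 200, 150, sigma=1.0)) @ Vt
```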

4 Bidimensional linked matrix factorization

BIDIFAC+ [13] was developed to decompose bidimensionally linked data as in (4) into modules of low-rank structure that are shared across column sets and/or row sets. That is,

\mathbf{X}_{\bullet\bullet}\approx\sum_{k=1}^{K}\mathbf{S}_{\bullet\bullet}^{(k)}, \quad (8)

where each $\mathbf{S}_{\bullet\bullet}^{(k)}$ is a concatenation of blocks $\mathbf{S}^{(k)}_{ij}$ as in (4) and

\mathbf{S}^{(k)}_{ij}=\begin{cases}\mathbf{0}_{M_{i}\times N_{j}}&\text{if }\mathbf{R}[i,k]=0\text{ or }\mathbf{C}[j,k]=0\\ \mathbf{U}_{i}^{(k)}\mathbf{V}_{j}^{(k)T}&\text{if }\mathbf{R}[i,k]=1\text{ and }\mathbf{C}[j,k]=1.\end{cases}

Here, $\mathbf{R}$ and $\mathbf{C}$ are binary matrices that indicate the presence of module $k$ ($\mathbf{S}_{\bullet\bullet}^{(k)}$) across the row and column sets; that is, $\mathbf{S}_{\bullet\bullet}^{(k)}$ is low-rank structure specific to the submatrix defined by the row sets identified in $\mathbf{R}[\cdot,k]$ and the column sets identified in $\mathbf{C}[\cdot,k]$. This general framework subsumes several other integrative decompositions of structured variation. For example, in the unidimensionally linked case with shared columns ($J=1$), JIVE [12] identifies components that are either shared across all matrices ($\mathbf{R}[i,k]=1$ for all $i$) or specific to a single matrix ($\mathbf{R}[i,k]=1$ for just one $i$), while SLIDE [6] allows for components that are shared across any number of matrices. The BIDIFAC method [20] is a special case in which modules are either globally shared ($\mathbf{R}[i,k]=\mathbf{C}[j,k]=1$ for all $i,j$), shared across all rows for a given column set ($\mathbf{R}[i,k]=1$ for all $i$ and $\mathbf{C}[j,k]=1$ for one $j$), shared across all columns for a given row set ($\mathbf{C}[j,k]=1$ for all $j$ and $\mathbf{R}[i,k]=1$ for one $i$), or specific to one matrix ($\mathbf{R}[i,k]=1$ and $\mathbf{C}[j,k]=1$ for just one $i,j$).
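For concreteness, in the $I=J=2$ case of our motivating application these indicators could be written as below (an illustrative sketch; in this small case the BIDIFAC module set coincides with the full enumeration of $K=9$ modules):

```python
import numpy as np

# Columns of R and C correspond to modules k = 1,...,9 for I = J = 2:
# k=1 global; k=2,3 shared across both row sets for one column set;
# k=4,5 shared across both column sets for one row set; k=6,...,9 individual.
R = np.array([[1, 1, 1, 1, 0, 1, 1, 0, 0],
              [1, 1, 1, 0, 1, 0, 0, 1, 1]])
C = np.array([[1, 1, 0, 1, 1, 1, 0, 1, 0],
              [1, 0, 1, 1, 1, 0, 1, 0, 1]])
```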

The BIDIFAC and BIDIFAC+ approaches solve a structured nuclear norm objective:

\underset{\{\mathbf{S}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}}{\text{argmin}}\;\;\frac{1}{2}||\mathbf{X}_{\bullet\bullet}-\sum_{k=1}^{K}\mathbf{S}_{\bullet\bullet}^{(k)}||^{2}_{F}+\sum_{k=1}^{K}\lambda_{k}||\mathbf{S}_{\bullet\bullet}^{(k)}||_{*}, \quad (9)

subject to $\mathbf{S}^{(k)}_{ij}=\mathbf{0}_{M_{i}\times N_{j}}$ if $\mathbf{R}[i,k]=0$ or $\mathbf{C}[j,k]=0$. For fixed $\mathbf{R}$ and $\mathbf{C}$, objective (9) may be solved with an iterative soft singular value thresholding algorithm that cycles through the estimation of each $\mathbf{S}_{\bullet\bullet}^{(k)}$ with applications of Proposition 3. For smaller $I$ and $J$, $\mathbf{R}$ and $\mathbf{C}$ can enumerate all possible submatrices of the row and column sets; the solution may still be sparse, with $\mathbf{S}_{\bullet\bullet}^{(k)}=\mathbf{0}$ if the submatrix corresponding to module $k$ has no particular shared structure. Alternatively, if the numbers of row and column sets $I$ and $J$ are large, Lock et al. [13] describe an algorithm to adaptively estimate $\mathbf{R}$ and $\mathbf{C}$ to give the submatrices with the most significant low-rank structure during estimation.

The choice of the module-specific penalty parameters $\lambda_{k}$ is critical. By default, $\lambda_{k}$ is selected by an approach motivated by Proposition 5; under the assumption that each matrix $\mathbf{X}_{ij}$ has error variance $1$, $\lambda_{k}$ depends on the total number of non-zero rows and columns in its submatrix,

λk=i=1I𝐑[i,k]Mi+j=1J𝐂[j,k]Nj,\lambda_{k}=\sqrt{\sum_{i=1}^{I}\mathbf{R}[i,k]M_{i}}+\sqrt{\sum_{j=1}^{J}\mathbf{C}[j,k]N_{j}},

for $k=1,\ldots,K$. This approach will shrink to $0$ any singular values in the submatrix that are due to error only. In practice each data matrix $\mathbf{X}_{ij}$ is scaled to have error variance $1$ beforehand; if the error variance is unknown, it can be estimated conservatively by the total variance of $\mathbf{X}_{ij}$ or by other approaches such as the median absolute deviation estimator of Gavish and Donoho [5].
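A minimal sketch of this default choice for a single module (the helper name and example sizes are ours, not from the paper):

```python
import numpy as np

def default_lambda(R_col, C_col, M_sizes, N_sizes):
    # Default BIDIFAC penalty for one module: sqrt(total active rows) + sqrt(total active columns),
    # assuming each X_ij has been scaled to error variance 1.
    rows = sum(m for r, m in zip(R_col, M_sizes) if r == 1)
    cols = sum(n for c, n in zip(C_col, N_sizes) if c == 1)
    return np.sqrt(rows) + np.sqrt(cols)

# Example: a module spanning both row sets but only the first column set.
lam_k = default_lambda(R_col=[1, 1], C_col=[1, 0], M_sizes=[2000, 500], N_sizes=[300, 250])
```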

5 Empirical variational BIDIFAC (EV-BIDIFAC)

While the BIDIFAC+ approach has several advantages, one weakness is the required standardization by the noise variance of each data matrix, which is estimated beforehand rather than within a cohesive framework. More critically, the use of nuclear norm penalties generally results in over-shrinkage of the signal, and the default theoretically-motivated choice of the $\lambda_{k}$'s is particularly conservative. Thus, we aim to develop a more flexible empirical variational Bayes approach to the decomposition of bidimensionally linked matrices, analogous to that for a single matrix in Section 3. Our pseudo-Bayesian model is as follows for $i=1,\ldots,I$ and $j=1,\ldots,J$:

𝐗ij=k=1K𝐔i(k)𝐕j(k)T+𝐄i,j where 𝐄i,j[l,m]iidNormal(0,σij2),\displaystyle\mathbf{X}_{ij}=\sum_{k=1}^{K}\mathbf{U}_{i}^{(k)}\mathbf{V}_{j}^{(k)T}+\mathbf{E}_{i,j}\text{ where }\mathbf{E}_{i,j}[l,m]\overset{iid}{\sim}\text{Normal}(0,\sigma^{2}_{ij}), (10)
𝐔i(k)[,r]=𝟎 if 𝐑[i,k]=0 and 𝐔i(k)[,r]Normal(𝟎,σij𝝉u2[k,r]𝐈) if 𝐑[i,k]=1,\displaystyle\mathbf{U}_{i}^{(k)}[\cdot,r]=\mathbf{0}\text{ if }\mathbf{R}[i,k]=0\text{ and }\mathbf{U}_{i}^{(k)}[\cdot,r]\sim\text{Normal}(\mathbf{0},\sigma_{ij}\mbox{\boldmath{${\tau}$}}^{2}_{u}[k,r]\mathbf{I})\text{ if }\mathbf{R}[i,k]=1,
𝐕j(k)[,r]=𝟎 if 𝐂[j,k]=0 and 𝐕j(k)[,r]Normal(𝟎,σij𝝉v2[k,r]𝐈) if 𝐂[j,k]=1,\displaystyle\mathbf{V}_{j}^{(k)}[\cdot,r]=\mathbf{0}\text{ if }\mathbf{C}[j,k]=0\text{ and }\mathbf{V}_{j}^{(k)}[\cdot,r]\sim\text{Normal}(\mathbf{0},\sigma_{ij}\mbox{\boldmath{${\tau}$}}^{2}_{v}[k,r]\mathbf{I})\text{ if }\mathbf{C}[j,k]=1,

where $\mathbf{I}$ is an identity matrix of appropriate dimension. Using repeated applications of Corollary 4, the BIDIFAC+ objective can be shown to give the posterior mode of the model specified in (10) in which $\boldsymbol{\sigma}=\{\sigma_{ij}: i=1,\ldots,I,\; j=1,\ldots,J\}$ and $\boldsymbol{\tau}=\{\boldsymbol{\tau}_{u},\boldsymbol{\tau}_{v}\}$ are fixed, with $\boldsymbol{\tau}^{2}_{u}[k,r]=\boldsymbol{\tau}^{2}_{v}[k,r]=1/\lambda_{k}$ for all $r$. We extend this framework by allowing $\boldsymbol{\sigma}$ and $\boldsymbol{\tau}$ to be estimated within a cohesive empirical variational Bayes framework. We approximate the true posterior with a distribution $q(\cdot)$ for which the factor matrices are mutually independent:

q(𝐔,𝐕)=k=1Kqu,k(𝐔(k))qv,k(𝐕(k)).q(\mathbf{U},\mathbf{V})=\prod_{k=1}^{K}q_{u,k}(\mathbf{U}^{(k)})q_{v,k}(\mathbf{V}^{(k)}).

The corresponding free energy is given by

F(q\mid\boldsymbol{\sigma},\boldsymbol{\tau})=E_{q}\log\frac{\prod_{k=1}^{K}q_{u,k}(\mathbf{U}^{(k)})q_{v,k}(\mathbf{V}^{(k)})}{p(\mathbf{X}_{\bullet\bullet}\mid\mathbf{U},\mathbf{V},\boldsymbol{\sigma})\,p(\{\mathbf{U}^{(k)}\}_{k=1}^{K}\mid\boldsymbol{\tau}_{\mathbf{U}})\,p(\{\mathbf{V}^{(k)}\}_{k=1}^{K}\mid\boldsymbol{\tau}_{\mathbf{V}})}, \quad (11)

with the densities $p(\cdot)$ in the denominator as specified in model (10). The resulting low-rank fit and decomposition are then given by

\hat{\mathbf{S}}_{\bullet\bullet}=E_{q}\left(\sum_{k=1}^{K}\mathbf{U}_{\bullet}^{(k)}\mathbf{V}_{\bullet}^{(k)T}\right)=\sum_{k=1}^{K}\hat{\mathbf{S}}_{\bullet\bullet}^{(k)} \quad (12)

where for $k=1,\ldots,K$,

\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}=E_{q_{u},q_{v}}\left(\mathbf{U}_{\bullet}^{(k)}\mathbf{V}_{\bullet}^{(k)T}\right).

Algorithm 1 describes an iterative approach to compute the empirical variational Bayes solution. This algorithm converges to a blockwise minimum of the free energy (11) over $q$ and $\boldsymbol{\tau}$ with $\boldsymbol{\sigma}$ fixed at the distinct minimizer of (11) for each $\mathbf{X}_{ij}$ separately. Alternatively, one can update the variances $\boldsymbol{\sigma}$ over the iterative algorithm to minimize the free energy under the joint model. In Algorithm 1, the number of modules $K$ and the row- and column-set inclusions for the modules, $\mathbf{R}$ and $\mathbf{C}$, are assumed to be fixed. For a smaller number of linked matrices, one can set $\mathbf{R}$ and $\mathbf{C}$ to enumerate all possible submatrices of shared, partially shared, or specific structure: $K=(2^{I}-1)(2^{J}-1)$ (see the enumeration sketch below). The estimated decomposition may still be sparse in that $\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}=\mathbf{0}$ if module $k$ has no distinct low-rank structure, as demonstrated in Section 8.3. If enumerating all submatrices of the row and column sets is not feasible due to a large number of matrices, one can specify $K$, $\mathbf{R}$ and $\mathbf{C}$ based on prior knowledge. Alternatively, we may approximate the fully enumerated decomposition with a large upper bound on the number of distinct modules, $K$, and the row- and column-set inclusions $\mathbf{R}$ and $\mathbf{C}$ estimated empirically as in Lock et al. [13].
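A small sketch of this full enumeration (the helper name is ours, not from the paper):

```python
from itertools import product

def enumerate_modules(I, J):
    # Enumerate all K = (2^I - 1)(2^J - 1) module indicator pairs; returns lists of binary
    # row-set and column-set inclusion vectors (the columns of R and C).
    row_patterns = [list(p) for p in product([0, 1], repeat=I) if any(p)]
    col_patterns = [list(p) for p in product([0, 1], repeat=J) if any(p)]
    R_cols, C_cols = [], []
    for r in row_patterns:
        for c in col_patterns:
            R_cols.append(r)
            C_cols.append(c)
    return R_cols, C_cols

R_cols, C_cols = enumerate_modules(I=2, J=2)  # K = 9 modules for the 2 x 2 case
```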

Algorithm 1 Estimation of EV-BIDIFAC model for fully observed data

1. Estimate $\sigma_{ij}$ for each $\mathbf{X}_{ij}$, as in Theorem 7.

2. Define $\tilde{\mathbf{X}}_{\bullet\bullet}$ as in (4), with $\tilde{\mathbf{X}}_{ij}=\mathbf{X}_{ij}/\sigma_{ij}$ for $i=1,\ldots,I$ and $j=1,\ldots,J$.

3. Initialize $\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}=\mathbf{0}$ for $k=1,\ldots,K$.

4. For $k=1,\ldots,K$:

   (a) Let $\tilde{\mathbf{X}}_{\bullet\bullet}^{(k)}$ be the submatrix of $\tilde{\mathbf{X}}_{\bullet\bullet}-\sum_{k^{\prime}\neq k}\tilde{\mathbf{S}}_{\bullet\bullet}^{(k^{\prime})}$ over the row sets indicated by $\mathbf{R}[\cdot,k]$ and the column sets indicated by $\mathbf{C}[\cdot,k]$.

   (b) Update the non-zero submatrix of $\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}$ by applying the singular value thresholding of Theorem 6 to $\tilde{\mathbf{X}}_{\bullet\bullet}^{(k)}$.

5. Repeat Step 4 until convergence of the $\{\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}$.

6. Unscale the estimated structure: $\hat{\mathbf{S}}^{(k)}_{ij}=\sigma_{ij}\tilde{\mathbf{S}}^{(k)}_{ij}$ for $k=1,\ldots,K$, $i=1,\ldots,I$, $j=1,\ldots,J$.
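For illustration, the following rough Python sketch mirrors Algorithm 1 under simplifying assumptions; it reuses evb_shrink_singular_values() from the earlier sketch of Theorem 6, and all function names are ours rather than from an existing package:

```python
import numpy as np

def ev_bidifac(X_blocks, R_cols, C_cols, sigma, n_iter=100, tol=1e-8):
    # Hedged sketch of Algorithm 1. X_blocks[i][j] is X_ij, sigma[i][j] its noise SD
    # (e.g., estimated per Theorem 7), and R_cols/C_cols give each module's inclusions.
    I, J = len(X_blocks), len(X_blocks[0])
    Ms = [X_blocks[i][0].shape[0] for i in range(I)]
    Ns = [X_blocks[0][j].shape[1] for j in range(J)]
    row_ix = np.cumsum([0] + Ms)  # block boundaries in the concatenated matrix
    col_ix = np.cumsum([0] + Ns)
    # Step 2: scale each block to unit error variance and concatenate.
    Xt = np.block([[X_blocks[i][j] / sigma[i][j] for j in range(J)] for i in range(I)])
    K = len(R_cols)
    S = [np.zeros_like(Xt) for _ in range(K)]  # Step 3
    for _ in range(n_iter):  # Steps 4-5
        change = 0.0
        for k in range(K):
            resid = Xt - sum(S[kk] for kk in range(K) if kk != k)
            rows = np.concatenate([np.arange(row_ix[i], row_ix[i + 1])
                                   for i in range(I) if R_cols[k][i]])
            cols = np.concatenate([np.arange(col_ix[j], col_ix[j + 1])
                                   for j in range(J) if C_cols[k][j]])
            sub = resid[np.ix_(rows, cols)]
            U, d, Vt = np.linalg.svd(sub, full_matrices=False)
            d_shr = evb_shrink_singular_values(d, len(rows), len(cols), sigma=1.0)
            new_k = np.zeros_like(Xt)
            new_k[np.ix_(rows, cols)] = U @ np.diag(d_shr) @ Vt
            change += np.sum((new_k - S[k]) ** 2)
            S[k] = new_k
        if change < tol:
            break
    # Step 6: unscale each block of each module.
    for k in range(K):
        for i in range(I):
            for j in range(J):
                S[k][row_ix[i]:row_ix[i + 1], col_ix[j]:col_ix[j + 1]] *= sigma[i][j]
    return S
```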

6 Uniqueness

The uniqueness of decompositions into shared, partially shared, and specific low-rank structure requires careful consideration, as the terms in the model are generally not uniquely defined without further constraints. For unidimensional decompositions of low-rank structure, conditions of orthogonality are commonly used to assure that the different components are uniquely identified [12, 3, 6, 29]; however, such conditions are not straightforward to extend to decompositions for bidimensionally linked data. Alternatively, Lock et al. [13] showed that the terms in the BIDIFAC+ solution are uniquely defined under certain conditions, without orthogonality, as the minimizer of the structured nuclear norm objective in (9). Here we extend this result to a much broader class of penalties that includes the EV-BIDIFAC decomposition.

For fixed row-set and column-set indicators for the modules, $\mathbf{R}$ and $\mathbf{C}$, let $\mathbb{S}_{\hat{\mathbf{X}}}$ be the set of possible decompositions that yield a given approximation $\hat{\mathbf{X}}_{\bullet\bullet}$:

\mathbb{S}_{\hat{\mathbf{X}}}=\left\{\{\mathbf{S}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}\mid\hat{\mathbf{X}}_{\bullet\bullet}=\sum_{k=1}^{K}\mathbf{S}_{\bullet\bullet}^{(k)}\right\}.

In most settings the cardinality of $\mathbb{S}_{\hat{\mathbf{X}}}$ is infinite without further conditions. Define $P_{\boldsymbol{\lambda}}(\cdot)$ for a vector $\boldsymbol{\lambda}$ as a weighted sum of the singular values of a matrix:

P_{\boldsymbol{\lambda}}(\mathbf{S})=\sum_{r}\boldsymbol{\lambda}[r]\,\mathbf{D}_{\mathbf{S}}[r,r],

and let $f_{\text{pen}}(\cdot)$ give a structured penalty on the different modules of a decomposition:

f_{\text{pen}}(\{\mathbf{S}_{\bullet\bullet}^{(k)}\}_{k=1}^{K})=\sum_{k=1}^{K}P_{\boldsymbol{\lambda}_{k}}(\mathbf{S}_{\bullet\bullet}^{(k)}),

with non-zero entries for each $\boldsymbol{\lambda}_{k}$. Theorem 8 gives sufficient conditions for the minimizer of this penalty to be uniquely determined.
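Before stating the result, a small numeric sketch of this structured penalty may help fix ideas (helper names are ours, not from the paper):

```python
import numpy as np

def weighted_nuclear_norm(S, lam):
    # P_lambda(S): weighted sum of singular values, with weight lam[r] on the r-th value.
    d = np.linalg.svd(S, compute_uv=False)
    r = min(len(lam), len(d))
    return float(np.sum(np.asarray(lam)[:r] * d[:r]))

def f_pen(S_modules, lam_list):
    # Structured penalty: sum of weighted nuclear norms over the modules of a decomposition.
    return sum(weighted_nuclear_norm(S, lam) for S, lam in zip(S_modules, lam_list))
```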

Theorem 8.

Consider $\{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}\in\mathbb{S}_{\hat{\mathbf{X}}}$ and let $\hat{\mathbf{U}}_{\bullet}^{(k)}\hat{\mathbf{D}}^{(k)}\hat{\mathbf{V}}_{\bullet}^{(k)T}$ give the SVD of $\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}$ for $k=1,\ldots,K$. The following three properties uniquely identify $\{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}$.

1. $\{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}$ minimizes $f_{\text{pen}}(\cdot)$ over $\mathbb{S}_{\hat{\mathbf{X}}}$,

2. $\{\hat{\mathbf{U}}_{i}^{(k)}[\cdot,r]:\mathbf{R}[i,k]=1\text{ and }\hat{\mathbf{D}}^{(k)}[r,r]>0\}$ are linearly independent for $i=1,\ldots,I$,

3. $\{\hat{\mathbf{V}}_{j}^{(k)}[\cdot,r]:\mathbf{C}[j,k]=1\text{ and }\hat{\mathbf{D}}^{(k)}[r,r]>0\}$ are linearly independent for $j=1,\ldots,J$.

The proof of Theorem 8 is given in Appendix A.1. Note that the linear independence conditions are straightforward to verify, and are likely to hold in practice if the ranks of the terms in $\{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}$ are small relative to the matrix dimensions. As noted in Proposition 9, the terms in any accumulation point of Algorithm 1 also give a coordinate-wise optimum of $f_{\text{pen}}(\cdot)$ for a certain set of penalties $\{\boldsymbol{\lambda}_{k}\}_{k=1}^{K}$, and thus the EV-BIDIFAC solution is unique in this sense.

Proposition 9.

If $\{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}\in\mathbb{S}_{\hat{\mathbf{X}}}$ is an accumulation point of Algorithm 1, then it is a coordinate-wise optimum of $f_{\text{pen}}(\cdot)$ over $\mathbb{S}_{\hat{\mathbf{X}}}$ for some $\{\boldsymbol{\lambda}_{k}\}_{k=1}^{K}$.

7 Missing data imputation

Now we consider the context in which the data are not fully observed and missing data must be imputed. Let $\mathcal{M}$ index entries of the full dataset $\mathbf{X}_{\bullet\bullet}$ that are not observed: $\mathcal{M}=\{(m,n):\mathbf{X}_{\bullet\bullet}[m,n]\text{ is missing}\}$. We develop an expectation-maximization (EM) approach for imputation that is analogous to previous approaches described for hard-thresholding [11], soft-thresholding [15], and BIDIFAC+ [13]. The procedure for fixed estimates of the noise variance of each matrix, $\sigma_{ij}$, is described in Algorithm 2; a rough code sketch of the same alternation is also given below.
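As an illustration of this EM-style alternation (a hedged sketch, not the paper's implementation; it reuses the ev_bidifac() sketch above and treats np.nan entries as missing):

```python
import numpy as np

def ev_bidifac_impute(X_blocks, R_cols, C_cols, sigma, n_outer=50, tol=1e-8):
    # Alternate between imputing missing entries with the current low-rank fit (E-step)
    # and refitting the decomposition on the completed data (M-step).
    filled = [[np.where(np.isnan(X), 0.0, X) for X in row] for row in X_blocks]
    masks = [[np.isnan(X) for X in row] for row in X_blocks]
    I, J = len(X_blocks), len(X_blocks[0])
    Ms = [X_blocks[i][0].shape[0] for i in range(I)]
    Ns = [X_blocks[0][j].shape[1] for j in range(J)]
    ri = np.cumsum([0] + Ms)
    ci = np.cumsum([0] + Ns)
    prev = None
    for _ in range(n_outer):
        S_modules = ev_bidifac(filled, R_cols, C_cols, sigma)  # M-step: refit
        S_total = sum(S_modules)
        # E-step: overwrite missing entries with the current fitted structure.
        for i in range(I):
            for j in range(J):
                block_fit = S_total[ri[i]:ri[i + 1], ci[j]:ci[j + 1]]
                filled[i][j][masks[i][j]] = block_fit[masks[i][j]]
        if prev is not None and np.sum((S_total - prev) ** 2) < tol:
            break
        prev = S_total
    return filled, S_modules
```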

Algorithm 2 Estimation of EV-BIDIFAC model for missing data

  1. Initialize \tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}=\mathbf{0} for k=1,\ldots,K.

  2. Define \tilde{\mathbf{X}}_{\bullet\bullet} as in (4), with \tilde{\mathbf{X}}_{ij}=\mathbf{X}_{ij}/\sigma_{ij} for i=1,\ldots,I and j=1,\ldots,J.

  3. Set \tilde{\mathbf{X}}_{\bullet\bullet}[m,n]=\tilde{\mathbf{S}}_{\bullet\bullet}[m,n] for (m,n)\in\mathcal{M}, where \tilde{\mathbf{S}}_{\bullet\bullet}=\sum_{k=1}^{K}\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}.

  4. Update \{\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K} as in Step 4 of Algorithm 1.

  5. Repeat Steps 3 and 4 until convergence of \{\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}.

  6. Unscale the estimated structure: \hat{\mathbf{S}}^{(k)}_{ij}=\sigma_{ij}\tilde{\mathbf{S}}^{(k)}_{ij} for k=1,\ldots,K, i=1,\ldots,I, and j=1,\ldots,J.

  7. Impute \mathbf{X}_{\bullet\bullet}[m,n]=\hat{\mathbf{S}}_{\bullet\bullet}[m,n] for (m,n)\in\mathcal{M}.

Note that Step 4 of Algorithm 2 is a partial maximization step, as the free energy in (11) is conditionally minimized for each module given the current imputed values, while the imputation in Step 3 is an expectation step that replaces the missing data with their expected values (12) under the current model fit. Further, the algorithm can be shown to minimize the free energy over the missing data \{\mathbf{X}_{\bullet\bullet}[m,n]:(m,n)\in\mathcal{M}\}; this result is given as Theorem 10, and is appealing because all unknowns (model hyperparameters, parameters, and missing entries) are then estimated by optimizing the same unified free-energy objective.

Theorem 10.

Algorithm 2 iteratively minimizes the free energy (12) over q(\cdot), \boldsymbol{\tau}, and \{\mathbf{X}_{\bullet\bullet}[m,n]:(m,n)\in\mathcal{M}\}.

A proof of Theorem 10 is provided in Appendix A.2. For the choice of \boldsymbol{\sigma}, we suggest two different approaches, depending on the type of missingness. If entries are missing for an entire row or column of a given data matrix \mathbf{X}_{ij} (i.e., row- or column-wise missing), then \sigma_{ij} can be estimated by applying Theorem 2 to the complete submatrix \{\mathbf{X}_{\bullet\bullet}[m,n]:(m,n)\notin\mathcal{M}\}. If the matrix has missing entries but no entirely missing rows or columns, then we update \sigma_{ij} iteratively at the end of each cycle of Algorithm 2 as follows: 1. apply Theorem 2 to the matrix \mathbf{X}_{ij} with currently imputed values to obtain \sigma^{\prime}_{ij}; 2. update \sigma_{ij}=\sigma^{\prime}_{ij}\left(\frac{M_{i}N_{j}}{M_{i}N_{j}-|\mathcal{M}_{ij}|}\right), where |\mathcal{M}_{ij}| is the number of missing entries in \mathbf{X}_{ij}.
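To make the imputation loop concrete, the following is a minimal R sketch of Algorithm 2 in the single-matrix case, under simplifying assumptions. The function evb_shrink is a hypothetical placeholder for the module update of Step 4 (Theorems 6 and 7), approximated here by soft-thresholding of the singular values; it is not the actual EV-BIDIFAC update, and all names are illustrative.

evb_shrink <- function(X, lambda = sqrt(nrow(X)) + sqrt(ncol(X))) {
  # Placeholder for the EVB module update (Step 4); soft-thresholds the singular values.
  sv <- svd(X)
  d  <- pmax(sv$d - lambda, 0)
  sv$u %*% (d * t(sv$v))
}

impute_em <- function(X, sigma = 1, tol = 1e-6, max_iter = 500) {
  miss <- is.na(X)
  Xs   <- X / sigma                      # Step 2: scale by the (estimated) noise level
  S    <- matrix(0, nrow(X), ncol(X))    # Step 1: initialize the structure at zero
  for (iter in seq_len(max_iter)) {
    Xs[miss] <- S[miss]                  # Step 3: E-step, fill missing entries with the current fit
    S_new <- evb_shrink(Xs)              # Step 4: M-step, update the low-rank structure
    if (sum((S_new - S)^2) <= tol * (1 + sum(S^2))) { S <- S_new; break }
    S <- S_new
  }
  S_hat <- sigma * S                     # Step 6: unscale the estimated structure
  X[miss] <- S_hat[miss]                 # Step 7: impute the missing entries
  list(S = S_hat, X_imputed = X)
}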

8 Simulations

Here we present a collection of simulations to illustrate the proposed approach and compare it with existing methods under various criteria. We explore several conditions covering different signal-to-noise ratios, missing data scenarios, and linked matrix patterns with shared and specific structure. Further simulation results (e.g., exploring different matrix dimensions) are available at
https://ericfrazerlock.com/ev-bidifac-sims.html.

8.1 Single matrix

We first describe a simple simulation on a single matrix to illustrate the empirical variational Bayes approach to matrix factorization as a flexible compromise between a hard-thresholding and a conservative soft-thresholding approach. We consider a fully observed matrix \mathbf{X}:M\times N with M=1000 and N=100, generated as \mathbf{X}=c\mathbf{U}\mathbf{V}^{T}+\mathbf{E}, where the entries of \mathbf{E}, \mathbf{U}:M\times 10, and \mathbf{V}:N\times 10 are each independent standard normal, and c is a manipulated value controlling the signal-to-noise ratio. We consider 10 values of c distributed on a log-uniform scale between c=0.05 and c=1, and generate 10 replications of the simulation for each value of c. For each generated dataset, we apply the following approaches:

  • OPT: The oracle operator on the singular values when the structure 𝐒=c𝐔𝐕T\mathbf{S}=c\mathbf{U}\mathbf{V}^{T} is known: 𝐒^OPT=𝐔X𝐃^𝐕XT\hat{\mathbf{S}}_{\text{OPT}}=\mathbf{U}_{X}\hat{\mathbf{D}}\mathbf{V}_{X}^{T} where 𝐃^\hat{\mathbf{D}} is diagonal with entries that solve the OLS problem to minimize 𝐒𝐔X𝐃^𝐕XTF2||\mathbf{S}-\mathbf{U}_{X}\hat{\mathbf{D}}\mathbf{V}_{X}^{T}||_{F}^{2}.

  • EVB: the empirical variational Bayes approach applying Theorems 6 and 7.

  • HT: The hard-thresholding approach described in Proposition 1, using the true rank R=10R=10.

  • NN: The nuclear norm penalized soft-thresholding approach described in Proposition 2, with \lambda=\sqrt{M}+\sqrt{N} (this corresponds to a threshold motivated by Proposition 5 with true noise variance \sigma=1).

For each setting and for each method we compute the relative squared error (RSE) and oracle-normalized squared error (ONSE)

RSE=𝐒𝐒^F2𝐒F2 and ONSE=𝐒𝐒^F2𝐒𝐒^OPTF2.\displaystyle\text{RSE}=\frac{||\mathbf{S}-\hat{\mathbf{S}}||_{F}^{2}}{||\mathbf{S}||_{F}^{2}}\text{ and }\text{ONSE}=\frac{||\mathbf{S}-\hat{\mathbf{S}}||_{F}^{2}}{||\mathbf{S}-\hat{\mathbf{S}}_{\text{OPT}}||_{F}^{2}}. (13)
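As a concrete reference, a small R sketch of this single-matrix design and of the errors in (13) is given below; the oracle fit uses the least-squares solution for the singular values described for OPT, and all variable names are illustrative only.

set.seed(1)
M <- 1000; N <- 100; R <- 10
c_sn <- 0.3                                   # signal-to-noise scaling (c in the text)
U <- matrix(rnorm(M * R), M, R)
V <- matrix(rnorm(N * R), N, R)
S <- c_sn * U %*% t(V)                        # true low-rank signal
X <- S + matrix(rnorm(M * N), M, N)           # observed data

# Oracle (OPT): least-squares fit of the singular values with singular vectors of X fixed
sv    <- svd(X)
d_opt <- diag(t(sv$u) %*% S %*% sv$v)
S_opt <- sv$u %*% (d_opt * t(sv$v))

rse  <- function(S, S_hat) sum((S - S_hat)^2) / sum(S^2)
onse <- function(S, S_hat, S_opt) sum((S - S_hat)^2) / sum((S - S_opt)^2)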

Figure 1 shows the mean values of both RSE and ONSE for the different methods as a function of the signal-to-noise level c. The hard-thresholding approach tends to overfit the signal, which can lead to particularly poor recovery when the signal is small relative to the noise. In contrast, the nuclear norm penalty is appropriately conservative in that it never overfits and performs better at lower signal levels; however, it tends to over-shrink for higher signal and can have severalfold higher error than hard-thresholding. The EVB approach is a flexible compromise between the two, performing comparably to or better than both methods in all scenarios and substantially outperforming them for moderate signal levels. Moreover, the performance of the EVB method is impressively close to that of the oracle in most scenarios.

Figure 1: Error in estimating the underlying low-rank signal for a single matrix under different methods and different signal-to-noise (s2n) ratios. The left panel gives relative squared error, and the right panel gives oracle-normalized squared error. All axes are on a log scale.

We consider another scenario with low-rank structure in which the underlying components have heterogeneous signal levels. First, we simulate \mathbf{S}^{\prime}=\mathbf{U}^{\prime}\mathbf{V}^{\prime T}, where the entries of \mathbf{U}^{\prime}:M\times 10 and \mathbf{V}^{\prime}:N\times 10 are independent standard normal. Then, we generate the signal as

𝐒=𝐔S𝐃S𝐕ST\displaystyle\mathbf{S}=\mathbf{U}_{S^{\prime}}\mathbf{D}_{S}\mathbf{V}_{S^{\prime}}^{T} (14)

where \mathbf{D}_{S} is diagonal with entries sampled from a log-uniform distribution between 0.05\sqrt{MN} and \sqrt{MN}, and \mathbf{X}=\mathbf{S}+\mathbf{E} with the entries of \mathbf{E} independent standard normal. Thus, some singular values of the true low-rank structure \mathbf{S} may correspond to strong signal, others to moderate signal, and others to weak or undetectable signal. We generate 100 replications of this scenario, and for each we apply the EVB, HT, and NN methods described above, as well as the following additional approaches (a small sketch of this generation is given after the list):

  • HT-OPT: the oracle hard-thresholding approach, in which the rank is selected that results in minimal squared error between the estimated signal and the true signal.

  • NN-OPT: the oracle soft-thresholding approach, in which the nuclear norm penalty is selected that results in minimal squared error between the estimated signal and the true signal.

  • RMT: the low-rank reconstruction approach proposed in Shabalin and Nobel [22], based on minimizing squared error loss using asymptotic random matrix theory, with the true error variance σ2=1\sigma^{2}=1.
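As referenced above, a minimal R sketch of the heterogeneous-signal generation in (14) under the stated dimensions is below; names are illustrative.

set.seed(2)
M <- 1000; N <- 100; R <- 10
Up <- matrix(rnorm(M * R), M, R)
Vp <- matrix(rnorm(N * R), N, R)
sv_p <- svd(Up %*% t(Vp))                     # singular vectors of S' = U'V'^T
# singular values drawn log-uniformly between 0.05*sqrt(MN) and sqrt(MN)
d_S <- exp(runif(R, log(0.05 * sqrt(M * N)), log(sqrt(M * N))))
S   <- sv_p$u[, 1:R] %*% (d_S * t(sv_p$v[, 1:R]))   # signal as in (14)
X   <- S + matrix(rnorm(M * N), M, N)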

Boxplots showing the distribution of RSE and ONSE for recovering the underlying structure across replications are shown in Figure 2. The EVB and RMT methods have performance close to the oracle, and vastly outperform the hard-thresholding and nuclear norm penalized approaches. The relatively poor performance of HT-OPT suggests that singular values benefit from some shrinkage, while the poor performance of NN-OPT indicates that the level of shrinkage should not be uniform across singular values.

Figure 2: Error in estimating underlying low-rank structure in which the rank-1 components have heterogeneous signal sizes. The left panel gives relative squared error (RSE), and the right panel gives oracle-normalized squared error (ONSE). All axes are on a log scale.
Figure 3: Missing data imputation accuracy for different levels of missingness. The left column gives RSEmiss\text{RSE}_{\text{miss}} and the right gives ONSEmiss\text{ONSE}_{\text{miss}}.

We also compare the missing data imputation performance of the EVB approach using Algorithm 2. For each of the 100 simulated datasets, we consider scenarios in which 20\%, 50\%, or 80\% of the entries are randomly held out as missing and imputed. In addition to EVB imputation, we apply analogous EM imputation methods for the soft-thresholding (corresponding to softImpute of Mazumder et al. [15]) and hard-thresholding [11] approaches. The RMT approach is not considered here, as it does not have an associated missing data imputation procedure. For the NN-OPT and HT-OPT approaches, we select the value of \lambda or R, respectively, that minimizes the squared imputation error for the signal at the missing entries,

RSEmiss=(m,n)(𝐒[m,n]𝐒^[m,n])2(m,n)𝐒[m,n]2.\displaystyle\text{RSE}_{\text{miss}}=\frac{\sum_{(m,n)\in\mathcal{M}}(\mathbf{S}[m,n]-\hat{\mathbf{S}}[m,n])^{2}}{\sum_{(m,n)\in\mathcal{M}}\mathbf{S}[m,n]^{2}}. (15)

For the oracle-normalized imputation error \text{ONSE}_{\text{miss}}, we divide \text{RSE}_{\text{miss}} by the RSE of the OPT method in (13). The results are shown in Figure 3 and demonstrate that the EVB imputation approach is stable across different levels of missingness and substantially outperforms the other approaches.
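For reference, entry-wise holdout and the error in (15) can be sketched in R as follows; impute_fn stands for any EM imputation procedure (e.g., the placeholder sketch given in Section 7) and is an assumed interface, not the packaged implementation.

holdout_rse <- function(X, S, prop, impute_fn) {
  # randomly withhold a proportion 'prop' of entries and compute RSE_miss as in (15)
  miss <- matrix(runif(length(X)) < prop, nrow(X), ncol(X))
  X_na <- X
  X_na[miss] <- NA
  S_hat <- impute_fn(X_na)$S
  sum((S[miss] - S_hat[miss])^2) / sum(S[miss]^2)
}
# e.g., holdout_rse(X, S, prop = 0.5, impute_fn = impute_em)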

8.2 Two linked matrices

Here we generate two matrices, \mathbf{X}_{1}:500\times 100 and \mathbf{X}_{2}:500\times 100, that are linked by columns: \mathbf{X}_{\bullet}=[\mathbf{X}_{1};\mathbf{X}_{2}]:1000\times 100. We consider this context primarily to allow comparison of the EV-BIDIFAC approach with several other methods that identify shared and unshared low-rank structure in this setting.

First, we perform an illustrative simulation akin to that in Figure 1 for a single matrix. We simulate shared structure \mathbf{S}_{0}=c\mathbf{U}_{0}\mathbf{V}_{0}^{T} and individual structures \mathbf{S}_{1}=c\mathbf{U}_{1}\mathbf{V}_{1}^{T} and \mathbf{S}_{2}=c\mathbf{U}_{2}\mathbf{V}_{2}^{T}, where \mathbf{U}_{0}:1000\times 5, \mathbf{V}_{0}:100\times 5, \mathbf{U}_{1}:500\times 5, \mathbf{U}_{2}:500\times 5, \mathbf{V}_{1}:100\times 5, and \mathbf{V}_{2}:100\times 5 each have independent standard normal entries. The observed data are then generated as \mathbf{X}_{\bullet}=\mathbf{S}_{0}+[\mathbf{S}_{1};\mathbf{S}_{2}]+\mathbf{E}, where the entries of \mathbf{E} are independent standard normal. Thus c again controls the signal-to-noise ratio; we consider 10 values of c distributed on a log-uniform scale between c=0.05 and c=1, and generate 10 replications for each value of c. Here we consider estimates from the following methods:

  • EVB: the EV-BIDIFAC approach described in Algorithm 2

  • JIVE: the JIVE method implemented in O’Connell and Lock [19] for true shared and unshared ranks R=5,R1=5,R2=5R=5,R_{1}=5,R_{2}=5.

  • UNIFAC: the BIDIFAC method [20] for vertical integration using the true noise variance σ2=1\sigma^{2}=1.

Here JIVE is analogous to HT and UNIFAC to NN for a single matrix, as they involve iterative application of HT or NN, respectively, to the shared and unshared components. For each method we compute the RSE for the overall structure \hat{\mathbf{S}}=\hat{\mathbf{S}}_{0}+[\hat{\mathbf{S}}_{1};\hat{\mathbf{S}}_{2}] as in (13), and also compute the relative decomposition squared error (RDSE) as

RDSE=𝐒0𝐒^0F2+𝐒1𝐒^1F2+𝐒2𝐒^2F2𝐒0F2+𝐒1F2+𝐒2F2.\text{RDSE}=\frac{||\mathbf{S}_{0}-\hat{\mathbf{S}}_{0}||_{F}^{2}+||\mathbf{S}_{1}-\hat{\mathbf{S}}_{1}||_{F}^{2}+||\mathbf{S}_{2}-\hat{\mathbf{S}}_{2}||_{F}^{2}}{||\mathbf{S}_{0}||_{F}^{2}+||\mathbf{S}_{1}||_{F}^{2}+||\mathbf{S}_{2}||_{F}^{2}}.
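A small R helper for this decomposition error, assuming the true and estimated components (S0, S1, S2) are stored as lists, might look like the following; it is a sketch with illustrative names only.

rdse <- function(S_list, S_hat_list) {
  # S_list and S_hat_list are lists of the true and estimated components (S0, S1, S2)
  num <- sum(mapply(function(a, b) sum((a - b)^2), S_list, S_hat_list))
  den <- sum(sapply(S_list, function(a) sum(a^2)))
  num / den
}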

The results are shown in Figure 4. The results for overall RSE are similar to those for a single matrix, with UNIFAC appropriately conservative for lower signal but substantially underperforming JIVE for higher signal due to over-shrinkage. However, per the RDSE results, UNIFAC can still identify the shared and unshared components more accurately at higher signal; this phenomenon was noted previously [13] and is in part due to the orthogonality restrictions imposed by JIVE and related methods, which are required for uniqueness of the decomposition without shrinkage. The EVB approach again acts as a flexible compromise between the two methods, and substantially outperforms both for many signal-to-noise scenarios, particularly with respect to RDSE.

Figure 4: Error in estimating the underlying low-rank signal for two linked matrices under different signal-to-noise (s2n) ratios. The left panel gives RSE, and the right panel gives RDSE. All axes are on a log scale.

Now, we compare RSE and RDSE under a scenario with heterogeneous signal levels for the factors in each of the shared and unshared structures. The rank-5 signal matrices \mathbf{S}_{0},\mathbf{S}_{1}, and \mathbf{S}_{2} are each generated as in (14), with singular values sampled from a log-uniform distribution between 0.05\sqrt{MN} and \sqrt{MN} for the shared structure and between 0.05\sqrt{M_{i}N} and \sqrt{M_{i}N} for the specific structures; 100 replications are generated for this scenario. In addition to the EVB, JIVE, and UNIFAC methods, we also consider the following approaches:

  • SLIDE: the structural learning and integrative decomposition (SLIDE) method [6], with the true number of shared and unshared components (i.e., the ranks) specified.

  • HNN.OPT: the hierarchical nuclear norm (HNN) approach [29], with tuning parameters selected on a grid to minimize actual RSE for the true signal.

Figure 5 gives the distribution of RSE and RDSE over the replications for the different methods. The EVB approach universally outperforms the other methods in terms of both RSE and RDSE, and the benefit is large (a factor of 2 or more) for both criteria in several cases. While the HNN method is motivated by a model with shared and unshared low-rank structure, it does not provide an explicit decomposition of the structure and is thus not shown for RDSE.

Figure 5: RSE (left) and RDSE (right) for low-rank structure with heterogeneous signal levels for two linked matrices.

8.3 Bidimensionally linked matrices

Now, we consider the full bidimensionally linked framework in (4), with I=J=2I=J=2, N1=N2=50N_{1}=N_{2}=50 and D1=D2=500D_{1}=D_{2}=500. Data were generated as

\mathbf{X}_{\bullet\bullet}=\sum_{k=1}^{K}\mathbf{S}_{\bullet\bullet}^{(k)}+\mathbf{E},

where the entries of \mathbf{E} are independent standard normal. There are K=9 possible distinct modules of low-rank structure in this setting: globally shared, shared across rows for either of the J=2 column sets, shared across columns for either of the I=2 row sets, or specific to any of the 4 matrices \mathbf{X}_{ij}. We generate 100 datasets, where for each replication 5 of the 9 possible modules are randomly selected to have non-zero low-rank structure and the others are set to zero, \mathbf{S}_{\bullet\bullet}^{(k)}=\mathbf{0}. Each non-zero module is generated to have rank-2 structure on its given submatrix as in (14). For each simulated dataset, we apply the following methods (an illustrative generation sketch follows this list):

  • EB-BIDI: the approach in Algorithm 1, with 𝐑\mathbf{R} and 𝐂\mathbf{C} enumerating all K=9K=9 possible modules.

  • BIDIFAC: the BIDIFAC algorithm described in [13] with 𝐑\mathbf{R} and 𝐂\mathbf{C} enumerating all possible modules and using the true noise variance σ2=1\sigma^{2}=1.

  • EB-SEP: the EVB approach applied to each matrix \mathbf{X}_{ij} separately.

  • EB-JOINT: the EVB approach applied to the overall matrix \mathbf{X}_{\bullet\bullet}.
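As referenced above, a rough R sketch of the module-wise generation for this 2\times 2 design is below. Here R_ind and C_ind are the row-set and column-set indicator matrices for the K=9 modules, and lowrank() is an illustrative helper generating rank-2 structure analogous to (14); all names are assumptions for this sketch.

set.seed(3)
D <- c(500, 500); N <- c(50, 50)       # row-set and column-set sizes
# Columns of R_ind / C_ind indicate which row sets / column sets each of the 9 modules spans
R_ind <- cbind(c(1,1), c(1,1), c(1,1), c(1,0), c(0,1), c(1,0), c(1,0), c(0,1), c(0,1))
C_ind <- cbind(c(1,1), c(1,0), c(0,1), c(1,1), c(1,1), c(1,0), c(0,1), c(1,0), c(0,1))

lowrank <- function(m, n, r = 2) {
  # rank-r signal with log-uniform singular values, analogous to (14)
  sv <- svd(matrix(rnorm(m * r), m, r) %*% t(matrix(rnorm(n * r), n, r)))
  d  <- exp(runif(r, log(0.05 * sqrt(m * n)), log(sqrt(m * n))))
  sv$u[, 1:r] %*% (d * t(sv$v[, 1:r]))
}

active <- sample(9, 5)                 # 5 of the 9 modules carry non-zero structure
S_list <- vector("list", 9)
S_tot  <- matrix(0, sum(D), sum(N))
for (k in 1:9) {
  rows <- which(rep(R_ind[, k], times = D) == 1)
  cols <- which(rep(C_ind[, k], times = N) == 1)
  S_list[[k]] <- matrix(0, sum(D), sum(N))
  if (k %in% active) S_list[[k]][rows, cols] <- lowrank(length(rows), length(cols))
  S_tot <- S_tot + S_list[[k]]
}
X <- S_tot + matrix(rnorm(sum(D) * sum(N)), sum(D), sum(N))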

For each method, we compute the overall RSE for \mathbf{S}_{\bullet\bullet}=\sum_{k=1}^{K}\mathbf{S}_{\bullet\bullet}^{(k)} as in (13), and for EB-BIDI and BIDIFAC we compute the RDSE as

\text{RDSE}=\frac{\sum_{k=1}^{9}||\mathbf{S}_{\bullet\bullet}^{(k)}-\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}||_{F}^{2}}{\sum_{k=1}^{9}||\mathbf{S}_{\bullet\bullet}^{(k)}||_{F}^{2}}.

The results are summarized in Figure 6. Overall RSE is substantially better for EB-BIDI than for the other methods; in comparison to EB-SEP and EB-JOINT, this illustrates the advantage of a parsimonious decomposition of the low-rank signal into the different shared and unshared components with respect to recovering the overall signal. EB-BIDI also has substantially better RDSE than BIDIFAC, demonstrating the advantage of the empirical variational Bayes objective with respect to accurately decomposing the structure into its components. Moreover, EB-BIDI correctly estimated modules with no true structure (\mathbf{S}_{\bullet\bullet}^{(k)}=\mathbf{0}) as \hat{\mathbf{S}}_{\bullet\bullet}^{(k)}=\mathbf{0} in all of the 400 such instances (100 datasets with 4 modules set to \mathbf{0} in each case); on the other hand, it estimated \hat{\mathbf{S}}_{\bullet\bullet}^{(k)}=\mathbf{0} in only 19/500=3.8\% of instances when \mathbf{S}_{\bullet\bullet}^{(k)}\neq\mathbf{0}, which occurred when \mathbf{S}_{\bullet\bullet}^{(k)} was difficult to detect due to a small signal size. This demonstrates that the approach can correctly recover the underlying rank-sparsity structure, estimating non-zero structure for a module only if it truly has distinct low-rank structure on its given rows and columns.

Figure 6: RSE (left) and RDSE (right) for the scenario with 2×22\times 2 bidimensionally linked matrices.

We extend this simulation design to consider missing data imputation accuracy for the generated datasets. We consider two missingness scenarios: an entry-wise missing scenario in which 20\% of the entries are randomly set to missing in each matrix, and a blockwise missing scenario in which 10\% of the columns and 10\% of the rows are randomly selected to have all of their entries missing in each matrix. In addition to imputation using the previous methods, we also consider the NN-OPT and HT-OPT methods as described in Section 8.1. For entry-wise imputation, \text{RSE}_{\text{miss}} is computed as in (15). For blockwise imputation, the true underlying structure is taken to be the sum of column-shared structure for missing rows and the sum of row-shared structure for missing columns, as this is the only estimable signal when an entire row or column is missing. That is, letting \mathbf{S}_{\bullet\bullet}^{C}=\sum_{k:\mathbf{C}[\cdot,k]=\mathbf{1}}\mathbf{S}_{\bullet\bullet}^{(k)} and \mathbf{S}_{\bullet\bullet}^{R}=\sum_{k:\mathbf{R}[\cdot,k]=\mathbf{1}}\mathbf{S}_{\bullet\bullet}^{(k)},

\text{RSE}_{\text{miss}}=\frac{\sum_{(m,n)\in\mathcal{M}^{R}}(\mathbf{S}_{\bullet\bullet}^{C}[m,n]-\hat{\mathbf{S}}_{\bullet\bullet}[m,n])^{2}+\sum_{(m,n)\in\mathcal{M}^{C}}(\mathbf{S}_{\bullet\bullet}^{R}[m,n]-\hat{\mathbf{S}}_{\bullet\bullet}[m,n])^{2}}{\sum_{(m,n)\in\mathcal{M}^{R}}\mathbf{S}_{\bullet\bullet}^{C}[m,n]^{2}+\sum_{(m,n)\in\mathcal{M}^{C}}\mathbf{S}_{\bullet\bullet}^{R}[m,n]^{2}}, \qquad (16)

where \mathcal{M}^{R} indexes row-missing entries and \mathcal{M}^{C} indexes column-missing entries. The results are shown in Figure 7. The EB-BIDI method substantially and consistently outperforms the other approaches with respect to missing data imputation, and the advantages are particularly large for blockwise missing imputation. Note that the EB-SEP approach does not estimate any structure for blockwise missing data, as such structure is not estimable when considering each matrix separately.
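For concreteness, a sketch of the blockwise error in (16) is given below. It assumes the module estimates are stored as full-dimension matrices, as in the earlier generation sketch, and that the indices of fully missing rows and columns are available; overlaps between missing rows and missing columns are ignored here for simplicity, and all names are illustrative.

rse_block <- function(S_list, S_hat, R_ind, C_ind, miss_rows, miss_cols) {
  # S^C: sum of modules spanning all column sets (used for fully missing rows);
  # S^R: sum of modules spanning all row sets (used for fully missing columns)
  S_C <- Reduce(`+`, S_list[colSums(C_ind) == nrow(C_ind)])
  S_R <- Reduce(`+`, S_list[colSums(R_ind) == nrow(R_ind)])
  num <- sum((S_C[miss_rows, ] - S_hat[miss_rows, ])^2) +
         sum((S_R[, miss_cols] - S_hat[, miss_cols])^2)
  den <- sum(S_C[miss_rows, ]^2) + sum(S_R[, miss_cols]^2)
  num / den
}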

Figure 7: RSE for entrywise missing data imputation (left) and blockwise missing data imputation (right), for the scenario with 2×22\times 2 bidimensionally linked matrices.

9 Application to BRCA Data

Here we describe an application to gene expression (mRNA) and microRNA (miRNA) data for breast cancer (BRCA) from The Cancer Genome Atlas (TCGA) [25]. The data were processed as described in Park and Lock [20], resulting in D_1=500 mRNAs and D_2=500 miRNAs for N_1=660 cancer tumor samples and N_2=86 non-cancerous breast tissue samples from unrelated individuals. Thus, the data can be represented as bidimensionally linked matrices as in (4), where \mathbf{X}_{11} is mRNA data for cancer samples, \mathbf{X}_{12} is mRNA data for normal samples, \mathbf{X}_{21} is miRNA data for cancer samples, and \mathbf{X}_{22} is miRNA data for normal samples. The decomposition of the data into different modules of shared structured variation is well motivated in this context, as low-rank structure on any of the K=9 possible modules is plausible and interpretable. For example, it is reasonable to expect shared structure across mRNAs and miRNAs, as miRNAs can regulate mRNA. It is also reasonable to expect some shared structure across tumor and normal samples, as breast tumors arise from normal tissue and so likely share some patterns of molecular variation. However, it is also reasonable to expect specific structure in each case; e.g., patterns of molecular variability that are ‘cancer-specific’ (i.e., not present in normal tissue) are of particular interest. The BIDIFAC approach decomposes such shared and unique structure across both row sets (mRNA/miRNA) and column sets (tumor/normal), while other methods do not. Moreover, as demonstrated in our simulations, the empirical variational Bayes approach captures underlying low-rank structure more efficiently and accurately than other low-rank approximation methods. Thus, we expect the EV-BIDIFAC approach to decompose the underlying shared and unshared signals in an informative way and to recover the underlying signal in the full dataset \mathbf{X}_{\bullet\bullet} better than alternative methods.

Figure 8: Heatmaps of the BRCA data (left) and the full low-rank structure estimated by EV-BIDIFAC (right). Higher values are colored red, lower values are colored blue, and missing columns are colored black.

We applied EV-BIDIFAC with \mathbf{R} and \mathbf{C} enumerating all K=9 possible modules, and significant distinct low-rank structure was identified for each module (\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\neq\mathbf{0} for all k). Figure 8 shows heatmaps of the original data (including missing data for miRNA and mRNA) and the resulting overall fit \hat{\mathbf{S}}_{\bullet\bullet}=\sum_{k=1}^{K}\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}. While the estimated structure is low-rank, it captures the original data well, and the imputation of the missing columns appears reasonable. Considering the relative amount of variation in the different modules reveals the extent of shared variation across mRNA and miRNA, and across tumor and normal cells. Interestingly, only 1.47\% of the low-rank signal is in the module shared globally across all submatrices, while 59.8\% of the signal is shared across mRNA and miRNA but specific to tumors. This suggests that the substantial coordinated activity between mRNA and miRNA in breast tumors is cancer-specific and not present in the normal cells from which they derive.

We further compare imputation accuracy for data under entry-wise, column-wise, or row-wise missingness under a cross-validation scheme. The accuracy of entry-wise imputation is of interest because it is a good proxy for the accuracy of the overall fit of the estimated structure. Column-wise imputation accuracy is of interest for two reasons: (1) the actual data have column-wise missingness, as not all samples have both mRNA and miRNA data in the full TCGA cohort, and (2) it provides a robust measure of the level of association between the two data sources, addressing how well we can predict gene expression from miRNA data, and vice versa. Row-wise imputation accuracy is also of interest, as it provides a measure of similarity in molecular structure between cancer and normal samples (e.g., how well we can predict levels of a missing gene in cancer samples, given its associations in normal samples).

We create 20 folds in the data, where for each fold 5\% of the columns, 5\% of the rows, and 5\% of the remaining entries in each matrix are randomly withheld as missing. For each fold, the missing data are imputed using the EV-BIDIFAC, BIDIFAC, EB-SEP, or EB-JOINT approaches described in Section 8.3. We further consider nuclear norm penalized imputation (NN) and hard-thresholding (HT) using softImpute [7], jointly for the full matrix (NN-JOINT and HT-JOINT) or separately for each submatrix (NN-SEP and HT-SEP). In each case the nuclear norm penalty is selected via random matrix theory, and the rank for hard-thresholding is determined via cross-validation by optimizing MSE on an additional 5\% of randomly withheld entries. As an alternative to low-rank methods we consider k-nearest neighbor (KNN) imputation using the package impute [8] on the full matrix. Lastly, we consider a structured matrix completion approach [2] designed to impute a contiguous missing submatrix under a low-rank approximation, using the StructureMC package [14]. As StructureMC imputes only one submatrix, it does not apply to randomly missing entries; we therefore hold out one set of rows or columns at a time to impute from the full matrix, with all other entries observed. In each case, the relative squared error for the true observed data is computed and averaged across the four datasets, yielding

MRSEmiss=14i=12j=12(m,n)ij(𝐗ij[m,n]𝐒^ij[m,n])2(m,n)ij𝐗ij[m,n]2.\text{MRSE}_{\text{miss}}=\frac{1}{4}\sum_{i=1}^{2}\sum_{j=1}^{2}\frac{\sum_{(m,n)\in\mathcal{M}_{ij}}(\mathbf{X}_{ij}[m,n]-\hat{\mathbf{S}}_{ij}[m,n])^{2}}{\sum_{(m,n)\in\mathcal{M}_{ij}}\mathbf{X}_{ij}[m,n]^{2}}.
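For reference, this averaged error can be computed along the following lines, assuming length-4 lists holding the observed submatrices, their estimated structure, and logical masks of the withheld entries; the names are illustrative only.

mrse_miss <- function(X_list, S_hat_list, miss_list) {
  # X_list, S_hat_list, miss_list: length-4 lists, one element per submatrix X_ij;
  # miss_list holds logical masks of the withheld entries
  errs <- mapply(function(X, S_hat, miss) {
    sum((X[miss] - S_hat[miss])^2) / sum(X[miss]^2)
  }, X_list, S_hat_list, miss_list)
  mean(errs)
}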

The \text{MRSE}_{\text{miss}} values under the different imputation scenarios are summarized in Table 1. The EV-BIDIFAC approach performs well across all scenarios. For entry-wise missing data, EV-BIDIFAC performs much better than BIDIFAC (likely due to over-shrinkage of the signal by BIDIFAC) and similarly to EB-SEP, indicating that this structure can be well estimated by considering each dataset separately; however, separate estimation does not allow for row- or column-wise imputation and does not estimate the shared variation that is of interest. The improvement for row-missing data is relatively modest (\text{MRSE}_{\text{miss}}=0.90, versus 0.38 for entry-wise missing data), indicating that there is little shared information between tumor and normal cells. The joint approach EB-JOINT applied to \mathbf{X}_{\bullet\bullet} performs relatively poorly across all scenarios, as it tends to overfit shared variation when there is substantial specific variation across platforms (mRNA and miRNA) and cohorts (tumor and normal); this illustrates the benefits of a comprehensive decomposition of shared and specific variation across the constituent matrices.

Table 1: Imputation accuracy (\text{MRSE}_{\text{miss}}) for BRCA and normal tissue mRNA and miRNA data. The lowest values in each column are shown in bold font; if multiple values are bolded, then their means were not significantly different at a 0.05 level using a paired t-test over the 20 folds.

Method Name    Entry-missing   Column-missing   Row-missing   Overall
EV-BIDIFAC     0.389           0.756            0.903         0.682
BIDIFAC        0.526           0.870            0.909         0.768
EB-SEP         0.381           1.00             1.00          0.79
EB-JOINT       0.510           1.01             1.67          1.06
NN-SEP         0.536           1.00             1.00          0.845
NN-JOINT       0.614           0.891            0.881         0.796
HT-SEP         0.459           1.00             1.03          0.831
HT-JOINT       0.520           4.50             13.1          6.05
KNN            0.646           1.26             1.01          0.974
StructureMC    -               0.977            1.01          -

10 Conclusion and discussion

The proposed EV-BIDIFAC approach can be considered a flexible compromise between hard- and soft-thresholding techniques for the linked matrix decomposition problem, and has impressive performance with respect to 1. estimating underlying low-rank structure, 2. decomposing that structure into its shared, partially shared, and specific components, and 3. missing data imputation under a variety of scenarios. An appealing aspect of the method is that it derives from a unified and intuitive model with a corresponding objective function that requires no ad hoc choices or tuning parameters. In simulation comparisons the proposed approach tended to outperform other methods even when their parameters were set to true values (e.g., the true rank(s) or error variance) or ideal values (e.g., chosen to minimize error with knowledge of the true structure).

While we have focused on obtaining a point estimate for underlying structure that minimizes free energy, this could also be used to empirically estimate hyperparameters and initialize a sampling algorithm for the associated fully Bayesian model. This would capture uncertainty in the underlying factorization, which can be used for multiple imputation of missing values or to propagate uncertainty if the lower-dimensional components are used in subsequent statistical modeling. A related Bayesian model is described in Samorodnitsky et al. [24], which also illustrates the importance of uncertainty propagation.

Often, data take the form of higher-order tensors instead of matrices, and low-rank tensor factorizations are useful to uncover their underlying structure [10]. The models considered here may be extended to the tensor setting, e.g., by allowing factors for each dimension of a tensor to have a distribution analogous to that for 𝐔\mathbf{U} and 𝐕\mathbf{V} in the matrix setting. However, the algorithms are not straightforward to extend, as there is likely not a closed-form solution for a tensor akin to that in Theorem 6 for a matrix. Extending this approach to the setting of a single tensor or linked tensors is an interesting direction for future work.

Supplementary materials

Reproducible R scripts for all results presented in this manuscript are available in a supplementary zipped folder. Software to perform the EV-BIDIFAC method is available at https://github.com/lockEF/bidifac. Additional simulation study results are available at https://ericfrazerlock.com/ev-bidifac-sims.html.

Acknowledgement

This work was supported by the NIH National Institute of General Medical Sciences (NIGMS) grant R01-GM130622.

Appendix A Proofs

A.1 Proof of Theorem 8

The proof for Theorem 8 follows a similar structure to that for Theorem 1 in Lock et al. [13]. We first establish four lemmas, Lemmas 11, 12, 13, and 14, which are useful to prove the main result.

Lemma 11.

Take two decompositions \{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}\in\mathbb{S}_{\hat{\mathbf{X}}} and \{\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}\in\mathbb{S}_{\hat{\mathbf{X}}}, and assume that both minimize the structured penalty for given \{\boldsymbol{\lambda}_{k}\}_{k=1}^{K}:

f_{\text{pen}}(\{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K})=f_{\text{pen}}(\{\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K})=\underset{\mathbb{S}_{\hat{\mathbf{X}}}}{\min}\;f_{\text{pen}}(\{\mathbf{S}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}).

Then, for any $\alpha\in[0,1]$,

P_{\boldsymbol{\lambda}}\big(\alpha\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}+(1-\alpha)\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\big)=\alpha P_{\boldsymbol{\lambda}}(\hat{\mathbf{S}}_{\bullet\bullet}^{(k)})+(1-\alpha)P_{\boldsymbol{\lambda}}(\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)})

for $k=1,\ldots,K$.

Proof.

Because $\mathbb{S}_{\hat{\mathbf{X}}}$ is a convex space and $f_{\text{pen}}$ is a convex function, the set of minimizers of $f_{\text{pen}}$ over $\mathbb{S}_{\hat{\mathbf{X}}}$ is also convex. Thus,

f_{\text{pen}}\left(\{\alpha\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}+(1-\alpha)\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}\right)=\underset{\mathbb{S}_{\hat{\mathbf{X}}}}{\min}\;f_{\text{pen}}(\{\mathbf{S}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}).

The result then follows from the convexity of $P_{\boldsymbol{\lambda}}$, which implies that for any two matrices $\hat{\mathbf{A}}$ and $\tilde{\mathbf{A}}$ of equal size,

P_{\boldsymbol{\lambda}}(\alpha\hat{\mathbf{A}}+(1-\alpha)\tilde{\mathbf{A}})\leq\alpha P_{\boldsymbol{\lambda}}(\hat{\mathbf{A}})+(1-\alpha)P_{\boldsymbol{\lambda}}(\tilde{\mathbf{A}}).\qquad(17)

Applying (17) to each additive term in $f_{\text{pen}}$ gives

f_{\text{pen}}\left(\{\alpha\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}+(1-\alpha)\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}\right)\leq\alpha f_{\text{pen}}(\{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K})+(1-\alpha)f_{\text{pen}}(\{\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K})\qquad(18)
=\underset{\mathbb{S}_{\hat{\mathbf{X}}}}{\min}\;f_{\text{pen}}(\{\mathbf{S}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}).

Because $\{\alpha\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}+(1-\alpha)\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}\in\mathbb{S}_{\hat{\mathbf{X}}}$, the inequality in (18) must hold with equality, and it follows that the inequality (17) must hold with equality for each penalized term in the decomposition. ∎
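To spell out the final step (a brief sketch, under the assumption used throughout this appendix that $f_{\text{pen}}$ is the sum of the per-module penalties): summing (17) over the modules gives

\sum_{k=1}^{K}P_{\boldsymbol{\lambda}}\big(\alpha\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}+(1-\alpha)\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\big)\leq\sum_{k=1}^{K}\left[\alpha P_{\boldsymbol{\lambda}}(\hat{\mathbf{S}}_{\bullet\bullet}^{(k)})+(1-\alpha)P_{\boldsymbol{\lambda}}(\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)})\right],

and the two sides are equal by (18); a strict inequality in (17) for even a single module $k$ would make the left side strictly smaller, a contradiction.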

Lemma 12.

Take two matrices $\hat{\mathbf{A}}$ and $\tilde{\mathbf{A}}$. If $||\hat{\mathbf{A}}+\tilde{\mathbf{A}}||_{*}=||\hat{\mathbf{A}}||_{*}+||\tilde{\mathbf{A}}||_{*}$, and $\mathbf{U}\mathbf{D}_{+}\mathbf{V}^{T}$ is the SVD of $\hat{\mathbf{A}}+\tilde{\mathbf{A}}$, then $\hat{\mathbf{A}}=\mathbf{U}\hat{\mathbf{D}}\mathbf{V}^{T}$ where $\hat{\mathbf{D}}$ is diagonal and $||\hat{\mathbf{A}}||_{*}=||\hat{\mathbf{D}}||_{*}$, and $\tilde{\mathbf{A}}=\mathbf{U}\tilde{\mathbf{D}}\mathbf{V}^{T}$ where $\tilde{\mathbf{D}}$ is diagonal and $||\tilde{\mathbf{A}}||_{*}=||\tilde{\mathbf{D}}||_{*}$.

Proof.

This result is proved in Lemma 2 of Lock et al. [13]. ∎

Lemma 13.

Take two matrices $\hat{\mathbf{A}}$ and $\tilde{\mathbf{A}}$. If $P_{\boldsymbol{\lambda}}(\hat{\mathbf{A}}+\tilde{\mathbf{A}})=P_{\boldsymbol{\lambda}}(\hat{\mathbf{A}})+P_{\boldsymbol{\lambda}}(\tilde{\mathbf{A}})$, and $\mathbf{U}\mathbf{D}_{+}\mathbf{V}^{T}$ is the SVD of $\hat{\mathbf{A}}+\tilde{\mathbf{A}}$, then $\hat{\mathbf{A}}=\mathbf{U}\hat{\mathbf{D}}\mathbf{V}^{T}$ where $\hat{\mathbf{D}}$ is diagonal and $P_{\boldsymbol{\lambda}}(\hat{\mathbf{A}})=P_{\boldsymbol{\lambda}}(\hat{\mathbf{D}})$, and $\tilde{\mathbf{A}}=\mathbf{U}\tilde{\mathbf{D}}\mathbf{V}^{T}$ where $\tilde{\mathbf{D}}$ is diagonal and $P_{\boldsymbol{\lambda}}(\tilde{\mathbf{A}})=P_{\boldsymbol{\lambda}}(\tilde{\mathbf{D}})$.

Proof.

The result follows from repeated applications of Lemma 12 to the rank-1 terms of $\hat{\mathbf{A}}$ and $\tilde{\mathbf{A}}$. Note that $P_{\boldsymbol{\lambda}}(\hat{\mathbf{A}}+\tilde{\mathbf{A}})=P_{\boldsymbol{\lambda}}(\hat{\mathbf{A}})+P_{\boldsymbol{\lambda}}(\tilde{\mathbf{A}})$ implies

P_{\boldsymbol{\lambda}}\left(\sum_{r}\mathbf{U}_{\hat{\mathbf{A}}}[\cdot,r]\mathbf{D}_{\hat{\mathbf{A}}}[r,r]\mathbf{V}_{\hat{\mathbf{A}}}[\cdot,r]^{T}+\mathbf{U}_{\tilde{\mathbf{A}}}[\cdot,r]\mathbf{D}_{\tilde{\mathbf{A}}}[r,r]\mathbf{V}_{\tilde{\mathbf{A}}}[\cdot,r]^{T}\right)=\sum_{r}\boldsymbol{\lambda}[r]\left(\mathbf{D}_{\hat{\mathbf{A}}}[r,r]+\mathbf{D}_{\tilde{\mathbf{A}}}[r,r]\right),

which requires that

||\mathbf{U}_{\hat{\mathbf{A}}}[\cdot,r]\mathbf{D}_{\hat{\mathbf{A}}}[r,r]\mathbf{V}_{\hat{\mathbf{A}}}[\cdot,r]^{T}+\mathbf{U}_{\tilde{\mathbf{A}}}[\cdot,r]\mathbf{D}_{\tilde{\mathbf{A}}}[r,r]\mathbf{V}_{\tilde{\mathbf{A}}}[\cdot,r]^{T}||_{*}
=||\mathbf{U}_{\hat{\mathbf{A}}}[\cdot,r]\mathbf{D}_{\hat{\mathbf{A}}}[r,r]\mathbf{V}_{\hat{\mathbf{A}}}[\cdot,r]^{T}||_{*}+||\mathbf{U}_{\tilde{\mathbf{A}}}[\cdot,r]\mathbf{D}_{\tilde{\mathbf{A}}}[r,r]\mathbf{V}_{\tilde{\mathbf{A}}}[\cdot,r]^{T}||_{*}

for each $r$. ∎
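For readability, we note the weighted singular-value form of the penalty that the first display above assumes (this is how $P_{\boldsymbol{\lambda}}$ is evaluated throughout this appendix): for a matrix $\mathbf{A}$ with SVD $\mathbf{A}=\mathbf{U}_{\mathbf{A}}\mathbf{D}_{\mathbf{A}}\mathbf{V}_{\mathbf{A}}^{T}$,

P_{\boldsymbol{\lambda}}(\mathbf{A})=\sum_{r}\boldsymbol{\lambda}[r]\,\mathbf{D}_{\mathbf{A}}[r,r],

so the right-hand side of that display is exactly $P_{\boldsymbol{\lambda}}(\hat{\mathbf{A}})+P_{\boldsymbol{\lambda}}(\tilde{\mathbf{A}})$.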

Lemma 14.

Take two decompositions $\{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}\in\mathbb{S}_{\hat{\mathbf{X}}}$ and $\{\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}\in\mathbb{S}_{\hat{\mathbf{X}}}$, and assume that both satisfy

f_{\text{pen}}(\{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K})=f_{\text{pen}}(\{\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K})=\underset{\mathbb{S}_{\hat{\mathbf{X}}}}{\min}\;f_{\text{pen}}(\{\mathbf{S}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}).

Then, $\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}=\mathbf{U}_{\bullet}^{(k)}\hat{\mathbf{D}}^{(k)}\mathbf{V}_{\bullet}^{(k)T}$ and $\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}=\mathbf{U}_{\bullet}^{(k)}\tilde{\mathbf{D}}^{(k)}\mathbf{V}_{\bullet}^{(k)T}$, where $\mathbf{U}_{\bullet}^{(k)}:M\times R_{k}$ and $\mathbf{V}_{\bullet}^{(k)}:N\times R_{k}$ have orthonormal columns, and $\hat{\mathbf{D}}^{(k)}$ and $\tilde{\mathbf{D}}^{(k)}$ are diagonal.

Proof.

The result follows as a direct corollary of Lemmas 11 and 13: Lemma 11 implies $P_{\boldsymbol{\lambda}}(\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}+\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)})=P_{\boldsymbol{\lambda}}(\hat{\mathbf{S}}_{\bullet\bullet}^{(k)})+P_{\boldsymbol{\lambda}}(\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)})$ for each $k$, and Lemma 13 then implies the result. ∎
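To make the first implication explicit (a short sketch, using only the positive homogeneity of the penalty, $P_{\boldsymbol{\lambda}}(c\mathbf{A})=cP_{\boldsymbol{\lambda}}(\mathbf{A})$ for $c\geq 0$): taking $\alpha=1/2$ in Lemma 11 gives

\tfrac{1}{2}P_{\boldsymbol{\lambda}}\big(\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}+\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\big)=P_{\boldsymbol{\lambda}}\big(\tfrac{1}{2}\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}+\tfrac{1}{2}\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\big)=\tfrac{1}{2}P_{\boldsymbol{\lambda}}(\hat{\mathbf{S}}_{\bullet\bullet}^{(k)})+\tfrac{1}{2}P_{\boldsymbol{\lambda}}(\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}),

and multiplying through by 2 yields the additive equality used above.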

Theorem 8 is then established as follows:

Proof.

Take two decompositions $\{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}$ and $\{\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}$ that satisfy properties 1., 2., and 3. of Theorem 8; we will show that $\{\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}=\{\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\}_{k=1}^{K}$.
For each $k=1,\ldots,K$, write $\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}=\mathbf{U}_{\bullet}^{(k)}\hat{\mathbf{D}}^{(k)}\mathbf{V}_{\bullet}^{(k)T}$ and $\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}=\mathbf{U}_{\bullet}^{(k)}\tilde{\mathbf{D}}^{(k)}\mathbf{V}_{\bullet}^{(k)T}$ as in Lemma 14. Then, it suffices to show that $\hat{\mathbf{D}}^{(k)}[r,r]=\tilde{\mathbf{D}}^{(k)}[r,r]$ for all $k,r$.

First, consider module $k=1$ with $\mathbf{R}[\cdot,1]=[1\;0\;\cdots\;0]^{T}$ and $\mathbf{C}[\cdot,1]=[1\;0\;\cdots\;0]^{T}$. By way of contradiction, assume $\hat{\mathbf{D}}^{(1)}[1,1]>0$ and $\tilde{\mathbf{D}}^{(1)}[1,1]=0$. The linear independence of $\{\mathbf{V}_{j}^{(k)}[\cdot,r]:\hat{\mathbf{D}}^{(k)}[r,r]>0\}$ and $\{\mathbf{V}_{j}^{(k)}[\cdot,r]:\tilde{\mathbf{D}}^{(k)}[r,r]>0\}$ implies that

\mbox{row}(\mathbf{X}_{\bullet\bullet})=\mbox{span}\{\mathbf{U}_{\bullet}^{(k)}[\cdot,r]:\hat{\mathbf{D}}^{(k)}[r,r]>0\}=\mbox{span}\{\mathbf{U}_{\bullet}^{(k)}[\cdot,r]:\tilde{\mathbf{D}}^{(k)}[r,r]>0\}.

Thus, $\mathbf{U}_{\bullet}^{(1)}[\cdot,1]\in\mbox{span}\{\mathbf{U}_{\bullet}^{(k)}[\cdot,r]:\tilde{\mathbf{D}}^{(k)}[r,r]>0\}$, and it follows from the orthogonality of $\mathbf{U}_{\bullet}^{(1)}[\cdot,1]$ and $\{\mathbf{U}_{\bullet}^{(1)}[\cdot,r]:r>1\}$ that

\mathbf{U}_{\bullet}^{(1)}[\cdot,1]\in\mbox{span}\{\mathbf{U}_{\bullet}^{(k)}[\cdot,r]:\tilde{\mathbf{D}}^{(k)}[r,r]>0\text{ and }k>1\}.

Moreover, because $\mathbf{U}_{i}^{(1)}=\mathbf{0}$ for any $i>1$ and $\{\mathbf{U}_{i}^{(k)}[\cdot,r]:\tilde{\mathbf{D}}^{(k)}[r,r]>0\}$ are linearly independent, it follows that

\mathbf{U}_{\bullet}^{(1)}[\cdot,1]\in\mbox{span}\{\mathbf{U}_{\bullet}^{(k)}[\cdot,r]:\tilde{\mathbf{D}}^{(k)}[r,r]>0,\;k>1,\text{ and }\mathbf{R}[i,k]=0\text{ for any }i>1\}.\qquad(19)

Note that (19) implies $\mathbf{U}_{1}^{(1)}[\cdot,1]\in\mbox{row}(\mathbf{X}_{12})+\cdots+\mbox{row}(\mathbf{X}_{1J})$; however, this is contradicted by the linear independence of $\mathbf{U}_{1}^{(1)}[\cdot,1]$ and $\{\mathbf{U}_{i}^{(k)}[\cdot,r]:\hat{\mathbf{D}}^{(k)}[r,r]>0,\,k>1\}$. Thus, we conclude that $\hat{\mathbf{D}}^{(1)}[1,1]>0$ implies $\tilde{\mathbf{D}}^{(1)}[1,1]>0$. Analogous arguments show that $\hat{\mathbf{D}}^{(k)}[r,r]>0$ if and only if $\tilde{\mathbf{D}}^{(k)}[r,r]>0$ for any pair $(r,k)$. It follows that $\{\mathbf{U}_{i}^{(k)}[\cdot,r]:\hat{\mathbf{D}}^{(k)}[r,r]>0\text{ or }\tilde{\mathbf{D}}^{(k)}[r,r]>0\}$ are linearly independent for $i=1,\ldots,I$, and $\{\mathbf{V}_{j}^{(k)}[\cdot,r]:\hat{\mathbf{D}}^{(k)}[r,r]>0\text{ or }\tilde{\mathbf{D}}^{(k)}[r,r]>0\}$ are linearly independent for $j=1,\ldots,J$. Thus,

\sum_{k=1}^{K}\mathbf{U}_{\bullet}^{(k)}(\hat{\mathbf{D}}^{(k)}-\tilde{\mathbf{D}}^{(k)})\mathbf{V}_{\bullet}^{(k)T}=\sum_{k=1}^{K}\left(\hat{\mathbf{S}}_{\bullet\bullet}^{(k)}-\tilde{\mathbf{S}}_{\bullet\bullet}^{(k)}\right)=\mathbf{X}_{\bullet\bullet}-\mathbf{X}_{\bullet\bullet}=\mathbf{0}

implies that $\hat{\mathbf{D}}^{(k)}[r,r]=\tilde{\mathbf{D}}^{(k)}[r,r]$ for all $k,r$. ∎

A.2 Proof of Theorem 10

It is clear that the partial ‘M-steps’ in Algorithm 2 each partially optimize the free energy. It remains to show that the free energy is minimized over the missing entries when they are replaced with their conditional expectation under the model. Note that the free energy (11) can be expressed as

E_{q}\log\left(\prod_{k=1}^{K}q_{u,k}(\mathbf{U}_{k})q_{v,k}(\mathbf{V}_{k})\right)-E_{q}\log\left(p(\{\mathbf{U}\}_{k=1}^{K}\mid\boldsymbol{\tau}_{\mathbf{U}})\,p(\{\mathbf{V}\}_{k=1}^{K}\mid\boldsymbol{\tau}_{\mathbf{V}})\right)-E_{q}\log\left(p(\mathbf{X}_{\bullet\bullet}\mid\mathbf{U},\mathbf{V},\boldsymbol{\sigma})\right).

Consider the last term in the expression, decomposing the missing and non-missing entries of $\mathbf{X}_{\bullet\bullet}$:

E_{q}\log\left(p(\mathbf{X}_{\bullet\bullet}\mid\mathbf{U},\mathbf{V},\boldsymbol{\sigma})\right)=E_{q}\log\left(p(\{\mathbf{X}_{\bullet\bullet}[m,n]:(m,n)\notin\mathcal{M}\}\mid\mathbf{U},\mathbf{V},\boldsymbol{\sigma})\,p(\{\mathbf{X}_{\bullet\bullet}[m,n]:(m,n)\in\mathcal{M}\}\mid\mathbf{U},\mathbf{V},\boldsymbol{\sigma})\right)
=E_{q}\log\left(p(\{\mathbf{X}_{\bullet\bullet}[m,n]:(m,n)\notin\mathcal{M}\}\mid\mathbf{U},\mathbf{V},\boldsymbol{\sigma})\right)+E_{q}\log\left(p(\{\mathbf{X}_{\bullet\bullet}[m,n]:(m,n)\in\mathcal{M}\}\mid\mathbf{U},\mathbf{V},\boldsymbol{\sigma})\right).\qquad(20)

Missing data $\{\mathbf{X}_{\bullet\bullet}[m,n]:(m,n)\in\mathcal{M}\}$ are random variables that are independent and normally distributed given $\{\mathbf{U},\mathbf{V}\}$ as in (10). Thus, the last term in (20) can be expressed as follows:

E_{q}\log\left(p(\{\mathbf{X}_{\bullet\bullet}[m,n]:(m,n)\in\mathcal{M}\}\mid\mathbf{U},\mathbf{V},\boldsymbol{\sigma})\right)=c_{1}-\sum_{(m,n)\in\mathcal{M}}\frac{1}{2\boldsymbol{\sigma}^{2}[m,n]}E_{q}\left(\mathbf{X}_{\bullet\bullet}[m,n]-\mathbf{S}_{\bullet\bullet}[m,n]\right)^{2}
\leq c_{1}-\sum_{(m,n)\in\mathcal{M}}\frac{1}{2\boldsymbol{\sigma}^{2}[m,n]}E_{q}\left(E_{q}(\mathbf{S}_{\bullet\bullet}[m,n])-\mathbf{S}_{\bullet\bullet}[m,n]\right)^{2},
with equality if and only if $\mathbf{X}_{\bullet\bullet}[m,n]=E_{q}\mathbf{S}_{\bullet\bullet}[m,n]$ for all $(m,n)\in\mathcal{M}$.

Thus, for any given $q$, the free energy is minimized at $\mathbf{X}_{\bullet\bullet}[m,n]=E_{q}\mathbf{S}_{\bullet\bullet}[m,n]=\hat{\mathbf{S}}_{\bullet\bullet}[m,n]$ for $(m,n)\in\mathcal{M}$. ∎
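To illustrate the imputation principle established by this proof in a concrete setting, the following is a minimal single-matrix sketch (not the estimation algorithm of this paper): missing entries are iteratively replaced by the current low-rank signal estimate, which here plays the role of $E_{q}\mathbf{S}_{\bullet\bullet}$. The function name `impute_lowrank`, the fixed `rank` argument, and the use of a truncated SVD in place of the model-based shrinkage are illustrative assumptions.

```python
import numpy as np

def impute_lowrank(X, rank, n_iter=100, tol=1e-6):
    """EM-style imputation sketch: iteratively replace missing entries of X
    (coded as np.nan) with the current low-rank signal estimate."""
    X = np.asarray(X, dtype=float)
    miss = np.isnan(X)
    X_filled = np.where(miss, 0.0, X)  # simple zero initialization of missing cells
    S_hat = X_filled
    for _ in range(n_iter):
        # low-rank "signal" estimate from the completed matrix (truncated SVD)
        U, d, Vt = np.linalg.svd(X_filled, full_matrices=False)
        S_hat = (U[:, :rank] * d[:rank]) @ Vt[:rank, :]
        # imputation step: missing entries are set to the estimated signal
        X_new = np.where(miss, S_hat, X)
        if np.linalg.norm(X_new - X_filled) <= tol * max(1.0, np.linalg.norm(X_filled)):
            X_filled = X_new
            break
        X_filled = X_new
    return X_filled, S_hat

# small synthetic example: rank-3 signal plus noise, roughly 10% of entries missing
rng = np.random.default_rng(0)
signal = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 15))
X = signal + 0.1 * rng.normal(size=signal.shape)
X[rng.random(X.shape) < 0.1] = np.nan
X_imputed, S_hat = impute_lowrank(X, rank=3)
```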


References

  • Attias [1999] Attias, H.: Inferring parameters and structure of latent variable models by variational Bayes. In: Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pp. 21–30. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1999)
  • Cai et al. [2016] Cai, T., Cai, T.T., Zhang, A.: Structured matrix completion with applications to genomic data integration. Journal of the American Statistical Association 111(514), 621–633 (2016)
  • Feng et al. [2018] Feng, Q., Jiang, M., Hannig, J., Marron, J.: Angle-based joint and individual variation explained. Journal of Multivariate Analysis 166, 241–265 (2018)
  • Fox and Roberts [2012] Fox, C.W., Roberts, S.J.: A tutorial on variational Bayesian inference. Artificial Intelligence Review 38, 85–95 (2012)
  • Gavish and Donoho [2017] Gavish, M., Donoho, D.L.: Optimal shrinkage of singular values. IEEE Transactions on Information Theory 63(4), 2137–2152 (2017)
  • Gaynanova and Li [2019] Gaynanova, I., Li, G.: Structural learning and integrative decomposition of multi-view data. Biometrics 75(4), 1121–1132 (2019)
  • Hastie and Mazumder [2021] Hastie, T., Mazumder, R.: softImpute: Matrix Completion Via Iterative Soft-Thresholded SVD. (2021). R package version 1.4-1. https://CRAN.R-project.org/package=softImpute
  • Hastie et al. [2021] Hastie, T., Tibshirani, R., Narasimhan, B., Chu, G.: impute: Imputation for Microarray Data. (2021). R package version 1.68.0
  • Josse and Sardy [2016] Josse, J., Sardy, S.: Adaptive shrinkage of singular values. Statistics and Computing 26, 715–724 (2016)
  • Kolda and Bader [2009] Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Review 51(3), 455–500 (2009)
  • Kurucz et al. [2007] Kurucz, M., Benczúr, A.A., Csalogány, K.: Methods for large scale SVD with missing values. In: Proceedings of KDD Cup and Workshop, vol. 12, pp. 31–38 (2007)
  • Lock et al. [2013] Lock, E.F., Hoadley, K., Marron, J.S., Nobel, A.B.: Joint and Individual Variation Explained (JIVE) for integrated analysis of multiple data types. The Annals of Applied Statistics 7(1), 523 (2013)
  • Lock et al. [2022] Lock, E.F., Park, J.Y., Hoadley, K.: Bidimensional linked matrix factorization for pan-omics pan-cancer analysis. Annals of Applied Statistics 16(1), 193–215 (2022)
  • Liu and Zhang [2019] Liu, Y., Zhang, A.: StructureMC: Structured Matrix Completion. (2019). R package version 1.0. https://CRAN.R-project.org/package=StructureMC
  • Mazumder et al. [2010] Mazumder, R., Hastie, T., Tibshirani, R.: Spectral regularization algorithms for learning large incomplete matrices. The Journal of Machine Learning Research 11, 2287–2322 (2010)
  • Nakajima and Sugiyama [2014] Nakajima, S., Sugiyama, M.: Analysis of empirical MAP and empirical partially Bayes: Can they be alternatives to variational Bayes? In: Artificial Intelligence and Statistics, pp. 20–28 (2014). PMLR
  • Nakajima et al. [2013] Nakajima, S., Sugiyama, M., Babacan, S.D., Tomioka, R.: Global analytic solution of fully-observed variational Bayesian matrix factorization. Journal of Machine Learning Research 14(1), 1–37 (2013)
  • Nakajima et al. [2012] Nakajima, S., Tomioka, R., Sugiyama, M., Babacan, S.: Perfect dimensionality recovery by variational Bayesian PCA. In: Pereira, F., Burges, C.J., Bottou, L., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems, vol. 25 (2012)
  • O’Connell and Lock [2016] O’Connell, M.J., Lock, E.F.: R.JIVE for exploration of multi-source molecular data. Bioinformatics 32(18), 2877–2879 (2016)
  • Park and Lock [2020] Park, J.Y., Lock, E.F.: Integrative factorization of bidimensionally linked matrices. Biometrics 76(1), 61–74 (2020)
  • Rudelson and Vershynin [2010] Rudelson, M., Vershynin, R.: Non-asymptotic theory of random matrices: extreme singular values. In: Proceedings of the International Congress of Mathematicians 2010 (ICM 2010) (In 4 Volumes) Vol. I: Plenary Lectures and Ceremonies Vols. II–IV: Invited Lectures, pp. 1576–1602 (2010). World Scientific
  • Shabalin and Nobel [2013] Shabalin, A.A., Nobel, A.B.: Reconstruction of a low-rank matrix in the presence of Gaussian noise. Journal of Multivariate Analysis 118, 67–76 (2013)
  • Schouteden et al. [2014] Schouteden, M., Van Deun, K., Wilderjans, T.F., Van Mechelen, I.: Performing DISCO-SCA to search for distinctive and common information in linked data. Behavior Research Methods 46(2), 576–587 (2014)
  • Samorodnitsky et al. [2024] Samorodnitsky, S., Wendt, C.H., Lock, E.F.: Bayesian simultaneous factorization and prediction using multi-omic data. Computational Statistics & Data Analysis 197, 107974 (2024)
  • TCGA Research Network et al. [2012] TCGA Research Network, et al.: Comprehensive molecular portraits of human breast tumors. Nature 490(7418), 61 (2012)
  • Tang and Allen [2021] Tang, T.M., Allen, G.I.: Integrated principal components analysis. Journal of Machine Learning Research 22(198), 1–71 (2021)
  • Wang and Stephens [2021] Wang, W., Stephens, M.: Empirical Bayes matrix factorization. Journal of Machine Learning Research 22(120), 1–40 (2021)
  • Yang and Michailidis [2016] Yang, Z., Michailidis, G.: A non-negative matrix factorization method for detecting modules in heterogeneous omics multi-modal data. Bioinformatics 32(1), 1–8 (2016)
  • Yi et al. [2023] Yi, S., Wong, R.K.W., Gaynanova, I.: Hierarchical nuclear norm penalization for multi-view data integration. Biometrics (2023)