Eric F. Lock
Division of Biostatistics and Health Data Science, School of Public Health, University of Minnesota, Minneapolis, MN 55455, USA
Empirical Bayes Linked Matrix Decomposition
Abstract
Data for several applications in diverse fields can be represented as multiple matrices that are linked across rows or columns. This is particularly common in molecular biomedical research, in which multiple molecular “omics” technologies may capture different feature sets (e.g., corresponding to rows in a matrix) and/or different sample populations (corresponding to columns). This has motivated a large body of work on integrative matrix factorization approaches that identify and decompose low-dimensional signal that is shared across multiple matrices or specific to a given matrix. We propose an empirical variational Bayesian approach to this problem that has several advantages over existing techniques, including the flexibility to accommodate shared signal over any number of row or column sets (i.e., bidimensional integration), an intuitive model-based objective function that yields appropriate shrinkage for the inferred signals, and a relatively efficient estimation algorithm with no tuning parameters. A general result establishes conditions for the uniqueness of the underlying decomposition for a broad family of methods that includes the proposed approach. For scenarios with missing data, we describe an associated iterative imputation approach that is novel for the single-matrix context and a powerful approach for “blockwise” imputation (in which an entire row or column is missing) in various linked matrix contexts. Extensive simulations show that the method performs very well under different scenarios with respect to recovering underlying low-rank signal, accurately decomposing shared and specific signals, and accurately imputing missing data. The approach is applied to gene expression and miRNA data from breast cancer tissue and normal breast tissue, for which it gives an informative decomposition of variation and outperforms alternative strategies for missing data imputation.
keywords: Data integration, dimension reduction, low-rank factorization, missing data imputation, variational Bayes

1 Introduction
Low-rank matrix factorization techniques are widely used for many machine learning tasks such as data compression and dimension reduction, denoising to approximate underlying signal, and matrix completion. Often, instead of a single matrix, the data for a given application take the form of multiple matrices that are linked by rows and/or columns. For example, in Section 9 we consider data from The Cancer Genome Atlas, in which gene expression (mRNA) and microRNA (miRNA) data are available from different platforms for breast cancer tumor samples and normal breast tissue samples from unrelated individuals. Thus, the data can be represented as four matrices: (1) mRNA for cancer tissues, (2) mRNA for normal tissues, (3) miRNA for cancer tissues, and (4) miRNA for normal tissues; these matrices share both rows (here, molecular features) and columns (here, samples), giving a bidimensionally linked structure. Furthermore, these data have column-wise missingness, in which molecular data for one of the platforms (mRNA or miRNA) is entirely missing for some samples in each of the cancer and control cohorts. We are interested in imputing the missing data, as well as identifying patterns of variation that are shared or specific to the different data sources and sample cohorts; for these tasks, a low-rank factorization approach that integrates all of the matrices together while accounting for their differences is attractive.
There is a growing and dynamic literature on multi-matrix low-rank factorization approaches. The majority of existing methods assume that just one dimension, rows or columns, is shared across matrices, i.e., unidimensional integration. For example, the Joint and Individual Variation Explained (JIVE) approach [12, 19] decomposes low-rank structure that is shared across matrices from variation that is specific to each matrix. Several related approaches yield a similar decomposition using different strategies or assumptions [3, 23, 28, 26], while other approaches such as structural learning and integrative decomposition (SLIDE) [6] allow for structure that is partially shared across any subset of matrices. Such an approach is efficient and useful for interpretation, as for many applications identifying shared, partially shared, and specific underlying structure is of interest.
The bidimensional linked matrix factorization (BIDIFAC) approach [20, 13] was developed to accommodate bidimensionally linked integration scenarios, by identifying structure that is shared or specific across both row sets and column sets. The objective function for BIDIFAC involves a structured nuclear norm penalty, which approximates the data as a decomposition of low-rank modules that are shared across row or column subsets. This approach is attractive in the bidimensionally linked context because it has favorable convergence properties and generally leads to a uniquely-defined decomposition [13]; methods for unidimensional integration without shrinkage penalties typically require orthogonality [12, 3, 6, 29] or other constraints to assure uniqueness, which do not extend to the bidimensionally linked context. Furthermore, the application of the structured nuclear norm penalty for unidimensional integration (termed UNIFAC) has been shown to outperform alternative approaches with respect to accurately decomposing shared and specific structures [13] and imputing missing data [20] under certain scenarios. However, a drawback is that it tends to over-penalize the estimated structure, and is thus biased toward an over-conservative solution with respect to the underlying low-rank structure [29].
For a single matrix the tendency of the nuclear norm to over-shrink low-rank signal has been well-documented, and in contrast low-rank approximations without any further shrinkage tend to overfit [9]. Empirical Bayes approaches to low-rank factorization are an attractive compromise, in which the optimal levels of shrinkage are determined empirically within a model-based framework [27]. Under a variational Bayes approach, the true posterior distribution for a Bayesian model is approximated by minimizing the free energy (which reduces to a Kullback-Leibler divergence) under simpler distributional assumptions [4]. In particular, an empirical variational Bayes approach to low-rank matrix factorization under an assumption of normally distributed factor matrices has desirable theoretical properties, and the resulting approximation for a scenario without missing data can be efficiently computed via a closed-form shrinkage formula on the observed singular values of the matrix [18, 17].
In this article, we describe an empirical variational Bayesian approach to identify and decompose shared, partially shared, or specific low-rank structure in the context of unidimensionally or bidimensionally linked matrices. Moreover, we describe an accompanying iterative imputation approach analogous to an expectation-maximization (EM) algorithm that can handle entry-wise missing or row/column-wise missing values. The proposed approach is free of any tuning parameters, and is shown to share desirable properties of the BIDIFAC approach while addressing the issue of over-shrinkage.
2 Notation and Setting
Throughout this article bold uppercase characters (e.g., $\mathbf{X}$) correspond to matrices and bold lowercase characters (e.g., $\mathbf{v}$) correspond to vectors. Square brackets index the entries within a vector or matrix, e.g., $\mathbf{X}[i,j]$. The singular value decomposition (SVD) of a matrix $\mathbf{X}: m \times n$ is given by $\mathbf{X} = \mathbf{U}\mathbf{D}\mathbf{V}^T$, where the diagonal entries of $\mathbf{D}$, $\sigma_i(\mathbf{X}) = \mathbf{D}[i,i]$, are the singular values of $\mathbf{X}$ ordered from largest to smallest for $i = 1, \ldots, \min(m,n)$. The Frobenius norm is given by $\|\mathbf{X}\|_F$, so that $\|\mathbf{X}\|_F^2$ is the sum of the squared entries in $\mathbf{X}$; the nuclear norm is given by $\|\mathbf{X}\|_*$, which is the sum of the singular values in a matrix: $\|\mathbf{X}\|_* = \sum_i \sigma_i(\mathbf{X})$.
We consider the setting of a collection of matrices $\mathbf{X}_{ij}: m_i \times n_j$ for $i = 1, \ldots, I$ and $j = 1, \ldots, J$. The column sets and row sets are consistent across the matrices. The subscript '$\cdot$' defines concatenation across row or column sets, and the full data can be represented as a single matrix $\mathbf{X}_{\cdot\cdot}: M \times N$ where $M = \sum_{i=1}^I m_i$ and $N = \sum_{j=1}^J n_j$:

$$\mathbf{X}_{\cdot\cdot} = \begin{bmatrix} \mathbf{X}_{11} & \cdots & \mathbf{X}_{1J} \\ \vdots & \ddots & \vdots \\ \mathbf{X}_{I1} & \cdots & \mathbf{X}_{IJ} \end{bmatrix}. \tag{4}$$
As for our motivating application in Section 9, the column sets may correspond to different sample cohorts and the row sets may correspond to features from different high-dimensional sources.
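For concreteness, the following R snippet assembles a bidimensionally linked matrix of this form from its blocks; the dimensions here are illustrative only, not taken from the application:

```r
# Assemble X.. in (4) from blocks, with I = J = 2.
set.seed(1)
X11 <- matrix(rnorm(50 * 40), 50, 40)  # row set 1, column set 1
X12 <- matrix(rnorm(50 * 30), 50, 30)  # row set 1, column set 2
X21 <- matrix(rnorm(20 * 40), 20, 40)  # row set 2, column set 1
X22 <- matrix(rnorm(20 * 30), 20, 30)  # row set 2, column set 2
X <- rbind(cbind(X11, X12), cbind(X21, X22))  # M x N with M = 70, N = 70
```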
3 Single-matrix results
Here we review some established results for a single matrix that are critical to what follows. Propositions 1 and 2 describe solutions to the least squares matrix approximation problem under a rank-constrained or nuclear norm penalized criterion, respectively.
Proposition 1.
For $r \le \min(m,n)$, the minimizer of the least squares objective $\|\mathbf{X} - \mathbf{S}\|_F^2$ under the constraint rank$(\mathbf{S}) \le r$ is given by $\hat{\mathbf{S}} = \mathbf{U}\tilde{\mathbf{D}}\mathbf{V}^T$, where $\tilde{\mathbf{D}}$ is diagonal with $\tilde{\mathbf{D}}[i,i] = \mathbf{D}[i,i]$ for $i \le r$ and $\tilde{\mathbf{D}}[i,i] = 0$ for $i > r$.
Proposition 2.
For $\lambda > 0$, the minimizer of the nuclear norm penalized objective $\frac{1}{2}\|\mathbf{X} - \mathbf{S}\|_F^2 + \lambda\|\mathbf{S}\|_*$ is given by $\hat{\mathbf{S}} = \mathbf{U}\tilde{\mathbf{D}}\mathbf{V}^T$, where $\tilde{\mathbf{D}}$ is diagonal with $\tilde{\mathbf{D}}[i,i] = \max(\mathbf{D}[i,i] - \lambda, 0)$ for all $i$.
Proposition 1 is well-known, and a proof of Proposition 2 can be found in Mazumder et al. [15]. Because of how they operate on the singular values $\sigma_i(\mathbf{X})$, Proposition 1 represents a hard-thresholding and Proposition 2 a soft-thresholding approach to low-rank approximation. Proposition 3 below establishes that the soft-thresholding operator also solves an objective with $L_2$-penalties on the factor components of $\mathbf{S}$, while Corollary 4 further establishes that it is the posterior mode of a Bayesian model with normal priors on the factor components.
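For concreteness, both thresholding operators take only a few lines of R; the following is a minimal sketch of Propositions 1 and 2, not an optimized implementation:

```r
# Hard-thresholding (Proposition 1): keep the r largest singular values.
svt_hard <- function(X, r) {
  s <- svd(X)
  d <- s$d
  d[seq_along(d) > r] <- 0
  s$u %*% diag(d, nrow = length(d)) %*% t(s$v)
}

# Soft-thresholding (Proposition 2): subtract lambda from each singular
# value, truncating at zero.
svt_soft <- function(X, lambda) {
  s <- svd(X)
  d <- pmax(s$d - lambda, 0)
  s$u %*% diag(d, nrow = length(d)) %*% t(s$v)
}
```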
Proposition 3.

For $r = \min(m,n)$,

$$\min_{\mathbf{S}} \frac{1}{2}\|\mathbf{X} - \mathbf{S}\|_F^2 + \lambda\|\mathbf{S}\|_* = \min_{\mathbf{U}: m \times r, \, \mathbf{V}: n \times r} \frac{1}{2}\left\|\mathbf{X} - \mathbf{U}\mathbf{V}^T\right\|_F^2 + \frac{\lambda}{2}\left(\|\mathbf{U}\|_F^2 + \|\mathbf{V}\|_F^2\right), \tag{5}$$

and the minimizer $\hat{\mathbf{S}}$ of the left-hand side satisfies $\hat{\mathbf{S}} = \hat{\mathbf{U}}\hat{\mathbf{V}}^T$ for any minimizer $\{\hat{\mathbf{U}}, \hat{\mathbf{V}}\}$ of the right-hand side.
Corollary 4.
For $r = \min(m,n)$, if $\mathbf{U}: m \times r$ and $\mathbf{V}: n \times r$ have independent Normal$(0, \sigma_{uv}^2)$ entries, and $\mathbf{X} = \mathbf{U}\mathbf{V}^T + \mathbf{E}$ where $\mathbf{E}$ has independent Normal$(0, \sigma^2)$ entries, then the mode of the posterior distribution $p(\mathbf{U}, \mathbf{V} \mid \mathbf{X})$ satisfies $\hat{\mathbf{U}}\hat{\mathbf{V}}^T = \hat{\mathbf{S}}$, where $\{\hat{\mathbf{U}}, \hat{\mathbf{V}}\}$ solves the right-hand side of (5) with $\lambda = \sigma^2/\sigma_{uv}^2$.
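To make the prior-to-penalty correspondence explicit, the negative log posterior density under this model is, up to additive constants,

$$-\log p(\mathbf{U}, \mathbf{V} \mid \mathbf{X}) = \frac{1}{2\sigma^2}\left\|\mathbf{X} - \mathbf{U}\mathbf{V}^T\right\|_F^2 + \frac{1}{2\sigma_{uv}^2}\left(\|\mathbf{U}\|_F^2 + \|\mathbf{V}\|_F^2\right) + \text{const},$$

and multiplying through by $\sigma^2$ recovers the right-hand side of (5) with $\lambda = \sigma^2/\sigma_{uv}^2$.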
A proof of Proposition 3 can be found in Mazumder et al. [15], and Corollary 4 follows directly from this correspondence. While the results hold for general $r$, in what follows we set $r$ as the largest possible rank, $r = \min(m,n)$; the actual estimated rank will tend to be smaller due to singular value thresholding (as in Proposition 2). The distribution of singular values for the error matrix $\mathbf{E}$ has been well-studied, including the following results on the first singular value under general conditions from Rudelson and Vershynin [21]:
Proposition 5.
If $\mathbf{E}: m \times n$ has independent entries with mean $0$, variance $\sigma^2$, and finite fourth moment, then $\sigma_1(\mathbf{E}) = \sigma(\sqrt{m} + \sqrt{n})(1 + o(1))$ as $m, n \rightarrow \infty$; further, if the entries are Gaussian, $P\left(\sigma_1(\mathbf{E}) > \sigma(\sqrt{m} + \sqrt{n} + t)\right) \le e^{-t^2/2}$ for any $t > 0$.
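As a quick empirical illustration of Proposition 5 (a sketch with arbitrary dimensions):

```r
# For pure noise, the largest singular value concentrates near
# sigma * (sqrt(m) + sqrt(n)).
set.seed(2)
m <- 200; n <- 100; sigma <- 1
E <- matrix(rnorm(m * n, sd = sigma), m, n)
svd(E)$d[1]                  # observed first singular value
sigma * (sqrt(m) + sqrt(n))  # theoretical benchmark, about 24.14
```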
These results can motivate the choice of the nuclear norm penalty $\lambda$. A conservative approach, if the error variance $\sigma^2$ is known, is to select $\lambda = \sigma(\sqrt{m} + \sqrt{n})$; per Propositions 5 and 2, this will tend to shrink to $0$ any singular values in $\mathbf{X}$ that result from error with no true low-rank signal. Another approach is to adopt an empirical Bayes estimate for $\lambda$ based on the model in Corollary 4. However, a general limitation of the nuclear norm approach to low-rank matrix reconstruction is the use of a universal threshold on all singular values; in practice larger singular values should be penalized less, as they capture a relatively lower ratio of error and higher signal [22]. An analogous interpretation is that the entries of $\mathbf{U}$ and $\mathbf{V}$ should not have identical variance across columns, and this can be relaxed by a model in which they have column-specific variances: $\mathbf{U}[\cdot,k] \sim \text{Normal}(\mathbf{0}, \sigma_{u_k}^2 \mathbf{I})$ and $\mathbf{V}[\cdot,k] \sim \text{Normal}(\mathbf{0}, \sigma_{v_k}^2 \mathbf{I})$ for $k = 1, \ldots, r$. (For most scenarios $\sigma_{u_k}^2$ and $\sigma_{v_k}^2$ are not independently identifiable, as the level of signal is determined by their product.)
For given $\{\sigma_{u_k}^2, \sigma_{v_k}^2\}_{k=1}^r$ and $\sigma^2$, the approximation resulting from the posterior mode for $\{\mathbf{U}, \mathbf{V}\}$ is given by thresholding the singular values of $\mathbf{X}$, analogous to Corollary 4. To determine $\{\sigma_{u_k}^2, \sigma_{v_k}^2\}_{k=1}^r$, directly maximizing the joint likelihood over $\{\mathbf{U}, \mathbf{V}\}$ and the variances degenerates in the global solution, though it can have a reasonable local mode that is a singular value thresholding operator [16]. Further, a marginal empirical Bayes estimator that optimizes the marginal likelihood (effectively integrating over $\mathbf{U}$ and $\mathbf{V}$) is difficult to obtain computationally or analytically. As an alternative, one can use an empirical variational Bayes (EVB) approach. In general, for a variational Bayes approach [1], the true posterior $p(\theta \mid \mathbf{X})$ of a Bayesian model with parameters $\theta$ and data $\mathbf{X}$ is approximated by another distribution $q(\theta)$ under simplifying assumptions to minimize the free energy

$$F(q) = \mathbb{E}_q\left[\log \frac{q(\theta)}{p(\mathbf{X}, \theta)}\right],$$

where $\mathbb{E}_q$ is the expected value with respect to the distribution defined by $q$. In the context of low-rank approximation, a useful simplifying assumption for $q$ is that the left and right factor matrices are independent, $q(\mathbf{U}, \mathbf{V}) = q(\mathbf{U})\,q(\mathbf{V})$. Thus, for hyperparameters $\{\sigma_{u_k}^2, \sigma_{v_k}^2\}_{k=1}^r$ and $\sigma^2$, the free energy to minimize is
$$F(q) = \mathbb{E}_q\left[\log \frac{q(\mathbf{U})\,q(\mathbf{V})}{p(\mathbf{X} \mid \mathbf{U}, \mathbf{V})\,p(\mathbf{U})\,p(\mathbf{V})}\right], \tag{6}$$
and a point estimate for low-rank structure can be obtained via its expected value under $q$:
$$\hat{\mathbf{S}} = \mathbb{E}_q\left[\mathbf{U}\mathbf{V}^T\right]. \tag{7}$$
Under an empirical variational Bayes approach, $\{\sigma_{u_k}^2, \sigma_{v_k}^2\}_{k=1}^r$ and $\sigma^2$ are also estimated by minimizing the free energy in (6). For fully-observed $\mathbf{X}$, Nakajima et al. [18] showed that the global solution to the empirical variational Bayes objective can be obtained in closed form as a singular value thresholding operator; their result for fixed $\sigma^2$ is given in Theorem 6, and for estimation of $\sigma^2$ in Theorem 7.
Theorem 6.
Theorem 7.
Assume without loss of generality that $m \le n$. The estimate $\hat{\sigma}^2$ that minimizes (6) is the global minimizer of
where
and
4 Bidimensional linked matrix factorization
BIDIFAC+ [13] was developed to decompose bidimensionally linked data as in (4) into modules of low-rank structure that are shared across column sets and/or row sets. That is,
$$\mathbf{X}_{\cdot\cdot} = \sum_{k=1}^{K} \mathbf{S}^{(k)} + \mathbf{E}, \tag{8}$$

where each $\mathbf{S}^{(k)}$ is a concatenation of blocks $\mathbf{S}^{(k)}_{ij}$ as in (4) and

$$\mathbf{S}^{(k)}_{ij} = \mathbf{0} \;\text{ if }\; \mathbf{R}[k,i] = 0 \text{ or } \mathbf{C}[k,j] = 0.$$

Here, $\mathbf{R}: K \times I$ and $\mathbf{C}: K \times J$ are binary matrices that indicate the presence of module $k$ ($k = 1, \ldots, K$) across the row and column sets; that is, $\mathbf{S}^{(k)}$ is low-rank structure specific to the submatrix defined by the row sets identified in $\mathbf{R}[k,\cdot]$ and the column sets identified in $\mathbf{C}[k,\cdot]$. This general framework subsumes several other integrative decompositions of structured variation. For example, in the unidimensionally linked case with shared columns ($J = 1$), JIVE [12] identifies components that are either shared across all matrices ($\mathbf{R}[k,i] = 1$ for all $i$) or specific to a single matrix ($\mathbf{R}[k,i] = 1$ for just one $i$), while SLIDE [6] allows for components that are shared across any number of matrices. The BIDIFAC method [20] is a special case in which modules are either globally shared ($\mathbf{R}[k,i] = \mathbf{C}[k,j] = 1$ for all $i$ and $j$), shared across all rows for a given column set ($\mathbf{R}[k,i] = 1$ for all $i$ and $\mathbf{C}[k,j] = 1$ for one $j$), shared across all columns for a given row set ($\mathbf{C}[k,j] = 1$ for all $j$ and $\mathbf{R}[k,i] = 1$ for one $i$), or specific to one matrix ($\mathbf{R}[k,i] = 1$ and $\mathbf{C}[k,j] = 1$ for just one $i$ and $j$).
The BIDIFAC and BIDIFAC+ approaches solve a structured nuclear norm objective:
$$\min_{\{\mathbf{S}^{(k)}\}_{k=1}^K} \frac{1}{2}\Big\|\mathbf{X}_{\cdot\cdot} - \sum_{k=1}^{K} \mathbf{S}^{(k)}\Big\|_F^2 + \sum_{k=1}^{K} \lambda_k \big\|\mathbf{S}^{(k)}\big\|_*, \tag{9}$$
subject to $\mathbf{S}^{(k)}_{ij} = \mathbf{0}$ if $\mathbf{R}[k,i] = 0$ or $\mathbf{C}[k,j] = 0$. For fixed $\mathbf{R}$ and $\mathbf{C}$, objective (9) may be solved with an iterative soft singular value thresholding algorithm that cycles through the estimation of each $\mathbf{S}^{(k)}$ with applications of Proposition 3. For smaller $I$ and $J$, $\mathbf{R}$ and $\mathbf{C}$ can enumerate all possible submatrices of the row and column sets; the solution may still be sparse, with $\hat{\mathbf{S}}^{(k)} = \mathbf{0}$ if the submatrix corresponding to module $k$ has no particular shared structure. Alternatively, if the numbers of row and column sets $I$ and $J$ are large, Lock et al. [13] describe an algorithm to adaptively estimate $\mathbf{R}$ and $\mathbf{C}$ to give the submatrices with the most significant low-rank structure during estimation.
The choice of the module-specific penalty parameters $\lambda_k$ is critical. By default, the $\lambda_k$ are selected by an approach motivated by Proposition 5; under the assumption that each matrix $\mathbf{X}_{ij}$ has error variance $\sigma^2 = 1$, $\lambda_k$ depends on the total number of non-zero rows and columns in its submatrix,

$$\lambda_k = \sqrt{\textstyle\sum_{i=1}^I m_i \mathbf{R}[k,i]} + \sqrt{\textstyle\sum_{j=1}^J n_j \mathbf{C}[k,j]}$$

for $k = 1, \ldots, K$. This approach will shrink to $0$ any singular values in the submatrix that are due to error only. In practice each data matrix is scaled to have error variance $1$ beforehand; if the error variance is unknown, then it can be taken as the total variance in $\mathbf{X}_{ij}$ (a conservative choice) or estimated via other approaches such as the median absolute deviation estimator in Gavish and Donoho [5].
5 Empirical variational BIDIFAC (EV-BIDIFAC)
While the BIDIFAC+ approach has several advantages, one weakness is the required standardization by the noise variance in each data matrix, which is estimated pre-hoc and not within a cohesive framework. More critically, the use of nuclear norm penalties generally results in over-shrinkage of the signal, and the default theoretically-motivated choice of the $\lambda_k$'s is particularly conservative. Thus, we aim to develop a more flexible empirical variational Bayes approach to the decomposition of bidimensionally linked matrices, analogous to that for a single matrix in Section 3. Our pseudo-Bayesian model is as follows for $i = 1, \ldots, I$ and $j = 1, \ldots, J$:
$$\mathbf{X}_{ij} = \sum_{k=1}^{K} \mathbf{S}^{(k)}_{ij} + \mathbf{E}_{ij}, \qquad \mathbf{S}^{(k)}_{ij} = \mathbf{U}_{ki}\mathbf{V}_{kj}^T \text{ if } \mathbf{R}[k,i] = \mathbf{C}[k,j] = 1, \text{ and } \mathbf{S}^{(k)}_{ij} = \mathbf{0} \text{ otherwise},$$
$$\mathbf{U}_{ki}[\cdot, l] \sim \text{Normal}\left(\mathbf{0}, \sigma_{u_{kl}}^2 \mathbf{I}\right), \qquad \mathbf{V}_{kj}[\cdot, l] \sim \text{Normal}\left(\mathbf{0}, \sigma_{v_{kl}}^2 \mathbf{I}\right), \tag{10}$$
$$\mathbf{E}_{ij}[a,b] \sim \text{Normal}\left(0, \sigma_{ij}^2\right),$$
where $\mathbf{I}$ is an identity matrix of appropriate dimension. Using repeated applications of Corollary 4, the BIDIFAC+ objective can be shown to give the posterior mode of the model specified in (10) for which $\{\sigma_{u_{kl}}^2\}$ and $\{\sigma_{v_{kl}}^2\}$ are fixed, and $\sigma_{ij}^2 = 1$ for all $i$ and $j$. We extend this framework by allowing the factor variances and $\sigma_{ij}^2$ to be estimated within a cohesive empirical variational Bayes framework. We approximate the true posterior with a distribution $q$, for which the factor matrices are mutually independent:

$$q\left(\{\mathbf{U}_k, \mathbf{V}_k\}_{k=1}^K\right) = \prod_{k=1}^{K} q(\mathbf{U}_k)\, q(\mathbf{V}_k).$$
The corresponding free energy is given by
$$F(q) = \mathbb{E}_q\left[\log \frac{\prod_{k=1}^K q(\mathbf{U}_k)\, q(\mathbf{V}_k)}{p\left(\mathbf{X}_{\cdot\cdot} \mid \{\mathbf{U}_k, \mathbf{V}_k\}_{k=1}^K\right) \prod_{k=1}^K p(\mathbf{U}_k)\, p(\mathbf{V}_k)}\right], \tag{11}$$
with the densities in the denominator as specified in model (10). The resulting low-rank fit and decomposition is then given by
$$\hat{\mathbf{S}} = \sum_{k=1}^{K} \hat{\mathbf{S}}^{(k)}, \tag{12}$$
where for $k = 1, \ldots, K$, $\hat{\mathbf{S}}^{(k)} = \mathbb{E}_q\left[\mathbf{U}_k \mathbf{V}_k^T\right]$.
Algorithm 1 describes an iterative approach to compute the empirical variational Bayes solution. This algorithm converges to a blockwise minimum of the free energy (11) over $\{q(\mathbf{U}_k), q(\mathbf{V}_k)\}_{k=1}^K$, with the error variances $\sigma_{ij}^2$ fixed at the distinct minimizers of (11) for each $\mathbf{X}_{ij}$ separately. Alternatively, one can update the variances over the iterative algorithm to minimize free energy under the joint model. In Algorithm 1, the number of modules and the row- and column-set inclusions for the modules, $\mathbf{R}$ and $\mathbf{C}$, are assumed to be fixed. For a smaller number of linked matrices, one can set $\mathbf{R}$ and $\mathbf{C}$ to enumerate all of the possible submatrices of shared, partially shared, or specific structure: $K = (2^I - 1)(2^J - 1)$. The estimated decomposition may still be sparse in that $\hat{\mathbf{S}}^{(k)} = \mathbf{0}$ if module $k$ has no distinct low-rank structure, as demonstrated in Section 8.3. If enumerating all submatrices of the row- and column-sets is not feasible due to a large number of matrices, one can specify $K$, $\mathbf{R}$, and $\mathbf{C}$ based on prior knowledge. Alternatively, we may approximate the fully enumerated decomposition with a large upper bound on the number of distinct modules, $K$, and the row- and column-set inclusions $\mathbf{R}$ and $\mathbf{C}$ estimated empirically as in Lock et al. [13]. A schematic code sketch of these updates is given after the algorithm below.
Algorithm 1.

1. Estimate $\hat{\sigma}_{ij}^2$ for each $\mathbf{X}_{ij}$, as in Theorem 7.
2. Define $\tilde{\mathbf{X}}_{\cdot\cdot}$ as in (4), with $\tilde{\mathbf{X}}_{ij} = \mathbf{X}_{ij}/\hat{\sigma}_{ij}$ for $i = 1, \ldots, I$ and $j = 1, \ldots, J$.
3. Initialize $\hat{\mathbf{S}}^{(k)} = \mathbf{0}$ for $k = 1, \ldots, K$.
4. For $k = 1, \ldots, K$:
   (a) Let $\tilde{\mathbf{X}}^{(k)}$ be the submatrix of $\tilde{\mathbf{X}}_{\cdot\cdot} - \sum_{k' \neq k} \hat{\mathbf{S}}^{(k')}$ over row sets indicated by $\mathbf{R}[k,\cdot]$ and column sets $\mathbf{C}[k,\cdot]$.
   (b) Update the non-zero submatrix in $\hat{\mathbf{S}}^{(k)}$ by the application of the singular value thresholding in Theorem 6 to $\tilde{\mathbf{X}}^{(k)}$.
5. Repeat Step 4. until convergence of the $\hat{\mathbf{S}}^{(k)}$.
6. Unscale the estimated structure: $\hat{\mathbf{S}}^{(k)}_{ij} = \hat{\sigma}_{ij}\,\hat{\mathbf{S}}^{(k)}_{ij}$ for $i = 1, \ldots, I$, $j = 1, \ldots, J$, and $k = 1, \ldots, K$.
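The following R sketch mirrors the block-coordinate structure of Algorithm 1. It is illustrative only: the names (`ev_bidifac`, `row_sets`, `col_sets`, `evb_svt`) are not from the released package (https://github.com/lockEF/bidifac), and a simple conservative soft-thresholding rule is substituted for the closed-form EVB thresholding of Theorem 6 so that the sketch runs end-to-end:

```r
# Stand-in for the EVB singular value thresholding of Theorem 6;
# here, soft-thresholding at the conservative level sqrt(m) + sqrt(n).
evb_svt <- function(X) {
  s <- svd(X)
  d <- pmax(s$d - (sqrt(nrow(X)) + sqrt(ncol(X))), 0)
  s$u %*% diag(d, nrow = length(d)) %*% t(s$v)
}

# X: M x N scaled data; row_sets, col_sets: lists of row/column indices for
# each row and column set; R, C: K x I and K x J binary inclusion matrices.
ev_bidifac <- function(X, row_sets, col_sets, R, C, n_iter = 50) {
  K <- nrow(R)
  S <- replicate(K, matrix(0, nrow(X), ncol(X)), simplify = FALSE)
  for (iter in seq_len(n_iter)) {
    for (k in seq_len(K)) {
      rows <- unlist(row_sets[R[k, ] == 1])
      cols <- unlist(col_sets[C[k, ] == 1])
      resid <- X - Reduce(`+`, S) + S[[k]]  # residual excluding module k
      S[[k]][] <- 0
      S[[k]][rows, cols] <- evb_svt(resid[rows, cols, drop = FALSE])
    }
  }
  S  # list of estimated modules S^(1), ..., S^(K)
}
```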
6 Uniqueness
The uniqueness of decompositions into shared, partially shared, and specific low-rank structure requires careful consideration, as the terms in the model are generally not uniquely defined without further constraints. For unidimensional decompositions of low-rank structure, conditions of orthogonality are commonly used to assure the different components are uniquely identified [12, 3, 6, 29]; however, such conditions are not straightforward to extend to decompositions for bidimensionally linked data. Alternatively, Lock et al. [13] showed that the terms in the BIDIFAC+ solution are uniquely defined under certain conditions, without orthogonality, as the minimizer of the structured nuclear norm objective in (9). Here we extend this result to a much broader class of penalties that includes the EV-BIDIFAC decomposition.
For fixed row-set and column-set indicators for the modules, $\mathbf{R}$ and $\mathbf{C}$, let $\mathbb{S}$ be the set of possible decompositions that yield a given approximation $\mathbf{S}$:

$$\mathbb{S} = \left\{\{\mathbf{S}^{(k)}\}_{k=1}^K : \sum_{k=1}^K \mathbf{S}^{(k)} = \mathbf{S} \;\text{ and }\; \mathbf{S}^{(k)}_{ij} = \mathbf{0} \text{ if } \mathbf{R}[k,i] = 0 \text{ or } \mathbf{C}[k,j] = 0\right\}.$$

In most settings the cardinality of $\mathbb{S}$ is infinite without further conditions. Define $\|\cdot\|_{*,\mathbf{w}}$ for a vector $\mathbf{w}$ as a weighted sum of the singular values of a matrix:

$$\|\mathbf{S}\|_{*,\mathbf{w}} = \sum_{i} \mathbf{w}[i]\,\sigma_i(\mathbf{S}),$$

and let $P(\cdot)$ give a structured penalty on the different modules of a decomposition:

$$P\left(\{\mathbf{S}^{(k)}\}_{k=1}^K\right) = \sum_{k=1}^K \big\|\mathbf{S}^{(k)}\big\|_{*,\mathbf{w}_k},$$

with non-zero weight vectors $\mathbf{w}_k$ for each $k$. Theorem 8 gives sufficient conditions for the minimizer of this penalty to be uniquely determined.
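In R, the weighted nuclear norm and the structured penalty $P(\cdot)$ are immediate to compute (a sketch; the names are illustrative):

```r
# Weighted nuclear norm ||S||_{*,w}: weighted sum of singular values.
wnuclear <- function(S, w) {
  d <- svd(S)$d
  sum(w[seq_along(d)] * d)
}

# Structured penalty P over a list of modules and matching weight vectors.
P_penalty <- function(S_list, w_list) sum(mapply(wnuclear, S_list, w_list))
```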
Theorem 8.
Consider $\{\mathbf{S}^{(k)}\}_{k=1}^K \in \mathbb{S}$ and let $\mathbf{U}_k \mathbf{D}_k \mathbf{V}_k^T$ give the SVD of $\mathbf{S}^{(k)}$ for $k = 1, \ldots, K$. The following three properties uniquely identify $\{\mathbf{S}^{(k)}\}_{k=1}^K$:

1. $\{\mathbf{S}^{(k)}\}_{k=1}^K$ minimizes $P(\cdot)$ over $\mathbb{S}$,
2. the columns of $\{\mathbf{U}_k\}_{k=1}^K$ are linearly independent across $k = 1, \ldots, K$,
3. the columns of $\{\mathbf{V}_k\}_{k=1}^K$ are linearly independent across $k = 1, \ldots, K$.
The proof of Theorem 8 is given in Appendix A.1. Note that the linear independence conditions are straightforward to verify, and are likely to hold in practice if the ranks of the terms in the decomposition are small relative to the matrix dimensions. As noted in Proposition 9, the terms in any accumulation point of Algorithm 1 also give a coordinate-wise optimum of $P(\cdot)$ for a certain set of weights $\{\mathbf{w}_k\}_{k=1}^K$, and thus the EV-BIDIFAC solution is unique in this sense.
Proposition 9.
If $\{\hat{\mathbf{S}}^{(k)}\}_{k=1}^K$ is an accumulation point of Algorithm 1, then it is a coordinate-wise optimum of $P(\cdot)$ over $\mathbb{S}$ for some weights $\{\mathbf{w}_k\}_{k=1}^K$.
7 Missing data imputation
Now we consider the context in which the data are not fully observed and missing data must be imputed. Let $\mathcal{M}$ index entries in the full dataset that are not observed: $\mathcal{M} = \{(a,b) : \mathbf{X}_{\cdot\cdot}[a,b] \text{ is missing}\}$. We develop an Expectation-Maximization (EM) approach for imputation that is analogous to previous approaches described for hard-thresholding [11], soft-thresholding [15], and BIDIFAC+ [13]. The procedure for fixed estimates of the noise variance for each matrix, $\hat{\sigma}_{ij}^2$, is described in Algorithm 2.
Note that Step 4. in Algorithm 2 is a partial maximization step, as the free energy in (11) is conditionally minimized for each module given the current imputed values, while Step 5. is an expectation step replacing the missing data with their expected value (12) under the current model fit. Further, the algorithm can be shown to minimize the free energy over the missing data $\mathbf{X}_{\cdot\cdot}[\mathcal{M}]$; this result is given as Theorem 10, and is appealing because all unknowns (model hyperparameters, parameters, and missing entries) are then estimated via optimizing the same unified free energy objective.
A proof of Theorem 10 is provided in Appendix A.2. For the choice of $\hat{\sigma}_{ij}^2$, we suggest two different approaches, depending on the type of missingness. If entries are missing from entire rows or columns of a given data matrix (i.e., row- or column-wise missing), then $\hat{\sigma}_{ij}^2$ can be estimated by applying Theorem 7 to the complete submatrix of $\mathbf{X}_{ij}$. If the matrix is missing entries without entire rows or columns, then we update $\hat{\sigma}_{ij}^2$ iteratively during Algorithm 2 as follows at the end of each cycle: 1. apply Theorem 7 to the matrix $\mathbf{X}_{ij}$ with currently imputed values to obtain $\tilde{\sigma}_{ij}^2$, 2. update $\hat{\sigma}_{ij}^2$ by rescaling $\tilde{\sigma}_{ij}^2$ to account for the number of missing entries in $\mathbf{X}_{ij}$, which contribute no residual error.
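The EM structure described above is simple to sketch in R for a single matrix; here `fit_fn` is a generic low-rank fitting routine standing in for the module-wise EVB fit of Algorithm 1 (the function and argument names are illustrative, not from the released software):

```r
# EM-style imputation (sketch): alternate a low-rank fit on the completed
# matrix (partial M-step) with replacing missing entries by their fitted
# expected values (E-step).
impute_em <- function(X, fit_fn, n_iter = 100) {
  miss <- is.na(X)
  X[miss] <- mean(X, na.rm = TRUE)  # initialize missing entries
  S <- NULL
  for (iter in seq_len(n_iter)) {
    S <- fit_fn(X)       # partial M-step: refit low-rank structure
    X[miss] <- S[miss]   # E-step: expected values at missing entries
  }
  list(X_completed = X, S_hat = S)
}

# Example usage with the soft-thresholding fit sketched in Section 3:
# imp <- impute_em(X, function(Z) svt_soft(Z, sqrt(nrow(Z)) + sqrt(ncol(Z))))
```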
8 Simulations
Here we present a collection of simulations to illustrate and compare the proposed approach with respect to various criteria. We explore several conditions covering different signal-to-noise ratios, missing data scenarios, and linked matrix patterns with shared and specific structure. Further simulation results (e.g., exploring different matrix dimensions) are available at
https://ericfrazerlock.com/ev-bidifac-sims.html.
8.1 Single matrix
We first describe a simple simulation on a single matrix to illustrate the empirical variational Bayes approach to matrix factorization as a flexible compromise between a hard-thresholding and a conservative soft-thresholding approach. We consider a fully observed matrix $\mathbf{X}$, with $\mathbf{X} = c\,\mathbf{U}\mathbf{V}^T + \mathbf{E}$ where the entries of $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{E}$ are each independent standard normal, and $c$ is a manipulated value controlling the signal-to-noise ratio (s2n). We consider values of $c$ distributed on a log-uniform scale, and generate replications of the simulation for each value of $c$. For each generated dataset, we apply the following approaches:
- OPT: the oracle operator on the singular values when the true structure $\mathbf{S} = c\,\mathbf{U}\mathbf{V}^T$ is known: $\hat{\mathbf{S}}_{\text{OPT}} = \mathbf{U}_{\mathbf{X}} \tilde{\mathbf{D}} \mathbf{V}_{\mathbf{X}}^T$, where $\mathbf{U}_{\mathbf{X}}$ and $\mathbf{V}_{\mathbf{X}}$ contain the singular vectors of $\mathbf{X}$ and $\tilde{\mathbf{D}}$ is diagonal with entries that solve the OLS problem to minimize $\|\mathbf{S} - \hat{\mathbf{S}}_{\text{OPT}}\|_F^2$.
- EVB: the empirical variational Bayes approach described in Section 3, with $\sigma^2$ estimated as in Theorem 7.
- HT: the hard-thresholding approach described in Proposition 1, using the true rank.
- NN: the nuclear norm penalized (soft-thresholding) approach described in Proposition 2, with the conservative penalty $\lambda = \sqrt{m} + \sqrt{n}$ motivated by Proposition 5.
For each setting and for each method we compute the relative squared error (RSE) and the oracle-normalized squared error (ONSE):
$$\text{RSE} = \frac{\|\hat{\mathbf{S}} - \mathbf{S}\|_F^2}{\|\mathbf{S}\|_F^2} \quad \text{and} \quad \text{ONSE} = \frac{\|\hat{\mathbf{S}} - \mathbf{S}\|_F^2}{\|\hat{\mathbf{S}}_{\text{OPT}} - \mathbf{S}\|_F^2}. \tag{13}$$
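Both criteria are straightforward to compute; as plain R helpers (a sketch):

```r
# Relative squared error and oracle-normalized squared error, as in (13).
rse  <- function(S_hat, S) sum((S_hat - S)^2) / sum(S^2)
onse <- function(S_hat, S_opt, S) rse(S_hat, S) / rse(S_opt, S)
```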
Figure 1 shows the mean values for both RSE and ONSE under the different methods as a function of the signal-to-noise ratio $c$. The hard-thresholding approach tends to overfit the signal, which can lead to particularly poor recovery when the signal is smaller relative to the noise. In contrast, the nuclear norm penalty is appropriately conservative in that it never overfits and performs better for lower signal; however, it tends to over-shrink for higher signal and can have several-fold higher error than hard-thresholding. The EVB approach is a flexible compromise between the two methods, performing comparably to or better than both methods for all scenarios and substantially outperforming them for moderate levels of signal. Moreover, performance of the EVB method is impressively close to the oracle OPT in most scenarios.
Figure 1: Mean RSE and ONSE for each method as a function of the signal-to-noise ratio (single-matrix simulation).
We consider another scenario with low-rank structure in which the underlying components have heterogeneous levels of signal. First, we simulate factor matrices $\mathbf{U}$ and $\mathbf{V}$ whose entries are independent standard normal. Then, we generate the signal as

$$\mathbf{S} = \mathbf{U}\mathbf{D}\mathbf{V}^T, \tag{14}$$

where $\mathbf{D}$ is diagonal with entries sampled randomly from a log-uniform distribution, and set $\mathbf{X} = \mathbf{S} + \mathbf{E}$ with entries of $\mathbf{E}$ independent standard normal. Thus, some singular values of the true low-rank structure may correspond to high signal, others may be of moderate signal, and others may be of weak or undetectable signal. We generate replications in this case, and for each we apply the EVB, HT, and NN methods described above, as well as the following additional approaches:
- HT-OPT: the oracle hard-thresholding approach, in which the rank is selected that results in minimal squared error between the estimated signal and the true signal.
- NN-OPT: the oracle soft-thresholding approach, in which the nuclear norm penalty is selected that results in minimal squared error between the estimated signal and the true signal.
- RMT: the low-rank reconstruction approach proposed in Shabalin and Nobel [22], based on minimizing squared error loss using asymptotic random matrix theory, with the true error variance $\sigma^2 = 1$.
Boxplots showing the distribution of RSE and ONSE for recovering the underlying structure across replications are shown in Figure 2. The EVB and RMT methods have performance close to the oracle, and vastly outperform the hard-thresholding and nuclear norm penalized approaches. The relatively poor performance of HT-OPT suggests that singular values benefit from some shrinkage, while the poor performance of NN-OPT indicates that the level of shrinkage should not be uniform across singular values.
Figure 2: Boxplots of RSE and ONSE across replications for the heterogeneous-signal scenario.

Figure 3: Imputation error across levels of entry-wise missingness for a single matrix.
We also compare missing data imputation performance of the EVB approach using Algorithm 2. For each of the simulated datasets generated, we consider scenarios in which varying proportions of the entries are randomly held out as missing and imputed. In addition to EVB imputation, we apply similar EM-imputation methods for the soft-thresholding (corresponding to softImpute of Mazumder et al. [15]) and hard-thresholding [11] approaches. The RMT approach is not considered here, as it does not have an associated missing data imputation procedure. For the NN-OPT and HT-OPT approaches, we select the value of $\lambda$ or $r$, respectively, that minimizes the squared imputation error for the signal of the missing data:
$$\text{RSE} = \frac{\sum_{(a,b) \in \mathcal{M}} \left(\hat{\mathbf{S}}[a,b] - \mathbf{S}[a,b]\right)^2}{\sum_{(a,b) \in \mathcal{M}} \mathbf{S}[a,b]^2}. \tag{15}$$
For the oracle-normalized imputation error, we divide by the corresponding error for the OPT method, as in (13). The results are shown in Figure 3, and demonstrate that the EVB imputation approach is stable for different levels of missingness and substantially outperforms the other approaches.
8.2 Two linked matrices
Here we generate two matrices, $\mathbf{X}_1$ and $\mathbf{X}_2$, that are linked by columns: $\mathbf{X} = [\mathbf{X}_1^T \; \mathbf{X}_2^T]^T$. We consider this context primarily to allow comparison of the EV-BIDIFAC approach with several other methods that identify shared and unshared low-rank structure in this setting.
First, we perform an illustrative simulation akin to that in Figure 1 for a single matrix. We simulate shared structure $\mathbf{S} = \mathbf{U}\mathbf{V}^T$ and individual structures $\mathbf{S}_i = \mathbf{U}_i\mathbf{V}_i^T$ for $i = 1, 2$, where $\mathbf{U}$, $\mathbf{V}$, $\mathbf{U}_1$, $\mathbf{V}_1$, $\mathbf{U}_2$, and $\mathbf{V}_2$ each have independent entries from a standard normal distribution. The observed data are then generated as $\mathbf{X} = c\left(\mathbf{S} + \left[\mathbf{S}_1^T \; \mathbf{S}_2^T\right]^T\right) + \mathbf{E}$, where entries of $\mathbf{E}$ are independent standard normal. Thus $c$ again controls the signal-to-noise ratio, and we consider values of $c$ distributed on a log-uniform scale, generating replications for each value of $c$. Here we consider estimates for the following methods:
- EVB: the EV-BIDIFAC approach described in Algorithm 2.
- JIVE: the JIVE method implemented in O'Connell and Lock [19], with the true shared and unshared ranks.
- UNIFAC: the BIDIFAC method [20] for vertical integration, using the true noise variance $\sigma^2 = 1$.
Here JIVE is analogous to HT and UNIFAC to NN for a single matrix, as they involve iterative applications of HT or NN, respectively, to the shared and unshared components. For each method we compute RSE for the overall structure as in (13), and also compute the relative decomposition squared error (RDSE) as

$$\text{RDSE} = \frac{\|\hat{\mathbf{S}} - \mathbf{S}\|_F^2 + \sum_{i=1}^2 \|\hat{\mathbf{S}}_i - \mathbf{S}_i\|_F^2}{\|\mathbf{S}\|_F^2 + \sum_{i=1}^2 \|\mathbf{S}_i\|_F^2}.$$
The results are shown in Figure 4. The results for overall RSE are similar to that for a single matrix, with UNIFAC appropriately conservative for lower signal but substantially underperforming JIVE for higher signal due to over-shrinkage. However, per the RDSE results, UNIFAC can still identify the shared and unshared components more accurately with higher signal – this phenomenon was noted previously [13] and is in part due to orthogonality restrictions imposed by JIVE and related methods that are required for uniqueness of the decomposition without shrinkage. The EVB approach again acts as a flexible compromise between the two methods, and substantially outperforms both for many s2n scenarios, particularly with respect to RDSE.
Figure 4: Mean RSE and RDSE for two linked matrices as a function of the signal-to-noise ratio.
Now, we compare RSE and RDSE under a scenario with heterogeneous signal levels for the factors in each of the shared and unshared structures. The rank-5 signal matrices $\mathbf{S}$ and $\mathbf{S}_i$ are each generated as in (14), with singular values again sampled from a log-uniform distribution for both the shared and specific structures, and replications are generated for this scenario. In addition to the EVB, JIVE, and UNIFAC methods we also consider the following approaches:
- SLIDE: the structural learning and integrative decomposition (SLIDE) method [6], with the true number of shared and unshared components (i.e., the ranks) specified.
- HNN.OPT: the hierarchical nuclear norm (HNN) approach [29], with tuning parameters selected on a grid to minimize the actual RSE for the true signal.
Figure 5 gives the distribution of RSE and RDSE over the replications for the different methods. The EVB approach universally outperforms the other methods in terms of both RSE and RDSE, and the benefit is large (by a factor of 2 or more) for both criteria in several cases. While the HNN method is motivated by a model with shared and unshared low-rank structure, it does not provide an explicit decomposition of the structure and thus it is not shown for RDSE.
Figure 5: RSE and RDSE for the two-matrix scenario with heterogeneous signal levels.
8.3 Bidimensionally linked matrices
Now, we consider the full bidimensionally linked framework in (4), with $I = 2$ row sets and $J = 2$ column sets. Data were generated as

$$\mathbf{X}_{\cdot\cdot} = \sum_{k=1}^{9} \mathbf{S}^{(k)} + \mathbf{E},$$

where the entries of $\mathbf{E}$ are independent standard normal. There are $9$ possible distinct modules of low-rank structure in this setting: globally shared, shared across rows for either of the column sets, shared across columns for either of the row sets, or structure specific to any of the four matrices $\mathbf{X}_{ij}$. We generate datasets where for each replication a subset of the $9$ possible modules is randomly selected to have non-zero low-rank structure and the others are set to zero, $\mathbf{S}^{(k)} = \mathbf{0}$. All non-zero modules are generated to have low-rank structure on the given submatrix as in (14). For each simulated dataset, we apply the following methods:
- EB-BIDI: the approach in Algorithm 1, with $\mathbf{R}$ and $\mathbf{C}$ enumerating all possible modules.
- BIDIFAC: the BIDIFAC algorithm described in [13], with $\mathbf{R}$ and $\mathbf{C}$ enumerating all possible modules and using the true noise variance $\sigma^2 = 1$.
- EB-SEP: the EVB approach applied to each matrix $\mathbf{X}_{ij}$ separately.
- EB-JOINT: the EVB approach applied to the overall matrix $\mathbf{X}_{\cdot\cdot}$.
For each method, we compute the overall RSE for $\sum_{k} \mathbf{S}^{(k)}$ as in (13), and for EB-BIDI and BIDIFAC we compute the RDSE as

$$\text{RDSE} = \frac{\sum_{k=1}^{9} \|\hat{\mathbf{S}}^{(k)} - \mathbf{S}^{(k)}\|_F^2}{\sum_{k=1}^{9} \|\mathbf{S}^{(k)}\|_F^2}.$$
The results are summarized in Figure 6. Overall RSE is substantially better for EB-BIDI vs. the other methods; in comparison to EB-SEP and EB-JOINT, this illustrates the advantages of a parsimonious decomposition of the low-rank signal into the different shared and unshared components with respect to recovering the overall signal. Moreover, EB-BIDI has substantially better RDSE compared to BIDIFAC, demonstrating the advantages of the empirical variational Bayes objective with respect to accurately decomposing the structure into its different components. Further, EB-BIDI correctly estimated modules with no true structure ($\mathbf{S}^{(k)} = \mathbf{0}$) as $\hat{\mathbf{S}}^{(k)} = \mathbf{0}$ in all such instances; on the other hand, it estimated $\hat{\mathbf{S}}^{(k)} = \mathbf{0}$ in only a small minority of instances when $\mathbf{S}^{(k)} \neq \mathbf{0}$, which occurred when $\mathbf{S}^{(k)}$ was difficult to detect due to a small signal size. This demonstrates how the approach can correctly recover the underlying rank-sparsity structure, only estimating non-zero structure for a module if it truly has distinct low-rank structure on its given rows and columns.
Figure 6: RSE and RDSE for the bidimensionally linked simulation.
We extend this simulation design to consider missing data imputation accuracy for the generated datasets. We consider two different missingness scenarios: an entry-wise missing scenario in which 20% of the entries are randomly set to missing in each matrix, and a blockwise missing scenario in which 10% of the columns and 10% of the rows are randomly selected to have all of their entries missing in each matrix. In addition to imputation using the previous methods, we also consider the NN-OPT and HT-OPT methods as described in Section 8.1. For entry-wise imputation the RSE is computed as in (15). For blockwise imputation, the true underlying structure is taken to be the sum of column-shared structure for missing rows and the sum of row-shared structure for missing columns, as this is the only estimable signal when an entire row or column is missing. That is, letting $\mathbf{S}^{\text{r}}$ denote the sum of the modules that are shared across the column sets and $\mathbf{S}^{\text{c}}$ the sum of the modules that are shared across the row sets,

$$\text{RSE} = \frac{\sum_{(a,b) \in \mathcal{M}_r} \left(\hat{\mathbf{S}}[a,b] - \mathbf{S}^{\text{r}}[a,b]\right)^2 + \sum_{(a,b) \in \mathcal{M}_c} \left(\hat{\mathbf{S}}[a,b] - \mathbf{S}^{\text{c}}[a,b]\right)^2}{\sum_{(a,b) \in \mathcal{M}_r} \mathbf{S}^{\text{r}}[a,b]^2 + \sum_{(a,b) \in \mathcal{M}_c} \mathbf{S}^{\text{c}}[a,b]^2}, \tag{16}$$

where $\mathcal{M}_r$ indexes row-missing entries and $\mathcal{M}_c$ indexes column-missing entries. The results are shown in Figure 7. The EB-BIDI method substantially and consistently outperforms the other approaches with respect to missing data imputation, and the advantages are particularly large for blockwise missing imputation. Note that the EB-SEP approach does not estimate any structure for blockwise missing data, as such structure is not estimable when considering each matrix separately.
Figure 7: Imputation error for entry-wise and blockwise missing data in the bidimensionally linked simulation.
9 Application to BRCA Data
Here we describe an application to gene expression (mRNA) and microRNA (miRNA) data for breast cancer (BRCA) from The Cancer Genome Atlas (TCGA) [25]. The data were processed as described in Park and Lock [20], resulting in expression measurements of mRNAs and miRNAs for cancer tumor samples and non-cancerous breast tissue samples from unrelated individuals. Thus, the data can be represented as bidimensionally linked matrices as in (4), where $\mathbf{X}_{11}$ is mRNA data for cancer samples, $\mathbf{X}_{12}$ is mRNA data for normal samples, $\mathbf{X}_{21}$ is miRNA data for cancer samples, and $\mathbf{X}_{22}$ is miRNA data for normal samples. The decomposition of the data into different modules of shared structured variation is very well-motivated in this context, as low-rank structure on any of the $9$ possible modules is plausible and interpretable. For example, it is reasonable to expect shared structure across mRNAs and miRNAs, as miRNAs can regulate mRNA. It is also reasonable to expect some shared structure across tumor and normal samples, as breast tumors arise from normal tissue and so they likely share some patterns of molecular variation. However, it is also reasonable to expect specific structure in each case; e.g., patterns of molecular variability that are 'cancer-specific' (i.e., not present in normal tissue) are of particular interest. The BIDIFAC approach decomposes such shared and unique structure across both row sets (mRNA/miRNA) and column sets (tumor/normal), while other methods do not. Moreover, as demonstrated in our simulations, the empirical variational (EV) Bayes approach captures underlying low-rank structure more efficiently and accurately than other low-rank approximation methods. Thus, we expect our EV-BIDIFAC approach to decompose the underlying shared and unshared signals in an informative way, and to recover the underlying signal in the full dataset more accurately than alternative methods.
Figure 8: Heatmaps of the observed BRCA data (with missing columns) and the overall low-rank fit from EV-BIDIFAC.
We applied EV-BIDIFAC wherein $\mathbf{R}$ and $\mathbf{C}$ enumerate all possible modules, and significant distinct low-rank structure was identified for each module ($\hat{\mathbf{S}}^{(k)} \neq \mathbf{0}$ for all $k$). Figure 8 shows heatmaps of the original data (including missing data for miRNA and mRNA) and the resulting overall fit $\hat{\mathbf{S}}$. While all estimated structures are low-rank, the fit captures the original data well and imputation of the missing columns appears reasonable. Considering the relative amount of variation present in the different modules reveals the extent of shared variation across mRNA and miRNA, and across tumor and normal cells. Interestingly, only 1.47% of the low-rank signal is in the module shared globally across all submatrices, while 59.8% of the signal is shared across mRNA and miRNA but specific to tumors. This suggests that the substantial coordinated activity between mRNA and miRNA in breast tumors is cancer-specific and not present in the normal cells from which they derive.
We further compare imputation accuracy for data under entry-wise, column-wise, or row-wise missingness under a cross-validation scheme. The accuracy of entry-wise imputation is of interest because it is a good proxy for the accuracy of the overall fit of the estimated structure. Column-wise imputation accuracy is of interest for two reasons: (1) the actual data have column-wise missingness, as not all samples have both mRNA and miRNA data in the full TCGA cohort, and (2) it provides a robust measure of the level of association between the two data sources, addressing how well we can predict gene expression with miRNA data, and vice versa. Row-wise imputation accuracy is also of interest, as it provides a measure of similarity in molecular structure between cancer and normal samples (e.g., how well we can predict levels of a missing gene in cancer samples, given its associations in normal samples).
We create 20 folds in the data, where for each fold 5% of the columns, 5% of the rows, and 5% of the remaining entries in each matrix are randomly withheld as missing. For each fold, the missing data are imputed using the EV-BIDIFAC, BIDIFAC, EB-SEP, or EB-JOINT approaches described in Section 8.3. We further consider nuclear norm penalized imputation (NN) and hard-thresholding (HT) using softImpute [7], jointly for the full matrix (NN-JOINT and HT-JOINT) or separately for each submatrix (NN-SEP and HT-SEP). In each case the nuclear norm penalty is selected via random matrix theory, and the rank for hard-thresholding is determined via cross-validation by optimizing MSE for an additional 5% of randomly withheld entries. As an alternative to low-rank methods we consider k-nearest neighbor (KNN) imputation using the package impute [8] on the full matrix. Lastly, we consider a structured matrix completion approach [2] designed to impute a contiguous missing submatrix under a low-rank approximation using the StructureMC package [14]. As StructureMC imputes only one submatrix, it does not apply to randomly missing entries, and we hold out one set of rows or columns at a time to impute from the full matrix, with all other entries observed. In each case, the relative squared error for the true observed data is computed and averaged across the four datasets, yielding the average relative squared error ($\overline{\text{RSE}}$).
The $\overline{\text{RSE}}$ under different imputation scenarios are summarized in Table 1. The EV-BIDIFAC approach performs relatively well across all scenarios. For entry-wise missing data EV-BIDIFAC performs much better than BIDIFAC (likely due to over-shrinkage of the signal for BIDIFAC) and similarly to EB-SEP, indicating that this structure can be well-estimated by considering each dataset separately; however, separate estimation does not allow for row- or column-wise imputation and does not estimate the shared variation that is of interest. The improvement for row-missing data is relatively modest ($\overline{\text{RSE}} = 0.903$) compared to entry-wise missing ($\overline{\text{RSE}} = 0.389$), indicating that there is little shared information between tumor and normal cells. The joint approach EB-JOINT performs relatively poorly across all scenarios, as it tends to overfit shared variation when there is substantial specific variation across platforms (mRNA and miRNA) and cohorts (tumor and normal); this illustrates the benefits of a comprehensive decomposition of shared and specific variation across the constituent matrices.
Table 1: Average relative squared error ($\overline{\text{RSE}}$) for imputing entry-wise, column-wise, and row-wise missing data in the BRCA application.

| Method | Entry-missing | Column-missing | Row-missing | Overall |
|---|---|---|---|---|
| EV-BIDIFAC | 0.389 | 0.756 | 0.903 | 0.682 |
| BIDIFAC | 0.526 | 0.870 | 0.909 | 0.768 |
| EB-SEP | 0.381 | 1.00 | 1.00 | 0.79 |
| EB-JOINT | 0.510 | 1.01 | 1.67 | 1.06 |
| NN-SEP | 0.536 | 1.00 | 1.00 | 0.845 |
| NN-JOINT | 0.614 | 0.891 | 0.881 | 0.796 |
| HT-SEP | 0.459 | 1.00 | 1.03 | 0.831 |
| HT-JOINT | 0.520 | 4.50 | 13.1 | 6.05 |
| KNN | 0.646 | 1.26 | 1.01 | 0.974 |
| StructureMC | - | 0.977 | 1.01 | - |
10 Conclusion and discussion
The proposed EV-BIDIFAC approach can be considered as a flexible compromise between hard- and soft-thresholding techniques for the linked matrix decomposition problem, and has impressive performance with respect to (1) estimating underlying low-rank structure, (2) decomposing underlying structure into its shared, partially-shared, and specific components, and (3) missing data imputation under a variety of scenarios. An appealing aspect of the method is that it derives from a unified and intuitive model with a corresponding objective function that does not require any pre-hoc choices or tuning parameters. In simulation comparisons the proposed approach tended to outperform other methods even when their parameters were set to the true values (e.g., true rank(s) or true error variance) or ideal values (e.g., to minimize error with knowledge of the true structure).
While we have focused on obtaining a point estimate for underlying structure that minimizes free energy, this could also be used to empirically estimate hyperparameters and initialize a sampling algorithm for the associated fully Bayesian model. This would capture uncertainty in the underlying factorization, which can be used for multiple imputation of missing values or to propagate uncertainty if the lower-dimensional components are used in subsequent statistical modeling. A related Bayesian model is described in Samorodnitsky et al. [24], which also illustrates the importance of uncertainty propagation.
Often, data take the form of higher-order tensors instead of matrices, and low-rank tensor factorizations are useful to uncover their underlying structure [10]. The models considered here may be extended to the tensor setting, e.g., by allowing the factors for each dimension of a tensor to have a distribution analogous to that for $\mathbf{U}$ and $\mathbf{V}$ in the matrix setting. However, the algorithms are not straightforward to extend, as there is likely no closed-form solution for a tensor akin to that in Theorem 6 for a matrix. Extending this approach to the setting of a single tensor or linked tensors is an interesting direction for future work.
Supplementary materials
Reproducible R scripts for all results presented in this manuscript are available in a supplementary zipped folder. Software to perform the EV-BIDIFAC method is available at https://github.com/lockEF/bidifac. Additional simulation study results are available at https://ericfrazerlock.com/ev-bidifac-sims.html.
Acknowledgement
This work was supported by the NIH National Institute of General Medical Sciences (NIGMS) grant R01-GM130622.
Appendix A Proofs
A.1 Proof of Theorem 8
The proof for Theorem 8 follows a similar structure to that for Theorem 1 in Lock et al. [13]. We first establish four lemmas, Lemmas 11, 12, 13, and 14, which are useful to prove the main result.
Lemma 11.
Take two decompositions $\{\mathbf{S}^{(k)}\}_{k=1}^K$ and $\{\tilde{\mathbf{S}}^{(k)}\}_{k=1}^K$ in $\mathbb{S}$, and assume that both minimize the structured penalty for given weights $\{\mathbf{w}_k\}_{k=1}^K$:

$$P\left(\{\mathbf{S}^{(k)}\}_{k=1}^K\right) = P\left(\{\tilde{\mathbf{S}}^{(k)}\}_{k=1}^K\right) = \min_{\mathbb{S}} P(\cdot).$$

Then, for any $\alpha \in [0,1]$,

$$\left\|\alpha \mathbf{S}^{(k)} + (1-\alpha)\tilde{\mathbf{S}}^{(k)}\right\|_{*,\mathbf{w}_k} = \alpha\left\|\mathbf{S}^{(k)}\right\|_{*,\mathbf{w}_k} + (1-\alpha)\left\|\tilde{\mathbf{S}}^{(k)}\right\|_{*,\mathbf{w}_k}$$

for $k = 1, \ldots, K$.
Proof.
Because $\mathbb{S}$ is a convex space and $P(\cdot)$ is a convex function, the set of minimizers of $P(\cdot)$ over $\mathbb{S}$ is also convex. Thus,

$$P\left(\{\alpha\mathbf{S}^{(k)} + (1-\alpha)\tilde{\mathbf{S}}^{(k)}\}_{k=1}^K\right) = P\left(\{\mathbf{S}^{(k)}\}_{k=1}^K\right).$$

The result follows from the convexity of $\|\cdot\|_{*,\mathbf{w}}$, which implies that for any two matrices of equal size $\mathbf{A}$ and $\mathbf{B}$,

$$\|\alpha\mathbf{A} + (1-\alpha)\mathbf{B}\|_{*,\mathbf{w}} \le \alpha\|\mathbf{A}\|_{*,\mathbf{w}} + (1-\alpha)\|\mathbf{B}\|_{*,\mathbf{w}}. \tag{17}$$

Applying (17) to each additive term in $P(\cdot)$ gives

$$P\left(\{\alpha\mathbf{S}^{(k)} + (1-\alpha)\tilde{\mathbf{S}}^{(k)}\}_{k=1}^K\right) \le \alpha P\left(\{\mathbf{S}^{(k)}\}_{k=1}^K\right) + (1-\alpha) P\left(\{\tilde{\mathbf{S}}^{(k)}\}_{k=1}^K\right). \tag{18}$$

Because both decompositions attain the minimum, the inequality in (18) must be an equality, and it follows that the inequality (17) must be an equality for each penalized term in the decomposition. ∎
Lemma 12.
Take two matrices $\mathbf{A}$ and $\mathbf{B}$ of equal size. If $\|\mathbf{A} + \mathbf{B}\|_* = \|\mathbf{A}\|_* + \|\mathbf{B}\|_*$, and $\mathbf{U}\mathbf{D}\mathbf{V}^T$ is the SVD of $\mathbf{A} + \mathbf{B}$, then $\mathbf{A} = \mathbf{U}\mathbf{D}_A\mathbf{V}^T$ where $\mathbf{D}_A$ is diagonal and $\mathbf{D}_A[i,i] \ge 0$, and $\mathbf{B} = \mathbf{U}\mathbf{D}_B\mathbf{V}^T$ where $\mathbf{D}_B$ is diagonal and $\mathbf{D}_B[i,i] \ge 0$.
Proof.
This result is proved in Lemma 2 of Lock et al. [13]. ∎
Lemma 13.
Take two matrices $\mathbf{A}$ and $\mathbf{B}$ of equal size. If $\|\mathbf{A} + \mathbf{B}\|_{*,\mathbf{w}} = \|\mathbf{A}\|_{*,\mathbf{w}} + \|\mathbf{B}\|_{*,\mathbf{w}}$, and $\mathbf{U}\mathbf{D}\mathbf{V}^T$ is the SVD of $\mathbf{A} + \mathbf{B}$, then $\mathbf{A} = \mathbf{U}\mathbf{D}_A\mathbf{V}^T$ where $\mathbf{D}_A$ is diagonal and $\mathbf{D}_A[i,i] \ge 0$, and $\mathbf{B} = \mathbf{U}\mathbf{D}_B\mathbf{V}^T$ where $\mathbf{D}_B$ is diagonal and $\mathbf{D}_B[i,i] \ge 0$.
Proof.
The result follows from repeated applications of Lemma 12 to the rank-1 terms of $\mathbf{A}$ and $\mathbf{B}$. Note that $\|\mathbf{A} + \mathbf{B}\|_{*,\mathbf{w}} = \|\mathbf{A}\|_{*,\mathbf{w}} + \|\mathbf{B}\|_{*,\mathbf{w}}$ implies

$$\sum_i \mathbf{w}[i]\,\sigma_i(\mathbf{A} + \mathbf{B}) = \sum_i \mathbf{w}[i]\,\sigma_i(\mathbf{A}) + \sum_i \mathbf{w}[i]\,\sigma_i(\mathbf{B}),$$

which requires that

$$\sigma_i(\mathbf{A} + \mathbf{B}) = \sigma_i(\mathbf{A}) + \sigma_i(\mathbf{B})$$

for each $i$. ∎
Lemma 14.
Take two decompositions $\{\mathbf{S}^{(k)}\}_{k=1}^K$ and $\{\tilde{\mathbf{S}}^{(k)}\}_{k=1}^K$ in $\mathbb{S}$, and assume that both minimize $P(\cdot)$ over $\mathbb{S}$. Then, $\mathbf{S}^{(k)} = \mathbf{U}_k\mathbf{D}_k\mathbf{V}_k^T$ and $\tilde{\mathbf{S}}^{(k)} = \mathbf{U}_k\tilde{\mathbf{D}}_k\mathbf{V}_k^T$, where $\mathbf{U}_k$ and $\mathbf{V}_k$ have orthonormal columns, and $\mathbf{D}_k$ and $\tilde{\mathbf{D}}_k$ are diagonal.
Proof. The result follows from combining Lemmas 11 and 13. ∎
Theorem 8 is then established as follows:
Proof.
Take two decompositions $\{\mathbf{S}^{(k)}\}_{k=1}^K$ and $\{\tilde{\mathbf{S}}^{(k)}\}_{k=1}^K$ that satisfy properties 1., 2., and 3. of Theorem 8; we will show that they are identical. For each $k$, write $\mathbf{S}^{(k)}$ and $\tilde{\mathbf{S}}^{(k)}$ as in Lemma 14. Then, it suffices to show that $\mathbf{D}_k = \tilde{\mathbf{D}}_k$ for all $k$.
First, consider module with and . By way of contradiction, assume and . The linear independence of and implies that
Thus, , and it follows from the orthogonality of and that
Moreover, because for any and are linearly independent it follows that
(19) |
Note that (19) implies , however, this is contradicted by the linear independence of and . Thus, we conclude that implies . Analogous arguments show that if and only if for any pair . It follows that are linearly independent for , and are linearly independent for . Thus,
implies that for all .
∎
A.2 Proof of Theorem 10
It is clear that the partial 'M-steps' in Algorithm 2 each partially optimize the free energy. It remains to be shown that the missing entries minimize the free energy when they are replaced with their conditional expectation under the model. Note that the free energy (11) can be expressed as

$$F(q) = \mathbb{E}_q\left[\log \frac{\prod_{k=1}^K q(\mathbf{U}_k)\,q(\mathbf{V}_k)}{\prod_{k=1}^K p(\mathbf{U}_k)\,p(\mathbf{V}_k)}\right] - \mathbb{E}_q\left[\log p\left(\mathbf{X}_{\cdot\cdot} \mid \{\mathbf{U}_k, \mathbf{V}_k\}_{k=1}^K\right)\right].$$

Consider the last term in the expression, decomposing the missing and non-missing entries of $\mathbf{X}_{\cdot\cdot}$:

$$\mathbb{E}_q\left[\log p\left(\mathbf{X}_{\cdot\cdot} \mid \cdot\right)\right] = \mathbb{E}_q\Bigg[\sum_{(a,b) \notin \mathcal{M}} \log p\left(\mathbf{X}_{\cdot\cdot}[a,b] \mid \cdot\right)\Bigg] + \mathbb{E}_q\Bigg[\sum_{(a,b) \in \mathcal{M}} \log p\left(\mathbf{X}_{\cdot\cdot}[a,b] \mid \cdot\right)\Bigg]. \tag{20}$$

Missing data are random variables that are independent and normally distributed given $\{\mathbf{U}_k, \mathbf{V}_k\}_{k=1}^K$ as in (10). Thus, the last term in (20) can be expressed as follows:

$$\mathbb{E}_q\Bigg[\sum_{(a,b) \in \mathcal{M}} \log p\left(\mathbf{X}_{\cdot\cdot}[a,b] \mid \cdot\right)\Bigg] = -\mathbb{E}_q\Bigg[\sum_{(a,b) \in \mathcal{M}} \frac{1}{2\sigma_{ab}^2}\left(\mathbf{X}_{\cdot\cdot}[a,b] - \Big(\textstyle\sum_{k=1}^K \mathbf{U}_k\mathbf{V}_k^T\Big)[a,b]\right)^2\Bigg] + \text{const},$$

where $\sigma_{ab}^2$ denotes the error variance for the matrix containing entry $(a,b)$. Thus, for any given $q$, the free energy is minimized at $\mathbf{X}_{\cdot\cdot}[a,b] = \mathbb{E}_q\left[\sum_{k=1}^K \mathbf{U}_k\mathbf{V}_k^T\right][a,b] = \hat{\mathbf{S}}[a,b]$ for $(a,b) \in \mathcal{M}$. ∎
References
- Attias [1999] Attias, H.: Inferring parameters and structure of latent variable models by variational Bayes. In: Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pp. 21–30. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1999)
- Cai et al. [2016] Cai, T., Cai, T.T., Zhang, A.: Structured matrix completion with applications to genomic data integration. Journal of the American Statistical Association 111(514), 621–633 (2016)
- Feng et al. [2018] Feng, Q., Jiang, M., Hannig, J., Marron, J.: Angle-based joint and individual variation explained. Journal of Multivariate Analysis 166, 241–265 (2018)
- Fox and Roberts [2012] Fox, C.W., Roberts, S.J.: A tutorial on variational Bayesian inference. Artificial intelligence review 38, 85–95 (2012)
- Gavish and Donoho [2017] Gavish, M., Donoho, D.L.: Optimal shrinkage of singular values. IEEE Transactions on Information Theory 63(4), 2137–2152 (2017)
- Gaynanova and Li [2019] Gaynanova, I., Li, G.: Structural learning and integrative decomposition of multi-view data. Biometrics 75(4), 1121–1132 (2019)
- Hastie and Mazumder [2021] Hastie, T., Mazumder, R.: softImpute: Matrix Completion Via Iterative Soft-Thresholded SVD. (2021). R package version 1.4-1. https://CRAN.R-project.org/package=softImpute
- Hastie et al. [2021] Hastie, T., Tibshirani, R., Narasimhan, B., Chu, G.: impute: Imputation for Microarray Data. (2021). R package version 1.68.0
- Josse and Sardy [2016] Josse, J., Sardy, S.: Adaptive shrinkage of singular values. Statistics and Computing 26, 715–724 (2016)
- Kolda and Bader [2009] Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Review 51(3), 455–500 (2009)
- Kurucz et al. [2007] Kurucz, M., Benczúr, A.A., Csalogány, K.: Methods for large scale SVD with missing values. In: Proceedings of KDD Cup and Workshop, vol. 12, pp. 31–38 (2007)
- Lock et al. [2013] Lock, E.F., Hoadley, K., Marron, J.S., Nobel, A.B.: Joint and Individual Variation Explained (JIVE) for integrated analysis of multiple data types. The Annals of Applied Statistics 7(1), 523 (2013)
- Lock et al. [2022] Lock, E.F., Park, J.Y., Hoadley, K.: Bidimensional linked matrix factorization for pan-omics pan-cancer analysis. Annals of Applied Statistics 16(1), 193–215 (2022)
- Liu and Zhang [2019] Liu, Y., Zhang, A.: StructureMC: Structured Matrix Completion. (2019). R package version 1.0. https://CRAN.R-project.org/package=StructureMC
- Mazumder et al. [2010] Mazumder, R., Hastie, T., Tibshirani, R.: Spectral regularization algorithms for learning large incomplete matrices. The Journal of Machine Learning Research 11, 2287–2322 (2010)
- Nakajima and Sugiyama [2014] Nakajima, S., Sugiyama, M.: Analysis of empirical map and empirical partially bayes: Can they be alternatives to variational Bayes? In: Artificial Intelligence and Statistics, pp. 20–28 (2014). PMLR
- Nakajima et al. [2013] Nakajima, S., Sugiyama, M., Babacan, S.D., Tomioka, R.: Global analytic solution of fully-observed variational Bayesian matrix factorization. Journal of Machine Learning Research 14(1), 1–37 (2013)
- Nakajima et al. [2012] Nakajima, S., Tomioka, R., Sugiyama, M., Babacan, S.: Perfect dimensionality recovery by variational Bayesian PCA. In: Pereira, F., Burges, C.J., Bottou, L., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems, vol. 25 (2012)
- O’Connell and Lock [2016] O’Connell, M.J., Lock, E.F.: R.JIVE for exploration of multi-source molecular data. Bioinformatics 32(18), 2877–2879 (2016)
- Park and Lock [2020] Park, J.Y., Lock, E.F.: Integrative factorization of bidimensionally linked matrices. Biometrics 76(1), 61–74 (2020)
- Rudelson and Vershynin [2010] Rudelson, M., Vershynin, R.: Non-asymptotic theory of random matrices: extreme singular values. In: Proceedings of the International Congress of Mathematicians 2010 (ICM 2010) (In 4 Volumes) Vol. I: Plenary Lectures and Ceremonies Vols. II–IV: Invited Lectures, pp. 1576–1602 (2010). World Scientific
- Shabalin and Nobel [2013] Shabalin, A.A., Nobel, A.B.: Reconstruction of a low-rank matrix in the presence of Gaussian noise. Journal of Multivariate Analysis 118, 67–76 (2013)
- Schouteden et al. [2014] Schouteden, M., Van Deun, K., Wilderjans, T.F., Van Mechelen, I.: Performing DISCO-SCA to search for distinctive and common information in linked data. Behavior Research Methods 46(2), 576–587 (2014)
- Samorodnitsky et al. [2024] Samorodnitsky, S., Wendt, C.H., Lock, E.F.: Bayesian simultaneous factorization and prediction using multi-omic data. Computational Statistics & Data Analysis 197, 107974 (2024)
- TCGA Research Network et al. [2012] TCGA Research Network, et al.: Comprehensive molecular portraits of human breast tumors. Nature 490(7418), 61 (2012)
- Tang and Allen [2021] Tang, T.M., Allen, G.I.: Integrated principal components analysis. Journal of Machine Learning Research 22(198), 1–71 (2021)
- Wang and Stephens [2021] Wang, W., Stephens, M.: Empirical Bayes matrix factorization. Journal of Machine Learning Research 22(120), 1–40 (2021)
- Yang and Michailidis [2016] Yang, Z., Michailidis, G.: A non-negative matrix factorization method for detecting modules in heterogeneous omics multi-modal data. Bioinformatics 32(1), 1–8 (2016)
- Yi et al. [2023] Yi, S., Wong, R.K.W., Gaynanova, I.: Hierarchical nuclear norm penalization for multi-view data integration. Biometrics (2023)