Response to Reviewers’ Comments
We thank the anonymous reviewers for their helpful comments on the manuscript. Those comments helped us to improve our paper. We list our answers to the reviewers’ comments below.
1 Response to Reviewer #1
- Comment 1. Algorithm 4 might need more details. In particular, it is not clear how to delete frequencies in (33). The paper mentions that the top frequencies of the power spectral density for each row are preserved. I have not been able to find how this number is chosen. It is not clear in the numerical experiments either.
Response: We thank the reviewer for this comment. In the revision, we added a variable denoting the number of remaining frequencies, together with Equations (33) and (34) in Algorithm 4, to show how frequencies are deleted. Furthermore, below Algorithm 4, we elaborated on the process of removing frequencies.
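To illustrate the kind of operation described above, here is a minimal sketch (with assumed function and variable names, not the paper's exact implementation) of keeping, for each row, the k frequencies with the largest power spectral density and zeroing out the rest:

```python
import numpy as np

def keep_top_k_frequencies(H, k):
    # Sketch only: `H` is a (rows x time) array and `k` is the number of
    # frequencies to preserve per row; both names are illustrative.
    H_hat = np.fft.rfft(H, axis=1)              # per-row spectrum
    power = np.abs(H_hat) ** 2                  # power spectral density per row
    out = np.zeros_like(H_hat)
    for i in range(H.shape[0]):
        top = np.argsort(power[i])[-k:]         # indices of the k largest PSD entries
        out[i, top] = H_hat[i, top]             # preserve only those frequencies
    return np.fft.irfft(out, n=H.shape[1], axis=1)
```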
- Comment 2. In both Theorem 1 and Theorem 2, the results concern convergence to stationary points. The optimization model (7) is constrained. Could the limit points be on the boundary?
Response: Yes. There is no restriction that prevents limit points from lying on the boundary. Consider a simple example in which the factor matrices are 1 by 1; the limit point below is a stationary point in the sense of the definition in [tseng2001convergence]. Now consider the iterations in Algorithm 1 for this example with the corresponding initialization and step size:
(Equations (1)–(4) display the explicit block coordinate updates for this example.)
From these iterations, we can check that the sequence converges to the stationary point, which lies on the boundary of the constraint set.
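As an additional, purely illustrative toy (not the example from the revision, whose loss and initialization are given there), the following sketch shows block coordinate updates whose limit point lies on the boundary of a constraint set; every function and value here is hypothetical:

```python
# Hypothetical 1-by-1 toy: minimize f(w, h) = (w - 2)^2 + (h + 1)^2 subject to h >= 0.
# The unconstrained minimizer is (2, -1), so the constrained minimizer (2, 0) lies on
# the boundary h = 0, and BCD with exact block minimization converges to it.
def f(w, h):
    return (w - 2.0) ** 2 + (h + 1.0) ** 2

w, h = 5.0, 3.0                      # arbitrary feasible starting point
for _ in range(10):
    w = 2.0                          # exact minimization over w (unconstrained block)
    h = max(0.0, -1.0)               # exact minimization over h restricted to h >= 0
print(w, h, f(w, h))                 # 2.0 0.0 1.0 -> the limit point is on the boundary
```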
- Comment 3. In the proof of Theorem 1, more details on verifying the conditions of Lemma 1 might be needed. For instance, how is the hemivariate condition satisfied?
Response: We noticed that our proof had a minor error. While the hemivariate condition is generically satisfied, it does not hold in all cases: one can choose the factors so that the Frobenius-norm term is constant along a line segment in the domain, in which case the corresponding block subproblem does not have a unique minimum. Therefore, we need to impose additional conditions on the loss. The hemivariate condition is introduced to ensure that the loss function has at most one minimum in each block coordinate (cf. if a function is quasiconvex and hemivariate with respect to a block, then it has at most one minimum; p. 483 in [tseng2001convergence]). To avoid this issue, we can add a (small) ℓ2-regularization term to the loss. More precisely, for every positive regularization constant, the corresponding quadratic form becomes positive definite, so the block subproblem is strictly convex [grasmair2016basic]; therefore, it has at most one global minimum.
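To make this explicit, here is a sketch of the standard argument, assuming the block subproblem in one factor has the usual regularized quadratic form; the symbols X, W, H, and λ below are illustrative and need not match the paper's notation:

```latex
% Sketch (assumes amsmath/amssymb): for a fixed factor W, suppose the block
% subproblem in H is the regularized least-squares objective
\[
  g(H) \;=\; \|X - W H\|_F^2 \;+\; \lambda \|H\|_F^2, \qquad \lambda > 0.
\]
% Its Hessian with respect to vec(H) is
\[
  \nabla^2_{\mathrm{vec}(H)}\, g
    \;=\; 2\, I \otimes \bigl(W^{\top} W + \lambda I\bigr)
    \;\succeq\; 2\lambda I \;\succ\; 0,
\]
% so g is strictly convex and has at most one (global) minimizer; in particular,
% it is not constant on any line segment, which restores the hemivariate property.
```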
Revision of Lemma 1 and of the BCD-convergence part of Theorems 1 and 2
Prompted by Reviewer #1's Comment 3, we noticed that the proofs of BCD convergence in Theorem 1 and Theorem 2 had an error. It can be resolved by introducing (small) ℓ2-regularization terms for the factor matrices. Furthermore, we realized that we had proved that the sequence converges to a coordinatewise minimum (a Nash equilibrium), not to a stationary point. In fact, BCD for nonsmooth block optimization does not always converge to a stationary point (see, e.g., [Auslender1976OptimisationM, p. 94]).
To address these problems, we have revised Lemma 1. We now cite other statements from [tseng2001convergence], the same paper referenced in the original Lemma 1. The propositions in [tseng2001convergence] may look different from our new Lemma 1 at first glance. For example, [tseng2001convergence] states its results under the essentially cyclic rule; in our case, we update the blocks in a fixed cyclic order rather than randomly, which satisfies the essentially cyclic rule. Additionally, our notation for counting iterations differs from that of [tseng2001convergence]: our single step consists of updating the blocks in sequence, whereas [tseng2001convergence] counts this as three separate steps. For example, after our first step, the blocks have been updated at steps 1, 2, and 3, respectively, in the counting of [tseng2001convergence]. In summary, our Lemma 1 is mathematically consistent with the results presented in [tseng2001convergence].
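To make the step-counting correspondence concrete, here is a minimal sketch (hypothetical names, assuming three blocks updated per outer iteration, as described above) of how one of our steps maps to three steps in the counting of [tseng2001convergence]:

```python
# One of our outer iterations updates the blocks in a fixed cyclic order,
# which the counting in [tseng2001convergence] treats as separate steps.
def run_bcd(blocks, update_block, num_outer_iterations):
    tseng_step = 0
    for _ in range(num_outer_iterations):        # one "step" in our notation
        for i in range(len(blocks)):             # fixed cyclic order over the blocks,
            blocks[i] = update_block(i, blocks)  # so the essentially cyclic rule holds
            tseng_step += 1                      # one step in Tseng's counting
    return blocks, tseng_step                    # after k outer iterations: 3k Tseng steps
```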
For the reviewer’s convenience, we compare Lemma 1 in the initial submission and in the revision.
Old version of Lemma 1
Let . Suppose that the following conditions are satisfied:

1. is continuous.
2. is quasiconvex and hemivariate (not constant on any line segment in the domain).
3. are lower semicontinuous.
4. for some .

Then, for , either as , or every cluster point is a coordinatewise minimum point of .
New version of Lemma 1
Let . Suppose that the following conditions are satisfied:

1. The domain is open, and the directional derivative exists in every direction on it.
2. The set is compact, and is continuous on it.
3. The mapping has at most one minimum for .

Then, for , every cluster point is a stationary point of .
Remark related to convergence to a stationary point: In our loss function, the term with the Frobenius norm is continuously differentiable; in other words, our loss function satisfies a stronger condition than the one required in the original Lemma 1. Continuous differentiability is used to show that our loss function, i.e., the constrained minimization problem (5) with either of the following frequency regularizers, satisfies condition 1 in the new version of Lemma 1:

1. (Soft frequency regularization) a Minkowski 1-norm penalty on the frequency coefficients, or
2. (Hard frequency regularization) a penalty equal to 0 if only a prespecified set of frequencies is used and +∞ otherwise.

A minimal sketch of these two regularizers is given below.
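For the reviewer's convenience, here is a small sketch (hypothetical names; the exact form in the paper may differ) of how the soft and hard frequency regularizers could be evaluated on the per-row spectrum of a factor:

```python
import numpy as np

def soft_frequency_penalty(H, lam):
    # Soft regularizer sketch: an l1 (Minkowski 1-norm) penalty on the
    # per-row frequency coefficients, scaled by a hypothetical constant `lam`.
    H_hat = np.fft.rfft(H, axis=1)
    return lam * np.abs(H_hat).sum()

def hard_frequency_penalty(H, allowed):
    # Hard regularizer sketch: 0 if every row uses only the prespecified
    # frequency indices in `allowed`, and +inf otherwise.
    H_hat = np.fft.rfft(H, axis=1)
    mask = np.ones(H_hat.shape[1], dtype=bool)
    mask[list(allowed)] = False                  # frequencies that must be (near) zero
    return 0.0 if np.allclose(H_hat[:, mask], 0.0) else np.inf
```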
Remark related to the hemivariate property: We consider arbitrarily fixed positive ℓ2-regularization terms for the factor matrices. This condition is used to establish conditions 2 and 3 in the new version of Lemma 1: for any positive regularization constant, the loss function becomes strictly convex in each block coordinate, so it satisfies condition 3, and condition 2 can be verified easily using the regularization terms. In our experiments, we did not include the ℓ2-regularization term, but this can be regarded as setting the regularization constant approximately to zero.
Finally, we have made slight adjustments to the BCD-convergence parts of Theorems 1 and 2 to ensure that they satisfy the conditions of the new version of Lemma 1.
2 Response to Reviewer #2
- Comment 1. In this manuscript, ‘Supervised low-rank semi-nonnegative matrix factorization with frequency regularization for forecasting spatio-temporal data’, the authors introduce novel methods based on Supervised low-rank Semi-Nonnegative Matrix Factorization (SSNMF). The authors propose two different optimization procedures, soft and hard regularizations, to achieve semi-nonnegative matrix factorization. Additionally, the authors prove convergence guarantees to first-order stationary points of the corresponding constrained optimization problem. They demonstrate and compare their methods on multiple different datasets and provide very detailed analysis and explanations of these numerical experiments. The paper is well written and formulated, with great analysis, numerical results, and explanations. This manuscript should be considered for publication.
Response: We thank Reviewer #2 for the positive feedback.
3 Additional changes
1. In the paper, the iteration index has been changed to avoid confusion with the indices used for frequency and time.
2. We have placed Definition 3 immediately before Lemma 1. This adjustment was made because, after revising Lemma 1, we thought it would be beneficial to introduce Definition 3 earlier.
3. In Remark 3, we have updated the notation of the loss function for consistency with the earlier notation.
4. In Equation (37), we corrected a typo in the coefficient.