Statistical Proxy based Mean-Reverting Portfolios with Sparsity and Volatility Constraints
Abstract
Mean-reverting portfolios with volatility and sparsity constraints are of prime interest to practitioners in finance, since they are both profitable and well-diversified while also managing risk and minimizing transaction costs. Three main measures that serve as statistical proxies to capture the mean-reversion property are predictability, the portmanteau criterion, and crossing statistics. If, in addition, reasonable volatility and sparsity are desired for the portfolio, a convex quadratic or quartic objective function, subject to nonconvex quadratic and cardinality constraints, needs to be minimized. In this paper, we introduce and investigate a comprehensive modeling framework that incorporates all the proxies previously proposed in the literature and develop an effective unifying algorithm that obtains a Karush–Kuhn–Tucker (KKT) point under mild regularity conditions. Specifically, we present a tailored penalty decomposition method that approximately solves a sequence of penalized subproblems by a block coordinate descent algorithm. To the best of our knowledge, the proposed algorithm is the first method for directly solving volatile, sparse, and mean-reverting portfolio problems based on the portmanteau criterion and crossing statistics proxies. Further, we establish that the convergence analysis extends to the nonconvex objective function case, provided the starting penalty parameter is larger than a finite bound and the objective function has a bounded level set. Numerical experiments on the S&P 500 data set demonstrate the efficiency of the proposed algorithm in comparison to a semidefinite relaxation-based approach and suggest that the crossing statistics proxy yields more desirable portfolios.
Index Terms—Penalty Decomposition Methods, Sparse Optimization, Mean-reverting Portfolios, Predictability Proxy, Portmanteau Criterion, Crossing Statistics.
1 Introduction
Portfolios exhibiting mean-reverting behavior are of great interest to practitioners due to their predictability, which creates arbitrage opportunities. The task is to construct a portfolio that tends to return to its average value over time and to use its behavior to make trading decisions. Traditionally, such portfolios are obtained by considering a linear combination of assets that is stationary, through co-integration methods. However, this approach often results in a large number of assets, which may not exhibit reasonable volatility, making such portfolios practically inapplicable [12, 28]. Therefore, along with mean reversion, reasonable volatility is a mandatory property in realistic situations. Moreover, trading costs can be notably reduced by limiting the number of assets, and hence sparse portfolios offer advantages (note that the notion of sparsity is widely used in signal processing and machine learning applications; e.g., [19] and references therein). Hence, constructing a mean-reverting portfolio that exhibits reasonable volatility and comprises a relatively small number of assets has attracted attention in the literature [20, 8]. Note also that the concept of sparsity holds significant relevance across various applications within contemporary science [18, 24, 17, 1].
Three standard proxies that reflect the mean-reversion property are the predictability measure, the portmanteau criterion, and crossing statistics [9]. The predictability measure, initially defined for stationary processes in [4] and later generalized to non-stationary processes by [3], assesses how similar a time series is to a noise process. Let $A_i$ for $i = 0, 1, \dots, p$ be the empirical autocovariance matrices associated with a multivariate time series (see (16) for details); then the mean-reversion property via the predictability proxy can be captured using the following problem [9]:
$$\min_{x \in \mathbb{R}^n} \ x^\top A_1^\top A_0^{-1} A_1 x \quad \text{subject to} \quad \|x\|_2 = 1. \tag{1}$$
The portmanteau statistic, proposed by [16], is used to determine whether a process is white noise. In particular, mean reversion is obtained via the portmanteau criterion proxy as follows [9]:
$$\min_{x \in \mathbb{R}^n} \ \sum_{i=1}^{p} \left(x^\top A_i x\right)^2 \quad \text{subject to} \quad \|x\|_2 = 1. \tag{2}$$
The choice of $p$ depends on factors such as data availability, portfolio size, risk tolerance, and computational complexity; hence, in applications, a sensitivity analysis is recommended for selecting an appropriate $p$ [11, 14]. The crossing statistic calculates the likelihood of a process crossing its mean per unit of time [26]. Precisely, the crossing statistics proxy yields mean reversion via [9]:
$$\min_{x \in \mathbb{R}^n} \ x^\top A_1 x + \mu \sum_{i=2}^{p} \left(x^\top A_i x\right)^2 \quad \text{subject to} \quad \|x\|_2 = 1. \tag{3}$$
Here, $\mu \geq 0$ serves as a hyperparameter governing the influence of the higher-lag autocovariance matrices (see (16)) on the mean-reversion characteristic of the target portfolio. On the other hand, recall that sufficient volatility can be obtained by imposing a constraint on the variance, that is, $x^\top A_0 x \geq \nu$ for some positive threshold $\nu$, where $A_0$ is the covariance matrix. Hence, the statistical proxy-based problems (1), (2), and (3), together with volatility and sparsity constraints, can be formulated into the following comprehensive model:
$$\begin{aligned} \min_{x \in \mathbb{R}^n} \quad & f(x) := x^\top M x + \mu \sum_{i=1}^{\ell} \left(x^\top N_i x\right)^2 \\ \text{subject to} \quad & x^\top A_0 x \geq \nu, \quad \|x\|_2 = 1, \quad \|x\|_0 \leq k, \end{aligned} \tag{P}$$
where $M$ and $N_i$ for $i \in [\ell]$ are symmetric matrices determined by the chosen proxy (for further details see (17)). We emphasize that autocovariance matrices provide information about the temporal dependence structure among variables in a time series data set and are commonly used in time series analysis and econometrics for forecasting, model estimation, and inference; see Section 3 for details. Lastly, it is important to emphasize that $\ell$ is not a hyperparameter; instead, it is a parameter determined immediately by the model choice, as detailed in equation (17). Also, the hyperparameter $\mu$ only matters when we consider the crossing statistics-based model and plays no role in the predictability- and portmanteau criterion-based models.
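To fix ideas, the following minimal Python sketch evaluates the objective and checks the constraints of (P); the function names, the array-based interface, and the tolerance are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def proxy_objective(x, M, N_list, mu):
    """Objective of (P): f(x) = x'Mx + mu * sum_i (x'N_i x)^2.

    M, N_list, and mu encode the chosen proxy as in (17); e.g.,
    predictability corresponds to an empty N_list (mu plays no role).
    """
    return x @ M @ x + mu * sum((x @ N @ x) ** 2 for N in N_list)

def is_feasible(x, A0, nu, k, tol=1e-8):
    """Volatility, unit-norm, and sparsity constraints of (P)."""
    return (x @ A0 @ x >= nu - tol
            and abs(np.linalg.norm(x) - 1.0) <= tol
            and np.count_nonzero(x) <= k)
```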
Despite the extensive literature on solving related problems in portfolio optimization [7, 13, 15, 23], the only existing approach for handling (P) suggests applying the following semidefinite programming (SDP) relaxation [8]:
$$\begin{aligned} \min_{X \in \mathbb{S}^n} \quad & \operatorname{Tr}(M X) + \mu \sum_{i=1}^{\ell} \operatorname{Tr}(N_i X)^2 + \rho \|X\|_1 \\ \text{subject to} \quad & \operatorname{Tr}(A_0 X) \geq \nu, \quad \operatorname{Tr}(X) = 1, \quad X \succeq 0, \end{aligned} \tag{SDP-P}$$
with $\rho > 0$ and $\|X\|_1 = \sum_{i,j} |X_{ij}|$. However, this method exhibits several drawbacks: (i) the optimal values of (P) and (SDP-P) may differ; (ii) a solution of (SDP-P) is not necessarily rank-one; and (iii) even if it is rank-one, a common approach is to apply sparse principal component analysis to it, but the qualitative properties of the resulting solution with respect to the original problem are not well understood. Therefore, the effectiveness of applying an SDP relaxation to (P) lacks rigorous theoretical support. The main reason is that the presence of the sparsity-inducing term in the objective function of (SDP-P) breaks down the standard techniques that relate a QCQP (without a cardinality constraint) to its SDP relaxation. More specifically, there are no known theoretical results that specify the relationship between the optimal values and solutions of a QCQP with a cardinality constraint and those of its SDP relaxation. Further, there exists a body of literature addressing general cardinality-constrained problems through other relaxation techniques, such as mixed-integer nonlinear programming [2] and continuous relaxations of mixed-integer programs [6], but such methods are less promising here because they do not exploit the specific structure of (P) in their design.
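For reference, a CVXPY-based sketch of (SDP-P) under these definitions might look as follows; the exact weighting of the $\ell_1$ term and the solver choice are assumptions for illustration.

```python
import cvxpy as cp

def sdp_relaxation(M, N_list, mu, A0, nu, rho):
    """Sketch of (SDP-P): lift x to X = xx', relax the rank-one
    requirement, and replace the cardinality constraint by the
    convex surrogate rho * ||X||_1."""
    n = M.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    obj = (cp.trace(M @ X)
           + mu * sum(cp.square(cp.trace(N @ X)) for N in N_list)
           + rho * cp.sum(cp.abs(X)))
    constraints = [cp.trace(A0 @ X) >= nu, cp.trace(X) == 1, X >> 0]
    cp.Problem(cp.Minimize(obj), constraints).solve()
    return X.value  # a sparse PCA step is then applied to extract a portfolio
```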
Therefore, we propose an efficient tailored algorithm that exploits the structure of (P) and utilizes a penalty decomposition method to directly solve (P), obtaining a KKT point of (P). To the best of our knowledge, the proposed algorithm is the first specialized approach designed to directly tackle problem (P) under the portmanteau criterion and crossing statistics proxies. This unifying algorithm is not only applicable to all three proxies mentioned above but also works for nonconvex objective functions. In the proposed methodology, we use the variable splitting technique to simplify the complex problem (P), which involves highly nonlinearly coupled constraints. Generally, the splitting approach introduces additional variables to the problem, which helps loosen the coupling between the original variables [27]. This transformation enables the problem to be tackled more efficiently. After splitting and penalizing the related equality constraints, we obtain a sequence of penalized subproblems that can be approximately solved. Each subproblem is efficiently tackled by a block coordinate descent algorithm that outputs a saddle point of the corresponding subproblem. Each step of the block coordinate algorithm either has a closed-form solution or is shown to be tractable; see problems $(\mathrm{P}_y)$, $(\mathrm{P}_x)$, and $(\mathrm{P}_z)$, respectively, in Section 2. In addition, the sequence of objective values in the block coordinate algorithm is either strictly decreasing or reaches equality at a finite step, which also yields a saddle point. We establish that our tailored penalty decomposition algorithm finds a KKT point of the original problem (P). Moreover, in the nonconvex case, the analysis extends to demonstrate that our algorithm succeeds again, provided the starting penalty parameter is larger than a finite positive bound and the objective function has a bounded level set; see Theorem 2.3 in Section 5 and its proof in Section 5.4 of the Appendix.
We mention that [20] introduces a penalty decomposition method that exclusively addresses the sparse, volatile, and mean-reverting problem associated with the predictability proxy, that is, the case $\mu = 0$ in (P). In this work, we extend this methodology to efficiently tackle the problems arising from the other proxies as well. We emphasize that, to the best of our knowledge, the portmanteau criterion- and crossing statistics-based problems with volatility and sparsity constraints have not been previously considered in the literature, even though they may produce more profitable portfolios since they leverage more information. We numerically demonstrate that this is indeed the case for crossing statistics, which highlights the importance of designing an effective algorithm for this intractable problem; see Section 3. Further, our empirical results reveal that the quadratic term in the objective function is crucial for capturing mean reversion.
We demonstrate the effectiveness of the algorithm by conducting a numerical comparison with the SDP relaxation approach using the S&P 500 data set. Our method outperforms the SDP relaxation in terms of practical performance measures. Further, we compare the performance of the portfolios generated from the predictability, portmanteau criterion, and crossing statistics proxies when our algorithm is applied. The results show that, after tuning the parameters $p$ and $\mu$, the crossing statistics-based portfolio not only effectively captures the mean-reversion property but also achieves superior performance in terms of Sharpe ratio and cumulative profit and loss, compared to the predictability- and portmanteau criterion-based portfolios; see Section 3. This is consistent with the fact that its corresponding problem exploits more information from the data. However, the portmanteau criterion-based portfolio may not succeed in retaining mean reversion in practice, underscoring the significance of the quadratic term in the objective function for capturing this desirable property.
Organization. The remainder of the paper is organized as follows. Section 2 presents the proposed penalty decomposition algorithm for directly solving (P), together with all technical issues. Extensive numerical experiments are provided in Section 3, while Section 4 draws some concluding remarks. The proofs of the convergence analysis are presented in the Appendix.
Notation. The complement of an index set $T$ is denoted by $T^c$ and its cardinality by $|T|$. For a natural number $n$, let $[n] := \{1, \dots, n\}$. For $x \in \mathbb{R}^n$ and $T \subseteq [n]$, $x_T$ is the coordinate projection of $x$ with respect to the indices in $T$, that is, $(x_T)_i = x_i$ for $i \in T$ and $(x_T)_i = 0$ for $i \in T^c$. By $\|\cdot\|$, we mean the standard Euclidean norm. We indicate that a symmetric matrix $A$ is positive semidefinite or positive definite by $A \succeq 0$ and $A \succ 0$, respectively. We denote the smallest and largest eigenvalues of $A$ by $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$, respectively.
2 A Penalty Decomposition Method for Statistical Proxy-based Portfolios with Sparsity and Volatility Constraints
An algorithm for directly minimizing (P) that obtains a KKT point is presented next, with all proofs deferred to the Appendix. The PPC (standing for predictability, portmanteau criterion, and crossing statistics) algorithm leverages a tailored penalty decomposition method in which a sequence of penalty subproblems is approximately solved. For each penalty subproblem, a block coordinate descent (BCD) algorithm is utilized that produces a saddle point of the penalty subproblem. The limit point of a suitable subsequence of such saddle points is demonstrated to be a KKT point of (P).
2.1 Details of PPC and BCD Algorithms
By introducing two new variables $y$ and $z$, we can equivalently reformulate (P) as follows:
$$\begin{aligned} \min_{x, y, z \in \mathbb{R}^n} \quad & f(y) \\ \text{subject to} \quad & y^\top A_0 y \geq \nu, \quad \|x\|_0 \leq k, \quad \|z\|_2 = 1, \\ & x = y, \quad z = y. \end{aligned} \tag{$\tilde{\mathrm{P}}$}$$
The reason for such splitting is to effectively handle the highly nonlinearly coupled constraints of the original problem. Specifically, we aim for the subproblems of Algorithm 1 to be as simple as possible. Let
$$q_\rho(x, y, z) := f(y) + \frac{\rho}{2}\left(\|x - y\|^2 + \|z - y\|^2\right) \tag{4}$$
and let $\mathcal{X} := \{x : \|x\|_0 \leq k\}$, $\mathcal{Y} := \{y : y^\top A_0 y \geq \nu\}$, and $\mathcal{Z} := \{z : \|z\|_2 = 1\}$ denote the constraint sets of $(\tilde{\mathrm{P}})$.
By penalizing the last two constraints in $(\tilde{\mathrm{P}})$, we tackle this problem via a sequence of penalty subproblems as follows:
$$\min_{x, y, z} \ q_\rho(x, y, z) \quad \text{subject to} \quad x \in \mathcal{X}, \ y \in \mathcal{Y}, \ z \in \mathcal{Z}, \tag{$\mathrm{P}_\rho$}$$
with $\rho$ going to infinity incrementally (such techniques have demonstrated their efficacy in the existing literature [21, 20]). The auxiliary variables $y$ and $z$ are introduced so that simpler subproblems can be obtained in Algorithm 1. We establish in the sequel that this method efficiently finds a saddle point of $(\mathrm{P}_\rho)$, that is, a point at which each of the three blocks minimizes $q_\rho$ over its own constraint set while the other two blocks are held fixed.
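In code, the penalized objective takes the following shape; this small sketch assumes the quadratic-penalty form displayed in (4) and reuses the proxy data (`M`, `N_list`, `mu`) from the earlier sketch.

```python
import numpy as np

def penalized_objective(x, y, z, rho, M, N_list, mu):
    """q_rho(x, y, z) from (4): f evaluated at the y-block, plus
    quadratic penalties on the splitting constraints x = y and z = y."""
    f = y @ M @ y + mu * sum((y @ N @ y) ** 2 for N in N_list)
    return f + 0.5 * rho * (np.sum((x - y) ** 2) + np.sum((z - y) ** 2))
```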
Next, we discuss how to efficiently solve the restricted subproblems in Algorithm 1.
$(\mathrm{P}_y)$: This subproblem becomes the following nonconvex quadratic optimization problem:
$$\min_{y} \ y^\top Q y + c^\top y \quad \text{subject to} \quad y^\top A_0 y \geq \nu, \tag{$\mathrm{P}_y$}$$

where the symmetric matrix $Q$ and the vector $c$ are formed from the problem data, the penalty parameter, and the current values of the other two blocks.
Clearly, this problem has a solution, and its KKT conditions are as follows:
$$2\left(Q - \lambda A_0\right) y + c = 0, \qquad \lambda \geq 0, \qquad \lambda \left(\nu - y^\top A_0 y\right) = 0, \qquad y^\top A_0 y \geq \nu. \tag{5}$$
In general, this nonconvex program does not have a closed-form solution; nevertheless, we discuss next how to find its global minimizer by exploiting its SDP relaxation. Recall that the SDP relaxation of a general quadratic program with exactly one general quadratic constraint attains the same optimal value, provided that the program is strictly feasible [5]. It is easy to see that $(\mathrm{P}_y)$ is strictly feasible because $A_0$ has a positive eigenvalue, so $y^\top A_0 y > \nu$ holds for a suitably scaled $y$. Hence, we can find the optimal value of this nonconvex problem exactly by solving its convex SDP relaxation, given next:
$$\min_{W \in \mathbb{S}^{n+1}} \ \operatorname{Tr}\left(\bar{Q} W\right) \quad \text{subject to} \quad \operatorname{Tr}\left(\bar{A}_0 W\right) \geq \nu, \quad W_{11} = 1, \quad W \succeq 0, \tag{$\mathrm{SDP\text{-}P}_y$}$$

where

$$\bar{Q} := \begin{pmatrix} 0 & c^\top/2 \\ c/2 & Q \end{pmatrix}, \qquad \bar{A}_0 := \begin{pmatrix} 0 & 0 \\ 0 & A_0 \end{pmatrix}.$$
Its dual problem is given by
subject to
We first show that both problems are strictly feasible. Clearly, since , we see with is a strictly feasible point of the primal problem. To see that the dual problem is strictly feasible, it is enough to show that there exists a positive and a real such that This block matrix is positive definite, if and only if (i) and (ii) . To guarantee inequality (i), it is enough to select small enough such that
(6) |
which is doable because in view of for and . Inequality (ii) can be easily guaranteed by selecting a sufficiently large negative number for .
Given that both the primal and the dual problems have strictly feasible points, their optimal values coincide. Let $W^*$ be an optimal solution of $(\mathrm{SDP\text{-}P}_y)$ and consider an optimal dual solution as well. If $W^*$ has rank one, then the solution for $(\mathrm{P}_y)$ is trivially obtained. If not, using the procedure in [25, Lemma 2.2], we can compute a rank-one decomposition of $W^*$ in which every term attains the constraint value exactly.
From such a decomposition, a vector $y^*$ can be extracted that, together with the dual multiplier, satisfies the KKT conditions; consequently, $y^*$ yields a global solution to $(\mathrm{P}_y)$ and thus indeed satisfies (5).
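A CVXPY sketch of this SDP-based treatment of the subproblem is given below; `Q` and `c` stand in for the (context-dependent) data of $(\mathrm{P}_y)$, the homogenization $W = \begin{pmatrix} 1 & y^\top \\ y & Y \end{pmatrix}$ is the one described above, and the rank-one decomposition of [25, Lemma 2.2] is only indicated by a comment, so this is an illustrative sketch rather than the paper's exact routine.

```python
import cvxpy as cp
import numpy as np

def solve_qcqp_via_sdp(Q, c, A0, nu):
    """Solve min y'Qy + c'y s.t. y'A0y >= nu through its (tight) SDP
    relaxation over the homogenized matrix W = [[1, y'], [y, Y]]."""
    n = Q.shape[0]
    W = cp.Variable((n + 1, n + 1), symmetric=True)
    Y, y = W[1:, 1:], W[1:, 0]
    objective = cp.trace(Q @ Y) + c @ y
    constraints = [cp.trace(A0 @ Y) >= nu, W[0, 0] == 1, W >> 0]
    cp.Problem(cp.Minimize(objective), constraints).solve()
    # If W is (numerically) rank one, then Y = yy' and y solves the QCQP;
    # otherwise a rank-one decomposition as in [25, Lemma 2.2] is applied.
    return W.value[1:, 0]
```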
$(\mathrm{P}_x)$: This subproblem is as follows:

$$\min_{x} \ \frac{\rho}{2}\|x - y\|^2 \quad \text{subject to} \quad \|x\|_0 \leq k. \tag{$\mathrm{P}_x$}$$

Thus, due to Lemma 3.2 in [20], a closed-form solution of this problem is given by

$$x = \Pi_k(y), \tag{7}$$

where $\Pi_k$ is defined in (9).
$(\mathrm{P}_z)$: This subproblem becomes the following:

$$\min_{z} \ \frac{\rho}{2}\|z - y\|^2 \quad \text{subject to} \quad \|z\|_2 = 1, \tag{$\mathrm{P}_z$}$$

which has the closed-form solution

$$z = \frac{y}{\|y\|_2}. \tag{8}$$
Next, we develop our proposed PPC Algorithm 2, which starts from an initial positive penalty parameter and smoothly enlarges it so that it numerically goes to infinity. For each individual penalty parameter, Algorithm 1 is utilized to solve the corresponding subproblem. We assume that (P) is feasible and that a feasible point is available. To find such a point, one can solve the following sparse principal component analysis problem:

$$\max_{x} \ x^\top A_0 x \quad \text{subject to} \quad \|x\|_2 = 1, \ \|x\|_0 \leq k.$$
A stationary point of this problem can be obtained by the power method suggested in [22], started from an arbitrary initial point. We can run this cheap method from different starting points until the achieved stationary point satisfies the volatility constraint. The sparsifying operator is defined as

$$\Pi_k(v) := v_T, \tag{9}$$

where $T$ is an index set containing the $k$ largest components of $v$ in absolute value.
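In NumPy, this operator admits the following direct sketch (the function name is illustrative):

```python
import numpy as np

def sparsify(v, k):
    """Pi_k from (9): keep the k largest entries of v in absolute
    value and zero out the rest (ties broken arbitrarily)."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out
```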
To present this algorithm and its analysis later, we let
$$\Omega := \left\{ x \in \mathbb{R}^n : f(x) \leq f(x^0) \right\}, \tag{10}$$

where $x^0$ is the available feasible point.
Note that Algorithm 2 consists of two nested loops. We stop the inner loop, lines 5 to 10, if (after dropping the outer iteration index for simplicity):

$$\max\left\{\|x^{l+1} - x^{l}\|, \ \|y^{l+1} - y^{l}\|, \ \|z^{l+1} - z^{l}\|\right\} \leq \epsilon_I, \tag{11}$$
and the outer loop, lines 3 to 15, is stopped when the following convergence criterion is met:
$$\max\left\{\|x - y\|_{\infty}, \ \|z - y\|_{\infty}\right\} \leq \epsilon_O. \tag{12}$$
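Schematically, the two nested loops can be organized as in the following sketch; `update_blocks` abstracts one BCD sweep over $(\mathrm{P}_y)$, $(\mathrm{P}_x)$, and $(\mathrm{P}_z)$, and the parameter names and exact tests paraphrase (11)–(12) under the assumptions stated there.

```python
import numpy as np

def ppc(x0, update_blocks, rho0=1.0, sigma=10.0,
        eps_inner=1e-6, eps_outer=1e-6, max_outer=50, max_inner=1000):
    """Skeleton of the PPC method (Algorithm 2): an outer loop that
    enlarges the penalty parameter and an inner BCD loop (Algorithm 1)."""
    x = y = z = x0.copy()
    rho = rho0
    for _ in range(max_outer):                    # outer loop
        for _ in range(max_inner):                # inner BCD loop
            x1, y1, z1 = update_blocks(x, y, z, rho)
            step = max(np.linalg.norm(x1 - x),
                       np.linalg.norm(y1 - y),
                       np.linalg.norm(z1 - z))
            x, y, z = x1, y1, z1
            if step <= eps_inner:                 # inner test, cf. (11)
                break
        gap = max(np.linalg.norm(x - y, np.inf),
                  np.linalg.norm(z - y, np.inf))
        if gap <= eps_outer:                      # outer test, cf. (12)
            break
        rho *= sigma                              # enlarge the penalty
    return x
```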
2.2 Convergence Analysis
Analysis of Algorithm 1. We analyze a sequence generated by Algorithm 1 and provide a tailored proof that any such sequence yields a saddle point of $(\mathrm{P}_\rho)$. This justifies the use of Algorithm 1 for this nonconvex problem.
Lemma 2.1.
Let the penalty parameter $\rho > 0$ be fixed and consider the iterates of Algorithm 1; then they remain bounded, and further, we get
(13) |
This lemma establishes that the sequence created by Algorithm 1 is bounded. Therefore, every sequence generated by this algorithm possesses an accumulation point. The subsequent theorem establishes that every accumulation point is indeed a saddle point of $(\mathrm{P}_\rho)$. Additionally, the sequence of objective values is either strictly decreasing or two consecutive terms are equal, in which case a saddle point has been reached. Essentially, Algorithm 1 produces a saddle point either in finitely many steps or in the limit.
Theorem 2.1.
Let $\{(x^l, y^l, z^l)\}$ be a sequence generated by Algorithm 1 for solving $(\mathrm{P}_\rho)$, and suppose that $(x^*, y^*, z^*)$ is an accumulation point of this sequence; then $(x^*, y^*, z^*)$ is a saddle point of the nonconvex problem $(\mathrm{P}_\rho)$. Moreover, the sequence of objective values $\{q_\rho(x^l, y^l, z^l)\}$ is non-increasing. If two consecutive objective values coincide for some $l$, then $(x^l, y^l, z^l)$ is a saddle point of $(\mathrm{P}_\rho)$.
Analysis of Algorithm 2. Suppose that $x^*$ is a local minimum of (P); then there exists an index set $T$ with $|T| \leq k$ such that $x^*_{T^c} = 0$ and such that $x^*$ is also a local minimizer of the following problem:

$$\min_{x} \ f(x) \quad \text{subject to} \quad x^\top A_0 x \geq \nu, \quad \|x\|_2 = 1, \quad x_{T^c} = 0,$$
where $f$ is defined in the objective function of (P). Recall that Robinson's constraint qualification conditions for a local minimizer of the above problem are as follows [20]:
(14) |
It can be seen that Robinson's conditions for case (i) always hold. For case (ii), Robinson's conditions hold if and only if a certain set of constraint gradients is linearly independent, and it is easy to see that this set is linearly independent almost always. This implies that, except for a set of measure zero, Robinson's conditions are satisfied for (P); that is, such an assumption is essentially always met in practice.
Under Robinson's conditions mentioned above, the KKT conditions for a local minimizer of (P) amount to the existence of Lagrange multipliers, with the one associated with the volatility constraint being nonnegative, such that the following holds:
(15)
The next result states that an arbitrary sequence generated by Algorithm 2 has a convergent subsequence whose limit point is a KKT point of (P).
Theorem 2.2.
Suppose that the tolerances $\epsilon_I$ and $\epsilon_O$ are driven to zero and the penalty parameter increases to infinity. Let $\{x^l\}$ be a sequence generated by Algorithm 2 for solving (P). Then, the following hold:

(i) $\{x^l\}$ has a convergent subsequence whose accumulation point $x^*$ satisfies $\|x^*\|_0 \leq k$. Further, there exists an index subset $T$ with $|T| \leq k$ such that $x^*_{T^c} = 0$.

(ii) Such an accumulation point $x^*$ is a KKT point of (P), that is, it satisfies (15).
Extension to Nonconvex Objective Functions. Note that the focus has been on constructing portfolios based on the predictability, portmanteau criterion, and crossing statistics proxies, for which the objective function $f$ appearing in (10) is convex. Nevertheless, we point out that, from a technical viewpoint, one can employ Algorithm 2 to solve (P) even when the objective function is not convex. In this case, we require the level set defined in (10) to be bounded and the starting penalty parameter in this algorithm to be large enough.
3 Numerical Experiments
Here, we examine the performance of Algorithm 2 in finding sparse, volatile, and statistical proxy-based mean-reverting portfolios on real data from the US stock market Standard and Poor's (S&P 500) index. This data set is often used to show the practical effectiveness of an algorithm or to compare the numerical performance of competing approaches.
Recall that a statistical arbitrage strategy typically involves four main steps: constructing an appropriate asset pool, designing a mean-reverting portfolio, verifying the mean-reverting property through a unit-root test, and finally trading the resulting portfolio. The construction of an asset pool can be achieved using a method described in [8], which utilizes the smallest eigenvalue of the covariance matrix. In this paper, we focus on the crucial second step of developing a mean-reverting portfolio by minimizing the predictability, portmanteau criterion, or crossing statistics measures, while incorporating volatility and sparsity constraints to ensure the profitability of the portfolio in practice. The mean-reversion property can be verified using a unit-root test such as the Dickey–Fuller test [10]. A comprehensive discussion of the trading strategy for a mean-reverting portfolio, based on observed/normalized spread values, is given in [30, 29]; normalizing the spread values makes the strategy less sensitive to absolute price levels. Table 1 in [29] provides examples of how the strategy works for different trading positions and normalized spread values. The strategy involves closing long positions and taking short positions when the normalized spread value exceeds a threshold, and taking no action when the normalized spread value is negative. It is a simple yet effective strategy, but careful testing and analysis are necessary before adopting it as a long-term trading strategy.
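For concreteness, a minimal sketch of the two cases quoted above follows; the threshold `delta`, the $+1/-1$ position encoding, and the function name are illustrative assumptions, and the complete position table is the one in [29, Table 1].

```python
import numpy as np

def trade_positions(spread, delta=1.0):
    """Spread-threshold rule: normalize the spread, close longs and
    open a short when it exceeds +delta, and take no action when it
    is negative (positions are carried over otherwise)."""
    s = (spread - np.mean(spread)) / np.std(spread)
    position, positions = 0, []
    for v in s:
        if v > delta:
            position = -1   # close any long position, open a short
        # v < 0: no action; the current position is simply held
        positions.append(position)
    return np.array(positions)
```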
In our numerical experiments, various standard performance metrics are used to measure the quality of the portfolios, which, in our case, are obtained from the limit point of Algorithm 2 and from applying sparse PCA to the solution of (SDP-P). The first measure, called cumulative profit and loss (P&L), captures the overall return of a mean-reverting portfolio within a single trading period, from $t_0$ to $t_1$, and is calculated by accumulating the signed single-period returns,
where the sign is $+1$ when a long position is open and $-1$ when a short position is open at time $t$; for a particular asset, the single-period return is computed from its prices at consecutive times. Table 1 in [29] is used for this purpose, with the trading threshold set to the standard deviation of the portfolio. The return on investment (ROI) is another metric used to evaluate the investment return of a mean-reverting portfolio; it is calculated by dividing the profit and loss of the portfolio at time $t$ by the $\ell_1$-norm of the portfolio weights. The last metric used to evaluate the performance of a portfolio over a given period of time, from $t_0$ to $t_1$, is the Sharpe ratio (SR), defined as the ratio of the mean portfolio return to its standard deviation over the period.
A portfolio with a higher SR is considered more profitable.
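The three metrics can be computed as in the sketch below, which assumes a series of per-period portfolio returns and the $+1/-1/0$ position encoding from the trading-rule sketch; annualization and risk-free adjustments are omitted.

```python
import numpy as np

def cumulative_pnl(returns, positions):
    """Cumulative P&L over a trading period: accumulate the per-period
    returns signed by the open position (+1 long, -1 short, 0 flat)."""
    return np.cumsum(positions * returns)

def roi(pnl_t, weights):
    """ROI at time t: period P&L normalized by the l1 norm of the
    portfolio weights (the gross invested capital)."""
    return pnl_t / np.linalg.norm(weights, 1)

def sharpe_ratio(returns):
    """Sharpe ratio over the period: mean return over its standard
    deviation."""
    return np.mean(returns) / np.std(returns)
```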
To conduct the numerical experiments, we start by explaining how to select the matrices $A_i$ for $i = 0, 1, \dots, p$. Given a multivariate stochastic process $\{s_t\}$ with values in $\mathbb{R}^n$ and a nonnegative integer $i$, the lag-$i$ empirical autocovariance matrix of a sample path $s_1, \dots, s_T$ of this process is defined as
$$A_i := \frac{1}{T - i} \sum_{t=1}^{T - i} \left(s_t - \bar{s}\right)\left(s_{t+i} - \bar{s}\right)^\top, \quad \text{where } \bar{s} := \frac{1}{T}\sum_{t=1}^{T} s_t. \tag{16}$$
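A NumPy sketch of (16) follows; the centering and the $1/(T-i)$ normalization mirror the displayed formula, and any symmetrization needed for the quadratic forms is left to the caller.

```python
import numpy as np

def autocovariance(S, i):
    """Lag-i empirical autocovariance matrix A_i as in (16), where
    S has shape (T, n): T observations of an n-dimensional path."""
    T = S.shape[0]
    Sc = S - S.mean(axis=0)              # center the sample path
    return Sc[:T - i].T @ Sc[i:] / (T - i)
```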
In particular, for the three proxies we have

$$\text{predictability:} \ M = A_1^\top A_0^{-1} A_1, \ \mu = 0; \qquad \text{portmanteau:} \ M = 0, \ N_i = A_i, \ \ell = p; \qquad \text{crossing statistics:} \ M = A_1, \ N_i = A_{i+1}, \ \ell = p - 1. \tag{17}$$
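Under the reading of (17) given above, the proxy-specific data can be assembled as follows; the string-keyed interface is an illustrative assumption.

```python
import numpy as np

def proxy_data(A, proxy, p):
    """Assemble M and the N_i's of (P) from autocovariances
    A = [A_0, A_1, ..., A_p], following (17). For predictability the
    quartic part is empty, so mu plays no role there."""
    n = A[0].shape[0]
    if proxy == "predictability":
        return A[1].T @ np.linalg.solve(A[0], A[1]), []
    if proxy == "portmanteau":
        return np.zeros((n, n)), [A[i] for i in range(1, p + 1)]
    if proxy == "crossing":
        return A[1], [A[i] for i in range(2, p + 1)]
    raise ValueError("unknown proxy")
```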
We construct the asset pool by combining the pools suggested in portfolio optimization studies [8, 30, 29, 28], resulting in a combined pool of assets. The trading time period is from February 1st, 2012 to June 30th, 2014.
We run Algorithm 2 with the inner tolerance $\epsilon_I$ defined in (11), the outer tolerance $\epsilon_O$ defined in (12), and the volatility threshold $\nu$ selected to be larger than thirty percent of the median variance of all assets in the pool (see [8]). In order to obtain reasonable results and compare the performance of the portmanteau criterion- and crossing statistics-based portfolios, we must decide upon a specific $p$. To this end, we select the best $p$ among the candidate set when we apply our algorithm in the case of the portmanteau criterion, as follows. We measure the average Sharpe ratios for small, medium, and large sparsity levels. Based on this procedure (results not reported due to space considerations), we find that one candidate produces better results than the others, while the differences among the remaining candidates are rather marginal; this best-performing $p$ is fixed in our experiments. After fixing $p$, the next step involves selecting an appropriate $\mu$. For this aim, we concentrate exclusively on portfolio optimization based on crossing statistics. We opt for a range of $\mu$ values within a bounded interval, incrementally adjusted with a step size of 0.0009. While applying Algorithm 2 to the crossing statistics-based model across various sparsity levels, specifically $k \in \{5, 10, 17\}$, we evaluate the average Sharpe ratios; by this process, the best candidate $\mu$ is identified. Furthermore, for (SDP-P), we need to determine $\rho$ as well. To select this hyperparameter, we apply the methodology presented in [8] (that is, we solve this convex problem and apply sparse principal component analysis to its solution to obtain the corresponding portfolio) for all three cases reported in (17) and $\rho$ values from 0.001 to 1.999, with 0.009 increments. This grid search revealed that the range between 0.8 and 1.2 showed better promise for $\rho$. We then narrowed down our selection within this range, using a 0.01 step. Throughout this iterative approach, we found that $\rho = 1$ yielded the highest Sharpe ratio. As such, $\rho$ is selected as 1 in our comparisons.
The spread plots in Figures 1, 2, and 3 show that the proposed method exhibits better mean reversion overall, and the second and third panels in these figures show that our method surpasses the SDP relaxation technique in both standard cumulative P&L and Sharpe ratio assessments. This holds true for a range of sparsity levels, including small, medium, and large, as well as across all three portfolio models: predictability-, portmanteau criterion-, and crossing statistics-based portfolios. Nevertheless, we emphasize that the numerical performance of the SDP relaxation raises intriguing theoretical questions that require thorough investigation; for example, under what conditions on the problem data in (P) does its SDP relaxation (SDP-P) have a rank-one solution, and, when such a solution is available, how can it be used to construct a solution of (P)?
Next, we compare the performance of the portfolios generated from the predictability, portmanteau criterion, and crossing statistics proxies based on Algorithm 2. Figure 4 demonstrates that, after careful tuning of the parameters $p$ and $\mu$, the crossing statistics-based portfolio not only accurately captures the mean-reversion property but also achieves superior performance in terms of Sharpe ratio and cumulative profit and loss, surpassing the predictability- and portmanteau criterion-based portfolios. This observation aligns with the fact that Algorithm 2 leverages more information from the data in this case. In addition, the portmanteau criterion proxy seems to fail in retaining mean reversion, underscoring the critical role of the quadratic term in the objective function for capturing this desirable property.
Figure 1 (sparsity level k = 5): comparison of the proposed method and the SDP relaxation.
![Predictability, k = 5](https://cdn.awesomepapers.org/papers/b9f93e85-b79c-4e64-896c-2d69b94024ab/Pred_SDP_k_5.png)
![Portmanteau criterion, k = 5](https://cdn.awesomepapers.org/papers/b9f93e85-b79c-4e64-896c-2d69b94024ab/Port_SDP_k_5.png)

Figure 2 (sparsity level k = 10): comparison of the proposed method and the SDP relaxation.
![Predictability, k = 10](https://cdn.awesomepapers.org/papers/b9f93e85-b79c-4e64-896c-2d69b94024ab/Pred_SDP_k_10.png)
![Portmanteau criterion, k = 10](https://cdn.awesomepapers.org/papers/b9f93e85-b79c-4e64-896c-2d69b94024ab/Port_SDP_k_10.png)

Figure 3 (sparsity level k = 17): comparison of the proposed method and the SDP relaxation.
![Predictability, k = 17](https://cdn.awesomepapers.org/papers/b9f93e85-b79c-4e64-896c-2d69b94024ab/Pred_SDP_k_17.png)
![Portmanteau criterion, k = 17](https://cdn.awesomepapers.org/papers/b9f93e85-b79c-4e64-896c-2d69b94024ab/Port_SDP_k_17.png)

Figure 4: comparison of the predictability-, portmanteau criterion-, and crossing statistics-based portfolios produced by Algorithm 2.
![Proxy comparison, k = 5](https://cdn.awesomepapers.org/papers/b9f93e85-b79c-4e64-896c-2d69b94024ab/P_P_C_k_5.png)
![Proxy comparison, k = 10](https://cdn.awesomepapers.org/papers/b9f93e85-b79c-4e64-896c-2d69b94024ab/P_P_C_k_10.png)
4 Conclusion
This paper developed an algorithm that solves the optimization problem underlying the construction of asset portfolios that possess the mean-reversion property while also being sparse and exhibiting reasonable volatility. Three main proxies for capturing mean reversion are predictability, the portmanteau criterion, and crossing statistics, and a comprehensive optimization model was developed for this task. We leveraged the variable splitting technique to unfold the highly coupled nonconvex constraints of this model and proposed a tailored penalty decomposition method that solves a sequence of penalty subproblems via an efficient block coordinate descent method. We established not only that the proposed algorithm finds a KKT point when the objective function is convex, but also that its analysis extends to nonconvex objective functions (with partial modifications). We numerically examined the performance of the algorithm and showed that it outperforms the SDP relaxation approach in terms of standard measures on an S&P 500 data set. Further, we compared the statistical proxy-based portfolios and demonstrated that, after tuning hyperparameters, the crossing statistics surrogate captures mean reversion well and achieves better performance than the predictability- and portmanteau criterion-based portfolios. Our method has significant potential applications in finance, economics, and other related fields.
References
- [1] Ramin Ayanzadeh, Ahmad Mousavi, Milton Halem, and Tim Finin. Quantum annealing based binary compressive sensing with matrix uncertainty. arXiv preprint arXiv:1901.00088, 2019.
- [2] Dimitris Bertsimas and Romy Shioda. Algorithm for cardinality-constrained quadratic optimization. Computational Optimization and Applications, 43(1):1–22, 2009.
- [3] Ronald Bewley, David Orden, Minxian Yang, and Lance A. Fisher. Comparison of Box–Tiao and Johansen canonical estimators of cointegrating vectors in VEC(1) models. Journal of Econometrics, 64(1-2):3–27, 1994.
- [4] George EP Box and George C Tiao. A canonical analysis of multiple time series. Biometrika, 64(2):355–365, 1977.
- [5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
- [6] Oleg P Burdakov, Christian Kanzow, and Alexandra Schwartz. Mathematical programs with cardinality constraints: reformulation by complementarity-type conditions and a regularization method. SIAM Journal on Optimization, 26(1):397–425, 2016.
- [7] Catarina Coelho, José Luis Santos, and Pedro Júdice. The performance of bank portfolio optimization. International Transactions in Operational Research, 2023.
- [8] Marco Cuturi and Alexandre d’Aspremont. Mean-reverting portfolios: Tradeoffs between sparsity and volatility. Financial Signal Processing and Machine Learning, pages 23–40, 2016.
- [9] Marco Cuturi and Alexandre d’Aspremont. Mean reversion with a variance threshold. In International Conference on Machine Learning, pages 271–279. PMLR, 2013.
- [10] David A Dickey and Wayne A Fuller. Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74(366a):427–431, 1979.
- [11] Jin-Hong Du, Yifeng Guo, and Xueqin Wang. High-dimensional portfolio selection with cardinality constraints. Journal of the American Statistical Association, pages 1–13, 2022.
- [12] Norbert Fogarasi and Janos Levendovszky. Sparse, mean reverting portfolio selection using simulated annealing. Algorithmic Finance, 2(3-4):197–211, 2013.
- [13] F Hooshmand, Z Anoushirvani, and SA MirHassani. Model and efficient algorithm for the portfolio selection problem with real-world constraints under value-at-risk measure. International Transactions in Operational Research, 2023.
- [14] Takahiro Imai and Kei Nakagawa. Statistical arbitrage strategy in multi-asset market using time series analysis. Journal of Mathematical Finance, 10(2):334–344, 2020.
- [15] Kui Jing, Fengmin Xu, and Xuepeng Li. A bi-level programming framework for identifying optimal parameters in portfolio selection. International Transactions in Operational Research, 29(1):87–112, 2022.
- [16] Greta M Ljung and George EP Box. On a measure of lack of fit in time series models. Biometrika, 65(2):297–303, 1978.
- [17] Hossein Moosaei, Ahmad Mousavi, Milan Hladík, and Zheming Gao. Sparse l1-norm quadratic surface support vector machine with universum data. Soft Computing, 27(9):5567–5586, 2023.
- [18] Ahmad Mousavi and George Michailidis. Cardinality constrained mean-variance portfolios: A penalty decomposition algorithm. arXiv preprint arXiv:2309.16004, 2023.
- [19] Ahmad Mousavi, Mehdi Rezaee, and Ramin Ayanzadeh. A survey on compressive sensing: classical results and recent advancements. Journal of Mathematical Modeling, 8(3):309–344, 2020.
- [20] Ahmad Mousavi and Jinglai Shen. A penalty decomposition algorithm with greedy improvement for mean-reverting portfolios with sparsity and volatility constraints. International Transactions in Operational Research, 2022.
- [21] Parvin Nazari, Ahmad Mousavi, Davoud Ataee Tarzanagh, and George Michailidis. A penalty based method for communication-efficient decentralized bilevel programming. arXiv preprint arXiv:2211.04088, 2022.
- [22] Andrzej Ruszczynski. Nonlinear Optimization. Princeton University Press, 2011.
- [23] Ruchika Sehgal and Aparna Mehra. Robust reward–risk ratio portfolio optimization. International Transactions in Operational Research, 28(4):2169–2190, 2021.
- [24] Jinglai Shen and Ahmad Mousavi. Least sparsity of $p$-norm based optimization problems with $p > 1$. SIAM Journal on Optimization, 28(3):2721–2751, 2018.
- [25] Yinyu Ye and Shuzhong Zhang. New results on quadratic minimization. SIAM Journal on Optimization, 14(1):245–267, 2003.
- [26] N Donald Ylvisaker. The expected number of zeros of a stationary gaussian process. The Annals of Mathematical Statistics, 36(3):1043–1046, 1965.
- [27] Jinshan Zeng, Tim Tsz-Kit Lau, Shaobo Lin, and Yuan Yao. Global convergence of block coordinate descent in deep learning. In International conference on machine learning, pages 7313–7323. PMLR, 2019.
- [28] Jize Zhang, Tim Leung, and Aleksandr Aravkin. Sparse mean-reverting portfolios via penalized likelihood optimization. Automatica, 111:108651, 2020.
- [29] Ziping Zhao and Daniel P Palomar. Mean-reverting portfolio with budget constraint. IEEE Transactions on Signal Processing, 66(9):2342–2357, 2018.
- [30] Ziping Zhao, Rui Zhou, Zhongju Wang, and Daniel P Palomar. Optimal portfolio design for statistical arbitrage in finance. In 2018 IEEE Statistical Signal Processing Workshop (SSP), pages 801–805. IEEE, 2018.
5 Appendix
All technical proofs are given next.
5.1 Proof of Lemma 2.1.
Proof.
We drop the subscripts for simplicity. If , the Lagrange multiplier in $(\mathrm{P}_y)$ must be zero, such that we have and consequently, . This leads to
(18)
where we used that in the third inequality. Moreover, since solves $(\mathrm{P}_z)$ and , the equation (8) implies
(19)
Through (18), we see ; leading to in this case. If , we have . By (19), we have ∎
5.2 Proof of Theorem 2.1.
Proof.
By observing definitions of , and in steps 3-5 of Algorithm 1, we get
(20) |
This simply leads to the following:
(21)
Thus, this is a bounded-below and non-increasing sequence, implying that it is convergent. On the other hand, since is an accumulation point of , there exists a subsequence such that . The continuity of yields
By the continuity of and taking the limit of both sides of (5.2) as , we have
Further, it is clear from (21) that is non-increasing.
Now suppose for some . Then because of the last inequality in (21), we have
Since and , we see .
Further, if for some , using the third inequality in (21) we get
. Since and , we see . By the last inequality in (21), we have , so .
Recall that satisfies by definition. Hence, is a saddle point of ().
∎
5.3 Proof of Theorem 2.2.
Proof.
(i) From (13), we know that for every . Thus, is bounded and therefore, has a convergent subsequence. For our purposes, without loss of generality, we suppose that the sequence itself is convergent. Let be its accumulation point. First, let us show that . Since for and , we see , and thus in view of (4), (21), and step 13 of Algorithm 2, we have . The latter leads to
and thus when as ; implying that .
Next, let be such that and for every and . Then, since is a bounded sequence of indices, it has a convergent subsequence, which means that there exists an index subset with and a subsequence of the above convergent subsequence such that for all large ’s. Therefore, since and , we see , which further implies .
By injecting the first two lines of (22) into the third, we get
By observing that implies and letting
we get
where for each and also, and We next prove that is bounded under Robinson’s condition on . Suppose not, consider the normalized sequence
Through boundedness, there exists a convergent subsequence of whose limit is given by such that , we obtain, in view of , and the boundedness of , and passing the limits, that
(23) |
where , , and . Let us consider two cases: , and as follows. When , by the Robinson’s conditions at , there exist a vector and a constant such that , , and . Since and , we see that . Therefore, which implies that . Since and , we have , which is a contradiction. Therefore, the sequence is bounded. Now, suppose . In this case, since converges to , we have for all sufficiently large. Hence, for all large . This shows that . In view of the Robinson’s condition at given in (2.2), there exist a vector and a constant such that , , and . Using these equations (23) together with , we can see that implying that , which is again a contradiction.
5.4 Proof of Theorem 2.3.
Proof.
Note that only a few arguments given in the above proofs must be partially modified in order to ensure that all the subproblems remain well-defined and that the convergence analysis carries over to the nonconvex case. Therefore, we avoid repetition and instead briefly discuss the arguments that must be revisited.
Specifically, we must impose the following conditions on the starting penalty parameter.
(1) , which is to guarantee the existence of a strictly feasible point in the dual of the SDP relaxation of (); simply inequality (6). (2) , which is to have well-defined in (8). And,
(3) , which is for the inequality (18).
Since , (3) is satisfied if (1) holds. It is easy to show that conditions (1) and (2) are satisfied if
Recall that for two symmetric matrices $A$ and $B$, we have $\lambda_{\min}(A + B) \geq \lambda_{\min}(A) + \lambda_{\min}(B)$ and $\lambda_{\max}(A + B) \leq \lambda_{\max}(A) + \lambda_{\max}(B)$. Thus, we can show
implying that .
In the same manner
such that we have
. Therefore, conditions (1) and (2) are satisfied if
Moreover, for part (i) in Theorem 2.2, we can have
which by the new assumption on the boundedness of the level set yields
which leads to . Consequently, (i) and (ii) in Theorem 2.2 hold for this case as well.
∎