
Pruned Wasserstein Index Generation Model and wigpy Package

Fangzhou Xie
Department of Economics, New York University
Department of Economics, Rutgers University
fangzhou.xie@{nyu,rutgers}.edu
Abstract

The recently proposed Wasserstein Index Generation model (WIG) offers a new direction for generating indices automatically. In practice, however, fitting it to large datasets is challenging for two reasons. First, the Sinkhorn distance is notoriously expensive to compute and suffers severely from dimensionality. Second, the model requires a full $N\times N$ matrix to be held in memory, where $N$ is the size of the vocabulary; once the vocabulary grows past a certain size, the computation becomes impossible altogether. I therefore propose a Lasso-based shrinkage method that reduces the vocabulary dimension as a pre-processing step before fitting the WIG model. After obtaining word embeddings from a Word2Vec model, we cluster these high-dimensional vectors by $k$-means and pick the most frequent tokens within each cluster to form the "base vocabulary." Each non-base token is then regressed on the vectors of the base tokens to obtain a sparse transformation weight, so the whole vocabulary can be represented by the base tokens alone. This variant, called pruned WIG (pWIG), lets us shrink the vocabulary dimension at will while still achieving high accuracy. I also provide the wigpy package (https://github.com/mark-fangzhou-xie/wigpy) in Python, which carries out the computation for both flavors. An application to the Economic Policy Uncertainty (EPU) index is showcased as a comparison with existing methods for generating time-series sentiment indices.

Keywords: Wasserstein Index Generation Model (WIG)  \cdot Lasso Regression  \cdot Pruned Wasserstein Index Generation (pWIG)  \cdot Economic Policy Uncertainty Index (EPU).

1 Introduction

Recently, the Wasserstein Index Generation model (WIG; Xie, 2020) was proposed to generate time-series sentiment indices automatically. Several other methods (Azqueta-Gavaldón, 2017; Baker et al., 2016; Castelnuovo & Tran, 2017; Ghirelli et al., 2019) have been proposed for generating such indices but, to the best of my knowledge, WIG is the first automatic method that produces sentiment indices completely free of manual work.

The WIG model runs as follows. Given a set of documents, each associated with a timestamp, it first clusters them into several topics, shrinks each topic to a sentiment score, multiplies each document's topic weights to obtain a document-level sentiment, and finally aggregates over each time period. However, its computation on large datasets faces two challenges: (1) the Sinkhorn algorithm is notoriously expensive, and its cost quickly becomes prohibitive; (2) this Optimal Transport-based method requires a full $N\times N$ matrix, where $N$ is the size of the vocabulary, and beyond some threshold this distance matrix can no longer fit into memory. Therefore, I propose a pruned Wasserstein Index Generation model (pWIG) that reduces the vocabulary dimension before fitting the WIG model. This variant represents the whole corpus with a much smaller vocabulary and can thus be fit on any memory-limited machine for the generation of time-series indices. What is more, I also provide the wigpy package for Python (https://github.com/mark-fangzhou-xie/wigpy), which implements both versions of WIG.

This paper first contributes to the EPU literature by trying to provide a better automatic estimate of that seminal time-series index. It also relates to the emerging area of Narrative Economics (Shiller, 2017): extracting time-series sentiment indices from textual data offers a better understanding of how narratives and sentiments relate to the economy.

2 Pruned Wasserstein Index Generation Model

We first review the original WIG model.

2.1 Review of the Wasserstein Index Generation Model

A major component of the WIG model is Wasserstein Dictionary Learning (Schmitz et al., 2018). Given a set of documents $Y=\left[y_{m}\right]\in\mathbb{R}^{N\times M}$, each document $y_{m}\in\Sigma^{N}$ is associated with a timestamp, and $N$ and $M$ are the size of the vocabulary and the number of documents in the corpus, respectively. The first step is to cluster documents into topics $T=\left[t_{k}\right]\in\mathbb{R}^{N\times K}$, where $K\ll M$, with associated weights $\Lambda=\left[\lambda_{m}\right]\in\mathbb{R}^{K\times M}$. Thus, a single document $y_{m}$ can be represented as $y_{m}\approx T\lambda_{m}$. Documents and topics lie in the $N$-dimensional simplex and are word distributions. Another important quantity for computing WIG is the cost matrix $C\in\mathbb{R}^{N\times N}$ with $C_{ij}=d^{2}\left(x_{i},x_{j}\right)$, where each $x_{i}\in\mathbb{R}^{1\times D}$ is the $D$-dimensional word embedding vector of the $i$-th word in the vocabulary. In other words, the matrix $C$ measures the "cost" of moving word mass, and we can now proceed to define the Sinkhorn distance.
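A minimal sketch of how such a cost matrix could be formed from Word2Vec embeddings, assuming squared Euclidean distance; the function and variable names are illustrative and not part of the wigpy API:

```python
import numpy as np

def cost_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Squared Euclidean cost matrix C_ij = ||x_i - x_j||^2.

    `embeddings` is an (N, D) array whose i-th row is the Word2Vec
    vector of the i-th vocabulary token.
    """
    sq_norms = (embeddings ** 2).sum(axis=1)
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>
    C = sq_norms[:, None] + sq_norms[None, :] - 2.0 * embeddings @ embeddings.T
    return np.maximum(C, 0.0)  # clip tiny negative values caused by rounding
```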

Definition 1 (Sinkhorn Distance).

Given discrete distributions $\mu,\nu\in\mathbb{R}_{+}^{N}$ and the cost matrix $C$,

\begin{array}[]{c}S_{\varepsilon}(\mu,\nu;C):=\min_{\pi\in\Pi(\mu,\nu)}\langle\pi,C\rangle+\varepsilon\mathcal{H}(\pi)\\ \text{s.t. }\Pi(\mu,\nu):=\left\{\pi\in\mathbb{R}_{+}^{N\times N}:\ \pi 1_{N}=\mu,\ \pi^{T}1_{N}=\nu\right\}\end{array}

where $\mathcal{H}(\pi):=\sum_{ij}\pi_{ij}\left(\log\pi_{ij}-1\right)$ is the negative entropy and $\varepsilon$ is the Sinkhorn regularization weight.
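The quantity $S_{\varepsilon}$ can be evaluated with the standard Sinkhorn fixed-point iterations (Cuturi, 2013). A minimal sketch, assuming strictly positive histograms and a fixed number of iterations; names are illustrative and this is not necessarily the wigpy implementation:

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iter=200):
    """Entropy-regularized OT cost between histograms mu, nu (length N).

    Standard Sinkhorn iterations on the scaling vectors; returns the
    transport cost <pi, C> under the (near-)optimal plan pi.
    """
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    pi = u[:, None] * K * v[None, :]  # transport plan diag(u) K diag(v)
    return float((pi * C).sum())
```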

We could then set up the loss function and minimization problem as follows:

\min_{R,A}\sum_{m=1}^{M}\mathcal{L}\left(y_{m},\,y_{S_{\varepsilon}}\left(T(R),\lambda_{m}(A);C,\varepsilon\right)\right) \qquad (1)
\text{s.t. } t_{nk}(R):=\frac{e^{r_{nk}}}{\sum_{n^{\prime}}e^{r_{n^{\prime}k}}},\qquad \lambda_{km}(A):=\frac{e^{a_{km}}}{\sum_{k^{\prime}}e^{a_{k^{\prime}m}}}

By this formula, we minimize the divergence between the original document $y_{m}$ and the prediction (reconstruction) $y_{S_{\varepsilon}}(\cdot)$ given by the Sinkhorn distance. Moreover, the constraints of this minimization problem apply a softmax operation to each column of the matrices $R$ and $A$, so that $T$ and $\Lambda$ are (column-wise) discrete densities, as required by the Sinkhorn distance.

For computation, we first initialize the matrices $R$ and $A$ by drawing from the standard normal distribution and then apply the softmax to obtain $T$ and $\Lambda$. During training, we keep track of the computational graph and obtain the gradients $\nabla_{T}\mathcal{L}(\cdot;\varepsilon)$ and $\nabla_{\Lambda}\mathcal{L}(\cdot;\varepsilon)$ with respect to $T$ and $\Lambda$. $R$ and $A$ are then updated by the Adam optimizer (Kingma & Ba, 2015) after each batch, and automatic differentiation is handled by the PyTorch framework (Paszke et al., 2017).
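A minimal PyTorch sketch of this parameterization: $R$ and $A$ are drawn from a standard normal, column-wise softmax yields $T$ and $\Lambda$, and Adam updates $R$ and $A$. The reconstruction loss below is a placeholder (the actual WIG loss uses the Sinkhorn reconstruction), and the dimensions and batching are illustrative assumptions:

```python
import torch

# Illustrative dimensions; the real N, K, M come from the corpus.
N, K, M, batch = 1000, 4, 5000, 32
Y = torch.rand(N, M)
Y = Y / Y.sum(dim=0, keepdim=True)           # columns are word distributions

R = torch.randn(N, K, requires_grad=True)    # parameterizes topics T
A = torch.randn(K, M, requires_grad=True)    # parameterizes weights Lambda
optimizer = torch.optim.Adam([R, A], lr=1e-3)

def reconstruction_loss(y, T, lam):
    # Placeholder: the real model compares y with the Sinkhorn
    # reconstruction y_S(T, lambda_m; C, eps).
    return ((T @ lam - y) ** 2).sum()

for idx in torch.randperm(M).split(batch):   # mini-batches of document indices
    T = torch.softmax(R, dim=0)              # columns of T are densities
    Lam = torch.softmax(A, dim=0)            # columns of Lambda are densities
    loss = reconstruction_loss(Y[:, idx], T, Lam[:, idx])
    optimizer.zero_grad()
    loss.backward()                          # autograd through the softmax
    optimizer.step()
```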

After conducting WDL on the documents for clustering, the next step of WIG is to generate the time-series index from the topics. The model first reduces each topic vector $t_{k}$ to a scalar by SVD and then multiplies by the weight matrix to obtain a document-wise sentiment score for the whole corpus. The scores are then summed within each month to produce the final monthly index.
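One way to realize this reduction and aggregation (not necessarily the exact wigpy implementation) is to project each topic onto the leading left singular vector of $T$ and aggregate document scores by month; the function and variable names are assumptions:

```python
import numpy as np
import pandas as pd

def monthly_index(T, Lam, timestamps):
    """T: (N, K) topic matrix; Lam: (K, M) weights; timestamps: M document dates."""
    U, _, _ = np.linalg.svd(T, full_matrices=False)
    topic_scores = U[:, 0] @ T               # one scalar score per topic, shape (K,)
    doc_scores = topic_scores @ Lam          # document-wise sentiment, shape (M,)
    s = pd.Series(doc_scores, index=pd.to_datetime(timestamps))
    return s.resample("M").sum()             # add up scores within each month
```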

2.2 Pruned WIG (pWIG) Model

Although Optimal Transport enjoys many nice theoretical properties (Villani, 2003), its computation is known to be expensive. This burden was eased by Cuturi (2013), and the approach has attracted much attention in the machine learning community since then.

However, two aspects still hinder its application to textual analysis. First, the vocabulary easily becomes very large, and the computation of the Sinkhorn loss quickly becomes prohibitive. Moreover, past a certain point it is not even possible to fit the distance matrix $C$ into memory, especially given the limited VRAM available for GPU acceleration. (My configuration is an Nvidia 1070Ti with 8 GB of VRAM. Under single precision, each entry occupies 4 bytes, so I can theoretically store a square matrix of dimension at most 44,721.) I have a relatively small dataset from The New York Times and my vocabulary contains 9,437 tokens, but many NLP applications have far more tokens than that, in which case the WIG model becomes infeasible.
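The 44,721 bound follows from a back-of-the-envelope calculation: an 8 GB card holds $8\times 10^{9}/4=2\times 10^{9}$ single-precision entries, and the largest square matrix with that many entries has dimension

N_{\max}=\left\lfloor\sqrt{\frac{8\times 10^{9}\ \text{bytes}}{4\ \text{bytes per entry}}}\right\rfloor=\left\lfloor\sqrt{2\times 10^{9}}\right\rfloor=44{,}721.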

I therefore propose the following procedure to reduce the vocabulary dimension so that the full vocabulary matrix need not be fed into the WIG model. It first clusters all word vectors by $k$-means and then selects a subset of tokens from each cluster to form the "base tokens." The number of base tokens is arbitrary, so the compression ratio can be made as small as desired: the researcher can choose whatever number lets the model fit into the memory of her machine, regardless of how many tokens the corpus contains; this is exactly why we compress the dictionary by "pruning" less important tokens. We then use the Lasso (a similar approach using group-Lasso was proposed by Mallapragada et al. (2010) to prune visual vocabularies, though in image processing) to regress the word vectors of all other tokens on the vectors of these base tokens, which yields sparse weight vectors with zero components on unimportant features.
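A minimal sketch of the clustering and selection step, using scikit-learn's KMeans and assuming a vector of corpus token frequencies is available; the function and variable names are illustrative, not the wigpy API:

```python
from collections import defaultdict
from sklearn.cluster import KMeans

def select_base_tokens(embeddings, freqs, n_clusters, max_vocab):
    """embeddings: (N, D) word vectors; freqs: length-N token counts."""
    labels = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(embeddings)
    per_cluster = max(1, max_vocab // n_clusters)   # words kept per cluster
    clusters = defaultdict(list)
    for idx, lab in enumerate(labels):
        clusters[lab].append(idx)
    base = []
    for members in clusters.values():
        members.sort(key=lambda i: freqs[i], reverse=True)
        base.extend(members[:per_cluster])          # most frequent tokens per cluster
    return sorted(base)                             # index set of base tokens
```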

Formally, we set up the following minimization problem for the $k$-means clustering:

\operatorname{argmin}_{\mathcal{X}_{1},\mathcal{X}_{2},\cdots,\mathcal{X}_{K}}\sum_{k=1}^{K}\sum_{x\in\mathcal{X}_{k}}\left\|x-\mu_{k}\right\|^{2}

where $\mu_{k}$ is the mean of the points in cluster $\mathcal{X}_{k}$ and $k\in\{1,\cdots,K\}$. We then choose the most frequent tokens from each cluster to form a final subset of the desired length (a simple choice is words per cluster $=$ maximum vocabulary length $/$ number of clusters). By doing so, we represent the whole vocabulary by its most representative tokens. The indices of these "base tokens" are collected in the index set

\mathfrak{B}=\left\{b\in\{1,\cdots,N\}\ \middle|\ x_{b}\ \text{is selected as a base token}\right\}

Its complement $\mathfrak{B}^{C}$ is defined by excluding the base tokens from the whole vocabulary; here $N$ is the size of the vocabulary and $x_{b}$ is the $b$-th token. Denoting the word vector of a base token $b\in\mathfrak{B}$ by $v_{b}$ and that of a non-base token $o\in\mathfrak{B}^{C}$ by $v_{o}$, we have

\hat{\alpha}_{o}^{\text{lasso}}=\underset{\alpha_{o}}{\operatorname{argmin}}\left\{\frac{1}{2}\left\|v_{o}-\sum_{b\in\mathfrak{B}}\alpha_{o,b}v_{b}\right\|^{2}+\lambda\sum_{b\in\mathfrak{B}}\left|\alpha_{o,b}\right|\right\},\quad\forall o\in\mathfrak{B}^{C}

For each $o$, this yields a weight vector $\alpha_{o}=(\alpha_{o,b})_{b\in\mathfrak{B}}$ of length $B$, where $B=|\mathfrak{B}|$ is the size of the base vocabulary.
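A hedged sketch of this step with scikit-learn's Lasso, regressing each non-base word vector on the base word vectors; names and the regularization value are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_weights(embeddings, base_idx, l1_reg=0.01):
    """Return non-base indices and an (N_nonbase, B) sparse weight matrix alpha."""
    base_idx = np.asarray(base_idx)
    other_idx = np.setdiff1d(np.arange(embeddings.shape[0]), base_idx)
    X = embeddings[base_idx].T            # (D, B): columns are base-token vectors
    alphas = []
    for o in other_idx:
        reg = Lasso(alpha=l1_reg, fit_intercept=False).fit(X, embeddings[o])
        alphas.append(reg.coef_)          # length-B vector, mostly zeros
    return other_idx, np.vstack(alphas)
```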

Previously, in the WIG model, we obtained the word distribution of each document $y_{m}$ by computing its word frequencies, which gives an $N$-dimensional distribution vector. Here, in the pWIG variant, we replace the non-base tokens by their weighted combinations of base tokens and can thus represent the documents' word simplex in only $B$ dimensions.
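A sketch of how a document's $N$-dimensional frequency vector could be folded into the $B$-dimensional base vocabulary using the Lasso weights; clipping negative weights and renormalizing are assumptions made here so the result stays a valid distribution:

```python
import numpy as np

def prune_distribution(y, base_idx, other_idx, alpha):
    """y: length-N word-frequency vector; alpha: (len(other_idx), B) Lasso weights."""
    y_base = y[base_idx].astype(float)               # mass already on base tokens
    # Redistribute non-base mass onto base tokens via the (clipped) Lasso weights.
    y_base += y[other_idx] @ np.clip(alpha, 0.0, None)
    return y_base / y_base.sum()                     # renormalize to a simplex point
```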

Now that we can represent our dataset with a smaller vocabulary, we proceed to define the distance matrix $C_{ij}=d^{2}\left(x_{i},x_{j}\right)$ with $i,j\in\mathfrak{B}$. We then have everything needed for the regular WIG model and fit it using the shrinkage-transformed word distributions and this distance matrix.

3 Numerical Experiments

3.1 wigpy Package for Python

To carry out the computation of the WIG and pWIG models, I also provide the wigpy package under the MIT license. Note that the WIG model in the wigpy package is a new implementation, though part of the code is adapted from the code of the original WIG paper.

The main model is wrapped in the "WIG" class, which exposes a set of hyper-parameters to tune the model: embedding depth (emsize), batch size (batch_size), number of topics (num_topics), Sinkhorn regularization weight (reg), optimizer learning rate (lr), L2 penalty for the optimizer (wdecay), L1/Lasso weight (l1_reg), and the maximum number of tokens allowed by the pWIG algorithm (prune_topk). It also takes parameters that control the preprocessing and the Word2Vec training process.
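An illustrative usage sketch: the class name and hyper-parameter names below are those listed above, but the exact constructor and method signatures (the import path, the data arguments, and the fit/generate calls) are assumptions that should be checked against the repository README:

```python
# Illustrative only; see https://github.com/mark-fangzhou-xie/wigpy for the
# actual interface. The commented calls are hypothetical.
from wigpy import WIG   # assumed import path

model = WIG(
    emsize=50,          # embedding depth
    batch_size=64,
    num_topics=4,
    reg=0.08,           # Sinkhorn regularization weight
    lr=0.001,           # Adam learning rate
    wdecay=0.0,         # L2 penalty for the optimizer
    l1_reg=0.01,        # L1/Lasso weight for pruning
    prune_topk=5000,    # maximum base-vocabulary size for pWIG
)
# model.fit(docs, dates)    # hypothetical training call
# index = model.generate()  # hypothetical index-generation call
```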

Note that the previous implementation of the WIG model only supported a hand-written Adam optimizer, and the document weights were optimized column-wise: each document only updated its own column of the weight matrix $\Lambda$. The new implementation wraps the whole model in PyTorch, so any optimizer provided by PyTorch's optimizer classes can be used. Moreover, gradients are accumulated across documents and the whole $\Lambda$ matrix is updated together.

3.2 Application to Generating Economic Policy Uncertainty Index (EPU)

To test the pWIG model's performance, I run it on the same dataset as the WIG paper, which consists of news headlines collected from The New York Times from 1980 to 2018. Since the wigpy module provides a new implementation of WIG, I also run the original WIG model and report its results.

I run both variants of the WIG model separately by calling the wigpy package, choosing hyperparameters after splitting the data into training, evaluation, and test sets of 60%, 10%, and 30%, respectively.

For the original WIG, the hyper-parameters are chosen as follows: embedding depth $D=50$, batch size $s=32$, number of topics $K=4$, Adam learning rate $\rho=0.001$, and Sinkhorn regularization weight $\varepsilon=0.1$; for the pWIG: embedding depth $D=50$, batch size $s=64$, number of topics $K=4$, Adam learning rate $\rho=0.001$, and Sinkhorn regularization weight $\varepsilon=0.08$. I also report Pearson's and Spearman's correlation tests on four sets of automatically generated EPU indices (one LDA-based EPU (Azqueta-Gavaldón, 2017), one WIG-based EPU (Xie, 2020), and the two flavors of WIG given by the wigpy package in this paper), against the original EPU (Baker et al., 2016; https://www.policyuncertainty.com/).

EPU flavor Pearson’s Spearman’s
LDA 77.48% 75.42%
WIG 80.24% 77.49%
WIG-wigpy 80.53% 77.71%
pWIG-wigpy 80.50% 77.64%
Table 1: Pearson’s and Spearman’s correlation statistics101010Since the LDA-based EPU was only available from 1989-2016, the test is performed using time-series indices within the same range.

As Table 1 shows, all three WIG methods outperform the LDA-based method by about 3% on Pearson's test and more than 2% on Spearman's test, a fact already established in the previous WIG paper. Moreover, comparing the three WIG-related methods, the new implementation of the original WIG in the wigpy package performs better than the previous implementation. The pruning method differs little from the newly implemented WIG algorithm and even outperforms the previous implementation of the original WIG.
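The correlation statistics in Tables 1 and 2 can be reproduced from two aligned monthly series with scipy; a minimal sketch, assuming the indices are stored as pandas Series over overlapping date ranges:

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr

def correlations(generated: pd.Series, benchmark: pd.Series):
    """Pearson's and Spearman's correlations between two monthly index series."""
    joined = pd.concat([generated, benchmark], axis=1, join="inner").dropna()
    r, _ = pearsonr(joined.iloc[:, 0], joined.iloc[:, 1])
    rho, _ = spearmanr(joined.iloc[:, 0], joined.iloc[:, 1])
    return r, rho
```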

VIX Pearson’s VIX Spearman’s Michigan Pearson’s Michigan Spearman’s
WIG-wigpy 34.20% 19.56% -56.40% -49.38%
pWIG-wigpy 34.27% 19.82% -56.45% -49.62%

Table 2 reports the correlation statistics between the EPU indices generated by the WIG models and two other indices: the VIX and the Michigan Consumer Sentiment index. As reported by Baker et al. (2016), EPU has a correlation of 0.58 with the VIX and -0.742 with the Michigan index. Since our objective is to produce an index similar to EPU, but with an automatic approach, we should expect the WIG-based EPU to have a similar relationship with these two indices. This is indeed the case, as the positive and negative relationships with the VIX and Michigan indices show. A note on terminology: it may seem confusing that the "sentiment index" generated by the WIG models is negatively related to the "Michigan Consumer Sentiment index," since both names contain "sentiment." The same word is used differently in the two contexts. The Michigan index expresses consumer confidence: the higher the index, the more confident consumers are. The word "sentiment," as used by WIG, captures the subjective information expressed in the texts; in the EPU application it captures the intensity of opinions about policy uncertainty conveyed by newspaper articles, so what it captures is negative feelings, and the higher the index, the more uncertain people feel. Although both names contain the word "sentiment," the underlying quantities are strikingly different and thus negatively related. Moreover, the WIG model is not limited to EPU: applied to other textual datasets, the meaning of "sentiment" changes accordingly, so the term as used in WIG is more versatile and should be distinguished from its usage in the Michigan index.

4 Conclusion

This paper extends the Wasserstein Index Generation (WIG) model by selecting a subset of tokens to represent the whole vocabulary, thereby shrinking its dimension. The EPU showcase demonstrates that performance is retained while the dimension is reduced. Moreover, the wigpy package is provided to carry out the computation of both variants of WIG.

References

  • Azqueta-Gavaldón, A. (2017). Developing news-based Economic Policy Uncertainty index with unsupervised machine learning. Economics Letters, 158, 47-50. doi:10.1016/j.econlet.2017.06.032
  • Baker, S. R., Bloom, N., & Davis, S. J. (2016). Measuring Economic Policy Uncertainty. The Quarterly Journal of Economics, 131(4), 1593-1636. doi:10.1093/qje/qjw024
  • Castelnuovo, E., & Tran, T. D. (2017). Google It Up! A Google Trends-based uncertainty index for the United States and Australia. Economics Letters, 161, 149-153. doi:10.1016/j.econlet.2017.09.032
  • Cuturi, M. (2013). Sinkhorn distances: Lightspeed computation of optimal transport. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, & K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 26 (pp. 2292-2300). Curran Associates, Inc.
  • Ghirelli, C., Pérez, J. J., & Urtasun, A. (2019). A new economic policy uncertainty index for Spain. Economics Letters, 182, 64-67. doi:10.1016/j.econlet.2019.05.021
  • Kingma, D. P., & Ba, J. (2015). Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).
  • Mallapragada, P. K., Jin, R., & Jain, A. K. (2010). Online visual vocabulary pruning using pairwise constraints. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 3073-3080). doi:10.1109/CVPR.2010.5540062
  • Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., … Lerer, A. (2017). Automatic differentiation in PyTorch. In NIPS Autodiff Workshop (NIPS-W).
  • Schmitz, M. A., Heitz, M., Bonneel, N., Ngolè, F., Coeurjolly, D., Cuturi, M., … Starck, J.-L. (2018). Wasserstein dictionary learning: Optimal transport-based unsupervised nonlinear dictionary learning. SIAM Journal on Imaging Sciences, 11(1), 643-678. doi:10.1137/17M1140431
  • Shiller, R. J. (2017). Narrative economics. American Economic Review, 107(4), 967-1004. doi:10.1257/aer.107.4.967
  • Villani, C. (2003). Topics in Optimal Transportation (Vol. 58). American Mathematical Society. doi:10.1090/gsm/058
  • Xie, F. (2020). Wasserstein Index Generation Model: Automatic generation of time-series index with application to Economic Policy Uncertainty. Economics Letters, 186, 108874. doi:10.1016/j.econlet.2019.108874