
Unsupervised Multilingual Alignment using Wasserstein Barycenter

Xin Lian1,3    Kshitij Jain2    Jakub Truszkowski2    Pascal Poupart1,2,3    Yaoliang Yu1,3
1University of Waterloo, Waterloo, Canada
2Borealis AI, Waterloo, Canada
3Vector Institute, Toronto, Canada
{x9lian, k22jain, yaoliang.yu, ppoupart}@uwaterloo.ca, {jakub.truszkowski}@borealisai.com
Abstract

We study unsupervised multilingual alignment, the problem of finding word-to-word translations between multiple languages without using any parallel data. One popular strategy is to reduce multilingual alignment to the much simpler bilingual setting by picking one of the input languages as the pivot language that we transit through. However, it is well-known that transiting through a poorly chosen pivot language (such as English) may severely degrade the translation quality, since the assumed transitive relations among all pairs of languages may not be enforced in the training process. Instead of going through a rather arbitrarily chosen pivot language, we propose to use the Wasserstein barycenter as a more informative “mean” language: it encapsulates information from all languages and minimizes all pairwise transportation costs. We evaluate our method on standard benchmarks and demonstrate state-of-the-art performance.

1 Introduction

Many natural language processing tasks, such as part-of-speech tagging, machine translation and speech recognition, rely on learning a distributed representation of words. Recent developments in computational linguistics and neural language modeling have shown that word embeddings can capture both semantic and syntactic information. This led to the development of the zero-shot learning paradigm as a way to address the manual annotation bottleneck in domains where other vector-based representations must be associated with word labels, a fundamental step towards making natural language processing more accessible. A key input for machine translation tasks consists of embedding vectors for each word. Mikolov et al. (2013b) were the first to release a pre-trained model that provides distributed representations of words; since then, more software for training and using word embeddings has emerged.

The rise of continuous word embedding representations has revived research on the bilingual lexicon alignment problem Rapp (1995); Fung (1995), where the initial goal was to learn a small dictionary of a few hundred words by leveraging statistical similarities between two languages. Mikolov et al. (2013a) formulated bilingual word embedding alignment as a quadratic optimization problem that learns an explicit linear mapping between word embeddings, which even enables us to infer meanings of out-of-dictionary words Zhang et al. (2016); Dinu et al. (2015); Mikolov et al. (2013a). Xing et al. (2015) showed that restricting the linear mapping to be orthogonal further improves the results. These pioneering works required some parallel data to perform the alignment. Later on, Smith et al. (2017); Artetxe et al. (2017, 2018a) reduced the need for supervision by exploiting common words or digits in different languages, and more recently, unsupervised methods that rely solely on monolingual data have become quite popular Gouws et al. (2015); Zhang et al. (2017b, a); Lample et al. (2018); Artetxe et al. (2018b); Dou et al. (2018); Hoshen and Wolf (2018); Grave et al. (2019).

Encouraged by the success on bilingual alignment, the more ambitious task of simultaneously aligning multiple languages without any supervision has drawn a lot of attention recently.

A naive approach that performs all pairwise bilingual alignments separately would not work well, since it fails to exploit information from all languages, especially when some of them are low-resource. A second approach is to align all languages to a pivot language, such as English Smith et al. (2017), allowing us to exploit recent progress on bilingual alignment while still using information from all languages.

More recently, Chen and Cardie (2018); Taitelbaum et al. (2019b, a); Alaux et al. (2019); Wada et al. (2019) proposed to map all languages into the same language space and train all language pairs simultaneously. Please refer to the related work section for more details.

In this work, we first show that existing work on unsupervised multilingual alignment (such as Alaux et al. (2019)) amounts to simultaneously learning an arithmetic “mean” language from all languages and aligning all languages to this common mean language, instead of using a rather arbitrarily pre-determined input language (such as English). We then argue for using the (learned) Wasserstein barycenter as the pivot language, as opposed to the arithmetic barycenter, which, unlike the Wasserstein barycenter, fails to preserve distributional properties of word embeddings. Our approach exploits available information from all languages to enforce coherence among language spaces by enabling accurate compositions between language mappings. We conduct extensive experiments on standard publicly available benchmark datasets and demonstrate competitive performance against current state-of-the-art alternatives. The source code is available at https://github.com/alixxxin/multi-lang. (Preliminary results appeared in the first author's thesis Lian, Xin (2020).)

2 Multilingual Lexicon Alignment

In this section we set up the notations and define our main problem: the multilingual lexicon alignment problem.

We are given $m$ languages $\mathcal{L}_1,\ldots,\mathcal{L}_m$, each represented by a vocabulary $\mathcal{V}_i$ consisting of $n_i$ words. Following Mikolov et al. (2013a), we assume that a monolingual word embedding $\mathbf{X}_i=[\mathbf{x}_{i,1},\ldots,\mathbf{x}_{i,n_i}]^\top\in\mathds{R}^{n_i\times d_i}$ for each language $\mathcal{L}_i$ has been trained independently on its own data. We are interested in finding all pairwise mappings $\mathsf{T}_{i\to k}:\mathds{R}^{d_i}\to\mathds{R}^{d_k}$ that translate a word $\mathbf{x}_{i,j_i}$ in language $\mathcal{L}_i$ to a corresponding word $\mathbf{x}_{k,j_k}=\mathsf{T}_{i\to k}(\mathbf{x}_{i,j_i})$ in language $\mathcal{L}_k$. In the following, for ease of notation, we assume w.l.o.g. that $n_i\equiv n$ and $d_i\equiv d$. Note that we do not have access to any parallel data, i.e., we are in the much more challenging unsupervised learning regime.

Our work is largely inspired by that of Alaux et al. (2019), which we review below first. Along the way we point out some crucial observations that motivated our further development. Alaux et al. (2019) employ the following joint alignment approach that minimizes the total sum of mis-alignment costs between every pair of languages:

$\min_{Q_i\in\mathfrak{O}_d,\,P_{ik}\in\mathcal{P}_n}\ \sum_{i=1}^{m}\sum_{k=1,\,k\neq i}^{m}\|\mathbf{X}_i Q_i-P_{ik}\mathbf{X}_k Q_k\|^2,$   (1)

where $Q_i\in\mathfrak{O}_d$ is a $d\times d$ orthogonal matrix and $P_{ik}\in\mathcal{P}_n$ is an $n\times n$ permutation matrix (Alaux et al. (2019) also introduced weights $\alpha_{ik}>0$ to encode the relative importance of the language pair $(i,k)$). Since $Q_i$ is orthogonal, this approach ensures transitivity among word embeddings: $Q_i$ maps the $i$-th word embedding space $\mathbf{X}_i$ into a common space $\mathbf{X}$, and conversely $Q_i^{-1}=Q_i^\top$ maps $\mathbf{X}$ back to $\mathbf{X}_i$. Thus, $Q_iQ_k^\top$ maps $\mathbf{X}_i$ to $\mathbf{X}_k$, and if we transit through an intermediate word embedding space $\mathbf{X}_t$, we still have the desired transitive property $Q_iQ_t^\top\cdot Q_tQ_k^\top=Q_iQ_k^\top$.

The permutation matrix $P_{ik}$ serves as an “inferred” correspondence between words in language $\mathcal{L}_i$ and language $\mathcal{L}_k$. Naturally, we would again expect some form of transitivity in these pairwise correspondences, i.e., $P_{ik}\cdot P_{kt}\approx P_{it}$, which, however, is not enforced in (1). A simple way to fix this is to decouple $P_{ik}$ into the product $P_i^\top P_k$, in the same way as we dealt with $Q_i$. This leads to the following variant:

$\underset{Q_i\in\mathfrak{O}_d,\,P_i\in\mathcal{P}_n}{\operatorname{argmin}}\ \sum_{i=1}^{m}\sum_{k=1}^{m}\|P_i\mathbf{X}_iQ_i-P_k\mathbf{X}_kQ_k\|^2$   (2)
$=\ \underset{Q_i\in\mathfrak{O}_d,\,P_i\in\mathcal{P}_n}{\operatorname{argmin}}\ \sum_{i=1}^{m}\|P_i\mathbf{X}_iQ_i-\tfrac{1}{m}\sum_{k=1}^{m}P_k\mathbf{X}_kQ_k\|^2$   (3)
$=\ \underset{Q_i\in\mathfrak{O}_d,\,P_i\in\mathcal{P}_n}{\operatorname{argmin}}\ \min_{\bar{\mathbf{X}}\in\mathds{R}^{n\times d}}\sum_{i=1}^{m}\|P_i\mathbf{X}_iQ_i-\bar{\mathbf{X}}\|^2$   (4)

where Eq. (3) follows from the definition of variance and $\bar{\mathbf{X}}$ in Eq. (4) admits the closed-form solution:

$\bar{\mathbf{X}}=\frac{1}{m}\sum_{k=1}^{m}P_k\mathbf{X}_kQ_k.$   (5)

Thus, had we known the arithmetic “mean” language $\bar{\mathbf{X}}$ beforehand, the joint alignment approach of Alaux et al. (2019) would reduce to a separate alignment of each language $\mathbf{X}_i$ to the “mean” language $\bar{\mathbf{X}}$ that serves as the pivot. An efficient optimization strategy would then consist of alternating between the separate alignments (i.e., computing $Q_i$ and $P_i$) and computing the pivot language (i.e., (5)).
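To make this alternating strategy concrete, the following sketch alternates between the closed-form mean (5), a permutation update via the Hungarian algorithm, and an orthogonal Procrustes update via SVD. It is only an illustrative toy implementation under our own assumptions (all function and variable names are ours, and the exact Hungarian step is far too slow for realistic vocabulary sizes):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_to_arithmetic_mean(X_list, n_iter=10):
    """Toy sketch of the joint alignment (2)-(5): alternate between updating
    each (Q_i, P_i) and recomputing the arithmetic "mean" language X_bar."""
    m = len(X_list)
    n, d = X_list[0].shape
    Q = [np.eye(d) for _ in range(m)]      # orthogonal maps
    P = [np.eye(n) for _ in range(m)]      # permutations
    for _ in range(n_iter):
        # (5): the pivot is the arithmetic mean of the aligned embeddings
        X_bar = sum(P[i] @ X_list[i] @ Q[i] for i in range(m)) / m
        for i in range(m):
            # permutation update: match rows of X_i Q_i to rows of X_bar
            cost = -(X_list[i] @ Q[i]) @ X_bar.T   # maximize inner products
            rows, cols = linear_sum_assignment(cost)
            P[i] = np.zeros((n, n))
            P[i][cols, rows] = 1.0                 # so that P_i X_i Q_i ~ X_bar
            # orthogonal Procrustes update (Schonemann, 1966)
            U, _, Vt = np.linalg.svd(X_list[i].T @ P[i].T @ X_bar)
            Q[i] = U @ Vt
    return Q, P
```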

We now point out two problems in the above formulation. First, a permutation assignment is a 1-1 correspondence that completely ignores polysemy in natural languages, that is, a word in language $\mathcal{L}_i$ can correspond to multiple words in language $\mathcal{L}_k$. To address this, we propose to relax the permutation $P_i$ into a coupling matrix that allows splitting a word into different words. Second, the pivot language in (5), being a simple arithmetic average, may be statistically very different from any of the $m$ given languages, see Figure 1 and below. Besides, intuitively it is perhaps more reasonable to allow the pivot language to have a larger dictionary so that it can capture all linguistic regularities in all $m$ languages. To address this, we propose to use the Wasserstein barycenter as the pivot language.

The advantage of using the Wasserstein barycenter instead of the arithmetic average is that the Wasserstein metric gives a natural geometry for probability measures supported on a geometric space. In Figure 1, we demonstrate the difference between the Wasserstein barycenter and the arithmetic average of two input distributions.

It is intuitively clear that the Wasserstein barycenter preserves the geometry of the input distributions.

Figure 1: Comparing the Wasserstein barycenter and arithmetic mean (bottom panel) for two input distributions (top panel).

3 Our Approach

We take a probabilistic approach, treating each language $\mathcal{L}_i$ as a probability distribution over its word embeddings:

$\pi_i=\sum_{j=1}^{n}p_{ij}\,\delta_{\mathbf{x}^i_j}$   (6)

where $p_{ij}$ is the probability of occurrence of the $j$-th word $\mathbf{x}^i_j$ in language $\mathcal{L}_i$ (often approximated by the relative frequency of the word $\mathbf{x}^i_j$ in its training documents), and $\delta_{\mathbf{x}^i_j}$ is the unit mass at $\mathbf{x}^i_j$. We project word embeddings into a common space through the orthogonal matrices $Q_i\in\mathfrak{O}_d$. Taking a word $\mathbf{x}^i$ from each language $\mathcal{L}_i$, we associate a cost $c(Q_1\mathbf{x}^1,\ldots,Q_m\mathbf{x}^m)\in\mathds{R}_+$ for bundling these words in our joint translation. To allow polysemy, we find a joint distribution $\pi$ with fixed marginals $\pi_i$ so that the average cost

$\int c(Q_1\mathbf{x}^1,\ldots,Q_m\mathbf{x}^m)\,\mathrm{d}\pi(\mathbf{x}^1,\ldots,\mathbf{x}^m)$   (7)

is minimized. If we fix $Q_i$, then the above problem is known as multi-marginal optimal transport Gangbo and Święch (1998).

To simplify the computation, we take the pairwise approach of Alaux et al. (2019), where we set the joint cost $c$ as the total sum of all pairwise costs:

$c(\mathbf{x}^1,\ldots,\mathbf{x}^m)=\sum_{i,k}\|\mathbf{x}^i-\mathbf{x}^k\|^2.$   (8)

Interestingly, with this choice, we can significantly simplify the numerical computation of the multi-marginal optimal transport.

We recall the definition of the Wasserstein barycenter $\nu$ of $m$ given probability distributions $\pi_1,\ldots,\pi_m$:

$\nu=\operatorname*{argmin}_{\mu}\ \sum_{i=1}^{m}\lambda_i\,\mathsf{W}_2^2(\pi_i,\mu),$   (9)

where $\lambda_i\geq 0$ are the weights, and the (squared) Wasserstein distance $\mathsf{W}_2^2$ is given as:

$\mathsf{W}_2^2(\pi_i,\mu)=\min_{\Pi_i\in\Gamma(\pi_i,\mu)}\int\|\mathbf{x}-\mathbf{y}\|^2\,\mathrm{d}\Pi_i(\mathbf{x},\mathbf{y}).$   (10)

The notation $\Gamma(\pi_i,\mu)$ denotes all joint probability distributions (i.e., couplings) $\Pi_i$ with (fixed) marginal distributions $\pi_i$ and $\mu$. As proven by Agueh and Carlier (2011), with the pairwise cost (8), the multi-marginal problem in (7) and the barycenter problem in (9) are formally equivalent. Hence, from now on we focus on the latter, since efficient computational algorithms for it exist. We use the push-forward notation $(Q_i)_{\#}\pi_i$ to denote the distribution of $Q_i\mathbf{x}^i$ when $\mathbf{x}^i$ follows the distribution $\pi_i$. Thus, we can write our approach succinctly as:

$\min_{\mu}\,\min_{Q_i\in\mathfrak{O}_d}\ \sum_{i=1}^{m}\lambda_i\,\mathsf{W}_2^2[(Q_i)_{\#}\pi_i,\mu],$   (11)

where the barycenter $\mu$ serves as the pivot language in some common word embedding space. Unlike the arithmetic average in (5), the Wasserstein barycenter can have a much larger support (dictionary size) than the $m$ given language distributions.
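As a toy illustration of the quantities in (9)-(11), the following sketch computes the squared Wasserstein distance between two small discrete measures and a free-support barycenter with the POT library. It is only a minimal example under our own assumptions (random 2-d point clouds, uniform weights); the actual experiments operate on full word-embedding distributions and make their own solver choices.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
X1 = rng.normal(size=(50, 2))                  # "word embeddings" of language 1
X2 = rng.normal(loc=2.0, size=(50, 2))         # "word embeddings" of language 2
a = np.full(50, 1 / 50)                        # word frequencies p_{1j}
b = np.full(50, 1 / 50)                        # word frequencies p_{2j}

# Squared Wasserstein distance W_2^2 as in (10): exact OT with squared costs
M = ot.dist(X1, X2)                            # squared Euclidean cost matrix
w22 = ot.emd2(a, b, M)

# Free-support Wasserstein barycenter as in (9), with s support locations
s = 80
Y_init = rng.normal(size=(s, 2))
Y = ot.lp.free_support_barycenter([X1, X2], [a, b], Y_init,
                                  weights=np.array([0.5, 0.5]))
print(w22, Y.shape)                            # the barycenter support can exceed 50 points
```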

We can again apply the alternating minimization strategy to solve (11): fixing all orthogonal matrices $Q_i$, we find the Wasserstein barycenter using an existing algorithm of Cuturi and Doucet (2014) or Claici et al. (2018); fixing the Wasserstein barycenter $\mu$, we solve for each orthogonal matrix $Q_i$ separately:

$\min_{Q_i\in\mathfrak{O}_d}\,\min_{\Pi_i\in\Gamma(\pi_i,\mu)}\int\|Q_i\mathbf{x}-\mathbf{y}\|^2\,\mathrm{d}\Pi_i(\mathbf{x},\mathbf{y}).$   (12)

For a fixed coupling $\Pi_i\in\mathds{R}^{n\times s}$, where $s$ is the dictionary size of the barycenter $\mu$, the integral simplifies (up to terms independent of $Q_i$) to:

$\sum_{jl}(\Pi_i)_{jl}\|Q_i\mathbf{x}^i_j-\mathbf{y}_l\|^2\ \equiv\ -\langle\mathbf{X}_i^\top\Pi_i\mathbf{Y},\,Q_i\rangle.$   (13)

Thus, using the well-known theorem of Schönemann (1966), $Q_i$ is given by the closed-form solution $U_iV_i^\top$, where $U_i\Sigma_iV_i^\top=\mathbf{X}_i^\top\Pi_i\mathbf{Y}$ is a singular value decomposition. Our approach is presented in Algorithm 1.

Input: Language distributions $L_i=(\mathbf{X}_i,p_i)$, $i=1,\ldots,m$; barycenter weights $p$
Output: Couplings and rotations $(\Pi_1,\ldots,\Pi_m,Q_1,\ldots,Q_m)$, from which translations between any pair of languages can be inferred
for $i=1,\ldots,m$ do
       $\mathbf{X}_i\leftarrow\mathbf{X}_i-\mathsf{mean}(\mathbf{X}_i)$
       $C_i\leftarrow\mathrm{cosine\_dist}(\mathbf{X}_{i,j},\mathbf{X}_{i,k})$ for all $j,k$
       $\Pi_i\leftarrow\mathsf{GW}(C_i,C_1,p_i,p_1)$
       $U\Sigma V^\top\leftarrow\mathsf{SVD}(\mathbf{X}_i^\top\Pi_i\mathbf{X}_1)$
       $Q_i\leftarrow UV^\top$
       $\mathbf{X}_i\leftarrow\mathbf{X}_iQ_i$
while not converged do
       $\nu\leftarrow\mathsf{WB}(\pi_1,\ldots,\pi_m;\lambda_1,\ldots,\lambda_m)$
       for $i=1,\ldots,m$ do
              $\Pi_i\leftarrow\mathsf{OT}(\pi_i,\nu)$
              $U\Sigma V^\top\leftarrow\mathsf{SVD}(\mathbf{X}_i^\top\Pi_i\mathbf{Y})$
              $Q_i\leftarrow UV^\top$
              $\mathbf{X}_i\leftarrow\mathbf{X}_iQ_i$
return $(\Pi_1,\ldots,\Pi_m,Q_1,\ldots,Q_m)$
Algorithm 1: Barycenter Alignment
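A compact sketch of Algorithm 1 in Python is given below. It relies on the POT library for the Gromov-Wasserstein initialization, the free-support barycenter, and the transport plans; all function and variable names, as well as the specific solver choices, are our own assumptions rather than the authors' released implementation (see the GitHub link above).

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def barycenter_alignment(X_list, p_list, n_support=None, n_outer=5, gw_eps=5e-5):
    """Sketch of Algorithm 1 (Barycenter Alignment). X_list[i] is the n_i x d
    embedding matrix of language i and p_list[i] its word-frequency vector
    (assumed to sum to one)."""
    m, d = len(X_list), X_list[0].shape[1]
    X = [Xi - Xi.mean(axis=0) for Xi in X_list]           # center embeddings
    Q = [np.eye(d) for _ in range(m)]

    # Initialization: entropic Gromov-Wasserstein alignment to language 1
    C = [ot.dist(Xi, Xi, metric='cosine') for Xi in X]    # intra-language costs
    for i in range(1, m):
        Pi = ot.gromov.entropic_gromov_wasserstein(
            C[i], C[0], p_list[i], p_list[0], loss_fun='square_loss', epsilon=gw_eps)
        U, _, Vt = np.linalg.svd(X[i].T @ Pi @ X[0])      # orthogonal Procrustes
        Q[i] = U @ Vt
        X[i] = X[i] @ Q[i]

    # Alternating minimization of (11)
    if n_support is None:
        n_support = 2 * int(np.mean([Xi.shape[0] for Xi in X]))
    Y = np.random.default_rng(0).normal(size=(n_support, d))   # barycenter support
    b = np.full(n_support, 1.0 / n_support)
    couplings = [None] * m
    for _ in range(n_outer):
        # WB step: Wasserstein barycenter of the currently aligned languages
        Y = ot.lp.free_support_barycenter(X, p_list, Y, b)
        for i in range(m):
            # OT step: transport plan from language i to the barycenter
            M = ot.dist(X[i], Y)                          # squared Euclidean costs
            couplings[i] = ot.emd(p_list[i], b, M)
            # SVD step: closed-form orthogonal update from (13)
            U, _, Vt = np.linalg.svd(X[i].T @ couplings[i] @ Y)
            R = U @ Vt
            Q[i] = Q[i] @ R
            X[i] = X[i] @ R
    return couplings, Q
```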

4 Experiments

We evaluate our algorithm on two standard publicly available datasets: MUSE Lample et al. (2018) and XLING Glavas et al. (2019). The MUSE benchmark is a high-quality dictionary containing up to 100k pairs of words and has now become a standard benchmark for cross-lingual alignment tasks Lample et al. (2018). On this dataset, we conducted an experiment with 6 European languages: English, French, Spanish, Italian, Portuguese, and German. The MUSE dataset contains a direct translation for any pair of languages in this set. We also conducted an experiment with the XLING dataset with a more diverse set of languages: Croatian (HR), English (EN), Finnish (FI), French (FR), German (DE), Italian (IT), Russian (RU), and Turkish (TR). In this set of languages, we have languages coming from three different Indo-European branches, as well as two non-Indo-European languages (FI from Uralic and TR from the Turkic family) Glavas et al. (2019).

4.1 Implementation Details

To speed up the computation, we took a similar approach to Alaux et al. (2019) and initialized the space alignment matrices with the Gromov-Wasserstein approach Alvarez-Melis and Jaakkola (2018) applied to the first 5k vectors (Alaux et al. (2019) used the first 2k vectors) and with regularization parameter $\epsilon=5\times 10^{-5}$. After the initialization, we use the space alignment matrices to map all languages into the language space of the first language: multiplying each language's embedding vectors with the corresponding space alignment matrix realigns all languages into a common language space. In the common space, we compute the Wasserstein barycenter of all projected language distributions. The support locations for the barycenter are initialized with random samples from a standard normal distribution.

The next step is to compute the optimal transport plans from the barycenter distribution to all language distributions. After obtaining the optimal transport plan $T_i$ from the barycenter to every language $\mathcal{L}_i$, we can infer translations from $\mathcal{L}_i$ to $\mathcal{L}_j$ from the coupling $T_iT_j^\top$. This coupling is not necessarily a permutation matrix; instead, it indicates the probability with which a word in one language corresponds to a word in the other. The method and code for computing accuracies of bilingual translation pairs are borrowed from Alvarez-Melis and Jaakkola (2018).
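The composition step can be sketched as follows, assuming $T_i$ and $T_j$ are the couplings between each language and the barycenter, stored with languages on the rows (shapes $n_i\times s$ and $n_j\times s$); the function name is ours:

```python
import numpy as np

def translate_via_barycenter(T_i, T_j, topk=10):
    """Score[a, b] is proportional to the probability that word a of language i
    corresponds to word b of language j, obtained by transiting through the
    barycenter. Returns the top-k candidate translations per source word."""
    scores = T_i @ T_j.T                  # (n_i x s) @ (s x n_j) -> (n_i x n_j)
    return np.argsort(-scores, axis=1)[:, :topk]
```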

4.2 Baselines

We compare the results of our method on MUSE with the following methods: 1) Procrustes Matching with CSLS as the similarity function to infer translation pairs Lample et al. (2018); 2) the state-of-the-art bilingual alignment method, Gromov-Wasserstein alignment (GW) Alvarez-Melis and Jaakkola (2018); 3) the state-of-the-art multilingual alignment method (UMH) Alaux et al. (2019); 4) bilingual alignment with multilingual auxiliary information (MPPA) Taitelbaum et al. (2019b); and 5) unsupervised multilingual word embeddings trained with multilingual adversarial training Chen and Cardie (2018).

We compare the results of our method on the XLING dataset with Ranking-Based Optimization (RCSLS) Joulin et al. (2018), the solution to the Procrustes problem (PROC) Artetxe et al. (2018b); Lample et al. (2018); Glavas et al. (2019), Gromov-Wasserstein alignment (GW) Alvarez-Melis and Jaakkola (2018), and VECMAP Artetxe et al. (2018b). RCSLS and PROC are supervised methods, while GW and VECMAP are both unsupervised methods.

The translation accuracies for Gromov-Wasserstein are computed using the source code released by Alvarez-Melis and Jaakkola (2018). For the multilingual alignment method (UMH) Alaux et al. (2019) and the two multilingual methods of Chen and Cardie (2018) and Taitelbaum et al. (2019b), we directly compare our accuracies to the numbers reported in Glavas et al. (2019).

4.3 Results

Table 2 depicts precision@1 results for all bilingual tasks on the MUSE benchmark Lample et al. (2018). For most language pairs, our method, Barycenter Alignment (BA), outperforms all current unsupervised methods. Our barycenter approach infers a “potential universal language” from the input languages; transiting through that universal language, we infer translations for all pairs of languages. From the experimental results in Table 2, we can see that our approach is clearly at an advantage and benefits from using information from all languages. Our method achieves statistically significant improvement for 22 out of 30 language pairs ($p\leq 0.05$, McNemar's test, one-sided).

Table 3 shows mean average precision (MAP) for 10 bilingual tasks on the XLING dataset Glavas et al. (2019).

In Table 1, we show several German-to-English translations and compare the results to Gromov-Wasserstein direct bilingual alignment. Our method is capable of incorporating both the semantic and syntactic information of a word. For example, the top ten predicted English translations for the German word München are “Cambridge, Oxford, Munich, London, Birmingham, Bristol, Edinburgh, Dublin, Hampshire, Baltimore”. In this case, we hit the English translation Munich. More importantly, all predicted English words are names of cities, so our method is capable of inferring that München is a city name. Another example is the German word sollte, which means “should” in English. The top five words predicted for sollte are syntactically correct: would, could, will, should and might are all modal verbs. The last three examples show polysemous words, and in all these cases our method performs better than Gromov-Wasserstein. For the German word erschienen, our algorithm predicts all three words released, appeared, and published in the top ten translations, whereas Gromov-Wasserstein only predicts published.

4.4 Ablation Study

In this section, we show the impact of some of our design choices and hyperparameters. One of the parameters is the number of support locations. In theory, the optimal barycenter distribution could have as many support locations as the sum of the numbers of support locations of all input distributions. In Figure 2, we show the impact on translation performance of varying the number of support locations. Let $n_j$ be the number of words in language $L_j$. We picked the three most representative cases: the average number of words $\sum_{j=1}^{m}n_j/m$, twice the average number of words $2\sum_{j=1}^{m}n_j/m$, and the total number of words $\sum_{j=1}^{m}n_j$. As we increase the number of support locations for the barycenter distribution, we can see in Figure 2 that the translation performance improves. However, increasing the number of support locations also makes the algorithm more costly. Therefore, to balance accuracy and computational complexity, we decided to use 10000 support locations (twice the average number of words).

We also conducted a set of experiments to determine whether the inclusion of distant languages increases bilingual translation accuracy. Excluding the two non-Indo-European languages Finnish and Turkish, we calculated the barycenter of Croatian (HR), English (EN), French (FR), German (DE), and Italian (IT). Figure 3 contains results for common bilingual pairs. The red bars show the bilingual translation accuracy when translating through the barycenter of all languages including Finnish and Turkish, whereas the blue bars indicate the accuracy of translations that use the barycenter of the five Indo-European languages.

Figure 2: Accuracies for language pairs using different numbers of support locations for the barycenter. In our experimental setup, we have 5000 words in each language.
Figure 3: Accuracy of bilingual translation pairs. The red bars indicate translation accuracy using the barycenter of all languages (HR, EN, FI, FR, DE, RU, IT, TR), while the blue bars correspond to the barycenter of (HR, EN, FR, DE, IT, RU).
German | English | GW Prediction | BA Prediction
München | Munich | London, Dublin, Oxford, Birmingham, Wellington, Glasgow, Edinburgh, Cambridge, Toronto, Hamilton | Cambridge, Oxford, Munich, London, Birmingham, Bristol, Edinburgh, Dublin, Hampshire, Baltimore
sollte | should | would, could, might, will, needs, supposed, put, willing, wanted, meant | would, could, will, supposed, might, meant, needs, expected, able, should
erschienen | released, appeared, published | published, editions, publication, edition, printed, volumes, compilation, publications, releases, titled | published, editions, volumes, publication, released, titled, printed, appeared, edition, compilation
aufgenommen | admitted, recorded, taken, included | recorded, taken, recording, selected, roll, placed, performing, eligible, motion, assessed | recorded, taken, recording, admitted, selected, sample, included, track, featured, mixed
viel | lots, lot, much | much, lot, little, more, less, bit, too, plenty, than, better | much, lot, little, less, too, more, than, bit, far, lots
Table 1: German-to-English translation predictions comparing 1) GW alignment used to infer a direct bilingual mapping and 2) the Barycenter Alignment method described in Algorithm 1. We show the top-10 translations of both methods. The last three examples show polysemous words and their corresponding translations.
it-es it-fr it-pt it-en it-de es-it es-fr es-pt es-en es-de
GW 92.63 91.78 89.47 80.38 74.03 89.35 91.78 92.82 81.52 75.03
GWo - - - 75.2 - - - - 80.4 -
PA 87.3 87.1 81.0 76.9 67.5 83.5 85.8 87.3 82.9 68.3
MAT+MPPA 87.5 87.7 81.2 77.7 67.1 83.7 85.9 86.8 83.5 66.5
MAT+MPSR 88.2 88.1 82.3 77.4 69.5 84.5 86.9 87.8 83.7 69.0
UMH 87.0 86.7 80.4 79.9 67.5 83.3 85.1 86.3 85.3 68.7
BA 92.32 92.54 90.14 81.84 75.65 89.38 92.19 92.85 83.5 78.25
fr-it fr-es fr-pt fr-en fr-de pt-it pt-es pt-fr pt-en pt-de
GW 88.0 90.3 87.44 82.2 74.18 90.62 96.19 89.9 81.14 74.83
GWo - - - 82.1 - - - - - -
PA 83.2 82.6 78.1 82.4 69.5 81.1 91.5 84.3 80.3 63.7
MAT+MPPA 83.1 83.6 78.7 82.2 69.0 82.6 92.2 84.6 80.2 63.7
MAT+MPSR 83.5 83.9 79.3 81.8 71.2 82.6 92.7 86.3 79.9 65.7
UMH 82.5 82.7 77.5 83.1 69.8 81.1 91.7 83.6 82.1 64.4
BA 88.38 90.77 88.22 83.23 76.63 91.08 96.04 91.04 82.91 76.99
en-it en-es en-fr en-pt en-de de-it de-es de-fr de-pt de-en Average
GW 80.84 82.35 81.67 83.03 71.73 75.41 72.18 77.14 74.38 72.85 82.84
GWo 78.9 81.7 81.3 - 71.9 - - - - 72.8 78.04
PA 77.3 81.4 81.1 79.9 73.5 69.5 67.7 73.3 59.1 72.4 77.98
MAT+MPPA 78.5 82.2 82.7 81.3 74.5 70.1 68.0 75.2 61.1 72.9 78.47
MAT+MPSR 78.8 82.5 82.4 81.5 74.8 72.0 69.6 76.7 63.2 72.9 79.29
UMH 78.9 82.5 82.7 82.0 75.1 68.7 67.2 73.5 59.0 75.5 78.46
BA 81.45 84.26 82.94 84.65 74.08 78.09 75.93 78.93 77.18 75.85 84.24

Table 2: Precision@1 (percentage) for all language pairs in the multilingual alignment problem over English, German, French, Spanish, Italian, and Portuguese. The method achieving the highest precision for each bilingual pair is highlighted in bold. Methods we compare to in the table are: Procrustes Matching with the CSLS metric to infer translation pairs (PA) Lample et al. (2018); Gromov-Wasserstein alignment (GW) Alvarez-Melis and Jaakkola (2018), reproduced by us using their source code; GWo, the results reported by Alvarez-Melis and Jaakkola (2018) in their paper; bilingual alignment with multilingual auxiliary information (MPPA) Taitelbaum et al. (2019b); the multilingual pseudo-supervised refinement method Chen and Cardie (2018); and the multilingual alignment method (UMH) Alaux et al. (2019). Asterisks denote significant differences between BA and GW (McNemar's test, one-sided), the only methods for which predictions were available.
en-de it-fr hr-ru en-hr de-fi tr-fr ru-it fi-hr tr-hr tr-ru
PROC (1k) 0.458 0.615 0.269 0.225 0.264 0.215 0.360 0.187 0.148 0.168
PROC (5k) 0.544 0.669 0.372 0.336 0.359 0.338 0.474 0.294 0.259 0.290
PROC-B 0.521 0.665 0.348 0.296 0.354 0.305 0.466 0.263 0.210 0.230
RCSLS (1k) 0.501 0.637 0.291 0.267 0.288 0.247 0.383 0.214 0.170 0.191
RCSLS (5k) 0.580 0.682 0.404 0.375 0.395 0.375 0.491 0.321 0.285 0.324
VECMAP 0.521 0.667 0.376 0.268 0.302 0.341 0.463 0.280 0.223 0.200
GW 0.667 0.751 0.683 0.123 0.454 0.485 0.508 0.634 0.482 0.295
BA 0.683 0.799 0.667 0.646 0.508 0.513 0.512 0.601 0.481 0.355
Table 3: Mean average precision (MAP) of several current methods on the XLING dataset.

5 Related Work

We briefly describe related work on supervised and unsupervised techniques for bilingual and multilingual alignment.

5.1 Supervised Bilingual Alignment

Mikolov et al. (2013a) formulated the problem of aligning word embeddings as a quadratic optimization problem to find an explicit linear map $Q$ between the word embeddings $\mathbf{X}_1$ and $\mathbf{X}_2$ of two languages:

$\min_{Q}~\|\mathbf{X}_1Q-P\mathbf{X}_2\|_2^2$   (14)

This setting is supervised since the assignment matrix $P$ that maps words of one language to another is known. Later, Xing et al. (2015) showed that the results can be improved by restricting the linear mapping $Q$ to be orthogonal. This corresponds to the Orthogonal Procrustes problem Schönemann (1966).

5.2 Unsupervised Bilingual Alignment

In the unsupervised setting, the assignment matrix $P$ between words is unknown, and we resort to the joint optimization:

$\min_{Q}\min_{P}~\|\mathbf{X}_1Q-P\mathbf{X}_2\|_2^2.$   (15)

As a result, the optimization problem becomes non-convex and therefore more challenging. The problem can be relaxed into a (convex) semidefinite program; this provides high accuracy at the expense of high computational complexity and is therefore not suitable for large-scale problems. Another way to solve (15) is block coordinate relaxation, where we iteratively optimize each variable with the other variables fixed. When $Q$ is fixed, optimizing $P$ can be done with the Hungarian algorithm in $O(n^3)$ time, which is prohibitive since $n$ is the number of words. Cuturi and Doucet (2014) developed an efficient approximation (complexity $O(n^2)$) by adding a negative entropy regularizer. Observing that both $P$ and $Q$ preserve the intra-language distances, Alvarez-Melis and Jaakkola (2018) cast the unsupervised bilingual alignment problem as a Gromov-Wasserstein optimal transport problem and gave a solution with few hyper-parameters to tune.
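To illustrate the computational trade-off mentioned above, the sketch below contrasts the exact Hungarian assignment with an entropy-regularized (Sinkhorn-style) approximation for a given cost matrix; this is our own illustrative code, not the implementation of any of the cited methods.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_plan(M):
    """Exact assignment for fixed Q: O(n^3), prohibitive for large vocabularies."""
    rows, cols = linear_sum_assignment(M)
    P = np.zeros_like(M)
    P[rows, cols] = 1.0
    return P

def sinkhorn_plan(M, reg=0.05, n_iter=200):
    """Entropy-regularized approximation: only matrix-vector products per iteration."""
    n, k = M.shape
    a, b = np.full(n, 1.0 / n), np.full(k, 1.0 / k)
    K = np.exp(-M / reg)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]    # a soft coupling, not a permutation
```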

5.3 Multilingual Alignment

In multilingual alignment, we seek to align multiple languages together while taking advantage of inter-dependencies to ensure consistency among them. A common approach consists of mapping each language to a common space $X_0$ by minimizing some loss function $l$:

$\min_{Q_i\in\mathfrak{O}_d,\,P_i\in\mathcal{P}_n}\sum_{i}l(X_iQ_i,P_iX_0)$   (16)

The common space may be a pivot language such as English Smith et al. (2017); Lample et al. (2018); Joulin et al. (2018). Nakashole and Flauger (2017) and Alaux et al. (2019) showed that constraining coherent word alignments between triplets of nearby languages improves the quality of induced bilingual lexicons. Chen and Cardie (2018) extended the work of Lample et al. (2018) to the multilingual case using adversarial algorithms. Taitelbaum et al. extended Procrustes Matching to the multi-pairwise case Taitelbaum et al. (2019b), and also designed an improved representation of the source word using auxiliary languages Taitelbaum et al. (2019a).

6 Conclusion

In this paper, we discussed previous attempts to solve the multilingual alignment problem, compared the similarities between these approaches, and pointed out a problem with existing formulations. We then proposed a new method that uses the Wasserstein barycenter as a pivot for the multilingual alignment problem. At the core of our algorithm lies a new inference method based on an optimal transport plan to predict the similarity between words. Our barycenter can be interpreted as a virtual universal language, capturing information from all languages. The algorithm we proposed improves the accuracy of pairwise translations compared to the current state-of-the-art methods.

Acknowledgments

We thank the reviewers for their critical comments and we are grateful for funding support from NSERC and Mitacs.

References

7 Appendix

7.1 Barycenter Convergence

Each iteration of our barycenter algorithm optimizes the barycenter weights and then the support locations. In this section, we investigate the speed of convergence of our approach. In Figure 4, we plot the translation accuracy for all language pairs as a function of the number of iterations. As we can see, the accuracy stabilizes after roughly 5 iterations.

Figure 4: Translation accuracies for language pairs as a function of the number of iterations. The barycenter stabilizes after the 5-th iteration.

7.2 Hierarchical Approach

Training a joint barycenter for all languages captures shared information across all languages, but we hypothesized that distant languages might impair performance for some language pairs. To leverage existing knowledge of similarities between languages, we constructed a language tree whose topology is consistent with the widely agreed phylogeny of Indo-European languages (see e.g. Gray and Atkinson (2003)). Each non-leaf node is set to be the barycenter of all its children. We traverse the language tree in depth-first order and store the mappings corresponding to each edge. The translation between any two languages can then be inferred by traversing the tree and composing the mappings corresponding to each edge, as sketched below.
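A minimal sketch of this traversal is given below, under our own assumption that each tree edge stores the coupling between a child node and its parent barycenter (all names are hypothetical):

```python
import numpy as np

def translate_through_tree(up_edges, down_edges):
    """Compose couplings along the tree path between two leaf languages.
    up_edges:   couplings from the source leaf up to the lowest common ancestor,
                each of shape (n_child, n_parent);
    down_edges: couplings on the path from that ancestor down to the target leaf,
                listed top-down and stored child-to-parent, hence applied transposed."""
    plan = up_edges[0]
    for Pi in up_edges[1:]:
        plan = plan @ Pi                 # move one level up towards the ancestor
    for Pi in down_edges:
        plan = plan @ Pi.T               # move one level down towards the target
    return plan                          # (n_src, n_tgt) soft correspondence
```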

Table 4 shows the results for the hierarchical barycenter. We see that the hierarchical approach yields slightly better performance for some language pairs, particularly for closely related languages such as Spanish and Portuguese or Italian and Spanish. For most language pairs, however, it does not improve over the weighted barycenter. More details about the hierarchical approach are available in the first author's thesis Lian, Xin (2020).

GW benchmark unweighted hierarchical weighted
P@1 P@10 P@1 P@10 P@1 P@10 P@1 P@10
it-es 92.63 98.05 91.52 97.95 92.49 98.11 92.32 98.01
it-fr 91.78 98.11 91.27 97.89 92.61 98.14 92.54 98.14
it-pt 89.47 97.35 88.22 97.25 89.89 97.87 90.14 97.84
it-en 80.38 93.3 79.23 93.18 79.54 93.21 81.84 93.77
it-de 74.03 93.66 74.41 92.96 73.06 92.26 75.65 93.82
es-it 89.35 97.3 88.8 97.05 89.73 97.5 89.38 97.43
es-fr 91.78 98.21 91.34 98.03 91.74 98.29 92.19 98.33
es-pt 92.82 98.32 91.83 98.18 92.65 98.35 92.85 98.35
es-en 81.52 94.79 82.43 94.63 81.63 94.27 83.5 95.48
es-de 75.03 93.98 76.47 93.73 74.86 93.73 78.25 94.74
fr-it 88.0 97.5 87.55 97.19 88.35 97.64 88.38 97.71
fr-es 90.3 97.97 90.18 97.68 90.66 98.04 90.77 98.04
fr-pt 87.44 96.89 86.7 96.79 88.35 97.11 88.22 97.08
fr-en 82.2 94.19 81.26 94.25 80.89 94.13 83.23 94.42
fr-de 74.18 92.94 74.07 92.73 74.44 92.68 76.63 93.41
pt-it 90.62 97.61 89.36 97.75 90.59 98.17 91.08 97.96
pt-es 96.19 99.29 95.36 99.08 96.04 99.23 96.04 99.32
pt-fr 89.9 97.57 90.1 97.43 90.67 97.74 91.04 97.87
pt-en 81.14 94.17 81.42 94.14 81.42 93.86 82.91 94.64
pt-de 74.83 93.76 75.94 93.21 74.45 93.1 76.99 94.32
en-it 80.84 93.97 79.88 93.93 80.25 93.76 81.45 94.58
en-es 82.35 94.67 83.05 94.79 81.62 94.82 84.26 95.28
en-fr 81.67 94.24 81.86 94.33 81.42 93.99 82.94 94.67
en-pt 83.03 94.45 82.72 94.64 82.25 94.79 84.65 95.29
en-de 71.73 90.48 72.92 90.76 71.88 90.42 74.08 91.46
de-it 75.41 94.3 76.4 93.87 75.19 93.65 78.09 94.52
de-es 72.18 92.64 74.21 92.6 73.58 92.48 75.93 93.83
de-fr 77.14 93.29 77.93 93.61 77.14 93.51 78.93 93.77
de-pt 74.38 93.71 74.99 93.54 74.22 93.81 77.18 94.14
de-en 72.85 91.06 74.36 91.21 72.17 90.81 75.85 91.98
average 82.84 95.26 82.86 95.15 82.79 95.18 84.24 95.67
Table 4: Accuracy results for translation pairs between all pairs of languages for all evaluated methods. The column GW benchmark contains results from Gromov-Wasserstein direct bilingual alignment. Unweighted is the barycenter approach without optimizing the support location weights. Hierarchical contains results obtained by traversing the tree edges and inferring translation mappings through hierarchical barycenters. The weighted column is what Algorithm 1 returns, optimizing both the support locations and the weights on the support.