
Look Ahead or Look Around?
A Theoretical Comparison Between Autoregressive and Masked Pretraining

Qi Zhang    Tianqi Du    Haotian Huang    Yifei Wang    Yisen Wang
Abstract

In recent years, generative self-supervised learning (SSL) paradigms have exhibited impressive performance across visual, language, and multi-modal domains. While the varied designs of generative SSL objectives lead to distinct properties in downstream tasks, a theoretical understanding of these differences remains largely unexplored. In this paper, we establish the first theoretical comparisons between two leading generative SSL paradigms: autoregressive SSL and masked SSL. By establishing theoretical frameworks, we elucidate the strengths and limitations of autoregressive and masked SSL within the primary evaluation tasks of classification and content generation. Our findings demonstrate that in classification tasks, the flexibility of targeted tokens in masked SSL fosters more inter-sample connections compared to the fixed position of target tokens in autoregressive SSL, which yields superior clustering performance. In content generation tasks, the misalignment between the flexible lengths of test samples and the fixed length of unmasked texts in masked SSL (vs. flexible lengths of conditional texts in autoregressive SSL) hinders its generation performance. To leverage each other's strengths and mitigate weaknesses, we propose diversity-enhanced autoregressive and variable-length masked objectives, which substantially improve the classification performance of autoregressive SSL and the generation performance of masked SSL. Code is available at https://github.com/PKU-ML/LookAheadLookAround.


1 Introduction

Recently, self-supervised learning (SSL) paradigms have emerged as promising approaches in a wide variety of domains (He et al., 2022; Radford et al., 2018; Devlin et al., 2019; Radford et al., 2021). Generative SSL, characterized by its ability to reconstruct target tokens from conditional tokens within the same sequence, stands out for its distinct advantages in fine-tuning (He et al., 2022), zero-shot learning (Brown et al., 2020; Radford et al., 2021), and many other downstream tasks. In general, generative SSL can be divided into two categories: autoregressive SSL (represented by GPT (Radford et al., 2018)) and masked SSL (represented by BERT (Devlin et al., 2019)). As shown in Figure 1, the key difference between them lies in the choice of target tokens. Taking language models as an example, autoregressive SSL predicts the next word in a sentence based on the preceding words, while masked SSL predicts randomly masked words in a sentence based on bidirectional context. As a result, these two generative SSL paradigms demonstrate significant differences in many aspects. For example, on the two primary evaluation tasks of pretrained models, i.e., classification and content generation, masked SSL tends to perform better in classification tasks, whereas autoregressive SSL shows superior performance in content generation tasks (Table 1). Despite empirical evidence supporting their differing efficacies, the underlying reasons explaining the distinct properties of autoregressive and masked SSL remain unclear.

[Figure 1: the example sentence "Euphoria means joy, happiness and excitement" shown under autoregressive SSL (next-token prediction from prefix texts) and masked SSL (unmasked texts predicting masked texts).]
Figure 1: Illustration of two primary paradigms in generative SSL: autoregressive SSL and masked SSL.

In this paper, we propose the first theoretical comparison between autoregressive and masked SSL, with a focus on classification and content generation tasks as widely considered in the literature (Radford et al., 2018; Devlin et al., 2019; Dong et al., 2019). To analyze the classification performance of autoregressive and masked SSL in a unified manner, we establish connections between generative SSL objectives and matrix decomposition objectives. Based on these connections, we derive a theoretical downstream guarantee for generative SSL and characterize the downstream classification performance through properties of the generative SSL co-occurrence matrices. From this perspective, we conduct both empirical and theoretical comparisons of autoregressive and masked SSL. Our analysis demonstrates that the randomly selected target tokens in masked SSL create more connections between semantically similar samples, leading to superior downstream classification performance compared with autoregressive SSL. To bridge this gap, we propose a diversity-enhanced autoregressive objective, which improves the classification performance of autoregressive SSL on real-world datasets.

Table 1: The autoregressive and masked SSL models with similar scales of parameters show distinct properties in classification (evaluated by GLUE score (Wang et al., 2018a)) and content generation tasks (evaluated by perplexity on WikiText-2 (Merity, 2016)). More details can be found in Appendix B.1.
Pretraining GLUE (\uparrow) Perplexity (\downarrow)
Autoregressive 72.8 22.8
Masked 80.5 34.1

On content generation tasks (Dong et al., 2019), we first compare the generation ability of masked and autoregressive SSL on samples of different lengths. We observe that masked SSL exhibits particularly inferior performance when generating from short texts. Intuitively, this is attributed to a misalignment between the pretraining and downstream objectives. For instance, in a question-answering scenario, the length of questions is variable, while the length of unmasked texts in the masked SSL objective is fixed. As a result, the model cannot locate the specific information needed to infer the masked texts during the pretraining process and may output a random guess. To formalize this intuition, we theoretically compare the generation abilities of autoregressive and masked SSL on samples of different lengths. We prove that the performance gap is determined by the degree of misalignment between pretraining objectives and downstream evaluation tasks. Building on this analysis, we introduce a variable-length masked objective for masked SSL, which exhibits significant improvements in its content generation performance.

We summarize our contributions as follows:

  • We establish the first unified theoretical framework to compare the downstream classification performance of autoregressive and masked SSL, showcasing the unique advantage of masked SSL in enhancing sample connectivity.

  • We build a theoretical comparison in generation abilities between autoregressive and masked SSL, which reveals that masked SSL’s underperformance is attributed to the misalignment between the pretraining and downstream objectives.

  • To further verify the theoretical insights, we propose the diversity-enhanced autoregressive and variable-length masked objectives, which substantially improve the classification performance of autoregressive SSL and the generation performance of masked SSL.

2 Related Work

Self-supervised Learning. Traditional deep learning relies on label information to cluster semantically similar samples in the feature space (Krizhevsky et al., 2012). However, due to the high cost of labeled data, the performance of deep learning is constrained. As a result, self-supervised learning (SSL) paradigms have been proposed to extract meaningful representations from unlabeled data. Among various self-supervised objectives (e.g., generative SSL objectives (Devlin et al., 2019; He et al., 2022), contrastive SSL objectives (Chen et al., 2020b; Giorgi et al., 2021) and context-based SSL objectives (Larsson et al., 2016; Gidaris et al., 2018)), generative paradigms have achieved impressive performance in vision, language, and multi-modal domains.

Generative SSL. Generative SSL objectives can be broadly categorized into two main types: autoregressive SSL (predicting the next token from the prefix) and masked SSL (predicting randomly masked tokens from bidirectional context). In language models, the representative autoregressive model GPT (Radford et al., 2018) and its variants (Radford et al., 2019; Brown et al., 2020) achieve promising performance in content generation, long-text comprehension, and many other downstream tasks. Concurrently, the representative masked SSL language model BERT (Devlin et al., 2019) and its variants (Lan et al., 2020; Liu et al., 2019) also show unique advantages and achieve remarkable performance in downstream tasks including sequence labeling, text classification, etc. Beyond language tasks, generative SSL objectives have also shown great potential in visual representation learning (He et al., 2022; Xie et al., 2022) and multi-modal representation learning (Bachmann et al., 2022).

Theoretical Understanding of SSL. Despite the impressive empirical performance exhibited by various self-supervised objectives in diverse downstream tasks, the theoretical understanding of the mechanisms behind them (especially for the generative SSL objectives) is still under-explored. In the language domain, previous studies mainly demonstrated the advantages of different pretraining objectives through empirical comparisons. For example, Wang et al. (2022a) shows that the autoregressive SSL models can achieve superior performance in zero-shot tasks, while the masked SSL models perform better in fine-tuning tasks. Besides, Dong et al. (2019) observes that the autoregressive SSL is more proficient in generation tasks, while the masked SSL outperforms in classification tasks. In the vision domain, researchers mainly focus on the theoretical analysis of contrastive objectives instead of generative SSL objectives. For example, they have established the theoretical connections between the contrastive objectives and the downstream performance (Saunshi et al., 2019; HaoChen et al., 2021) or analyzed them from a mutual information perspective (Oord et al., 2018; Tian et al., 2020). Different from prior studies, our work mainly focuses on uncovering the mechanism behind the generative SSL objectives by theoretically comparing the two major generative SSL objectives.

3 Mathematical Formulation

We begin by introducing the basic notations of autoregressive and masked SSL. Without loss of generality, we take language models as an example; the analysis can be easily extended to visual and other domains. In general, both autoregressive and masked SSL comprise two stages: self-supervised pretraining and downstream evaluation.

3.1 Pretraining Objectives

Consider an unsupervised language dataset ${\mathcal{D}}=\{x_{i}\}_{i=1}^{N}$, where each sample $x_{i}=(x_{i,1},\cdots,x_{i,s})$ consists of $s$ tokens. The objective of autoregressive SSL is to predict the subsequent word based on the prefix texts, i.e., to minimize the following Negative Log-Likelihood (NLL) loss:

{\mathcal{L}}_{ar}(\Theta)=-\mathbb{E}_{x_{i}}\sum_{k}\log P(x_{i,k}|x_{i,1},\cdots,x_{i,k-1};\Theta),

where $\Theta$ are the weights of the neural network.
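To make this concrete, here is a minimal PyTorch sketch of the autoregressive objective (our own illustration under simplifying assumptions; the toy model TinyCausalLM and all sizes are hypothetical, not the paper's setup). The loss simply shifts the sequence by one position so that every prefix predicts its next token.

import torch
import torch.nn as nn

class TinyCausalLM(nn.Module):
    # A toy causal language model: embedding -> Transformer encoder with a causal mask -> vocab logits.
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        s = x.size(1)
        causal = torch.triu(torch.full((s, s), float('-inf')), diagonal=1)  # forbid attending ahead
        return self.head(self.encoder(self.embed(x), mask=causal))          # (batch, s, vocab)

def autoregressive_nll(model, x):
    # L_ar: token x_{i,k} is predicted from the prefix x_{i,1}, ..., x_{i,k-1}.
    logits = model(x[:, :-1])           # conditions
    targets = x[:, 1:]                  # next tokens
    return nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

x = torch.randint(0, 100, (8, 16))      # a batch of 8 samples, each with s = 16 tokens
loss = autoregressive_nll(TinyCausalLM(), x)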

During the masked SSL pretraining process, for each sample $x_{i}=(x_{i,1},\cdots,x_{i,s})$, we draw a random mask $m\in\{0,1\}^{s}$ (drawing 0 with probability $\rho_{m}$, i.e., the mask ratio) and apply the mask to generate the unmasked text $x^{1}_{i}$ and masked text $x^{2}_{i}$:

x^{1}_{i}=x_{i}[m]\in\mathbb{R}^{s(1-\rho_{m})},\quad x^{2}_{i}=x_{i}[1-m]\in\mathbb{R}^{s\rho_{m}}.

The objective of masked SSL is to predict the masked tokens from unmasked texts, i.e.,

{\mathcal{L}}_{m}(\Theta)=-\mathbb{E}_{(x_{i}^{1},x_{i}^{2})}\sum_{k}\log P(x^{2}_{i,k}|x^{1}_{i,1},\cdots,x^{1}_{i,s(1-\rho_{m})};\Theta),

where $x^{1}_{i,k}$ and $x^{2}_{i,k}$ respectively denote the $k$-th token of the unmasked text $x^{1}_{i}$ and the masked text $x^{2}_{i}$.
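For comparison, a corresponding sketch of the masked objective (again our own illustration; the encoder is a trivial stand-in for a bidirectional Transformer, and the [MASK] handling is simplified relative to BERT). The loss is computed only on the masked positions.

import torch
import torch.nn as nn

def masked_nll(model, x, mask_ratio=0.15, mask_id=0):
    # Draw the random mask m: 1 = keep (unmasked text x^1), 0 = mask out (masked text x^2).
    m = (torch.rand_like(x, dtype=torch.float) > mask_ratio).long()
    corrupted = x * m + mask_id * (1 - m)            # replace masked positions with a [MASK] id
    logits = model(corrupted)                        # (batch, s, vocab), bidirectional context
    masked_pos = (m == 0)
    return nn.functional.cross_entropy(logits[masked_pos], x[masked_pos])

x = torch.randint(1, 100, (8, 16))                   # token id 0 is reserved for [MASK]
toy_encoder = nn.Sequential(nn.Embedding(100, 32), nn.Linear(32, 100))  # stand-in encoder
loss = masked_nll(toy_encoder, x)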

In practice, both the representative autoregressive SSL model GPT (Radford et al., 2018) and the representative masked SSL model BERT (Devlin et al., 2019) use a multi-layer Transformer to generate the predicted distribution and apply the Cross-Entropy (CE) loss. Let $f(X_{i})\in\mathbb{R}^{t}$ be the output of the network and $W\in\mathbb{R}^{N_{D}\times t}$ be the token embedding matrix that transforms the output features into the predicted distribution over $N_{D}$ different tokens. Then the predicted probability in autoregressive and masked SSL can be uniformly reformulated as:

P(X_{i}^{+}|X_{i};\Theta)=\frac{\exp((Wf(X_{i}))^{\top}\mathbbm{1}_{X_{i}^{+}})}{\sum_{X_{i}^{-}}\exp((Wf(X_{i}))^{\top}\mathbbm{1}_{X_{i}^{-}})},

where $X_{i}$ are the conditional tokens, $X_{i}^{+}$ are the target tokens and $X_{i}^{-}$ are the independent tokens. To be specific, taking the sentence in Figure 1 as an example ("Euphoria means joy, happiness and excitement"), for autoregressive SSL, $X_{i}$ represents the prefix tokens (e.g., [Euphoria, means, joy, happiness, and]) and $X_{i}^{+}$ is the subsequent token ([excitement]). For masked SSL, $X_{i}$ represents the unmasked tokens (e.g., [Euphoria, joy, excitement]) while $X_{i}^{+}$ represents one of the masked tokens (e.g., [happiness]). Besides, we use a one-hot vector $\mathbbm{1}_{X_{i}^{+}}$ to indicate the target tokens (the $X_{i}^{+}$-th position of $\mathbbm{1}_{X_{i}^{+}}$ is set to 1). The corresponding objective is:

{\mathcal{L}}_{CE}(f,W)=-\mathbb{E}_{(X_{i},X_{i}^{+})}\log\frac{\exp((Wf(X_{i}))^{\top}\mathbbm{1}_{X_{i}^{+}})}{\sum_{X_{i}^{-}}\exp((Wf(X_{i}))^{\top}\mathbbm{1}_{X_{i}^{-}})}.

We note that the main goal of the generative SSL loss is to maximize the probability of the target words while minimizing the probability of independent words. Following the same spirit, we consider the following spectral loss (HaoChen et al., 2021) to simplify our theoretical analysis:

{\mathcal{L}}_{SL}(f,W)=-\mathbb{E}_{(X_{i},X_{i}^{+})}(Wf(X_{i}))^{\top}\mathbbm{1}_{X_{i}^{+}}+\mathbb{E}_{X_{i},X_{i}^{-}}((Wf(X_{i}))^{\top}\mathbbm{1}_{X_{i}^{-}})^{2}. (1)
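A rough sketch of this spectral surrogate, where the expectation over independent tokens $X_{i}^{-}$ is approximated with one uniformly sampled token per example (an illustrative simplification; all names are ours):

import torch

def spectral_loss(feats, W, pos_ids, vocab_size):
    # feats: (B, t) encoded conditions f(X_i); W: (vocab, t) token embeddings; pos_ids: (B,) targets X_i^+.
    scores = feats @ W.t()                                            # (B, vocab) = (W f(X_i))^T 1_X for all X
    pos_term = -scores.gather(1, pos_ids.unsqueeze(1)).mean()         # -E[(W f(X_i))^T 1_{X_i^+}]
    neg_ids = torch.randint(0, vocab_size, pos_ids.shape)             # one random independent token X_i^-
    neg_term = scores.gather(1, neg_ids.unsqueeze(1)).pow(2).mean()   # +E[((W f(X_i))^T 1_{X_i^-})^2]
    return pos_term + neg_term

loss = spectral_loss(torch.randn(32, 16), torch.randn(1000, 16), torch.randint(0, 1000, (32,)), 1000)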

3.2 Revisiting Objectives from a Matrix Perspective

When theoretically comparing the generalization performance of autoregressive and masked SSL pretraining, it is challenging to analyze the overall optimal solutions of their objectives, as both are usually viewed as instance-level tasks, which hinders us from characterizing the downstream performance of pretrained models. Inspired by previous work (HaoChen et al., 2021) that addressed analogous challenges in contrastive learning, we reformulate the spectral loss and analyze it from a matrix decomposition perspective.

We start by introducing the co-occurrence matrix $A$, which formulates the joint distribution $P_{M}(X_{i},X_{i}^{+})$ of the conditional texts and target texts as a matrix, i.e.,

A_{X_{i},X_{i}^{+}}=P_{M}(X_{i},X_{i}^{+})\geq 0.

We denote $P_{C}(X_{i})=\sum_{X_{i}^{+}}P_{M}(X_{i},X_{i}^{+})$ and $P_{G}(X_{i}^{+})=\sum_{X_{i}}P_{M}(X_{i},X_{i}^{+})$ as the marginal distributions of conditional and target texts, respectively. Based on these definitions, we establish a crucial connection: the spectral loss is equivalent to an asymmetric matrix decomposition objective on the co-occurrence matrix.

Theorem 3.1.

Let $\bar{A}$ be the normalized co-occurrence matrix, i.e., $\bar{A}_{X_{i},X_{i}^{+}}=\frac{A_{X_{i},X_{i}^{+}}}{\sqrt{P_{C}(X_{i})P_{G}(X_{i}^{+})}}$. Then we obtain

{\mathcal{L}}_{SL}(f,W)=\|\bar{A}-FW^{\prime\top}\|^{2}+\mathrm{const},

where the $X_{i}$-th row of $F$ and the $X_{i}^{+}$-th row of $W^{\prime}$ respectively represent the encoded features and token embeddings, i.e., $F_{X_{i}}=\sqrt{P_{C}(X_{i})}f(X_{i})^{\top}$ and $W^{\prime}_{X_{i}^{+}}=\sqrt{P_{G}(X_{i}^{+})}W_{X_{i}^{+}}$.
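The equivalence can be checked numerically on a toy joint distribution. The sketch below (our own illustration) builds the normalized co-occurrence matrix and verifies the Eckart-Young fact used later: the residual of the best rank-$t$ factorization equals the sum of the discarded squared singular values.

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((6, 5)); A /= A.sum()            # toy joint distribution P_M(X_i, X_i^+)
P_C, P_G = A.sum(axis=1), A.sum(axis=0)         # marginals of conditional / target texts
A_bar = A / np.sqrt(np.outer(P_C, P_G))         # normalized co-occurrence matrix

t = 3
U, S, Vt = np.linalg.svd(A_bar)
F = U[:, :t] * np.sqrt(S[:t])                   # one (non-unique) optimal factor pair F, W'
W_prime = Vt[:t].T * np.sqrt(S[:t])
residual = np.linalg.norm(A_bar - F @ W_prime.T) ** 2
assert np.isclose(residual, (S[t:] ** 2).sum()) # Eckart-Young: residual = sum of discarded sigma_j^2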

3.3 Downstream Tasks

Classification tasks and content generation tasks are the most commonly adopted evaluation protocols for pretrained models (Radford et al., 2018; Brown et al., 2020; Dong et al., 2019; Devlin et al., 2019; Yang et al., 2019; Chen et al., 2020a; He et al., 2022). Therefore, in this paper, we mainly consider the classification tasks and content generation tasks for generality.

Linear classification. For the classification tasks, we assume that the labels of ${\mathcal{D}}$ can be accessed, i.e., ${\mathcal{D}}=\{(x_{i},y(x_{i}))\}_{i=1}^{N}$, where $y(x_{i})$ is the ground-truth label of $x_{i}$. For each sample $x_{i}$, we first encode it with the pretrained model $f$ and then apply a linear classifier $g$ to generate the predicted distribution over labels. For ease of theoretical analysis, we follow the linear evaluation setting (Saunshi et al., 2019), i.e., the pretrained encoder $f$ is frozen during the downstream evaluation process. Then we evaluate the prediction error

\mathcal{E}(f)=\mathbb{E}_{(x_{i},y(x_{i}))}P(\hat{y}_{i}(x_{i})\neq y(x_{i})),

where $\hat{y}_{i}(x_{i})={\rm argmax}(g(f(x_{i})))$.
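A minimal sketch of this linear evaluation protocol (with random stand-ins for the frozen features and labels; the logistic-regression head plays the role of $g$):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 16))                  # frozen pretrained features f(x_i) (stand-in)
labels = (feats[:, 0] > 0).astype(int)              # ground-truth labels y(x_i) (stand-in)
g = LogisticRegression(max_iter=1000).fit(feats, labels)   # only the linear head is trained
linear_probe_error = 1.0 - g.score(feats, labels)          # empirical estimate of E(f)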

Content Generation. For the content generation tasks, we consider an unsupervised text dataset ${\mathcal{D}}_{u}=\{x_{i}\}_{i=1}^{M}$ consisting of $M$ samples, where each sample again consists of $s$ tokens, i.e., $x_{i}=(x_{i,1},\cdots,x_{i,s})$. The downstream objective is to evaluate the following likelihood with the pretrained model weights:

{\mathcal{L}}_{gen}(\Theta)=-\mathbb{E}_{x_{i}}\sum_{k}\log P(x_{i,k+1}|x_{i,1},\cdots,x_{i,k};\Theta). (2)

The exponential of this downstream objective is known as the perplexity, an important metric for evaluating language models.
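Concretely, if the per-token negative log-likelihoods are available, the perplexity is just the exponential of their average, e.g.:

import math

def perplexity(token_nlls):
    # Perplexity = exp of the average per-token negative log-likelihood in Eq. (2).
    return math.exp(sum(token_nlls) / len(token_nlls))

print(perplexity([3.2, 2.8, 3.0, 3.4]))   # average NLL 3.1 -> perplexity of about 22.2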

4 A Theoretical Comparison between Autoregressive and Masked SSL

4.1 Generalization on Linear Classification

In this section, we compare the generalization performance between autoregressive and masked SSL on linear classification tasks. To achieve this, we first establish a unified downstream classification guarantee of generative SSL and then compare autoregressive and masked SSL based on that.

4.1.1 Downstream Classification Guarantees of Generative SSL

Theorem 3.1 establishes the mathematical equivalence between the generative SSL objectives and the asymmetric matrix decomposition objectives. Leveraging the Eckart-Young Theorem (Eckart & Young, 1936), we can explicitly characterize the optimal solutions of the asymmetric decomposition objectives, which allows us to characterize the ideal features learned with generative SSL objectives. Substituting these pretrained features into the downstream classification objective yields the following theorem, which establishes the downstream guarantees for generative SSL.

Theorem 4.1.

We define the labeling error $\alpha$ as the probability that the conditional texts and the target text have different labels, i.e.,

\alpha=\mathbb{E}_{(X_{i},X_{i}^{+})}\mathbbm{1}[y(X_{i})\neq y(X_{i}^{+})],

where $y(\cdot)$ denotes the ground-truth label. Let $f^{\star}$ be the optimal solution of ${\mathcal{L}}_{SL}(f,W)$. Then we obtain

\mathbb{E}_{x_{i}}(\hat{y}(x_{i})\neq y(x_{i}))\leq c_{1}\sum\limits_{j=t+1}^{N_{D}}\sigma_{j}^{4}+c_{2}\alpha,

where $\sigma^{2}_{j}$ is the $j$-th largest singular value of the normalized co-occurrence matrix and $c_{1},c_{2}$ are constants.

As shown in Theorem 4.1, the downstream performance is mainly decided by two critical factors: the labeling error and the singular values of the co-occurrence matrix. In this paper, we follow the assumptions used in (Saunshi et al., 2019; Wang et al., 2022b) that the labeling error between different tokens of the same samples is negligible. Consequently, our primary focus in this paper is on elucidating how the diverse objectives of autoregressive and masked SSL impact the singular values of the co-occurrence matrix.

A common way to understand the singular values is from a graph perspective. More precisely, the co-occurrence matrix $A$ can be viewed as the adjacency matrix of a bipartite graph ${\mathcal{G}}$, where the nodes represent the conditional and target texts, and the edge weights denote the joint probability between them. Spectral graph theory (Chung, 1997) states that smaller singular values of the adjacency matrix correspond to stronger connectivity in the graph, indicating properties such as fewer disconnected sub-graphs and shorter diameters. Consequently, Theorem 4.1 suggests that achieving superior downstream classification performance requires enhanced connectivity in the co-occurrence matrix.
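The sketch below (a toy illustration of this graph view, not the paper's experiment) shows the effect: a bipartite co-occurrence graph made of two disconnected blocks keeps its second normalized singular value at 1, while adding cross-block edges, i.e., strengthening connectivity, pushes it below 1.

import numpy as np

def normalized_singular_values(A):
    # Treat A as a bipartite adjacency / joint distribution and normalize it as in Theorem 3.1.
    A = A / A.sum()
    P_C, P_G = A.sum(axis=1), A.sum(axis=0)
    return np.linalg.svd(A / np.sqrt(np.outer(P_C, P_G)), compute_uv=False)

disconnected = np.block([[np.ones((2, 2)), np.zeros((2, 2))],
                         [np.zeros((2, 2)), np.ones((2, 2))]])
connected = disconnected + 0.5 * (1 - disconnected)     # add hypothetical cross-block edges
print(normalized_singular_values(disconnected))         # [1., 1., 0., 0.] -> two components
print(normalized_singular_values(connected))            # second value drops to about 0.33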

This perspective provides insight into the differences between masked and autoregressive SSL models in downstream classification tasks. Intuitively, the multiple positions of target tokens in the masked SSL objectives have the potential to generate more connections than the single position in the autoregressive SSL objectives. In the following, we conduct both empirical and theoretical comparisons of the connectivity in autoregressive and masked co-occurrence matrices.

4.1.2 Comparing Autoregressive and Masked SSL

Figure 2: Comparisons of the estimated connectivity of the co-occurrence matrices of GPT and BERT. Details are in Appendix B.2.

Based on the perspective above, we first empirically investigate the connectivity of autoregressive and masked co-occurrence matrices on the real-world Pile dataset (Gao et al., 2020). As the co-occurrence matrices of real-world datasets are not directly accessible, we calculate the average feature similarity between different texts as a surrogate metric for sample connectivity. To ensure a fair comparison, we pretrain networks of the same scale with the autoregressive and masked SSL objectives respectively. More details can be found in Appendix B.2. As shown in Figure 2, the estimated connectivity of the masked SSL co-occurrence matrices is significantly stronger than that of autoregressive SSL.

Besides, to theoretically compare the connectivity of autoregressive and masked SSL, we construct a toy model and calculate the singular values of the respective co-occurrence matrices.

Toy model. For the dataset ${\mathcal{D}}_{sim}=\{(x_{i},y(x_{i}))\}$, we assume there exist $r$ classes, i.e., $y(x_{i})\in\{1,\cdots,r\}$. For the $k$-th token of the text $x_{i}$, we assume that $x_{i,k}$ is uniformly selected from the set $\{k\cdot T\cdot r+y_{i},k\cdot T\cdot r+y_{i}\cdot 2,\cdots,k\cdot T\cdot r+y_{i}\cdot T\}$ with $T$ elements. Then we can explicitly calculate the singular values of the co-occurrence matrices.

Theorem 4.2.

For the normalized co-occurrence matrices $\bar{A},\bar{A}^{\prime}$ of autoregressive and masked SSL on the toy dataset ${\mathcal{D}}_{sim}$, when the length of the masked texts satisfies $s\rho_{m}>1$, we obtain

\begin{cases}\sigma_{j}=\sigma^{\prime}_{j}=1,&j\leq r,\\ \displaystyle\sigma_{j}=1>\sqrt{\frac{s(1-\rho_{m})}{(s\rho_{m})(s-1)}}=\sigma^{\prime}_{j},&r<j\leq rs,\\ \sigma_{j}=\sigma^{\prime}_{j}=0,&j>rs,\end{cases}

where $\sigma_{j},\sigma^{\prime}_{j}$ are the $j$-th largest singular values of $\bar{A},\bar{A}^{\prime}$, $r$ is the number of classes, $s$ is the length of a sample and $\rho_{m}$ is the mask ratio.

According to Theorem 4.2, the singular values of the masked SSL co-occurrence matrix are smaller than those of autoregressive SSL, which verifies the intuition that masked SSL fosters more connections through multiple-place predictions. Combined with Theorem 4.1, the masked SSL objective therefore enjoys a superior downstream guarantee. Additionally, it is noteworthy that the singular values of the masked SSL co-occurrence matrix decrease with a larger mask ratio. This observation implies that an aggressive mask ratio can effectively cluster more samples in the feature space, which is consistent with previous empirical findings in generative SSL (He et al., 2022; Devlin et al., 2019).

In summary, both the empirical and theoretical analyses verify that multiple-place word predictions bring more connections than next-word prediction, which suggests that the autoregressive SSL objective should introduce more diverse predictions to strengthen the connectivity between different texts. Naturally, we propose the following diversity-enhanced autoregressive SSL objective:

{\mathcal{L}}_{dar,t}(\Theta)=-\mathbb{E}_{x_{i}}\sum_{k}\log P(x_{i,[k+1,k+t]}|x_{i,1},\cdots,x_{i,k};\Theta), (3)

where $x_{i,[k+1,k+t]}$ is a token randomly selected from $\{x_{i,k+1},\cdots,x_{i,k+t}\}$, $t\geq 1$. This objective lets the conditional sequence $x_{i,1},\cdots,x_{i,k}$ additionally predict $t-1$ more subsequent words, which introduces more diverse prediction targets and helps create more connections. We will verify the effectiveness of this objective in Section 5.
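A minimal sketch of this target selection (our own illustration; the encoder is a trivial stand-in for a causal model that maps token ids to per-position vocabulary logits): for the prefix ending at position $k$, the target is drawn uniformly from the next $t$ tokens instead of always being $x_{i,k+1}$.

import torch
import torch.nn as nn

def diversity_enhanced_ar_nll(model, x, t=2):
    B, s = x.shape
    logits = model(x[:, :-t])                                # keep prefixes that have t tokens ahead
    offsets = torch.randint(1, t + 1, (B, s - t))            # uniform offset in {1, ..., t}
    positions = torch.arange(s - t).unsqueeze(0) + offsets   # index of the sampled target x_{k+offset}
    targets = torch.gather(x, 1, positions)
    return nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

toy = nn.Sequential(nn.Embedding(100, 32), nn.Linear(32, 100))   # stand-in for a causal encoder
loss = diversity_enhanced_ar_nll(toy, torch.randint(0, 100, (4, 12)), t=2)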

4.2 Generalization on Content Generation

Besides the classification tasks analyzed above, another important evaluation of pretrained models is their content generation ability (Radford et al., 2018; Devlin et al., 2019). Previous works have empirically shown that autoregressive SSL models outperform masked SSL models in downstream content generation tasks (Dong et al., 2019). To further understand the advantages of autoregressive SSL models in content generation tasks, we first examine the generation ability of masked SSL models on conditional texts of different lengths.

Figure 3: Comparison between different conditional sequence lengths in generation evaluation of masked SSL. Shorter conditional sequences suffer from low prediction likelihood.

As shown in Figure 3, we observe that the masked SSL model performs particularly poorly when generating from short texts. Intuitively, this can be attributed to a misalignment between the pretraining objective and the downstream evaluation tasks. To be specific, when predicting the masked words during pretraining, the length of the unmasked input is fixed by the mask ratio (e.g., 15% of the tokens are masked in BERT). Consequently, the masked SSL model may struggle to accurately infer the complete texts from the limited information available in downstream generation tasks. In the subsequent analysis, we aim to theoretically substantiate this intuition. For ease of theoretical analysis, we consider linear attention as the pretrained encoder when comparing the generation abilities of autoregressive and masked SSL models.

Linear Attention. The general form of linear attention is given by:

f(x_{i})=x_{i}W^{Q}(x_{i}W^{K})^{T}x_{i}W^{V}=QK^{T}V,

where $W^{Q},W^{K},W^{V}$ are projection matrices.
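For completeness, a direct sketch of this single-head linear attention (illustrative shapes; no softmax, matching the analysis):

import torch

def linear_attention(x, W_Q, W_K, W_V):
    # f(x_i) = (x_i W^Q)(x_i W^K)^T (x_i W^V) = Q K^T V, i.e., attention without the softmax.
    Q, K, V = x @ W_Q, x @ W_K, x @ W_V
    return Q @ K.transpose(-2, -1) @ V

d = 8
x = torch.randn(16, d)                                # one sequence of s = 16 token embeddings
W_Q, W_K, W_V = (torch.randn(d, d) for _ in range(3))
out = linear_attention(x, W_Q, W_K, W_V)              # (16, d)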

By comparing the evaluation error on the downstream generation tasks, we obtain the following theorem.

Theorem 4.3.

Let $f_{mask}$ be the model pretrained by masked SSL. We establish the upper bound of masked SSL in content generation tasks:

\mathcal{L}_{gen}(f_{mask})\leq\frac{\sum_{k}\left(\frac{w_{k}^{2}}{(k-1)^{6}}+w_{k}\left\lVert W\right\rVert_{2}^{2}\eta\right)}{2s(1-\rho_{m})}+\delta_{mask}+1,

where $s$ is the length of a sequence, $\rho_{m}$ is the mask ratio, $w_{k}=(s(1-\rho_{m}))^{3}-(k-1)^{3}$ is the misalignment degree of input lengths, $\eta=\max\|x_{i,a}W^{Q}(x_{i,b}W^{K})^{\top}x_{i,c}W^{V}-x_{i,\alpha}W^{Q}(x_{i,\beta}W^{K})^{\top}x_{i,\gamma}W^{V}\|$ represents the difference between pretrained model outputs at different positions, and $\delta_{mask}=\max(-(Wf(X_{i}))^{\top}\mathbbm{1}_{X_{i}^{+}}+((Wf(X_{i}))^{\top}\mathbbm{1}_{X_{i}^{-}})^{2})$ represents the pretraining error of masked SSL.

For the autoregressive SSL, we obtain

\mathcal{L}_{gen}(f_{ar})\leq\delta_{ar}.

Comparing the two upper bounds, the gap between masked and autoregressive SSL is:

\frac{\sum_{k}\left(\frac{w_{k}^{2}}{(k-1)^{6}}+w_{k}\left\lVert W\right\rVert_{2}^{2}\eta\right)}{2s(1-\rho_{m})}+1+\delta_{mask}-\delta_{ar}. (4)

Consequently, autoregressive SSL obtains a smaller error when the pretraining errors are negligible.

As stated in Theorem 4.3, apart from the canonical pretraining error, autoregressive SSL models have superior performance guarantees on downstream content generation tasks compared to masked SSL models, and the performance gap is mainly determined by two crucial factors: the alignment of input lengths ($w_{k}$) and the consistency of different positions ($\eta$). In the following, we discuss these two factors respectively.

Alignment of input lengths. We note that when the lengths of test samples are close to the length of the unmasked texts in the pretraining objective, the performance of masked SSL models approaches that of autoregressive SSL models, which indicates that the length misalignment between unmasked texts in pretraining and test examples downstream is a crucial reason for the inferior performance of masked SSL models.

Consistency of different positions. The term $\eta$ in Equation (4) measures the difference in model outputs across positions in a sequence. When the output distribution is more consistent, the masked SSL model performs better. Consequently, it is advantageous to encourage the model to generate predictions based on the entire sentence rather than focusing on specific tokens. Besides, the consistency of different positions offers other potential benefits. For example, by uniformly considering the semantic information across different positions in a sentence, the model can avoid learning shortcut and overfitted solutions.

In summary, the theoretical bounds in Theorem 4.3 provide two principled guidelines to improve the content generation performance of masked SSL: (1) we should mitigate the length misalignment between pretraining and downstream examples, and (2) the model predictions should be encouraged to be consistent across different tokens in the same sentence.

Inspired by the theoretical analysis, predicting the masked texts from unmasked texts of varying lengths is a straightforward way to mitigate the misalignment and improve the generation performance of masked SSL. Consequently, we propose the following variable-length masked objective:

{\mathcal{L}}_{vm}(\Theta)=-\mathbb{E}_{\rho_{m}}\mathbb{E}_{(x_{i}^{1},x_{i}^{2})}\sum_{k}\log P(x^{2}_{i,k}|x^{1}_{i,1},\cdots,x^{1}_{i,s(1-\rho_{m})};\Theta). (5)

In this objective, the mask ratio $\rho_{m}$ is not fixed. Instead, it is randomly sampled from a range. The variable mask ratio has the potential to mitigate the length misalignment between pretraining and downstream examples, improving the content generation performance. We will verify its effectiveness in Section 5.
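A minimal sketch of the variable-length objective (our own illustration, reusing the simplified masking logic from the Section 3.1 sketch; the encoder is again a trivial stand-in for a bidirectional model): the only change is that $\rho_{m}$ is drawn uniformly from a range before each masking step.

import torch
import torch.nn as nn

def variable_length_masked_nll(model, x, ratio_range=(0.15, 0.3), mask_id=0):
    lo, hi = ratio_range
    rho_m = lo + (hi - lo) * float(torch.rand(()))               # sample the mask ratio per batch (Eq. 5)
    m = (torch.rand_like(x, dtype=torch.float) > rho_m).long()   # 1 = keep, 0 = mask
    logits = model(x * m + mask_id * (1 - m))
    masked_pos = (m == 0)
    return nn.functional.cross_entropy(logits[masked_pos], x[masked_pos])

toy = nn.Sequential(nn.Embedding(100, 32), nn.Linear(32, 100))   # stand-in bidirectional encoder
loss = variable_length_masked_nll(toy, torch.randint(1, 100, (8, 16)))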

Table 2: GLUE test set results of the autoregressive objective and the diversity-enhanced autoregressive objective after 5 epochs of fine-tuning on each task. We report F1 scores for QQP and MRPC, the Pearson correlation coefficient for STS-B, the Matthews correlation coefficient for CoLA, and accuracy for the other tasks.
Objective MNLI SST-2 STSB RTE QNLI QQP MRPC CoLA Avg
Autoregressive 78.2/79.3 90.3 82.1 54.3 84.7 85.6 81.9 30.9 74.1
Diversity-enhanced Autoregressive 80.5/80.6 89.9 83.7 56.4 87.2 86.3 85.1 34.2 76.0

4.3 Discussion

In summary, this section introduces the first theoretical comparison between autoregressive and masked SSL. As autoregressive and masked SSL exhibit contrasting advantages in downstream classification and content generation tasks (Radford et al., 2018; Devlin et al., 2019), the two most crucial evaluation tasks for pretrained models may seem contradictory. However, by revealing the limitation of autoregressive SSL in classification tasks (the fixed position of target tokens) and the limitation of masked SSL in content generation tasks (the fixed length of unmasked texts), we note that the limitations of these two paradigms are not contradictory to each other. Instead, they deliver a consistent message: we should introduce diversity (in inputs, prediction targets, etc.) into generative SSL objectives to improve generalization performance. As a verification of our analysis, we show the performance of our proposed objectives across both language and vision tasks in the following sections.

5 Experiments

In this section, we verify the effectiveness of our newly proposed objectives presented in Equation (3) and Equation (5). We conduct experiments on both vision and language tasks to demonstrate the generality of our methods.

5.1 Diversity-enhanced Autoregressive Objective Improves Classification Ability

In this part, we consider the diversity-enhanced autoregressive objective proposed in Equation (3), which is to predict the next $t$ tokens of the conditional texts. Inspired by (Wang et al., 2018b), which predicts several tokens simultaneously, we perform group-autoregressive modeling on the sequence to effectively realize this objective. Specifically, the sample sequence $(x_{i,1},\dots,x_{i,s})$ is divided into several consecutive groups $G_{i,1},\dots,G_{i,l}$ in order, with each group containing any number of tokens. The prediction is performed group by group, i.e., within each group, the tokens are predicted in parallel, while across groups, the predictions are sequential. In this way, given $G_{i,1},\dots,G_{i,j}$, the model predicts all tokens of $G_{i,j+1}$. Therefore, by setting $|G_{i,2}|=\dots=|G_{i,l-1}|=t$ and summing over $|G_{i,1}|$ from $1$ to $t$, the semi-autoregressive modeling task

\mathcal{L}_{semi,t}(\Theta)=-\mathbb{E}_{x_{i}}\sum_{|G_{i,1}|}\sum_{j}\log P(G_{i,j+1}|G_{i,1},\dots,G_{i,j};\Theta) (6)

is equivalent to Equation (3). In our experiments, we set $t=2$ and instead uniformly sample $|G_{i,1}|$ from $\{1,2,\dots,t\}$. Since there exist multiple prediction targets for one conditional sequence, it is difficult for the vanilla Transformer (Vaswani et al., 2017) to model them. Therefore, for language tasks, we leverage and extend the two-stream attention module proposed in XLNet (Yang et al., 2019) by designing causal masks for semi-autoregressive modeling, since two-stream attention can model more varied dependency relationships between tokens than the vanilla Transformer. More details on the realization are provided in Appendix B.3. Similarly, for vision tasks, we use the two-stream attention version of ViT (Dosovitskiy et al., 2021), which is also adopted in previous work (Hua et al., 2022). A sketch of the group construction is given below.
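The sketch below (illustrative helper, not the paper's implementation) shows how a sequence is split into a random-sized first group followed by groups of size $t$, and which (condition, target) pairs a single training pass produces:

import random

def make_groups(tokens, t=2, seed=0):
    # First group has a random size in {1, ..., t}; the remaining groups have size t (last may be shorter).
    random.seed(seed)
    first = random.randint(1, t)
    return [tokens[:first]] + [tokens[i:i + t] for i in range(first, len(tokens), t)]

groups = make_groups(list("abcdefgh"), t=2)
for j in range(1, len(groups)):
    # Condition on groups[:j] (sequentially), predict all tokens of groups[j] in parallel.
    print(groups[:j], "->", groups[j])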

Table 3: Experiment results of autoregressive objective and diversity-enhanced autoregressive objective on image classification tasks. ViT-S(mall) is trained on ImageNet-100 and ViT-B(ase) is trained on ImageNet-1K. LP ACC. refers to linear probing accuracy (%). FT Acc. refers to fine-tuning accuracy (%).
Arch. Objective LP Acc. FT Acc.
ViT-S Autoregressive 33.1 81.1
Diversity-enhanced 36.2 83.2
ViT-B Autoregressive 56.2 82.5
Diversity-enhanced 59.4 82.9

Language Tasks. The model is pretrained on the Pile dataset (Gao et al., 2020), which contains content from 22 diverse sources. We use a decoder-only Transformer with 16 layers and a hidden size of 768. We train the model for 100K steps with a batch size of 8192. The rest of the pretraining procedure follows the protocol proposed by Cramming (Geiping & Goldstein, 2023). The model is then fine-tuned for 5 epochs and evaluated on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018a), which is a collection of several language understanding tasks.

Table 2 presents the GLUE test results. These results reveal that the model trained with the diversity-enhanced autoregressive objective consistently exhibits improvement in classification tasks. Our proposed objective has a gain of 1.9% on the average score and 3.3% on the hardest task CoLA. These improvements support the effectiveness of our theoretical findings regarding the classification ability of autoregressive models.

Vision Tasks. The models are pretrained on ImageNet-1K with ViT-Base and on ImageNet-100 with ViT-Small (Deng et al., 2009). Pretraining lasts 200 epochs with a warm-up of 10 epochs. The batch size is set to 4096 following previous works (He et al., 2022; Hua et al., 2022; Xie et al., 2022). We set the base learning rate to 5e-4 and use the Adam optimizer (Kingma & Ba, 2015). After pretraining, we perform downstream tasks under two protocols: for linear probing, we train a linear classifier on the frozen pretrained encoder for 90 epochs; for non-linear fine-tuning, we train both the pretrained encoder and the linear classifier with the soft target cross-entropy loss (Peterson et al., 2019) for 100 epochs. The batch size is set to 4096.

The experiment results are presented in Table 3. The diversity-enhanced autoregressive objective improves linear probing accuracy by about 3%. It also improves fine-tuning accuracy by 2.1% on ViT-S and 0.4% on ViT-B. These positive outcomes affirm the effectiveness of our proposed objective, supporting our theoretical insight that more connections are beneficial in generative SSL.

We also conduct extensive additional experiments on vision tasks, including few-shot learning following Dubois et al. (2023), multi-epoch training following Li et al. (2023) and transfer learning following Kong et al. (2023).

Few-shot learning. We conduct experiments with the diversity-enhanced autoregressive objective. In the downstream stage, we only use 1% of the labels. Fine-tuning and linear probing results are shown in Table 4. As shown in the table, the diversity-enhanced autoregressive objective improves both the few-shot linear probing and fine-tuning accuracy, which further verifies the effectiveness of our proposed objectives across different tasks.

Table 4: Few-shot evaluation accuracy (%) on ImageNet-1K with ViT Base. Only 10% of the training data is given during training.
Objective LP Acc. FT Acc.
Autoregressive 41.2 70.2
Diversity-enhanced 45.3 71.1

Multi-epoch training. We conduct experiments with the diversity-enhanced autoregressive objective, pretraining models for 100 epochs and 200 epochs respectively. Fine-tuning and linear probing results are shown in Table 5. As shown in the table, the diversity-enhanced autoregressive models consistently achieve significant improvements in classification tasks across training epochs, which also verifies the effectiveness of our proposed objectives.

Table 5: Test accuracy (%) with multi-epoch training on ImageNet-1K with ViT Base.
Epochs Objective LP Acc. FT Acc.
100 Autoregressive 48.2 81.4
Diversity-enhanced 52.6 82.1
200 Autoregressive 56.2 82.5
Diversity-enhanced 59.4 82.9

Transfer learning. We conduct experiments with the diversity-enhanced autoregressive objective. We evaluate the transfer learning performance of models pretrained on ImageNet-1K on 9 downstream datasets, following Zhao et al. (2022) and Ericsson et al. (2021): FGVC Aircraft, Caltech-101, Stanford Cars, CIFAR10, CIFAR100, DTD, Oxford 102 Flowers, Food-101 and Oxford-IIIT Pets. For linear evaluation, multinomial logistic regression is fit on the extracted features. Results are shown in Table 6. As shown in the table, the diversity-enhanced autoregressive models also show superior performance in transfer learning, which further verifies the generalization ability of our models.

Table 6: Transfer learning average accuracy (%) on 9 downstream datasets with ViT Base.
Objective Average accuracy
Autoregressive 76.2
Diversity-enhanced 78.9

The experiments across three tasks and multiple datasets show that the modified objectives consistently improve downstream performance, demonstrating the benefit of our objective and supporting our theoretical analysis.

Table 7: Perplexity of models trained with the masked objective and the variable-length masked objective on the Pile and C4 datasets. Smaller perplexity indicates better generation ability. [0.15, 0.3] means uniformly sampling from this range during training.
Objective (Mask ratio) Pile (\downarrow) C4 (\downarrow)
Masked (0.15) 59.6 71.2
Masked (0.3) 50.2 63.8
Variable-length ([0.15, 0.3]) 45.1 59.4
Table 8: Reconstruction ability of models trained with the masked objective and the variable-length masked objective on the ImageNet dataset. There are two evaluation metrics: L2 loss and perceptual loss. [0.5, 0.75] means uniformly sampling from this range during training.
Objective (Mask ratio) L2 (\downarrow) Perceptual (\downarrow)
Masked (0.75) 0.127 0.201
Masked (0.5) 0.115 0.192
Variable-length ([0.5, 0.75]) 0.072 0.136

5.2 Variable-length Masked Objective Improves Generation Ability

In this part, we consider the variable-length masked objective proposed in Equation (5). Recall that this objective leverages a variable mask ratio to mitigate the length misalignment problem of masked models. We consider different sampling strategies for the mask ratio in language and vision tasks.

Language Tasks. Similar to the experiments on the diversity-enhanced autoregressive objective, the model is pretrained on the Pile dataset (Gao et al., 2020). We use a decoder-only Transformer (without the causal mask) with 16 layers and a hidden size of 768. The pretraining procedure follows the protocol proposed by Cramming (Geiping & Goldstein, 2023). Regarding the mask ratio, we explore three options: (1) retaining the original BERT value of 0.15, (2) opting for a larger ratio of 0.3, and (3) uniformly sampling from the range [0.15, 0.3], corresponding to our proposed variable-length objective. We adopt perplexity as the evaluation metric, which is the exponential of Equation (2). We evaluate the model on two datasets: (1) the tail of the Pile dataset, which is never shown to the model during pretraining; (2) a subset of the C4 dataset (Raffel et al., 2020), a colossal, cleaned version of Common Crawl's web crawl corpus.

Table 7 demonstrates that the model trained with the variable-length objective exhibits significantly lower perplexity than the masked models trained with a fixed mask ratio on both datasets. This suggests that the variable-length objective improves the generation ability of the model. These enhancements support our theoretical insights into the generation ability of masked models.

Vision Tasks. We select MAE (He et al., 2022) as the baseline model, with pretraining hyper-parameters consistent with those in the diversity-enhanced experiments. After pretraining, we perform image reconstruction tasks. The image is first randomly masked with a 25% ratio. Then the model predicts the masked part token by token in an autoregressive way. Reconstruction quality is evaluated with two metrics between the reconstructed and original images: L2 loss at the pixel level and LPIPS loss (Zhang et al., 2018) at the feature level. The average loss is calculated over the ImageNet validation set.
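A sketch of this evaluation (our own illustration with random tensors as stand-ins for the reconstructed and original images; it assumes the public lpips package, whose inputs are expected in [-1, 1]):

import torch
import lpips   # perceptual metric of Zhang et al. (2018), pip package "lpips"

perceptual = lpips.LPIPS(net='vgg')
recon = torch.rand(4, 3, 224, 224) * 2 - 1       # stand-in reconstructed images in [-1, 1]
orig = torch.rand(4, 3, 224, 224) * 2 - 1        # stand-in original images
l2 = ((recon - orig) ** 2).mean().item()         # pixel-level L2 loss
perc = perceptual(recon, orig).mean().item()     # feature-level LPIPS loss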

Table 8 reveals that the model trained with the variable-length objective experiences significantly lower (approximately 30%) reconstruction loss compared to the masked models trained with a fixed mask ratio. These enhancements underscore our theoretical insight that mitigating misalignment of unmasked lengths between pretraining and test examples is crucial for generation ability.

6 Conclusion

In this paper, we propose the first theoretical comparison between the two primary generative self-supervised learning (SSL) paradigms: autoregressive SSL and masked SSL. In particular, we establish theoretical guarantees for autoregressive and masked SSL on the most common evaluation tasks, i.e., classification and content generation. Through empirical and theoretical analyses, we delineate the advantages of masked SSL in classification tasks and the advantages of autoregressive SSL in content generation tasks. Building upon the insights from our theoretical analysis, we formulate principled guidelines for generative SSL and introduce two improved generative SSL objectives. Empirically, we verify that our proposed objectives significantly improve the performance of generative SSL models across both visual and language datasets.

Acknowledgements

Yisen Wang was supported by National Key R&D Program of China (2022ZD0160300), National Natural Science Foundation of China (92370129, 62376010), Beijing Nova Program (20230484344), and CCF-Baichuan-EB Fund.

Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

References

  • Bachmann et al. (2022) Bachmann, R., Mizrahi, D., Atanov, A., and Zamir, A. Multimae: Multi-modal multi-task masked autoencoders. In ECCV, 2022.
  • Brown et al. (2020) Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. In NeurIPS, 2020.
  • Chen et al. (2020a) Chen, M., Radford, A., Child, R., Wu, J., Jun, H., Luan, D., and Sutskever, I. Generative pretraining from pixels. In ICML, 2020a.
  • Chen et al. (2020b) Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations. In ICML, 2020b.
  • Chung (1997) Chung, F. R. Spectral graph theory, volume 92. American Mathematical Soc., 1997.
  • Deng et al. (2009) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
  • Devlin et al. (2019) Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019.
  • Dong et al. (2019) Dong, L., Yang, N., Wang, W., Wei, F., Liu, X., Wang, Y., Gao, J., Zhou, M., and Hon, H.-W. Unified language model pre-training for natural language understanding and generation. In NeurIPS, 2019.
  • Dosovitskiy et al. (2021) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
  • Dubois et al. (2023) Dubois, Y., Hashimoto, T., and Liang, P. Evaluating self-supervised learning via risk decomposition. In ICML, 2023.
  • Eckart & Young (1936) Eckart, C. and Young, G. The approximation of one matrix by another of lower rank. Psychometrika, 1936.
  • Ericsson et al. (2021) Ericsson, L., Gouk, H., and Hospedales, T. M. How well do self-supervised models transfer? In CVPR, 2021.
  • Gao et al. (2020) Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., et al. The pile: An 800gb dataset of diverse text for language modeling. ArXiv, 2020.
  • Geiping & Goldstein (2023) Geiping, J. and Goldstein, T. Cramming: Training a language model on a single gpu in one day. In ICML, 2023.
  • Gidaris et al. (2018) Gidaris, S., Singh, P., and Komodakis, N. Unsupervised representation learning by predicting image rotations. In ICLR, 2018.
  • Giorgi et al. (2021) Giorgi, J., Nitski, O., Wang, B., and Bader, G. Declutr: Deep contrastive learning for unsupervised textual representations. In ACL, 2021.
  • HaoChen et al. (2021) HaoChen, J. Z., Wei, C., Gaidon, A., and Ma, T. Provable guarantees for self-supervised deep learning with spectral contrastive loss. In NeurIPS, 2021.
  • He et al. (2022) He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick, R. Masked autoencoders are scalable vision learners. In CVPR, 2022.
  • Hua et al. (2022) Hua, T., Tian, Y., Ren, S., Raptis, M., Zhao, H., and Sigal, L. Self-supervision through random segments with autoregressive coding (randsac). In ICLR, 2022.
  • Kingma & Ba (2015) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In ICLR, 2015.
  • Kong et al. (2023) Kong, L., Ma, M. Q., Chen, G., Xing, E. P., Chi, Y., Morency, L.-P., and Zhang, K. Understanding masked autoencoders via hierarchical latent variable models. In CVPR, 2023.
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In NeurIPS, 2012.
  • Lan et al. (2020) Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. Albert: A lite bert for self-supervised learning of language representations. In ICLR, 2020.
  • Larsson et al. (2016) Larsson, G., Maire, M., and Shakhnarovich, G. Learning representations for automatic colorization. In ECCV, 2016.
  • Li et al. (2023) Li, S., Wu, D., Wu, F., Zang, Z., and Li, S. Z. Architecture-agnostic masked image modeling—from vit back to cnn. In ICML, 2023.
  • Liu et al. (2019) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. Roberta: A robustly optimized bert pretraining approach. ArXiv, 2019.
  • Merity (2016) Merity, S. The wikitext long term dependency language modeling dataset. Salesforce Metamind, 2016.
  • Oord et al. (2018) Oord, A. v. d., Li, Y., and Vinyals, O. Representation learning with contrastive predictive coding. ArXiv, 2018.
  • Peterson et al. (2019) Peterson, J. C., Battleday, R. M., Griffiths, T. L., and Russakovsky, O. Human uncertainty makes classification more robust. In ICCV, 2019.
  • Radford et al. (2018) Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al. Improving language understanding by generative pre-training. 2018.
  • Radford et al. (2019) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. Language models are unsupervised multitask learners. OpenAI blog, 2019.
  • Radford et al. (2021) Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In ICML, 2021.
  • Raffel et al. (2020) Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 2020.
  • Saunshi et al. (2019) Saunshi, N., Plevrakis, O., Arora, S., Khodak, M., and Khandeparkar, H. A theoretical analysis of contrastive unsupervised representation learning. In ICML, 2019.
  • Tian et al. (2020) Tian, Y., Sun, C., Poole, B., Krishnan, D., Schmid, C., and Isola, P. What makes for good views for contrastive learning. In NeurIPS, 2020.
  • Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In NeurIPS, 2017.
  • Wang et al. (2018a) Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. Glue: A multi-task benchmark and analysis platform for natural language understanding. In EMNLP, 2018a.
  • Wang et al. (2018b) Wang, C., Zhang, J., and Chen, H. Semi-autoregressive neural machine translation. In EMNLP, 2018b.
  • Wang et al. (2019) Wang, C., Li, M., and Smola, A. J. Language models with transformers. ArXiv, 2019.
  • Wang et al. (2022a) Wang, T., Roberts, A., Hesslow, D., Le Scao, T., Chung, H. W., Beltagy, I., Launay, J., and Raffel, C. What language model architecture and pretraining objective works best for zero-shot generalization? In ICML, 2022a.
  • Wang et al. (2022b) Wang, Y., Zhang, Q., Wang, Y., Yang, J., and Lin, Z. Chaos is a ladder: A new theoretical understanding of contrastive learning via augmentation overlap. In ICLR, 2022b.
  • Xie et al. (2022) Xie, Z., Zhang, Z., Cao, Y., Lin, Y., Bao, J., Yao, Z., Dai, Q., and Hu, H. Simmim: A simple framework for masked image modeling. In CVPR, 2022.
  • Yang et al. (2019) Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., and Le, Q. V. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS, 2019.
  • Zhang et al. (2023a) Zhang, Q., Wang, Y., and Wang, Y. On the generalization of multi-modal contrastive learning. In ICML, 2023a.
  • Zhang et al. (2023b) Zhang, Q., Wang, Y., and Wang, Y. Identifiable contrastive learning with automatic feature importance discovery. In NeurIPS, 2023b.
  • Zhang et al. (2018) Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
  • Zhao et al. (2022) Zhao, X., Du, T., Wang, Y., Yao, J., and Huang, W. Arcl: Enhancing contrastive learning with augmentation-robust representations. In ICLR, 2022.

Appendix A Proofs

A.1 Proof of Theorem 3.1

Proof.

Expanding the decomposition objective, we obtain

\|\bar{A}-FW^{\prime\top}\|^{2}
=\sum_{X_{i},X_{i}^{+}}\left(\bar{A}_{X_{i},X_{i}^{+}}-F_{X_{i}}(W^{\prime}_{X_{i}^{+}})^{\top}\right)^{2}
=\sum_{X_{i},X_{i}^{+}}\left(\frac{P_{M}(X_{i},X_{i}^{+})}{\sqrt{P_{C}(X_{i})P_{G}(X_{i}^{+})}}-\sqrt{P_{C}(X_{i})}f(X_{i})^{\top}\sqrt{P_{G}(X_{i}^{+})}(W_{X_{i}^{+}})^{\top}\right)^{2}
=\sum_{X_{i},X_{i}^{+}}\left(\frac{P_{M}(X_{i},X_{i}^{+})^{2}}{P_{C}(X_{i})P_{G}(X_{i}^{+})}+P_{C}(X_{i})P_{G}(X_{i}^{+})\left(f(X_{i})^{\top}(W_{X_{i}^{+}})^{\top}\right)^{2}-2P_{M}(X_{i},X_{i}^{+})f(X_{i})^{\top}(W_{X_{i}^{+}})^{\top}\right)
=\sum_{X_{i},X_{i}^{+}}\frac{P_{M}(X_{i},X_{i}^{+})^{2}}{P_{C}(X_{i})P_{G}(X_{i}^{+})}-2\mathbb{E}_{(X_{i},X_{i}^{+})}f(X_{i})^{\top}(W_{X_{i}^{+}})^{\top}+\mathbb{E}_{X_{i},X_{i}^{-}}\left(f(X_{i})^{\top}(W_{X_{i}^{-}})^{\top}\right)^{2}
=\sum_{X_{i},X_{i}^{+}}\frac{P_{M}(X_{i},X_{i}^{+})^{2}}{P_{C}(X_{i})P_{G}(X_{i}^{+})}-\mathbb{E}_{(X_{i},X_{i}^{+})}(Wf(X_{i}))^{\top}\mathbbm{1}_{X_{i}^{+}}+\mathbb{E}_{X_{i},X_{i}^{-}}((Wf(X_{i}))^{\top}\mathbbm{1}_{X_{i}^{-}})^{2}
={\mathcal{L}}_{SL}(f,W)+\mathrm{const}.

A.2 Proofs of Theorem 4.1

We first introduce a lemma that will be used in the following proofs.

Lemma A.1 (Lemma 3.1 in (HaoChen et al., 2021)).

For two learned embedding matrices $F$, $\widetilde{F}$, a diagonal matrix $D$ and an invertible matrix $R$, if $F=D\widetilde{F}R$, they have equal linear probing error, i.e.,

\mathcal{E}(F)=\mathcal{E}(\widetilde{F}).
Proof.

The proof mainly follows the theoretical framework proposed in Zhang et al. (2023a). According to the Eckart-Young Theorem (Eckart & Young, 1936), the optimal solutions $F^{\star},(W^{\prime})^{\star}$ of the decomposition objective ${\mathcal{L}}_{MF}(F,W^{\prime})=\|\bar{A}-FW^{\prime\top}\|^{2}$ satisfy:

F^{\star}(W^{\prime\star})^{\top}=U^{t}\operatorname{diag}(\sigma_{1},...,\sigma_{t})(V^{t})^{\top},

where we denote $\bar{A}=U\Sigma V^{\top}$ as the singular value decomposition of $\bar{A}$, $(\sigma_{1},...,\sigma_{t})$ are the $t$ largest singular values of $\bar{A}$, the columns of $U^{t}\in\mathbb{R}^{N\times t}$ are the eigenvectors corresponding to the $t$ largest singular values, and $V^{t}\in\mathbb{R}^{N_{D}\times t}$ is a unitary matrix. Then we respectively obtain the optimal solutions $F^{\star}$ and $(W^{\prime})^{\star}$:

F\displaystyle F^{\star} =UtDR,\displaystyle=U^{t}DR,
(W)\displaystyle(W^{\prime})^{\star} =Vtdiag(σ1,,σt)D1R,\displaystyle=V^{t}\operatorname{diag}(\sigma_{1},...,\sigma_{t})D^{-1}R,

where Rt×tR\in\mathbb{R}^{t\times t} is a unitary matrix and DD is an invertible diagonal matrix. Then we define a symmetric spectral loss:

SCL(f)=𝔼(Xi,Xi)f(Xi)f(Xi)+𝔼Xi,(Xi)(f(Xi)f((Xi)))2,{\mathcal{L}}_{SCL}(f)=-\mathbb{E}_{(X_{i},X_{i}^{\prime})}f(X_{i})^{\top}f(X^{\prime}_{i})+\mathbb{E}_{X_{i},(X_{i}^{\prime})^{-}}(f(X_{i})^{\top}f((X_{i}^{\prime})^{-}))^{2}, (7)

where (Xi,Xi)PT,PT(Xi,Xi)=𝔼Xi+PM(Xi|Xi+)PM(Xi|Xi+)(X_{i},X_{i}^{\prime})\sim P_{T},P_{T}(X_{i},X_{i}^{\prime})=\mathbb{E}_{X_{i}^{+}}P_{M}(X_{i}|X_{i}^{+})P_{M}(X_{i}^{\prime}|X_{i}^{+}), and Xi,XiPC(Xi)X_{i},X_{i}^{\prime}\sim P_{C}(X_{i}). Following the proof of theorem 3.1, the symmetric spectral loss is also equivalent to a matrix decomposition loss, i.e., SCL(f)=P~TFF2+const{\mathcal{L}}_{SCL}(f)=\|\tilde{P}_{T}-FF^{\top}\|^{2}+const, where (P~T)(Xi,Xi)=PT(Xi,Xi)𝒫C(Xi)𝒫C(Xi)(\tilde{P}_{T})_{(X_{i},X_{i}^{\prime})}=\frac{P_{T}(X_{i},X_{i}^{\prime})}{\sqrt{{\mathcal{P}}_{C}(X_{i}){\mathcal{P}}_{C}(X_{i}^{\prime})}} and (F)Xi=f(Xi)PC(Xi)(F)_{X_{i}}=\frac{f(X_{i})^{\top}}{\sqrt{P_{C}(X_{i})}}. Then we consider the objective P~TFF2\|\tilde{P}_{T}-FF^{\top}\|^{2}. Similar to the asymmetric decomposition objective, the optimal solution can be represented as:

(F)=UTtDTRT,\displaystyle(F^{\star})^{\prime}=U_{T}^{t}D_{T}R_{T},

where UTtN×tU_{T}^{t}\in\mathbb{R}^{N\times t} contains tt corresponding eigenvectors of tt largest singular values of P~T\tilde{P}_{T}, DTt×tD_{T}\in\mathbb{R}^{t\times t} is an invertible diagonal matrix and RTt×tR_{T}\in\mathbb{R}^{t\times t} is a unitary matrix. In the next step, we analyze the relationship between A¯\bar{A} and P~T\tilde{P}_{T}. Considering the (Xi,Xi)(X_{i},X_{i}^{\prime})-th element of A¯A¯\bar{A}\bar{A}^{\top}, we have

(A¯A¯)Xi,Xi\displaystyle(\bar{A}\bar{A}^{\top})_{X_{i},X_{i}^{\prime}} =xi+(A¯)Xi,Xi+(A¯)Xi,Xi+\displaystyle=\sum\limits_{x_{i}^{+}}(\bar{A})_{X_{i},X_{i}^{+}}(\bar{A})_{X_{i}^{\prime},X_{i}^{+}}
=Xi+𝒫M(Xi,Xi+)𝒫M(Xi,Xi+)𝒫G(Xi+)𝒫C(Xi)𝒫C(Xi)\displaystyle=\sum\limits_{X_{i}^{+}}\frac{{\mathcal{P}}_{M}(X_{i},X_{i}^{+}){\mathcal{P}}_{M}(X_{i}^{\prime},X_{i}^{+})}{{\mathcal{P}}_{G}(X_{i}^{+})\sqrt{{\mathcal{P}}_{C}(X_{i}){\mathcal{P}}_{C}(X_{i}^{\prime})}}
=1𝒫C(Xi)𝒫C(Xi)Xi+𝒫G(Xi+)𝒫M(Xi|Xi+)𝒫M(Xi|Xi+)\displaystyle=\frac{1}{\sqrt{{\mathcal{P}}_{C}(X_{i}){\mathcal{P}}_{C}(X_{i}^{\prime})}}\sum\limits_{X_{i}^{+}}{\mathcal{P}}_{G}(X_{i}^{+}){\mathcal{P}}_{M}(X_{i}|X_{i}^{+}){\mathcal{P}}_{M}(X_{i}^{\prime}|X_{i}^{+}) (𝒫M(Xi,Xi+)=𝒫M(Xi|Xi+)𝒫G(Xi+))\displaystyle\left({\mathcal{P}}_{M}(X_{i},X_{i}^{+})={\mathcal{P}}_{M}(X_{i}|X_{i}^{+}){\mathcal{P}}_{G}(X_{i}^{+})\right)
=𝔼Xi+𝒫M(Xi|Xi+)𝒫M(Xi|Xi+)𝒫C(Xi)𝒫C(Xi)\displaystyle=\frac{\mathbb{E}_{X_{i}^{+}}{\mathcal{P}}_{M}(X_{i}|X_{i}^{+}){\mathcal{P}}_{M}(X_{i}^{\prime}|X_{i}^{+})}{\sqrt{{\mathcal{P}}_{C}(X_{i}){\mathcal{P}}_{C}(X_{i}^{\prime})}}
=(P~T)Xi,Xi.\displaystyle=(\tilde{P}_{T})_{X_{i},X_{i}^{\prime}}.

We know that P~T=A¯A¯\tilde{P}_{T}=\bar{A}\bar{A}^{\top}, so P~T\tilde{P}_{T} and A¯\bar{A} share the same eigenvectors, i.e., Ut=UTtU^{t}=U_{T}^{t}. As D,D2,R,DT,RTD,D_{2},R,D_{T},R_{T} are invertible matrices and the product of the invertible matrices is still invertible, we obtain

F=(F)T,\displaystyle F^{\star}=(F^{\star})^{\prime}T,

where T=(DT)1(RT)1DRT=(D_{T})^{-1}(R_{T})^{-1}DR is an invertible matrix. With Lemma A.1, we obtain

(f)=(fSCL).\displaystyle\mathcal{E}(f^{\star})=\mathcal{E}(f^{\star}_{SCL}).

Then we introduce another lemma:

Lemma A.2 (Theorem 5.1 in (Zhang et al., 2023b)).

We denote α~\tilde{\alpha} as the probability that the conditional texts have different labels, i.e., α~=𝔼(Xi,Xi)PT𝟙[y(Xi)y(Xi)]\tilde{\alpha}=\mathbb{E}_{(X_{i},X_{i}^{\prime})\sim P_{T}}\mathbbm{1}[y(X_{i})\neq y(X_{i}^{\prime})]. Let fSCLf^{\star}_{SCL} be the optimal solutions of the symmetric spectral objective Then, we have the downstream classification guarantee:

(fSCL)c1i=t+1NDσ~i2+c2α~,\displaystyle\mathcal{E}(f^{\star}_{SCL})\leq c_{1}\sum\limits_{i=t+1}^{N_{D}}\tilde{\sigma}_{i}^{2}+c_{2}\cdot\tilde{\alpha}, (8)

where σ~i\tilde{\sigma}_{i} is the ii-th largest eigenvalue of P~T\tilde{P}_{T}, and c1,c2c_{1},c_{2} are constants.

And we continue the proofs.

Proof.

Combined with Lemma A.2, we obtain

(f)=(fSCL)c1i=t+1NDσ~i2+c2α~.\displaystyle\mathcal{E}(f^{\star})=\mathcal{E}(f_{SCL}^{\star})\leq c_{1}\sum\limits_{i=t+1}^{N_{D}}\tilde{\sigma}_{i}^{2}+c_{2}\cdot\tilde{\alpha}.

As P~T=A¯A¯\tilde{P}_{T}=\bar{A}\bar{A}^{\top}, we obtain σ~=σ2\tilde{\sigma}=\sigma^{2}. And for the labeling error, we have

α~\displaystyle\tilde{\alpha} =Xi,Xi𝒫T(Xi,Xi)𝟙[y(Xi)y(Xi)]\displaystyle=\sum\limits_{X_{i},X^{\prime}_{i}}{\mathcal{P}}_{T}(X_{i},X_{i}^{\prime})\mathbbm{1}[y(X_{i})\neq y(X^{\prime}_{i})]
=Xi,Xi𝔼Xi+[𝒫M(Xi|Xi+)𝒫M(Xi|Xi+)𝟙[y(Xi)y(Xi)]]\displaystyle=\sum\limits_{X_{i},X^{\prime}_{i}}\mathbb{E}_{X_{i}^{+}}\left[{\mathcal{P}}_{M}(X_{i}|X_{i}^{+}){\mathcal{P}}_{M}(X_{i}|X_{i}^{+})\mathbbm{1}[y(X_{i})\neq y(X^{\prime}_{i})]\right]
Xi,Xi𝔼Xi+[𝒫M(Xi|Xi+)𝒫M(Xi|Xi+)(𝟙[y(Xi)y(Xi+)]+𝟙[y(Xi)y(Xi+)])]\displaystyle\leq\sum\limits_{X_{i},X^{\prime}_{i}}\mathbb{E}_{X_{i}^{+}}\left[{\mathcal{P}}_{M}(X_{i}|X_{i}^{+}){\mathcal{P}}_{M}(X_{i}|X_{i}^{+})(\mathbbm{1}[y(X_{i})\neq y(X_{i}^{+})]+\mathbbm{1}[y(X^{\prime}_{i})\neq y(X_{i}^{+})])\right]
=2𝔼Xi+[𝒫M(Xi|Xi+)𝟙[y(Xi)y(Xi+)]]\displaystyle=2\mathbb{E}_{X_{i}^{+}}[{\mathcal{P}}_{M}(X_{i}|X_{i}^{+})\mathbbm{1}[y(X_{i})\neq y(X_{i}^{+})]]
=2𝔼Xi,Xi+𝟙[y(Xi)y(Xi+)]\displaystyle=2\mathbb{E}_{X_{i},X_{i}^{+}}\mathbbm{1}[y(X_{i})\neq y(X_{i}^{+})]
=2α.\displaystyle=2\alpha.

So we obtain

(f)c1i=t+1NDσi4+c2α,\mathcal{E}(f^{\star})\leq c_{1}\sum\limits_{i=t+1}^{N_{D}}\sigma_{i}^{4}+c_{2}^{\prime}\cdot\alpha, (9)

which finishes the proofs of Theorem 4.1. ∎

A.3 Proofs of Theorem 4.2

We first introduce a lemma that will be used in the following proofs.

Lemma A.3 (Theorem 4.2 in (Zhang et al., 2023a)).

For a block matrix:

P=(papapbpbpbpbpapapbpbpbpbpbpbpapapbpbpbpbpapapbpbpbpbpapapbpbpapa),P=\begin{pmatrix}p_{a}&\cdots&p_{a}&p_{b}&\cdots&p_{b}&\cdots&p_{b}&\cdots&p_{b}\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ p_{a}&\cdots&p_{a}&p_{b}&\cdots&p_{b}&\cdots&p_{b}&\cdots&p_{b}\\ p_{b}&\cdots&p_{b}&p_{a}&\cdots&p_{a}&\cdots&p_{b}&\cdots&p_{b}\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ p_{b}&\cdots&p_{b}&p_{a}&\cdots&p_{a}&\cdots&p_{b}&\cdots&p_{b}\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ p_{b}&\cdots&\cdots&\cdots&\cdots&p_{b}&\cdots&p_{a}&\cdots&p_{a}\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ p_{b}&\cdots&\cdots&\cdots&\cdots&p_{b}&\cdots&p_{a}&\cdots&p_{a}\end{pmatrix},

each row and each columns has sasbs_{a}\cdot s_{b} elements. Among them, sas_{a} elements are pap_{a} and else are pbp_{b}. Then the singular values of it are:

σ1=sapa+(sb1)sapb,\displaystyle\sigma_{1}=s_{a}\cdot p_{a}+(s_{b}-1)\cdot s_{a}\cdot p_{b},
σ2==σsb=sa|pbpa|,\displaystyle\sigma_{2}=\cdots=\sigma_{s_{b}}=s_{a}\cdot|p_{b}-p_{a}|,
σsa+1==σsasb=0.\displaystyle\sigma_{s_{a}+1}=\cdots=\sigma_{s_{a}\cdot s_{b}}=0.
Proof.

For the autoregressive objective, the co-occurrence matrix size is (TTs1Tr×STr)(\frac{T-T^{s}}{1-T}r\times STr).

As the tokens are uniformly selected, we know that AX,X+=1rsTi+1A_{X,X^{+}}=\frac{1}{rsT^{i+1}} (when i0)i\neq 0), where ii is the length of the condition texts XX. Consequently, the elements in the normalized A¯\bar{A} satisfy A¯X,X+=1Ti+1\bar{A}_{X,X^{+}}=\frac{1}{\sqrt{T^{i+1}}}.

As the samples of different classes are disconnected, we only need to consider the singular values of the sub-matrix of intra-class samples. We note that each sub-matrix has ss non-zero diagonal blocks. The ii-th block size is (Ti×Ti)(T^{i}\times T^{i}) and the elements in the same block are equal to 1Ti+1\frac{1}{\sqrt{T^{i+1}}}, i.e., the sub-matrix is:

[1T21T2001T21T20000000000001Ts+11Ts+1001Ts+11Ts+1].\begin{bmatrix}\frac{1}{\sqrt{T^{2}}}&\cdots&\frac{1}{\sqrt{T^{2}}}&0&\cdots&\cdots&\cdots&\cdots&0\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ \frac{1}{\sqrt{T^{2}}}&\cdots&\frac{1}{\sqrt{T^{2}}}&0&\cdots&\cdots&\cdots&\cdots&0\\ 0&\cdots&0&\cdots&\cdots&\cdots&0&\cdots&0\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ 0&\cdots&0&\cdots&\cdots&\cdots&0&\cdots&0\\ 0&\cdots&\cdots&\cdots&\cdots&0&\frac{1}{\sqrt{T^{s+1}}}&\cdots&\frac{1}{\sqrt{T^{s+1}}}\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ 0&\cdots&\cdots&\cdots&\cdots&0&\frac{1}{\sqrt{T^{s+1}}}&\cdots&\frac{1}{\sqrt{T^{s+1}}}\end{bmatrix}.

Then we consider the matrix A¯A¯\bar{A}^{\top}\bar{A}, it is still a block matrix. It has s{s} non-zero diagonal blocks and the size of each block is T×TT\times T The elements are 1T\frac{1}{T}, i.e., the sub-matrix is

[1T1T001T1T0000000000001T1T001T1T].\begin{bmatrix}\frac{1}{T}&\cdots&\frac{1}{T}&0&\cdots&\cdots&\cdots&\cdots&0\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ \frac{1}{T}&\cdots&\frac{1}{T}&0&\cdots&\cdots&\cdots&\cdots&0\\ 0&\cdots&0&\cdots&\cdots&\cdots&0&\cdots&0\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ 0&\cdots&0&\cdots&\cdots&\cdots&0&\cdots&0\\ 0&\cdots&\cdots&\cdots&\cdots&0&\frac{1}{T}&\cdots&\frac{1}{T}\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ 0&\cdots&\cdots&\cdots&\cdots&0&\frac{1}{T}&\cdots&\frac{1}{T}\end{bmatrix}.

With Lemma A.3, the eigenvalues of the sub-matrix are

σ1=1,\displaystyle\sigma_{1}=1,
σ2==σs=T1T=1,\displaystyle\sigma_{2}=\cdots=\sigma_{s}=T\cdot\frac{1}{T}=1,
σs==σsTsT=0.\displaystyle\sigma_{s}=\cdots=\sigma_{sT\cdot sT}=0.

Combing different the samples of different classes, the eigenvalues of A¯A¯\bar{A}^{\top}\bar{A} are

σ1==σr=1,\displaystyle\sigma_{1}=\cdots=\sigma_{r}=1,
σr+1==σrs=T1T=1,\displaystyle\sigma_{r+1}=\cdots=\sigma_{r\cdot s}=T\cdot\frac{1}{T}=1,
σrs+1==σsTsT=0.\displaystyle\sigma_{r\cdot s+1}=\cdots=\sigma_{sT\cdot sT}=0.

With the definition of singular values, the singular values of A¯\bar{A} are

{σj=1,jrs,σj=0,else,\begin{cases}\sigma_{j}=1,&j\leq rs,\\ \sigma_{j}=0,&\text{else},\\ \end{cases}

For masked models, the co-occurrence matrix size is (𝒞s(1ρm)sT(1ρm)sr×STr)(\mathcal{C}_{s}^{(1-\rho_{m})s}T^{(1-\rho_{m})s}r\times STr). When the positions of conditional texts and target tokens are overlapped, the joint probability is 0. So each row in the co-occurrence matrix has 𝒞s1sm\mathcal{C}_{s-1}^{s_{m}} zeros, where sm=s(1ρm)s_{m}=s(1-\rho_{m}). And the else elements is 1𝒞s1sm(T)smsTr\frac{1}{\mathcal{C}_{s-1}^{s_{m}}(T)^{s_{m}}sTr}. By normalizing the co-occurrence matrix, the non-zero elements become A¯X,X+=ST𝒞ssm(T)sm𝒞s1sm\bar{A^{\prime}}_{X,X^{+}}=\frac{\sqrt{ST\mathcal{C}^{s_{m}}_{s}(T)^{s_{m}}}}{\mathcal{C}_{s-1}^{s_{m}}}.

Similar to autoregressive SSL, we calculate the intra-class sub-matrix of A¯A¯\bar{A^{\prime}}^{\top}\bar{A^{\prime}} and obtain:

(1(ssm)T1(ssm)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T1(ssm)T1(ssm)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T1(ssm)T1(ssm)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T1(ssm)T1(ssm)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T1(ssm)T1(ssm)T(ssm1)(ssm)(s1)T(ssm1)(ssm)(s1)T1(ssm)T1(ssm)T),\begin{pmatrix}\frac{1}{(s-s_{m})T}&\cdots&\frac{1}{(s-s_{m})T}&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ \frac{1}{(s-s_{m})T}&\cdots&\frac{1}{(s-s_{m})T}&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}\\ \frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\frac{1}{(s-s_{m})T}&\cdots&\frac{1}{(s-s_{m})T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ \frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\frac{1}{(s-s_{m})T}&\cdots&\frac{1}{(s-s_{m})T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ \frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\cdots&\cdots&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{1}{(s-s_{m})T}&\cdots&\frac{1}{(s-s_{m})T}\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ \frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\cdots&\cdots&\cdots&\frac{(s-s_{m}-1)}{(s-s_{m})(s-1)T}&\cdots&\frac{1}{(s-s_{m})T}&\cdots&\frac{1}{(s-s_{m})T}\end{pmatrix},

With Lemma A.3, the eigenvalues of the sub-matrix are

σ1=1,\displaystyle\sigma_{1}=1,
σ2==σs=T(1(ssm)Ts1sm(ssm)(s1)T)=sm(ssm)(s1)\displaystyle\sigma_{2}=\cdots=\sigma_{s}=T\cdot\left(\frac{1}{(s-s_{m})T}-\frac{s-1-s_{m}}{(s-s_{m})(s-1)T}\right)=\frac{s_{m}}{(s-s_{m})(s-1)}\,
σs==σsTsT=0.\displaystyle\sigma_{s}=\cdots=\sigma_{sT\cdot sT}=0.

Combing different the samples of different classes, the eigenvalues of A¯A¯\bar{A^{\prime}}^{\top}\bar{A^{\prime}} are

σ1==σr=1,\displaystyle\sigma_{1}=\cdots=\sigma_{r}=1,
σr+1==σrs=sm(ssm)(s1),\displaystyle\sigma_{r+1}=\cdots=\sigma_{r\cdot s}=\frac{s_{m}}{(s-s_{m})(s-1)},
σrs+1==σsTsT=0.\displaystyle\sigma_{r\cdot s+1}=\cdots=\sigma_{sT\cdot sT}=0.

When sm<(s1)s_{m}<(s-1), we obtain sm(ssm)(s1)<1\frac{s_{m}}{(s-s_{m})(s-1)}<1. With the definition of singular values, the singular values of A¯\bar{A^{\prime}} are

{σj=1,jr,σj=sm(ssm)(s1)<1,r<jrs,σj=0,else.\begin{cases}\sigma_{j}=1,&j\leq r,\\ \displaystyle\sigma_{j}=\sqrt{\frac{s_{m}}{(s-s_{m})(s-1)}}<1,&r<j\leq rs,\\ \sigma_{j}=0,&\text{else}.\\ \end{cases}

A.4 Proofs of Theorem 4.3

Proof.

Given a sample xix_{i}, let X<kiX_{<k}^{i} be the first k1k-1 tokens of xix_{i}, (Xki)+(X_{k}^{i})^{+} be the kk-th token of xix_{i} and (Xki)(X_{k}^{i})^{-} is any independent token. Expanding (Wfmask(X<ki))𝟙(Xki)+-(Wf_{\text{mask}}(X_{<k}^{i}))^{\top}\mathbbm{1}_{(X_{k}^{i})^{+}} and we have

(Wfmask(X<ki))𝟙(Xki)+=12Wfmask(X<ki)𝟙(Xki)+2212Wfmask(X<ki)2212𝟙(Xki)+22=12Wfmask(X<ki)𝟙(Xki)+221.\begin{split}-(Wf_{mask}(X_{<k}^{i}))^{\top}\mathbbm{1}_{(X_{k}^{i})^{+}}&=\frac{1}{2}\left\lVert Wf_{mask}(X_{<k}^{i})-\mathbbm{1}_{(X_{k}^{i})^{+}}\right\rVert_{2}^{2}-\frac{1}{2}\left\lVert Wf_{mask}(X_{<k}^{i})\right\rVert_{2}^{2}-\frac{1}{2}\left\lVert\mathbbm{1}_{(X_{k}^{i})^{+}}\right\rVert_{2}^{2}\\ &=\frac{1}{2}\left\lVert Wf_{mask}(X_{<k}^{i})-\mathbbm{1}_{(X_{k}^{i})^{+}}\right\rVert_{2}^{2}-1.\end{split} (10)

Let the mask mkm_{k} satisfy that {xi,1,,xi,k1}\{x_{i,1},\cdots,x_{i,k-1}\} are all included in xi[mk]x_{i}[m_{k}] and xi,kxi[1mk]x_{i,k}\in x_{i}[1-m_{k}]. We denote Xmki=xi[mk]X_{m_{k}}^{i}=x_{i}[m_{k}] and have the following equation:

12Wfmask(X<ki)𝟙(Xki)+2212Wfmask(Xmki)Wfmask(X<ki)22+12Wfmask(Xmki)𝟙(Xki)+22,\frac{1}{2}\left\lVert Wf_{mask}(X_{<k}^{i})-\mathbbm{1}_{(X_{k}^{i})^{+}}\right\rVert_{2}^{2}\leq\frac{1}{2}\left\lVert Wf_{mask}(X_{m_{k}}^{i})-Wf_{mask}(X_{<k}^{i})\right\rVert_{2}^{2}+\frac{1}{2}\left\lVert Wf_{mask}(X_{m_{k}}^{i})-\mathbbm{1}_{(X_{k}^{i})^{+}}\right\rVert_{2}^{2}, (11)

The upper bound of 12Wfmask(Xmki)𝟙(Xki)+22\displaystyle\frac{1}{2}\left\lVert Wf_{\text{mask}}(X_{m_{k}}^{i})-\mathbbm{1}_{(X_{k}^{i})^{+}}\right\rVert_{2}^{2} is:

12Wfmask(Xmki)𝟙(Xki)+22=1(Wfmask(Xmki))𝟙(Xki)+1+δmask.\displaystyle\frac{1}{2}\left\lVert Wf_{mask}(X_{m_{k}}^{i})-\mathbbm{1}_{(X_{k}^{i})^{+}}\right\rVert_{2}^{2}=1-(Wf_{mask}(X_{m_{k}}^{i}))^{\top}\mathbbm{1}_{(X_{k}^{i})^{+}}\leq 1+\delta_{mask}. (12)

Next, we consider the term 12Wfmask(Xmki)Wfmask(X<ki)22=12W(fmask(Xmki)fmask(X<ki))22\displaystyle\frac{1}{2}\left\lVert Wf_{mask}(X_{m_{k}}^{i})-Wf_{mask}(X_{<k}^{i})\right\rVert_{2}^{2}=\frac{1}{2}\left\lVert W(f_{mask}(X_{m_{k}}^{i})-f_{mask}(X_{<k}^{i}))\right\rVert_{2}^{2}. We denote Xm,>ki=XmkiX<kiX_{m,>k}^{i}=X_{m_{k}}^{i}-X_{<k}^{i} and g(xi,u,xi,v,xi,w)=xi,uWQ(xi,vWK)xi,wWVg(x_{i,u},x_{i,v},x_{i,w})=x_{i,u}W^{Q}(x_{i,v}W^{K})^{\top}x_{i,w}W^{V}. Expanding fmask(Xmki)f_{\text{mask}}(X_{m_{k}}^{i}) and fmask(X<ki)f_{\text{mask}}(X_{<k}^{i}) separately and we obtain:

fmask(Xmki)=XmkiWQ(XmkiWK)XmkiWV=xi,uXmkixi,vXmkixi,wXmkig(xi,u,xi,v,xi,w)=xi,uX<kixi,vXmkixi,wXmkig(xi,u,xi,v,xi,w)+xi,uXm,>kixi,vXmkixi,wXmkig(xi,u,xi,v,xi,w)=xi,uX<kixi,vX<kixi,wX<kig(xi,u,xi,v,xi,w)+xi,uX<kixi,vXm,>kixi,wX<kig(xi,u,xi,v,xi,w)+xi,uX<kixi,vX<kixi,wXm,>kig(xi,u,xi,v,xi,w)+xi,uX<kixi,vXm,>kixi,wXm,>kig(xi,u,xi,v,xi,w)+xi,uXm,>kixi,vXmkixi,wXmkig(xi,u,xi,v,xi,w)\begin{split}f_{mask}(X_{m_{k}}^{i})&=X_{m_{k}}^{i}W^{Q}(X_{m_{k}}^{i}W^{K})^{\top}X_{m_{k}}^{i}W^{V}\\ &=\sum_{x_{i,u}\in X_{m_{k}}^{i}}\sum_{x_{i,v}\in X_{m_{k}}^{i}}\sum_{x_{i,w}\in X_{m_{k}}^{i}}g(x_{i,u},x_{i,v},x_{i,w})\\ &=\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m_{k}}^{i}}\sum_{x_{i,w}\in X_{m_{k}}^{i}}g(x_{i,u},x_{i,v},x_{i,w})+\sum_{x_{i,u}\in X_{m,>k}^{i}}\sum_{x_{i,v}\in X_{m_{k}}^{i}}\sum_{x_{i,w}\in X_{m_{k}}^{i}}g(x_{i,u},x_{i,v},x_{i,w})\\ &=\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{<k}^{i}}\sum_{x_{i,w}\in X_{<k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})+\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{<k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})\\ &+\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{<k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})+\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})\\ &+\sum_{x_{i,u}\in X_{m,>k}^{i}}\sum_{x_{i,v}\in X_{m_{k}}^{i}}\sum_{x_{i,w}\in X_{m_{k}}^{i}}g(x_{i,u},x_{i,v},x_{i,w})\end{split} (13)

with

fmask(X<ki)=xi,uX<kixi,vX<kixi,wX<kig(xi,u,xi,v,xi,w).f_{mask}(X_{<k}^{i})=\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{<k}^{i}}\sum_{x_{i,w}\in X_{<k}^{i}}g(x_{i,u},x_{i,v},x_{i,w}). (14)

For any (xi,u,xi,v,xi,w)(x_{i,u},x_{i,v},x_{i,w}) and (xi,a,xi,b,xi,c)(x_{i,a},x_{i,b},x_{i,c}), we define:

εa,b,cu,v,w=g(xi,u,xi,v,xi,w)g(xi,a,xi,b,xi,c).\varepsilon_{a,b,c}^{u,v,w}=g(x_{i,u},x_{i,v},x_{i,w})-g(x_{i,a},x_{i,b},x_{i,c}).

We can obtain:

xi,uX<kixi,vXm,>kixi,wX<kig(xi,u,xi,v,xi,w)=|X<ki||Xm,>ki||X<ki|g(xi,a,xi,b,xi,c)+xi,uX<kixi,vXm,>kixi,wX<kiεa,b,cu,v,w,\begin{split}\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{<k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})&=\left\lvert X_{<k}^{i}\right\rvert\left\lvert X_{m,>k}^{i}\right\rvert\left\lvert X_{<k}^{i}\right\rvert g(x_{i,a},x_{i,b},x_{i,c})\\ &+\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{<k}^{i}}\varepsilon^{u,v,w}_{a,b,c},\end{split}
xi,uX<kixi,vX<kixi,wXm,>kig(xi,u,xi,v,xi,w)=|X<ki||X<ki||Xm,>ki|g(xi,a,xi,b,xi,c)+xi,uX<kixi,vX<kixi,wXm,>kiεa,b,cu,v,w,\begin{split}\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{<k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})&=\left\lvert X_{<k}^{i}\right\rvert\left\lvert X_{<k}^{i}\right\rvert\left\lvert X_{m,>k}^{i}\right\rvert g(x_{i,a},x_{i,b},x_{i,c})\\ &+\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{<k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}\varepsilon^{u,v,w}_{a,b,c},\end{split}
xi,uX<kixi,vXm,>kixi,wXm,>kig(xi,u,xi,v,xi,w)=|X<ki||Xm,>ki||Xm,>ki|g(xi,a,xi,b,xi,c)+xi,uX<kixi,vXm,>kixi,wXm,>kiεa,b,cu,v,w,\begin{split}\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})&=\left\lvert X_{<k}^{i}\right\rvert\left\lvert X_{m,>k}^{i}\right\rvert\left\lvert X_{m,>k}^{i}\right\rvert g(x_{i,a},x_{i,b},x_{i,c})\\ &+\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}\varepsilon^{u,v,w}_{a,b,c},\end{split}
xi,uXm,>kixi,vXmkixi,wXmkig(xi,u,xi,v,xi,w)=|Xm,>ki||Xmki||Xmki|g(xi,a,xi,b,xi,c)+xi,uXm,>kixi,vXmkixi,wXmkiεa,b,cu,v,w,\begin{split}\sum_{x_{i,u}\in X_{m,>k}^{i}}\sum_{x_{i,v}\in X_{m_{k}}^{i}}\sum_{x_{i,w}\in X_{m_{k}}^{i}}g(x_{i,u},x_{i,v},x_{i,w})&=\left\lvert X_{m,>k}^{i}\right\rvert\left\lvert X_{m_{k}}^{i}\right\rvert\left\lvert X_{m_{k}}^{i}\right\rvert g(x_{i,a},x_{i,b},x_{i,c})\\ &+\sum_{x_{i,u}\in X_{m,>k}^{i}}\sum_{x_{i,v}\in X_{m_{k}}^{i}}\sum_{x_{i,w}\in X_{m_{k}}^{i}}\varepsilon^{u,v,w}_{a,b,c},\end{split}

where {xi,a,xi,b,xi,c}X<ki\{x_{i,a},x_{i,b},x_{i,c}\}\in X_{<k}^{i},|X<ki|=k1,|Xmki|=s(1ρm),|Xm,>ki|=s(1ρm)k+1\left\lvert X_{<k}^{i}\right\rvert=k-1,\left\lvert X_{m_{k}}^{i}\right\rvert=s(1-\rho_{m}),\left\lvert X_{m,>k}^{i}\right\rvert=s(1-\rho_{m})-k+1. It is easy to to calculate the summation:

|X<ki||Xm,>ki||X<ki|+|X<ki||X<ki||Xm,>ki|+|X<ki||Xm,>ki||Xm,>ki|+|Xm,>ki||Xmki||Xmki|=(s(1ρm))3(k1)3.\begin{split}&\left\lvert X_{<k}^{i}\right\rvert\left\lvert X_{m,>k}^{i}\right\rvert\left\lvert X_{<k}^{i}\right\rvert+\left\lvert X_{<k}^{i}\right\rvert\left\lvert X_{<k}^{i}\right\rvert\left\lvert X_{m,>k}^{i}\right\rvert+\left\lvert X_{<k}^{i}\right\rvert\left\lvert X_{m,>k}^{i}\right\rvert\left\lvert X_{m,>k}^{i}\right\rvert+\left\lvert X_{m,>k}^{i}\right\rvert\left\lvert X_{m_{k}}^{i}\right\rvert\left\lvert X_{m_{k}}^{i}\right\rvert\\ &=(s(1-\rho_{m}))^{3}-(k-1)^{3}.\end{split}

Let

εa,b,c=xi,uX<kixi,vXm,>kixi,wX<kiεa,b,cu,v,w+xi,uX<kixi,vX<kixi,wXm,>kiεa,b,cu,v,w+xi,uX<kixi,vXm,>kixi,wXm,>kiεa,b,cu,v,w+xi,uXm,>kixi,vXmkixi,wXmkiεa,b,cu,v,w\begin{split}\varepsilon_{a,b,c}&=\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{<k}^{i}}\varepsilon^{u,v,w}_{a,b,c}+\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{<k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}\varepsilon^{u,v,w}_{a,b,c}\\ &+\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}\varepsilon^{u,v,w}_{a,b,c}+\sum_{x_{i,u}\in X_{m,>k}^{i}}\sum_{x_{i,v}\in X_{m_{k}}^{i}}\sum_{x_{i,w}\in X_{m_{k}}^{i}}\varepsilon^{u,v,w}_{a,b,c}\end{split}

we have

xi,uX<kixi,vXm,>kixi,wX<kig(xi,u,xi,v,xi,w)+xi,uX<kixi,vX<kixi,wXm,>kig(xi,u,xi,v,xi,w)+xi,uX<kixi,vXm,>kixi,wXm,>kig(xi,u,xi,v,xi,w)+xi,uXm,>kixi,vXmkixi,wXmkig(xi,u,xi,v,xi,w)=((s(1ρm))3(k1)3)g(xi,a,xi,b,xi,c)+εa,b,c,\begin{split}&\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{<k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})+\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{<k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})\\ +&\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})+\sum_{x_{i,u}\in X_{m,>k}^{i}}\sum_{x_{i,v}\in X_{m_{k}}^{i}}\sum_{x_{i,w}\in X_{m_{k}}^{i}}g(x_{i,u},x_{i,v},x_{i,w})\\ =&((s(1-\rho_{m}))^{3}-(k-1)^{3})g(x_{i,a},x_{i,b},x_{i,c})+\varepsilon_{a,b,c},\end{split} (15)

with

εa,b,c22((s(1ρm))3(k1)3)η.\left\lVert\varepsilon_{a,b,c}\right\rVert_{2}^{2}\leq((s(1-\rho_{m}))^{3}-(k-1)^{3})\eta. (16)

Since equation 15 holds for any xi,a,xi,b,xi,cX<kix_{i,a},x_{i,b},x_{i,c}\in X_{<k}^{i}, taking the average over (xi,a,xi,b,xi,c)(x_{i,a},x_{i,b},x_{i,c}) yields

xi,uX<kixi,vXm,>kixi,wX<kig(xi,u,xi,v,xi,w)+xi,uX<kixi,vX<kixi,wXm,>kig(xi,u,xi,v,xi,w)+xi,uX<kixi,vXm,>kixi,wXm,>kig(xi,u,xi,v,xi,w)+xi,uXm,>kixi,vXmkixi,wXmkig(xi,u,xi,v,xi,w)=((s(1ρm))3(k1)31)fmask(X<ki)+1(k1)3xi,aX<kixi,bX<kixi,cX<kiεa,b,c.\begin{split}&\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{<k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})+\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{<k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})\\ +&\sum_{x_{i,u}\in X_{<k}^{i}}\sum_{x_{i,v}\in X_{m,>k}^{i}}\sum_{x_{i,w}\in X_{m,>k}^{i}}g(x_{i,u},x_{i,v},x_{i,w})+\sum_{x_{i,u}\in X_{m,>k}^{i}}\sum_{x_{i,v}\in X_{m_{k}}^{i}}\sum_{x_{i,w}\in X_{m_{k}}^{i}}g(x_{i,u},x_{i,v},x_{i,w})\\ =&\left(\frac{(s(1-\rho_{m}))^{3}}{(k-1)^{3}}-1\right)f_{mask}(X_{<k}^{i})+\frac{1}{(k-1)^{3}}\sum_{x_{i,a}\in X_{<k}^{i}}\sum_{x_{i,b}\in X_{<k}^{i}}\sum_{x_{i,c}\in X_{<k}^{i}}\varepsilon_{a,b,c}.\end{split} (17)

Let εk=1(k1)3xi,aX<kixi,bX<kixi,cX<kiεa,b,c\varepsilon_{k}=\frac{1}{(k-1)^{3}}\sum_{x_{i,a}\in X_{<k}^{i}}\sum_{x_{i,b}\in X_{<k}^{i}}\sum_{x_{i,c}\in X_{<k}^{i}}\varepsilon_{a,b,c}. Using equation 16 we have

εk22((s(1ρm))3(k1)3)η.\left\lVert\varepsilon_{k}\right\rVert_{2}^{2}\leq((s(1-\rho_{m}))^{3}-(k-1)^{3})\eta. (18)

Combining equation 17 and equation 13 we have

fmask(Xmki)=(s(1ρm))3(k1)3fmask(X<ki)+εk,f_{mask}(X_{m_{k}}^{i})=\frac{(s(1-\rho_{m}))^{3}}{(k-1)^{3}}f_{mask}(X_{<k}^{i})+\varepsilon_{k},

which implies that

fmask(Xmki)fmask(X<ki)=((s(1ρm))3(k1)31)fmask(Xmki)εk.f_{mask}(X_{m_{k}}^{i})-f_{mask}(X_{<k}^{i})=\left(\frac{(s(1-\rho_{m}))^{3}}{(k-1)^{3}}-1\right)f_{mask}(X_{m_{k}}^{i})-\varepsilon_{k}. (19)

The upper bound of 12Wfmask(Xmki)Wfmask(X<ki)22\displaystyle\frac{1}{2}\left\lVert Wf_{mask}(X_{m_{k}}^{i})-Wf_{mask}(X_{<k}^{i})\right\rVert_{2}^{2} is given by

12Wfmask(Xmki)Wfmask(X<ki)22=12((s(1ρm))3(k1)31)Wfmask(Xmki)Wεk2212((s(1ρm))3(k1)31)Wfmask(Xmki)22+12Wεk2212((s(1ρm))3(k1)3(k1)3)2+12((s(1ρm))3(k1)3)W22η=wk22(k1)6+wk2W22η.\begin{split}\frac{1}{2}&\left\lVert Wf_{mask}(X_{m_{k}}^{i})-Wf_{mask}(X_{<k}^{i})\right\rVert_{2}^{2}=\frac{1}{2}\left\lVert\left(\frac{(s(1-\rho_{m}))^{3}}{(k-1)^{3}}-1\right)Wf_{mask}(X_{m_{k}}^{i})-W\varepsilon_{k}\right\rVert_{2}^{2}\\ &\leq\frac{1}{2}\left\lVert\left(\frac{(s(1-\rho_{m}))^{3}}{(k-1)^{3}}-1\right)Wf_{mask}(X_{m_{k}}^{i})\right\rVert_{2}^{2}+\frac{1}{2}\left\lVert W\varepsilon_{k}\right\rVert_{2}^{2}\\ &\leq\frac{1}{2}\left(\frac{(s(1-\rho_{m}))^{3}-(k-1)^{3}}{(k-1)^{3}}\right)^{2}+\frac{1}{2}((s(1-\rho_{m}))^{3}-(k-1)^{3})\left\lVert W\right\rVert_{2}^{2}\eta\\ &=\frac{w_{k}^{2}}{2(k-1)^{6}}+\frac{w_{k}}{2}\left\lVert W\right\rVert_{2}^{2}\eta.\end{split}

This result, combined with equation 10 to equation 12, imply that

(Wfmask(X<ki))𝟙(Xki)+wk22(k1)6+wk2W22η+δmask-(Wf_{\text{mask}}(X_{<k}^{i}))^{\top}\mathbbm{1}_{(X_{k}^{i})^{+}}\leq\frac{w_{k}^{2}}{2(k-1)^{6}}+\frac{w_{k}}{2}\left\lVert W\right\rVert_{2}^{2}\eta+\delta_{mask} (20)

Meanwhile, the upper bound of ((Wfmask(X<ki))𝟙(Xki))2((Wf_{mask}(X_{<k}^{i}))^{\top}\mathbbm{1}_{(X_{k}^{i})^{-}})^{2} is given by:

((Wfmask(X<ki))𝟙(Xki))2Wfmask(X<ki)22𝟙(Xki)22=1.((Wf_{mask}(X_{<k}^{i}))^{\top}\mathbbm{1}_{(X_{k}^{i})^{-}})^{2}\leq\left\lVert Wf_{mask}(X_{<k}^{i})\right\rVert_{2}^{2}\left\lVert\mathbbm{1}_{(X_{k}^{i})^{-}}\right\rVert_{2}^{2}=1. (21)

The two upper bounds imply the following result:

gen(fmask)=𝔼(X<ki,(Xki)+)(Wfmask(X<ki))𝟙(Xki)++𝔼X<ki,(Xki)((Wfmask(X<ki))𝟙(Xki))2=𝔼xik(Wfmask(X<ki))𝟙(Xki)+s(1ρm)+𝔼X<ki,(Xki)((Wfmask(X<ki))𝟙(Xki))2k(wk2(k1)6+wkW22η)2s(1ρm)+δmask+1.\begin{split}\mathcal{L}_{gen}(f_{mask})&=-\mathbb{E}_{(X_{<k}^{i},(X_{k}^{i})^{+})}(Wf_{\text{mask}}(X_{<k}^{i}))^{\top}\mathbbm{1}_{(X_{k}^{i})^{+}}+\mathbb{E}_{X_{<k}^{i},(X_{k}^{i})^{-}}((Wf_{\text{mask}}(X_{<k}^{i}))^{\top}\mathbbm{1}_{(X_{k}^{i})^{-}})^{2}\\ &=-\mathbb{E}_{x_{i}}\frac{\sum_{k}(Wf_{\text{mask}}(X_{<k}^{i}))^{\top}\mathbbm{1}_{(X_{k}^{i})^{+}}}{s(1-\rho_{m})}+\mathbb{E}_{X_{<k}^{i},(X_{k}^{i})^{-}}((Wf_{\text{mask}}(X_{<k}^{i}))^{\top}\mathbbm{1}_{(X_{k}^{i})^{-}})^{2}\\ &\leq\frac{\sum_{k}\left(\frac{w_{k}^{2}}{(k-1)^{6}}+w_{k}\left\lVert W\right\rVert_{2}^{2}\eta\right)}{2s(1-\rho_{m})}+\delta_{mask}+1.\end{split} (22)

which finishes the proof. ∎

Appendix B Experimental Details

B.1 Model details in Table 1

In the comparison of GLUE score, the autoregressive model is chosen to be GPT-2 medium (345M parameters) (Radford et al., 2019) and the masked model is chosen to be BERT large (340M parameters) (Devlin et al., 2019). In the comparison of perplexity, the autoregressive model is chosen to be GPT-2 medium (345M parameters) and the masked model is chosen to be BERT-Large-CAS (395M parameters) (Wang et al., 2019).

B.2 More details in Figure 2

We employ GPT and BERT models that have been trained on a subset of the Pile dataset (Gao et al., 2020) using the Cramming protocol (Geiping & Goldstein, 2023). Following pretraining, we randomly sample several examples from the remaining Pile dataset and obtain output features using the pretrained models. We choose 8000 features for each model and calculate their similarity by computing the inner product for every pair of features. Subsequently, we sort these similarities and select the top 10510^{5} pairs to form Figure 2.

B.3 More details on Two-Stream Attention Transformer.

In our objective of Equation 6, we predict several tokens given one conditional sequence. It seems that we can modify the causal mask to suit the dependency relationship between tokens. But in fact, we are not capable of modeling the objective if we use a single stream. This is because of two requirements that are contradictory in a standard decoder-only Transformer architecture, which have been discussed in XLNet (Yang et al., 2019): (1) Suppose the output of the network in position ztz_{t} is parameterized by gθ(x𝐳<t,zt)g_{\theta}(x_{\mathbf{z}<t},z_{t}). In order to predict the token xztx_{z_{t}}, the network output in the ztz_{t} position in one layer gθ(x𝐳<t,zt)g_{\theta}(x_{\mathbf{z}<t},z_{t}) should only use the position ztz_{t} and not the content xztx_{z_{t}}, (2) to predict the other tokens xzjx_{z_{j}} with j>tj>t, gθ(x𝐳<t,zt)g_{\theta}(x_{\mathbf{z}<t},z_{t}) should also encode the content xztx_{z_{t}} to provide full contextual information. Therefore, it is not enough with only one stream. Since the two-stream attention proposed by XLNet facing the same problem works well, we build upon the two-stream attention mechanism proposed by XLNet and extend it by excavating important usage of the causal masks.

In the two-stream attention Transformer, the content stream hθh_{\theta} encodes the contextual information, and the query stream gθg_{\theta} predicts the targets with the help of the content stream. The two stream share the same weights of the attention block but differ in the causal mask. Intuitively, the causal masks in the two streams enable us to easily formulate various dependency relationship between tokens. The main difference for the two causal masks is that the content stream should ensure xztx_{z_{t}} can be attended to itself while the query stream is the contrary. The two-stream attention allows us to arbitrarily change the sequential order without confronting the inconsistency as mentioned above. For each self-attention layer l=1,,Ll=1,\dots,L, the two streams of representations are schematically updated with a shared set of parameters. In the ll-th layer, the outputs of a self-attention head 𝐀h(l)\mathbf{A}_{h}^{(l)} and 𝐀g(l)\mathbf{A}_{g}^{(l)} in the two-stream are computed in the form of:

𝐐h\displaystyle\mathbf{Q}_{h} =𝐇(l1)𝐖Ql,𝐊h=𝐇(l1)𝐖Kl,𝐕h=𝐇(l1)𝐖Vl\displaystyle=\mathbf{H}^{(l-1)}\mathbf{W}^{l}_{Q},\ \mathbf{K}_{h}=\mathbf{H}^{(l-1)}\mathbf{W}^{l}_{K},\ \mathbf{V}_{h}=\mathbf{H}^{(l-1)}\mathbf{W}^{l}_{V} (23)
𝐐g\displaystyle\mathbf{Q}_{g} =𝐆(l1)𝐖Ql,𝐊g=𝐇(l1)𝐖Kl,𝐕g=𝐇(l1)𝐖Vl\displaystyle=\mathbf{G}^{(l-1)}\mathbf{W}^{l}_{Q},\ \mathbf{K}_{g}=\mathbf{H}^{(l-1)}\mathbf{W}^{l}_{K},\ \mathbf{V}_{g}=\mathbf{H}^{(l-1)}\mathbf{W}^{l}_{V}
𝐀h(l)\displaystyle\mathbf{A}_{h}^{(l)} =softmax(𝐐h𝐊hdk+𝐌h)𝐕h,𝐀g(l)=softmax(𝐐g𝐊gdk+𝐌g)𝐕g,\displaystyle=\operatorname{softmax}(\frac{\mathbf{Q}_{h}\mathbf{K}_{h}^{\top}}{\sqrt{d_{k}}}+\mathbf{M}_{h})\mathbf{V}_{h},\mathbf{A}_{g}^{(l)}=\operatorname{softmax}(\frac{\mathbf{Q}_{g}\mathbf{K}_{g}^{\top}}{\sqrt{d_{k}}}+\mathbf{M}_{g})\mathbf{V}_{g},

where 𝐇(l1)\mathbf{H}^{(l-1)} and 𝐆(l1)\mathbf{G}^{(l-1)}, the representations before the (l1)(l-1)-th layer of the content stream and the query stream, are linearly projected to queries, keys and values using shared trainable parameter matrices 𝐖Ql,𝐖Kl,𝐖Vl\mathbf{W}^{l}_{Q},\mathbf{W}^{l}_{K},\mathbf{W}^{l}_{V} respectively, and dkd_{k} is the dimension of the representations. 𝐌h,𝐌gs×s\mathbf{M}_{h},\mathbf{M}_{g}\in\mathbb{R}^{s\times s} are the causal masks in the content stream and the query stream, respectively, which are the key factors in our framework. The value of each element (𝐌)ij(\mathbf{M})_{ij} in either causal mask can only be 0 or -\infty, where (𝐌)ij=0(\mathbf{M})_{ij}=0 means the jj-th token can be attended to the ii-th token and (𝐌)ij=(\mathbf{M})_{ij}=-\infty means the jj-th token cannot be attended to the ii-th token. This indicates that the causal mask has a one-to-one correspondence with the dependency relationship between tokens. Therefore, we only need to alter the causal masks 𝐌h\mathbf{M}_{h} and 𝐌g\mathbf{M}_{g} to adapt to different choices of the group setting, which eliminate the need of using multiple architectures. We will show how to construct the corresponding attention masks.

Note that the value of each element (𝐌)ij(\mathbf{M})_{ij} in either causal mask can only be 0 or -\infty, where (𝐌)ij=0(\mathbf{M})_{ij}=0 means the jj-th token can be attended to the ii-th token and (𝐌)ij=(\mathbf{M})_{ij}=-\infty means the jj-th token cannot be attended to the ii-th token. This implicates that by finding out the dependency between each token we can determine the causal mask. We provide the causal masks corresponding to Equation (6): Let f(t)f(t) denote the index of the group to which xtx_{t} belongs. In this situation, xix_{i} will be dependent on xjx_{j} if f(i)>f(j)f(i)>f(j). Besides, tokens in the same group is dependent on each other in the content stream to provide contextual information. The causal masks for the two streams should be

(𝐌h)ij={0f(i)f(j)>0f(j)>f(i)>0,(𝐌g)ij={0f(i)>f(j)>0f(j)f(i)>0(\mathbf{M}_{h})_{ij}=\left\{\begin{array}[]{ll}0&f(i)\geq f(j)>0\\ -\infty&f(j)>f(i)>0\end{array}\right.,\ (\mathbf{M}_{g})_{ij}=\left\{\begin{array}[]{ll}0&f(i)>f(j)>0\\ -\infty&f(j)\geq f(i)>0\end{array}\right. (24)

Appendix C Ablation Study

We conduct more ablation experiments on the choice of tt in the vision domain. We vary tt in [1,2,3,4][1,2,3,4] and investigate the mixture of t=1t=1 and t=2t=2. We use ImageNet as the dataset. The results are shown in the Table 9. The results indicate that there exists a sweet point between t=2t=2 and t=3t=3. This means that tt should not be too large. The mixture choice of tt does not bring much improvement as well.

P~MFVFL2\|\tilde{P}_{M}-F_{V}F_{L}^{\top}\|^{2} (25)
Table 9: Ablation study on the choice of tt in the diversity-enhanced autoregressive objective.
tt Linear probing accuracy Finetuning accuracy
1 (original autoregressive objective) 56.2 82.5
2 (used in the previous experiments) 59.4 82.9
3 59.6 82.7
4 58.8 82.5
mixture of 1 and 2 58.1 82.5