
How to Understand Masked Autoencoders

Shuhao Cao
Washington University in St. Louis
[email protected]
Peng Xu
University of Oxford
[email protected]
David A. Clifton
University of Oxford
[email protected]
Corresponding Author
Abstract

“Masked Autoencoders (MAE) Are Scalable Vision Learners” revolutionized self-supervised learning: it not only achieves state-of-the-art results for image pre-training, but also marks a milestone that bridges the gap between visual and linguistic masked autoencoding (BERT-style) pre-training. However, to our knowledge, no theoretical perspective has yet explained the powerful expressivity of MAE. In this paper we propose, for the first time, a unified theoretical framework that provides a mathematical understanding of MAE. Specifically, we explain the patch-based attention of MAE using an integral kernel under a non-overlapping domain decomposition setting. To help the research community further comprehend the main reasons for the great success of MAE, we pose five questions based on our framework and answer them with mathematical rigor using insights from operator theory.

1 Introduction

“Masked Autoencoders (MAE) Are Scalable Vision Learners” [31] (illustrated in Figure 1) recently introduced a ground-breaking self-supervised paradigm for image pretraining. This seminal method makes contributions in at least the following respects:

  (1)

    MAE achieves the state of the art in self-supervised pretraining on the ImageNet-1K dataset [21], outperforming strong competitors (e.g., BEiT [5]) by a clear margin in a simpler and faster manner. In particular, an MAE with a vanilla Vision Transformer (ViT) [23] Huge backbone achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. This inspiring result motivates researchers to reconsider ViT variants in self-supervised contexts. Moreover, MAE achieves strong transfer performance on downstream tasks, beating supervised pretraining, and shows promising scaling behavior.

  (2)

    MAE is a generative learning method and beats the contrastive learning competitors (e.g., MoCo v3 [18]) that have dominated vision self-supervised pretraining in recent years.

  (3)

    MAE is a milestone that bridges the gap between visual and linguistic masked autoencoding (BERT-style [22]) pretraining. Work prior to MAE had failed to make masked autoencoding pretraining effective in the visual domain. Thus, MAE paves a path such that “Self-supervised learning in vision may now be embarking on a similar trajectory as in NLP” [31].

Refer to caption
Figure 1: Architecture of the Masked Autoencoder (MAE) in [31], with Vision Transformer (ViT) backbone. (Used under CC BY-NC 4.0 license from https://github.com/facebookresearch/mae.) Best viewed in color.

The great success of MAE is interpreted by its authors as “We hypothesize that this behavior occurs by way of a rich hidden representation inside the MAE” [31]. However, to the best of our knowledge, to date there has been no theoretical viewpoint explaining the powerful expressivity of MAE. The theoretical analysis of the expressivity of ViT-based models is so challenging that it remains under-studied.

Our Contributions. In this paper, we propose, for the first time, a unified theoretical framework that provides a mathematical understanding of MAE. In particular, we not only rethink MAE by regarding each image's patch embeddings as learned basis functions in certain Hilbert spaces, instead of as a 2D pixel grid, but also explain the patch-based attention of MAE from the operator-theoretic perspective of an integral kernel under a non-overlapping domain decomposition setting.

To help researchers further grasp the main reasons for the great success of MAE, we pose five questions based on our proposed unified theoretical framework and, for the first time, partially answer them using insights from rigorous mathematical theory:

Q1: How is the representation space of MAE formed, optimized, and propagated through layers?
A: We illustrate that the attention mechanism in MAE is equivalent to a learnable integral kernel transform, and that its representation space is dynamically updated by functions from the Barron space, with the positional embeddings acting as coordinates in a high-dimensional feature space. See Section 4.1.

Q2: Why and how does patchifying contribute to MAE?
A: We prove that random patch selection in MAE preserves the information of the original image while reducing computational cost, under common assumptions on the low-rank nature of images. This lays a theoretical foundation for patch-based neural networks, including but not limited to MAE and ViT. See Section 3.

Q3: Why are the internal representations in the lower and higher layers of MAE not significantly different?
A: We provide a new theoretical justification, for the first time, by proving the stability of the internal representations. A main reason for the great success of MAE is that the scaled dot-product attention in the ViT backbone yields stable representations during cross-layer propagation. Furthermore, we view the skip-connection in the attention mechanism from a completely new perspective: it explicitly represents an approximate solution to a Tikhonov-regularized Fredholm integral equation. See Section 5.

Q4: Is the decoder unimportant for MAE?
A: No. We argue that the decoder is vital to helping the encoder build better representations, even though the decoder is discarded after pretraining. Owing to the larger number of patch embeddings in the MAE decoder, the representation space is enriched much more often by functions from the Barron space, which helps the encoder learn a better basis. See Section 6.

Q5: Does MAE reconstruct each masked patch merely by inference from its adjacent patches?
A: No. We prove that the latent representations of the masked patches are interpolated globally, based on an inter-patch topology that is learned by the attention mechanism. See Section 6.

Overall, our proposed unified theoretical framework provides a mathematical understanding of MAE and can be used to understand the intrinsic traits of a wide range of patch- and self-attention-based models, not limited to MAE or ViT.

The rest of this paper is organized as follows: Section 2 summarizes related work. Section 3 analyzes patchification, and, based on our proposed theoretical framework, mathematical understandings of the encoder and decoder of MAE are presented in Sections 4 through 6. We draw conclusions in Section 7.

2 Related Work

Vision Transformer (ViT) [23] is a strong image-oriented network based on a standard Transformer [44] encoder. It has an image-specific input pipeline in which the input image is split into fixed-size patches. After passing through a linear embedding layer and being combined with position embeddings, the patch-wise sequence is encoded by a standard Transformer encoder. ViT and its variants have been widely used in various computer vision tasks (e.g., recognition [43], detection [7], segmentation [36]) and work well for both supervised [43] and self-supervised [18, 15] visual learning. Recently, some pioneering works have provided further understanding of ViT, e.g., the robustness of its internal representations [38] and the continuous behavior of its latent representation propagation [39]. However, the theoretical analysis of the expressivity of ViT-based models is so challenging that it remains under-studied.

Masked Autoencoder (MAE) [31] is essentially a denoising autoencoder [45], with a straightforward motivation: randomly mask patches of the input image and reconstruct the missing pixels. MAE relies on two key designs: (i) an asymmetric encoder-decoder architecture, in which the encoder takes in only the visible patches and a lightweight decoder reconstructs the target image; (ii) a high masking ratio (e.g., 75%) for the input image, which yields a nontrivial and meaningful self-supervisory task. Its great success is attributed to “a rich hidden representation inside the MAE” [31]. However, to the best of our knowledge, to date there has been no theoretical viewpoint explaining the powerful expressivity of MAE.

Mathematical Theory Related to Attention. Self-attention [44] essentially processes each input as a fully-connected graph [51]. Therefore, as mentioned above, we start from the more general perspective of topological spaces [13] to rethink MAE, regarding each image as a graph connecting patches instead of a 2D pixel grid. Meanwhile, we study the patch-based attention of MAE through the operator theory for Fredholm integral equations [1], formulating the dot-product attention matrix as an integral kernel [29], i.e., a learned Green's function used to reconstruct solutions to partial differential equations, e.g., [26]. The skip-connection can then be explained from two perspectives: as the term corresponding to Tikhonov regularization [27, 48], or through Neural Ordinary Differential Equations (ODEs) [17]. The patchification, on the other hand, is connected with Domain Decomposition Methods (DDM) [42, 52].

3 Patch is All We Need?

Patchifying has become a standard practice in Transformer-based CV models since ViT [23]; see also [30, 37]. In this section, we try to answer the question “Why and how does patchifying contribute to MAE?” from the perspective of domain decomposition methods for solving an integral equation.

We first consider the full grid $\Omega\simeq I\otimes I$ with $I=[0,1/N,\dots,(N-1)/N]$; the patchification of $\Omega$ is then essentially a non-overlapping domain decomposition method (DDM) [16]. DDM is commonly used in solving integral/partial differential equations of forms similar to (10), e.g., see [42], as well as image denoising problems [52].

DDM practices “divide and conquer”, a general methodology suitable for many science and engineering disciplines: the original domain of interest is decomposed into much smaller subdomains, and each subproblem associated with a subdomain can be solved more efficiently by the same algorithm, especially when the algorithm has a super-linear dependence on the grid size. The computational complexity and storage requirement of the attention mechanism have a quadratic dependence on $n$, which translates to a quartic dependence on the coarse grid size $p$ (the number of patches along one axis), since $n=p^{2}$. Without patchification, the operation in (6) is of $\mathcal{O}(N^{4})$, which renders the algorithm impractical.
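As a quick sanity check of this scaling, the snippet below compares the token counts with and without patchification; the 224-pixel input with 16-pixel patches is an illustrative configuration chosen to match the 196 patches quoted later in the text.

```python
# Attention cost scales quadratically in the token count:
# without patchification every pixel is a token (N^2 of them),
# with 16x16 patches there are only p^2 tokens.
N, patch = 224, 16
p = N // patch                      # 14 patches per axis
tokens_pixels, tokens_patches = N * N, p * p
print(tokens_pixels ** 2)           # ~2.5e9 pairwise interactions, O(N^4)
print(tokens_patches ** 2)          # ~3.8e4 pairwise interactions, O(p^4)
print(tokens_pixels ** 2 // tokens_patches ** 2)   # ~65,536x reduction
```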

In this setup, $\Omega$ is first partitioned into equal-sized non-overlapping subdomains: $\Omega=\cup_{i=1}^{n}\Omega_{i}$, with $\operatorname{int}(\Omega_{i})\cap\operatorname{int}(\Omega_{j})=\emptyset$ as well as $\operatorname{int}(\Omega_{i})\simeq\operatorname{int}(\Omega_{j})$ if $i\neq j$. For simplicity of presentation, we consider a single-channel image, denote the space of bounded variation (BV) functions on $\Omega$ by $BV(\Omega)$, and define, for any unpatchified image $\mathbf{u}\in BV(\Omega)\subset\mathbb{R}^{N\times N}$,

|\mathbf{u}|_{BV(\Omega)}:=\sum_{x_{i}\in\Omega}\left\{\sum_{x_{j}\in N_{x_{i}}}w_{ij}(\mathbf{u}(x_{i})-\mathbf{u}(x_{j}))^{2}\right\}^{\frac{1}{2}}, \qquad (1)

where $N_{x_{i}}$ denotes the grid points connected to $x_{i}$ through an undirected edge $x_{i}x_{j}$, and $w_{ij}$ are positive weights measuring the interaction strength. When viewed in the continuum, such that $\mathbf{u}$ is a discrete sampling of a function $u:\mathbb{R}^{2}\to\mathbb{R}^{+}$, this norm approximates $|\nabla u|_{L^{1}}$; it measures the smoothness of an image and is widely used in graph/image recovery, e.g., see [8].
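A minimal NumPy sketch of the discrete seminorm (1) follows, assuming a 4-neighbor grid graph with a constant weight $w_{ij}\equiv w$; the toy image and the helper name are illustrative only.

```python
import numpy as np

def bv_seminorm(u, w=1.0):
    """Discrete BV seminorm of a single-channel image `u` (Eq. (1)),
    assuming a 4-neighbor grid graph with constant weight `w`."""
    total = 0.0
    N = u.shape[0]
    for i in range(N):
        for j in range(N):
            sq = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:
                    sq += w * (u[i, j] - u[ni, nj]) ** 2
            total += np.sqrt(sq)
    return total

u = np.random.rand(16, 16)   # a toy 16x16 "image"
print(bv_seminorm(u))        # larger for rougher images, smaller for smoother ones
```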

After the patchification we have $\mathbf{u}=\sum_{i=1}^{n^{2}}E_{i}\mathbf{u}_{i}$, where $E_{i}:\Omega_{i}\to\Omega$ is an extension operator that maps the patch pixel values on $\Omega_{i}$ to $\Omega$ by zero-padding. It is straightforward to see that the representation consists of the concatenated patch matrices and lies in $\Pi^{n^{2}}_{i=1}BV(\Omega_{i})$. Here we denote $\mathcal{Y}:=\oplus_{i=1}^{n^{2}}E_{i}\,BV(\Omega_{i})$. By (1), the original BV space is only a subspace of the decomposed product space, i.e., $BV(\Omega)\subset\mathcal{Y}$, while the reverse is not true. The reason is that, after patchification, the underlying function spaces have completely lost the inter-patch topological relations, e.g., how the pixel intensities change across patch boundaries. There is also another set of analytical questions to answer: how should the patch size be chosen? If bigger patches are used, a bigger embedding dimension is required, which makes it harder to form the bases of the representation space. On the other hand, if smaller patches are used, then, because of the aforementioned quartic dependence on the grid size, the inter-patch topology becomes much harder and more expensive to learn.
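For concreteness, here is a minimal NumPy sketch of the non-overlapping patchification and of the zero-padding extension operators $E_i$ described above; the helper names and the tiny 8-pixel grid are illustrative choices, not part of any released implementation.

```python
import numpy as np

def patchify(u, Nc):
    """Split an N x N image into non-overlapping Nc x Nc patches,
    returned as an (n^2, Nc*Nc) matrix of flattened patches."""
    N = u.shape[0]
    n = N // Nc
    patches = u.reshape(n, Nc, n, Nc).transpose(0, 2, 1, 3)
    return patches.reshape(n * n, Nc * Nc)

def extend(patch, i, N, Nc):
    """Extension operator E_i: place the i-th patch back on the full
    N x N grid and zero-pad everywhere else."""
    n = N // Nc
    r, c = divmod(i, n)
    out = np.zeros((N, N))
    out[r * Nc:(r + 1) * Nc, c * Nc:(c + 1) * Nc] = patch.reshape(Nc, Nc)
    return out

N, Nc = 8, 4
u = np.arange(N * N, dtype=float).reshape(N, N)
patches = patchify(u, Nc)
# Summing the zero-padded extensions recovers the original image.
recon = sum(extend(patches[i], i, N, Nc) for i in range(patches.shape[0]))
assert np.allclose(recon, u)
```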

In the following proposition, we show that if an image has a low-rank structure, then there exists a semi-randomly chosen set of patches that can represent the original image. This selection has a very high mask ratio; that is, the representation uses only a small fraction of the patches, e.g., $p$ patches out of all of them. Meanwhile, we assume that there exists an embedding able to represent the original image, in the sense that the difference between the reconstruction and the original image is small when measured in $|\cdot|_{BV(\Omega)}$.

Following standard practice [23, 43], we consider the ViT-Base case of MAE, where the feature embedding dimension matches the pixel count of any given patch: $\mathbf{u}\in\mathbb{R}^{N\times N}\simeq\mathbb{R}^{n^{2}\times d}$, i.e., for the patch embeddings in the MAE encoder, there exists a continuous embedding $\mathbb{R}^{n^{2}\times d}\hookrightarrow\mathbb{R}^{N\times N}$ from the patch embeddings to images.

Proposition 3.1 (Existence of a near optimal patch embedding).

For any $\mathbf{u}\in BV(\Omega)\subset\mathbb{R}^{N\times N}$ and an equal-sized $n\times n$ patchification of $\mathbf{u}$, let the patch grid size be $N_{c}:=N/n$. Assume there exists a rank-$r$ approximation to $\mathbf{u}$ with $r<N_{c}$ and error $\epsilon$, and that a unique patch is randomly chosen from each row such that the set of columns of the final selection is a permutation of $\{1,\cdots,n^{2}\}$. Then there exists an embedding $\mathbf{y}\in\mathbb{R}^{p\times d}$ such that

\|\mathbf{u}-R(\mathbf{y})\|_{BV(\Omega)}<C(r,N_{c},p)\,\epsilon, \qquad (2)

where $R$ is a rank-preserving reconstruction operator $R:\mathbb{R}^{p\times d}\to\mathbb{R}^{N\times N}$.
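The following toy NumPy experiment is in the spirit of Proposition 3.1 (it is an illustration, not its proof): for an exactly rank-$r$ patch matrix, a small random subset of patches already spans the whole matrix. The reconstruction coefficients below are computed with full knowledge of the target only to verify that no information is lost; in MAE, such combinations are what the network learns.

```python
import numpy as np
rng = np.random.default_rng(0)

# An exactly rank-r "image", viewed as an (n^2, d) matrix of flattened
# patches, can be recovered from a subset of patches spanning its row space.
n, Nc, r = 14, 16, 4                 # 14x14 patches of 16x16 pixels, rank 4
d = Nc * Nc
U = rng.standard_normal((n * n, r))
V = rng.standard_normal((r, d))
patches = U @ V                      # rank-r patch matrix (noiseless)

keep = rng.choice(n * n, size=3 * r, replace=False)   # very high mask ratio
Y = patches[keep]                    # the "visible" patches

# Project every patch onto the row space of Y via least squares;
# exact (up to round-off) for noiseless rank-r data when Y spans it.
coeff, *_ = np.linalg.lstsq(Y.T, patches.T, rcond=None)
recon = (Y.T @ coeff).T
print(np.linalg.norm(recon - patches) / np.linalg.norm(patches))  # ~1e-14
```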

4 Attention in MAE: a Kernel Perspective

In this section, we reexamine the attention block present in both the encoder and decoder layers of MAE from multiple perspectives. The first and foremost understanding comes from the operator theory for integral equations. By formulating the scaled dot-product attention as a nonlinear integral transform, learning the latent representation is equivalent to learning a set of basis functions in a Hilbert space. Moreover, upon adopting the kernel interpretation of the attention mechanism, a single patch is characterized not only by its own learned embedding, but also by how it interacts with all other patches (the reproducing property of a kernel).

4.1 Self-attention: A Nonlinear Integral Kernel Transform

For MAE [31], each input image patch is projected to a 1D token embedding. Following the practice in [31], we also omit the shared class token added to the embedding, which can be removed from the pretraining of MAE. Given an attention block in an encoder or decoder layer of MAE, the input and output are defined respectively as $\mathbf{y}_{\text{in}}:=\mathbf{y},\,\mathbf{y}_{\text{out}}\in\mathbb{R}^{p\times d}$. When an image has not been masked, it has $p=n^{2}$ patches, where $n$ is a positive integer, and the embedding dimension is $d$. A positional embedding $\mathcal{X}:=\{x_{i}\}_{i=1}^{p}\subset\mathbb{R}^{1\times d}$, with $x_{i}$ associated with the $i$-th patch, is added to $\mathbf{y}$. Each $x_{i}$ can be viewed as a coordinate in a high-dimensional space. We note that the patch ordering map $i\mapsto x_{i}$ is injective, since the first component of $x_{i}$ is the polar coordinate in 2D, and as such the relative position of each patch can be recovered from $x_{i}$. For simplicity, the analysis in this section applies to a single head.

Refer to caption
Figure 2: The $i$-th patch embedding from the scaled dot-product attention is a convex combination whose coefficients are the attention weights, which encodes the interactions among all patch embeddings. Best viewed in color.

Here we briefly review self-attention from [44]. The query $\mathbf{Q}$, key $\mathbf{K}$, and value $\mathbf{V}$ are generated by three learnable projection matrices $\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V}\in\mathbb{R}^{d\times d}$: $\mathbf{Q}=\mathbf{y}\mathbf{W}^{Q}$, $\mathbf{K}=\mathbf{y}\mathbf{W}^{K}$, $\mathbf{V}=\mathbf{y}\mathbf{W}^{V}$. The scaled dot-product attention produces $\mathbf{z}\in\mathbb{R}^{p\times d}$:

\mathbf{z}=\operatorname{Attn}_{s}(\mathbf{y}):=\operatorname{Softmax}\left(\mathbf{Q}\mathbf{K}^{\top}/\sqrt{d}\right)\mathbf{V}. \qquad (3)

Then the softmax attention $\operatorname{Attn}(\cdot)$ with a global receptive field acts as the following nonlinear mapping:

\operatorname{Attn}:\ \mathbb{R}^{p\times d}\to\mathbb{R}^{p\times d},\qquad \mathbf{y}\mapsto\operatorname{LN}\big(\mathbf{y}+\mathbf{z}+\operatorname{FFN}(\operatorname{LN}(\mathbf{y}+\mathbf{z}))\big), \qquad (4)

where $\operatorname{LN}(\cdot)$ denotes Layer Normalization (LN) [2], which is essentially a learnable column scaling with a shift, and $\operatorname{FFN}(\cdot)$ is a standard two-layer feedforward neural network (FFN) applied to the embedding of each patch.
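To make (3)-(4) concrete, here is a minimal single-head NumPy sketch of the attention block (random placeholder weights; learnable LayerNorm parameters and multi-head splitting are omitted). It also checks that every attention row sums to one, so each output row of the scaled dot-product attention is a convex combination of the rows of $\mathbf{V}$, a property used repeatedly below.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_block(y, Wq, Wk, Wv, ffn, eps=1e-6):
    """Single-head attention block as in (3)-(4): scaled dot-product
    attention, skip-connections, LayerNorm (without learnable scale/shift),
    and a two-layer FFN."""
    p, d = y.shape
    Q, K, V = y @ Wq, y @ Wk, y @ Wv
    A = softmax(Q @ K.T / np.sqrt(d))          # (p, p), each row sums to 1
    z = A @ V                                  # each z_i is a convex combination
                                               # of the rows of V
    def ln(u):
        m, s = u.mean(-1, keepdims=True), u.std(-1, keepdims=True)
        return (u - m) / (s + eps)
    h = ln(y + z)
    return ln(y + z + ffn(h)), A

rng = np.random.default_rng(0)
p, d = 49, 64
y = rng.standard_normal((p, d))
Wq, Wk, Wv = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3)]
W1 = rng.standard_normal((d, 4 * d)) / np.sqrt(d)
W2 = rng.standard_normal((4 * d, d)) / np.sqrt(4 * d)
ffn = lambda h: np.maximum(h @ W1, 0.0) @ W2   # two-layer FFN with ReLU
out, A = attention_block(y, Wq, Wk, Wv, ffn)
assert np.allclose(A.sum(axis=1), 1.0)         # attention rows are convex weights
```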

Upon closer inspection of the scaled dot-product attention (3), for $\mathbf{z}$, the $j$-th element of its $i$-th row $\mathbf{z}_{i}$ is obtained by

(\mathbf{z}_{i})_{j}=\mathbf{A}_{i\bullet}\cdot\mathbf{v}^{j}, \qquad (5)

and the $i$-th row $\mathbf{A}_{i\bullet}$ of the attention matrix $\mathbf{A}$ is

\mathbf{A}_{i\bullet}=\frac{e^{(\mathbf{Q}\mathbf{K}^{\top}/\sqrt{d})_{i\bullet}}}{\sum_{j=1}^{p}e^{(\mathbf{Q}\mathbf{K}^{\top}/\sqrt{d})_{ij}}}=:\operatorname{Softmax}(\mathbf{q}_{i}\mathbf{K}^{\top}/\sqrt{d}), \qquad (6)

i.e., using the diagram in Figure 2, we have

\mathbf{z}_{i}=\mathbf{A}_{i\bullet}\begin{pmatrix}\text{---}\ \mathbf{v}_{1}\ \text{---}\\ \vdots\\ \text{---}\ \mathbf{v}_{p}\ \text{---}\end{pmatrix}=\sum_{j=1}^{p}\mathbf{A}_{ij}\mathbf{v}_{j}. \qquad (7)

Due to the softmax, $\mathbf{A}_{i\bullet}$ contains the coefficients of a convex combination of the vector representations $\{\mathbf{v}_{j}\}_{j=1}^{p}$. This basis set $\{\mathbf{v}_{j}\}_{j=1}^{p}$ forms the row space of $\mathbf{V}$, and each row of the output $\mathbf{z}$ is formed by multiplying with $\mathbf{A}$. As a result, the scaled dot-product attention has the following basis expansion form:

\mathbf{z}_{i}:=\sum_{j=1}^{p}A(\mathbf{q}_{i},\mathbf{k}_{j})\,\mathbf{v}_{i}, \qquad (8)

where $\mathbf{u}_{i}$ denotes the $i$-th row of a latent representation $\mathbf{u}\in\{\mathbf{z},\mathbf{Q},\mathbf{K},\mathbf{V}\}$. The key term in (8), $A(\cdot,\cdot)$, denotes the attention kernel, which maps each pair of patch embeddings, represented by the rows of $\mathbf{Q}$ and $\mathbf{K}$, to a measure of how they interact. From (8) we see that the representation space of an encoder layer in MAE is spanned by the row space of $\mathbf{V}$ and is nonlinearly updated layer by layer. That is, the embedding of each patch serves as a basis function of the representation space of the current attention block, whereas the row-wise attention weights are barycentric coordinates of a simplex in $\mathbb{R}^{1\times d}$.

Here we further assume that there exists a set of feature maps for the query, key, and value, e.g., see [20]. For $\mathbf{z}$, with the feature map $z\in\mathcal{V}$ that maps $\mathbb{R}^{1\times d}\to BV(\Omega_{i})$, i.e., $\mathbf{z}_{i}=z(x_{i})$, we can then define an asymmetric kernel function $\tilde{\kappa}(\cdot,\cdot):\mathbb{R}^{1\times d}\times\mathbb{R}^{1\times d}\to\mathbb{R}$,

A(\mathbf{q}_{i},\mathbf{k}_{j})=\alpha^{-1}(x_{i})\langle\mathbf{q}_{i},\mathbf{k}_{j}\rangle=\alpha^{-1}(x_{i})\langle q(x_{i}),k(x_{j})\rangle=:\alpha^{-1}(x_{i})\,\tilde{\kappa}(x_{i},x_{j}), \qquad (9)

where $\tilde{\alpha}(x_{i}):=\sum_{l=1}^{d}\big(e^{\mathbf{q}_{i}\mathbf{K}^{\top}/\sqrt{d}}\big)_{l}$. Now the discrete kernel $A(\cdot,\cdot)$ with vectorial inputs is rewritten as an integral kernel $\tilde{\kappa}(\cdot,\cdot)$ whose inputs are positions. As a result, using the formulation above, we can express the scaled dot-product attention approximately as a nonlinear integral transform:

z(x)=\alpha^{-1}(x)\sum_{x^{\prime}\in\mathcal{X}}\big(q(x)\cdot k(x^{\prime})\big)v(x^{\prime})\,\delta_{x^{\prime}}\approx\alpha^{-1}(x)\int_{\omega}\tilde{\kappa}(x,x^{\prime})v(x^{\prime})\,d\mu(x^{\prime}), \qquad (10)

where $\delta_{x}$ is the Dirac measure associated with the position $x$. For the second relation, with a slight abuse of the order of presentation, we assume that there exist $\tilde{\kappa}(\cdot,\cdot)$ and a Borel measure $\mu(\cdot)$ such that $\tilde{\kappa}(\cdot,\cdot)$ has a reproducing property under integration against $d\mu(\cdot)$, which will be elaborated in the next paragraph. In this integral, the image domain $\omega$ is approximated by the patchified grid $\mathcal{X}\simeq(0,\frac{1}{n},\cdots,\frac{n-1}{n})\otimes(0,\frac{1}{n},\cdots,\frac{n-1}{n})$. Returning to the “embedding $\approx$ basis” perspective on the representation space, the backpropagation from the output $z(\cdot)$ that updates the weights to obtain a new $v(\cdot)$ during pretraining can be interpreted as an iterative procedure for solving the Fredholm integral equation of the first kind in (10).

How to Obtain a Translation Invariant Kernel.

We note that the formulation above in (10) is closely related to Reproducing Kernel Hilbert Spaces (RKHS) [9]. First, the dot-product in (6) is redefined as

\mathbf{A}_{i\bullet}:=\operatorname{Softmax}\Big(-\gamma(\mathbf{q}_{i}-\mathbf{k}_{i})(\mathbf{Q}-\mathbf{K})^{\top}\Big). \qquad (11)

Then, a pre-layer-normalization scheme as in [4, 19, 46, 49], or a pre-inner-product normalization as in [33, 12], can serve as a cheap procedure to re-center the kernel. Rewrite the original dot-product for $\mathbf{q},\mathbf{k}\in\mathbb{R}^{1\times d}$ as follows:

\mathbf{q}\mathbf{k}^{\top}=\frac{1}{2}\mathbf{q}\mathbf{q}^{\top}+\frac{1}{2}\mathbf{k}\mathbf{k}^{\top}-\frac{1}{2}(\mathbf{q}-\mathbf{k})(\mathbf{q}-\mathbf{k})^{\top}. \qquad (12)

Thus, applying a layer normalization to each row of $\mathbf{Q}$ and $\mathbf{K}$ before forming $\mathbf{Q}\mathbf{K}^{\top}$ makes the two moment terms in (12), $\mathbf{q}\mathbf{q}^{\top}$ and $\mathbf{k}\mathbf{k}^{\top}$, relatively small compared with the third term. Consequently, using merely the third term of (12) to form (11) suffices to provide a good approximation to the scaled dot-product attention. As an important consequence, the kernel becomes symmetric and translation-invariant, which is arguably one of the nicest traits of CNN kernels. As a result, the normalized attention kernel $\tilde{\kappa}(x,x^{\prime})$ based on (11) is

(\mathbf{A})_{ij}=\gamma\langle\mathbf{q}_{i}-\mathbf{k}_{i},\mathbf{q}_{j}-\mathbf{k}_{j}\rangle=:\kappa(x_{i},x_{j}). \qquad (13)

Moreover, $\kappa(x,x^{\prime})$ can be written as $K(x-x^{\prime})$ with $K:\mathbb{R}^{1\times d}\to\mathbb{R}$ the one-dimensional Gaussian radial basis function (RBF) kernel, $K(\theta)=e^{-\gamma\|\theta\|^{2}}$. By positivity and symmetry, $\kappa(\cdot,\cdot)$ becomes a reproducing kernel. We can define the following nonlinear integral operator $\mathcal{K}$:

\mathcal{K}(v)(x):=\int_{\omega}\kappa(x,x^{\prime})v(x^{\prime})\,d\mu(x^{\prime}). \qquad (14)

In traditional settings where $\kappa(x,x^{\prime})$ does not explicitly depend on $v(\cdot)$, the vector version of the Mercer representation theorem for the integral kernel can be exploited [24, 14], and there exists an optimal representation space for approximating the solution to this integral equation: the eigenspace of the nonlinear integral operator $\mathcal{K}$. For the integral-transform interpretation of the scaled dot-product attention, by contrast, explicitly pinning down the eigenspace is impossible during the dynamic training procedure. Nevertheless, we can still obtain a representation expansion showing that the internal representations are propagated in a stable fashion (Theorem 5.1).
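A minimal NumPy sketch of the re-centered attention (11) is given below; the value of $\gamma$ and the random $\mathbf{Q},\mathbf{K}$ are placeholders, and the pre-layer normalization discussed around (12) is omitted. It only illustrates the computation and the unit row sums that give (15).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def recentered_attention(Q, K, gamma=1.0):
    """Attention weights as in (11): Softmax(-gamma (q_i - k_i)(Q - K)^T).
    The pre-softmax kernel -gamma <d_i, d_j>, with d_i = q_i - k_i,
    is symmetric in (i, j)."""
    D = Q - K                       # (p, d), one "difference" per patch
    logits = -gamma * (D @ D.T)     # (p, p), symmetric before the softmax
    return softmax(logits)

rng = np.random.default_rng(0)
p, d = 49, 64
Q, K = rng.standard_normal((p, d)), rng.standard_normal((p, d))
A = recentered_attention(Q, K, gamma=1.0 / np.sqrt(d))
print(np.allclose(A.sum(axis=1), 1.0))   # row-stochastic, as in (15)
```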

Relation to Other Methods.

The integral transform formula derived in (10) resembles the nonlocal means (NL-means) method used in traditional image signal processing tasks such as denoising, e.g., [10]. In NL-means, $\kappa(x,x^{\prime})$ is a non-learnable integral kernel that measures the similarity between the pixels at $x$ and $x^{\prime}$, whereas in ViT the layer-wise kernel $\kappa(x,x^{\prime};\theta)$ is learnable and establishes the inter-patch topology (continuity, total variation, etc.). One key common trait is that both are normalized in the sense that, for any $x\in\omega$,

\alpha^{-1}(x)\int_{\omega}\kappa(x,x^{\prime})\,d\mu(x^{\prime})=1, \qquad (15)

which is enforced by the row-wise softmax operation (unit row sum). We shall see in Section 5 that this plays a key role in mathematically establishing stable representations across ViT layers.

Compared with another popular approach, Convolutional Neural Network (CNN)-based models, MAE differs in several respects. In CNNs, translation invariance is inherited automatically from the convolution operation, i.e., the CNN kernel depends only on the difference of positions $(x-x^{\prime})$. Moreover, $K(\cdot)$ acts at the pixel level and is locally supported, and thus has a small receptive field. For a CNN to learn long-range dependencies, deep stacking of convolutions is necessary, which in turn makes optimization difficult [40]. In addition, repeating local operations around a small pixel patch is computationally inefficient: the small convolution filter sizes in CNNs make the “message passing” back and forth between distant positions difficult [47]. In MAE, translation invariance is obtained through proper normalization, and the learned kernel acts on the basis functions representing the patches. Moreover, the kernel map is globally supported, which means it can effectively learn interactions even between far-away patches. This global nature makes it easier to learn a better representation, and thus greatly reduces the number of layers needed to complete the same task.

5 Stable Representation Propagation in the Attention Block

In this section, we try to explain why the representations change continuously and in a stable fashion in both the encoder and decoder of MAE, which was first observed in [39]. In the following theorem, we prove a key result: the softmax normalization plays a vital role in stabilizing the propagation of the representations through the layers.

[Ten image panels omitted; see caption.]
Figure 3: The internal representation propagation during evaluation for a random sample from the ImageNet-1K validation set: the input of the MAE encoder; the output of the MAE encoder, concatenated with the learnable mask token embedding that is shared among all masked patches; the latent representations, i.e., the outputs of decoder layers #1 to #7; and the output of the last decoder layer (#8). (We use the officially released model and pretrained weights from https://github.com/facebookresearch/mae.) See the text for details. Best viewed in color.
Theorem 5.1.

Consider a trained attention block in an MAE/ViT encoder layer such that $\|\mathbf{W}^{Q}-\mathbf{W}^{K}\|_{2}$ is uniformly bounded above and below by two positive constants, $\mathbf{W}^{V}=\mathbf{I}$, and the attention kernel is of the form (13), i.e., translation invariant and symmetric. Let $v^{(t)}(\cdot)$ and $v^{(t+1)}(\cdot)$ be the feature maps of the input and output of the scaled dot-product attention (8), with $v^{(t)}\in BV(\omega)\cap\mathcal{V}$. Then

\|v^{(t+1)}-v^{(t)}\|_{\mathcal{V}}\leq C(d)\,\frac{1}{n}\left(\|v^{(t)}\|_{\mathcal{V}}+|v^{(t)}|_{BV}\right). \qquad (16)

Proof Sketch and an Interpretation for Theorem 5.1.

The softmax operation in (6) makes each row of the attention matrix sum to 1. This normalization translates into the integral kernel integrating to 1 in (15), which enables us to estimate how the internal representations propagate in a stable evolution from $v^{(t)}$ to $v^{(t+1)}$. We can rewrite $v^{(t)}(x)$ as

v^{(t)}(x)=\alpha^{-1}(x)\int_{\omega}\kappa(x,x^{\prime})v^{(t)}(x)\,d\mu(x^{\prime}). \qquad (17)

Thus, the propagation leads to the following modulus-of-continuity representation:

v^{(t+1)}(x)-v^{(t)}(x)=\alpha^{-1}(x)\int_{\omega}\kappa(x,x^{\prime})v^{(t)}(x^{\prime})\,d\mu(x^{\prime})-v^{(t)}(x)=\alpha^{-1}(x)\int_{\omega}\kappa(x,x^{\prime})\bigl(v^{(t)}(x^{\prime})-v^{(t)}(x)\bigr)\,d\mu(x^{\prime}). \qquad (18)

Using the argument in Section 4.1, $\kappa(x,x^{\prime})=K(x-x^{\prime})$; thus a simple substitution yields the following representation:

\alpha^{-1}(x)\int_{\omega}K(x-x^{\prime})\bigl(v^{(t)}(x^{\prime})-v^{(t)}(x)\bigr)\,d\mu(x^{\prime})=\alpha^{-1}(x)\int_{\omega_{\xi}}K(\xi)\bigl(v^{(t)}(x+\xi)-v^{(t)}(x)\bigr)\,d\mu(\xi). \qquad (19)

By Mercer's theorem, there exists an eigenfunction expansion $\kappa(x,x^{\prime})=\sum_{i=1}^{\infty}a_{i}\psi_{i}(x)\psi_{i}(x^{\prime})$ with spectral decay. The propagation bound can then be proved under the assumption that the current layer's learned feature map offers a “reasonable” approximation to the eigenspace of $\kappa(\cdot,\cdot)$.
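As a toy numerical check in the spirit of the estimate (16), the sketch below applies the row-normalized Gaussian-kernel averaging (17) to a smooth function sampled on an $n\times n$ grid and reports how much it moves; the kernel bandwidth $\gamma=n^{2}$ and the test function are illustrative choices only.

```python
import numpy as np

def kernel_step(v, X, gamma):
    """One row-normalized kernel averaging step, cf. (15) and (17)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-gamma * d2)
    A = K / K.sum(axis=1, keepdims=True)                  # rows sum to 1
    return A @ v

for n in (7, 14, 28):
    t = (np.arange(n) + 0.5) / n
    xx, yy = np.meshgrid(t, t, indexing="ij")
    X = np.stack([xx.ravel(), yy.ravel()], axis=1)        # positions in [0,1]^2
    v = np.sin(2 * np.pi * xx).ravel() * np.cos(2 * np.pi * yy).ravel()
    v_out = kernel_step(v, X, gamma=n * n)                # kernel width ~ 1/n
    print(n, np.abs(v_out - v).max())                     # shrinks as n grows
```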

Overall, this bound says that the internal representation changes continuously during propagation, based on the inter-patch interaction encoded in the attention kernel. In the kernel interpretation of the attention mechanism, there is one key difference from conventional kernel methods: a conventional kernel measures inter-sample similarity across the whole dataset, whereas the attention kernel $\kappa(\cdot,\cdot)$ or $\tilde{\kappa}(\cdot,\cdot)$ is built for a single instance and learns inter-patch topological relations to build a better representation space for MAE. This learned representation for a specific data sample determines how amenable it is to downstream tasks.

Additionally, we note that this result applies to the layer-wise representation propagation in the decoder layers as well. An enlightening illustration is given in Figure 3. There are several interesting aspects of the evolution shown there: (1) in the ViT-Base MAE, the embedding dimension of the decoder is merely $512$, which is less than the embedding dimension of the encoder ($d=768$). Note that $768=N_{c}\times N_{c}\times 3$, which allows the latent representation of a single patch (a row vector) to be reshaped directly into an image patch; hence a patch-wise upsampling projection is needed. (2) This projection, which maps vectors in $\mathbb{R}^{196\times 512}$ to vectors in $\mathbb{R}^{196\times 768}$, resides in the MAE decoder and is connected only to the last decoder layer. Heuristically, it might be expected to upsample only the last decoder layer's output $\mathbf{z}_{8}\in\mathbb{R}^{196\times 512}$ to the desired embedding dimension. Nevertheless, thanks to the stable representation propagation, the outputs of the earlier decoder layers can also be passed through the last-layer projection weights to obtain sensible reconstructions, as illustrated in Figure 3.
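A small sketch of the dimension bookkeeping just described follows; the projection weights are random stand-ins rather than MAE's trained prediction head, and only the shapes matter here.

```python
import numpy as np
rng = np.random.default_rng(0)

p2, d_dec, Nc = 196, 512, 16            # 14x14 patches, decoder width 512, 16x16-pixel patches
d_pix = Nc * Nc * 3                     # 768 = 16 * 16 * 3 pixel values per patch

z8 = rng.standard_normal((p2, d_dec))   # stand-in for the last decoder output
W_pred = rng.standard_normal((d_dec, d_pix)) / np.sqrt(d_dec)  # placeholder projection

pred = z8 @ W_pred                      # (196, 768): one row per patch
patches = pred.reshape(p2, Nc, Nc, 3)   # each 768-dim row becomes a 16x16x3 patch
print(patches.shape)                    # (196, 16, 16, 3)
```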

Skip-Connection as a Tikhonov Regularizer

After presenting Theorem 5.1, one might ask: given that the integral transform representation is already stable, what is the role of the skip-connection? Departing from the original interpretation of the skip-connection as a remedy for vanishing gradients [32], we offer a new perspective and some heuristics, inspired by functional analysis, for why the skip-connection makes the representation propagation more stable, which is also observed in [39].

Since the integral kernel interpretation of attention (10) resembles a Fredholm integral equation of the first kind, we can interpret the skip-connection as a layer-wise Tikhonov regularization that yields a Fredholm integral equation of the second kind. Start from (10):

z(x)=\alpha^{-1}\int_{\omega}\kappa(x,x^{\prime})v(x^{\prime})\,d\mu(x^{\prime}). \qquad (20)

This Fredholm integral equation of the first kind is usually extremely ill-posed [28], in the sense that the solution $v(\cdot)$, even if it exists, does not depend continuously on the data.

If $\mathbf{W}^{V}\approx\beta^{-1}\mathbf{I}$, then for $x\in\mathcal{X}$ we have

z(x)\approx\beta v(x)+\alpha^{-1}\int_{\omega}\kappa(x,x^{\prime})v(x^{\prime})\,d\mu(x^{\prime}). \qquad (21)

This is a Fredholm integral equation of the second kind with a variable coefficient; the extra $\beta v(\cdot)$ term not only contributes to the well-posedness of the equation, but also makes the numerical scheme for approximating a better $v(\cdot)$ more stable. Tikhonov first introduced this method in [41]; see also [35, Chapter 16].
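The toy NumPy check below illustrates the point numerically with a discretized Gaussian kernel (an illustrative choice, unrelated to any trained attention weights): the first-kind system (20) is nearly singular, whereas adding the $\beta$-scaled identity as in (21) yields a well-conditioned system.

```python
import numpy as np

# Conditioning of the first-kind vs. second-kind formulations, cf. (20)-(21).
# Grid size, gamma, and beta are illustrative choices only.
n = 64
x = np.linspace(0.0, 1.0, n)
K = np.exp(-50.0 * (x[:, None] - x[None, :]) ** 2) / n   # discretized smooth kernel
beta = 0.5

print("first kind  cond(K)        = %.3e" % np.linalg.cond(K))
print("second kind cond(beta*I+K) = %.3e" % np.linalg.cond(beta * np.eye(n) + K))
```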

Theorem 5.2 (Skip-connection can represent the minimizer to a Tikhonov-regularized integral equation functional).

For $\mathcal{K}(\cdot)$ with $\kappa(\cdot,\cdot)$ as the integral kernel in (13), define the following functional:

\mathcal{L}:=\frac{1}{2}\|\alpha^{-1}(\cdot)\mathcal{K}(v)-z\|_{\mathcal{V}^{\prime}}^{2}+\beta\langle\mathcal{K}(v),v\rangle, \qquad (22)

where $\|u\|_{\mathcal{V}^{\prime}}:=\sup_{v\in\mathcal{V}}|\langle u,v\rangle|/\|v\|_{\mathcal{V}}$ denotes the dual norm. The Euler-Lagrange equation associated with $\min_{v}\mathcal{L}$ has the following form:

\beta u+\alpha^{-1}\mathcal{K}(u)=z\quad\text{ in }\;(\mathcal{K}^{*}(\mathcal{V}))^{\prime}. \qquad (23)

Note that (23) has exactly the same form as (21). Moreover, instead of the conventional $L^{2}$-type Tikhonov regularizer $\|v\|_{\mathcal{V}}^{2}$, here $\langle\mathcal{K}(v),v\rangle$ is a norm induced by the positive (semi)definite integral operator $\mathcal{K}(\cdot)$. The skip-connection term in (23) comes from the Tikhonov regularization term in (22); without it, the Euler-Lagrange equation reverts to a Fredholm integral equation of the first kind.

6 MAE Decoder: Low-Rank Reconstruction Through Global Interpolation

During pretraining, the major function of the MAE decoder is to map the low-rank representation obtained by the MAE encoder to a reconstruction. The encoder embedding has a larger dimension, yet is defined on only a fraction of the patches. The decoder reduces the embedding dimension, but obtains embeddings for all $p\times p$ patches, including the masked ones. Despite the fact that the MAE decoder is used only in pretraining and not in downstream tasks, it plays a vital role in learning a “good” representation space for the MAE encoder.

Enrichment of the Representation Space through Positional Embedding

Because the positional embedding $\mathbf{x}:=\|_{i=1}^{n}x_{i}\in\mathbb{R}^{p\times d}$ is added to the latent representation $\mathbf{y}$ in (3), the nonlinear universal approximator (FFN) in each attention block also contributes to learning a better representation. In every decoder layer, the basis functions in $\mathcal{V}$ are constantly enriched by $\operatorname{span}\{w_{j}\in\mathcal{X}:w_{j}(x_{i})=(\text{FFN}(\mathbf{y}+\mathbf{x}))_{ij},\,1\leq j\leq d\}$. If we again treat the positional embeddings $\{x_{i}\}$ as coordinates, $\text{FFN}(\mathbf{y}+\mathbf{x})$ is a subset of the famous Barron space [6, 3], whose rich and powerful representation power can approximate smooth functions arbitrarily well, e.g., see [34, 50]. As a result, the representation space is dynamically updated layer by layer to build a more expressive representation that better characterizes the inter-patch topology. FFNs themselves have no linear structure; however, the basis functions produced this way act as building blocks to update the linear space for the expansions in (8) and (10). In the MAE decoder, the number of basis functions ($p^{2}=196$) is much greater than in the MAE encoder ($0.25\cdot p^{2}=49$); thus, heuristically speaking, this basis-update mechanism operates mostly in the decoder.

Reconstruction is a Global Interpolation.

In the MAE decoder, the mask token, a learnable vector shared by every masked patch, is concatenated with the representations of the unmasked patches, which are then unshuffled into the original patch order. To demonstrate that the reconstructed image is a global interpolation using the unmasked patch embeddings, we first prove the following lemma, which states that the learned mask token embedding can simply be replaced by zero together with an updated set of attention weights. For simplicity, we still denote the embedding dimension of the MAE decoder by $d$.
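Below is a minimal NumPy sketch of this assembly step; the shapes match the ViT-Base setting quoted in the text, but the embeddings, the shared mask token, and the shuffling bookkeeping are random stand-ins rather than the released MAE code.

```python
import numpy as np
rng = np.random.default_rng(0)

# Keep the visible-patch embeddings, append one shared mask token per masked
# patch, and restore the original patch order before the decoder.
p, d_dec, mask_ratio = 196, 512, 0.75
num_keep = int(p * (1 - mask_ratio))                 # 49 visible patches

perm = rng.permutation(p)                            # random shuffle of patches
ids_keep, ids_mask = perm[:num_keep], perm[num_keep:]
enc_out = rng.standard_normal((num_keep, d_dec))     # stand-in encoder output
mask_token = rng.standard_normal((1, d_dec))         # shared learnable vector m

seq = np.concatenate([enc_out, np.repeat(mask_token, p - num_keep, axis=0)])
ids_restore = np.argsort(perm)                       # inverse permutation
dec_in = seq[ids_restore]                            # unshuffled: (196, 512)

assert np.allclose(dec_in[ids_mask], mask_token)     # every masked slot gets m
```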

Lemma 6.1.

Let $\mathbf{m}\in\mathbb{R}^{1\times d}$ be the learned mask token embedding shared by all masked patches, and let $m(\cdot)$ be its feature map. Denote the index sets of the masked and unmasked patches by $\mathcal{M}$ and $\mathcal{N}$, respectively, and assume $|\mathcal{N}|\geq 2$. For an input $\mathbf{y}\in\mathbb{R}^{p\times d}$ that already contains the masked patches, i.e., $\mathbf{y}_{i}=\mathbf{m}$ for all $i\in\mathcal{M}$, there exists a new set of affine linear maps generating $\{\widehat{\mathbf{Q}},\widehat{\mathbf{K}},\widehat{\mathbf{V}}\}$ such that, for any $i=1,\dots,n$,

\left\|\sum_{j=1}^{n}A(\mathbf{q}_{i},\mathbf{k}_{j})\,\mathbf{v}_{i}-\sum_{j\in\mathcal{N}}A(\hat{\mathbf{q}}_{i},\hat{\mathbf{k}}_{j})\,\hat{\mathbf{v}}_{i}\right\|<Cn^{-1}. \qquad (24)

With this lemma, we can present our final result: in the first layer of the MAE decoder, the network interpolates the representation using global information from the embeddings learned by the MAE encoder, not just those of the nearby patches. Moreover, by Theorem 5.1, the MAE decoder continues to perform such a global interpolation in subsequent layers. For empirical evidence, please refer to Figure 3.

Proposition 6.2 (Interpolation results for masked patches).

For the embedding of every masked patch $i\in\mathcal{M}$, let $v_{i}^{(t+1)}$ be the output embedding of a decoder layer for this patch, and let $v_{j}^{(t)}$, $j=1,\dots,p$, be the inputs from the encoder. Then $v_{i}^{(t+1)}$ is

v_{i}^{(t+1)}=\sum_{j\in\mathcal{N}}a_{j}v_{j}^{(t)}, \qquad (25)

for a set of weights $a_{j}(v_{i_{1}},\dots,v_{i_{k}})$ based on the unmasked patches, where $\mathcal{N}:=\{i_{1},\dots,i_{k}\}$. Moreover, the reconstruction error is bounded above by the global reconstruction error of the unmasked patches:

\|Rv_{i}-\mathbf{u}_{i}\|_{BV(\Omega_{i})}\leq C\sup_{j\in\mathcal{N}}\|Rv_{j}-\mathbf{u}\|_{BV(\Omega_{j})}+Cn^{-1}\|\mathbf{u}\|. \qquad (26)

7 Conclusion

To the best of our knowledge, to date there had been no theoretical viewpoint explaining the powerful expressivity of MAE. In this paper, we propose, for the first time, a unified theoretical framework that provides a mathematical understanding of MAE. In particular, we explain the patch-based attention of MAE from the perspective of an integral kernel under a non-overlapping domain decomposition setting. To help researchers further grasp the main reasons for the great success of MAE, our mathematical analysis contributes the following major conclusions:

(1) The attention mechanism in MAE is a learnable integral kernel transform, and its representation space is dynamically updated by functions from the Barron space, with the positional embeddings acting as coordinates in a high-dimensional feature space.

(2) Random patch selection in MAE preserves the information of the original image while reducing computational cost, under common assumptions on the low-rank nature of images. This lays a theoretical foundation for patch-based neural networks, including but not limited to MAE and ViT.

(3) A main reason for the great success of MAE is that the scaled dot-product attention built into ViT provides stable representations during cross-layer propagation.

(4) In MAE, the decoder is vital to helping the encoder build better representations, even though the decoder is discarded after pretraining.

(5) The latent representations of the masked patches are interpolated globally based on an inter-patch topology that is learned by the attention mechanism.

Furthermore, our proposed theoretical framework can be used to understand the intrinsic traits not only of ViT-based models but also of the broad class of models built from patches and self-attention.

References

  • [1] Kendall Atkinson and Weimin Han. Numerical solution of fredholm integral equations of the second kind. In Theoretical Numerical Analysis, pages 473–549. Springer, 2009.
  • [2] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
  • [3] Francis Bach. Breaking the curse of dimensionality with convex neural networks. The Journal of Machine Learning Research, 18(1):629–681, 2017.
  • [4] Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018.
  • [5] Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT pre-training of image transformers. 2021.
  • [6] Andrew R Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information theory, 39(3):930–945, 1993.
  • [7] Josh Beal, Eric Kim, Eric Tzeng, Dong Huk Park, Andrew Zhai, and Dmitry Kislyuk. Toward transformer-based object detection. arXiv preprint arXiv:2012.09958, 2020.
  • [8] Peter Berger, Gabor Hannak, and Gerald Matz. Graph signal recovery via primal-dual algorithms for total variation minimization. IEEE Journal of Selected Topics in Signal Processing, 11(6):842–855, 2017.
  • [9] Alain Berlinet and Christine Thomas-Agnan. Reproducing kernel Hilbert spaces in probability and statistics. Springer Science & Business Media, 2011.
  • [10] Antoni Buades, Bartomeu Coll, and Jean-Michel Morel. A review of image denoising algorithms, with a new one. Multiscale modeling & simulation, 4(2):490–530, 2005.
  • [11] I.U.D. Burago, J.D. Burago, and V.G. Maz’ya. Potential Theory and Function Theory for Irregular Regions. Seminars in mathematics. Consultants Bureau, 1969.
  • [12] Shuhao Cao. Choose a transformer: Fourier or galerkin. In 35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021.
  • [13] Gunnar Carlsson. Topology and data. Bulletin of the American Mathematical Society, 46(2):255–308, 2009.
  • [14] Claudio Carmeli, Ernesto De Vito, and Alessandro Toigo. Vector valued reproducing kernel hilbert spaces of integrable functions and mercer theorem. Analysis and Applications, 4(04):377–408, 2006.
  • [15] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. arXiv preprint arXiv:2104.14294, 2021.
  • [16] Tony F Chan and Tarek P Mathew. Domain decomposition algorithms. Acta numerica, 3:61–143, 1994.
  • [17] Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. arXiv preprint arXiv:1806.07366, 2018.
  • [18] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. arXiv preprint arXiv:2104.02057, 2021.
  • [19] Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.
  • [20] Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020.
  • [21] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
  • [22] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
  • [23] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
  • [24] JC Ferreira and V. A. Menegatto. Eigenvalues of integral operators defined by smooth positive definite kernels. Integral Equations and Operator Theory, 64(1):61–81, 2009.
  • [25] C. Gerhardt. Analysis II. Analysis. International Press, 2006.
  • [26] Craig R Gin, Daniel E Shea, Steven L Brunton, and J Nathan Kutz. Deepgreen: deep learning of green’s functions for nonlinear boundary value problems. Scientific reports, 11(1):1–14, 2021.
  • [27] CW Groetsch. The theory of tikhonov regularization for fredholm equations. 104p, Boston Pitman Publication, 1984.
  • [28] Jacques Hadamard. Lectures on Cauchy’s problem in linear partial differential equations. Courier Corporation, 2003.
  • [29] Paul Richard Halmos and Viakalathur Shankar Sunder. Bounded integral operators on L² spaces, volume 96. Springer Science & Business Media, 2012.
  • [30] Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. Transformer in transformer. In 35th Conference on Neural Information Processing Systems (NeurIPS 2021), 2021.
  • [31] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377, 2021.
  • [32] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [33] Alex Henry, Prudhvi Raj Dachapally, Shubham Shantaram Pawar, and Yuxuan Chen. Query-key normalization for transformers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4246–4253, Online, November 2020. Association for Computational Linguistics.
  • [34] Martin Hutzenthaler, Arnulf Jentzen, Thomas Kruse, and Tuan Anh Nguyen. A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations. SN partial differential equations and applications, 1(2):1–34, 2020.
  • [35] Rainer Kress. Linear Integral Equations. Springer New York, 2014.
  • [36] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030, 2021.
  • [37] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021.
  • [38] Sayak Paul and Pin-Yu Chen. Vision transformers are robust learners. AAAI, 2022.
  • [39] Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, and Alexey Dosovitskiy. Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems, 34, 2021.
  • [40] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015.
  • [41] Andrei Nikolajevits Tihonov. Solution of incorrectly formulated problems and the regularization method. Soviet Math., 4:1035–1038, 1963.
  • [42] Andrea Toselli and Olof Widlund. Domain decomposition methods-algorithms and theory, volume 34. Springer Science & Business Media, 2004.
  • [43] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, pages 10347–10357. PMLR, 2021.
  • [44] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017.
  • [45] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103, 2008.
  • [46] Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F Wong, and Lidia S Chao. Learning deep transformer models for machine translation. arXiv preprint arXiv:1906.01787, 2019.
  • [47] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7794–7803, 2018.
  • [48] Jürgen Weese. A reliable and fast method for the solution of Fredholm integral equations of the first kind based on Tikhonov regularization. Computer Physics Communications, 69(1):99–111, 1992.
  • [49] Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the transformer architecture. In International Conference on Machine Learning, pages 10524–10533. PMLR, 2020.
  • [50] Jinchao Xu. Finite neuron method and convergence analysis. Communications in Computational Physics, 28(5):1707–1745, 2020.
  • [51] Peng Xu, Chaitanya K Joshi, and Xavier Bresson. Multigraph transformer for free-hand sketch recognition. IEEE Transactions on Neural Networks and Learning Systems, 2021.
  • [52] Xiaoqun Zhang, Martin Burger, Xavier Bresson, and Stanley Osher. Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM Journal on Imaging Sciences, 3(3):253–276, 2010.

Appendix A Proof of Theorem 5.1

Assumption A.1 (Assumption of Theorem 5.1).

To prove Theorem 5.1, we assume the following conditions hold true,

  ($B_{1}$)

    The underlying Hilbert space for the latent representation is $\mathcal{V}\subset\big(L^{2}(\omega,\mu)\big)^{d}$, i.e., there is no a priori inter-channel continuity, and the inter-channel topology is learned from data.

  ($B_{2}$)

    For a BV function on $\omega$, there exists a smooth extension to $\mathbb{R}^{2}$ such that the extension is stable in $|\cdot|_{BV(\Omega)}$ (see, e.g., [25], [11, Theorem 2]).

  ($B_{3}$)

    For each $v\in\mathcal{V}$, there exists a smooth $K$ such that $\kappa(x,x^{\prime})=K(x-x^{\prime})$, and $K(x-x^{\prime})\leq C_{\gamma}e^{-\gamma\|x-x^{\prime}\|}$ with a uniform constant $C_{\gamma}$ that depends only on $\gamma$.

  ($B_{4}$)

    For each $v\in\mathcal{V}$, $\alpha(x):=\int_{\omega}\kappa(x,x^{\prime})\,d\mu(x^{\prime})$, and $\alpha$ is uniformly bounded below by a positive constant $C_{\alpha}$.

Theorem 5.1 (Stability result, rigorous version).

Under Assumption A.1, we have

\|v^{(t+1)}-v^{(t)}\|_{\mathcal{V}}\leq C(\gamma,d)\,\frac{1}{n}\left(\|v^{(t)}\|_{\mathcal{V}}+|v^{(t)}|_{BV}\right). \qquad (27)
Proof.

First we extend the kernel function from $\omega$ to the whole of $\mathbb{R}^{2}$ by

\operatorname{Ext}(\kappa(x,x^{\prime}))=\begin{cases}\kappa(x,x^{\prime})&\text{if }x\text{ and }x^{\prime}\in\omega,\\ 0&\text{otherwise}.\end{cases} \qquad (28)

With a slight abuse of notation, we still denote the extension $\operatorname{Ext}(\kappa(\cdot,\cdot))$ by $\kappa(\cdot,\cdot)$, and the normalization (15) still holds:

\alpha^{-1}(x)\int_{\omega}\kappa(x,x^{\prime})\,d\mu(x^{\prime})=\alpha^{-1}(x)\int_{\mathbb{R}^{2}}\kappa(x,x^{\prime})\,d\mu(x^{\prime})=1. \qquad (29)

Thus, under assumptions $(B_{1})$–$(B_{2})$, the original integral on $\omega$ can be written on the full plane:

v^{(t+1)}(x)=\alpha^{-1}(x)\int_{\mathbb{R}^{2}}\kappa(x,x^{\prime})v^{(t)}(x^{\prime})\,d\mu(x^{\prime}). \qquad (30)

Similarly, by (29),

v^{(t)}(x)=\alpha^{-1}(x)\int_{\mathbb{R}^{2}}\kappa(x,x^{\prime})v^{(t)}(x)\,d\mu(x^{\prime}). \qquad (31)

Therefore,

v^{(t+1)}(x)-v^{(t)}(x)=\alpha^{-1}(x)\int_{\mathbb{R}^{2}}\kappa(x,x^{\prime})\bigl(v^{(t)}(x^{\prime})-v^{(t)}(x)\bigr)\,d\mu(x^{\prime}). \qquad (32)

Using $\kappa(x,x^{\prime})=K(x-x^{\prime})$ and letting $\xi=x^{\prime}-x$:

\alpha^{-1}(x)\int_{\mathbb{R}^{2}}K(x-x^{\prime})\bigl(v^{(t)}(x^{\prime})-v^{(t)}(x)\bigr)\,d\mu(x^{\prime})=\alpha^{-1}(x)\int_{\mathbb{R}^{2}}K(\xi)\bigl(v^{(t)}(x+\xi)-v^{(t)}(x)\bigr)\,d\mu(\xi). \qquad (33)

Since $v^{(t)}\in BV(\mathbb{R}^{2})$ and $\omega$ is compact, there exists $\delta>0$ such that

\sup_{x\in\omega}\sup_{\|\xi\|<\delta/n}\|v^{(t)}(x+\xi)-v^{(t)}(x)\|<\frac{1}{n}|v^{(t)}|_{BV(\mathbb{R}^{2})}. \qquad (34)

Thus, for any $x\in\omega$,

\begin{aligned}\|v^{(t+1)}(x)-v^{(t)}(x)\|&\leq|\alpha^{-1}|\int_{\mathbb{R}^{2}}\big\|K(\xi)\bigl(v^{(t)}(x+\xi)-v^{(t)}(x)\bigr)\big\|\,d\mu(\xi)\\
&\leq|\alpha^{-1}|\sup_{\|\xi\|<\delta/n}\|v^{(t)}(x+\xi)-v^{(t)}(x)\|\int_{\omega}|K(\xi)|\,d\mu(\xi)+|\alpha^{-1}|\int_{\|\xi\|\geq\delta/n}|K(\xi)|\,2\|v^{(t)}\|_{\infty}\,d\mu(\xi)\\
&\leq 2C_{\alpha}^{-1}C_{\gamma}\frac{1}{n}\left(\|v^{(t)}\|_{\infty}+|v^{(t)}|_{BV}\right).\end{aligned} \qquad (35)

Taking $\sup_{x\in\omega}$ yields the result. ∎

Appendix B Proof of Theorem 5.2

Proof.

Define $\eta(\cdot):\mathcal{V}\to\mathbb{R}$ such that $\eta_{u}(v):=\langle z,v\rangle-\langle\alpha^{-1}\mathcal{K}u,v\rangle$, where $u\in\mathcal{W}\subseteq\mathcal{V}$ is the solution to the integral equation in (23) and $\mathcal{W}$ is the solution subspace. Clearly, $\eta_{u}\in\mathcal{V}^{\prime}$ defines a bounded functional on $\mathcal{V}$ for $(\mathcal{V},\langle\cdot,\cdot\rangle)$. By the Riesz representation theorem, there exists an isomorphism $G:\mathcal{V}\to\mathcal{V}^{\prime}$ such that $\phi_{u}:=G^{-1}(\eta_{u})\in\mathcal{V}$ and

\eta_{u}(v)=\langle\phi_{u},v\rangle=\langle z,v\rangle-\langle\alpha^{-1}\mathcal{K}(u),v\rangle. \qquad (36)

Then, defining $\|\cdot\|_{\mathcal{V}}^{2}:=\langle\cdot,\cdot\rangle$, we have

\min_{u\in\mathcal{V}}\|\eta(\cdot)\|_{\mathcal{V}^{\prime}}^{2}=\min_{u\in\mathcal{V}}\|\phi_{u}\|_{\mathcal{V}}^{2}=\min_{u\in\mathcal{V}}\|G^{-1}(\langle z,\cdot\rangle-\langle\alpha^{-1}(\cdot)\mathcal{K}(u),\cdot\rangle)\|_{\mathcal{V}}^{2}.

Thus, the functional $\mathcal{L}(v)$ can be written as

\mathcal{L}(v)=\frac{1}{2}\left\{\|\alpha^{-1}(\cdot)\mathcal{K}(v)-z\|_{\mathcal{V}^{\prime}}^{2}+\beta\langle\mathcal{K}(v),v\rangle\right\}. \qquad (37)

Taking the Gateaux derivative $\lim_{\tau\to 0}\mathrm{d}\mathcal{L}(u+\tau w)/\mathrm{d}\tau$ to find the critical point(s) $u\in\mathcal{W}$, we have, for any perturbation $w\in\mathcal{W}$ such that $u+\tau w\in\mathcal{W}$,

0=\lim_{\tau\to 0}\frac{d}{d\tau}\Bigg\{\Big\langle\alpha^{-1}(\cdot)G^{-1}\big(\langle z,\cdot\rangle-\langle\alpha^{-1}(\cdot)\mathcal{K}(u+\tau w),\cdot\rangle\big),\,G^{-1}\big(\langle z,\cdot\rangle-\langle\alpha^{-1}(\cdot)\mathcal{K}(u+\tau w),\cdot\rangle\big)\Big\rangle+\beta\langle\mathcal{K}(u+\tau w),u+\tau w\rangle\Bigg\},

and applying $G^{-1}$ to $\mathcal{K}(\cdot,u)\in\mathcal{V}^{\prime}$, this reads, for any $w\in\mathcal{W}$,

\begin{aligned}&\left\langle\alpha^{-1}(\cdot)G^{-1}\big(\langle z,\cdot\rangle-\langle\mathcal{K}(u),\cdot\rangle\big),\,G^{-1}(\langle\mathcal{K}(w),\cdot\rangle)\right\rangle+\langle\mathcal{K}(u),w\rangle\\
=\;&\left\langle\alpha^{-1}\phi_{u},\,G^{-1}(\langle\mathcal{K}(w),\cdot\rangle)\right\rangle+\langle\mathcal{K}(u),w\rangle\\
=\;&\langle\alpha^{-1}\mathcal{K}(w),\phi_{u}\rangle+\langle\mathcal{K}(u),w\rangle=0.\end{aligned} \qquad (38)

As a result, combining (38), (36), and the self-adjointness of $\mathcal{K}$, the system in the continuum becomes

\left\{\begin{aligned}\langle\phi_{u},v\rangle+\langle\alpha^{-1}\mathcal{K}(u),v\rangle&=\langle z,v\rangle,&&\forall v\in\mathcal{V},\\ \langle\alpha^{-1}\mathcal{K}(w),\phi_{u}\rangle+\beta\langle\mathcal{K}(u),w\rangle&=0,&&\forall w\in\mathcal{W}.\end{aligned}\right. \qquad (39)

The first equation implies that:

\phi_{u}+\alpha^{-1}\mathcal{K}(u)=z\quad\text{ in }\;\mathcal{V}^{\prime}. \qquad (40)

Hence, since $w\in\mathcal{W}\subseteq\mathcal{V}$, plugging $w$ into the second equation of (39) gives

\langle z-\alpha^{-1}\mathcal{K}(u)+\beta u,\,\mathcal{K}(w)\rangle=0\quad\text{ for }w\in\mathcal{W}, \qquad (41)

and the theorem follows.