
Data Standardization for Robust Lip Sync

Chun Wang
Mashang Consumer Finance Co., Ltd.
Chongqing, China
[email protected]
Abstract

Lip sync is a fundamental audio-visual task. However, existing lip sync methods fall short of being robust in the wild. One important cause could be distracting factors on the visual input side, which make it difficult to extract lip motion information. To address this issue, this paper proposes a data standardization pipeline that standardizes the visual input for lip sync. Based on recent advances in 3D face reconstruction, we first create a model that can consistently disentangle lip motion information from raw images. Standardized images are then synthesized from the disentangled lip motion information, with all other attributes related to distracting factors set to predefined values independent of the input, so as to reduce their effects. Using the synthesized images, existing lip sync methods improve in data efficiency and robustness, and they achieve competitive performance on the active speaker detection task.

Index Terms:
Lip sync, robustness, lip motions, disentanglement.

I Introduction

Lip sync is a fundamental audio-visual (AV) task. Its primary function is to determine when the audio and visual streams are out of sync. For this reason, it is frequently formulated as a cross-modal matching task and resolved by comparing the representations of the two modalities [1]. Active speaker detection (ASD), whose goal is to determine which subject in the visual stream, if any, is the speaker, can be solved using such a formulation. A solid baseline for ASD can be created by performing lip sync on each subject present in the video [2].

Lip sync performance, however, degrades in real-world conditions. One major cause is the highly diverse nature of videos taken in the wild, with much of this diversity stemming from distracting factors that make extracting lip motion information difficult. For example, performance decreases significantly when faces appear in large head poses [3][4].

There are primarily two types of strategies for improving the robustness of AV methods. One is a data-driven strategy. Previous research, for example, has shown that training with a large number of non-frontal faces progressively makes AV systems more robust to large head poses [3][4]. However, adopting an entirely data-driven strategy would make lip sync data-hungry and hard to optimize, as a significant amount of data covering a wide range of factor combinations would be needed to address compound distracting factors.

As a complement, others take a more domain-knowledge-based strategy. For instance, several face frontalization techniques [5][6] have been proposed to explicitly reduce the effects of head poses. According to domain knowledge (see Fig. 1), in addition to head poses, subject-related factors (primarily identity and appearance) and scene-related factors (primarily illumination and background) should also be regarded as distracting factors, because judgments of AV synchronization should not be influenced by their variations. Meanwhile, lip motion information is the primary visual cue for lip sync. Methods that can disentangle lip motions from compound distracting factors are thus desired.


Figure 1: A domain knowledge-based model of the main factors that affect videos, with miscellaneous factors like occlusion omitted.

Lip motions can be viewed as a form of facial expression in a broader sense. ExpNet [7] and DeepExp3D [8] demonstrate that the 3D Morphable Model (3DMM) [9] may be used to acquire disentangled expressions. The 3DMM defines a parametric space where expression is unrelated to other attributes. In particular, this expression subspace is demonstrated to capture speaking-related facial motions well [8]. Moreover, the image formation model (see Sec. III) built on top of 3DMM provides a larger parametric space with additional control over attributes that are related to many factors.

This paper proposes a data standardization pipeline (DSP) that leverages the image formation model to produce standardized expressive images in which the effects of compound distracting factors are reduced. First, a network, named E-Net, is developed to consistently disentangle expression from the input at the video level. Then, using the image formation model, the expressions disentangled from the input are used to synthesize expressive images, with all other attributes corresponding to distracting factors set to predefined values independent of the input, to reduce their effects on the synthesized images. Experiments demonstrate that by taking images standardized by the DSP as input, existing lip sync methods increase their data efficiency and generalizability and achieve competitive performance on the ASD task on the recent ASW dataset [2]. The rest of the paper is organized as follows: Sec. II discusses related work. Building on the preliminaries provided in Sec. III, the DSP is then described in detail in Sec. IV. The experimental settings and results are presented in Sec. V. Finally, Sec. VI concludes the paper with a discussion of future improvements.

II RELATED WORK

3DMM coefficients. Several ambiguities make estimating 3DMM coefficients difficult [9]. A major expression-related ambiguity results from the fact that certain changes in facial shape can be ascribed to either identity variations or expressions, as can be inferred from (1). In a recent benchmark [10], Deep3D [11] and MGCNet [12] are shown to fit the mouth area well. However, they exhibit inconsistent ascriptions across images because they attempt to estimate all attributes from per-image features. To resolve the ambiguity, some works [13][14] propose learning an improved version of the 3DMM with more orthogonal identity and expression subspaces. Instead of enhancing the 3DMM itself, other researchers prefer to introduce constraints during optimization. For instance, by aggregating multiple images of a subject and assuming that they share the same underlying subject-specific attributes, [15] optimizes global attributes specific to the subject and attributes image-varying facial morphing to the expression. Despite being simpler, optimization-based methods are less robust in the wild [7]. In this paper, we integrate this global constraint into Deep3D [11] to improve disentanglement consistency. Note that this work focuses on improving the consistency of expression disentanglement at the video level, rather than on more accurate 3D face reconstruction.

Face synthesis/generation. There are various methods for face frontalization. Some use generative models [6] to translate a profile face to a frontal one. [16] blends generative models with the 3DMM for 3D face rotation. However, these methods may not preserve the expression of the input image [5], and generative models are less robust when generating videos. Others instead propose to synthesize expression-preserving frontal faces via warping [5]. However, those methods are specifically designed to address head poses, and adapting them to manage compound factors could be challenging. Alternatively, it has been shown that full parametric models (see Sec. III) can be used to synthesize faces with diversified attribute combinations [11][12]. This approach offers both controllability and stable synthesis quality. In this paper, we explore its potential for handling compound distracting factors.

Lip sync and ASD. For lip sync, SyncNet [3] is a reliable baseline. PerfectMatch [1] enhances performance by introducing a multi-way matching approach. With regard to ASD, diverse viewpoints exist. The most widely used AVA dataset [17] only demands the association of audio and visual data, whereas the more recent ASW dataset [2] requires the synchronization of the two modalities, which excludes instances like dubbed movies. As it would be impossible to handle dubbing using the lip motion cue alone, advanced relational modeling [18] would be required in that setting. Hence, we resort to ASW.

III PRELIMINARIES

The 3DMM. In the 3DMM, the textured 3D face model $\boldsymbol{F}(\boldsymbol{S},\boldsymbol{T})$ is defined by the face shape $\boldsymbol{S}$ and the face texture $\boldsymbol{T}$, and both are represented as parametric linear models:

$$\boldsymbol{S}(\boldsymbol{\alpha},\boldsymbol{\beta}) = \boldsymbol{\bar{S}} + \boldsymbol{B}_{id}\boldsymbol{\alpha} + \boldsymbol{B}_{exp}\boldsymbol{\beta} \qquad (1)$$
$$\boldsymbol{T}(\boldsymbol{\delta}) = \boldsymbol{\bar{T}} + \boldsymbol{B}_{tex}\boldsymbol{\delta} \qquad (2)$$

where $\boldsymbol{\bar{S}}$ and $\boldsymbol{\bar{T}}$ are the mean face shape and texture; $\boldsymbol{B}_{id}$, $\boldsymbol{B}_{exp}$, and $\boldsymbol{B}_{tex}$ are PCA bases for identity, expression, and texture, respectively. Specifically, bases built from the Basel Face Model 2009 [19] and FaceWarehouse [20] are adopted, so that identity $\boldsymbol{\alpha}\in\mathbb{R}^{80}$, expression $\boldsymbol{\beta}\in\mathbb{R}^{64}$, and texture $\boldsymbol{\delta}\in\mathbb{R}^{80}$.
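
For concreteness, a minimal sketch of these linear models in Python/NumPy is given below; the shapes and variable names are illustrative, and the basis matrices are assumed to be loaded from the 3DMM files.

```python
import numpy as np

def face_shape(S_bar, B_id, B_exp, alpha, beta):
    """Eq. (1): mean shape plus identity and expression offsets.

    S_bar:  (3V,)    mean face shape (V vertices, flattened xyz)
    B_id:   (3V, 80) identity PCA basis
    B_exp:  (3V, 64) expression PCA basis
    alpha:  (80,)    identity coefficients
    beta:   (64,)    expression coefficients
    """
    return S_bar + B_id @ alpha + B_exp @ beta

def face_texture(T_bar, B_tex, delta):
    """Eq. (2): mean texture plus texture offset (delta: (80,))."""
    return T_bar + B_tex @ delta
```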

The illumination model. Under the distant smooth illumination assumption and the Lambertian skin surface assumption, illumination is modeled via Spherical Harmonics (SH) [21], and the per-vertex radiosity is

$$\boldsymbol{t}^{\prime}_{i}(\boldsymbol{n}_{i},\boldsymbol{t}_{i}|\boldsymbol{\gamma}) = \boldsymbol{t}_{i}\sum_{b=1}^{B^{2}}\gamma_{b}\Phi_{b}(\boldsymbol{n}_{i}) \qquad (3)$$

where $\boldsymbol{n}_{i}$ is the normal, $\boldsymbol{t}_{i}$ is the texture color, and $\Phi_{b}$ are the SH basis functions. We choose $B=3$ for each of the red, green, and blue channels, resulting in illumination coefficients $\boldsymbol{\gamma}\in\mathbb{R}^{27}$.
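
The shading in (3) amounts to scaling each vertex color by a sum over the first $B^{2}=9$ SH basis functions evaluated at the vertex normal. A sketch, assuming the standard real SH constants and a coefficient layout of 9 values per RGB channel:

```python
import numpy as np

def sh_basis(n):
    """First 9 real spherical harmonics evaluated at unit normals n: (V, 3) -> (V, 9)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),                # Y_00
        0.488603 * y, 0.488603 * z, 0.488603 * x,  # Y_1{-1,0,1}
        1.092548 * x * y, 1.092548 * y * z,        # Y_2{-2,-1}
        0.315392 * (3 * z ** 2 - 1),               # Y_20
        1.092548 * x * z,                          # Y_21
        0.546274 * (x ** 2 - y ** 2),              # Y_22
    ], axis=1)

def illuminate(t, n, gamma):
    """Eq. (3): per-vertex radiosity.

    t:     (V, 3) per-vertex albedo (RGB)
    n:     (V, 3) unit normals
    gamma: (27,)  SH coefficients, 9 per RGB channel (layout is an assumption)
    """
    phi = sh_basis(n)                      # (V, 9)
    shading = phi @ gamma.reshape(3, 9).T  # (V, 3), one scale per channel
    return t * shading
```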

The image formation model. The pose $\boldsymbol{p}\in\mathbb{R}^{6}$ is represented by a rotation $\boldsymbol{R}\in SO(3)$, parameterized with three Euler angles, and a translation $\boldsymbol{t}\in\mathbb{R}^{3}$. Thus, the image formation process can be written as

$$\boldsymbol{I}(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\delta},\boldsymbol{\gamma},\boldsymbol{p}) = \Pi(\boldsymbol{R}\boldsymbol{F}(\boldsymbol{S},\boldsymbol{T}^{\prime}) + \boldsymbol{t}) \qquad (4)$$

where $\Pi:\mathbb{R}^{3}\rightarrow\mathbb{R}^{2}$ is the perspective projection, and $\boldsymbol{T}^{\prime}$ is the illuminated 3D face texture.

By setting proper coefficients, we can specify faces and environments accordingly and synthesize images with the corresponding attributes through graphics rendering, e.g., rasterization. See [9] for a thorough introduction.
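
A sketch of the geometric part of (4) is given below; the Euler-angle convention, focal length, and image-center values are illustrative assumptions, and a full implementation would additionally rasterize the illuminated texture onto the projected mesh.

```python
import numpy as np

def euler_to_R(yaw, pitch, roll):
    """Compose a rotation matrix from three Euler angles (radians); convention is illustrative."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return Rz @ Ry @ Rx

def project(S, p, focal=1015.0, center=112.0):
    """Eq. (4), geometry only: rotate, translate, then perspective-project the vertices.

    S: (V, 3) face shape vertices; p = (yaw, pitch, roll, tx, ty, tz).
    """
    R = euler_to_R(*p[:3])
    cam = S @ R.T + p[3:]                  # rigid transform into camera space
    uv = focal * cam[:, :2] / cam[:, 2:3]  # perspective divide
    return uv + center                     # shift to image coordinates
```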

IV APPROACH

In this section, we introduce the design requirements and implementations of the proposed DSP.


Figure 2: Lip sync with DSP schematic.

IV-A Overview of DSP

As depicted in Fig. 2, the visual front-end, audio front-end, and task-related back-end are the three parts that make up modern lip sync model architectures [1]. The DSP should be placed between the raw images and the visual front-end. It takes raw images as inputs and produces standardized images, which are then fed into the visual front-end. Further, the DSP is built using the parametric image formation model described in Sec. III and thus has two components: a network for estimating the required coefficients from the input, and a renderer that uses the coefficients to synthesize images. Their implementations should fulfill the design requirements.

  1. The DSP is robust on its own in the wild.

  2. It preserves lip motion information from the raw images.

  3. It reduces the effects of compound distracting factors.

  4. It introduces no significant new distracting factors.

The image formation model is based on physical rules, which are stable in the wild, hence the DSP meets requirement 1. To satisfy requirement 2, a network that focuses on consistently resolving expression-related ambiguities is customized by modifying [11]; see Sec. IV-B to Sec. IV-E. To satisfy requirement 3, standardized images are synthesized with modified coefficients; see Sec. IV-F. Regarding requirement 4, missing teeth in the synthesized images might indeed introduce a new distracting factor. However, experimental results presented in Sec. V suggest that this has negligible negative effects on lip sync, as the cues for lip motion remain intact.

IV-B Training Dataset Construction

For training, several image collections are necessary. First, a large number of single-speaker videos are gathered, and each video is processed for face detection/tracking and 3D landmark detection [11] to create face tracks [2]. Each face track is a temporally contiguous sequence of one subject's facial images, and a subject may show up in more than one face track. Then, around $M$ facial images are sampled from each of the high-quality face tracks [8]. Finally, the dataset contains $K$ collections, each holding $M_k$ pairs of facial images and landmarks $\{l_n\}$ of a single subject, i.e., $\boldsymbol{C}_k = \left\{\left(\boldsymbol{I}_i, \{l_n\}_i\right)\right\}_{i=1}^{M_k}$.
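
For clarity, one collection could be represented as sketched below; the field names and array shapes are our own illustrative choices, not part of the original pipeline.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Collection:
    """One collection C_k: M_k facial images of a single subject with their landmarks."""
    subject_id: int              # index k into the shared identity/texture matrices
    images: List[np.ndarray]     # M_k face crops, e.g. (224, 224, 3) each
    landmarks: List[np.ndarray]  # M_k landmark sets {l_n}, e.g. (68, 2) each
```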

We use speaking videos for the following reasons: 1) facial images from short-duration videos are more likely to fulfill the assumption that they share the same underlying subject-specific attributes; 2) they enhance training by introducing diversified lip motions and implicit temporal stability.


(a) Model architecture


(b) Face synthesis

Figure 3: Overview of the proposed approach.

IV-C Model Architecture

The model consists of a regression network and two trainable matrices, as depicted in Fig. 3(a). The estimation of per-image attributes is carried out by the regression network, termed E-Net. Similar to [11][12], the E-Net is implemented as a ResNet-50 [22] whose last fully-connected layer has 97 dimensions, regressing a 97-dimensional vector of per-image attributes ($\boldsymbol{\beta}\in\mathbb{R}^{64}$, $\boldsymbol{\gamma}\in\mathbb{R}^{27}$, $\boldsymbol{p}\in\mathbb{R}^{6}$). One of the trainable matrices is the shared identity matrix of dimension 80-by-$K$, where 80 is the dimension of the identity ($\boldsymbol{\alpha}\in\mathbb{R}^{80}$) and $K$ is the number of collections in the dataset. It is denoted as shared because the identity shared by all facial images in the $k$-th collection is represented in its $k$-th column. Similarly, the other is the shared texture matrix of dimension 80-by-$K$, since $\boldsymbol{\delta}\in\mathbb{R}^{80}$.
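
A sketch of this architecture in PyTorch, assuming a standard torchvision ResNet-50 backbone; the split of the 97-dimensional output and the two shared matrices follow the description above, while the class and variable names are ours.

```python
import torch
import torch.nn as nn
import torchvision

class ENetModel(nn.Module):
    def __init__(self, num_collections: int):
        super().__init__()
        # E-Net: ResNet-50 whose final FC regresses the 97-dim per-image attributes.
        self.enet = torchvision.models.resnet50(weights=None)
        self.enet.fc = nn.Linear(self.enet.fc.in_features, 64 + 27 + 6)
        # Shared subject-specific attributes: one 80-dim column per collection.
        self.shared_id = nn.Parameter(0.01 * torch.randn(80, num_collections))
        self.shared_tex = nn.Parameter(0.01 * torch.randn(80, num_collections))

    def forward(self, images: torch.Tensor, collection_idx: torch.Tensor):
        out = self.enet(images)                                 # (B, 97)
        beta, gamma, pose = out[:, :64], out[:, 64:91], out[:, 91:]
        alpha = self.shared_id[:, collection_idx].T             # (B, 80), indexed, not regressed
        delta = self.shared_tex[:, collection_idx].T            # (B, 80)
        return alpha, delta, beta, gamma, pose
```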

IV-D Loss Functions

To measure the discrepancy for an estimated coefficient $\boldsymbol{\hat{x}} = \{\boldsymbol{\hat{\alpha}}_{k}, \boldsymbol{\hat{\delta}}_{k}, \boldsymbol{\hat{\beta}}, \boldsymbol{\hat{p}}, \boldsymbol{\hat{\gamma}}\}$, different losses are used, following [11].

The photometric loss. It measures the photometric consistency between the input $\boldsymbol{I}$ and the synthesized one $\boldsymbol{I}^{\prime}$.

$$L_{pho}\left(\boldsymbol{x}\right) = \dfrac{\sum_{i\in\boldsymbol{\mathcal{M}}}\boldsymbol{A}_{i}\cdot\left\|\boldsymbol{I}_{i}-\boldsymbol{I}_{i}^{\prime}\left(\boldsymbol{x}\right)\right\|_{2}}{\sum_{i\in\boldsymbol{\mathcal{M}}}\boldsymbol{A}_{i}} \qquad (5)$$

where $\boldsymbol{\mathcal{M}}$ and $\boldsymbol{A}$ are two masks indicating the facial area and the per-pixel skin probability [11], respectively.

The landmark distance (LMD) loss. It measures the geometric consistency between detected landmarks $\{l_n\}$ and the landmarks $\{l_n^{\prime}\}$ reprojected from predefined 3D shape vertices [11].

$$L_{lan}\left(\boldsymbol{x}\right) = \dfrac{1}{N}\sum^{N}_{n=1}\omega_{n}\left\|l_{n}-l_{n}^{\prime}\left(\boldsymbol{x}\right)\right\|_{2} \qquad (6)$$

where $\omega_{n}$ is the weight of the $n$-th landmark. Lip landmarks are weighted 10; all others are weighted 1.

The regularization loss. It imposes zero-mean Gaussian priors on the 3DMM coefficients.

$$L_{reg}\left(\boldsymbol{x}\right) = \omega_{\boldsymbol{\alpha}}\left\|\boldsymbol{\alpha}\right\|^{2} + \omega_{\boldsymbol{\beta}}\left\|\boldsymbol{\beta}\right\|^{2} + \omega_{\boldsymbol{\delta}}\left\|\boldsymbol{\delta}\right\|^{2} \qquad (7)$$

where $\omega_{\boldsymbol{\alpha}}=0.5$, $\omega_{\boldsymbol{\beta}}=2.0$, and $\omega_{\boldsymbol{\delta}}=0.5$.

The total loss is a weighted combination of all terms:

$$L\left(\boldsymbol{x}\right) = \lambda_{pho}L_{pho}\left(\boldsymbol{x}\right) + \lambda_{lan}L_{lan}\left(\boldsymbol{x}\right) + \lambda_{reg}L_{reg}\left(\boldsymbol{x}\right) \qquad (8)$$

where $\lambda_{pho}=1.92$, $\lambda_{lan}=1.6\times 10^{-3}$, and $\lambda_{reg}=3.0\times 10^{-4}$.

Most hyper-parameters are directly adopted from [11], with stronger regularization on the expression term to prevent all variations in facial shape from being identified as expressions, which would make the identity attribute trivial [13]. Note that we do not explicitly regularize the disentanglement between texture and illumination, as both are replaced during face synthesis; however, a similar idea could be applied.
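
A sketch of the combined objective (5)-(8) in PyTorch, with the weights stated above; the rendered image, masks, and reprojected landmarks are assumed to come from elsewhere in the pipeline, and the tensor shapes are illustrative.

```python
import torch

def total_loss(I, I_syn, face_mask, skin_prob, lms, lms_proj, lm_w,
               alpha, beta, delta,
               lam_pho=1.92, lam_lan=1.6e-3, lam_reg=3.0e-4):
    """Weighted combination of photometric, landmark, and regularization losses (Eqs. 5-8).

    I, I_syn:             (B, H, W, 3) input and synthesized images
    face_mask, skin_prob: (B, H, W)    facial-area mask and skin probability
    lms, lms_proj:        (B, N, 2)    detected and reprojected landmarks
    lm_w:                 (N,)         per-landmark weights (10 for lips, 1 otherwise)
    alpha, beta, delta:   (B, 80/64/80) 3DMM coefficients
    """
    # Eq. (5): skin-weighted photometric error over the facial area.
    diff = torch.norm(I - I_syn, dim=-1)                       # per-pixel L2 over RGB
    w = face_mask * skin_prob
    l_pho = (w * diff).sum(dim=(1, 2)) / w.sum(dim=(1, 2)).clamp(min=1e-6)

    # Eq. (6): weighted landmark distance.
    l_lan = (lm_w * torch.norm(lms - lms_proj, dim=-1)).mean(dim=-1)

    # Eq. (7): Gaussian priors on 3DMM coefficients.
    l_reg = 0.5 * (alpha ** 2).sum(-1) + 2.0 * (beta ** 2).sum(-1) + 0.5 * (delta ** 2).sum(-1)

    # Eq. (8): weighted sum, averaged over the batch.
    return (lam_pho * l_pho + lam_lan * l_lan + lam_reg * l_reg).mean()
```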

IV-E Training Strategy

The whole model, including the E-Net and the two trainable matrices, is trained jointly on the constructed collections in a way similar to [11], with the following modifications. 1) As depicted in Fig. 3(a), instead of being regressed, subject-specific attributes are trainable parameters indexed from the shared matrices and are updated during training. 2) For each batch, we first sample multiple collections, then several samples from each collection. Data from the same collection constrain one another and assist in resolving ambiguities, as they share the same subject-specific attributes but have differing per-image ones.
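
The batch construction could be sketched as follows (two collections with eight samples each, matching the setting in Sec. V-B); this reuses the illustrative `Collection` structure above and simplifies what a full data loader would do.

```python
import random

def sample_batch(collections, n_collections=2, n_per_collection=8):
    """Sample several collections, then several images from each, so that items within a
    collection share subject-specific attributes but differ in their per-image ones."""
    batch = []
    for k in random.sample(range(len(collections)), n_collections):
        col = collections[k]
        for i in random.sample(range(len(col.images)), n_per_collection):
            batch.append((col.images[i], col.landmarks[i], k))  # k indexes the shared matrices
    return batch
```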

IV-F Face Synthesis

We propose to synthesize a standardized face, along with its depth, as the input for subsequent AV tasks. As described in Sec. IV-A, the effects of compound distracting factors are expected to be reduced in the synthesized images while the expression information from the raw input is preserved. This is achieved by setting appropriate coefficients in the image formation model, as described in Sec. III.

Specifically, we render with modified coefficients instead of the full set of coefficients estimated from the raw input. As depicted in Fig. 3(b), the trained E-Net estimates only the expression from the input. Meanwhile, attributes corresponding to distracting factors are explicitly set to predefined default values, regardless of the input. By rendering with the modified coefficients, the synthesized images are devoid of the effects of compound distracting factors while preserving the expression of the raw images. For the ease of subsequent AV tasks, we set uniform white illumination, mean identity, mean texture, and frontal pose as defaults, so that $\boldsymbol{\hat{x}} = \{\boldsymbol{\alpha}=\boldsymbol{0}, \boldsymbol{\delta}=\boldsymbol{0}, \boldsymbol{\hat{\beta}}, \boldsymbol{\gamma}=\boldsymbol{\gamma}_{0}, \boldsymbol{p}=\boldsymbol{p}_{0}\}$, and render an expressive image $\boldsymbol{I}^{\prime}(\boldsymbol{\hat{x}})$. Examples are given in Fig. 4, where raw images (column a) vary greatly whereas synthesized images (column e) are standardized and free of distracting factors (pose, illumination, etc.).
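
A sketch of the coefficient substitution used for standardization: only the expression is taken from the input, and all other attributes are fixed defaults. The concrete default illumination and pose values below are placeholders, not the paper's exact settings.

```python
import numpy as np

# Predefined defaults (illustrative values): uniform white light via the ambient SH term
# of each RGB channel, and a frontal pose at a fixed distance from the camera.
GAMMA_0 = np.zeros(27)
GAMMA_0[[0, 9, 18]] = 1.0
P_0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 10.0])  # zero rotation, fixed depth translation

def standardized_coeffs(beta_hat):
    """Keep the estimated expression; reset all other attributes to defaults."""
    return {
        "alpha": np.zeros(80),  # mean identity
        "delta": np.zeros(80),  # mean texture
        "beta": beta_hat,       # expression disentangled from the input
        "gamma": GAMMA_0,       # uniform white illumination
        "pose": P_0,            # frontal pose
    }
```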

We can also render a corresponding pseudo depth map $\boldsymbol{D}^{\prime}$ to describe 3D lip motions more accurately. This is done by rendering with $\boldsymbol{Z}^{\prime}_{\boldsymbol{S}}$ instead of the illuminated $\boldsymbol{T}^{\prime}$ as the texture (see (4)), where $\boldsymbol{Z}^{\prime}_{\boldsymbol{S}}$ is the normalized Z-coordinate of $\boldsymbol{S}$,

$$\boldsymbol{Z}^{\prime}_{\boldsymbol{S}} = \frac{\boldsymbol{Z}_{\boldsymbol{S}} - \boldsymbol{Z}_{min}}{\boldsymbol{Z}_{max} - \boldsymbol{Z}_{min}}, \qquad (9)$$

where $\boldsymbol{Z}_{min} = \min(\boldsymbol{Z}_{\boldsymbol{S}_i})$ and $\boldsymbol{Z}_{max} = \max(\boldsymbol{Z}_{\boldsymbol{S}_i})$ are the minimum and maximum depths over the video clip, respectively.
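
Eq. (9) is a per-clip min-max normalization of the Z-coordinates; a minimal sketch:

```python
import numpy as np

def normalize_depth(z_clip):
    """Eq. (9): normalize vertex Z-coordinates of a whole clip to [0, 1].

    z_clip: (F, V) Z-coordinates for all F frames, so the min/max are taken over the clip.
    """
    z_min, z_max = z_clip.min(), z_clip.max()
    return (z_clip - z_min) / (z_max - z_min)
```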

Overall, we synthesize standardized expressive RGB/RGBD images from raw facial images. Notably, we also reduce the effect of the often poor image quality found in raw data via synthesis.

V EXPERIMENTS

V-A Datasets

We develop the proposed model and the lip-sync model using VoxCeleb2 [23]. Its dev-split has over 1 million utterances from 145,569 videos of 5,994 subjects, and its test-split has 36,237 utterances from 4,911 videos of 118 subjects.

To develop the proposed model, we collect 5,000 subjects from the dev-split. Ten utterances are sampled from each of the two videos for each subject, resulting in $K=50k$ collections, each having about $M=50$ samples. Separately, a minidev-split consisting of 5,000 utterances from 500 subjects is sampled from the dev-split for training the lip-sync model, while the entire test-split is used for evaluation. Note that the minidev-split is less than $1/7$ the size of the test-split.

Regarding the ASD task, we directly evaluate lip-sync models trained on VoxCeleb2 on the test split of the ASW dataset [2]. The test split consists of 53 videos (51 accessible), with 4.5 hours of active (i.e., in-sync) face tracks and 3.4 hours of inactive ones.


Figure 4: Snapshots of synthesized data. (a) raw images; (b)-(d) synthesized RGB, with MGCNet estimated id, Deep3D estimated id, and Deep3D estimated id and exp, respectively; (e)-(f) synthesized RGBD with E-Net estimated exp.
TABLE I: Lip-fit accuracies. Lower is better.
Method | Attributes | Group LMD (min) | Group LMD (max) | Group LMD (avg.)
GT | $\boldsymbol{\alpha}_{gt}$, $\boldsymbol{\beta}_{gt}$ | 1.085 | 1.093 | 1.091
Deep3D [11] | $\boldsymbol{\alpha}_{gt}$, $\boldsymbol{\hat{\beta}}$ | 2.948 | 2.975 | 2.967
Deep3D [11] | $\boldsymbol{\hat{\alpha}}$, $\boldsymbol{\hat{\beta}}$ | 2.339 | 2.369 | 2.358
MGCNet [12] | $\boldsymbol{\alpha}_{gt}$, $\boldsymbol{\hat{\beta}}$ | 3.353 | 3.372 | 3.365
MGCNet [12] | $\boldsymbol{\hat{\alpha}}$, $\boldsymbol{\hat{\beta}}$ | 2.902 | 2.920 | 2.912
Ours (E-Net) | $\boldsymbol{\alpha}_{gt}$, $\boldsymbol{\hat{\beta}}$ | 1.235 | 1.241 | 1.240

V-B Results on Expression Disentanglement

According to Sec. IV-E, we first train the proposed model. More specifically, the two matrices are initialized with Gaussian noise $\mathcal{N}(\mu=0, \sigma=0.01)$ and the E-Net is ImageNet-pretrained [24]. Two collections, with eight samples from each, are sampled for every batch. With a constant learning rate of $1.0\times 10^{-4}$, the model is trained using the Adam optimizer for 20 epochs.

Due to the lack of datasets with ground truth (GT) expressions of $\boldsymbol{\beta}\in\mathbb{R}^{64}$, we use 5-fold cross-validation for the evaluation. Particularly, the $50k$ collections are split into 5 groups. Each time, we train with four groups and evaluate on the holdout group, whose GT coefficients are obtained by overfitting it with the proposed method. We use the LMD (see (6)) on lip landmarks as the metric. The group LMD is defined as the mean of the collection LMDs, each of which is the mean of its data LMDs. We form mixed coefficients in order to isolate the effects of certain estimated attributes. For instance, the mixed coefficients are formed as $\boldsymbol{\hat{x}} = \{\boldsymbol{\alpha}_{gt}, \boldsymbol{\delta}_{gt}, \boldsymbol{\hat{\beta}}, \boldsymbol{p}_{gt}, \boldsymbol{\gamma}_{gt}\}$ in order to evaluate the estimated expression.

We compare with Deep3D [11] and MGCNet [12]. The min, max, and average group LMDs are reported in Tab. I, and results with GTs are included as references. For Deep3D and MGCNet, results with estimated expressions are worse than their counterparts with GT expressions, and additionally performing image-wise identity estimation improves the results. These results suggest that they incorrectly ascribe some parts of facial morphing to identity variations (see Fig. 4(b)(c)), and as a result, their estimated expressions, when used alone, cannot adequately describe lip motions. In contrast, our results using E-Net estimated expressions are more in line with those using GTs, showing that the E-Net more consistently attributes facial morphing to expressions.

TABLE II: Lip-sync accuracies of different methods.
Input Mode | Attributes | Train | Eval | Acc.
Raw data | none | dev | test | 94.1%
Raw data | none | minidev | test | 88.9%
RGB ([11]) | $\boldsymbol{\hat{\alpha}}$, $\boldsymbol{\hat{\beta}}$ | minidev | test | 97.4%
RGBD ([11]) | $\boldsymbol{\hat{\alpha}}$, $\boldsymbol{\hat{\beta}}$ | minidev | test | 98.1%
RGB (ours) | $\boldsymbol{\hat{\beta}}$ | minidev | test | 99.1%
RGBD (ours) | $\boldsymbol{\hat{\beta}}$ | minidev | test | 99.2%
TABLE III: ASD performances of different methods.
Method | AP | AUROC | EER
Self-supervised [2][25] | 0.924 | 0.962 | 0.083
RGBD ([11]) | 0.929 | 0.954 | 0.109
RGBD (ours) | 0.957 | 0.971 | 0.079

V-C Results on Audio-Visual Tasks

We conduct experiments on the lip sync and ASD tasks. We adopt the self-supervised PerfectMatch [1] for lip sync, using synthesized images as the visual input and Mel-frequency cepstral coefficients (MFCCs) as the audio input. We first train a model on the dev-split of VoxCeleb2 using the raw face tracks as a baseline for comparison. Then, a variety of versions are trained using the minidev-split, including ones trained on raw face tracks, on images synthesized with E-Net estimated expressions, and on images synthesized with identities and expressions estimated by Deep3D.

For evaluation, we follow [1] and extract audio and visual features from every 0.2 s segment with a stride of 0.04 s. Then, we calculate the cosine similarity between each visual feature and each audio feature within a $\pm 15$ frame window and determine the offset giving the minimum distance. A determined offset is considered correct if it is within $\pm 1$ frame of the GT offset.
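
A sketch of this evaluation step, assuming precomputed visual and audio feature sequences at the same 0.04 s stride; cosine distance is used as in the text, and the function and variable names are ours.

```python
import numpy as np

def best_offset(vis_feats, aud_feats, max_offset=15):
    """Find the audio-visual offset with the minimum average cosine distance.

    vis_feats, aud_feats: (T, D) feature sequences at the same frame rate.
    """
    def cos_dist(a, b):
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return 1.0 - (a * b).sum(-1)

    T = len(vis_feats)
    scores = {}
    for off in range(-max_offset, max_offset + 1):
        # Overlapping region of the two sequences under this offset.
        v0, a0 = max(0, -off), max(0, off)
        n = min(T - v0, T - a0)
        if n > 0:
            scores[off] = cos_dist(vis_feats[v0:v0 + n], aud_feats[a0:a0 + n]).mean()
    return min(scores, key=scores.get)  # offset giving the minimum distance

# The prediction counts as correct if abs(best_offset(...) - gt_offset) <= 1.
```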

The evaluation results are listed in Tab. II. With less data, models trained with synthesized images outperform those trained with raw face tracks. Further, models trained with RGBD images yield better results, possibly because depth information makes 3D lip motions more distinct; see Fig. 4. In particular, models trained with E-Net estimated expressions consistently outperform the others, suggesting that an improved lip motion description may directly benefit lip sync.

In the ASD task, the trained lip-sync models are directly applied to ASW's test split [2] without training or finetuning on ASW. Following [2], an AV pair whose score exceeds a threshold is deemed active. Our pipeline preprocesses the face tracks, synthesizes images, and averages cosine similarities across 15 frames for robustness, as suggested in [1]. Results in Tab. III reveal that the lip-sync models exhibit strong generalizability by matching the strong baseline [2] in all metrics, including average precision (AP), area under the receiver operating characteristic (AUROC), and equal error rate (EER). This evaluation is challenging, as our models are evaluated across datasets and trained with less data.
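
The decision rule could be sketched as follows; the 15-frame averaging follows [1], while the threshold itself is the quantity swept to produce the AP/AUROC/EER curves, and the exact smoothing scheme here is an assumption.

```python
import numpy as np

def asd_scores(frame_similarities, window=15):
    """Smooth per-frame AV cosine similarities with a moving average over `window` frames."""
    kernel = np.ones(window) / window
    return np.convolve(frame_similarities, kernel, mode="same")

def is_active(frame_similarities, threshold):
    """A face track is deemed active wherever its smoothed similarity exceeds the threshold."""
    return asd_scores(frame_similarities) > threshold
```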

VI CONCLUSION

This paper introduces a DSP for lip sync, which is customized using 3D face reconstruction techniques. This customization provides flexibility for incorporating domain knowledge and enables the handling of compound distracting factors. Preliminary results suggest that by synthesizing standardized expressive images with disentangled lip motion information, the DSP enhances the data efficiency and robustness of existing lip sync methods.

In the future, this conceptual DSP could be enhanced with more advanced 3D face reconstruction techniques. Specifically, an improved 3D face model such as FLAME [26] might be more expressive and could potentially capture expressions more accurately. Additionally, instead of using rendered RGB/RGBD images, directly utilizing the expressive 3D face mesh as the visual representation could lead to a better implementation of the DSP, as more information is retained. Finally, we would like to emphasize that in the data-driven era, domain knowledge remains valuable. Correctly utilizing domain knowledge may aid other AV tasks that face similar challenges, such as lip reading [27], emotion recognition [28], and digital humans [29], particularly when the collection and sanity-checking of multi-modality datasets are difficult.

References

  • [1] S.-W. Chung, J. S. Chung, and H.-G. Kang, “Perfect match: Improved cross-modal embeddings for audio-visual synchronisation,” in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 3965–3969.
  • [2] Y. J. Kim, H.-S. Heo, S. Choe, S.-W. Chung, Y. Kwon, B.-J. Lee, Y. Kwon, and J. S. Chung, “Look Who’s Talking: Active Speaker Detection in the Wild,” in Proc. Interspeech 2021, 2021, pp. 3675–3679.
  • [3] J. S. Chung and A. Zisserman, “Lip reading in profile,” in Proceedings of the British Machine Vision Conference (BMVC), September 2017, pp. 155.1–155.11.
  • [4] S. Cheng, P. Ma, G. Tzimiropoulos, S. Petridis, A. Bulat, J. Shen, and M. Pantic, “Towards pose-invariant lip-reading,” in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 4357–4361.
  • [5] Z. Kang, R. Horaud, and M. Sadeghi, “Robust face frontalization for visual speech recognition,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, October 2021, pp. 2485–2495.
  • [6] A. Koumparoulis and G. Potamianos, “Deep view2view mapping for view-invariant lipreading,” in 2018 IEEE Spoken Language Technology Workshop (SLT), 2018, pp. 588–594.
  • [7] F.-J. Chang, A. Tuan Tran, T. Hassner, I. Masi, R. Nevatia, and G. Medioni, “Expnet: Landmark-free, deep, 3d facial expressions,” in 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), 2018, pp. 122–129.
  • [8] M. R. Koujan, L. Alharbawee, G. Giannakakis, N. Pugeault, and A. Roussos, “Real-time facial expression recognition “in the wild” by disentangling 3d expression from identity,” in 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), 2020, pp. 24–31.
  • [9] B. Egger, W. A. P. Smith, A. Tewari, S. Wuhrer, M. Zollhoefer, T. Beeler, F. Bernard, T. Bolkart, A. Kortylewski, S. Romdhani, C. Theobalt, V. Blanz, and T. Vetter, “3d morphable face models—past, present, and future,” ACM Trans. Graph., vol. 39, no. 5, jun 2020.
  • [10] Z. Chai, H. Zhang, J. Ren, D. Kang, Z. Xu, X. Zhe, C. Yuan, and L. Bao, “Realy: Rethinking the evaluation of 3d face reconstruction,” in Proceedings of the European Conference on Computer Vision (ECCV), 2022.
  • [11] Y. Deng, J. Yang, S. Xu, D. Chen, Y. Jia, and X. Tong, “Accurate 3d face reconstruction with weakly-supervised learning: From single image to image set,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019.
  • [12] J. Shang, T. Shen, S. Li, L. Zhou, M. Zhen, T. Fang, and L. Quan, “Self-supervised monocular 3d face reconstruction by occlusion-aware multi-view geometry consistency,” in Computer Vision – ECCV 2020, A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, Eds.   Cham: Springer International Publishing, 2020, pp. 53–70.
  • [13] A. Tewari, F. Bernard, P. Garrido, G. Bharaj, M. Elgharib, H.-P. Seidel, P. Perez, M. Zollhofer, and C. Theobalt, “Fml: Face model learning from videos,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
  • [14] B. Mallikarjun, A. Tewari, H.-P. Seidel, M. Elgharib, and C. Theobalt, “Learning complete 3d morphable face models from images and videos,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2021, pp. 3361–3371.
  • [15] J. Thies, M. Zollhofer, M. Stamminger, C. Theobalt, and M. Niessner, “Face2face: Real-time face capture and reenactment of rgb videos,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
  • [16] H. Zhou, J. Liu, Z. Liu, Y. Liu, and X. Wang, “Rotate-and-render: Unsupervised photorealistic face rotation from single-view images,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
  • [17] J. Roth, S. Chaudhuri, O. Klejch, R. Marvin, A. Gallagher, L. Kaver, S. Ramaswamy, A. Stopczynski, C. Schmid, Z. Xi, and C. Pantofaru, “Ava active speaker: An audio-visual dataset for active speaker detection,” in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 4492–4496.
  • [18] O. Köpüklü, M. Taseska, and G. Rigoll, “How to design a three-stage architecture for audio-visual active speaker detection in the wild,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2021, pp. 1193–1203.
  • [19] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter, “A 3d face model for pose and illumination invariant face recognition,” in 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, 2009, pp. 296–301.
  • [20] C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou, “Facewarehouse: A 3d facial expression database for visual computing,” IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 3, pp. 413–425, 2014.
  • [21] R. Ramamoorthi and P. Hanrahan, “An efficient representation for irradiance environment maps,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH ’01.   New York, NY, USA: Association for Computing Machinery, 2001, p. 497–500.
  • [22] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
  • [23] J. S. Chung, A. Nagrani, and A. Zisserman, “Voxceleb2: Deep speaker recognition,” Proc. Interspeech 2018, pp. 1086–1090, 2018.
  • [24] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” International journal of computer vision, vol. 115, no. 3, pp. 211–252, 2015.
  • [25] Y. J. Kim, H.-S. Heo, S. Choe, S.-W. Chung, Y. Kwon, B.-J. Lee, Y. Kwon, and J. S. Chung, “Look who’s talking: Active speaker detection in the wild,” arXiv preprint arXiv:2108.07640, 2021.
  • [26] T. Li, T. Bolkart, M. J. Black, H. Li, and J. Romero, “Learning a model of facial shape and expression from 4d scans,” ACM Trans. Graph., vol. 36, no. 6, nov 2017.
  • [27] Z. Peng, Y. Luo, Y. Shi, H. Xu, X. Zhu, H. Liu, J. He, and Z. Fan, “Selftalk: A self-supervised commutative training diagram to comprehend 3d talking faces,” in Proceedings of the 31st ACM International Conference on Multimedia, ser. MM ’23.   New York, NY, USA: Association for Computing Machinery, 2023, p. 5292–5301.
  • [28] E. Pei, M. C. Oveneke, Y. Zhao, D. Jiang, and H. Sahli, “Monocular 3d facial expression features for continuous affect recognition,” IEEE Transactions on Multimedia, vol. 23, pp. 3540–3550, 2021.
  • [29] R. Daněček, K. Chhatre, S. Tripathi, Y. Wen, M. Black, and T. Bolkart, “Emotional speech-driven animation with content-emotion disentanglement,” in SIGGRAPH Asia 2023 Conference Papers, ser. SA ’23.   New York, NY, USA: Association for Computing Machinery, 2023.