
¹ Max Planck Institute for Informatics, Saarbruecken 66123, Germany
  https://www.mpi-inf.mpg.de/home/
  {mhaberma, wxu, gpons, theobalt}@mpi-inf.mpg.de
² EPFL, Lausanne CH-1015, Switzerland
  https://www.epfl.ch/
  [email protected]
³ Stanford University, Stanford CA 94305, USA
  https://www.stanford.edu/
  [email protected]

NRST: Non-rigid Surface Tracking from Monocular Video

Marc Habermann¹ (ORCID 0000-0003-3899-7515), Weipeng Xu¹ (0000-0001-9548-5108), Helge Rhodin² (0000-0003-2692-0801), Michael Zollhöfer³ (0000-0003-1219-0625), Gerard Pons-Moll¹ (0000-0001-5115-7794), Christian Theobalt¹ (0000-0001-6104-6625)
Abstract

We propose an efficient method for non-rigid surface tracking from monocular RGB videos. Given a video and a template mesh, our algorithm sequentially registers the template non-rigidly to each frame. We formulate the per-frame registration as an optimization problem that includes a novel texture term specifically tailored towards tracking objects with uniform texture but fine-scale structure, such as the regular micro-structural patterns of fabric. Our texture term exploits the orientation information in the micro-structures of the objects, e.g., the yarn patterns of fabrics. This enables us to accurately track uniformly colored materials that have these high frequency micro-structures, for which traditional photometric terms are usually less effective. The results demonstrate the effectiveness of our method on both general textured non-rigid objects and monochromatic fabrics.

Figure 1: We propose an efficient method for interactive non-rigid surface tracking from monocular RGB videos for general objects such as faces ((a)-(d)). Given the input image (a), our reconstruction overlays nicely with the input (b) and also looks plausible from another viewpoint (c). The textured overlay looks realistic as well (d). Furthermore, our novel texture term leads to improved reconstruction quality for fabrics given a single video (e). Again the overlaid reconstruction (f) aligns well, and also in 3D (g) our result (red) matches the ground truth (blue).

1 Introduction

In this paper, we propose NRST, an efficient method for non-rigid surface tracking from monocular RGB videos. Capturing the non-rigid deformation of a dynamic surface is an important and long-standing problem in computer vision. It has a wide range of real-world applications in fields such as virtual/augmented reality, medicine and visual effects. Most existing methods are based on multi-view imagery, which requires expensive and complicated system setups [3, 25, 23]. There also exist methods that rely on only a single depth or RGB-D camera [42, 44, 19, 18]. However, these sensors are not as ubiquitous as RGB cameras, and such methods cannot be applied to the abundant existing video footage found on social media platforms such as YouTube. There are also monocular RGB methods [43, 30], which come with their own limitations; e.g., they rely on highly textured surfaces and are often slow.

In this work, we present a method which is able to densely track the non-rigid deformations of general objects such as faces and fabrics from a single RGB video. To solve this challenging problem, our method relies on a textured mesh template of the deforming object's surface. Given the input video, our algorithm sequentially registers the template to each frame. More specifically, our method automatically reproduces a deformation sequence of the template model that coincides with the non-rigid surface motion in the video. To this end, we formulate the per-frame registration as a non-linear least squares optimization problem with an objective function consisting of a photometric alignment term and several regularization terms. The optimization is computationally intensive due to the large number of residuals in our alignment objective. To address this, we adapt the efficient GPU-based Gauss-Newton solver of Zollhoefer et al. [44] to our problem, which allows for deformable object tracking at interactive frame rates.

Besides the efficiency of the algorithm, the core contribution of our approach is a novel texture term that exploits the orientation information in the micro-structures of the tracked objects, such as the yarn patterns of fabrics. This enables us to track uniformly colored materials which have high frequency patterns, for which the classical color-based term is usually less effective.

In our experimental results, we evaluate our method qualitatively and quantitatively on several challenging sequences of deforming surfaces. We use well established benchmarks, such as pieces of cloth [40, 31] and human faces [43, 39]. The results demonstrate that our method can accurately track general non-rigid objects. Furthermore, for materials with regular micro-structural patterns, such as fabrics, the tracking accuracy is further improved with our texture term.

2 Related Work

There is a variety of approaches that reconstruct geometry from multiple images, e.g., template-free methods [3], variational ones [25] or object-specific approaches [23]. Although multi-view methods can produce accurate tracking results, their setups are expensive and hard to operate. Some approaches use a single RGB-D sensor instead [42, 44, 19, 18, 9, 10, 36]. They capture deformable surfaces accurately and at high efficiency; some even build up a template model alongside the per-frame reconstruction. The main limitations of these methods are that the sensors have high power consumption, do not work outdoors, require the object to be close to the camera, and cannot exploit the large amount of RGB-only video footage provided by social media. On these grounds, we aim for a method that uses just a single RGB video as input. In the following, we focus on related monocular reconstruction and tracking approaches.

Monocular Methods. Non-rigid structure from motion methods, which do not rely on any template, try to infer the 3D geometry from a single video by using a prior-free formulation [4], global models [37], local ones [27] or a variational formulation [6]. However, they often capture the deformations only coarsely, cannot model strong deformations, typically require strongly textured objects or rely on dense 2D correspondences. By constraining the setting to specific types of objects such as faces [7], very accurate reconstructions can be obtained, but at the expense of generality. Since several recent approaches [22, 11] build a 3D model from a set of images, and commercial software (http://www.agisoft.com/) is available for this task, template acquisition has become easier. Templates are an effective prior for the challenging task of estimating non-rigid deformations from single images, as demonstrated by previous work [30, 31, 32, 33, 2, 24, 15, 21, 1, 40, 43, 17, 28, 16, 29, 14]. But even if a template is used, ambiguities [30] remain and additional constraints have to be imposed. Theoretical results [1] show that only allowing isometric deformations [24] results in a uniquely defined solution. Therefore, approaches constrain the deformation space in several ways, e.g., by a Laplacian regularization [21] or by non-linear [32] or linear local surface models [29]. Salzmann et al. [28] argued that relaxing the isometric constraint is beneficial since it allows modeling sharp folds. Moreno-Noguer et al. [17] and Malti et al. [16] go even further and show results for elastic surfaces; Tsoli and Argyros [38] demonstrated tracking of surfaces that undergo topological changes, but require a depth camera. Other approaches investigate how to make reconstruction more robust under faster motions [33] and occlusions [20], or try to replace the feature-based data term by a dense pixel-based one [15] and to find better texture descriptors [26, 8, 12]. Brunet et al. [2] and Yu et al. [43] formulate the problem of estimating non-rigid deformations as minimizing an objective function, which brings them closest to our formulation. In particular, we adopt the photometric, spatial and temporal terms of Yu et al. [43] and combine them with an isometric and an acceleration constraint as well as our novel texture term.

Along the line of monocular methods, we propose NRST, a template-based reconstruction framework that estimates the non-rigidly deforming geometry of general objects from just monocular video. In contrast to previous work, our approach does not rely on 3D to 2D correspondences and due to the GPU-based solver architecture it is also much faster than previous approaches. Furthermore, our novel texture term enables tracking of regions with little texture.

3 Method

The goal is to estimate the non-rigid deformation of an object from $T$ frames $I^{t}(x,y)$ with $t\in\{1,\dots,T\}$. We assume a static camera and known camera intrinsics. Since this problem is in general severely under-constrained, we assume that a template triangle mesh of the object to be tracked is given as the matrix $\hat{\mathbf{V}}\in\mathbb{R}^{N\times 3}$, where each row contains the coordinates of one of the $N$ vertices. Accordingly, $\hat{\mathbf{V}}_{i}$ denotes the $i$th vertex of the template in vector form; this notation is also used for the following matrices. The edges of the template are given by the mapping $\mathcal{N}(i)$: given a vertex index $i\in\{1,2,\dots,N\}$, it returns the set of indices sharing an edge with $\hat{\mathbf{V}}_{i}$. The $F$ faces of the mesh are represented as the matrix $\mathbf{F}\in\{1,\dots,N\}^{F\times 3}$, where each row contains the vertex indices of one triangle. The UV map is given as the matrix $\mathbf{U}\in\mathbb{N}^{N\times 2}$; each row contains the UV coordinates of the corresponding vertex. The color $\mathbf{C}_{i}\in\{0,\dots,255\}^{3}$ of vertex $i$ can be computed by a simple lookup in the texture map $I_{\mathrm{TM}}$ at position $\mathbf{U}_{i}$. The colors of all vertices are stored in the matrix $\mathbf{C}\in\{0,\dots,255\}^{N\times 3}$. Furthermore, it is assumed that the geometry at time $t=1$ roughly agrees with the true shape shown in the video, so that the gradients of the photometric term can guide the optimization to the correct solution without getting trapped in local minima. The non-rigidly deformed mesh at time $t+1$ is represented as the matrix $\mathbf{V}^{t+1}\in\mathbb{R}^{N\times 3}$ and contains the updated vertex positions according to the 3D displacement from $t$ to $t+1$.
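To make the notation concrete, the following NumPy sketch shows how the per-vertex color matrix $\mathbf{C}$ could be obtained from the texture map via the UV coordinates. It is only an illustration under our own assumptions (array names, integer-pixel UV coordinates in column/row order, nearest-texel lookup); it is not part of the paper's implementation.

```python
import numpy as np

def vertex_colors(texture_map, uv):
    """Look up the color C_i of every vertex i in the texture map I_TM.

    texture_map : (H, W, 3) uint8 image I_TM
    uv          : (N, 2) integer UV coordinates, one row per vertex
                  (assumed to be in (column, row) order)
    returns     : (N, 3) uint8 color matrix C
    """
    cols = uv[:, 0]                     # horizontal texture coordinate
    rows = uv[:, 1]                     # vertical texture coordinate
    return texture_map[rows, cols]      # nearest-texel lookup

# toy example: a 4x4 texture and three vertices
texture = (np.random.rand(4, 4, 3) * 255).astype(np.uint8)
uv = np.array([[0, 0], [3, 1], [2, 2]])
print(vertex_colors(texture, uv))
```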

3.1 Non-rigid Tracking as Energy Minimization

Given the template $\hat{\mathbf{V}}$ and our estimate $\mathbf{V}^{t}$ of the previous frame, our method sequentially estimates the geometry $\mathbf{V}^{t+1}$ of the current frame $t+1$. We jointly optimize per-vertex local rotations, denoted by $\mathbf{\Phi}^{t+1}$, and vertex locations $\mathbf{V}^{t+1}$. Specifically, for each time step the deformation estimation is formulated as the non-linear optimization problem

$(\mathbf{V}^{t+1},\mathbf{\Phi}^{t+1})=\operatorname*{arg\,min}_{\mathbf{V},\mathbf{\Phi}\in\mathbb{R}^{N\times 3}} E(\mathbf{V},\mathbf{\Phi}),$   (1)

with

$E(\mathbf{V},\mathbf{\Phi}) = \lambda_{\mathrm{Photo}}E_{\mathrm{Photo}}(\mathbf{V}) + \lambda_{\mathrm{Smooth}}E_{\mathrm{Smooth}}(\mathbf{V}) + \lambda_{\mathrm{Edge}}E_{\mathrm{Edge}}(\mathbf{V}) + \lambda_{\mathrm{Arap}}E_{\mathrm{Arap}}(\mathbf{V},\mathbf{\Phi}) + \lambda_{\mathrm{Vel}}E_{\mathrm{Vel}}(\mathbf{V}) + \lambda_{\mathrm{Acc}}E_{\mathrm{Acc}}(\mathbf{V}).$   (2)

$\lambda_{\mathrm{Photo}}$, $\lambda_{\mathrm{Smooth}}$, $\lambda_{\mathrm{Edge}}$, $\lambda_{\mathrm{Arap}}$, $\lambda_{\mathrm{Vel}}$ and $\lambda_{\mathrm{Acc}}$ are hyperparameters that are set before the optimization starts and kept constant afterwards. $E(\mathbf{V},\mathbf{\Phi})$ combines different cost terms ensuring that the mesh deformations agree with the motion in the video. The resulting non-linear least squares optimization problem is solved with a GPU-based Gauss-Newton solver based on the work of Zollhoefer et al. [44], where we adapted the Jacobian and residual implementation to our energy formulation. The high efficiency is obtained by exploiting the sparse structure of the system of normal equations. For more details we refer the reader to Zollhoefer et al. [44]. In the following, we explain the individual terms in more detail.
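To illustrate the overall optimization structure, the sketch below shows a simple damped Gauss-Newton loop over a stacked residual vector. It is a simplified CPU stand-in for the data-parallel GPU solver of Zollhoefer et al. [44] that the paper actually adapts: the finite-difference Jacobian, the damping value and the iteration count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gauss_newton(residual_fn, x0, iters=10, damping=1e-4, eps=1e-6):
    """Minimize ||r(x)||^2 with a damped Gauss-Newton loop.

    residual_fn : maps the parameter vector x to the stacked residual vector
                  (conceptually, all weighted terms of Eq. 2 concatenated)
    x0          : initial guess, e.g. the previous frame's solution
    """
    x = x0.astype(np.float64).copy()
    for _ in range(iters):
        r = residual_fn(x)
        # finite-difference Jacobian; the GPU solver evaluates it analytically
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual_fn(xp) - r) / eps
        # normal equations: (J^T J + damping I) dx = -J^T r
        H = J.T @ J + damping * np.eye(x.size)
        dx = np.linalg.solve(H, -(J.T @ r))
        x += dx
    return x

# toy usage: minimize ||x - [1, 2]||^2
print(gauss_newton(lambda x: x - np.array([1.0, 2.0]), np.zeros(2)))
```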

Photometric Alignment. The photometric term

$E_{\mathrm{Photo}}(\mathbf{V})=\sum_{i=1}^{N}\left\lVert\sigma\left(I^{t+1}\ast\mathbf{G}_{w}\left(\Pi\left(\mathbf{V}_{i}\right)\right)-\mathbf{C}_{i}\right)\right\rVert^{2}$   (3)

densely measures the re-projection error. $\left\lVert\cdot\right\rVert$ is the Euclidean norm, $\ast$ is the convolution operator and $\mathbf{G}_{w}$ is a Gaussian kernel with standard deviation $w$. We apply Gaussian smoothing to the input frame to obtain more stable and longer-range gradients. $\Pi(\mathbf{V}_{i})=(\frac{u}{w},\frac{v}{w})^{\top}$ with $(u,v,w)^{\top}=\mathbf{I}\mathbf{V}_{i}$ projects the vertex $\mathbf{V}_{i}$ onto the image plane, and $I^{t+1}\ast\mathbf{G}_{w}$ returns the RGB color vector of the smoothed frame at position $\Pi(\mathbf{V}_{i})$, which is compared against the pre-computed and constant vertex color $\mathbf{C}_{i}$. Here, $\mathbf{I}\in\mathbb{R}^{3\times 3}$ is the intrinsic camera matrix. $\sigma$ is a robust pruning function that rejects wrong correspondences with respect to color similarity; more specifically, we discard residuals above a certain threshold because in most cases they are due to occlusions.
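A minimal NumPy sketch of these per-vertex photometric residuals follows. The smoothing parameters, the nearest-pixel lookup and the pruning threshold are our own assumptions (the paper differentiates through the smoothed image, which requires a sub-pixel lookup); scipy's gaussian_filter stands in for the convolution $I^{t+1}\ast\mathbf{G}_{w}$.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def photometric_residuals(V, C, frame, K, sigma_px=2.0, prune_thresh=80.0):
    """Per-vertex residuals of E_Photo for one frame.

    V     : (N, 3) current vertex estimates
    C     : (N, 3) constant template vertex colors
    frame : (H, W, 3) RGB image I^{t+1}
    K     : (3, 3) intrinsic camera matrix
    """
    smoothed = gaussian_filter(frame.astype(np.float64), sigma=(sigma_px, sigma_px, 0))
    uvw = (K @ V.T).T                       # project: (u, v, w) = K V_i
    xy = uvw[:, :2] / uvw[:, 2:3]           # perspective division
    cols = np.clip(np.round(xy[:, 0]).astype(int), 0, frame.shape[1] - 1)
    rows = np.clip(np.round(xy[:, 1]).astype(int), 0, frame.shape[0] - 1)
    res = smoothed[rows, cols] - C          # color difference per vertex
    # sigma: discard residuals whose norm exceeds the threshold (likely occlusions)
    res[np.linalg.norm(res, axis=1) > prune_thresh] = 0.0
    return res
```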

Spatial Smoothness. Without regularization, estimating 3D geometry from a single image is an ill-posed problem. Therefore, we introduce several spatial and temporal regularizers to make the problem well-posed and to propagate 3D deformations into areas where information for data terms is missing, e.g., poorly textured or occluded regions. The first prior

$E_{\mathrm{Smooth}}(\mathbf{V})=\sum_{i=1}^{N}\sum_{j\in\mathcal{N}(i)}\left\lVert\left(\mathbf{V}_{i}-\mathbf{V}_{j}\right)-(\hat{\mathbf{V}}_{i}-\hat{\mathbf{V}}_{j})\right\rVert^{2}$   (4)

ensures that if a vertex $\mathbf{V}_{i}$ changes its position, its neighbors $\mathbf{V}_{j}$ with $j\in\mathcal{N}(i)$ are deformed such that the overall shape remains spatially smooth compared to the template mesh $\hat{\mathbf{V}}$. In addition, the prior

$E_{\mathrm{Edge}}(\mathbf{V})=\sum_{i=1}^{N}\sum_{j\in\mathcal{N}(i)}\left(\left\lVert\mathbf{V}_{i}-\mathbf{V}_{j}\right\rVert-\lVert\hat{\mathbf{V}}_{i}-\hat{\mathbf{V}}_{j}\rVert\right)^{2}$   (5)

enforces isometric deformations, i.e., the edge lengths of the template are preserved. In contrast to $E_{\mathrm{Smooth}}$, this prior is rotation invariant. Finally, the as-rigid-as-possible (ARAP) prior [34]

$E_{\mathrm{Arap}}(\mathbf{V},\mathbf{\Phi})=\sum_{i=1}^{N}\sum_{j\in\mathcal{N}(i)}\left\lVert\left(\mathbf{V}_{i}-\mathbf{V}_{j}\right)-\mathcal{R}(\mathbf{\Phi}_{i})(\hat{\mathbf{V}}_{i}-\hat{\mathbf{V}}_{j})\right\rVert^{2},$   (6)

allows local rotations for each mesh vertex as long as the relative position with respect to its neighborhood remains the same. Each row of the matrix $\mathbf{\Phi}\in\mathbb{R}^{N\times 3}$ contains the per-vertex Euler angles which encode a local rotation around $\mathbf{V}_{i}$; $\mathcal{R}(\mathbf{\Phi}_{i})$ converts them into a rotation matrix.

We choose a combination of spatial regularizers to ensure that our method can track different types of non-rigid deformations equally well. For example, $E_{\mathrm{Smooth}}$ is usually sufficient to track facial expressions without large head rotations, whereas tracking rotating objects can only be achieved with rotation-invariant regularizers ($E_{\mathrm{Edge}}$, $E_{\mathrm{Arap}}$). In contrast to Yu et al. [43], we adopt the Euclidean norm in Eq. 4 and Eq. 6 instead of the Huber loss because it led to visually better results.
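The following sketch evaluates the three spatial priors for a given mesh. The edge-list data layout and the use of scipy for the Euler-angle-to-rotation conversion $\mathcal{R}(\mathbf{\Phi}_{i})$ are assumptions of this illustration, not details taken from the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def spatial_prior_energies(V, V_hat, edges, Phi):
    """Evaluate E_Smooth, E_Edge and E_Arap for a deformed mesh.

    V, V_hat : (N, 3) deformed and template vertices
    edges    : (E, 2) vertex index pairs (i, j) with j in N(i)
    Phi      : (N, 3) per-vertex Euler angles
    """
    i, j = edges[:, 0], edges[:, 1]
    d = V[i] - V[j]                      # deformed edge vectors
    d_hat = V_hat[i] - V_hat[j]          # template edge vectors

    e_smooth = np.sum((d - d_hat) ** 2)
    e_edge = np.sum((np.linalg.norm(d, axis=1) - np.linalg.norm(d_hat, axis=1)) ** 2)

    R = Rotation.from_euler('xyz', Phi[i]).as_matrix()   # (E, 3, 3) local rotations
    rotated = np.einsum('eab,eb->ea', R, d_hat)          # R(Phi_i)(V_hat_i - V_hat_j)
    e_arap = np.sum((d - rotated) ** 2)
    return e_smooth, e_edge, e_arap
```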

Temporal Smoothness. To enforce temporally smooth reconstructions, we propose two additional priors. The first one is defined as

$E_{\mathrm{Vel}}(\mathbf{V})=\sum_{i=1}^{N}\left\lVert\mathbf{V}_{i}-\mathbf{V}_{i}^{t}\right\rVert^{2}$   (7)

and ensures that the displacement of the vertices between $t$ and $t+1$ is small. Second, the prior

$E_{\mathrm{Acc}}(\mathbf{V})=\sum_{i=1}^{N}\left\lVert\left(\mathbf{V}_{i}-\mathbf{V}_{i}^{t}\right)-\left(\mathbf{V}_{i}^{t}-\mathbf{V}_{i}^{t-1}\right)\right\rVert^{2}$   (8)

penalizes large deviations of the current velocity direction from the previous one.
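For completeness, a small NumPy sketch of the two temporal priors (the array names are ours):

```python
import numpy as np

def temporal_prior_energies(V, V_prev, V_prev2):
    """E_Vel penalizes the displacement from t to t+1; E_Acc penalizes changes
    of that displacement between consecutive time steps. All arrays are (N, 3):
    V = V^{t+1} (current estimate), V_prev = V^t, V_prev2 = V^{t-1}."""
    e_vel = np.sum((V - V_prev) ** 2)
    e_acc = np.sum(((V - V_prev) - (V_prev - V_prev2)) ** 2)
    return e_vel, e_acc
```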

3.2 Non-rigid Tracking of Woven Fabrics

Tracking of uniformly colored fabrics is usually challenging for classical color-based terms due to the lack of color features. To overcome this limitation, we inspected the structure of different garments and found that most of them show line-like micro-structures that stem from the manufacturing process of the woven threads, see Fig. 2 left. These micro-structures can be recorded with recent high-resolution cameras, so that reconstruction algorithms can exploit them. To this end, we propose a novel texture term to refine the estimation of non-rigid motions for the case of woven fabrics. It can be combined with the terms in Eq. 2. In the following, we explain our novel data term in more detail.

Figure 2: Histogram of oriented gradients of woven fabrics. Left. The neighborhood region with the center pixel in the middle. Right. The corresponding histogram.

Histogram of Oriented Gradients (HOG). Based on HOG [5], we compute for each pixel $(i,j)$ of an image the corresponding histogram $\mathbf{h}_{i,j}\in\mathbb{R}^{b}$, where $b$ is the number of bins that count the gradient angles present in the neighborhood of pixel $(i,j)$. To be more robust with respect to outliers and noise, we count gradients per angular bin irrespective of their magnitude, but only if the magnitude exceeds a certain threshold. Compared to pure image gradients, HOG is less sensitive to noise. Especially for woven fabrics, image gradients are very localized: small changes of the image position can lead to large differences in the gradient directions due to the high frequency of the image content. HOG instead averages over a certain window so that outliers are discarded.
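A sketch of such a magnitude-thresholded orientation histogram is given below. The bin count, gradient operator and threshold are illustrative assumptions; the full HOG descriptor of Dalal and Triggs additionally uses cell and block normalization, which is not needed here.

```python
import numpy as np

def orientation_histogram(patch, bins=18, mag_thresh=10.0):
    """Histogram h_{i,j}: count gradient orientations in a neighborhood patch,
    ignoring the gradient magnitude except for a minimum-magnitude threshold.

    patch : (H, W) grayscale neighborhood around pixel (i, j)
    """
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0    # orientation in [0, 360)
    keep = mag > mag_thresh                         # discard weak gradients
    hist, _ = np.histogram(ang[keep], bins=bins, range=(0.0, 360.0))
    return hist
```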

Directions of a Texture. Applying HOG to pictures of fabrics reveals their special characteristics caused by the line-like patterns (see Fig. 2). There are two dominant texture gradient angles $\alpha$ and $\beta=((\alpha+180)\bmod 360)$ perpendicular to the lines. Thus $\alpha$ provides the most characteristic information about the pattern in the image at $(i,j)$ and can be computed as the angle whose bin has the highest frequency in $\mathbf{h}_{i,j}$. $\alpha$ is then converted to its normalized 2D direction, also called the dominant frame gradient (DFG), which is stored in the two-channel image $I_{\mathrm{Dir}}(i,j)$. To detect image regions that do not contain line patterns, we set $I_{\mathrm{Dir}}(i,j)=(0,0)^{\top}$ if the highest frequency is below a certain threshold.
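A minimal sketch of this conversion from histogram to DFG is shown below; the minimum peak count and the use of bin centers are assumptions of the illustration.

```python
import numpy as np

def dominant_frame_gradient(hist, bins=18, min_count=20):
    """Convert a histogram h_{i,j} into the dominant frame gradient (DFG).

    Returns the unit 2D direction of the most frequent gradient angle, or
    (0, 0) if the peak is too weak (no line pattern detected)."""
    peak = int(np.argmax(hist))
    if hist[peak] < min_count:
        return np.zeros(2)
    alpha = np.radians((peak + 0.5) * 360.0 / bins)   # bin-center angle
    return np.array([np.cos(alpha), np.sin(alpha)])   # normalized 2D direction
```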

Texture-based Constraint. Our novel texture term

$E_{\mathrm{Tex}}(\mathbf{V})=\sum_{i=1}^{F}\left\lVert\rho\left(\mathbf{d}_{\mathrm{M},i},\mathbf{d}_{\mathrm{F},i}\right)\right\rVert^{2}$   (9)

ensures that, for all triangles, the projected DFG $\mathbf{d}_{\mathrm{M},i}$ parametrized on the object surface agrees with the frame's DFG $\mathbf{d}_{\mathrm{F},i}$ at the location of the projected triangle center. An overview is shown in Fig. 3. More precisely, by averaging $\mathbf{U}_{k},\mathbf{U}_{m},\mathbf{U}_{l}$ one can compute the pixel position $\mathbf{z}_{\mathrm{TM},i}\in\mathbb{R}^{2}$ of the center point of the triangle $\mathbf{F}_{i}=(k,m,l)$ in the texture map. The neighborhood region for HOG around $\mathbf{z}_{\mathrm{TM},i}$ is defined as the 2D bounding box of the triangle. The HOG descriptor for $\mathbf{z}_{\mathrm{TM},i}$ can then be computed, and by applying the concept explained in the previous paragraph one obtains the DFG $\mathbf{d}_{\mathrm{TM},i}$ (see Fig. 3 (a)). Next, we define $\mathbf{b}_{\mathrm{TM},i}=\mathbf{z}_{\mathrm{TM},i}+\mathbf{d}_{\mathrm{TM},i}$ and express it as a linear combination of the triangle's UV coordinates, which yields the barycentric coordinates $\mathbf{B}_{i,1},\mathbf{B}_{i,2},\mathbf{B}_{i,3}$ of the face $\mathbf{F}_{i}$. Together with the other triangles they form the barycentric coordinate matrix $\mathbf{B}\in\mathbb{R}^{F\times 3}$, where each row represents the texture map's DFG for the respective triangle of the mesh in an implicit form. Since $\mathbf{b}_{\mathrm{TM},i}$ can be represented as a linear combination, one can compute the corresponding 3D point $\mathbf{b}_{\mathrm{3D},i}=\mathbf{B}_{i,1}\mathbf{V}_{k}+\mathbf{B}_{i,2}\mathbf{V}_{m}+\mathbf{B}_{i,3}\mathbf{V}_{l}$ as well as the triangle center $\mathbf{z}_{\mathrm{3D},i}\in\mathbb{R}^{3}$ in 3D (see Fig. 3 (b)). The barycentric coordinates remain constant, so $\mathbf{b}_{\mathrm{3D},i}$ and $\mathbf{z}_{\mathrm{3D},i}$ only depend on the mesh vertices $\mathbf{V}_{k}$, $\mathbf{V}_{m}$ and $\mathbf{V}_{l}$. One can then project the DFG of the mesh, $\mathbf{d}_{\mathrm{M},i}=\Pi(\mathbf{b}_{\mathrm{3D},i})-\Pi(\mathbf{z}_{\mathrm{3D},i})$, into the frame and compare it against the DFG $\mathbf{d}_{\mathrm{F},i}$ of frame $t+1$ at the location $\Pi(\mathbf{z}_{\mathrm{3D},i})$, which is retrieved by an image lookup in $I_{\mathrm{Dir}}^{t+1}$ (see Fig. 3 (c)). $\rho(\mathbf{x},\mathbf{y})$ computes the minimum of the differences between $\mathbf{x},\mathbf{y}$ and $\mathbf{x},-\mathbf{y}$, but only if both $\mathbf{x}$ and $\mathbf{y}$ are non-zero vectors (otherwise we are not in an area with line patterns) and the directions are similar up to a certain threshold; this makes the term more robust with respect to occlusions and noise. As mentioned above, for line patterns there are two possible DFGs in the frame. We assume the initialization is close to the ground truth and choose the minimum of the two possible directions.
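The per-triangle residual can be summarized in the following simplified sketch. The function signature, the cosine similarity threshold and the way the frame DFG is passed in are our own assumptions; the barycentric coordinates $\mathbf{B}_i$ are assumed to be precomputed in UV space as described above.

```python
import numpy as np

def texture_residual(V_tri, B_i, d_frame, K, cos_thresh=np.cos(np.radians(45.0))):
    """Residual of E_Tex for a single triangle (a simplified sketch).

    V_tri   : (3, 3) the triangle's vertices V_k, V_m, V_l
    B_i     : (3,) barycentric coordinates of b_TM = z_TM + d_TM
    d_frame : (2,) DFG of the frame at the projected triangle center (from I_Dir^{t+1})
    K       : (3, 3) intrinsic camera matrix
    """
    def project(p):                        # perspective projection Pi
        u, v, w = K @ p
        return np.array([u / w, v / w])

    z_3d = V_tri.mean(axis=0)              # triangle center lifted to 3D
    b_3d = B_i @ V_tri                     # DFG endpoint lifted to the surface
    d_mesh = project(b_3d) - project(z_3d)

    if np.linalg.norm(d_mesh) == 0 or np.linalg.norm(d_frame) == 0:
        return np.zeros(2)                 # no line pattern detected -> no constraint
    # rho: the line direction is only defined up to sign, so compare against +/- d_frame
    d1, d2 = d_mesh - d_frame, d_mesh + d_frame
    best = d1 if np.linalg.norm(d1) <= np.linalg.norm(d2) else d2
    # keep the residual only if the directions are roughly similar (robustness)
    cos_sim = abs(d_mesh @ d_frame) / (np.linalg.norm(d_mesh) * np.linalg.norm(d_frame))
    return best if cos_sim > cos_thresh else np.zeros(2)
```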

Figure 3: Overview of the proposed texture term.

4 Results

All experiments were performed on a PC with an NVIDIA GeForce GTX 1080Ti and an Intel Core i7. In contrast to related methods [43], we achieve interactive frame rates using the energy proposed in Eq. 2.

4.1 Qualitative and Quantitative Results

Now, we evaluate NRST on datasets for general objects such as faces, for which we disable $E_{\mathrm{Tex}}$. After that, we compare our approach against another monocular method. Finally, we evaluate our proposed texture term on two new scenes showing line-like fabric structures, perform an ablation study and demonstrate interesting applications. More results can be found in the supplemental video.

Figure 4: Reconstructions on existing datasets [43, 39, 40, 31]. Input frames (a). Textured reconstructions overlaid on the input (b). Deformed geometries obtained by our method, rendered from a different viewpoint (c).

Qualitative Evaluation for General Objects. In Fig. 4 we show frames from our monocular reconstruction results. We tested our approach on two face sequences [43, 39] where templates are provided. Note that NRST precisely reconstructs facial expressions. The 2D overlay (second column) matches the input and also in 3D (third column) our results look realistic. Furthermore, we evaluated on the datasets of Varol et al. [40] and Salzmann et al. [31] showing fast movements of a T-shirt and a waving towel. Again for most parts of the surface the reconstructions look accurate in 2D since they overlap well with the input and they are also plausible in 3D. This validates that our approach can deal with the challenging problem of estimating 3D deformations from a monocular video for general kinds of objects.

Comparison to Yu et al. [43]. Fig. 5 shows a qualitative comparison between our method and the one of Yu et al. [43]. Both capture the facial expression, but the proposed approach is faster than the one of Yu et al. due to our data-parallel GPU implementation. In particular, on their sequence our method runs at 15 fps whereas their approach takes several seconds per frame. More side-by-side comparisons on this sequence can be found in the supplemental video.

Figure 5: Comparison of NRST’s reconstruction (right) and the one of Yu et al. [43] (middle). It becomes obvious that both capture the facial expression shown in the input frame (left), but the proposed approach is significantly faster than the one of Yu et al. due to our data-parallel GPU implementation.
Figure 6: Reconstruction of line patterns. Top from left to right. Input frames. Textured reconstructions overlaid on the frames. Color-coded visualization of the estimated dominant frame angles; regions where no line pattern was detected are visualized in black. Bottom from left to right. Input frames. Textured reconstructions overlaid on the frames. Ground truth geometries (blue) and our reconstructions (red).

Qualitative Evaluation for Fabrics. The top row of Fig. 6 shows frames (resolution $1920\times 1080$) of a moving piece of cloth that exhibits the typical line patterns. Although the object is sparsely textured, our approach is able to recover the deformations due to the texture term, which accurately tracks the DFG of the line pattern. As demonstrated in the last column, the estimated angles for the frames are correct and therefore provide a reliable information cue exploited by $E_{\mathrm{Tex}}$. For quantitative evaluation, we created a synthetic scene, modeled and animated in a modeling software, showing a carpet that has the characteristic line pattern but is also partially textured (see bottom row of Fig. 6). We rendered the scene at a resolution of $1500\times 1500$. $E_{\mathrm{Tex}}$ helps in the less textured regions where $E_{\mathrm{Photo}}$ would fail. The last column shows how close our reconstruction (red) is to the ground truth (blue).

Ablation Analysis. Apart from the proposed texture term, our energy formulation is similar to the one of Yu et al. [43]. To validate that $E_{\mathrm{Tex}}$ improves the reconstruction over a photometric-only formulation, we perform an ablation study. We measured the average per-vertex Euclidean distance between the ground truth mesh and our reconstructions. For the waving towel shown in Fig. 6 (bottom), we obtained an error of 26.8mm without $E_{\mathrm{Tex}}$ and 25.5mm with our proposed texture term, an improvement of 4.8%. The diagonal of the 3D bounding box of the towel is 3162mm. For the rotation sequence (resolution $800\times 800$) shown in Fig. 7 the color variation is very limited since background and object have the same color. In contrast to $E_{\mathrm{Photo}}$ alone, $E_{\mathrm{Tex}}$ can recover the rotation of the object, leading to an error of 4.1mm for the texture-only case and 6.7mm for the photometric-only setting. Thus $E_{\mathrm{Tex}}$ improves over $E_{\mathrm{Photo}}$ by 38.8%.
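For reference, the reported error is the mean per-vertex Euclidean distance to the ground truth mesh; a minimal sketch of the metric (function name is ours):

```python
import numpy as np

def mean_vertex_error(V_est, V_gt):
    """Average per-vertex Euclidean distance (mm) between estimate and ground truth,
    both of shape (N, 3)."""
    return np.linalg.norm(V_est - V_gt, axis=1).mean()

# relative improvement reported above, e.g. (26.8 - 25.5) / 26.8 ~= 4.8%
# and (6.7 - 4.1) / 6.7 ~= 38.8%
```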

4.2 Applications

Our method enables several applications such as free-viewpoint rendering, re-texturing of general deformable objects, or virtual face make-up (see Fig. 8). Since our approach estimates the deforming geometry, one can even change the scene lighting for the foreground such that the shading remains realistic.

Figure 7: Rotating object sequence. From left to right. First and last frame; note that the object and the background have the same color. The reconstructions (red) of the last frame with either $E_{\mathrm{Photo}}$ or $E_{\mathrm{Tex}}$ overlaid on the ground truth geometry (blue). Note that $E_{\mathrm{Tex}}$ can recover the rotation, in contrast to $E_{\mathrm{Photo}}$.
Figure 8: Applications. Left. Re-textured shirt. Right. Re-textured and re-lighted face.

4.3 Limitations

Due to the nature of the challenging task of monocular tracking of non-rigid deformations, our method has some limitations which open up directions for future work. Although our proposed texture term uses more of the information contained in the video than a photometric-only formulation, there are still image cues that could further improve the reconstruction, such as shading and the object contour, as demonstrated by previous work [13, 41]. One could therefore combine them in a unified framework. To increase robustness, the deformations could be jointly estimated over a temporal sliding window as proposed by Xu et al. [41], and an embedded deformation graph [35] could improve stability by reducing the number of unknowns.

5 Conclusion

We presented an optimization-based analysis-by-synthesis method that solves the challenging task of estimating non-rigid motion from a single RGB video and a template. Our method tracks non-trivial deformations of a broad class of shapes, ranging from faces to deforming fabric. Furthermore, we introduced specific solutions tailored to capturing woven fabrics, even if they lack clear color variations. Our method runs at interactive frame rates due to the GPU-based solver that efficiently solves the non-linear least squares optimization problem. Our evaluation shows that the reconstructions are accurate in 2D and 3D, which enables several applications such as re-texturing.

Acknowledgments.

This work was funded by the ERC Consolidator Grant 4DRepLy (770784).

References

  • [1] Bartoli, A., Gérard, Y., Chadebecq, F., Collins, T.: On Template-Based Reconstruction from a Single View: Analytical Solutions and Proofs of Well-Posedness for Developable, Isometric and Conformal Surfaces. CVPR (2012)
  • [2] Brunet, F., Hartley, R., Bartoli, A., Navab, N., Malgouyres, R.: Monocular Template-Based Reconstruction of Smooth and Inextensible Surfaces. ACCV (2010)
  • [3] Carceroni, R.L., Kutulakos, K.N.: Multi-View Scene Capture by Surfel Sampling: From Video Streams to Non-Rigid 3D Motion, Shape & Reflectance. ICCV (2001)
  • [4] Dai, Y., Li, H., He, M.: A Simple Prior-free Method for Non-Rigid Structure-from-Motion Factorization. IJCV (2014)
  • [5] Dalal, N., Triggs, B.: Histograms of Oriented Gradients for Human Detection. CVPR (2005)
  • [6] Garg, R., Roussos, A., Agapito, L.: Dense Variational Reconstruction of Non-Rigid Surfaces from Monocular Video. CVPR (2013)
  • [7] Garrido, P., Valgaerts, L., Wu, C., Theobalt, C.: Reconstructing Detailed Dynamic Face Geometry from Monocular Video. TOG (2013)
  • [8] Gårding, J.: Shape from Texture for Smooth Curved Surfaces in Perspective Projection. Journal of Mathematical Imaging and Vision (1992)
  • [9] Jordt, A., Koch, R.: Fast tracking of deformable objects in depth and colour video. In: BMVC. pp. 1–11 (2011)
  • [10] Jordt, A., Koch, R.: Direct model-based tracking of 3d object deformations in depth and color video. IJCV 102(1-3), 239–255 (2013)
  • [11] Labatut, P., Pons, J.P., Keriven, R.: Efficient Multi-View Reconstruction of Large-Scale Scenes using Interest Points, Delaunay Triangulation and Graph Cuts. ICCV (2007)
  • [12] Liang, J., DeMenthon, D., Doermann, D.: Flattening Curved Documents in Images. CVPR (2005)
  • [13] Liu-Yin, Q., Yu, R., Agapito, L., Fitzgibbon, A., Russell, C.: Better Together: Joint Reasoning for Non-rigid 3D Reconstruction with Specularities and Shading. BMVC (2016)
  • [14] Ma, W.J.: Nonrigid 3D Reconstruction from a Single Image. ISAI (2016)
  • [15] Malti, A., Bartoli, A., Collins, T.: A Pixel-Based Approach to Template-Based Monocular 3D Reconstruction of Deformable Surfaces. ICCV Workshops (2011)
  • [16] Malti, A., Hartley, R., Bartoli, A., Kim, J.H.: Monocular Template-Based 3D Reconstruction of Extensible Surfaces with Local Linear Elasticity. CVPR (2013)
  • [17] Moreno-Noguer, F., Salzmann, M., Lepetit, V., Fua, P.: Capturing 3D Stretchable Surfaces from Single Images in Closed Form. CVPR (2009)
  • [18] Newcombe, R.A., Fox, D., Seitz, S.M.: DynamicFusion: Reconstruction and Tracking of Non-Rigid Scenes in Real-Time. CVPR (2015)
  • [19] Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohli, P., Shotton, J., Hodges, S., Fitzgibbon, A.: KinectFusion: Real-Time Dense Surface Mapping and Tracking. International Symposium on Mixed and Augmented Reality (2011)
  • [20] Ngo, D.T., Park, S., Jorstad, A., Crivellaro, A., Yoo, C.D., Fua, P.: Dense Image Registration and Deformable Surface Reconstruction in Presence of Occlusions and Minimal Texture. ICCV (2015)
  • [21] Oestlund, J., Varol, A., Ngo, D.T., Fua, P.: Laplacian Meshes for Monocular 3D Shape Recovery. ECCV (2012)
  • [22] Pan, Q., Reitmayr, G., Drummond, T.: ProFORMA: Probabilistic Feature-based On-line Rapid Model Acquisition. BMVC (2009)
  • [23] Perriollat, M., Bartoli, A.: A Quasi-Minimal Model for Paper-Like Surfaces. CVPR (2007)
  • [24] Perriollat, M., Hartley, R., Bartoli, A.: Monocular Template-based Reconstruction of Inextensible Surfaces. IJCV (2011)
  • [25] Pons, J.P., Keriven, R., Faugeras, O.: Modelling Dynamic Scenes by Registering Multi-View Image Sequence. CVPR (2005)
  • [26] Rao, A.R.: Computing Oriented Texture Fields. Computer Vision, Graphics, and Image Processing: Graphical Models and Image Processing (1991)
  • [27] Russell, C., Fayad, J., Agapito, L.: Energy Based Multiple Model Fitting for Non-Rigid Structure from Motion. CVPR (2011)
  • [28] Salzmann, M., Fua, P.: Reconstructing Sharply Folding Surfaces: A Convex Formulation. CVPR (2009)
  • [29] Salzmann, M., Fua, P.: Linear Local Models for Monocular Reconstruction of Deformable Surface. Transactions on Pattern Analysis and Machine Intelligence (2011)
  • [30] Salzmann, M., Lepetit, V., Fua, P.: Deformable Surface Tracking Ambiguities. CVPR (2007)
  • [31] Salzmann, M., Moreno-Noguer, F., Lepetit, V., Fua, P.: Closed-Form Solution to Non-rigid 3D Surface Registration. ECCV (2008)
  • [32] Salzmann, M., Urtasun, R., Fua, P.: Local Deformation Models for Monocular 3D Shape Recovery. CVPR (2008)
  • [33] Shen, S., Shi, W., Liu, Y.: Monocular Template-Based Tracking of Inextensible Deformable Surfaces under L2-Norm. ACCV (2009)
  • [34] Sorkine, O., Alexa, M.: As-rigid-as-possible surface modeling. SGP (2007)
  • [35] Sumner, R.W., Schmid, J., Pauly, M.: Embedded deformation for shape manipulation. TOG (2007)
  • [36] Tao, Y., Zheng, Z., Guo, K., Zhao, J., Qionghai, D., Li, H., Pons-Moll, G., Liu, Y.: DoubleFusion: Real-time capture of human performance with inner body shape from a depth sensor. In: IEEE Conf. on Computer Vision and Pattern Recognition (2018)
  • [37] Torresani, L., Hertzmann, A., Bregler, C.: Non-Rigid Structure-From-Motion: Estimating Shape and Motion with Hierarchical Priors. Transactions on Pattern Analysis and Machine Intelligence (2008)
  • [38] Tsoli, A., Argyros, A.: Tracking deformable surfaces that undergo topological changes using an rgb-d camera. In: 3DV (October 2016)
  • [39] Valgaerts, L., Wu, C., Bruhn, A., Seidel, H.P., Theobalt, C.: Lightweight Binocular Facial Performance Capture under Uncontrolled Lighting. SIGGRAPH Asia (2012)
  • [40] Varol, A., Salzmann, M., Fua, P., Urtasun, R.: A Constrained Latent Variable Model. CVPR (2012)
  • [41] Xu, W., Chatterjee, A., Zollhoefer, M., Rhodin, H., Mehta, D., Seidel, H.P., Theobalt, C.: MonoPerfCap: Human Performance Capture from Monocular Video. TOG (2018)
  • [42] Xu, W., Salzmann, M., Wang, Y., Liu, Y.: Nonrigid surface registration and completion from rgbd images. ECCV (2014)
  • [43] Yu, R., Russell, C., Campbell, N.D.F., Agapito, L.: Direct, Dense, and Deformable: Template-Based Non-Rigid 3D Reconstruction from RGB Video. ICCV (2015)
  • [44] Zollhoefer, M., Niessner, M., Izadi, S., Rhemann, C., Zach, C., Fisher, M., Wu, C., Fitzgibbon, A., Loop, C., Theobalt, C., Stamminger, M.: Real-time Non-rigid Reconstruction using an RGB-D Camera. TOG (2014)