1 Max Planck Institute for Informatics, Saarbrücken, Germany
https://www.mpi-inf.mpg.de/home/
{mhaberma, wxu, gpons, theobalt}@mpi-inf.mpg.de
2 EPFL, Lausanne CH-1015, Switzerland
https://www.epfl.ch/
[email protected]
3 Stanford University, Stanford CA 94305, USA
https://www.stanford.edu/
[email protected]
NRST: Non-rigid Surface Tracking from Monocular Video
Abstract
We propose an efficient method for non-rigid surface tracking from monocular RGB videos. Given a video and a template mesh, our algorithm sequentially registers the template non-rigidly to each frame. We formulate the per-frame registration as an optimization problem that includes a novel texture term specifically tailored towards tracking objects with uniform texture but fine-scale structure, such as the regular micro-structural patterns of fabric. Our texture term exploits the orientation information in the micro-structures of the objects, e.g., the yarn patterns of fabrics. This enables us to accurately track uniformly colored materials that have these high frequency micro-structures, for which traditional photometric terms are usually less effective. The results demonstrate the effectiveness of our method on both general textured non-rigid objects and monochromatic fabrics.

1 Introduction
In this paper, we propose NRST, an efficient method for non-rigid surface tracking from monocular RGB videos. Capturing the non-rigid deformation of a dynamic surface is an important and long-standing problem in computer vision. It has a wide range of real-world applications in fields such as virtual/augmented reality, medicine and visual effects. Most of the existing methods are based on multi-view imagery, where expensive and complicated system setups are required [3, 25, 23]. There also exist methods that rely on only a single depth or RGB-D camera [42, 44, 19, 18]. However, these sensors are not as ubiquitous as RGB cameras, and these methods cannot be applied to the vast amount of existing video footage found on social media platforms like YouTube. There are also monocular RGB methods [43, 30], which come with their own limitations; e.g., they rely on highly textured surfaces and are oftentimes slow.
In this work, we present a method which is able to densely track the non-rigid deformations of general objects such as faces and fabrics from a single RGB video. To solve this challenging problem, our method relies on a textured mesh template of the deforming object's surface. Given the input video, our algorithm sequentially registers the template to each frame. More specifically, our method automatically reproduces a deformation sequence of the template model that coincides with the non-rigid surface motion in the video. To this end, we formulate the per-frame registration as a non-linear least squares optimization problem, with an objective function consisting of a photometric alignment term and several regularization terms. The optimization is computationally intensive due to the large number of residuals in our alignment objective. To address this, we adapt the efficient GPU-based Gauss-Newton solver of Zollhoefer et al. [44] to our problem, which allows for deformable object tracking at interactive frame rates.
Besides the efficiency of the algorithm, the core contribution of our approach is a novel texture term that exploits the orientation information in the micro-structures of the tracked objects, such as the yarn patterns of fabrics. This enables us to track uniformly colored materials which have high frequency patterns, for which the classical color-based term is usually less effective.
In our experimental results, we evaluate our method qualitatively and quantitatively on several challenging sequences of deforming surfaces. We use well established benchmarks, such as pieces of cloth [40, 31] and human faces [43, 39]. The results demonstrate that our method can accurately track general non-rigid objects. Furthermore, for materials with regular micro-structural patterns, such as fabrics, the tracking accuracy is further improved with our texture term.
2 Related Work
There is a variety of approaches that reconstruct geometry from multiple images, e.g., template-free methods [3], variational ones [25] or object-specific approaches [23]. Although multi-view methods can produce accurate tracking results, their setup is expensive and hard to operate. Some approaches use a single RGB-D sensor instead [42, 44, 19, 18, 9, 10, 36]. They manage to capture deformable surfaces accurately and efficiently; some even build up a template model alongside the per-frame reconstruction. The main limitations of these methods are that the sensors have a high power consumption, do not work outdoors, require the object to be close to the camera, and cannot leverage the large amount of RGB-only video footage provided by social media. On these grounds, we aim for a method that uses just a single RGB video as input. In the following, we focus on related monocular reconstruction and tracking approaches.
Monocular Methods. Non-rigid structure from motion methods, which do not rely on any template, try to infer the 3D geometry from a single video by using a prior-free formulation [4], global models [37], local ones [27] or a variational formulation [6]. But they often capture the deformations only coarsely, are not able to model strong deformations, typically require strongly textured objects, or rely on dense 2D correspondences. By constraining the setting to specific types of objects such as faces [7], very accurate reconstructions can be obtained, but at the expense of generality. Since in recent years several approaches [22, 11] build a 3D model given a set of images, and even commercial software (e.g., http://www.agisoft.com/) is available for this task, template acquisition has become easier. Templates are an effective prior for the challenging task of estimating non-rigid deformations from single images, as demonstrated by previous work [30, 31, 32, 33, 2, 24, 15, 21, 1, 40, 43, 17, 28, 16, 29, 14]. But even if a template is used, ambiguities [30] remain and additional constraints have to be imposed. Theoretical results [1] show that only allowing isometric deformations [24] results in a uniquely defined solution. Therefore, approaches constrain the deformation space in several ways, e.g., by a Laplacian regularization [21] or by non-linear [32] or linear local surface models [29]. Salzmann et al. [28] argued that relaxing the isometric constraint is beneficial since it allows modeling sharp folds. Moreno-Noguer et al. [17] and Malti et al. [16] even go beyond this and show results for elastic surfaces; Tsoli and Argyros [38] demonstrated tracking of surfaces that undergo topological changes, but require a depth camera. Other approaches investigate how to make the reconstruction more robust under faster motions [33] and occlusions [20], or try to replace the feature-based data term by a dense pixel-based one [15] and to find better texture descriptors [26, 8, 12]. Brunet et al. [2] and Yu et al. [43] formulate the problem of estimating non-rigid deformations as minimizing an objective function, which brings them closest to our formulation. In particular, we adopt the photometric, spatial and temporal terms of Yu et al. [43] and combine them with an isometric and an acceleration constraint as well as our novel texture term.
Along the line of monocular methods, we propose NRST, a template-based reconstruction framework that estimates the non-rigidly deforming geometry of general objects from just monocular video. In contrast to previous work, our approach does not rely on 3D to 2D correspondences and due to the GPU-based solver architecture it is also much faster than previous approaches. Furthermore, our novel texture term enables tracking of regions with little texture.
3 Method
The goal is to estimate the non-rigid deformation of an object from the frames $I_t$ of a video with $t \in \{1, \ldots, T\}$. We assume a static camera and known camera intrinsics. Since this problem is in general severely under-constrained, it is assumed that a template triangle mesh of the object to be tracked is given as the matrix $\hat{V} \in \mathbb{R}^{N \times 3}$, where each row contains the coordinates of one of the $N$ vertices. According to that, $\hat{V}_i \in \mathbb{R}^3$ is defined as the $i$th vertex of the template in vector form. This notation is also used for the following matrices. The edges of the template are given as the mapping $\mathcal{N}$: given a vertex index $i$, it returns the set of indices $\mathcal{N}(i)$ of the vertices sharing an edge with $i$. The faces of the mesh are represented as the matrix $F \in \{1, \ldots, N\}^{M \times 3}$; each row contains the vertex indices of one triangle. The UV map is given as the matrix $U \in \mathbb{R}^{N \times 2}$; each row contains the UV coordinates for the corresponding vertex. The color $C_i$ of vertex $i$ can be computed by a simple lookup in the texture map at the position $U_i$. The colors of all vertices are stored in the matrix $C \in \mathbb{R}^{N \times 3}$. Furthermore, it is assumed that the geometry at time $t-1$ roughly agrees with the true shape shown in the video, so that the gradients of the photometric term can guide the optimization to the correct solution without being trapped in local minima. The non-rigidly deformed mesh at time $t$ is represented as the matrix $V_t \in \mathbb{R}^{N \times 3}$ and contains the updated vertex positions according to the 3D displacement from $t-1$ to $t$.
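To make the notation concrete, the following minimal sketch collects the template quantities in one container. The names and the NumPy representation are our own illustration, not part of the method:

```python
# Illustrative container for the template data described above
# (field names are ours; shapes follow the notation in the text).
from dataclasses import dataclass
import numpy as np

@dataclass
class Template:
    V_hat: np.ndarray  # (N, 3) template vertex positions
    F: np.ndarray      # (M, 3) vertex indices of each triangle
    UV: np.ndarray     # (N, 2) texture coordinates per vertex
    C: np.ndarray      # (N, 3) per-vertex RGB colors, looked up once in the texture map

    def neighbors(self, i: int) -> set:
        """N(i): indices of all vertices sharing an edge with vertex i."""
        mask = (self.F == i).any(axis=1)        # triangles containing vertex i
        return set(self.F[mask].ravel()) - {i}  # their vertices, minus i itself
```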
3.1 Non-rigid Tracking as Energy Minimization
Given the template $\hat{V}$ and our estimate $V_{t-1}$ of the previous frame, our method sequentially estimates the geometry $V_t$ of the current frame. We jointly optimize per-vertex local rotations denoted by $\Phi_t \in \mathbb{R}^{N \times 3}$ and vertex locations $V_t$. Specifically, for each time step $t$ the deformation estimation is formulated as the non-linear optimization problem

$$(V_t, \Phi_t) = \operatorname*{argmin}_{V, \Phi} E_t(V, \Phi) \tag{1}$$

with

$$E_t(V, \Phi) = \lambda_{\mathrm{photo}} E_{\mathrm{photo}}(V) + \lambda_{\mathrm{smooth}} E_{\mathrm{smooth}}(V) + \lambda_{\mathrm{edge}} E_{\mathrm{edge}}(V) + \lambda_{\mathrm{arap}} E_{\mathrm{arap}}(V, \Phi) + \lambda_{\mathrm{vel}} E_{\mathrm{vel}}(V) + \lambda_{\mathrm{acc}} E_{\mathrm{acc}}(V). \tag{2}$$
$\lambda_{\mathrm{photo}}$, $\lambda_{\mathrm{smooth}}$, $\lambda_{\mathrm{edge}}$, $\lambda_{\mathrm{arap}}$, $\lambda_{\mathrm{vel}}$, $\lambda_{\mathrm{acc}}$ are hyperparameters set before the optimization starts and kept constant afterwards. $E_t$ combines different cost terms ensuring that the mesh deformations agree with the motion in the video. The resulting non-linear least squares optimization problem is solved with a GPU-based Gauss-Newton solver based on the work of Zollhoefer et al. [44], where we adapted the Jacobian and residual implementation to our energy formulation. The high efficiency is obtained by exploiting the sparse structure of the system of normal equations. For more details, we refer the reader to Zollhoefer et al. [44]. We now explain the terms in more detail.
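For illustration, a single Gauss-Newton iteration on the stacked residuals could look as in the following minimal CPU sketch. The actual method uses the data-parallel GPU solver of [44]; the function names and the sparse direct solve below are our own simplification:

```python
# Minimal sketch of one Gauss-Newton step on the stacked residual vector r(x)
# with sparse Jacobian J(x); x stacks all vertex positions and Euler angles.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def gauss_newton_step(x, residual_fn, jacobian_fn, damping=1e-4):
    """Solve the normal equations (J^T J + damping*I) dx = -J^T r and update x."""
    r = residual_fn(x)            # (R,) stacked residuals of all energy terms
    J = jacobian_fn(x)            # (R, P) sparse Jacobian
    JtJ = (J.T @ J).tocsc()       # sparse structure is what makes this cheap
    rhs = -J.T @ r
    # Small diagonal damping keeps the normal equations well conditioned.
    dx = spla.spsolve(JtJ + damping * sp.eye(JtJ.shape[0], format="csc"), rhs)
    return x + dx
```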
Photometric Alignment. The photometric term
$$E_{\mathrm{photo}}(V) = \sum_{i=1}^{N} \sigma_{\mathrm{photo}}\!\left( \left\| (I_t * G_s)\big(\pi(V_i)\big) - C_i \right\|^2 \right) \tag{3}$$

densely measures the re-projection error. $\|\cdot\|$ is the Euclidean norm, $*$ is the convolution operator, and $G_s$ is a Gaussian kernel with standard deviation $s$. We use Gaussian smoothing on the input frame for more stable and longer-range gradients. $\pi(V_i) = \Pi(K V_i)$ with $\Pi(x, y, z)^\top = (x/z, y/z)^\top$ projects the vertex $V_i$ onto the image plane, and $(I_t * G_s)(\pi(V_i))$ returns the RGB color vector of the smoothed frame at position $\pi(V_i)$, which is compared against the pre-computed and constant vertex color $C_i$. Here, $K$ is the intrinsic camera matrix. $\sigma_{\mathrm{photo}}$ is a robust pruning function for wrong correspondences with respect to color similarity. More specifically, we discard errors above a certain threshold because in most cases they are due to occlusions.
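A minimal sketch of how these photometric residuals could be computed, assuming NumPy/SciPy and our own variable names ($V$ as an $(N, 3)$ array, $C$ the vertex colors in $[0, 1]$, $K$ the intrinsics); the bilinear lookup and the hard pruning threshold stand in for $\sigma_{\mathrm{photo}}$:

```python
# Sketch of the photometric residuals for one frame (variable names are ours).
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def photometric_residuals(V, C, K, image, sigma=2.0, prune_thresh=0.3):
    # Gaussian smoothing of the frame (per channel) for longer-range gradients.
    image_s = gaussian_filter(image, sigma=(sigma, sigma, 0))
    p = (K @ V.T).T                    # (N, 3) projected homogeneous points
    uv = p[:, :2] / p[:, 2:3]          # perspective divide -> pixel coordinates
    # Bilinear lookup of the smoothed frame at the projected positions pi(V_i).
    sampled = np.stack([
        map_coordinates(image_s[..., c], [uv[:, 1], uv[:, 0]], order=1)
        for c in range(3)], axis=1)    # (N, 3) RGB at pi(V_i)
    res = sampled - C                  # compare against constant template colors
    # sigma_photo: zero out residuals above a threshold (likely occlusions).
    res[np.linalg.norm(res, axis=1) > prune_thresh] = 0.0
    return res.ravel()
```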
Spatial Smoothness. Without regularization, estimating 3D geometry from a single image is an ill-posed problem. Therefore, we introduce several spatial and temporal regularizers to make the problem well-posed and to propagate 3D deformations into areas where information for data terms is missing, e.g., poorly textured or occluded regions. The first prior
$$E_{\mathrm{smooth}}(V) = \sum_{i=1}^{N} \sum_{j \in \mathcal{N}(i)} \left\| (V_i - V_j) - (\hat{V}_i - \hat{V}_j) \right\|^2 \tag{4}$$

ensures that if a vertex $V_i$ changes its position, its neighbors $V_j$ with $j \in \mathcal{N}(i)$ are deformed such that the overall shape is still spatially smooth compared to the template mesh $\hat{V}$. In addition, the prior
$$E_{\mathrm{edge}}(V) = \sum_{i=1}^{N} \sum_{j \in \mathcal{N}(i)} \left( \|V_i - V_j\| - \|\hat{V}_i - \hat{V}_j\| \right)^2 \tag{5}$$

ensures isometric deformations, which means that the edge lengths with respect to the template are preserved. In contrast to $E_{\mathrm{smooth}}$, this prior is rotation invariant. Finally, the as-rigid-as-possible (ARAP) prior [34]
$$E_{\mathrm{arap}}(V, \Phi) = \sum_{i=1}^{N} \sum_{j \in \mathcal{N}(i)} \left\| (V_i - V_j) - R(\Phi_i)(\hat{V}_i - \hat{V}_j) \right\|^2 \tag{6}$$

allows local rotations for each of the mesh vertices as long as the relative position with respect to their neighborhood remains the same. Each row $\Phi_i$ of the matrix $\Phi \in \mathbb{R}^{N \times 3}$ contains the per-vertex Euler angles which encode a local rotation around $V_i$. $R(\Phi_i)$ converts them into a rotation matrix.
We choose a combination of spatial regularizers to ensure that our method can track different types of non-rigid deformations equally well. For example, $E_{\mathrm{smooth}}$ is usually sufficient to track facial expressions without large head rotations. But tracking rotating objects can only be achieved with rotation-invariant regularizers ($E_{\mathrm{edge}}$, $E_{\mathrm{arap}}$). In contrast to Yu et al. [43], we adopt the Euclidean norm in Eq. 4 and Eq. 6 instead of the Huber loss because it led to visually better results.
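The three spatial priors reduce to simple per-edge residuals. A sketch under our own conventions: an $(E, 2)$ `edges` array enumerating all pairs $(i, j)$ with $j \in \mathcal{N}(i)$, and SciPy for the Euler-angle-to-matrix conversion:

```python
# Sketch of the three spatial priors as residual vectors (names are ours).
import numpy as np
from scipy.spatial.transform import Rotation

def spatial_residuals(V, V_hat, Phi, edges):
    d = V[edges[:, 0]] - V[edges[:, 1]]              # deformed edge vectors
    d_hat = V_hat[edges[:, 0]] - V_hat[edges[:, 1]]  # template edge vectors
    r_smooth = (d - d_hat).ravel()                   # Eq. (4): shape stays smooth
    # Eq. (5): isometry, i.e. edge lengths are preserved (rotation invariant).
    r_edge = np.linalg.norm(d, axis=1) - np.linalg.norm(d_hat, axis=1)
    # Eq. (6): ARAP, rotate each template edge by the per-vertex rotation R(Phi_i).
    R = Rotation.from_euler("xyz", Phi[edges[:, 0]]).as_matrix()  # (E, 3, 3)
    r_arap = (d - np.einsum("eab,eb->ea", R, d_hat)).ravel()
    return r_smooth, r_edge, r_arap
```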
Temporal Smoothness. To enforce temporally smooth reconstructions, we propose two additional priors. The first one is defined as
$$E_{\mathrm{vel}}(V) = \sum_{i=1}^{N} \left\| V_i - V_{t-1,i} \right\|^2 \tag{7}$$

and ensures that the displacement of vertices between $t-1$ and $t$ is small. Second, the prior
$$E_{\mathrm{acc}}(V) = \sum_{i=1}^{N} \left\| (V_i - V_{t-1,i}) - (V_{t-1,i} - V_{t-2,i}) \right\|^2 \tag{8}$$

penalizes large deviations of the current velocity direction from the previous one.
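Both temporal priors are plain finite differences over the last two estimates. A short sketch, with our names and Eq. (8) read as a finite-difference acceleration penalty:

```python
# Sketch of the two temporal priors as residuals (Eq. (7) and Eq. (8)).
import numpy as np

def temporal_residuals(V, V_prev, V_prev2):
    r_vel = (V - V_prev).ravel()                         # small per-frame displacement
    r_acc = ((V - V_prev) - (V_prev - V_prev2)).ravel()  # velocity changes slowly
    return r_vel, r_acc
```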
3.2 Non-rigid Tracking of Woven Fabrics
Tracking of uniformly colored fabrics is usually challenging for classical color-based terms due to the lack of color features. To overcome this limitation, we inspected the structure of different garments and found that most of them show line-like micro-structures caused by the manufacturing process of the woven threads, see Fig. 2 left. These can be recorded with recent high-resolution cameras such that reconstruction algorithms can make use of them. To this end, we propose a novel texture term that refines the estimation of non-rigid motions for the case of woven fabrics. It can be combined with the terms in Eq. 2. We now explain this novel data term in more detail.

Histogram of Oriented Gradients (HOG). Based on HOG [5], we compute for each pixel $p$ of an image the corresponding histogram $H_p \in \mathbb{R}^B$, where $B$ is the number of bins that count the gradient angles present in the neighborhood of pixel $p$. To be more robust with respect to outliers and noise, we count a gradient only if its magnitude is higher than a certain threshold, and each counted gradient contributes equally to its angular bin, irrespective of its actual magnitude. Compared to pure image gradients, HOG is less sensitive to noise. Especially for woven fabrics, image gradients are very localized: small changes of the position in the image can lead to large differences in the gradient directions due to the high frequency of the image content. HOG instead averages over a certain window so that outliers are discarded.
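A minimal version of this thresholded, magnitude-agnostic orientation histogram could look as follows (parameter values are our own choices; `patch` is the grayscale neighborhood of pixel $p$):

```python
# Sketch of the thresholded orientation histogram described above:
# B bins over [0, 360) degrees; gradients below mag_thresh are ignored,
# and every counted gradient contributes 1 regardless of its magnitude.
import numpy as np

def orientation_histogram(patch, B=36, mag_thresh=10.0):
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)                          # gradient magnitudes
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0    # gradient angles in [0, 360)
    keep = mag > mag_thresh                         # noise suppression
    hist, _ = np.histogram(ang[keep], bins=B, range=(0.0, 360.0))
    return hist
```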
Directions of a Texture. Applying HOG to pictures of fabrics reveals their special characteristics caused by the line-like patterns (see Fig. 2). There are two dominant texture gradient angles, opposite to each other and both perpendicular to the lines. The angle $\theta_p$ whose bin has the highest frequency in $H_p$ therefore provides the most characteristic information about the pattern in the image at $p$. $\theta_p$ is then converted to its normalized 2D direction, also called the dominant frame gradient (DFG), which is stored in the two-channel image $D$. To detect image regions that do not contain line patterns, we set $D(p) = (0, 0)^\top$ if the highest frequency is below a certain threshold.
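Given such a histogram, extracting the DFG is a peak lookup plus the reliability test described above; a sketch with our own threshold parameter:

```python
# Sketch of the dominant frame gradient (DFG) lookup: pick the angle whose
# bin dominates the histogram, or return the zero vector if no line pattern
# is present (peak count below count_thresh).
import numpy as np

def dominant_frame_gradient(hist, B=36, count_thresh=20):
    b = int(np.argmax(hist))
    if hist[b] < count_thresh:          # no line-like micro-structure here
        return np.zeros(2)
    theta = np.radians((b + 0.5) * 360.0 / B)            # bin-center angle
    return np.array([np.cos(theta), np.sin(theta)])      # normalized 2D direction
```

Applied per pixel to the histograms from the previous sketch, this yields the entries of the DFG image $D$.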
Texture-based Constraint. Our novel texture term
$$E_{\mathrm{text}}(V) = \sum_{f=1}^{M} \sigma_{\mathrm{text}}\!\left( d_f(V),\; D_t\big(\pi(c_f(V))\big) \right) \tag{9}$$

ensures that for all triangles $f$ the projected DFG $d_f(V)$ parametrized on the object surface agrees with the frame's DFG at the location of the projected triangle center $c_f(V)$. An overview is shown in Fig. 3. More precisely, by averaging the UV coordinates of the triangle's vertices one can compute the pixel position $u_f$ of the center point of the triangle $f$ in the texture map. The neighborhood region for HOG around $u_f$ is defined as the 2D bounding box of the triangle. The HOG descriptor for $u_f$ can be computed, and by applying the concept explained in the previous paragraph one obtains the DFG $m_f$ of the texture map at $u_f$ (see Fig. 3 (a)). Next, we define $e_f = u_f + m_f$ and express it as a linear combination of the triangle's UV coordinates, leading to the barycentric coordinates $b_f$ of the face $f$. Together with those of the other triangles, they form the barycentric coordinates matrix $B \in \mathbb{R}^{M \times 3}$. Each row represents the texture map's DFG for the respective triangle of the mesh in an implicit form. Since $e_f$ can be represented as a linear combination, one can compute the corresponding 3D point $e_f(V)$ as well as the triangle center $c_f(V)$ in 3D (see Fig. 3 (b)). The barycentric coordinates remain constant, so that $e_f(V)$ and $c_f(V)$ only depend on the three mesh vertices of face $f$. One can then project the DFG of the mesh into the frame, $d_f(V) = \pi(e_f(V)) - \pi(c_f(V))$, and compare it against the DFG of the frame at the location of $\pi(c_f(V))$, which can be retrieved by an image lookup in the frame's DFG image $D_t$ (see Fig. 3 (c)). $\sigma_{\mathrm{text}}$ computes the minimum of the differences between $d_f(V)$ and $D_t(\pi(c_f(V)))$ iff both are non-zero vectors (otherwise we are not in an area with line patterns) and the directions are similar up to a certain threshold, to be more robust with respect to occlusions and noise. As mentioned above, there are two possible DFG directions in the frame for the case of line patterns. We assume the initialization is close to the ground truth and choose the minimum over the two possible directions.
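Putting the pieces together, one texture residual per triangle could be computed as in the following sketch (names are ours: `tri` holds the three vertex indices of face $f$, `bary` the precomputed barycentric coordinates $b_f$ of $e_f$, and `D_frame` the DFG image $D_t$):

```python
# Sketch of one texture residual: compare the mesh DFG, projected into the
# frame, against the frame DFG at the projected triangle center.
import numpy as np

def project(X, K):
    """Map a 3D point to pixel coordinates with intrinsic matrix K."""
    p = K @ X
    return p[:2] / p[2]

def texture_residual(V, tri, bary, K, D_frame, angle_thresh=0.5):
    c3d = V[tri].mean(axis=0)              # 3D triangle center c_f(V)
    e3d = bary @ V[tri]                    # 3D point e_f(V) via fixed barycentrics
    d_mesh = project(e3d, K) - project(c3d, K)   # projected mesh DFG d_f(V)
    n = np.linalg.norm(d_mesh)
    u = project(c3d, K)
    d_img = D_frame[int(u[1]), int(u[0])]  # frame DFG at projected center
    if n < 1e-8 or not d_img.any():        # sigma_text: skip non-line regions
        return np.zeros(2)
    d_mesh /= n
    # Two opposite DFG directions are valid for line patterns: take the closer one.
    r1, r2 = d_mesh - d_img, d_mesh + d_img
    r = r1 if r1 @ r1 <= r2 @ r2 else r2
    if r @ r > angle_thresh:               # robustness to occlusions and noise
        return np.zeros(2)
    return r
```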

4 Results
All experiments were performed on a PC with an NVIDIA GeForce GTX 1080Ti and an Intel Core i7. In contrast to related methods [43], we achieve interactive frame rates using the energy proposed in Eq. 2.
4.1 Qualitative and Quantitative Results
Now, we evaluate NRST on datasets for general objects like faces, where we disable $E_{\mathrm{text}}$. After that, we compare our approach against another monocular method. Finally, we evaluate our proposed texture term on two new scenes showing line-like fabric structures, perform an ablation study and demonstrate interesting applications. More results can be found in the supplemental video.

Qualitative Evaluation for General Objects. In Fig. 4 we show frames from our monocular reconstruction results. We tested our approach on two face sequences [43, 39] for which templates are provided. Note that NRST precisely reconstructs the facial expressions. The 2D overlay (second column) matches the input, and also in 3D (third column) our results look realistic. Furthermore, we evaluated on the datasets of Varol et al. [40] and Salzmann et al. [31], showing fast movements of a T-shirt and a waving towel. Again, for most parts of the surface the reconstructions look accurate in 2D, since they overlap well with the input, and they are also plausible in 3D. This validates that our approach can deal with the challenging problem of estimating 3D deformations from a monocular video for general kinds of objects.
Comparison to Yu et al. [43]. Fig. 5 shows a qualitative comparison between our method and the one of Yu et al. [43]. Both capture the facial expression, but the proposed approach is faster than the one of Yu et al. due to our data-parallel GPU implementation. In particular, on their sequence our method runs at 15fps, whereas their approach takes several seconds per frame. More side-by-side comparisons on this sequence can be found in the supplemental video.


Qualitative Evaluation for Fabrics. The top row of Fig. 6 shows frames of a moving piece of cloth that has the typical line patterns. Although the object is sparsely textured, our approach is able to recover the deformations due to the texture term, which accurately tracks the DFG of the line pattern. As demonstrated in the last column, the estimated angles for the frames are correct and therefore provide a reliable information cue exploited by $E_{\mathrm{text}}$. For quantitative evaluation, we created and rendered a synthetic scene, modeled and animated in a modeling software, showing a carpet that has the characteristic line pattern but is also partially textured (see bottom row of Fig. 6). $E_{\mathrm{text}}$ helps in the less textured regions, where $E_{\mathrm{photo}}$ would fail. The last column shows how close our reconstruction (red) is to the ground truth (blue).
Ablation Analysis. Apart from the proposed texture term, our energy formulation is similar to the one of Yu et al. [43]. To validate that $E_{\mathrm{text}}$ improves the reconstruction over a photometric-only formulation, we perform an ablation study. We measured the average per-vertex Euclidean distance between the ground truth mesh and our reconstructions. For the waving towel shown in Fig. 6 (bottom), we obtained an error of 26.8mm without and 25.5mm with our proposed texture term, an improvement of 4.8%. The diagonal of the 3D bounding box of the towel is 3162mm. For the rotation sequence shown in Fig. 7, the color variation is very limited since background and object have the same color. In contrast to $E_{\mathrm{photo}}$ alone, $E_{\mathrm{text}}$ can rotate the object, leading to an error of 4.1mm for the texture-only case and 6.7mm for the photometric-only setting. So $E_{\mathrm{text}}$ improves over $E_{\mathrm{photo}}$ by 38.8%.
4.2 Applications
Our method enables several applications such as free-viewpoint rendering, re-texturing of general deformable objects, and virtual face make-up (see Fig. 8). Since our approach estimates the deforming geometry, one can even change the scene lighting for the foreground such that the shading remains realistic.


4.3 Limitations
By the nature of the challenging task of monocular tracking of non-rigid deformations, our method has some limitations, which open up directions for future work. Although our proposed texture term uses more of the information contained in the video than a photometric-only formulation, there are still image cues that can improve the reconstruction, such as shading and the object contour, as demonstrated by previous work [13, 41]. Thus, one could combine them in a unified framework. To increase robustness, the deformations could be jointly estimated over a temporal sliding window as proposed by Xu et al. [41], and an embedded graph [35] could lead to improved stability by reducing the number of unknowns.
5 Conclusion
We presented an optimization-based analysis-by-synthesis method that solves the challenging task of estimating non-rigid motion, given a single RGB video and a template. Our method tracks non-trivial deformations of a broad class of shapes, ranging from faces to deforming fabric. Further, we introduce specific solutions tailored to capture woven fabrics, even if they lack clear color variations. Our method runs at interactive frame rates due to the GPU-based solver that can efficiently solve the non-linear least squares optimization problem. Our evaluation shows that the reconstructions are accurate in 2D and 3D which enables several applications such as re-texturing.
Acknowledgments.
This work was funded by the ERC Consolidator Grant 4DRepLy (770784).
References
- [1] Bartoli, A., Gérard, Y., Chadebecq, F., Collins, T.: On Template-Based Reconstruction from a Single View: Analytical Solutions and Proofs of Well-Posedness for Developable, Isometric and Conformal Surfaces. CVPR (2012)
- [2] Brunet, F., Hartley, R., Bartoli, A., Navab, N., Malgouyres, R.: Monocular Template-Based Reconstruction of Smooth and Inextensible Surfaces. ACCV (2010)
- [3] Carceroni, R.L., Kutulakos, K.N.: Multi-View Scene Capture by Surfel Sampling: From Video Streams to Non-Rigid 3D Motion, Shape & Reflectance. ICCV (2001)
- [4] Dai, Y., Li, H., He, M.: A Simple Prior-free Method for Non-Rigid Structure-from-Motion Factorization. IJCV (2014)
- [5] Dalal, N., Triggs, B.: Histograms of Oriented Gradients for Human Detection. CVPR (2005)
- [6] Garg, R., Roussos, A., Agapito, L.: Dense Variational Reconstruction of Non-Rigid Surfaces from Monocular Video. CVPR (2013)
- [7] Garrido, P., Valgaerts, L., Wu, C., Theobalt, C.: Reconstructing Detailed Dynamic Face Geometry from Monocular Video. TOG (2013)
- [8] Gårding, J.: Shape from Texture for Smooth Curved Surfaces in Perspective Projection. Journal of Mathematical Imaging and Vision (1992)
- [9] Jordt, A., Koch, R.: Fast Tracking of Deformable Objects in Depth and Colour Video. BMVC (2011)
- [10] Jordt, A., Koch, R.: Direct Model-Based Tracking of 3D Object Deformations in Depth and Color Video. IJCV (2013)
- [11] Labatut, P., Pons, J.P., Keriven, R.: Efficient Multi-View Reconstruction of Large-Scale Scenes using Interest Points, Delaunay Triangulation and Graph Cuts. ICCV (2007)
- [12] Liang, J., DeMenthon, D., Doermann, D.: Flattening Curved Documents in Images. CVPR (2005)
- [13] Liu-Yin, Q., Yu, R., Agapito, L., Fitzgibbon, A., Russell, C.: Better Together: Joint Reasoning for Non-rigid 3D Reconstruction with Specularities and Shading. BMVC (2016)
- [14] Ma, W.J.: Nonrigid 3D Reconstruction from a Single Image. ISAI (2016)
- [15] Malti, A., Bartoli, A., Collins, T.: A Pixel-Based Approach to Template-Based Monocular 3D Reconstruction of Deformable Surfaces. ICCV Workshops (2011)
- [16] Malti, A., Hartley, R., Bartoli, A., Kim, J.H.: Monocular Template-Based 3D Reconstruction of Extensible Surfaces with Local Linear Elasticity. CVPR (2013)
- [17] Moreno-Noguer, F., Salzmann, M., Lepetit, V., Fua, P.: Capturing 3D Stretchable Surfaces from Single Images in Closed Form. CVPR (2009)
- [18] Newcombe, R.A., Fox, D., Seitz, S.M.: DynamicFusion: Reconstruction and Tracking of Non-Rigid Scenes in Real-Time. CVPR (2015)
- [19] Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohli, P., Shotton, J., Hodges, S., Fitzgibbon, A.: KinectFusion: Real-Time Dense Surface Mapping and Tracking. International Symposium on Mixed and Augmented Reality (2011)
- [20] Ngo, D.T., Park, S., Jorstad, A., Crivellaro, A., Yoo, C.D., Fua, P.: Dense Image Registration and Deformable Surface Reconstruction in Presence of Occlusions and Minimal Texture. ICCV (2015)
- [21] Oestlund, J., Varol, A., Ngo, D.T., Fua, P.: Laplacian Meshes for Monocular 3D Shape Recovery. ECCV (2012)
- [22] Pan, Q., Reitmayr, G., Drummond, T.: ProFORMA: Probabilistic Feature-based On-line Rapid Model Acquisition. BMVC (2009)
- [23] Perriollat, M., Bartoli, A.: A Quasi-Minimal Model for Paper-Like Surfaces. CVPR (2007)
- [24] Perriollat, M., Hartley, R., Bartoli, A.: Monocular Template-based Reconstruction of Inextensible Surfaces. IJCV (2011)
- [25] Pons, J.P., Keriven, R., Faugeras, O.: Modelling Dynamic Scenes by Registering Multi-View Image Sequence. CVPR (2005)
- [26] Rao, A.R.: Computing Oriented Texture Fields. Computer Vision, Graphics, and Image Processing: Graphical Models and Image Processing (1991)
- [27] Russell, C., Fayad, J., Agapito, L.: Energy Based Multiple Model Fitting for Non-Rigid Structure from Motion. CVPR (2011)
- [28] Salzmann, M., Fua, P.: Reconstructing Sharply Folding Surfaces: A Convex Formulation. CVPR (2009)
- [29] Salzmann, M., Fua, P.: Linear Local Models for Monocular Reconstruction of Deformable Surface. Transactions on Pattern Analysis and Machine Intelligence (2011)
- [30] Salzmann, M., Lepetit, V., Fua, P.: Deformable Surface Tracking Ambiguities. CVPR (2007)
- [31] Salzmann, M., Moreno-Noguer, F., Lepetit, V., Fua, P.: Closed-Form Solution to Non-rigid 3D Surface Registration. ECCV (2008)
- [32] Salzmann, M., Urtasun, R., Fua, P.: Local Deformation Models for Monocular 3D Shape Recovery. CVPR (2008)
- [33] Shen, S., Shi, W., Liu, Y.: Monocular Template-Based Tracking of Inextensible Deformable Surfaces under L2-Norm. ACCV (2009)
- [34] Sorkine, O., Alexa, M.: As-Rigid-As-Possible Surface Modeling. SGP (2007)
- [35] Sumner, R.W., Schmid, J., Pauly, M.: Embedded deformation for shape manipulation. TOG (2007)
- [36] Yu, T., Zheng, Z., Guo, K., Zhao, J., Dai, Q., Li, H., Pons-Moll, G., Liu, Y.: DoubleFusion: Real-time Capture of Human Performance with Inner Body Shape from a Depth Sensor. CVPR (2018)
- [37] Torresani, L., Hertzmann, A., Bregler, C.: Non-Rigid Structure-From-Motion: Estimating Shape and Motion with Hierarchical Priors. Transactions on Pattern Analysis and Machine Intelligence (2008)
- [38] Tsoli, A., Argyros, A.: Tracking Deformable Surfaces That Undergo Topological Changes Using an RGB-D Camera. 3DV (2016)
- [39] Valgaerts, L., Wu, C., Bruhn, A., Seidel, H.P., Theobalt, C.: Lightweight Binocular Facial Performance Capture under Uncontrolled Lighting. SIGGRAPH Asia (2012)
- [40] Varol, A., Salzmann, M., Fua, P., Urtasun, R.: A Constrained Latent Variable Model. CVPR (2012)
- [41] Xu, W., Chatterjee, A., Zollhoefer, M., Rhodin, H., Mehta, D., Seidel, H.P., Theobalt, C.: MonoPerfCap: Human Performance Capture from Monocular Video. TOG (2018)
- [42] Xu, W., Salzmann, M., Wang, Y., Liu, Y.: Nonrigid Surface Registration and Completion from RGBD Images. ECCV (2014)
- [43] Yu, R., Russell, C., Campbell, N.D.F., Agapito, L.: Direct, Dense, and Deformable: Template-Based Non-Rigid 3D Reconstruction from RGB Video. ICCV (2015)
- [44] Zollhoefer, M., Niessner, M., Izadi, S., Rhemann, C., Zach, C., Fisher, M., Wu, C., Fitzgibbon, A., Loop, C., Theobalt, C., Stamminger, M.: Real-time Non-rigid Reconstruction using an RGB-D Camera. TOG (2014)