
A Novel 3D Mapping Representation and its Applications

 Qiguang Chen
Department of Mathematics
The Chinese University of Hong Kong
[email protected]
& Lok Ming Lui
Department of Mathematics
The Chinese University of Hong Kong
[email protected]
Abstract

The analysis of mapping relationships and distortions in multidimensional data poses a significant challenge in contemporary research. While Beltrami coefficients offer a precise description of distortions in two-dimensional mappings, current tools lack this capability in the context of three-dimensional space. This paper presents a novel approach: a 3D quasiconformal representation that captures the local dilation of 3D mappings, along with a reconstruction algorithm that establishes a connection between this representation and the corresponding mapping. Experimental results showcase the algorithm’s effectiveness in mapping reconstruction, keyframe interpolation, and mapping compression. These features bear a resemblance to the 2D Linear Beltrami Solver technique. The work presented in this paper offers a promising solution for the precise analysis and adjustment of distortions in 3D data and mappings.

Keywords Quasiconformal mapping, Beltrami coefficient, 3D quasiconformal representation, 3D Linear Beltrami Solver, Keyframe interpolation, 3D mapping compression

1 Introduction

In modern scientific research, a central task is to process the vast amount of information we encounter daily, which includes 1D, 2D, and 3D data. Acoustic algorithms have been developed to process 1D signals such as voice, while images captured by cameras, rendered in games, and used in medical scenarios such as X-ray imaging are examples of 2D data. In mathematics, we regard the surface of a 3D object as 2D and typically parameterize it on the 2D plane for further processing. Examples of 3D data include MRI scans of the brain and CT scans of the chest.

When analyzing relationships between pairs or groups of objects, we often use mappings to represent these relationships. However, when working with mappings, it is important to be mindful of the distortion that can arise from numerical implementations. In the 2D case, Beltrami coefficients, which are based on the Beltrami equation, can be used to measure the distortion of 2D mappings effectively. By controlling or analyzing the Beltrami coefficients of mappings, we can perform registration or shape analysis. However, this approach only works in 2D space; we currently lack an equally effective tool to describe distortion precisely in 3D space.

For a long time, we had no efficient way to reconstruct mappings from Beltrami coefficients. Obtaining Beltrami coefficients from mappings is straightforward, but the inverse direction is not explicitly defined by the Beltrami equation. However, the Linear Beltrami Solver (LBS) [1] has tackled this problem by providing an effective link between Beltrami coefficients and mappings. LBS has been successful in the field of mapping distortion adjustment, enabling the adjustment of distortion in both forward and backward directions. Various algorithms based on LBS have been proposed to solve challenging problems such as efficient parameterization, image registration, segmentation, retargeting, deturbulence, Teichmüller mapping, and the harmonic Beltrami signature. Recently, learning-based methods built on LBS have also been developed, leveraging prior knowledge from data and the topology-preservation advantages of LBS to address challenging tasks and outperform conventional methods in various aspects.

However, LBS is theoretically based on the Beltrami equation, which only provides a relationship between Beltrami coefficients and mappings in 2D space. When dealing with distortions in 3D space, LBS and its related efficient algorithms cannot be generalized to three dimensions. Unlike in 2D space, there is no quasiconformal representation similar to Beltrami coefficients in 3D space, nor is there an equation that relates such a representation to its corresponding mapping.

In this paper, we propose a 3D quasiconformal representation that can effectively describe the local dilation of a 3D mapping, together with an algorithm, similar to LBS, that connects this representation to its corresponding mapping. We also propose a keyframe interpolation algorithm and a 3D mapping compression algorithm based on the proposed 3D quasiconformal representation and the reconstruction algorithm. Experimental results show the effectiveness of these algorithms.

2 Contributions

The main contributions of this paper are outlined as follows.

  1. We propose an effective 3D quasiconformal representation to describe the local dilation of 3D mappings.

  2. We derive a partial differential equation that connects the proposed 3D quasiconformal representation and its corresponding mapping. Similar to LBS, this PDE can be discretized into a linear system, and boundary conditions and landmarks can be imposed easily.

  3. We further show that under our discretization scheme, the resulting linear system is symmetric positive definite, meaning that it can be solved efficiently using the conjugate gradient method.

  4. Based on the above algorithm, we propose an algorithm for 3D keyframe interpolation in the film and animation industry.

  5. Based on the above algorithm, we propose a 3D mapping compression algorithm for 3D unstructured meshes.

3 Related works

The computational method of conformal mapping [2, 3] has proven to be highly beneficial in the field of computer graphics. After the groundbreaking study by Gu et al. [4], there has been significant progress in using the conformal geometry framework for surface processing tasks. The work of Zayer, Rossl, and Seidel [5] already contains implicit generalizations of these ideas. Lui and his coauthors proposed the quasiconformal extension of the surface registration framework, which includes map compression [6] and the Linear Beltrami Solver (LBS) introduced in [1]. This solver has been used to develop fast spherical and disk conformal parameterization techniques [7, 8], as well as various registration algorithms for different applications [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Additionally, LBS has been utilized in various segmentation algorithms for different applications [25, 26, 27, 28]. Besides, it has also been applied to shape analysis [29, 30, 31, 32, 33, 34]. Learning-based methods based on LBS [35, 36] were also proposed to perform topology-preserving registration in 2D space. However, due to the limitations of the 2D quasiconformal theory on which LBS is based, these useful techniques have not been generalized to process 3D data.

Some efforts have been made to apply quasiconformal theory to data of higher dimensions. In [37], a generalized conformality distortion is first defined to measure the dilatation of an n-dimensional quasiconformal map, after which optimization algorithms are proposed to obtain landmark-matching transformations. Based on this quantity, [38] proposes a general framework for various n-dimensional mapping problems. [39] proposes an n-dimensional measure that is consistent with the classical 2D Beltrami coefficient in terms of the relationship between the magnitude of the quantity and the determinant of the mapping. Although these quantities are effective in measuring and controlling the distortion of mappings, the generalized conformality distortion cannot be used to reconstruct 3D mappings, as it does not contain the complete information of a mapping (the dilations in three orthogonal directions), and this lack of information makes obtaining the mapping from the defined quantities impossible. In other words, they are not equivalent to the Beltrami coefficient in higher dimensions.

4 Mathematical background

In the following sections, we introduce the mathematical background for this paper. It consists of three parts: the polar decomposition for separating rotation and dilation, quasiconformal mappings in 2D space, and some observations related to the matrix inverse.

4.1 Polar decomposition

In this paper, we focus solely on the left polar decomposition of real square matrices. We begin by denoting the set of n\times n real matrices as \mathcal{M}_{n}. The left polar decomposition of a matrix J\in\mathcal{M}_{n} is a factorization of the form J=UP, where the columns of the matrix U\in\mathcal{M}_{n} form an orthonormal basis, and P\in\mathcal{M}_{n} is a symmetric positive semi-definite matrix. Specifically, P=\sqrt{J^{T}J}.

To better understand this definition, we first note that J^{T}J is a symmetric positive semi-definite matrix. Applying the eigenvalue decomposition to this matrix, we have

J^{T}J=WDW^{-1}

where W is an orthogonal matrix because J^{T}J is symmetric; therefore we also have W^{-1}=W^{T}. The diagonal matrix D contains the eigenvalues of J^{T}J. Let \Sigma denote the matrix whose diagonal entries are the square roots of the corresponding entries in D. Those diagonal entries are the singular values of J. Thus, we have P=\sqrt{J^{T}J}=W\Sigma W^{-1}.

Therefore, any linear transformation from \mathbb{R}^{n} to \mathbb{R}^{n} is a composition of a rotation/reflection and a dilation.
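To make the decomposition concrete, the following is a minimal NumPy sketch of the left polar decomposition described above; the function name and the test matrix are illustrative only.

import numpy as np

def left_polar_decomposition(J):
    # Left polar decomposition J = U P with P = sqrt(J^T J) (a sketch).
    # Eigendecomposition of the symmetric matrix J^T J = W D W^T.
    D, W = np.linalg.eigh(J.T @ J)
    # Singular values of J are the square roots of the eigenvalues of J^T J.
    Sigma = np.sqrt(np.clip(D, 0.0, None))
    # P = W Sigma W^T is symmetric positive semi-definite.
    P = W @ np.diag(Sigma) @ W.T
    # If J is invertible, U = J P^{-1} has orthonormal columns.
    U = J @ np.linalg.inv(P)
    return U, P

# Quick check on an invertible example matrix.
J = np.array([[2.0, 0.3, 0.0],
              [0.1, 1.5, 0.2],
              [0.0, 0.4, 1.0]])
U, P = left_polar_decomposition(J)
assert np.allclose(U @ P, J)
assert np.allclose(U.T @ U, np.eye(3), atol=1e-10)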

4.2 2D Quasiconformal maps and Beltrami coefficients

A surface with a conformal structure is known as a Riemann surface. Given two Riemann surfaces M and N, a map f:M\rightarrow N is conformal if it preserves the surface metric up to a multiplicative factor called the conformal factor. Quasiconformal maps are a generalization of conformal maps that are orientation-preserving diffeomorphisms between Riemann surfaces with bounded conformal distortion, meaning that their first-order approximation maps small circles to small ellipses of bounded eccentricity (as shown in Figure 1).

Figure 1: Illustration of how the Beltrami coefficient \mu measures the distortion of the stretch map S of a quasiconformal mapping f, which maps a small circle to an ellipse with dilation K=\frac{1+|\mu(p)|}{1-|\mu(p)|}. The axes of the ellipse are (1+|\mu(p)|)(\cos\frac{\theta}{2},\sin\frac{\theta}{2})^{T} and (1-|\mu(p)|)(\sin\frac{\theta}{2},-\cos\frac{\theta}{2})^{T}.

Mathematically, a map f:\mathbb{C}\rightarrow\mathbb{C} is quasiconformal if it satisfies the Beltrami equation

\frac{\partial f}{\partial\bar{z}}=\mu(z)\frac{\partial f}{\partial z},

for some complex-valued function \mu with \lVert\mu\rVert_{\infty}<1, called the Beltrami coefficient. In particular, f is conformal around a small neighborhood of a point p if and only if \mu(p)=0, that is

f(z) = f(p)+f_{z}(p)z+f_{\bar{z}}(p)\bar{z} = f(p)+f_{z}(p)(z+\mu(p)\bar{z})

This equation shows that f can be considered as a composition of a translation that maps p to f(p), followed by the stretch map S(z)=z+\mu\bar{z}, postcomposed with a multiplication by f_{z}(p), which is conformal. Among these terms, S(z) causes f to map small circles to small ellipses, and \mu(p) controls the stretching effect of S(z). Writing z=\rho+i\tau and \mu=re^{i\theta}, where r=|\mu|, and representing S(z)=z+\mu\bar{z} in matrix form, we have

\left[\begin{pmatrix}1&0\\ 0&1\end{pmatrix}+r\begin{pmatrix}\cos{\theta}&-\sin{\theta}\\ \sin{\theta}&\cos{\theta}\end{pmatrix}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\right]\begin{pmatrix}\rho\\ \tau\end{pmatrix}=\begin{pmatrix}1+r\cos{\theta}&r\sin{\theta}\\ r\sin{\theta}&1-r\cos{\theta}\end{pmatrix}\begin{pmatrix}\rho\\ \tau\end{pmatrix} (1)

One can check that the eigenvalues of this matrix are 1+|\mu| and 1-|\mu|, with corresponding eigenvectors (\cos{\frac{\theta}{2}},\sin{\frac{\theta}{2}})^{T} and (\sin{\frac{\theta}{2}},-\cos{\frac{\theta}{2}})^{T} respectively. It is worth noting that the two eigenvectors are orthogonal to each other. The distortion or dilation can be measured by K=\frac{1+|\mu|}{1-|\mu|}. Thus the Beltrami coefficient \mu gives us important information about the properties of a map.
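As a quick sanity check of Equation 1 and the stated eigen-structure, the following short NumPy snippet (with illustrative values of r and \theta) verifies numerically that the stretch-map matrix has eigenvalues 1\pm|\mu| and that (\cos\frac{\theta}{2},\sin\frac{\theta}{2})^{T} is an eigenvector for 1+|\mu|.

import numpy as np

# For mu = r*exp(i*theta), the stretch map S(z) = z + mu*conj(z) acts on
# (rho, tau) by the matrix of Equation (1); its eigenvalues are 1 + |mu| and 1 - |mu|.
r, theta = 0.4, 0.7            # example values with |mu| < 1
S = np.array([[1 + r*np.cos(theta), r*np.sin(theta)],
              [r*np.sin(theta),     1 - r*np.cos(theta)]])

vals, vecs = np.linalg.eigh(S)     # S is symmetric
print(sorted(vals))                # approximately [1 - r, 1 + r]

# The eigenvector for 1 + |mu| is (cos(theta/2), sin(theta/2)).
v_plus = np.array([np.cos(theta/2), np.sin(theta/2)])
assert np.allclose(S @ v_plus, (1 + r) * v_plus)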

4.3 An observation related to matrix inverse

In addition, we notice some interesting results related to matrix inverse, which facilitate the derivation in the following sections.

Generally, for any invertible 3\times 3 matrix M, we have

M^{-1}=\begin{pmatrix}m_{11}&m_{12}&m_{13}\\ m_{21}&m_{22}&m_{23}\\ m_{31}&m_{32}&m_{33}\end{pmatrix}^{-1}=\frac{1}{\det(M)}\begin{pmatrix}m_{22}m_{33}-m_{23}m_{32}&m_{13}m_{32}-m_{12}m_{33}&m_{12}m_{23}-m_{13}m_{22}\\ m_{23}m_{31}-m_{21}m_{33}&m_{11}m_{33}-m_{13}m_{31}&m_{21}m_{13}-m_{11}m_{23}\\ m_{21}m_{32}-m_{22}m_{31}&m_{12}m_{31}-m_{11}m_{32}&m_{11}m_{22}-m_{21}m_{12}\end{pmatrix}

Representing the row vectors of M as \vec{r}_{1}, \vec{r}_{2}, and \vec{r}_{3}, and the column vectors as \vec{c}_{1}, \vec{c}_{2}, and \vec{c}_{3}, we have

\begin{pmatrix}\text{---}&\vec{r}_{1}^{T}&\text{---}\\ \text{---}&\vec{r}_{2}^{T}&\text{---}\\ \text{---}&\vec{r}_{3}^{T}&\text{---}\end{pmatrix}^{-1}=\frac{1}{\det(M)}\begin{pmatrix}|&|&|\\ \vec{r}_{2}\times\vec{r}_{3}&\vec{r}_{3}\times\vec{r}_{1}&\vec{r}_{1}\times\vec{r}_{2}\\ |&|&|\end{pmatrix} (2)

\begin{pmatrix}|&|&|\\ \vec{c}_{1}&\vec{c}_{2}&\vec{c}_{3}\\ |&|&|\end{pmatrix}^{-1}=\frac{1}{\det(M)}\begin{pmatrix}\text{---}&(\vec{c}_{2}\times\vec{c}_{3})^{T}&\text{---}\\ \text{---}&(\vec{c}_{3}\times\vec{c}_{1})^{T}&\text{---}\\ \text{---}&(\vec{c}_{1}\times\vec{c}_{2})^{T}&\text{---}\end{pmatrix} (3)
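The following small NumPy check illustrates Equations 2 and 3 on an arbitrary invertible 3\times 3 matrix; the matrix chosen here is only an example.

import numpy as np

# Numerical check of Equations (2)-(3): the inverse of a 3x3 matrix can be
# written with cross products of its row vectors (column vectors for Equation 3).
M = np.array([[1.0, 2.0, 0.5],
              [0.0, 3.0, 1.0],
              [2.0, 0.5, 4.0]])
r1, r2, r3 = M            # row vectors of M

M_inv = np.column_stack([np.cross(r2, r3),
                         np.cross(r3, r1),
                         np.cross(r1, r2)]) / np.linalg.det(M)
assert np.allclose(M_inv, np.linalg.inv(M))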

5 3D mapping representation

In this section, we provide a detailed description of our proposed 3D mapping representation for diffeomorphisms. We call it a “representation” because it includes all the information needed to reconstruct the corresponding mapping, a feature not present in many existing methods; accordingly, we also provide a reconstruction method for the proposed representation.

The organization of this section is as follows. We begin by extending the Beltrami coefficients to 3D space. Since there is no Beltrami equation in 3D space, we refer to these extended coefficients as the 3D quasiconformal representation. Next, we derive an elliptic partial differential equation that relates mappings to their 3D quasiconformal representations. Finally, we discretize the equation to obtain a linear system. By solving this system with appropriate boundary conditions, we can reconstruct mappings from their representations. We then summarize the proposed representation and reconstruction algorithm using pseudocode. Experimental results for mapping reconstruction are provided at the end of this section to demonstrate the performance of our algorithm.

5.1 3D quasiconformal representation

Let \Omega_{1} and \Omega_{2} be oriented, simply connected open subsets of \mathbb{R}^{3}. Suppose we have a diffeomorphism f:\Omega_{1}\rightarrow\Omega_{2}. Let \textbf{J}_{f}(p) denote the Jacobian matrix of f at a point p in \Omega_{1}. Since f is a diffeomorphism, \textbf{J}_{f}(p) is invertible, and as f is orientation-preserving, the Jacobian determinant \det(\textbf{J}_{f}(p)) is positive for every point p in \Omega_{1}.

Let \Sigma be the diagonal matrix containing the singular values a, b, and c of \textbf{J}_{f}(p), where by convention we require a\geq b\geq c. By the polar decomposition, we have

\textbf{J}_{f}(p)=U(p)P(p), \qquad P(p)=\sqrt{\textbf{J}_{f}(p)^{T}\textbf{J}_{f}(p)}=W\Sigma W^{-1} (4)

where U is a rotation matrix, because the determinants of \textbf{J}_{f}(p) and P(p) are both positive, implying that \det(U(p))=1. Here a, b, and c are the eigenvalues of P(p), representing the dilation information, and W is an orthogonal matrix containing the eigenvectors, i.e., the dilation directions.

Recall that in 2D space, the Beltrami coefficient \mu encodes the two dilations 1+|\mu| and 1-|\mu|, together with their corresponding directions (\cos{\frac{\theta}{2}},\sin{\frac{\theta}{2}})^{T} and (\sin{\frac{\theta}{2}},-\cos{\frac{\theta}{2}})^{T}, which are orthogonal to each other. Now in 3D space, we have three dilations a, b, and c, together with their corresponding directions given by the column vectors of W, which are also orthogonal to each other since P(p) is symmetric. Therefore, similar to the Beltrami coefficient, P(p) encodes the dilation information of the 3D mapping f.

Furthermore, it is possible to represent the matrix W by Euler angles. To do so, we require that the column vectors of W form an orthonormal basis and that \det(W)=1. However, \det(W) is not necessarily positive, meaning that W may not be a rotation matrix. This can be fixed, whenever \det(W)=-1, by multiplying one column of W by -1 to get a matrix \tilde{W}. This modification ensures \det(\tilde{W})=1, indicating that \tilde{W} is a rotation matrix, while the matrix P(p) is unchanged. To see this, note that P(p) can be reorganized as a\vec{w}_{1}\vec{w}_{1}^{T}+b\vec{w}_{2}\vec{w}_{2}^{T}+c\vec{w}_{3}\vec{w}_{3}^{T}, where \vec{w}_{1}, \vec{w}_{2}, and \vec{w}_{3} are the column vectors of W. Let \tilde{\vec{w}}_{1}=-\vec{w}_{1}; then \tilde{\vec{w}}_{1}\tilde{\vec{w}}_{1}^{T}=\vec{w}_{1}\vec{w}_{1}^{T}. Thus reversing the direction of \vec{w}_{1} makes no change to P(p). The same holds for \vec{w}_{2} and \vec{w}_{3}.

As is known, a rotation matrix R can be written in the following form:

R=R_{z}(\theta_{z})R_{y}(\theta_{y})R_{x}(\theta_{x}) (5)
=\begin{pmatrix}\cos{\theta_{z}}&-\sin{\theta_{z}}&0\\ \sin{\theta_{z}}&\cos{\theta_{z}}&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}\cos{\theta_{y}}&0&\sin{\theta_{y}}\\ 0&1&0\\ -\sin{\theta_{y}}&0&\cos{\theta_{y}}\end{pmatrix}\begin{pmatrix}1&0&0\\ 0&\cos{\theta_{x}}&-\sin{\theta_{x}}\\ 0&\sin{\theta_{x}}&\cos{\theta_{x}}\end{pmatrix}
=\begin{pmatrix}\cos{\theta_{z}}\cos{\theta_{y}}&\cos{\theta_{z}}\sin{\theta_{y}}\sin{\theta_{x}}-\sin{\theta_{z}}\cos{\theta_{x}}&\cos{\theta_{z}}\sin{\theta_{y}}\cos{\theta_{x}}+\sin{\theta_{z}}\sin{\theta_{x}}\\ \sin{\theta_{z}}\cos{\theta_{y}}&\sin{\theta_{z}}\sin{\theta_{y}}\sin{\theta_{x}}+\cos{\theta_{z}}\cos{\theta_{x}}&\sin{\theta_{z}}\sin{\theta_{y}}\cos{\theta_{x}}-\cos{\theta_{z}}\sin{\theta_{x}}\\ -\sin{\theta_{y}}&\cos{\theta_{y}}\sin{\theta_{x}}&\cos{\theta_{y}}\cos{\theta_{x}}\end{pmatrix}
=\begin{pmatrix}r_{11}&r_{12}&r_{13}\\ r_{21}&r_{22}&r_{23}\\ r_{31}&r_{32}&r_{33}\end{pmatrix}

Then we obtain the Euler angles \theta_{x}, \theta_{y}, and \theta_{z} by

\theta_{x}=\operatorname{atan2}(r_{32},r_{33}), \qquad \theta_{y}=\operatorname{atan2}(-r_{31},\sqrt{r_{32}^{2}+r_{33}^{2}}), \qquad \theta_{z}=\operatorname{atan2}(r_{21},r_{11}) (6)

In conclusion, we have the following two equivalent ways to define 3D quasiconformal representation for diffeomorphism:

  1. The upper triangular part of the matrix P(p). Since P(p) is a symmetric positive definite matrix, its upper triangular part encodes all the information of the matrix;

  2. The eigenvalues and the Euler angles (a,b,c,\theta_{x},\theta_{y},\theta_{z}). The Euler angles encode the information of the eigenvectors, therefore they carry information equivalent to P(p).

Note that both representations consist of 6 numbers for each point in the domain. The first representation facilitates various computational operations, as no angles are involved during the computation, while the second representation provides a much clearer geometric meaning. Since the two representations are equivalent and can be easily converted to each other, we will use only the first representation, i.e., the upper triangular part of P(p), in the following sections without further mention.
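For illustration, the sketch below computes both equivalent forms of the representation from a single Jacobian matrix using NumPy; the function name is hypothetical, and the sign fix for W follows the discussion above.

import numpy as np

def qc_representation(J):
    # Sketch: both equivalent forms of the 3D quasiconformal representation
    # of a Jacobian J with det(J) > 0.
    # P = sqrt(J^T J) = W Sigma W^{-1}, with eigenvalues sorted as a >= b >= c.
    evals, W = np.linalg.eigh(J.T @ J)
    idx = np.argsort(evals)[::-1]
    a, b, c = np.sqrt(evals[idx])
    W = W[:, idx]
    # Form 1: upper triangular part of P (6 numbers).
    P = W @ np.diag([a, b, c]) @ W.T
    upper = P[np.triu_indices(3)]
    # Form 2: eigenvalues plus Euler angles.  Flip one column if needed so that
    # det(W) = 1; this leaves P unchanged, since w w^T = (-w)(-w)^T.
    if np.linalg.det(W) < 0:
        W[:, 0] *= -1
    theta_x = np.arctan2(W[2, 1], W[2, 2])
    theta_y = np.arctan2(-W[2, 0], np.hypot(W[2, 1], W[2, 2]))
    theta_z = np.arctan2(W[1, 0], W[0, 0])
    return upper, (a, b, c, theta_x, theta_y, theta_z)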

5.2 The PDE in the continuous case

By Equations 4, we have

\textbf{J}_{f}(p)^{T}\textbf{J}_{f}(p)=W\Sigma\Sigma W^{-1}=W\begin{pmatrix}a^{2}&0&0\\ 0&b^{2}&0\\ 0&0&c^{2}\end{pmatrix}W^{-1} (7)
\det(\textbf{J}_{f}(p))=\det(P(p))=\det(\Sigma)=abc (8)

Recall the assumption that f is a diffeomorphism, implying that \textbf{J}_{f}(p) is invertible. Right-multiplying both sides of the above equation by \textbf{J}_{f}(p)^{-1}, we have

\textbf{J}_{f}(p)^{T}=W\begin{pmatrix}a^{2}&0&0\\ 0&b^{2}&0\\ 0&0&c^{2}\end{pmatrix}W^{-1}\frac{1}{\det(\textbf{J}_{f}(p))}C=W\begin{pmatrix}\frac{a}{bc}&0&0\\ 0&\frac{b}{ac}&0\\ 0&0&\frac{c}{ab}\end{pmatrix}W^{-1}C (9)

where C is the adjugate matrix of \textbf{J}_{f}(p). According to the observation in Equation 2, C has the form

C=\begin{pmatrix}|&|&|\\ \nabla v\times\nabla w&\nabla w\times\nabla u&\nabla u\times\nabla v\\ |&|&|\end{pmatrix} (10)

where \nabla u, \nabla v, and \nabla w are the columns of \textbf{J}_{f}(p)^{T}.

Moving the terms preceding the matrix C on the right-hand side of Equation 9 to the left, we have

W\begin{pmatrix}\frac{bc}{a}&0&0\\ 0&\frac{ac}{b}&0\\ 0&0&\frac{ab}{c}\end{pmatrix}W^{-1}\textbf{J}_{f}(p)^{T}=C (11)

Let \mathcal{A} be the matrix defined as

\mathcal{A}=W\begin{pmatrix}\frac{bc}{a}&0&0\\ 0&\frac{ac}{b}&0\\ 0&0&\frac{ab}{c}\end{pmatrix}W^{-1} (12)

We can then rewrite Equation 11 in the following form

\mathcal{A}\begin{pmatrix}|&|&|\\ \nabla u&\nabla v&\nabla w\\ |&|&|\end{pmatrix}=\begin{pmatrix}|&|&|\\ \nabla v\times\nabla w&\nabla w\times\nabla u&\nabla u\times\nabla v\\ |&|&|\end{pmatrix} (13)

For \nabla u, we have

\nabla\cdot\mathcal{A}\nabla u=\nabla\cdot(\nabla v\times\nabla w)=\nabla w\cdot(\nabla\times\nabla v)-\nabla v\cdot(\nabla\times\nabla w)=0 (14)

since \nabla\times\nabla applied to a scalar function yields the zero vector. The same holds for \nabla v and \nabla w.

Therefore, taking the divergence of both sides of Equation 13 column by column, we get the following equations

\nabla\cdot\mathcal{A}\nabla u=0, \qquad \nabla\cdot\mathcal{A}\nabla v=0, \qquad \nabla\cdot\mathcal{A}\nabla w=0 (15)

Furthermore, we refer to \nabla\cdot\mathcal{A}\nabla as the 3D generalized Laplacian operator.
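A small sketch of how the coefficient matrix \mathcal{A} of Equation 12 could be assembled from the dilation matrix P is given below; it simply eigendecomposes P and rescales the eigenvalues, and the function name is illustrative.

import numpy as np

def coefficient_matrix_A(P):
    # Sketch: build the matrix A of Equation (12) from the symmetric positive
    # definite dilation matrix P = W diag(a, b, c) W^{-1}.
    evals, W = np.linalg.eigh(P)           # eigenvalues of P are a, b, c
    c, b, a = evals                        # eigh returns them in ascending order
    W = W[:, ::-1]                         # reorder columns to match (a, b, c)
    D = np.diag([b * c / a, a * c / b, a * b / c])
    return W @ D @ W.T

# One may then verify numerically that A @ Jf.T equals the adjugate of Jf
# (Equation 11) for a Jacobian Jf whose polar factor is P.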

5.3 Discretisation and implementation

In the 2D case, we typically triangulate the domain to obtain finite elements and then define a piecewise linear function on the triangulated domain to approximate a smooth mapping. In the 3D case, we instead define the piecewise linear function on tetrahedrons serving as the elements.

Consider a 3D simply connected domain \Omega_{1} discretized into a tetrahedral mesh containing vertices \mathcal{V}=\{p_{1},p_{2},\ldots,p_{n}\} and tetrahedrons represented by the set of index sets \mathcal{F}=\{(i_{1},i_{2},i_{3},i_{4})\mid i_{1},i_{2},i_{3},i_{4}\in\mathbb{N}\cap[1,n],\text{ and }i_{1},i_{2},i_{3},i_{4}\text{ are distinct}\}. Also, the order of the vertices in a tetrahedron T=(i_{1},i_{2},i_{3},i_{4}) should satisfy the following condition: the determinant of the matrix consisting of the three vectors (p_{i_{2}}-p_{i_{1}}), (p_{i_{3}}-p_{i_{1}}), and (p_{i_{4}}-p_{i_{1}}) must be positive, or equivalently, [(p_{i_{2}}-p_{i_{1}})\times(p_{i_{3}}-p_{i_{1}})]\cdot(p_{i_{4}}-p_{i_{1}})>0.

In this setting, the mapping f:\Omega_{1}\rightarrow\Omega_{2} is piecewise linear.

To compute the 3D quasiconformal representation \vec{q} for f, we need to compute \vec{q} at each point in \Omega_{1}. Since f is piecewise linear, the Jacobian matrices \textbf{J}_{f} at all points within the same tetrahedron are equal, resulting in the same polar decomposition and 3D quasiconformal representation. Therefore, we simply need to compute the Jacobian matrix on each tetrahedron T, denoted as \textbf{J}_{f}(T), and use it to obtain \vec{q}. The linear function defined on each tetrahedron T can be written as

f|_{T}(x,y,z)=\textbf{J}_{f}(T)\cdot\begin{pmatrix}x\\ y\\ z\end{pmatrix}+\begin{pmatrix}\vartheta_{T}\\ \iota_{T}\\ \varrho_{T}\end{pmatrix} (16)

We can then compute the polar decomposition of \textbf{J}_{f}(T) using Equations 4, which subsequently provides the 3D quasiconformal representation \vec{q} for T.

To reconstruct the mapping f, we need the matrix \mathcal{A}, which contains the 3D representation information. Once given the representation \vec{q} at a point p, \mathcal{A} at p can be computed according to Equation 12. As every point in T shares the same dilation, \mathcal{A} is a piecewise-constant matrix-valued function, taking a constant matrix value within each T, which we denote as \mathcal{A}_{T}.

To construct the discrete version of Equations 15, we should first derive the discrete gradient operator. For a certain tetrahedron T=(i1,i2,i3,i4)T=(i_{1},i_{2},i_{3},i_{4}), let pI=(xI,yI,zI),sI=f(pI)=(uI,vI,wI)p_{I}=(x_{I},y_{I},z_{I}),s_{I}=f(p_{I})=(u_{I},v_{I},w_{I}), where I{i1,i2,i3,i4}I\in\{i_{1},i_{2},i_{3},i_{4}\} . Then we have the equation

Jf(T)X=Y\textbf{J}_{f}(T)X=Y (17)

where

X=(|||pi2pi1pi3pi1pi4pi1|||) and Y=(|||si2si1si3si1si4si1|||)X=\begin{pmatrix}|&|&|\\ p_{i_{2}}-p_{i_{1}}&p_{i_{3}}-p_{i_{1}}&p_{i_{4}}-p_{i_{1}}\\ |&|&|\end{pmatrix}\text{ and }Y=\begin{pmatrix}|&|&|\\ s_{i_{2}}-s_{i_{1}}&s_{i_{3}}-s_{i_{1}}&s_{i_{4}}-s_{i_{1}}\\ |&|&|\end{pmatrix} (18)

Then, multiplying both sides by X^{-1} on the right, we have

\textbf{J}_{f}(T)=YX^{-1} (19)

For simplicity, we index the entries of X^{-1} as \chi_{jk}, where j and k are the row and column indices of the entry.

We define

A_{T}^{i_{1}}=-(\chi_{11}+\chi_{21}+\chi_{31}),\ A_{T}^{i_{2}}=\chi_{11},\ A_{T}^{i_{3}}=\chi_{21},\ A_{T}^{i_{4}}=\chi_{31}
B_{T}^{i_{1}}=-(\chi_{12}+\chi_{22}+\chi_{32}),\ B_{T}^{i_{2}}=\chi_{12},\ B_{T}^{i_{3}}=\chi_{22},\ B_{T}^{i_{4}}=\chi_{32}
C_{T}^{i_{1}}=-(\chi_{13}+\chi_{23}+\chi_{33}),\ C_{T}^{i_{2}}=\chi_{13},\ C_{T}^{i_{3}}=\chi_{23},\ C_{T}^{i_{4}}=\chi_{33}
\partial_{x}=(A_{T}^{i_{1}},A_{T}^{i_{2}},A_{T}^{i_{3}},A_{T}^{i_{4}})^{T},\quad \partial_{y}=(B_{T}^{i_{1}},B_{T}^{i_{2}},B_{T}^{i_{3}},B_{T}^{i_{4}})^{T},\quad \partial_{z}=(C_{T}^{i_{1}},C_{T}^{i_{2}},C_{T}^{i_{3}},C_{T}^{i_{4}})^{T}
u_{T}=(u_{i_{1}},u_{i_{2}},u_{i_{3}},u_{i_{4}})^{T},\quad v_{T}=(v_{i_{1}},v_{i_{2}},v_{i_{3}},v_{i_{4}})^{T},\quad w_{T}=(w_{i_{1}},w_{i_{2}},w_{i_{3}},w_{i_{4}})^{T} (20)

Then we have the following result, from which we see that \partial_{x}, \partial_{y}, and \partial_{z} compose the discrete gradient operator \nabla_{T}.

\textbf{J}_{f}(T)=\begin{pmatrix}\partial_{x}\cdot u_{T}&\partial_{y}\cdot u_{T}&\partial_{z}\cdot u_{T}\\ \partial_{x}\cdot v_{T}&\partial_{y}\cdot v_{T}&\partial_{z}\cdot v_{T}\\ \partial_{x}\cdot w_{T}&\partial_{y}\cdot w_{T}&\partial_{z}\cdot w_{T}\end{pmatrix} (21)
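The per-tetrahedron coefficients of Equation 20 and the Jacobian of Equation 21 can be computed directly from the vertex positions; the following NumPy sketch (with an illustrative affine test map) shows one possible implementation.

import numpy as np

def tetra_gradient_coefficients(p1, p2, p3, p4):
    # Sketch: the coefficients (A_T^i, B_T^i, C_T^i) of Equation (20) for one
    # positively oriented tetrahedron; each row of the result corresponds to a
    # vertex, each column to a coordinate direction (x, y, z).
    X = np.column_stack([p2 - p1, p3 - p1, p4 - p1])
    Xinv = np.linalg.inv(X)                 # the chi_{jk} entries
    coeff = np.zeros((4, 3))
    coeff[1:, :] = Xinv                     # rows for i2, i3, i4
    coeff[0, :] = -Xinv.sum(axis=0)         # row for i1
    return coeff

# Quick check: recover the Jacobian of an affine map f(p) = M p + t.
M = np.array([[1.2, 0.1, 0.0],
              [0.0, 0.9, 0.2],
              [0.1, 0.0, 1.1]])
pts = [np.array(v, dtype=float) for v in ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1])]
coeff = tetra_gradient_coefficients(*pts)
S = np.stack([M @ p for p in pts])          # images of the four vertices (4 x 3)
Jf = S.T @ coeff                            # Equation (21): rows u, v, w; columns x, y, z
assert np.allclose(Jf, M)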

The remaining part is the divergence operator. We first recall the definition of divergence. In a connected domain \mathcal{W}, the divergence of a vector field \textbf{F} evaluated at a point p\in\mathcal{W} is defined as

div\,\textbf{F}|_{p}=\lim_{|\mathcal{W}|\rightarrow 0}\frac{1}{|\mathcal{W}|}\oint_{\partial\mathcal{W}}\textbf{F}\cdot\hat{n}\,dS=\lim_{|\mathcal{W}|\rightarrow 0}\frac{1}{|\mathcal{W}|}\Phi(\textbf{F},\partial\mathcal{W}) (22)

where |\mathcal{W}| denotes the volume of \mathcal{W}, \partial\mathcal{W} is the boundary surface of \mathcal{W}, \hat{n} is the outward unit normal to \partial\mathcal{W}, and \Phi(\textbf{F},\partial\mathcal{W}) is the flux of \textbf{F} across \partial\mathcal{W}.

In the discrete case, a vertex p_{i} is surrounded by a set of tetrahedrons \mathcal{N}(p_{i}). Considering that our goal is not to develop a discrete divergence operator that gives the precise divergence at every vertex, but to solve Equations 15 for u, v, and w such that the divergence of these functions is approximately zero, we propose to replace the divergence operator with the flux in the discrete setting. The flux of \textbf{F} evaluated at p_{i} can be written as follows

\Phi_{p_{i}}(\textbf{F},\partial\bigl(\cup\mathcal{N}(p_{i})\bigr))=\sum_{T\in\mathcal{N}(p_{i})}Area(T_{\setminus i})\cdot\textbf{F}(T)\cdot\hat{n}_{T_{\setminus i}} (23)

where T_{\setminus i} denotes the triangular face of T opposite to p_{i}, \hat{n}_{T_{\setminus i}} is the outward unit normal to T_{\setminus i}, and \textbf{F}(T) represents the vector field on T. In our setting, \textbf{F} is a piecewise-constant vector field, taking a constant vector value \textbf{F}(T) within each T.

Suppose we shrink each T\in\mathcal{N}(p_{i}) to a tetrahedron \mathcal{T}^{T} whose vertices \{p_{i},p^{\mathcal{T}}_{2},p^{\mathcal{T}}_{3},p^{\mathcal{T}}_{4}\} satisfy p^{\mathcal{T}}_{2}-p_{i}=\frac{1}{n}(p_{i_{2}}-p_{i_{1}}), p^{\mathcal{T}}_{3}-p_{i}=\frac{1}{n}(p_{i_{3}}-p_{i_{1}}), and p^{\mathcal{T}}_{4}-p_{i}=\frac{1}{n}(p_{i_{4}}-p_{i_{1}}). Then the flux

\Phi_{p_{i}}(\textbf{F},\partial\bigl(\cup\{\mathcal{T}^{T}\mid T\in\mathcal{N}(p_{i})\}\bigr))=\sum_{\mathcal{T}^{T},T\in\mathcal{N}(p_{i})}Area(\mathcal{T}^{T}_{\setminus i})\cdot\textbf{F}(\mathcal{T}^{T})\cdot\hat{n}_{\mathcal{T}^{T}_{\setminus i}}=\frac{1}{n^{2}}\sum_{T\in\mathcal{N}(p_{i})}Area(T_{\setminus i})\cdot\textbf{F}(T)\cdot\hat{n}_{T_{\setminus i}}=\frac{1}{n^{2}}\Phi_{p_{i}}(\textbf{F},\partial\bigl(\cup\mathcal{N}(p_{i})\bigr))

since in the discrete setting, \textbf{F}(\mathcal{T}^{T})=\textbf{F}(T), \hat{n}_{\mathcal{T}^{T}_{\setminus i}}=\hat{n}_{T_{\setminus i}}, and Area(\mathcal{T}^{T}_{\setminus i})=\frac{1}{n^{2}}Area(T_{\setminus i}). Therefore, if we require \Phi_{p_{i}}(\textbf{F},\partial\bigl(\cup\mathcal{N}(p_{i})\bigr))=0, then

\lim_{n\rightarrow\infty}\frac{1}{|\mathcal{W}(n)|}\Phi_{p_{i}}(\textbf{F},\partial\bigl(\cup\{\mathcal{T}^{T}\mid T\in\mathcal{N}(p_{i})\}\bigr))=\lim_{n\rightarrow\infty}\frac{1}{|\mathcal{W}(n)|}\frac{1}{n^{2}}\Phi_{p_{i}}(\textbf{F},\partial\bigl(\cup\mathcal{N}(p_{i})\bigr))=\lim_{n\rightarrow\infty}\frac{0}{n^{2}|\mathcal{W}(n)|}=0

where \mathcal{W}(n)=\cup\{\mathcal{T}^{T}\mid T\in\mathcal{N}(p_{i})\}, and therefore |\mathcal{W}(n)|\rightarrow 0 as n\rightarrow\infty. Although the above equation is not strictly equivalent to div\,\textbf{F}|_{p_{i}}=0, it motivates us to approximate div\,\textbf{F}|_{p_{i}}=0 by \Phi_{p_{i}}(\textbf{F},\partial\bigl(\cup\mathcal{N}(p_{i})\bigr))=0.

To solve the problem, we need to rewrite Equation 23 into a computable form. The quantity in the summation of Equation 23 can be evaluated using the property introduced in Equation 3. By comparing X with the matrix M in Equation 3, we notice that the three row vectors of X^{-1} satisfy

(A_{T}^{i_{2}},B_{T}^{i_{2}},C_{T}^{i_{2}})^{T}=\frac{1}{\det(X)}(p_{i_{3}}-p_{i_{1}})\times(p_{i_{4}}-p_{i_{1}})=-\frac{2Area(T_{\setminus i_{2}})}{6Vol(T)}\cdot\hat{n}_{T_{\setminus i_{2}}}
(A_{T}^{i_{3}},B_{T}^{i_{3}},C_{T}^{i_{3}})^{T}=\frac{1}{\det(X)}(p_{i_{4}}-p_{i_{1}})\times(p_{i_{2}}-p_{i_{1}})=-\frac{2Area(T_{\setminus i_{3}})}{6Vol(T)}\cdot\hat{n}_{T_{\setminus i_{3}}}
(A_{T}^{i_{4}},B_{T}^{i_{4}},C_{T}^{i_{4}})^{T}=\frac{1}{\det(X)}(p_{i_{2}}-p_{i_{1}})\times(p_{i_{3}}-p_{i_{1}})=-\frac{2Area(T_{\setminus i_{4}})}{6Vol(T)}\cdot\hat{n}_{T_{\setminus i_{4}}}

where Vol(T) represents the volume of the tetrahedron T, and \det(X)=6Vol(T).

For vertex p_{i_{1}}, the outward normal vector of T_{\setminus i_{1}} is given by

(p_{i_{3}}-p_{i_{2}})\times(p_{i_{4}}-p_{i_{2}})=[(p_{i_{3}}-p_{i_{1}})-(p_{i_{2}}-p_{i_{1}})]\times[(p_{i_{4}}-p_{i_{1}})-(p_{i_{2}}-p_{i_{1}})]
=(p_{i_{3}}-p_{i_{1}})\times(p_{i_{4}}-p_{i_{1}})-(p_{i_{3}}-p_{i_{1}})\times(p_{i_{2}}-p_{i_{1}})-(p_{i_{2}}-p_{i_{1}})\times(p_{i_{4}}-p_{i_{1}})+(p_{i_{2}}-p_{i_{1}})\times(p_{i_{2}}-p_{i_{1}})
=\det(X)[(A_{T}^{i_{2}},B_{T}^{i_{2}},C_{T}^{i_{2}})^{T}+(A_{T}^{i_{3}},B_{T}^{i_{3}},C_{T}^{i_{3}})^{T}+(A_{T}^{i_{4}},B_{T}^{i_{4}},C_{T}^{i_{4}})^{T}]
=\det(X)(A_{T}^{i_{2}}+A_{T}^{i_{3}}+A_{T}^{i_{4}},\ B_{T}^{i_{2}}+B_{T}^{i_{3}}+B_{T}^{i_{4}},\ C_{T}^{i_{2}}+C_{T}^{i_{3}}+C_{T}^{i_{4}})^{T}
=-\det(X)(A_{T}^{i_{1}},B_{T}^{i_{1}},C_{T}^{i_{1}})^{T}

We also know that (p_{i_{3}}-p_{i_{2}})\times(p_{i_{4}}-p_{i_{2}})=2Area(T_{\setminus i_{1}})\cdot\hat{n}_{T_{\setminus i_{1}}}. The geometric meaning of -(A_{T}^{i},B_{T}^{i},C_{T}^{i})^{T} is shown in Figure 2.

Figure 2: Illustration of the relation between p_{i} and -(A_{T}^{i},B_{T}^{i},C_{T}^{i})^{T}. Here we only show the case for p_{i_{1}} as an example. T_{\setminus i_{1}} denotes the triangular face opposite to p_{i_{1}}, and -(A_{T}^{i_{1}},B_{T}^{i_{1}},C_{T}^{i_{1}})^{T} is a multiple of \hat{n}_{T_{\setminus i_{1}}}, the unit normal vector to T_{\setminus i_{1}}.

Together, for the term in the summation of the flux evaluated at each of the four vertices of T, we have

Area(T_{\setminus i})\cdot\textbf{F}(T)\cdot\hat{n}_{T_{\setminus i}}=-3Vol(T)\cdot(A_{T}^{i},B_{T}^{i},C_{T}^{i})^{T}\cdot\textbf{F}(T),\quad i\in T

Then the flux can be rewritten as

\Phi_{p_{i}}(\textbf{F},\partial\bigl(\cup\mathcal{N}(p_{i})\bigr))=-3\sum_{T\in\mathcal{N}(p_{i})}Vol(T)\cdot(A_{T}^{i},B_{T}^{i},C_{T}^{i})^{T}\cdot\textbf{F}(T)

Denoting the three rows of \textbf{J}_{f}(T) as \nabla_{T}u, \nabla_{T}v, and \nabla_{T}w, and substituting \mathcal{A}_{T}\nabla_{T}u, \mathcal{A}_{T}\nabla_{T}v, and \mathcal{A}_{T}\nabla_{T}w for \textbf{F}, we obtain the following equations

\sum_{T\in\mathcal{N}(p_{i})}Vol(T)\cdot(A_{T}^{i},B_{T}^{i},C_{T}^{i})^{T}\cdot\mathcal{A}_{T}\nabla_{T}u=0
\sum_{T\in\mathcal{N}(p_{i})}Vol(T)\cdot(A_{T}^{i},B_{T}^{i},C_{T}^{i})^{T}\cdot\mathcal{A}_{T}\nabla_{T}v=0 (24)
\sum_{T\in\mathcal{N}(p_{i})}Vol(T)\cdot(A_{T}^{i},B_{T}^{i},C_{T}^{i})^{T}\cdot\mathcal{A}_{T}\nabla_{T}w=0

Note that in Equations 24, the components Vol(T), (A_{T}^{i},B_{T}^{i},C_{T}^{i})^{T}, and \nabla_{T} are derived from the source mesh, i.e., from \mathcal{V} and \mathcal{F}, and are not related to the unknowns u, v, and w, while \mathcal{A}_{T} comes from the given 3D quasiconformal representation. Reorganizing Equations 24, we get three linear systems

\mathcal{C}u=0,\quad\mathcal{C}v=0,\quad\mathcal{C}w=0 (25)

To impose boundary conditions, suppose we are given \mathcal{K} boundary vertices for the function u, indexed by i from 1 to \mathcal{K}, with target values \beta_{i}. We first define a vector \vec{v}=\sum_{i=1}^{\mathcal{K}}\beta_{i}\vec{e}_{i}, where \vec{e}_{i} is the “one-hot” vector that has 1 in the i-th entry and zero in the remaining entries. Then a temporary vector \tilde{h}=-\mathcal{C}\vec{v} can be computed, after which we set the i-th entry of \tilde{h} to \beta_{i} correspondingly to get the vector h_{u}. Similarly, for the functions v and w, we can generate h_{v} and h_{w}.

We define an operation called “masking” as follows: for a given boundary vertex p_{i}, we set the i-th row and i-th column of \mathcal{C} to zero, and the i-th diagonal entry to 1.

We perform the masking operation on \mathcal{C} for all the boundary vertices to get the matrices \mathcal{C}_{u}, \mathcal{C}_{v}, and \mathcal{C}_{w}.

Then we have

\mathcal{C}_{u}u=h_{u},\quad\mathcal{C}_{v}v=h_{v},\quad\mathcal{C}_{w}w=h_{w} (26)

and the mapping ff can then be reconstructed.
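To make the assembly and the masking step concrete, the following SciPy sketch builds the matrix \mathcal{C} from per-tetrahedron coefficient matrices \mathcal{A}_{T}, imposes Dirichlet values at boundary vertices as described above, and solves the resulting system with the conjugate gradient method; the function names and tolerances are illustrative, and the constant factor -3 from the flux expression is dropped, which does not affect the solution.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def assemble_C(points, tets, A_blocks):
    # Sketch: assemble the matrix C of Equation (25).  points is an n x 3 array,
    # tets a list of index 4-tuples, and A_blocks[t] the 3 x 3 matrix A_T of
    # Equation (12) for the t-th tetrahedron.
    n = len(points)
    rows, cols, vals = [], [], []
    for t, tet in enumerate(tets):
        idx = list(tet)
        X = np.column_stack([points[idx[k]] - points[idx[0]] for k in (1, 2, 3)])
        vol = np.linalg.det(X) / 6.0
        G = np.zeros((4, 3))                 # rows: (A_T^i, B_T^i, C_T^i)
        G[1:, :] = np.linalg.inv(X)
        G[0, :] = -G[1:, :].sum(axis=0)
        local = vol * G @ A_blocks[t] @ G.T  # local 4 x 4 contribution
        for a in range(4):
            for b in range(4):
                rows.append(idx[a]); cols.append(idx[b]); vals.append(local[a, b])
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

def solve_with_boundary(C, boundary_idx, boundary_val):
    # Sketch of the masking step: impose u(boundary_idx) = boundary_val while
    # keeping the system symmetric, then solve by conjugate gradients.
    n = C.shape[0]
    v = np.zeros(n); v[boundary_idx] = boundary_val
    h = -C @ v
    h[boundary_idx] = boundary_val
    keep = np.ones(n); keep[boundary_idx] = 0.0
    D = sp.diags(keep)
    Cm = D @ C @ D + sp.diags(1.0 - keep)    # zero rows/columns, unit diagonal
    u, info = cg(Cm, h, atol=1e-12)
    return u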

5.4 Analysis

In this section, we show that the linear systems 26 are symmetric positive definite (SPD). Compared with the triangular meshes encountered in the 2D case, tetrahedral meshes possess many more vertices, resulting in large linear systems that are hard to solve. However, if the linear systems are SPD, we can use efficient numerical methods, such as the conjugate gradient method, to solve them efficiently.

We divide this section into two parts, in which we show the symmetry of \mathcal{C} and the positive definiteness of the linear systems.

5.4.1 Symmetry of 𝒞\mathcal{C}

In this section, we would like to show that the matrix \mathcal{C} in Equations 25 is symmetric. To see this, we first turn to the incidence matrix of a tetrahedral mesh.

The incidence matrix E of a tetrahedral mesh is a matrix that describes the connectivity between the vertices and the tetrahedrons in the mesh. Specifically, it captures which vertices are part of which tetrahedrons. In our setting, the incidence matrix is an |\mathcal{F}|\times|\mathcal{V}| matrix, where |\mathcal{F}| is the number of tetrahedrons and |\mathcal{V}| is the number of vertices. Each row corresponds to a tetrahedron, and each column corresponds to a vertex. If a vertex belongs to a tetrahedron, the corresponding entry in the matrix is set to 1; otherwise, it is set to 0.

Suppose we have a discrete function f defined on the vertices of the mesh. If we want to sum up the function values on the vertices of each tetrahedron, we can simply compute Ef, which yields the desired results. If we want to compute a weighted sum, we can simply replace the entries by the weights and then compute Ef.

In our case, we want to construct a matrix that computes the discrete gradient for each tetrahedron. Recall that we compute the gradient of u_{T} over x by taking the dot product \partial_{x}\cdot u_{T}. In the implementation, u is a |\mathcal{V}|-dimensional vector. To compute \partial_{x}\cdot u_{T} for all tetrahedrons, we first construct the incidence matrix E for the mesh, and then replace the values in E by the corresponding A_{T}^{i} to get the matrix E_{x}. Then we can compute the gradient over x on all tetrahedrons by simply computing E_{x}u. Similarly, we can construct E_{y} and E_{z} with B_{T}^{i} and C_{T}^{i} to compute the gradients of u_{T} over y and z.

Now we can construct a 3|\mathcal{F}|\times|\mathcal{V}| matrix whose (3i+1)-th row (for i from 0 to |\mathcal{F}|-1) is the i-th row of E_{x}, whose (3i+2)-th row is the i-th row of E_{y}, and whose (3i+3)-th row is the i-th row of E_{z}. We denote this matrix as \nabla, as it serves as the discrete gradient operator. \nabla can be regarded as an |\mathcal{F}|\times 1 block matrix, where each block computes the gradient vector for the corresponding tetrahedron T\in\mathcal{F}. Also, each block has only 4 nonzero columns, indexed by i\in T, whose values are (A_{T}^{i},B_{T}^{i},C_{T}^{i})^{T}, i\in T.

Then we can consider the matrix E^{T}, which is a |\mathcal{V}|\times|\mathcal{F}| matrix. For any discrete function g defined on the tetrahedrons of the mesh, E^{T}g sums up the values on the tetrahedrons T\in\mathcal{N}(p) surrounding each vertex p. Similarly, a weighted sum can be computed by first replacing the values in E^{T} by the weights.

In our case, \nabla^{T} has a similar structure to E^{T}. Suppose we have a discrete vector field defined on the tetrahedrons of the mesh; we can stack the vectors into a 3|\mathcal{F}|-dimensional vector \mathbf{F} according to the index of the tetrahedrons in \mathcal{F}. Then, computing \nabla^{T}\mathbf{F} is equivalent to computing \sum_{T\in\mathcal{N}(p_{i})}(A_{T}^{i},B_{T}^{i},C_{T}^{i})^{T}\cdot\textbf{F}(T) at each vertex p_{i}.

Recall that our formula has a Vol(T) term. We can further define a 3|\mathcal{F}|\times 3|\mathcal{F}| block diagonal matrix G, each 3\times 3 diagonal block of which is Vol(T)\cdot I. Obviously, G is SPD.

The last component is a 3|\mathcal{F}|\times 3|\mathcal{F}| block diagonal matrix \mathcal{A}, each diagonal block of which is \mathcal{A}_{T}. Since the 3D representation assigned to each tetrahedron corresponds to an SPD dilation matrix with positive eigenvalues a, b, and c, the eigenvalues \frac{bc}{a}, \frac{ac}{b}, and \frac{ab}{c} of \mathcal{A}_{T} are all positive, so \mathcal{A}_{T} is also SPD. Therefore, \mathcal{A} is SPD.

Together, we have \mathcal{C}=\nabla^{T}G\mathcal{A}\nabla. As each block Vol(T)\cdot\mathcal{A}_{T} is SPD, and Vol(T)\cdot I and \mathcal{A}_{T} are the diagonal blocks of G and \mathcal{A} respectively, the product G\mathcal{A} is SPD as well, and as a result, \mathcal{C} is a symmetric matrix. We denote G\mathcal{A} by \mathcal{A}^{G} in the following section for simplicity. Also, since \mathcal{A}^{G} is SPD, there exists an invertible matrix \mathcal{H} such that \mathcal{A}^{G}=\mathcal{H}^{T}\mathcal{H}.
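The factorization \mathcal{C}=\nabla^{T}G\mathcal{A}\nabla can also be formed explicitly with sparse matrices, which makes the symmetry argument easy to check numerically; the sketch below is one possible construction, with illustrative function and variable names.

import numpy as np
import scipy.sparse as sp

def build_factors(points, tets, A_blocks):
    # Sketch: the factors of C = grad^T G A grad used in Section 5.4.1, where
    # grad is 3|F| x |V| and G, A are 3|F| x 3|F| block diagonal matrices.
    n, m = len(points), len(tets)
    rows, cols, vals = [], [], []
    G_diag = np.zeros(3 * m)
    A_list = []
    for t, tet in enumerate(tets):
        idx = list(tet)
        X = np.column_stack([points[idx[k]] - points[idx[0]] for k in (1, 2, 3)])
        vol = np.linalg.det(X) / 6.0
        W = np.zeros((3, 4))                 # columns: (A_T^i, B_T^i, C_T^i)^T
        W[:, 1:] = np.linalg.inv(X).T
        W[:, 0] = -W[:, 1:].sum(axis=1)
        for a in range(4):
            for r in range(3):
                rows.append(3 * t + r); cols.append(idx[a]); vals.append(W[r, a])
        G_diag[3 * t:3 * t + 3] = vol
        A_list.append(A_blocks[t])
    grad = sp.csr_matrix((vals, (rows, cols)), shape=(3 * m, n))
    G = sp.diags(G_diag)
    A = sp.block_diag(A_list, format="csr")
    return grad, G, A

# C = grad.T @ G @ A @ grad is then symmetric whenever every A_T is symmetric,
# and it agrees with the vertex-wise sums in Equations (24).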

5.4.2 Positive definiteness

In this section, we would like to show that \mathcal{C}_{u}, \mathcal{C}_{v}, and \mathcal{C}_{w} in the linear systems 26 are SPD. The proof of symmetry is trivial, since the masking operation does not change symmetry, but the proof of positive definiteness is not, so we first prove some lemmas and propositions that facilitate it. For succinctness, in this section we denote the number of vertices by N (i.e., N=|\mathcal{V}|).

Lemma 1.

rank(\mathcal{C})=N-1, and the vector \vec{1}=(1,1,\ldots,1)^{T} is the only basis vector of the null space of \mathcal{C}.

Proof.

Matrix \nabla computes the gradient of a discrete function α\vec{\alpha} on tetrahedrons. If α=0\nabla\vec{\alpha}=0 for some vector α\vec{\alpha}, then by definition, the gradient of α\vec{\alpha} on every tetrahedron is zero, meaning that α=γ1,1=(1,1,,1)T\vec{\alpha}=\gamma\vec{1},\vec{1}=(1,1,\ldots,1)^{T} for some constant γ\gamma on each tetrahedron. Therefore, 1\vec{1} is the only basis vector in the null space of \nabla. This shows that 1\vec{1} is in the null space of 𝒞\mathcal{C}.

Then we aim to prove the uniqueness. If there exists a nonzero vector ζ\vec{\zeta} such that TTζ=0\nabla^{T}\mathcal{H}^{T}\vec{\zeta}=0, then ζ\vec{\zeta} is orthogonal to the row vectors of TT\nabla^{T}\mathcal{H}^{T}, or equivalently, ζC()\vec{\zeta}\notin\textbf{C}(\mathcal{H}\nabla), where C denotes the columns space. However, for any nonzero vector α\vec{\alpha} satisfying αγ1\vec{\alpha}\neq\gamma\vec{1} for any constant γ\gamma, we have nonzero vector δ=α\vec{\delta}=\mathcal{H}\nabla\vec{\alpha}, δC()\vec{\delta}\in\textbf{C}(\mathcal{H}\nabla), indicating that TTδ0\nabla^{T}\mathcal{H}^{T}\vec{\delta}\neq 0. In other words, for any nonzero vector αγ1\vec{\alpha}\neq\gamma\vec{1}, there does not exist a nonzero vector δ=α\vec{\delta}=\mathcal{H}\nabla\vec{\alpha} such that TTδ=0\nabla^{T}\mathcal{H}^{T}\vec{\delta}=0. Therefore, the null space of 𝒞\mathcal{C} is only span by 1\vec{1}.

Hence, rank(𝒞)=N1\text{rank}(\mathcal{C})=N-1. ∎

Lemma 2.

Any column (or row) vector of \mathcal{C} can be expressed as a linear combination of the remaining N-1 column (or row) vectors.

Proof.

Matrix 𝒞\mathcal{C} is symmetric and therefore diagonalizable. Let 𝒞=QΛQT\mathcal{C}=Q\Lambda Q^{T}, where QQ is an orthogonal matrix containing the eigenvectors as columns, and Λ\Lambda is a diagonal matrix with the eigenvalues. By Lemma 1, rank(𝒞)=N1\text{rank}(\mathcal{C})=N-1, indicating that one of its eigenvalues is zero, and the rest are non-zero. Without loss of generality, let the last diagonal entry in Λ\Lambda be 0, and the last column of QQ be the vector 11=1N1\frac{\vec{1}}{\|\vec{1}\|}=\frac{1}{\sqrt{N}}\vec{1}. In this setting, the last row of QTQ^{T} is equal to 1N1T\frac{1}{\sqrt{N}}\vec{1}^{T}.

Each column of 𝒞\mathcal{C} can be expressed as a linear combination of the columns in QΛQ\Lambda, or equivalently, the first N1N-1 columns in QΛQ\Lambda, as the last eigenvalue is 0. The linear combination coefficients are stored in QTQ^{T}.

Let Q~\tilde{Q} be the first N1N-1 columns of QQ, then Q~T\tilde{Q}^{T} is the first N1N-1 rows of QTQ^{T}. Let ε\varepsilon be the entry in the last row of QTQ^{T}, then ε=1N\varepsilon=\frac{1}{\sqrt{N}}. Denote the column vectors in Q~T\tilde{Q}^{T} as γi\vec{\gamma}_{i} for i=1,2,,Ni=1,2,\ldots,N, then since QTQ^{T} is an orthogonal matrix, the column vectors are perpendicular to each other, and the length of each vector is 11, we have:

γiγi\displaystyle\vec{\gamma}_{i}\cdot\vec{\gamma}_{i} =1ε2=N1N=1N(1N)\displaystyle=1-\varepsilon^{2}=\frac{N-1}{N}=-\frac{1}{N}(1-N)
γiγj\displaystyle\vec{\gamma}_{i}\cdot\vec{\gamma}_{j} =0ε2=1Nforij\displaystyle=0-\varepsilon^{2}=-\frac{1}{N}\quad\text{for}\quad i\neq j

Randomly pick N1N-1 column vectors from Q~T\tilde{Q}^{T} to form the matrix Γ\Gamma. Then we have:

Ψ=ΓTΓ=1N(1N1111N1111N)=1N(𝒥NI)\Psi=\Gamma^{T}\Gamma=-\frac{1}{N}\begin{pmatrix}1-N&1&\ldots&1\\ 1&1-N&\ldots&1\\ \vdots&\vdots&\vdots&\vdots\\ 1&1&\ldots&1-N\end{pmatrix}=-\frac{1}{N}(\mathcal{J}-NI)

where II is an (N1)×(N1)(N-1)\times(N-1) identity matrix, and 𝒥\mathcal{J} is a matrix full of 1s.

Let Ψ~=𝒥NI\tilde{\Psi}=\mathcal{J}-NI. Suppose λ\lambda is an eigenvalue of Ψ~\tilde{\Psi}, then det(Ψ~λI)=0\det(\tilde{\Psi}-\lambda I)=0, or det(𝒥(N+λ)I)=0\det(\mathcal{J}-(N+\lambda)I)=0. We know that the eigenvalue of 𝒥\mathcal{J} is N1N-1 on the eigenvector 1\vec{1}, and 0 on the other eigenvectors. Therefore, the eigenvalues of 𝒥(N+λ)I\mathcal{J}-(N+\lambda)I are 1λ-1-\lambda on 1\vec{1}, and λ+N\lambda+N on the other eigenvectors. Hence:

det(Ψ~λI)=(1λ)(λ+N)N2=0\det(\tilde{\Psi}-\lambda I)=(-1-\lambda)\cdot(\lambda+N)^{N-2}=0

Thus, λ=1\lambda=-1 with multiplicity 11, or λ=N\lambda=-N with multiplicity N2N-2, and:

det(Ψ~)=(N)N2\det(\tilde{\Psi})=-(-N)^{N-2}

which implies:

det(Ψ)=det(1NΨ~)=(1N)N1((N)N2)=1N>0\det(\Psi)=\det(-\frac{1}{N}\tilde{\Psi})=\left(-\frac{1}{N}\right)^{N-1}\cdot(-(-N)^{N-2})=\frac{1}{N}>0

which indicates Ψ\Psi is full rank, and Γ\Gamma is also full rank. Therefore, any N1N-1 column vectors in Q~T\tilde{Q}^{T} form a basis in the (N1)(N-1)-dimensional space, and the remaining vector can be expressed as a linear combination of the basis.

Let QΛQ_{\Lambda} be the first N1N-1 columns of QΛQ\Lambda and let Q~iT=(γ1,γ2,,γi1,γi+1,,γN)\tilde{Q}_{\setminus i}^{T}=(\vec{\gamma}_{1},\vec{\gamma}_{2},\ldots,\vec{\gamma}_{i-1},\vec{\gamma}_{i+1},\ldots,\vec{\gamma}_{N}). Then by the above proof, we know that there exists an N1N-1 dimensional nonzero vector α\vec{\alpha} such that γi=Q~iTα\vec{\gamma}_{i}=\tilde{Q}_{\setminus i}^{T}\vec{\alpha}. Then we have QΛγi=QΛQ~iTαQ_{\Lambda}\vec{\gamma}_{i}=Q_{\Lambda}\tilde{Q}_{\setminus i}^{T}\vec{\alpha}.

In the left-hand side, QΛγiQ_{\Lambda}\vec{\gamma}_{i} is the ii-th column of 𝒞\mathcal{C}, while in the right-hand side, QΛQ~iTQ_{\Lambda}\tilde{Q}_{\setminus i}^{T} is the matrix containing the remaining N1N-1 column vectors of 𝒞\mathcal{C}. The equation QΛγi=QΛQ~iTαQ_{\Lambda}\vec{\gamma}_{i}=Q_{\Lambda}\tilde{Q}_{\setminus i}^{T}\vec{\alpha} means that the ii-th column vector of 𝒞\mathcal{C} can be expressed as a linear combination of the remaining N1N-1 column vectors of 𝒞\mathcal{C}, and the coefficients are stored in α\vec{\alpha}.

As the index ii can be randomly chosen, we prove the statement for column vectors. Since the matrix is symmetric, this property can be extended to the row vectors as well. ∎

We can set the i-th row and i-th column of \mathcal{C} to zero. With Lemma 2, we know that the rank of this matrix is still N-1.

Also, we know that the vector \vec{e}_{i} lies in the null space of this matrix. By setting the i-th diagonal entry to 1, we get a full-rank matrix, which we denote as \mathcal{M}.

Proposition 1.

\mathcal{M} is an SPD matrix.

Proof.

Setting the ii-th column in \nabla to zero, we get a new matrix denoted as ~\tilde{\nabla}. By the above definition, we have

=~T𝒜G~+\mathcal{M}=\tilde{\nabla}^{T}\mathcal{A}^{G}\tilde{\nabla}+\mathcal{E}

where \mathcal{E} is an N×NN\times N matrix whose ii-th diagonal entry is 11, and all other entries are zero. The matrix ~T𝒜G~\tilde{\nabla}^{T}\mathcal{A}^{G}\tilde{\nabla} is the same matrix obtained by setting ii-th row and ii-th column of 𝒞\mathcal{C} to 0, the rank of which is N1N-1. Then we decompose any nonzero vector v\vec{v} into vi\vec{v}_{\setminus i} and ηei\eta\vec{e}_{i}, where η\eta is the value in the ii-th entry of v\vec{v}, and vi\vec{v}_{\setminus i} denotes the vector with the ii-th component removed.

Then, we have

vTv=(vi+ηei)T(~T𝒜G~+)(vi+ηei)=viT~T𝒜G~vi+η2eiTei=~vi22+η2\vec{v}^{T}\mathcal{M}\vec{v}=(\vec{v}_{\setminus i}+\eta\vec{e}_{i})^{T}(\tilde{\nabla}^{T}\mathcal{A}^{G}\tilde{\nabla}+\mathcal{E})(\vec{v}_{\setminus i}+\eta\vec{e}_{i})=\vec{v}_{\setminus i}^{T}\tilde{\nabla}^{T}\mathcal{A}^{G}\tilde{\nabla}\vec{v}_{\setminus i}+\eta^{2}\vec{e}_{i}^{T}\mathcal{E}\vec{e}_{i}=\|\mathcal{H}\tilde{\nabla}\vec{v}_{\setminus i}\|_{2}^{2}+\eta^{2}

Since rank(~T𝒜G~)=N1rank(\tilde{\nabla}^{T}\mathcal{A}^{G}\tilde{\nabla})=N-1 and 𝒜G\mathcal{A}^{G} is full rank, rank(~)=N1rank(\tilde{\nabla})=N-1. Together with the definition of ~\tilde{\nabla}, we know that its null space only includes ei\vec{e}_{i} and its row space contains the remaining components. Since vi\vec{v}_{\setminus i} contains the complementary components of ei\vec{e}_{i}, ~vi\tilde{\nabla}\vec{v}_{\setminus i} is nonzero if vi\vec{v}_{\setminus i} is nonzero. Otherwise, if vi\vec{v}_{\setminus i} is a zero vector, then η\eta is nonzero because v\vec{v} is nonzero. Therefore, the two terms ~vi22\|\mathcal{H}\tilde{\nabla}\vec{v}_{\setminus i}\|_{2}^{2} and η2\eta^{2} cannot be zero at the same time. Then vTv>0\vec{v}^{T}\mathcal{M}\vec{v}>0, which proves the statement. ∎

For any SPD matrix \mathcal{M}, if we set the i-th row and i-th column of \mathcal{M} to zero and set the i-th diagonal entry to 1, we get a new matrix \mathcal{P}.

Proposition 2.

𝒫\mathcal{P} is also SPD.

Proof.

Since \mathcal{M} is an SPD matrix, it can be decomposed to QΛQTQ\Lambda Q^{T} where Λ\Lambda contains the eigenvalues, which are all positive.

Then, by setting the ii-th row of QQ to 0, we get another matrix Q~\tilde{Q}. Then we have

𝒫=Q~ΛQ~T+\mathcal{P}=\tilde{Q}\Lambda\tilde{Q}^{T}+\mathcal{E}

where \mathcal{E} is an N×NN\times N matrix whose ii-th diagonal entry is 11, and all other entries are zero.

Then we decompose any nonzero vector v\vec{v} into vi\vec{v}_{\setminus i} and ηei\eta\vec{e}_{i}, where η\eta is the value in the ii-th entry of v\vec{v}, and vi\vec{v}_{\setminus i} denotes the vector with the ii-th component removed.

Then we have,

vT𝒫v=(vi+ηei)T(Q~ΛQ~T+)(vi+ηei)=viTQ~ΛQ~Tvi+η2eiTei=ΣQ~Tvi22+η2\vec{v}^{T}\mathcal{P}\vec{v}=(\vec{v}_{\setminus i}+\eta\vec{e}_{i})^{T}(\tilde{Q}\Lambda\tilde{Q}^{T}+\mathcal{E})(\vec{v}_{\setminus i}+\eta\vec{e}_{i})=\vec{v}_{\setminus i}^{T}\tilde{Q}\Lambda\tilde{Q}^{T}\vec{v}_{\setminus i}+\eta^{2}\vec{e}_{i}^{T}\mathcal{E}\vec{e}_{i}=\|\Sigma\tilde{Q}^{T}\vec{v}_{\setminus i}\|_{2}^{2}+\eta^{2}

where Σ=Λ\Sigma=\sqrt{\Lambda}. Since rank(Q)=Nrank(Q)=N, rank(Q~)=N1rank(\tilde{Q})=N-1. Apparently, we notice that the vector ei\vec{e}_{i} is the only basis in the null space of Q~T\tilde{Q}^{T}. Since vi\vec{v}_{\setminus i} contains the complementary components of ei\vec{e}_{i}, Q~Tvi\tilde{Q}^{T}\vec{v}_{\setminus i} is nonzero if vi\vec{v}_{\setminus i} is nonzero. Otherwise, if vi\vec{v}_{\setminus i} is a zero vector, then η\eta is nonzero because v\vec{v} is nonzero. Therefore, the two terms ΣQ~Tvi22\|\Sigma\tilde{Q}^{T}\vec{v}_{\setminus i}\|_{2}^{2} and η2\eta^{2} cannot be zero at the same time. Then vT𝒫v>0\vec{v}^{T}\mathcal{P}\vec{v}>0, which proves the statement. ∎

With the above Lemmas and Propositions, we can now show that

Theorem 1.

\mathcal{C}_{u}, \mathcal{C}_{v}, and \mathcal{C}_{w} in the linear systems 26 are SPD.

Proof.

In Lemma 1, we proved that rank(\mathcal{C})=N-1.

For the first masking operation, by Lemma 2 and Proposition 1, we know that the resulting matrix is SPD.

For the remaining masking operations, by Proposition 2, we know that the resulting matrices are all SPD. Therefore, \mathcal{C}_{u}, \mathcal{C}_{v}, and \mathcal{C}_{w} in the linear systems 26 are SPD. ∎

5.5 Pseudo-codes

This section summarizes the algorithms discussed in the previous sections by presenting pseudocode for each method, as shown in Algorithms 1 and 2.

Algorithm 1 3D quasiconformal representation computation for a 3D bijective map
Input: \mathcal{V}=\{p_{1},p_{2},\ldots,p_{n}\}; f(\mathcal{V})=\{s_{1},s_{2},\ldots,s_{n}\}; \mathcal{F}=\{(i_{1},i_{2},i_{3},i_{4})\mid i_{1},i_{2},i_{3},i_{4}\in\mathbb{N}\cap[1,n],\text{ and }i_{1},i_{2},i_{3},i_{4}\text{ are distinct}\}
Output: 3D quasiconformal representation \vec{q}(T) for T\in\mathcal{F}
for T\in\mathcal{F} :
     Compute the Jacobian matrix \textbf{J}_{f}(T) from \mathcal{V}, f(\mathcal{V}), and T
     P\leftarrow\sqrt{\textbf{J}_{f}(T)^{T}\textbf{J}_{f}(T)}
     \vec{q}(T)\leftarrow upper triangular part of P
Algorithm 2 Mapping reconstruction from 3D quasiconformal representation
𝒱={p1,p2,,pn}\mathcal{V}=\{p_{1},p_{2},\ldots,p_{n}\}; f(𝒱)={s1,s2,,sn}f(\mathcal{V})=\{s_{1},s_{2},\ldots,s_{n}\}; ={(i1,i2,i3,i4)i1,i2,i3,i4[1,n], and i1,i2,i3,i4 are distinct}\mathcal{F}=\{(i_{1},i_{2},i_{3},i_{4})\mid i_{1},i_{2},i_{3},i_{4}\in\mathbb{N}\cap[1,n],\text{ and }i_{1},i_{2},i_{3},i_{4}\text{ are distinct}\}; q(T)\vec{q}(T) for TT\in\mathcal{F}.
f(𝒱)f({\mathcal{V}})
for TT\in\mathcal{F} :
     Reorganize q(T)\vec{q}(T) to get the matrix PP
     Compute eigenvalue decomposition of PP to get a,b,ca,b,c and WW
     Compute 𝒜T\mathcal{A}_{T} by Equation 12 with a,b,ca,b,c and WW
     Construct XX and compute X1X^{-1}, from which {ATi,BTi,CTiiT}\{A_{T}^{i},B_{T}^{i},C_{T}^{i}\mid i\in T\} can be determined
Build the |𝒱|×|𝒱||\mathcal{V}|\times|\mathcal{V}| linear systems for Φpi(𝒜TTf,(𝒩(pi)))=0\Phi_{p_{i}}(\mathcal{A}_{T}\nabla_{T}\textbf{f},\partial\bigl{(}\cup\mathcal{N}(p_{i})\bigr{)})=0 for f{u,v,w}\textbf{f}\in\{u,v,w\}
Solve the systems to obtain f(𝒱)f({\mathcal{V}}) using the conjugate gradient method
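To make the per-tetrahedron step of Algorithm 1 concrete, the following is a minimal NumPy/SciPy sketch; it assumes the 3×3 Jacobian of the piecewise-linear map on a tetrahedron has already been assembled, and the function name and the ordering of the six entries are our own choices.

```python
import numpy as np
from scipy.linalg import sqrtm

def qc_representation(J):
    """3D quasiconformal representation of one tetrahedron.

    J : (3, 3) Jacobian of the mapping restricted to the tetrahedron.
    Returns the 6 entries of the upper triangular part of P = sqrt(J^T J).
    """
    P = sqrtm(J.T @ J).real              # symmetric positive-definite square root
    return P[np.triu_indices(3)]         # vector q(T) in R^6

# Hypothetical usage on a single tetrahedron:
J = np.array([[1.1, 0.2, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
q = qc_representation(J)
```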

5.6 Experiments

This section presents experimental results demonstrating the effectiveness of the proposed 3D quasiconformal representation and reconstruction algorithm on 3D tetrahedral meshes. For each experiment, we first show the source and target meshes, which have the same number of vertices and the same connectivity and therefore define a mapping. We then show the mapping reconstructed from the 3D quasiconformal representation of the given mapping. Since the proposed representation is intended to contain sufficient information about its corresponding mapping, we expect the reconstructed mapping to be close to the original one, i.e., the reconstruction error, measured by the l2l^{2} norm between the original and reconstructed mappings, should be small.

In the following experiments, we also impose the same boundary condition:

f(p𝒮)𝒮,𝒮BoundarySetf(p\in\mathcal{S})\in\mathcal{S},\quad\mathcal{S}\in BoundarySet

where BoundarySet={{(i,y,z)},{(x,i,z)},{(x,y,i)}i{0,1},x,y,z}BoundarySet=\bigl{\{}\{(i,y,z)\},\{(x,i,z)\},\{(x,y,i)\}\mid i\in\{0,1\},x,y,z\in\mathbb{R}\bigr{\}}, i.e., each face of the unit cube is mapped into itself.

Figures 3, 4, and 5 show the reconstruction results for three bijective mappings; the reconstruction errors are 6.5147×10116.5147\times 10^{-11}, 2.4309×10102.4309\times 10^{-10}, and 1.2845×1091.2845\times 10^{-9}, respectively. These small errors demonstrate that our approach reconstructs bijective mappings precisely from their 3D quasiconformal representations.

(a) The source mesh. (b) The target mesh. (c) The reconstructed mesh.
Figure 3: Mesh reconstruction result of mapping without folding. The reconstruction error is 6.5147×10116.5147\times 10^{-11}.
(a) The source mesh. (b) The target mesh. (c) The reconstructed mesh.
Figure 4: Mesh reconstruction result of mapping without folding. The reconstruction error is 2.4309×10102.4309\times 10^{-10}.
(a) The source mesh. (b) The target mesh. (c) The reconstructed mesh.
Figure 5: Mesh reconstruction result of mapping without folding. The reconstruction error is 1.2845×1091.2845\times 10^{-9}.

6 Applications

The above representation can be applied to 3D animation interpolation and mapping compression. In the following, we will introduce these two applications and show their experimental results.

6.1 3D keyframe interpolation

In the film and animation industry, animation is generally created by first designing two keyframes and then interpolating between them to get the intermediate frames.

In this section, we provide a novel keyframe interpolation method based on the proposed 3D representation, which takes the geometric information of the mapping into account.

Given two keyframes K1K_{1} and K2K_{2}, which are two triangular meshes with the same number of vertices and triangulation, our first step is to embed them into unit cubic domains Ω1\Omega_{1} and Ω2\Omega_{2} respectively.

Then we fill Ω1\Omega_{1} with vertices. This step is necessary to ensure that the resulting mapping maps K1K_{1} to K2K_{2} without folding: if the vertices are too sparse, there may not be enough freedom for them to move while keeping the mapping diffeomorphic.

However, adding too many vertices to the domain should also be avoided, as it significantly increases the dimension of the linear system and slows down the solving process. Therefore, we first use the Poisson disk sampling method [40] to sample points evenly in the domain, and then assign each sampled vertex a threshold, in other words a keep probability, based on its minimal distance to the vertices of the triangular mesh. The threshold is generated by the following formula

Prob(p)=min(1,d(p,K1)σ)eκd(p,K1)Prob(p)=min(1,\frac{d(p,K_{1})}{\sigma})\cdot e^{-\kappa\cdot d(p,K_{1})}

where d(p,K1)d(p,K_{1}) denotes the minimal distance between the vertex pp and the vertices in K1K_{1}, σ\sigma is the radius parameter of the Poisson disk sampling, and κ\kappa is the weight to control the sparsity. The first term is used to discard vertices that are too close to K1K_{1}, and the second exponential term is used to discard those far away from K1K_{1}. Then for each vertex pp, we assign it a random number in [0,1][0,1] and decide whether to discard it by comparing the random number with Prob(p)Prob(p).
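As an illustration, this thinning step can be sketched as follows; the sketch assumes the Poisson-disk samples and the keyframe vertices are given as arrays, and the function and parameter names are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def thin_samples(samples, keyframe_vertices, sigma, kappa, rng=None):
    """Keep each sample p with probability
    Prob(p) = min(1, d(p, K1)/sigma) * exp(-kappa * d(p, K1))."""
    rng = np.random.default_rng() if rng is None else rng
    # minimal distance from every sample to the vertices of the keyframe mesh
    d, _ = cKDTree(keyframe_vertices).query(samples)
    prob = np.minimum(1.0, d / sigma) * np.exp(-kappa * d)
    keep = rng.random(len(samples)) < prob
    return samples[keep]
```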

Ω1\Omega_{1} is then tetrahedralized using tetgen [41]. We need to find a mapping such that:

  1. It is a diffeomorphism;

  2. It maps the vertices in K1K_{1} to those in K2K_{2}.

These can be achieved by using the efficient algorithm proposed in [38], which makes use of a generalized conformality term to control the distortion of the mapping.

With this mapping, we can map all the vertices in Ω1\Omega_{1} to Ω2\Omega_{2} diffeomorphically.

After that, we compute the representation of the mapping, which is an 6\mathbb{R}^{6} valued function that is piecewise constant on each tetrahedron.

Next, for each tetrahedron, we interpolate between its 6\mathbb{R}^{6} valued representation and the upper triangular part of the identity matrix, which corresponds to the identity mapping.

Once we have the interpolated representations, we can reconstruct the mapping for each frame with Equations 24 and the boundary conditions.
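For instance, with linear interpolation in the representation space, the intermediate representations can be produced as in the sketch below; the naming is ours, q denotes the array of per-tetrahedron representations, and the reconstruction of each frame is left to Algorithm 2.

```python
import numpy as np

def interpolate_representations(q, num_frames):
    """Per-tetrahedron linear interpolation between the identity representation
    and the representation q of the keyframe mapping.

    q : (num_tets, 6) upper triangular entries of P on each tetrahedron.
    Returns a list of (num_tets, 6) arrays, one per frame.
    """
    q_id = np.tile(np.eye(3)[np.triu_indices(3)], (q.shape[0], 1))  # identity map
    return [(1.0 - t) * q_id + t * q for t in np.linspace(0.0, 1.0, num_frames)]

# Each interpolated array would then be passed, together with the boundary
# conditions, to the reconstruction step (Equations 24) to obtain that frame.
```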

In the following experiments, we show the interpolation results between human face meshes generated using 3D Morphable Model (3DMM) [42, 43], or registered human scans sampled from the FAUST dataset[44].

Figure 6: Experimental results for 3D animation interpolation. Each column contains an animation sequence of an object. The objects in the first and last rows are samples generated using 3DMM with different expression parameters, and the second and third rows are the interpolated 3D models for those in the first and last rows. The results show that our algorithm can successfully interpolate between human face meshes to get 3D animation sequences.

In Figure 6, we show three animation sequences generated by the proposed algorithm. The motion between the two keyframes of each sequence is mild, meaning that the distance between the 3D quasiconformal representations of the keyframes is comparatively small. We therefore expect the interpolation to behave well, and the experiment confirms that it does.

Figure 7: Experimental results for 3D animation interpolation. Each column contains an animation sequence of an object. The objects in the first and last rows are registered human scans sampled from the FAUST dataset, and the second and third rows are the generated 3D models interpolated between those in the first and last rows. The results show that our algorithm can successfully interpolate between large deformation human body scans to get 3D animation sequences.

We also apply the algorithm to more extreme cases, as shown in Figure 7, which contains three animation sequences of human body motion. The scale of the motion between the two keyframes of each sequence is large, meaning that the distance between the 3D quasiconformal representations of the keyframes is large. The experiment demonstrates that our algorithm may still work properly in such extreme cases.

6.2 3D mapping compression

Another application is to use the proposed 3D representation to compress a 3D mapping.

Compared with 2D mappings, the storage requirement of 3D mappings is much higher. The rapid growth of deep learning has increased the need to create and store large datasets, and with the development of 3D animation, 3D data, which are generally much larger than 2D data, must be stored efficiently.

In this paper, we propose to store 3D mapping data using the proposed 3D representation. The representation is well localized, since it is defined on each tetrahedron and describes the local distortion of that tetrahedron. In the following experiments, we show that, compared with the coordinate function, the proposed representation has better spectral properties, which makes it suitable for mapping compression.

Our idea is similar to that in [1], in which the authors compress 2D mappings using the discrete Fourier transform. For 3D mappings, however, the number of vertices increases dramatically, and when the vertex distribution is very uneven, a very dense regular grid may be required to preserve the information of the original mesh precisely, which greatly increases the computational cost.

Since we already have a mesh structure, it is natural to perform spectral analysis on the tetrahedral mesh directly. [45] introduces the idea of performing signal processing on a graph using the graph Laplacian. On a discrete 2D surface with geometric information, signal processing can be carried out with the Laplace-Beltrami operator, as introduced in [46, 47]. In the 3D case, the Laplace-Beltrami operator can be computed using a 3D version of the cotangent formula [48, 49].

The 3D discrete Laplace-Beltrami operator is a matrix L|𝒱|×|𝒱|L\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}. Let Υ\Upsilon be the subset of \mathcal{F} containing the tetrahedrons incident to the edge connecting pip_{i} and pjp_{j}; the weight of this edge is then computed as

ωij=16TΥTijcotθTijij\omega_{ij}=\frac{1}{6}\sum_{T\in\Upsilon}\ell_{T_{\setminus ij}}\cot\theta^{ij}_{T_{\setminus ij}}

where TijT_{\setminus ij} is the edge connecting the other two vertices in T{i,j}T\setminus\{i,j\}, Tij\ell_{T_{\setminus ij}} is the length of TijT_{\setminus ij}, and θTijij\theta^{ij}_{T_{\setminus ij}} is the interior dihedral angle of TT at the edge TijT_{\setminus ij}.

The off-diagonal entry corresponding to the edge connecting vertices pip_{i} and pjp_{j} is then

Lij=ωijL_{ij}=-\omega_{ij}

and the diagonal entry is

Lii=j=1|𝒱|ωijL_{ii}=\sum_{j=1}^{|\mathcal{V}|}\omega_{ij}

for each vertex pip_{i}.
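Assuming the edge weights ω_ij have already been computed by the formula above and stored in a dictionary keyed by vertex pairs, the sparse operator can be assembled as in the following sketch; the function and variable names are ours.

```python
import numpy as np
import scipy.sparse as sp

def assemble_laplacian(num_vertices, edge_weights):
    """Assemble the 3D discrete Laplace-Beltrami operator.

    edge_weights : dict mapping an edge (i, j), i < j, to its weight omega_ij.
    """
    rows, cols, vals = [], [], []
    diag = np.zeros(num_vertices)
    for (i, j), w in edge_weights.items():
        rows += [i, j]
        cols += [j, i]
        vals += [-w, -w]        # L_ij = L_ji = -omega_ij
        diag[i] += w            # L_ii = sum_j omega_ij
        diag[j] += w
    L = sp.coo_matrix((vals, (rows, cols)), shape=(num_vertices, num_vertices))
    return (L + sp.diags(diag)).tocsr()
```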

Then, by solving the generalized eigenvalue problem

Lν=λMνL\nu=\lambda M\nu (27)

where MM is the mass matrix whose diagonal entries are Mii=14T𝒩(pi)Vol(T)M_{ii}=\frac{1}{4}\sum_{T\in\mathcal{N}(p_{i})}Vol(T), we obtain a series of eigenvalues {λ1,λ2,,λ|𝒱|}\{\lambda_{1},\lambda_{2},\dots,\lambda_{|\mathcal{V}|}\} satisfying λ1<λ2<<λ|𝒱|\lambda_{1}<\lambda_{2}<\dots<\lambda_{|\mathcal{V}|}, and the corresponding eigenvectors {ν1,ν2,,ν|𝒱|}\{\nu_{1},\nu_{2},\dots,\nu_{|\mathcal{V}|}\} satisfying νiTMνi=1\nu_{i}^{T}M\nu_{i}=1 for all generalized eigenvectors νi\nu_{i}.

Let ν~i=M12νi\tilde{\nu}_{i}=M^{\frac{1}{2}}\nu_{i}, then νi=M12ν~i\nu_{i}=M^{-\frac{1}{2}}\tilde{\nu}_{i}. Substituting it into Equation 27, we have LM12ν~i=λM12ν~iLM^{-\frac{1}{2}}\tilde{\nu}_{i}=\lambda M^{\frac{1}{2}}\tilde{\nu}_{i}, which is equivalent to M12LM12ν~i=λν~iM^{-\frac{1}{2}}LM^{-\frac{1}{2}}\tilde{\nu}_{i}=\lambda\tilde{\nu}_{i}. Note that M12LM12M^{-\frac{1}{2}}LM^{-\frac{1}{2}} is a symmetric matrix, and ν~i\tilde{\nu}_{i} is its eigenvector, meaning that ν~jTν~i=0\tilde{\nu}_{j}^{T}\tilde{\nu}_{i}=0 if iji\neq j. Thus, νjTM12M12νi=νiTMνj=0\nu_{j}^{T}M^{\frac{1}{2}}M^{\frac{1}{2}}\nu_{i}=\nu_{i}^{T}M\nu_{j}=0, meaning that the generalized eigenvectors of LL are MM-orthogonal to each other.

Therefore, we have VTMV=IV^{T}MV=I for matrix VV whose column vectors are the generalized eigenvectors. Thus we have (VTM)1=V(V^{T}M)^{-1}=V.
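In practice only the lowest-frequency eigenpairs are needed. A minimal SciPy sketch is given below, assuming L and M are the sparse Laplace-Beltrami and mass matrices defined above; the names are ours, the explicit re-normalization enforces the condition ν_i^T M ν_i = 1, and the small negative shift avoids factorizing the singular matrix L.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def laplace_spectrum(L, M, k):
    """First k generalized eigenpairs of L nu = lambda M nu, in increasing order."""
    # shift-invert near zero targets the smallest (lowest-frequency) eigenvalues;
    # a tiny negative shift keeps L - sigma*M nonsingular since L itself is singular
    vals, vecs = eigsh(L, k=k, M=M, sigma=-1e-8, which="LM")
    order = np.argsort(vals)
    vals, vecs = vals[order], vecs[:, order]
    norms = np.sqrt(np.einsum("ij,ij->j", vecs, M @ vecs))  # nu_i^T M nu_i
    return vals, vecs / norms
```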

For any discrete function ff defined on the vertices of a tetrahedral mesh, the MM-inner product between ff and eigenvector νi\nu_{i} is defined as

f,νiM=fTMνi\langle f,\nu_{i}\rangle_{M}=f^{T}M\nu_{i}

Then

if,νiMνi=i(fTMνi)νi=V(VTMf)=V(VTM)f=f\sum_{i}\langle f,\nu_{i}\rangle_{M}\nu_{i}=\sum_{i}(f^{T}M\nu_{i})\nu_{i}=V(V^{T}Mf)=V(V^{T}M)f=f

as (VTM)1=V(V^{T}M)^{-1}=V.

Therefore, we can project any function defined on a tetrahedral mesh to the spectral domain by computing its MM-inner products with the generalized eigenvectors of the discrete Laplace-Beltrami operator. Moreover, the eigenvalues indicate the frequencies of the eigenvectors: a larger eigenvalue corresponds to a higher frequency. With this property, we can perform low-pass or high-pass filtering on any function defined on the mesh.

In our case, suppose we have a pair of 3D meshes with the same number of vertices and the same triangulation. This pair forms a mapping, and we would like to find a compressed representation for this mapping.

We first construct the discrete Laplace-Beltrami operator and the mass matrix for the source mesh.

Then we solve the generalized eigenvalue problem 27 with the constraint νiTMνi=1\nu_{i}^{T}M\nu_{i}=1 for all generalized eigenvectors νi\nu_{i} to get the sorted eigenvalues and eigenvectors.

With the eigenvalues and eigenvectors, we can filter the 3D quasiconformal representation. A 3D representation is a piecewise constant vector-valued function defined on the tetrahedrons. We can treat this function as a combination of 6 scalar-valued functions and filter the 6 functions separately.

For each of the 6 functions, we first interpolate it to the vertices of the mesh. Then we compute the MM-inner product ξi=f,νiM\xi_{i}=\langle f,\nu_{i}\rangle_{M} for each eigenvector to get the spectrum. Based on the spectrum, one can select a threshold 𝐓|𝒱|\mathbf{T}\leq|\mathcal{V}|, and save {ξii[1,𝐓]}\{\xi_{i}\mid i\in\mathbb{N}\cap[1,\mathbf{T}]\} and discard the rest. The smaller 𝐓\mathbf{T} is, the more storage will be saved, but the reconstruction accuracy will be lower.

To reconstruct the function, one reads the saved spectral components {ξii[1,𝐓]}\{\xi_{i}\mid i\in\mathbb{N}\cap[1,\mathbf{T}]\} from the storage device and computes

f~=i=1𝐓ξiνi\tilde{f}=\sum_{i=1}^{\mathbf{T}}\xi_{i}\nu_{i}

which recovers the vector-valued function representing the 3D representation. One can then reconstruct the mapping by solving the linear systems 24 with the reconstructed 3D representation.
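The projection, truncation, and reconstruction steps can be sketched as follows, where V holds the M-orthonormal generalized eigenvectors as columns and f is one of the six scalar components interpolated to the vertices; the names are ours, and recovering the mapping itself is left to the reconstruction algorithm.

```python
import numpy as np

def compress(f, V, M, T):
    """Spectral coefficients xi_i = <f, nu_i>_M for the first T eigenvectors."""
    return V[:, :T].T @ (M @ f)        # shape (T,)

def reconstruct(xi, V):
    """Low-pass reconstruction f~ = sum_{i<=T} xi_i nu_i."""
    return V[:, :len(xi)] @ xi         # shape (|V|,)

# Hypothetical usage on one component of the 3D representation:
# xi = compress(f, V, M, T=100)        # keep only 100 coefficients
# f_tilde = reconstruct(xi, V)         # then solve the linear systems 24
```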

There are two primary concerns regarding this 3D mapping compression scheme:

  1. Dimensionality Mismatch: The interpolated 3D representation is an 6\mathbb{R}^{6} valued function, whereas the coordinate function is an 3\mathbb{R}^{3} valued function. Consequently, to ensure that the storage requirements are comparable to those of storing the mapping directly, we must discard half of the spectral components. This process inevitably leads to a loss of information. However, our experimental results demonstrate that this operation is justified, as the significant information is primarily concentrated in the low-frequency region, making the loss due to the removal of high-frequency components relatively minor.

  2. Direct Compression of Coordinate Function: One might question why we do not compress the coordinate function directly. Our experiments reveal that attempting direct compression of the coordinate function yields poor results.

In the following experiments, we apply the above compression scheme to two 3D mappings. The experimental results are shown in Figures 8, 9, and 10 and Figures 11, 12, and 13, respectively.

Figures 8 and 11 show the two mappings and their compression results. (a) and (b) compose a mapping, (c) is the reconstructed result of compressing the coordinate function, and (d) is the reconstructed result of compressing the 3D representation. To illustrate the perturbation of the compressed mapping at the boundary, we have colored four of the outer surfaces of the tetrahedral mesh. Additionally, the edge where the two uncolored, transparent outer surfaces meet is highlighted in red.

In Figure 8, we see that the red edge in (c) deviates from its original position in (b). The four colored outer surfaces also become wavy when the mapping is compressed using the coordinate function. In comparison, the four outer surfaces and the red edge in (d) stay in their original positions, as the proposed reconstruction algorithm allows us to introduce boundary conditions.

The deformation of the mapping shown in Figure 11 is much larger. Compared with the mesh in (b), apart from the deviation of the four colored outer surfaces and the red edge, we further notice that the three surfaces inside the cube are also distorted. In comparison, the shape of the three surfaces in (d) is well-preserved, which demonstrates the robustness of our algorithm.

Figures 9 and 12 show the spectra of the 3D representations of the two mappings. As the 3D representation is an 6\mathbb{R}^{6} valued function, there is one spectrum for each of its six component functions. From the two figures, we notice that most of the information is concentrated in the low-frequency domain; therefore, we can safely discard the coefficients corresponding to the high-frequency domain.

Figures 10 and 13 visualize the mean square error between the compressed mapping and the original one. The reference error is the mean square error between the source and target meshes. We see that as more high-frequency coefficients are discarded, the error becomes larger. More importantly, the curves are flat except when the threshold 𝐓\mathbf{T} is small, meaning that we can achieve a high compression ratio with the proposed mapping compression algorithm.

In conclusion, our proposed mapping compression algorithm has the following advantages:

  1. It can compress mappings defined on unstructured meshes;

  2. It allows the introduction of boundary conditions;

  3. It can compress mappings with a high compression ratio.

Figure 8: Comparison between the results of compressing the coordinate function and the 3D representation. (a) and (b) are the source and target meshes, which compose a mapping. (c) shows the coordinate function compression result; the mesh is reconstructed using 200 spectral coefficients for each component of the coordinate function. (d) shows the 3D representation compression result; the mesh is reconstructed using 100 spectral coefficients for each component of the 3D representation.
Figure 9: Spectrum of the 3D representation of the first mapping. Each chart shows the spectrum of one component of the 3D representation q\vec{q}. The magnitudes on the left of the charts are much larger than those on the right, meaning that most of the information lies in the low-frequency domain. The right column shows that many spectral coefficients are close to zero; these can be discarded to compress the mapping.
Figure 10: MSE visualization for the cube with 9261 vertices and 40000 tetrahedrons. The reference error is the MSE between the source mesh and the target mesh. The MSE line represents the MSE between the target mesh and the reconstructed mesh after discarding a certain number of frequency components. 𝐓\mathbf{T}, the number of discarded frequency components, is shown on the x-axis.
Figure 11: Comparison between the results of compressing the coordinate function and the 3D representation. (a) and (b) are the source and target meshes, which compose a mapping. (c) shows the coordinate function compression result; the mesh is reconstructed using 800 spectral coefficients for each component of the coordinate function. (d) shows the 3D representation compression result; the mesh is reconstructed using 400 spectral coefficients for each component of the 3D representation.
Figure 12: Spectrum of the 3D representation of the second mapping. Each chart shows the spectrum of one component of the 3D representation q\vec{q}. The magnitudes on the left of the charts are much larger than those on the right, meaning that most of the information lies in the low-frequency domain. The right column shows that many spectral coefficients are close to zero; these can be discarded to compress the mapping.
Figure 13: MSE visualization for the cube with 7283 vertices and 43891 tetrahedrons. The reference error is the MSE between the source mesh and the target mesh. The MSE line represents the MSE between the target mesh and the reconstructed mesh after discarding a certain number of frequency components. 𝐓\mathbf{T}, the number of discarded frequency components, is shown on the x-axis.

7 Conclusion

In this paper, we have proposed an effective 3D quasiconformal representation to describe the local dilation of 3D mappings. This representation, along with the associated reconstruction algorithm similar to Linear Beltrami Solver, allows us to process the mapping in the space of 3D quasiconformal representation.

We demonstrate the effectiveness of the proposed 3D quasiconformal representation and reconstruction algorithm by reconstructing randomly generated discrete mappings defined on tetrahedral meshes with high precision, which confirms the correctness of the reconstruction algorithm and the ability of the representation to encode distortion information.

We also show two applications enabled by the proposed 3D representation and the reconstruction algorithm. In the keyframe interpolation, we propose an alternative algorithm that enables the interpolation between keyframes in the space of 3D quasiconformal representation. The algorithm gives impressive results.

In mapping compression, we propose a compression algorithm for mappings defined on unstructured meshes. It allows us to introduce boundary conditions so that we can control the boundary of the domain easily. It can also achieve a high compression ratio for mappings with smooth deformations.

In future work, we plan to improve the computational efficiency of our algorithm and extend it to handle more complex deformations. We also aim to explore more applications where our technique could prove useful, including 3D image registration and image segmentation.

In summary, we have introduced an effective tool that, for the first time, fills the gap of describing and manipulating distortions in 3D space. The potential applications of our work are far-reaching, and we hope this newfound capability will prove invaluable for research involving 3D shapes and information.

References

  • [1] L. M. Lui, K. C. Lam, T. W. Wong, and X. Gu, “Texture map and video compression using beltrami representation,” SIAM Journal on Imaging Sciences, vol. 6, no. 4, pp. 1880–1902, 2013.
  • [2] M. Desbrun, M. Meyer, and P. Alliez, “Intrinsic parameterizations of surface meshes,” in Computer graphics forum, vol. 21.   Wiley Online Library, 2002, pp. 209–218.
  • [3] B. Lévy, S. Petitjean, N. Ray, and J. Maillot, “Least squares conformal maps for automatic texture atlas generation,” ACM transactions on graphics (TOG), vol. 21, no. 3, pp. 362–371, 2002.
  • [4] X. Gu, Y. Wang, T. F. Chan, P. M. Thompson, and S.-T. Yau, “Genus zero surface conformal mapping and its application to brain surface mapping,” IEEE transactions on medical imaging, vol. 23, no. 8, pp. 949–958, 2004.
  • [5] R. Zayer, C. Rossl, and H.-P. Seidel, “Discrete tensorial quasi-harmonic maps,” in International Conference on Shape Modeling and Applications 2005 (SMI’05).   IEEE, 2005, pp. 276–285.
  • [6] L. M. Lui, T. W. Wong, P. Thompson, T. Chan, X. Gu, and S.-T. Yau, “Compression of surface registrations using beltrami coefficients,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.   IEEE, 2010, pp. 2839–2846.
  • [7] P. T. Choi, K. C. Lam, and L. M. Lui, “Flash: Fast landmark aligned spherical harmonic parameterization for genus-0 closed brain surfaces,” SIAM Journal on Imaging Sciences, vol. 8, no. 1, pp. 67–94, 2015.
  • [8] P. T. Choi and L. M. Lui, “Fast disk conformal parameterization of simply-connected open surfaces,” Journal of Scientific Computing, vol. 65, pp. 1065–1090, 2015.
  • [9] K. Chen, L. M. Lui, and J. Modersitzki, “Image and surface registration,” in Handbook of Numerical Analysis.   Elsevier, 2019, vol. 20, pp. 579–611.
  • [10] K. C. Lam, X. Gu, and L. M. Lui, “Genus-one surface registration via teichmüller extremal mapping,” in International Conference on Medical Image Computing and Computer-Assisted Intervention.   Springer, 2014, pp. 25–32.
  • [11] ——, “Landmark constrained genus-one surface teichmüller map applied to surface registration in medical imaging,” Medical image analysis, vol. 25, no. 1, pp. 45–55, 2015.
  • [12] K. C. Lam and L. M. Lui, “Landmark-and intensity-based registration with large deformations via quasi-conformal maps,” SIAM Journal on Imaging Sciences, vol. 7, no. 4, pp. 2364–2392, 2014.
  • [13] ——, “Quasi-conformal hybrid multi-modality image registration and its application to medical image fusion,” in International Symposium on Visual Computing.   Springer, 2015, pp. 809–818.
  • [14] L. M. Lui, K. C. Lam, S.-T. Yau, and X. Gu, “Teichmuller mapping (t-map) and its applications to landmark matching registration,” SIAM Journal on Imaging Sciences, vol. 7, no. 1, pp. 391–426, 2014.
  • [15] L. M. Lui and C. Wen, “Geometric registration of high-genus surfaces,” SIAM Journal on Imaging Sciences, vol. 7, no. 1, pp. 337–365, 2014.
  • [16] L. M. Lui, T. W. Wong, P. Thompson, T. Chan, X. Gu, and S.-T. Yau, “Shape-based diffeomorphic registration on hippocampal surfaces using beltrami holomorphic flow,” in International Conference on Medical Image Computing and Computer-Assisted Intervention.   Springer, 2010, pp. 323–330.
  • [17] L. M. Lui, T. W. Wong, W. Zeng, X. Gu, P. M. Thompson, T. F. Chan, and S.-T. Yau, “Optimization of surface registrations using beltrami holomorphic flow,” Journal of scientific computing, vol. 50, no. 3, pp. 557–585, 2012.
  • [18] D. Qiu and L. M. Lui, “Inconsistent surface registration via optimization of mapping distortions,” Journal of Scientific Computing, vol. 83, no. 3, pp. 1–31, 2020.
  • [19] C. Wen, D. Wang, L. Shi, W. C. Chu, J. C. Cheng, and L. M. Lui, “Landmark constrained registration of high-genus surfaces applied to vestibular system morphometry,” Computerized Medical Imaging and Graphics, vol. 44, pp. 1–12, 2015.
  • [20] C. P. Yung, G. P. Choi, K. Chen, and L. M. Lui, “Efficient feature-based image registration by mapping sparsified surfaces,” Journal of Visual Communication and Image Representation, vol. 55, pp. 561–571, 2018.
  • [21] W. Zeng, L. Ming Lui, and X. Gu, “Surface registration by optimization in constrained diffeomorphism space,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 4169–4176.
  • [22] D. Zhang, A. Theljani, and K. Chen, “On a new diffeomorphic multi-modality image registration model and its convergent gauss-newton solver,” Journal of Mathematical Research with Applications, vol. 39, no. 6, pp. 633–656, 2019.
  • [23] M. Zhang, F. Li, X. Wang, Z. Wu, S.-Q. Xin, L.-M. Lui, L. Shi, D. Wang, and Y. He, “Automatic registration of vestibular systems with exact landmark correspondence,” Graphical models, vol. 76, no. 5, pp. 532–541, 2014.
  • [24] D. Zhang and K. Chen, “A novel diffeomorphic model for image registration and its algorithm,” Journal of Mathematical Imaging and Vision, vol. 60, no. 8, pp. 1261–1283, 2018.
  • [25] H.-L. Chan, S. Yan, L.-M. Lui, and X.-C. Tai, “Topology-preserving image segmentation by beltrami representation of shapes,” Journal of Mathematical Imaging and Vision, vol. 60, no. 3, pp. 401–421, 2018.
  • [26] C. Y. Siu, H. L. Chan, and R. L. Ming Lui, “Image segmentation with partial convexity shape prior using discrete conformality structures,” SIAM Journal on Imaging Sciences, vol. 13, no. 4, pp. 2105–2139, 2020.
  • [27] D. Zhang, X. cheng Tai, and L. M. Lui, “Topology- and convexity-preserving image segmentation based on image registration,” Applied Mathematical Modelling, vol. 100, p. 218, 2021.
  • [28] D. Zhang and L. M. Lui, “Topology-preserving 3d image segmentation based on hyperelastic regularization,” Journal of Scientific Computing, vol. 87, no. 3, pp. 1–33, 2021.
  • [29] H. L. Chan, H. Li, and L. M. Lui, “Quasi-conformal statistical shape analysis of hippocampal surfaces for alzheimer’s disease analysis,” Neurocomputing, vol. 175, pp. 177–187, 2016. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0925231215015064
  • [30] H.-L. Chan, H.-M. Yuen, C.-T. Au, K. C.-C. Chan, A. M. Li, and L.-M. Lui, “Quasi-conformal geometry based local deformation analysis of lateral cephalogram for childhood osa classification,” arXiv preprint arXiv:2006.11408, 2020.
  • [31] G. P. Choi, H. L. Chan, R. Yong, S. Ranjitkar, A. Brook, G. Townsend, K. Chen, and L. M. Lui, “Tooth morphometry using quasi-conformal theory,” Pattern Recognition, vol. 99, p. 107064, 2020.
  • [32] G. P. Choi, D. Qiu, and L. M. Lui, “Shape analysis via inconsistent surface registration,” Proceedings of the Royal Society A, vol. 476, no. 2242, p. 20200147, 2020.
  • [33] L. M. Lui, W. Zeng, S.-T. Yau, and X. Gu, “Shape analysis of planar multiply-connected objects using conformal welding,” IEEE transactions on pattern analysis and machine intelligence, vol. 36, no. 7, pp. 1384–1401, 2013.
  • [34] T. W. Meng, G. P.-T. Choi, and L. M. Lui, “Tempo: Feature-endowed teichmuller extremal mappings of point clouds,” SIAM Journal on Imaging Sciences, vol. 9, no. 4, pp. 1922–1962, 2016.
  • [35] Q. Chen, Z. Li, and L. M. Lui, “A deep learning framework for diffeomorphic mapping problems via quasi-conformal geometry applied to imaging,” arXiv preprint arXiv:2110.10580, 2021.
  • [36] Y. Guo, Q. Chen, G. P. Choi, and L. M. Lui, “Automatic landmark detection and registration of brain cortical surfaces via quasi-conformal geometry and convolutional neural networks,” Computers in Biology and Medicine, p. 107185, 2023.
  • [37] Y. T. Lee, K. C. Lam, and L. M. Lui, “Landmark-matching transformation with large deformation via n-dimensional quasi-conformal maps,” Journal of Scientific Computing, vol. 67, pp. 926–954, 2016.
  • [38] D. Zhang, G. P. Choi, J. Zhang, and L. M. Lui, “A unifying framework for n-dimensional quasi-conformal mappings,” SIAM Journal on Imaging Sciences, vol. 15, no. 2, pp. 960–988, 2022.
  • [39] C. Huang, K. Chen, M. Huang, D. Kong, and J. Yuan, “Topology-preserving image registration with novel multi-dimensional beltrami regularization,” Applied Mathematical Modelling, vol. 125, pp. 539–556, 2024.
  • [40] R. L. Cook, “Stochastic sampling in computer graphics,” ACM Transactions on Graphics (TOG), vol. 5, no. 1, pp. 51–72, 1986.
  • [41] S. Hang, “Tetgen, a delaunay-based quality tetrahedral mesh generator,” ACM Trans. Math. Softw, vol. 41, no. 2, p. 11, 2015.
  • [42] V. Blanz and T. Vetter, “A morphable model for the synthesis of 3d faces,” in Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH ’99.   USA: ACM Press/Addison-Wesley Publishing Co., 1999, p. 187–194. [Online]. Available: https://doi.org/10.1145/311535.311556
  • [43] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter, “A 3d face model for pose and illumination invariant face recognition,” in 2009 sixth IEEE international conference on advanced video and signal based surveillance.   IEEE, 2009, pp. 296–301.
  • [44] F. Bogo, J. Romero, M. Loper, and M. J. Black, “Faust: Dataset and evaluation for 3d mesh registration,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 3794–3801.
  • [45] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE signal processing magazine, vol. 30, no. 3, pp. 83–98, 2013.
  • [46] R. M. Rustamov et al., “Laplace-beltrami eigenfunctions for deformation invariant shape representation,” in Symposium on geometry processing, vol. 257, 2007, pp. 225–233.
  • [47] K. Crane, “Discrete differential geometry: An applied introduction,” Notices of the AMS, Communication, vol. 1153, 2018.
  • [48] ——, “The n-dimensional cotangent formula,” Online note. URL: https://www.cs.cmu.edu/~kmcrane/Projects/Other/nDCotanFormula.pdf, pp. 11–32, 2019.
  • [49] X. Gu, Y. Wang, and S.-T. Yau, “Volumetric Harmonic Map,” Communications in Information & Systems, vol. 3, no. 3, pp. 191 – 202, 2003.