Low Rank Approximation of Dual Complex Matrices
Abstract
Dual complex numbers can represent rigid body motions in 2D spaces. Dual complex matrices are linked with screw theory, and have potential applications in various areas. In this paper, we study low rank approximation of dual complex matrices. We define the 2-norm for dual complex vectors, and the Frobenius norm for dual complex matrices. These norms are nonnegative dual numbers. We establish the unitary invariance property of dual complex matrices. We study eigenvalues of square dual complex matrices, and show that an $n \times n$ dual complex Hermitian matrix has exactly $n$ eigenvalues, which are dual numbers. We present a singular value decomposition (SVD) theorem for dual complex matrices, define ranks and appreciable ranks for dual complex matrices, and study their properties. We establish an Eckart-Young like theorem for dual complex matrices, and present an algorithmic framework for low rank approximation of dual complex matrices via truncated SVD. The SVD of dual complex matrices also provides a basic tool for Principal Component Analysis (PCA) with these matrices. Numerical experiments are reported.
Key words. Dual complex matrices, conjugation, eigenvalues, Hermitian matrices, singular value decomposition, ranks, Eckart-Young like theorem.
1 Introduction
In 1873, W.K. Clifford [6] introduced dual numbers, dual complex numbers and dual quaternions. These became core parts of Clifford algebra, or geometric algebra.
While dual quaternions can represent rigid body motions in 3D spaces, the primary application of dual complex numbers is in representing rigid body motions in 2D spaces [10]. Thus, an $n$-dimensional dual complex vector can represent a set of $n$ rigid body motions in a 2D space, and an $m \times n$ dual complex matrix represents a linear transformation from the $n$-dimensional dual complex vector space to the $m$-dimensional dual complex vector space. Dual complex matrices are also linked with screw geometry or screw theory [7], and have potential applications in classical mechanics and robotics, complex representations of the Lorentz group in relativity and electrodynamics, conformal mappings in computer vision, the physics of scattering processes, etc., see [4]. One important tool in data analysis is Principal Component Analysis (PCA) [5]. The core of PCA is low rank approximation of matrices. Thus, in this paper, we study low rank approximation of dual complex matrices.
Suppose that all the data points are stacked as column vectors of a matrix $X$. The matrix $X$ should (approximately) have low rank; mathematically,
$$X = L + E,$$
where $L$ has low rank and $E$ is a small perturbation matrix. Classical Principal Component Analysis seeks the best rank-$k$ estimate of $L$ by solving
$$\min_{\mathrm{Rank}(L) \le k} \|X - L\|_F.$$
This problem can be solved via the singular value decomposition (SVD) and enjoys a number of optimality properties. See [5].
Now, suppose $X$ is a dual complex matrix. Hence, we need a theory of low rank approximation for dual complex matrices, including unitary invariance of dual complex matrices, an SVD of dual complex matrices, a rank theory of dual complex matrices, and an Eckart-Young like theorem for dual complex matrices. In this paper, we study these issues.
In the next section, we introduce the 2-norm for dual complex vectors. The 2-norm of a dual complex vector is a nonnegative dual number. In Section 3, we define the Frobenius norm for dual complex matrices, and establish the unitary invariance property of dual complex matrices.
In Section 4, we study eigenvalues of dual complex matrices, in particular dual complex Hermitian matrices. We show that an $n \times n$ dual complex Hermitian matrix has exactly $n$ eigenvalues, which are dual numbers. We prove a unitary decomposition theorem for dual complex Hermitian matrices. A singular value decomposition theorem for dual complex matrices is proved in Section 5. In Section 6, we define ranks and appreciable ranks for dual complex matrices, and study their properties. An Eckart-Young like theorem for dual complex matrices is established in Section 7.
An algorithm framework for low rank approximation of dual complex matrices via truncated SVD is presented, and numerical experiments are reported, in Section 8.
A dual complex number can represent a rigid body motion in a 2D space. Such a 2D space can be a plane or a surface. For example, the cortex can be regarded as a 2D space. In 2016, Alexander et al. [2] applied principal component analysis (PCA) to scalar valued phase gradients to analyze plane waves in the cortex. In 2019, Alexander et al. [3] used PCA on complex valued unit phases to analyze spiral waves in the cortex. The core of PCA is the singular value decomposition (SVD) of matrices [5]. In these two cases, SVDs of complex matrices were used. However, the plane waves and spiral waves in the cortex are correlated, and should not be analyzed separately. A possible way to overcome this limitation is to combine the two kinds of analysis by using the SVD of dual complex matrices. In Section 8, we show that, computationally, this combination is possible.
Throughout the paper, scalars, vectors and matrices are denoted by small letters, bold small letters and capital letters, respectively.
2 The 2-Norm of Dual Complex Vectors
2.1 Dual Numbers
Denote by $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{D}$ the set of real numbers, the set of complex numbers, and the set of dual numbers, respectively. A dual number may be written as $q = q_{st} + q_{\mathcal{I}}\epsilon$, where $q_{st}$ and $q_{\mathcal{I}}$ are real numbers, and $\epsilon$ is the infinitesimal unit, satisfying $\epsilon^2 = 0$. We call $q_{st}$ the real part or the standard part of $q$, and $q_{\mathcal{I}}$ the dual part or the infinitesimal part of $q$. The infinitesimal unit $\epsilon$ is commutative in multiplication with complex numbers. The dual numbers form a commutative algebra of dimension two over the reals. If $q_{st} \neq 0$, we say that $q$ is appreciable; otherwise, we say that $q$ is infinitesimal.
A total order for dual numbers was introduced in [11]. Given two dual numbers $p = p_{st} + p_{\mathcal{I}}\epsilon$ and $q = q_{st} + q_{\mathcal{I}}\epsilon$, where $p_{st}$, $p_{\mathcal{I}}$, $q_{st}$ and $q_{\mathcal{I}}$ are real numbers, we say that $p \le q$ if either $p_{st} < q_{st}$, or $p_{st} = q_{st}$ and $p_{\mathcal{I}} \le q_{\mathcal{I}}$. In particular, we say that $q$ is positive, nonnegative, nonpositive or negative, if $q > 0$, $q \ge 0$, $q \le 0$ or $q < 0$, respectively.
Suppose that $q = q_{st} + q_{\mathcal{I}}\epsilon \in \mathbb{D}$, $q \ge 0$ and $q$ is appreciable. Then we define the square root
$$\sqrt{q} = \sqrt{q_{st}} + \frac{q_{\mathcal{I}}}{2\sqrt{q_{st}}}\,\epsilon. \qquad (1)$$
Conventionally, we have $\sqrt{0} = 0$.
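To make the dual number arithmetic, the total order and the square root (1) concrete, the following is a minimal Python sketch (Python is the language used for the experiments in Section 8). The class name `DualNumber` and its methods are illustrative conventions of ours, not part of any library.

```python
import math

class DualNumber:
    """A dual number q = st + I*eps with eps^2 = 0 (both parts real)."""
    def __init__(self, st, I=0.0):
        self.st, self.I = float(st), float(I)

    def __mul__(self, other):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return DualNumber(self.st * other.st, self.st * other.I + self.I * other.st)

    def __le__(self, other):
        # total order: compare standard parts first, then infinitesimal parts
        return (self.st < other.st) or (self.st == other.st and self.I <= other.I)

    def sqrt(self):
        # square root (1), defined here only for appreciable nonnegative dual numbers
        assert self.st > 0
        r = math.sqrt(self.st)
        return DualNumber(r, self.I / (2.0 * r))

# example: sqrt(4 + 2 eps) = 2 + 0.5 eps, and indeed (2 + 0.5 eps)^2 = 4 + 2 eps
q = DualNumber(4.0, 2.0)
r = q.sqrt()
print(r.st, r.I)      # 2.0 0.5
s = r * r
print(s.st, s.I)      # 4.0 2.0
```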
2.2 Dual Complex Numbers
Denote the set of dual complex numbers by $\hat{\mathbb{C}}$. A dual complex number $q$ has the form [4, 8, 9]
$$q = q_{st} + q_{\mathcal{I}}\epsilon, \qquad (2)$$
where $q_{st}$ and $q_{\mathcal{I}}$ are complex numbers. Again, we call $q_{st}$ the real part or the standard part of $q$, and $q_{\mathcal{I}}$ the dual part or the infinitesimal part of $q$. We say that a dual complex number is appreciable if its standard part is nonzero. Otherwise, we say that it is infinitesimal. The multiplication of dual complex numbers is commutative.
Denote $q_{st} = a + b\mathbf{i}$ and $q_{\mathcal{I}} = c + d\mathbf{i}$,
where $a, b, c, d \in \mathbb{R}$ and $\mathbf{i}$ is the imaginary unit.
Recall that for a complex number $z = a + b\mathbf{i}$, where $a$ and $b$ are real numbers, its conjugate is $\bar{z} = a - b\mathbf{i}$. We also have $z\bar{z} = \bar{z}z = a^2 + b^2 = |z|^2$.
The conjugate of $q = q_{st} + q_{\mathcal{I}}\epsilon$ is
$$\bar{q} = \bar{q}_{st} + \bar{q}_{\mathcal{I}}\epsilon. \qquad (3)$$
We have
$$q\bar{q} = \bar{q}q = |q_{st}|^2 + \left(q_{st}\bar{q}_{\mathcal{I}} + q_{\mathcal{I}}\bar{q}_{st}\right)\epsilon = |q_{st}|^2 + 2\,\mathrm{Re}\left(q_{st}\bar{q}_{\mathcal{I}}\right)\epsilon.$$
It is a positive dual number if $q$ is appreciable, or $0$ otherwise. From this, we may define the magnitude of a dual complex number $q$ as
$$|q| = \begin{cases} \sqrt{q\bar{q}}, & \text{if } q \text{ is appreciable}, \\ |q_{\mathcal{I}}|\,\epsilon, & \text{otherwise}. \end{cases} \qquad (4)$$
By direct calculations, we have the following proposition.
Proposition 2.1.
The magnitude $|q|$ is a nonnegative dual number for any $q \in \hat{\mathbb{C}}$. If $q$ is appreciable, then
$$|q| = |q_{st}| + \frac{\mathrm{Re}\left(q_{st}\bar{q}_{\mathcal{I}}\right)}{|q_{st}|}\,\epsilon. \qquad (5)$$
For any $p, q \in \hat{\mathbb{C}}$, we have

(i) $|\bar{q}| = |q|$;

(ii) $|q| \ge 0$ for all $q \in \hat{\mathbb{C}}$, and $|q| = 0$ if and only if $q = 0$;

(iii) $|pq| = |p|\,|q|$;

(iv) $|p + q| \le |p| + |q|$.
It is a special case of Theorem 5.1 of [11]. Hence, we omit its proof here.
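As an illustration of the conjugate (3) and the magnitude (4)-(5), here is a hedged Python sketch that represents a dual complex number as a pair of Python complex numbers; the helper names are ours, not from the paper.

```python
def conjugate(q):
    """Conjugate (3) of q = (q_st, q_I): both parts are conjugated."""
    q_st, q_I = q
    return (q_st.conjugate(), q_I.conjugate())

def magnitude(q):
    """Magnitude (4) of a dual complex number, returned as a dual number (st, I).

    If q is appreciable, |q| = sqrt(q * conj(q)) = |q_st| + Re(conj(q_st) q_I)/|q_st| eps;
    otherwise |q| = |q_I| eps."""
    q_st, q_I = q
    if q_st != 0:
        a = abs(q_st)
        return (a, (q_st.conjugate() * q_I).real / a)
    return (0.0, abs(q_I))

q = (3 + 4j, 1 - 2j)
print(magnitude(q))   # (5.0, -1.0)
```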
2.3 Dual Complex Vectors
Denote by $\hat{\mathbb{C}}^n$ the set of $n$-dimensional dual complex vectors. We say that $\mathbf{x} \in \hat{\mathbb{C}}^n$ is appreciable if at least one of its components is appreciable. We may also write
$$\mathbf{x} = \mathbf{x}_{st} + \mathbf{x}_{\mathcal{I}}\epsilon,$$
where $\mathbf{x}_{st}, \mathbf{x}_{\mathcal{I}} \in \mathbb{C}^n$. Define $\mathbf{x}^* = \bar{\mathbf{x}}^\top$. The 2-norm of $\mathbf{x} = \left(x_1, \dots, x_n\right)^\top$ is defined as
$$\|\mathbf{x}\|_2 = \begin{cases} \sqrt{\sum_{i=1}^n |x_i|^2}, & \text{if } \mathbf{x} \text{ is appreciable}, \\ \left\|\mathbf{x}_{\mathcal{I}}\right\|_2\,\epsilon, & \text{otherwise}. \end{cases} \qquad (6)$$
If $\|\mathbf{x}\|_2 = 1$, then we say that $\mathbf{x}$ is a unit dual complex vector. If $\left\|\mathbf{x}^{(j)}\right\|_2 = 1$ and $\left(\mathbf{x}^{(i)}\right)^* \mathbf{x}^{(j)} = \delta_{ij}$ for $i, j = 1, \dots, n$, where $\delta_{ij}$ is the Kronecker symbol, then we say that $\left\{\mathbf{x}^{(1)}, \dots, \mathbf{x}^{(n)}\right\}$ is an orthonormal basis of $\hat{\mathbb{C}}^n$.
We have the following proposition.
Proposition 2.2.
Suppose that $\mathbf{x}, \mathbf{y} \in \hat{\mathbb{C}}^n$, and $q \in \hat{\mathbb{C}}$. Then,

(i) $\|\mathbf{x}\|_2 \ge 0$, and $\|\mathbf{x}\|_2 = 0$ if and only if $\mathbf{x} = \mathbf{0}$;

(ii) $\|q\mathbf{x}\|_2 = |q|\,\|\mathbf{x}\|_2$;

(iii) $\|\mathbf{x} + \mathbf{y}\|_2 \le \|\mathbf{x}\|_2 + \|\mathbf{y}\|_2$.
This proposition can be proved directly from the definitions. It is a special case of Theorem 6.4 of [11]. Hence, we also omit its proof here.
Note that if both $q$ and $\mathbf{x}$ are appreciable, then $q\mathbf{x}$ is appreciable.
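A minimal Python sketch of the 2-norm (6), with a dual complex vector stored as a pair of NumPy arrays; the function name `dc_norm2` is our own illustrative choice.

```python
import numpy as np

def dc_norm2(x_st, x_I):
    """2-norm (6) of a dual complex vector x = x_st + x_I*eps, as a dual number (st, I)."""
    if np.any(x_st != 0):                      # x is appreciable
        s = np.linalg.norm(x_st)               # standard part: ||x_st||_2
        return (s, np.real(np.vdot(x_st, x_I)) / s)
    return (0.0, np.linalg.norm(x_I))          # infinitesimal vector: ||x_I||_2 * eps

x_st = np.array([1.0 + 1j, 0.0, 2.0])
x_I = np.array([0.5j, 1.0, -1.0])
print(dc_norm2(x_st, x_I))
```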
3 Unitary Invariance of Dual Complex Matrices
The collections of $m \times n$ real, complex and dual complex matrices are denoted by $\mathbb{R}^{m \times n}$, $\mathbb{C}^{m \times n}$ and $\hat{\mathbb{C}}^{m \times n}$, respectively.
A dual complex matrix $A \in \hat{\mathbb{C}}^{m \times n}$ can be denoted as
$$A = A_{st} + A_{\mathcal{I}}\epsilon, \qquad (7)$$
where $A_{st}, A_{\mathcal{I}} \in \mathbb{C}^{m \times n}$. Again, we call $A_{st}$ and $A_{\mathcal{I}}$ the standard part and the infinitesimal part of $A$, respectively. The transpose of $A$ is $A^\top = A_{st}^\top + A_{\mathcal{I}}^\top\epsilon$. The conjugate of $A$ is $\bar{A} = \bar{A}_{st} + \bar{A}_{\mathcal{I}}\epsilon$. The conjugate transpose of $A$ is $A^* = A_{st}^* + A_{\mathcal{I}}^*\epsilon$.
Let $A \in \hat{\mathbb{C}}^{m \times n}$ and $B \in \hat{\mathbb{C}}^{n \times p}$. Then we have
$$AB = A_{st}B_{st} + \left(A_{st}B_{\mathcal{I}} + A_{\mathcal{I}}B_{st}\right)\epsilon, \qquad (8)$$
and consequently $(AB)^* = B^*A^*$.
Given a square dual complex matrix $A \in \hat{\mathbb{C}}^{n \times n}$, it is called invertible (nonsingular) if $AB = BA = I_n$ for some $B \in \hat{\mathbb{C}}^{n \times n}$, where $I_n$ is the $n \times n$ identity matrix. Such $B$ is unique and denoted by $A^{-1}$. Matrix $A$ is called Hermitian if $A^* = A$. Write $A = A_{st} + A_{\mathcal{I}}\epsilon$. Then $A$ is Hermitian if and only if both $A_{st}$ and $A_{\mathcal{I}}$ are complex Hermitian matrices. Matrix $U \in \hat{\mathbb{C}}^{n \times n}$ is called unitary if $U^*U = I_n$. Apparently, $U$ is unitary if and only if its column vectors form an orthonormal basis of $\hat{\mathbb{C}}^n$. Let $V \in \hat{\mathbb{C}}^{n \times k}$ with $k \le n$. We say that $V$ is partially unitary if its column vectors are unit vectors and are orthogonal to each other.
Suppose that $A \in \hat{\mathbb{C}}^{n \times n}$ is invertible. Then all column and row vectors of $A$ are appreciable. This can be proved directly from the definition. We also have the following proposition.
Proposition 3.1.
Suppose that $U \in \hat{\mathbb{C}}^{n \times n}$ is unitary. Then $U$ is invertible and $U^{-1} = U^*$, so that $UU^* = I_n$ as well.
Proof.
Dual complex partially unitary matrices have the following properties.
Proposition 3.2.
Suppose that $V = V_{st} + V_{\mathcal{I}}\epsilon \in \hat{\mathbb{C}}^{n \times k}$ is partially unitary, $k \le n$, where $V_{st}, V_{\mathcal{I}} \in \mathbb{C}^{n \times k}$. Then $V_{st}$ is a complex partially unitary matrix, and
$$V_{st}^* V_{\mathcal{I}} + V_{\mathcal{I}}^* V_{st} = O. \qquad (13)$$
Proof.
Since $V$ is partially unitary, $V^*V = I_k$, i.e.,
$$V_{st}^* V_{st} + \left(V_{st}^* V_{\mathcal{I}} + V_{\mathcal{I}}^* V_{st}\right)\epsilon = I_k.$$
This implies that
$$V_{st}^* V_{st} = I_k, \qquad V_{st}^* V_{\mathcal{I}} + V_{\mathcal{I}}^* V_{st} = O.$$
Then we have (13), and $V_{st}^* V_{st} = I_k$, i.e., $V_{st}$ is a complex partially unitary matrix. ∎
Proposition 3.3.
Suppose that $V = V_{st} + V_{\mathcal{I}}\epsilon \in \hat{\mathbb{C}}^{n \times k}$ is partially unitary, where $V_{st}, V_{\mathcal{I}} \in \mathbb{C}^{n \times k}$, $k < n$. Then there is a vector $\mathbf{u} \in \hat{\mathbb{C}}^n$ such that $(V, \mathbf{u})$ is partially unitary.
Proof.
By Proposition 3.2, $V_{st}$ is a complex partially unitary matrix. By complex matrix analysis, there is a complex vector $\mathbf{u}_{st} \in \mathbb{C}^n$ such that $\left(V_{st}, \mathbf{u}_{st}\right)$ is partially unitary. Denote the $j$th column vector of $V$ as $\mathbf{v}^{(j)} = \mathbf{v}^{(j)}_{st} + \mathbf{v}^{(j)}_{\mathcal{I}}\epsilon$ for $j = 1, \dots, k$. Now, let
$$\tilde{\mathbf{u}} = \mathbf{u}_{st} - \epsilon\sum_{j=1}^{k}\left(\left(\mathbf{v}^{(j)}_{\mathcal{I}}\right)^*\mathbf{u}_{st}\right)\mathbf{v}^{(j)}_{st}. \qquad (14)$$
Then for $j = 1, \dots, k$,
$$\left(\mathbf{v}^{(j)}\right)^*\tilde{\mathbf{u}} = \left(\mathbf{v}^{(j)}_{st}\right)^*\mathbf{u}_{st} + \epsilon\left(\left(\mathbf{v}^{(j)}_{\mathcal{I}}\right)^*\mathbf{u}_{st} - \sum_{l=1}^{k}\left(\left(\mathbf{v}^{(l)}_{\mathcal{I}}\right)^*\mathbf{u}_{st}\right)\left(\mathbf{v}^{(j)}_{st}\right)^*\mathbf{v}^{(l)}_{st}\right) = 0.$$
Then, $\tilde{\mathbf{u}}$ is orthogonal to every column vector of $V$. Note that $\tilde{\mathbf{u}}$ is appreciable. Let
$$\mathbf{u} = \frac{\tilde{\mathbf{u}}}{\left\|\tilde{\mathbf{u}}\right\|_2}. \qquad (15)$$
Then we have the desired result. ∎
Note that the proof of this proposition is constructive. By complex matrix analysis, we may derive a formula for the complex vector $\mathbf{u}_{st}$. Then we may calculate $\mathbf{u}$ by (14) and (15).
Corollary 3.4.
Suppose that $V \in \hat{\mathbb{C}}^{n \times k}$ is partially unitary, $k < n$. Then there is $W \in \hat{\mathbb{C}}^{n \times (n-k)}$ such that $(V, W)$ is unitary.
Corollary 3.5.
Suppose that $U_1 \in \hat{\mathbb{C}}^{n \times k}$, $U_2 \in \hat{\mathbb{C}}^{n \times (n-k)}$, $V_2 \in \hat{\mathbb{C}}^{n \times (n-k)}$, $k < n$, such that $\left(U_1, U_2\right)$ and $\left(U_1, V_2\right)$ are both unitary matrices in $\hat{\mathbb{C}}^{n \times n}$. Then there is a unitary matrix $Q \in \hat{\mathbb{C}}^{(n-k) \times (n-k)}$ such that $V_2 = U_2 Q$.
Proof.
Set . Since and are unitary, we have
Thus,
Note that , and . Thus, is a unitary matrix in . This completes the proof. ∎
Let $A = \left(a_{ij}\right) \in \hat{\mathbb{C}}^{m \times n}$. Define the Frobenius norm of $A$ as
$$\|A\|_F = \begin{cases} \sqrt{\sum_{i=1}^m\sum_{j=1}^n \left|a_{ij}\right|^2}, & \text{if } A_{st} \neq O, \\ \left\|A_{\mathcal{I}}\right\|_F\,\epsilon, & \text{otherwise}. \end{cases} \qquad (16)$$
The Frobenius norm of a dual complex matrix is actually the 2-norm of the vectorization of that matrix. Thus, it has all the properties of a vector norm.
If $A \in \hat{\mathbb{C}}^{m \times n}$ and $B \in \hat{\mathbb{C}}^{n \times p}$, then
$$\|AB\|_F \le \|A\|_F\,\|B\|_F.$$
This property can be proved directly.
We also have the following proposition.
Proposition 3.6.
Suppose that $U \in \hat{\mathbb{C}}^{m \times k}$ is partially unitary, and $\mathbf{x} \in \hat{\mathbb{C}}^k$. Then
$$\|U\mathbf{x}\|_2 = \|\mathbf{x}\|_2. \qquad (17)$$
Proof.
We are ready to establish unitary invariance of dual complex matrices. We have the following theorem.
Theorem 3.7.
Suppose that $U \in \hat{\mathbb{C}}^{m \times m}$ and $V \in \hat{\mathbb{C}}^{n \times n}$ are unitary, and $A \in \hat{\mathbb{C}}^{m \times n}$. Then
$$\|UAV\|_F = \|A\|_F. \qquad (18)$$
Proof.
Write the columns of $A$ as $\mathbf{a}_1, \dots, \mathbf{a}_n$, so that $A = \left(\mathbf{a}_1, \dots, \mathbf{a}_n\right)$
and
$$UA = \left(U\mathbf{a}_1, \dots, U\mathbf{a}_n\right).$$
Then $U$ is an $m \times m$ unitary matrix, and hence partially unitary. We have $\|A\|_F^2 = \sum_{j=1}^n\left\|\mathbf{a}_j\right\|_2^2$ and $\|UA\|_F^2 = \sum_{j=1}^n\left\|U\mathbf{a}_j\right\|_2^2$. By Proposition 3.6, we have $\left\|U\mathbf{a}_j\right\|_2 = \left\|\mathbf{a}_j\right\|_2$ for $j = 1, \dots, n$.
Thus, $\|UA\|_F = \|A\|_F$. Similarly, we may show that $\|UAV\|_F = \|UA\|_F$. The conclusion follows. ∎
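As a quick numerical illustration of the Frobenius norm (16) and of Theorem 3.7, the sketch below stores a dual complex matrix as its pair of complex parts and checks that multiplying by a dual complex unitary matrix leaves the norm unchanged; the helper `dc_frobenius` and the particular construction of the unitary factor are our own illustrative choices.

```python
import numpy as np

def dc_frobenius(A_st, A_I):
    """Frobenius norm (16) of A = A_st + A_I*eps, returned as a dual number (st, I)."""
    s = np.linalg.norm(A_st)
    if s > 0:                                    # A is appreciable
        return (s, np.sum(np.real(np.conj(A_st) * A_I)) / s)
    return (0.0, np.linalg.norm(A_I))            # infinitesimal matrix

rng = np.random.default_rng(0)
A_st = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
A_I = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))

# a dual complex unitary U = Q(I + S*eps) with Q complex unitary, S skew-Hermitian
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S = (M - M.conj().T) / 2
U_st, U_I = Q, Q @ S

# UA has standard part U_st A_st and infinitesimal part U_st A_I + U_I A_st
print(dc_frobenius(A_st, A_I))
print(dc_frobenius(U_st @ A_st, U_st @ A_I + U_I @ A_st))   # same dual number up to rounding
```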
4 Eigenvalues of Dual Complex Matrices
Suppose that $A \in \hat{\mathbb{C}}^{n \times n}$. If there are $\lambda \in \hat{\mathbb{C}}$ and $\mathbf{x} \in \hat{\mathbb{C}}^n$, where $\mathbf{x}$ is appreciable, such that
$$A\mathbf{x} = \lambda\mathbf{x}, \qquad (19)$$
then we say that $\lambda$ is an eigenvalue of $A$, with $\mathbf{x}$ as a corresponding eigenvector. Here, we require that the eigenvector $\mathbf{x}$ is appreciable.
We now establish conditions for a dual complex number to be an eigenvalue of a square dual complex matrix.
Theorem 4.1.
Suppose that $A = A_{st} + A_{\mathcal{I}}\epsilon \in \hat{\mathbb{C}}^{n \times n}$. Then $\lambda = \lambda_{st} + \lambda_{\mathcal{I}}\epsilon$ is an eigenvalue of $A$ with an eigenvector $\mathbf{x} = \mathbf{x}_{st} + \mathbf{x}_{\mathcal{I}}\epsilon$ only if $\lambda_{st}$ is an eigenvalue of the complex matrix $A_{st}$ with an eigenvector $\mathbf{x}_{st}$, i.e., $\mathbf{x}_{st} \neq \mathbf{0}$ and
$$A_{st}\mathbf{x}_{st} = \lambda_{st}\mathbf{x}_{st}. \qquad (20)$$
Furthermore, if $\lambda_{st}$ is an eigenvalue of the complex matrix $A_{st}$ with an eigenvector $\mathbf{x}_{st}$, then $\lambda = \lambda_{st} + \lambda_{\mathcal{I}}\epsilon$ is an eigenvalue of $A$ with an eigenvector $\mathbf{x} = \mathbf{x}_{st} + \mathbf{x}_{\mathcal{I}}\epsilon$ if and only if $\lambda_{\mathcal{I}}$ and $\mathbf{x}_{\mathcal{I}}$ satisfy
$$\left(A_{st} - \lambda_{st}I\right)\mathbf{x}_{\mathcal{I}} = \lambda_{\mathcal{I}}\mathbf{x}_{st} - A_{\mathcal{I}}\mathbf{x}_{st}. \qquad (21)$$
Proof.
Suppose that $A \in \hat{\mathbb{C}}^{n \times n}$ is a Hermitian matrix. For any $\mathbf{x} \in \hat{\mathbb{C}}^n$, we have
$$\left(\mathbf{x}^* A \mathbf{x}\right)^* = \mathbf{x}^* A \mathbf{x}.$$
This implies that $\mathbf{x}^* A \mathbf{x}$ is a dual number. With the total order of dual numbers defined in Section 2, we may define positive semidefiniteness and positive definiteness of Hermitian matrices in $\hat{\mathbb{C}}^{n \times n}$. A Hermitian matrix $A$ is called positive semidefinite if for any $\mathbf{x} \in \hat{\mathbb{C}}^n$, $\mathbf{x}^* A \mathbf{x} \ge 0$; $A$ is called positive definite if for any $\mathbf{x} \in \hat{\mathbb{C}}^n$ with $\mathbf{x}$ being appreciable, we have $\mathbf{x}^* A \mathbf{x} > 0$ and $\mathbf{x}^* A \mathbf{x}$ is appreciable.
Theorem 4.2.
An eigenvalue of a Hermitian matrix $A \in \hat{\mathbb{C}}^{n \times n}$ must be a dual number, and its standard part is an eigenvalue of the complex Hermitian matrix $A_{st}$. Furthermore, assume that $\lambda = \lambda_{st} + \lambda_{\mathcal{I}}\epsilon$ is an eigenvalue of $A$ with a corresponding eigenvector $\mathbf{x} = \mathbf{x}_{st} + \mathbf{x}_{\mathcal{I}}\epsilon$, where $\mathbf{x}_{st} \neq \mathbf{0}$. Then we have
$$\lambda_{\mathcal{I}} = \frac{\mathbf{x}_{st}^* A_{\mathcal{I}}\mathbf{x}_{st}}{\mathbf{x}_{st}^*\mathbf{x}_{st}}. \qquad (23)$$
A Hermitian matrix $A \in \hat{\mathbb{C}}^{n \times n}$ has at most $n$ dual number eigenvalues and no other eigenvalues.
An eigenvalue of a positive semidefinite Hermitian matrix must be a nonnegative dual number. In that case, $A_{st}$ must be positive semidefinite. An eigenvalue of a positive definite Hermitian matrix must be an appreciable positive dual number. In that case, $A_{st}$ must be positive definite.
Proof.
Suppose that $A = A_{st} + A_{\mathcal{I}}\epsilon$ is a Hermitian matrix, and $\lambda = \lambda_{st} + \lambda_{\mathcal{I}}\epsilon$ is an eigenvalue of $A$, with $\mathbf{x} = \mathbf{x}_{st} + \mathbf{x}_{\mathcal{I}}\epsilon$ as the corresponding eigenvector. Then we have $A\mathbf{x} = \lambda\mathbf{x}$, and $\mathbf{x}$ is appreciable. We have
$$A_{st}\mathbf{x}_{st} + \left(A_{st}\mathbf{x}_{\mathcal{I}} + A_{\mathcal{I}}\mathbf{x}_{st}\right)\epsilon = \lambda_{st}\mathbf{x}_{st} + \left(\lambda_{st}\mathbf{x}_{\mathcal{I}} + \lambda_{\mathcal{I}}\mathbf{x}_{st}\right)\epsilon.$$
As $A_{st}\mathbf{x}_{st} = \lambda_{st}\mathbf{x}_{st}$, $\mathbf{x}_{st} \neq \mathbf{0}$, and $A_{st}$ is Hermitian, considering the infinitesimal part of the equality, we have
$$A_{st}\mathbf{x}_{\mathcal{I}} + A_{\mathcal{I}}\mathbf{x}_{st} = \lambda_{st}\mathbf{x}_{\mathcal{I}} + \lambda_{\mathcal{I}}\mathbf{x}_{st}.$$
Since $\mathbf{x}_{st}^* A_{st} = \lambda_{st}\mathbf{x}_{st}^*$, and $\lambda_{st}$ is a real number, multiplying the above equality by $\mathbf{x}_{st}^*$ on the left reduces it to
$$\mathbf{x}_{st}^* A_{\mathcal{I}}\mathbf{x}_{st} = \lambda_{\mathcal{I}}\mathbf{x}_{st}^*\mathbf{x}_{st},$$
which proves (23). Hence, $\lambda$ is a dual number, and $\lambda_{st}$ is an eigenvalue of $A_{st}$.
The other conclusions follow from Theorem 4.1 and complex matrix theory. ∎
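A minimal numeric illustration of formula (23) as reconstructed above, assuming a simple standard eigenvalue; the variable names and the random test matrices are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
def rand_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

# a random dual complex Hermitian matrix A = A_st + A_I*eps
A_st, A_I = rand_hermitian(n), rand_hermitian(n)

# take a standard eigenpair (lam_st, x_st) of A_st with a simple eigenvalue
lam_st, X = np.linalg.eigh(A_st)
x_st = X[:, 0]

# infinitesimal part of the corresponding dual eigenvalue, as in (23)
lam_I = np.real(x_st.conj() @ A_I @ x_st) / np.real(x_st.conj() @ x_st)
print(lam_st[0], lam_I)     # dual eigenvalue lam_st[0] + lam_I * eps
```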
Consider eigenvectors of a dual complex Hermitian matrix, associated with two eigenvalues with distinct standard parts. We have the following proposition.
Proposition 4.3.
Two eigenvectors of a Hermitian matrix $A \in \hat{\mathbb{C}}^{n \times n}$, associated with two eigenvalues with distinct standard parts, are orthogonal to each other.
Proof.
Suppose that $\mathbf{x}$ and $\mathbf{y}$ are two eigenvectors of a Hermitian matrix $A$, associated with two eigenvalues $\lambda$ and $\mu$, respectively, and $\lambda_{st} \neq \mu_{st}$. By Theorem 4.2, $\lambda$ and $\mu$ are dual numbers. We have
$$\lambda\mathbf{x}^*\mathbf{y} = \left(A\mathbf{x}\right)^*\mathbf{y} = \mathbf{x}^*\left(A\mathbf{y}\right) = \mu\mathbf{x}^*\mathbf{y},$$
i.e.,
$$\left(\lambda - \mu\right)\mathbf{x}^*\mathbf{y} = 0.$$
Since $\lambda_{st} \neq \mu_{st}$, $\left(\lambda - \mu\right)^{-1}$ exists. We have $\mathbf{x}^*\mathbf{y} = 0$. ∎
The following is the unitary decomposition theorem for dual complex Hermitian matrices.
Theorem 4.4.
Suppose that $A \in \hat{\mathbb{C}}^{n \times n}$ is a Hermitian matrix. Then there are a unitary matrix $U \in \hat{\mathbb{C}}^{n \times n}$ and a diagonal matrix $\Sigma \in \mathbb{D}^{n \times n}$ such that $A = U\Sigma U^*$, where
$$\Sigma = \mathrm{diag}\left(\lambda_1 + \lambda_{1,1}\epsilon, \dots, \lambda_1 + \lambda_{1,k_1}\epsilon, \dots, \lambda_r + \lambda_{r,1}\epsilon, \dots, \lambda_r + \lambda_{r,k_r}\epsilon\right), \qquad (24)$$
with the diagonal entries of $\Sigma$ being eigenvalues of $A$,
$$\lambda_1 > \lambda_2 > \dots > \lambda_r, \qquad (25)$$
for $i = 1, \dots, r$ and $j = 1, \dots, k_i$, $\lambda_i$ are real numbers, $\lambda_i$ is a $k_i$-multiple eigenvalue of $A_{st}$, $\lambda_{i,j}$ are also real numbers, $\lambda_{i,1} \ge \dots \ge \lambda_{i,k_i}$, and $k_1 + \dots + k_r = n$. Counting possible multiplicities $k_i$, the form (24) is unique.
Proof.
Suppose that $A$ is a dual complex Hermitian matrix. Denote $A = A_{st} + A_{\mathcal{I}}\epsilon$, where $A_{st}, A_{\mathcal{I}} \in \mathbb{C}^{n \times n}$. Then $A_{st}$ and $A_{\mathcal{I}}$ are complex Hermitian matrices. Thus, there is an $n \times n$ complex unitary matrix $U_1$ and a real diagonal matrix $\Lambda = \mathrm{diag}\left(\lambda_1 I_{k_1}, \dots, \lambda_r I_{k_r}\right)$ such that $A_{st} = U_1\Lambda U_1^*$, where $\lambda_1 > \dots > \lambda_r$, $I_{k_i}$ is a $k_i \times k_i$ identity matrix, and $k_1 + \dots + k_r = n$. Let $B = U_1^* A_{\mathcal{I}} U_1$. Then
$$U_1^* A U_1 = \Lambda + B\epsilon,$$
where, partitioned conformally with $\Lambda$, each block $B_{ij}$ of $B$ is a complex matrix of adequate dimensions, and each diagonal block $B_{ii}$ is Hermitian.
Let
$$S = \left(S_{ij}\right), \quad S_{ii} = O, \quad S_{ij} = \frac{B_{ij}}{\lambda_j - \lambda_i} \ \text{for } i \neq j, \quad \text{and} \quad U_2 = U_1\left(I + S\epsilon\right).$$
Then
$$\left(I + S\epsilon\right)^*\left(I + S\epsilon\right) = I + \left(S + S^*\right)\epsilon = I.$$
Direct calculations certify $S^* = -S$ and
$$U_2^* A U_2 = \Lambda + \left(B + \Lambda S - S\Lambda\right)\epsilon = \Lambda + \mathrm{diag}\left(B_{11}, \dots, B_{rr}\right)\epsilon.$$
Since $U_1$ and $I + S\epsilon$ are unitary matrices, then so is $U_2$. Noting that each $B_{ii}$ is a complex Hermitian matrix, by matrix theory, we can find complex unitary matrices $V_1, \dots, V_r$ that diagonalize $B_{11}, \dots, B_{rr}$, respectively. That is, there exist real numbers $\lambda_{i,j}$, $i = 1, \dots, r$, $j = 1, \dots, k_i$, such that
$$V_i^* B_{ii} V_i = \mathrm{diag}\left(\lambda_{i,1}, \dots, \lambda_{i,k_i}\right), \quad i = 1, \dots, r. \qquad (26)$$
Denote $V = \mathrm{diag}\left(V_1, \dots, V_r\right)$. We can easily verify that $V$ is unitary. Thus, $U = U_2 V$ is also unitary. Denote
$$\Sigma = \Lambda + \mathrm{diag}\left(\lambda_{1,1}, \dots, \lambda_{1,k_1}, \dots, \lambda_{r,1}, \dots, \lambda_{r,k_r}\right)\epsilon.$$
Then we have $U^* A U = \Sigma$, i.e., $A = U\Sigma U^*$, as required. With the ordering $\lambda_1 > \dots > \lambda_r$, we have (25). Thus, $\lambda_i + \lambda_{i,j}\epsilon$ are eigenvalues of $A$, with the corresponding columns of $U$ as the corresponding eigenvectors, for $i = 1, \dots, r$ and $j = 1, \dots, k_i$.
Note that those $\lambda_{i,j}$'s are all eigenvalues of the $B_{ii}$'s, where $B_{ii} = P_i^* A_{\mathcal{I}} P_i$ with $P_i$ the submatrix of $U_1$ formed by an orthonormal basis of the eigenspace of $A_{st}$ associated with $\lambda_i$, for $i = 1, \dots, r$. To show the desired uniqueness of (24), it suffices to show that for any other orthonormal basis of the eigenspace of $A_{st}$ associated with $\lambda_i$, say $\tilde{P}_i$, those $\lambda_{i,j}$'s are all eigenvalues of $\tilde{P}_i^* A_{\mathcal{I}}\tilde{P}_i$ as well. Observe that there exists a complex unitary matrix $Q_i$ such that $\tilde{P}_i = P_i Q_i$. Then
$$\tilde{P}_i^* A_{\mathcal{I}}\tilde{P}_i = Q_i^* P_i^* A_{\mathcal{I}} P_i Q_i = Q_i^* B_{ii} Q_i.$$
Since $Q_i$ is also unitary, the $\lambda_{i,j}$'s are all eigenvalues of $\tilde{P}_i^* A_{\mathcal{I}}\tilde{P}_i$. The uniqueness is thus proved. ∎
By the above theorem, we have the following theorem.
Theorem 4.5.
Suppose that $A \in \hat{\mathbb{C}}^{n \times n}$ is Hermitian. Then $A$ has exactly $n$ eigenvalues, which are all dual numbers. There are also $n$ eigenvectors, associated with these eigenvalues. The Hermitian matrix $A$ is positive semidefinite or definite if and only if all of these eigenvalues are nonnegative, or positive and appreciable, respectively.
5 Singular Value Decomposition of Dual Complex Matrices
We need the following theorem for SVD of dual complex matrices.
Theorem 5.1.
Suppose that $A = A_{st} + A_{\mathcal{I}}\epsilon \in \hat{\mathbb{C}}^{m \times n}$ and $t = \mathrm{Rank}\left(A_{st}\right)$. Then there exists a unitary matrix $V \in \hat{\mathbb{C}}^{n \times n}$ such that
$$V^*\left(A^* A\right)V = \mathrm{diag}\left(\mu_1 + \nu_1\epsilon, \dots, \mu_t + \nu_t\epsilon, 0, \dots, 0\right), \qquad (27)$$
where $\mu_1 \ge \dots \ge \mu_t > 0$ and $\nu_1, \dots, \nu_t$ are real numbers, with $\nu_i \ge \nu_{i+1}$ whenever $\mu_i = \mu_{i+1}$. Counting possible multiplicities, the real numbers $\mu_i$ and $\nu_i$ for $i = 1, \dots, t$ are uniquely determined.
Proof.
By Theorems 4.4 and 4.5, since $A^* A$ is a positive semidefinite dual complex Hermitian matrix, $A^* A$ can be diagonalized by a unitary matrix $V$ as defined in Theorem 4.4, and has exactly $n$ eigenvalues, which are all nonnegative dual numbers and may be denoted as $\mu_1 + \nu_1\epsilon, \dots, \mu_n + \nu_n\epsilon$, with $\mu_1 \ge \dots \ge \mu_n \ge 0$. We now need to show that if $\mu_i = 0$, then $\nu_i = 0$. Note that, by (23),
$$\nu_i = \frac{\mathbf{x}_{st}^*\left(A_{st}^* A_{\mathcal{I}} + A_{\mathcal{I}}^* A_{st}\right)\mathbf{x}_{st}}{\mathbf{x}_{st}^*\mathbf{x}_{st}}, \qquad (28)$$
where $\mathbf{x}_{st}$ is a corresponding eigenvector of $\left(A^* A\right)_{st} = A_{st}^* A_{st}$. Since
$$\mu_i = \frac{\mathbf{x}_{st}^* A_{st}^* A_{st}\mathbf{x}_{st}}{\mathbf{x}_{st}^*\mathbf{x}_{st}} = \frac{\left\|A_{st}\mathbf{x}_{st}\right\|_2^2}{\mathbf{x}_{st}^*\mathbf{x}_{st}} = 0, \qquad (29)$$
we have
$$A_{st}\mathbf{x}_{st} = \mathbf{0}, \qquad (30)$$
and hence $\mathbf{x}_{st}^* A_{st}^* = \mathbf{0}^\top$. Therefore, we have
$$\mathbf{x}_{st}^*\left(A_{st}^* A_{\mathcal{I}} + A_{\mathcal{I}}^* A_{st}\right)\mathbf{x}_{st} = \left(A_{st}\mathbf{x}_{st}\right)^* A_{\mathcal{I}}\mathbf{x}_{st} + \left(A_{\mathcal{I}}\mathbf{x}_{st}\right)^* A_{st}\mathbf{x}_{st} = 0.$$
By equations (28) and (29), we have $\nu_i = 0$ for every $i$ with $\mu_i = 0$. This completes the proof.
∎
We now have the SVD theorem for dual complex matrices.
Theorem 5.2.
Suppose that $A \in \hat{\mathbb{C}}^{m \times n}$. Then there exist a unitary matrix $U \in \hat{\mathbb{C}}^{m \times m}$ and a unitary matrix $V \in \hat{\mathbb{C}}^{n \times n}$ such that
$$A = U\begin{pmatrix}\Sigma_r & O \\ O & O\end{pmatrix}V^*, \qquad (31)$$
where $\Sigma_r \in \mathbb{D}^{r \times r}$ is a diagonal matrix, taking the form
$$\Sigma_r = \mathrm{diag}\left(\sigma_1, \dots, \sigma_t, \sigma_{t+1}, \dots, \sigma_r\right),$$
$\sigma_1 \ge \dots \ge \sigma_t$ are positive appreciable dual numbers, and $\sigma_{t+1} \ge \dots \ge \sigma_r$ are positive infinitesimal dual numbers. Counting possible multiplicities of the diagonal entries, the form $\Sigma_r$ is unique.
Proof.
Let . By Theorem 5.1, there exists a complex unitary matrix as defined in Theorem 4.4 such that can be diagonalized as in (27). Let , and
(32) |
with , , , . Denote and . By direct calculation, we have
which leads to
Therefore, with a complex matrix .
Let . Then
By Corollary 3.4, we may take such that is a unitary matrix. Then
where is an complex matrix. Thus,
By matrix theory, there exist complex unitary matrices and such that , where with for any and
with . Let be the number of positive ’s (counting multiplicity). Apparently, are the square roots of real positive eigenvalues of the Hermitian matrix . Denote
(33) |
It is obvious that both and are unitary and . Set , and
(34) |
Then we have (31). The uniqueness of follows from Theorems 4.4 and 5.1.
Now we claim that the real positive numbers are also unique. Note that and are uniquely determined from Theorems 4.4 and 5.1, while and may have different choices, provided that , expand and to get unitary matrices and , respectively. Among all these choices, pick any two distinct pairs and (i.e., or , or both). Applying Corollary 3.5, we can find unitary matrices and such that
(35) |
It is known from Proposition 3.2 that and are complex unitary matrices in . Similarly, and are complex unitary matrices in . Thus, the unitary matrices and will satisfy
(36) |
Denote
(37) |
and
(38) |
where , , , are complex matrices. Direct calculations lead to
which implies that . Similarly, we have
(39) | |||||
where the last equality follows from the fact that
Thus, . Note that and are complex unitary matrices. It follows from complex matrix theory that the Hermitian matrices and will have the same eigenvalues, whose square roots are exactly as displayed in (34). The desired uniqueness of the claim is obtained. This completes the proof. ∎
6 Ranks of Dual Complex Matrices
In Theorem 5.2, the dual numbers $\sigma_1, \dots, \sigma_r$, and possibly $\sigma_{r+1} = \dots = \sigma_{\min\{m,n\}} = 0$, if $r < \min\{m, n\}$, are called the singular values of $A$, the integer $r$ is called the rank of $A$, and the integer $t$ is called the appreciable rank of $A$. We denote the rank of $A$ by $\mathrm{Rank}(A)$, and the appreciable rank of $A$ by $\mathrm{ARank}(A)$.
Proposition 6.1.
Suppose that $U \in \hat{\mathbb{C}}^{m \times m}$ and $V \in \hat{\mathbb{C}}^{n \times n}$ are unitary, and $A \in \hat{\mathbb{C}}^{m \times n}$. Then
$$\mathrm{Rank}(UAV) = \mathrm{Rank}(A), \qquad (40)$$
and
$$\mathrm{ARank}(UAV) = \mathrm{ARank}(A). \qquad (41)$$
Proof.
Proposition 6.2.
Suppose that $A \in \hat{\mathbb{C}}^{m \times n}$. Then $\mathrm{Rank}\left(A^*\right) = \mathrm{Rank}(A)$, and $\mathrm{ARank}\left(A^*\right) = \mathrm{ARank}(A)$.
Proof.
Assume that the SVD of $A$ is $A = U\Sigma V^*$ with unitary dual complex matrices $U$ and $V$. By virtue of (8), we have $A^* = V\Sigma^\top U^*$, and $\mathrm{Rank}\left(\Sigma^\top\right) = \mathrm{Rank}(\Sigma)$, $\mathrm{ARank}\left(\Sigma^\top\right) = \mathrm{ARank}(\Sigma)$. Note that $U$ and $V$ are unitary dual complex matrices, and all the diagonal entries of $\Sigma$ are nonnegative dual numbers. Combining this with Proposition 6.1, we can easily obtain all the desired assertions. ∎
The proof of the following proposition is direct. We omit its proof here.
Proposition 6.3.
Suppose that can be written as
where and , for . Then
We now prove the following theorem.
Theorem 6.4.
Suppose that and . Then,
(42) |
and
(43) |
Proof.
It suffices to prove
(44) |
and
(45) |
The other parts hold similarly. By Theorem 5.2, there are unitary matrices and such that (31) holds. Then (44) and (45) are equivalent to
(46) |
and
(47) |
By Proposition 6.1, it suffices to prove
(48) |
and
(49) |
where is the diagonal matrix in (31), . Now the conclusion follows from Proposition 6.3. ∎
Corollary 6.5.
Suppose that . Then
(50) |
Proof.
Corollary 6.6.
Suppose that and . Let . Then
(51) |
Proof.
Let and be the singular value decompositions of and , respectively. By setting , , and , one can verify that and are unitary dual complex matrices, and
Thus, and . Note that . The assertions follow readily from Theorem 6.4.
∎
Corollary 6.7.
Suppose that , . Then
(52) |
(53) |
Proof.
The following theorem indicates that the standard parts of the singular values of a dual complex matrix are exactly the singular values of the standard part of that dual complex matrix, and that the appreciable rank of a dual complex matrix is exactly the rank of the standard part of that dual complex matrix.
Theorem 6.8.
A dual complex matrix $A = A_{st} + A_{\mathcal{I}}\epsilon \in \hat{\mathbb{C}}^{m \times n}$ has singular values whose standard parts are $\sigma_1 \ge \dots \ge \sigma_{\min\{m,n\}}$ if and only if the complex matrix $A_{st}$ has singular values $\sigma_1 \ge \dots \ge \sigma_{\min\{m,n\}}$. We also have
$$\mathrm{ARank}(A) = \mathrm{Rank}\left(A_{st}\right). \qquad (54)$$
7 An Eckart-Young Like Theorem for Dual Complex Matrices
For large scale dual complex matrices, it is of great interest to extract their low rank approximations, for the sake of data reduction. In this section, the low rank approximation of a given dual complex matrix is discussed, by establishing an Eckart-Young like theorem analogous to the one for complex matrices.
Theorem 7.1.
Suppose that $A \in \hat{\mathbb{C}}^{m \times n}$ has the singular value decomposition $A = U\Sigma V^*$ of Theorem 5.2, with singular values $\sigma_1 \ge \sigma_2 \ge \dots$. If $k \le \mathrm{Rank}(A)$, then the matrix $A^{(k)} = \sum_{i=1}^{k}\sigma_i\mathbf{u}_i\mathbf{v}_i^*$, where $\mathbf{u}_i$ and $\mathbf{v}_i$ are the $i$th columns of $U$ and $V$, satisfies
$$\left\|A - A^{(k)}\right\|_F \le \left\|A - B\right\|_F$$
for any $B \in \hat{\mathbb{C}}^{m \times n}$ with rank at most $k$.
Proof.
Recall that for any dual complex matrix and dual complex unitary matrix and dual complex unitary matrix . If has nonzero singular values , then
Take any with rank at most . Let . Denote , and suppose that has singular value decomposition with and orthonormal sets and . By Corollary 3.4, the set can be extended to an orthonormal basis for . Then every unit vector can be written as for a unit vector so that
where the last equality follows from Proposition 3.6. By this, together with dual numbers , we have and for any unit vector ,
Therefore, we have
(56) |
and
(57) |
for . Since has rank at most , the matrix has also rank at most from Theorem 6.4. Hence, there is a unit vector satisfying . Consider the unit vector . By (56), we have
(58) |
It is known from Corollary 6.7 that . If , then set . Now we prove for every . Let
By applying Corollary 6.6, we know that has rank at most , and so does the dual complex matrix , due to Theorem 6.4. It is similar to the case , that there is a unit vector satisfying . Consider the unit vector . It is obvious that . Consequently, by (57), we have
(59) |
for all . By (58) and (59), we have
which means that the desired result holds. ∎
8 Numerical Experiments
8.1 Truncated SVD for Low-Rank Approximation
This subsection is devoted to the approach of finding the best low-rank approximation of a dual complex matrix based on the truncated SVD. Before proceeding, we first consider the unitary decomposition of a Hermitian dual complex matrix of the form $A = A_{st} + A_{\mathcal{I}}\epsilon$ with $A_{st}, A_{\mathcal{I}} \in \mathbb{C}^{n \times n}$. The algorithmic framework is stated in Algorithm 1; a hedged Python sketch of these steps is given after the list below.
- • Compute the eigenvalue decomposition of $A_{st}$: $A_{st} = U_1\Lambda U_1^*$, with $\Lambda = \mathrm{diag}\left(\lambda_1 I_{k_1}, \dots, \lambda_r I_{k_r}\right)$ and $\lambda_1 > \dots > \lambda_r$.
- • Compute $B = U_1^* A_{\mathcal{I}} U_1$, partitioned into blocks $B_{ij} \in \mathbb{C}^{k_i \times k_j}$ conformally with $\Lambda$.
- • Compute $S = \left(S_{ij}\right)$ with $S_{ii} = O$ and $S_{ij} = B_{ij}/\left(\lambda_j - \lambda_i\right)$ for $i \neq j$.
- • For $i = 1, \dots, r$, perform the eigenvalue decomposition of $B_{ii}$: $V_i^* B_{ii} V_i = \mathrm{diag}\left(\lambda_{i,1}, \dots, \lambda_{i,k_i}\right)$.
- • Set $V = \mathrm{diag}\left(V_1, \dots, V_r\right)$, and compute $U = U_1\left(I + S\epsilon\right)V$.
- • Set
$$\Sigma = \Lambda + \mathrm{diag}\left(\lambda_{1,1}, \dots, \lambda_{1,k_1}, \dots, \lambda_{r,1}, \dots, \lambda_{r,k_r}\right)\epsilon,$$
and output the unitary decomposition $A = U\Sigma U^*$.
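The following is a hedged Python sketch of the unitary decomposition carried out by Algorithm 1. It follows the construction used in the proof of Theorem 4.4 (diagonalize the standard part, correct the off-diagonal blocks, then diagonalize the Hermitian diagonal blocks of the transformed infinitesimal part) rather than transcribing the algorithm's exact steps; the function name and the tolerance `tol` for grouping equal standard eigenvalues are our own choices.

```python
import numpy as np

def dual_hermitian_eig(A_st, A_I, tol=1e-10):
    """Unitary decomposition of a dual complex Hermitian matrix A = A_st + A_I*eps
    (both parts complex Hermitian): returns (U_st, U_I, lam_st, lam_I) with
    U = U_st + U_I*eps unitary and U^* A U = diag(lam_st + lam_I*eps)."""
    n = A_st.shape[0]
    lam_st, U1 = np.linalg.eigh(A_st)            # diagonalize the standard part
    B = U1.conj().T @ A_I @ U1                   # transformed infinitesimal part
    # group indices whose standard eigenvalues coincide (numerically)
    groups, start = [], 0
    for i in range(1, n + 1):
        if i == n or abs(lam_st[i] - lam_st[start]) > tol:
            groups.append(list(range(start, i)))
            start = i
    lam_I = np.zeros(n)
    V = np.zeros((n, n), dtype=complex)
    S = np.zeros((n, n), dtype=complex)
    for g in groups:
        # diagonalize the Hermitian diagonal block of B on each eigenspace
        mu, Vg = np.linalg.eigh(B[np.ix_(g, g)])
        lam_I[g], V[np.ix_(g, g)] = mu, Vg
    for gi in groups:
        for gj in groups:
            if gi[0] != gj[0]:
                # off-diagonal correction S_ij = B_ij / (lam_j - lam_i)
                S[np.ix_(gi, gj)] = B[np.ix_(gi, gj)] / (lam_st[gj[0]] - lam_st[gi[0]])
    U_st = U1 @ V                                # standard part of U = U_1 (I + S*eps) V
    U_I = U1 @ S @ V                             # infinitesimal part of U
    return U_st, U_I, lam_st, lam_I

# quick check on a random dual Hermitian matrix
rng = np.random.default_rng(0)
M1 = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
M2 = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A_st, A_I = (M1 + M1.conj().T) / 2, (M2 + M2.conj().T) / 2
U_st, U_I, lam_st, lam_I = dual_hermitian_eig(A_st, A_I)
# standard part: U_st^* A_st U_st should equal diag(lam_st)
print(np.allclose(U_st.conj().T @ A_st @ U_st, np.diag(lam_st)))
# eps part: U_I^* A_st U_st + U_st^* A_I U_st + U_st^* A_st U_I should equal diag(lam_I)
print(np.allclose(U_I.conj().T @ A_st @ U_st + U_st.conj().T @ A_I @ U_st
                  + U_st.conj().T @ A_st @ U_I, np.diag(lam_I)))
```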
Based on the unitary decomposition as presented in Algorithm 1, we are now in a position to present the procedure for the singular value decomposition of any given dual complex matrix $A = A_{st} + A_{\mathcal{I}}\epsilon \in \hat{\mathbb{C}}^{m \times n}$; see Algorithm 2.
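As a complement, the following minimal Python sketch computes the singular values of a dual complex matrix as dual numbers, under the simplifying assumption that $A_{st}$ has distinct positive singular values, so that all singular values of $A$ are appreciable; it is not a transcription of Algorithm 2, and the function name `dc_singular_values` is ours. In that case the standard parts are the singular values of $A_{st}$ (cf. Theorem 6.8), and the infinitesimal parts can be read off the diagonal of $U_1^* A_{\mathcal{I}} V_1$, where $A_{st} = U_1\Sigma_1 V_1^*$ is a complex SVD.

```python
import numpy as np

def dc_singular_values(A_st, A_I):
    """Singular values of A = A_st + A_I*eps as dual numbers (sigma_st, sigma_I),
    assuming A_st has distinct, positive singular values."""
    U1, s, V1h = np.linalg.svd(A_st, full_matrices=False)
    B = U1.conj().T @ A_I @ V1h.conj().T     # U1^* A_I V1
    return s, np.real(np.diag(B))

rng = np.random.default_rng(3)
A_st = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
A_I = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
sig_st, sig_I = dc_singular_values(A_st, A_I)
print(sig_st)   # singular values of the standard part
print(sig_I)    # infinitesimal parts of the dual singular values
```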
According to the Eckart-Young like theorem as stated in Theorem 7.1, the low-rank approximation of $A$ can be obtained by the truncated SVD. Specifically, given $A \in \hat{\mathbb{C}}^{m \times n}$ and a positive integer $k$, Algorithm 2 can produce the SVD of $A$ in terms of $A = U\Sigma V^*$. Then the best low-rank approximation of $A$ with rank no more than $k$, termed $A^{(k)}$, can be obtained by
$$A^{(k)} = \sum_{i=1}^{k}\sigma_i\mathbf{u}_i\mathbf{v}_i^*, \qquad (60)$$
where $\mathbf{u}_i$ and $\mathbf{v}_i$ are the $i$th columns of $U$ and $V$, respectively. Note that such a low-rank approximation may not be unique. For example, when $\sigma_k = \sigma_{k+1}$, then both $A^{(k)}$ as defined in (60) and
$$\tilde{A}^{(k)} = \sum_{i=1}^{k-1}\sigma_i\mathbf{u}_i\mathbf{v}_i^* + \sigma_{k+1}\mathbf{u}_{k+1}\mathbf{v}_{k+1}^*, \qquad (61)$$
with $\sigma_k = \sigma_{k+1}$, are best low-rank approximations of $A$ with rank no more than $k$, in the sense that
$$\left\|A - A^{(k)}\right\|_F = \left\|A - \tilde{A}^{(k)}\right\|_F \le \left\|A - B\right\|_F \quad \text{for any } B \text{ with } \mathrm{Rank}(B) \le k. \qquad (62)$$
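To make the dual-number arithmetic behind (60)-(62) concrete, the sketch below computes the relative truncation error $\|A - A^{(k)}\|_F / \|A\|_F$ as a dual number from the dual singular values, using $\|A - A^{(k)}\|_F^2 = \sum_{i>k}\sigma_i^2$ and $\|A\|_F^2 = \sum_i\sigma_i^2$; the helper functions and the sample values are ours, and all singular values are assumed appreciable.

```python
import numpy as np

def d_sqrt(a):                      # square root (1) of an appreciable dual number (st, I)
    r = np.sqrt(a[0])
    return (r, a[1] / (2.0 * r))

def d_div(a, b):                    # quotient of dual numbers, b appreciable
    return (a[0] / b[0], (a[1] * b[0] - a[0] * b[1]) / b[0] ** 2)

def truncation_relative_error(sig_st, sig_I, k):
    """Relative error ||A - A^(k)||_F / ||A||_F as a dual number (st, I),
    computed from the dual singular values sigma_i = sig_st[i] + sig_I[i]*eps."""
    # sigma_i^2 = sig_st^2 + 2 sig_st sig_I eps, summed as dual numbers
    tail = (np.sum(sig_st[k:] ** 2), 2.0 * np.sum(sig_st[k:] * sig_I[k:]))
    total = (np.sum(sig_st ** 2), 2.0 * np.sum(sig_st * sig_I))
    return d_div(d_sqrt(tail), d_sqrt(total))

# example with made-up dual singular values
sig_st = np.array([10.0, 5.0, 2.0, 1.0, 0.5])
sig_I = np.array([0.3, -0.2, 0.1, 0.0, 0.05])
print(truncation_relative_error(sig_st, sig_I, k=2))
```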
8.2 Numerical Results
In this subsection, we present some numerical results to show the efficiency of the proposed algorithms. All the codes are written in Python 3.9.5. The numerical experiments were done on a MacBook notebook with an Intel m3 dual-core processor running at 1.2 GHz and 8 GB of RAM.
Example 8.1.
The dual complex matrix is given by
and
By using Algorithm 2, all the nonzero singular values of dual complex matrix are positive appreciable dual numbers given by
The corresponding partially unitary matrix is given by
and the unitary matrix is given by
Example 8.2.
We test dual complex matrices with multiple standard singular values. The matrix is given by
and
Here the complex matrix is constructed with singular values . By using Algorithm 2, all the nonzero singular values of dual complex matrix are positive dual numbers given by
The corresponding partially unitary matrix is given by
and the unitary matrix is given by
Example 8.3.
We test proposed algorithms on some randomly generated data to illustrate the possible application to brain science. The dual complex matrix is given by
and
Here the phase matrix is randomly generated such that the moduli of its elements are equal to 1, and the relative phase matrix is calculated by subtracting from each element the mean of its row. See [1]. By using Algorithm 1, all the eigenvalues of the dual complex matrix are positive dual numbers given by
The corresponding unitary matrix is given by
and
In the principal component analysis, one can choose the first few columns of the computed unitary matrix to generate principal components of the data. Besides, we also tested the phase matrix at a larger scale; the unitary matrix can be computed within about 4.5 seconds.
Example 8.4.
The truncated SVD is used to approximate the sample images "Peppers" and "Lena". Both of them are gray-scale images of the same size. For these images, we first use the 2-dimensional discrete Fourier transform to generate the complex matrices $A_{st}$ and $A_{\mathcal{I}}$ which make up the dual complex matrix $A = A_{st} + A_{\mathcal{I}}\epsilon$. The low-rank approximation can be obtained by the truncated SVD with the first $k$ largest singular values. The original images are approximated by applying the 2-dimensional inverse discrete Fourier transform to the standard part and the infinitesimal part of $A^{(k)}$, respectively. Given the value of $k$, we have $\left\|A - A^{(k)}\right\|_F = \sqrt{\sum_{i > k}\sigma_i^2}$ and $\|A\|_F = \sqrt{\sum_i\sigma_i^2}$, where $\sigma_i$ are all the singular values of $A$. Note that $\left\|A - A^{(k)}\right\|_F$, $\|A\|_F$ and the relative error are dual numbers since the $\sigma_i$'s are dual numbers. The relative errors of approximation are reported in Table 1. The approximated images with the $k$-truncated SVD are also presented in Figure 1.
| $k$ | 5 | 15 | 25 | 35 | 45 |
|---|---|---|---|---|---|
| relative error | 0.2546-0.2304 | 0.1446-0.1345 | 0.1052-0.0932 | 0.0838-0.0752 | 0.0696-0.0619 |

Figure 1: Approximated images with the $k$-truncated SVD.
References
- [1] D.M. Alexander, C. Trengove, J.J. Wright, P.R. Boord and E. Gordon, “Measurement of phase gradients in the EEG”, Journal of Neuroscience Methods 156 (2006) 111-128.
- [2] D.M. Alexander, A.R. Nikolaev, P. Jurica, M. Zvyagintsev, K. Mathiak and C. van Leeuwen, “Global neuromagnetic cortical fields have non-zero velocity”, PloS One 11 (2016) e0148413.
- [3] D.M. Alexander, T. Ball, A. Schulze-Bonhage and C. van Leeuwen, “Large-scale cortical travelling waves predict localized future cortical signals”, PloS Computational Biology 15 (2019) e1007316.
- [4] D. Brezov, “Factorization and generalized roots of dual complex matrices with Rodrigues’ formula”, Advances in Applied Clifford Algebras 30 (2020) 29.
- [5] E.J. Candés, X. Li, Y. Ma and J. Wright, “Robust principal component analysis?”, J. ACM 58 (2011) Article 11.
- [6] W.K. Clifford, “Preliminary sketch of bi-quaternions”, Proceedings of the London Mathematical Society 4 (1873) 381-395.
- [7] I.S. Fischer, Dual-Number Methods in Kinematics, Statics and Dynamics, Boca Raton, Florida, CRC Press, 1998.
- [8] M.A. Güngör and Ö. Tetik, “De-Moivre and Euler formulae for dual complex numbers”, Universal Journal of Mathematics and Applications 2 (2019) 126-129.
- [9] F. Messelmi, “Dual-complex numbers and their holomorphic functions”, https://hal.archieves.ouvertes.fr/hal-01114178, (2015).
- [10] G. Matsuda, S. Kaji and H. Ochiai, “Anti-commutative dual complex numbers and 2D rigid transformation”, in: K. Anjyo, ed., Mathematical Progress in Expressive Image Synthesis I: Extended and Selected Results from the Symposium MEIS2013, Mathematics for Industry, Springer, Japan (2014) pp. 131-138.
- [11] L. Qi, C. Ling and H. Yan, “Dual quaternions and dual quaternion vectors”, November 2021, arXiv:2111.04491.