
Low Rank Approximation of Dual Complex Matrices

Liqun Qi$^{1}$,   David M. Alexander$^{2}$,   Zhongming Chen$^{3}$,   Chen Ling$^{4}$,   and Ziyan Luo$^{5}$

$^{1}$Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong; Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China. ([email protected])
$^{2}$The Institut de Neurosciences des Systèmes (INS, UMR1106), Faculty of Medicine, Aix-Marseille University, Marseille 13005, France. ([email protected])
$^{3}$Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China. ([email protected])
$^{4}$Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China. ([email protected])
$^{5}$Corresponding author. Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China. ([email protected]). This author's work was supported by Beijing Natural Science Foundation (Grant No. Z190002).
Abstract

Dual complex numbers can represent rigid body motion in 2D spaces. Dual complex matrices are linked with screw theory, and have potential applications in various areas. In this paper, we study low rank approximation of dual complex matrices. We define the $2$-norm for dual complex vectors, and the Frobenius norm for dual complex matrices. These norms are nonnegative dual numbers. We establish the unitary invariance property of dual complex matrices. We study eigenvalues of square dual complex matrices, and show that an $n\times n$ dual complex Hermitian matrix has exactly $n$ eigenvalues, which are dual numbers. We present a singular value decomposition (SVD) theorem for dual complex matrices, define ranks and appreciable ranks for dual complex matrices, and study their properties. We establish an Eckart-Young like theorem for dual complex matrices, and present an algorithm framework for low rank approximation of dual complex matrices via truncated SVD. The SVD of dual complex matrices also provides a basic tool for Principal Component Analysis (PCA) via these matrices. Numerical experiments are reported.

Key words. Dual complex matrices, conjugation, eigenvalues, Hermitian matrices, singular value decomposition, ranks, Eckart-Young like theorem.

1 Introduction

In 1873, W.K. Clifford [6] introduced dual numbers, dual complex numbers and dual quaternions. These have since become core concepts of Clifford algebra, or geometric algebra.

While dual quaternions can represent rigid body motions in 3D spaces, the primary application of dual complex numbers is in representing rigid body motions in 2D spaces [10]. Thus, an $m$-dimensional dual complex vector can represent a set of $m$ rigid body motions in 2D space, and an $m\times n$ dual complex matrix represents a linear transformation from the $n$-dimensional dual complex vector space to the $m$-dimensional dual complex vector space. Dual complex matrices are also linked with screw geometry or screw theory [7], and have potential applications in classical mechanics and robotics, complex representations of the Lorentz group in relativity and electrodynamics, conformal mappings in computer vision, the physics of scattering processes, etc.; see [4]. One important tool in data analysis is Principal Component Analysis (PCA) [5]. The core of PCA is low rank approximation of matrices. Thus, in this paper, we study low rank approximation of dual complex matrices.

Suppose that all the data points are stacked as column vectors of a matrix $M$. Then the matrix should (approximately) have low rank: mathematically,

M=L_{0}+N_{0},

where $L_{0}$ has low rank and $N_{0}$ is a small perturbation matrix. Classical Principal Component Analysis seeks the best rank-$k$ estimate of $L_{0}$ by solving

\min\{\|M-L\|_{2}:{\rm rank}(L)\leq k\}.

This problem can be solved via the singular value decomposition (SVD) and enjoys a number of optimality properties. See [5].

Now, $M$ is a dual complex matrix. Hence, we need a theory of low rank approximation for dual complex matrices, including unitary invariance of dual complex matrices, an SVD of dual complex matrices, a rank theory of dual complex matrices, and an Eckart-Young like theorem for dual complex matrices. In this paper, we study these issues.

In the next section, we introduce the $2$-norm for dual complex vectors. The $2$-norm of a dual complex vector is a nonnegative dual number. In Section 3, we define the Frobenius norm for dual complex matrices, and establish the unitary invariance property of dual complex matrices.

In Section 4, we study eigenvalues of dual complex matrices, in particular, dual complex Hermitian matrices. We show that an $n\times n$ dual complex Hermitian matrix has exactly $n$ eigenvalues, which are dual numbers. We prove a unitary decomposition theorem for dual complex Hermitian matrices. A singular value decomposition theorem for dual complex matrices is proved in Section 5. In Section 6, we define ranks and appreciable ranks for dual complex matrices, and study their properties. An Eckart-Young like theorem for dual complex matrices is established in Section 7.

In Section 8, an algorithm framework for low rank approximation of dual complex matrices via truncated SVD is presented, and numerical experiments are reported.

A dual complex number can represent a rigid body motion in a 2D space. Such a 2D space can be a plane or a surface. For example, the cortex can be regarded as a 2D space. In 2016, Alexander et al. [2] applied principal component analysis (PCA) to scalar-valued phase gradients to analyze plane waves in the cortex. In 2019, Alexander et al. [3] used PCA on complex-valued unit phase to analyze spiral waves in the cortex. The core of PCA is the singular value decomposition (SVD) of matrices [5]. In these two cases, SVDs of complex matrices were used. However, the plane waves and spiral waves in the cortex are correlated, and should not be analyzed separately. A possible way to overcome this defect is to combine the two kinds of analysis by using the SVD of dual complex matrices. In Section 8, we show that, computationally, this combination is possible.

Throughout the paper, scalars, vectors and matrices are denoted by small letters, bold small letters and capital letters, respectively.

2 The 2-Norm of Dual Complex Vectors

2.1 Dual Numbers

Denote by $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{D}$ the set of real numbers, the set of complex numbers, and the set of dual numbers, respectively. A dual number $q$ may be written as $q=q_{st}+q_{\mathcal{I}}\epsilon$, where $q_{st}$ and $q_{\mathcal{I}}$ are real numbers, and $\epsilon$ is the infinitesimal unit, satisfying $\epsilon^{2}=0$. We call $q_{st}$ the real part or the standard part of $q$, and $q_{\mathcal{I}}$ the dual part or the infinitesimal part of $q$. The infinitesimal unit $\epsilon$ is commutative in multiplication with complex numbers. The dual numbers form a commutative algebra of dimension two over the reals. If $q_{st}\not=0$, we say that $q$ is appreciable; otherwise, we say that $q$ is infinitesimal.

A total order $\leq$ for dual numbers was introduced in [11]. Given two dual numbers $p,q\in\mathbb{D}$, $p=p_{st}+p_{\mathcal{I}}\epsilon$, $q=q_{st}+q_{\mathcal{I}}\epsilon$, where $p_{st}$, $p_{\mathcal{I}}$, $q_{st}$ and $q_{\mathcal{I}}$ are real numbers, we say that $p\leq q$ if either $p_{st}<q_{st}$, or $p_{st}=q_{st}$ and $p_{\mathcal{I}}\leq q_{\mathcal{I}}$. In particular, we say that $p$ is positive, nonnegative, nonpositive or negative, if $p>0$, $p\geq 0$, $p\leq 0$ or $p<0$, respectively.

Suppose that $q=q_{st}+q_{\mathcal{I}}\epsilon\in\mathbb{D}$, and $q_{st}>0$. Then we define the square root

\sqrt{q}=\sqrt{q_{st}}+{q_{\mathcal{I}}\over 2\sqrt{q_{st}}}\epsilon. (1)

Conventionally, we have $\sqrt{0}=0$.
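Dual number arithmetic is straightforward to implement. The following minimal Python sketch (the class name DualNumber is our own illustrative choice, not from the paper) encodes addition, multiplication with $\epsilon^{2}=0$, and the square root (1).

import math
class DualNumber:
    """A dual number q = q_st + q_I*eps with eps**2 = 0."""
    def __init__(self, st, inf=0.0):
        self.st, self.inf = st, inf
    def __add__(self, other):
        return DualNumber(self.st + other.st, self.inf + other.inf)
    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return DualNumber(self.st * other.st, self.st * other.inf + self.inf * other.st)
    def sqrt(self):
        # square root (1); defined when the standard part is positive
        return DualNumber(math.sqrt(self.st), self.inf / (2.0 * math.sqrt(self.st)))
    def __repr__(self):
        return f"{self.st} + {self.inf}*eps"
q = DualNumber(4.0, 2.0)
r = q.sqrt()
print(r, r * r)   # 2.0 + 0.5*eps, and r*r recovers 4.0 + 2.0*eps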

2.2 Dual Complex Numbers

Denote the set of dual complex numbers by $\mathbb{DC}$. A dual complex number $q$ has the form [4, 8, 9]

q=q_{st}+q_{\mathcal{I}}\epsilon, (2)

where $q_{st}$ and $q_{\mathcal{I}}$ are complex numbers. Again, we call $q_{st}$ the real part or the standard part of $q$, and $q_{\mathcal{I}}$ the dual part or the infinitesimal part of $q$. We say that a dual complex number $q$ is appreciable if its standard part is nonzero. Otherwise, we say that it is infinitesimal. The multiplication of dual complex numbers is commutative.

Denote

q_{st}=q_{1}+q_{2}\mathbf{i},\ \ q_{\mathcal{I}}=q_{3}+q_{4}\mathbf{i},

where $q_{1},q_{2},q_{3},q_{4}\in\mathbb{R}$.

Recall that for a complex number $c=a+b\mathbf{i}$, where $a$ and $b$ are real numbers, its conjugate is $\bar{c}=a-b\mathbf{i}$. We also have $c\bar{c}=\bar{c}c=a^{2}+b^{2}=|c|^{2}$.

The conjugate of $q=q_{st}+q_{\mathcal{I}}\epsilon$ is

\bar{q}=\bar{q}_{st}+\bar{q}_{\mathcal{I}}\epsilon. (3)

We have

q\bar{q}=\bar{q}q=|q_{st}|^{2}+(q_{st}\bar{q}_{\mathcal{I}}+q_{\mathcal{I}}\bar{q}_{st})\epsilon=(q_{1}^{2}+q_{2}^{2})+2(q_{1}q_{3}+q_{2}q_{4})\epsilon.

It is a positive dual number if $q$ is appreciable, and $0$ otherwise. From this, we may define the magnitude of a dual complex number $q$ as

|q|=\left\{\begin{aligned} \sqrt{q_{1}^{2}+q_{2}^{2}}+{q_{1}q_{3}+q_{2}q_{4}\over\sqrt{q_{1}^{2}+q_{2}^{2}}}\epsilon,&\ {\rm if}\ q_{st}\not=0,\\ \sqrt{q_{3}^{2}+q_{4}^{2}}\epsilon,&\ {\rm otherwise.}\end{aligned}\right. (4)

By direct calculations, we have the following proposition.

Proposition 2.1.

The magnitude $|q|$ is a nonnegative dual number for any $q\in\mathbb{DC}$. If $q$ is appreciable, then

|q|=\sqrt{q\bar{q}}. (5)

For any $p,q\in\mathbb{DC}$, we have

(i) $|q|=|\bar{q}|$;

(ii) $|q|\geq 0$ for all $q$, and $|q|=0$ if and only if $q=0$;

(iii) $|pq|=|p||q|$;

(iv) $|p+q|\leq|p|+|q|$.

This proposition is a special case of Theorem 5.1 of [11]. Hence, we omit its proof here.
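As a quick numerical illustration of (4) and of item (iii) in Proposition 2.1, consider the following Python sketch, where a dual complex number is stored as a pair of Python complex numbers (standard part, infinitesimal part); the helper names dc_abs and dc_mul are our own.

def dc_abs(q_st, q_I):
    # Magnitude (4) of q = q_st + q_I*eps, returned as the dual number (standard, infinitesimal)
    if q_st != 0:
        a = abs(q_st)
        return (a, (q_st.real * q_I.real + q_st.imag * q_I.imag) / a)
    return (0.0, abs(q_I))
def dc_mul(p, q):
    # Product of two dual complex numbers stored as (standard, infinitesimal) pairs
    return (p[0] * q[0], p[0] * q[1] + p[1] * q[0])
p, q = (1 + 2j, 0.5 - 1j), (3 - 1j, 2 + 0j)
lhs = dc_abs(*dc_mul(p, q))                              # |pq|
mp, mq = dc_abs(*p), dc_abs(*q)
rhs = (mp[0] * mq[0], mp[0] * mq[1] + mp[1] * mq[0])     # |p||q|, multiplied as dual numbers
print(lhs, rhs)                                          # the two dual numbers agree, as in (iii)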

2.3 Dual Complex Vectors

Denote a dual complex vector by $\mathbf{x}=(x_{1},\cdots,x_{n})^{\top}\in{\mathbb{DC}}^{n}$. We say that $\mathbf{x}\in{\mathbb{DC}}^{n}$ is appreciable if at least one of its components is appreciable. We may also write

\mathbf{x}=\mathbf{x}_{st}+\mathbf{x}_{\mathcal{I}}\epsilon,

where $\mathbf{x}_{st},\mathbf{x}_{\mathcal{I}}\in{\mathbb{C}}^{n}$. Define $\mathbf{x}^{*}:=\bar{\mathbf{x}}^{\top}\equiv(\bar{x}_{1},\cdots,\bar{x}_{n})$. The $2$-norm of $\mathbf{x}\in{\mathbb{DC}}^{n}$ is defined as

\|\mathbf{x}\|_{2}=\left\{\begin{aligned} \sqrt{\sum_{i=1}^{n}|x_{i}|^{2}},&\ {\rm if}\ \mathbf{x}_{st}\not=\mathbf{0},\\ \|\mathbf{x}_{\mathcal{I}}\|_{2}\epsilon,&\ {\rm otherwise.}\end{aligned}\right. (6)

If $\|\mathbf{x}\|_{2}=1$, then we say that $\mathbf{x}$ is a unit dual complex vector. If $\mathbf{x}^{(1)},\cdots,\mathbf{x}^{(n)}\in{\mathbb{DC}}^{n}$ and $\left(\mathbf{x}^{(i)}\right)^{*}\mathbf{x}^{(j)}=\delta_{ij}$ for $i,j=1,\cdots,n$, where $\delta_{ij}$ is the Kronecker symbol, then we say that $\{\mathbf{x}^{(1)},\cdots,\mathbf{x}^{(n)}\}$ is an orthonormal basis of ${\mathbb{DC}}^{n}$.

We have the following proposition.

Proposition 2.2.

Suppose that $\mathbf{x},\mathbf{y}\in{\mathbb{DC}}^{n}$, and $q\in\mathbb{DC}$. Then,

(i) $\|\mathbf{x}\|_{2}\geq 0$, and $\|\mathbf{x}\|_{2}=0$ if and only if $\mathbf{x}=\mathbf{0}$;

(ii) $\|q\mathbf{x}\|_{2}=|q|\|\mathbf{x}\|_{2}$;

(iii) $\|\mathbf{x}+\mathbf{y}\|_{2}\leq\|\mathbf{x}\|_{2}+\|\mathbf{y}\|_{2}$.

This proposition can be proved by definition. It is a special case of Theorem 6.4 of [11]. Hence, we also omit its proof here.

Note that if both $\mathbf{x}$ and $q$ are appreciable, then $q\mathbf{x}$ is appreciable.
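As an illustration of (6), the following short Python function (our own helper, with a dual complex vector stored as a pair of NumPy arrays for its standard and infinitesimal parts) returns $\|\mathbf{x}\|_{2}$ as a dual number. In the appreciable case the infinitesimal part equals ${\rm Re}(\mathbf{x}_{st}^{*}\mathbf{x}_{\mathcal{I}})/\|\mathbf{x}_{st}\|_{2}$, which follows from the identity $\|\mathbf{x}\|_{2}^{2}=\mathbf{x}^{*}\mathbf{x}$ and the square root (1).

import numpy as np
def dc_vec_norm(x_st, x_I):
    # 2-norm (6) of x = x_st + x_I*eps, returned as the dual number (standard, infinitesimal)
    if np.any(x_st != 0):
        s = np.linalg.norm(x_st)
        return (s, np.real(np.vdot(x_st, x_I)) / s)
    return (0.0, np.linalg.norm(x_I))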

3 Unitary Invariance of Dual Complex Matrices

The collections of real, complex and dual complex $m\times n$ matrices are denoted by ${\mathbb{R}}^{m\times n}$, ${\mathbb{C}}^{m\times n}$ and ${\mathbb{DC}}^{m\times n}$, respectively.

A dual complex matrix $A=(a_{ij})\in{\mathbb{DC}}^{m\times n}$ can be written as

A=A_{st}+A_{\mathcal{I}}\epsilon, (7)

where $A_{st},A_{\mathcal{I}}\in{\mathbb{C}}^{m\times n}$. Again, we call $A_{st}$ and $A_{\mathcal{I}}$ the standard part and the infinitesimal part of $A$, respectively. The transpose of $A$ is $A^{\top}=(a_{ji})$. The conjugate of $A$ is $\bar{A}=(\bar{a}_{ij})$. The conjugate transpose of $A$ is $A^{*}=(\bar{a}_{ji})=\bar{A}^{\top}$.

Let $A\in{\mathbb{DC}}^{m\times n}$ and $B\in{\mathbb{DC}}^{n\times r}$. Then we have

(AB)^{\top}=B^{\top}A^{\top},~~(AB)^{*}=B^{*}A^{*}. (8)

Given a square dual complex matrix $A\in{\mathbb{DC}}^{n\times n}$, it is called invertible (nonsingular) if $AB=BA=I_{n}$ for some $B\in{\mathbb{DC}}^{n\times n}$, where $I_{n}$ is the $n\times n$ identity matrix. Such a $B$ is unique and is denoted by $A^{-1}$. The matrix $A$ is called Hermitian if $A^{*}=A$. Write $A=A_{st}+A_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{n\times n}$. Then $A$ is Hermitian if and only if both $A_{st}$ and $A_{\mathcal{I}}$ are complex Hermitian matrices. The matrix $A$ is called unitary if $A^{*}A=I_{n}$. Apparently, $A\in{\mathbb{DC}}^{n\times n}$ is unitary if and only if its column vectors form an orthonormal basis of ${\mathbb{DC}}^{n}$. Let $k\leq n$. We say that $A\in\mathbb{DC}^{n\times k}$ is partially unitary if its column vectors are unit vectors and are orthogonal to each other.

Suppose that $A\in{\mathbb{DC}}^{n\times n}$ is invertible. Then all column and row vectors of $A$ are appreciable. This can be proved directly from the definition. We also have the following proposition.

Proposition 3.1.

Suppose that $A\in{\mathbb{DC}}^{n\times n}$ is unitary. Then $AA^{*}=I_{n}$.

Proof.

Write $A=A_{st}+A_{\mathcal{I}}\epsilon$, where $A_{st},A_{\mathcal{I}}\in{\mathbb{C}}^{n\times n}$. Since $A$ is unitary, $A^{*}A=I_{n}$. This is equivalent to

A_{st}^{*}A_{st}=I_{n} (9)

and

A_{\mathcal{I}}^{*}A_{st}+A_{st}^{*}A_{\mathcal{I}}=O. (10)

By matrix analysis, from (9), we have

A_{st}A_{st}^{*}=I_{n}. (11)

By (10), we have

A_{st}(A_{\mathcal{I}}^{*}A_{st}+A_{st}^{*}A_{\mathcal{I}})A_{st}^{*}=O,

i.e.,

A_{st}A_{\mathcal{I}}^{*}+A_{\mathcal{I}}A_{st}^{*}=O. (12)

From (11) and (12), we have $AA^{*}=I_{n}$. ∎

Dual complex partially unitary matrices have the following properties.

Proposition 3.2.

Suppose that $U\in{\mathbb{DC}}^{n\times k}$ is partially unitary, $U=U_{st}+U_{\mathcal{I}}\epsilon$, where $U_{st},U_{\mathcal{I}}\in{\mathbb{C}}^{n\times k}$, $k\leq n$. Then $U_{st}$ is a complex partially unitary matrix, and

U_{st}^{*}U_{\mathcal{I}}+U_{\mathcal{I}}^{*}U_{st}=O. (13)

Proof.

Since $U$ is partially unitary, $U^{*}U=I_{k}$, i.e.,

\left(U_{st}^{*}+U_{\mathcal{I}}^{*}\epsilon\right)\left(U_{st}+U_{\mathcal{I}}\epsilon\right)=I_{k}.

This implies that

U_{st}^{*}U_{st}+\left(U_{st}^{*}U_{\mathcal{I}}+U_{\mathcal{I}}^{*}U_{st}\right)\epsilon=I_{k}.

Then we have (13), and $U_{st}^{*}U_{st}=I_{k}$, i.e., $U_{st}$ is a complex partially unitary matrix. ∎

Proposition 3.3.

Suppose that $U=U_{st}+U_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{n\times k}$ is partially unitary, where $U_{st},U_{\mathcal{I}}\in{\mathbb{C}}^{n\times k}$, $k<n$. Then there is a vector $\mathbf{v}\in{\mathbb{DC}}^{n}$ such that $(U,\mathbf{v})\in{\mathbb{DC}}^{n\times(k+1)}$ is partially unitary.

Proof.

By Proposition 3.2, $U_{st}$ is a complex partially unitary matrix. By complex matrix analysis, there is a complex vector $\mathbf{v}_{st}\in{\mathbb{C}}^{n}$ such that $(U_{st},\mathbf{v}_{st})$ is partially unitary. Denote the $j$th column vector of $U$ as $\mathbf{u}_{st,j}+\mathbf{u}_{\mathcal{I},j}\epsilon$ for $j=1,\cdots,k$. Now, let

\hat{\mathbf{v}}_{\mathcal{I}}=-\sum_{j=1}^{k}(\mathbf{u}_{\mathcal{I},j}^{*}\mathbf{v}_{st})\mathbf{u}_{st,j}. (14)

Then for $j=1,\cdots,k$,

(\mathbf{v}_{st}+\hat{\mathbf{v}}_{\mathcal{I}}\epsilon)^{*}(\mathbf{u}_{st,j}+\mathbf{u}_{\mathcal{I},j}\epsilon)=0.

Thus, $\mathbf{v}_{st}+\hat{\mathbf{v}}_{\mathcal{I}}\epsilon$ is orthogonal to every column vector of $U$. Note that $\mathbf{v}_{st}+\hat{\mathbf{v}}_{\mathcal{I}}\epsilon$ is appreciable. Let

\mathbf{v}={\mathbf{v}_{st}+\hat{\mathbf{v}}_{\mathcal{I}}\epsilon\over\|\mathbf{v}_{st}+\hat{\mathbf{v}}_{\mathcal{I}}\epsilon\|_{2}}. (15)

Then we have the desired result. ∎

Note that the proof of this proposition is constructive. By complex matrix analysis, we may derive a formula for $\mathbf{v}_{st}$. Then we may calculate $\mathbf{v}$ by (14) and (15).
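Under the assumption that $\mathbf{v}_{st}$ is taken from the orthogonal complement of the columns of $U_{st}$ (one possible choice; any unit complex vector orthogonal to those columns works), the construction (14)-(15) can be sketched in Python as follows.

import numpy as np
def extend_partially_unitary(U_st, U_I):
    # U = U_st + U_I*eps is an n x k partially unitary dual complex matrix with k < n.
    # Take a unit complex vector orthogonal to the columns of U_st; here the last left
    # singular vector of U_st (an illustrative choice, not a formula from the paper).
    v_st = np.linalg.svd(U_st)[0][:, -1]
    # Equation (14): infinitesimal correction making v orthogonal to every column of U
    v_I = -U_st @ (U_I.conj().T @ v_st)
    # Equation (15): normalization; here ||v_st + v_I*eps||_2 = 1 + Re(v_st* v_I)*eps,
    # and Re(v_st* v_I) = 0 since v_st is orthogonal to the range of U_st, so v is already unit.
    return v_st, v_I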

Corollary 3.4.

Suppose that $U\in{\mathbb{DC}}^{n\times k}$ is partially unitary, $k<n$. Then there is $V\in{\mathbb{DC}}^{n\times(n-k)}$ such that $(U,V)\in{\mathbb{DC}}^{n\times n}$ is unitary.

Corollary 3.5.

Suppose that $k<n$, $U\in{\mathbb{DC}}^{n\times k}$, and $V,W\in{\mathbb{DC}}^{n\times(n-k)}$ are such that $(U,V)$ and $(U,W)$ are both unitary matrices in ${\mathbb{DC}}^{n\times n}$. Then there is a unitary matrix $H\in{\mathbb{DC}}^{(n-k)\times(n-k)}$ such that $W=VH$.

Proof.

Set $H=V^{*}W$. Since $(U,V)$ and $(U,W)$ are unitary, we have

U^{*}W=O,~~VV^{*}=WW^{*},~~V^{*}V=W^{*}W=I_{n-k}.

Thus,

VH=VV^{*}W=WW^{*}W=WI_{n-k}=W.

Note that $H^{*}H=W^{*}VV^{*}W=W^{*}WW^{*}W=I_{n-k}$, and $HH^{*}=V^{*}WW^{*}V=V^{*}VV^{*}V=I_{n-k}$. Thus, $H$ is a unitary matrix in ${\mathbb{DC}}^{(n-k)\times(n-k)}$. This completes the proof. ∎

Let $A=A_{st}+A_{\mathcal{I}}\epsilon=(a_{ij})\in{\mathbb{DC}}^{m\times n}$. Define the Frobenius norm of $A$ as

\|A\|_{F}=\left\{\begin{aligned} \sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}|a_{ij}|^{2}},&\ {\rm if}\ A_{st}\not=O,\\ \|A_{\mathcal{I}}\|_{F}\epsilon,&\ {\rm otherwise.}\end{aligned}\right. (16)

The Frobenius norm of a matrix is actually the $2$-norm of the vectorization of that matrix. Thus, it has all the properties of a vector norm.

If $A\in{\mathbb{DC}}^{n\times n}$ and $\mathbf{x}\in{\mathbb{DC}}^{n}$, then

\|A\mathbf{x}\|_{2}\leq\|A\|_{F}\|\mathbf{x}\|_{2}.

This property can be proved directly.

We also have the following proposition.

Proposition 3.6.

Suppose that $U\in{\mathbb{DC}}^{m\times n}$ is partially unitary, and $\mathbf{x}\in{\mathbb{DC}}^{n}$. Then

\|U\mathbf{x}\|_{2}=\|\mathbf{x}\|_{2}. (17)

Proof.

Suppose first that $\mathbf{x}=\mathbf{x}_{st}+\mathbf{x}_{\mathcal{I}}\epsilon$ is appreciable. If $x_{i}$ is appreciable, then by (5), $|x_{i}|^{2}=x_{i}x_{i}^{*}=x_{i}^{*}x_{i}$. If $x_{i}$ is infinitesimal, then $|x_{i}|^{2}=0=x_{i}^{*}x_{i}$. Thus, by (6),

\|\mathbf{x}\|_{2}^{2}=\sum_{i=1}^{n}|x_{i}|^{2}=\mathbf{x}^{*}\mathbf{x}.

On the other hand, let $U=U_{st}+U_{\mathcal{I}}\epsilon$. Direct calculations lead to $U_{st}^{*}U_{st}=I_{n}$. Then the standard part of $U\mathbf{x}$ is $U_{st}\mathbf{x}_{st}\not=\mathbf{0}$, i.e., $U\mathbf{x}$ is also appreciable. We have

\|U\mathbf{x}\|_{2}^{2}=(U\mathbf{x})^{*}(U\mathbf{x})=\mathbf{x}^{*}U^{*}U\mathbf{x}=\mathbf{x}^{*}\mathbf{x}.

Hence, $\|U\mathbf{x}\|_{2}=\|\mathbf{x}\|_{2}$ in this case.

Now, assume that $\mathbf{x}$ is infinitesimal. Then $\mathbf{x}=\mathbf{x}_{\mathcal{I}}\epsilon$, and $U\mathbf{x}=U_{st}\mathbf{x}_{\mathcal{I}}\epsilon$ is also infinitesimal. We have $\|U_{st}\mathbf{x}_{\mathcal{I}}\|_{2}=\|\mathbf{x}_{\mathcal{I}}\|_{2}$. Then by (6), we still have $\|U\mathbf{x}\|_{2}=\|\mathbf{x}\|_{2}$ in this case. This proves (17). ∎

We are ready to establish unitary invariance of dual complex matrices. We have the following theorem.

Theorem 3.7.

Suppose that $U\in{\mathbb{DC}}^{m\times m}$ and $V\in{\mathbb{DC}}^{n\times n}$ are unitary, and $A\in{\mathbb{DC}}^{m\times n}$. Then

\|UAV\|_{F}=\|A\|_{F}. (18)

Proof.

Write the columns of $A$ as $\mathbf{a}^{(1)},\cdots,\mathbf{a}^{(n)}$, let

{\rm vec}(A)=\begin{bmatrix}\mathbf{a}^{(1)}\\ \vdots\\ \mathbf{a}^{(n)}\end{bmatrix},

and

{\rm bldg}(U)={\rm diag}(U,\cdots,U)\in{\mathbb{DC}}^{mn\times mn}.

Then ${\rm bldg}(U)$ is an $mn\times mn$ unitary matrix, and ${\rm vec}(A)\in{\mathbb{DC}}^{mn}$. We have $\|A\|_{F}=\|{\rm vec}(A)\|_{2}$ and $\|UA\|_{F}=\|{\rm bldg}(U){\rm vec}(A)\|_{2}$. By Proposition 3.6, we have

\|{\rm bldg}(U){\rm vec}(A)\|_{2}=\|{\rm vec}(A)\|_{2}.

Thus, $\|UA\|_{F}=\|A\|_{F}$. Similarly, we may show that $\|AV\|_{F}=\|A\|_{F}$. The conclusion follows. ∎

4 Eigenvalues of Dual Complex Matrices

Suppose that $A\in{\mathbb{DC}}^{n\times n}$. If there are $\lambda\in\mathbb{DC}$ and $\mathbf{x}\in{\mathbb{DC}}^{n}$, where $\mathbf{x}$ is appreciable, such that

A\mathbf{x}=\lambda\mathbf{x}, (19)

then we say that $\lambda$ is an eigenvalue of $A$, with $\mathbf{x}$ as a corresponding eigenvector. Here, we require that $\mathbf{x}$ be appreciable.

We now establish conditions for a dual complex number to be an eigenvalue of a square dual complex matrix.

Theorem 4.1.

Suppose that $A=A_{st}+A_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{n\times n}$. Then $\lambda=\lambda_{st}+\lambda_{\mathcal{I}}\epsilon$ is an eigenvalue of $A$ with an eigenvector $\mathbf{x}=\mathbf{x}_{st}+\mathbf{x}_{\mathcal{I}}\epsilon$ only if $\lambda_{st}$ is an eigenvalue of the complex matrix $A_{st}$ with an eigenvector $\mathbf{x}_{st}$, i.e., $\mathbf{x}_{st}\not=\mathbf{0}$ and

A_{st}\mathbf{x}_{st}=\lambda_{st}\mathbf{x}_{st}. (20)

Furthermore, if $\lambda_{st}$ is an eigenvalue of the complex matrix $A_{st}$ with an eigenvector $\mathbf{x}_{st}$, then $\lambda$ is an eigenvalue of $A$ with an eigenvector $\mathbf{x}$ if and only if $\lambda_{\mathcal{I}}$ and $\mathbf{x}_{\mathcal{I}}$ satisfy

\lambda_{\mathcal{I}}\mathbf{x}_{st}=A_{\mathcal{I}}\mathbf{x}_{st}+A_{st}\mathbf{x}_{\mathcal{I}}-\lambda_{st}\mathbf{x}_{\mathcal{I}}. (21)

Proof.

By definition, $\lambda$ is an eigenvalue of $A$ with an eigenvector $\mathbf{x}$ if and only if $\mathbf{x}_{st}\not=\mathbf{0}$ and $A\mathbf{x}=\lambda\mathbf{x}$. The equation $A\mathbf{x}=\lambda\mathbf{x}$ is equivalent to

(A_{st}+A_{\mathcal{I}}\epsilon)(\mathbf{x}_{st}+\mathbf{x}_{\mathcal{I}}\epsilon)=(\lambda_{st}+\lambda_{\mathcal{I}}\epsilon)(\mathbf{x}_{st}+\mathbf{x}_{\mathcal{I}}\epsilon).

This is further equivalent to $A_{st}\mathbf{x}_{st}=\lambda_{st}\mathbf{x}_{st}$, i.e., (20), and

A_{st}\mathbf{x}_{\mathcal{I}}\epsilon+A_{\mathcal{I}}\mathbf{x}_{st}\epsilon=\lambda_{\mathcal{I}}\mathbf{x}_{st}\epsilon+\lambda_{st}\mathbf{x}_{\mathcal{I}}\epsilon. (22)

Then (22) is equivalent to

A_{st}\mathbf{x}_{\mathcal{I}}+A_{\mathcal{I}}\mathbf{x}_{st}=\lambda_{\mathcal{I}}\mathbf{x}_{st}+\lambda_{st}\mathbf{x}_{\mathcal{I}},

which is further equivalent to (21). The conclusions of this theorem follow from these. ∎

Suppose that $A\in{\mathbb{DC}}^{n\times n}$ is a Hermitian matrix. For any $\mathbf{x}\in{\mathbb{DC}}^{n}$, we have

(\mathbf{x}^{*}A\mathbf{x})^{*}=\mathbf{x}^{*}A\mathbf{x}.

This implies that $\mathbf{x}^{*}A\mathbf{x}$ is a dual number. With the total order of dual numbers defined in Section 2, we may define positive semidefiniteness and positive definiteness of Hermitian matrices in ${\mathbb{DC}}^{n\times n}$. A Hermitian matrix $A\in{\mathbb{DC}}^{n\times n}$ is called positive semidefinite if $\mathbf{x}^{*}A\mathbf{x}\geq 0$ for any $\mathbf{x}\in{\mathbb{DC}}^{n}$; $A$ is called positive definite if for any appreciable $\mathbf{x}\in{\mathbb{DC}}^{n}$, $\mathbf{x}^{*}A\mathbf{x}$ is positive and appreciable.

Theorem 4.2.

An eigenvalue $\lambda$ of a Hermitian matrix $A=A_{st}+A_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{n\times n}$ must be a dual number, and its standard part $\lambda_{st}$ is an eigenvalue of the complex Hermitian matrix $A_{st}$. Furthermore, assume that $\lambda=\lambda_{st}+\lambda_{\mathcal{I}}\epsilon$ is an eigenvalue of $A$ with a corresponding eigenvector $\mathbf{x}=\mathbf{x}_{st}+\mathbf{x}_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{n}$, where $\mathbf{x}_{st},\mathbf{x}_{\mathcal{I}}\in{\mathbb{C}}^{n}$. Then we have

\lambda_{\mathcal{I}}={\mathbf{x}_{st}^{*}A_{\mathcal{I}}\mathbf{x}_{st}\over\mathbf{x}_{st}^{*}\mathbf{x}_{st}}. (23)

A Hermitian matrix $A\in{\mathbb{DC}}^{n\times n}$ has at most $n$ dual number eigenvalues, and no other eigenvalues.

An eigenvalue of a positive semidefinite Hermitian matrix $A\in{\mathbb{DC}}^{n\times n}$ must be a nonnegative dual number. In that case, $A_{st}$ must be positive semidefinite. An eigenvalue of a positive definite Hermitian matrix $A\in{\mathbb{DC}}^{n\times n}$ must be a positive, appreciable dual number. In that case, $A_{st}$ must be positive definite.

Proof.

Suppose that $A\in{\mathbb{DC}}^{n\times n}$ is a Hermitian matrix, and $\lambda$ is an eigenvalue of $A$, with $\mathbf{x}$ as a corresponding eigenvector. Then we have $A\mathbf{x}=\lambda\mathbf{x}$, and $\mathbf{x}$ is appreciable. Hence

\mathbf{x}^{*}A\mathbf{x}=\lambda\mathbf{x}^{*}\mathbf{x}.

As $\mathbf{x}=\mathbf{x}_{st}+\mathbf{x}_{\mathcal{I}}\epsilon$, $A=A_{st}+A_{\mathcal{I}}\epsilon$, and $\lambda=\lambda_{st}+\lambda_{\mathcal{I}}\epsilon$, considering the infinitesimal part of this equality, we have

\lambda_{st}(\mathbf{x}_{\mathcal{I}}^{*}\mathbf{x}_{st}+\mathbf{x}_{st}^{*}\mathbf{x}_{\mathcal{I}})+\lambda_{\mathcal{I}}\mathbf{x}_{st}^{*}\mathbf{x}_{st}=\mathbf{x}_{\mathcal{I}}^{*}A_{st}\mathbf{x}_{st}+\mathbf{x}_{st}^{*}A_{st}\mathbf{x}_{\mathcal{I}}+\mathbf{x}_{st}^{*}A_{\mathcal{I}}\mathbf{x}_{st}.

Since $A_{st}\mathbf{x}_{st}=\lambda_{st}\mathbf{x}_{st}$, $\mathbf{x}_{st}^{*}A_{st}=\lambda_{st}\mathbf{x}_{st}^{*}$ and $\lambda_{st}$ is a real number, the above equality reduces to

\lambda_{\mathcal{I}}\mathbf{x}_{st}^{*}\mathbf{x}_{st}=\mathbf{x}_{st}^{*}A_{\mathcal{I}}\mathbf{x}_{st},

which proves (23). By Theorem 4.1, $\lambda_{st}$ is an eigenvalue of the Hermitian matrix $A_{st}$ and hence real; by (23), $\lambda_{\mathcal{I}}$ is real since $A_{\mathcal{I}}$ is Hermitian. Hence, $\lambda$ is a dual number.

The other conclusions follow from Theorem 4.1 and matrix theory. ∎
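Formula (23) makes the eigenvalues of a Hermitian dual complex matrix easy to compute when $A_{st}$ has pairwise distinct eigenvalues (so that each eigenvector of $A_{st}$ is determined up to a phase, which (23) is invariant to). The following Python sketch is our own illustration under that assumption; the general case with repeated eigenvalues requires the blockwise treatment of Theorem 4.4 and Algorithm 1 in Section 8.

import numpy as np
def hermitian_dc_eigenvalues(A_st, A_I):
    # Eigenvalues of the Hermitian dual complex matrix A = A_st + A_I*eps via (23),
    # assuming the eigenvalues of A_st are pairwise distinct.
    lam_st, X = np.linalg.eigh(A_st)          # columns of X are unit eigenvectors of A_st
    # Equation (23): lam_I = x_st* A_I x_st / (x_st* x_st), and x_st* x_st = 1 here
    lam_I = np.real(np.einsum('ij,jk,ki->i', X.conj().T, A_I, X))
    return lam_st, lam_I                       # eigenvalues are lam_st + lam_I*eps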

Consider eigenvectors of a dual complex Hermitian matrix, associated with two eigenvalues with distinct standard parts. We have the following proposition.

Proposition 4.3.

Two eigenvectors of a Hermitian matrix $A\in{\mathbb{DC}}^{n\times n}$, associated with two eigenvalues with distinct standard parts, are orthogonal to each other.

Proof.

Suppose that $\mathbf{x}$ and $\mathbf{y}$ are two eigenvectors of a Hermitian matrix $A\in{\mathbb{DC}}^{n\times n}$, associated with two eigenvalues $\lambda=\lambda_{st}+\lambda_{\mathcal{I}}\epsilon$ and $\mu=\mu_{st}+\mu_{\mathcal{I}}\epsilon$, respectively, and $\lambda_{st}\not=\mu_{st}$. By Theorem 4.2, $\lambda$ and $\mu$ are dual numbers. We have

\lambda(\mathbf{x}^{*}\mathbf{y})=(\lambda\mathbf{x})^{*}\mathbf{y}=(A\mathbf{x})^{*}\mathbf{y}=\mathbf{x}^{*}A\mathbf{y}=\mathbf{x}^{*}\mu\mathbf{y}=\mu\mathbf{x}^{*}\mathbf{y},

i.e.,

(\lambda-\mu)(\mathbf{x}^{*}\mathbf{y})=0.

Since $\lambda_{st}\not=\mu_{st}$, $(\lambda-\mu)^{-1}$ exists. We have $\mathbf{x}^{*}\mathbf{y}=0$. ∎

The following is the unitary decomposition theorem for dual complex Hermitian matrices.

Theorem 4.4.

Suppose that $A=A_{st}+A_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{n\times n}$ is a Hermitian matrix. Then there are a unitary matrix $U\in{\mathbb{DC}}^{n\times n}$ and a diagonal matrix $\Sigma\in{\mathbb{D}}^{n\times n}$ such that $\Sigma=U^{*}AU$, where

\Sigma\equiv{\rm diag}\left(\lambda_{1}+\lambda_{1,1}\epsilon,\cdots,\lambda_{1}+\lambda_{1,k_{1}}\epsilon,\lambda_{2}+\lambda_{2,1}\epsilon,\cdots,\lambda_{r}+\lambda_{r,k_{r}}\epsilon\right), (24)

the diagonal entries of $\Sigma$ are the $n$ eigenvalues of $A$, and

A\mathbf{u}_{i,j}=(\lambda_{i}+\lambda_{i,j}\epsilon)\mathbf{u}_{i,j}, (25)

for $j=1,\cdots,k_{i}$ and $i=1,\cdots,r$, where $U=(\mathbf{u}_{1,1},\cdots,\mathbf{u}_{1,k_{1}},\cdots,\mathbf{u}_{r,k_{r}})$, $\lambda_{1}>\lambda_{2}>\cdots>\lambda_{r}$ are real numbers, $\lambda_{i}$ is a $k_{i}$-multiple eigenvalue of $A_{st}$, $\lambda_{i,1}\geq\lambda_{i,2}\geq\cdots\geq\lambda_{i,k_{i}}$ are also real numbers, and $\sum_{i=1}^{r}k_{i}=n$. Counting possible multiplicities of the $\lambda_{i,j}$, the form $\Sigma$ is unique.

Proof.

Suppose that $A\in{\mathbb{DC}}^{n\times n}$ is a dual complex Hermitian matrix. Denote $A=A_{st}+A_{\mathcal{I}}\epsilon$, where $A_{st},A_{\mathcal{I}}\in{\mathbb{C}}^{n\times n}$. Then $A_{st}$ and $A_{\mathcal{I}}$ are complex Hermitian matrices. Thus, there are an $n\times n$ complex unitary matrix $W$ and a real $n\times n$ diagonal matrix $D$ such that $D=WA_{st}W^{*}$. Suppose that $D={\rm diag}(\lambda_{1}I_{k_{1}},\lambda_{2}I_{k_{2}},\cdots,\lambda_{r}I_{k_{r}})$, where $\lambda_{1}>\lambda_{2}>\cdots>\lambda_{r}$, $I_{k_{i}}$ is the $k_{i}\times k_{i}$ identity matrix, and $\sum_{i=1}^{r}k_{i}=n$. Let $M=WAW^{*}$. Then

M=D+WA_{\mathcal{I}}W^{*}\epsilon=\begin{bmatrix}\lambda_{1}I_{k_{1}}+C_{11}\epsilon&C_{12}\epsilon&\cdots&C_{1r}\epsilon\\ C_{12}^{*}\epsilon&\lambda_{2}I_{k_{2}}+C_{22}\epsilon&\cdots&C_{2r}\epsilon\\ \vdots&\vdots&\ddots&\vdots\\ C_{1r}^{*}\epsilon&C_{2r}^{*}\epsilon&\cdots&\lambda_{r}I_{k_{r}}+C_{rr}\epsilon\end{bmatrix},

where each $C_{ij}$ is a complex matrix of adequate dimensions, and each $C_{ii}$ is Hermitian.

Let

P=\begin{bmatrix}I_{k_{1}}&{C_{12}\epsilon\over\lambda_{1}-\lambda_{2}}&\cdots&{C_{1r}\epsilon\over\lambda_{1}-\lambda_{r}}\\ -{C_{12}^{*}\epsilon\over\lambda_{1}-\lambda_{2}}&I_{k_{2}}&\cdots&{C_{2r}\epsilon\over\lambda_{2}-\lambda_{r}}\\ \vdots&\vdots&\ddots&\vdots\\ -{C_{1r}^{*}\epsilon\over\lambda_{1}-\lambda_{r}}&-{C_{2r}^{*}\epsilon\over\lambda_{2}-\lambda_{r}}&\cdots&I_{k_{r}}\end{bmatrix}.

Then

P^{*}=\begin{bmatrix}I_{k_{1}}&-{C_{12}\epsilon\over\lambda_{1}-\lambda_{2}}&\cdots&-{C_{1r}\epsilon\over\lambda_{1}-\lambda_{r}}\\ {C_{12}^{*}\epsilon\over\lambda_{1}-\lambda_{2}}&I_{k_{2}}&\cdots&-{C_{2r}\epsilon\over\lambda_{2}-\lambda_{r}}\\ \vdots&\vdots&\ddots&\vdots\\ {C_{1r}^{*}\epsilon\over\lambda_{1}-\lambda_{r}}&{C_{2r}^{*}\epsilon\over\lambda_{2}-\lambda_{r}}&\cdots&I_{k_{r}}\end{bmatrix}.

Direct calculations certify that $PP^{*}=P^{*}P=I_{n}$ and

\Sigma^{\prime}\equiv PMP^{*}=(PW)A(PW)^{*}={\rm diag}(\lambda_{1}I_{k_{1}}+C_{11}\epsilon,\lambda_{2}I_{k_{2}}+C_{22}\epsilon,\cdots,\lambda_{r}I_{k_{r}}+C_{rr}\epsilon).

Since $P$ and $W$ are unitary matrices, so is $PW$. Noting that each $C_{ii}$ is a complex Hermitian matrix, by matrix theory, we can find unitary matrices $U_{1}\in{\mathbb{C}}^{k_{1}\times k_{1}}$, $\cdots$, $U_{r}\in{\mathbb{C}}^{k_{r}\times k_{r}}$ that diagonalize $C_{11}$, $\cdots$, $C_{rr}$, respectively. That is, there exist real numbers $\lambda_{1,1}\geq\cdots\geq\lambda_{1,k_{1}}$, $\lambda_{2,1}\geq\cdots\geq\lambda_{2,k_{2}}$, $\cdots$, $\lambda_{r,1}\geq\cdots\geq\lambda_{r,k_{r}}$ such that

U_{i}^{*}C_{ii}U_{i}={\rm diag}\left(\lambda_{i,1},\cdots,\lambda_{i,k_{i}}\right),~~i=1,\cdots,r. (26)

Denote $V\equiv{\rm diag}\left(U_{1},\cdots,U_{r}\right)$. We can easily verify that $V$ is unitary. Thus, $U\equiv(PW)^{*}V$ is also unitary. Denote

\Sigma\equiv{\rm diag}\left(\lambda_{1}+\lambda_{1,1}\epsilon,\cdots,\lambda_{1}+\lambda_{1,k_{1}}\epsilon,\lambda_{2}+\lambda_{2,1}\epsilon,\cdots,\lambda_{r}+\lambda_{r,k_{r}}\epsilon\right).

Then we have $U^{*}AU=\Sigma$, as required. Letting $U=(\mathbf{u}_{1,1},\cdots,\mathbf{u}_{1,k_{1}},\cdots,\mathbf{u}_{r,k_{r}})$, we have (25). Thus, $\lambda_{i}+\lambda_{i,j}\epsilon$ are eigenvalues of $A$ with $\mathbf{u}_{i,j}$ as the corresponding eigenvectors, for $j=1,\cdots,k_{i}$ and $i=1,\cdots,r$.

Note that the $\lambda_{i,j}$'s are the eigenvalues of the $C_{ii}$'s, where $C_{ii}=W_{i}A_{\mathcal{I}}W_{i}^{*}$ with $W_{i}^{*}\in{\mathbb{C}}^{n\times k_{i}}$ the submatrix formed by an orthonormal basis of the eigenspace of $\lambda_{i}$ for $A_{st}$. To show the desired uniqueness of $\Sigma$, it suffices to show that for any other orthonormal basis of the eigenspace of $\lambda_{i}$ for $A_{st}$, say $\hat{W}_{i}^{*}$, the $\lambda_{i,j}$'s are the eigenvalues of $\hat{C}_{ii}=\hat{W}_{i}A_{\mathcal{I}}\hat{W}_{i}^{*}$ as well. Observe that there exists a unitary matrix $T_{i}\in{\mathbb{C}}^{k_{i}\times k_{i}}$ such that $\hat{W}_{i}^{*}=W_{i}^{*}T_{i}$. Then

\hat{C}_{ii}=T_{i}^{*}W_{i}A_{\mathcal{I}}W_{i}^{*}T_{i}=T_{i}^{*}C_{ii}T_{i}=T_{i}^{*}U_{i}{\rm diag}\left(\lambda_{i,1},\cdots,\lambda_{i,k_{i}}\right)U_{i}^{*}T_{i}.

Since $U_{i}^{*}T_{i}$ is also unitary, the $\lambda_{i,j}$'s are the eigenvalues of $\hat{C}_{ii}$. The uniqueness is thus proved. ∎

By the above theorem, we have the following theorem.

Theorem 4.5.

Suppose that $A\in{\mathbb{DC}}^{n\times n}$ is Hermitian. Then $A$ has exactly $n$ eigenvalues, which are all dual numbers. There are also $n$ eigenvectors, associated with these $n$ eigenvalues. The Hermitian matrix $A$ is positive semidefinite or positive definite if and only if all of these eigenvalues are nonnegative, or positive and appreciable, respectively.

5 Singular Value Decomposition of Dual Complex Matrices

We need the following theorem for SVD of dual complex matrices.

Theorem 5.1.

Suppose that $B\in{\mathbb{DC}}^{m\times n}$ and $A=B^{*}B$. Then there exists a unitary matrix $U\in{\mathbb{DC}}^{n\times n}$ such that

U^{*}AU={\rm diag}(\lambda_{1}+\lambda_{1,1}\epsilon,\cdots,\lambda_{1}+\lambda_{1,k_{1}}\epsilon,\lambda_{2}+\lambda_{2,1}\epsilon,\cdots,\lambda_{s}+\lambda_{s,k_{s}}\epsilon,0,\cdots,0), (27)

where $\lambda_{1}>\cdots>\lambda_{s}>0$, $\lambda_{1,1}\geq\cdots\geq\lambda_{1,k_{1}}$, $\cdots$, $\lambda_{s,1}\geq\cdots\geq\lambda_{s,k_{s}}$ are real numbers, and $\sum_{i=1}^{s}k_{i}\leq n$. Counting possible multiplicities of the $\lambda_{i,j}$, the real numbers $\lambda_{i}$ and $\lambda_{i,j}$ for $i=1,\cdots,s$ and $j=1,\cdots,k_{i}$ are uniquely determined.

Proof.

By Theorems 4.4 and 4.5, since $A$ is a positive semidefinite dual complex Hermitian matrix, $A$ can be diagonalized by $U$ as defined in Theorem 4.4, and has exactly $n$ eigenvalues, which are all nonnegative dual numbers, and may be denoted as $\lambda_{i}+\lambda_{i,j}\epsilon$, $i=1,\cdots,r$, $j=1,\cdots,k_{i}$, with $\lambda_{1}>\cdots>\lambda_{r}\geq 0$, $\lambda_{1,1}\geq\cdots\geq\lambda_{1,k_{1}}$, $\cdots$, $\lambda_{r,1}\geq\cdots\geq\lambda_{r,k_{r}}$. We now need to show that if $\lambda_{r}=0$, then $\lambda_{r,j}=0$ for every $j=1,\cdots,k_{r}$. Note that

\mathbf{u}_{r,j}^{*}A\mathbf{u}_{r,j}=\mathbf{u}_{r,j}^{*}\lambda_{r,j}\mathbf{u}_{r,j}\epsilon=\lambda_{r,j}\epsilon,~~j=1,\cdots,k_{r}. (28)

Since

\mathbf{u}_{r,j}^{*}A\mathbf{u}_{r,j}=\mathbf{u}_{r,j}^{*}B_{st}^{*}B_{st}\mathbf{u}_{r,j}+\mathbf{u}_{r,j}^{*}\left(B_{st}^{*}B_{\mathcal{I}}+B_{\mathcal{I}}^{*}B_{st}\right)\mathbf{u}_{r,j}\epsilon, (29)

we have

\mathbf{u}_{r,j}^{*}B_{st}^{*}B_{st}\mathbf{u}_{r,j}=0, (30)

and hence $B_{st}\mathbf{u}_{r,j}={\bf 0}$. Therefore, we have

\mathbf{u}_{r,j}^{*}\left(B_{st}^{*}B_{\mathcal{I}}+B_{\mathcal{I}}^{*}B_{st}\right)\mathbf{u}_{r,j}=\left(B_{st}\mathbf{u}_{r,j}\right)^{*}B_{\mathcal{I}}\mathbf{u}_{r,j}+\mathbf{u}_{r,j}^{*}B_{\mathcal{I}}^{*}\left(B_{st}\mathbf{u}_{r,j}\right)=0.

By equations (28) and (29), we have $\lambda_{r,j}=0$ for every $j=1,\cdots,k_{r}$. This completes the proof. ∎

We now have the SVD theorem for dual complex matrices.

Theorem 5.2.

Suppose that $B\in{\mathbb{DC}}^{m\times n}$. Then there exist a unitary matrix $\hat{V}\in{\mathbb{DC}}^{m\times m}$ and a unitary matrix $\hat{U}\in{\mathbb{DC}}^{n\times n}$ such that

\hat{V}^{*}B\hat{U}=\begin{bmatrix}\Sigma_{t}&O\\ O&O\end{bmatrix}, (31)

where $\Sigma_{t}\in{\mathbb{D}}^{t\times t}$ is a diagonal matrix, taking the form

\Sigma_{t}={\rm diag}\left(\mu_{1},\cdots,\mu_{r},\cdots,\mu_{t}\right),

$r\leq t\leq\min\{m,n\}$, $\mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{r}$ are positive appreciable dual numbers, and $\mu_{r+1}\geq\cdots\geq\mu_{t}$ are positive infinitesimal dual numbers. Counting possible multiplicities of the diagonal entries, the form $\Sigma_{t}$ is unique.

Proof.

Let $A=B^{*}B$. By Theorem 5.1, there exists a unitary matrix $U\in{\mathbb{DC}}^{n\times n}$ as defined in Theorem 4.4 such that $A$ is diagonalized as in (27). Let $r=\sum_{j=1}^{s}k_{j}$, and

\Sigma_{r}\equiv\mathrm{diag}\left(\sigma_{1}+\sigma_{1,1}\epsilon,\cdots,\sigma_{1}+\sigma_{1,k_{1}}\epsilon,\sigma_{2}+\sigma_{2,1}\epsilon,\cdots,\sigma_{s}+\sigma_{s,k_{s}}\epsilon\right) (32)

with $\sigma_{i}=\sqrt{\lambda_{i}}$, $\sigma_{i,j}={\lambda_{i,j}\over 2\sqrt{\lambda_{i}}}$, $j=1,\cdots,k_{i}$, $i=1,\cdots,s$. Denote $U_{1}=U_{:,1:r}$ and $U_{2}=U_{:,r+1:n}$. By direct calculation, we have

AU=[B^{*}BU_{1}~~B^{*}BU_{2}]=[U_{1}\Sigma_{r}^{2}~~O],

which leads to

U_{1}^{*}B^{*}BU_{1}=\Sigma_{r}^{2},~~U_{2}^{*}B^{*}BU_{2}=O.

Therefore, $BU_{2}=M\epsilon$ for some complex matrix $M$.

Let $V_{1}=BU_{1}\Sigma_{r}^{-1}\in{\mathbb{DC}}^{m\times r}$. Then

V_{1}^{*}V_{1}=I_{r},~~V_{1}^{*}BU_{1}=V_{1}^{*}V_{1}\Sigma_{r}=\Sigma_{r},~~V_{1}^{*}BU_{2}=\left(\Sigma_{r}^{-1}\right)^{*}U_{1}^{*}(B^{*}BU_{2})=O.

By Corollary 3.4, we may take $V_{2}\in{\mathbb{DC}}^{m\times(m-r)}$ such that $V=(V_{1},V_{2})$ is a unitary matrix. Then

V_{2}^{*}BU_{1}=V_{2}^{*}V_{1}\Sigma_{r}=O,~~V_{2}^{*}BU_{2}=V_{2}^{*}M\epsilon=G\epsilon,

where $G$ is an $(m-r)\times(n-r)$ complex matrix. Thus,

V^{*}BU=\begin{bmatrix}V_{1}^{*}BU_{1}&V_{1}^{*}BU_{2}\\ V_{2}^{*}BU_{1}&V_{2}^{*}BU_{2}\end{bmatrix}=\begin{bmatrix}\Sigma_{r}&O\\ O&G\epsilon\end{bmatrix}.

By matrix theory, there exist complex unitary matrices $W_{1}\in{\mathbb{C}}^{(m-r)\times(m-r)}$ and $W_{2}\in{\mathbb{C}}^{(n-r)\times(n-r)}$ such that $W_{1}^{*}GW_{2}=D$, where $D\in{\mathbb{R}}^{(m-r)\times(n-r)}$ with $D_{ij}=0$ for any $i\neq j$ and

D_{11}\geq\cdots\geq D_{ll}\geq 0

with $l=\min\{m-r,n-r\}$. Let $r^{\prime}$ be the number of positive $D_{ii}$'s (counting multiplicity). Apparently, $D_{11},\cdots,D_{r^{\prime}r^{\prime}}$ are the square roots of the real positive eigenvalues of the Hermitian matrix $G^{*}G\in{\mathbb{C}}^{(n-r)\times(n-r)}$. Denote

\hat{V}\equiv V\left(\begin{array}{cc}I_{r}&\\ &W_{1}\\ \end{array}\right),~~\mathrm{and}~~\hat{U}\equiv U\left(\begin{array}{cc}I_{r}&\\ &W_{2}\\ \end{array}\right). (33)

It is obvious that both $\hat{V}$ and $\hat{U}$ are unitary and $\hat{V}^{*}B\hat{U}=\begin{bmatrix}\Sigma_{r}&O\\ O&D\epsilon\end{bmatrix}$. Set $t=r+r^{\prime}$, and

\Sigma_{t}={\rm diag}(\Sigma_{r},D_{11}\epsilon,\cdots,D_{r^{\prime}r^{\prime}}\epsilon). (34)

Then we have (31). The uniqueness of $\Sigma_{r}$ follows from Theorems 4.4 and 5.1.

Now we claim that the real positive numbers $D_{11},\cdots,D_{r^{\prime}r^{\prime}}$ are also unique. Note that $U_{1}$ and $V_{1}$ are uniquely determined from Theorems 4.4 and 5.1, while $U_{2}$ and $V_{2}$ may have different choices, provided that $U_{2}$ and $V_{2}$ expand $U_{1}$ and $V_{1}$ to unitary matrices $U$ and $V$, respectively. Among all these choices, pick any two distinct pairs $\{U^{(1)}_{2},V^{(1)}_{2}\}$ and $\{U^{(2)}_{2},V^{(2)}_{2}\}$ (i.e., $U^{(1)}_{2}\neq U^{(2)}_{2}$ or $V^{(1)}_{2}\neq V^{(2)}_{2}$, or both). Applying Corollary 3.5, we can find unitary matrices $H_{1}\in{\mathbb{DC}}^{(n-r)\times(n-r)}$ and $H_{2}\in{\mathbb{DC}}^{(m-r)\times(m-r)}$ such that

U^{(2)}_{2}=U^{(1)}_{2}H_{1},~~V^{(2)}_{2}=V^{(1)}_{2}H_{2}. (35)

It is known from Proposition 3.2 that $\left[U_{1,st},U^{(1)}_{2,st}\right]$ and $\left[U_{1,st},U^{(2)}_{2,st}\right]$ are complex unitary matrices in ${\mathbb{C}}^{n\times n}$. Similarly, $\left[V_{1,st},V^{(1)}_{2,st}\right]$ and $\left[V_{1,st},V^{(2)}_{2,st}\right]$ are complex unitary matrices in ${\mathbb{C}}^{m\times m}$. Thus, the unitary matrices $H_{1,st}\in{\mathbb{C}}^{(n-r)\times(n-r)}$ and $H_{2,st}\in{\mathbb{C}}^{(m-r)\times(m-r)}$ satisfy

U^{(2)}_{2,st}=U^{(1)}_{2,st}H_{1,st}~{\rm~and~}~V^{(2)}_{2,st}=V^{(1)}_{2,st}H_{2,st}. (36)

Denote

BU_{2}^{(1)}=M^{(1)}\epsilon,~~BU_{2}^{(2)}=M^{(2)}\epsilon, (37)

and

\left(V_{2}^{(1)}\right)^{*}BU_{2}^{(1)}=\left(V_{2}^{(1)}\right)^{*}M^{(1)}\epsilon=G^{(1)}\epsilon,~~\left(V_{2}^{(2)}\right)^{*}BU_{2}^{(2)}=\left(V_{2}^{(2)}\right)^{*}M^{(2)}\epsilon=G^{(2)}\epsilon, (38)

where $M^{(i)}$, $G^{(i)}$, $i=1,2$, are complex matrices. Direct calculations lead to

M^{(2)}\epsilon=BU_{2}^{(2)}=BU_{2}^{(1)}H_{1}=M^{(1)}\epsilon H_{1}=M^{(1)}H_{1,st}\epsilon,

which implies that $M^{(2)}=M^{(1)}H_{1,st}$. Similarly, we have

G^{(2)}\epsilon=\left(V_{2}^{(2)}\right)^{*}M^{(2)}\epsilon=\left(V_{2,st}^{(2)}\right)^{*}M^{(2)}\epsilon=H_{2,st}^{*}\left(V_{2,st}^{(1)}\right)^{*}M^{(1)}H_{1,st}\epsilon=H_{2,st}^{*}G^{(1)}H_{1,st}\epsilon, (39)

where the last equality follows from the fact that

G^{(1)}\epsilon=\left(V_{2}^{(1)}\right)^{*}M^{(1)}\epsilon=\left(V_{2,st}^{(1)}\right)^{*}M^{(1)}\epsilon.

Thus, $G^{(2)}=H_{2,st}^{*}G^{(1)}H_{1,st}$. Note that $H_{2,st}$ and $H_{1,st}$ are complex unitary matrices. It follows from complex matrix theory that the Hermitian matrices $\left(G^{(1)}\right)^{*}G^{(1)}$ and $\left(G^{(2)}\right)^{*}G^{(2)}$ have the same eigenvalues, whose square roots are exactly $D_{11},\cdots,D_{r^{\prime}r^{\prime}}$ as displayed in (34). The desired uniqueness claim is thus obtained. This completes the proof. ∎

6 Ranks of Dual Complex Matrices

In Theorem 5.2, the dual numbers $\mu_{1},\cdots,\mu_{t}$, together with $\mu_{t+1}=\cdots=\mu_{\min\{m,n\}}=0$ if $t<\min\{m,n\}$, are called the singular values of $B$; the integer $t$ is called the rank of $B$, and the integer $r$ is called the appreciable rank of $B$. We denote the rank of a dual complex matrix $A$ by ${\rm Rank}(A)$, and its appreciable rank by ${\rm ARank}(A)$.
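For example, the dual complex matrix $A={\rm diag}(2+\epsilon,\,3\epsilon,\,0)\in{\mathbb{DC}}^{3\times 3}$ is already in the form (31) with $\hat{V}=\hat{U}=I_{3}$: its singular values are $\mu_{1}=2+\epsilon$, $\mu_{2}=3\epsilon$ and $\mu_{3}=0$, so that ${\rm Rank}(A)=2$ while ${\rm ARank}(A)=1$. This is consistent with Theorem 6.8 below, since $A_{st}={\rm diag}(2,0,0)$ has rank $1$.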

Proposition 6.1.

Suppose that $U\in{\mathbb{DC}}^{m\times m}$ and $V\in{\mathbb{DC}}^{n\times n}$ are unitary, and $A\in{\mathbb{DC}}^{m\times n}$. Then

{\rm Rank}(UAV)={\rm Rank}(A), (40)

and

{\rm ARank}(UAV)={\rm ARank}(A). (41)

Proof.

By Theorem 5.2, there are dual complex unitary matrices $V_{A}\in{\mathbb{DC}}^{m\times m}$ and $U_{A}\in{\mathbb{DC}}^{n\times n}$ such that $V_{A}AU_{A}=D$, where $D$ is the block matrix on the right-hand side of (31). Let $W=UAV$. Then

(V_{A}U^{*})W(V^{*}U_{A})=D.

Since $V_{A}U^{*}$ and $V^{*}U_{A}$ are unitary and the form $\Sigma_{t}$ in (31) is unique, we have (40) and (41). ∎

Proposition 6.2.

Suppose that $A\in{\mathbb{DC}}^{n\times m}$. Then ${\rm Rank}(A)={\rm Rank}(A^{*})={\rm Rank}(A^{\top})={\rm Rank}(\bar{A})$, and ${\rm ARank}(A)={\rm ARank}(A^{*})={\rm ARank}(A^{\top})={\rm ARank}(\bar{A})$.

Proof.

Assume that the SVD of $A$ is $A=U\Sigma V^{*}$ with unitary dual complex matrices $U$ and $V$. By virtue of (8), we have $A^{\top}=(V^{*})^{\top}\Sigma^{\top}U^{\top}$ and $A^{*}=V\Sigma^{*}U^{*}$. Note that $U^{\top}$ and $(V^{*})^{\top}$ are unitary dual complex matrices, and all the diagonal entries of $\Sigma$ are nonnegative dual numbers. Combining this with the fact that $\bar{A}=(A^{*})^{\top}$, we easily obtain all the desired assertions. ∎

The proof of the following proposition is direct, and we omit it here.

Proposition 6.3.

Suppose that $A\in{\mathbb{DC}}^{m\times n}$ can be written as

A=\begin{bmatrix}A_{r}\\ A_{s-r}\epsilon\\ O\end{bmatrix},

where $A_{r}\in{\mathbb{DC}}^{r\times n}$ and $A_{s-r}\in{\mathbb{C}}^{(s-r)\times n}$, for $0\leq r\leq s\leq m$. Then

{\rm Rank}(A)\leq s,\ \ {\rm ARank}(A)\leq r.

We now prove the following theorem.

Theorem 6.4.

Suppose that $A\in{\mathbb{DC}}^{m\times n}$ and $B\in{\mathbb{DC}}^{n\times p}$. Then,

{\rm Rank}(AB)\leq\min\{{\rm Rank}(A),{\rm Rank}(B)\}, (42)

and

{\rm ARank}(AB)\leq\min\{{\rm ARank}(A),{\rm ARank}(B)\}. (43)

Proof.

It suffices to prove

{\rm Rank}(AB)\leq{\rm Rank}(A), (44)

and

{\rm ARank}(AB)\leq{\rm ARank}(A). (45)

The other parts hold similarly. By Theorem 5.2, there are unitary matrices $U_{A}$ and $V_{A}$ such that $A=U_{A}DV_{A}$, where $D$ is the block matrix on the right-hand side of (31). Then (44) and (45) are equivalent to

{\rm Rank}(U_{A}DV_{A}B)\leq{\rm Rank}(U_{A}DV_{A}), (46)

and

{\rm ARank}(U_{A}DV_{A}B)\leq{\rm ARank}(U_{A}DV_{A}). (47)

By Proposition 6.1, it suffices to prove

{\rm Rank}(DC)\leq{\rm Rank}(D), (48)

and

{\rm ARank}(DC)\leq{\rm ARank}(D), (49)

where $C=V_{A}B$. The conclusion now follows from Proposition 6.3. ∎

Corollary 6.5.

Suppose that $A\in{\mathbb{DC}}^{m\times n}$. Then

{\rm Rank}(A)=\min\{r:A=BC,B\in{\mathbb{DC}}^{m\times r},C\in{\mathbb{DC}}^{r\times n}\}. (50)

Proof.

By Theorem 6.4, the left-hand side of (50) is less than or equal to the right-hand side of (50). Combining this with the SVD of $A$ as stated in Theorem 5.2, which provides a factorization with $r={\rm Rank}(A)$, the equality follows. ∎

Corollary 6.6.

Suppose that $A\in{\mathbb{DC}}^{m\times n}$ and $B\in{\mathbb{DC}}^{q\times n}$. Let $C=\left[\begin{array}{c}A\\ B\end{array}\right]\in{\mathbb{DC}}^{(m+q)\times n}$. Then

{\rm Rank}(C)\leq{\rm Rank}(A)+{\rm Rank}(B),~~{\rm ARank}(C)\leq{\rm ARank}(A)+{\rm ARank}(B). (51)

Proof.

Let $A=U_{A}\Sigma_{A}V_{A}^{*}$ and $B=U_{B}\Sigma_{B}V_{B}^{*}$ be singular value decompositions of $A$ and $B$, respectively. By setting $U={\rm diag}(U_{A},U_{B})$, $V={\rm diag}(V_{A},V_{B})$, and $W=\left[\begin{array}{cc}A&O\\ O&B\\ \end{array}\right]$, one can verify that $U$ and $V$ are unitary dual complex matrices, and

W=U\left[\begin{array}{cc}\Sigma_{A}&O\\ O&\Sigma_{B}\\ \end{array}\right]V^{*}.

Thus, ${\rm Rank}(W)={\rm Rank}(A)+{\rm Rank}(B)$ and ${\rm ARank}(W)={\rm ARank}(A)+{\rm ARank}(B)$. Note that $C=W\left[\begin{array}{c}I_{n}\\ I_{n}\end{array}\right]$. The assertions follow readily from Theorem 6.4. ∎

Corollary 6.7.

Suppose that $A,B\in{\mathbb{DC}}^{m\times n}$. Then

{\rm Rank}(A)-{\rm Rank}(B)\leq{\rm Rank}(A+B)\leq{\rm Rank}(A)+{\rm Rank}(B), (52)

{\rm ARank}(A)-{\rm ARank}(B)\leq{\rm ARank}(A+B)\leq{\rm ARank}(A)+{\rm ARank}(B). (53)

Proof.

Note that $A+B=\left[I_{m},I_{m}\right]\left[\begin{array}{c}A\\ B\end{array}\right]$. The assertion ${\rm Rank}(A+B)\leq{\rm Rank}(A)+{\rm Rank}(B)$ follows directly from Theorem 6.4 and Corollary 6.6. By writing $A=(A+B)+(-B)$, together with the fact that ${\rm Rank}(-B)={\rm Rank}(B)$, we have ${\rm Rank}(A)\leq{\rm Rank}(A+B)+{\rm Rank}(B)$. The inequalities for the appreciable rank can be proved in the same manner. ∎

The following theorem indicates that the standard parts of the singular values of a dual complex matrix are exactly the singular values of the standard part of that matrix, and that the appreciable rank of a dual complex matrix equals the rank of its standard part.

Theorem 6.8.

A dual complex matrix $A=A_{st}+A_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{m\times n}$ has singular values $\sigma_{1,st}+\sigma_{1,\mathcal{I}}\epsilon\geq\cdots\geq\sigma_{\min\{m,n\},st}+\sigma_{\min\{m,n\},\mathcal{I}}\epsilon$ if and only if the complex matrix $A_{st}$ has singular values $\sigma_{1,st}\geq\cdots\geq\sigma_{\min\{m,n\},st}$. We also have

{\rm ARank}(A)={\rm Rank}(A_{st}). (54)

Proof.

Write $A=U_{A}DV_{A}$ as in (31), with unitary $U_{A}=U_{A,st}+U_{A,\mathcal{I}}\epsilon$ and $V_{A}=V_{A,st}+V_{A,\mathcal{I}}\epsilon$, and $D=D_{st}+D_{\mathcal{I}}\epsilon$ the block matrix on the right-hand side of (31). Then we have

A_{st}=U_{A,st}D_{st}V_{A,st}. (55)

By Proposition 3.2, $U_{A,st}$ and $V_{A,st}$ are unitary. Then (55) is a singular value decomposition of the complex matrix $A_{st}$. The conclusions follow from this. ∎

7 An Eckart-Young Like Theorem for Dual Complex Matrices

For large scale dual complex matrices, it is of great interest to extract a low rank approximation, for the sake of data reduction. In this section, the low rank approximation of a given dual complex matrix is discussed, by establishing an Eckart-Young like theorem analogous to the case of complex matrices.

Theorem 7.1.

Suppose that $A\in\mathbb{DC}^{m\times n}$ has the singular value decomposition $A=\sum_{j=1}^{r}\mu_{j}{\bf u}_{j}{\bf v}_{j}^{*}$ and that $k\leq r$. Then the matrix $A_{k}=\sum_{j=1}^{k}\mu_{j}{\bf u}_{j}{\bf v}_{j}^{*}$ satisfies

\|A-A_{k}\|_{F}\leq\|A-B\|_{F}

for any $B\in\mathbb{DC}^{m\times n}$ with rank at most $k$.

Proof.

Recall that $\|UDV\|_{F}=\|D\|_{F}$ for any dual complex matrix $D=D_{st}+D_{\mathcal{I}}\epsilon\in\mathbb{DC}^{m\times n}$ and any dual complex unitary matrices $U\in\mathbb{DC}^{m\times m}$ and $V\in\mathbb{DC}^{n\times n}$. If $D$ has singular values $\sigma_{1}(D)\geq\sigma_{2}(D)\geq\ldots\geq\sigma_{r}(D)\geq 0$, then

\|D\|_{F}=\left\|\left(\begin{array}{c}\sigma_{1}(D)\\ \sigma_{2}(D)\\ \vdots\\ \sigma_{r}(D)\\ \end{array}\right)\right\|_{2}.

Take any $B\in\mathbb{DC}^{m\times n}$ with rank at most $k\leq r$. Let $C=A-B$. Denote $l={\rm Rank}(C)$, and suppose that $C$ has the singular value decomposition $C=\sum_{j=1}^{l}\gamma_{j}{\bf x}_{j}{\bf y}_{j}^{*}$ with $\gamma_{1}\geq\gamma_{2}\geq\ldots\geq\gamma_{l}\geq 0$ and orthonormal sets $\{{\bf x}_{1},{\bf x}_{2},\ldots,{\bf x}_{l}\}\subset\mathbb{DC}^{m}$ and $\{{\bf y}_{1},{\bf y}_{2},\ldots,{\bf y}_{l}\}\subset\mathbb{DC}^{n}$. By Corollary 3.4, the set $\{{\bf y}_{1},{\bf y}_{2},\ldots,{\bf y}_{l}\}\subset\mathbb{DC}^{n}$ can be extended to an orthonormal basis $\{{\bf y}_{1},{\bf y}_{2},\ldots,{\bf y}_{l},\ldots,{\bf y}_{n}\}$ of $\mathbb{DC}^{n}$. Then every unit vector ${\bf z}\in\mathbb{DC}^{n}$ can be written as ${\bf z}=\sum_{i=1}^{n}a_{i}{\bf y}_{i}$ for a unit vector $(a_{1},a_{2},\ldots,a_{n})^{\top}\in\mathbb{DC}^{n}$, so that

\|C{\bf z}\|_{2}=\left\|\left(\sum_{j=1}^{l}\gamma_{j}{\bf x}_{j}{\bf y}_{j}^{*}\right)\left(\sum_{i=1}^{n}a_{i}{\bf y}_{i}\right)\right\|_{2}=\left\|\sum_{j=1}^{l}\gamma_{j}a_{j}{\bf x}_{j}\right\|_{2}=\left\|\left(\begin{array}{c}\gamma_{1}a_{1}\\ \vdots\\ \gamma_{l}a_{l}\\ \end{array}\right)\right\|_{2},

where the last equality follows from Proposition 3.6. By this, together with the ordering $\gamma_{1}\geq\gamma_{2}\geq\cdots\geq\gamma_{l}\geq 0$ of these dual numbers, we have $\gamma_{1}=|\gamma_{1}|=\|C{\bf y}_{1}\|_{2}$ and, for any unit vector ${\bf z}\in\mathbb{DC}^{n}$,

\|C{\bf z}\|_{2}\leq\gamma_{1}\left\|\left(\begin{array}{c}a_{1}\\ \vdots\\ a_{l}\\ \end{array}\right)\right\|_{2}\leq\gamma_{1}.

Therefore, we have

\gamma_{1}=\max\left\{\|C{\bf z}\|_{2}~:~{\bf z}\in\mathbb{DC}^{n}~{\rm with}~\|{\bf z}\|_{2}=1\right\}, (56)

and

\gamma_{j}=\max\{\|C{\bf z}\|_{2}~:~{\bf z}\in\mathbb{DC}^{n}~{\rm with}~\|{\bf z}\|_{2}=1~{\rm and}~{\bf y}_{1}^{*}{\bf z}=\ldots={\bf y}_{j-1}^{*}{\bf z}=0\} (57)

for $j=2,\ldots,l$. Since $B$ has rank at most $k$, the matrix $B[{\bf v}_{1},\ldots,{\bf v}_{k+1}]\in\mathbb{DC}^{m\times(k+1)}$ also has rank at most $k$ by Theorem 6.4. Hence, there is a unit vector $\tilde{{\bf w}}=(w_{1},w_{2},\ldots,w_{k+1})^{\top}\in\mathbb{DC}^{k+1}$ satisfying $B[{\bf v}_{1},\ldots,{\bf v}_{k+1}]\tilde{{\bf w}}=0$. Consider the unit vector $\tilde{\bf z}=[{\bf v}_{1},\ldots,{\bf v}_{k+1}]\tilde{{\bf w}}\in\mathbb{DC}^{n}$. By (56), we have

\gamma_{1}\geq\|C\tilde{\bf z}\|_{2}=\left\|\sum_{i=1}^{k+1}\mu_{i}w_{i}{\bf u}_{i}\right\|_{2}=\left\|\left(\begin{array}{c}\mu_{1}w_{1}\\ \vdots\\ \mu_{k+1}w_{k+1}\\ \end{array}\right)\right\|_{2}\geq\mu_{k+1}\left\|\tilde{{\bf w}}\right\|_{2}=\mu_{k+1}. (58)

It is known from Corollary 6.7 that $l\geq r-k$. If $l>r-k$, then set $\mu_{r+1}=\cdots=\mu_{l+k}=0$. Now we prove that $\gamma_{j}\geq\mu_{k+j}$ for every $j=2,\ldots,l$. Let

B_{j}=\left[\begin{array}{c}B\\ {\bf y}_{1}^{*}\\ \vdots\\ {\bf y}_{j-1}^{*}\end{array}\right].

By applying Corollary 6.6, we know that $B_{j}\in\mathbb{DC}^{(m+j-1)\times n}$ has rank at most $k+j-1$, and so does the dual complex matrix $B_{j}[{\bf v}_{1},\ldots,{\bf v}_{k+j}]\in\mathbb{DC}^{(m+j-1)\times(k+j)}$, due to Theorem 6.4. As in the case $j=1$, there is a unit vector $\tilde{{\bf w}}_{j}=(w_{1},w_{2},\ldots,w_{k+j})^{\top}\in\mathbb{DC}^{k+j}$ satisfying $B_{j}[{\bf v}_{1},\ldots,{\bf v}_{k+j}]\tilde{{\bf w}}_{j}=0$. Consider the unit vector $\tilde{\bf z}=[{\bf v}_{1},\ldots,{\bf v}_{k+j}]\tilde{{\bf w}}_{j}\in\mathbb{DC}^{n}$. It is obvious that ${\bf y}_{1}^{*}\tilde{{\bf z}}=\ldots={\bf y}_{j-1}^{*}\tilde{{\bf z}}=0$. Consequently, by (57), we have

\gamma_{j}\geq\|C\tilde{\bf z}\|_{2}=\left\|\sum_{i=1}^{k+j}\mu_{i}w_{i}{\bf u}_{i}\right\|_{2}=\left\|\left(\begin{array}{c}\mu_{1}w_{1}\\ \vdots\\ \mu_{k+j}w_{k+j}\\ \end{array}\right)\right\|_{2}\geq\mu_{k+j}\left\|\tilde{{\bf w}}_{j}\right\|_{2}=\mu_{k+j}\geq 0, (59)

for all $j=1,\cdots,l$. By (58) and (59), we have

\|A-A_{k}\|_{F}=\left\|\sum_{i=k+1}^{r}\mu_{i}{\bf u}_{i}{\bf v}_{i}^{*}\right\|_{F}=\left\|\left(\begin{array}{c}\mu_{k+1}\\ \vdots\\ \mu_{r}\\ \end{array}\right)\right\|_{2}\leq\left\|\left(\begin{array}{c}\gamma_{1}\\ \vdots\\ \gamma_{r-k}\\ \end{array}\right)\right\|_{2}\leq\left\|\left(\begin{array}{c}\gamma_{1}\\ \vdots\\ \gamma_{r-k}\\ \vdots\\ \gamma_{l}\\ \end{array}\right)\right\|_{2}=\|A-B\|_{F},

which gives the desired result. ∎

8 Numerical Experiments

8.1 Truncated SVD for Low-Rank Approximation

This subsection is devoted to finding the best low-rank approximation of a dual complex matrix A𝔻m×nA\in{\mathbb{DC}}^{m\times n} based on the truncated SVD. Before proceeding, we first consider the unitary decomposition of a Hermitian dual complex matrix of the form B=AAB=A^{*}A with A𝔻m×nA\in{\mathbb{DC}}^{m\times n}. The algorithmic framework is stated in Algorithm 1.
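Throughout this section, a dual complex matrix A = A_{st} + A_{\mathcal{I}}\epsilon is conveniently stored as a pair of complex arrays, and all products are truncated at first order in \epsilon since \epsilon^{2}=0. A minimal Python sketch of this convention (the helper names are our own, for illustration only) is as follows.

```python
import numpy as np

def dual_matmul(X_st, X_I, Y_st, Y_I):
    """(X_st + X_I eps)(Y_st + Y_I eps) = X_st Y_st + (X_st Y_I + X_I Y_st) eps,
    since eps^2 = 0."""
    return X_st @ Y_st, X_st @ Y_I + X_I @ Y_st

def dual_conj_transpose(X_st, X_I):
    """Conjugate transpose acts on both the standard and infinitesimal parts."""
    return X_st.conj().T, X_I.conj().T
```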

Input: A=Ast+Aϵ𝔻m×nA=A_{st}+A_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{m\times n}
Output: Unitary UB=(UB)st+(UB)ϵ𝔻n×nU_{B}=(U_{B})_{st}+(U_{B})_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{n\times n}, diagonal ΣB=(ΣB)st+(ΣB)ϵ𝔻n×n\Sigma_{B}=(\Sigma_{B})_{st}+(\Sigma_{B})_{\mathcal{I}}\epsilon\in{\mathbb{D}}^{n\times n}
  • Compute the eigenvalue decomposition of Bst:=AstAstB_{st}:=A_{st}^{*}A_{st}:

    Bst=SDS,Sn×n unitary,D=diag(λ1Ik1,,λrIkr)n×nwithλ1>>λr0B_{st}=SDS^{*},~{}~{}S\in{\mathbb{C}}^{n\times n}\mbox{~{}unitary},~{}~{}D=\mathrm{diag}(\lambda_{1}I_{k_{1}},\cdots,\lambda_{r}I_{k_{r}})\in{\mathbb{R}}^{n\times n}\mathrm{~{}with~{}}\lambda_{1}>\cdots>\lambda_{r}\geq 0
  • Compute M=SBS=(C11C12C1rC12C22C2rC1rC2rCrr)M=S^{*}B_{\mathcal{I}}S=\left(\begin{array}[]{cccc}C_{11}&C_{12}&\cdots&C_{1r}\\ C_{12}^{*}&C_{22}&\cdots&C_{2r}\\ \vdots&\vdots&\vdots&\vdots\\ C_{1r}^{*}&C_{2r}^{*}&\cdots&C_{rr}\\ \end{array}\right)

  • Compute P=(OC12λ1λ2C1rλ1λrC12λ1λ2OC2rλ2λrC1rλ1λrC2rλ2λrO)P_{\mathcal{I}}=\left(\begin{array}[]{cccc}O&\frac{C_{12}}{\lambda_{1}-\lambda_{2}}&\cdots&\frac{C_{1r}}{\lambda_{1}-\lambda_{r}}\\ -\frac{C_{12}^{*}}{\lambda_{1}-\lambda_{2}}&O&\cdots&\frac{C_{2r}}{\lambda_{2}-\lambda_{r}}\\ \vdots&\vdots&\vdots&\vdots\\ -\frac{C_{1r}^{*}}{\lambda_{1}-\lambda_{r}}&-\frac{C_{2r}^{*}}{\lambda_{2}-\lambda_{r}}&\cdots&O\\ \end{array}\right)

  • For i=1,,ri=1,\cdots,r, perform the eigenvalue decomposition of CiiC_{ii}:

    Cii=ViDiVi,Viki×kiunitary,Di=diag(λi,1,,λi,ki)C_{ii}=V_{i}D_{i}V_{i}^{*},~{}~{}V_{i}\in{\mathbb{C}}^{k_{i}\times k_{i}}~{}\mathrm{unitary},~{}D_{i}=\mathrm{diag}(\lambda_{i,1},\cdots,\lambda_{i,k_{i}})
  • Set V^B=diag(V1,,Vr)\hat{V}_{B}=\mathrm{diag}(V_{1},\cdots,V_{r}), and compute

    (UB)st=SV^B,(UB)=SPV^B.(U_{B})_{st}=S\hat{V}_{B},~{}~{}(U_{B})_{\mathcal{I}}=SP_{\mathcal{I}}^{*}\hat{V}_{B}.
  • Set

    (ΣB)st=diag(λ1,,λ1k1times,,λr,,λrkrtimes)(\Sigma_{B})_{st}=\mathrm{diag}\left(\underbrace{\lambda_{1},\ldots,\lambda_{1}}_{k_{1}~{}\mathrm{times}},\ldots,\underbrace{\lambda_{r},\ldots,\lambda_{r}}_{k_{r}~{}\mathrm{times}}\right)

    and

    (ΣB)=diag(λ1,1,,λ1,k1,,λr,1,,λr,kr)(\Sigma_{B})_{\mathcal{I}}=\mathrm{diag}\left(\lambda_{1,1},\ldots,\lambda_{1,k_{1}},\ldots,\lambda_{r,1},\ldots,\lambda_{r,k_{r}}\right)
Algorithm 1 Unitary Decomposition of B=AA𝔻n×nB=A^{*}A\in{\mathbb{DC}}^{n\times n}
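For the reader's convenience, the following is a minimal Python sketch of Algorithm 1, restricted to the simplest case in which all eigenvalues of B_{st} are distinct (so every k_i = 1 and \hat{V}_B = I). The function name unitary_decomposition and the use of NumPy are our own illustrative choices, not part of the algorithm statement.

```python
import numpy as np

def unitary_decomposition(A_st, A_I):
    """Sketch of Algorithm 1 for B = A^* A, assuming the eigenvalues of
    B_st = A_st^* A_st are distinct (every k_i = 1)."""
    B_st = A_st.conj().T @ A_st
    B_I = A_st.conj().T @ A_I + A_I.conj().T @ A_st      # infinitesimal part of A^* A

    # Eigenvalue decomposition of the standard part, eigenvalues in decreasing order.
    lam, S = np.linalg.eigh(B_st)
    order = np.argsort(lam)[::-1]
    lam, S = lam[order], S[:, order]

    M = S.conj().T @ B_I @ S
    # Skew-Hermitian correction P_I with (i, j) entry M_ij / (lambda_i - lambda_j).
    n = lam.size
    P = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            if i != j:
                P[i, j] = M[i, j] / (lam[i] - lam[j])

    U_st = S                                   # (U_B)_st = S V_hat with V_hat = I
    U_I = S @ P.conj().T                       # (U_B)_I = S P_I^* V_hat
    Sigma_st = np.diag(lam)
    Sigma_I = np.diag(np.real(np.diag(M)))     # diagonal of M gives the infinitesimal eigenvalues
    return U_st, U_I, Sigma_st, Sigma_I
```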

Based on the unitary decomposition presented in Algorithm 1, we are in a position to present the procedure for the singular value decomposition of any given dual complex matrix A𝔻m×nA\in{\mathbb{DC}}^{m\times n}; see Algorithm 2.

According to the Eckart-Young like theorem as stated in Theorem 7.1, the low-rank approximation of A𝔻m×nA\in{\mathbb{DC}}^{m\times n} can be obtained by the truncated SVD. Specifically, given A𝔻m×nA\in{\mathbb{DC}}^{m\times n} and a positive integer k<min{m,n}k<\min\{m,n\}, Algorithm 2 can produce the SVD of AA in terms of A=VΣUA=V\Sigma U^{*}. Then the best low-rank approximation of AA with rank no more than kk, denoted by AkA_{k}, can be obtained by

Ak:=V:,1:kΣ1:k,1:k(U:,1:k).A_{k}:=V_{:,1:k}\Sigma_{1:k,1:k}\left(U_{:,1:k}\right)^{*}. (60)

Note that such a low-rank approximation may not be unique. For example, when Σkk=Σ(k+1)(k+1)\Sigma_{kk}=\Sigma_{(k+1)(k+1)}, both AkA_{k} as defined in (60) and

A~k:=V:,ΩΣΩ,Ω(U:,Ω)\tilde{A}_{k}:=V_{:,\Omega}\Sigma_{\Omega,\Omega}\left(U_{:,\Omega}\right)^{*} (61)

with Ω:={1,,k1,k+1}\Omega:=\{1,\ldots,k-1,k+1\} are best low-rank approximations of AA with rank no more than kk in the sense that

AAkF=AA~kF.\|A-A_{k}\|_{F}=\|A-\tilde{A}_{k}\|_{F}. (62)
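In terms of the standard and infinitesimal parts of the factors produced by Algorithm 2 below, (60) amounts to slicing the first k columns of the factors and multiplying them out to first order in ε. A short sketch (the helper name truncated_svd_approx is ours, for illustration only) is:

```python
import numpy as np

def truncated_svd_approx(V_st, V_I, S_st, S_I, U_st, U_I, k):
    """Sketch of (60): A_k = V[:, :k] Sigma[:k, :k] (U[:, :k])^*, with every
    factor split into its standard (st) and infinitesimal (I) parts."""
    Vk_st, Vk_I = V_st[:, :k], V_I[:, :k]
    Uk_st, Uk_I = U_st[:, :k], U_I[:, :k]
    Sk_st, Sk_I = S_st[:k, :k], S_I[:k, :k]
    # Dual matrix product, keeping only first-order terms in eps.
    Ak_st = Vk_st @ Sk_st @ Uk_st.conj().T
    Ak_I = (Vk_I @ Sk_st @ Uk_st.conj().T
            + Vk_st @ Sk_I @ Uk_st.conj().T
            + Vk_st @ Sk_st @ Uk_I.conj().T)
    return Ak_st, Ak_I
```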
Input: A=Ast+Aϵ𝔻m×nA=A_{st}+A_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{m\times n}
Output:
The unitary matrices: U=Ust+Uϵ𝔻n×nU=U_{st}+U_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{n\times n}, V=Vst+Vϵ𝔻m×mV=V_{st}+V_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{m\times m};
The rectangular diagonal matrix of all the singular values: Σ=Σst+Σϵ𝔻m×n\Sigma=\Sigma_{st}+\Sigma_{\mathcal{I}}\epsilon\in{\mathbb{D}}^{m\times n}.
Step 1: Decompose B:=AA𝔻n×nB:=A^{*}A\in{\mathbb{DC}}^{n\times n} by Algorithm 1 to get UB=(UB)st+(UB)ϵ𝔻n×nU_{B}=(U_{B})_{st}+(U_{B})_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{n\times n} and ΣB=(ΣB)st+(ΣB)ϵ𝔻n×n\Sigma_{B}=(\Sigma_{B})_{st}+(\Sigma_{B})_{\mathcal{I}}\epsilon\in{\mathbb{D}}^{n\times n} such that B=UBΣBUBB=U_{B}\Sigma_{B}U_{B}^{*}
Step 2: Set λ=(λ1,,λr)\lambda=\left(\lambda_{1},\ldots,\lambda_{r}\right) including all distinct diagonal entries of (ΣB)st(\Sigma_{B})_{st} and compute
s=λ0={i:λi0,i{1,,r}},andra=i=1ski.s=\|\lambda\|_{0}=\sharp\{i:\lambda_{i}\neq 0,i\in\{1,\ldots,r\}\},\mathrm{~{}and~{}}r_{a}=\sum\limits_{i=1}^{s}k_{i}.
Set
Ust1=(UB)st(:,1:ra),\displaystyle U^{1}_{st}=(U_{B})_{st}(:,1:r_{a}), Ust2=(UB)st(:,ra+1:n)\displaystyle~{}~{}U^{2}_{st}=(U_{B})_{st}(:,r_{a}+1:n)
U1=(UB)(:,1:ra),\displaystyle U^{1}_{\mathcal{I}}=(U_{B})_{\mathcal{I}}(:,1:r_{a}), U2=(UB)(:,ra+1:n)\displaystyle~{}~{}U^{2}_{\mathcal{I}}=(U_{B})_{\mathcal{I}}(:,r_{a}+1:n)
Compute
Σst1=diag(λ1,,λ1k1times,,λs,,λskstimes)\Sigma^{1}_{st}=\mathrm{diag}\left(\underbrace{\sqrt{\lambda_{1}},\ldots,\sqrt{\lambda_{1}}}_{k_{1}~{}\mathrm{times}},\ldots,\underbrace{\sqrt{\lambda_{s}},\ldots,\sqrt{\lambda_{s}}}_{k_{s}~{}\mathrm{times}}\right)
Σ1=diag(λ1,12λ1,,λ1,k12λ1,,λs,12λs,,λs,ks2λs)\Sigma^{1}_{\mathcal{I}}=\mathrm{diag}\left(\frac{\lambda_{1,1}}{2\sqrt{\lambda_{1}}},\ldots,\frac{\lambda_{1,k_{1}}}{2\sqrt{\lambda_{1}}},\ldots,\frac{\lambda_{s,1}}{2\sqrt{\lambda_{s}}},\ldots,\frac{\lambda_{s,k_{s}}}{2\sqrt{\lambda_{s}}}\right)
Σ=diag(λ1,12λ1λ1,,λ1,k12λ1λ1,,λs,12λsλs,,λs,ks2λsλs)\Sigma_{\mathcal{I}}^{-}=\mathrm{diag}\left(\frac{\lambda_{1,1}}{-2\lambda_{1}\sqrt{\lambda_{1}}},\ldots,\frac{\lambda_{1,k_{1}}}{-2\lambda_{1}\sqrt{\lambda_{1}}},\ldots,\frac{\lambda_{s,1}}{-2\lambda_{s}\sqrt{\lambda_{s}}},\ldots,\frac{\lambda_{s,k_{s}}}{-2\lambda_{s}\sqrt{\lambda_{s}}}\right)
Compute
Vst1=AstUst1(Σst1)1,V1=AstUst1Σ+AstU1(Σst1)1+AUst1(Σst1)1V^{1}_{st}=A_{st}U^{1}_{st}(\Sigma^{1}_{st})^{-1},~{}~{}V^{1}_{\mathcal{I}}=A_{st}U^{1}_{st}\Sigma_{\mathcal{I}}^{-}+A_{st}U^{1}_{\mathcal{I}}(\Sigma^{1}_{st})^{-1}+A_{\mathcal{I}}U^{1}_{st}(\Sigma^{1}_{st})^{-1}
Step 3: For V1=Vst1+V1ϵ𝔻m×raV^{1}=V^{1}_{st}+V^{1}_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{m\times r_{a}}, find V2=Vst2+V2ϵ𝔻m×(mra)V^{2}=V^{2}_{st}+V^{2}_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{m\times(m-r_{a})} such that (V1,V2)(V^{1},V^{2}) is unitary according to Corollary 3.4.
Step 4: Set
G=(Vst2)AstU2+(Vst2)AUst2G=(V^{2}_{st})^{*}A_{st}U^{2}_{\mathcal{I}}+(V^{2}_{st})^{*}A_{\mathcal{I}}U^{2}_{st}
Decompose G(mra)×(nra)G\in{\mathbb{C}}^{(m-r_{a})\times(n-r_{a})} by classical singular value decomposition to get unitary matrices W1(mra)×(mra)W_{1}\in{\mathbb{C}}^{(m-r_{a})\times(m-r_{a})}, W2(nra)×(nra)W_{2}\in{\mathbb{C}}^{(n-r_{a})\times(n-r_{a})} and rectangular diagonal matrix ΣG(mra)×(nra)\Sigma_{G}\in{\mathbb{R}}^{(m-r_{a})\times(n-r_{a})} with diagonals in nonincreasing order such that W1GW2=ΣGW_{1}^{*}GW_{2}=\Sigma_{G}. Compute
Ust=(Ust1,Ust2W2),U=(U1,U2W2),Vst=(Vst1,Vst2W1),V=(V1,V2W1),U_{st}=(U^{1}_{st},U^{2}_{st}W_{2}),\quad U_{\mathcal{I}}=(U^{1}_{\mathcal{I}},U^{2}_{\mathcal{I}}W_{2}),\quad V_{st}=(V^{1}_{st},V^{2}_{st}W_{1}),\quad V_{\mathcal{I}}=(V^{1}_{\mathcal{I}},V^{2}_{\mathcal{I}}W_{1}),
Σst=diag(Σst1,O(mra)×(nra)),Σ=diag(Σ1,ΣG)\Sigma_{st}=\mathrm{diag}\left(\Sigma^{1}_{st},O_{(m-r_{a})\times(n-r_{a})}\right),\quad\Sigma_{\mathcal{I}}=\mathrm{diag}\left(\Sigma^{1}_{\mathcal{I}},\Sigma_{G}\right)
Algorithm 2 Singular Value Decomposition of A𝔻m×nA\in{\mathbb{DC}}^{m\times n} in terms of A=VΣUA=V\Sigma U^{*}
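The following Python sketch illustrates Algorithm 2 in the simplest setting, namely when A_{st} has full column rank and distinct singular values (so r_a = n and Steps 3 and 4 reduce to completing V to a square unitary matrix, which we omit by returning only the thin factors). It relies on the unitary_decomposition sketch given after Algorithm 1; as before, the names are ours and this is not a full implementation of the algorithm.

```python
import numpy as np

def dual_svd_simple(A_st, A_I):
    """Thin dual SVD A = V Sigma U^*, assuming A_st has full column rank and
    distinct singular values (so r_a = n in Algorithm 2)."""
    U_st, U_I, D_st, D_I = unitary_decomposition(A_st, A_I)   # Algorithm 1 sketch
    lam = np.diag(D_st)            # eigenvalues of A_st^* A_st, in decreasing order
    lam_I = np.diag(D_I)           # their infinitesimal parts
    sig = np.sqrt(lam)             # standard parts of the singular values
    Sigma_st = np.diag(sig)
    Sigma_I = np.diag(lam_I / (2.0 * sig))                    # Sigma^1_I of Step 2
    Sigma_inv_st = np.diag(1.0 / sig)
    Sigma_inv_I = np.diag(-lam_I / (2.0 * lam * sig))         # Sigma^-_I of Step 2
    # Step 2: V^1 = A U^1 (Sigma^1)^{-1}, expanded to first order in eps.
    V_st = A_st @ U_st @ Sigma_inv_st
    V_I = (A_st @ U_st @ Sigma_inv_I
           + A_st @ U_I @ Sigma_inv_st
           + A_I @ U_st @ Sigma_inv_st)
    return V_st, V_I, Sigma_st, Sigma_I, U_st, U_I
```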

8.2 Numerical Results

In this subsection, we present some numerical results to show the efficiency of the proposed algorithms. All the codes are written in Python 3.9.5. The numerical experiments were carried out on a MacBook with an Intel m3 dual-core processor running at 1.2GHz and 8GB of RAM.

Example 8.1.

The dual complex matrix A=Ast+Aϵ𝔻8×4A=A_{st}+A_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{8\times 4} is given by

Ast=[0.0942+0.5476𝐢0.05580.8419𝐢0.0110+0.3480𝐢0.54870.5106𝐢0.8862+0.4817𝐢0.97100.9956𝐢0.80440.2831𝐢0.4500+0.2713𝐢0.7160+0.1136𝐢0.18220.1160𝐢0.14440.7116𝐢0.4144+0.9641𝐢0.40030.9425𝐢0.5649+0.1447𝐢0.8047+0.5493𝐢0.77440.1893𝐢0.83000.4158𝐢0.07390.8446𝐢0.85130.3221𝐢0.24510.4130𝐢0.1672+0.7633𝐢0.1062+0.0883𝐢0.9567+0.5460𝐢0.11830.2970𝐢0.97170.6890𝐢0.94100.8403𝐢0.50390.7443𝐢0.4999+0.8501𝐢0.08670.5021𝐢0.20650.2641𝐢0.83580.5494𝐢0.7137+0.8727𝐢]A_{st}=\left[\begin{array}[]{rrrr}0.0942+0.5476\mathbf{i}&-0.0558-0.8419\mathbf{i}&-0.0110+0.3480\mathbf{i}&-0.5487-0.5106\mathbf{i}\\ 0.8862+0.4817\mathbf{i}&0.9710-0.9956\mathbf{i}&0.8044-0.2831\mathbf{i}&0.4500+0.2713\mathbf{i}\\ -0.7160+0.1136\mathbf{i}&0.1822-0.1160\mathbf{i}&0.1444-0.7116\mathbf{i}&0.4144+0.9641\mathbf{i}\\ 0.4003-0.9425\mathbf{i}&-0.5649+0.1447\mathbf{i}&-0.8047+0.5493\mathbf{i}&0.7744-0.1893\mathbf{i}\\ -0.8300-0.4158\mathbf{i}&0.0739-0.8446\mathbf{i}&0.8513-0.3221\mathbf{i}&-0.2451-0.4130\mathbf{i}\\ -0.1672+0.7633\mathbf{i}&-0.1062+0.0883\mathbf{i}&-0.9567+0.5460\mathbf{i}&0.1183-0.2970\mathbf{i}\\ 0.9717-0.6890\mathbf{i}&0.9410-0.8403\mathbf{i}&0.5039-0.7443\mathbf{i}&0.4999+0.8501\mathbf{i}\\ -0.0867-0.5021\mathbf{i}&0.2065-0.2641\mathbf{i}&-0.8358-0.5494\mathbf{i}&0.7137+0.8727\mathbf{i}\end{array}\right]

and

A=[0.50100.8953𝐢0.61440.1153𝐢0.2793+0.2216𝐢0.99060.8764𝐢0.96560.5856𝐢0.07860.3271𝐢0.66260.9692𝐢0.6339+0.2159𝐢0.37070.9521𝐢0.6612+0.7596𝐢0.3158+0.8447𝐢0.22540.8259𝐢0.8779+0.8864𝐢0.1376+0.7962𝐢0.06010.8183𝐢0.8345+0.7047𝐢0.25930.7807𝐢0.50110.3864𝐢0.75600.1174𝐢0.1853+0.1771𝐢0.5777+0.2567𝐢0.8656+0.8806𝐢0.8558+0.0361𝐢0.03260.7419𝐢0.5477+0.7905𝐢0.99920.0610𝐢0.1922+0.5463𝐢0.77530.0609𝐢0.4876+0.6894𝐢0.55060.5568𝐢0.05930.6293𝐢0.6266+0.4538𝐢].A_{\mathcal{I}}=\left[\begin{array}[]{rrrr}-0.5010-0.8953\mathbf{i}&0.6144-0.1153\mathbf{i}&-0.2793+0.2216\mathbf{i}&-0.9906-0.8764\mathbf{i}\\ 0.9656-0.5856\mathbf{i}&0.0786-0.3271\mathbf{i}&-0.6626-0.9692\mathbf{i}&-0.6339+0.2159\mathbf{i}\\ 0.3707-0.9521\mathbf{i}&-0.6612+0.7596\mathbf{i}&0.3158+0.8447\mathbf{i}&0.2254-0.8259\mathbf{i}\\ 0.8779+0.8864\mathbf{i}&-0.1376+0.7962\mathbf{i}&0.0601-0.8183\mathbf{i}&0.8345+0.7047\mathbf{i}\\ -0.2593-0.7807\mathbf{i}&0.5011-0.3864\mathbf{i}&0.7560-0.1174\mathbf{i}&0.1853+0.1771\mathbf{i}\\ -0.5777+0.2567\mathbf{i}&0.8656+0.8806\mathbf{i}&-0.8558+0.0361\mathbf{i}&-0.0326-0.7419\mathbf{i}\\ 0.5477+0.7905\mathbf{i}&0.9992-0.0610\mathbf{i}&-0.1922+0.5463\mathbf{i}&0.7753-0.0609\mathbf{i}\\ 0.4876+0.6894\mathbf{i}&-0.5506-0.5568\mathbf{i}&-0.0593-0.6293\mathbf{i}&-0.6266+0.4538\mathbf{i}\end{array}\right].

By using Algorithm 2, all the nonzero singular values of the dual complex matrix AA are positive appreciable dual numbers given by

ΣA=diag(3.4147+0.5451ϵ,2.4280+0.6444ϵ,2.12870.5667ϵ,0.8744+0.4006ϵ).\Sigma_{A}={\rm diag}\left(3.4147+0.5451\epsilon,2.4280+0.6444\epsilon,2.1287-0.5667\epsilon,0.8744+0.4006\epsilon\right).

The corresponding partially unitary matrix V=Vst+Vϵ𝔻8×4V=V_{st}+V_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{8\times 4} is given by

Vst=[0.0279+0.0919𝐢0.01160.1099𝐢0.13570.5137𝐢0.4708+0.3136𝐢0.35060.3066𝐢0.26280.0759𝐢0.15300.4269𝐢0.2549+0.0570𝐢0.03510.2229𝐢0.2700+0.2571𝐢0.3419+0.1245𝐢0.2949+0.3974𝐢0.34740.0418𝐢0.38780.1294𝐢0.1672+0.3037𝐢0.0640+0.1271𝐢0.01420.0910𝐢0.40780.4301𝐢0.22740.2196𝐢0.0112+0.1497𝐢0.1792+0.2150𝐢0.1925+0.2859𝐢0.04220.2238𝐢0.3141+0.1643𝐢0.27040.5472𝐢0.21670.1328𝐢0.0826+0.0717𝐢0.12220.0123𝐢0.15990.3398𝐢0.0926+0.2520𝐢0.1853+0.2477𝐢0.3413+0.2587𝐢],V_{st}=\left[\begin{array}[]{rrrr}0.0279+0.0919\mathbf{i}&-0.0116-0.1099\mathbf{i}&-0.1357-0.5137\mathbf{i}&-0.4708+0.3136\mathbf{i}\\ 0.3506-0.3066\mathbf{i}&0.2628-0.0759\mathbf{i}&0.1530-0.4269\mathbf{i}&0.2549+0.0570\mathbf{i}\\ 0.0351-0.2229\mathbf{i}&-0.2700+0.2571\mathbf{i}&0.3419+0.1245\mathbf{i}&0.2949+0.3974\mathbf{i}\\ -0.3474-0.0418\mathbf{i}&0.3878-0.1294\mathbf{i}&-0.1672+0.3037\mathbf{i}&0.0640+0.1271\mathbf{i}\\ 0.0142-0.0910\mathbf{i}&-0.4078-0.4301\mathbf{i}&0.2274-0.2196\mathbf{i}&-0.0112+0.1497\mathbf{i}\\ -0.1792+0.2150\mathbf{i}&0.1925+0.2859\mathbf{i}&0.0422-0.2238\mathbf{i}&-0.3141+0.1643\mathbf{i}\\ 0.2704-0.5472\mathbf{i}&0.2167-0.1328\mathbf{i}&0.0826+0.0717\mathbf{i}&-0.1222-0.0123\mathbf{i}\\ -0.1599-0.3398\mathbf{i}&0.0926+0.2520\mathbf{i}&0.1853+0.2477\mathbf{i}&-0.3413+0.2587\mathbf{i}\end{array}\right],
V=[0.0154+0.0519𝐢0.88830.9381𝐢0.27910.0629𝐢0.24140.7158𝐢0.09060.2965𝐢0.58980.3918𝐢0.87540.0513𝐢0.11410.2964𝐢0.2370+0.3518𝐢0.76960.6322𝐢0.5323+0.5947𝐢0.3348+0.1037𝐢0.26150.1980𝐢0.4132+1.1643𝐢0.93510.2609𝐢0.50670.9837𝐢0.02650.2820𝐢0.02040.7098𝐢0.2755+1.3629𝐢0.5371+0.3169𝐢0.0266+0.2995𝐢0.05760.3514𝐢0.41290.9124𝐢0.14880.4585𝐢0.1512+0.0934𝐢0.5073+0.4991𝐢0.07470.2496𝐢0.06080.2841𝐢0.1614+0.0240𝐢0.3599+0.4377𝐢0.18070.8833𝐢1.0300+0.4473𝐢]V_{\mathcal{I}}=\left[\begin{array}[]{rrrr}0.0154+0.0519\mathbf{i}&-0.8883-0.9381\mathbf{i}&-0.2791-0.0629\mathbf{i}&0.2414-0.7158\mathbf{i}\\ 0.0906-0.2965\mathbf{i}&-0.5898-0.3918\mathbf{i}&-0.8754-0.0513\mathbf{i}&0.1141-0.2964\mathbf{i}\\ -0.2370+0.3518\mathbf{i}&0.7696-0.6322\mathbf{i}&0.5323+0.5947\mathbf{i}&-0.3348+0.1037\mathbf{i}\\ 0.2615-0.1980\mathbf{i}&0.4132+1.1643\mathbf{i}&-0.9351-0.2609\mathbf{i}&0.5067-0.9837\mathbf{i}\\ 0.0265-0.2820\mathbf{i}&0.0204-0.7098\mathbf{i}&0.2755+1.3629\mathbf{i}&0.5371+0.3169\mathbf{i}\\ 0.0266+0.2995\mathbf{i}&-0.0576-0.3514\mathbf{i}&0.4129-0.9124\mathbf{i}&-0.1488-0.4585\mathbf{i}\\ 0.1512+0.0934\mathbf{i}&0.5073+0.4991\mathbf{i}&-0.0747-0.2496\mathbf{i}&0.0608-0.2841\mathbf{i}\\ 0.1614+0.0240\mathbf{i}&0.3599+0.4377\mathbf{i}&-0.1807-0.8833\mathbf{i}&-1.0300+0.4473\mathbf{i}\end{array}\right]

and the unitary matrix U=Ust+Uϵ𝔻4×4U=U_{st}+U_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{4\times 4} is given by

Ust=[0.32510.2489𝐢0.62010.62620.0516𝐢0.04010.2268𝐢0.49080.0864𝐢0.29370.0301𝐢0.64270.43470.2497𝐢0.59010.26970.4888𝐢0.06000.0323𝐢0.5769+0.0519𝐢0.36340.3220𝐢0.4604+0.0681𝐢0.4329+0.0089𝐢0.6001],U_{st}=\left[\begin{array}[]{rrrr}0.3251-0.2489\mathbf{i}&0.6201&-0.6262-0.0516\mathbf{i}&0.0401-0.2268\mathbf{i}\\ 0.4908-0.0864\mathbf{i}&0.2937-0.0301\mathbf{i}&0.6427&-0.4347-0.2497\mathbf{i}\\ 0.5901&-0.2697-0.4888\mathbf{i}&0.0600-0.0323\mathbf{i}&0.5769+0.0519\mathbf{i}\\ -0.3634-0.3220\mathbf{i}&0.4604+0.0681\mathbf{i}&0.4329+0.0089\mathbf{i}&0.6001\end{array}\right],
U=[0.17610.0836𝐢0.9991+0.8785𝐢0.56860.7263𝐢0.32470.5953𝐢0.2281+0.1830𝐢0.77880.8423𝐢0.29600.7558𝐢0.0148+0.6665𝐢0.19690.2544𝐢0.30370.0170𝐢0.2460+1.7207𝐢0.17770.1192𝐢0.1222+0.0423𝐢0.64080.2896𝐢0.30520.0871𝐢0.1405+0.5359𝐢].U_{\mathcal{I}}=\left[\begin{array}[]{rrrr}0.1761-0.0836\mathbf{i}&-0.9991+0.8785\mathbf{i}&-0.5686-0.7263\mathbf{i}&0.3247-0.5953\mathbf{i}\\ 0.2281+0.1830\mathbf{i}&0.7788-0.8423\mathbf{i}&-0.2960-0.7558\mathbf{i}&-0.0148+0.6665\mathbf{i}\\ -0.1969-0.2544\mathbf{i}&-0.3037-0.0170\mathbf{i}&-0.2460+1.7207\mathbf{i}&0.1777-0.1192\mathbf{i}\\ 0.1222+0.0423\mathbf{i}&0.6408-0.2896\mathbf{i}&-0.3052-0.0871\mathbf{i}&-0.1405+0.5359\mathbf{i}\end{array}\right].
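As a sanity check, one can feed the matrices A_{st} and A_{\mathcal{I}} of this example (entered as NumPy arrays) into the dual_svd_simple and dual_matmul sketches above; since the singular values here are distinct and nonzero, the computed factors should reproduce A to first order in ε.

```python
import numpy as np

# A_st, A_I: the 8 x 4 complex matrices of Example 8.1, entered as NumPy arrays.
V_st, V_I, S_st, S_I, U_st, U_I = dual_svd_simple(A_st, A_I)
T_st, T_I = dual_matmul(V_st, V_I, S_st, S_I)
R_st, R_I = dual_matmul(T_st, T_I, *dual_conj_transpose(U_st, U_I))
print(np.max(np.abs(R_st - A_st)), np.max(np.abs(R_I - A_I)))   # both should be tiny
```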
Example 8.2.

We test dual complex matrices whose standard part has repeated singular values. The matrix A=Ast+Aϵ𝔻6×4A=A_{st}+A_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{6\times 4} is given by

Ast=[0.1041+0.3547𝐢0.05920.1799𝐢0.44380.4804𝐢0.0835+0.4100𝐢0.4672+0.3171𝐢0.19170.3081𝐢0.03170.0438𝐢0.21920.4308𝐢0.48630.0343𝐢0.1600+0.5508𝐢0.23480.0403𝐢0.62140.1158𝐢0.3769+0.4002𝐢0.28750.4984𝐢0.6286+0.2987𝐢0.75970.1947𝐢0.25830.4642𝐢0.05090.3241𝐢0.29190.0071𝐢0.00910.1178𝐢0.4598+0.4482𝐢0.64060.0624𝐢0.12580.2293𝐢0.5014+0.3828𝐢]A_{st}=\left[\begin{array}[]{rrrr}0.1041+0.3547\mathbf{i}&0.0592-0.1799\mathbf{i}&0.4438-0.4804\mathbf{i}&0.0835+0.4100\mathbf{i}\\ -0.4672+0.3171\mathbf{i}&-0.1917-0.3081\mathbf{i}&-0.0317-0.0438\mathbf{i}&-0.2192-0.4308\mathbf{i}\\ -0.4863-0.0343\mathbf{i}&-0.1600+0.5508\mathbf{i}&-0.2348-0.0403\mathbf{i}&0.6214-0.1158\mathbf{i}\\ -0.3769+0.4002\mathbf{i}&0.2875-0.4984\mathbf{i}&0.6286+0.2987\mathbf{i}&-0.7597-0.1947\mathbf{i}\\ 0.2583-0.4642\mathbf{i}&-0.0509-0.3241\mathbf{i}&-0.2919-0.0071\mathbf{i}&-0.0091-0.1178\mathbf{i}\\ 0.4598+0.4482\mathbf{i}&-0.6406-0.0624\mathbf{i}&-0.1258-0.2293\mathbf{i}&-0.5014+0.3828\mathbf{i}\end{array}\right]

and

A=[0.8515+0.6585𝐢0.15890.2629𝐢0.95870.9404𝐢0.17170.5454𝐢0.8687+0.2795𝐢0.0565+0.5294𝐢0.2467+0.6030𝐢0.0594+0.6913𝐢0.7183+0.8306𝐢0.17410.0283𝐢0.83070.8884𝐢0.7921+0.2311𝐢0.45440.3646𝐢0.3832+0.7387𝐢0.0662+0.0687𝐢0.69020.0236𝐢0.69200.5542𝐢0.72150.3887𝐢0.0265+0.0162𝐢0.31950.5390𝐢0.8588+0.6675𝐢0.53370.0519𝐢0.75990.1386𝐢0.14960.8450𝐢].A_{\mathcal{I}}=\left[\begin{array}[]{rrrr}-0.8515+0.6585\mathbf{i}&0.1589-0.2629\mathbf{i}&0.9587-0.9404\mathbf{i}&0.1717-0.5454\mathbf{i}\\ 0.8687+0.2795\mathbf{i}&0.0565+0.5294\mathbf{i}&0.2467+0.6030\mathbf{i}&0.0594+0.6913\mathbf{i}\\ -0.7183+0.8306\mathbf{i}&0.1741-0.0283\mathbf{i}&-0.8307-0.8884\mathbf{i}&0.7921+0.2311\mathbf{i}\\ -0.4544-0.3646\mathbf{i}&-0.3832+0.7387\mathbf{i}&-0.0662+0.0687\mathbf{i}&0.6902-0.0236\mathbf{i}\\ 0.6920-0.5542\mathbf{i}&-0.7215-0.3887\mathbf{i}&-0.0265+0.0162\mathbf{i}&-0.3195-0.5390\mathbf{i}\\ 0.8588+0.6675\mathbf{i}&0.5337-0.0519\mathbf{i}&0.7599-0.1386\mathbf{i}&0.1496-0.8450\mathbf{i}\end{array}\right].

Here the complex matrix AstA_{st} is constructed with singular values 2,1,1,02,1,1,0. By using Algorithm 2, all the nonzero singular values of the dual complex matrix AA are positive dual numbers given by

ΣA=diag(20.4551ϵ,10.4524ϵ,1+1.9418ϵ,0+0.9203ϵ).\Sigma_{A}={\rm diag}\left(2-0.4551\epsilon,1-0.4524\epsilon,1+1.9418\epsilon,0+0.9203\epsilon\right).

The corresponding partially unitary matrix V=Vst+Vϵ𝔻6×4V=V_{st}+V_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{6\times 4} is given by

Vst=[0.21060.2339𝐢0.3265+0.2413𝐢0.3539+0.2987𝐢0.4623+0.5243𝐢0.22300.1973𝐢0.5322+0.0760𝐢0.0513+0.2022𝐢0.2601+0.1058𝐢0.0612+0.3576𝐢0.27410.2340𝐢0.4780+0.3717𝐢0.04900.0799𝐢0.36060.5315𝐢0.14930.0002𝐢0.08120.2319𝐢0.3017+0.2590𝐢0.0463+0.1119𝐢0.1608+0.3877𝐢0.40530.3000𝐢0.4113+0.0855𝐢0.39800.3085𝐢0.33250.3275𝐢0.2463+0.0019𝐢0.29010.0679𝐢],V_{st}=\left[\begin{array}[]{rrrr}-0.2106-0.2339\mathbf{i}&-0.3265+0.2413\mathbf{i}&-0.3539+0.2987\mathbf{i}&-0.4623+0.5243\mathbf{i}\\ 0.2230-0.1973\mathbf{i}&0.5322+0.0760\mathbf{i}&-0.0513+0.2022\mathbf{i}&-0.2601+0.1058\mathbf{i}\\ 0.0612+0.3576\mathbf{i}&0.2741-0.2340\mathbf{i}&-0.4780+0.3717\mathbf{i}&-0.0490-0.0799\mathbf{i}\\ 0.3606-0.5315\mathbf{i}&-0.1493-0.0002\mathbf{i}&-0.0812-0.2319\mathbf{i}&0.3017+0.2590\mathbf{i}\\ -0.0463+0.1119\mathbf{i}&0.1608+0.3877\mathbf{i}&0.4053-0.3000\mathbf{i}&-0.4113+0.0855\mathbf{i}\\ -0.3980-0.3085\mathbf{i}&0.3325-0.3275\mathbf{i}&0.2463+0.0019\mathbf{i}&-0.2901-0.0679\mathbf{i}\end{array}\right],
V=[0.13710.6584𝐢0.2540+0.4767𝐢0.1966+0.0528𝐢0.26480.0997𝐢0.2880+0.0937𝐢0.37500.4115𝐢0.61100.5756𝐢0.12520.2210𝐢0.0223+0.1212𝐢0.0878+0.1483𝐢0.1702+0.9506𝐢0.32310.0264𝐢0.1348+0.3201𝐢0.8156+0.2973𝐢0.2300+0.4261𝐢0.2807+0.1123𝐢0.2812+0.2160𝐢0.4647+0.4034𝐢0.1816+0.1141𝐢0.06890.1729𝐢0.13920.1706𝐢0.14970.4576𝐢0.0619+0.2437𝐢0.23550.1253𝐢]V_{\mathcal{I}}=\left[\begin{array}[]{rrrr}0.1371-0.6584\mathbf{i}&0.2540+0.4767\mathbf{i}&-0.1966+0.0528\mathbf{i}&0.2648-0.0997\mathbf{i}\\ -0.2880+0.0937\mathbf{i}&-0.3750-0.4115\mathbf{i}&0.6110-0.5756\mathbf{i}&-0.1252-0.2210\mathbf{i}\\ 0.0223+0.1212\mathbf{i}&0.0878+0.1483\mathbf{i}&0.1702+0.9506\mathbf{i}&-0.3231-0.0264\mathbf{i}\\ 0.1348+0.3201\mathbf{i}&0.8156+0.2973\mathbf{i}&0.2300+0.4261\mathbf{i}&0.2807+0.1123\mathbf{i}\\ -0.2812+0.2160\mathbf{i}&0.4647+0.4034\mathbf{i}&-0.1816+0.1141\mathbf{i}&0.0689-0.1729\mathbf{i}\\ 0.1392-0.1706\mathbf{i}&-0.1497-0.4576\mathbf{i}&0.0619+0.2437\mathbf{i}&-0.2355-0.1253\mathbf{i}\end{array}\right]

and the unitary matrix U=Ust+Uϵ𝔻4×4U=U_{st}+U_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{4\times 4} is given by

Ust=[0.52370.37440.0051𝐢0.6728+0.0023𝐢0.36460.4219+0.0715𝐢0.7301+0.1798𝐢0.1650+0.1569𝐢0.44680.0091𝐢0.09660.3988𝐢0.4463+0.0870𝐢0.47840.1274𝐢0.5654+0.2451𝐢0.0912+0.6050𝐢0.1060+0.2764𝐢0.3073+0.3948𝐢0.32880.4238],U_{st}=\left[\begin{array}[]{rrrr}-0.5237&-0.3744-0.0051\mathbf{i}&0.6728+0.0023\mathbf{i}&-0.3646\\ 0.4219+0.0715\mathbf{i}&-0.7301+0.1798\mathbf{i}&0.1650+0.1569\mathbf{i}&0.4468-0.0091\mathbf{i}\\ 0.0966-0.3988\mathbf{i}&-0.4463+0.0870\mathbf{i}&-0.4784-0.1274\mathbf{i}&-0.5654+0.2451\mathbf{i}\\ -0.0912+0.6050\mathbf{i}&-0.1060+0.2764\mathbf{i}&-0.3073+0.3948\mathbf{i}&-0.3288-0.4238\end{array}\right],
U=[0.30570.1480𝐢0.05390.3954𝐢0.17310.3048𝐢0.70690.1562𝐢0.32370.2247𝐢0.0843+0.3301𝐢0.3227+0.3169𝐢0.0851+0.3018𝐢0.01650.7156𝐢0.1991+0.0450𝐢0.72100.4880𝐢0.25490.2455𝐢0.04180.4878𝐢0.34110.1952𝐢0.1419+0.5425𝐢0.7384+0.2460𝐢].U_{\mathcal{I}}=\left[\begin{array}[]{rrrr}-0.3057-0.1480\mathbf{i}&0.0539-0.3954\mathbf{i}&0.1731-0.3048\mathbf{i}&0.7069-0.1562\mathbf{i}\\ -0.3237-0.2247\mathbf{i}&-0.0843+0.3301\mathbf{i}&-0.3227+0.3169\mathbf{i}&0.0851+0.3018\mathbf{i}\\ -0.0165-0.7156\mathbf{i}&0.1991+0.0450\mathbf{i}&0.7210-0.4880\mathbf{i}&-0.2549-0.2455\mathbf{i}\\ -0.0418-0.4878\mathbf{i}&-0.3411-0.1952\mathbf{i}&0.1419+0.5425\mathbf{i}&-0.7384+0.2460\mathbf{i}\end{array}\right].
Example 8.3.

We test the proposed algorithms on randomly generated data to illustrate a possible application to brain science. The dual complex matrix A=Ast+Aϵ𝔻10×4A=A_{st}+A_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{10\times 4} is given by

Ast=[0.65160.7586𝐢0.79090.6119𝐢0.96870.2484𝐢0.8647+0.5022𝐢0.2923+0.9563𝐢0.99120.1326𝐢0.0899+0.9960𝐢0.8026+0.5965𝐢0.50780.8615𝐢0.5161+0.8565𝐢0.7457+0.6663𝐢0.72110.6928𝐢0.7627+0.6467𝐢0.23770.9713𝐢0.88510.4655𝐢0.9973+0.0734𝐢0.6893+0.7245𝐢0.50610.8625𝐢0.9634+0.2681𝐢0.54200.8404𝐢0.99600.0896𝐢0.5090+0.8608𝐢0.72500.6887𝐢0.50130.8653𝐢0.9767+0.2148𝐢0.60860.7935𝐢0.9909+0.1345𝐢0.7155+0.6986𝐢0.7858+0.6185𝐢0.00431.0000𝐢0.2393+0.9710𝐢0.74170.6708𝐢0.74790.6638𝐢0.94620.3235𝐢0.82820.5605𝐢0.65200.7582𝐢0.9988+0.0497𝐢0.7748+0.6322𝐢0.9035+0.4286𝐢0.95100.3092𝐢]A_{st}=\left[\begin{array}[]{rrrr}0.6516-0.7586\mathbf{i}&0.7909-0.6119\mathbf{i}&-0.9687-0.2484\mathbf{i}&-0.8647+0.5022\mathbf{i}\\ 0.2923+0.9563\mathbf{i}&-0.9912-0.1326\mathbf{i}&0.0899+0.9960\mathbf{i}&-0.8026+0.5965\mathbf{i}\\ -0.5078-0.8615\mathbf{i}&-0.5161+0.8565\mathbf{i}&0.7457+0.6663\mathbf{i}&-0.7211-0.6928\mathbf{i}\\ 0.7627+0.6467\mathbf{i}&-0.2377-0.9713\mathbf{i}&0.8851-0.4655\mathbf{i}&-0.9973+0.0734\mathbf{i}\\ 0.6893+0.7245\mathbf{i}&-0.5061-0.8625\mathbf{i}&-0.9634+0.2681\mathbf{i}&-0.5420-0.8404\mathbf{i}\\ -0.9960-0.0896\mathbf{i}&0.5090+0.8608\mathbf{i}&0.7250-0.6887\mathbf{i}&-0.5013-0.8653\mathbf{i}\\ -0.9767+0.2148\mathbf{i}&0.6086-0.7935\mathbf{i}&-0.9909+0.1345\mathbf{i}&0.7155+0.6986\mathbf{i}\\ -0.7858+0.6185\mathbf{i}&-0.0043-1.0000\mathbf{i}&-0.2393+0.9710\mathbf{i}&-0.7417-0.6708\mathbf{i}\\ -0.7479-0.6638\mathbf{i}&-0.9462-0.3235\mathbf{i}&-0.8282-0.5605\mathbf{i}&0.6520-0.7582\mathbf{i}\\ -0.9988+0.0497\mathbf{i}&0.7748+0.6322\mathbf{i}&-0.9035+0.4286\mathbf{i}&-0.9510-0.3092\mathbf{i}\end{array}\right]

and

A=[0.74930.4794𝐢0.88860.3328𝐢0.8709+0.0308𝐢0.7670+0.7814𝐢0.6452+0.3523𝐢0.63830.7366𝐢0.4428+0.3919𝐢0.44970.0075𝐢0.25800.8536𝐢0.2663+0.8644𝐢0.9955+0.6742𝐢0.47130.6850𝐢0.6595+0.8259𝐢0.34090.7922𝐢0.78190.2863𝐢1.1005+0.2526𝐢1.0198+0.9021𝐢0.17560.6849𝐢0.6328+0.4457𝐢0.21140.6628𝐢0.9302+0.1061𝐢0.5748+1.0565𝐢0.79080.4930𝐢0.43550.6696𝐢0.8158+0.1512𝐢0.76940.8571𝐢0.8300+0.0709𝐢0.8764+0.6350𝐢0.3431+0.6388𝐢0.43850.9797𝐢0.2035+0.9913𝐢0.29890.6504𝐢0.28040.0873𝐢0.4786+0.2530𝐢0.3606+0.0160𝐢1.11960.1817𝐢0.47920.1506𝐢1.2944+0.4319𝐢0.3839+0.2283𝐢0.43140.5095𝐢].A_{\mathcal{I}}=\left[\begin{array}[]{rrrr}0.7493-0.4794\mathbf{i}&0.8886-0.3328\mathbf{i}&-0.8709+0.0308\mathbf{i}&-0.7670+0.7814\mathbf{i}\\ 0.6452+0.3523\mathbf{i}&-0.6383-0.7366\mathbf{i}&0.4428+0.3919\mathbf{i}&-0.4497-0.0075\mathbf{i}\\ -0.2580-0.8536\mathbf{i}&-0.2663+0.8644\mathbf{i}&0.9955+0.6742\mathbf{i}&-0.4713-0.6850\mathbf{i}\\ 0.6595+0.8259\mathbf{i}&-0.3409-0.7922\mathbf{i}&0.7819-0.2863\mathbf{i}&-1.1005+0.2526\mathbf{i}\\ 1.0198+0.9021\mathbf{i}&-0.1756-0.6849\mathbf{i}&-0.6328+0.4457\mathbf{i}&-0.2114-0.6628\mathbf{i}\\ -0.9302+0.1061\mathbf{i}&0.5748+1.0565\mathbf{i}&0.7908-0.4930\mathbf{i}&-0.4355-0.6696\mathbf{i}\\ -0.8158+0.1512\mathbf{i}&0.7694-0.8571\mathbf{i}&-0.8300+0.0709\mathbf{i}&0.8764+0.6350\mathbf{i}\\ -0.3431+0.6388\mathbf{i}&0.4385-0.9797\mathbf{i}&0.2035+0.9913\mathbf{i}&-0.2989-0.6504\mathbf{i}\\ -0.2804-0.0873\mathbf{i}&-0.4786+0.2530\mathbf{i}&-0.3606+0.0160\mathbf{i}&1.1196-0.1817\mathbf{i}\\ -0.4792-0.1506\mathbf{i}&1.2944+0.4319\mathbf{i}&-0.3839+0.2283\mathbf{i}&-0.4314-0.5095\mathbf{i}\end{array}\right].

Here the phase matrix Ast10×4A_{st}\in{\mathbb{C}}^{10\times 4} is randomly generated such that each of its elements has modulus 1, and the relative phase matrix A10×4A_{\mathcal{I}}\in{\mathbb{C}}^{10\times 4} is calculated by subtracting from each element of AstA_{st} the mean of its row. See [1]. By using Algorithm 1, all the eigenvalues of the dual complex matrix B=AA𝔻4×4B=A^{*}A\in{\mathbb{DC}}^{4\times 4} are positive dual numbers given by

ΣB=diag(16.3352+27.3465ϵ,12.836+22.9941ϵ,7.3681+9.9258ϵ,3.4607+4.1092ϵ).\Sigma_{B}={\rm diag}\left(16.3352+27.3465\epsilon,12.836+22.9941\epsilon,7.3681+9.9258\epsilon,3.4607+4.1092\epsilon\right).

The corresponding unitary matrix U=Ust+Uϵ𝔻4×4U=U_{st}+U_{\mathcal{I}}\epsilon\in{\mathbb{DC}}^{4\times 4} is given by

Ust=[0.60720.30810.48050.55270.49920.0776𝐢0.25810.5157𝐢0.1719+0.0728𝐢0.5541+0.2654𝐢0.48580.1602𝐢0.27580.3732𝐢0.2524+0.4806𝐢0.1606+0.4500𝐢0.02700.3371𝐢0.2539+0.5411𝐢0.3819+0.5419𝐢0.22020.2008𝐢]U_{st}=\left[\begin{array}[]{rrrr}0.6072&-0.3081&0.4805&-0.5527\\ -0.4992-0.0776\mathbf{i}&-0.2581-0.5157\mathbf{i}&-0.1719+0.0728\mathbf{i}&-0.5541+0.2654\mathbf{i}\\ 0.4858-0.1602\mathbf{i}&0.2758-0.3732\mathbf{i}&-0.2524+0.4806\mathbf{i}&0.1606+0.4500\mathbf{i}\\ -0.0270-0.3371\mathbf{i}&-0.2539+0.5411\mathbf{i}&-0.3819+0.5419\mathbf{i}&-0.2202-0.2008\mathbf{i}\end{array}\right]

and

U=[0.0144+0.5457𝐢0.06250.7958𝐢0.09240.0491𝐢0.0297+0.3729𝐢0.8851+0.3259𝐢0.2346+0.7871𝐢0.15950.4347𝐢0.20080.0086𝐢0.48330.0491𝐢0.08960.3614𝐢0.96300.0557𝐢0.79320.0479𝐢0.0467+0.5327𝐢0.0852+0.5627𝐢0.8119+0.2002𝐢0.5946+0.3713𝐢].U_{\mathcal{I}}=\left[\begin{array}[]{rrrr}-0.0144+0.5457\mathbf{i}&0.0625-0.7958\mathbf{i}&0.0924-0.0491\mathbf{i}&0.0297+0.3729\mathbf{i}\\ -0.8851+0.3259\mathbf{i}&0.2346+0.7871\mathbf{i}&0.1595-0.4347\mathbf{i}&-0.2008-0.0086\mathbf{i}\\ -0.4833-0.0491\mathbf{i}&0.0896-0.3614\mathbf{i}&-0.9630-0.0557\mathbf{i}&-0.7932-0.0479\mathbf{i}\\ 0.0467+0.5327\mathbf{i}&-0.0852+0.5627\mathbf{i}&0.8119+0.2002\mathbf{i}&-0.5946+0.3713\mathbf{i}\end{array}\right].

In principal component analysis, one can choose the first few columns of UU to generate principal components of the data; a minimal sketch is given below. Besides, we also tested a phase matrix AstA_{st} of size 20000×50020000\times 500, for which the unitary matrix UU can be computed in about 4.5 seconds.
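The following sketch illustrates this use of the decomposition, following the construction described in this example; the data generation and helper names are our own, and the projection is expanded to first order in ε (see [1] for the original setting).

```python
import numpy as np

# Generate a random phase matrix with unit-modulus entries and its relative phases.
rng = np.random.default_rng(0)
A_st = np.exp(2j * np.pi * rng.random((10, 4)))         # each entry has modulus 1
A_I = A_st - A_st.mean(axis=1, keepdims=True)           # subtract the row mean

# Eigenvectors of B = A^* A via the Algorithm 1 sketch; take the first p columns.
U_st, U_I, _, _ = unitary_decomposition(A_st, A_I)
p = 2
scores_st = A_st @ U_st[:, :p]                          # principal component scores,
scores_I = A_st @ U_I[:, :p] + A_I @ U_st[:, :p]        # expanded to first order in eps
```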

Example 8.4.

The truncated SVD is used to approximate the sample images: “Peppers” and “Lena”. Both are gray-scale images of size 512×512512\times 512. For these images, we first use the 2-dimensional discrete Fourier transform to generate the complex matrices AstA_{st} and AA_{\mathcal{I}} which make up the dual complex matrix A=Ast+Aϵ𝔻512×512A=A_{st}+A_{\mathcal{I}}\epsilon\in{\mathbb{DC}^{512\times 512}}. The low-rank approximation Ak𝔻512×512A_{k}\in{\mathbb{DC}^{512\times 512}} can be obtained by the truncated SVD retaining the kk largest singular values. The original images are approximated by applying the 2-dimensional inverse discrete Fourier transform to the standard part and the infinitesimal part of AkA_{k}, respectively. Given the value of kk, we have AkAF=i=k+1512|λi|2\|A_{k}-A\|_{F}=\sqrt{\sum_{i=k+1}^{512}|\lambda_{i}|^{2}} and AF=i=1512|λi|2\|A\|_{F}=\sqrt{\sum_{i=1}^{512}|\lambda_{i}|^{2}} where λ1λ2λ512\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{512} are all the singular values of AA. Note that AkAF\|A_{k}-A\|_{F}, AF\|A\|_{F} and AkAF/AF\|A_{k}-A\|_{F}/\|A\|_{F} are dual numbers since the λi\lambda_{i}’s are dual numbers. The relative errors of approximation are reported in Table 1. The approximated images with kk-truncated SVD are also presented in Figure 1, and a sketch of this pipeline is given after Figure 1.

Table 1: Relative error of approximation with kk-truncated SVD
kk 5 15 25 35 45
AkAFAF\frac{\|A_{k}-A\|_{F}}{\|A\|_{F}} 0.2546-0.2304ϵ\epsilon 0.1446-0.1345ϵ\epsilon 0.1052-0.0932ϵ\epsilon 0.0838-0.0752ϵ\epsilon 0.0696-0.0619ϵ\epsilon
Figure 1: Image approximation with kk-truncated SVD of dual complex matrices
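A sketch of the pipeline of Example 8.4, under the assumption that the two images supply the standard and the infinitesimal part of A, respectively, and using the dual_svd_simple and truncated_svd_approx sketches (with their simplifying assumptions) from Section 8.1:

```python
import numpy as np

# peppers, lena: 512 x 512 real arrays holding the two grayscale images.
A_st = np.fft.fft2(peppers)            # standard part: 2-D DFT of one image
A_I = np.fft.fft2(lena)                # infinitesimal part: 2-D DFT of the other

V_st, V_I, S_st, S_I, U_st, U_I = dual_svd_simple(A_st, A_I)
Ak_st, Ak_I = truncated_svd_approx(V_st, V_I, S_st, S_I, U_st, U_I, k=25)

# Approximated images: inverse 2-D DFT of the two parts of A_k.
peppers_k = np.real(np.fft.ifft2(Ak_st))
lena_k = np.real(np.fft.ifft2(Ak_I))
```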

References

  • [1] D.M. Alexander, C. Trengove, J.J. Wright, P.R. Boord and E. Gordon, “Measurement of phase gradients in the EEG”, Journal of Neuroscience Methods 156 (2006) 111-128.
  • [2] D.M. Alexander, A.R. Nikolaev, P. Jurica, M. Zvyagintsev, K. Mathiak and C. van Leeuwen, “Global neuromagnetic cortical fields have non-zero velocity”, PloS One 11 (2016) e0148413.
  • [3] D.M. Alexander, T. Ball, A. Schulze-Bonhage and C. van Leeuwen, “Large-scale cortical travelling waves predict localized future cortical signals”, PloS Computational Biology 15 (2019) e1007316.
  • [4] D. Brezov, “Factorization and generalized roots of dual complex matrices with Rodrigues’ formula”, Advances in Applied Clifford Algebras 30 (2020) 29.
  • [5] E.J. Candès, X. Li, Y. Ma and J. Wright, “Robust principal component analysis?”, J. ACM 58 (2011) Article 11.
  • [6] W.K. Clifford, “Preliminary sketch of bi-quaternions”, Proceedings of the London Mathematical Society 4 (1873) 381-395.
  • [7] I.S. Fischer, Dual-Number Methods in Kinematics, Statics and Dynamics, Boca Raton, Florida, CRC Press, 1998.
  • [8] M.A. Güngör and Ö. Tetik, “De-Moivre and Euler formulae for dual complex numbers”, Universal Journal of Mathematics and Applications 2 (2019) 126-129.
  • [9] F. Messelmi, “Dual-complex numbers and their holomorphic functions”,
    https://hal.archieves.ouvertes.fr/hal-01114178, (2015).
  • [10] G. Matsuda, S. Kaji and H. Ochiai, “Anti-commutative dual complex numbers and 2D rigid transformation”, in: K. Anjyo, ed., Mathematical Progress in Expressive Image Synthesis I: Extended and Selected Results from the Symposium MEIS2013, Mathematics for Industry, Springer, Japan (2014) pp. 131-138.
  • [11] L. Qi, C. Ling and H. Yan, “Dual quaternions and dual quaternion vectors”, November 2021, arXiv:2111.04491.