Matrix-valued Bratu equation associated with the symmetric domains of type BDI and CI
Hiroto Inoue
Institute of Mathematics for Industry, Kyushu University
Abstract
A formulation of the exponential matrix solution of the matrix-valued Bratu equation is given, based on the structure of the symmetric domain of type BDI. Moreover, an analog for the symmetric domain of type CI is given.
Contents
1 Introduction
2 Proof for exponential matrix solution
2.1 Block-Gauss decomposition
2.2 Lagrangian calculus
3 Expression in symmetric domains
3.1 Siegel domain of type BDI
3.2 Bounded domain of type BDI and power series solution
4 Analog to symmetric domain of type CI
4.1 Siegel domain $D(J;K)$ of type CI
5 Remarks
Notation
Throughout this article,
$\operatorname{Sym}^{+}_{n}(\mathbb{R})$ denotes the set of $n\times n$ positive-definite symmetric real matrices, and $\mathrm{M}_{n,r}(\mathbb{R})$ denotes the set of $n\times r$ real matrices.
1 Introduction
We consider the matrix-valued Bratu equation
\[(h^{\prime}(s)h(s)^{-1})^{\prime}=(a\,{}^{t}\!a)\,h(s)^{-1},\quad a\in\mathrm{M}_{n,r}(\mathbb{R})\tag{1.1}\]
for a $\operatorname{Sym}^{+}_{n}(\mathbb{R})$-valued function $h(s)$ of $s\in\mathbb{R}$, introduced in [1].
The solution of the initial value problem for the Bratu equation (1.1) was given in [1] by means of a specific exponential matrix.
In this article, we give a geometric formulation of this solution based on the structure of the symmetric domains $G/K$. More precisely, we express the exponential matrix solution of equation (1.1) through the defining function of the symmetric domain of type BDI in Cartan's classification [2, 3].
Moreover, we give an analog associated with the symmetric domain of type CI; see Table 1.
Table 1: Matrix-valued differential equations associated to symmetric domains
\[
\begin{array}{c||c|c}
\hline\hline
\text{domain} & \text{condition on }\Delta(w,e^{sB}w) & \text{equation for }\Delta(e^{sB}u_{0},e^{sB}u_{0})^{-1}\\
\hline
\text{BDI} & (a\,{}^{t}\!a)\dfrac{s^{2}}{2}+O(s^{3}) & (h^{\prime}h^{-1})^{\prime}=(a\,{}^{t}\!a)h^{-1},\quad a\in\mathrm{M}_{n,r}(\mathbb{R})\\
\hline
\text{CI} & \text{none} & (h^{\prime}h^{-1})^{\prime}=(ch^{-1})^{2},\quad c\in\operatorname{Sym}_{n}(\mathbb{R})\\
\hline\hline
\end{array}
\]
2 Proof for exponential matrix solution
2.1 Block-Gauss decomposition
Manifold $\Omega(J)$
We define a submanifold $\Omega(J)$ of the cone $\operatorname{Sym}^{+}_{2n+r}(\mathbb{R})$ by
\[\Omega(J)=\left\{G\in\operatorname{Sym}^{+}_{2n+r}(\mathbb{R});\;JGJ=G^{-1}\right\},\qquad
J=\begin{pmatrix}0&0&I_{n}\\ 0&I_{r}&0\\ I_{n}&0&0\end{pmatrix}.\]
Defining the set $V(J)$ by
\[V(J)=\left\{B\in\operatorname{Sym}_{2n+r}(\mathbb{R});\;JBJ=-B\right\},\]
we can consider the map
\[V(J)\ni B\mapsto\exp B\in\Omega(J).\]
Any element $B\in V(J)$ has the form
\[B=B(b,a,c):=\begin{pmatrix}b&a&c\\ {}^{t}\!a&0&-{}^{t}\!a\\ {}^{t}\!c&-a&-b\end{pmatrix},\qquad
b\in\operatorname{Sym}_{n}(\mathbb{R}),\;a\in\mathrm{M}_{n,r}(\mathbb{R}),\;c\in\mathrm{Alt}_{n}(\mathbb{R}).\]
For an element $B\in V(J)$, we define an $\Omega(J)$-valued function $G(s;B)$ by
\[G(s;B):=\exp(sB(b,a,c))\qquad(s\in\mathbb{R}).\]
We define the following sets of matrices:
\begin{align*}
\mathcal{G}(J)&=\left\{G\in\operatorname{GL}_{2n+r}(\mathbb{R});\;JGJ={}^{t}\!G^{-1}\right\}\cong\mathrm{O}(n+r,n),\\
\mathcal{N}(J)&=\left\{N\in\mathcal{G}(J);\;N=\begin{pmatrix}I_{n}&0&0\\ *&I_{r}&0\\ *&*&I_{n}\end{pmatrix}\right\}
=\left\{N=\begin{pmatrix}I_{n}&0&0\\ {}^{t}\!n_{1}&I_{r}&0\\ {}^{t}\!n_{2}&-n_{1}&I_{n}\end{pmatrix};\;n_{2}+{}^{t}\!n_{2}=-n_{1}\,{}^{t}\!n_{1}\right\},\\
\Omega_{0}(J)&=\left\{A\in\Omega(J);\;A=\begin{pmatrix}*&0&0\\ 0&*&0\\ 0&0&*\end{pmatrix}\right\}
=\left\{A=\begin{pmatrix}h&0&0\\ 0&I_{r}&0\\ 0&0&h^{-1}\end{pmatrix};\;{}^{t}\!h=h\right\}.
\end{align*}
Proposition 2.1 (Block-Gauss decomposition for $\Omega(J)$).
The following map is bijective:
\[\mathcal{N}(J)\times\Omega_{0}(J)\rightarrow\Omega(J),\qquad(N,A)\mapsto NA\,{}^{t}\!N.\]
We denote the variable change defined through this decomposition by
\[G\rightarrow N,A\rightarrow h,n_{1},n_{2},\]
or simply by $G\rightarrow h$.
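Remark (numerical illustration). The variable change can be computed explicitly: multiplying out $G=NA\,{}^{t}\!N$ shows that the top block row of $G$ equals $(h,\;hn_{1},\;hn_{2})$, so $h$, $n_{1}$, $n_{2}$ can be read off from the first $n$ rows of $G$. The following Python sketch is not part of the original argument; it assumes NumPy/SciPy, and the names B_matrix and block_gauss are introduced here only for illustration.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, block_diag

def B_matrix(b, a, c):
    """B(b,a,c) in V(J): b symmetric n x n, a is n x r, c antisymmetric n x n."""
    n, r = a.shape
    return np.block([[b, a, c],
                     [a.T, np.zeros((r, r)), -a.T],
                     [c.T, -a, -b]])

def block_gauss(G, n, r):
    """Read (h, n1, n2) off the top block row of G = N A tN:
    G[:n] = (h, h n1, h n2), so h = G11, n1 = h^{-1} G12, n2 = h^{-1} G13."""
    h = G[:n, :n]
    n1 = np.linalg.solve(h, G[:n, n:n + r])
    n2 = np.linalg.solve(h, G[:n, n + r:])
    return h, n1, n2

# consistency check: (h, n1, n2) reconstructs G = N A tN
rng = np.random.default_rng(0)
n, r = 3, 2
b = rng.standard_normal((n, n)); b = (b + b.T) / 2
a = rng.standard_normal((n, r))
c = rng.standard_normal((n, n)); c = (c - c.T) / 2
G = expm(0.7 * B_matrix(b, a, c))          # a point of Omega(J)
h, n1, n2 = block_gauss(G, n, r)
N = np.block([[np.eye(n), np.zeros((n, r)), np.zeros((n, n))],
              [n1.T, np.eye(r), np.zeros((r, n))],
              [n2.T, -n1, np.eye(n)]])
A = block_diag(h, np.eye(r), np.linalg.inv(h))
print(np.allclose(N @ A @ N.T, G))         # True
\end{verbatim}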
Now we state the exponential matrix solution given in [1] as follows.
Proposition 2.2.
For $B=B(b,a,0)\in V(J)$, define a $\operatorname{Sym}^{+}_{n}(\mathbb{R})$-valued function $\tilde{h}(s)$ by the variable change
\[G(s;B)\rightarrow\tilde{h}(s).\]
Then $\tilde{h}(s)$ satisfies the matrix-valued Bratu equation
\[(\tilde{h}^{\prime}(s)\tilde{h}(s)^{-1})^{\prime}=(a\,{}^{t}\!a)\tilde{h}(s)^{-1}.\]
In the rest of this section, we give an alternative proof of this proposition based on Lagrangian calculus.
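Before doing so, we note that the statement can be checked numerically. The following sketch is an illustration only (assuming NumPy/SciPy; the helper h_tilde is a name introduced here): it takes $\tilde h(s)$ to be the upper-left $n\times n$ block of $\exp(sB(b,a,0))$, which is the value of the variable change $G(s;B)\rightarrow\tilde h(s)$, and compares $(\tilde h'\tilde h^{-1})'$, computed by central differences, with $(a\,{}^{t}\!a)\tilde h^{-1}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def h_tilde(s, b, a):
    """Upper-left n x n block of exp(s B(b,a,0)), i.e. h under G -> h."""
    n, r = a.shape
    B = np.block([[b, a, np.zeros((n, n))],
                  [a.T, np.zeros((r, r)), -a.T],
                  [np.zeros((n, n)), -a, -b]])
    return expm(s * B)[:n, :n]

def bratu_residual(s, b, a, eps=1e-4):
    """Norm of (h'h^{-1})'(s) - (a a^T) h(s)^{-1}, derivatives by central differences."""
    def F(t):                              # F(t) = h'(t) h(t)^{-1}
        hp = (h_tilde(t + eps, b, a) - h_tilde(t - eps, b, a)) / (2 * eps)
        return hp @ np.linalg.inv(h_tilde(t, b, a))
    lhs = (F(s + eps) - F(s - eps)) / (2 * eps)
    rhs = a @ a.T @ np.linalg.inv(h_tilde(s, b, a))
    return np.linalg.norm(lhs - rhs)

rng = np.random.default_rng(1)
n, r = 3, 2
b = rng.standard_normal((n, n)); b = (b + b.T) / 2
a = rng.standard_normal((n, r))
print(bratu_residual(0.4, b, a))           # small (finite-difference accuracy)
\end{verbatim}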
2.2 Lagrangian calculus
Lemma 2.1.
Let $G=G(s)$ be an $\Omega(J)$-valued function. Under the variable change
\[G\rightarrow N,A\rightarrow h,n_{1},n_{2},\]
the following equation holds:
\begin{align*}
\frac{1}{2}\operatorname{tr}((G^{\prime}G^{-1})^{2})
&=\frac{1}{2}\operatorname{tr}((A^{\prime}A^{-1})^{2})+\operatorname{tr}(A\,{}^{t}\!N^{\prime}\,{}^{t}\!N^{-1}A^{-1}N^{-1}N^{\prime})\\
&=\operatorname{tr}((h^{\prime}h^{-1})^{2})+2\operatorname{tr}(hn_{1}^{\prime}\,{}^{t}\!n_{1}^{\prime})+\operatorname{tr}(h\,{}^{t}\!mhm),
\qquad m=n_{1}\,{}^{t}\!n^{\prime}_{1}+n^{\prime}_{2}=-{}^{t}\!m.
\end{align*}
Proof.
To prove this lemma, we use the following orthogonality relation.
Lemma 2.2.
Define subsets of $M=\mathrm{M}_{2n+r}(\mathbb{R})$ by
\[M_{\leq 0}=\left\{X\in M;\;X=\begin{pmatrix}*&0&0\\ *&*&0\\ *&*&*\end{pmatrix}\right\},\qquad
M_{<0}=\left\{X\in M;\;X=\begin{pmatrix}0&0&0\\ *&0&0\\ *&*&0\end{pmatrix}\right\}.\]
Then it holds that
\[\operatorname{tr}(X_{1}X_{2})=0\]
for any $X_{1}\in M_{\leq 0}$, $X_{2}\in M_{<0}$.
Inserting $G=NA\,{}^{t}\!N$ into $G^{\prime}G^{-1}$, we get
\begin{align*}
G^{\prime}G^{-1}
&=NA\,{}^{t}\!N^{\prime}\,{}^{t}\!N^{-1}A^{-1}N^{-1}+NA^{\prime}A^{-1}N^{-1}+N^{\prime}N^{-1}\\
&=:X_{1}+X_{2}+X_{3}.
\end{align*}
Here we see that
\[X_{1}\in\operatorname{Ad}(N)\,{}^{t}\!M_{<0},\quad X_{2}\in\operatorname{Ad}(N)\,{}^{t}\!M_{\leq 0}\cap M_{\leq 0},\quad X_{3}\in M_{<0}.\]
Then, by Lemma 2.2, we get
\[\frac{1}{2}\operatorname{tr}((G^{\prime}G^{-1})^{2})
=\operatorname{tr}(X_{1}X_{3})+\frac{1}{2}\operatorname{tr}(X_{2}X_{2}).\]
Inserting
\begin{align*}
A^{\prime}A^{-1}&=\begin{pmatrix}h^{\prime}h^{-1}&0&0\\ 0&0&0\\ 0&0&-h^{-1}h^{\prime}\end{pmatrix},\\
N^{-1}N^{\prime}&=\begin{pmatrix}0&0&0\\ {}^{t}\!n^{\prime}_{1}&0&0\\ m&-n^{\prime}_{1}&0\end{pmatrix},\qquad m=n_{1}\,{}^{t}\!n^{\prime}_{1}+n^{\prime}_{2},
\end{align*}
we calculate
\begin{align*}
\operatorname{tr}(X_{1}X_{3})
&=\operatorname{tr}(A\,{}^{t}\!N^{\prime}\,{}^{t}\!N^{-1}A^{-1}N^{-1}N^{\prime})
=2\operatorname{tr}(hn_{1}^{\prime}\,{}^{t}\!n_{1}^{\prime})+\operatorname{tr}(h\,{}^{t}\!mhm),\\
\frac{1}{2}\operatorname{tr}(X_{2}X_{2})
&=\frac{1}{2}\operatorname{tr}((A^{\prime}A^{-1})^{2})=\operatorname{tr}((h^{\prime}h^{-1})^{2}).
\end{align*}
Therefore, we get the Lagrangian as
\[\frac{1}{2}\operatorname{tr}((G^{\prime}G^{-1})^{2})
=\operatorname{tr}((h^{\prime}h^{-1})^{2})+2\operatorname{tr}(hn_{1}^{\prime}\,{}^{t}\!n_{1}^{\prime})+\operatorname{tr}(h\,{}^{t}\!mhm).\]
∎
Next we proceed to derive the Euler-Lagrange equation. As a preparation, we deal with the constraint condition for $G(s)$ as follows.
Lemma 2.3.
For the Lagrangian
\[\mathcal{L}=\frac{1}{2}\operatorname{tr}((G^{\prime}G^{-1})^{2})+\operatorname{tr}((JGJ\,{}^{t}\!G-I)\lambda)+\operatorname{tr}((G-{}^{t}\!G)\mu)\]
for $\operatorname{GL}(n,\mathbb{R})$-valued functions $G,\lambda,\mu$,
the Euler-Lagrange equation reads
\[\begin{cases}(G^{\prime}G^{-1})^{\prime}=0,\\ G(\mu-{}^{t}\!\mu)G+J(\mu-{}^{t}\!\mu)J=0.\end{cases}\]
Proof.
By the cyclic property of the matrix trace, we have
\begin{align*}
\operatorname{tr}(JG_{1}J\,{}^{t}\!G_{2}\lambda)
&=\operatorname{tr}(G_{1}\cdot J\,{}^{t}\!G_{2}\lambda J)\\
&=\operatorname{tr}(G_{2}\cdot J\,{}^{t}\!G_{1}J\,{}^{t}\!\lambda).
\end{align*}
Then we can derive the Euler-Lagrange equations as
\begin{align*}
G:&\quad G^{-1}(G^{\prime\prime}-G^{\prime}G^{-1}G^{\prime})G^{-1}=J\,{}^{t}\!G\lambda J+J\,{}^{t}\!GJ\,{}^{t}\!\lambda\\
&\quad\Leftrightarrow(G^{\prime}G^{-1})^{\prime}=J\lambda J+{}^{t}\!\lambda+G(\mu-{}^{t}\!\mu),\\
\lambda:&\quad GJ\,{}^{t}\!G=J,\\
\mu:&\quad G-{}^{t}\!G=0.
\end{align*}
Applying the map $X\mapsto X-J\,{}^{t}\!XJ$ to both sides of the first equation, we get
\begin{align*}
&2(G^{\prime}G^{-1})^{\prime}=G(\mu-{}^{t}\!\mu)+J(\mu-{}^{t}\!\mu)JG^{-1}\\
\Leftrightarrow\;&2(G^{\prime}G^{-1})^{\prime}G=G(\mu-{}^{t}\!\mu)G+J(\mu-{}^{t}\!\mu)J.
\end{align*}
The left-hand side is a symmetric matrix, while the right-hand side is skew-symmetric. Therefore both sides vanish.
∎
Derivation of Euler-Lagrange equation
Now we derive the Euler-Lagrange equation for our Lagrangian
\begin{align*}
\mathcal{L}&=\mathcal{L}(h,n_{1},n_{2},h^{\prime},n_{1}^{\prime},n_{2}^{\prime})\\
&=\operatorname{tr}((h^{\prime}h^{-1})^{2})+2\operatorname{tr}(hn_{1}^{\prime}\,{}^{t}\!n_{1}^{\prime})+\operatorname{tr}(h\,{}^{t}\!mhm),
\qquad m=n_{1}\,{}^{t}\!n^{\prime}_{1}+n^{\prime}_{2}.
\end{align*}
The result, namely the Euler-Lagrange equation $\dfrac{d}{ds}\dfrac{\partial\mathcal{L}}{\partial x^{\prime}}=\dfrac{\partial\mathcal{L}}{\partial x}$ for each variable $x$, is written as follows:
\begin{align*}
h:&\quad(h^{\prime}h^{-1})^{\prime}=hn_{1}^{\prime}\,{}^{t}\!n_{1}^{\prime}+h\,{}^{t}\!mhm,\\
n_{1}:&\quad\left\{2hn_{1}^{\prime}+h\,{}^{t}\!mhn_{1}\right\}^{\prime}=hmhn_{1}^{\prime},\\
n_{2}:&\quad(h\,{}^{t}\!mh)^{\prime}=0.
\end{align*}
From the third equation, we get
\[h\,{}^{t}\!mh=\tilde{c}:\text{ a constant matrix},\quad\tilde{c}\in\mathrm{Alt}_{n}(\mathbb{R}).\]
Thus the above equations become
\begin{align*}
h:&\quad(h^{\prime}h^{-1})^{\prime}=hn_{1}^{\prime}\,{}^{t}\!n_{1}^{\prime}-\tilde{c}h^{-1}\tilde{c}h^{-1},\\
n_{1}:&\quad 2(hn_{1}^{\prime})^{\prime}+2(\tilde{c}n_{1})^{\prime}=0.
\end{align*}
From the second equation, we get
\[hn_{1}^{\prime}+\tilde{c}n_{1}=\tilde{a}:\text{ a constant matrix}.\]
In particular, in the case $\tilde{c}=0$, we get $hn_{1}^{\prime}=\tilde{a}$.
Inserting this into the first equation, we obtain the matrix-valued Bratu equation
\[h:\quad(h^{\prime}h^{-1})^{\prime}=(\tilde{a}\,{}^{t}\!\tilde{a})h^{-1}.\]
Next we identify the constant matrices $\tilde{c},\tilde{a}$.
Defining the function $G(s)$ by the relation $G\rightarrow h,n_{1},n_{2}$, we have
\[G^{\prime}G^{-1}=\begin{pmatrix}*&hn_{1}^{\prime}-hmhn_{1}&h\,{}^{t}\!mh\\ *&*&*\\ *&*&*\end{pmatrix}
=\begin{pmatrix}*&\tilde{a}&\tilde{c}\\ *&*&*\\ *&*&*\end{pmatrix}.\]
On the other hand, $G(s)$ satisfies the original Euler-Lagrange equation
\[G^{\prime}G^{-1}=\text{(constant matrix)}.\]
This shows that $G(s)$ is given as
\[G=G(s;B(\tilde{b},\tilde{a},\tilde{c})),\quad\tilde{b}\in\operatorname{Sym}_{n}(\mathbb{R}).\]
We summarize the above calculation in the following table:
\[
\begin{array}{ccl}
\hline
G(s;B(b,a,c)) & \rightarrow & h(s),\,n_{1}(s),\,n_{2}(s)\\[4pt]
(G^{\prime}G^{-1})^{\prime}=0 & &
\begin{cases}(h^{\prime}h^{-1})^{\prime}=hn_{1}^{\prime}\,{}^{t}\!n_{1}^{\prime}-ch^{-1}ch^{-1}\\
hn_{1}^{\prime}+cn_{1}=a\\
h\,{}^{t}\!(n_{1}\,{}^{t}\!n^{\prime}_{1}+n^{\prime}_{2})h=c\end{cases}\\[18pt]
\dfrac{1}{2}\operatorname{tr}((G^{\prime}G^{-1})^{2}) & &
\operatorname{tr}((h^{\prime}h^{-1})^{2})+2\operatorname{tr}(hn_{1}^{\prime}\,{}^{t}\!n_{1}^{\prime})+\operatorname{tr}(h\,{}^{t}\!mhm)\\
\hline
\end{array}
\]
Theorem 2.1.
The equation
\[G^{\prime}(s)G(s)^{-1}=B(b,a,c)\]
for an $\Omega(J)$-valued function $G(s)$ is expressed, via the variable change $G(s)\rightarrow h(s),n_{1}(s),n_{2}(s)$, as
\[\begin{cases}(h^{\prime}h^{-1})^{\prime}=hn_{1}^{\prime}\,{}^{t}\!n_{1}^{\prime}-ch^{-1}ch^{-1},\\
hn_{1}^{\prime}+cn_{1}=a,\\
h\,{}^{t}\!(n_{1}\,{}^{t}\!n^{\prime}_{1}+n^{\prime}_{2})h=c.\end{cases}\tag{2.1}\]
In particular, in the case $c=0$, it is equivalent to the matrix-valued Bratu equation
\[(h^{\prime}h^{-1})^{\prime}=(a\,{}^{t}\!a)h^{-1}.\]
The exponential matrix solution (Proposition 2.2) follows from this theorem.
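Remark (numerical illustration). The full system (2.1) with $c\neq 0$ can also be tested numerically. The following sketch is not part of the proof; it assumes NumPy/SciPy and uses illustrative names. It extracts $h,n_{1},n_{2}$ from $G(s)=\exp(sB(b,a,c))$ and evaluates the residuals of the three equations, with derivatives replaced by central differences.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def pieces(s, b, a, c):
    """(h, n1, n2) from the block-Gauss decomposition of exp(s B(b,a,c))."""
    n, r = a.shape
    B = np.block([[b, a, c], [a.T, np.zeros((r, r)), -a.T], [c.T, -a, -b]])
    G = expm(s * B)
    h = G[:n, :n]
    return h, np.linalg.solve(h, G[:n, n:n + r]), np.linalg.solve(h, G[:n, n + r:])

def residuals(s, b, a, c, eps=1e-4):
    d = lambda f, t: (f(t + eps) - f(t - eps)) / (2 * eps)   # central difference
    h  = lambda t: pieces(t, b, a, c)[0]
    n1 = lambda t: pieces(t, b, a, c)[1]
    n2 = lambda t: pieces(t, b, a, c)[2]
    F  = lambda t: d(h, t) @ np.linalg.inv(h(t))             # h' h^{-1}
    hs, n1s, hinv = h(s), n1(s), np.linalg.inv(h(s))
    m = n1s @ d(n1, s).T + d(n2, s)                          # m = n1 tn1' + n2'
    r1 = d(F, s) - (hs @ d(n1, s) @ d(n1, s).T - c @ hinv @ c @ hinv)
    r2 = hs @ d(n1, s) + c @ n1s - a
    r3 = hs @ m.T @ hs - c
    return [np.linalg.norm(x) for x in (r1, r2, r3)]

rng = np.random.default_rng(2)
n, r = 3, 2
b = rng.standard_normal((n, n)); b = (b + b.T) / 2
a = rng.standard_normal((n, r))
c = rng.standard_normal((n, n)); c = (c - c.T) / 2
print(residuals(0.3, b, a, c))             # all small (finite-difference accuracy)
\end{verbatim}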
3 Expression in symmetric domains
Considering a realization of $\Omega(J)$ as a symmetric domain, we express
•
the condition $c=0$ for $B=B(b,a,c)$,
•
the variable change $G\rightarrow h$ given by the block-Gauss decomposition,
appearing in Proposition 2.2, in terms of the structure of symmetric domains.
3.1 Siegel domain of type BDI
Siegel domain $D(J)$ of type BDI
For the set
\[M=\left\{U=\begin{pmatrix}U_{1}\\ U_{2}\\ U_{3}\end{pmatrix}\in\mathrm{M}_{2n+r,n}(\mathbb{R});\;\operatorname{rank}U=n\right\},\]
we consider the group actions
\[\operatorname{GL}(2n+r,\mathbb{R})\curvearrowright M\curvearrowleft\operatorname{GL}(n,\mathbb{R})\]
given by matrix multiplication. We equip the quotient
\[D:=M/\operatorname{GL}(n,\mathbb{R})=\left\{u=[U]=\begin{bmatrix}U_{1}\\ U_{2}\\ U_{3}\end{bmatrix};\;U\in M\right\}\]
with the induced topology and regard $D$ as a domain.
We let
\[J=-\begin{pmatrix}0&0&I_{n}\\ 0&I_{r}&0\\ I_{n}&0&0\end{pmatrix}\]
and define a subdomain $D(J)\subset D$ by
\[M(J)=\left\{U\in M;\;{}^{t}\!UJU>0\right\},\qquad
D(J):=\left\{u=[U]\in D;\;U\in M(J)\right\}.\]
Lemma 3.1 (Realization as a Siegel domain).
The domain $D(J)$ is the following set:
\[D(J)=\left\{u=\begin{bmatrix}U_{1}\\ U_{2}\\ I_{n}\end{bmatrix}\in D;\;-{}^{t}\!U_{1}-U_{1}-{}^{t}\!U_{2}U_{2}>0\right\}.\]
We define a subdomain $\Sigma\subset D$ (the Shilov boundary of $D(J)$) by
\[\Sigma=\left\{u=\begin{bmatrix}U_{1}\\ U_{2}\\ I_{n}\end{bmatrix}\in D;\;-{}^{t}\!U_{1}-U_{1}-{}^{t}\!U_{2}U_{2}=0\right\}.\]
We take two points $u_{0},w\in D$ as
\[u_{0}=[U_{0}]\in D(J),\quad w=[W]\in\Sigma,\qquad
U_{0}=\begin{pmatrix}-I_{n}\\ 0\\ I_{n}\end{pmatrix},\quad W=\begin{pmatrix}0\\ 0\\ I_{n}\end{pmatrix}.\]
For two points $u_{1},u_{2}\in D(J)$ expressed as
\[u_{j}=[U_{j}],\quad U_{j}=\begin{pmatrix}U_{j,1}\\ U_{j,2}\\ U_{j,3}\end{pmatrix}\quad(j=1,2),\]
we define $\Delta(u_{1},u_{2})\in\mathrm{M}_{n}(\mathbb{R})$ by
\[\Delta(u_{1},u_{2}):={}^{t}\!(U_{1}\Gamma_{1}^{-1})J(U_{2}\Gamma_{2}^{-1}),\qquad
\Gamma_{j}={}^{t}\!WU_{j}=U_{j,3}\quad(j=1,2).\]
Expression of $D(J)$ as a symmetric space $G/K$
Consider the action $\mathcal{G}(J)\curvearrowright D(J)$ defined by
\[g\cdot u=[gU]\quad(g\in\mathcal{G}(J),\;u=[U]\in D(J)).\]
We also define the following subgroups:
\begin{align*}
\mathcal{P}(J)&=\left\{g\in\mathcal{G}(J);\;g=\begin{pmatrix}*&*&*\\ 0&*&*\\ 0&0&*\end{pmatrix}\right\},\\
\mathcal{K}(J)&=\left\{g\in\mathcal{G}(J);\;{}^{t}\!g=g^{-1}\right\}\cong\mathrm{O}(n+r)\times\mathrm{O}(n),\\
\mathcal{G}(J)_{u_{0}}&=\left\{g\in\mathcal{G}(J);\;gu_{0}=u_{0}\right\}.
\end{align*}
Lemma 3.2.
The following assertions hold:
1. $\mathcal{G}(J)=\mathcal{P}(J)\mathcal{K}(J)$.
2. $\mathcal{G}(J)_{u_{0}}=\mathcal{K}(J)$.
3. The action $\mathcal{G}(J)\curvearrowright D(J)$ is transitive, and it holds that
\[D(J)\cong\operatorname{SO}_{0}(n+r,n)/\operatorname{SO}(n+r)\times\operatorname{SO}(n).\]
Proof.
We prove assertion 2. Let
\[A=\frac{1}{\sqrt{2}}\begin{pmatrix}I_{n}&0&-I_{n}\\ 0&\sqrt{2}I_{r}&0\\ I_{n}&0&I_{n}\end{pmatrix},\]
and put
\[v_{0}:=Au_{0}=\begin{bmatrix}I_{n}\\ 0\\ 0\end{bmatrix},\quad
K:=AJ\,{}^{t}\!A=\begin{pmatrix}I_{n}&0\\ 0&-I_{n+r}\end{pmatrix},\quad
\mathcal{G}(K)=A\mathcal{G}(J)\,{}^{t}\!A.\]
For $g\in\operatorname{GL}(2n+r,\mathbb{R})$, we see the equivalence
\[gv_{0}=v_{0}\Leftrightarrow g=\begin{pmatrix}*&*\\ 0&*\end{pmatrix}.\]
Thus, for $g\in\mathcal{G}(K)$, the equivalence
\[gv_{0}=v_{0}\Leftrightarrow g={}^{t}\!g^{-1}\]
holds. Therefore, for $g\in\mathcal{G}(J)$,
\[gu_{0}=u_{0}\Leftrightarrow g={}^{t}\!g^{-1}.\]
Isomorphism $\mathcal{F}:\Omega(J)\xrightarrow{\sim}D(J)$
We define a map $\mathcal{F}:\Omega(J)\rightarrow D$ by
\[\mathcal{F}(G)=\begin{bmatrix}G_{3}-I_{n}\\ G_{2}\\ h\end{bmatrix},\quad
G=\begin{pmatrix}h&*&*\\ G_{2}&*&*\\ G_{3}&*&*\end{pmatrix}\in\Omega(J),\]
and a map $\pi:\Omega(J)\rightarrow\operatorname{Sym}^{+}_{n}(\mathbb{R})$ by
\[\pi(G)=h,\quad G=\begin{pmatrix}h&*&*\\ G_{2}&*&*\\ G_{3}&*&*\end{pmatrix}\in\Omega(J).\]
We define an action $\mathcal{G}(J)\curvearrowright\Omega(J)$ by
\[g\cdot G={}^{t}\!g^{-1}Gg^{-1}\quad(g\in\mathcal{G}(J),\;G\in\Omega(J)).\]
Lemma 3.3.
1. For any $g\in\mathcal{G}(J)$, $G\in\Omega(J)$, it holds that
\[\mathcal{F}(g\cdot G)=g\mathcal{F}(G).\]
2. It holds that $\mathcal{F}(\Omega(J))\subset D(J)$, and $\mathcal{F}:\Omega(J)\rightarrow D(J)$ is bijective.
3. For any $G\in\Omega(J)$, it holds that
\[\frac{1}{2}\pi(G)=\Delta(\mathcal{F}(G),\mathcal{F}(G))^{-1}.\]
By this lemma, we obtain the following commutative diagram:
\[\begin{array}{ccc}
\Omega(J)&\xrightarrow{\ \mathcal{F}\ }&D(J)\\[2pt]
&{\scriptstyle\frac{1}{2}\pi}\searrow&\downarrow{\scriptstyle\Delta(\cdot,\cdot)^{-1}}\\[2pt]
&&\operatorname{Sym}^{+}_{n}(\mathbb{R})
\end{array}\]
Proof.
1. First, we prove the assertion for the case $G=I_{2n+r}$.
Take an arbitrary $g\in\mathcal{G}(J)$ and express it as
\[g=pk,\quad p\in\mathcal{P}(J),\;k\in\mathcal{K}(J),\qquad
p=\begin{pmatrix}p_{1}&-p_{2}&{}^{t}\!p_{3}\\ 0&I_{r}&{}^{t}\!p_{2}\\ 0&0&{}^{t}\!p_{1}^{-1}\end{pmatrix}.\]
Then, from the relation ${}^{t}\!p^{-1}=JpJ$ we get
\begin{align*}
p\cdot I_{2n+r}
&={}^{t}\!p^{-1}p^{-1}\\
&=\begin{pmatrix}{}^{t}\!p_{1}^{-1}&0&0\\ {}^{t}\!p_{2}&I_{r}&0\\ {}^{t}\!p_{3}&-p_{2}&p_{1}\end{pmatrix}
\begin{pmatrix}p_{1}^{-1}&p_{2}&p_{3}\\ 0&I_{r}&-{}^{t}\!p_{2}\\ 0&0&{}^{t}\!p_{1}\end{pmatrix}\\
&=\begin{pmatrix}{}^{t}\!p_{1}^{-1}p_{1}^{-1}&*&*\\ {}^{t}\!p_{2}p_{1}^{-1}&*&*\\ {}^{t}\!p_{3}p_{1}^{-1}&*&*\end{pmatrix}.
\end{align*}
Therefore, its image under the map $\mathcal{F}$ is
\[\mathcal{F}(p\cdot I_{2n+r})
=\begin{bmatrix}{}^{t}\!p_{3}p_{1}^{-1}-I_{n}\\ {}^{t}\!p_{2}p_{1}^{-1}\\ {}^{t}\!p_{1}^{-1}p_{1}^{-1}\end{bmatrix}
=\begin{bmatrix}{}^{t}\!p_{3}-p_{1}\\ {}^{t}\!p_{2}\\ {}^{t}\!p_{1}^{-1}\end{bmatrix}.\]
On the other hand, we see that
\[p\mathcal{F}(I_{2n+r})=p\begin{bmatrix}-I_{n}\\ 0\\ I_{n}\end{bmatrix}
=\begin{bmatrix}-p_{1}+{}^{t}\!p_{3}\\ {}^{t}\!p_{2}\\ {}^{t}\!p_{1}^{-1}\end{bmatrix}.\]
This shows that $\mathcal{F}(p\cdot I_{2n+r})=p\mathcal{F}(I_{2n+r})$. Therefore,
\[\mathcal{F}(g\cdot I_{2n+r})=\mathcal{F}(p\cdot I_{2n+r})
=pu_{0}=pku_{0}=g\mathcal{F}(I_{2n+r}).\]
Next, we show the assertion for the general case $g\in\mathcal{G}(J)$, $G\in\Omega(J)$.
We express $G$ as
\[G=g_{0}\cdot I_{2n+r},\quad g_{0}\in\mathcal{G}(J).\]
Then we see that
\[\mathcal{F}(g\cdot G)=\mathcal{F}(gg_{0}\cdot I_{2n+r})=gg_{0}\mathcal{F}(I_{2n+r})
=g\mathcal{F}(g_{0}\cdot I_{2n+r})=g\mathcal{F}(G).\]
2. The equality $\mathcal{F}(\Omega(J))=D(J)$ follows from assertion 1 and the transitivity of the action of $\mathcal{G}(J)$.
The injectivity of $\mathcal{F}$ holds because the isotropy subgroups of both actions coincide with $\mathcal{K}(J)$.
3. Take an arbitrary $G\in\Omega(J)$ and express it as
\[G=\begin{pmatrix}h&*&*\\ G_{2}&*&*\\ G_{3}&*&*\end{pmatrix},\quad hG_{3}+{}^{t}\!G_{3}h+{}^{t}\!G_{2}G_{2}=0.\]
Thus its image is
\[\mathcal{F}(G)=[U],\quad U=\begin{pmatrix}U_{1}\\ U_{2}\\ U_{3}\end{pmatrix}=\begin{pmatrix}G_{3}-I_{n}\\ G_{2}\\ h\end{pmatrix}.\]
Setting $\Gamma:=h$, we get
\begin{align*}
\Delta(\mathcal{F}(G),\mathcal{F}(G))&={}^{t}\!(U\Gamma^{-1})J(U\Gamma^{-1})\\
&=-{}^{t}\!(G_{3}h^{-1}-h^{-1})-(G_{3}h^{-1}-h^{-1})-h^{-1}\,{}^{t}\!G_{2}G_{2}h^{-1}\\
&=2h^{-1}=2\pi(G)^{-1}.
\end{align*}
∎
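Remark (numerical illustration). The identity in part 3 of Lemma 3.3 is also easy to test numerically. The following sketch is an illustration only, assuming NumPy/SciPy: it builds a point $G\in\Omega(J)$, forms a representative of $\mathcal{F}(G)$ and the matrix $\Delta(\mathcal{F}(G),\mathcal{F}(G))$ with the matrix $J$ of this section, and compares it with $2\pi(G)^{-1}=2h^{-1}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n, r = 3, 2
b = rng.standard_normal((n, n)); b = (b + b.T) / 2
a = rng.standard_normal((n, r))
c = rng.standard_normal((n, n)); c = (c - c.T) / 2
B = np.block([[b, a, c], [a.T, np.zeros((r, r)), -a.T], [c.T, -a, -b]])
G = expm(0.6 * B)                                    # a point of Omega(J)

h, G2, G3 = G[:n, :n], G[n:n + r, :n], G[n + r:, :n] # first block column of G
U = np.vstack([G3 - np.eye(n), G2, h])               # representative of F(G)
J = -np.block([[np.zeros((n, n)), np.zeros((n, r)), np.eye(n)],
               [np.zeros((r, n)), np.eye(r), np.zeros((r, n))],
               [np.eye(n), np.zeros((n, r)), np.zeros((n, n))]])
V = U @ np.linalg.inv(h)                             # U Gamma^{-1}, Gamma = tW U = h
Delta = V.T @ J @ V
print(np.allclose(Delta, 2 * np.linalg.inv(h)))      # True: (1/2) pi(G) = Delta^{-1}
\end{verbatim}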
Characterization of $B=B(b,a,0)$
Put
\begin{align*}
\mathfrak{g}(J)&=\left\{X\in\mathfrak{gl}(2n+r,\mathbb{R});\;XJ+J\,{}^{t}\!X=0\right\}\cong\mathfrak{so}(n+r,n),\\
\mathfrak{p}(J)&=\left\{X\in\mathfrak{g}(J);\;X={}^{t}\!X\right\}.
\end{align*}
An arbitrary element of $\mathfrak{p}(J)$ has the form
\[B=B(b,a,c):=\begin{pmatrix}b&a&c\\ {}^{t}\!a&0&-{}^{t}\!a\\ {}^{t}\!c&-a&-b\end{pmatrix},\qquad
b\in\operatorname{Sym}_{n}(\mathbb{R}),\;a\in\mathrm{M}_{n,r}(\mathbb{R}),\;c\in\mathrm{Alt}_{n}(\mathbb{R}).\]
Let $w\in\Sigma$ be as above.
Lemma 3.4.
For an element $B(b,a,c)\in\mathfrak{p}(J)$, the following Taylor expansion holds:
\[\Delta(w,e^{sB}w)=-cs+\left(a\,{}^{t}\!a-bc-cb\right)\frac{s^{2}}{2}+O(s^{3}).\]
Proof.
Let
\[\Gamma_{1}={}^{t}\!WW=I_{n},\quad\Gamma(s)={}^{t}\!We^{sB}W,\quad\Phi(s)={}^{t}\!WJe^{sB}W.\]
Then we have
\[\Delta(w,e^{sB}w)={}^{t}\!(W\Gamma_{1}^{-1})J(e^{sB}W\Gamma(s)^{-1})=\Phi(s)\Gamma(s)^{-1}.\]
By the definition, it is easily seen that
\[\Phi(s)=-cs+(a\,{}^{t}\!a-bc+cb)\frac{s^{2}}{2}+O(s^{3}),\qquad
\Gamma(s)^{-1}=I_{n}+bs+O(s^{2}).\]
Then we obtain the expansion
\[\Delta(w,e^{sB}w)=-cs+\left(a\,{}^{t}\!a-bc-cb\right)\frac{s^{2}}{2}+O(s^{3}).\]
∎
Proposition 3.1.
For $B\in\mathfrak{p}(J)$, assume the Taylor expansion
\[\Delta(w,e^{sB}w)=(a\,{}^{t}\!a)\frac{s^{2}}{2}+O(s^{3}),\quad a\in\mathrm{M}_{n,r}(\mathbb{R})\]
at $s=0$.
Define a $\operatorname{Sym}^{+}_{n}(\mathbb{R})$-valued function $h(s)$ by
\[h(s)=\Delta(e^{sB}u_{0},e^{sB}u_{0})^{-1}.\]
Then $h(s)$ satisfies the matrix-valued Bratu equation
\[(h^{\prime}h^{-1})^{\prime}=2(a\,{}^{t}\!a)h^{-1}.\]
Proof.
From the assumption, the element $B$ has the form $B=B(b,a,0)$.
Since
\[\mathcal{F}^{-1}(e^{sB}u_{0})=e^{-2sB}=:G(s),\]
the function $\tilde{h}(s):=\pi(G(s))$ satisfies the matrix-valued Bratu equation
\[(\tilde{h}^{\prime}\tilde{h}^{-1})^{\prime}=4(a\,{}^{t}\!a)\tilde{h}^{-1}.\]
Therefore, the function
\[h(s):=\Delta(e^{sB}u_{0},e^{sB}u_{0})^{-1}=\frac{1}{2}\tilde{h}(s)\]
satisfies the equation
\[(h^{\prime}h^{-1})^{\prime}=2(a\,{}^{t}\!a)h^{-1}.\]
∎
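Remark (numerical illustration). Proposition 3.1 can also be tested directly in the Siegel realization, without passing through $\Omega(J)$. The following sketch is an illustration only (assuming NumPy/SciPy; h_siegel is a name introduced here): it computes $h(s)=\Delta(e^{sB}u_{0},e^{sB}u_{0})^{-1}$ from the representative $e^{sB}U_{0}$ and checks $(h'h^{-1})'=2(a\,{}^{t}\!a)h^{-1}$ by central differences.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def h_siegel(s, b, a):
    """h(s) = Delta(e^{sB} u0, e^{sB} u0)^{-1} for B = B(b,a,0), U0 = (-I; 0; I)."""
    n, r = a.shape
    B = np.block([[b, a, np.zeros((n, n))],
                  [a.T, np.zeros((r, r)), -a.T],
                  [np.zeros((n, n)), -a, -b]])
    J = -np.block([[np.zeros((n, n)), np.zeros((n, r)), np.eye(n)],
                   [np.zeros((r, n)), np.eye(r), np.zeros((r, n))],
                   [np.eye(n), np.zeros((n, r)), np.zeros((n, n))]])
    U0 = np.vstack([-np.eye(n), np.zeros((r, n)), np.eye(n)])
    U = expm(s * B) @ U0
    V = U @ np.linalg.inv(U[n + r:, :])              # Gamma = tW U = last n rows
    return np.linalg.inv(V.T @ J @ V)

def residual(s, b, a, eps=1e-4):
    F = lambda t: ((h_siegel(t + eps, b, a) - h_siegel(t - eps, b, a)) / (2 * eps)
                   @ np.linalg.inv(h_siegel(t, b, a)))
    lhs = (F(s + eps) - F(s - eps)) / (2 * eps)
    return np.linalg.norm(lhs - 2 * a @ a.T @ np.linalg.inv(h_siegel(s, b, a)))

rng = np.random.default_rng(4)
n, r = 3, 2
b = rng.standard_normal((n, n)); b = (b + b.T) / 2
a = rng.standard_normal((n, r))
print(residual(0.3, b, a))                           # small (finite-difference accuracy)
\end{verbatim}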
3.2 Bounded domain of type BDI and power series solution
Bounded domain $D(L)$ of type BDI
Let
\[L=\begin{pmatrix}I_{n}&0&0\\ 0&-I_{r}&0\\ 0&0&-I_{n}\end{pmatrix}.\]
We define a subdomain $D(L)\subset D$ by
\[M(L)=\left\{U\in M;\;{}^{t}\!ULU>0\right\},\qquad
D(L):=\left\{u=[U]\in D;\;U\in M(L)\right\}.\]
We notice the relation $L=AJ\,{}^{t}\!A$ with the matrix
\[A=\frac{1}{\sqrt{2}}\begin{pmatrix}I_{n}&0&-I_{n}\\ 0&\sqrt{2}I_{r}&0\\ I_{n}&0&I_{n}\end{pmatrix}.\]
Lemma 3.5.
1. The following map is bijective:
\[D(J)\xrightarrow{\sim}D(L):u\mapsto Au.\]
2. (Realization as a bounded domain)
$D(L)$ is described as
\[D(L)=\left\{v=\begin{bmatrix}I_{n}\\ V_{2}\\ V_{3}\end{bmatrix}\in D;\;I_{n}-{}^{t}\!V_{2}V_{2}-{}^{t}\!V_{3}V_{3}>0\right\}.\]
Next, we take two points $v_{0},z$ as
\[v_{0}=[V_{0}]\in D(L),\quad z=[Z]\in A\Sigma,\qquad
V_{0}=-\sqrt{2}\begin{pmatrix}I_{n}\\ 0\\ 0\end{pmatrix}=AU_{0},\quad
Z=\frac{1}{\sqrt{2}}\begin{pmatrix}-I_{n}\\ 0\\ I_{n}\end{pmatrix}=AW.\]
For two points $v_{1},v_{2}\in D(L)$ expressed as
\[v_{j}=[V_{j}],\quad V_{j}=\begin{pmatrix}V_{j,1}\\ V_{j,2}\\ V_{j,3}\end{pmatrix}\quad(j=1,2),\]
we define $\psi(v_{1},v_{2})\in\mathrm{M}_{n}(\mathbb{R})$ by
\[\psi(v_{1},v_{2})={}^{t}\!(V_{1}\Gamma_{1}^{-1})L(V_{2}\Gamma_{2}^{-1}),\qquad
\Gamma_{j}={}^{t}\!ZV_{j}=(V_{j,3}-V_{j,1})/\sqrt{2}\quad(j=1,2).\]
Lemma 3.6.
For any $u_{1},u_{2}\in D(J)$, it holds that
\[\psi(Au_{1},Au_{2})=\Delta(u_{1},u_{2}).\]
By the above lemma, we have the following commutative diagram:
\[\begin{array}{ccc}
D(J)\times D(J)&\xrightarrow{\ A\times A\ }&D(L)\times D(L)\\[2pt]
&{\scriptstyle\Delta(\cdot,\cdot)}\searrow&\downarrow{\scriptstyle\psi(\cdot,\cdot)}\\[2pt]
&&\operatorname{Sym}^{+}_{n}(\mathbb{R})
\end{array}\]
Proof.
Take arbitrary $u_{1},u_{2}\in D(J)$ and express them as
\[u_{j}=[U_{j}]\quad(j=1,2).\]
With the relation
\[\Gamma_{j}={}^{t}\!Z(AU_{j})={}^{t}\!WU_{j},\]
we calculate
\begin{align*}
\psi(Au_{1},Au_{2})&={}^{t}\!(AU_{1}\Gamma_{1}^{-1})L(AU_{2}\Gamma_{2}^{-1})\\
&={}^{t}\!(U_{1}\Gamma_{1}^{-1})J(U_{2}\Gamma_{2}^{-1})\\
&=\Delta(u_{1},u_{2}).
\end{align*}
∎
Put
\[\mathfrak{p}(L)=\left\{X\in\mathfrak{g}(L);\;X={}^{t}\!X\right\}.\]
Proposition 3.2.
For $B\in\mathfrak{p}(L)$, assume the Taylor expansion
\[\psi(z,e^{sB}z)=(a\,{}^{t}\!a)\frac{s^{2}}{2}+O(s^{3}),\quad a\in\mathrm{M}_{n,r}(\mathbb{R})\]
at $s=0$.
Define a $\operatorname{Sym}^{+}_{n}(\mathbb{R})$-valued function $h(s)$ by
\[h(s)=\psi(e^{sB}v_{0},e^{sB}v_{0})^{-1}.\]
Then $h(s)$ satisfies the matrix-valued Bratu equation
\[(h^{\prime}h^{-1})^{\prime}=2(a\,{}^{t}\!a)h^{-1}.\]
Derivation of the power series solution
We take an arbitrary $B\in\mathfrak{p}(L)$ and express it as
\[B=\begin{pmatrix}0&{}^{t}\!C\\ C&0\end{pmatrix},\quad C\in\mathrm{M}_{n+r,n}(\mathbb{R}).\]
We consider the singular value decomposition of the submatrix $C$ in the following form:
\[C=\begin{pmatrix}p_{1}&p_{3}\\ p_{2}&p\end{pmatrix}\begin{pmatrix}0\\ \sigma\end{pmatrix}{}^{t}\!q,\qquad
\begin{pmatrix}p_{1}&p_{3}\\ p_{2}&p\end{pmatrix}\in\mathrm{O}(n+r),\quad q\in\mathrm{O}(n),\quad
\sigma=\operatorname{diag}(\sigma_{i})_{i=1}^{n}\in\mathrm{M}_{n}(\mathbb{R}).\]
We define the following functions of $s\in\mathbb{R}$:
\[\mathrm{ch}(s\sigma):=\frac{1}{2}(e^{s\sigma}+e^{-s\sigma}),\quad
\mathrm{sh}(s\sigma):=\frac{1}{2}(e^{s\sigma}-e^{-s\sigma}).\]
Proposition 3.3.
For $B\in\mathfrak{p}(L)$, define a function
\[h(s):=\psi(e^{sB}v_{0},e^{sB}v_{0})^{-1}.\]
Then $h(s)$ has the following expression:
\[h(s)=\frac{1}{2}\bm{e}(s)\,{}^{t}\!\bm{e}(s),\qquad\bm{e}(s)=p\,\mathrm{sh}(s\sigma)-q\,\mathrm{ch}(s\sigma).\]
Proof.
From the singular value decomposition, we get the equation
\[B=P\begin{pmatrix}0&0&\sigma\\ 0&0&0\\ \sigma&0&0\end{pmatrix}{}^{t}\!P,\qquad
P:=\begin{pmatrix}q&0&0\\ 0&p_{1}&p_{3}\\ 0&p_{2}&p\end{pmatrix}.\]
Then we can calculate its exponential as
\[e^{sB}V_{0}
=P\begin{pmatrix}\mathrm{ch}(s\sigma)&0&\mathrm{sh}(s\sigma)\\ 0&I_{r}&0\\ \mathrm{sh}(s\sigma)&0&\mathrm{ch}(s\sigma)\end{pmatrix}{}^{t}\!PV_{0}
=-\sqrt{2}\begin{pmatrix}q\,\mathrm{ch}(s\sigma)\,{}^{t}\!q\\ p_{3}\,\mathrm{sh}(s\sigma)\,{}^{t}\!q\\ p\,\mathrm{sh}(s\sigma)\,{}^{t}\!q\end{pmatrix}.\]
Putting
\[\Gamma:={}^{t}\!Z\,e^{sB}V_{0}=q\,\mathrm{ch}(s\sigma)\,{}^{t}\!q-p\,\mathrm{sh}(s\sigma)\,{}^{t}\!q=-\bm{e}(s)\,{}^{t}\!q,\]
we obtain
\begin{align*}
\psi(e^{sB}v_{0},e^{sB}v_{0})
&={}^{t}\!(e^{sB}V_{0}\Gamma^{-1})L(e^{sB}V_{0}\Gamma^{-1})\\
&=2\,{}^{t}\!\Gamma^{-1}\Gamma^{-1}=2\,{}^{t}\!\bm{e}(s)^{-1}\bm{e}(s)^{-1}.
\end{align*}
∎
Corollary 3.1.
$h(s)$ has the following power series expansion:
\[h(s)=\frac{1}{2}I_{n}
+\frac{1}{4}\sum_{k=1}^{\infty}\left(q\sigma^{2k}\,{}^{t}\!q+p\sigma^{2k}\,{}^{t}\!p\right)\frac{(2s)^{2k}}{(2k)!}
-\frac{1}{4}\sum_{k=1}^{\infty}\left(p\sigma^{2k-1}\,{}^{t}\!q+q\sigma^{2k-1}\,{}^{t}\!p\right)\frac{(2s)^{2k-1}}{(2k-1)!}.\]
Proof.
It is shown by direct calculation:
\begin{align*}
\frac{1}{2}\bm{e}(s)\,{}^{t}\!\bm{e}(s)
&=\frac{1}{2}\begin{pmatrix}-q&p\end{pmatrix}
\begin{pmatrix}\mathrm{ch}(s\sigma)^{2}&\mathrm{ch}(s\sigma)\mathrm{sh}(s\sigma)\\ \mathrm{sh}(s\sigma)\mathrm{ch}(s\sigma)&\mathrm{sh}(s\sigma)^{2}\end{pmatrix}
\begin{pmatrix}-{}^{t}\!q\\ {}^{t}\!p\end{pmatrix}\\
&=\frac{1}{4}\begin{pmatrix}-q&p\end{pmatrix}
\begin{pmatrix}\mathrm{ch}(2s\sigma)+I_{n}&\mathrm{sh}(2s\sigma)\\ \mathrm{sh}(2s\sigma)&\mathrm{ch}(2s\sigma)-I_{n}\end{pmatrix}
\begin{pmatrix}-{}^{t}\!q\\ {}^{t}\!p\end{pmatrix}\\
&=\frac{1}{4}\bigl\{2I_{n}+q(\mathrm{ch}(2s\sigma)-I_{n})\,{}^{t}\!q+p(\mathrm{ch}(2s\sigma)-I_{n})\,{}^{t}\!p
-q\,\mathrm{sh}(2s\sigma)\,{}^{t}\!p-p\,\mathrm{sh}(2s\sigma)\,{}^{t}\!q\bigr\}.
\end{align*}
∎
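Remark (numerical illustration). The closed form of Proposition 3.3, and hence the expansion above, can be compared with a direct evaluation of $\psi(e^{sB}v_{0},e^{sB}v_{0})^{-1}$. The following sketch is an illustration only, assuming NumPy/SciPy; the singular value decomposition is taken in the form used above, and variable names are introduced here for illustration.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
n, r, s = 3, 2, 0.7
C = rng.standard_normal((n + r, n))
B = np.block([[np.zeros((n, n)), C.T],
              [C, np.zeros((n + r, n + r))]])       # B in p(L)

# direct evaluation of h(s) = psi(e^{sB} v0, e^{sB} v0)^{-1}
L = np.diag([1.0] * n + [-1.0] * (n + r))
V0 = -np.sqrt(2) * np.vstack([np.eye(n), np.zeros((n + r, n))])
Z = np.vstack([-np.eye(n), np.zeros((r, n)), np.eye(n)]) / np.sqrt(2)
V = expm(s * B) @ V0
W = V @ np.linalg.inv(Z.T @ V)                      # V Gamma^{-1}
h_direct = np.linalg.inv(W.T @ L @ W)

# closed form: C = (p3; p) sigma tq,  e(s) = p sh(s sigma) - q ch(s sigma)
U, sig, Vt = np.linalg.svd(C, full_matrices=False)
p, q = U[r:, :], Vt.T                               # bottom n rows of U give p
e = p @ np.diag(np.sinh(s * sig)) - q @ np.diag(np.cosh(s * sig))
print(np.allclose(h_direct, 0.5 * e @ e.T))         # True
\end{verbatim}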
4 Analog to symmetric domain of type CI
Manifold $\Omega(J)$
We define a submanifold $\Omega(J)$ of the cone $\operatorname{Sym}^{+}_{2n}(\mathbb{R})$ by
\[\Omega(J)=\left\{G\in\operatorname{Sym}^{+}_{2n}(\mathbb{R});\;JGJ=G^{-1}\right\},\qquad
J=\begin{pmatrix}0&iI_{n}\\ -iI_{n}&0\end{pmatrix}.\]
Defining the set $V(J)$ by
\[V(J)=\left\{B\in\operatorname{Sym}_{2n}(\mathbb{R});\;JBJ=-B\right\},\]
we can consider the map
\[V(J)\ni B\mapsto\exp B\in\Omega(J).\]
Any element $B\in V(J)$ has the form
\[B=B(b,c):=\begin{pmatrix}b&c\\ c&-b\end{pmatrix},\qquad b,c\in\operatorname{Sym}_{n}(\mathbb{R}).\]
For an element $B\in V(J)$, we define an $\Omega(J)$-valued function $G(s;B)$ by
\[G(s;B):=\exp(sB(b,c))\qquad(s\in\mathbb{R}).\]
We define the following sets of matrices:
\begin{align*}
\mathcal{G}(J)&=\left\{G\in\operatorname{GL}_{2n}(\mathbb{R});\;JGJ={}^{t}\!G^{-1}\right\}\cong\mathrm{Sp}(n,\mathbb{R}),\\
\mathcal{N}(J)&=\left\{N\in\mathcal{G}(J);\;N=\begin{pmatrix}I_{n}&0\\ *&I_{n}\end{pmatrix}\right\}
=\left\{N=\begin{pmatrix}I_{n}&0\\ n_{1}&I_{n}\end{pmatrix};\;n_{1}={}^{t}\!n_{1}\right\},\\
\Omega_{0}(J)&=\left\{A\in\Omega(J);\;A=\begin{pmatrix}*&0\\ 0&*\end{pmatrix}\right\}
=\left\{A=\begin{pmatrix}h&0\\ 0&h^{-1}\end{pmatrix};\;{}^{t}\!h=h\right\}.
\end{align*}
Proposition 4.1 (Block-Gauss decomposition for $\Omega(J)$).
The following map is bijective:
\[\mathcal{N}(J)\times\Omega_{0}(J)\rightarrow\Omega(J),\qquad(N,A)\mapsto NA\,{}^{t}\!N.\]
We denote the variable change defined through this decomposition by
\[G\rightarrow N,A\rightarrow h,n_{1},\]
or simply by $G\rightarrow h$.
Lemma 4.1.
Under the variable change
\[G\rightarrow N,A\rightarrow h,n_{1},\]
it holds that
\begin{align*}
\frac{1}{2}\operatorname{tr}((G^{\prime}G^{-1})^{2})
&=\frac{1}{2}\operatorname{tr}((A^{\prime}A^{-1})^{2})+\operatorname{tr}(A\,{}^{t}\!N^{\prime}\,{}^{t}\!N^{-1}A^{-1}N^{-1}N^{\prime})\\
&=\operatorname{tr}((h^{\prime}h^{-1})^{2})+\operatorname{tr}(h\,{}^{t}\!n_{1}^{\prime}hn_{1}^{\prime}).
\end{align*}
By directly deriving the Euler-Lagrange equation, we obtain the following analog of the matrix-valued Bratu equation.
Theorem 4.1.
The equation
\[G^{\prime}(s)G(s)^{-1}=B(b,c)\]
for an $\Omega(J)$-valued function $G(s)$ is expressed, via the variable change $G(s)\rightarrow h(s),n_{1}(s)$, as
\[(h^{\prime}(s)h(s)^{-1})^{\prime}=(ch(s)^{-1})^{2}.\]
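Remark (numerical illustration). As in the BDI case, the upper-left $n\times n$ block of $G=NA\,{}^{t}\!N$ equals $h$, so Theorem 4.1 can be checked numerically. The following sketch is an illustration only (assuming NumPy/SciPy; h_CI is a name introduced here): it takes $h(s)$ as the upper-left block of $\exp(sB(b,c))$ and tests $(h'h^{-1})'=(ch^{-1})^{2}$ by central differences.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def h_CI(s, b, c):
    """Upper-left n x n block of exp(s B(b,c)), i.e. h under G -> h, n1."""
    n = b.shape[0]
    return expm(s * np.block([[b, c], [c, -b]]))[:n, :n]

def residual_CI(s, b, c, eps=1e-4):
    F = lambda t: ((h_CI(t + eps, b, c) - h_CI(t - eps, b, c)) / (2 * eps)
                   @ np.linalg.inv(h_CI(t, b, c)))
    lhs = (F(s + eps) - F(s - eps)) / (2 * eps)
    rhs = c @ np.linalg.inv(h_CI(s, b, c))
    return np.linalg.norm(lhs - rhs @ rhs)

rng = np.random.default_rng(6)
n = 3
b = rng.standard_normal((n, n)); b = (b + b.T) / 2
c = rng.standard_normal((n, n)); c = (c + c.T) / 2   # c symmetric for type CI
print(residual_CI(0.4, b, c))                        # small (finite-difference accuracy)
\end{verbatim}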
4.1 Siegel domain $D(J;K)$ of type CI
Similarly to the type BDI case, we put
\[M=\left\{U=\begin{pmatrix}U_{1}\\ U_{2}\end{pmatrix}\in\mathrm{M}_{2n,n}(\mathbb{C});\;\operatorname{rank}U=n\right\}\]
and consider the actions
\[\operatorname{GL}(2n,\mathbb{C})\curvearrowright M\curvearrowleft\operatorname{GL}(n,\mathbb{C})\]
given by matrix multiplication. We put the quotient space
\[D:=M/\operatorname{GL}(n,\mathbb{C})=\left\{u=[U]=\begin{bmatrix}U_{1}\\ U_{2}\end{bmatrix};\;U\in M\right\}\]
and regard it as a domain. We put
\[J=\begin{pmatrix}0&iI_{n}\\ -iI_{n}&0\end{pmatrix},\quad K=\begin{pmatrix}0&I_{n}\\ -I_{n}&0\end{pmatrix}\]
and define a subdomain $D(J;K)$ by
\[M(J;K)=\left\{U\in M;\;U^{*}JU>0,\;{}^{t}\!UKU=0\right\},\qquad
D(J;K):=\left\{u=[U]\in D;\;U\in M(J;K)\right\}.\]
Lemma 4.2 (Realization as a Siegel upper half-space).
$D(J;K)$ is the following set:
\begin{align*}
D(J;K)
&=\left\{u=\begin{bmatrix}U_{1}\\ I_{n}\end{bmatrix};\;\frac{1}{i}(U_{1}-U_{1}^{*})>0,\;{}^{t}\!U_{1}=U_{1}\right\}\\
&\cong\operatorname{Sp}(n,\mathbb{R})/\mathrm{U}(n).
\end{align*}
We define a subdomain $\Sigma\subset D$ (the Shilov boundary of $D(J;K)$) by
\[\Sigma=\left\{u=\begin{bmatrix}U_{1}\\ I_{n}\end{bmatrix};\;\frac{1}{i}(U_{1}-U_{1}^{*})=0,\;{}^{t}\!U_{1}=U_{1}\right\}.\]
We take two points $u_{0},w$ as
\[u_{0}=[U_{0}]\in D(J;K),\quad w=[W]\in\Sigma,\qquad
U_{0}=\begin{pmatrix}iI_{n}\\ I_{n}\end{pmatrix},\quad W=\begin{pmatrix}0\\ I_{n}\end{pmatrix}.\]
For two points $u_{1},u_{2}\in D(J;K)$ expressed as
\[u_{j}=[U_{j}],\quad U_{j}=\begin{pmatrix}U_{j,1}\\ U_{j,2}\end{pmatrix}\quad(j=1,2),\]
we define $\Delta(u_{1},u_{2})\in\mathrm{M}_{n}(\mathbb{C})$ by
\[\Delta(u_{1},u_{2})=(U_{1}\Gamma_{1}^{-1})^{*}J(U_{2}\Gamma_{2}^{-1}),\qquad
\Gamma_{j}={}^{t}\!WU_{j}=U_{j,2}\quad(j=1,2).\]
Put
\begin{align*}
\mathfrak{g}(J)&=\left\{X\in\mathfrak{gl}(2n,\mathbb{R});\;XJ+J\,{}^{t}\!X=0\right\}\cong\mathfrak{sp}(n,\mathbb{R}),\\
\mathfrak{p}(J)&=\left\{X\in\mathfrak{g}(J);\;X={}^{t}\!X\right\}
=\left\{B\in\mathfrak{g}(J);\;B=\begin{pmatrix}b&c\\ c&-b\end{pmatrix},\;b,c\in\operatorname{Sym}_{n}(\mathbb{R})\right\}.
\end{align*}
Proposition 4.2 (Analog for type CI).
For $B\in\mathfrak{p}(J)$, define a $\operatorname{Sym}^{+}_{n}(\mathbb{R})$-valued function $h(s)$ by
\[h(s)=\Delta(e^{sB}u_{0},e^{sB}u_{0})^{-1}.\]
Then $h(s)$ satisfies the equation
\[(h^{\prime}h^{-1})^{\prime}=(ch^{-1})^{2}.\]
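Remark (numerical illustration). Proposition 4.2 can be tested in the Siegel upper half-space realization as well. The following sketch is an illustration only (assuming NumPy/SciPy; complex arithmetic appears only in intermediate steps, and h_CI_siegel is a name introduced here): it computes $h(s)=\Delta(e^{sB}u_{0},e^{sB}u_{0})^{-1}$ from the representative $e^{sB}U_{0}$ and tests $(h'h^{-1})'=(ch^{-1})^{2}$ by central differences.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def h_CI_siegel(s, b, c):
    """h(s) = Delta(e^{sB} u0, e^{sB} u0)^{-1} for B = B(b,c), U0 = (iI; I)."""
    n = b.shape[0]
    B = np.block([[b, c], [c, -b]])
    J = np.block([[np.zeros((n, n)), 1j * np.eye(n)],
                  [-1j * np.eye(n), np.zeros((n, n))]])
    U = expm(s * B) @ np.vstack([1j * np.eye(n), np.eye(n)])
    V = U @ np.linalg.inv(U[n:, :])                  # Gamma = tW U = lower n rows
    Delta = V.conj().T @ J @ V                       # real symmetric up to rounding
    return np.linalg.inv(Delta.real)

def residual(s, b, c, eps=1e-4):
    F = lambda t: ((h_CI_siegel(t + eps, b, c) - h_CI_siegel(t - eps, b, c)) / (2 * eps)
                   @ np.linalg.inv(h_CI_siegel(t, b, c)))
    lhs = (F(s + eps) - F(s - eps)) / (2 * eps)
    rhs = c @ np.linalg.inv(h_CI_siegel(s, b, c))
    return np.linalg.norm(lhs - rhs @ rhs)

rng = np.random.default_rng(7)
n = 3
b = rng.standard_normal((n, n)); b = (b + b.T) / 2
c = rng.standard_normal((n, n)); c = (c + c.T) / 2
print(residual(0.3, b, c))                           # small (finite-difference accuracy)
\end{verbatim}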
We can define an action $\mathcal{G}(J)\curvearrowright D(J;K)$ by
\[g\cdot u=[gU]\quad(g\in\mathcal{G}(J),\;u=[U]\in D(J;K)).\]
We also define the subgroups
\begin{align*}
\mathcal{P}(J)&=\left\{g\in\mathcal{G}(J);\;g=\begin{pmatrix}*&*\\ 0&*\end{pmatrix}\right\},\\
\mathcal{K}(J)&=\left\{g\in\mathcal{G}(J);\;{}^{t}\!g=g^{-1}\right\}\cong\mathrm{U}(n),\\
\mathcal{G}(J)_{u_{0}}&=\left\{g\in\mathcal{G}(J);\;gu_{0}=u_{0}\right\}.
\end{align*}
Lemma 4.3.
The following assertions hold.
1. $\mathcal{G}(J)=\mathcal{P}(J)\mathcal{K}(J)$.
2. $\mathcal{G}(J)_{u_{0}}=\mathcal{K}(J)$.
3. The action $\mathcal{G}(J)\curvearrowright D(J;K)$ is transitive, and the following expression holds:
\[D(J;K)\cong\operatorname{Sp}(n,\mathbb{R})/\mathrm{U}(n).\]
Proof of the proposition
Define a manifold
\begin{align*}
\Omega(J)
&=\left\{g\in\operatorname{Herm}^{+}_{2n}(\mathbb{C});\;gJg^{*}=J,\;gK\,{}^{t}\!g=K\right\}\\
&=\left\{g\in\operatorname{Sym}^{+}_{2n}(\mathbb{R});\;gJ\,{}^{t}\!g=J\right\}
\end{align*}
and a map $\mathcal{F}:\Omega(J)\rightarrow D$ by
\[\mathcal{F}(G)=\begin{bmatrix}iI_{n}-G_{2}\\ h\end{bmatrix},\quad
G=\begin{pmatrix}h&*\\ G_{2}&*\end{pmatrix}\in\Omega(J),\]
and a map $\pi:\Omega(J)\rightarrow\operatorname{Sym}^{+}_{n}(\mathbb{R})$ by
\[\pi(G)=h,\quad G=\begin{pmatrix}h&*\\ G_{2}&*\end{pmatrix}\in\Omega(J).\]
We define an action $\mathcal{G}(J)\curvearrowright\Omega(J)$ by
\[g\cdot G={}^{t}\!g^{-1}Gg^{-1}\quad(g\in\mathcal{G}(J),\;G\in\Omega(J)).\]
Lemma 4.4.
The following assertions hold.
1. $\mathcal{F}(g\cdot G)=g\mathcal{F}(G)$ for any $g\in\mathcal{G}(J)$, $G\in\Omega(J)$.
2. $\mathcal{F}(\Omega(J))\subset D(J;K)$ and the map $\mathcal{F}:\Omega(J)\rightarrow D(J;K)$ is bijective.
3. For any $G\in\Omega(J)$, it holds that
\[\frac{1}{2}\pi(G)=\Delta(\mathcal{F}(G),\mathcal{F}(G))^{-1}.\]
From the above lemma, we have the following commutative diagram:
\[\begin{array}{ccc}
\Omega(J)&\xrightarrow{\ \mathcal{F}\ }&D(J;K)\\[2pt]
&{\scriptstyle\frac{1}{2}\pi}\searrow&\downarrow{\scriptstyle\Delta(\cdot,\cdot)^{-1}}\\[2pt]
&&\operatorname{Sym}^{+}_{n}(\mathbb{R})
\end{array}\]
Proof.
Each $G\in\Omega(J)$ is expressed as
\[G={}^{t}\!g^{-1}g^{-1}=\begin{pmatrix}h&hn_{1}\\ {}^{t}\!n_{1}h&I_{n}+{}^{t}\!n_{1}hn_{1}\end{pmatrix},\qquad
{}^{t}\!g^{-1}=\begin{pmatrix}I_{n}&0\\ {}^{t}\!n_{1}&I_{n}\end{pmatrix}\begin{pmatrix}h^{\frac{1}{2}}&0\\ 0&h^{-\frac{1}{2}}\end{pmatrix}\in\mathcal{G}(J).\]
We also see that
\[gu_{0}
=\begin{pmatrix}I_{n}&-n_{1}\\ 0&I_{n}\end{pmatrix}\begin{pmatrix}h^{-\frac{1}{2}}&0\\ 0&h^{\frac{1}{2}}\end{pmatrix}\begin{bmatrix}iI_{n}\\ I_{n}\end{bmatrix}
=\begin{bmatrix}ih^{-1}-n_{1}\\ I_{n}\end{bmatrix}.\]
1. First, we show the assertion in the case $G=I_{2n}$.
Take an arbitrary $g\in\mathcal{G}(J)$ and express it as
\[g=pk,\quad p\in\mathcal{P}(J),\;k\in\mathcal{K}(J),\qquad
p=\begin{pmatrix}p_{1}&-p_{2}\\ 0&{}^{t}\!p_{1}^{-1}\end{pmatrix}.\]
Then, using the relation ${}^{t}\!p^{-1}=J^{-1}pJ$, we get
\begin{align*}
p\cdot I_{2n}
&={}^{t}\!p^{-1}p^{-1}
=\begin{pmatrix}{}^{t}\!p_{1}^{-1}&0\\ p_{2}&p_{1}\end{pmatrix}\begin{pmatrix}p_{1}^{-1}&{}^{t}\!p_{2}\\ 0&{}^{t}\!p_{1}\end{pmatrix}
=\begin{pmatrix}{}^{t}\!p_{1}^{-1}p_{1}^{-1}&*\\ p_{2}p_{1}^{-1}&*\end{pmatrix}.
\end{align*}
Therefore,
\[\mathcal{F}(p\cdot I_{2n})
=\begin{bmatrix}iI_{n}-p_{2}p_{1}^{-1}\\ {}^{t}\!p_{1}^{-1}p_{1}^{-1}\end{bmatrix}
=\begin{bmatrix}ip_{1}-p_{2}\\ {}^{t}\!p_{1}^{-1}\end{bmatrix}.\]
On the other hand, we see that
\[p\mathcal{F}(I_{2n})=p\begin{bmatrix}iI_{n}\\ I_{n}\end{bmatrix}=\begin{bmatrix}ip_{1}-p_{2}\\ {}^{t}\!p_{1}^{-1}\end{bmatrix},\]
and hence $\mathcal{F}(p\cdot I_{2n})=p\mathcal{F}(I_{2n})$. Therefore,
\[\mathcal{F}(g\cdot I_{2n})=\mathcal{F}(p\cdot I_{2n})
=pu_{0}=pku_{0}=g\mathcal{F}(I_{2n}).\]
Next, we show the assertion in the general case $g\in\mathcal{G}(J)$, $G\in\Omega(J)$.
Express an arbitrary $G\in\Omega(J)$ as
\[G=g_{0}\cdot I_{2n},\quad g_{0}\in\mathcal{G}(J).\]
Then we see that
\[\mathcal{F}(g\cdot G)=\mathcal{F}(gg_{0}\cdot I_{2n})=gg_{0}\mathcal{F}(I_{2n})
=g\mathcal{F}(g_{0}\cdot I_{2n})=g\mathcal{F}(G).\]
3. Take an arbitrary $G\in\Omega(J)$ and express it as
\[G=\begin{pmatrix}h&{}^{t}\!G_{2}\\ G_{2}&*\end{pmatrix},\quad{}^{t}\!G_{2}h-hG_{2}=0.\]
Then we see that
\[\mathcal{F}(G)=[U],\quad U=\begin{pmatrix}iI_{n}-G_{2}\\ h\end{pmatrix}.\]
Putting
\[\Gamma={}^{t}\!WU=h,\]
we see that
\begin{align*}
\Delta(\mathcal{F}(G),\mathcal{F}(G))&=(U\Gamma^{-1})^{*}J(U\Gamma^{-1})\\
&=\frac{1}{i}\left\{(ih^{-1}-G_{2}h^{-1})-(ih^{-1}-G_{2}h^{-1})^{*}\right\}\\
&=2h^{-1}>0.
\end{align*}
∎
Proof of Proposition 4.2.
From the definition, we have
\[\mathcal{F}^{-1}(e^{sB}u_{0})=e^{-2sB}=:G(s).\]
Then the function $\tilde{h}(s):=\pi(G(s))$ satisfies
\[(\tilde{h}^{\prime}\tilde{h}^{-1})^{\prime}=4(c\tilde{h}^{-1})^{2}.\]
Therefore, the function
\[h(s):=\Delta(e^{sB}u_{0},e^{sB}u_{0})^{-1}=\frac{1}{2}\tilde{h}(s)\]
satisfies
\[(h^{\prime}h^{-1})^{\prime}=(ch^{-1})^{2}.\]
∎
5 Remarks
We gave individual calculations for the symmetric domains of type BDI and CI. The same method, consisting of the Lagrangian calculation and a realization of the symmetric domain, is likely applicable to the other symmetric domains in Cartan's classification [2, 3]. For this purpose, it would be natural to look for a unified way to treat general symmetric domains.
References
[1]
Inoue, H.,
Matrix-valued Bratu equation and the exact solution of its initial value problem,
Int. J. Math. Ind., published 12 March 2020.
[2]
Helgason, S.,
Differential geometry and symmetric spaces ,
Academic Press, 1962.
[3]
Pyatetskii-Shapiro, I. I.,
Automorphic functions and the geometry of classical domains ,
Gordon and Breach Science Publishers, 1969.
Hiroto Inoue
Institute of Mathematics for Industry
Kyushu University
744 Motooka, Nishi-ku
Fukuoka, 819-0395, Japan
(E-mail: [email protected] )