

Thomas Dreyfus, Institut de Recherche Mathématique Avancée, U.M.R. 7501, Université de Strasbourg et C.N.R.S., 7 rue René Descartes, 67084 Strasbourg, France. Email: [email protected]

Jacques-Arthur Weil, XLIM, U.M.R. 7252, Université de Limoges et C.N.R.S., 123 avenue Albert Thomas, 87060 Limoges Cedex, France. Email: [email protected]

Differential Galois Theory and Integration

Thomas Dreyfus and Jacques-Arthur Weil
Abstract

In this chapter, we present methods to simplify reducible linear differential systems before solving them. Classical integrals appear naturally as solutions of such systems. We illustrate the methods developed in DW (19) on several examples, reducing the differential systems; this gives information on potential algebraic relations between integrals.

Keywords:
Ordinary Differential Equations, Differential Galois Theory, Computer Algebra, Integrals, Lie Algebras, D-finite functions.

Introduction

In this chapter, we review properties of block triangular linear differential systems and their use in computing properties of integrals.

Let $\mathbf{k}=\mathbb{C}(x)$ and $A\in\mathrm{Mat}(n,\mathbf{k})$. We will study the corresponding linear differential system $[A]:\ \partial_{x}Y=AY$. More generally, we may consider linear differential systems over a differential field $(\mathbf{k},\partial)$ of characteristic zero, that is a field $\mathbf{k}$ equipped with an additive morphism $\partial$ satisfying the Leibniz rule $\partial(ab)=a\partial(b)+\partial(a)b$.

The Galois theory of linear differential equations aims at understanding the algebraic relations between the solutions of $[A]$. We attach to $[A]$ a group that measures these relations. The computation of this differential Galois group is a hard task in full generality. The goal of this chapter is to illustrate on examples the method described in DW (19), which focuses on the reduction of block triangular linear differential systems. This approach is powerful enough to understand the desired relations among the solutions.

Given an invertible matrix $P\in\mathrm{GL}(n,\mathbf{k})$, the linear change of variables $Y=PZ$ produces a new differential system, denoted $Z^{\prime}=P[A]Z$, where

P[A]:=P^{-1}AP-P^{-1}P^{\prime}.
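As a quick illustration (our addition, not part of the original text), this operation is easy to code. Here is a minimal sympy sketch; the helper name gauge_transform and the sample matrices are ours.

    import sympy as sp

    x = sp.symbols('x')

    def gauge_transform(P, A):
        # P[A] := P^{-1} A P - P^{-1} P'
        return (P.inv()*A*P - P.inv()*P.diff(x)).applyfunc(sp.simplify)

    # a small 2x2 example with a rational gauge matrix
    A = sp.Matrix([[0, 1], [1/x, 0]])
    P = sp.Matrix([[1, 0], [0, x]])
    print(gauge_transform(P, A))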

Two linear differential systems $[A]$ and $[B]$ are called (gauge) equivalent over $\mathbf{k}$ if there exists a gauge transformation, i.e. an invertible matrix $P\in\mathrm{GL}(n,\mathbf{k})$, such that $B=P[A]$. A linear differential system is called reducible (over $\mathbf{k}$) when it is gauge equivalent to a linear differential system in block triangular form:

[A]:\,\partial Y=AY,\quad\textrm{ with }A=\left(\begin{array}{c|c}A_{1}&0\\ \hline S&A_{2}\end{array}\right)\in\mathrm{Mat}(n,\mathbf{k}). (1)

It turns out that computing properties of integrals of D-finite functions (a function is D-finite when it is a solution of a linear differential equation with coefficients in $\mathbf{k}$) or of some types of iterated integrals may be reduced to computing solutions of such reducible linear differential systems. Computing the differential Galois groups of block triangular systems gives, in turn, information on properties of their solutions. This idea was promoted by Bertrand Ber (01) and by Berman and Singer BS (99), who showed how to compute Galois groups of some reducible systems and how this reveals algebraic properties of integrals.

Our aim, in this chapter, is to show how to compute such algebraic properties. The underlying theory is developed in DW (19) and CW (18); general references for differential Galois theory are, for example, PS (03); CH (11); Sin (09); general references for the constructive theory of reduced forms of differential systems are AMCW (13); MS (02); AMDW (16); BCDVW (20); BCWDV (16). We will rely on many examples rather than on cumbersome theory and provide references for interested readers. For example, we will consider the dilogarithm function $\textrm{Li}_{2}$ defined by $\textrm{Li}_{2}^{\prime}(x)=-\frac{\ln(1-x)}{x}$, and our 4-dimensional example (in the next sections) will provide a simple algorithmic proof that it is not only transcendental but algebraically independent of $e^{x}$, $\ln(x)$ and $\ln(1-x)$; the proof will only use rational solutions of first order linear differential equations.

The chapter is organized as follows. We begin with some examples to illustrate what the method provides. Then we give a review of differential Galois theory and reduced forms of differential systems. We finish by explaining the strategy of DW (19) on two examples that are chosen so that almost all calculations can be reproduced easily.

Acknowledgment. This project has received funding from the ANR project De Rerum Natura ANR-19-CE40-0018. We warmly thank the organizers of the conference Antidifferentiation and the Calculation of Feynman Amplitudes for stimulating exchanges and lectures. We especially thank the referee for many observations and precisions that have enhanced the quality of this text.

1 Examples

1.1 A first toy example

We consider two confluent Heun functions (see https://dlmf.nist.gov/31.12 or Mot (18); the notation $\textrm{HeunC}$ is the syntax in Maple)

f_{1}(x) = \exp\left(\frac{\sqrt{3}}{12}x\right)\textrm{HeunC}\left(\frac{\sqrt{3}}{6},-\frac{1}{3},-\frac{1}{3},\frac{1}{48},\frac{11}{48};x\right)
         = 1-\frac{7}{96}x-\frac{719}{46080}x^{2}-\frac{127307}{10616832}x^{3}-\frac{82319293}{10192158720}x^{4}+O\left(x^{5}\right)

and

f_{2}(x) = \exp\left(\frac{\sqrt{3}}{12}x\right)\sqrt[3]{x}\,\textrm{HeunC}\left(\frac{\sqrt{3}}{6},\frac{1}{3},-\frac{1}{3},\frac{1}{48},\frac{11}{48};x\right)
         = x^{\frac{1}{3}}\left(1+\frac{25}{192}x+\frac{8977}{129024}x^{2}+\frac{1099183}{26542080}x^{3}+O(x^{4})\right).

They form a basis of solutions of the second order linear differential equation

L(y):=\frac{{\rm d}^{2}}{{\rm d}x^{2}}y(x)+\frac{2}{3}\left(\frac{1}{x}+\frac{1}{x-1}\right)\frac{\rm d}{{\rm d}x}y(x)-\frac{3x^{2}-6x+7}{144\,x(x-1)}\,y(x)=0.

The Wronskian relation gives us the algebraic relation $(f_{1}f_{2}^{\prime}-f_{1}^{\prime}f_{2})^{3}\,x^{2}(x-1)^{2}=\tfrac{1}{27}$. The equation has order two, so the Kovacic algorithm Kov (86); UW (96); vHW (05) can be used to compute the differential Galois group, and we find that no other algebraic relations exist between $f_{1}$, $f_{2}$, $f_{1}^{\prime}$ and $f_{2}^{\prime}$. Now let $F_{i}(x):=\int^{x}f_{i}(t)\,dt$ be a primitive. We want to determine whether $F_{1}$ and $F_{2}$ are algebraically independent of the $f_{i}$ and $f_{i}^{\prime}$ or not. The techniques explained below will show that this question reduces to asking whether there is a rational solution to the linear differential system

Z^{\prime}=\left(\begin{array}{cc}0&\frac{-3x^{2}+6x-7}{144\,x(x-1)}\\ -1&\frac{1}{3}\,\frac{4x-2}{x(x-1)}\end{array}\right)\cdot Z\,+\,\begin{pmatrix}1\\ 0\end{pmatrix}

or, equivalently, whether the following linear differential equation (whose left hand side turns out to be the adjoint operator of $L$) has a rational solution:

-g^{\prime\prime}(x)+\frac{2}{3}\,\frac{2x-1}{x(x-1)}\,g^{\prime}(x)+\frac{3x^{4}-9x^{3}-179x^{2}+185x-96}{144\,x^{2}(x-1)^{2}}\,g(x)=1.

It can be seen directly, or with a computer algebra system, that this equation has no rational solution. The underlying theoretical tools come from constructive differential Galois theory. However, in operational terms, this is rather easy to compute and check: no hard theory is required for the calculation. Let us unveil a corner of the underlying tools.
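As an aside, the constancy of $(f_{1}f_{2}^{\prime}-f_{1}^{\prime}f_{2})^{3}\,x^{2}(x-1)^{2}$ quoted above follows from Abel's formula $W^{\prime}=-pW$ for the Wronskian $W$; here is a minimal sympy check (our addition, not part of the original text).

    import sympy as sp

    x = sp.symbols('x')
    W = sp.Function('W')

    p = sp.Rational(2, 3)*(1/x + 1/(x - 1))        # coefficient of y' in L(y)
    expr = W(x)**3 * x**2 * (x - 1)**2
    # Abel's formula for L(y) = y'' + p y' + q y: W' = -p W
    d = expr.diff(x).subs(W(x).diff(x), -p*W(x))
    print(sp.simplify(d))                           # expect 0, so the product is constant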

We have in fact studied the differential Galois group of the differential system $[A]$ with

A=\left(\begin{array}{ccc}0&1&0\\ \frac{3x^{2}-6x+7}{144\,x(x-1)}&-\frac{2}{3}\left(\frac{1}{x-1}+\frac{1}{x}\right)&0\\ 1&0&0\end{array}\right)

which admits the fundamental solution matrix

U:=\left(\begin{matrix}f_{1}(x)&f_{2}(x)&0\\ f_{1}^{\prime}(x)&f_{2}^{\prime}(x)&0\\ \int^{x}f_{1}(t)\,{\rm d}t&\int^{x}f_{2}(t)\,{\rm d}t&1\end{matrix}\right).
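This can be checked mechanically. Assuming the companion form of $A$ above and using only the differential equation $L(f_{i})=0$, the following sympy computation (our addition) verifies $U^{\prime}=AU$.

    import sympy as sp

    x = sp.symbols('x')
    f1, f2 = sp.Function('f1'), sp.Function('f2')

    # L(y) = y'' + p y' - q y = 0
    p = sp.Rational(2, 3)*(1/x + 1/(x - 1))
    q = (3*x**2 - 6*x + 7)/(144*x*(x - 1))

    A = sp.Matrix([[0, 1, 0], [q, -p, 0], [1, 0, 0]])
    U = sp.Matrix([[f1(x), f2(x), 0],
                   [f1(x).diff(x), f2(x).diff(x), 0],
                   [sp.Integral(f1(x), x), sp.Integral(f2(x), x), 1]])

    # U' - A U, then eliminate the second derivatives via the differential equation
    E = U.diff(x) - A*U
    ode = {f(x).diff(x, 2): -p*f(x).diff(x) + q*f(x) for f in (f1, f2)}
    print(E.subs(ode).applyfunc(sp.simplify))   # expect the zero matrix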

Our calculation of rational solutions above shows (with the tools displayed below) that the differential Galois group of the system $[A]$ has dimension 5. This in turn shows that the integrals $\int^{x}f_{j}(t)\,{\rm d}t$ are algebraically independent of $f_{1}$, $f_{2}$ and their derivatives. In fact, it even shows that the two integrals are algebraically independent of each other.

1.2 A second toy example

Recall that the hypergeometric function is given by the formula

{}_{2}F_{1}([a,b],[c])(x)=\displaystyle\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}}\frac{x^{n}}{n!},\quad\hbox{with }\ (a)_{0}=1,\;(a)_{n}=a(a+1)\cdots(a+n-1).

We now take two hypergeometric functions (with nice modular properties)

f_{1}(x)={}_{2}F_{1}\left(\left[-\frac{1}{3},\frac{1}{12}\right],\left[\frac{7}{12}\right]\right)(x)

and

f_{2}(x)=x^{5/12}\,{}_{2}F_{1}\left(\left[\frac{1}{12},\frac{1}{2}\right],\left[\frac{17}{12}\right]\right)(x).

The vectors $Y_{i}:=(f_{i},f_{i}^{\prime})^{T}$ are solutions of $Y^{\prime}=A_{1}Y$ below. To study properties of their integrals, we set $Y:=(f,f^{\prime},\int^{x}f(t)\,dt)^{T}$ and we have $Y^{\prime}=AY$ where

A_{1}=\left(\begin{array}{cc}0&1\\ \frac{1}{36}\,\frac{1}{x(x-1)}&-\frac{7}{12x}-\frac{1}{6(x-1)}\end{array}\right)\quad\textrm{ and }\quad A=\left(\begin{matrix}A_{1}&0\\ (1\;\;0)&0\end{matrix}\right).

We will see in the sequel how we may find a suitable change of variables

Q:=\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ -\frac{15}{44}+\frac{45x}{44}&-\frac{9x(x-1)}{11}&1\end{array}\right)

such that

Q[A]=\left(\begin{matrix}A_{1}&0\\ (0\;\;0)&0\end{matrix}\right).

This shows that

\int^{x}\!f_{i}(t)\,{\rm d}t=-\frac{9}{11}\,x(x-1)\,f_{i}^{\prime}(x)+\frac{15}{44}\,(3x-1)\,f_{i}(x)+c_{i},\quad c_{i}\in\mathbb{C},

and the differential Galois group of $[A]$ has dimension 3. We note that neither of these two examples is new and that efficient methods to handle these questions have been developed by Abramov and van Hoeij in AvH (99, 97).
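The gauge identity Q[A] = diag(A_1, 0) claimed above is a purely rational computation and can be verified directly; here is a minimal sympy sketch (our addition, not part of the original text).

    import sympy as sp

    x = sp.symbols('x')

    A = sp.Matrix([[0, 1, 0],
                   [sp.Rational(1, 36)/(x*(x - 1)), -sp.Rational(7, 12)/x - 1/(6*(x - 1)), 0],
                   [1, 0, 0]])
    Q = sp.Matrix([[1, 0, 0],
                   [0, 1, 0],
                   [-sp.Rational(15, 44) + sp.Rational(45, 44)*x, -sp.Rational(9, 11)*x*(x - 1), 1]])

    QA = (Q.inv()*A*Q - Q.inv()*Q.diff(x)).applyfunc(sp.simplify)
    print(QA)   # expect diag(A1, 0): the last row becomes (0, 0, 0)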

1.3 Integrals via reducible systems

These two examples have shown how, by augmenting the dimension of a linear differential system, we can study integrals of its solutions. Conversely, block triangular systems give rise to integrals via variation of constants; we review this for clarification. For a factorized reducible system $[A]$ of the form

A=\left(\begin{array}{c|c}A_{1}&0\\ \hline S&A_{2}\end{array}\right)=A_{\mathrm{diag}}+A_{\mathrm{sub}},

with $A_{\mathrm{diag}}:=\left(\begin{array}{c|c}A_{1}&0\\ \hline 0&A_{2}\end{array}\right)$ and $A_{\mathrm{sub}}:=\left(\begin{array}{c|c}0&0\\ \hline S&0\end{array}\right)$, we have a fundamental solution matrix of the form

U=\left(\begin{array}{c|c}U_{1}&0\\ \hline U_{2}V&U_{2}\end{array}\right)=\left(\begin{array}{c|c}U_{1}&0\\ \hline 0&U_{2}\end{array}\right)\left(\begin{array}{c|c}\mathrm{Id}_{n_{1}}&0\\ \hline V&\mathrm{Id}_{n_{2}}\end{array}\right).

Once $U_{1}$ and $U_{2}$ are known, $V$ is given by integrals: $\partial V=U_{2}^{-1}SU_{1}$. So it may seem that no further theory is required. However, $U_{1}$ and $U_{2}$ are fundamental solution matrices for the systems $\partial U_{i}=A_{i}U_{i}$, so the relation $\partial V=U_{2}^{-1}SU_{1}$ may involve integrals of complicated D-finite functions.
Our approach will be to first “reduce” $S$ as much as possible, using manipulations with rational functions, to prepare the system for easier solving. In return, we will obtain algebraic information on all (possibly iterated) integrals that may occur in the relation $\partial V=U_{2}^{-1}SU_{1}$.
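For completeness (this short derivation is our addition), the relation $\partial V=U_{2}^{-1}SU_{1}$ comes from reading $\partial U=AU$ blockwise:

\partial(U_{2}V)=S\,U_{1}+A_{2}\,(U_{2}V)\qquad\textrm{and}\qquad\partial(U_{2}V)=(\partial U_{2})\,V+U_{2}\,\partial V=A_{2}U_{2}V+U_{2}\,\partial V,

so that $U_{2}\,\partial V=SU_{1}$, i.e. $\partial V=U_{2}^{-1}SU_{1}$.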

We note, for the record, that factorized differential operators also correspond to block triangular differential systems.

1.4 Examples of situations involving reducible linear differential systems

Operators from statistical physics and combinatorics

In many models attached to statistical mechanics, quantities are expressed as multiple integrals depending on a parameter. They are generally holonomic in this parameter, meaning that they are D-finite, i.e., they are solutions of linear differential operators (for example, the so-called Ising operators). A description of this setting may be found in the books of Baxter Bax (82) and McCoy McC (10) (notably Chapters 10 and 12) or in the surveys MM (12); MAB+ (11) (and references therein). Similarly, in combinatorics, sequences that satisfy recurrence relations may be studied through their generating series, which are often D-finite.

Experimentally (see, for example, BHM+ (07); BBH+ (09, 10, 11); BHMW (14, 15) or HKMZ (16); Kou (13); ABKM (20, 18)), it turns out that many differential operators coming from these processes factor into a product of smaller order factors, and the corresponding companion systems are reducible.

Other cases of reducible systems are those admitting reducible monodromy, as in ST (16), or in the works of Kalmykov on Feynman integrals via Mellin-Barnes integrals Kal (06); BKK (10); KK (12, 17, 11); about differential equations for Feynman integrals, one may consult the works of Smirnov Smi (12); LSS (18).

Finally, we mention the paper ABRS (14) with other examples of integrals where the techniques presented below may offer alternative approaches for some of the computations.

Variational equations of nonlinear differential systems

Another natural source of reducible systems is the classical method of variational equations of nonlinear differential systems along a given particular solution $\phi$. One can form the linear differential equation describing perturbations along this solution $\phi$. The general principle is that obstructions to integrability of the nonlinear system can be read on this linear differential system. Ziglin Zig (82) linked non-integrability to non-commutation in the monodromy group, with concrete versions given e.g. in CR (91) and Sal (14). This was generalized in the theory of Morales-Ruiz and Ramis and extended with Simó MRR (01); MRRS (07) for Hamiltonian systems; they prove that a Hamiltonian system is completely integrable only if all its variational equations have a virtually abelian differential Galois group. Extensions to non-Hamiltonian differential equations can be found in AZ (10); CW (18). Applications to specific problems have led to effective criteria, for example: the three body problem Tsy (01); BW (03), $n$-body problems MRS (09); Com (12), Hill systems (motions of the moon) MRSS (05); AMW (12) or a swinging Atwood machine PPR+ (10).

These variational equations can be written in the form of reducible linear differential systems of large dimension. The simplification techniques outlined below are hence particularly relevant to make computations on such systems practical (see AMDW (16)).

We note that the Morales-Ramis theory has had a spectacular recent development, initiated in MR (20), where this variational approach is applied to path integrals, thus establishing a beautiful and unexpected bridge with the previous subsection.

Reducible linear differential systems appear naturally in another type of perturbative approach: in $\epsilon$-expansions of solutions for perturbed systems $\frac{d}{dx}Y=B(x,\epsilon)Y$, like the ones that appear in works of J. Blümlein, C. Raab, C. Schneider, J. Henn and others, for example.

2 Reduced forms of linear differential systems

2.1 Ingredient #1: Differential Galois-Lie Algebra

In what follows, $(\mathbf{k},\partial)$ is a differential field of characteristic 0. We give a brief outline of the Galois theory of linear differential equations; see PS (03); CH (11); Sin (09) for expositions with proofs. We consider a linear differential system

[A]:\quad\partial Y=AY,\quad A\in\mathrm{Mat}(n,\mathbf{k}). (2)

Let ${\cal C}$ be the field of constants of the differential field $\mathbf{k}$, that is ${\cal C}:=\{a\in\mathbf{k}\ |\ \partial(a)=0\}$. We will assume that ${\cal C}$ is algebraically closed, i.e., every nonconstant polynomial with coefficients in ${\cal C}$ has a root in ${\cal C}$.

A Picard-Vessiot extension is a field $K=\mathbf{k}(U)$, where $U$ is a fundamental solution matrix of $[A]$ (an invertible $n\times n$ matrix $U$ such that $U^{\prime}=AU$), such that the field of constants of $K$ is still ${\cal C}$. This can be constructed algebraically; alternatively, when $\mathbf{k}$ is a field of meromorphic functions, one may consider a local matrix of power series solutions at a regular point and this gives a Picard-Vessiot extension. A Picard-Vessiot extension is unique up to differential field isomorphisms.

The differential Galois group $G:=\mathrm{Aut}_{\partial}(K/\mathbf{k})$ is the set of automorphisms of $K$ which leave the base field $\mathbf{k}$ fixed and commute with the derivation. Let $\sigma\in G$. By construction, $\sigma(U)$ is also a fundamental solution matrix in $K$ and we find that there exists a matrix $[\sigma]\in\mathrm{GL}(n,{\cal C})$ such that $\sigma(U)=U\cdot[\sigma]$. The map $\sigma\mapsto[\sigma]$ provides a faithful representation of $G$ as a subgroup of $\mathrm{GL}(n,{\cal C})$, actually a linear algebraic group. If we change the fundamental solution, we obtain a conjugate representation.

We recall that, given an invertible matrix $P\in\mathrm{GL}(n,\mathbf{k})$, the linear change of variables $Y=PZ$ produces a new differential system, denoted $\partial Z=P[A]Z$, where

P[A]:=P^{-1}AP-P^{-1}\partial P.

We note that such a gauge transformation $P[A]$, with $P\in\mathrm{GL}(n,\mathbf{k})$, does not change the Galois group.

Given a Picard-Vessiot extension $K=\mathbf{k}(U)$, the polynomial relations among all entries of $U$ (over $\mathbf{k}$) form an ideal $I$. The Galois group $G$ can then be viewed as the set of matrices which stabilize this ideal $I$ of relations. Thus the computation of $G$ is strongly related to the understanding of the algebraic relations among the solutions.

The Galois-Lie algebra of $[A]$ is the Lie algebra $\mathfrak{g}$ of the differential Galois group $G$. It is defined as the tangent space of $G$ at the identity $\mathrm{Id}$. The dimension of $\mathfrak{g}$ measures the transcendence degree of $K$ over $\mathbf{k}$, that is

\dim_{{\cal C}}\mathfrak{g}={\rm trdeg}(K/\mathbf{k}).

One way of computing the Lie algebra (PS (03)) is the following: $\mathfrak{g}$ is the set of matrices $N$ such that $\mathrm{Id}+\epsilon N\in G({\cal C}[\epsilon])$ with $\epsilon^{2}=0$. In other terms, $\mathrm{Id}+\epsilon N$ satisfies the defining equations of the group modulo $\epsilon^{2}$. For example, $\mathrm{SL}(n,{\cal C})$ (the set of $M$ such that $\det(M)=1$) gives the Lie algebra $\mathfrak{sl}(n,{\cal C})$ of matrices $N$ such that $\mathrm{Tr}(N)=0$. The symplectic group $\mathrm{Sp}(2n,{\cal C})$ is the set of $M$ such that $M^{T}\cdot J\cdot M=J$; its Lie algebra $\mathfrak{sp}(2n,{\cal C})$ is found to be the set of $N$ such that $N^{T}J+JN=0$, with $J=\left(\begin{array}{cc}0&\mathrm{Id}\\ -\mathrm{Id}&0\end{array}\right)$. The additive group $\mathbb{G}_{a}:=\left\{\left(\begin{array}{cc}1&a\\ 0&1\end{array}\right),a\in{\cal C}\right\}$ admits the Lie algebra $\mathfrak{g}_{a}=\mathrm{Span}_{{\cal C}}\left\{\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right)\right\}$; the multiplicative group $\mathbb{G}_{m}=\left\{\left(\begin{array}{cc}a&0\\ 0&\frac{1}{a}\end{array}\right),a\in{\cal C}^{*}\right\}$ admits the Lie algebra $\mathfrak{g}_{m}=\mathrm{Span}_{{\cal C}}\left\{\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right)\right\}$.
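These tangent-space computations are mechanical; as an illustration (ours, not in the original text), here is a sympy sketch recovering $N^{T}J+JN=0$ from the defining equation of the symplectic group modulo $\epsilon^{2}$, for $2\times 2$ matrices.

    import sympy as sp

    eps = sp.symbols('epsilon')
    n = 2
    J = sp.Matrix([[0, 1], [-1, 0]])
    N = sp.MatrixSymbol('N', n, n).as_explicit()

    M = sp.eye(n) + eps*N
    # defining equation of Sp: M^T J M = J; keep the terms linear in epsilon
    defect = (M.T*J*M - J).applyfunc(sp.expand)
    first_order = defect.applyfunc(lambda e: e.coeff(eps, 1))
    print((first_order - (N.T*J + J*N)).applyfunc(sp.expand))   # expect the zero matrix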

The reduction technique presented in this chapter aims at computing the Galois-Lie algebra $\mathfrak{g}$ directly, before computing $G$ itself. Although the theory is not obvious, the resulting calculations are reasonably simple.

2.2 Ingredient #2: The Lie algebra $\textrm{Lie}(A)$ associated to $A$

The Lie algebra $\textrm{Lie}(A)$ associated to the matrix $A$ is defined as follows. Let $a_{1},\ldots,a_{s}\in\mathbf{k}$ be a basis of the ${\cal C}$-vector space generated by the coefficients of $A$. We can then decompose $A$ as $A=\sum_{i=1}^{s}a_{i}M_{i}$ where the $M_{i}$ are constant matrices. Now, we consider the smallest Lie algebra containing all the $M_{i}$: this is the vector space generated by the $M_{i}$ and all their iterated Lie brackets ($[M,N]:=MN-NM$); then we take its algebraic envelope.

Definition 1

The Lie algebra $\textrm{Lie}(A)$ associated to the matrix $A$ is the smallest algebraic Lie algebra containing all the $M_{i}$. (A Lie algebra is called algebraic when it is the Lie algebra of a linear algebraic group. When the $M_{i}$ are given, this can be computed, see FdG (07), Section 3, or DW (19), Section 6.)

The decomposition $A=\sum_{i=1}^{s}a_{i}M_{i}$ is not unique but the vector space generated by the $M_{i}$ is unique. Thus, the associated Lie algebra $\textrm{Lie}(A)$ does not depend on the chosen decomposition.
This Lie algebra $\textrm{Lie}(A)$ appears in works of Magnus Mag (54) or Feynman, who use the Baker-Campbell-Hausdorff formula to write solutions of $\partial Y=AY$ as (infinite) products of exponentials constructed with Lie brackets. Wei and Norman give in WN (63, 64) a finite formula to solve the system when $\textrm{Lie}(A)$ is solvable. This formula is well known in physics and control theory but less so among mathematicians. The terminology of the Lie algebra associated to $A$ appears in WN (63, 64) (there, it is defined as the Lie algebra generated by all values $A(z_{0})$ for $z_{0}$ ranging over the constants minus the singularities, and the algebraic envelope is missing); for this reason, some authors, including ourselves, call the decomposition $A=\sum_{i=1}^{s}a_{i}M_{i}$ a Wei-Norman decomposition of $A$. In the sequel, we will study a 4-dimensional example and an 8-dimensional example where our technique has some relations to the Wei-Norman approach; namely, we will change $A$ to obtain an associated Lie algebra $\textrm{Lie}(A)$ of minimal dimension so that solving formulas become optimal in some sense.

Example 1 (A 4-dimensional example)

Let

A:=\left(\begin{array}{cccc}1&\frac{1}{x}&\frac{1}{x-1}&0\\ 0&1&0&\frac{1}{x-1}\\ 0&0&1&-\frac{1}{x}\\ 0&0&0&1\end{array}\right).

Note that this system is upper triangular, contrary to (1). This illustrates that our method can be applied equally to upper and lower triangular systems. We obtain a Wei-Norman decomposition $A=M_{1}+\frac{1}{x}M_{2}+\frac{1}{x-1}M_{3}$, where

M_{1}=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right),\; M_{2}:=\left(\begin{array}{cccc}0&1&0&0\\ 0&0&0&0\\ 0&0&0&-1\\ 0&0&0&0\end{array}\right),\; M_{3}:=\left(\begin{array}{cccc}0&0&1&0\\ 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\end{array}\right).

We have

M_{4}:=[M_{2},M_{3}]=\left(\begin{array}{cccc}0&0&0&2\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right).

All the other brackets in $\langle M_{1},M_{2},M_{3},M_{4}\rangle$ are zero and we find that $\textrm{Lie}(A)=\langle M_{1},M_{2},M_{3},M_{4}\rangle$. It is solvable of depth 2: the first derived algebra (the set of all matrices in $\textrm{Lie}(A)$ which can be written as a Lie bracket) is $\langle M_{4}\rangle$ and the second derived algebra is $\{0\}$. We will continue below with this example. $\diamond$
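The bracket computations of Example 1 can be automated; the following sympy sketch (our addition; it only computes the Lie-bracket closure and omits the algebraic-envelope step, which is not needed in this example) recovers $\dim\textrm{Lie}(A)=4$.

    import sympy as sp

    M1 = sp.eye(4)
    M2 = sp.Matrix([[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, -1], [0, 0, 0, 0]])
    M3 = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0]])

    def bracket(A, B):
        return A*B - B*A

    def lie_closure(gens):
        # add brackets until the spanned vector space stabilises
        basis = list(gens)
        while True:
            rank = sp.Matrix([list(M) for M in basis]).rank()
            candidate = basis + [bracket(A, B) for A in basis for B in basis]
            if sp.Matrix([list(M) for M in candidate]).rank() == rank:
                return rank
            basis = candidate

    print(lie_closure([M1, M2, M3]))   # expect 4: M1, M2, M3 and M4 = [M2, M3]
    print(bracket(M2, M3))             # the extra generator M4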

2.3 Linear differential systems in reduced form

We now turn to the link between $\textrm{Lie}(A)$ and differential Galois theory, based on two important results of Kolchin and Kovacic. Proofs can be found in PS (03), Proposition 1.31 and Corollary 1.32; see also BSMR (10), Theorem 5.8, and AMCW (13), § 5.3 after Remark 31. Let $A\in\mathrm{Mat}(n,\mathbf{k})$; let $G$ be the differential Galois group of $[A]$ and $\mathfrak{g}$ its Lie algebra, the Galois-Lie algebra of the system $[A]$. The first result is

\mathfrak{g}\,\subset\,\textrm{Lie}(A).

So the Lie algebra associated to $A$, an object which is very easy to compute, provides an “upper bound” on $\mathfrak{g}$. When we perform a gauge transformation $P\in\mathrm{GL}(n,\mathbf{k})$ to obtain a new system $P[A]$, $G$ and $\mathfrak{g}$ are invariant while $\textrm{Lie}(P[A])$ may vary. This shows that $\mathfrak{g}$ is a lower bound on all the $\textrm{Lie}(P[A])$, over all gauge transformations $P\in\mathrm{GL}(n,\mathbf{k})$. The second result of Kolchin and Kovacic is that this lower bound is reached. By definition, $\textrm{Lie}(A)$ is the Lie algebra of a connected algebraic group $H$. Then there exists a gauge transformation $P\in H(\overline{\mathbf{k}})$ (the notation $H(\overline{\mathbf{k}})$ denotes matrices whose entries are in $\overline{\mathbf{k}}$ and satisfy all the equations defining the algebraic group $H$) such that $\mathfrak{g}=\textrm{Lie}(P[A])$. Furthermore, if $G$ is connected and under the very mild additional condition that $\mathbf{k}$ is a $\mathcal{C}^{1}$-field (a field $\mathbf{k}$ is a $\mathcal{C}^{1}$-field when every non-constant homogeneous polynomial $P$ over $\mathbf{k}$ has a non-trivial zero provided that the number of its variables is more than its degree; for example, ${\cal C}(x)$ is a $\mathcal{C}^{1}$-field and any algebraic extension of a $\mathcal{C}^{1}$-field is a $\mathcal{C}^{1}$-field, by Tsen's theorem), then we may choose $P\in H(\mathbf{k})$ (no algebraic extension is needed).

Definition 2

A system $\partial Y=AY$ is in reduced form when the Lie algebra $\textrm{Lie}(A)$ associated to $A$ is equal to the Lie algebra of the differential Galois group of $\partial Y=AY$.

The results of Kolchin and Kovacic show that a reduced form always exists. We provide, in the sequel, constructive methods to obtain one when the system is in block triangular form. They will be illustrated on our 4-dimensional example in the next section.

Example 2 (4-dimensional example, continued)

We will show, in the next section, that $[A]$ is in reduced form. This system is easily integrated step by step and we find a fundamental solution matrix

U=e^{x}\left(\begin{array}{cccc}1&\ln(x)&\ln(x-1)&2\,\textrm{dilog}(x)+\ln(x-1)\ln(x)\\ 0&1&0&\ln(x-1)\\ 0&0&1&-\ln(x)\\ 0&0&0&1\end{array}\right)

where dilog is defined by $\textrm{dilog}^{\prime}(x)=\frac{\ln(x)}{1-x}$. We may take $\textrm{dilog}(x)=\textrm{Li}_{2}(1-x)$. When computing $U$, the terms requiring one integration ($\ln(x)$ and $\ln(x-1)$) correspond to the terms in $M_{2}$ and $M_{3}$ in the Wei-Norman decomposition of $A$. The term dilog comes from the existence of $M_{4}$, the Lie bracket of $M_{2}$ and $M_{3}$ in $\textrm{Lie}(A)$.

The corresponding Galois group is a semi-direct product of a 1-dimensional torus (giving rise to the $\exp(x)$ in the solution) and of a vector group generated by $M_{2}$, $M_{3}$ (giving rise to the terms in $\ln$) and $M_{4}$ (giving rise to the dilog). Its Lie algebra is $\textrm{Lie}(A)$. $\diamond$

The ideas behind this notion of reduced form have been used for inverse problems in differential Galois theory: given an algebraic group $G$, construct a differential system $[A]$ having $G$ as its differential Galois group. It is also a technique known in differential geometry. Its use for direct problems in differential Galois theory is more recent. A remark in PS (03) suggests that this would be a good idea. In the context of Lie-Vessiot systems, Blázquez and Morales exploit this idea in BSMR (10). It is developed in AMDW (16); AMW (12, 11) in order to study variational equations in the context of integrability of Hamiltonian systems and the Morales-Ramis-Simó theory (and later in CW (18) to study algebraic properties of Painlevé equations). For irreducible systems (or systems in block diagonal form), a criterion for reduced forms is established in AMCW (13) together with a decision procedure. Another, much more efficient approach is given in BCWDV (16); BCDVW (20), together with generalizations of the criterion of AMCW (13).
The approach described below makes it possible, given the above results, to compute a reduced form of a block triangular linear differential system (the last case remaining after all the above contributions). It is based upon these works, notably CW (18), and is constructed in DW (19).

3 How to Compute a Reduced Form of a Reducible System

Assume now that $\mathbf{k}={\cal C}(x)$, where ${\cal C}$ is algebraically closed of characteristic zero and the derivation $\partial$ acts trivially on ${\cal C}$. In particular, $\mathbf{k}$ is a $\mathcal{C}^{1}$-field. We consider a block triangular system over the differential field ${\cal C}(x)$ of the same form as (1), that is

[A]:\,\partial Y=AY,\quad\textrm{ with }A=\left(\begin{array}{c|c}A_{1}&0\\ \hline S&A_{2}\end{array}\right)\in\mathrm{Mat}(n,\mathbf{k}).

Let $A_{\mathrm{diag}}:=\left(\begin{array}{c|c}A_{1}&0\\ \hline 0&A_{2}\end{array}\right)$. In what follows, we will assume that the block diagonal part $A_{\mathrm{diag}}$ is in reduced form and we will show how to find a gauge transformation $P$ such that $P[A]$ is in reduced form. By DW (19), Lemma 2.7, the differential Galois group is connected and the reduction matrix we are looking for has coefficients in $\mathbf{k}$. Instead of reproving all the theory (which, in this case, can mostly be found in DW (19)), we will work out in detail a simple example where most of the required algorithmic elements appear. This may help convince the reader of how the method works (the details in DW (19) may be technical, at least on a first reading).

3.1 Shape of the gauge transformation

Let $\mathfrak{h}_{sub}$ be the set of off-diagonal constant matrices of the form $\left(\begin{array}{c|c}0&0\\ \hline S&0\end{array}\right)$ (with the same block sizes as in relation (1)). We will extend the scalars to $\mathfrak{h}_{sub}(\mathbf{k}):=\mathfrak{h}_{sub}\otimes_{{\cal C}}\mathbf{k}$, the off-diagonal matrices with coefficients in $\mathbf{k}$. Our first step is to show that we may find a reduction matrix of a very particular shape.

Lemma 1 (AMDW (16), Lemma 3.4)

There exists a gauge transformation $P\in\big\{\mathrm{Id}+B,\ B\in\mathfrak{h}_{sub}(\mathbf{k})\big\}$ such that $\partial Y=P[A]Y$ is in reduced form.

The following is based on an observation from AMW (12, 11). Let $P=\mathrm{Id}+B$, $B\in\mathfrak{h}_{sub}(\mathbf{k})$. Suppose that, for all $Q\in\left\{\mathrm{Id}+B,\ B\in\mathfrak{h}_{sub}(\mathbf{k})\right\}$, we have $\textrm{Lie}(P[A])\subseteq\textrm{Lie}(Q[P[A]])$; then $P[A]$ is in reduced form. In other terms, no rational gauge transformation can turn it into a system with a smaller associated Lie algebra. In this case, $\textrm{Lie}(P[A])$ will be the Lie algebra of the differential Galois group and this will give us transcendence relations and algebraic relations on the solutions; this will be seen on the main example of this section.

More generally, as we can see in DW (19), Section 5, if our method can reduce a system with two diagonal blocks then we can iterate this method to obtain a reduced form of a block-triangular system with an arbitrary number of blocks on the diagonal. Let us illustrate this iteration on a system with three diagonal blocks of the form

\left(\begin{array}{c|c|c}A_{1}&0&0\\ \hline S_{2,1}&A_{2}&0\\ \hline S_{3,1}&S_{3,2}&A_{3}\end{array}\right)

where the block diagonal part is in reduced form (see BCDVW (20); BCWDV (16) for this). We will first reduce the south-east part (which is of the same form as (1)) into a form

\left(\begin{array}{c|c}A_{2}&0\\ \hline S&A_{3}\end{array}\right).

Let P1P_{1} be the reduction matrix. By DW (19), Lemma 5.1, the following system is automatically in reduced form

A_{d}:=\left(\begin{array}{c|c|c}A_{1}&0&0\\ \hline 0&A_{2}&0\\ \hline 0&S&A_{3}\end{array}\right).

Now we perform the gauge transformation $\left(\begin{array}{c|c}\mathrm{Id}&0\\ \hline 0&P_{1}\end{array}\right)$ to obtain a system of the form

\left(\begin{array}{c|c|c}A_{1}&0&0\\ \hline \tilde{S}_{2,1}&A_{2}&0\\ \hline \tilde{S}_{3,1}&S&A_{3}\end{array}\right)

(the $S_{i,j}$ may have changed after the first reduction step). We now see that this system is of the same form as (1), with $A_{d}$ as the block diagonal part. So a second reduction of a two-block triangular system allows us to reduce the initial three-block triangular system.

This iteration is well illustrated by our 4-dimensional example below.

Example 3 (4-dimensional example, continued)

Let

A:=\left(\begin{array}{cccc}1&0&\frac{1}{x}&0\\ \frac{1}{x-1}&1&0&-\frac{1}{x}\\ 0&0&1&0\\ 0&0&\frac{1}{x-1}&1\end{array}\right).

A simple application of a factorization algorithm shows that it is reducible. Indeed, letting

P:=\left(\begin{array}{cccc}0&0&-1&0\\ -1&0&0&0\\ 0&0&0&1\\ 0&1&0&0\end{array}\right),

we have

P[A]=\left(\begin{array}{cc|c|c}1&\frac{1}{x}&\frac{1}{x-1}&0\\ 0&1&0&\frac{1}{x-1}\\ \hline 0&0&1&-\frac{1}{x}\\ \hline 0&0&0&1\end{array}\right).

This example is, of course, particularly simple. We use it to show how to apply the iteration procedure and Lemma 1 to simplify the system or prove that it cannot be simplified further.

Since we consider an upper triangular system, we start with the “north-west” corner. We let

B:=\begin{pmatrix}1&\frac{1}{x}\\ 0&1\end{pmatrix}.

The diagonal part is in reduced form (solutions are $e^{x}$ and cannot be simplified using rational functions). The associated Lie algebra $\textrm{Lie}(B)$ has dimension 2. Reduction would mean bringing it down to dimension 1. By Lemma 1, a reduction matrix would have the form

P:=\begin{pmatrix}1&f(x)\\ 0&1\end{pmatrix}.

The north-east coefficient of $P[B]$ is $\frac{1}{x}-f^{\prime}(x)$. This coefficient can never be constant (the equation $f^{\prime}(x)=\frac{1}{x}$ has no rational solution: the simple pole $\frac{1}{x}$ cannot be matched by the derivative of a rational function). For any choice of $f$, $\textrm{Lie}(P[B])$ will have dimension 2. It follows that $[B]$ is in reduced form. So we iterate.
We now pick a bigger matrix $B$:

B:=\left(\begin{array}{cc|c}1&\frac{1}{x}&\frac{1}{x-1}\\ 0&1&0\\ \hline 0&0&1\end{array}\right)\quad\textrm{ and }\quad B_{\textrm{diag}}:=\left(\begin{array}{cc|c}1&\frac{1}{x}&0\\ 0&1&0\\ \hline 0&0&1\end{array}\right).

By DW (19), Lemma 5.1, and by the above calculation, we find that $[B_{\textrm{diag}}]$ is in reduced form. Lemma 1 thus shows that a reduction matrix would have the simple form

P:=\begin{pmatrix}1&0&f(x)\\ 0&1&g(x)\\ 0&0&1\end{pmatrix}.

Now $\textrm{Lie}(B_{\textrm{diag}})$ has dimension 2 and $\textrm{Lie}(B)$ has dimension 3. A reduction matrix should therefore map $B$ to a matrix with an associated Lie algebra of dimension 2.
We have $P[B]_{2,3}=-g^{\prime}(x)$, so there should exist constants $g_{1},g_{2}$ and a rational function $g(x)$ such that $-g^{\prime}(x)=g_{1}+g_{2}\frac{1}{x}$. A necessary condition is $g_{2}=0$, and then $g(x)=g_{0}-g_{1}x$. Similarly, there should exist constants $f_{1},f_{2}$ such that $P[B]_{1,3}=f_{1}+f_{2}\frac{1}{x}$. We now plug our condition on $g$ into this relation and find that there should be a rational function $f$ such that

f^{\prime}(x)=-f_{1}-g_{1}+(g_{0}-f_{2})\frac{1}{x}+\frac{1}{x-1}.

Now, because of the pole of order 1 at $x=1$, this equation can never have a rational solution (whatever the values of the unknown constants). It follows that, for any choice of $f$ and $g$, $\textrm{Lie}(P[B])$ will have dimension 3. So $[B]$ is in reduced form.
Note that our main ingredient here has been to look for a rational solution of an inhomogeneous linear differential equation whose right-hand side contains parameters. There exist algorithms to compute conditions on the parameters (in the right-hand side) so that such an equation has rational solutions, see subsection 3.2 below or Sin (91), and this will be the key to what follows.
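For the equation on $f$ above, the obstruction is a residue computation, which a computer algebra system can carry out with the parameters left symbolic. The following sympy sketch is ours; it only implements the elementary residue criterion for equations of the form $y^{\prime}=r(x)$ (a rational $r$ has a rational antiderivative iff all its residues vanish), not the general algorithm of Sin (91).

    import sympy as sp

    x = sp.symbols('x')
    f1, f2, g0, g1 = sp.symbols('f1 f2 g0 g1')   # the unknown constants

    rhs = -f1 - g1 + (g0 - f2)/x + 1/(x - 1)

    # residues at the two poles: both must vanish for a rational solution to exist
    res0 = sp.residue(rhs, x, 0)
    res1 = sp.residue(rhs, x, 1)
    print(res0, res1)                              # g0 - f2 and 1
    print(sp.solve([res0, res1], [f1, f2, g0, g1]))  # no solution: the residue at 1 is 1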
We continue iterating the reduction process. Now we will have

B:=\left(\begin{array}{cccc}1&\frac{1}{x}&\frac{1}{x-1}&0\\ 0&1&0&\frac{1}{x-1}\\ 0&0&1&-\frac{1}{x}\\ 0&0&0&1\end{array}\right),\quad B_{\textrm{diag}}:=\left(\begin{array}{cccc}1&\frac{1}{x}&\frac{1}{x-1}&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right)\quad\textrm{ and }\quad P:=\left(\begin{array}{cccc}1&0&0&f_{1}(x)\\ 0&1&0&f_{2}(x)\\ 0&0&1&f_{3}(x)\\ 0&0&0&1\end{array}\right).

Using again DW (19), Lemma 5.1, we see that $B_{\textrm{diag}}$ is in reduced form. Furthermore, $\textrm{Lie}(B_{\textrm{diag}})$ has dimension 3 and $\textrm{Lie}(B)$ has dimension 4. We compute $P[B]$.

P[B]=\left(\begin{array}{cccc}1&\frac{1}{x}&\frac{1}{x-1}&\frac{f_{2}(x)}{x}+\frac{f_{3}(x)}{x-1}-f_{1}^{\prime}(x)\\ 0&1&0&\frac{1}{x-1}-f_{2}^{\prime}(x)\\ 0&0&1&-\frac{1}{x}-f_{3}^{\prime}(x)\\ 0&0&0&1\end{array}\right).

The relation $P[B]_{3,4}=c_{1,3}+c_{2,3}\frac{1}{x}+c_{3,3}\frac{1}{x-1}$ gives the conditions $c_{3,3}=0$, $c_{2,3}=-1$ and $f_{3}(x)=-c_{1,3}x+c_{0,3}$. The same study on $P[B]_{2,4}$ gives us $f_{2}(x)=-c_{1,2}x+c_{0,2}$. Without finishing with the last coefficient, we see that $\textrm{Lie}(P[B])$ contains matrices of the following forms (respectively because of the terms in $\frac{1}{x}$ and $\frac{1}{x-1}$):

M_{2}:=\left(\begin{array}{cccc}0&1&0&\star\\ 0&0&0&0\\ 0&0&0&-1\\ 0&0&0&0\end{array}\right)\quad\textrm{ and }\quad M_{3}:=\left(\begin{array}{cccc}0&0&1&\star\\ 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\end{array}\right)

whose Lie bracket is

M_{4}:=[M_{2},M_{3}]=\left(\begin{array}{cccc}0&0&0&2\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right).

So we see that, whatever our future choices may be, $\textrm{Lie}(P[B])$ will contain $M_{4}$ and hence have dimension 4. This shows that our system cannot be reduced, so it is in reduced form. Furthermore, this suggests that our reduction conditions might have been stronger: requiring $P[B]_{3,4}=c_{1,3}+c_{2,3}\frac{1}{x}+c_{3,3}\frac{1}{x-1}$ was not enough. We could have imposed $P[B]_{3,4}=0$ and $P[B]_{2,4}=0$. Both these relations are easily seen to be impossible to fulfill with rational functions, so our system is again seen to be in reduced form. $\diamond$

To summarize what this example suggests: we need to “cancel” terms in the purely triangular part; this reduces to finding rational solutions of linear differential equations with parametrized right-hand sides. And the order of the computations matters: here, one needs to study the relations on $f_{2}$ and $f_{3}$ before studying the relations on $f_{1}$. We will show, in the sequel, how to systematize these ideas, using an isotypical decomposition and an adapted flag structure, and how to make them algorithmic so that a computer algebra system may perform the calculations.

In this example, we had seen that a fundamental solution matrix could be written using $e^{x}$, $\ln(x)$, $\ln(x-1)$ and $\textrm{dilog}(x)$. As $[B]$ is in reduced form, $\textrm{Lie}(B)$ is the Lie algebra of the Galois group and it has dimension 4. This shows that these four functions are transcendental and algebraically independent. So our calculation above (long but not hard) gave us a simple proof that $\textrm{dilog}(x)$ is algebraically independent of $e^{x}$, $\ln(x)$, $\ln(x-1)$.

3.2 The adjoint action of the diagonal

We recall our notations so far. We have $n_{i}\times n_{i}$ matrices $A_{i}$ with coefficients in $\mathbf{k}$ and

A=\left(\begin{array}{c|c}A_{1}&0\\ \hline S&A_{2}\end{array}\right)\in\mathrm{Mat}(n,\mathbf{k}),\quad A_{\mathrm{diag}}:=\left(\begin{array}{c|c}A_{1}&0\\ \hline 0&A_{2}\end{array}\right).

If we take two off-diagonal matrices $B_{1}$ and $B_{2}$ in $\mathfrak{h}_{sub}$, we have $B_{1}B_{2}=0$. This allows two simple calculations. First, let $P:=\mathrm{Id}+\sum_{i}f_{i}B_{i}$, with $f_{i}\in\mathbf{k}$, $B_{i}\in\mathfrak{h}_{sub}$. Then

P[A]=A+\sum_{i}f_{i}\,[A_{\mathrm{diag}},B_{i}]-\sum_{i}\partial(f_{i})\,B_{i}. (3)

Furthermore, $[A_{\mathrm{diag}},B_{i}]\in\mathfrak{h}_{sub}(\mathbf{k})$. These two calculations show that reduction will be governed by the adjoint action $\Psi:X\mapsto[A_{\mathrm{diag}},X]$ of the block diagonal part $A_{\mathrm{diag}}$ on $\mathfrak{h}_{sub}(\mathbf{k})$. This adjoint action $\Psi$ is a linear map. Its matrix, in the canonical basis of $\mathfrak{h}_{sub}$, is

\Psi=A_{2}\otimes\mathrm{Id}_{n_{1}}-\mathrm{Id}_{n_{2}}\otimes A_{1}^{T}.

When $\partial Y=A_{\mathrm{diag}}Y$ has an abelian Lie algebra, we may easily compute a Jordan normal form of $\Psi:X\mapsto[A_{\mathrm{diag}},X]$; furthermore, the eigenvalues of $\Psi$ belong to $\mathbf{k}$. This is the idea behind AMDW (16). In our case, we will need a more subtle structure, an isotypical decomposition of $\mathfrak{h}_{sub}$ into $\Psi$-invariant subspaces.
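The Kronecker-product formula for $\Psi$ can be checked symbolically; here is a minimal sympy sketch (our addition), using the row-stacking vec convention (the rows of a matrix are stacked into a column vector, as in Section 3.2 below).

    import sympy as sp

    def kron(A, B):
        # Kronecker product of two explicit matrices
        return sp.Matrix(A.rows*B.rows, A.cols*B.cols,
                         lambda i, j: A[i // B.rows, j // B.cols]*B[i % B.rows, j % B.cols])

    def vec(M):
        # stack the rows of M into a column vector
        return sp.Matrix(list(M))

    n1, n2 = 2, 2
    A1 = sp.MatrixSymbol('A1', n1, n1).as_explicit()
    A2 = sp.MatrixSymbol('A2', n2, n2).as_explicit()
    B = sp.MatrixSymbol('B', n2, n1).as_explicit()   # the off-diagonal block

    Psi = kron(A2, sp.eye(n1)) - kron(sp.eye(n2), A1.T)
    lhs = vec(A2*B - B*A1)                            # adjoint action on the block
    print((lhs - Psi*vec(B)).applyfunc(sp.expand))    # expect the zero vector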

Example 4 (An 8-dimensional example)

We consider a matrix $A$ given by

A:=\left(\begin{array}{cccc|cccc}1&0&\frac{1}{x}&0&0&0&0&0\\ \frac{1}{x-1}&1&0&-\frac{1}{x}&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&\frac{1}{x-1}&1&0&0&0&0\\ \hline \star&\star&\star&\star&1&0&\frac{1}{x}&0\\ \star&\star&\star&\star&\frac{1}{x-1}&1&0&-\frac{1}{x}\\ \star&\star&\star&\star&0&0&1&0\\ \star&\star&\star&\star&0&0&\frac{1}{x-1}&1\end{array}\right).

The block diagonal part is given by two copies of our 4-dimensional example $A_{1}$ (here, $A_{2}=A_{1}$), and we have shown that it was in reduced form. The off-diagonal part is given by

\left(\begin{array}{cccc}\frac{-3x+4}{4x^{2}}&\frac{x-4}{4x^{2}}&-\frac{1}{2(x-1)}+\frac{2x-2}{x^{2}}&-\frac{1}{x}\\ \frac{1}{2(x-1)}+\frac{-2x+5}{x^{2}}&\frac{x-4}{4x^{2}}&\frac{2}{x-1}+\frac{4}{x^{2}}&\frac{1}{2(x-1)}+\frac{2x-2}{x^{2}}\\ \frac{-1}{4(x-1)}&0&\frac{3x-4}{4x^{2}}&\frac{x+4}{4x^{2}}\\ -\frac{1}{2(x-1)}&\frac{1}{4(x-1)}&\frac{1}{2(x-1)}+\frac{2x-7}{x^{2}}&\frac{-x+4}{4x^{2}}\end{array}\right).

As $A_{2}=A_{1}$, the matrix of the adjoint action of the diagonal on $\mathfrak{h}_{sub}$ is a (sparse) $16\times 16$ matrix given by $\Psi=A_{1}\otimes\mathrm{Id}_{4}-\mathrm{Id}_{4}\otimes A_{1}^{T}$. $\diamond$

Isotypical decomposition

Recall that $\mathfrak{h}_{sub}$ is the ${\cal C}$-vector space of off-diagonal matrices. We now show how the adjoint action $\Psi$ of the diagonal governs the reduction strategy on $\mathfrak{h}_{sub}$.

Definition 3

A vector space $W\subset\mathfrak{h}_{sub}$ will be called a $\Psi$-space if $\Psi(W)\subset W\otimes_{{\cal C}}\mathbf{k}$.

The importance of these $\Psi$-spaces is stated in the following lemma.

Lemma 2 (DW (19), Lemma 2.11)

Let $A:=\left(\begin{array}{c|c}A_{1}&0\\ \hline S&A_{2}\end{array}\right)$ and assume that $\partial Y=AY$ is in reduced form. Then $\textrm{Lie}(A)\cap\mathfrak{h}_{sub}$ is a $\Psi$-space.

So our reduction strategy will be to try to project onto the smallest possible $\Psi$-space using rational gauge transformations. In DW (19), we provide references to algorithms to decompose and factor into $\Psi$-spaces. This is obtained using an isotypical decomposition (eigenring methods) and a flag structure.

Lemma 3 (Krull-Schmidt)

The ${\cal C}$-vector space $\mathfrak{h}_{sub}$ admits a unique isotypical decomposition

\mathfrak{h}_{sub}=\displaystyle\bigoplus_{i=1}^{\kappa}W_{i}

where

  • each $W_{i}$ is a $\Psi$-space;

  • $W_{i}\simeq\nu_{i}V_{i}$, a direct sum of $\nu_{i}$ $\Psi$-spaces that are all isomorphic to an indecomposable $\Psi$-space $V_{i}$ which admits a flag decomposition

    V_{i}=V_{i}^{[\mu]}\supsetneq V_{i}^{[\mu-1]}\supsetneq\cdots\supsetneq V_{i}^{[1]}\supsetneq V_{i}^{[0]}=\{0\}

    and $V_{i}^{[j]}/V_{i}^{[j-1]}$ is a sum of isomorphic irreducible $\Psi$-spaces for $1\leq j\leq\mu$;

  • for $i\neq j$, the $\Psi$-spaces $V_{i}\subset W_{i}$ and $V_{j}\subset W_{j}$ are not isomorphic.

Once this decomposition and flag structure are computed, we perform, at each stage, a projection onto a minimal $\Psi$-subspace in $V_{i}^{[j]}$. For some vectors $b_{i}\in\mathbf{k}^{N}$ and a matrix $E_{i,j}$ with coefficients in $\mathbf{k}$ (obtained by linear algebra), this reduces to computing all tuples $(F,c_{1},\ldots,c_{s})$, with $F\in\mathbf{k}^{N}$ and the $c_{i}$ constants, such that

F^{\prime}=E_{i,j}\cdot F+\sum_{i}c_{i}b_{i}.

The resulting system $P[A]$ will be “minimal”: it will be in reduced form. The proof of this result is technical and can be found in DW (19). We will illustrate the process on our main example.

Example 5 (8-dimensional example, continued)

In this example, $\mathfrak{h}_{sub}$ decomposes as a direct sum $\mathfrak{h}_{sub}=\mathfrak{h}_{1}\oplus\mathfrak{h}_{5}\oplus\mathfrak{h}_{10}$ of three indecomposable $\Psi$-spaces.
We first study the adjoint action $\Psi=[A_{\mathrm{diag}},\bullet]$ of $A_{\mathrm{diag}}$ on $\mathfrak{h}_{5}$. We find (see DW (20)) an adapted basis given by off-diagonal matrices $N_{2},\dots,N_{6}$ with south-west blocks

\left[\begin{array}{cccc}0&0&0&0\\ 2&0&0&0\\ 0&0&0&0\\ 0&0&-2&0\end{array}\right],\;\left[\begin{array}{cccc}0&0&-2&0\\ 0&0&0&-2\\ 0&0&0&0\\ 0&0&0&0\end{array}\right],\;\left[\begin{array}{cccc}1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&1\end{array}\right],\;\left[\begin{array}{cccc}0&0&0&0\\ 0&0&0&0\\ 1&0&0&0\\ 0&1&0&0\end{array}\right],\;\left[\begin{array}{cccc}0&-1&0&0\\ 0&0&0&0\\ 0&0&0&1\\ 0&0&0&0\end{array}\right].

The matrix of the adjoint action $\Psi$ on this basis of $\mathfrak{h}_{5}$ is

\Psi_{5}:=\left(\begin{array}{cc|c|cc}0&0&\frac{1}{x-1}&0&0\\ 0&0&\frac{1}{x}&0&0\\ \hline 0&0&0&\frac{1}{x}&\frac{1}{x-1}\\ \hline 0&0&0&0&0\\ 0&0&0&0&0\end{array}\right).

The flag structure on $\mathfrak{h}_{5}$ suggests the following reduction path: try to remove elements in $\langle N_{5},N_{6}\rangle$ if possible; then in $\langle N_{4}\rangle$; then in $\langle N_{2},N_{3}\rangle$. How to do this will be made clear in the next section; the flag structure guides the order in which the computations should be handled.

We turn to $\mathfrak{h}_{10}$. We find (see DW (20)) a basis adapted to the flag structure given by off-diagonal matrices $N_{7},\dots,N_{16}$ whose south-west blocks are:

\left[\begin{array}{cccc}0&0&0&0\\ 0&0&2&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right],\;\left[\begin{array}{cccc}0&0&0&0\\ -1&0&0&0\\ 0&0&0&0\\ 0&0&-1&0\end{array}\right],\;\left[\begin{array}{cccc}0&0&1&0\\ 0&0&0&-1\\ 0&0&0&0\\ 0&0&0&0\end{array}\right],
\left[\begin{array}{cccc}0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 1&0&0&0\end{array}\right],\;\left[\begin{array}{cccc}\frac{1}{2}&0&0&0\\ 0&-\frac{1}{2}&0&0\\ 0&0&\frac{1}{2}&0\\ 0&0&0&-\frac{1}{2}\end{array}\right],\;\left[\begin{array}{cccc}\frac{1}{2}&0&0&0\\ 0&\frac{1}{2}&0&0\\ 0&0&-\frac{1}{2}&0\\ 0&0&0&-\frac{1}{2}\end{array}\right],\;\left[\begin{array}{cccc}0&0&0&-1\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right],
\left[\begin{array}{cccc}0&0&0&0\\ 0&0&0&0\\ \frac{1}{2}&0&0&0\\ 0&-\frac{1}{2}&0&0\end{array}\right],\;\left[\begin{array}{cccc}0&-\frac{1}{2}&0&0\\ 0&0&0&0\\ 0&0&0&-\frac{1}{2}\\ 0&0&0&0\end{array}\right],\;\left[\begin{array}{cccc}0&0&0&0\\ 0&0&0&0\\ 0&-\frac{1}{2}&0&0\\ 0&0&0&0\end{array}\right].

The matrix of the adjoint action $\Psi$ on this adapted basis $N_{7},\dots,N_{16}$ is:

\Psi_{10}:=\left(\begin{array}{c|cc|cccc|cc|c}0&\frac{1}{x}&\frac{1}{x-1}&0&0&0&0&0&0&0\\ \hline 0&0&0&\frac{1}{x}&-\frac{1}{x-1}&0&0&0&0&0\\ 0&0&0&0&0&-\frac{1}{x}&\frac{1}{x-1}&0&0&0\\ \hline 0&0&0&0&0&0&0&\frac{1}{x-1}&0&0\\ 0&0&0&0&0&0&0&0&\frac{1}{x-1}&0\\ 0&0&0&0&0&0&0&\frac{1}{x}&0&0\\ 0&0&0&0&0&0&0&0&\frac{1}{x}&0\\ \hline 0&0&0&0&0&0&0&0&0&\frac{1}{x-1}\\ 0&0&0&0&0&0&0&0&0&\frac{1}{x}\\ \hline 0&0&0&0&0&0&0&0&0&0\end{array}\right).

Intermezzo: Reduction and Rational Solutions

Before we continue, let us make a quick excursion into our main algorithmic toolbox. We start with a simple case. We look for a condition on $P:=\mathrm{Id}+\left(\begin{array}{c|c}0&0\\ \hline \beta&0\end{array}\right)$ to have

A=\left(\begin{array}{c|c}A_{1}&0\\ \hline S&A_{2}\end{array}\right)\longrightarrow P[A]=\left(\begin{array}{c|c}A_{1}&0\\ \hline 0&A_{2}\end{array}\right).

A simple calculation shows that $\beta$ should be a rational solution of the matrix linear differential system $\beta^{\prime}=A_{2}\beta-\beta A_{1}+S$. If we let vec denote the operator transforming a matrix into a vector by stacking its rows, we find (see DW (19)) that $\texttt{vec}(\beta)^{\prime}=\Psi\cdot\texttt{vec}(\beta)+\texttt{vec}(S)$, where $\Psi$ is again the adjoint action of the diagonal defined above. So reduction is governed by computing rational solutions of linear differential systems. When $\mathbf{k}={\cal C}(x)$, a computer algebra algorithm for this task has been given by Barkatou in Bar (99); see BCEBW (12) for a generalization to linear partial differential systems and a Maple implementation.

Now, our general tool (also found in the above references) addresses an apparently more complicated problem. Given a matrix $\Psi$ and vectors $b_{1},\ldots,b_{s}$, we look for tuples $(F,c_{1},\ldots,c_{s})$, with $F\in\mathbf{k}^{N}$ and the $c_{i}$ constant, such that $F^{\prime}=\Psi\cdot F+\sum_{i}c_{i}b_{i}$. Such tuples form a computable vector space and the algorithms in Bar (99); BCEBW (12) provide it when $\mathbf{k}={\cal C}(x)$. Results and algorithms for general fields $\mathbf{k}$ can be found in Sin (91).

We now pick concrete coefficients to show how to perform the reduction on our 8-dimensional example. A Maple worksheet with this example and the chosen coefficients may be found at DW (20) (the reader may also find a pdf version at http://www.unilim.fr/pages_perso/jacques-arthur.weil/DreyfusWeilReductionExamples.pdf).

Reduction on \mathfrak{h}_{5} (8-dimensional example)

To remove all of \mathfrak{h}_{5}, it would be enough to have a rational solution to the system

Y^{\prime}=\Psi_{5}.Y+b\;\textrm{ with }\;b=\left(\begin{array}{c}\frac{3}{x^{2}}-\frac{1}{x}\\ \frac{1}{x^{2}}-\frac{1}{x}\\ \frac{1}{x^{2}}-\frac{1}{2\,x}\\ 0\\ \frac{1}{x^{2}}\end{array}\right)

and \Psi_{5} is given in Example 5. This gives us the reduction equations

\begin{array}{lll}(W^{[3]}):&\left\{\begin{array}{ccl}f^{\prime}_{3,1}\left(x\right)&=&\frac{1}{x^{2}}\\ f^{\prime}_{3,2}\left(x\right)&=&0\end{array}\right.\\ (W^{[2]}):&\left\{\begin{array}{ccl}f^{\prime}_{2,1}\left(x\right)&=&{\frac{1}{x-1}}f_{3,1}\left(x\right)+{\frac{1}{x}}f_{3,2}\left(x\right)+\frac{1}{x^{2}}-\frac{1}{2\,x}\end{array}\right.\\ (W^{[1]}):&\left\{\begin{array}{ccl}f^{\prime}_{1,1}\left(x\right)&=&{\frac{1}{x}}f_{2,1}\left(x\right)+\frac{1}{x^{2}}-\frac{1}{x}\\ f^{\prime}_{1,2}\left(x\right)&=&{\frac{1}{x-1}}f_{2,1}\left(x\right)+\frac{3}{x^{2}}-\frac{1}{x}\end{array}\right.\end{array}

The first two equations correspond to the highest level W^{[3]} of the flag. To remove an element from W^{[3]}, there should be a rational solution to the equation y^{\prime}=c_{1}.\frac{1}{x^{2}}+c_{2}.0. The {\cal C}-vector space of pairs (c_{1},c_{2})\in{\cal C}^{2} such that there exists f\in\mathbf{k} with f^{\prime}=c_{1}.\frac{1}{x^{2}}+c_{2}.0 is found to be 2-dimensional; for \underline{c}=(1,0), we have f_{3,1}:=-\frac{1}{x}+c_{3,1}; for \underline{c}=(0,1), we have f_{3,2}:=c_{3,2}, where the c_{3,i} are arbitrary constants (their importance will soon be visible). Our gauge transformation is P^{[3]}=\textrm{Id}+f_{3,1}N_{6}+f_{3,2}N_{5} and A^{[2]}:=P^{[3]}[A] does not contain any terms from W^{[3]}.

Now W^{[2]} is 1-dimensional. The equation for the reduction on W_{2}^{[2]} is now

\begin{array}{lll}y^{\prime}&=&{\frac{1}{x-1}}f_{3,1}\left(x\right)+{\frac{1}{x}}f_{3,2}\left(x\right)+\frac{1}{x^{2}}-\frac{1}{2\,x}\\ &=&\frac{1}{2}{\frac{2\,c_{3,2}+1}{x}}+{\frac{c_{3,1}-1}{x-1}}+\frac{1}{x^{2}}.\end{array}

We obtain necessary and sufficient conditions on the parameters c_{3,i} for a rational solution to exist, namely c_{3,1}=1 and c_{3,2}=-\frac{1}{2}; the general rational solution is then f_{2,1}:=-\frac{1}{x}+c_{2,1}. Our new gauge transformation is P^{[2]}=\mathrm{Id}+(-\frac{1}{x}+c_{2,1})N_{4} and A^{[1]}:=P^{[2]}[A^{[2]}] does not contain any term from W^{[2]} any more.
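This step is easy to check by machine: the right-hand side above has a rational antiderivative exactly when its residues at x=0 and x=1 vanish. Here is a minimal Python/SymPy sketch (the variable names are ours; this is independent of the Maple worksheet DW (20)):

import sympy as sp

x, c31, c32 = sp.symbols('x c31 c32')    # c31, c32 stand for c_{3,1}, c_{3,2}

# Right-hand side of the reduction equation on W^[2], after substituting
# f_{3,1} = -1/x + c_{3,1} and f_{3,2} = c_{3,2}.
rhs = (-1/x + c31)/(x - 1) + c32/x + 1/x**2 - 1/(2*x)

# A rational solution exists iff all residues of the right-hand side vanish.
print(sp.residue(rhs, x, 0))    # expected: c32 + 1/2  ->  c_{3,2} = -1/2
print(sp.residue(rhs, x, 1))    # expected: c31 - 1    ->  c_{3,1} = 1

# With these values the right-hand side collapses to 1/x^2, whose
# antiderivative is rational:
print(sp.integrate(sp.cancel(rhs.subs({c31: 1, c32: -sp.Rational(1, 2)})), x))  # -1/x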

Finally, we look for all (c_{1},c_{2})\in{\cal C}^{2} such that c_{1}f_{1,1}+c_{2}f_{1,2} is rational: we look for non-zero pairs (c_{1},c_{2})\in{\cal C}^{2} such that there exists a rational solution f\in\mathbf{k} of

\begin{array}{lll}y^{\prime}&=&c_{1}\,\left({\frac{1}{x}}\left(-\frac{1}{x}+c_{2,1}\right)+\frac{1}{x^{2}}-\frac{1}{x}\right)+c_{2}\,\left({\frac{1}{x-1}}\left(-\frac{1}{x}+c_{2,1}\right)+\frac{3}{x^{2}}-\frac{1}{x}\right)\\ &=&{\frac{3\,c_{2}}{x^{2}}}+{\frac{c_{1}\left(c_{2,1}-1\right)}{x}}+{\frac{c_{2}\left(c_{2,1}-1\right)}{x-1}}.\end{array}

This integral is rational if and only if both residues are zero. Since the trivial pair c_{1}=c_{2}=0 is not admissible, we see that a necessary and sufficient condition is c_{2,1}=1; the set of admissible pairs (c_{1},c_{2}) is then of dimension 2. For \underline{c}=(1,0), we have f_{1,1}:=c_{1,1}; for \underline{c}=(0,1), we have f_{1,2}:=-\frac{3}{x}+c_{1,2}, where the c_{1,i} are constants and can be chosen arbitrarily. So our last gauge transformation matrix will be P^{[1]}=\textrm{Id}-\frac{3}{x}N_{2} and the reduction matrix on \mathfrak{h}_{5} is

P_{5}:=P^{[3]}P^{[2]}P^{[1]}=\textrm{Id}-\frac{3}{x}N_{2}+\left(1-\frac{1}{x}\right)N_{4}-\frac{1}{2}\,N_{5}+\left(1-\frac{1}{x}\right)N_{6}.

The resulting matrix \tilde{A}:=P_{5}[A] contains no terms from \mathfrak{h}_{5}.

Reduction on \mathfrak{h}_{10} (8-dimensional example)

The matrix \Psi_{10} is given in Example 5 above. The reduction equations are now

\begin{array}{cl}(W^{[5]}):&\left\{f_{5,1}^{\prime}\left(x\right)=0\right.\\ (W^{[4]}):&\left\{\begin{array}{ccl}f_{4,1}^{\prime}(x)&=&{\frac{1}{x}}f_{5,1}(x)-\frac{1}{2x}\\ f_{4,2}^{\prime}(x)&=&{\frac{1}{x-1}}f_{5,1}(x)-\frac{1}{2(x-1)}\end{array}\right.\\ (W^{[3]}):&\left\{\begin{array}{ccl}f_{3,1}^{\prime}(x)&=&{\frac{1}{x}}f_{4,1}(x)+\frac{1}{x}\\ f_{3,2}^{\prime}(x)&=&{\frac{1}{x}}f_{4,2}(x)-\frac{1}{2\,x}\\ f_{3,3}^{\prime}(x)&=&{\frac{1}{x-1}}f_{4,1}(x)\\ f_{3,4}^{\prime}(x)&=&{\frac{1}{x-1}}f_{4,2}(x)-\frac{1}{2\,(x-1)}\end{array}\right.\\ (W^{[2]}):&\left\{\begin{array}{ccl}f_{2,1}^{\prime}(x)&=&{\frac{1}{x-1}}f_{3,1}(x)-{\frac{1}{x}}f_{3,2}(x)-\frac{1}{2\,(x-1)}\\ f_{2,2}^{\prime}(x)&=&-{\frac{1}{x-1}}f_{3,3}(x)+{\frac{1}{x}}f_{3,4}(x)+\frac{1}{x^{2}}-\frac{1}{2\,(x-1)}\end{array}\right.\\ (W^{[1]}):&\left\{f_{1,1}^{\prime}(x)={\frac{1}{x-1}}f_{2,1}(x)+{\frac{1}{x}}f_{2,2}(x)+\frac{2}{x^{2}}+\frac{1}{(x-1)}.\right.\end{array}

We will let the reader solve this iteratively following the method from the previous section. This will give the following successive reductions

[Figures: the successive shapes of the reduced system matrix after each step]

where green denotes the parts that have been successfully removed. However, we reach an obstruction when trying to remove N_{11} (once the equation for f_{3,1} has a rational solution, the equation for f_{3,3}(x) cannot have a rational solution).
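The obstruction can be made explicit with a few lines of computer algebra; the following Python/SymPy sketch (our own variable names, with c51 and c41 standing for the constants c_{5,1} and c_{4,1}) follows the reduction equations above level by level:

import sympy as sp

x, c51, c41 = sp.symbols('x c51 c41')

# W^[5]: f_{5,1}' = 0, so f_{5,1} = c51 is a constant.
# W^[4]: f_{4,1}' = f_{5,1}/x - 1/(2x) = (c51 - 1/2)/x has a rational
# solution iff its residue at 0 vanishes, i.e. c51 = 1/2; then f_{4,1} = c41
# is a constant, and f_{4,2}' = (c51 - 1/2)/(x-1) = 0, so f_{4,2} is constant too.
print(sp.residue((c51 - sp.Rational(1, 2))/x, x, 0))   # expected: c51 - 1/2

# W^[3]: with f_{4,1} = c41, the equations for f_{3,1} and f_{3,3} read
print(sp.residue((c41 + 1)/x, x, 0))     # expected: c41 + 1  -> needs c41 = -1
print(sp.residue(c41/(x - 1), x, 1))     # expected: c41      -> needs c41 = 0
# The two conditions are incompatible: only one of the two corresponding
# directions (N_11 or N_13 in the adapted basis) can be removed, as stated in the text.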
The reduction matrix is

P_{10}:=\textrm{Id}+\left(c_{1,1}-\frac{1}{x}\right)N_{7}-{\frac{1}{x}}N_{8}-N_{9}-\frac{1}{2}\,N_{11}+\frac{1}{2}\,N_{13}+\frac{1}{2}\,N_{14}-N_{15}+\frac{1}{2}\,N_{16}

and we obtain the reduced form A_{\textrm{red}}:=P_{10}[P_{5}[A]]:

A_{\textrm{red}}:=\left(\begin{array}{cccc|cccc}1&0&\frac{1}{x}&0&0&0&0&0\\ \frac{1}{x-1}&1&0&-\frac{1}{x}&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&\frac{1}{x-1}&1&0&0&0&0\\ \hline -\frac{1}{2\,(x-1)}&0&0&0&1&0&\frac{1}{x}&0\\ 0&\frac{1}{2\,(x-1)}&0&0&\frac{1}{x-1}&1&0&-\frac{1}{x}\\ 0&0&-\frac{1}{2\,(x-1)}&0&0&0&1&0\\ 0&0&0&\frac{1}{2\,(x-1)}&0&0&\frac{1}{x-1}&1\end{array}\right).

The associated Lie algebra is spanned by

\left(\begin{array}{cccc|cccc}1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ \hline 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&1\end{array}\right),\;\left(\begin{array}{cccc|cccc}0&0&0&0&0&0&0&0\\ 1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ \hline -\frac{1}{2}&0&0&0&0&0&0&0\\ 0&\frac{1}{2}&0&0&1&0&0&0\\ 0&0&-\frac{1}{2}&0&0&0&0&0\\ 0&0&0&\frac{1}{2}&0&0&1&0\end{array}\right),\;\left(\begin{array}{cccc|cccc}0&0&1&0&0&0&0&0\\ 0&0&0&-1&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ \hline 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&-1\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\end{array}\right),\;\left(\begin{array}{cccc|cccc}0&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ \hline 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\end{array}\right),\;\left(\begin{array}{cccc|cccc}0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ \hline 0&0&0&0&0&0&0&0\\ 0&0&{\color[rgb]{0,0,1}1}&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\end{array}\right).

This gives us the Lie algebra \mathfrak{g}=\textrm{Lie}(A_{\textrm{red}}) of the differential Galois group. Note that, during the reduction process, we found the two incompatible equations f_{3,1}^{\prime}(x)=\frac{c_{4,1}+1}{x} and f_{3,3}^{\prime}(x)=\frac{c_{4,1}}{x-1}, where c_{4,1} is a constant. There were two mutually exclusive paths: either remove N_{11} or remove N_{13}. We removed N_{13} here by setting c_{4,1}=-1; the choice of removing N_{11} (by setting c_{4,1}=0) gives a different reduced form whose associated Lie algebra is conjugate to the one we just found. We refer to DW (19) for the computations along that other path. We also remark that two of the matrices that could not be removed from \mathfrak{h}_{sub} are “absorbed” as lower triangular parts of matrices coming from A_{\mathrm{diag}}. The Lie algebra \mathfrak{g} is 5-dimensional, whereas the Lie algebra \textrm{Lie}(A) associated to the original matrix A had dimension 14. This shows that the Picard-Vessiot extension is obtained from the Picard-Vessiot extension K_{\textrm{diag}} for [A_{\mathrm{diag}}] by adding only one integral; the system has indeed been transformed into a form where solving is much simpler than before, and we also obtain proofs of transcendence properties for the remaining objects.
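The dimension count can be double-checked numerically. The sketch below (Python/NumPy; the helper names are ours) assumes the usual convention that \textrm{Lie}(A_{\textrm{red}}) is the Lie algebra generated by the constant matrices M_{1},M_{2},M_{3} in the decomposition A_{\textrm{red}}=M_{1}+\frac{1}{x-1}\,M_{2}+\frac{1}{x}\,M_{3}, where M_{1} is the identity and M_{2},M_{3} are read off from the entries of A_{\textrm{red}} above:

import numpy as np
from itertools import product

def E(i, j, n=8):
    # Elementary matrix e_{ij} (1-indexed).
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1.0
    return M

# Constant coefficient matrices of A_red = M1 + 1/(x-1)*M2 + 1/x*M3.
M1 = np.eye(8)
M2 = (E(2, 1) + E(4, 3) + E(6, 5) + E(8, 7)
      - 0.5 * E(5, 1) + 0.5 * E(6, 2) - 0.5 * E(7, 3) + 0.5 * E(8, 4))
M3 = E(1, 3) - E(2, 4) + E(5, 7) - E(6, 8)

def lie_dimension(gens):
    # Enlarge the family by all brackets until its linear span stops
    # growing, then return the dimension of that span.
    basis = list(gens)
    dim = lambda mats: np.linalg.matrix_rank(np.array([m.flatten() for m in mats]))
    while True:
        d = dim(basis)
        brackets = [A @ B - B @ A for A, B in product(basis, repeat=2)]
        if dim(basis + brackets) == d:
            return d
        basis = basis + brackets

print(lie_dimension([M1, M2, M3]))   # expected: 5

Brute-force closure under brackets is enough here because the matrices are only 8 by 8.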

References

  • ABKM (18) Y. Abdelaziz, S. Boukraa, C. Koutschan, and J.-M. Maillard, Diagonals of rational functions, pullbacked {}_{2}F_{1} hypergeometric functions and modular forms, J. Phys. A 51 (2018), no. 45, 455201, 30.
  • ABKM (20)  , Heun functions and diagonals of rational functions, J. Phys. A 53 (2020), no. 7, 075206, 24.
  • ABRS (14) J. Ablinger, J. Blümlein, C. G. Raab, and C. Schneider, Iterated binomial sums and their associated iterated integrals, J. Math. Phys. 55 (2014), no. 11, 112301, 57.
  • AMCW (13) A. Aparicio-Monforte, É. Compoint, and J.-A. Weil, A characterization of reduced forms of linear differential systems, Journal of Pure and Applied Algebra 217 (2013), no. 8, 1504–1516.
  • AMDW (16) A. Aparicio-Monforte, T. Dreyfus, and J.-A. Weil, Liouville integrability: an effective Morales–Ramis–Simó theorem, Journal of Symbolic Computation 74 (2016), 537 – 560.
  • AMW (11) A. Aparicio-Monforte and J.-A. Weil, A reduction method for higher order variational equations of Hamiltonian systems, Symmetries and Related Topics in Differential and Difference Equations, Contemporary Mathematics, vol. 549, Amer. Math. Soc., Providence, RI, September 2011, pp. 1–15.
  • AMW (12) A. Aparicio-Monforte and J.-A. Weil, A reduced form for linear differential systems and its application to integrability of Hamiltonian systems, Journal of Symbolic Computation 47 (2012), no. 2, 192 – 213.
  • AvH (97) S. A. Abramov and M. van Hoeij, A method for the integration of solutions of Ore equations, Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation (Kihei, HI), ACM, New York, 1997, pp. 172–175.
  • AvH (99)  , Integration of solutions of linear functional equations, Integral Transform. Spec. Funct. 8 (1999), no. 1-2, 3–12.
  • AZ (10) M. Ayoul and Nguyen Tien Zung, Galoisian obstructions to non-Hamiltonian integrability, C. R. Math. Acad. Sci. Paris 348 (2010), no. 23-24, 1323–1326.
  • Bar (99) M. A. Barkatou, On rational solutions of systems of linear differential equations, J. Symbolic Comput. 28 (1999), no. 4-5, 547–567.
  • Bax (82) R. J. Baxter, Exactly solved models in statistical mechanics, Academic Press Inc. [Harcourt Brace Jovanovich Publishers], London, 1982.
  • BBH+ (09) A. Bostan, S. Boukraa, S. Hassani, J.-M. Maillard, J.-A. Weil, and N. Zenine, Globally nilpotent differential operators and the square Ising model, J. Phys. A 42 (2009), no. 12, 125206, 50.
  • BBH+ (10) A. Bostan, S. Boukraa, S. Hassani, J.-M. Maillard, J.-A. Weil, N. Zenine, and N. Abarenkova, Renormalization, isogenies, and rational symmetries of differential equations, Adv. Math. Phys. (2010), 44p.
  • BBH+ (11) A. Bostan, S. Boukraa, S. Hassani, M. van Hoeij, J.-M. Maillard, J.-A. Weil, and N. Zenine, The Ising model: from elliptic curves to modular forms and Calabi-Yau equations, J. Phys. A: Math. Theor. 44 (2011), no. 4, 045204, 44.
  • BCDVW (20) M. Barkatou, T. Cluzeau, L. Di Vizio, and J.-A. Weil, Reduced forms of linear differential systems and the intrinsic Galois-Lie algebra of Katz, SIGMA Symmetry Integrability Geom. Methods Appl. 16 (2020), Paper No. 054, 13.
  • BCEBW (12) M. A. Barkatou, T. Cluzeau, C. El Bacha, and J.-A. Weil, Computing closed form solutions of integrable connections, Proceedings of the 36th international symposium on Symbolic and algebraic computation (New York, NY, USA), ISSAC ’12, ACM, 2012.
  • BCWDV (16) M. Barkatou, T. Cluzeau, J.-A. Weil, and L. Di Vizio, Computing the Lie algebra of the differential Galois group of a linear differential system, Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, 2016, pp. 63–70.
  • Ber (01) D. Bertrand, Unipotent radicals of differential Galois group and integrals of solutions of inhomogeneous equations, Math. Ann. 321 (2001), no. 3, 645–666.
  • BHM+ (07) S. Boukraa, S. Hassani, J.-M. Maillard, B. M. McCoy, J.-A. Weil, and N. Zenine, Fuchs versus Painlevé, J. Phys. A 40 (2007), no. 42, 12589–12605.
  • BHMW (14) S. Boukraa, S. Hassani, J.-M. Maillard, and J.-A. Weil, Differential algebra on lattice Green and Calabi-Yau operators, J. Phys. A: Math. Theor. 47 (2014), no. 9, 095203.
  • BHMW (15) S. Boukraa, S. Hassani, J.-M. Maillard, and J.-A. Weil, Canonical decomposition of irreducible linear differential operators with symplectic or orthogonal differential Galois groups, Journal of Physics A: Mathematical and Theoretical 48 (2015), no. 10, 105202.
  • BKK (10) V. V. Bytev, M. Yu. Kalmykov, and B. A. Kniehl, Differential reduction of generalized hypergeometric functions from Feynman diagrams: one-variable case, Nuclear Phys. B 836 (2010), no. 3, 129–170.
  • BS (99) P. H. Berman and M. F. Singer, Calculating the Galois group of L_{1}(L_{2}(y))=0, L_{1},L_{2} completely reducible operators, J. Pure Appl. Algebra 139 (1999), no. 1-3, 3–23, Effective methods in algebraic geometry (Saint-Malo, 1998).
  • BSMR (10) D. Blázquez-Sanz and J.-J. Morales-Ruiz, Differential Galois theory of algebraic Lie-Vessiot systems, Differential algebra, complex analysis and orthogonal polynomials, Contemp. Math., vol. 509, Amer. Math. Soc., Providence, RI, 2010, pp. 1–58.
  • BW (03) D. Boucher and J.-A. Weil, Application of J.-J. Morales and J.-P. Ramis’ theorem to test the non-complete integrability of the planar three-body problem, From combinatorics to dynamical systems, IRMA Lect. Math. Theor. Phys., vol. 3, de Gruyter, Berlin, 2003, pp. 163–177.
  • CH (11) T. Crespo and Z. Hajto, Algebraic groups and differential Galois theory, Graduate Studies in Mathematics, vol. 122, American Mathematical Society, Providence, RI, 2011.
  • Com (12) T. Combot, Non-integrability of the equal mass n-body problem with non-zero angular momentum, Celestial Mech. Dynam. Astronom. 114 (2012), no. 4, 319–340.
  • CR (91) R. C. Churchill and D. L. Rod, On the determination of Ziglin monodromy groups, SIAM J. Math. Anal. 22 (1991), no. 6, 1790–1802.
  • CW (18) G. Casale and J.-A. Weil, Galoisian methods for testing irreducibility of order two nonlinear differential equations, Pacific J. Math. 297 (2018), no. 2, 299–337.
  • DW (19) T. Dreyfus and J.-A. Weil, Computing the Lie algebra of the differential Galois group: the reducible case, ArXiv 1904.07925 (2019).
  • DW (20)  , Maple worksheet with the examples for this paper: http://www.unilim.fr/pages_perso/jacques-arthur.weil/DreyfusWeilReductionExamples.mw, 2020.
  • FdG (07) C. Fieker and W. A. de Graaf, Finding integral linear dependencies of algebraic numbers and algebraic Lie algebras, LMS J. Comput. Math. 10 (2007), 271–287.
  • HKMZ (16) S. Hassani, Ch. Koutschan, J.-M. Maillard, and N. Zenine, Lattice Green functions: the d-dimensional face-centered cubic lattice, d=8,9,10,11,12, J. Phys. A 49 (2016), no. 16, 164003, 30.
  • Kal (06) M. Yu. Kalmykov, Gauss hypergeometric function: reduction, \epsilon-expansion for integer/half-integer parameters and Feynman diagrams, J. High Energy Phys. (2006), no. 4, 056, 21.
  • KK (11) M. Yu. Kalmykov and B. A. Kniehl, Counting master integrals: integration by parts vs. differential reduction, Phys. Lett. B 702 (2011), no. 4, 268–271.
  • KK (12)  , Mellin-Barnes representations of Feynman diagrams, linear systems of differential equations, and polynomial solutions, Phys. Lett. B 714 (2012), no. 1, 103–109.
  • KK (17)  , Counting the number of master integrals for sunrise diagrams via the Mellin-Barnes representation, J. High Energy Phys. (2017), no. 7, 031, front matter+27.
  • Kou (13) C. Koutschan, Lattice Green functions of the higher-dimensional face-centered cubic lattices, J. Phys. A 46 (2013), no. 12, 125005, 14.
  • Kov (86) J. J. Kovacic, An algorithm for solving second order linear homogeneous differential equations, J. Symbolic Comput. 2 (1986), no. 1, 3–43.
  • LSS (18) R. N. Lee, A. V. Smirnov, and V. A. Smirnov, Solving differential equations for Feynman integrals by expansions near singular points, J. High Energy Phys. (2018), no. 3, 008, front matter+14.
  • MAB+ (11) B. M. McCoy, M. Assis, S. Boukraa, S. Hassani, J.-M. Maillard, W. P. Orrick, and N. Zenine, The saga of the Ising susceptibility, New trends in quantum integrable systems, World Sci. Publ., Hackensack, NJ, 2011, pp. 287–306.
  • Mag (54) W. Magnus, On the exponential solution of differential equations for a linear operator, Comm. Pure Appl. Math. 7 (1954), 649–673.
  • McC (10) B. M. McCoy, Advanced statistical mechanics, International Series of Monographs on Physics, vol. 146, Oxford University Press, Oxford, 2010.
  • MM (12) B. M. McCoy and J.-M. Maillard, The importance of the Ising model, Prog. Theor. Phys. 127 (2012), 791–817.
  • Mot (18) O. V. Motygin, On evaluation of the confluent Heun functions, 2018.
  • MR (20) J.-J. Morales-Ruiz, A differential Galois approach to path integrals, J. Math. Phys. 61 (2020), no. 5, 052103, 12.
  • MRR (01) J.-J. Morales-Ruiz and J.-P. Ramis, Galoisian obstructions to integrability of Hamiltonian systems. I, II, Methods Appl. Anal. 8 (2001), no. 1, 33–95, 97–111.
  • MRRS (07) J.-J. Morales-Ruiz, J.-P. Ramis, and C. Simo, Integrability of Hamiltonian systems and differential Galois groups of higher variational equations, Ann. Sci. École Norm. Sup. (4) 40 (2007), no. 6, 845–884.
  • MRS (09) J.-J. Morales-Ruiz and S. Simon, On the meromorphic non-integrability of some N-body problems, Discrete Contin. Dyn. Syst. 24 (2009), no. 4, 1225–1273.
  • MRSS (05) J.-J. Morales-Ruiz, C. Simó, and S. Simon, Algebraic proof of the non-integrability of Hill’s problem, Ergodic Theory Dynam. Systems 25 (2005), no. 4, 1237–1256.
  • MS (02) C. Mitschi and M. F. Singer, Solvable-by-finite groups as differential Galois groups, Ann. Fac. Sci. Toulouse Math. (6) 11 (2002), no. 3, 403–423.
  • PPR+ (10) O. Pujol, J.-P. Pérez, J.-P. Ramis, C. Simó, S. Simon, and J.-A. Weil, Swinging Atwood Machine: experimental and numerical results, and a theoretical study, Physica D: Nonlinear Phenomena 239 (2010), no. 12, 1067–1081.
  • PS (03) M. van der Put and M. F. Singer, Galois theory of linear differential equations, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 328, Springer-Verlag, Berlin, 2003.
  • Sal (14) V. Salnikov, Effective algorithm of analysis of integrability via the Ziglin’s method, Journal of Dynamical and Control Systems 20 (2014), no. 4, 465–474 (English).
  • Sin (91) M. F. Singer, Liouvillian solutions of linear differential equations with Liouvillian coefficients, J. Symbolic Comput. 11 (1991), no. 3, 251–273.
  • Sin (09)  , Introduction to the Galois theory of linear differential equations, Algebraic theory of differential equations, London Math. Soc. Lecture Note Ser., vol. 357, Cambridge Univ. Press, Cambridge, 2009, pp. 1–82.
  • Smi (12) V. A. Smirnov, Analytic tools for Feynman integrals, Springer Tracts in Modern Physics, vol. 250, Springer, Heidelberg, 2012.
  • ST (16) T. M. Sadykov and S. Tanabé, Maximally reducible monodromy of bivariate hypergeometric systems, Izv. Ross. Akad. Nauk Ser. Mat. 80 (2016), no. 1, 235–280.
  • Tsy (01) A. Tsygvintsev, The meromorphic non-integrability of the three-body problem, J. Reine Angew. Math. 537 (2001), 127–149.
  • UW (96) F. Ulmer and J.-A. Weil, Note on Kovacic’s algorithm, J. Symbolic Comput. 22 (1996), no. 2, 179–200.
  • vHW (05) M. van Hoeij and J.-A. Weil, Solving second order differential equations with Klein’s theorem, ISSAC 2005 (Beijing), ACM, New York, 2005.
  • WN (63) J. Wei and E. Norman, Lie algebraic solution of linear differential equations, J. Mathematical Phys. 4 (1963), 575–581.
  • WN (64) J. Wei and E. Norman, On global representations of the solutions of linear differential equations as a product of exponentials, Proc. Amer. Math. Soc. 15 (1964), 327–334.
  • Zig (82) S. L. Ziglin, Branching of solutions and nonexistence of first integrals in Hamiltonian mechanics. I, Functional Analysis and Its Applications 16 (1982), no. 3, 181–189.