Principles of operator algebras
Abstract.
This is an introduction to the algebras $A\subset B(H)$ that the linear operators $T:H\to H$ can form, once a complex Hilbert space $H$ is given. Motivated by quantum mechanics, we are mainly interested in the von Neumann algebras, which are stable under taking adjoints, $T\to T^*$, and are weakly closed. When the algebra has a trace $tr:A\to\mathbb{C}$, we can think of it as being of the form $A=L^\infty(X)$, with $X$ being a quantum measured space. Of particular interest is the free case, where the center of the algebra reduces to the scalars, $Z(A)=\mathbb{C}$. Following von Neumann, Connes, Jones, Voiculescu and others, we discuss the basic properties of such algebras $A$, and how to do algebra, geometry, analysis and probability on the underlying quantum spaces $X$.
Key words and phrases: Linear operator, Operator algebra.
2010 Mathematics Subject Classification: 46L10.
Preface
Quantum mechanics as we know it is the source of many puzzling questions. The simplest quantum mechanical system is the hydrogen atom, consisting of a negative charge, an electron, moving around a positive charge, a proton. This is reminiscent of electrodynamics, and accepting the fact that the electron is a bit of a slippery particle, whose position and speed are described by probability, rather than by exact formulae, the hydrogen atom can indeed be solved, by starting with electrodynamics, and making a long series of corrections, for the most part coming from experiments, but sometimes coming as well from intuition, with the idea in mind that beautiful mathematics should correspond to true physics. The solution, as we presently know it, is something quite complicated.
Mathematically, the commonly accepted belief is that the good framework for the study of quantum mechanics is an infinite dimensional complex Hilbert space $H$, whose vectors can be thought of as being states of the system, and with the linear operators $T:H\to H$ corresponding to the observables. This is however to be taken with care, because in order to do "true physics", things must be far sharper than that. Always remember indeed that the simplest object of quantum mechanics is the hydrogen atom, whose simplest states and observables are something quite complicated. Thus when talking about "states and observables", we have a whole continuum of possible considerations and theories, ranging from true physics to very abstract mathematics.
To make things worse, even the existence and exact relevance of the Hilbert space $H$ is subject to debate. This is something more philosophical, related to the 2-body hydrogen problem evoked above, which has twisted the minds of many scientists, starting with Einstein and others. Can we get someday to a better quantum mechanics, by adding more variables to those available inside $H$? No one really knows the answer here.
The present book is an introduction to the algebras $A\subset B(H)$ that the bounded linear operators $T:H\to H$ can form, once a Hilbert space $H$ is given. There has been an enormous amount of work on such algebras, starting with von Neumann in the 1930s, and we will insist here on the aspects which are beautiful, with the idea, or rather hope in mind, that beautiful mathematics should correspond to true physics.
So, what is beauty, in the operator algebra framework? In our opinion, the source of all possible beauty is an old result of von Neumann, related to the Spectral Theorem for normal operators, which states that any commutative von Neumann algebra must be of the form $L^\infty(X)$, with $X$ being a measured space.
This is something subtle and interesting, which suggests doing several things with the von Neumann algebras $A\subset B(H)$. Given such an algebra we can write the center as $Z(A)=L^\infty(X)$, we have then a decomposition of type $A=\int_X A_x\,dx$, and the problem is that of understanding the structure of the fibers $A_x$, called "factors". This is what von Neumann himself, and then Connes and others, did. Another idea, more speculative, following later work of Connes, and in parallel work of Voiculescu, is that of writing $A=L^\infty(X)$, with $X$ being an abstract "quantum measured space", and then trying to understand the geometry and probabilistic theory of $X$. Finally, yet another beautiful idea, due this time to Jones, is that of looking at the inclusions of von Neumann algebras $A_0\subset A_1$, instead of at the von Neumann algebras themselves, the point being that the "symmetries" of such an inclusion lead to interesting combinatorics.
All in all, there are many things that can be done with a von Neumann algebra $A\subset B(H)$, and explaining the basics, plus having a look at the above 4 directions of research, is already what a medium sized book can cover. And this book is written exactly with this idea in mind. We will talk about all the above, keeping things as simple as possible, and with everything being accessible with a minimal knowledge of undergraduate mathematics.
The book is organized in 4 parts, with Part I explaining the basics of operator theory, Part II explaining the basics of operator algebras, with a look into geometry and probability too, then Part III going into the structure of the von Neumann factors, and finally Part IV being an introduction to the subfactor theory of Jones.
This book contains, besides the basics of the operator algebra theory, some modern material as well, namely quantum group illustrations for pretty much everything, and I am grateful to Julien Bichon, Benoît Collins, Steve Curran and the others, for our joint work. Many thanks go as well to my cats. Their views and opinions on mathematics, and knowledge of advanced functional analysis, have always been of great help.
Cergy, August 2024
Teo Banica
Part I Linear operators
Does anybody here remember Vera Lynn
Remember how she said that
We would meet again
Some sunny day
Chapter 1 Linear algebra
1a. Linear maps
According to various findings in physics, starting with those of Heisenberg from the early 1920s, basic quantum mechanics involves linear operators $T:H\to H$ from a complex Hilbert space $H$ to itself. The space $H$ is typically infinite dimensional, a basic example being the Schrödinger space $H=L^2(\mathbb{R}^3)$ of the wave functions of the electron. In fact, in what regards the electron, this space is basically the correct one, with the only adjustment needed, due to Pauli and others, being that of tensoring with a copy of $\mathbb{C}^2$, in order to account for the electron spin.
But more on this later. Let us start this book more modestly, as follows:
Fact 1.1.
We are interested in quantum mechanics, taking place in infinite dimensions, but as a main source of inspiration we will have the finite dimensional case, $H=\mathbb{C}^N$, with scalar product
$$\langle x,y\rangle=\sum_i x_i\bar{y}_i$$
with the linearity at left being the standard mathematical convention. More specifically, we will be interested in the mathematics of the linear operators $T:\mathbb{C}^N\to\mathbb{C}^N$.
The point now, that you surely know about, is that the above operators correspond to the square matrices $A\in M_N(\mathbb{C})$. Thus, as a preliminary to what we want to do in this book, we need a good knowledge of linear algebra over $\mathbb{C}$.
You probably know linear algebra well, but it is always good to recall it, and this will be the purpose of the present chapter. Let us start with the very basics:
Theorem 1.2.
The linear maps $T:\mathbb{C}^N\to\mathbb{C}^N$ are in correspondence with the square matrices $A\in M_N(\mathbb{C})$, with the linear map associated to such a matrix being
$$Tx=Ax$$
and with the matrix associated to a linear map being $A_{ij}=\langle Te_j,e_i\rangle$.
Proof.
The first assertion is clear, because a linear map $T:\mathbb{C}^N\to\mathbb{C}^N$ must send a vector $x\in\mathbb{C}^N$ to a certain vector $Tx\in\mathbb{C}^N$, all of whose components are linear combinations of the components of $x$. Thus, we can write, for certain complex numbers $a_{ij}\in\mathbb{C}$:
$$(Tx)_i=\sum_j a_{ij}x_j$$
Now the parameters $a_{ij}$ can be regarded as being the entries of a square matrix $A\in M_N(\mathbb{C})$, and with the usual convention for matrix multiplication, we have:
$$Tx=Ax$$
Regarding the second assertion, with $A=(a_{ij})$ as above, if we denote by $e_1,\ldots,e_N$ the standard basis of $\mathbb{C}^N$, then we have the following formula:
$$\langle Te_j,e_i\rangle=a_{ij}$$
But this gives the second formula, $A_{ij}=\langle Te_j,e_i\rangle$, as desired. ∎
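As a quick illustration of this correspondence, here is a small worked example of ours, with the entries chosen just for convenience:
$$A=\begin{pmatrix}1&2\\3&4\end{pmatrix},\qquad T\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}x_1+2x_2\\3x_1+4x_2\end{pmatrix},\qquad \langle Te_1,e_2\rangle=3=A_{21}$$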
Our claim now is that, no matter what we want to do with $T$ or $A$, of advanced type, we will run at some point into their adjoints $T^*$ and $A^*$, constructed as follows:
Theorem 1.3.
The adjoint operator $T^*:\mathbb{C}^N\to\mathbb{C}^N$, which is given by
$$\langle Tx,y\rangle=\langle x,T^*y\rangle$$
corresponds to the adjoint matrix $A^*\in M_N(\mathbb{C})$, given by
$$(A^*)_{ij}=\bar{A}_{ji}$$
via the correspondence between linear maps and matrices constructed above.
Proof.
Given a linear map $T:\mathbb{C}^N\to\mathbb{C}^N$, fix $y\in\mathbb{C}^N$, and consider the linear form $x\to\langle Tx,y\rangle$. This form must be as follows, for a certain vector $T^*y\in\mathbb{C}^N$:
$$x\to\langle x,T^*y\rangle$$
Thus, we have constructed a map $y\to T^*y$ as in the statement, which is obviously linear, and that we can call $T^*$. Now by taking the vectors $x,y$ to be elements of the standard basis of $\mathbb{C}^N$, our defining formula for $T^*$ reads:
$$\langle Te_i,e_j\rangle=\langle e_i,T^*e_j\rangle$$
By reversing the scalar product on the right, this formula can be written as:
$$\overline{(A^*)_{ij}}=A_{ji}$$
But this means that the matrix of $T^*$ is given by $(A^*)_{ij}=\bar{A}_{ji}$, as desired. ∎
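As an illustration of the conjugate-transpose rule, here is a small numerical example of our own choosing:
$$A=\begin{pmatrix}1&i\\2&3-i\end{pmatrix}\quad\Longrightarrow\quad A^*=\begin{pmatrix}1&2\\-i&3+i\end{pmatrix}$$
and one can verify directly, on the standard basis vectors, that $\langle Ax,y\rangle=\langle x,A^*y\rangle$.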
Getting back to our claim, the adjoints are indeed ubiquitous, as shown by:
Theorem 1.4.
The following happen:
(1) $T$, with matrix $U\in M_N(\mathbb{C})$, is an isometry precisely when $U^*U=1$.
(2) $T$, with matrix $P\in M_N(\mathbb{C})$, is a projection precisely when $P^2=P=P^*$.
Proof.
Let us first recall that the lengths, or norms, of the vectors $x\in\mathbb{C}^N$ can be recovered from the knowledge of the scalar products, as follows:
$$\|x\|=\sqrt{\langle x,x\rangle}$$
Conversely, we can recover the scalar products out of norms, by using the following difficult to remember formula, called complex polarization identity:
$$4\langle x,y\rangle=\|x+y\|^2-\|x-y\|^2+i\|x+iy\|^2-i\|x-iy\|^2$$
The proof of this latter formula is indeed elementary, as follows:
$$\|x+y\|^2-\|x-y\|^2+i\|x+iy\|^2-i\|x-iy\|^2=2\langle x,y\rangle+2\langle y,x\rangle+2\langle x,y\rangle-2\langle y,x\rangle=4\langle x,y\rangle$$
Finally, we will use Theorem 1.3, and more specifically the following formula coming from there, valid for any matrix $A\in M_N(\mathbb{C})$ and any two vectors $x,y\in\mathbb{C}^N$:
$$\langle Ax,y\rangle=\langle x,A^*y\rangle$$
(1) Given a matrix $U\in M_N(\mathbb{C})$, we have indeed the following equivalences, with the first one coming from the polarization identity, and the other ones being clear:
$$\|Ux\|=\|x\|\ \forall x\iff\langle Ux,Uy\rangle=\langle x,y\rangle\ \forall x,y\iff\langle U^*Ux,y\rangle=\langle x,y\rangle\ \forall x,y\iff U^*U=1$$
(2) Given a matrix $P\in M_N(\mathbb{C})$, in order for $P$ to be an oblique projection, we must have $P^2=P$. Now observe that this projection is orthogonal when:
$$\langle Px-x,Py\rangle=0\ \forall x,y\iff\langle P^*Px-P^*x,y\rangle=0\ \forall x,y\iff P^*P=P^*$$
The point now is that by conjugating the last formula, we obtain $P^*P=P$. Thus we must have $P=P^*$, and this gives the result. ∎
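As a concrete illustration of (2), here is a toy example of ours: the matrix
$$P=\frac{1}{2}\begin{pmatrix}1&1\\1&1\end{pmatrix}$$
satisfies $P^2=P=P^*$, and is the orthogonal projection onto the line spanned by the unit vector $\frac{1}{\sqrt{2}}\binom{1}{1}$.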
Summarizing, the linear operators come in pairs $T,T^*$, and the associated matrices come as well in pairs $A,A^*$. This is something quite interesting, philosophically speaking, and we will keep this in mind, and come back to it later, on numerous occasions.
1b. Diagonalization
Let us discuss now the diagonalization question for the linear maps and matrices. Again, we will be quite brief here, and for more, we refer to any standard linear algebra book. By the way, there will be some complex analysis involved too, and here we refer to Rudin [rud]. Which book of Rudin will be in fact the one and only true prerequisite for reading the present book, but more on references and reading later.
The basic diagonalization theory, formulated in terms of matrices, is as follows:
Proposition 1.5.
A vector $v\in\mathbb{C}^N$ is called an eigenvector of $A$, with corresponding eigenvalue $\lambda$, when $A$ multiplies by $\lambda$ in the direction of $v$:
$$Av=\lambda v$$
In the case where $\mathbb{C}^N$ has a basis $v_1,\ldots,v_N$ formed by eigenvectors of $A$, with corresponding eigenvalues $\lambda_1,\ldots,\lambda_N$, in this new basis $A$ becomes diagonal, as follows:
$$A\sim\begin{pmatrix}\lambda_1&&\\&\ddots&\\&&\lambda_N\end{pmatrix}$$
Equivalently, if we denote by $D=diag(\lambda_1,\ldots,\lambda_N)$ the above diagonal matrix, and by $P=[v_1\ldots v_N]$ the square matrix formed by the eigenvectors of $A$, we have:
$$A=PDP^{-1}$$
In this case we say that the matrix $A$ is diagonalizable.
Proof.
This is something which is clear, the idea being as follows:
(1) The first assertion is clear, because the matrix which multiplies each basis element by a number is precisely the diagonal matrix .
(2) The second assertion follows from the first one, by changing the basis. We can prove this by a direct computation as well, because we have $Pe_i=v_i$, and so:
$$APe_i=Av_i=\lambda_iv_i=PDe_i$$
Thus, the matrices $AP$ and $PD$ coincide, as stated. ∎
Let us recall as well that the basic example of a non diagonalizable matrix, over the complex numbers as above, is the following matrix:
$$J=\begin{pmatrix}0&1\\0&0\end{pmatrix}$$
Indeed, we have $J\binom{x}{y}=\binom{y}{0}$, so the eigenvectors are the vectors of type $\binom{x}{0}$, all with eigenvalue $0$. Thus, we have not enough eigenvectors for constructing a basis of $\mathbb{C}^2$.
In general, in order to study the diagonalization problem, the idea is that the eigenvectors can be grouped into linear spaces, called eigenspaces, as follows:
Theorem 1.6.
Let $A\in M_N(\mathbb{C})$, and for any eigenvalue $\lambda\in\mathbb{C}$ define the corresponding eigenspace as being the vector space formed by the corresponding eigenvectors:
$$E_\lambda=\{v\in\mathbb{C}^N:Av=\lambda v\}$$
These eigenspaces $E_\lambda$ are then in a direct sum position, in the sense that given eigenvectors $v_1,\ldots,v_k$ corresponding to different eigenvalues $\lambda_1,\ldots,\lambda_k$, we have linear independence:
$$\sum_ic_iv_i=0\implies c_1=\ldots=c_k=0$$
In particular we have the following estimate, with the sum over all the eigenvalues,
$$\sum_\lambda\dim(E_\lambda)\leq N$$
and our matrix $A$ is diagonalizable precisely when we have equality.
Proof.
We prove the first assertion by recurrence on . Assume by contradiction that we have a formula as follows, with the scalars being not all zero:
By dividing by one of these scalars, we can assume that our formula is:
Now let us apply to this vector. On the left we obtain:
On the right we obtain something different, as follows:
We conclude from this that the following equality must hold:
On the other hand, we know by recurrence that the vectors must be linearly independent. Thus, the coefficients must be equal, at right and at left:
Now since at least one of the numbers must be nonzero, from we obtain , which is a contradiction. Thus our proof by recurrence of the first assertion is complete. As for the second assertion, this follows from the first one. ∎
In order to reach now to more advanced results, we can use the characteristic polynomial, which appears via the following fundamental result:
Theorem 1.7.
Given a matrix $A\in M_N(\mathbb{C})$, consider its characteristic polynomial:
$$P(x)=\det(A-x1_N)$$
The eigenvalues of $A$ are then the roots of $P$. Also, we have the inequality
$$\dim(E_\lambda)\leq m_\lambda$$
where $m_\lambda$ is the multiplicity of $\lambda$, as root of $P$.
Proof.
The first assertion follows from the following computation, using the fact that a linear map is bijective precisely when the determinant of the associated matrix is nonzero:
$$\exists v\neq0,\ Av=\lambda v\iff\exists v\neq0,\ (A-\lambda1_N)v=0\iff\det(A-\lambda1_N)=0$$
Regarding now the second assertion, given an eigenvalue $\lambda$ of our matrix $A$, consider the dimension $d=\dim(E_\lambda)$ of the corresponding eigenspace. By changing the basis of $\mathbb{C}^N$, as for the eigenspace $E_\lambda$ to be spanned by the first $d$ basis elements, our matrix becomes as follows, with $B$ being a certain smaller matrix:
$$A\sim\begin{pmatrix}\lambda1_d&*\\0&B\end{pmatrix}$$
We conclude that the characteristic polynomial of $A$ is of the following form:
$$P(x)=(\lambda-x)^dQ(x)$$
Thus the multiplicity $m_\lambda$ of our eigenvalue $\lambda$, as a root of $P$, satisfies $m_\lambda\geq d$, and this leads to the conclusion in the statement. ∎
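As a small worked example of the above notions, not part of the original argument, take
$$A=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad P(x)=\det(A-x1_2)=x^2-1=(x-1)(x+1)$$
so the eigenvalues are $\pm1$, each with multiplicity $1$, the eigenspaces being spanned by $\binom{1}{1}$ and $\binom{1}{-1}$, and the above inequality is an equality.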
Now recall that we are over $\mathbb{C}$, which is something that we have not used yet, in our last two statements. And the point here is that we have the following key result:
Theorem 1.8.
Any polynomial $P\in\mathbb{C}[X]$ decomposes as
$$P=c(X-a_1)\ldots(X-a_N)$$
with $c\in\mathbb{C}$ and with $a_1,\ldots,a_N\in\mathbb{C}$.
Proof.
It is enough to prove that has one root, and we do this by contradiction. Assume that has no roots, and pick a number where attains its minimum:
Since is a polynomial which vanishes at , this polynomial must be of the form + higher terms, with , and with being an integer. We obtain from this that, with small, we have the following estimate:
Now let us write , with small, and with . Our estimate becomes:
Now recall that we have assumed . We can therefore choose such that points in the opposite direction to that of , and we obtain in this way:
Now by choosing small enough, as for the error in the first estimate to be small, and overcame by the negative quantity , we obtain from this:
But this contradicts our definition of , as a point where attains its minimum. Thus has a root, and by recurrence it has roots, as stated. ∎
Now by putting everything together, we obtain the following result:
Theorem 1.9.
Given a matrix $A\in M_N(\mathbb{C})$, consider its characteristic polynomial
$$P(x)=\det(A-x1_N)$$
then factorize this polynomial, by computing the complex roots, with multiplicities,
$$P(x)=(-1)^N(x-\lambda_1)^{n_1}\cdots(x-\lambda_k)^{n_k}$$
and finally compute the corresponding eigenspaces, for each eigenvalue found:
$$E_i=\{v\in\mathbb{C}^N:Av=\lambda_iv\}$$
The dimensions of these eigenspaces satisfy then the following inequalities,
$$\dim(E_i)\leq n_i$$
and $A$ is diagonalizable precisely when we have equality for any $i$.
Proof.
This follows by combining Theorem 1.6, Theorem 1.7 and Theorem 1.8. Indeed, the statement is well formulated, thanks to Theorem 1.8. By summing the inequalities $\dim(E_i)\leq n_i$ from Theorem 1.7, we obtain an inequality as follows:
$$\sum_i\dim(E_i)\leq\sum_in_i=N$$
On the other hand, we know from Theorem 1.6 that our matrix is diagonalizable precisely when we have global equality. Thus, we are led to the conclusion in the statement. ∎
This was for the main result of linear algebra. There are countless applications of this, and generally speaking, advanced linear algebra consists in building on Theorem 1.9.
In practice, diagonalizing a matrix remains something quite complicated. Let us record a useful algorithmic version of the above result, as follows:
Theorem 1.10.
The square matrices $A\in M_N(\mathbb{C})$ can be diagonalized as follows:
(1) Compute the characteristic polynomial.
(2) Factorize the characteristic polynomial.
(3) Compute the eigenvectors, for each eigenvalue found.
(4) If the number of linearly independent eigenvectors found is smaller than $N$, then $A$ is not diagonalizable.
(5) Otherwise, $A$ is diagonalizable, $A=PDP^{-1}$.
Proof.
This is an informal reformulation of Theorem 1.9, with (4) referring to the total number of linearly independent eigenvectors found in (3), and with $A=PDP^{-1}$ in (5) being the usual diagonalization formula, with $P,D$ being as before. ∎
As an illustration for all this, which is a must-know computation, we have:
Proposition 1.11.
The rotation of angle $t\in\mathbb{R}$ in the plane diagonalizes as:
$$R_t=\begin{pmatrix}\cos t&-\sin t\\\sin t&\cos t\end{pmatrix}=\frac{1}{2}\begin{pmatrix}1&1\\i&-i\end{pmatrix}\begin{pmatrix}e^{-it}&0\\0&e^{it}\end{pmatrix}\begin{pmatrix}1&-i\\1&i\end{pmatrix}$$
Over the reals this is impossible, unless $t\in\{0,\pi\}$, where the rotation is diagonal.
Proof.
Observe first that, as indicated, unlike in the case $t\in\{0,\pi\}$, where our rotation is $\pm1$, our rotation is a "true" rotation, having no eigenvectors in the real plane $\mathbb{R}^2$. Fortunately the complex numbers come to the rescue, via the following computation:
$$R_t\binom{1}{-i}=\binom{\cos t+i\sin t}{\sin t-i\cos t}=e^{it}\binom{1}{-i}$$
We have as well a second complex eigenvector, coming from:
$$R_t\binom{1}{i}=\binom{\cos t-i\sin t}{\sin t+i\cos t}=e^{-it}\binom{1}{i}$$
Thus, we are led to the conclusion in the statement. ∎
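As a sanity check, here is the special case $t=\pi/2$ of the above diagonalization, worked out explicitly by us (the normalization of the eigenvectors being a matter of choice):
$$R_{\pi/2}=\begin{pmatrix}0&-1\\1&0\end{pmatrix},\qquad R_{\pi/2}\binom{1}{-i}=i\binom{1}{-i},\qquad R_{\pi/2}\binom{1}{i}=-i\binom{1}{i}$$
so the eigenvalues are indeed $e^{\pm i\pi/2}=\pm i$.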
1c. Matrix tricks
At the level of basic examples of diagonalizable matrices, we first have the following result, which provides us with the “generic” examples:
Theorem 1.12.
For a matrix $A\in M_N(\mathbb{C})$ the following conditions are equivalent,
(1) The eigenvalues are different, $\lambda_i\neq\lambda_j$ for $i\neq j$,
(2) The characteristic polynomial $P$ has simple roots,
(3) The characteristic polynomial $P$ and its derivative $P'$ have no common roots,
(4) The resultant of $P,P'$ is nonzero, $R(P,P')\neq0$,
(5) The discriminant of $P$ is nonzero, $\Delta(P)\neq0$,
and in this case, the matrix $A$ is diagonalizable.
Proof.
The last assertion holds indeed, due to Theorem 1.9. As for the equivalences in the statement, these are all standard, the idea for their proofs, along with some more theory, needed for using in practice the present result, being as follows:
This follows from Theorem 1.9.
This is standard, the double roots of $P$ being roots of $P'$.
The idea here is that associated to any two polynomials is their resultant , which checks whether have a common root. Let us write:
We can define then the resultant as being the following quantity:
The point now, that we will explain as well, is that this is a polynomial in the coefficients of , with integer coefficients. Indeed, this can be checked as follows:
– We can expand the formula of , and in what regards , which are the roots of , we obtain in this way certain symmetric functions in these variables, which will be therefore polynomials in the coefficients of , with integer coefficients.
– We can then look what happens with respect to the remaining variables , which are the roots of . Once again what we have here are certain symmetric functions, and so polynomials in the coefficients of , with integer coefficients.
– Thus, we are led to the above conclusion, that is a polynomial in the coefficients of , with integer coefficients, and with the remark that the factor is there for these latter coefficients to be indeed integers, instead of rationals.
Alternatively, let us write our two polynomials in usual form, as follows:
The corresponding resultant appears then as the determinant of an associated matrix, having size , and having coefficients at the blank spaces, as follows:
Once again this is something standard, the idea here being that the discriminant of a polynomial is, modulo scalars, the resultant . To be more precise, let us write our polynomial as follows:
Its discriminant is then defined as being the following quantity:
This is a polynomial in the coefficients of , with integer coefficients, with the division by being indeed possible, under , and with the sign being there for various reasons, including the compatibility with some well-known formulae, at small values of . ∎
All the above might seem a bit complicated, so as an illustration, let us work out an example. Consider the case of a polynomial of degree 2, and a polynomial of degree 1:
In order to compute the resultant, let us factorize our polynomials:
The resultant can be then computed as follows, by using the two-step method:
Observe that corresponds indeed to the fact that have a common root. Indeed, the root of is , and we have:
We can recover as well the resultant as a determinant, as follows:
Finally, in what regards the discriminant, let us see what happens in degree 2. Here we must compute the resultant of the following two polynomials:
The resultant is then given by the following formula:
Now by doing the discriminant normalizations, we obtain, as we should:
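For completeness, here is how the degree 2 computation goes, written out by us under the standard conventions (which might differ by normalization from the ones used above): with $P=aX^2+bX+c$, having roots $r_1,r_2$, and $P'=2aX+b$, we have
$$R(P,P')=a\,P'(r_1)P'(r_2)=a(2ar_1+b)(2ar_2+b)=a(4ac-b^2)$$
using $r_1+r_2=-b/a$ and $r_1r_2=c/a$, and so, after the sign and $1/a$ normalizations, $\Delta(P)=b^2-4ac$, as it should be.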
As already mentioned, one can prove that the matrices having distinct eigenvalues are “generic”, and so the above result basically captures the whole situation. We have in fact the following collection of density results, which are quite advanced:
Theorem 1.13.
The following happen, inside $M_N(\mathbb{C})$:
(1) The invertible matrices are dense.
(2) The matrices having distinct eigenvalues are dense.
(3) The diagonalizable matrices are dense.
Proof.
These are quite advanced results, which can be proved as follows:
(1) This is clear, intuitively speaking, because the invertible matrices are given by the condition $\det A\neq0$. Thus, the set formed by these matrices appears as the complement of the hypersurface $\det A=0$, and so must be dense inside $M_N(\mathbb{C})$, as claimed.
(2) Here we can use a similar argument, this time by saying that the set formed by the matrices having distinct eigenvalues appears as the complement of the hypersurface given by $\Delta(P_A)=0$, and so must be dense inside $M_N(\mathbb{C})$, as claimed.
(3) This follows from (2), via the fact that the matrices having distinct eigenvalues are diagonalizable, that we know from Theorem 1.12. There are of course some other proofs as well, for instance by putting the matrix in Jordan form. ∎
As an application of the above results, and of our methods in general, we have:
Theorem 1.14.
The following happen:
(1) We have $P_{AB}=P_{BA}$, for any two matrices $A,B\in M_N(\mathbb{C})$.
(2) $AB,BA$ have the same eigenvalues, with the same multiplicities.
(3) If $A$ has eigenvalues $\lambda_1,\ldots,\lambda_N$, then $f(A)$ has eigenvalues $f(\lambda_1),\ldots,f(\lambda_N)$.
Proof.
These results can be deduced by using Theorem 1.13, as follows:
(1) It follows from definitions that the characteristic polynomial of a matrix is invariant under conjugation, in the sense that we have the following formula:
$$P_{BAB^{-1}}=P_A$$
Now observe that, when assuming that $A$ is invertible, we have:
$$AB=A(BA)A^{-1}$$
Thus, we have the result when $A$ is invertible. By using now Theorem 1.13 (1), we conclude that this formula holds for any matrix $A$, by continuity.
(2) This is a reformulation of (1), via the fact that the characteristic polynomial encodes the eigenvalues, with multiplicities, which is hard to prove with bare hands.
(3) This is something quite informal, clear for the diagonal matrices $D$, then for the diagonalizable matrices $A=PDP^{-1}$, and finally for all matrices, by using Theorem 1.13 (3), provided that $f$ has suitable regularity properties. We will be back to this. ∎
Let us go back to the main problem raised by the diagonalization procedure, namely the computation of the roots of characteristic polynomials. We have here:
Theorem 1.15.
The complex eigenvalues of a matrix $A\in M_N(\mathbb{C})$, counted with multiplicities, have the following properties:
(1) Their sum is the trace.
(2) Their product is the determinant.
Proof.
Consider indeed the characteristic polynomial of the matrix:
$$P(x)=\det(A-x1_N)=(-1)^Nx^N+(-1)^{N-1}Tr(A)x^{N-1}+\ldots+\det A$$
We can factorize this polynomial, by using its complex roots, and we obtain:
$$P(x)=(-1)^N(x-\lambda_1)\cdots(x-\lambda_N)=(-1)^Nx^N+(-1)^{N-1}\Big(\sum_i\lambda_i\Big)x^{N-1}+\ldots+\prod_i\lambda_i$$
Thus, we are led to the conclusion in the statement. ∎
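As a quick numerical illustration, with a matrix of our own choosing:
$$A=\begin{pmatrix}2&1\\1&2\end{pmatrix},\qquad\lambda_1=3,\ \lambda_2=1,\qquad\lambda_1+\lambda_2=4=Tr(A),\qquad\lambda_1\lambda_2=3=\det A$$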
Regarding now the intermediate terms, we have here:
Theorem 1.16.
Assume that $A\in M_N(\mathbb{C})$ has eigenvalues $\lambda_1,\ldots,\lambda_N\in\mathbb{C}$, counted with multiplicities. The basic symmetric functions of these eigenvalues, namely
$$f_k=\sum_{i_1<\ldots<i_k}\lambda_{i_1}\cdots\lambda_{i_k}$$
are then given by the fact that the characteristic polynomial of the matrix is:
$$P(x)=(-1)^N\sum_{k=0}^N(-1)^kf_k\,x^{N-k}$$
Moreover, all symmetric functions of the eigenvalues, such as the sums of powers
$$p_s=\sum_i\lambda_i^s$$
appear as polynomials in these characteristic polynomial coefficients.
Proof.
These results can be proved by doing some algebra, as follows:
(1) Consider indeed the characteristic polynomial of the matrix, factorized by using its complex roots, taken with multiplicities. By expanding, we obtain:
With the convention , we are led to the conclusion in the statement.
(2) This is something standard, coming by doing some abstract algebra. Working out the formulae for the sums of powers , at small values of the exponent , is an excellent exercise, which shows how to proceed in general, by recurrence. ∎
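For the reader's convenience, here are the first few such formulae, in the notation used above, with $p_s=\sum_i\lambda_i^s$ denoting the power sums and $f_1,f_2,f_3$ the elementary symmetric functions:
$$p_1=f_1,\qquad p_2=f_1^2-2f_2,\qquad p_3=f_1^3-3f_1f_2+3f_3$$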
1d. Spectral theorems
Let us go back now to the diagonalization question. Here is a key result:
Theorem 1.17.
Any matrix $A\in M_N(\mathbb{C})$ which is self-adjoint, $A=A^*$, is diagonalizable, with the diagonalization being of the following type,
$$A=UDU^*$$
with $U\in U_N$, and with $D\in M_N(\mathbb{R})$ diagonal. The converse holds too.
Proof.
As a first remark, the converse trivially holds, because if we take a matrix of the form $A=UDU^*$, with $U$ unitary and $D$ diagonal and real, then we have:
$$A^*=(UDU^*)^*=UD^*U^*=UDU^*=A$$
In the other sense now, assume that $A$ is self-adjoint, $A=A^*$. Our first claim is that the eigenvalues are real. Indeed, assuming $Av=\lambda v$, we have:
$$\lambda\langle v,v\rangle=\langle Av,v\rangle=\langle v,Av\rangle=\bar{\lambda}\langle v,v\rangle$$
Thus we obtain $\lambda\in\mathbb{R}$, as claimed. Our next claim now is that the eigenspaces corresponding to different eigenvalues are pairwise orthogonal. Assume indeed that:
$$Av=\lambda v,\qquad Aw=\mu w,\qquad\lambda\neq\mu$$
We have then the following computation, using $A=A^*$, and the fact that the eigenvalues are real:
$$\lambda\langle v,w\rangle=\langle Av,w\rangle=\langle v,Aw\rangle=\mu\langle v,w\rangle$$
Thus $\lambda\neq\mu$ implies $\langle v,w\rangle=0$, as claimed. In order now to finish the proof, it remains to prove that the eigenspaces of $A$ span the whole space $\mathbb{C}^N$. For this purpose, we will use a recurrence method. Let us pick an eigenvector of our matrix:
$$Av=\lambda v$$
Assuming now that we have a vector $w$ orthogonal to it, $\langle v,w\rangle=0$, we have:
$$\langle v,Aw\rangle=\langle Av,w\rangle=\lambda\langle v,w\rangle=0$$
Thus, if $v$ is an eigenvector, then the vector space $v^\perp$ is invariant under $A$. Moreover, since a matrix $A$ is self-adjoint precisely when $\langle Ax,x\rangle\in\mathbb{R}$ for any vector $x$, as one can see by expanding the scalar product, the restriction of $A$ to the subspace $v^\perp$ is self-adjoint. Thus, we can proceed by recurrence, and we obtain the result. ∎
As basic examples of self-adjoint matrices, we have the orthogonal projections. The diagonalization result regarding them is as follows:
Proposition 1.18.
The matrices $P\in M_N(\mathbb{C})$ which are projections,
$$P^2=P=P^*$$
are precisely those which diagonalize as follows,
$$P=UDU^*$$
with $U\in U_N$, and with $D$ being diagonal, with entries in $\{0,1\}$.
Proof.
The equations for the projections being $P^2=P=P^*$, the eigenvalues are real, and we have as well the following condition, coming from $P^2=P$:
$$\lambda^2=\lambda$$
Thus we obtain $\lambda\in\{0,1\}$, as claimed, and as a final conclusion here, the diagonalization of the self-adjoint matrices satisfying $P^2=P$ is as follows, with the diagonal entries being $0,1$:
$$P=U\,diag(1,\ldots,1,0,\ldots,0)\,U^*$$
To be more precise, the number of 1 values is the dimension of the image of $P$, and the number of 0 values is the dimension of the space of vectors sent to 0 by $P$. ∎
An important class of self-adjoint matrices, which includes for instance all the projections, are the positive matrices. The theory here is as follows:
Theorem 1.19.
For a matrix $A\in M_N(\mathbb{C})$ the following conditions are equivalent, and if they are satisfied, we say that $A$ is positive:
(1) $A=B^2$, with $B=B^*$.
(2) $A=CC^*$, for some $C\in M_N(\mathbb{C})$.
(3) $\langle Ax,x\rangle\geq0$, for any vector $x\in\mathbb{C}^N$.
(4) $A=A^*$, and the eigenvalues are positive, $\lambda_i\geq0$.
(5) $A=UDU^*$, with $U\in U_N$ and with $D\in M_N(\mathbb{R}_+)$ diagonal.
Proof.
The idea is that the equivalences in the statement basically follow from some elementary computations, with only Theorem 1.17 needed, at some point:
(1)⟹(2) This is clear, because we can take $C=B$.
(2)⟹(3) This follows from the following computation:
$$\langle CC^*x,x\rangle=\langle C^*x,C^*x\rangle=\|C^*x\|^2\geq0$$
(3)⟹(4) By using the fact that $\langle Ax,x\rangle$ is real, we have:
$$\langle Ax,x\rangle=\overline{\langle Ax,x\rangle}=\langle x,Ax\rangle=\langle A^*x,x\rangle$$
Thus we have $A=A^*$, and the remaining assertion, regarding the eigenvalues, follows from the following computation, assuming $Av=\lambda v$:
$$\lambda\langle v,v\rangle=\langle Av,v\rangle\geq0\implies\lambda\geq0$$
(4)⟹(5) This follows indeed by using Theorem 1.17.
(5)⟹(1) Assuming $A=UDU^*$, with $U\in U_N$, and with $D$ being diagonal and positive, we can set $B=U\sqrt{D}U^*$. Then $B$ is self-adjoint, and its square is given by:
$$B^2=U\sqrt{D}U^*\,U\sqrt{D}U^*=UDU^*=A$$
Thus, we are led to the conclusion in the statement. ∎
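As an illustration, here is a small example of our own, checking several of the above conditions at once: the matrix
$$A=\begin{pmatrix}1&1\\1&2\end{pmatrix}=T^*T,\qquad T=\begin{pmatrix}1&1\\0&1\end{pmatrix}$$
is positive, its eigenvalues $(3\pm\sqrt{5})/2$ are indeed positive, and $\langle Ax,x\rangle=|x_1+x_2|^2+|x_2|^2\geq0$ for any vector $x$.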
Let us record as well the following technical version of the above result:
Theorem 1.20.
For a matrix $A\in M_N(\mathbb{C})$ the following conditions are equivalent, and if they are satisfied, we say that $A$ is strictly positive:
(1) $A=B^2$, with $B=B^*$ invertible.
(2) $A=CC^*$, for some $C\in M_N(\mathbb{C})$ invertible.
(3) $\langle Ax,x\rangle>0$, for any nonzero vector $x\in\mathbb{C}^N$.
(4) $A=A^*$, and the eigenvalues are strictly positive, $\lambda_i>0$.
(5) $A=UDU^*$, with $U\in U_N$ and with $D\in M_N(\mathbb{R}_+^*)$ diagonal.
Proof.
This follows either from Theorem 1.19, by adding the various extra assumptions in the statement, or from the proof of Theorem 1.19, by modifying where needed. ∎
Let us discuss now the case of the unitary matrices. We have here:
Theorem 1.21.
Any matrix $U\in M_N(\mathbb{C})$ which is unitary, $U^*=U^{-1}$, is diagonalizable, with the eigenvalues on the unit circle $\mathbb{T}$. More precisely we have
$$U=VDV^*$$
with $V\in U_N$, and with $D\in M_N(\mathbb{T})$ diagonal. The converse holds too.
Proof.
As a first remark, the converse trivially holds, because given a matrix of type , with , and with being diagonal, we have:
Let us prove now the first assertion, stating that the eigenvalues of a unitary matrix belong to . Indeed, assuming , we have:
Thus we obtain , as claimed. Our next claim now is that the eigenspaces corresponding to different eigenvalues are pairwise orthogonal. Assume indeed that:
We have then the following computation, using and :
Thus implies , as claimed. In order now to finish the proof, it remains to prove that the eigenspaces of span the whole space . For this purpose, we will use a recurrence method. Let us pick an eigenvector of our matrix:
Assuming that we have a vector orthogonal to it, , we have:
Thus, if is an eigenvector, then the vector space is invariant under . Now since is an isometry, so is its restriction to this space . Thus this restriction is a unitary, and so we can proceed by recurrence, and we obtain the result. ∎
The self-adjoint matrices and the unitary matrices are particular cases of the general notion of a “normal matrix”, and we have here:
Theorem 1.22.
Any matrix $A\in M_N(\mathbb{C})$ which is normal, $AA^*=A^*A$, is diagonalizable, with the diagonalization being of the following type,
$$A=UDU^*$$
with $U\in U_N$, and with $D\in M_N(\mathbb{C})$ diagonal. The converse holds too.
Proof.
As a first remark, the converse trivially holds, because if we take a matrix of the form , with unitary and diagonal, then we have:
In the other sense now, this is something more technical. Our first claim is that a matrix is normal precisely when the following happens, for any vector :
Indeed, the above equality can be written as follows:
But this is equivalent to , by expanding the scalar products. Our next claim is that have the same eigenvectors, with conjugate eigenvalues:
Indeed, this follows from the following computation, and from the trivial fact that if is normal, then so is any matrix of type :
Let us prove now, by using this, that the eigenspaces of are pairwise orthogonal. Assume that we have two eigenvectors, corresponding to different eigenvalues, :
We have the following computation, which shows that implies :
In order to finish, it remains to prove that the eigenspaces of span the whole . This is something that we have already seen for the self-adjoint matrices, and for unitaries, and we will use here these results, in order to deal with the general normal case. As a first observation, given an arbitrary matrix , the matrix is self-adjoint:
Thus, we can diagonalize this matrix , as follows, with the passage matrix being a unitary, , and with the diagonal form being real, :
Now observe that, for matrices of type , which are those that we supposed to deal with, we have the following formulae:
In particular, the matrices and have the same eigenspaces. So, this will be our idea, proving that the eigenspaces of are eigenspaces of . In order to do so, let us pick two eigenvectors of the matrix , corresponding to different eigenvalues, . The eigenvalue equations are then as follows:
We have the following computation, using the normality condition , and the fact that the eigenvalues of , and in particular , are real:
We conclude that we have . But this reformulates as follows:
Now since the eigenspaces of are pairwise orthogonal, and span the whole , we deduce from this that these eigenspaces are invariant under :
But with this result in hand, we can finish. Indeed, we can decompose the problem, and the matrix itself, following these eigenspaces of $A^*A$, which in practice amounts in saying that we can assume that we only have 1 eigenspace. Now by rescaling, this is the same as assuming that we are in the unitary case, which we know how to solve, as explained in Theorem 1.21, and so we are done. ∎
As a first application, we have the following result:
Theorem 1.23.
Given a matrix $A\in M_N(\mathbb{C})$, we can construct a matrix $|A|$ as follows, by using the fact that $A^*A$ is diagonalizable, with positive eigenvalues:
$$|A|=\sqrt{A^*A}$$
This matrix $|A|$ is then positive, and its square is $|A|^2=A^*A$. In the case $N=1$, we obtain in this way the usual absolute value of the complex numbers.
Proof.
Consider indeed the matrix $A^*A$, which is normal. According to Theorem 1.22, we can diagonalize this matrix as follows, with $U\in U_N$, and with $D$ diagonal:
$$A^*A=UDU^*$$
From $A^*A\geq0$ we obtain $D\geq0$. But this means that the entries of $D$ are real, and positive. Thus we can extract the square root $\sqrt{D}$, and then set:
$$\sqrt{A^*A}=U\sqrt{D}U^*$$
Thus, we are basically done. Indeed, if we call this latter matrix $|A|$, then we are led to the conclusions in the statement. Finally, the last assertion is clear from definitions. ∎
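Here is a minimal example of this construction, with a nilpotent matrix chosen by us for simplicity:
$$A=\begin{pmatrix}0&2\\0&0\end{pmatrix},\qquad A^*A=\begin{pmatrix}0&0\\0&4\end{pmatrix},\qquad|A|=\sqrt{A^*A}=\begin{pmatrix}0&0\\0&2\end{pmatrix}$$
and indeed $|A|$ is positive, with $|A|^2=A^*A$, even though $A$ itself is far from being positive.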
We can now formulate a first polar decomposition result, as follows:
Theorem 1.24.
Any invertible matrix $A\in M_N(\mathbb{C})$ decomposes as
$$A=U|A|$$
with $U\in U_N$, and with $|A|=\sqrt{A^*A}$ as above.
Proof.
This is routine, and follows by comparing the actions of $A$ and $|A|$ on the vectors $x\in\mathbb{C}^N$, and deducing from this the existence of a unitary $U$ as above. We will be back to this, later on, directly in the case of the linear operators on Hilbert spaces. ∎
Observe that at $N=1$ we obtain in this way the usual polar decomposition of the nonzero complex numbers. More generally now, we have the following result:
Theorem 1.25.
Any square matrix $A\in M_N(\mathbb{C})$ decomposes as
$$A=U|A|$$
with $U$ being a partial isometry, and with $|A|=\sqrt{A^*A}$ as above.
Proof.
Again, this follows by comparing the actions of on the vectors , and deducing from this the existence of a partial isometry as above. Alternatively, we can get this from Theorem 1.24, applied on the complement of the 0-eigenvectors. ∎
This was for our basic presentation of linear algebra. There are of course many other things that can be said, but we will come back to some of them in what follows, directly in the case of the linear operators on the arbitrary Hilbert spaces.
1e. Exercises
Linear algebra is a wide topic, and there are countless interesting matrices, and exercises about them. As a continuation of our discussion about rotations, we have:
Exercise 1.26.
Prove that the symmetry and the projection with respect to the $Ox$ axis rotated by an angle $t\in\mathbb{R}$ are given by the matrices
$$\begin{pmatrix}\cos2t&\sin2t\\\sin2t&-\cos2t\end{pmatrix},\qquad\begin{pmatrix}\cos^2t&\sin t\cos t\\\sin t\cos t&\sin^2t\end{pmatrix}$$
and then diagonalize these matrices, if possible without computations.
Here the first part can only be clear on pictures, and by the way, prior to this, do not forget to verify as well that our formula of $R_t$ is the good one. As for the second part, just don't go head-first into computations, there might be some geometry over there.
Exercise 1.27.
Prove that the isometries of $\mathbb{R}^2$ are rotations or symmetries,
$$\begin{pmatrix}\cos t&-\sin t\\\sin t&\cos t\end{pmatrix},\qquad\begin{pmatrix}\cos t&\sin t\\\sin t&-\cos t\end{pmatrix}$$
and then try as well to find a formula for the isometries of $\mathbb{R}^3$.
Here for the first question you should look first at the determinant of such an isometry. As for the second question, this is something quite difficult. If you're good at computers, you can look into the code of 3D games, the rotation formula is probably there.
Exercise 1.28.
Prove that the isometries of $\mathbb{C}^2$ of determinant $1$ are the matrices
$$\begin{pmatrix}a&b\\-\bar{b}&\bar{a}\end{pmatrix},\qquad|a|^2+|b|^2=1$$
then work out as well the general case, of arbitrary determinant.
As a comment here, if done with this exercise about $\mathbb{C}^2$, but not yet with the previous one about $\mathbb{R}^3$, you can go back to that exercise, by using a trick. And in case this trick leads to tough computations and big headache, look it up.
Exercise 1.29.
Prove that the flat matrix, which is the all-one $N\times N$ matrix, diagonalizes over the complex numbers as follows,
$$\mathbb{I}_N=\frac{1}{N}\,F_N\begin{pmatrix}N&&&\\&0&&\\&&\ddots&\\&&&0\end{pmatrix}F_N^*$$
where $F_N=(w^{ij})_{ij}$ with $w=e^{2\pi i/N}$ is the Fourier matrix, with the convention that the indices are taken to be $i,j\in\{0,1,\ldots,N-1\}$.
This is something very instructive. Normally you have to look for eigenvectors for the flat matrix, and you are led in this way to the equation $x_1+\ldots+x_N=0$. The problem however is that this equation, while looking very gentle, has no "canonical" solutions over the real numbers. Thus you are led to the complex numbers, and more specifically to the roots of unity, and their magic, leading to the above result. Enjoy.
Chapter 2 Linear operators
2a. Hilbert spaces
We discuss in what follows an extension of the linear algebra results from the previous chapter, obtained by looking at the linear operators $T:H\to H$, with the space $H$ being no longer assumed to be finite dimensional. Our motivations come from quantum mechanics, and in order to get motivated, here is some suggested reading:
(1) Generally speaking, physics is best learned from Feynman [fey]. If you already know some, and want to learn quantum mechanics, go with Griffiths [gri]. And if you’re already a bit familiar with quantum mechanics, a good book is Weinberg [wei].
(2) A look at classics like Dirac [dir], von Neumann [vn4] or Weyl [wey] can be instructive too. On the opposite, you have as well modern, fancy books on quantum information, such as Bengtsson-Życzkowski [bzy], Nielsen-Chuang [nch] or Watrous [wat].
(3) In short, there are many ways of getting familiar with this big mess which is quantum mechanics, and as long as you stay away from books advertised as "rigorous", "axiomatic", "mathematical", things will be fine. By the way, you can try as well my book [ba4].
Getting to work now, physics tells us to look at infinite dimensional complex spaces, such as the space of wave functions of the electron. In order to do some mathematics on these spaces, we will need scalar products. So, let us start with:
Definition 2.1.
A scalar product on a complex vector space $H$ is a binary operation $H\times H\to\mathbb{C}$, denoted $(x,y)\to\langle x,y\rangle$, satisfying the following conditions:
(1) $\langle x,y\rangle$ is linear in $x$, and antilinear in $y$.
(2) $\overline{\langle x,y\rangle}=\langle y,x\rangle$, for any $x,y$.
(3) $\langle x,x\rangle>0$, for any $x\neq0$.
As before in chapter 1, we use here mathematicians’ convention for scalar products, that is, linear at left, as opposed to physicists’ convention, linear at right. The reasons for this are quite subtle, coming from the fact that, while basic quantum mechanics looks better with linear at right, advanced quantum mechanics looks better with linear at left. Or at least that’s what my cats say.
As a basic example for Definition 2.1, we have the finite dimensional vector space $H=\mathbb{C}^N$, with its usual scalar product, namely:
$$\langle x,y\rangle=\sum_ix_i\bar{y}_i$$
There are many other examples, and notably various spaces of functions, which naturally appear in problems coming from physics. We will discuss them later on. In order to study now the scalar products, let us formulate the following definition:
Definition 2.2.
The norm of a vector $x\in H$ is the following quantity:
$$\|x\|=\sqrt{\langle x,x\rangle}$$
We also call this number the length of $x$, or the distance from $x$ to the origin.
The terminology comes from what happens in $\mathbb{R}^N$, where the length of the vector, as defined above, coincides with the usual length, given by:
$$\|x\|=\sqrt{\sum_ix_i^2}$$
In analogy with what happens in finite dimensions, we have two important results regarding the norms. First we have the Cauchy-Schwarz inequality, as follows:
Theorem 2.3.
We have the Cauchy-Schwarz inequality
$$|\langle x,y\rangle|\leq\|x\|\cdot\|y\|$$
and the equality case holds precisely when $x,y$ are proportional.
Proof.
This is something very standard. Consider indeed the following quantity, depending on a real variable , and on a variable on the unit circle, :
By developing , we see that this is a degree 2 polynomial in :
Since is obviously positive, its discriminant must be negative:
But this is equivalent to the following condition:
Now the point is that we can arrange for the number to be such that the quantity is real. Thus, we obtain the following inequality:
Finally, the study of the equality case is straightforward, by using the fact that the discriminant of vanishes precisely when we have a root. But this leads to the conclusion in the statement, namely that the vectors must be proportional. ∎
As a second main result now, we have the Minkowski inequality:
Theorem 2.4.
We have the Minkowski inequality
$$\|x+y\|\leq\|x\|+\|y\|$$
and the equality case holds precisely when $x,y$ are proportional.
Proof.
This follows indeed from the Cauchy-Schwarz inequality, as follows:
As for the equality case, this is clear from Cauchy-Schwarz as well. ∎
As a consequence of this, we have the following result:
Theorem 2.5.
The following function is a distance on $H$,
$$d(x,y)=\|x-y\|$$
in the usual sense, that of the abstract metric spaces.
Proof.
This follows indeed from the Minkowski inequality, which corresponds to the triangle inequality, the other two axioms for a distance being trivially satisfied. ∎
The above result is quite important, because it shows that we can do geometry and analysis in our present setting, with distances and angles, a bit as in the finite dimensional case. In order to do such abstract geometry, we will often need the following key result, which shows that everything can be recovered in terms of distances:
Proposition 2.6.
The scalar products can be recovered from distances, via the formula
$$4\langle x,y\rangle=\|x+y\|^2-\|x-y\|^2+i\|x+iy\|^2-i\|x-iy\|^2$$
called complex polarization identity.
Proof.
This is something that we have already met in finite dimensions. In arbitrary dimensions the proof is similar, as follows:
Thus, we are led to the conclusion in the statement. ∎
In order to do analysis on our spaces, we need the Cauchy sequences that we construct to converge. This is something which is automatic in finite dimensions, but in arbitrary dimensions, this can fail. It is convenient here to formulate a detailed new definition, as follows, which will be the starting point for our various considerations to follow:
Definition 2.7.
A Hilbert space is a complex vector space $H$ given with a scalar product $\langle x,y\rangle$, satisfying the following conditions:
(1) $\langle x,y\rangle$ is linear in $x$, and antilinear in $y$.
(2) $\overline{\langle x,y\rangle}=\langle y,x\rangle$, for any $x,y$.
(3) $\langle x,x\rangle>0$, for any $x\neq0$.
(4) $H$ is complete with respect to the norm $\|x\|=\sqrt{\langle x,x\rangle}$.
In other words, we have taken here Definition 2.1 above, and added the condition that $H$ must be complete with respect to the norm $\|x\|=\sqrt{\langle x,x\rangle}$, that we know indeed to be a norm, according to the Minkowski inequality proved above. As a basic example, as before, we have the space $H=\mathbb{C}^N$, with its usual scalar product, namely:
$$\langle x,y\rangle=\sum_ix_i\bar{y}_i$$
More generally now, we have the following construction of Hilbert spaces:
Proposition 2.8.
The sequences of complex numbers $(x_i)_{i\in\mathbb{N}}$ which are square-summable,
$$\sum_i|x_i|^2<\infty$$
form a Hilbert space $\ell^2(\mathbb{N})$, with the following scalar product:
$$\langle x,y\rangle=\sum_ix_i\bar{y}_i$$
In fact, given any index set $I$, we can construct a Hilbert space $\ell^2(I)$, in this way.
Proof.
There are several things to be proved, as follows:
(1) Our first claim is that is a vector space. For this purpose, we must prove that implies . But this leads us into proving , where . Now since we know this inequality to hold on each subspace obtained by truncating, this inequality holds everywhere, as desired.
(2) Our second claim is that is well-defined on . But this follows from the Cauchy-Schwarz inequality, , which can be established by truncating, a bit like we established the Minkowski inequality in (1) above.
(3) It is also clear that is a scalar product on , so it remains to prove that is complete with respect to . But this is clear, because if we pick a Cauchy sequence , then each numeric sequence is Cauchy, and by setting , we have inside , as desired.
(4) Finally, the same arguments extend to the case of an arbitrary index set $I$, leading to a Hilbert space $\ell^2(I)$, and with the remark here that there is absolutely no problem in talking about sums of type $\sum_{i\in I}|x_i|^2$, even if the index set $I$ is uncountable, because we are summing positive numbers. ∎
Even more generally, we have the following construction of Hilbert spaces:
Theorem 2.9.
Given a measured space $X$, the functions $f:X\to\mathbb{C}$, taken up to equality almost everywhere, which are square-summable,
$$\int_X|f(x)|^2\,dx<\infty$$
form a Hilbert space $L^2(X)$, with the following scalar product:
$$\langle f,g\rangle=\int_Xf(x)\overline{g(x)}\,dx$$
In the case $X=\mathbb{N}$, with the counting measure, we obtain in this way the space $\ell^2(\mathbb{N})$.
Proof.
This is a straightforward generalization of Proposition 2.8, with the arguments from the proof of Proposition 2.8 carrying over in our case, as follows:
(1) The first part, regarding Cauchy-Schwarz and Minkowski, extends without problems, by using this time approximation by step functions.
(2) Regarding the fact that is indeed a scalar product on , there is a subtlety here, because if we want for , we must declare that when almost everywhere, and so that when almost everywhere.
(3) Regarding the fact that is complete with respect to , this is again basic measure theory, by picking a Cauchy sequence , and then constructing a pointwise, and hence limit, , almost everywhere.
(4) Finally, the last assertion is clear, because the integration with respect to the counting measure is by definition a sum, and so in this case. ∎
Quite remarkably, any Hilbert space must be of the form $L^2(X)$, and even of the particular form $\ell^2(I)$. This follows indeed from the following key result:
Theorem 2.10.
Let $H$ be a Hilbert space.
(1) Any algebraic basis $\{f_i\}_{i\in I}$ of this space can be turned into an orthonormal basis $\{e_i\}_{i\in I}$, by using the Gram-Schmidt procedure.
(2) Thus, $H$ has an orthonormal basis, and so we have $H\simeq\ell^2(I)$, with $I$ being the indexing set for this orthonormal basis.
Proof.
All this is standard by Gram-Schmidt, the idea being as follows:
(1) First of all, in finite dimensions an orthonormal basis $\{e_i\}_{i\in I}$ is by definition a usual algebraic basis, satisfying $\langle e_i,e_j\rangle=\delta_{ij}$. But the existence of such a basis follows by applying the Gram-Schmidt procedure to any algebraic basis $\{f_i\}_{i\in I}$, as claimed.
(2) In infinite dimensions, a first issue comes from the fact that the standard basis of the space is not an algebraic basis in the usual sense, with the finite linear combinations of the functions producing only a dense subspace of , that of the functions having finite support. Thus, we must fine-tune our definition of “basis”.
(3) But this can be done in two ways, by saying that is a basis of when the functions are linearly independent, and when either the finite linear combinations of these functions form a dense subspace of , or the linear combinations with coefficients of these functions form the whole . For orthogonal bases these definitions are equivalent, and in any case, our statement makes now sense.
(4) Regarding now the proof, in infinite dimensions, this follows again from Gram-Schmidt, exactly as in the finite dimensional case, but by using this time a tool from logic, called Zorn lemma, in order to correctly do the recurrence. ∎
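As a reference for the above, here is a sketch of the Gram-Schmidt recurrence, written in the present conventions (scalar products linear at left), and with the hypothetical notation $f_1,f_2,f_3,\ldots$ for the input basis:
$$e_1=\frac{f_1}{\|f_1\|},\qquad e_k=\frac{g_k}{\|g_k\|},\qquad g_k=f_k-\sum_{i<k}\langle f_k,e_i\rangle e_i$$
each new vector being obtained by subtracting from $f_k$ its projections on the previously constructed vectors, and then normalizing.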
The above result, and its relation with Theorem 2.9, is something quite subtle, so let us further get into this. First, we have the following definition, based on the above:
Definition 2.11.
A Hilbert space $H$ is called separable when the following equivalent conditions are satisfied:
(1) $H$ has a countable algebraic basis $\{f_i\}_{i\in\mathbb{N}}$.
(2) $H$ has a countable orthonormal basis $\{e_i\}_{i\in\mathbb{N}}$.
(3) We have $H\simeq\ell^2(\mathbb{N})$, an isomorphism of Hilbert spaces.
In what follows we will be mainly interested in the separable Hilbert spaces, where most of the questions coming from quantum physics take place. In view of the above, the following philosophical question appears: why not simply talk about $\ell^2(\mathbb{N})$?
In answer to this, we cannot really do so, because many of the separable spaces that we are interested in appear as spaces of functions, and such spaces do not necessarily have a very simple or explicit orthonormal basis, as shown by the following result:
Proposition 2.12.
The Hilbert space $H=L^2[0,1]$ is separable, having as orthonormal basis the orthonormalized version of the algebraic basis $f_n=x^n$, with $n\in\mathbb{N}$.
Proof.
This follows from the Weierstrass theorem, which provides us with the basis , which can be orthogonalized by using the Gram-Schmidt procedure, as explained in Theorem 2.10. Working out the details here is actually an excellent exercise. ∎
As a conclusion to all this, we are interested in 1 space, namely the unique separable Hilbert space $H$, but due to various technical reasons, it is often better to forget that we have $H=\ell^2(\mathbb{N})$, and say instead that we have $H=L^2(X)$, with $X$ being a separable measured space, or simply say that $H$ is an abstract separable Hilbert space.
2b. Linear operators
Let us get now into the study of linear operators $T:H\to H$. Before anything, we should mention that things are quite tricky with respect to quantum mechanics, and physics in general. Indeed, if there is a central operator in physics, this is the Laplace operator on the smooth functions $f:\mathbb{R}^3\to\mathbb{C}$, given by:
$$\Delta f=\frac{\partial^2f}{\partial x^2}+\frac{\partial^2f}{\partial y^2}+\frac{\partial^2f}{\partial z^2}$$
And the problem is that what we have here is an operator $\Delta:C^\infty(\mathbb{R}^3)\to C^\infty(\mathbb{R}^3)$, which does not extend into an operator $\Delta:L^2(\mathbb{R}^3)\to L^2(\mathbb{R}^3)$. Thus, we should perhaps look at operators $T:H\to H$ which are densely defined, instead of looking at operators which are everywhere defined. We will not do so, for two reasons:
(1) Tactical retreat. When physics looks too complicated, as it is the case now, you can always declare that mathematics comes first. So, let us be pure mathematicians, simply looking in generalizing linear algebra to infinite dimensions. And from this viewpoint, it is a no-brainer to look at everywhere defined operators .
(2) Modern physics. We will see later, towards the middle of the present book, when talking about various mathematical physics findings of Connes, Jones, Voiculescu and others, that a lot of interesting mathematics, which is definitely related to modern physics, can be developed by using the everywhere defined operators .
In short, you’ll have to trust me here. And hang on, we are not done yet, because with this choice made, there is one more problem, mathematical this time. The problem comes from the fact that in infinite dimensions the everywhere defined operators can be bounded or not, and for reasons which are mathematically intuitive and obvious, and physically acceptable too, we want to deal with the bounded case only.
Long story short, let us avoid too much thinking, and start in a simple way, with:
Proposition 2.13.
For a linear operator $T:H\to H$, the following are equivalent:
(1) $T$ is continuous.
(2) $T$ is continuous at $0$.
(3) $T(B)\subset MB$ for some $M<\infty$, where $B\subset H$ is the unit ball.
(4) $T$ is bounded, in the sense that $\|T\|=\sup_{\|x\|\leq1}\|Tx\|$ satisfies $\|T\|<\infty$.
Proof.
This is elementary, with (1)⟺(2) coming from the linearity of $T$, then (2)⟺(3) coming from definitions, and finally (3)⟺(4) coming from the fact that the number $\|T\|$ from (4) is the infimum of the numbers $M$ making (3) work. ∎
Regarding such operators, we have the following result:
Theorem 2.14.
The linear operators $T:H\to H$ which are bounded,
$$B(H)=\big\{T:H\to H\text{ linear},\ \|T\|<\infty\big\}$$
form a complex algebra with unit $1$, having the property
$$\|ST\|\leq\|S\|\cdot\|T\|$$
and which is complete with respect to the norm $\|\cdot\|$.
Proof.
The fact that we have indeed an algebra, satisfying the product condition in the statement, follows from the following estimates, which are all elementary:
Regarding now the last assertion, if is Cauchy then is Cauchy for any , so we can define the limit by setting:
Let us first check that the application is linear. We have:
Similarly, we have as well the following computation:
Thus we have a linear map . It remains to prove that we have , and that we have in norm. For this purpose, observe that we have:
As a first consequence, we obtain , because we have:
As a second consequence, we obtain in norm, and we are done. ∎
In the case where $H$ comes with an orthonormal basis $\{e_i\}_{i\in I}$, we can talk about the infinite matrices $M\in M_I(\mathbb{C})$, with the remark that the multiplication of such matrices is not always defined, in the case where $I$ is infinite. In this context, we have the following result:
Theorem 2.15.
Let $H$ be a Hilbert space, with orthonormal basis $\{e_i\}_{i\in I}$. The bounded operators $T\in B(H)$ can be then identified with matrices $M\in M_I(\mathbb{C})$ via
$$Tx=Mx,\qquad M_{ij}=\langle Te_j,e_i\rangle$$
and we obtain in this way an embedding as follows, which is multiplicative:
$$B(H)\subset M_I(\mathbb{C})$$
In the case $H=\mathbb{C}^N$ we obtain in this way the usual isomorphism $B(H)\simeq M_N(\mathbb{C})$. In the separable case we obtain in this way a proper embedding $B(H)\subset M_\infty(\mathbb{C})$.
Proof.
We have several assertions to be proved, the idea being as follows:
(1) Regarding the first assertion, given a bounded operator , let us associate to it a matrix as in the statement, by the following formula:
It is clear that this correspondence is linear, and also that its kernel is . Thus, we have an embedding of linear spaces .
(2) Our claim now is that this embedding is multiplicative. But this is clear too, because if we denote by our correspondence, we have:
(3) Finally, we must prove that the original operator can be recovered from its matrix via the formula in the statement, namely . But this latter formula holds for the vectors of the basis, , because we have:
Now by linearity we obtain from this that the formula holds everywhere, on any vector , and this finishes the proof of the first assertion.
(4) In finite dimensions we obtain an isomorphism, because any matrix $M\in M_N(\mathbb{C})$ determines an operator $T:\mathbb{C}^N\to\mathbb{C}^N$, according to the formula $Tx=Mx$. In infinite dimensions, however, we do not have an isomorphism. For instance on $H=\ell^2(\mathbb{N})$ the following matrix, having all entries equal to $1$, does not define an operator:
$$M=\begin{pmatrix}1&1&\cdots\\1&1&\cdots\\\vdots&\vdots&\end{pmatrix}$$
Indeed, $Te_1$ should be the all-one vector, which is not square-summable. ∎
In connection with our previous comments on bases, the above result is something quite theoretical, because for basic Hilbert spaces like , which do not have a simple orthonormal basis, the embedding that we obtain is not something very useful. In short, while the bounded operators are basically some infinite matrices, it is better to think of these operators as being objects on their own.
As another comment, the construction makes sense for any linear operator , but when , we do not obtain an embedding in this way. Indeed, set , let be the linear space spanned by the standard basis, and pick an algebraic complement of this space , so that we have , as an algebraic direct sum. Then any linear operator gives rise to a linear operator , given by , whose associated matrix is . And, restrospectively speaking, it is in order to avoid such pathologies that we decided some time ago to restrict the attention to the bounded case, .
As in the finite dimensional case, we can talk about adjoint operators, in this setting, the definition and main properties of the construction being as follows:
Theorem 2.16.
Given a bounded operator $T\in B(H)$, the following formula defines a bounded operator $T^*\in B(H)$, called adjoint of $T$:
$$\langle Tx,y\rangle=\langle x,T^*y\rangle$$
The correspondence $T\to T^*$ is antilinear, antimultiplicative, and is an involution, and an isometry. In finite dimensions, we recover the usual adjoint operator.
Proof.
There are several things to be done here, the idea being as follows:
(1) We will need a standard functional analysis result, stating that the continuous linear forms appear as scalar products, as follows, with :
Indeed, in one sense this is clear, because given , the application is linear, and continuous as well, because by Cauchy-Schwarz we have:
Conversely now, by using a basis we can assume , and our linear form must be then, by linearity, given by a formula of the following type:
But, again by Cauchy-Schwarz, in order for such a formula to define indeed a continuous linear form we must have , and so , as desired.
(2) With this in hand, we can now construct the adjoint , by the formula in the statement. Indeed, given , the formula defines a linear map . Thus, we must have a formula as follows, for a certain vector :
Moreover, this vector is unique with this property, and we conclude from this that the formula defines a certain map , which is unique with the property in the statement, namely for any .
(3) Let us prove that we have . By using once again the uniqueness of , we conclude that we have the following formulae, which show that is linear:
Observe also that is bounded as well, because we have:
(4) The fact that the correspondence is antilinear, antimultiplicative, and is an involution comes from the following formulae, coming from uniqueness:
As for the isometry property with respect to the operator norm, , this is something that we already know, from the proof of (3) above.
(5) Regarding finite dimensions, let us first examine the general case where our Hilbert space comes with a basis, . We can compute the matrix associated to the operator , by using , in the following way:
Thus, we have reached to the usual formula for the adjoints of matrices, and in the particular case , we conclude that comes indeed from the usual . ∎
As in finite dimensions, the operators can be thought of as being “twin brothers”, and there is a lot of interesting mathematics connecting them. We first have:
Proposition 2.17.
Given a bounded operator $T\in B(H)$, the following happen:
(1) $\ker T^*=(\mathrm{Im}\,T)^\perp$.
(2) $\overline{\mathrm{Im}\,T^*}=(\ker T)^\perp$.
Proof.
Both these assertions are elementary, as follows:
(1) Let us first prove “”. Assuming , we have indeed , because:
As for “”, assuming for any , we have , because:
(2) This can be deduced from (1), applied to the operator , as follows:
Here we have used the formula , valid for any linear subspace of a Hilbert space, which for closed reads , and comes from , and which in general follows from , the reverse inclusion being clear. ∎
Let us record as well the following useful formula, relating and :
Theorem 2.18.
We have the following formula,
$$\|TT^*\|=\|T\|^2$$
valid for any operator $T\in B(H)$.
Proof.
We recall from Theorem 2.16 that the correspondence $T\to T^*$ is an isometry with respect to the operator norm, in the sense that we have:
$$\|T\|=\|T^*\|$$
In order to prove now the formula in the statement, observe first that we have:
$$\|TT^*\|\leq\|T\|\cdot\|T^*\|=\|T\|^2$$
On the other hand, we have as well the following estimate:
$$\|Tx\|^2=\langle Tx,Tx\rangle=\langle x,T^*Tx\rangle\leq\|T^*T\|\cdot\|x\|^2$$
By replacing $T\to T^*$ we obtain from this that we have:
$$\|T\|^2=\|T^*\|^2\leq\|TT^*\|$$
Thus, we have obtained the needed inequality, and we are done. ∎
2c. Unitaries, projections
Let us discuss now some explicit examples of operators, in analogy with what happens in finite dimensions. The most basic examples of linear transformations are the rotations, symmetries and projections. Then, we have certain remarkable classes of linear transformations, such as the positive, self-adjoint and normal ones. In what follows we will develop the basic theory of such transformations, in the present Hilbert space setting.
Let us begin with the rotations. The situation here is quite tricky in arbitrary dimensions, and we have several notions instead of one. We first have the following result:
Theorem 2.19.
For a linear operator the following conditions are equivalent, and if they are satisfied, we say that is an isometry:
-
(1)
is a metric space isometry, .
-
(2)
is a normed space isometry, .
-
(3)
preserves the scalar product, .
-
(4)
satisfies the isometry condition .
In finite dimensions, we recover in this way the usual unitary transformations.
Proof.
The proofs are similar to those in finite dimensions, as follows:
This follows indeed from the formula of the distances, namely:
This is again standard, because we can pass from scalar products to distances, and vice versa, by using , and the polarization formula.
We have indeed the following equivalences, by using the standard formula , which defines the adjoint operator:
Thus, we are led to the conclusions in the statement. ∎
The point now is that the condition does not imply in general , the simplest counterexample here being the shift operator on :
Proposition 2.20.
The shift operator on the space , given by
is an isometry, . However, we have .
Proof.
The adjoint of the shift is given by the following formula:
When composing , in one sense we obtain the following formula:
In the other sense now, we obtain the following formula:
Summarizing, the compositions are given by the following formulae:
Thus, we are led to the conclusions in the statement. ∎
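Here is a small Python/NumPy sketch of the shift and its adjoint, acting on a truncation of the space of square-summable sequences; the truncation, the window size N and the test vector are of course choices made only for illustration:

```python
import numpy as np

N = 10  # we only keep the first N coordinates of a sequence

def shift(x):
    """S(x0, x1, x2, ...) = (0, x0, x1, ...), truncated to N coordinates."""
    return np.concatenate(([0.0], x[:-1]))

def shift_adj(x):
    """S*(x0, x1, x2, ...) = (x1, x2, x3, ...), truncated to N coordinates."""
    return np.concatenate((x[1:], [0.0]))

# A vector supported well inside the window, so that the truncation is exact.
x = np.zeros(N)
x[:4] = [1.0, -2.0, 3.0, 0.5]

print(np.allclose(shift_adj(shift(x)), x))   # S*S = 1 on such vectors: True
print(np.allclose(shift(shift_adj(x)), x))   # SS* kills the first coordinate: False
```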
As a conclusion, the notion of isometry is not the correct infinite dimensional analogue of the notion of unitary, and the unitary operators must be introduced as follows:
Theorem 2.21.
For a linear operator the following conditions are equivalent, and if they are satisfied, we say that is a unitary:
-
(1)
is an isometry, which is invertible.
-
(2)
, are both isometries.
-
(3)
, are both isometries.
-
(4)
.
-
(5)
.
Moreover, the unitary operators form a group .
Proof.
There are several statements here, the idea being as follows:
(1) The various equivalences in the statement are all clear from definitions, and from Theorem 2.19 in what regards the various possible notions of isometries which can be used, by using the formula for the adjoints of the products of operators.
(2) The fact that the products and inverses of unitaries are unitaries is also clear, and we conclude that the unitary operators form a group , as stated. ∎
Let us discuss now the projections. Modulo the fact that all the subspaces where these projections project must be assumed to be closed, in the present setting, here the result is perfectly similar to the one in finite dimensions, as follows:
Theorem 2.22.
For a linear operator the following conditions are equivalent, and if they are satisfied, we say that is a projection:
-
(1)
is the orthogonal projection on a closed subspace .
-
(2)
satisfies the projection equations .
Proof.
As in finite dimensions, is an abstract projection, not necessarily orthogonal, when it is an idempotent, algebraically speaking, in the sense that we have:
The point now is that this projection is orthogonal when:
Now observe that by conjugating, we obtain . Thus, we must have , and so we have shown that any orthogonal projection must satisfy, as claimed:
Conversely, if this condition is satisfied, shows that is a projection, and shows via the above computation that is indeed orthogonal. ∎
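As an illustration of the two conditions above, here is a short Python/NumPy check, with a rank one orthogonal projection, and with a non-orthogonal idempotent for comparison; everything here is a toy example, not coming from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

# Orthogonal projection onto the span of a unit vector v in C^5.
v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
v = v / np.linalg.norm(v)
P = np.outer(v, v.conj())

print(np.allclose(P @ P, P))         # P^2 = P
print(np.allclose(P.conj().T, P))    # P* = P

# A non-orthogonal idempotent: E^2 = E, but E* != E.
E = np.array([[1.0, 1.0], [0.0, 0.0]])
print(np.allclose(E @ E, E), np.allclose(E.T, E))   # True, False
```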
There is a relation between the projections and the general isometries, such as the shift that we met before, and we have the following result:
Proposition 2.23.
Given an isometry , the operator
is a projection, namely the orthogonal projection on .
Proof.
Assume indeed that we have an isometry, . The fact that is indeed a projection can be checked abstractly, as follows:
As for the last assertion, this is something that we already met, for the shift, and the situation in general is similar, with the result itself being clear. ∎
More generally now, along the same lines, and clarifying the whole situation with the unitaries and isometries, we have the following result:
Theorem 2.24.
An operator is a partial isometry, in the usual geometric sense, when the following two operators are projections:
Moreover, the isometries, adjoints of isometries and unitaries are respectively characterized by the conditions , , .
Proof.
The first assertion is a straightforward extension of Proposition 2.23, and the second assertion follows from various results regarding isometries established above. ∎
It is possible to talk as well about symmetries, in the following way:
Definition 2.25.
An operator is called a symmetry when , and a unitary symmetry when one of the following equivalent conditions is satisfied:
-
(1)
is a unitary, , and a symmetry as well, .
-
(2)
satisfies the equations .
Here the terminology is a bit non-standard, because even in finite dimensions, is not exactly what you would require for a “true” symmetry, as shown by the following transformation, which is a symmetry in our sense, but not a unitary symmetry:
Let us study now some larger classes of operators, which are of particular importance, namely the self-adjoint, positive and normal ones. We first have:
Theorem 2.26.
For an operator , the following conditions are equivalent, and if they are satisfied, we call self-adjoint:
-
(1)
.
-
(2)
.
In finite dimensions, we recover in this way the usual self-adjointness notion.
Proof.
There are several assertions here, the idea being as follows:
This is clear, because we have:
In order to prove this, observe that the beginning of the above computation shows that, when assuming , the following happens:
Thus, in terms of the operator , we have:
In order to finish, we use a polarization trick. We have the following formula:
Since the first 3 terms vanish, the sum of the last 2 terms vanishes too. But, by using , coming from , we can process this latter vanishing as follows:
Thus we must have , and with we obtain too, and so . Thus , which gives , as desired.
(3) Finally, in what regards the finite dimensions, or more generally the case where our Hilbert space comes with a basis, , here the condition corresponds to the usual self-adjointness condition at the level of the associated matrices. ∎
At the level of the basic examples, the situation is as follows:
Proposition 2.27.
The following operators are self-adjoint:
-
(1)
The projections, . In fact, an abstract, algebraic projection is an orthogonal projection precisely when it is self-adjoint.
-
(2)
The unitary symmetries, . In fact, a unitary is a unitary symmetry precisely when it is self-adjoint.
Proof.
These assertions are indeed all clear from definitions. ∎
Next in line, we have the notion of positive operator. We have here:
Theorem 2.28.
The positive operators, which are the operators satisfying , have the following properties:
-
(1)
They are self-adjoint, .
-
(2)
As examples, we have the projections, .
-
(3)
More generally, is positive, for any .
-
(4)
In finite dimensions, we recover the usual positive operators.
Proof.
All these assertions are elementary, the idea being as follows:
(1) This follows from Theorem 2.26, because implies .
(2) This is clear from , because we have:
(3) This follows from a similar computation, namely:
(4) This is well-known, the idea being that the condition corresponds to the usual positivity condition , at the level of the associated matrix. ∎
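Here is a quick finite dimensional verification of the positivity of T*T, in Python/NumPy, with both the spectral and the quadratic form formulations; again, this is only an illustration of the statement, under randomly chosen data:

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
P = T.conj().T @ T   # the operator T*T

print(np.allclose(P, P.conj().T))                # P is self-adjoint
print(np.min(np.linalg.eigvalsh(P)) >= -1e-12)   # its eigenvalues are >= 0

# Positivity as a quadratic form: <Px, x> = ||Tx||^2 >= 0.
x = rng.standard_normal(6) + 1j * rng.standard_normal(6)
print(np.vdot(x, P @ x).real, np.linalg.norm(T @ x) ** 2)
```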
It is possible to talk as well about strictly positive operators, and we have here:
Theorem 2.29.
The strictly positive operators, which are the operators satisfying , for any , have the following properties:
-
(1)
They are self-adjoint, .
-
(2)
As examples, is positive, for any injective.
-
(3)
In finite dimensions, we recover the usual strictly positive operators.
Proof.
As before, all these assertions are elementary, the idea being as follows:
(1) This is something that we know, from Theorem 2.28.
(2) This follows from the injectivity of , because for any we have:
(3) This is well-known, the idea being that the condition corresponds to the usual strict positivity condition , at the level of the associated matrix. ∎
As a comment, while any strictly positive matrix is well-known to be invertible, the analogue of this fact does not hold in infinite dimensions, a counterexample here being the following operator on , which is strictly positive but not invertible:
As a last remarkable class of operators, we have the normal ones. We have here:
Theorem 2.30.
For an operator , the following conditions are equivalent, and if they are satisfied, we call normal:
-
(1)
.
-
(2)
.
In finite dimensions, we recover in this way the usual normality notion.
Proof.
There are several assertions here, the idea being as follows:
This is clear, due to the following computation:
This is clear as well, because the above computation shows that, when assuming , the following happens:
Thus, in terms of the operator , we have:
In order to finish, we use a polarization trick. We have the following formula:
Since the first 3 terms vanish, the sum of the last 2 terms vanishes too. But, by using , coming from , we can process this latter vanishing as follows:
Thus we must have , and with we obtain too, and so . Thus , which gives , as desired.
(3) Finally, in what regards finite dimensions, or more generally the case where our Hilbert space comes with a basis, , here the condition corresponds to the usual normality condition at the level of the associated matrices. ∎
Observe that the normal operators generalize both the self-adjoint operators, and the unitaries. We will be back to such operators, on many occasions, in what follows.
2d. Diagonal operators
Let us work out now what happens in the case that we are mostly interested in, namely , with being a measured space. We first have:
Theorem 2.31.
Given a measured space , consider the Hilbert space . Associated to any function is then the multiplication operator
which is well-defined, linear and bounded, having norm as follows:
Moreover, the correspondence is linear, multiplicative and involutive.
Proof.
There are several assertions here, the idea being as follows:
(1) We must first prove that the formula in the statement, , defines indeed an operator , which amounts in saying that we have:
But this follows from the following explicit estimate:
(2) Next in line, we must prove that is linear and bounded. We have:
As for the boundedness condition, this follows from the estimate from the proof of (1), which gives, in terms of the operator norm of :
(3) Let us prove now that we have equality, , in the above estimate. For this purpose, we use the well-known fact that the functions can be approximated by functions. Indeed, with such an approximation we obtain:
Thus, with we obtain , which is reverse to the inequality obtained in the proof of (2), and this leads to the conclusion in the statement.
(4) Regarding now the fact that the correspondence is indeed linear and multiplicative, the corresponding formulae are as follows, both clear:
(5) Finally, let us prove that the correspondence is involutive, in the sense that it transforms the standard involution of the algebra into the standard involution of the algebra . We must prove that we have:
But this follows from the following computation:
Indeed, since the adjoint is unique, we obtain from this . Thus the correspondence is indeed involutive, as claimed. ∎
In what regards now the basic classes of operators, the above construction provides us with many new examples, which are very explicit, and are complementary to the usual finite dimensional examples that we usually have in mind, as follows:
Theorem 2.32.
The multiplication operators on the Hilbert space associated to the functions are as follows:
-
(1)
is unitary when .
-
(2)
is a symmetry when .
-
(3)
is a projection when with .
-
(4)
There are no non-unitary isometries.
-
(5)
There are no non-unitary symmetries.
-
(6)
is positive when .
-
(7)
is self-adjoint when .
-
(8)
is always normal, for any .
Proof.
All these assertions are clear from definitions, and from the various properties of the correspondence , established above, as follows:
(1) The unitarity condition for the operator reads , which means that we must have , as claimed.
(2) The symmetry condition for the operator reads , which means that we must have , as claimed.
(3) The projection condition for the operator reads , which means that we must have , or equivalently, with .
(4) A non-unitary isometry must satisfy by definition , and for the operator this means that we must have , which is impossible.
(5) This follows from (1) and (2), because the solutions found in (2) for the symmetry problem are included in the solutions found in (1) for the unitarity problem.
(6) The fact that is positive amounts in saying that we must have for any , and this is equivalent to the fact that we must have , as desired.
(7) The self-adjointness condition for the operator reads , which means that we must have , as claimed.
(8) The normality condition for the operator reads , which is automatic for any function , as claimed. ∎
The above result might look quite puzzling, at first glance, messing up our intuition with various classes of operators, coming from usual linear algebra. However, a bit of further thinking tells us that there is no contradiction, and that Theorem 2.32 is in fact very similar to what we know about the diagonal matrices. To be more precise, the diagonal matrices are unitaries precisely when their entries are in , there are no non-unitary isometries, all such matrices are normal, and so on. In order to understand all this, let us work out what happens with the correspondence , in finite dimensions. The situation here is in fact extremely simple, and illuminating, as follows:
Theorem 2.33.
Assuming with the counting measure, the embedding
constructed via multiplication operators, , corresponds to the embedding
given by the diagonal matrices, constructed as follows:
Thus, Theorem 2.32 generalizes what we know about the diagonal matrices.
Proof.
The idea is that all this is trivial, with not a single new computation needed, modulo some algebraic thinking, of quite soft type. Let us go back indeed to Theorem 2.31 above and its proof, with the abstract measured space appearing there being now the following finite space, with its counting measure:
Regarding the functions , these are now functions as follows:
We can identify such a function with the corresponding vector , and so we conclude that our input algebra is the algebra :
Regarding now the Hilbert space , this is equal as well to , and for the same reasons, namely that can be identified with the vector :
Observe that, due to our assumption that comes with its counting measure, the scalar product that we obtain on is the usual one, without weights. Now, let us identify the operators on with the square matrices, in the usual way:
This was our final identification, in order to get started. Now by getting back to Theorem 2.31, the embedding constructed there reads:
But this can only be the embedding given by the diagonal matrices, so we are basically done. In order to finish, however, let us understand what the operator associated to an arbitrary vector is. We can regard this vector as a function, , and so the action on the vectors of is by componentwise multiplication by the numbers . But this is exactly the action of the diagonal matrix , and so we are led to the conclusion in the statement. ∎
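To make the above identification fully concrete, here is a small Python/NumPy sketch of the finite case, with the function f, the space X = {1,...,5} and the counting measure all being illustrative choices:

```python
import numpy as np

# A function f on X = {1,...,5}, i.e. a vector of 5 values, and the diagonal matrix T_f.
f = np.array([2.0, -1.0, 0.5 + 0.5j, 3.0, 0.0])
T_f = np.diag(f)

# T_f acts by componentwise multiplication by f ...
x = np.ones(5, dtype=complex)
print(np.allclose(T_f @ x, f * x))

# ... and its operator norm is the sup norm of f.
print(np.linalg.norm(T_f, 2), np.max(np.abs(f)))   # both equal 3.0
```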
There are other things that can be said about the embedding , a key observation here, which is elementary to prove, being the fact that the image of is closed with respect to the weak topology, the one where when for any . And with this meaning that is a so-called von Neumann algebra on . We will be back to this, on numerous occasions, in what follows.
2e. Exercises
As before with linear algebra, operator theory is a wide area of mathematics, and there are many interesting operators, and exercises about them. We first have:
Exercise 2.34.
Find an explicit orthonormal basis for the Hilbert space
by starting with the algebraic basis with , and applying Gram-Schmidt.
This is actually quite non-trivial, and in case you’re stuck with complicated computations, better look it up, preferably in the physics literature, physicists being well-known to adore such things, and then write a brief account of what you found.
Exercise 2.35.
Find all the complex matrices
which are symmetries, , and interpret them geometrically.
Here you can of course start with the real case first, . Also, you can have a look at 3 dimensions too, real or complex, and beware of the computations here.
Exercise 2.36.
Prove that any positive operator appears as
with self-adjoint, first in finite dimensions, then in general.
Here the discussion in finite dimensions involves positive eigenvalues and their square roots, which is something quite standard. In infinite dimensions things are a bit more complicated, because we do not yet have such eigenvalue technology, which will actually come in the next chapter, but you can of course try some other tricks.
Chapter 3 Spectral theorems
3a. Basic theory
We discuss in this chapter the diagonalization problem for the operators , in analogy with the diagonalization problem for the usual matrices . As a first observation, we can talk about eigenvalues and eigenvectors, as follows:
Definition 3.1.
Given an operator , assuming that we have
we say that is an eigenvector of , with eigenvalue .
We know many things about eigenvalues and eigenvectors, in the finite dimensional case. However, most of these will not extend to the infinite dimensional case, or at least not extend in a straightforward way, due to a number of reasons:
-
(1)
Most of basic linear algebra is based on the fact that is equivalent to , so that is an eigenvalue when is not invertible. In the infinite dimensional setting might be injective and not surjective, or vice versa, or invertible with not bounded, and so on.
-
(2)
Also, in linear algebra is not invertible when , and with this leading to most of the advanced results about eigenvalues and eigenvectors. In infinite dimensions, however, it is impossible to construct a determinant function , and this even for the diagonal operators on .
Summarizing, we are in trouble with our extension program, and this right from the beginning. In order to have some theory started, however, let us forget about (2), which obviously leads nowhere, and focus on the difficulties in (1).
In order to cut short the discussion there, regarding the various properties of , we can just say that is either invertible with bounded inverse, the “good case”, or not. We are led in this way to the following definition:
Definition 3.2.
The spectrum of an operator is the set
where is the set of invertible operators.
As a basic example, in the finite dimensional case, , the spectrum of a usual matrix is the collection of its eigenvalues, taken without multiplicities. We will see many other examples. In general, the spectrum has the following properties:
Proposition 3.3.
The spectrum of contains the eigenvalue set
and is an equality in finite dimensions, but not in infinite dimensions.
Proof.
We have several assertions here, the idea being as follows:
(1) First of all, the eigenvalue set is indeed the one in the statement, because tells us precisely that must be not injective. The fact that we have is clear as well, because if is not injective, it is not bijective.
(2) In finite dimensions we have , because is injective if and only if it is bijective, with the boundedness of the inverse being automatic.
(3) In infinite dimensions we can assume , and the shift operator is injective but not surjective. Thus . ∎
We will see more examples and counterexamples, and some general theory, in a moment. Philosophically speaking, the best way of thinking about all this is as follows:
– The numbers are good, because we can invert .
– The numbers are bad.
– The eigenvalues are evil.
Note that this is somewhat contrary to what happens in linear algebra, where the eigenvalues are highly valued, and cherished, and regarded as being the source of all good things on Earth. Welcome to operator theory, where some things are upside down.
Let us develop now some general theory for the spectrum, or perhaps for its complement, with the promise to come back to eigenvalues later. As a first result, we would like to prove that the spectra are non-empty. This is something tricky, and we will need:
Proposition 3.4.
The following happen:
-
(1)
-
(2)
The set is open.
-
(3)
The map is differentiable.
Proof.
All these assertions are elementary, as follows:
(1) This follows as in the scalar case, the computation being as follows, provided that everything converges under the norm, which amounts in saying that :
(2) Assuming , let us pick such that:
We have then the following estimate:
Thus we have , and so , as desired.
(3) In the scalar case, the derivative of is . In the present normed space setting the derivative is no longer a number, but rather a linear transformation, which can be found by developing at order 1, as follows:
Thus is indeed differentiable, with derivative . ∎
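The series inversion formula used in (1), the Neumann series, can be checked numerically; here is a minimal Python/NumPy sketch, with the contraction factor 0.4 and the number of terms being arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((4, 4))
T = 0.4 * T / np.linalg.norm(T, 2)   # rescale so that ||T|| = 0.4 < 1

# Partial sums of 1 + T + T^2 + ... converge to (1 - T)^{-1}.
S = np.eye(4)
term = np.eye(4)
for _ in range(50):
    term = term @ T
    S = S + term

print(np.allclose(S, np.linalg.inv(np.eye(4) - T)))   # True
```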
We can now formulate our first theorem about spectra, as follows:
Theorem 3.5.
The spectrum of a bounded operator is:
-
(1)
Compact.
-
(2)
Contained in the disc .
-
(3)
Non-empty.
Proof.
This can be proved by using Proposition 3.4, along with a bit of complex and functional analysis, for which we refer to Rudin [rud] and Lax [lax], as follows:
(1) In view of (2) below, it is enough to prove that is closed. But this follows from the following computation, with being small:
(2) This follows from the following computation:
(3) Assume by contradiction . Given a linear form , consider the following map, which is well-defined, due to our assumption :
By using the fact that is differentiable, that we know from Proposition 3.4, we conclude that this map is differentiable, and so holomorphic. Also, we have:
Thus by the Liouville theorem we obtain . But, in view of the definition of , this gives , which is a contradiction, as desired. ∎
Here is now a second basic result regarding the spectra, inspired from what happens in finite dimensions, for the usual complex matrices, and which shows that things do not necessarily extend without troubles to the infinite dimensional setting:
Theorem 3.6.
We have the following formula, valid for any operators :
In finite dimensions we have , but this fails in infinite dimensions.
Proof.
There are several assertions here, the idea being as follows:
(1) This is something that we know in finite dimensions, coming from the fact that the characteristic polynomials of the associated matrices coincide:
Thus we obtain in this case, as claimed. Observe that this improves the general formula in the statement in two ways, first because we have no issues at 0, and second because what we obtain is actually an equality of sets with multiplicities.
(2) In general now, let us first prove the main assertion, stating that coincide outside 0. We first prove that we have the following implication:
Assume indeed that is invertible, with inverse denoted :
We have then the following formulae, relating our variables :
By using , we have the following computation:
A similar computation, using , shows that we have:
Thus is invertible, with inverse , which proves our claim. Now by multiplying by scalars, we deduce from this that for any we have:
But this leads to the conclusion in the statement.
(3) Regarding now the counterexample to the formula , in general, let us take to be the shift on , given by the following formula:
As for , we can take it to be the adjoint of , which is the following operator:
Let us compose now these two operators. In one sense, we have:
In the other sense, however, the situation is different, as follows:
Thus, the spectra do not match on , and we have our counterexample, as desired. ∎
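In finite dimensions the stronger statement, equality of the spectra of ST and TS with multiplicities, can be tested directly; the following Python/NumPy lines are only such a test, on random matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
S = rng.standard_normal((5, 5))
T = rng.standard_normal((5, 5))

# The eigenvalues of ST and TS coincide, with multiplicities.
print(np.sort_complex(np.linalg.eigvals(S @ T)))
print(np.sort_complex(np.linalg.eigvals(T @ S)))   # same list, up to rounding
```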
3b. Spectral radius
Let us develop now some systematic theory for the computation of the spectra, based on what we know about the eigenvalues of the usual complex matrices. As a first result, which is well-known for the usual matrices, and extends well, we have:
Theorem 3.7.
We have the “polynomial functional calculus” formula
valid for any polynomial , and any operator .
Proof.
We pick a scalar , and we decompose the polynomial :
We have then the following equivalences:
Thus, we are led to the formula in the statement. ∎
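In finite dimensions, where the spectrum is the set of eigenvalues, the above polynomial functional calculus formula can be illustrated as follows, in Python/NumPy, with the polynomial P being an arbitrary sample choice:

```python
import numpy as np

rng = np.random.default_rng(6)
T = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

p = lambda z: z**3 - 2 * z + 1            # a sample polynomial P
P_of_T = T @ T @ T - 2 * T + np.eye(5)    # the operator P(T)

# The eigenvalues of P(T) are the values P(lambda), over the eigenvalues lambda of T.
print(np.sort_complex(np.linalg.eigvals(P_of_T)))
print(np.sort_complex(p(np.linalg.eigvals(T))))   # same list, up to rounding
```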
The above result is something very useful, and generalizing it will be our next task. As a first ingredient here, assuming that is invertible, we have:
It is possible to extend this formula to the arbitrary operators, and we will do this in a moment. Before starting, however, we have to think in advance on how to unify this potential result, that we have in mind, with Theorem 3.7 itself.
What we have to do here is to find a class of functions generalizing both the polynomials and the inverse function , and the answer to this question is provided by the rational functions, which are as follows:
Definition 3.8.
A rational function is a quotient of polynomials:
Assuming that are prime to each other, we can regard as a usual function,
with being the set of zeros of , also called poles of .
Here the term “poles” comes from the fact that, if you want to imagine the graph of such a rational function , in two complex dimensions, what you get is some sort of tent, supported by poles of infinite height, situated at the zeros of . For more on all this, and on complex analysis in general, we refer as usual to Rudin [rud]. Although a look at an abstract algebra book can be interesting as well.
Now that we have our class of functions, the next step consists in applying them to operators. Here we cannot expect to make sense for any and any , for instance because is defined only when is invertible. We are led in this way to:
Definition 3.9.
Given an operator , and a rational function having poles outside , we can construct the following operator,
that we can denote as a usual fraction, as follows,
due to the fact that commute, so that the order is irrelevant.
To be more precise, is indeed well-defined, and the fraction notation is justified too. In more formal terms, we can say that we have a morphism of complex algebras as follows, with standing for the rational functions having poles outside :
Summarizing, we have now a good class of functions, generalizing both the polynomials and the inverse map . We can now extend Theorem 3.7, as follows:
Theorem 3.10.
We have the “rational functional calculus” formula
valid for any rational function having poles outside .
Proof.
We pick a scalar , we write , and we set:
By using now Theorem 3.7, for this polynomial, we obtain:
Thus, we are led to the formula in the statement. ∎
As an application of the above methods, we can investigate certain special classes of operators, such as the self-adjoint ones, and the unitary ones. Let us start with:
Proposition 3.11.
The following happen:
-
(1)
We have , for any .
-
(2)
If then satisfies .
-
(3)
If then satisfies .
Proof.
We have several assertions here, the idea being as follows:
(1) The spectrum of the adjoint operator can be computed as follows:
(2) This is clear indeed from (1).
(3) For a unitary operator, , Theorem 3.10 and (1) give:
Thus, we are led to the conclusion in the statement. ∎
In analogy with what happens for the usual matrices, we would like to improve now (2,3) above, with results stating that the spectrum satisfies for self-adjoints, and for unitaries. This will be tricky. Let us start with:
Theorem 3.12.
The spectrum of a unitary operator
is on the unit circle, .
Proof.
Assuming , we have the following norm computation:
Now if we denote by the unit disk, we obtain from this:
On the other hand, once again by using , we have as well:
Thus, as before with being the unit disk in the complex plane, we have:
Now by using Theorem 3.10, we obtain , as desired. ∎
We have as well a similar result for self-adjoints, as follows:
Theorem 3.13.
The spectrum of a self-adjoint operator
consists of real numbers, .
Proof.
The idea is that we can deduce the result from Theorem 3.12, by using the following remarkable rational function, depending on a parameter :
Indeed, for the operator is well-defined, and we have:
Thus is unitary, and by using Theorem 3.12 we obtain:
Thus, we are led to the conclusion in the statement. ∎
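Both statements, real spectrum for self-adjoints and unit circle spectrum for unitaries, are easy to test in finite dimensions; here is a short Python/NumPy sketch, with the random matrix and the QR trick being illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))

# A self-adjoint matrix has real eigenvalues.
H = (A + A.conj().T) / 2
print(np.max(np.abs(np.linalg.eigvals(H).imag)))   # ~ 0

# A unitary matrix, here obtained from a QR decomposition, has eigenvalues of modulus 1.
U, _ = np.linalg.qr(A)
print(np.abs(np.linalg.eigvals(U)))                # all ~ 1
```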
As a theoretical remark, it is possible to deduce as well Theorem 3.12 from Theorem 3.13, by performing the above computation in the other sense. Indeed, by assuming that Theorem 3.13 holds indeed, and starting with a unitary , we obtain:
As a conclusion now, we have so far a beginning of spectral theory, with results allowing us to investigate the unitaries and the self-adjoints, and with the remark that these two classes of operators are related by a certain wizarding rational function, namely:
Let us keep building on this, with more complex analysis involved. One key thing that we know about matrices, and which follows for instance by using the fact that the diagonalizable matrices are dense, is the following formula:
We would like to have such formulae for the general operators , but this is something quite technical. Consider the rational calculus morphism from Definition 3.9, which is as follows, with the exponent standing for “having poles outside ”:
As mentioned before, the rational functions are holomorphic outside their poles, and this raises the question of extending this morphism, as follows:
Normally this can be done in several steps. Let us start with:
Proposition 3.14.
We can exponentiate any operator , by setting:
Similarly, we can define , for any holomorphic function .
Proof.
We must prove that the series defining converges, and this follows from:
The case of the arbitrary holomorphic functions is similar. ∎
In general, the holomorphic functions are not entire, and the above method won’t cover the rational functions that we want to generalize. Thus, we must use something else. And the answer here comes from the Cauchy formula:
Indeed, given a rational function , the operator , constructed in Definition 3.9, can be recaptured in an analytic way, as follows:
Now given an arbitrary function , we can define by the exactly same formula, and we obtain in this way the desired correspondence:
This was for the plan. In practice now, all this needs a bit of care, with many verifications needed, and with the technical remark that a winding number must be added to the above Cauchy formulae, for things to be correct. The result is as follows:
Theorem 3.15.
We have the “holomorphic functional calculus” formula
valid for any holomorphic function .
Proof.
This is something that we will not really need, for the purposes of the present book, which is more algebraic than analytic, but here is the general idea:
(1) As explained above, given a rational function , the corresponding operator can be recaptured in an analytic way, as follows:
(2) Now given an arbitrary function , we can define by the exactly same formula, and we obtain in this way the desired correspondence:
(3) In practice now, all this needs a bit of care, notably with the verification of the fact that the operator does not depend on , and with the technical remark that a winding number must be added to the above Cauchy formulae, for things to be correct. But this can be done via a standard study, keeping in mind the fact that in the case , where our operators are usual numbers, , what we want to do is simply to prove that the usual Cauchy formula indeed holds.
(4) Now with this correspondence constructed, and so with the formula in the statement, namely , making now sense, it remains to prove that this formula holds indeed. But this follows as well via a careful use of the Cauchy formula, or by using approximation by polynomials, or rational functions. ∎
As already said, the above result is important for advanced operator theory and applications, and we will not get further into this subject. We will be back, however, to all this in the special case of the normal operators, which is of particular interest for us.
In order to formulate now our next result, we will need the following notion:
Definition 3.16.
Given an operator , its spectral radius
is the radius of the smallest disk centered at containing .
Here we have included for convenience a number of basic results from Theorem 3.5, namely the fact that the spectrum is non-empty, and is contained in the disk , which provide us respectively with the inequalities , with the usual convention , and . Now with this notion in hand, we have the following key result, improving our key result so far, namely , from Theorem 3.5:
Theorem 3.17.
The spectral radius of an operator is given by
and in this formula, we can replace the limit by an inf.
Proof.
We have several things to be proved, the idea being as follows:
(1) Our first claim is that the numbers satisfy:
Indeed, we have the following estimate, using the Young inequality , with exponents and :
(2) Our second claim is that the second assertion holds, namely:
For this purpose, we just need the inequality found in (1). Indeed, fix , let , and write with . By using twice , we get:
It follows that we have , which proves our claim.
(3) Summarizing, we are left with proving the main formula, which is as follows, and with the remark that we already know that the sequence on the right converges:
In one sense, we can use the polynomial calculus formula . Indeed, this gives the following estimate, valid for any , as desired:
(4) For the reverse inequality, we fix a number , and we want to prove that we have . By using the Cauchy formula, we have:
By applying the norm we obtain from this formula:
Since the sup does not depend on , by taking -th roots, we obtain in the limit:
Now recall that was by definition an arbitrary number satisfying . Thus, we have obtained the following estimate, valid for any :
Thus, we are led to the conclusion in the statement. ∎
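The spectral radius formula can be watched at work numerically; in the following Python/NumPy sketch the spectral radius is computed from the eigenvalues, and then approached by the quantities ||T^n||^{1/n}, whose convergence is in general slow:

```python
import numpy as np

rng = np.random.default_rng(8)
T = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

rho = np.max(np.abs(np.linalg.eigvals(T)))   # spectral radius, via the eigenvalues

Tn = np.eye(5, dtype=complex)
for n in range(1, 41):
    Tn = Tn @ T
    if n in (5, 10, 20, 40):
        print(n, np.linalg.norm(Tn, 2) ** (1 / n))   # ||T^n||^(1/n), decreasing to rho
print("spectral radius:", rho)
```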
In the case of the normal elements, we have the following finer result:
Theorem 3.18.
The spectral radius of a normal element,
is equal to its norm.
Proof.
We can proceed in two steps, as follows:
Step 1. In the case we have for any exponent of the form , by using the formula , and by taking -th roots we get:
Thus, we are done with the self-adjoint case, with the result .
Step 2. In the general normal case we have , and by using this, along with the result from Step 1, applied to , we obtain:
Thus, we are led to the conclusion in the statement. ∎
As a first comment, the spectral radius formula does not hold in general, the simplest counterexample being the following non-normal matrix:
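The displayed matrix is missing above; the standard example of this kind is the 2×2 nilpotent Jordan block, and assuming this is the intended one, its failure to satisfy the spectral radius formula can be checked as follows:

```python
import numpy as np

# The standard non-normal example: a nilpotent Jordan block.
J = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(np.linalg.eigvals(J))              # spectrum {0}, so spectral radius 0
print(np.linalg.norm(J, 2))              # operator norm 1
print(np.allclose(J @ J.T, J.T @ J))     # J is not normal: False
```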
As another comment, we can combine the formula for normal operators with the formula , and we are led to the following statement:
Theorem 3.19.
The norm of is given by
and so is a purely algebraic quantity.
Proof.
We have the following computation, using the formula , then the spectral radius formula for , and finally the definition of the spectral radius:
Thus, we are led to the conclusion in the statement. ∎
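Here is the corresponding numerical check, in Python/NumPy: the operator norm of a random matrix is recovered, up to rounding, as the square root of the spectral radius of T*T:

```python
import numpy as np

rng = np.random.default_rng(9)
T = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))

rho = np.max(np.abs(np.linalg.eigvals(T.conj().T @ T)))   # spectral radius of T*T
print(np.sqrt(rho))
print(np.linalg.norm(T, 2))   # the two numbers agree
```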
The above result is quite interesting, philosophically speaking. We will be back to this, with further results and comments on , and other algebras of the same type.
3c. Normal operators
By using Theorem 3.18 we can say a number of non-trivial things concerning the normal operators, commonly known as “spectral theorem for normal operators”. As a first result here, we can improve the polynomial functional calculus formula:
Theorem 3.20.
Given normal, we have a morphism of algebras
having the properties , and .
Proof.
This is an improvement of Theorem 3.7 in the normal case, with the extra assertion being the norm estimate. But the element being normal, we can apply to it the spectral radius formula for normal elements, and we obtain:
Thus, we are led to the conclusions in the statement. ∎
We can improve as well the rational calculus formula, and the holomorphic calculus formula, in the same way. Importantly now, at a more advanced level, we have:
Theorem 3.21.
Given normal, we have a morphism of algebras
which is isometric, , and has the property .
Proof.
The idea here is to “complete” the morphism in Theorem 3.20, namely:
Indeed, we know from Theorem 3.20 that this morphism is continuous, and is in fact isometric, when regarding the polynomials as functions on :
We conclude from this that we have a unique isometric extension, as follows:
It remains to prove , and we can do this by double inclusion:
“” Given a continuous function , we must prove that we have:
For this purpose, consider the following function, which is well-defined:
We can therefore apply this function to , and we obtain:
In particular is invertible, so , as desired.
“” Given a continuous function , we must prove that we have:
But this is the same as proving that we have:
For this purpose, we approximate our function by polynomials, , and we examine the following convergence, which follows from :
We know from polynomial functional calculus that we have:
Thus, the operators are not invertible. On the other hand, we know that the set formed by the invertible operators is open, so its complement is closed. Thus the limit is not invertible either, and so , as desired. ∎
As an important comment, Theorem 3.21 is not exactly in final form, because it misses an important point, namely that our correspondence maps:
However, this is something non-trivial, and we will be back to this later. Observe however that Theorem 3.21 is fully powerful for the self-adjoint operators, , where the spectrum is real, and so where on the spectrum. We will be back to this.
As a second result now, along the same lines, we can further extend Theorem 3.21 into a measurable functional calculus theorem, as follows:
Theorem 3.22.
Given normal, we have a morphism of algebras as follows, with standing for abstract measurable functions, or Borel functions,
which is isometric, , and has the property .
Proof.
As before, the idea will be that of “completing” what we have. To be more precise, we can use the Riesz theorem and a polarization trick, as follows:
(1) Given a vector , consider the following functional:
By the Riesz theorem, this functional must be the integration with respect to a certain measure on the space . Thus, we have a formula as follows:
Now given an arbitrary Borel function , as in the statement, we can define a number , by using exactly the same formula, namely:
Thus, we have managed to define numbers , for all vectors , and in addition we can recover these numbers as follows, with :
(2) In order to define now numbers , for all vectors , we can use a polarization trick. Indeed, for any operator we have:
By replacing , we have as well the following formula:
By multiplying this latter formula by , we obtain the following formula:
Now by summing this latter formula with the first one, we obtain:
(3) But with this, we can now finish. Indeed, by combining (1,2), given a Borel function , we can define numbers for any , and it is routine to check, by using approximation by continuous functions as in (1), that we obtain in this way an operator , having all the desired properties. ∎
The same comments as before apply. Theorem 3.22 is not exactly in final form, because it misses an important point, namely that our correspondence maps:
However, this is something non-trivial, and we will be back to this later. Observe however that Theorem 3.22 is fully powerful for the self-adjoint operators, , where the spectrum is real, and so where on the spectrum. We will be back to this.
As another comment, the above result and its proof provide us with more than a Borel functional calculus, because what we got is a certain measure on the spectrum , along with a functional calculus for the functions with respect to this measure. We will be back to this later, and for the moment we will only need Theorem 3.22 as formulated, with standing, a bit abusively, for the Borel functions on .
3d. Diagonalization
We can now diagonalize the normal operators. We will do this in 3 steps, first for the self-adjoint operators, then for the families of commuting self-adjoint operators, and finally for the general normal operators, by using a trick of the following type:
The diagonalization in infinite dimensions is more tricky than in finite dimensions, and instead of writing a formula of type , with being respectively unitary and diagonal, we will express our operator as , with being a certain unitary, and with being a certain diagonal operator.
This is indeed how the spectral theorem is best formulated, in view of applications. In practice, the explicit construction of , which will be actually rather part of the proof, is also needed. For the self-adjoint operators, the statement and proof are as follows:
Theorem 3.23.
Any self-adjoint operator can be diagonalized,
with being a unitary operator from to a certain space associated to , with being a certain function, once again associated to , and with
being the usual multiplication operator by , on the Hilbert space .
Proof.
The construction of can be done in several steps, as follows:
(1) We first prove the result in the special case where our operator has a cyclic vector , with this meaning that the following holds:
For this purpose, let us go back to the proof of Theorem 3.22. We will use the following formula from there, with being the measure on associated to :
Our claim is that we can define a unitary , first on the dense part spanned by the vectors , by the following formula, and then by continuity:
Indeed, the following computation shows that is well-defined, and isometric:
We can then extend by continuity into a unitary , as claimed. Now observe that we have the following formula:
Thus our result is proved in the present case, with as above, and with .
(2) We discuss now the general case. Our first claim is that has a decomposition as follows, with each being invariant under , and admitting a cyclic vector :
Indeed, this is something elementary, the construction being by recurrence in finite dimensions, in the obvious way, and by using the Zorn lemma in general. Now with this decomposition in hand, we can make a direct sum of the diagonalizations obtained in (1), for each of the restrictions , and we obtain the formula in the statement. ∎
We have the following technical generalization of the above result:
Theorem 3.24.
Any family of commuting self-adjoint operators can be jointly diagonalized,
with being a unitary operator from to a certain space associated to , with being certain functions, once again associated to , and with
being the usual multiplication operator by , on the Hilbert space .
Proof.
This is similar to the proof of Theorem 3.23, by suitably modifying the measurable calculus formula, and the measure itself, as to have this formula working for all the operators . With this modification done, everything extends. ∎
In order to discuss now the case of the arbitrary normal operators, we will need:
Proposition 3.25.
Any operator can be written as
with being self-adjoint, and this decomposition is unique.
Proof.
This is something elementary, the idea being as follows:
(1) As a first observation, in the case our operators are usual complex numbers, and the formula in the statement corresponds to the following basic fact:
(2) In general now, we can use the same formulae for the real and imaginary part as in the complex number case, the decomposition formula being as follows:
To be more precise, both the operators on the right are self-adjoint, and the summing formula holds indeed, and so we have our decomposition result, as desired.
(3) Regarding now the uniqueness, by linearity it is enough to show that with both self-adjoint implies . But this follows by applying the adjoint to , which gives , and so , as desired. ∎
As a comment here, the above result is just the “tip of the iceberg”, in what regards decomposition results for the operators , in analogy with decomposition results for the complex numbers . As a sample result here, improving Proposition 3.25, we can write any operator as a linear combination of 4 positive operators, by writing both as differences of positive operators. More on this later.
Good news, after all these preliminaries, which I hope you enjoyed as much as I did, we can finally discuss the case of arbitrary normal operators. We have here the following result, generalizing what we know from chapter 1 about the normal matrices:
Theorem 3.26.
Any normal operator can be diagonalized,
with being a unitary operator from to a certain space associated to , with being a certain function, once again associated to , and with
being the usual multiplication operator by , on the Hilbert space .
Proof.
This is our main diagonalization theorem, the idea being as follows:
(1) Consider the decomposition of into its real and imaginary parts, as constructed in the proof of Proposition 3.25, namely:
We know that the real and imaginary parts are self-adjoint operators. Now since was assumed to be normal, , these real and imaginary parts commute:
Thus Theorem 3.24 applies to these real and imaginary parts, and gives the result.
(2) Alternatively, we can use methods similar to those that we used in chapter 1, in order to deal with the usual normal matrices, involving the special relation between and the operator , which is self-adjoint. We will leave this as an instructive exercise. ∎
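In finite dimensions the above diagonalization is the familiar statement that a normal matrix is unitarily diagonalizable; here is a Python/NumPy sketch of this, with the normal matrix being manufactured on purpose, and with the eigenvalues being distinct almost surely:

```python
import numpy as np

rng = np.random.default_rng(10)

# Build a normal matrix A = U D U*, with D diagonal and U unitary.
U, _ = np.linalg.qr(rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5)))
d = rng.standard_normal(5) + 1j * rng.standard_normal(5)
A = U @ np.diag(d) @ U.conj().T

print(np.allclose(A @ A.conj().T, A.conj().T @ A))   # A is normal

# Diagonalize it back: A = V diag(w) V^{-1}, with V numerically unitary.
w, V = np.linalg.eig(A)
print(np.allclose(V @ np.diag(w) @ np.linalg.inv(V), A))
print(np.allclose(V.conj().T @ V, np.eye(5)))         # eigenvectors are orthonormal
```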
This was for our series of diagonalization theorems. There is of course one more result here, regarding the families of commuting normal operators, as follows:
Theorem 3.27.
Any family of commuting normal operators can be jointly diagonalized,
with being a unitary operator from to a certain space associated to , with being certain functions, once again associated to , and with
being the usual multiplication operator by , on the Hilbert space .
Proof.
This is similar to the proof of Theorem 3.24 and Theorem 3.26, by combining the arguments there. To be more precise, this follows as Theorem 3.24, by using the decomposition trick from the proof of Theorem 3.26. ∎
With the above diagonalization results in hand, we can now “fix” the continuous and measurable functional calculus theorems, with a key complement, as follows:
Theorem 3.28.
Given a normal operator , the following hold, for both the functional calculus and the measurable calculus morphisms:
-
(1)
These morphisms are -morphisms.
-
(2)
The function gets mapped to .
-
(3)
The functions get mapped to .
-
(4)
The function gets mapped to .
-
(5)
If is real, then is self-adjoint.
Proof.
These assertions are more or less equivalent, with (1) being the main one, which obviously implies everything else. But this assertion (1) follows from the diagonalization result for normal operators, from Theorem 3.26. ∎
This was for the spectral theory of arbitrary and normal operators, or at least for the basics of this theory. As a conclusion here, our main results are as follows:
-
(1)
Regarding the arbitrary operators, the main results here, or rather the most advanced results, are the holomorphic calculus formula from Theorem 3.15, and the spectral radius estimate from Theorem 3.17.
-
(2)
For the self-adjoint operators, the main results are the spectral radius formula from Theorem 3.18, the measurable calculus formula from Theorem 3.22, and the diagonalization result from Theorem 3.23.
-
(3)
For general normal operators, the main results are the spectral radius formula from Theorem 3.18, the measurable calculus formula from Theorem 3.22, complemented by Theorem 3.28, and the diagonalization result in Theorem 3.26.
There are of course many other things that can be said about the spectral theory of the bounded operators , and on that of the unbounded operators too. As a complement, we recommend any good operator theory book, with the comment however that there is a bewildering choice here, depending on taste, and on what exactly you want to do with your operators . In what concerns us, who are rather into general quantum mechanics, but with our operators being bounded, good choices are the functional analysis book of Lax [lax], or the operator algebra book of Blackadar [bla].
3e. Exercises
The main theoretical notion introduced in this chapter was that of the spectrum of an operator, and as a first exercise here, we have:
Exercise 3.29.
Prove that for the usual matrices we have
where denotes the set of eigenvalues, taken with multiplicities.
As a remark, we have seen in the above that holds outside , and the equality on holds as well, because is invertible if and only if is invertible. However, in what regards the eigenvalues taken with multiplicities, things are more tricky here, and the answer should be somewhere inside your linear algebra knowledge.
Exercise 3.30.
Clarify, with examples and counterexamples, the relation between the eigenvalues of an operator , and its spectrum .
Here, as usual, the counterexamples could only come from the shift operator , on the space . As a bonus exercise here, try computing the spectrum of .
Exercise 3.31.
Draw the picture of the following function, and of its inverse,
with , and prove that for and , the element is well-defined.
This is something that we used in the above, when computing spectra of self-adjoints and unitaries, and the problem is that of working out all the details.
Exercise 3.32.
Comment on the spectral radius theorem, stating that for a normal operator, , the spectral radius is equal to the norm,
with examples and counterexamples, and simpler proofs as well, in various particular cases of interest, such as the finite dimensional one.
This is of course something a bit philosophical, but the spectral radius theorem being our key technical result so far, some further thinking on it is definitely a good thing.
Exercise 3.33.
Develop a theory of -algebras for which the quantity
defines a norm, for the elements .
As pointed out in the above, the spectral radius formula shows that for the norm is given by the above formula, and so there should be such a theory of “good” -algebras, with as a main example. However, this is tricky.
Exercise 3.34.
Find and write down a proof for the spectral theorem for normal operators in the spirit of the proof for normal matrices from chapter 1, and vice versa.
To be more precise, the problem is that the proof of the spectral theorem for the usual matrices, from chapter 1, was using a certain kind of trick, while the proof of the spectral theorem for the arbitrary operators, given in this chapter, was using some other kind of trick. Thus, for fully understanding all this, working out more proofs, both for the usual matrices and for the arbitrary operators, is a useful thing.
Exercise 3.35.
Find and write down an enhancement of the proof given above for the spectral theorem, as for to appear way before the end of the proof.
This is something a bit philosophical, and check here first the various comments made above, and maybe work out this as well in parallel with the previous exercise.
Chapter 4 Compact operators
4a. Polar decomposition
We have seen so far the basic theory of bounded operators, in the arbitrary, normal and self-adjoint cases, and in a few other cases of interest. In this chapter we discuss a number of more specialized questions, for the most dealing with the compact operators, which are particularly close, conceptually speaking, to the usual complex matrices.
We have in fact considerably many interesting things that we can talk about, in this final chapter on operator theory, and our choices will be as follows:
(1) Before anything, at the general level, we would like to understand the matrix and operator theory analogues of the various things that we know about the complex numbers , such as , or and so on. We will discuss this first.
(2) Then, motivated by advanced linear algebra, we will go on a lengthy discussion on the algebra of compact operators , which for many advanced operator theory purposes is the correct generalization of the matrix algebra .
(3) Our discussion on the compact operators will feature as well some more specialized types of operators, , with being the finite rank ones, being the trace class ones, and being the Hilbert-Schmidt ones.
And that is pretty much it, all basic things, that must be known. Of course this will be just the tip of the iceberg, and more of an introduction to modern operator theory.
Getting started now, we would first like to systematically develop the theory of positive operators, and then establish polar decomposition results for the operators . We first have the following result, improving our knowledge from chapter 2:
Theorem 4.1.
For an operator , the following are equivalent:
-
(1)
, for any .
-
(2)
is normal, and .
-
(3)
, for some satisfying .
-
(4)
, for some .
If these conditions are satisfied, we call positive, and write .
Proof.
We have already seen some implications in chapter 2, but the best is to forget the few partial results that we know, and prove everything, as follows:
Assuming , with we have:
The next step is to use a polarization trick, as follows:
Thus we must have , and with we obtain too, and so . Thus , which gives . Now since is self-adjoint, it is normal as claimed. Moreover, by self-adjointness, we have:
In order to prove now that we have indeed , as claimed, we must invert , for any . For this purpose, observe that we have:
But this shows that is injective. In order to prove now the surjectivity, and the boundedness of the inverse, observe first that we have:
Thus is dense. On the other hand, observe that we have:
Thus for any vector in the image we have:
As a conclusion to what we have so far, is bijective and invertible as a bounded operator from onto its image, with the following norm bound:
But this shows that is complete, hence closed, and since we already knew that is dense, our operator is surjective, and we are done.
Since is normal, and with spectrum contained in , we can use the continuous functional calculus formula for the normal operators from chapter 3, with the function , as to construct a square root .
This is trivial, because we can set .
This is clear, because we have the following computation:
Thus, we have the equivalences in the statement. ∎
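The implication from positivity to the existence of a square root, via the functional calculus, amounts in finite dimensions to applying the square root to the eigenvalues; here is a Python/NumPy sketch of this, on a randomly generated positive matrix:

```python
import numpy as np

rng = np.random.default_rng(11)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
T = A.conj().T @ A                        # a positive operator, T = A*A

# Square root via functional calculus: apply sqrt to the eigenvalues of T.
w, V = np.linalg.eigh(T)                  # T = V diag(w) V*, with w >= 0
R = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

print(np.allclose(R @ R, T))              # R^2 = T
print(np.allclose(R, R.conj().T))         # R is self-adjoint, in fact positive
```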
In analogy with what happens in finite dimensions, where among the positive matrices we have the strictly positive ones, , given by the fact that the eigenvalues are strictly positive, we have as well a “strict” version of the above result, as follows:
Theorem 4.2.
For an operator , the following are equivalent:
-
(1)
is positive and invertible.
-
(2)
is normal, and .
-
(3)
, for some invertible, satisfying .
-
(4)
, for some invertible.
If these conditions are satisfied, we call strictly positive, and write .
Proof.
Our claim is that the above conditions (1-4) are precisely the conditions (1-4) in Theorem 4.1, with the assumption “ is invertible” added. Indeed:
(1) This is clear by definition.
(2) In the context of Theorem 4.1 (2), namely when is normal, and , the invertibility of , which means , gives , as desired.
(3) In the context of Theorem 4.1 (3), namely when , with , by using the basic properties of the functional calculus for normal operators, the invertibility of is equivalent to the invertibility of its square root , as desired.
(4) In the context of Theorem 4.1 (4), namely when , the invertibility of is equivalent to the invertibility of . This can be either checked directly, or deduced via the equivalence from Theorem 4.1, by using the above argument (3). ∎
As a subtlety now, we have the following complement to the above result:
Proposition 4.3.
For a strictly positive operator, , we have
but the converse of this fact is not true, unless we are in finite dimensions.
Proof.
We have several things to be proved, the idea being as follows:
(1) Regarding the main assertion, the inequality can be deduced as follows, by using the fact that the operator is invertible, and in particular injective:
(2) In finite dimensions, assuming for any , we know from Theorem 4.1 that we have . Thus we have , and assuming by contradiction , we obtain that has as eigenvalue, and the corresponding eigenvector has the property , contradiction. Thus , as claimed.
(3) Regarding now the counterexample, consider the following operator on :
This operator is well-defined and bounded, and we have for any . However is not invertible, and so the converse does not hold, as stated. ∎
With this done, let us discuss now some decomposition results for the bounded operators . We know that any can be written as follows, with :
Also, we know that both the real and imaginary parts , and more generally any real number , can be written as follows, with :
Here are the operator theoretic generalizations of these results:
Proposition 4.4.
Given an operator , the following happen:
(1) We can write , with being self-adjoint.
(2) When , we can write , with being positive.
(3) Thus, we can write any as a linear combination of positive elements.
Proof.
All this follows from basic spectral theory, as follows:
(1) This is something that we have already met in chapter 3, when proving the spectral theorem in its general form, the decomposition formula being as follows:
(2) This follows from the measurable functional calculus. Indeed, assuming we have , so we can use the following decomposition formula on :
To be more precise, let us multiply by , and rewrite this formula as follows:
Now by applying these measurable functions to , we obtain a formula as follows, with both the operators being positive, as desired:
(3) This follows indeed by combining the results in (1) and (2) above. ∎
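For the record, here is a sketch of the decompositions used above, with standard notations; the positive and negative part functions $f_\pm$ below are notational assumptions, not taken from the statement:
$$T=\frac{T+T^*}{2}+i\cdot\frac{T-T^*}{2i},\qquad T=T^*\implies T=f_+(T)-f_-(T),\quad f_\pm(x)=\max(\pm x,0)$$
In particular, any $T\in B(H)$ should appear as a combination $T=P_1-P_2+iP_3-iP_4$ of four positive operators, obtained by applying the second formula to the self-adjoint operators $\frac{T+T^*}{2}$ and $\frac{T-T^*}{2i}$.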
Going ahead with our decomposition results, another basic thing that we know about complex numbers is that any appears as a real multiple of a unitary:
Finding the correct operator theoretic analogue of this is quite tricky, and this even for the usual matrices . As a basic result here, we have:
Proposition 4.5.
Given an operator , the following happen:
(1) When and , we can write as an average of unitaries:
(2) In the general case, we can write as a rescaled sum of unitaries:
(3) Thus, in general, we can write as a rescaled sum of unitaries.
Proof.
This follows from the results that we have, as follows:
(1) Assuming and we have , and the decomposition that we are looking for is as follows, with both the components being unitaries:
To be more precise, the square root can be extracted as in Theorem 4.1 (3), and the check of the unitarity of the components goes as follows:
(2) This simply follows by applying (1) to the operator .
(3) Assuming first , we know from Proposition 4.4 (1) that we can write , with being self-adjoint, and satisfying . Now by applying (1) to both and , we obtain a decomposition of as follows:
In general, we can apply this to the operator , and we obtain the result. ∎
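As a hedged sketch of the key trick in (1), under the assumptions $T=T^*$ and $\|T\|\leq1$, the average of unitaries can presumably be taken as follows, with the square root extracted as in Theorem 4.1:
$$T=\frac{U+U^*}{2},\qquad U=T+i\sqrt{1-T^2},\qquad UU^*=U^*U=T^2+(1-T^2)=1$$
Applying this to the real and imaginary parts of an arbitrary operator, suitably rescaled, then gives the writing of any $T\in B(H)$ as a rescaled sum of 4 unitaries.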
All this gets us into the multiplicative theory of the complex numbers, that we will attempt to generalize now. As a first construction, that we would like to generalize to the bounded operator setting, we have the construction of the modulus, as follows:
The point now is that we can indeed generalize this construction, as follows:
Proposition 4.6.
Given an operator , we can construct a positive operator as follows, by using the fact that is positive:
The square of this operator is then . In the case , we obtain in this way the usual absolute value of the complex numbers:
More generally, in the case where is finite dimensional, we obtain in this way the usual moduli of the complex matrices .
Proof.
We have several things to be proved, the idea being as follows:
(1) The first assertion follows from Theorem 4.1. Indeed, according to (4) there the operator is indeed positive, and then according to (2) there we can extract the square root of this latter positive operator, by applying to it the function .
(2) By functional calculus we have then , as desired.
(3) In the case , we obtain indeed the absolute value of complex numbers.
(4) In the case where the space is finite dimensional, , we obtain indeed the usual moduli of the complex matrices . ∎
As a comment here, it is possible to talk as well about , which is in general different from . Note that when is normal there is no issue, because we have:
Regarding now the polar decomposition formula, let us start with a weak version of this statement, regarding the invertible operators, as follows:
Theorem 4.7.
We have the polar decomposition formula
with being a unitary, for any invertible.
Proof.
According to our definition of the modulus, , we have:
Thus we can define a unitary operator by the following formula:
But this formula shows that we have , as desired. ∎
Observe that we have uniqueness in the above result, in what regards the choice of the unitary , due to the fact that we can write this unitary as follows:
More generally now, we have the following result:
Theorem 4.8.
We have the polar decomposition formula
with being a partial isometry, for any .
Proof.
As before, we have the following equality, for any two vectors :
We conclude that the following linear application is well-defined, and isometric:
Now by continuity we can extend this isometry into an isometry between certain Hilbert subspaces of , as follows:
Moreover, we can further extend into a partial isometry , by setting , for any , and with this convention, the result follows. ∎
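As a numerical illustration of the polar decomposition, in the simplest case of the complex matrices, here is a minimal sketch in Python, using NumPy and SciPy; the example matrix is an arbitrary choice, and the routines scipy.linalg.sqrtm and scipy.linalg.polar are used for the square root and for the library version of the decomposition.

```python
import numpy as np
from scipy.linalg import sqrtm, polar

# an arbitrary invertible matrix, playing the role of the operator T
T = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# the modulus |T| = sqrt(T* T), via the matrix square root
absT = np.real_if_close(sqrtm(T.conj().T @ T))

# the unitary part U = T |T|^{-1}, well-defined here since T is invertible
U = T @ np.linalg.inv(absT)

# checks: T = U |T|, with U unitary
assert np.allclose(T, U @ absT)
assert np.allclose(U.conj().T @ U, np.eye(2))

# the same decomposition, via the library routine (right polar form, T = U P)
U2, P2 = polar(T, side='right')
assert np.allclose(T, U2 @ P2)
assert np.allclose(P2, absT)
```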
4b. Compact operators
We have seen so far the basic theory of the bounded operators, in the arbitrary, normal and self-adjoint cases, and in a few other cases of interest. We will keep building on this, with a number of more specialized results, regarding the finite rank operators and compact operators, and other special classes of related operators, namely the trace class operators, and the Hilbert-Schmidt operators. Let us start with a basic definition, as follows:
Definition 4.9.
An operator is said to be of finite rank if its image
is finite dimensional. The set of such operators is denoted .
There are many interesting examples of finite rank operators, the most basic ones being the finite rank projections, on the finite dimensional subspaces . Observe also that in the case where is finite dimensional, any operator is automatically of finite rank. In general, this is of course wrong, but we have the following result:
Proposition 4.10.
The set of finite rank operators
is a two-sided -ideal.
Proof.
We have several assertions to be proved, the idea being as follows:
(1) It is clear from definitions that is indeed a vector space, with this due to the following formulae, valid for any , which are both clear:
(2) Let us prove now that is stable under . Given , we can regard it as an invertible operator between finite dimensional Hilbert spaces, as follows:
We conclude from this that we have the following dimension equality:
Our claim now, in relation with our problem, is that we have equalities as follows:
Indeed, the third equality is the one above, and the second equality is something that we know too, from chapter 2. Now by combining these two equalities we deduce that is finite dimensional, and so the first equality holds as well. Thus, our equalities are proved, and this shows that we have , as desired.
(3) Finally, regarding the ideal property, this follows from the following two formulae, valid for any , which are once again clear from definitions:
Thus, we are led to the conclusion in the statement. ∎
Let us discuss now the compact operators, which will be the main topic of discussion, for the present chapter. These are best introduced as follows:
Definition 4.11.
An operator is said to be compact if the closed set
is compact, where is the unit ball. The set of such operators is denoted .
Equivalently, an operator is compact when for any sequence , or more generally for any bounded sequence , the sequence has a convergent subsequence. We will see later some further criteria of compactness.
In finite dimensions any operator is compact. In general, as a first observation, any finite rank operator is compact. We have in fact the following result:
Proposition 4.12.
Any finite rank operator is compact,
and the finite rank operators are dense inside the compact operators.
Proof.
The first assertion is clear, because if is finite dimensional, then the following subset is closed and bounded, and so it is compact:
Regarding the second assertion, let us pick a compact operator , and a number . By compactness of we can find a finite set such that:
Consider now the orthogonal projection onto the following finite dimensional space:
Since the set is finite, this space is finite dimensional, and so is of finite rank, . Now observe that for any norm one and any we have:
Now by picking such that the ball covers the point , we conclude from this that we have the following estimate:
Thus we have , which gives the density result. ∎
Quite remarkably, the set of compact operators is closed, and we have:
Theorem 4.13.
The set of compact operators
is a closed two-sided -ideal.
Proof.
We have several assertions here, the idea being as follows:
(1) It is clear from definitions that is indeed a vector space, with this due to the following formulae, valid for any , which are both clear:
(2) In order to prove now that is closed, assume that a sequence converges to . Given , let us pick such that:
By compactness of we can find a finite set such that:
We conclude that for any there exists such that:
Thus, we have an inclusion as follows, with being finite:
But this shows that our limiting operator is compact, as desired.
(3) Regarding the fact that is stable under involution, this follows from Proposition 4.10, Proposition 4.12 and (2). Indeed, by using Proposition 4.12, given we can write it as a limit of finite rank operators, as follows:
Now by applying the adjoint, we obtain that we have as well:
We know from Proposition 4.10 that the operators are of finite rank, and so compact by Proposition 4.12, and by using (2) we obtain that is compact too, as desired.
(4) Finally, regarding the ideal property, this follows from the following two formulae, valid for any , which are once again clear from definitions:
Thus, we are led to the conclusion in the statement. ∎
Here is now a second key result regarding the compact operators:
Theorem 4.14.
A bounded operator is compact precisely when
for any orthonormal system .
Proof.
We have two implications to be proved, the idea being as follows:
“” Assume that is compact. By contradiction, assume . This means that there exists and a subsequence satisfying , and by replacing with this subsequence, we can assume that the following holds, with :
Since was assumed to be compact, and the sequence is bounded, a certain subsequence must converge. Thus, by replacing once again with a subsequence, we can assume that the following holds, with :
But this is a contradiction, because we obtain in this way:
Thus our assumption was wrong, and we obtain the result.
“” Assume , for any orthonormal system . In order to prove that is compact, we use the various results established above, which show that this is the same as proving that is in the closure of the space of finite rank operators:
We do this by contradiction. So, assume that the above is wrong, and so that there exists such that the following holds:
As a first observation, by using we obtain . Thus, we can find a norm one vector such that the following holds:
Our claim, which will bring the desired contradiction, is that we can construct by recurrence vectors such that the following holds, for any :
Indeed, assume that we have constructed such vectors . Let be the linear space spanned by these vectors, and let us set:
Since the operator has finite rank, our assumption above shows that we have:
Thus, we can find a vector such that the following holds:
We have then , and so we can consider the following nonzero vector:
With this nonzero vector constructed in this way, let us now set:
This vector is then orthogonal to , has norm one, and satisfies:
Thus we are done with our construction by recurrence, and this contradicts our assumption that , for any orthonormal system , as desired. ∎
Summarizing, we have so far a number of results regarding the compact operators, in analogy with what we know about the usual complex matrices. Let us discuss now the spectral theory of the compact operators. We first have the following result:
Proposition 4.15.
Assuming that , with , is compact and self-adjoint, the following happen:
(1) The eigenvalues of form a sequence .
(2) All eigenvalues have finite multiplicity.
Proof.
We prove both the assertions at the same time. For this purpose, we fix a number , we consider all the eigenvalues satisfying , and for each such eigenvalue we consider the corresponding eigenspace . Let us set:
Our claim, which will prove both (1) and (2), is that this space is finite dimensional. In order to prove this claim, we can proceed as follows:
(1) We know that we have . Our claim is that we have:
Indeed, assume that we have a sequence which converges, . Let us write , with . By definition of , the following condition is satisfied:
Now since the sequence is Cauchy we obtain from this that the sequence is Cauchy as well, and with we have , as desired.
(2) Consider now the projection onto the closure of the above vector space . The composition is then as follows, surjective on its target:
On the other hand since is compact so must be , and it follows from this that the space is finite dimensional. Thus itself must be finite dimensional too, and as explained in the beginning of the proof, this gives (1) and (2), as desired. ∎
In order to construct now eigenvalues, we will need:
Proposition 4.16.
If is compact and self-adjoint, one of the numbers
must be an eigenvalue of .
Proof.
We know from the spectral theory of the self-adjoint operators that the spectral radius of our operator is attained, and so one of the numbers must be in the spectrum. In order to prove now that one of these numbers must actually appear as an eigenvalue, we must use the compactness of , as follows:
(1) First, we can assume . By functional calculus this implies too, and so we can find a sequence of norm one vectors such that:
By using our assumption , we can rewrite this formula as follows:
Now since is compact, and is bounded, we can assume, up to changing the sequence to one of its subsequences, that the sequence converges:
Thus, the convergence formula found above reformulates as follows, with :
(2) Our claim now, which will finish the proof, is that this latter formula implies . Indeed, by using Cauchy-Schwarz and , we have:
We know that this must be an equality, so must be proportional. But since is self-adjoint the proportionality factor must be , and so we obtain, as claimed:
Thus, we have constructed an eigenvector for , as desired. ∎
We can further build on the above results in the following way:
Proposition 4.17.
If is compact and self-adjoint, there is an orthogonal basis of made of eigenvectors of .
Proof.
We use Proposition 4.15. According to the results there, we can arrange the nonzero eigenvalues of , taken with multiplicities, into a sequence . Let be the corresponding eigenvectors, and consider the following space:
The result follows then from the following observations:
(1) Since we have , both and its orthogonal are invariant under .
(2) On the space , our operator is by definition diagonal.
(3) On the space , our claim is that we have . Indeed, assuming that the restriction is nonzero, we can apply Proposition 4.16 to this restriction, and we obtain an eigenvalue for , and so for , contradicting the maximality of . ∎
With the above results in hand, we can now formulate a first spectral theory result for compact operators, which closes the discussion in the self-adjoint case:
Theorem 4.18.
Assuming that , with , is compact and self-adjoint, the following happen:
(1) The spectrum consists of a sequence .
(2) All spectral values are eigenvalues.
(3) All eigenvalues have finite multiplicity.
(4) There is an orthogonal basis of made of eigenvectors of .
Proof.
This follows from the various results established above:
(1) In view of Proposition 4.15 (1), this will follow from (2) below.
(2) Assume that belongs to the spectrum , but is not an eigenvalue. By using Proposition 4.17, let us pick an orthonormal basis of consisting of eigenvectors of , and then consider the following operator:
Then is an inverse for , and so we have , as desired.
(3) This is something that we know, from Proposition 4.15 (2).
(4) This is something that we know too, from Proposition 4.17. ∎
Finally, we have the following result, regarding the general case:
Theorem 4.19.
The compact operators , with , are the operators of the following form, with , being orthonormal families, and with :
The numbers , called singular values of , are the eigenvalues of . In fact, the polar decomposition of is given by , with
and with being given by , and on the complement of .
Proof.
This basically follows from Theorem 4.8 and Theorem 4.18, as follows:
(1) Given two orthonormal families , , and a sequence of real numbers , consider the linear operator given by the formula in the statement, namely:
Our first claim is that is bounded. Indeed, when assuming for any , which is something that we can do if we want to prove that is bounded, we have:
(2) The next observation is that this operator is indeed compact, because it appears as the norm limit, , of the following sequence of finite rank operators:
(3) Regarding now the polar decomposition assertion, for the above operator, this follows once again from definitions. Indeed, the adjoint is given by:
Thus, when composing with , we obtain the following operator:
Now by extracting the square root, we obtain the formula in the statement, namely:
(4) Conversely now, assume that is compact. Then , which is self-adjoint, must be compact as well, and so by Theorem 4.18 we have a formula as follows, with being a certain orthonormal family, and with :
By extracting the square root we obtain the formula of in the statement, and then by setting we obtain a second orthonormal family, , such that:
Thus, our compact operator appears indeed as in the statement. ∎
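As a hedged restatement of the singular value decomposition above, with $(e_n)$, $(f_n)$ orthonormal families and $\lambda_n\geq0$ the singular values, as in the proof, the various operators in Theorem 4.19 should act as follows:
$$Tx=\sum_n\lambda_n\langle x,e_n\rangle f_n,\qquad |T|x=\sum_n\lambda_n\langle x,e_n\rangle e_n,\qquad Ue_n=f_n$$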
As a technical remark here, it is possible to slightly improve a part of the above statement. Consider indeed an operator of the following form, with , being orthonormal families as before, and with being now complex numbers:
Then the same proof as before shows that is compact, and that the polar decomposition of is given by , with the modulus being as follows:
As for the partial isometry , this is given by , and on the complement of , where are such that .
4c. Trace class operators
We have not talked so far about the trace of operators , in analogy with the trace of the usual matrices . This is because the trace can be finite or infinite, or even not well-defined, and we will discuss this now. Let us start with:
Proposition 4.20.
Given a positive operator , the quantity
is independent of the choice of an orthonormal basis .
Proof.
If is another orthonormal basis, we have:
Since this quantity is symmetric in , this gives the result. ∎
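As a sketch of the quantity in Proposition 4.20 and of the symmetry argument in its proof, with $T\geq0$ and with $(e_n)$, $(f_m)$ being orthonormal bases, the computation is presumably as follows, via Parseval and the self-adjointness of $\sqrt T$:
$$\sum_n\langle Te_n,e_n\rangle=\sum_n\big\|\sqrt T\,e_n\big\|^2=\sum_{n,m}\big|\langle\sqrt T\,e_n,f_m\rangle\big|^2=\sum_{n,m}\big|\langle e_n,\sqrt T\,f_m\rangle\big|^2=\sum_m\langle Tf_m,f_m\rangle$$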
We can now introduce the trace class operators, as follows:
Definition 4.21.
An operator is said to be of trace class if:
The set of such operators, also called integrable, is denoted .
In finite dimensions, any operator is of course of trace class. In arbitrary dimension, finite or not, we first have the following result, regarding such operators:
Proposition 4.22.
Any finite rank operator is of trace class, and any trace class operator is compact, so that we have embeddings as follows:
Moreover, for any compact operator we have the formula
where are the singular values, and so precisely when .
Proof.
We have several assertions here, the idea being as follows:
(1) If is of finite rank, it is clearly of trace class.
(2) In order to prove now the second assertion, assume first that is of trace class. For any orthonormal basis we have:
But this shows that we have a convergence as follows:
Thus the operator is compact. Now since the compact operators form an ideal, it follows that is compact as well, as desired.
(3) In order to prove now the second assertion in general, assume that is of trace class. Then is also of trace class, and so compact by (2), and since we have by polar decomposition, it follows that is compact too.
(4) Finally, in order to prove the last assertion, assume that is compact. The singular value decomposition of , from Theorem 4.19, is then as follows:
But this gives the formula for in the statement, and proves the last assertion. ∎
Here is a useful reformulation of the above result, or rather of the above result coupled with Theorem 4.19, without reference to compact operators:
Theorem 4.23.
The trace class operators are precisely the operators of the form
with , being orthonormal systems, and with being a sequence satisfying:
Moreover, for such an operator we have the following estimate:
Proof.
This follows indeed from Proposition 4.22, or rather from step (4) in the proof of Proposition 4.22, coupled with Theorem 4.19. ∎
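In terms of the singular value decomposition, the description in Theorem 4.23 presumably reads as follows, the displayed estimate being, as a hedged guess, the comparison of the operator norm with the trace class norm:
$$Tx=\sum_n\lambda_n\langle x,e_n\rangle f_n,\qquad \sum_n\lambda_n<\infty,\qquad \|T\|\leq\mathrm{Tr}|T|=\sum_n\lambda_n$$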
Next, we have the following result, which comes as a continuation of Proposition 4.22, and is our central result here, regarding the trace class operators:
Theorem 4.24.
The space of trace class operators, which appears as an intermediate space between the finite rank operators and the compact operators,
is a two-sided -ideal of . The following is a Banach space norm on ,
satisfying , and for and we have:
Also, the subspace is dense inside , with respect to this norm.
Proof.
There are several assertions here, the idea being as follows:
(1) In order to prove that is a linear space, and that is a norm on it, the only non-trivial point is that of proving the following inequality:
For this purpose, consider the polar decompositions of these operators:
Given an orthonormal basis , we have the following formula:
The point now is that the first sum can be estimated as follows:
In order to estimate the terms on the right, we can proceed as follows:
The second sum in the above formula of can be estimated in the same way, and in the end we obtain, as desired:
(2) The estimate can be established as follows:
(3) The fact that is indeed a Banach space follows by constructing a limit for any Cauchy sequence, by using the singular value decomposition.
(4) The fact that is indeed closed under the involution follows from:
(5) In order to prove now the ideal property of , we use the standard fact, that we know from Proposition 4.5, that any bounded operator can be written as a linear combination of 4 unitary operators, as follows:
Indeed, by taking the real and imaginary part we can first write as a linear combination of 2 self-adjoint operators, and then by functional calculus each of these 2 self-adjoint operators can be written as a linear combination of 2 unitary operators.
(6) With this trick in hand, we can now prove the ideal property of . Indeed, it is enough to prove that we have:
But this latter result follows by using the polar decomposition theorem.
(7) With a bit more care, we obtain from this the estimate from the statement. As for the last assertion, this is clear as well. ∎
This was for the basic theory of the trace class operators. Much more can be said, and we refer here to the literature, such as Lax [lax]. In what concerns us, we will be back to these operators later in this book, in Part III, when discussing operator algebras.
4d. Hilbert-Schmidt operators
As a last topic of this chapter, let us discuss yet another important class of operators, namely the Hilbert-Schmidt ones. These operators, that we will need on several key occasions in what follows, when talking about operator algebras, are introduced as follows:
Definition 4.25.
An operator is said to be Hilbert-Schmidt if:
The set of such operators is denoted .
As before with other sets of operators, in finite dimensions we obtain in this way all the operators. In general, we have the following result, regarding such operators:
Theorem 4.26.
The space of Hilbert-Schmidt operators, which appears as an intermediate space between the trace class operators and the compact operators,
is a two-sided -ideal of . This ideal has the property
and conversely, each appears as a product of two operators in . In terms of the singular values , the Hilbert-Schmidt operators are characterized by:
Also, the following formula, whose output is finite by Cauchy-Schwarz,
defines a scalar product on , making it a Hilbert space.
Proof.
All this is quite standard, from the results that we have already, and more specifically from the singular value decomposition theorem, and its applications. To be more precise, the proof of the various assertions goes as follows:
(1) First of all, the fact that the space of Hilbert-Schmidt operators is stable under taking sums, and so is a vector space, follows from:
Regarding now multiplicative properties, we can use here the following inequality:
Thus, the space is a two-sided -ideal of , as claimed.
(2) In order to prove now that the product of any two Hilbert-Schmidt operators is a trace class operator, we can use the following formula, which is elementary:
Conversely, given an arbitrary trace class operator , we have:
Thus, by using the polar decomposition , we obtain the following decomposition for , with both components being Hilbert-Schmidt operators:
(3) The condition for the singular values is clear.
(4) The fact that we have a scalar product is clear as well.
(5) The proof of the completeness property is routine as well. ∎
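For the record, here is a hedged summary of the standard Hilbert-Schmidt formulae, with $(e_n)$ an orthonormal basis, with $\lambda_n$ the singular values, and with the convention $\langle S,T\rangle=\mathrm{Tr}(T^*S)$ for the scalar product, this convention being a choice, not taken from the statement:
$$\|T\|_2^2=\mathrm{Tr}(T^*T)=\sum_n\|Te_n\|^2=\sum_n\lambda_n^2,\qquad|\langle S,T\rangle|\leq\|S\|_2\,\|T\|_2$$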
We have as well the following key result, regarding the Hilbert-Schmidt operators:
Theorem 4.27.
We have the following formula,
valid for any Hilbert-Schmidt operators .
Proof.
We can prove this in two steps, as follows:
(1) Assume first that is trace class. Consider the polar decomposition , and choose an orthonormal basis for the image of , suitably extended to an orthonormal basis of . We have then the following computation, as desired:
(2) Assume now that we are in the general case, where is only assumed to be Hilbert-Schmidt. For any finite rank operator we have then:
Thus by choosing with , we obtain the result. ∎
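The formula in Theorem 4.27 is presumably the trace property; as a hedged sketch, with $S,T$ being Hilbert-Schmidt and $(e_n)$ an orthonormal basis:
$$\mathrm{Tr}(ST)=\sum_n\langle STe_n,e_n\rangle=\mathrm{Tr}(TS)$$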
This was for the basic theory of bounded operators on a Hilbert space, . In the remainder of this book we will be rather interested in the operator algebras that these operators can form. This is of course related to operator theory, because we can, at least in theory, take , and then study via the properties of . Actually, this is something that we already did a few times, when doing spectral theory, and notably when talking about functional calculus for normal operators.
For further operator theory, however, nothing beats a good operator theory book, and various ad-hoc methods, depending on the type of operators involved, and especially, on what you want to do with them. As before, in relation with topics to be later discussed in this book, we recommend here the books of Lax [lax] and Blackadar [bla].
Let us mention as well that there is a lot of interesting theory regarding the unbounded operators too, which is something quite technical, and here once again, we warmly recommend a good operator theory book. In addition, we recommend as well a good PDE book, because most of the questions where unbounded operators appear usually have PDE formulations as well, which are extremely efficient.
4e. Exercises
There has been a lot of theory in this chapter, with some of the things not really explained in great detail, and we have several exercises about all this. First comes:
Exercise 4.28.
Try to find the best operator theoretic analogue of the formula
for the complex numbers, telling us that any number is a real multiple of a unitary.
As explained in the above, a weak analogue of this holds, stating that any operator is a linear combination of 4 unitaries. The problem is that of improving this.
Exercise 4.29.
Work out a few explicit examples of the polar decomposition formula
with, if possible, a non-trivial computation for the square root.
This is actually something quite tricky, even for the usual matrices. So, as a preliminary exercise here, have some fun with the matrices.
Exercise 4.30.
Look up the various extra general properties of the sets of finite rank, trace class, Hilbert-Schmidt and compact operators,
coming in addition to what has been said above, about such operators.
This is of course quite vague, and, as good news, it is not indicated either if you should just come up with a list of such properties, or with a list of such properties coming with complete proofs. Up to you here, and the more the better.
Part II Operator algebras
There was something in the air that night
The stars were bright, Fernando
They were shining there for you and me
For liberty, Fernando
Chapter 5 Operator algebras
5a. Normed algebras
We have seen that the study of the bounded operators often leads to the consideration of the algebras generated by such operators, the idea being that the study of can lead to results about itself. In the remainder of this book we focus on the study of such algebras . Before anything, we should mention that there are countless ways of getting introduced to operator algebras, depending on motivations and taste, with the available books including:
(1) The old book of von Neumann [vn4], which started everything. This is a very classical book, with mathematical physics content, written at times when mathematics and physics were starting to part ways. A great book, still enjoyable nowadays.
(2) Various post-war treatises, such as Dixmier [dix], Kadison-Ringrose [kri], Strătilă-Zsidó [szs] and Takesaki [tak]. As a warning, however, these books are purely mathematical. Also, they sometimes avoid deep results of von Neumann and Connes.
(3) More recent books, including Arveson [arv], Blackadar [bla], Brown-Ozawa [boz], Connes [co3], Davidson [dav], Jones [jo6], Murphy [mur], Pedersen [ped] and Sakai [sak]. These are well-conceived one-volume books, written with various purposes in mind.
Our presentation below is inspired by Blackadar [bla], Connes [co3], Jones [jo6], but is yet another type of beast, often insisting on probabilistic aspects. But probably enough talking, more on this later, and let us get to work. We are interested in the study of the algebras of bounded operators . Let us start our discussion with the following broad definition, obtained by imposing the “minimal” set of reasonable axioms:
Definition 5.1.
An operator algebra is an algebra of bounded operators which contains the unit, is closed under taking adjoints,
and is closed as well under the norm.
Here, as in the previous chapters, is an arbitrary Hilbert space, with the case that we are mostly interested in being the separable one. By separable we mean having a countable orthonormal basis, with countable, and such a space is of course unique. The simplest model is the space , but in practice, we are particularly interested in the spaces of the form , which are separable too, but with the basis and the subsequent identification being not necessarily very explicit.
Also as in the previous chapters, is the algebra of linear operators which are bounded, in the sense that the norm is finite. This algebra has an involution , with the adjoint operator being defined by the formula , and in the above definition, the assumption refers to this involution. Thus, must be a -algebra.
As a first result now regarding the operator algebras, in relation with the normal operators, where most of the non-trivial results that we have so far are, we have:
Theorem 5.2.
The operator algebra generated by a normal operator appears as an algebra of continuous functions,
where denotes as usual the spectrum of .
Proof.
This is an abstract reformulation of the continuous functional calculus theorem for the normal operators, that we know from chapter 3. Indeed, that theorem tells us that we have a continuous morphism of -algebras, as follows:
Moreover, by the general properties of the continuous calculus, also established in chapter 3, this morphism is injective, and its image is the norm closed algebra generated by . Thus, we obtain the isomorphism in the statement. ∎
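As a hedged summary of the isomorphism in Theorem 5.2, for a normal operator $T$ the continuous functional calculus should produce an isometric identification as follows, with $\langle T\rangle$ denoting the norm closed $*$-algebra generated by $T$:
$$C(\sigma(T))\simeq\langle T\rangle,\qquad f\mapsto f(T),\qquad\|f(T)\|=\sup_{x\in\sigma(T)}|f(x)|$$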
The above result is very nice, and it is possible to further build on it, by using this time the spectral theorem for families of normal operators, as follows:
Theorem 5.3.
The operator algebra generated by a family of normal operators appears as an algebra of continuous functions,
where is a certain compact space associated to the family . Equivalently, any commutative operator algebra is of the form .
Proof.
We have two assertions here, the idea being as follows:
(1) Regarding the first assertion, this follows exactly as in the proof of Theorem 5.2, by using this time the spectral theorem for families of normal operators.
(2) As for the second assertion, this is clear from the first one, because any commutative algebra is generated by its elements , which are all normal. ∎
All this is good to know, but Theorem 5.2 and Theorem 5.3 remain something quite heavy, based on the spectral theorem. We would like to present now an alternative proof for these results, which is rather elementary, and has the advantage of reconstructing the compact space directly from the knowledge of the algebra . We will need:
Theorem 5.4.
Given an operator , define its spectrum as:
The following spectral theory results hold, exactly as in the case:
(1) We have .
(2) We have polynomial, rational and holomorphic calculus.
(3) As a consequence, the spectra are compact and non-empty.
(4) The spectra of unitaries and self-adjoints are on .
(5) The spectral radius of normal elements is given by .
In addition, assuming , the spectra of with respect to and to coincide.
Proof.
This is something that we know from the beginning of chapter 3, in the case . In general the proof is similar, the idea being as follows:
(1) Regarding the assertions (1-5), which are of course formulated a bit informally, the proofs here are perfectly similar to those for the full operator algebra . All this is standard material, and in fact, things in chapter 3 were written in such a way as for their extension now, to the general operator algebra setting, to be obvious.
(2) Regarding the last assertion, the inclusion is clear. For the converse, assume , and consider the following self-adjoint element:
The difference between the two spectra of is then given by:
Thus this difference is an open subset of . On the other hand being self-adjoint, its two spectra are both real, and so is their difference. Thus the two spectra of are equal, and in particular is invertible in , and so , as desired.
(3) As an observation, the last assertion applied with shows that the spectrum as constructed in the statement coincides with the spectrum as constructed and studied in chapter 3, so the fact that (1-5) hold indeed is no surprise.
(4) Finally, I can hear you screaming that I should have conceived this book differently, as a matter of not proving the same things twice. Good point, with my distinguished colleague Bourbaki saying the same, and in answer, wait for chapter 7 below, where we will prove exactly the same things a third time. We can discuss pedagogy at that time. ∎
We can now get back to the commutative algebras, and we have the following result, due to Gelfand, which provides an alternative to Theorem 5.2 and Theorem 5.3:
Theorem 5.5.
Any commutative operator algebra is of the form
with the “spectrum” of such an algebra being the space of characters , with topology making continuous the evaluation maps .
Proof.
Given a commutative operator algebra , we can define as in the statement. Then is compact, and is a morphism of algebras, as follows:
(1) We first prove that is involutive. We use the following formula, which is similar to the formula for the usual complex numbers:
Thus it is enough to prove the equality for self-adjoint elements . But this is the same as proving that implies that is a real function, which is in turn true, because is an element of , contained in .
(2) Since is commutative, each element is normal, so is isometric:
(3) It remains to prove that is surjective. But this follows from the Stone-Weierstrass theorem, because is a closed subalgebra of , which separates the points. ∎
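As a hedged sketch of the Gelfand transform used in the proof, with $X_A$ standing for the character space of $A$, the morphism in question should be:
$$\mathrm{ev}:A\to C(X_A),\qquad T\mapsto\mathrm{ev}_T,\qquad\mathrm{ev}_T(\chi)=\chi(T)$$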
The above theorem of Gelfand is something very beautiful, and far-reaching. It is possible to further build on it, indefinitely high. We will be back to this.
5b. Von Neumann algebras
Instead of further building on the above results, which are already quite non-trivial, let us return to our modest status of apprentice operator algebraists, and declare ourselves rather unsatisfied with Definition 5.1, on the following intuitive grounds:
Thought 5.6.
Our assumption that is norm closed is not satisfying, because we would like to be stable under polar decomposition, under taking spectral projections, and more generally, under measurable functional calculus.
Here all these “defects” are best visible in the context of Theorem 5.3, with the algebra found there, with , being obviously too small. In fact, Theorem 5.3 teaches us that, when looking for a fix, we should look for a weaker topology on , as for the algebra generated by a normal operator to be .
So, let us get now into this, topologies on , and fine-tunings of Definition 5.1, based on them. The result that we will need, which is elementary, is as follows:
Proposition 5.7.
For a subalgebra , the following are equivalent:
(1) is closed under the weak operator topology, making each of the linear maps continuous.
(2) is closed under the strong operator topology, making each of the linear maps continuous.
In the case where these conditions are satisfied, is closed under the norm topology.
Proof.
There are several statements here, the proof being as follows:
(1) It is clear that the norm topology is stronger than the strong operator topology, which is in turn stronger than the weak operator topology. At the level of the subsets which are closed things get reversed, in the sense that weakly closed implies strongly closed, which in turn implies norm closed. Thus, we are left with proving that for any algebra , strongly closed implies weakly closed.
(2) Consider the Hilbert space obtained by summing times with itself:
The operators over can be regarded as being square matrices with entries in , and in particular, we have a representation , as follows:
Assume now that we are given an operator , with the bar denoting the weak closure. We have then, by using the Hahn-Banach theorem, for any :
Now observe that the last formula tells us that for any , and any , we can find such that the following holds, for any :
Thus belongs to the strong operator closure of , as desired. ∎
Observe that in the above the terminology is a bit confusing, because the norm topology is stronger than the strong operator topology. As a solution, we agree to call the norm topology “strong”, and the weak and strong operator topologies “weak”, whenever these two topologies coincide. With this convention made, the algebras in Proposition 5.7 are those which are weakly closed. Thus, we can now formulate:
Definition 5.8.
A von Neumann algebra is an operator algebra
which is closed under the weak topology.
These algebras will be our main objects of study, in what follows. As basic examples, we have the algebra itself, then the singly generated algebras, with , and then the multiply generated algebras, with . But for the moment, let us keep things simple, and build directly on Definition 5.8, by using basic functional analysis methods. We will need the following key result:
Theorem 5.9.
For an operator algebra , we have
with being the bicommutant inside , and being the weak closure.
Proof.
We can prove this by double inclusion, as follows:
“” Since any operator commutes with the operators that it commutes with, we have a trivial inclusion , valid for any set . In particular, we have:
Our claim now is that the algebra is closed, with respect to the strong operator topology. Indeed, assuming that we have in this topology, we have:
Thus our claim is proved, and together with Proposition 5.7, which allows us to pass from the strong to the weak operator topology, this gives , as desired.
“” Here we must prove that we have the following implication, valid for any , with the bar denoting as usual the weak operator closure:
For this purpose, we use the same amplification trick as in the proof of Proposition 5.7. Consider the Hilbert space obtained by summing times with itself:
The operators over can be regarded as being square matrices with entries in , and in particular, we have a representation , as follows:
The idea will be that of doing the computations in this representation. First, in this representation, the image of our algebra is given by:
We can compute the commutant of this image, exactly as in the usual scalar matrix case, and we obtain the following formula:
We conclude from this that, given an operator as above, we have:
In other words, the conclusion of all this is that we have:
Now given a vector , consider the orthogonal projection on the norm closure of the vector space . Since the subspace is invariant under the action of , so is its norm closure inside , and we obtain from this:
By combining this with what we found above, we conclude that we have:
Since this holds for any , we conclude that any operator belongs to the strong operator closure of . By using now Proposition 5.7, which allows us to pass from the strong to the weak operator closure, we conclude that we have:
Thus, we have the desired reverse inclusion, and this finishes the proof. ∎
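With the standard notation for commutants, assumed here, the bicommutant formula in Theorem 5.9 should read as follows:
$$A'=\big\{T\in B(H)\ \big|\ TS=ST,\ \forall S\in A\big\},\qquad A''=(A')',\qquad A''=\overline{A}^{\,weak}$$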
Now by getting back to the von Neumann algebras, from Definition 5.8, we have the following result, which is a reformulation of Theorem 5.9, by using this notion:
Theorem 5.10.
For an operator algebra , the following are equivalent:
(1) is weakly closed, so it is a von Neumann algebra.
(2) equals its algebraic bicommutant , taken inside .
Proof.
This follows from the formula from Theorem 5.9, along with the trivial fact that the commutants are automatically weakly closed. ∎
The above statement, called bicommutant theorem, and due to von Neumann [vn1], is quite interesting, philosophically speaking. Among others, it shows that the von Neumann algebras are exactly the commutants of the self-adjoint sets of operators:
Proposition 5.11.
Given a subset which is closed under , the commutant
is a von Neumann algebra. Any von Neumann algebra appears in this way.
Proof.
We have two assertions here, the idea being as follows:
(1) Given satisfying , the commutant satisfies , and is also weakly closed. Thus, is a von Neumann algebra. Note that this follows as well from the following “tricommutant formula”, which follows from Theorem 5.10:
(2) Given a von Neumann algebra , we can take . Then is closed under the involution, and we have , as desired. ∎
Observe that Proposition 5.11 can be regarded as yet another alternative definition for the von Neumann algebras, and with this definition being probably the best one when talking about quantum mechanics, where the self-adjoint operators can be thought of as being “observables” of the system, and with the commutants of the sets of such observables being the algebras that we are interested in. And with all this actually needing some discussion about self-adjointness, and about boundedness too, but let us not get into this here, and stay mathematical, as before.
As another interesting consequence of Theorem 5.10, we have:
Proposition 5.12.
Given a von Neumann algebra , its center
regarded as an algebra , is a von Neumann algebra too.
Proof.
This follows from the fact that the commutants are weakly closed, that we know from the above, which shows that is a von Neumann algebra. Thus, the intersection must be a von Neumann algebra too, as claimed. ∎
In order to develop some general theory, let us start by investigating the finite dimensional case. Here the ambient algebra is , any linear subspace is automatically closed, for all 3 topologies in Proposition 5.7, and we have:
Theorem 5.13.
The -algebras are exactly the algebras of the form
depending on parameters and satisfying
embedded into via the obvious block embedding, twisted by a unitary .
Proof.
We have two assertions to be proved, the idea being as follows:
(1) Given numbers satisfying , we have indeed an obvious embedding of -algebras, via matrix blocks, as follows:
In addition, we can twist this embedding by a unitary , as follows:
(2) In the other sense now, consider a -algebra . It is elementary to prove that the center , as an algebra, is of the following form:
Consider now the standard basis , and let be the images of these vectors via the above identification. In other words, these elements are central minimal projections, summing up to 1:
The idea is then that this partition of the unity will eventually lead to the block decomposition of , as in the statement. We prove this in 4 steps, as follows:
Step 1. We first construct the matrix blocks, our claim here being that each of the following linear subspaces of are non-unital -subalgebras of :
But this is clear, the point being that each is closed under the various non-unital -subalgebra operations, as a consequence of the projection equations .
Step 2. We prove now that the above algebras are in a direct sum position, in the sense that we have a non-unital -algebra sum decomposition, as follows:
As with any direct sum question, we have two things to be proved here. First, by using the formula and the projection equations , we conclude that we have the needed generation property, namely:
As for the fact that the sum is indeed direct, this follows as well from the formula , and from the projection equations .
Step 3. Our claim now, which will finish the proof, is that each of the -subalgebras constructed above is a full matrix algebra. To be more precise here, with , our claim is that we have isomorphisms, as follows:
In order to prove this claim, recall that the projections were chosen central and minimal. Thus, the center of each of the algebras reduces to the scalars:
But this shows, either via a direct computation, or via the bicommutant theorem, that each of the algebras is a full matrix algebra, as claimed.
Step 4. We can now obtain the result, by putting together what we have. Indeed, by using the results from Step 2 and Step 3, we obtain an isomorphism as follows:
Moreover, a more careful look at the isomorphisms established in Step 3 shows that at the global level, that of the algebra itself, the above isomorphism simply comes by twisting the following standard multimatrix embedding, discussed in the beginning of the proof, (1) above, by a certain unitary matrix :
Now by putting everything together, we obtain the result. ∎
In relation with the bicommutant theorem, we have the following result, which fully clarifies the situation, with a very explicit proof, in finite dimensions:
Proposition 5.14.
Consider a -algebra , written as above:
The commutant of this algebra is then, with respect to the block decomposition used,
and by taking one more time the commutant we obtain itself, .
Proof.
Let us decompose indeed our algebra as in Theorem 5.13:
The center of each matrix algebra being reduced to the scalars, the commutant of this algebra is then as follows, with each copy of corresponding to a matrix block:
By taking once again the commutant we obtain itself, and we are done. ∎
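As a concrete illustration of Proposition 5.14, here is a sketch under the assumption that the block embedding is multiplicity-free, with $n_1+\ldots+n_k=N$; the case of embeddings with multiplicities is left aside here:
$$A=M_{n_1}(\mathbb C)\oplus\ldots\oplus M_{n_k}(\mathbb C)\subset M_N(\mathbb C)$$
$$A'=\mathbb C1_{n_1}\oplus\ldots\oplus\mathbb C1_{n_k}\simeq\mathbb C^k,\qquad A''=A$$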
As another interesting application of Theorem 5.13, clarifying this time the relation with operator theory, in finite dimensions, we have the following result:
Theorem 5.15.
Given an operator in finite dimensions, , the von Neumann algebra that it generates inside is
with the sizes of the blocks coming from the spectral theory of the associated matrix . In the normal case , this decomposition comes from
with diagonal, and with unitary.
Proof.
This is something which is routine, by using the linear algebra and spectral theory developed in chapter 1, for the matrices . To be more precise:
(1) The fact that decomposes into a direct sum of matrix algebras is something that we already know, coming from Theorem 5.13.
(2) By using standard linear algebra, we can compute the block sizes , from the knowledge of the spectral theory of the associated matrix .
(3) In the normal case, , we can simply invoke the spectral theorem, and by suitably changing the basis, we are led to the conclusion in the statement. ∎
Let us get now to infinite dimensions, with Theorem 5.15 as our main source of inspiration. The same argument applies, provided that we are in the normal case, and we have the following result, summarizing our basic knowledge here:
Theorem 5.16.
Given a bounded operator which is normal, , the von Neumann algebra that it generates inside is
with being as usual its spectrum.
Proof.
The measurable functional calculus theorem for the normal operators tells us that we have a weakly continuous morphism of -algebras, as follows:
Moreover, by the general properties of the measurable calculus, also established in chapter 3, this morphism is injective, and its image is the weakly closed algebra generated by . Thus, we obtain the isomorphism in the statement. ∎
More generally now, along the same lines, we have the following result:
Theorem 5.17.
Given operators which are normal, and which commute, the von Neumann algebra that these operators generate inside is
with being a certain measured space, associated to the family .
Proof.
This is once again routine, by using the spectral theory for the families of commuting normal operators developed in chapter 3. ∎
As a fundamental consequence now of the above results, we have:
Theorem 5.18.
The commutative von Neumann algebras are the algebras
with being a measured space.
Proof.
We have two assertions to be proved, the idea being as follows:
(1) In one sense, we must prove that given a measured space , we can realize the as a von Neumann algebra, on a certain Hilbert space . But this is something that we know since chapter 2, the representation being as follows:
(2) In the other sense, given a commutative von Neumann algebra , we must construct a certain measured space , and an identification . But this follows from Theorem 5.17, because we can write our algebra as follows:
To be more precise, being commutative, any element is normal, so we can pick a basis , and then we have as above, with being commuting normal operators. Thus Theorem 5.17 applies, and gives the result.
(3) Alternatively, and more explicitly, we can deduce this from Theorem 5.16, applied with . Indeed, by using , we conclude that any von Neumann algebra is generated by its self-adjoint elements . Moreover, by using measurable functional calculus, we conclude that is linearly generated by its projections. But then, assuming , with being projections, we can set:
Then , and by functional calculus we have , then , and so on. Thus , and comes now via Theorem 5.16, as claimed. ∎
The above result is the foundation for all the advanced von Neumann algebra theory, that we will discuss in the remainder of this book, and there are many things that can be said about it. To start with, in relation with the general theory of the normed closed algebras, that we developed in the beginning of this chapter, we have:
Warning 5.19.
Although the von Neumann algebras are norm closed, the theory of norm closed algebras does not always apply well to them. For instance for Gelfand gives , with being a certain technical compactification of .
In short, this would be my advice, do not mess up the two theories that we will be developing in this book, try finding different rooms for them, in your brain. At least at this stage of things, because later, do not worry, we will be playing with both.
Now forgetting about Gelfand, and taking Theorem 5.18 as such, tentative foundation for the theory that we want to develop, as a first consequence of this, we have:
Theorem 5.20.
Given a von Neumann algebra , we have
with being a certain measured space.
Proof.
We know from Proposition 5.12 that the center is a von Neumann algebra. Thus Theorem 5.18 applies, and gives the result. ∎
It is possible to further build on this, with a powerful decomposition result as follows, over the measured space constructed in Theorem 5.20:
But more on this later, after developing the appropriate tools for this program, which is something non-trivial. Among others, before getting into such things, we will have to study the von Neumann algebras having trivial center, , called factors, which include the fibers in the above decomposition result. More on this later.
5c. Random matrices
Our main results so far on the von Neumann algebras concern the finite dimensional case, where the algebra is of the form , and the commutative case, where the algebra is of the form . In order to advance, we must solve:
Question 5.21.
What are the next simplest von Neumann algebras, generalizing at the same time the finite dimensional ones, , and the commutative ones, , that we can use as input for our study?
In this formulation, our question is a no-brainer, the answer to it being that of looking at the direct integrals of matrix algebras, over an arbitrary measured space :
However, when thinking a bit, all this looks quite tricky, with most likely lots of technical functional analysis and measure theory involved. So, we will leave the investigation of such algebras, which are indeed quite basic, and called of type I, for later.
Nevermind. Let us replace Question 5.21 with something more modest, as follows:
Question 5.22 (update).
What are the next simplest von Neumann algebras, generalizing at the same time the usual matrix algebras, , and the commutative ones, , that we can use as input for our study?
But here, what we have is again a no-brainer, because in relation to what has been said above, we just have to restrict the attention to the “isotypic” case, where all fibers are isomorphic. And in this case our algebra is a random matrix algebra:
Which looks quite nice, and so good news, we have our algebras. In practice now, although there is some functional analysis to be done with these algebras, the main questions regard the individual operators , called random matrices. Thus, we are basically back to good old operator theory. Let us begin our discussion with:
Definition 5.23.
A random matrix algebra is a von Neumann algebra of the following type, with being a probability space, and with being an integer:
In other words, appears as a tensor product, as follows,
of a matrix algebra and a commutative von Neumann algebra.
As a first observation, our algebra can be written as well as follows, with this latter convention being quite standard in the probability literature:
In connection with the tensor product notation, which is often the most useful one for computations, we have as well the following possible writing, also used in probability:
Importantly now, each random matrix algebra is naturally endowed with a canonical von Neumann algebra trace , which appears as follows:
Proposition 5.24.
Given a random matrix algebra , consider the linear form given by:
In tensor product notation, , we have then the formula
and this functional is a faithful positive unital trace.
Proof.
The first assertion, regarding the tensor product writing of , is clear from definitions. As for the second assertion, regarding the various properties of , this follows from this, because these properties are stable under taking tensor products. ∎
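As a hedged sketch, with the identification $A=M_N(L^\infty(X))$ and with $T=(T_{ij})_{ij}$ having entries in $L^\infty(X)$, the canonical trace should be given by the following formula:
$$tr(T)=\frac1N\sum_{i=1}^N\int_XT_{ii}(x)\,dx\qquad\Longleftrightarrow\qquad tr=\frac1N\,\mathrm{Tr}\otimes\int_X$$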
As before, there is a discussion here in connection with the other possible writings of . With the probabilistic notation , the trace appears as:
Also, with the probabilistic tensor notation , the trace appears exactly as in the second part of Proposition 5.24, with the order inverted:
To summarize, the random matrix algebras appear to be very basic objects, and the only difficulty, in the beginning, lies in getting familiar with the 4 possible notations for them. Or perhaps 5 possible notations, because we have as well.
Getting to work now, as already said, the main questions about random matrix algebras regard the individual operators , called random matrices. To be more precise, we are interested in computing the laws of such matrices, constructed according to:
Theorem 5.25.
Given an operator algebra with a faithful trace , any normal element has a law, namely a probability measure satisfying
with the powers being with respect to colored exponents , defined via
and multiplicativity. This law is unique, and is supported by the spectrum . In the non-normal case, , such a law does not exist.
Proof.
We have two assertions here, the idea being as follows:
(1) In the normal case, , we know from Theorem 5.2, based on the continuous functional calculus theorem, that we have:
Thus the functional can be regarded as an integration functional on the algebra , and by the Riesz theorem, this latter functional must come from a probability measure on the spectrum , in the sense that we must have:
We are therefore led to the conclusions in the statement, with the uniqueness assertion coming from the fact that the operators , taken as usual with respect to colored integer exponents, , generate the whole operator algebra .
(2) In the non-normal case now, , we must show that such a law does not exist. For this purpose, we can use a positivity trick, as follows:
Now assuming that has a law , in the sense that the moment formula in the statement holds, the above two different numbers would have to both appear by integrating with respect to this law , which is contradictory, as desired. ∎
Back now to the random matrices, as a basic example, assume , so that we are dealing with a usual scalar matrix, . By changing the basis of , which won’t affect our trace computations, we can assume that is diagonal:
But for such a diagonal matrix, we have the following formula:
Thus, the law of is the average of the Dirac masses at the eigenvalues:
As a second example now, assume , and so . In this case we obtain the usual law of , because the equation to be satisfied by is:
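Before going further, here is a quick numerical sanity check of the first of the above two examples, namely the fact that the law of a scalar matrix is the average of the Dirac masses at its eigenvalues. This is only an illustration, in Python, with the matrix size and the test matrix chosen arbitrarily:

```python
import numpy as np

N = 5
rng = np.random.default_rng(0)
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = (B + B.conj().T) / 2                 # a self-adjoint, hence normal, N x N matrix
eigs = np.linalg.eigvalsh(A)

for k in range(1, 6):
    lhs = np.trace(np.linalg.matrix_power(A, k)).real / N   # normalized trace of A^k
    rhs = np.mean(eigs ** k)                                 # k-th moment of the average of Dirac masses
    assert abs(lhs - rhs) < 1e-8
print("moments of A match the eigenvalue averages")
```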
At a more advanced level, the main problem regarding the random matrices is that of computing the law of various classes of such matrices, coming in series:
Question 5.26.
What is the law of random matrices coming in series
in the regime?
The general strategy here, coming from physicists, is that of first computing the asymptotic law , in the limit, and then looking for the higher order terms as well, so as to finally reach a series in giving the law of , as follows:
As a basic example here, of particular interest are the random matrices having i.i.d. complex normal entries, under the constraint . Here the asymptotic law is the Wigner semicircle law on . We will discuss this in chapter 6 below, and in the meantime we can only recommend some reading, from the original papers of Marchenko-Pastur [mpa], Voiculescu [vo2], Wigner [wig], and from the books of Anderson-Guionnet-Zeitouni [agz], Mehta [meh], Nica-Speicher [nsp], Voiculescu-Dykema-Nica [vdn].
5d. Quantum spaces
Let us end this preliminary chapter on operator algebras with some philosophy, a bit a la Heisenberg. In relation with general “quantum space” goals, Theorem 5.18 is something very interesting, philosophically speaking, suggesting that we formulate:
Definition 5.27.
Given a von Neumann algebra , we write
and call a quantum measured space.
As an example here, for the simplest noncommutative von Neumann algebra that we know, namely the usual matrix algebra , the formula that we want to write is as follows, with being a certain mysterious quantum space:
So, what can we say about this space ? As a first observation, this is a finite space, with its cardinality being defined and computed as follows:
Now since this is the same as the cardinality of the set , we are led to the conclusion that we should have a twisting result as follows, with the twisting operation being something that destroys the points, but keeps the cardinality:
From an analytic viewpoint now, we would like to understand what is the integration over , giving rise to the corresponding functions. And here, we can set:
To be more precise, on the left we have the integral of an arbitrary function on , which according to our conventions, should be a usual matrix:
As for the quantity on the right, the outcome of the computation, this can only be the trace of . In addition, it is better to choose this trace to be normalized, by , and this in order for our measure on to have mass 1, as is ideal:
We can say even more about this. Indeed, since the traces of positive matrices are positive, we are led to the following formula, to be taken with the above conventions, which shows that the measure on that we constructed is a probability measure:
Before going further, let us record what we found, for future reference:
Theorem 5.28.
The quantum measured space formally given by
has cardinality , appears as a twist, in a purely algebraic sense,
and is a probability space, its uniform integration being given by
where at right we have the normalized trace of matrices, .
Proof.
This is something half-informal, mostly for fun, which basically follows from the above discussion, the details and missing details being as follows:
(1) In what regards the formula , coming by computing the complex vector space dimension, as explained above, this is obviously something rock-solid.
(2) Regarding twisting, we would like to have a formula as follows, with the operation being something that destroys the commutativity of the multiplication:
In more familiar terms, with usual complex matrices on the left, and with a better-looking product of sets being used on the right, this formula reads:
In order to establish this formula, consider the algebra on the right. As a complex vector space, this algebra has the standard basis formed by the Dirac masses at the points , and the multiplicative structure of this algebra is given by:
Now let us twist this multiplication, according to the formula . We obtain in this way the usual combination formulae for the standard matrix units of the algebra , and so we have our twisting result, as claimed.
(3) In what regards the integration formula in the statement, with the conclusion that the underlying measure on is a probability one, this is something that we fully explained before, and as for the result (1) above, it is something rock-solid.
(4) As a last technical comment, observe that the twisting operation performed in (2) destroys both the involution and the trace of the algebra. This is something quite interesting, which cannot be fixed, and we will be back to it, later on. ∎
In order to advance now, based on the above result, the key point there is the construction and interpretation of the trace , as an integration functional. But this leads us into the following natural, and quite puzzling question:
Question 5.29.
In the general context of Definition 5.27, where we formally wrote , what is the underlying integration functional ?
This is a quite subtle question, and there are several possible answers here. For instance, we would like the integration functional to have the following property:
And the problem is that certain von Neumann algebras do not possess such traces. This is actually something quite advanced, that we do not know yet, but by anticipating a bit, we are in trouble, and we must modify Definition 5.27, as follows:
Definition 5.30 (update).
Given a von Neumann algebra , coming with a faithful positive unital trace , we write
and call a quantum probability space. We also write the trace as , and call it integration with respect to the uniform measure on .
At the level of examples, past the classical probability spaces , we know from Theorem 5.28 that the quantum space is a finite quantum probability space. But this raises the question of understanding what the finite quantum probability spaces are, in general. For this purpose, we need to examine the finite dimensional von Neumann algebras. And the result here, extending Theorem 5.13, is as follows:
Theorem 5.31.
The finite dimensional von Neumann algebras over an arbitrary Hilbert space are exactly the direct sums of matrix algebras,
embedded into by using a partition of unity of with rank projections
with the “factors” being each embedded into the algebra .
Proof.
This is standard, as in the case . Consider the center of , which is a finite dimensional commutative von Neumann algebra, of the following form:
Now let be the Dirac mass at . Then is an orthogonal projection, and these projections form a partition of unity, as follows:
With , we have then a non-unital -algebra decomposition, as follows:
On the other hand, it follows from the minimality of each of the projections that we have unital -algebra isomorphisms , and this gives the result. ∎
We can now deduce what the finite quantum measured spaces are, in the sense of the old Definition 5.27. Indeed, we must solve here the following equation:
Now since the direct unions of sets correspond to direct sums at the level of the associated algebras of functions, in the classical case, we can take the following formula as a definition for a direct union of sets, in the general, noncommutative case:
With this, and by remembering the definition of , we are led to the conclusion that the solution to our quantum measured space equation above is as follows:
For fully solving our problem, in the spirit of the new Definition 5.30, we still have to discuss the traces on . We are led in this way to the following statement:
Theorem 5.32.
The finite quantum measured spaces are the spaces
according to the following formula, for the associated algebras of functions:
The cardinality of such a space is the following number,
and the possible traces are as follows, with summing up to :
Among these traces, we have the canonical trace, appearing as
via the left regular representation, having weights .
Proof.
We have many assertions here, basically coming from the above discussion, with only the last one needing some explanations. Consider the left regular representation of our algebra , which is given by the following formula:
We know that the algebra of linear operators is isomorphic to a matrix algebra, and more specifically to , with being as before:
Thus, this algebra has a trace , and by composing this trace with the representation , we obtain a certain trace , that we can call “canonical”:
We can compute the weights of this trace by using a multimatrix basis of , formed by matrix units , with and with , and we obtain:
Thus, we are led to the conclusion in the statement. ∎
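As an informal complement, here is a small numerical experiment, in Python, on the toy example of the direct sum of a 2×2 block and a 1×1 block. The precise weights are not written out above, so the code below simply tests the natural guess, namely weights proportional to the squared block sizes, against the trace coming from the left regular representation; all names and normalizations in it are mine:

```python
import numpy as np

def embed(x2, x1):
    # elements of the direct sum of a 2x2 and a 1x1 block, as block-diagonal 3x3 matrices
    a = np.zeros((3, 3), dtype=complex)
    a[:2, :2] = x2
    a[2, 2] = x1
    return a

# a linear basis of the algebra, of dimension 2^2 + 1^2 = 5
basis = []
for i in range(2):
    for j in range(2):
        e = np.zeros((2, 2))
        e[i, j] = 1
        basis.append(embed(e, 0))
basis.append(embed(np.zeros((2, 2)), 1))

coords = lambda m: np.array([m[0, 0], m[0, 1], m[1, 0], m[1, 1], m[2, 2]])

def canonical_trace(a):
    # normalized trace of the left multiplication operator, acting on the algebra itself
    L = np.column_stack([coords(a @ b) for b in basis])
    return (np.trace(L) / len(basis)).real

x2, x1 = np.diag([3.0, 7.0]), 10.0
guess = (4 / 5) * (np.trace(x2).real / 2) + (1 / 5) * x1   # conjectured weights n_i^2 / (n_1^2 + n_2^2)
print(canonical_trace(embed(x2, x1)), guess)               # both should equal 6.0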
We will be back to quantum spaces on several occasions, in what follows. In fact, the present book is as much on operator algebras as it is on quantum spaces, and this because these two points of view are both useful, and complementary to each other.
5e. Exercises
The theory in this chapter has been quite exciting, and we have already run into a number of difficult questions. As a basic exercise on all this, we have:
Exercise 5.33.
Find a simple proof for the von Neumann bicommutant theorem, in finite dimensions.
This is something quite subjective, so try not to cheat. That is, do not simply convert the amplification proof that we have in general, by using matrix algebras everywhere, and do not use the structure result for the finite dimensional algebras either.
Exercise 5.34.
Again in finite dimensions, , compute explicitly the von Neumann algebra generated by a single operator.
As mentioned above, in the normal case the answer is clear, by diagonalizing . The problem is that of understanding what happens when is not normal.
Exercise 5.35.
Try understanding what the law of the simplest non-normal operator,
acting on should be. Look also at more general Jordan blocks.
There are many non-trivial computations here. We will be back to this.
Exercise 5.36.
Develop a full theory of finite quantum spaces, by enlarging what has been said above, with various geometric topics, of your choice.
This is of course a bit vague, but some further thinking at all this is certainly useful, at this point, and this is what the exercise is about.
Chapter 6 Random matrices
6a. Random matrices
We have seen so far the basics of von Neumann algebras , with a look into some interesting ramifications too, concerning random matrices and quantum spaces. In what regards these ramifications, the situation is as follows:
(1) The random matrix algebras, acting on , are the simplest von Neumann algebras, from a variety of viewpoints. The main problem regarding them is of operator theoretic nature, regarding the computation of the law of individual elements with respect to the random matrix trace .
(2) The quantum spaces are exciting abstract objects, obtained by looking at an arbitrary von Neumann algebra coming with a trace , and formally writing the algebra as , and its trace as . In this picture, is our quantum probability space, and is the integration over it, or expectation.
All this is quite interesting, and we will further explore these two topics, random matrices and quantum spaces, with some basic theory for them, in this chapter and in the next one. As a first observation, these two topics are closely related, due to:
Fact 6.1.
A random matrix algebra can be written in the following way,
so the underlying quantum space is something very simple, .
With this understood, the philosophical problem is now, what to do with our quantum spaces, be they of random matrix type , or more general. Good question, and do not expect a simple answer to it. Indeed, quantum spaces are more or less the same thing as operator algebras, and from this perspective, our question becomes “what are the operator algebras, and what is to be done with them”, obviously difficult.
And it gets even worse, because when remembering that operator algebras are more or less the same thing as quantum mechanics, our question becomes something of type “what is quantum mechanics, and what is to be done with it”. So, modesty.
Getting back to Earth, now that we have our questions and philosophy, for the whole remainder of this book, let us get into random matrices. Quite remarkably, these provide us with an epsilon of answer to our philosophical questions, as follows:
Answer 6.2.
The simplest quantum spaces are those coming from random matrix algebras, which are as follows, with being a usual probability space,
and what is to be done with them is the computation of the law of individual elements, the random matrices , in the regime.
Which looks very nice: we have finally reached some concrete questions, and it is time now for mathematics and computations. Getting started, we must first further build on the material from chapter 5. We recall from there that given a von Neumann algebra coming with a trace , any normal element has a law, which is the complex probability measure given by the following formula:
In the non-normal case, , the law does not exist as a complex probability measure , as also explained in chapter 5. However, we can trick a bit, and talk about the law of non-normal elements as well, in the following abstract way:
Definition 6.3.
Let be a von Neumann algebra, given with a trace .
-
(1)
The elements are called random variables.
-
(2)
The moments of such a variable are the numbers .
-
(3)
The law of such a variable is the functional .
Here is by definition a colored integer, and the powers are defined by multiplicativity and the usual formulae, namely:
As for the polynomial , this is a noncommuting -polynomial in one variable:
Observe that the law is uniquely determined by the moments, because:
Generally speaking, the above definition, due to Voiculescu [vdn], is something quite abstract, but there is no other way of doing things, at least at this level of generality. However, in the special case where our variable is self-adjoint, or more generally normal, the theory simplifies, and we recover more familiar objects, as follows:
Theorem 6.4.
The law of a normal variable can be identified with the corresponding spectral measure , according to the following formula,
valid for any , coming from the measurable functional calculus. In the self-adjoint case the spectral measure is real, .
Proof.
This is something that we know well, from chapter 5, coming from the spectral theorem for the normal operators, as developed in chapter 3. ∎
Getting back now to the random matrices, we have all we need, as general formalism, and we are ready for doing some computations. As a first observation, we have:
Theorem 6.5.
The laws of basic random matrices are as follows:
-
(1)
In the case the random matrix is a usual random variable, , automatically normal, and its law as defined above is the usual law.
-
(2)
In the case the random matrix is a usual scalar matrix, , and in the diagonalizable case, the law is .
Proof.
This is something that we know, once again, from chapter 5, and which is elementary. Indeed, the first assertion follows from definitions, and the above discussion. As for the second assertion, this follows by diagonalizing the matrix. ∎
In general, what we have can only be a mixture of (1) and (2) above. Our plan will be that of discussing (1) in more detail, and then getting into the general case, or rather into the case of the most interesting random matrices, with inspiration from (2).
6b. Probability theory
So, let us set . Here our algebra is , an arbitrary commutative von Neumann algebra. The most interesting linear operators , which we will rather denote as complex functions , and call random variables, as is customary, are the normal, or Gaussian, variables, defined as follows:
Definition 6.6.
A variable is called standard normal when its law is:
More generally, the normal law of parameter is the following measure:
These are also called Gaussian distributions, with “g” standing for Gauss.
Observe that these normal laws have indeed mass 1, as they should, as shown by a quick change of variable, and the Gauss formula, namely:
Let us start with some basic results regarding the normal laws. We first have:
Proposition 6.7.
The normal law with has the following properties:
-
(1)
The variance is .
-
(2)
The density is even, so the odd moments vanish.
-
(3)
The even moments are , with .
-
(4)
Equivalently, the moments are , for any .
-
(5)
The Fourier transform is given by .
-
(6)
We have the convolution semigroup formula , for any .
Proof.
All this is very standard, with the various notations used in the statement being explained below, the idea being as follows:
(1) The normal law being centered, its variance is the second moment, . Thus the result follows from (3), proved below, which gives in particular:
(2) This is indeed something self-explanatory.
(3) We have indeed the following computation, by partial integration:
The initial value being , we obtain the result.
(4) We know from (2,3) that the moments of the normal law satisfy the following recurrence formula, with the initial data :
Now let us look at , the set of pairings of . In order to have such a pairing, we must pair 1 with a number chosen among , and then come up with a pairing of the remaining numbers. Thus, the number of such pairings is subject to the following recurrence formula, with initial data :
But this solves our problem at , because in this case we obtain the following formula, with standing as usual for the number of blocks of a partition:
Now back to the general case, , our problem here is solved in fact too, because the number of blocks of a pairing being constant, , we obtain:
(5) The Fourier transform formula can be established as follows:
(6) This follows indeed from (5), because is linear in . ∎
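As an illustration of (3) and of the pairing count used in its proof, here is a short Monte Carlo check, in Python, that the even moments of the standard normal law are the double factorials, which count the pairings. The sample size and the numerical setup are of course arbitrary:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
x = rng.standard_normal(2_000_000)             # samples from the standard normal law

for k in range(1, 5):
    pairings = factorial(2 * k) // (2 ** k * factorial(k))   # (2k-1)!! = number of pairings of 2k points
    print(2 * k, pairings, round(np.mean(x ** (2 * k)), 2))
```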
We are now ready to establish the Central Limit Theorem (CLT), which is a key result, telling us why the normal laws appear a bit everywhere, in real life:
Theorem 6.8.
Given a sequence of real random variables , which are i.i.d., centered, and with variance , we have
with , in moments.
Proof.
In terms of moments, the Fourier transform is given by:
Thus, the Fourier transform of the variable in the statement is:
But this latter function being the Fourier transform of , we obtain the result. ∎
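Here is a quick numerical illustration of the CLT, a sketch only, using centered uniform variables, and rescaling so that the limit is the standard normal law; the choice of distribution and of the parameters below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n, samples = 200, 500_000
z = np.zeros(samples)
for _ in range(n):
    z += rng.uniform(-0.5, 0.5, samples)       # i.i.d., centered, variance 1/12
z /= np.sqrt(n / 12)                            # rescaled sum, approximately standard normal

print([round(np.mean(z ** k), 2) for k in range(1, 7)])   # expected: about 0, 1, 0, 3, 0, 15
```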
Let us discuss as well the “discrete” counterpart of the above results, which we will also need a bit later, in relation with the random matrices. We have:
Definition 6.9.
The Poisson law of parameter is the following measure,
and the Poisson law of parameter is the following measure,
with the letter “p” standing for Poisson.
We will see in a moment why these laws appear everywhere, in discrete probability, the reasons behind this coming from the Poisson Limit Theorem (PLT). Getting started now, in analogy with the normal laws, the Poisson laws have the following properties:
Proposition 6.10.
The Poisson law with has the following properties:
-
(1)
The variance is .
-
(2)
The moments are .
-
(3)
The Fourier transform is .
-
(4)
We have the semigroup formula , for any .
Proof.
We have four formulae to be proved, the idea being as follows:
(1) The variance is , and by using the formulae and , coming from (2), proved below, we obtain as desired, .
(2) This is something more tricky. Consider indeed the set of all partitions of . At , to start with, the formula that we want to prove is:
We have the following recurrence formula for the moments of :
Our claim is that the numbers satisfy the same recurrence formula. Indeed, since a partition of appears by choosing neighbors for , among the numbers available, and then partitioning the elements left, we have:
Thus we obtain by recurrence , as desired. Regarding now the general case, , we can use here a similar method. We have the following recurrence formula for the moments of , obtained by using the binomial formula:
On the other hand, consider the numbers in the statement, . As before, since a partition of appears by choosing neighbors for , among the numbers available, and then partitioning the elements left, we have:
Thus we obtain by recurrence , as desired.
(3) The Fourier transform formula can be established as follows:
(4) This follows from (3), because is linear in . ∎
We are now ready to establish the Poisson Limit Theorem (PLT), as follows:
Theorem 6.11.
We have the following convergence, in moments,
for any .
Proof.
Let us denote by the Bernoulli measure appearing under the convolution sign. We have then the following computation:
Thus, we obtain the Fourier transform of , as desired. ∎
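Again as an illustration, here is a short Monte Carlo check of the PLT at parameter 1, comparing the moments of a sum of many small Bernoulli variables with those of the Poisson law; the parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
t, n, samples = 1.0, 2000, 500_000
s = rng.binomial(n, t / n, size=samples)       # same law as a sum of n independent Bernoulli(t/n)
p = rng.poisson(t, size=samples)               # the Poisson law of parameter t itself

for k in range(1, 5):
    print(k, round(np.mean(s ** k), 2), round(np.mean(p ** k), 2))   # both close to 1, 2, 5, 15
```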
As a third and last topic from classical probability, let us discuss now the complex normal laws, that we will need too. To start with, we have the following definition:
Definition 6.12.
The complex Gaussian law of parameter is
where are independent, each following the law .
As in the real case, these measures form convolution semigroups:
Proposition 6.13.
The complex Gaussian laws have the property
for any , and so they form a convolution semigroup.
Proof.
This follows indeed from the real result, namely , established above, simply by taking real and imaginary parts. ∎
We have the following complex analogue of the CLT:
Theorem 6.14 (CCLT).
Given complex random variables which are i.i.d., centered, and with variance , we have, with , in moments,
where is the complex Gaussian law of parameter .
Proof.
This follows indeed from the real CLT, established above, simply by taking the real and imaginary parts of all the variables involved. ∎
Regarding now the moments, we use the general formalism from Definition 6.3, involving colored integer exponents . We say that a pairing is matching when it pairs symbols. With this convention, we have the following result:
Theorem 6.15.
The moments of the complex normal law are the numbers
where are the matching pairings of , and is the number of blocks.
Proof.
This is something well-known, which can be established as follows:
(1) As a first observation, by using a standard dilation argument, it is enough to do this at . So, let us first recall from the above that the moments of the real Gaussian law , with respect to integer exponents , are the following numbers:
Numerically, we have the following formula, explained as well in the above:
(2) We will show here that in what concerns the complex Gaussian law , similar results hold. Numerically, we will prove that we have the following formula, where a colored integer is called uniform when it contains the same number of and , and where is the length of such a colored integer:
Now since the matching partitions are counted by exactly the same numbers, and this for trivial reasons, we will obtain the formula in the statement, namely:
(3) This was for the plan. In practice now, we must compute the moments, with respect to colored integer exponents , of the variable in the statement:
As a first observation, in the case where such an exponent is not uniform in , a rotation argument shows that the corresponding moment of vanishes. To be more precise, the variable can be shown to be complex Gaussian too, for any , and from we obtain , in this case.
(4) In the uniform case now, where consists of copies of and copies of , the corresponding moment can be computed as follows:
(5) In order to finish now the computation, let us recall that we have the following formula, coming from the generalized binomial formula, or from the Taylor formula:
By taking the square of this series, we obtain the following formula:
Now by looking at the coefficient of on both sides, we conclude that the sum on the right equals . Thus, we can finish the moment computation in (4), as follows:
(6) As a conclusion, if we denote by the length of a colored integer , the moments of the variable in the statement are given by:
On the other hand, the numbers are given by exactly the same formula. Indeed, in order to have matching pairings of , our exponent must be uniform, consisting of copies of and copies of , with . But then the matching pairings of correspond to the permutations of the symbols, as to be matched with symbols, and so we have such matching pairings. Thus, we have the same formula as for the moments of , and we are led to the conclusion in the statement. ∎
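As a numerical complement to the above formula, in the simplest uniform case the moment of order k of each color should equal k!, the number of matching pairings. Here is a quick Monte Carlo check of this, with my own normalization of the standard complex Gaussian variable:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(4)
m = 2_000_000
z = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)   # standard complex Gaussian

for k in range(1, 5):
    print(k, factorial(k), round(np.mean(np.abs(z) ** (2 * k)), 2))       # E[|z|^(2k)] should be k!
```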
This was for the basic probability theory, which is in a certain sense advanced operator theory, inside the commutative von Neumann algebras, . We will be back to this, with some further limiting theorems, in chapter 8 below.
6c. Wigner matrices
Let us exit now the classical world, that of the commutative von Neumann algebras , and do as promised some random matrix theory. We recall that a random matrix algebra is a von Neumann algebra of type , and that we are interested in the computation of the laws of the operators , called random matrices. Regarding the precise classes of random matrices that we are interested in, first we have the complex Gaussian matrices, which are constructed as follows:
Definition 6.16.
A complex Gaussian matrix is a random matrix of type
which has i.i.d. complex normal entries.
We will see that the above matrices have an interesting, and “central” combinatorics, among all kinds of random matrices, with the study of the other random matrices being usually obtained as a modification of the study of the Gaussian matrices.
As a somewhat surprising remark, using real normal variables in Definition 6.16, instead of the complex ones appearing there, leads nowhere. The correct real versions of the Gaussian matrices are the Wigner random matrices, constructed as follows:
Definition 6.17.
A Wigner matrix is a random matrix of type
which has i.i.d. complex normal entries, up to the constraint .
In other words, a Wigner matrix must be as follows, with the diagonal entries being real normal variables, , for some , the upper diagonal entries being complex normal variables, , the lower diagonal entries being the conjugates of the upper diagonal entries, as indicated, and with all the variables being independent:
As a comment here, for many concrete applications the Wigner matrices are in fact the central objects in random matrix theory, and in particular, they are often more important than the Gaussian matrices. In fact, these are the random matrices which were first considered and investigated, a long time ago, by Wigner himself [wig].
Finally, we will be interested as well in the complex Wishart matrices, which are the positive versions of the above random matrices, constructed as follows:
Definition 6.18.
A complex Wishart matrix is a random matrix of type
with being a complex Gaussian matrix.
As before with the Gaussian and Wigner matrices, there are many possible comments that can be made here, of technical or historical nature. First, using real Gaussian variables instead of complex ones leads to a less interesting combinatorics. Also, these matrices were introduced and studied by Marchenko-Pastur not long after Wigner, in [mpa], and so historically came second. Finally, in what regards their combinatorics and applications, these matrices quite often come first, before both the Gaussian and the Wigner ones, with all this being of course a matter of knowledge and taste.
Summarizing, we have three main types of random matrices, which can be somehow designated as “complex”, “real” and “positive”, and that we will study in what follows. Let us also mention that there are many other interesting classes of random matrices, usually appearing as modifications of the above. More on these later.
In order to compute the asymptotic laws of the above matrices, we will use the moment method. We have the following result, which will be our main tool here:
Theorem 6.19.
Given independent variables , each following the complex normal law , with being a fixed parameter, we have the Wick formula
where and , for the joint moments of these variables.
Proof.
This is something well-known, and the basis for all possible computations with complex normal variables, which can be proved in two steps, as follows:
(1) Let us first discuss the case where we have a single complex normal variable , which amounts in taking for any in the formula in the statement. What we have to compute here are the moments of , with respect to colored integer exponents , and the formula in the statement tells us that these moments must be:
But this is something that we know well from the above, the idea being that at this follows by doing some combinatorics and calculus, in analogy with the combinatorics and calculus from the real case, where the moment formula is identical, save for the matching pairings being replaced by the usual pairings , and then that the general case follows from this, by rescaling. Thus, we are done with this case.
(2) In general now, the point is that we obtain the formula in the statement. Indeed, when expanding the product and rearranging the terms, we are left with doing a number of computations as in (1), and then making the product of the expectations that we found. But this amounts precisely to counting the partitions in the statement, with the condition there standing precisely for the fact that we are doing the various type (1) computations independently, and then making the product. ∎
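As a quick sanity check of the Wick formula, in the simplest cases: for two independent standard complex Gaussian variables, the fourth moment of the modulus of one of them should be 2 (two matching pairings), while the mixed moment should be 1 (one matching pairing, the one respecting the indices). The following Monte Carlo sketch, with my own normalizations, illustrates this:

```python
import numpy as np

rng = np.random.default_rng(7)
m = 2_000_000
f1 = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
f2 = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)

print(round(np.mean(f1 * f1 * f1.conj() * f1.conj()).real, 2))   # about 2: two matching pairings
print(round(np.mean(f1 * f2 * f1.conj() * f2.conj()).real, 2))   # about 1: one matching pairing
```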
Now by getting back to the Gaussian matrices, we have the following result, with standing for the noncrossing pairings of a colored integer :
Theorem 6.20.
Given a sequence of Gaussian random matrices
having independent variables as entries, for some fixed , we have
for any colored integer , in the limit.
Proof.
This is something standard, which can be done as follows:
(1) We fix , and we let . Let us first compute the trace of . With , and with the convention , we have:
(2) Next, we rescale our variable by a factor, as in the statement, and we also replace the usual trace by its normalized version, . Our formula becomes:
Thus, the moment that we are interested in is given by:
(3) Let us apply now the Wick formula, from Theorem 6.19. We conclude that the moment that we are interested in is given by the following formula:
(4) Our claim now is that in the limit the combinatorics of the above sum simplifies, with only the noncrossing partitions contributing to the sum, and with each of them contributing precisely with a 1 factor, so that we will have, as desired:
(5) In order to prove this, the first observation is that when is not uniform, in the sense that it contains a different number of , symbols, we have , and so:
(6) Thus, we are left with the case where is uniform. Let us examine first the case where consists of an alternating sequence of and symbols, as follows:
In this case it is convenient to relabel our multi-index , with , in the form . With this done, our moment formula becomes:
Now observe that, with being as above, we have an identification , obtained in the obvious way. With this done too, our moment formula becomes:
(7) We are now ready to do our asymptotic study, and prove the claim in (4). Let indeed be the full cycle, which is by definition the following permutation:
In terms of , the conditions and found above read:
Counting the number of free parameters in our moment formula, we obtain:
(8) The point now is that the last exponent is well-known to be , with equality precisely when the permutation is geodesic, which in practice means that must come from a noncrossing partition. Thus we obtain, in the limit, as desired:
This finishes the proof in the case of the exponents which are alternating, and the case where is an arbitrary uniform exponent is similar, by permuting everything. ∎
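Here is a rough numerical check of the alternating case of the above formula, at unit variance, where the noncrossing matching pairings of the alternating word of length 2k are counted by the Catalan number C_k. The normalizations and parameters below are my own, and the check is an illustration only:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(5)
N, samples = 300, 20

def alternating_moment(k):
    vals = []
    for _ in range(samples):
        Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
        W = (Z @ Z.conj().T) / N               # the rescaled matrix times its adjoint
        vals.append(np.trace(np.linalg.matrix_power(W, k)).real / N)
    return np.mean(vals)

for k in range(1, 5):
    print(k, comb(2 * k, k) // (k + 1), round(alternating_moment(k), 2))   # Catalan vs simulation
```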
As a conclusion to this, we have obtained as asymptotic law for the Gaussian matrices a certain mysterious distribution, having as moments some numbers which are similar to the moments of the usual normal laws, but with the “underlying matching pairings being now replaced by underlying matching noncrossing pairings”. More on this later.
Regarding now the Wigner matrices, we have here the following result, coming as a consequence of Theorem 6.20, via some simple algebraic manipulations:
Theorem 6.21.
Given a sequence of Wigner random matrices
having independent variables as entries, with , up to , we have
for any integer , in the limit.
Proof.
This can be deduced from a direct computation based on the Wick formula, similar to that from the proof of Theorem 6.20, but the best is to deduce this result from Theorem 6.20 itself. Indeed, we know from there that for Gaussian matrices we have the following formula, valid for any colored integer , in the limit, with standing for noncrossing matching pairings:
By doing some combinatorics, we deduce from this that we have the following formula for the moments of the matrices , with respect to usual exponents, :
Now since the matrices are of Wigner type, this gives the result. ∎
Summarizing, all this brings us into counting noncrossing pairings. So, let us start with some preliminaries here. We first have the following well-known result:
Theorem 6.22.
The Catalan numbers, which are by definition given by
satisfy the following recurrence formula, with initial data ,
their generating series satisfies the equation
and is given by the following explicit formula,
and we have the following explicit formula for these numbers:
Numerically, these numbers are
Proof.
We must count the noncrossing pairings of . Now observe that such a pairing appears by pairing 1 to an odd number, , and then inserting a noncrossing pairing of , and a noncrossing pairing of . We conclude that we have the following recurrence formula for the Catalan numbers:
In terms of the generating series , this recurrence formula reads:
Thus satisfies , and by solving this equation, and choosing the solution which is bounded at , we obtain the following formula:
In order to finish, we use the generalized binomial formula, which gives:
Now back to our series , we obtain the following formula for it:
It follows that the Catalan numbers are given by:
Thus, we are led to the conclusion in the statement. ∎
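As a small computational complement, the recurrence and the explicit formula above can be cross-checked directly, for instance as follows:

```python
from math import comb

C = [1]                                        # C_0 = 1
for k in range(10):
    C.append(sum(C[a] * C[k - a] for a in range(k + 1)))   # C_{k+1} = sum of C_a C_b over a + b = k

assert all(C[k] == comb(2 * k, k) // (k + 1) for k in range(11))
print(C[1:11])                                  # 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796
```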
In order to recapture now the Wigner measure from its moments, we can use:
Proposition 6.23.
The Catalan numbers are the even moments of
called standard semicircle law. As for the odd moments of , these all vanish.
Proof.
The even moments of the semicircle law in the statement can be computed with the change of variable , and we are led to the following formula:
As for the odd moments, these all vanish, because the density of is an even function. Thus, we are led to the conclusion in the statement. ∎
More generally, we have the following result, involving a parameter :
Proposition 6.24.
Given , the real measure having as even moments the numbers and having all odd moments is the measure
called Wigner semicircle law on .
Proof.
This follows indeed from Proposition 6.23, via a change of variables. ∎
Now by putting everything together, we obtain the Wigner theorem, as follows:
Theorem 6.25.
Given a sequence of Wigner random matrices
which by definition have i.i.d. complex normal entries, up to , we have
in the limit, where is the Wigner semicircle law.
Proof.
This follows indeed from all the above, and more specifically, by combining Theorem 6.21, Theorem 6.22 and Proposition 6.24. ∎
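Here is a numerical illustration of the Wigner theorem, a sketch with my own normalizations: the even moments of the spectral distribution of a large rescaled Wigner matrix should be close to the Catalan numbers, and the odd moments close to zero:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(6)
N = 1000
B = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
Z = (B + B.conj().T) / np.sqrt(2)              # a Wigner matrix, with unit variance entries
eigs = np.linalg.eigvalsh(Z / np.sqrt(N))      # spectrum of the rescaled matrix

for k in range(1, 5):
    print(2 * k, comb(2 * k, k) // (k + 1), round(np.mean(eigs ** (2 * k)), 2))   # Catalan vs simulation
print(round(np.mean(eigs ** 3), 2))            # an odd moment, close to 0
```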
Regarding now the complex Gaussian matrices, in view of this result, it is natural to think of the law found in Theorem 6.20 as being “circular”. But this is just a thought, and more on this later, in chapter 8 below, when doing free probability.
6d. Wishart matrices
Let us discuss now the Wishart matrices, which are the positive analogues of the Wigner matrices. Quite surprisingly, the computation here leads to the Catalan numbers, but not in the same way as for the Wigner matrices, the result being as follows:
Theorem 6.26.
Given a sequence of complex Wishart matrices
with being complex Gaussian of parameter , we have
for any exponent , in the limit.
Proof.
There are several possible proofs for this result, as follows:
(1) A first method is by using the formula that we have in Theorem 6.20, for the Gaussian matrices . Indeed, we know from there that we have the following formula, valid for any colored integer , in the limit:
With , alternating word of length , with , this gives:
Thus, in terms of the Wishart matrix we have, for any :
The point now is that, by doing some combinatorics, we have:
Thus, we are led to the formula in the statement.
(2) A second method, that we will explain now as well, is by proving the result directly, starting from definitions. The matrix entries of our matrix are given by:
Thus, the normalized traces of powers of are given by the following formula:
By rescaling now by a factor, as in the statement, we obtain:
By using now the Wick rule, we obtain the following formula for the moments, with , alternating word of length , and with :
In order to compute this quantity, we use the standard bijection . By identifying the pairings with their counterparts , we obtain:
Now let be the full cycle, which is by definition the following permutation:
The general factor in the product computed above is then 1 precisely when the following two conditions are simultaneously satisfied:
Counting the number of free parameters in our moment formula, we obtain:
The point now is that the last exponent is well-known to be , with equality precisely when the permutation is geodesic, which in practice means that must come from a noncrossing partition. Thus we obtain, in the limit:
Thus, we are led to the conclusion in the statement. ∎
As a consequence of the above result, we have a new look at the Catalan numbers, which is more adapted to our present Wishart matrix considerations, as follows:
Proposition 6.27.
The Catalan numbers appear as well as
where is the set of all noncrossing partitions of .
Proof.
This follows indeed from the proof of Theorem 6.26. Observe that we obtain as well a formula in terms of matching pairings of alternating colored integers. ∎
The direct explanation for the above formula, relating noncrossing partitions and pairings, comes from the following result, which is very useful, and good to know:
Proposition 6.28.
We have a bijection between noncrossing partitions and pairings
which is constructed as follows:
-
(1)
The application is the “fattening” one, obtained by doubling all the legs, and doubling all the strings as well.
-
(2)
Its inverse is the “shrinking” application, obtained by collapsing pairs of consecutive neighbors.
Proof.
The fact that the two operations in the statement are indeed inverse to each other is clear, by computing the corresponding two compositions, with the remark that the construction of the fattening operation requires the partitions to be noncrossing. ∎
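As a brute-force complement to the above bijection, one can enumerate the noncrossing partitions of k points and the noncrossing pairings of 2k points, and check that both are counted by the same Catalan numbers. A small enumeration sketch, with my own helper names, is as follows:

```python
from itertools import combinations

def partitions(points):
    # all set partitions of a list of points
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for size in range(len(rest) + 1):
        for others in combinations(rest, size):
            block = (first,) + others
            remaining = [p for p in rest if p not in others]
            for q in partitions(remaining):
                yield [block] + q

def noncrossing(p):
    # a crossing means a < b < c < d with a, c in one block and b, d in another
    for A in p:
        for B in p:
            if A is not B and any(a < b < c < d for a in A for c in A for b in B for d in B):
                return False
    return True

def count(k, pairings_only=False):
    total = 0
    for p in partitions(list(range(1, k + 1))):
        if pairings_only and not all(len(block) == 2 for block in p):
            continue
        if noncrossing(p):
            total += 1
    return total

for k in range(1, 5):
    print(k, count(k), count(2 * k, pairings_only=True))   # both columns give the Catalan number C_k
```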
Getting back now to probability, we are led to the question of finding the law having the Catalan numbers as moments, in the above way. The result here is as follows:
Proposition 6.29.
The real measure having the Catalan numbers as moments is
called Marchenko-Pastur law of parameter .
Proof.
The moments of the law in the statement can be computed with the change of variable , as follows:
Thus, we are led to the conclusion in the statement. ∎
Now back to the Wishart matrices, we are led to the following result:
Theorem 6.30.
Given a sequence of complex Wishart matrices
with being complex Gaussian of parameter , we have
with , with the limiting measure being the Marchenko-Pastur law .
Proof.
This follows indeed from Theorem 6.26 and Proposition 6.29. ∎
As a comment now, while the above result is definitely something interesting at , at general this looks more like a “fake” generalization of the result, because the law stays the same, modulo a trivial rescaling. The reasons behind this phenomenon are quite subtle, and skipping some discussion, the point is that Theorem 6.30 is indeed something “fake” at general , and the correct generalization of the computation, involving more general classes of complex Wishart matrices, is as follows:
Theorem 6.31.
Given a sequence of general complex Wishart matrices
with being complex Gaussian of parameter , we have
with , with the limiting measure being the Marchenko-Pastur law .
Proof.
This follows once again by using the moment method, the limiting moments in the regime being as follows, after doing the combinatorics:
But these numbers are the moments of the Marchenko-Pastur law , which in addition has the density given by the formula in the statement, and this gives the result. ∎
As a philosophical conclusion now, we have 4 main laws in what we have been doing so far, namely the Gaussian laws , the Poisson laws , the Wigner laws and the Marchenko-Pastur laws . These laws naturally form a diagram, as follows:
We will see in chapter 8 that appear as “free analogues” of , and that a full theory can be developed, with central limit theorems for all 4 laws, convolution semigroup results for all 4 laws, and Lie group type results as well. Also, we will be back to the random matrices, with further results about them.
6e. Exercises
There has been a lot of non-trivial combinatorics and calculus in this chapter, sometimes only briefly explained, and as an exercise on all this, we have:
Exercise 6.32.
Clarify all the details in connection with the Wigner and Marchenko-Pastur computations, first at , and then for general .
As before, these are things discussed in the above, but only briefly, this whole chapter having been just a modest introduction to the exciting subject of random matrices. In the hope that you will find some time, and do the exercise.
Chapter 7 Quantum spaces
7a. Gelfand theorem
We have seen that the von Neumann algebras are interesting objects, and it is tempting to go ahead with a systematic study of such algebras. This is what Murray and von Neumann did, when first coming across such algebras, back in the 1930s, in their series of papers [mv1], [mv2], [mv3], [vn1], [vn2], [vn3]. In what concerns us, we will rather keep this material for later, and talk instead, in this chapter and in the next one, of things which are perhaps more basic, motivated by the following definition:
Definition 7.1.
Given a von Neumann algebra , coming with a faithful positive unital trace , we write
and call a quantum probability space. We also write the trace as , and call it integration with respect to the uniform measure on .
Obviously, this is something exciting, and we have seen how some interesting theory can be developed along these lines in the simplest case, that of the random matrix algebras. Thus, all this needs a better understanding, before going ahead with the above-mentioned Murray-von Neumann theory. In order to get started, here are a few comments:
(1) Generally speaking, all this comes from the fact that the commutative von Neumann algebras are those of the form , with being a measured space. Since in the finite measure case, , the integration can be regarded as being a faithful positive unital trace , we are basically led to Definition 7.1.
(2) Regarding our assumption , making the integration bounded, this is something advanced, coming from deep classification results of von Neumann and Connes, which roughly state that “modulo classical measure theory, the study of the quantum measured spaces basically reduces to the case ”.
(3) Finally, the traciality of is something advanced too, again coming from those classification results of von Neumann and Connes, which in their more precise formulation state that “modulo classical measure theory, the study of the quantum measured spaces basically reduces to the case where , and is a trace”.
In short, complicated all this, and you will have to trust me here. Moving ahead now, there is one more thing to be discussed in connection with Definition 7.1, and this is physics. Let me formulate here the question that you surely have in mind:
Question 7.2.
As physicists we already agreed, without clear evidence, that our operators should be bounded. But what about quantum spaces, is it a good idea to assume that these are as above, of finite mass, and with tracial integration?
Well, this is certainly an interesting question. In favor of my choice, I would argue that the mathematical physics of Jones [jo1], [jo2], [jo3], [jo5], [jo6] and Voiculescu [vo1], [vo2], [vdn] needs a trace , as above. And the same goes for certain theoretical physics continuations of the main work of Connes [co3], as for instance the basic theory of the Standard Model spectral triple of Chamseddine-Connes, whose free gauge group has tracial Haar integration. Needless to say, all this is quite subjective. But hey, question of theoretical physics you asked, answer of theoretical physics is what you get.
Hang on, we are not done yet. Now that we are convinced that Definition 7.1 is the correct one, be that on mathematical or physical grounds, let us look for examples. And here the situation is quite grim, because even in the classical case, we have:
Fact 7.3.
The measure on a classical measured space cannot come out of nowhere, and is usually a Haar measure, appearing by theorem. Thus, in our picture
both the Hilbert space and the von Neumann algebra should appear by theorem, not by definition, contrary to what Definition 7.1 says.
To be more precise, in what regards the first assertion, this is certainly the case with simple objects like Lie groups, or spheres and other homogeneous spaces. Of course you might say that with the uniform measure is a measured space, but isn’t obtained by cutting the Lie group , with its Haar measure. And the same goes with with an arbitrary measure , or with being deformed into a curve, and so on, because that , or what is left from it, will always refer to the Haar measure of .
As for the second assertion, nothing much to comment here, mathematics has spoken. So, getting back now to Definition 7.1 as it is, it looks like we have two dead bodies there, the Hilbert space and the operator algebra . So let us try to get rid of at least one of them. But which? In the absence of any obvious idea, let us turn to physics:
Question 7.4.
In quantum mechanics, which came first, the Hilbert space , or the operator algebra ?
Unfortunately this question is as difficult as the one regarding the chicken and the egg. A look at what various physicists said on this matter, in a direct or indirect way, does not help much, and at the end of the day we are left with guidelines like “no one understands quantum mechanics” (Feynman), “shut up and compute” (Dirac) and so on. And all this, coming on top of what has already been said on Definition 7.1, of rather unclear nature, is probably too much. That is, the last drop, time to conclude:
Conclusion 7.5.
The theory of von Neumann algebras has the same peculiarity as quantum mechanics: it tends to self-destruct, when approached axiomatically.
And we will take this as good news, providing us with warm evidence that the theory of von Neumann algebras is indeed related to quantum mechanics. This is what matters, being on the right track, and difficulties and all the rest, we won’t be scared by them.
Back to business now, in practice, we must go back to chapter 5, and examine what we were saying right before introducing the von Neumann algebras. And at that time, we were talking about general operator algebras , closed with respect to the norm, but not necessarily with respect to the weak topology. But this suggests formulating the following definition, somewhat as a purely mathematical answer to Question 7.4:
Definition 7.6.
A -algebra is a complex algebra , given with:
-
(1)
A norm , making it into a Banach algebra.
-
(2)
An involution , related to the norm by the formula .
Here by Banach algebra we mean a complex algebra with a norm satisfying all the conditions for a vector space norm, along with and , and which is such that our algebra is complete, in the sense that the Cauchy sequences converge. As for the involution, this must be antilinear, antimultiplicative, and satisfying .
As basic examples, we have the operator algebra , for any Hilbert space , and more generally, the norm closed -subalgebras . It is possible to prove that any -algebra appears in this way, but this is a non-trivial result, called the GNS theorem, and more on this later. Note in passing that this result tells us that there is no need to memorize the above axioms for the -algebras, because these are simply the obvious things that can be said about , and its norm closed -subalgebras .
As a second class of basic examples, which are of great interest for us, we have:
Proposition 7.7.
If is a compact space, the algebra of continuous functions is a -algebra, with the usual norm and involution, namely:
This algebra is commutative, in the sense that , for any .
Proof.
All this is clear from definitions. Observe that we have indeed:
Thus, the axioms are satisfied, and finally is clear. ∎
In general, the -algebras can be thought of as being algebras of operators, over some Hilbert space which is not present. By using this philosophy, one can emulate spectral theory in this setting, with extensions of the various results from chapters 3,5:
Theorem 7.8.
Given element of a -algebra, define its spectrum as:
The following spectral theory results hold, exactly as in the case:
-
(1)
We have .
-
(2)
We have polynomial, rational and holomorphic calculus.
-
(3)
As a consequence, the spectra are compact and non-empty.
-
(4)
The spectra of unitaries and self-adjoints are on .
-
(5)
The spectral radius of normal elements is given by .
In addition, assuming , the spectra of with respect to and to coincide.
Proof.
This is something that we know from chapter 3, in the case , and then from chapter 5, in the case . In general, the proof is similar:
(1) Regarding the assertions (1-5), which are of course formulated a bit informally, the proofs here are perfectly similar to those for the full operator algebra . All this is standard material, and in fact, things in chapter 3 were written in such a way as for their extension now, to the general -algebra setting, to be obvious.
(2) Regarding the last assertion, we know this from chapter 5 for , and the proof in general is similar. Indeed, the inclusion is clear. For the converse, assume , and consider the following self-adjoint element:
The difference between the two spectra of is then given by:
Thus this difference is an open subset of . On the other hand being self-adjoint, its two spectra are both real, and so is their difference. Thus the two spectra of are equal, and in particular is invertible in , and so , as desired. ∎
We can now get back to the commutative -algebras, and we have the following result, due to Gelfand, which will be of crucial importance for us:
Theorem 7.9.
The commutative -algebras are exactly the algebras of the form
with the “spectrum” of such an algebra being the space of characters , with topology making continuous the evaluation maps .
Proof.
This is something that we basically know from chapter 5, but always good to talk about it again. Given a commutative -algebra , we can define as in the statement. Then is compact, and is a morphism of algebras, as follows:
(1) We first prove that is involutive. We use the following formula, which is similar to the formula for the usual complex numbers:
Thus it is enough to prove the equality for self-adjoint elements . But this is the same as proving that implies that is a real function, which is in turn true, because is an element of , contained in .
(2) Since is commutative, each element is normal, so is isometric:
(3) It remains to prove that is surjective. But this follows from the Stone-Weierstrass theorem, because is a closed subalgebra of , which separates the points. ∎
In view of the Gelfand theorem, we can formulate the following key definition:
Definition 7.10.
Given an arbitrary -algebra , we write
and call a compact quantum space.
This might look like something informal, but it is not. Indeed, we can define the category of compact quantum spaces to be the category of the -algebras, with the arrows reversed. When is commutative, the above space exists indeed, as a Gelfand spectrum, . In general, is something rather abstract, and our philosophy here will be that of studying of course , but formulating our results in terms of . For instance whenever we have a morphism , we will write , and rather speak of the corresponding morphism . And so on.
Technically speaking, we will see later that the above formalism has its limitations, and needs a fix. To be more precise, when looking at compact quantum spaces having a probability measure, there are more of them in the sense of Definition 7.10, than in the von Neumann algebra sense. Thus, all this needs a fix. But more on this later.
As a first concrete consequence of the Gelfand theorem, we have:
Proposition 7.11.
Assume that is normal, and let .
-
(1)
We can define , with being a morphism of -algebras.
-
(2)
We have the “continuous functional calculus” formula .
Proof.
Since is normal, the -algebra that it generates is commutative, so if we denote by the space formed by the characters , we have:
Now since the map given by evaluation at is bijective, we obtain:
Thus, we are dealing with usual functions, and this gives all the assertions. ∎
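In finite dimensions all this becomes very concrete: for a normal matrix, f(T) can be computed by applying f to the eigenvalues, and the spectrum of f(T) is the image of the spectrum of T. Here is a small numerical illustration of this, a toy check only:

```python
import numpy as np

rng = np.random.default_rng(8)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = (B + B.conj().T) / 2                       # self-adjoint, hence normal
w, v = np.linalg.eigh(T)                       # spectrum and eigenvectors of T

f = lambda x: np.exp(1j * x)                   # a continuous function on the spectrum
fT = (v * f(w)) @ v.conj().T                   # f(T), defined through the spectral decomposition

print(np.allclose(fT @ v, v * f(w)))           # the eigenvalues of f(T) are the f of the eigenvalues of T
print(np.allclose(fT @ fT.conj().T, np.eye(4)))   # f has modulus 1 on the spectrum, so f(T) is unitary
```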
As another consequence of the Gelfand theorem, we have:
Proposition 7.12.
For a normal element , the following are equivalent:
-
(1)
is positive, in the sense that .
-
(2)
, for some satisfying .
-
(3)
, for some .
Proof.
This is very standard, exactly as in case, as follows:
Since is well-defined on , we can set .
This is trivial, because we can set .
We proceed by contradiction. By multiplying by a suitable element of , we are led to the existence of an element satisfying . By writing now with we have:
Thus , contradicting the fact that must coincide outside . ∎
Let us clarify now the relation between -algebras and von Neumann algebras. In order to do so, we need to prove a key result, called the GNS representation theorem, stating that any -algebra appears as an operator algebra. As a first result, we have:
Proposition 7.13.
Let be a commutative -algebra, write , with being a compact space, and let be a positive measure on . We have then
where , with corresponding to the operator .
Proof.
Given a continuous function , consider the operator , on . Observe that is indeed well-defined, and bounded as well, because:
The application being linear, involutive, continuous, and injective as well, we obtain in this way a -algebra embedding , as claimed. ∎
In order to prove the GNS representation theorem, we must extend the above construction, to the case where is not necessarily commutative. Let us start with:
Definition 7.14.
Consider a -algebra .
-
(1)
is called positive when .
-
(2)
is called faithful and positive when .
In the commutative case, , the positive elements are the positive functions, . As for the positive linear forms , these appear as follows, with being positive, and strictly positive if we want to be faithful and positive:
In general, the positive linear forms can be thought of as being integration functionals with respect to some underlying “positive measures”. We can use them as follows:
Proposition 7.15.
Let be a positive linear form.
-
(1)
defines a generalized scalar product on .
-
(2)
By separating and completing we obtain a Hilbert space .
-
(3)
defines a representation .
-
(4)
If is faithful in the above sense, then is faithful.
Proof.
Almost everything here is straightforward, as follows:
(1) This is clear from definitions, and from the basic properties of the positive elements , which can be established exactly as in the case.
(2) This is a standard procedure, which works for any scalar product, the idea being that of dividing by the vectors satisfying , then completing.
(3) All the verifications here are standard algebraic computations, in analogy with what we have seen many times, for multiplication operators, or group algebras.
(4) Assuming that we have , we have then , which in turn implies by faithfulness that we have , which gives the result. ∎
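As a toy illustration of the above construction, in the simplest tracial case, take the algebra of 2×2 matrices with its normalized trace: the GNS space is the algebra itself with the scalar product coming from the trace, and the representation is by left multiplication. A small numerical check, with my own notations:

```python
import numpy as np

rng = np.random.default_rng(9)
rand = lambda: rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
a, xi, eta = rand(), rand(), rand()

tr = lambda x: np.trace(x) / 2                 # the normalized trace, our positive linear form
scal = lambda x, y: tr(y.conj().T @ x)         # the generalized scalar product from (1)

# left multiplication by a is the GNS representation; its adjoint is left multiplication by a*
print(np.isclose(scal(a @ xi, eta), scal(xi, a.conj().T @ eta)))
# faithfulness of the trace: the scalar product of a nonzero element with itself is positive
print(scal(a, a).real > 0)
```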
In order to establish the embedding theorem, it remains to prove that any -algebra has a faithful positive linear form . This is something more technical:
Proposition 7.16.
Let be a -algebra.
-
(1)
Any positive linear form is continuous.
-
(2)
A linear form is positive iff there is a norm one such that .
-
(3)
For any there exists a positive norm one form such that .
-
(4)
If is separable there is a faithful positive form .
Proof.
The proof here is quite technical, inspired by the existence proof of the probability measures on abstract compact spaces, the idea being as follows:
(1) This follows from Proposition 7.15, via the following estimate:
(2) In one sense we can take . Conversely, let , . We have:
Thus we have , and with we obtain:
Thus , and so , so we can assume . Now observe that for any self-adjoint element , and any we have, with :
Thus we have , and this finishes the proof of our remaining claim.
(3) We can set on the linear space spanned by , then extend this functional by Hahn-Banach, to the whole . The positivity follows from (2).
(4) This is standard, by starting with a dense sequence , and taking the Cesàro limit of the functionals constructed in (3). We have , and we are done. ∎
With these ingredients in hand, we can now state and prove:
Theorem 7.17.
Any -algebra appears as a norm closed -algebra of operators
over a certain Hilbert space . When is separable, can be taken to be separable.
Proof.
This result, called the GNS representation theorem after Gelfand, Naimark and Segal, follows indeed by combining Proposition 7.15 with Proposition 7.16. ∎
All this might seem quite surprising, and your first reaction would be to say: what have we been doing here, with our -algebra theory, because we are now back to operator algebras , and everything that we did with -algebras, extending things that we knew about operator algebras , looks more like a waste of time.
Error. The axioms in Definition 7.6, coupled with the writing in Definition 7.10, are something powerful, because they do not involve any kind of or functions on our quantum spaces . Thus, we can start hunting for such spaces, just by defining -algebras with generators and relations, then look for Haar measures on such spaces, and use the GNS construction in order to reach von Neumann algebras. Before getting into this, however, let us summarize the above discussion as follows:
Theorem 7.18.
We can talk about compact quantum measured spaces, as follows:
-
(1)
The category of compact quantum measured spaces is the category of the -algebras with faithful traces , with the arrows reversed.
-
(2)
In the case where we have a non-faithful trace , we can still talk about the corresponding space , by performing the GNS construction.
-
(3)
By taking the weak closure in the GNS representation, we obtain the von Neumann algebra , in the previous general measured space sense.
Proof.
All this follows from Theorem 7.17, and from the other things that we already know, with the whole result itself being something rather philosophical. ∎
7b. Tori, amenability
In the remainder of this chapter we explore the whole new world opened by the -algebra theory, with the study of several key examples. We will first discuss the group duals, also called noncommutative tori. Let us start with a well-known result:
Theorem 7.19.
The compact abelian groups are in correspondence with the discrete abelian groups , via Pontrjagin duality,
with the dual of a locally compact group being the locally compact group consisting of the continuous group characters .
Proof.
This is something very standard, the idea being that, given a group as above, its continuous characters form indeed a group, that we can call . The correspondence constructed in this way has then the following properties:
(1) We have . This is the basic computation to be performed, before anything else, and which is something algebraic, with roots of unity.
(2) More generally, the dual of a finite abelian group is the group itself. This comes indeed from (1) and from .
(3) At the opposite end now, that of the locally compact groups which are not compact, nor discrete, the main example, which is standard, is .
(4) Getting now to what we are interested in, it follows from the definition of the correspondence that when is compact is discrete, and vice versa.
(5) Finally, in order to understand this latter phenomenon, the best is to work out the main pair of examples, which are and . ∎
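As an illustration of step (1) above, here is a minimal numerical sketch, in Python, of the fact that the characters of the cyclic group Z_n are the evaluations at the n-th roots of unity, and that they form a group isomorphic to Z_n again. The choice n = 6, and the use of numpy, are our own, purely for illustration:

import numpy as np

n = 6
# The characters of Z_n: chi_j(k) = exp(2*pi*i*j*k/n), for j = 0, ..., n-1.
chars = np.array([[np.exp(2j * np.pi * j * k / n) for k in range(n)]
                  for j in range(n)])

# Each chi_j is a group character: chi_j(k + l) = chi_j(k) * chi_j(l), modulo n.
j, k, l = 2, 3, 4
assert np.isclose(chars[j, (k + l) % n], chars[j, k] * chars[j, l])

# The pointwise product of characters is again a character: chi_j * chi_m = chi_{j+m},
# so the dual group is isomorphic to Z_n itself.
j, m = 2, 5
assert np.allclose(chars[j] * chars[m], chars[(j + m) % n])
print("the dual of Z_%d is again Z_%d, with %d characters" % (n, n, n))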
Our claim now is that, by using operator algebra theory, we can talk about the dual of any discrete group . Let us start our discussion in the von Neumann algebra setting, where things are particularly simple. We have here:
Theorem 7.20.
Given a discrete group , we can construct its von Neumann algebra
by using the left regular representation. This algebra has a faithful positive trace, , and when is abelian we have an isomorphism of tracial von Neumann algebras
given by a Fourier type transform, where is the compact dual of .
Proof.
There are many assertions here, the idea being as follows:
(1) The first part is standard, with the left regular representation of working as expected, and being a unitary representation, as follows:
(2) The positivity of the trace comes from the following alternative formula for it, with the equivalence with the definition in the statement being clear:
(3) The third part is standard as well, because when is abelian the algebra is commutative, and its spectral decomposition leads by delinearization to the group characters , and so the dual group , as indicated.
(4) Finally, the fact that our isomorphism transforms the trace of into the Haar integration functional of is clear. Moreover, the study of various examples shows that what we constructed is in fact the Fourier transform, in its various incarnations. ∎
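As a concrete illustration, in the simplest case, that of a finite cyclic group, the left regular representation is generated by the cyclic shift matrix, the elements of the group algebra are the circulant matrices, and the Fourier transform diagonalizes them all at once. Here is a minimal numerical sketch of this, in Python, with the conventions and normalizations being our own:

import numpy as np

n = 5
# Left regular representation of Z_n: the generator acts as the cyclic shift.
shift = np.roll(np.eye(n), 1, axis=0)

# A generic element of the group algebra C[Z_n]: a combination of powers of the shift.
coeffs = np.random.default_rng(0).standard_normal(n)
a = sum(c * np.linalg.matrix_power(shift, k) for k, c in enumerate(coeffs))

# The trace <a delta_e, delta_e> picks out the coefficient of the identity element.
assert np.isclose(a[0, 0], coeffs[0])

# The Fourier transform diagonalizes all such elements simultaneously.
F = np.array([[np.exp(-2j * np.pi * j * k / n) for k in range(n)]
              for j in range(n)]) / np.sqrt(n)
d = F @ a @ F.conj().T
assert np.allclose(d, np.diag(np.diag(d)))
print("the group algebra of Z_%d is diagonalized, eigenvalues" % n, np.round(np.diag(d), 3))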
Getting back now to our quantum space questions, we have a beginning of answer, because based on the above, we can formulate the following definition:
Definition 7.21.
Given a discrete group , not necessarily abelian, we can construct its abstract dual as a quantum measured space, via the following formula:
In the case where happens to be abelian, this quantum space is a classical space, namely the usual Pontrjagin dual of , endowed with its Haar measure.
Let us discuss now the same questions, in the -algebra setting. The situation here is more complicated than in the von Neumann algebra setting, as follows:
Proposition 7.22.
Associated to any discrete group are several group -algebras,
which are constructed as follows:
-
(1)
is the closure of the group algebra , with involution , with respect to the maximal -seminorm on this -algebra, which is a -norm.
-
(2)
is the norm closure of the group algebra in the left regular representation, on the Hilbert space , given by and linearity.
-
(3)
can be any intermediate -algebra, but for best results, the indexing object must be a unitary group representation, satisfying .
Proof.
This is something quite technical, with (2) being very similar to the von Neumann algebra construction from Theorem 7.20, with (1) being something new, with the norm property there coming from (2), and finally with (3) being an informal statement, that we will comment on later, once we know about compact quantum groups. ∎
When is finite, or abelian, or more generally amenable, all the above group algebras coincide. In the abelian case, that we are particularly interested in here, the precise result is as follows, complementing the analysis from Theorem 7.20:
Theorem 7.23.
When is abelian all its group -algebras coincide, and we have an isomorphism as follows, given by a Fourier type transform,
where is the compact dual of . Moreover, this isomorphism transforms the standard group algebra trace into the Haar integration of .
Proof.
Since is abelian, any of its group -algebras is commutative. Thus, we can apply the Gelfand theorem, and we obtain , with . But the spectrum , consisting of the characters , can be identified by delinearizing with the Pontrjagin dual , and this gives the results. ∎
At a more advanced level now, we have the following result:
Theorem 7.24.
For a discrete group , the following conditions are equivalent, and if they are satisfied, we say that is amenable:
-
(1)
The projection map is an isomorphism.
-
(2)
The morphism given by factorizes through .
-
(3)
We have , the spectrum being taken inside .
The amenable groups include all finite groups, and all abelian groups. As a basic example of a non-amenable group, we have the free group , with .
Proof.
There are several things to be proved, the idea being as follows:
(1) The implication is trivial, and comes from the following computation, which shows that is not invertible inside :
As for , this is something more advanced, that we will not need for the moment. We will be back to this later, directly in a more general setting.
(2) The fact that any finite group is amenable is clear, because all the group -algebras are equal to the usual group -algebra , in this case. As for the case of the abelian groups, these are all amenable as well, as shown by Theorem 7.23.
(3) It remains to prove that with is not amenable. By using , it is enough to do this at . So, consider the free group . In order to prove that is not amenable, we use . To be more precise, it is enough to show that 4 is not in the spectrum of the following operator:
This is a sum of four terms, each of them acting via , with being a certain length one word. Thus if has length then is a sum of four Dirac masses, three of them at words of length and the remaining one at a length word. We can therefore decompose as a sum , where adds and cuts:
That is, if is a word, say beginning with , then act on as follows:
It follows from definitions that we have . We can use the following trick:
Indeed, this gives , and we obtain in this way:
Let be a word, say beginning with . We have then:
The action of on the remaining vector is computed as follows:
Summing up, with being the projection onto , we have:
On the other hand we have , so the subspace is invariant under the operator . We have the following norm estimate:
The norm of is equal to , and the other norm is estimated as follows:
Thus we have , and this finishes the proof. ∎
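For the reader wanting to see the above non-amenability estimate at work, here is a minimal numerical sketch, in Python. The operator in question is the adjacency operator of the Cayley graph of F_2, which is the 4-regular tree, and truncating the graph to a ball of finite radius produces lower bounds for its norm, which stay below 4, and in fact converge to Kesten's value 2√3 ≈ 3.46. The radius chosen, and the brute-force dense eigenvalue computation, are our own simplifications:

import numpy as np

# Ball of radius R in the Cayley graph of F_2 (the 4-regular tree):
# the root has 4 neighbours, every other vertex has 3 new neighbours.
R = 6
edges = []
frontier = [0]
n_vertices = 1
for depth in range(R):
    new_frontier = []
    for v in frontier:
        n_children = 4 if v == 0 else 3
        for _ in range(n_children):
            w = n_vertices
            n_vertices += 1
            edges.append((v, w))
            new_frontier.append(w)
    frontier = new_frontier

A = np.zeros((n_vertices, n_vertices))
for v, w in edges:
    A[v, w] = A[w, v] = 1.0

top = np.linalg.eigvalsh(A).max()
print("radius", R, ":", n_vertices, "vertices, top eigenvalue =", round(top, 4))
print("limiting norm 2*sqrt(3) =", round(2 * np.sqrt(3), 4), ", which stays below 4")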
7c. Quantum groups
The duals of discrete groups have several similarities with the compact groups, and our goal now will be that of unifying these two classes of compact quantum spaces. Let us start with the following definition, due to Woronowicz [wo1]:
Definition 7.25.
A Woronowicz algebra is a -algebra , given with a unitary matrix whose coefficients generate , such that the formulae
define morphisms of -algebras , , .
We say that is cocommutative when , where is the flip. We have the following result, which justifies the terminology and axioms:
Proposition 7.26.
The following are Woronowicz algebras:
-
(1)
, with compact Lie group. Here the structural maps are:
-
(2)
, with finitely generated group. Here the structural maps are:
Moreover, we obtain in this way all the commutative/cocommutative algebras.
Proof.
In both cases, we have to exhibit a certain matrix . For the first assertion, we can use the matrix formed by matrix coordinates of , given by:
As for the second assertion, here we can use the diagonal matrix formed by generators, . Finally, the last assertion follows from the Gelfand theorem, in the commutative case, and in the cocommutative case, we will be back to this later. ∎
In general now, the structural maps have the following properties:
Proposition 7.27.
Let be a Woronowicz algebra.
-
(1)
satisfy the usual axioms for a comultiplication and a counit, namely:
-
(2)
satisfies the antipode axiom, on the -subalgebra generated by entries of :
-
(3)
In addition, the square of the antipode is the identity, .
Proof.
When is commutative, by using Proposition 7.26 we can write:
The above 3 conditions then come by transposition from the 3 basic group theory conditions satisfied by , which are as follows, with :
Observe that is satisfied as well, coming from . In general now, all the formulae in the statement are satisfied on the generators , and so by linearity, multiplicativity and continuity they are satisfied everywhere, as desired. ∎
In view of Proposition 7.26, we can formulate the following definition:
Definition 7.28.
Given a Woronowicz algebra , we formally write
and call compact quantum group, and discrete quantum group.
When is both commutative and cocommutative, is a compact abelian group, is a discrete abelian group, and these groups are dual to each other:
In general, we still agree to write , in a formal sense. Finally, in relation with functoriality issues, let us complement Definitions 7.25 and 7.28 with:
Definition 7.29.
Given two Woronowicz algebras and , we write
and we identify as well the corresponding compact and discrete quantum groups, when we have an isomorphism of -algebras , mapping .
In order to develop now some theory, let us call corepresentation of any unitary matrix , with , satisfying the same conditions as , namely:
These can be thought of as corresponding to the unitary representations of the underlying compact quantum group . Following Woronowicz [wo1], we have:
Theorem 7.30.
Any Woronowicz algebra has a unique Haar integration functional,
which can be constructed by starting with any faithful positive form , and setting
where . Moreover, for any corepresentation we have
where is the orthogonal projection onto .
Proof.
Following [wo1], this can be done in 3 steps, as follows:
(1) Given , our claim is that the following limit converges, for any :
Indeed, by linearity we can assume that is the coefficient of corepresentation, . But in this case, an elementary computation shows that we have the following formula, where is the orthogonal projection onto the -eigenspace of :
(2) Since implies , we have , where is the orthogonal projection onto the space . The point now is that when is faithful, by using a positivity trick, one can prove that we have . Thus our linear form is independent of , and is given on coefficients by:
(3) With the above formula in hand, the left and right invariance of is clear on coefficients, and so in general, and this gives all the assertions. See [wo1]. ∎
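In the simplest classical case, that of a finite abelian group, the above Cesàro construction simply says that the Cesàro averages of the convolution powers of a faithful probability measure converge to the uniform, Haar measure. Here is a minimal numerical sketch of this, in Python, with the group Z_7, the starting measure and the number of iterations being our own choices:

import numpy as np

n = 7
# A faithful positive form on C(Z_n) = a strictly positive probability vector.
phi = np.random.default_rng(0).random(n) + 0.1
phi /= phi.sum()

def convolve(p, q, n):
    # Convolution of two measures on the cyclic group Z_n.
    r = np.zeros(n)
    for a in range(n):
        for b in range(n):
            r[(a + b) % n] += p[a] * q[b]
    return r

# Cesaro average of the convolution powers phi, phi*phi, phi*phi*phi, ...
power, cesaro = phi.copy(), np.zeros(n)
K = 200
for _ in range(K):
    cesaro += power / K
    power = convolve(power, phi, n)

print("Cesaro average of convolution powers:", np.round(cesaro, 4))
print("Haar (uniform) measure on Z_%d:       " % n, np.round(np.full(n, 1 / n), 4))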
As a main application, we can develop a Peter-Weyl type theory for the corepresentations of . Consider the dense -subalgebra generated by the coefficients of the fundamental corepresentation , and endow it with the following scalar product:
With this convention, we have the following result, also from Woronowicz [wo1]:
Theorem 7.31.
We have the following Peter-Weyl type results:
-
(1)
Any corepresentation decomposes as a sum of irreducible corepresentations.
-
(2)
Each irreducible corepresentation appears inside a certain .
-
(3)
, the summands being pairwise orthogonal.
-
(4)
The characters of irreducible corepresentations form an orthonormal system.
Proof.
All these results are from [wo1], the idea being as follows:
(1) Given , its intertwiner algebra is a finite dimensional -algebra, and so decomposes as . But this gives a decomposition of type , as desired.
(2) Consider indeed the Peter-Weyl corepresentations, with colored integer, defined by , , and multiplicativity. The coefficients of these corepresentations span the dense algebra , and by using (1), this gives the result.
(3) Here the direct sum decomposition, which is technically a -coalgebra isomorphism, follows from (2). As for the second assertion, this follows from the fact that is the orthogonal projection onto the space , for any corepresentation .
(4) Let us define indeed the character of to be the matrix trace, . Since this character is a coefficient of , the orthogonality assertion follows from (3). As for the norm 1 claim, this follows once again from . ∎
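In the classical case of the symmetric group S_3, assertion (4) above is the usual orthonormality of the irreducible characters with respect to the Haar measure. Here is a minimal numerical check of this, in Python, with the character table of S_3 hard-coded, as an assumption that the reader can verify independently:

import numpy as np
from itertools import permutations

# Classical case G = S_3: the irreducible characters, evaluated on each permutation.
def cycle_type(p):
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; c += 1
            lengths.append(c)
    return tuple(sorted(lengths, reverse=True))

# Character table of S_3, indexed by cycle type: trivial, sign, 2-dimensional.
table = {
    (1, 1, 1): [1, 1, 2],
    (2, 1):    [1, -1, 0],
    (3,):      [1, 1, -1],
}
G = list(permutations(range(3)))
chars = np.array([[table[cycle_type(p)][r] for p in G] for r in range(3)], float)

# Orthonormality with respect to the Haar (uniform) measure on S_3.
gram = chars @ chars.T / len(G)
print(np.round(gram, 6))   # this is the identity matrix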
We can now solve a problem that we left open before, namely:
Proposition 7.32.
The cocommutative Woronowicz algebras appear as the quotients
given by with , with being a discrete group.
Proof.
This follows from the Peter-Weyl theory, and clarifies a number of things said before, notably in Proposition 7.26. Indeed, for a cocommutative Woronowicz algebra the irreducible corepresentations are all 1-dimensional, and this gives the results. ∎
As another consequence of the above results, once again by following Woronowicz [wo1], we have the following statement, dealing with functional analysis aspects, and extending what we already knew about the -algebras of the usual discrete groups:
Theorem 7.33.
Let be the enveloping -algebra of , and be the quotient of by the null ideal of the Haar integration. The following are then equivalent:
-
(1)
The Haar functional of is faithful.
-
(2)
The projection map is an isomorphism.
-
(3)
The counit map factorizes through .
-
(4)
We have , the spectrum being taken inside .
If this is the case, we say that the underlying discrete quantum group is amenable.
Proof.
This is well-known in the group dual case, , with being a usual discrete group. In general, the result follows by adapting the group dual case proof:
This simply follows from the fact that the GNS construction for the algebra with respect to the Haar functional produces the algebra .
Here is trivial, and conversely, a counit map produces an isomorphism , via a formula of type . See [wo1].
Here is clear, coming from , and the converse can be proved by doing some functional analysis. Once again, we refer here to [wo1]. ∎
Let us discuss now some interesting examples. Following Wang [wan], we have:
Proposition 7.34.
The following universal algebras are Woronowicz algebras,
so the underlying spaces are compact quantum groups.
Proof.
This follows from the elementary fact that if a matrix is orthogonal or biunitary, then so must be the following matrices:
Thus, we can indeed define morphisms as in Definition 7.25, by using the universal properties of , , and this gives the result. ∎
There is a connection here with group duals, coming from:
Proposition 7.35.
Given a closed subgroup , consider its “diagonal torus”, which is the closed subgroup constructed as follows:
This torus is then a group dual, , where is the discrete group generated by the elements , which are unitaries inside .
Proof.
Since is unitary, its diagonal entries are unitaries inside . Moreover, from we obtain, when passing inside the quotient:
It follows that we have , modulo identifying as usual the -completions of the various group algebras, and so that we have , as claimed. ∎
With this notion in hand, we have the following result:
Theorem 7.36.
The diagonal tori of the basic rotation groups are as follows,
where is the free group on generators, and is a group-theoretical free product.
Proof.
This is clear indeed from , and the other results can be obtained by imposing on the generators of the relations defining the corresponding quantum groups. ∎
As a conclusion to all this, the -algebra theory suggests developing a theory of “noncommutative geometry”, covering both the classical and the free geometry, by using compact quantum groups. We will be back to this in chapter 8.
7d. Cuntz algebras
We would like to end this chapter with an interesting class of -algebras, discovered by Cuntz in [cun], and heavily used since then, for various technical purposes. These algebras are not obviously related to the quantum space program that we have been developing so far, and might even look like some sort of Devil’s invention, orthogonal to what is beautiful in operator algebras, but believe me, if you are planning to do some serious operator algebra work, you will certainly run into them. Their definition is as follows:
Definition 7.37.
The Cuntz algebra is the -algebra generated by isometries satisfying the following condition:
That is, is generated by isometries whose ranges sum up to .
Observe that must be infinite dimensional, in order to have isometries as above. In what follows we will prove that is independent of the choice of such isometries, and also that this algebra is simple. We will restrict the attention to the case , the proof in general being similar. Let us start with some simple computations, as follows:
Proposition 7.38.
Given a word with , we associate to it the element of the algebra . Then are isometries, and we have
for any two words having the same length.
Proof.
We use the relations defining the algebra , namely:
The fact that are isometries is clear, here being the check for :
Regarding the last assertion, by induction we just have to establish the formula there for the words of length 1. That is, we want to prove the following formulae:
But these two formulae follow from the fact that the projections satisfy by definition . Indeed, we have the following computation:
Thus, we have the first formula, and the proof of the second one is similar. ∎
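Regarding the above relations, observe that they cannot hold for square matrices of finite size, the ranges of the isometries being too big, in agreement with the remark made after Definition 7.37, that the Hilbert space must be infinite dimensional. However, one can model a finite "corner" of the standard representation on l^2(N), where S_1 e_k = e_{2k} and S_2 e_k = e_{2k+1}, by using rectangular truncations. Here is a minimal numerical sketch of this, in Python, with the truncation size being our own choice, and with the understanding that this is only an illustration:

import numpy as np

# Truncation of the standard model on l^2(N): S_1 e_k = e_{2k}, S_2 e_k = e_{2k+1}.
# On a finite corner these become rectangular isometries mapping C^N into C^{2N}.
N = 8
S1 = np.zeros((2 * N, N))
S2 = np.zeros((2 * N, N))
for k in range(N):
    S1[2 * k, k] = 1.0
    S2[2 * k + 1, k] = 1.0

assert np.allclose(S1.T @ S1, np.eye(N)) and np.allclose(S2.T @ S2, np.eye(N))  # isometries
assert np.allclose(S1.T @ S2, 0)                                                # orthogonal ranges
assert np.allclose(S1 @ S1.T + S2 @ S2.T, np.eye(2 * N))                        # ranges sum to 1
print("the Cuntz relations hold on the truncated corner")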
We can use the formulae in Proposition 7.38 as follows:
Proposition 7.39.
Consider words in , meaning products of .
-
(1)
Each word in is of form or for some words .
-
(2)
Words of type with form a system of matrix units.
-
(3)
The algebra generated by matrix units in (2) is a subalgebra of .
Proof.
Here the first two assertions follow from the formulae in Proposition 7.38, and for the last assertion, we can use the following formula:
Thus, we obtain an embedding of algebras , as in the statement. ∎
Observe now that the embedding constructed in (3) above is compatible with the matrix unit systems in (2). Consider indeed the following diagram:
With the notation , the inclusion on the right is given by:
Thus, with standard tensor product notations, the inclusion on the right is the canonical inclusion , and so the above diagram becomes:
The passage from the algebra coming from this observation to the full algebra that we are interested in can be done by using:
Proposition 7.40.
Each element decomposes as a finite sum
where each is in the union of algebras .
Proof.
By linearity and by using Proposition 7.39 we may assume that is a nonzero word, say . In the case we can set and we are done. Otherwise, we just have to add at left or at right terms of the form . For instance is equal to , and we can take . ∎
We must show now that the decomposition found above is unique, and then prove that each application has good continuity properties. The following formulae show that in both problems we may restrict attention to the case :
In order to solve these questions, we use the following fact:
Proposition 7.41.
If is a nonzero projection in , its -th average, given by the formula
is a nonzero projection in having the property that the linear subspace is isomorphic to a matrix algebra, and is an isomorphism of onto it.
Proof.
We know that the words of form with are a system of matrix units in . We apply to them the map , and we obtain:
The output being a system of matrix units, is an isomorphism from the algebra of matrices to another algebra of matrices , and this gives the result. ∎
Thus any map behaves well on the part of the decomposition on . It remains to find such that destroys all terms, and we have here:
Proposition 7.42.
Assuming , there is a nonzero projection such that , where is the -th average of .
Proof.
We want to map to zero all terms in the decomposition of , except for . Let us call the terms to be destroyed. We want the following equalities to hold, with the sum over all pairs of length indices:
The simplest way is to look for such that all terms of all sums are :
By multiplying to the left by and to the right by , we want to have:
With , where belongs to some new index set, we want to have:
Since , we can write with , and we want:
In order to do this, we can use the projections of the form . We want:
Let be the biggest length of all . Assume that we have fixed , of length bigger than . If the above product is nonzero then both and must be nonzero, which gives the following equalities of words:
Assuming that these equalities hold indeed, the above product reduces as follows:
Now if this product is nonzero, the middle term must be nonzero:
In order for this to hold, the indices starting from the middle to the right must be equal to the indices starting from the middle to the left. Thus must be periodic, of period . But this is certainly possible, because we can take any aperiodic infinite word, and let be the sequence of first letters, with big enough. ∎
We can now start solving our problems. We first have:
Proposition 7.43.
The decomposition of is unique, and we have
for any .
Proof.
It is enough to do this for . But this follows from the previous result, via the following sequence of equalities and inequalities:
Thus we got the inequality in the statement. As for the uniqueness part, this follows from the fact that is an isomorphism. ∎
Remember now that we want to prove that the Cuntz algebra does not depend on the choice of the isometries . In order to do so, let be the completion of the -algebra with respect to the biggest -norm. We have:
Proposition 7.44.
We have the equivalence
valid for any element .
Proof.
Assume for any , and choose a sequence with . For we define a representation in the following way:
We have then for any element . We fix norm one vectors and we consider the following continuous functions :
From we get, with respect to the usual sup norm of :
Each can be decomposed, and is given by the following formula:
This is a Fourier type expansion of , that we can write in the following way:
By using Proposition 7.43 we obtain that with , we have:
On the other hand we have with . Thus all Fourier coefficients of are zero, so . With this gives the following equality:
This is true for arbitrary norm one vectors , so and we are done. ∎
We can now formulate the Cuntz theorem, from [cun], as follows:
Theorem 7.45 (Cuntz).
Let be isometries satisfying .
-
(1)
The -algebra generated by does not depend on the choice of .
-
(2)
For any nonzero there are with .
-
(3)
In particular is simple.
Proof.
This basically follows from the various results established above:
(1) Consider the canonical projection map . We know that is surjective, and we will prove now that is injective. Indeed, if then for any . But is in the dense -algebra , so it can be regarded as an element of , and with this identification, we have in . Thus for any , so . Thus is an isomorphism. On the other hand depends only on , and the above formulae in , for algebraic calculus and for decomposition of an arbitrary , show that does not depend on the choice of . Thus, we obtain the result.
(2) Choose a sequence with . We have the following formula:
Thus implies . By linearity we can assume that we have:
Now choose a positive element which is close enough to :
Since is norm decreasing, we have the following estimate:
We apply Proposition 7.42 to our positive element . We obtain in this way a certain projection such that belongs to a certain matrix algebra. We have , so we can diagonalize this latter element, as follows:
Here are positive numbers and are minimal projections in the matrix algebra. Now since , there must be an eigenvalue greater than :
By linear algebra, we can pass from a minimal projection to another:
The element has norm , and we get the following inequality:
The last term can be computed by using the diagonalization of , as follows:
From we get , and we obtain the following estimate:
Thus is invertible, say with inverse , and we have .
(3) This is clear from the formula established in (2). ∎
7e. Exercises
We have seen many things in this chapter, and there are many potential exercises, on all this. We will however be brief, and as a unique, key exercise, we have:
Exercise 7.46.
Work out the proof of the existence result for the Haar measure on a compact group , as a particular case of the result proved for quantum groups.
This is of course something very standard, the problem being that of eliminating algebras, linear forms and other functional analysis notions from the proof for the quantum groups, so as to have in the end something talking about spaces, and measures on them.
Chapter 8 Geometric aspects
8a. Topology, K-theory
This chapter is a continuation of the previous one, meant to be a grand finale to the -algebra theory that we started to develop there, before getting back to more traditional von Neumann algebra material, following Murray, von Neumann and others. There are countless things to be said, and possible paths to be taken. In homage to Connes, and his book [co3], which is probably the finest ever on -algebras, we will adopt a geometric viewpoint. To be more precise, we know that a -algebra is a beast of type , with being a compact quantum space. So, it is about the “geometry” of that we would like to talk about, everything else being rather of an administrative nature.
Let us first look at the classical case, where is a usual compact space. You might say right away that this is the wrong way, and that what we need for doing geometry is a manifold. But my answer here is modesty, and no hurry. It is true that you cannot do much geometry with a compact space , but you can do some, and we have here, for instance:
Definition 8.1.
Given a compact space , its first -theory group is the group of formal differences of complex vector bundles over .
This notion is quite interesting, and we can talk in fact about higher -theory groups as well, and all this is related to the homotopy groups too. There are many non-trivial results on the subject, the end of the game being of course that of understanding the “shape” of , which you need to know a bit about, before getting into serious geometry, in the case where happens to be a manifold.
As a question for us now, operator algebra theorists, we have:
Question 8.2.
Can we talk about the first -theory group of a compact quantum space ?
We will see that this is a quite subtle question. To be more precise, we will see that we can talk, in a quite straightforward way, of the group of an arbitrary -algebra , which is constructed as to have in the commutative case, where , with being a usual compact space. In the noncommutative case, however, will sometimes depend on the choice of satisfying , and so all this will eventually lead to a sort of dead end, and to a rather “no” answer to Question 8.2.
Getting started now, in order to talk about the first -theory group of an arbitrary -algebra , we will need the following simple fact:
Proposition 8.3.
Given a -algebra , the finitely generated projective -modules appear via quotient maps , so are of the form
with being an idempotent. In the commutative case, with classical, these -modules consist of sections of the complex vector bundles over .
Proof.
Here the first assertion is clear from definitions, via some standard algebra, and the second assertion is clear from definitions too, again via some algebra. ∎
With this in hand, let us go back to Definition 8.1. Given a compact space , it is now clear that its -theory group can be recaptured from the knowledge of the associated -algebra , and to be more precise we have , when the first -theory group of an arbitrary -algebra is constructed as follows:
Definition 8.4.
The first -theory group of a -algebra is the group of formal differences
of equivalence classes of projections , with the equivalence being given by
and with the additive structure being the obvious one, by diagonal concatenation.
This is very nice, and as a first example, we have . More generally, as already mentioned above, it follows from Proposition 8.3 that in the commutative case, where with being a compact space, we have . Observe also that we have, by definition, the following formula, valid for any :
Some further elementary observations include the fact that behaves well with respect to direct sums and inductive limits, and also that is a homotopy invariant, and for details here, we refer to any introductory book on the subject, such as [bla].
In what concerns us, back to our Question 8.2, what has been said above is certainly not enough for investigating our question, and we need more examples. However, these examples are not easy to find, and for getting them, we need more theory. We have:
Definition 8.5.
The second -theory group of a -algebra is the group of connected components of the unitary group of , with
being the embeddings producing the inductive limit .
Again, for a basic example we can take , and we have here , trivially. In fact, in the commutative case, where , with being a usual compact space, it is possible to establish a formula of type . Further elementary observations include the fact that behaves well with respect to direct sums and inductive limits, and also that is a homotopy invariant.
Importantly, the first and second -theory groups are related, as follows:
Theorem 8.6.
Given a -algebra , we have isomorphisms as follows, with
standing for the suspension operation for the -algebras:
-
(1)
.
-
(2)
.
Proof.
Here the isomorphism in (1) is something rather elementary, and the isomorphism in (2) is something more complicated. In both cases, the idea is to start first with the commutative case, where with being a compact space, and understand there the isomorphisms (1,2), called Bott periodicity isomorphisms. Then, with this understood, the extension to the general -algebra case is quite straightforward. ∎
The above result is quite interesting, making it clear that the groups are of the same nature. In fact, it is possible to be a bit more abstract here, and talk in various clever ways about the higher -theory groups, with , of an arbitrary -algebra, with the result that these higher -theory groups are subject to Bott periodicity:
However, in practice, this leads us back to Definition 8.4, Definition 8.5 and Theorem 8.6, with these statements containing in fact all we need to know, at .
Going ahead with examples, following Cuntz [cun] and related papers, we have:
Theorem 8.7.
The -theory groups of the Cuntz algebra are given by
with the equivalent projections standing for the standard generator of .
Proof.
We recall that the Cuntz algebra is generated by isometries satisfying . Since we have , with , we have:
On the other hand, we also know that we have , and the conclusion is that, in the first -theory group , the following equality happens:
Thus , and it is quite elementary to prove that happens in fact precisely when is a multiple of . Thus, we have a group embedding, as follows:
The whole point now is that of proving that this group embedding is an isomorphism, which in practice amounts to proving that any projection in is equivalent to a sum of the form , with as above. Which is something non-trivial, requiring the use of Bott periodicity, and the consideration of the second -theory group as well, and for details here, we refer to Cuntz [cun] and related papers. ∎
The above result is very interesting, for various reasons. First, it shows that the structure of the first -theory groups of the arbitrary -algebras can be more complicated than that of the first -theory groups of the usual compact spaces , with the group being for instance not ordered, in the case , and with this being the first in a series of no-go observations that can be formulated.
Second, and on a positive note now, what we have in Theorem 8.7 is a true noncommutative computation, dealing with an algebra which is rather of “free” type. The outcome of the computation is something nice and clear, suggesting that, modulo the small technical issues mentioned above, we are on our way to developing a nice theory, and that the answer to Question 8.2 might be “yes”. However, as bad news, we have:
Theorem 8.8.
There are discrete groups having the property that the projection
is not an isomorphism, at the level of -theory groups.
Proof.
For constructing such a counterexample, the group must be definitely non-amenable, and the first thought goes to the free group . But it is possible to prove that is -amenable, in the sense that is an isomorphism at the -theory level. However, counterexamples do exist, such as the infinite groups having Kazhdan’s property . Indeed, for such a group the associated Kazhdan projection is nonzero, while mapping to the zero element , so we have our counterexample. ∎
As a conclusion to all this, which might seem a bit disappointing, we have:
Conclusion 8.9.
The answer to Question 8.2 is no.
Of course, the answer to Question 8.2 remains “yes” in many cases, the general idea being that, as long as we don’t get too far away from the classical case, the answer remains “yes”, so we can talk about the -theory groups of our compact quantum spaces , and also, about countless other invariants inspired from the classical theory. For a survey of what can be done here, including applications too, we refer to Connes’ book [co3].
In what concerns us, however, we will not take this path. For various reasons, coming from certain quantum physics beliefs, which can be informally summarized as “at sufficiently tiny scales, freeness rules”, we will be rather interested in this book in compact quantum spaces which are of “free” type, and we will only accept geometric invariants for them which are well-defined. And -theory, unfortunately, does not qualify.
8b. Free probability
As a solution to the difficulties met in the previous section, let us turn to probability. This is surely not geometry, in the standard sense, but at a more advanced level it does become geometry. For instance, if you have a quantum manifold , and you want to talk about its Laplacian, or its Dirac operator, you will certainly need to know a bit about . And isn’t advanced measure theory the same thing as probability theory? Hope we agree on this.
Let us start our discussion with something that we know since chapter 5:
Definition 8.10.
Let be a -algebra, given with a trace .
-
(1)
The elements are called random variables.
-
(2)
The moments of such a variable are the numbers .
-
(3)
The law of such a variable is the functional .
Here the exponent is as before a colored integer, with the powers being defined by multiplicativity and the usual formulae, namely:
As for the polynomial , this is a noncommuting -polynomial in one variable:
Generally speaking, the above definition is something quite abstract, but there is no other way of doing things, at least at this level of generality. However, in the special case where our variable is self-adjoint, or more generally normal, we have:
Proposition 8.11.
The law of a normal variable can be identified with the corresponding spectral measure , according to the following formula,
valid for any , coming from the measurable functional calculus. In the self-adjoint case the spectral measure is real, .
Proof.
This is something that we again know well, either from chapter 5, or simply from chapter 3, coming from the spectral theorem for normal operators. ∎
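As a basic illustration, in the random matrix algebra given by the N x N complex matrices with the normalized trace, the law of a self-adjoint element is its empirical eigenvalue distribution, and the moments computed with the trace coincide with the moments of that measure. Here is a minimal numerical check of this, in Python, with the matrix size and the random model being our own choices:

import numpy as np

# The law of a self-adjoint element of (M_N(C), tr) is its empirical eigenvalue
# distribution: the moments tr(a^k) coincide with the integrals of x^k against it.
rng = np.random.default_rng(1)
N = 50
g = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
a = (g + g.conj().T) / 2                      # a self-adjoint random variable
eigs = np.linalg.eigvalsh(a)

tr = lambda m: np.trace(m).real / N           # the normalized trace
for k in range(1, 6):
    moment_operator = tr(np.linalg.matrix_power(a, k))
    moment_measure = np.mean(eigs ** k)
    assert np.isclose(moment_operator, moment_measure)
print("the moments of a agree with the moments of its spectral measure")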
Let us discuss now independence, and its noncommutative versions. As a starting point, we have the following update of the classical notion of independence:
Definition 8.12.
We call two subalgebras independent when the following condition is satisfied, for any and :
Equivalently, the following condition must be satisfied, for any and :
Also, are called independent when and are independent.
It is possible to develop some theory here, but this leads to the usual CLT. As a much more interesting notion now, we have Voiculescu’s freeness [vo1]:
Definition 8.13.
Given a pair , we call two subalgebras free when the following condition is satisfied, for any and :
Also, are called free when and are free.
As a first observation, there is a certain lack of symmetry between Definition 8.12 and Definition 8.13, because the latter does not include an explicit formula for quantities of type . But this can be done, the precise result being as follows:
Proposition 8.14.
If are free, the restriction of to can be computed in terms of the restrictions of to . To be more precise, we have
where is a certain polynomial, depending on the length of , having as variables the traces of products and , with and
Proof.
With , we can start our computation as follows:
Thus, we are led to a kind of recurrence, and this gives the result. ∎
Let us discuss now some examples of independence and freeness. We first have the following result, from [vo1], which is something elementary:
Proposition 8.15.
Given two algebras and , the following hold:
-
(1)
are independent inside their tensor product , endowed with its canonical tensor product trace, given on basic tensors by .
-
(2)
are free inside their free product , endowed with its canonical free product trace, given by the formulae in Proposition 8.14.
Proof.
Both the assertions are indeed clear from definitions, with just some standard discussion needed for (2), in connection with the free product trace. See [vo1]. ∎
More concretely now, we have the following result, also from Voiculescu [vo1]:
Proposition 8.16.
We have the following results, valid for group algebras:
-
(1)
are independent inside .
-
(2)
are free inside .
Proof.
In order to prove these results, we can use the general results in Proposition 8.15, along with the following two isomorphisms, which are both standard:
Alternatively, we can check the independence and freeness formulae on group elements, which is something trivial, and then conclude by linearity. See [vo1]. ∎
We have already seen limiting theorems in classical probability, in chapter 6. In order to deal now with freeness, let us develop some tools. First, we have:
Proposition 8.17.
We have a well-defined operation , given by
with being free, called free convolution.
Proof.
We need to check here that if are free, then the distribution depends only on the distributions . But for this purpose, we can use the formula in Proposition 8.14. Indeed, by plugging in arbitrary powers of as variables , we obtain a family of formulae of the following type, with being certain polynomials:
Thus the moments of depend only on the moments of , and the same argument shows that the same holds for -moments, and this gives the result. ∎
In order to advance now, we would need an analogue of the Fourier transform, or rather of the log of the Fourier transform. Quite remarkably, such a transform exists indeed, the precise result here, due to Voiculescu [vo1], being as follows:
Theorem 8.18.
Given a probability measure , define its -transform as follows:
The free convolution operation is then linearized by the -transform.
Proof.
This is something quite tricky, the idea being as follows:
(1) In order to model the free convolution, the best is to use creation operators on free Fock spaces, corresponding to the semigroup von Neumann algebras . Indeed, we have some freeness here, a bit in the same way as in the free group algebras .
(2) The point now, motivating this choice, is that the variables of type , with being the shift, and with being an arbitrary polynomial, are easily seen to model in moments all the possible distributions .
(3) Now let and consider the variables and , where are the shifts corresponding to the generators of . These variables are free, and by using a argument, their sum has the same law as .
(4) Thus the operation linearizes the free convolution. We are therefore left with a computation inside , which is elementary, and whose conclusion is that can be recaptured from via the Cauchy transform , as in the statement. ∎
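For the standard semicircle law of variance 1, the R-transform is simply R(ξ) = ξ, with the usual convention, assumed here since the precise displayed formula is best read from [vo1], that G_μ(R_μ(ξ) + 1/ξ) = ξ, where G_μ is the Cauchy transform. Here is a minimal numerical check of this identity, in Python, by integrating the Cauchy transform directly against the semicircle density, the discretization being our own:

import numpy as np

# Cauchy transform of the standard semicircle law (density sqrt(4-x^2)/(2*pi) on [-2,2]),
# computed by direct numerical integration, and the check of G(R(xi) + 1/xi) = xi,
# with R(xi) = xi for the standard semicircle law.
x = np.linspace(-2, 2, 200001)
dx = x[1] - x[0]
density = np.sqrt(np.maximum(4 - x ** 2, 0)) / (2 * np.pi)

def cauchy_transform(z):
    return np.sum(density / (z - x)) * dx

for xi in [0.3, 0.5, 0.7]:
    z = xi + 1.0 / xi                     # this is R(xi) + 1/xi, with R(xi) = xi
    print("xi =", xi, " G(R(xi) + 1/xi) =", round(cauchy_transform(z), 3))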
With the above linearization technology in hand, we can now establish the following remarkable free analogue of the CLT, also due to Voiculescu [vo1]:
Theorem 8.19 (Free CLT).
Given self-adjoint variables which are f.i.d., centered, with variance , we have, with , in moments,
where is the Wigner semicircle law of parameter .
Proof.
We follow the same idea as in the proof of the CLT:
(1) At , the -transform of the variable in the statement can be computed by using the linearization property from Theorem 8.18, and is given by:
(2) On the other hand, some standard computations show that the Cauchy transform of the Wigner law satisfies the following equation:
Thus, by using Theorem 8.18, we have the following formula:
(3) We conclude that the laws in the statement have the same -transforms, and so they are equal. The passage to the general case, , is routine, by dilation. ∎
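As a useful complement, the even moments of the Wigner law of parameter 1 are the Catalan numbers, counting the noncrossing pairings, and this is the combinatorial signature of the free CLT. Here is a minimal numerical check of this, in Python, with the standard density sqrt(4-x^2)/(2*pi) on [-2,2] being assumed, and with the discretization being our own:

import numpy as np
from math import comb

# Even moments of the standard semicircle law: they are the Catalan numbers.
x = np.linspace(-2, 2, 400001)
dx = x[1] - x[0]
density = np.sqrt(np.maximum(4 - x ** 2, 0)) / (2 * np.pi)

for k in range(1, 6):
    moment = np.sum(x ** (2 * k) * density) * dx
    catalan = comb(2 * k, k) // (k + 1)
    print("2k =", 2 * k, " moment =", round(moment, 3), " Catalan number =", catalan)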
In the complex case now, we have a similar result, also from [vo1], as follows:
Theorem 8.20 (Free CCLT).
Given random variables which are f.i.d., centered, with variance , we have, with , in moments,
where , with being free, each following the Wigner semicircle law , is the Voiculescu circular law of parameter .
Proof.
This follows indeed from the free CLT, established before, simply by taking real and imaginary parts of all the variables involved. ∎
Now that we are done with the basic results in continuous case, let us discuss as well the discrete case. We can establish a free version of the PLT, as follows:
Theorem 8.21 (Free PLT).
The following limit converges, for any ,
and we obtain the Marchenko-Pastur law of parameter ,
also called free Poisson law of parameter .
Proof.
Let be the measure in the statement, appearing under the convolution sign. The Cauchy transform of this measure is elementary to compute, given by:
By using Theorem 8.18, we want to compute the following -transform:
We know that the equation for this function is as follows:
With we obtain from this the following formula:
But this being the -transform of , via some calculus, we are done. ∎
As a first application now of all this, following Voiculescu [vo2], we have:
Theorem 8.22.
Given a sequence of complex Gaussian matrices , having independent variables as entries, with , we have
in the limit, with the limiting measure being Voiculescu’s circular law.
Proof.
We know from chapter 6 that the asymptotic moments are:
On the other hand, the free Fock space analysis done in the proof of Theorem 8.18 shows that we have, with the notations there, the following formulae:
By doing some combinatorics, this shows that an abstract noncommutative variable is circular, following the law , precisely when its moments are:
Thus, we are led to the conclusion in the statement. See [vo2]. ∎
Next in line, comes the main result of Voiculescu in [vo2], as follows:
Theorem 8.23.
Given a family of sequences of Wigner matrices,
with pairwise independent entries, each following the complex normal law , with , up to the constraint , the rescaled sequences of matrices
become with semicircular, each following the Wigner law , and free.
Proof.
We can assume that we are dealing with 2 sequences of matrices, . In order to prove the asymptotic freeness, consider the following matrix:
This is then a complex Gaussian matrix, so by using Theorem 8.22, we have:
We are therefore in the situation where , which has asymptotically semicircular real and imaginary parts, converges to the distribution of a free combination of such variables. Thus become asymptotically free, as desired. ∎
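Here is a minimal numerical illustration of this asymptotic freeness, in Python. For two independent GUE-type matrices a, b, normalized so that tr(a^2) and tr(b^2) are close to 1, the mixed moment tr(abab) must tend to 0, which is the value predicted by freeness for centered semicircular variables, whereas classical independence of commuting variables with the same laws would give 1. The matrix size, the seed and the normalizations are our own choices:

import numpy as np

def gue(N, rng):
    # A normalized GUE-type matrix: self-adjoint, with tr of its square close to 1.
    g = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    return (g + g.conj().T) / np.sqrt(2 * N)

rng = np.random.default_rng(2)
N = 800
a, b = gue(N, rng), gue(N, rng)
tr = lambda m: np.trace(m).real / N

print("tr(a^2) =", round(tr(a @ a), 3), " tr(b^2) =", round(tr(b @ b), 3))
print("tr(abab) =", round(tr(a @ b @ a @ b), 3),
      " (freeness predicts 0; classical independence would give 1)")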
Getting now to the complex case, we have a similar result here, as follows:
Theorem 8.24.
Given a family of sequences of complex Gaussian matrices,
with pairwise independent entries, each following the law , with , the matrices
become with circular, each following the Voiculescu law , and free.
Proof.
This follows indeed from Theorem 8.23, which applies to the real and imaginary parts of our complex Gaussian matrices, and gives the result. ∎
Finally, we have as well a similar result for the Wishart matrices, as follows:
Theorem 8.25.
Given a family of sequences of complex Wishart matrices,
with each being a matrix, with entries following the normal law , and with all these entries being pairwise independent, the rescaled sequences of matrices
become with Marchenko-Pastur, each following the law , and free.
Proof.
Here the first assertion is the Marchenko-Pastur theorem, from chapter 6, and the second assertion follows from Theorem 8.23, or from Theorem 8.24. ∎
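Here is a minimal numerical illustration of the above, in Python, in the square case t = 1, where the moments of the Marchenko-Pastur law are the Catalan numbers, counting the noncrossing partitions. The normalization W = YY*/N for the complex Wishart matrix, as well as the matrix size, are our own assumptions for this sketch:

import numpy as np
from math import comb

# Square case t = 1: the moments of W = (1/N) Y Y*, with Y a complex Gaussian N x N
# matrix, converge to the moments of the Marchenko-Pastur law, the Catalan numbers.
rng = np.random.default_rng(3)
N = 1000
Y = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
W = Y @ Y.conj().T / N
eigs = np.linalg.eigvalsh(W)

for k in range(1, 5):
    print("k =", k, ": tr(W^k) =", round(np.mean(eigs ** k), 3),
          " Catalan number =", comb(2 * k, k) // (k + 1))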
Let us develop now some further limiting theorems, classical and free. We have the following definition, extending the Poisson limit theory developed before:
Definition 8.26.
Associated to any compactly supported positive measure on are the probability measures
where , called compound Poisson and compound free Poisson laws.
In what follows we will be interested in the case where is discrete, as is for instance the case for with , which produces the Poisson and free Poisson laws. The following result allows one to detect compound Poisson/free Poisson laws:
Proposition 8.27.
For with and , we have
where denote respectively the Fourier transform, and Voiculescu’s -transform.
Proof.
Let be the measure appearing in Definition 8.26. We have:
In the free case we can use a similar method, and we obtain the above formula. ∎
We have the following result, providing an alternative to Definition 8.26, which will be our formulation here of the Compound Poisson Limit Theorem, classical and free:
Theorem 8.28 (CPLT).
For with and , we have
where the variables are Poisson/free Poisson, independent/free.
Proof.
This follows indeed from the fact that the Fourier/-transform of the variable in the statement is given by the formulae in Proposition 8.27. ∎
Following [bb+], [bbc], we will be interested here in the main examples of classical and free compound Poisson laws, which are constructed as follows:
Definition 8.29.
The Bessel and free Bessel laws are the compound Poisson laws
where is the uniform measure on the -th roots of unity. In particular:
-
(1)
At we obtain the usual Poisson and free Poisson laws, .
-
(2)
At we obtain the “real” Bessel and free Bessel laws, denoted .
-
(3)
At we obtain the “complex” Bessel and free Bessel laws, denoted .
There is a lot of theory regarding these laws, and we refer here to [bb+], [bbc], where these laws were introduced. We will be back to these laws, in a moment.
8c. Algebraic manifolds
We are now ready, or almost, to develop some basic noncommutative geometry. The idea will be that of further building on the material from chapter 7, by enlarging the class of compact quantum groups studied there, with the consideration of quantum homogeneous spaces, , and with classical and free probability as our main tools.
But let us start with something intuitive, namely basic algebraic geometry, in a basic sense. The simplest compact manifolds that we know are the spheres, and if we want to have free analogues of these spheres, there are not many choices here, and we have:
Definition 8.30.
We have compact quantum spaces, constructed as follows,
called respectively free real sphere, and free complex sphere.
Observe that our spheres are indeed well-defined, due to the following estimate:
Given a compact quantum space , meaning as usual the abstract spectrum of a -algebra, we define its classical version to be the classical space obtained by dividing by its commutator ideal, then applying the Gelfand theorem:
Observe that we have an embedding of compact quantum spaces . In this situation, we also say that appears as a “liberation” of . We have:
Proposition 8.31.
We have embeddings of compact quantum spaces
and the spaces on the right appear as liberations of the spaces of the left.
Proof.
In order to prove this, we must establish the following isomorphisms:
But these isomorphisms are both clear, by using the Gelfand theorem. ∎
We can now introduce a broad class of compact quantum manifolds, as follows:
Definition 8.32.
A real algebraic submanifold is a closed quantum space defined, at the level of the corresponding -algebra, by a formula of type
for certain noncommutative polynomials . We identify two such manifolds, , when we have an isomorphism of -algebras of coordinates
mapping standard coordinates to standard coordinates.
In practice, while our assumption is definitely something technical, we are not losing much when imposing it, and we have the following list of examples:
Proposition 8.33.
The following are algebraic submanifolds :
-
(1)
The spheres .
-
(2)
Any compact Lie group, , with .
-
(3)
The duals of finitely generated groups, .
-
(4)
More generally, the closed subgroups , with .
Proof.
These facts are all well-known, the proofs being as follows:
(1) This is indeed true by definition of our various spheres.
(2) Given a closed subgroup , we have an embedding , with , given in double indices by , that we can further compose with the standard embedding . As for the fact that we obtain indeed a real algebraic manifold, this is standard too, coming either from Lie theory or from Tannakian duality.
(3) Given a group , consider the variables . These variables then satisfy the quadratic relations defining , and the algebraicity claim for the manifold is clear.
(4) Given a closed subgroup , we have indeed an embedding , with , given by . As for the fact that we obtain indeed a real algebraic manifold, this comes from the Tannakian duality results in [mal], [wo2]. ∎
Summarizing, what we have in Definition 8.32 is something quite fruitful, covering many interesting examples. In addition, all this is nice too at the axiomatic level, because the equivalence relation for our algebraic manifolds, as formulated in Definition 8.32, fixes in a quite clever way the functoriality issues of the Gelfand correspondence.
At the level of the general theory now, as a first tool that we can use, for the study of our manifolds, we have the following version of the Gelfand theorem:
Theorem 8.34.
Assuming that is an algebraic manifold, given by
for certain noncommutative polynomials , we have
and itself appears as a liberation of .
Proof.
This is something that we know well for the spheres, from Proposition 8.31. In general, the proof is similar, coming from the Gelfand theorem. ∎
There are of course many other things that can be said about our manifolds, at the purely algebraic level. But in what follows we will be rather going towards analysis.
8d. Free geometry
We have now all the needed tools in our bag for developing “free geometry”. The idea will be that of going back to the free quantum groups from chapter 7, and further building on that material, with a beginning of free geometry. Let us start with:
Theorem 8.35.
The classical and free, real and complex quantum rotation groups can be complemented with quantum reflection groups, as follows,
with and being the hyperoctahedral group and the full complex reflection group, and and being their free versions.
Proof.
This is something quite tricky, the idea being as follows:
(1) The first observation is that , regarded as a group of permutations of the coordinate axes of , is a group of orthogonal matrices, . The corresponding coordinate functions form a matrix which is “magic”, in the sense that its entries are projections, summing up to 1 on each row and each column. In fact, by using the Gelfand theorem, we have the following presentation result:
(2) Based on the above, and following Wang’s paper [wan], we can construct the free analogue of the symmetric group via the following formula:
Here the fact that we have indeed a Woronowicz algebra is standard, exactly as for the free rotation groups in chapter 7, because if a matrix is magic, then so are the matrices constructed there, and this gives the existence of .
(3) Consider now the group consisting of permutation-like matrices having as entries the -th roots of unity. This group decomposes as follows:
It is straightforward then to construct a free analogue of this group, for instance by formulating a definition as follows, with being a free wreath product:
(4) In order to finish, besides the case , of particular interest are the cases . Here the corresponding reflection groups are as follows:
As for the corresponding quantum groups, these are denoted as follows:
Thus, we are led to the conclusions in the statement. See [bb+], [bbc]. ∎
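In connection with step (1) of the above proof, here is a minimal numerical check, in Python, that the coordinate functions u_ij on the classical symmetric group S_N, viewed as functions on the N! permutations, form a magic matrix: each entry is a projection, and the rows and columns sum up to 1. The choice N = 4 is our own:

import numpy as np
from itertools import permutations

# The coordinate functions u_ij on S_N, with u_ij(sigma) = 1 if sigma(j) = i, as
# vectors in C(S_N) = functions on the N! permutations. The matrix u = (u_ij) is
# "magic": its entries are projections, and its rows and columns sum up to 1.
N = 4
G = list(permutations(range(N)))
u = np.zeros((N, N, len(G)))
for s, sigma in enumerate(G):
    for j in range(N):
        u[sigma[j], j, s] = 1.0

assert np.allclose(u * u, u)             # each u_ij is an idempotent
assert np.allclose(u.sum(axis=0), 1.0)   # each column sums to 1
assert np.allclose(u.sum(axis=1), 1.0)   # each row sums to 1
print("u is a magic matrix over C(S_%d)" % N)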
The point now is that we can add to the picture spheres and tori, as follows:
Fact 8.36.
The basic quantum groups can be complemented with spheres and tori,
with , and with standing for the duals of .
Again, this is something quite tricky, and there is a long story with all this. We already know from chapter 7 that the diagonal subgroups of the rotation groups are the tori in the statement, but this is just an epsilon of what can be said, and this type of result can be extended as well to the reflection groups, and then we can make the spheres come into play too, with various results connecting them to the quantum groups, and to the tori.
Instead of getting into details here, let us formulate, again a bit informally:
Fact 8.37.
The various quantum manifolds that we have, namely spheres , tori , unitary groups , and reflection groups , arrange into diagrams, as follows,
with the arrows standing for various correspondences between . These diagrams correspond to main noncommutative geometries, real and complex, classical and free,
with the remark that, technically speaking, , do not exist, as quantum spaces.
As before, things here are quite long and tricky, but we already have some good evidence for all this, so I guess you can just trust me. And if you are truly interested in all this, later, after finishing this book, you can check [bgo] and subsequent papers for details.
Summarizing, we have some beginning of theory. Now with this understood, let us try to integrate on our manifolds. In order to deal with quantum groups, we will need:
Definition 8.38.
The Tannakian category associated to a Woronowicz algebra is the collection of vector spaces
where the corepresentations with colored integer, defined by
and multiplicativity, , are the Peter-Weyl corepresentations.
As a key remark, the fact that the fundamental corepresentation $u$ is biunitary translates into the following conditions, where $R:\mathbb{C}\to H\otimes H$ is the linear map given by $R(1)=\sum_ie_i\otimes e_i$:
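Namely, in standard notations, these conditions should be the following ones:
$$R\in Hom(1,u\otimes\bar{u})\quad,\quad R\in Hom(1,\bar{u}\otimes u)\quad,\quad R^*\in Hom(u\otimes\bar{u},1)\quad,\quad R^*\in Hom(\bar{u}\otimes u,1)$$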
We are therefore led to the following abstract definition, summarizing the main properties of the categories appearing from Woronowicz algebras:
Definition 8.39.
Let $H$ be a finite dimensional Hilbert space. A tensor category over $H$ is a collection of subspaces $C(k,l)\subset\mathcal{L}(H^{\otimes k},H^{\otimes l})$, indexed by the colored integers $k,l$,
satisfying the following conditions:
(1) $T,T'\in C$ implies $T\otimes T'\in C$.
(2) If $T,T'\in C$ are composable, then $TT'\in C$.
(3) $T\in C$ implies $T^*\in C$.
(4) Each $C(k,k)$ contains the identity operator.
(5) $C(\emptyset,\circ\bullet)$ and $C(\emptyset,\bullet\circ)$ contain the operator $R$.
The point now is that conversely, we can associate a Woronowicz algebra to any tensor category in the sense of Definition 8.39, in the following way:
Proposition 8.40.
Given a tensor category over , as above,
is a Woronowicz algebra.
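To be more precise, the algebra in the statement should be, in standard notations, the following universal one, with $u=(u_{ij})$ being an $n\times n$ matrix of abstract coordinates, and with $n=\dim H$:
$$A_C=C^*\Big((u_{ij})_{i,j=1,\ldots,n}\,\Big|\,T\in Hom(u^{\otimes k},u^{\otimes l}),\ \forall k,l,\ \forall T\in C(k,l)\Big)$$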
Proof.
This is something standard, because the relations determine a Hopf ideal, so they allow the construction of as in chapter 7. ∎
With the above constructions in hand, we have the following result:
Theorem 8.41.
The Tannakian duality constructions
are inverse to each other, modulo identifying full and reduced versions.
Proof.
The idea is that we have , for any algebra , and so we are left with proving that we have , for any category . But this follows from a long series of algebraic manipulations, and for details we refer to Malacarne [mal], and also to Woronowicz [wo2], where this result was first proved, by using other methods. ∎
In practice now, all this is quite abstract, and we will rather need Brauer type results, for the specific quantum groups that we are interested in. Let us start with:
Definition 8.42.
Let $P(k,l)$ be the set of partitions between an upper colored integer $k$, and a lower colored integer $l$. A collection of subsets
$D(k,l)\subset P(k,l)$, with $k,l$ ranging over all colored integers, is called a category of partitions when it has the following properties:
(1) Stability under the horizontal concatenation, $(\pi,\sigma)\to[\pi\sigma]$.
(2) Stability under the vertical concatenation, $(\pi,\sigma)\to[^\sigma_\pi]$, with matching middle symbols.
(3) Stability under the upside-down turning, $\pi\to\pi^*$, with switching of colors, $\circ\leftrightarrow\bullet$.
(4) Each set $D(k,k)$ contains the identity partition.
(5) The sets $D(\emptyset,\circ\bullet)$ and $D(\emptyset,\bullet\circ)$ both contain the semicircle $\cap$.
Observe the similarity with Definition 8.39. In fact Definition 8.42 is a delinearized version of Definition 8.39, the relation with the Tannakian categories coming from:
Proposition 8.43.
Given a partition , consider the linear map
given by the following formula, where is the standard basis of ,
and with the Kronecker type symbols depending on whether the indices fit or not. The assignment $\pi\to T_\pi$ is then categorical, in the sense that we have
where are certain integers, coming from the erased components in the middle.
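To be more explicit, in standard notations, with $e_1,\ldots,e_N$ being the standard basis of $\mathbb{C}^N$, the map in question should be
$$T_\pi(e_{i_1}\otimes\ldots\otimes e_{i_k})=\sum_{j_1\ldots j_l}\delta_\pi\begin{pmatrix}i_1&\ldots&i_k\\ j_1&\ldots&j_l\end{pmatrix}e_{j_1}\otimes\ldots\otimes e_{j_l}$$
and the categorical properties should read
$$T_\pi\otimes T_\sigma=T_{[\pi\sigma]}\quad,\quad T_\pi T_\sigma=N^{c(\pi,\sigma)}T_{[^\sigma_\pi]}\quad,\quad T_\pi^*=T_{\pi^*}$$
with $c(\pi,\sigma)$ being the number of closed components appearing, and then erased, when concatenating vertically.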
Proof.
The concatenation property follows from the following computation:
As for the other two formulae in the statement, their proofs are similar. ∎
In relation with quantum groups, we have the following result, from [bsp]:
Theorem 8.44.
Each category of partitions produces a family of compact quantum groups , one for each , via the following formula:
To be more precise, the spaces on the right form a Tannakian category, and so produce a certain closed subgroup , via the Tannakian duality correspondence.
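In standard Tannakian notations, the defining formula for these quantum groups should be:
$$Hom(u^{\otimes k},u^{\otimes l})={\rm span}\Big(T_\pi\,\Big|\,\pi\in D(k,l)\Big)$$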
Proof.
This follows indeed from Woronowicz’s Tannakian duality, in its “soft” form from Malacarne [mal], as explained in Theorem 8.41. Indeed, let us set:
By using the axioms in Definition 8.42, and the categorical properties of the operation , from Proposition 8.43, we deduce that is a Tannakian category. Thus the Tannakian duality applies, and gives the result. ∎
Philosophically speaking, the quantum groups appearing as in Theorem 8.44 are the simplest, from the perspective of Tannakian duality, so let us formulate:
Definition 8.45.
A closed subgroup is called easy when we have
for any colored integers , for a certain category of partitions .
All this might seem a bit complicated, but we will see examples in a moment. Getting back now to integration questions, we have the following key result:
Theorem 8.46.
For an easy quantum group , coming from a category of partitions , we have the Weingarten integration formula
for any and any , where , are usual Kronecker symbols, and , with , where is the number of blocks.
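In standard notations, and modulo the exact colored exponent conventions, which we will not attempt to reproduce here, the Weingarten formula should read
$$\int_Gu_{i_1j_1}\ldots u_{i_kj_k}=\sum_{\pi,\sigma\in D(k)}\delta_\pi(i)\delta_\sigma(j)W_{kN}(\pi,\sigma)$$
with $D(k)=D(\emptyset,k)$, with $\delta_\pi(i)=1$ when the indices $i=(i_1,\ldots,i_k)$ fit with $\pi$, and $\delta_\pi(i)=0$ otherwise, and with $W_{kN}=G_{kN}^{-1}$, where $G_{kN}(\pi,\sigma)=N^{|\pi\vee\sigma|}$, with $|\cdot|$ being the number of blocks.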
Proof.
We know from chapter 7 that the integrals in the statement form altogether the orthogonal projection onto the space . Let us set:
By standard linear algebra, it follows that we have , where is the inverse on of the restriction of . But this restriction is the linear map given by , and so is the linear map given by , and this gives the result. ∎
All this is very nice. However, before enjoying the Weingarten formula, we still have to prove that our main quantum groups are easy. The result here is as follows:
Theorem 8.47.
The basic quantum unitary and reflection groups
are all easy, the corresponding categories of partitions being as follows,
with $P$ and $NC$ standing for partitions and noncrossing partitions, with the subscript $2$ standing for pairings, and the subscript $even$ for partitions with even blocks, and with calligraphic fonts standing for matching.
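In the standard notations for categories of partitions, the correspondence in the statement should be the following one, with the classical groups corresponding to the categories on the first row, and with their free versions corresponding to the noncrossing categories on the second row:
$$O_N\to P_2\quad,\quad U_N\to\mathcal{P}_2\quad,\quad H_N\to P_{even}\quad,\quad K_N\to\mathcal{P}_{even}$$
$$O_N^+\to NC_2\quad,\quad U_N^+\to\mathcal{NC}_2\quad,\quad H_N^+\to NC_{even}\quad,\quad K_N^+\to\mathcal{NC}_{even}$$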
Proof.
The quantum group is defined via the following relations:
Thus, the following operators must be in the associated Tannakian category:
We conclude that the associated Tannakian category is , with:
Thus, we have one result, and the other ones are similar. See [bb+], [bbc]. ∎
We are not ready yet for applications, because we still have to understand which assumptions on make the vectors linearly independent. We will need:
Definition 8.48.
The Möbius function of any lattice, and so of , is given by
with the construction being performed by recurrence.
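To be fully explicit, the recurrence in question should be the usual one for the Möbius function of a lattice, namely
$$\mu(\pi,\sigma)=\begin{cases}1&{\rm if}\ \pi=\sigma\\ -\sum_{\pi\leq\tau<\sigma}\mu(\pi,\tau)&{\rm if}\ \pi<\sigma\\ 0&{\rm if}\ \pi\not\leq\sigma\end{cases}$$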
The main interest in this function comes from the Möbius inversion formula:
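In the present lattice setting, this inversion formula should be the statement that, for arbitrary functions $f,g$ on the lattice,
$$f(\sigma)=\sum_{\pi\leq\sigma}g(\pi),\ \forall\sigma\quad\implies\quad g(\sigma)=\sum_{\pi\leq\sigma}\mu(\pi,\sigma)f(\pi),\ \forall\sigma$$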
In linear algebra terms, the statement and proof of this formula are as follows:
Proposition 8.49.
The inverse of the adjacency matrix of , given by
is the Möbius matrix of , given by .
Proof.
This is well-known, coming for instance from the fact that is upper triangular. Indeed, when inverting, we are led into the recurrence from Definition 8.48. ∎
Now back to our Gram and Weingarten matrix considerations, with , as in the statement of Theorem 8.46, we have the following result:
Proposition 8.50.
The Gram matrix is given by , where
and where is the adjacency matrix of .
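As a natural reconstruction of the formulae here, consistent with the triangularity argument used in the proof of Theorem 8.51 below, one should have $G_{kN}=AL$, with
$$A(\pi,\sigma)=\begin{cases}1&{\rm if}\ \pi\leq\sigma\\ 0&{\rm otherwise}\end{cases}\qquad,\qquad L(\pi,\sigma)=\begin{cases}\frac{N!}{(N-|\pi|)!}&{\rm if}\ \sigma\leq\pi\\ 0&{\rm otherwise}\end{cases}$$
the point being that $(AL)(\pi,\sigma)=\sum_{\tau\geq\pi\vee\sigma}N(N-1)\ldots(N-|\tau|+1)$ counts the multi-indices having kernel coarser than $\pi\vee\sigma$, and this count is precisely the Gram entry $N^{|\pi\vee\sigma|}$.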
Proof.
We have indeed the following computation:
According to the definition of and of , this formula reads:
Thus, we obtain the formula in the statement. ∎
With the above result in hand, we can now formulate:
Theorem 8.51.
The determinant of the Gram matrix is given by:
In particular, the vectors are linearly independent for $N\geq k$.
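The formula in question should be the following one, with the quotients of factorials being interpreted as falling factorials, $N(N-1)\ldots(N-|\pi|+1)$:
$$\det(G_{kN})=\prod_{\pi\in P(k)}\frac{N!}{(N-|\pi|)!}$$
In particular this determinant is nonzero precisely when $N\geq k$, in agreement with the linear independence statement.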
Proof.
This is an old formula from the 60s, due to Lindström and others, having many things behind it. By using the formula in Proposition 8.50, we have:
Now if we order with respect to the number of blocks, then lexicographically, is upper triangular, and is lower triangular, and we obtain the above formula. ∎
Now back to our quantum groups, let us start with:
Theorem 8.52.
For an easy quantum group , coming from a category of partitions , the asymptotic moments of the character are
where , with the limiting sequence on the left consisting of certain integers, and being stationary at least starting from the -th term.
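To be more precise, in standard notations, and with $D(k)=D(\emptyset,k)$, the moment formula here should be
$$\lim_{N\to\infty}\int_{G_N}\chi^k=|D(k)|$$
with the remark that, by the linear independence result from Theorem 8.51, the moments are in fact constant, equal to $|D(k)|$, as soon as $N\geq k$.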
Proof.
This is something elementary, which follows straight from Peter-Weyl theory, by using the linear independence result from Theorem 8.51. ∎
In practice now, for the basic rotation and reflection groups, we obtain:
Theorem 8.53.
The character laws for basic rotation and reflection groups are
in the limit, corresponding to the basic probabilistic limiting theorems, at .
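Without attempting to reproduce the exact notations intended in the statement, the laws in question should be, presumably, as follows, with the Bessel type laws being certain compound Poisson laws:
$$O_N\to{\rm Gaussian}\quad,\quad U_N\to{\rm complex\ Gaussian}\quad,\quad H_N\to{\rm Bessel}\quad,\quad K_N\to{\rm complex\ Bessel}$$
$$O_N^+\to{\rm semicircular}\quad,\quad U_N^+\to{\rm circular}\quad,\quad H_N^+\to{\rm free\ Bessel}\quad,\quad K_N^+\to{\rm free\ complex\ Bessel}$$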
Proof.
This follows indeed from Theorem 8.47 and Theorem 8.52, by using the known moment formulae for the laws in the statement, at . ∎
In the free case, the convergence can be shown to be stationary starting from . The “fix” comes by looking at truncated characters, constructed as follows:
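The standard construction here, presumably the one intended, is in terms of the diagonal coordinates, as follows, with $t\in(0,1]$:
$$\chi_t=\sum_{i=1}^{[tN]}u_{ii}$$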
With this convention, we have the following final result on the subject, with the convergence being non-stationary at , in both the classical and free cases:
Theorem 8.54.
The truncated character laws for the basic quantum groups are
in the limit, corresponding to the basic probabilistic limiting theorems.
Proof.
We already know that the result holds at , and the proof at arbitrary is once again based on easiness, but this time by using the Weingarten formula for the computation of the moments. We refer here to [bb+], [bbc], [bco], [bsp]. ∎
All this is very nice, as a beginning. Of course, still left for this chapter would be the extension of all this to the case of more general homogeneous spaces , and other free manifolds, in the sense of the free real and complex geometry axiomatized before.
But hey, we learned enough math in this chapter, time for a beer. We refer here to the 2010 paper [bgo], which started things with the computation for , and to the book [ba3], which explains what was found on the subject, in the 10s. And if interested in this, the hot topic, waiting for your input, is the applications to quantum physics.
8e. Exercises
There has been a lot of exciting theory in this chapter, and as an exercise, we have:
Exercise 8.55.
Prove that $S_N^+$ is easy, coming from the category of all noncrossing partitions $NC$, and compute the asymptotic law of the main character.
As bonus exercise, try as well the truncated characters. Also, don’t forget about .
Part III Theory of factors
And the story tellers say
That the score brave souls inside
For many a lonely day sailed across the milky seas
Never looked back, never feared, never cried
Chapter 9 Functional analysis
9a. Kaplansky density
Welcome to this second half of the present book. We will get back here to a more normal pace, at least for most of the text to follow, our goal being to discuss the basics of the von Neumann algebra theory, due to Murray, von Neumann and Connes [co1], [co2], [mv1], [mv2], [mv3], [vn1], [vn2], [vn3], or at least the “basics of the basics”, the whole theory being quite complex, and then the most beautiful advanced theory which can be built on this, which is the subfactor theory of Jones [jo1], [jo2], [jo3], [jo4], [jo5], [jms], [jsu].
The material here will be in direct continuation of what we learned in chapter 5, namely the bicommutant theorem, the commutative case, the finite dimensional case, and a handful of other things. The idea will be that of building directly on that material, and using the same basic techniques, namely functional analysis and operator theory.
As an important point, all this is related, but in a subtle way, to what we learned in chapters 6-8 too. To be more precise, what we will be doing in chapters 9-12 here will be more or less orthogonal to what we did in chapters 6-8. However, and here comes our point, the continuation of all this, chapters 13-16 below following Jones, will stand as a direct continuation of what we did in chapters 6-8, with Jones’ subfactors being something more general than the random matrices and quantum groups from there.
Getting started, as a first objective we would like to have a better understanding of the precise difference between the norm closed $*$-algebras of operators, or $C^*$-algebras, and the weakly closed such algebras, which are the von Neumann algebras, from a functional analytic viewpoint. Let us begin with some generalities. We first have:
Proposition 9.1.
The weak operator topology on $B(H)$ is the topology having the following equivalent properties:
(1) It makes the maps $T\to\langle Tx,y\rangle$ continuous, for any $x,y\in H$.
(2) It makes $T_i\to T$ when $\langle T_ix,y\rangle\to\langle Tx,y\rangle$, for any $x,y\in H$.
(3) Has as subbase the sets of the form $\{T\in B(H):|\langle(T-T_0)x,y\rangle|<\varepsilon\}$, with $T_0\in B(H)$, $x,y\in H$, $\varepsilon>0$.
(4) Has as base the finite intersections of the above sets.
Proof.
The equivalences all follow from definitions, with of course (1,2) referring to the coarsest topology making those things happen. ∎
Similarly, in what regards the strong operator topology, we have:
Proposition 9.2.
The strong operator topology on $B(H)$ is the topology having the following equivalent properties:
(1) It makes the maps $T\to Tx$ continuous, for any $x\in H$.
(2) It makes $T_i\to T$ when $||T_ix-Tx||\to0$, for any $x\in H$.
(3) Has as subbase the sets of the form $\{T\in B(H):||(T-T_0)x||<\varepsilon\}$, with $T_0\in B(H)$, $x\in H$, $\varepsilon>0$.
(4) Has as base the sets of the form $\{T\in B(H):||(T-T_0)x_i||<\varepsilon,\ \forall i\}$, with $T_0\in B(H)$, $x_1,\ldots,x_n\in H$, $\varepsilon>0$.
Proof.
Again, the equivalences are all clear, and with (1,2) referring to the coarsest topology making those things happen. ∎
We know from chapter 5 that an operator algebra is weakly closed if and only if it is strongly closed. Here is a useful generalization of this fact:
Theorem 9.3.
Given a convex set , its weak operator closure and strong operator closure coincide.
Proof.
Since the weak operator topology on is weaker by definition than the strong operator topology on , we have, for any subset :
Now by assuming that is convex, we must prove that:
In order to do so, let us pick vectors and . We let , and we consider the standard embedding , given by:
We have then the following implications, which are all trivial:
Now since the set was assumed to be convex, the set is convex too, and by the Hahn-Banach theorem, for compact sets, it follows that we have:
Thus, there exists an operator such that we have, for any :
But this shows that we have , and since and were arbitrary, by Proposition 9.2 it follows that we have , as desired. ∎
We will need as well the following standard result:
Proposition 9.4.
Given a vector space , and a linear form , the following conditions are equivalent:
(1) It is weakly continuous.
(2) It is strongly continuous.
(3) It has the form $T\to\sum_{i=1}^n\langle Tx_i,y_i\rangle$, for certain vectors $x_1,\ldots,x_n,y_1,\ldots,y_n\in H$.
Proof.
This is something standard, using the same tools as those already used in chapter 5, namely basic functional analysis, and amplification tricks:
Since the weak operator topology on is weaker than the strong operator topology on , weakly continuous implies strongly continuous. To be more precise, assume strongly. Then weakly, and since was assumed to be weakly continuous, we have . Thus is strongly continuous, as desired.
Assume indeed that our linear form is strongly continuous. In particular is strongly continuous at 0, and Proposition 9.2 provides us with vectors and a number such that, with the notations there: