The publication of a new linear algebra textbook is not normally a cause for excitement. However, Roger Horn is co-author of two of the most highly regarded and widely used books on matrix analysis: Matrix Analysis (2nd edition, 2013) and Topics in Matrix Analysis (1991), both co-authored with Charles Johnson. It is therefore to be expected that this new book by Garcia and Horn will offer something special.
Chapter 0 (Preliminaries) summarizes basic concepts and definitions, often stating results without proof (for example, properties of determinants). Chapters 1 (Vector Spaces) and 2 (Bases and Similarity) are described as reviews, but give results with proofs and examples. The second course proper starts with Chapter 3 (Block Matrices). As the chapter title suggests, the book makes systematic use of block matrices to simplify the treatment, and it is very much based on matrices rather than linear transformations.
Two things stand out about this book. First, it lies part-way between a traditional linear algebra text and texts with a numerical linear algebra focus. Thus it includes Householder matrices (but not Givens matrices), QR factorization, and Cholesky factorization. The construction given for QR factorization is essentially the Householder QR factorization, but the existence proof for Cholesky goes via the QR factor of the Hermitian positive definite square root, rather than by constructing the Cholesky factor explicitly via the usual recurrences. The existence of square roots of Hermitian positive definite matrices is proved via the spectral decomposition. It is possible to prove the existence of square roots without using the spectral theorem, and it would have been nice to mention this, at least in an exercise.
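The book's existence route for the Cholesky factorization can be reproduced numerically. The following NumPy sketch (my own example, not code from the book) builds the principal square root S of a Hermitian positive definite A from the spectral decomposition and then reads the Cholesky factor off a QR factorization of S, since A = S*S = (QR)*(QR) = R*R:

```python
import numpy as np

# Sketch of the argument: A Hermitian positive definite, S its principal
# square root (via the spectral decomposition), S = QR, hence A = R*R.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)           # Hermitian positive definite test matrix

w, V = np.linalg.eigh(A)              # spectral decomposition A = V diag(w) V*
S = V @ np.diag(np.sqrt(w)) @ V.T     # principal square root: S = S*, S @ S = A

Q, R = np.linalg.qr(S)                # S = QR
R = np.diag(np.sign(np.diag(R))) @ R  # normalize R to have positive diagonal

# A = S*S = (QR)*(QR) = R*R, so R.T is the lower triangular Cholesky factor.
print(np.allclose(R.T @ R, A))                    # True
print(np.allclose(R, np.linalg.cholesky(A).T))    # True, by uniqueness
```

The sign normalization is needed only because a QR routine is free to return an R with negative diagonal entries; with a positive diagonal, uniqueness of the Cholesky factor makes R.T agree with the factor computed directly.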
The second impressive aspect of the book is the wide, and often quite advanced, range of topics covered, which includes polar decomposition, interlacing results for the eigenvalues of Hermitian matrices, and circulant matrices. Not covered are, for example, Perron–Frobenius theory, the power method, and functions of nonsymmetric matrices (though various special cases are covered, such as the square root of a Jordan block, often in the problems). New to me are the QS decomposition of a unitary matrix, Shoda’s theorem on commutators, and the Fuglede–Putnam theorem on normal matrices.
The 16-page index occupies 3.7 percent of the book, which, according to the length criteria discussed in my article A Call for Better Indexes, is unusually thorough. However, there is some over-indexing. For example, the entry “permutation” consists of seven subentries, all referring to page 10, where “permutation, 10” would have sufficed. An index entry “Cecil Sagehen” puzzled me. It has two page locators: one on which that term does not appear and one for a problem beginning “Cecil Sagehen is either happy or sad”. A little investigation revealed that “Cecil the Sagehen” is the mascot of Pomona College, which is the home institution of the first author.
There is a large collection of problems that go well beyond simple illustration and computation, and it is good to see that the problems are indexed.
Here are some other observations.
- The singular value decomposition (SVD) is proved via the eigensystem of A*A. Personally, I prefer the more elegant, if less intuitively obvious, proof in Golub and Van Loan’s Matrix Computations.
- The treatment of Gershgorin’s theorem occupies six pages, but it omits the practically important result that if k discs form a connected region that is isolated from the other discs then that region contains precisely k eigenvalues.
- The Cayley–Hamilton theorem is proved by using the Schur form. I would do it either via the minimal polynomial or the Jordan form, but these concepts are introduced only in later chapters.
- Correlation matrices are mentioned in the preface, but do not appear in the book. They can make excellent examples.
- The real Schur decomposition is not included; only the special case of a real matrix having all real eigenvalues is given.
- Matrix norms are not treated as such: the Frobenius norm is defined as an inner product norm and, unusually, the 2-norm is defined as the largest singular value of a matrix. There are no index entries for “matrix norm”, “norm, matrix”, “vector norm”, or “norm, vector”.
- The pseudoinverse of a matrix is defined via the SVD. The Moore–Penrose conditions are not explicitly mentioned.
- Three pages at the front summarize the notation and point to where terms are defined. Ironically, the oft-used notation for an m × n matrix is not included.
- Applications are mentioned only in passing. However, this does keep the book down to a relatively slim 426 pages.
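The eigensystem construction of the SVD mentioned above is easy to carry out numerically. Here is a short NumPy sketch (my own illustration on a random matrix, not the book’s code): the eigendecomposition of A*A supplies V and the squared singular values, and U is recovered column by column as AV scaled by the singular values.

```python
import numpy as np

# SVD from the eigensystem of A*A: A*A = V diag(s^2) V*, then
# U[:, j] = A V[:, j] / s_j for the nonzero singular values s_j.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))       # full column rank with probability 1

w, V = np.linalg.eigh(A.T @ A)        # eigenvalues returned in ascending order
idx = np.argsort(w)[::-1]             # reorder so singular values descend
s = np.sqrt(np.maximum(w[idx], 0.0))  # clamp tiny negative rounding errors
V = V[:, idx]
U = A @ V / s                         # scale each column by its singular value

print(np.allclose(U @ np.diag(s) @ V.T, A))   # True: thin SVD recovered
```

This gives the “thin” SVD; handling rank-deficient A (zero singular values) needs a little more care, which is part of why I find the Golub and Van Loan proof more elegant.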
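The omitted Gershgorin counting result is easy to illustrate. In this small example of my own devising, the first two discs overlap to form a connected region isolated from the third disc, so that region must contain exactly two eigenvalues:

```python
import numpy as np

# Gershgorin discs: center a_ii, radius = sum of |off-diagonal| in row i.
A = np.array([[1.0, 0.4, 0.0],
              [0.5, 1.5, 0.1],
              [0.0, 0.1, 5.0]])
centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)

# Discs 1 and 2 overlap ([0.6, 1.4] and [0.9, 2.1]) and are isolated
# from disc 3 ([4.9, 5.1]), so their union contains exactly 2 eigenvalues.
eig = np.linalg.eigvals(A)
in_cluster = [z for z in eig
              if any(abs(z - c) <= r for c, r in zip(centers[:2], radii[:2]))]
print(len(in_cluster))  # 2
```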
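Whichever proof one prefers for the Cayley–Hamilton theorem, its statement is pleasant to verify numerically: p(A) = 0, where p is the characteristic polynomial of A. A quick NumPy check (my own sketch, using Horner’s method to evaluate p at the matrix argument):

```python
import numpy as np

# Check the Cayley-Hamilton theorem: p(A) = 0 for the characteristic
# polynomial p of A.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

c = np.poly(A)            # coefficients of det(tI - A), highest power first
pA = np.zeros_like(A)
for coeff in c:           # Horner's method at the matrix argument
    pA = pA @ A + coeff * np.eye(4)

print(np.allclose(pA, np.zeros((4, 4)), atol=1e-8))  # True (up to rounding)
```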
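Since the book defines the pseudoinverse via the SVD but does not state the Moore–Penrose conditions, it is worth recording what those four conditions are. A hedged NumPy sketch (my own example) that forms the pseudoinverse from the SVD and checks each condition:

```python
import numpy as np

# Pseudoinverse X = A^+ from the SVD, followed by a check of the four
# Moore-Penrose conditions that characterize it.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))       # full column rank, so all s > 0

U, s, Vt = np.linalg.svd(A, full_matrices=False)
X = Vt.T @ np.diag(1.0 / s) @ U.T     # invert the nonzero singular values

print(np.allclose(A @ X @ A, A))      # 1) A X A = A
print(np.allclose(X @ A @ X, X))      # 2) X A X = X
print(np.allclose((A @ X).T, A @ X))  # 3) A X is Hermitian
print(np.allclose((X @ A).T, X @ A))  # 4) X A is Hermitian
```

The pseudoinverse is the unique matrix satisfying all four conditions, which is why defining it via the SVD and defining it via the Moore–Penrose conditions agree.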
Just as for numerical analysis texts, there will probably never exist a perfect linear algebra text.
The book is very well written and typeset. With its original presentation and choice of content it must be a strong contender for use on any second (or third) course on linear algebra. It can also serve as a reference on matrix theory: look here first and turn to Horn and Johnson if you don’t find what you want. Indeed, a surprising amount of material from Horn and Johnson’s books is actually covered, albeit usually in less general form.