Creativity Workshop for EPSRC NA-HPC Network

The EPSRC Network on Numerical Algorithms and High Performance Computing, coordinated by David Silvester and me, came to the end of its three-year term in May 2014. One of our final activities was a two-day Creativity Workshop, held at Chicheley Hall just before Easter.

[Photo 1]

The workshop was advertised to network members and we were able to accept all applicants. The 23 attendees comprised PhD students, postdoctoral researchers, faculty, and HPC support experts from Cambridge University, the University of Edinburgh, Imperial College, The University of Manchester, MIT, NAG Ltd., Queen's University Belfast, STFC-RAL, UCL, and the University of Tennessee at Knoxville, along with an EPSRC representative.

The workshop was facilitated by creativity expert Dennis Sherwood. I explained the idea of these workshops in an earlier post about a creativity workshop we held for the Manchester Numerical Analysis Group last year. The attendees work in groups, tackling important questions with a structured approach that encourages innovative ideas to be generated, carefully assessed, and developed. The key ingredients are

  • a group of enthusiastic people,
  • careful planning to produce a set of nontrivial questions that address the workshop goals and are of interest to the attendees, and
  • a willingness to adapt the schedule based on how the workshop progresses.

[Photo 2: Dennis Sherwood talking about innovation and idea generation.]

The workshop was targeted at researchers working at the interface between numerical analysis and high performance computing. The aims were to share ideas and experiences, make progress on research problems, and identify topics for research proposals and new collaborations.

The topics addressed by the groups were sensitivity in sparse matrix computations; programming languages; deployability, maintainability and reliability of software; fault-resilient numerical algorithms; and “16th April 2019”.

The notes for the last topic began “It’s 16th April 2019, and we’re celebrating the success of our network. What is it, precisely, that is so successful? And what was it about the decisions we took five years ago, in 2014, that, with hindsight, were so important?”. The discussion led to a number of ideas for taking the activities of the network forward over the coming years. These include

  • organizing summer schools,
  • producing a register of members’ interests and areas of expertise,
  • exploiting opportunities for co-design across communities such as algorithm designers, NA specialists and domain scientists, and
  • creating opportunities targeted at early career members of the network.

As an ice-breaker, and to help the participants get to know each other, everyone was asked to prepare a flip chart summarizing their key attributes, why they were attending, and something they had done that they felt particularly good about. These were presented throughout the two days.

[Photo 3: Presenting my “Who I Am”, with Post-its behind me containing ideas written down by participants during the workshop.]

Dennis Sherwood has produced a 166-page report that distills and organizes the ideas generated during the workshop. Attendees will find this very useful as a reminder of the event and of the various actions that resulted from it.

The Venue

[Photo 4]

Chicheley Hall is a historic country house near Milton Keynes. It was purchased a few years ago by the Royal Society, which turned it into a hotel and conference centre, and it houses the Kavli Royal Society International Centre. It’s a terrific place to hold a small workshop. The main house and its meeting rooms have a wonderful ambience, the 80-acre grounds (complete with lake and sculpture) are a delight to walk around, and each of the 48 bedrooms is named after a famous scientist.

[Photo 5]

[Photo 6]

Photo credits: Nick Higham (1,2,4,5,6), Dennis Sherwood (3).


Videos of Lectures from Gene Golub SIAM Summer School 2013

Videos of lectures given by four of the five lecturers at the 2013 Gene Golub SIAM summer school at Fudan University in Shanghai are now available on the summer school website.

These include the five 2-hour lectures from my course on Functions of Matrices. Here is a summary of the contents of my lectures, with direct links to the videos hosted on YouTube.


  • Lecture 1: History, definitions and some applications of matrix functions. Quiz.
  • Lecture 2: Properties, more applications, Fréchet derivative, and condition number.
  • Lecture 3: Exponential integrator application. Problem classification. Methods for f(A): Schur-Parlett method, iterative methods for sign function and matrix square root.
  • Lecture 4: Convergence and stability of iterative methods for sign function and square root. The f(A)b problem. Software for matrix functions.
  • Lecture 5: The method of Al-Mohy and Higham (2011) for the \exp(A)b problem. Discussion of how to do research, reproducible research, workflow.

A written summary of the course is available as Matrix Functions: A Short Course (MIMS EPrint 2013.73).

The video team, visible in the photo below that I took of my audience, have done a great job. The music over the opening sequence is reminiscent of the theme from the film Titanic!

[Photo: my audience, with the video team visible.]



My Mac Setup

I came to Macs quite late, switching to Mac laptops in 2009 because of the quality of the hardware. Over the last year I have taken my 13-inch MacBook Pro Retina to China, the USA and Europe. With the World Travel Adapter Kit to allow hassle-free power connections, this is the ultimate machine for travelling.

I still use Windows desktop machines, but switching between Mac and Windows machines is easy nowadays thanks to three things: almost all the software that I use runs on both systems, Dropbox allows easy sharing of files between machines, and Windows and Mac OS X have converged so as to have very similar features and capabilities.


Most of my core applications are open source: Emacs, Firefox, Thunderbird, Git for version control, Cyberduck (for ftp and ssh), and TeX Live. Mac-specific software includes iTerm2 (a replacement for Terminal), Path Finder (an enhanced Finder), Skim (PDF viewer) and Witch (app-switcher, Cmd-tab replacement). And for numerical and symbolic computation I use MATLAB.

A password manager is essential nowadays. I use 1Password, which runs on all my Apple hardware and Windows, and I sync it via Dropbox.

On the iPhone a couple of free apps are proving very useful. MapsWithMe gives offline maps downloadable by country, and since it needs only a GPS signal it’s great for finding where you are on a train or in a foreign country. As long as I have the iPhone in my pocket, Moves is good at counting my number of steps per day, which is sadly all too low, and records my time spent travelling. It also has the handy feature of showing on a map where you have been, which is useful if you are lost and want to retrace your steps.

On my MacBook Pro I have File Vault turned on, so that the hard disk is encrypted. I’m impressed with how little overhead this creates with the Ivy Bridge Core i7 chip and an SSD. I also like the way File Vault works with Find My Mac to trap thieves via the Guest account (as detailed in this article)!

I continue to use Windows desktop machines. Two particular reasons are that I have not found Mac programs that match the functionality of Xyplorer (file manager) and Fineprint (printer driver), which I use many times every day.

This post is a modified version of an article titled “My Setup” that appeared in MacUser magazine, November 2013, page 126.


400 Years of Logarithms

The logarithm was first presented in John Napier’s 1614 book Mirifici Logarithmorum Canonis Descriptio (Description of the Wonderful Canon of Logarithms). Last week I was celebrating 400 years of logarithms at the Napier 400 workshop held at the ICMS in Edinburgh and organized by NAIS. The previous such celebrations had been in 1914 and, as one speaker remarked, it is nice to participate in an event held only once every 100 years.

This one-day workshop included talks by Mike Giles on computing logarithms and other special functions on GPUs, and Jacek Gondzio on the history of the logarithmic barrier function in linear and nonlinear optimization.

My interest is in the matrix logarithm. The earliest explicit occurrence that I am aware of is in an 1892 paper by Metzler On the Roots of Matrices, so we are only just into the second century of matrix logarithms.

[Photo: @nhigham introduced by Dugald Duncan at @ICMS_Edinburgh. Photo and tweet by @DesHigham.]

In my talk The Matrix Logarithm: from Theory to Computation I explained how the inverse scaling and squaring (ISS) algorithm that we use today to compute the matrix logarithm is a direct analogue of the method Henry Briggs used to produce his 1624 tables Arithmetica Logarithmica, which give logarithms to the base 10 of the numbers 1–20,000 and 90,000–100,000 to 14 decimal places. Briggs’s impressive hand computations were done by using the formulas \log a = 2^k \log a^{1/2^k} and \log(1+x) \approx x to write \log_{10} a \approx 2^k \cdot \log_{10}e \cdot (a^{1/2^k} - 1). The ISS algorithm for the matrix case uses the same idea, with the square roots being matrix square roots, but approximates \log(1+x) at a matrix argument using Padé approximants, evaluated using a partial fraction expansion. The Fréchet derivative of the logarithm can be obtained by Fréchet differentiating the formulas used in the ISS algorithm. For details see Improved Inverse Scaling and Squaring Algorithms for the Matrix Logarithm (2012) and Computing the Fréchet Derivative of the Matrix Logarithm and Estimating the Condition Number (2013).
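
To see Briggs's idea at work, here is a minimal scalar MATLAB sketch (an illustration only, not the matrix ISS algorithm; the values of a and k are arbitrary choices). Taking more square roots improves the \log(1+x) \approx x approximation, but it also increases the cancellation in forming a^{1/2^k} - 1 in floating point arithmetic, which is one reason the ISS algorithm uses a Padé approximant in place of the linear approximation.

a = 7; k = 20;                           % arbitrary test values
x = a;
for i = 1:k, x = sqrt(x); end            % x = a^(1/2^k), close to 1
approx = 2^k * log10(exp(1)) * (x - 1)   % Briggs-style approximation
exact = log10(a)                         % agrees with approx to about six digits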

As well as the logarithm itself, various log-like functions are of interest nowadays. One is the unwinding function, discussed in my previous post. Another is the Lambert W function, defined as the solution W(z) of W(z) e^{W(z)} = z. Its many applications include the solution of delay differential equations. Rob Corless and his colleagues produced a wonderful poster about the Lambert W function, which I have on my office wall. Cleve Moler has a recent blog post on the function.
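
As an aside, a basic Newton iteration on f(w) = we^w - z gives a feel for how W(z) can be computed. The following MATLAB lines are a toy sketch for real z > 0 and the principal branch only; robust implementations (such as the lambertw function in the Symbolic Math Toolbox) treat the branches and starting values much more carefully.

z = 2;                                        % arbitrary test value
w = log(1 + z);                               % crude starting guess
for i = 1:10
    w = w - (w*exp(w) - z)/(exp(w)*(1 + w));  % Newton step for w*exp(w) = z
end
residual = w*exp(w) - z                       % ~ 0, so w approximates W(2)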

A few years ago I wrote a paper with Rob, Hui Ding and David Jeffrey about the matrix Lambert W function: The solution of S exp(S) = A is not always the Lambert W function of A. We show that as a primary matrix function the Lambert W function does not yield all solutions to S \exp(S) = A, just as the primary logarithm does not yield all solutions to e^X = A. I am involved in some further work on the matrix Lambert W function and hope to have more to report in due course.


Making Sense of Multivalued Matrix Functions with the Matrix Unwinding Function

Try the following quiz. Let A be an n\times n real or complex matrix. Consider the principal logarithm—the one for which \log z has imaginary part in (-\pi,\pi]—and define z^{t} = e^{t \log z} for t\in\mathbb{C} (an important special case being t = 1/p for an integer p).

True or false:

  1. \log e^A = A for all A, in other words passing A through the exponential then the logarithm takes us on a round trip.
  2. (I-A^2)^{1/2} = (I-A)^{1/2}(I+A)^{1/2} for all A.
  3. (AB)^{t} = A^{t}B^{t} whenever A and B commute.

The answers are

  1. False: for the scalar A = 2\pi i, for example, \log e^{2\pi i} = \log 1 = 0 \ne 2\pi i. Yet e^{\log A} = A is always true.
  2. True. Yet the similar identity (A^2-I)^{1/2}=(A-I)^{1/2}(A+I)^{1/2} is false, as the scalar check after this list shows.
  3. False.
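
A scalar example in MATLAB confirms the second answer and its false variant (the value of a is an arbitrary choice):

a = -2;                                % arbitrary test value
[sqrt(1-a^2), sqrt(1-a)*sqrt(1+a)]     % equal: (1-a^2)^(1/2) = (1-a)^(1/2)(1+a)^(1/2)
[sqrt(a^2-1), sqrt(a-1)*sqrt(a+1)]     % opposite signs: the variant identity fails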

At first sight these results may seem rather strange. How can we understand them? If you take the viewpoint that each occurrence of \log and a power t in the above expressions stands for the families of all possible logarithms and powers then the identities are all true. But from a computational viewpoint we are usually concerned with a particular branch of each function, the principal branch, so equality cannot be taken for granted.

An excellent tool for understanding these identities is a new matrix function called the matrix unwinding function. This function is defined for any square matrix A by U(A) = (A - \log e^A)/(2\pi i), and it arises from the scalar unwinding number introduced by Corless, Hare and Jeffrey in 1996 [1], [2]. There is nothing special about A and B being matrices in this quiz; the answers are the same if they are scalars. But the matrix unwinding function neatly handles the extra subtleties of the matrix case.
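
In the scalar case the unwinding number is u(z) = \lceil (\mathrm{Im}\,z - \pi)/(2\pi) \rceil, which is exactly what the subfunction unwind in the code at the end of this post computes. A quick MATLAB check, with an arbitrary test value:

unwind = @(z) ceil( (imag(z) - pi)/(2*pi) );  % scalar unwinding number
z = 3i*pi;
u = unwind(z)                    % 1
log(exp(z)) - (z - 2*pi*1i*u)    % zero to rounding error: log e^z = z - 2*pi*1i*u(z)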

[Photo: Mary’s talk at the 2013 SIAM Annual Meeting in San Diego.]

From the definition we have \log e^A = A - 2\pi i U(A), so the relation in the first quiz question is clearly valid when U(A) = 0, which is the case when the eigenvalues of A have imaginary parts lying in the interval (-\pi,\pi]. Each of the above identities can be understood by deriving an exact relation in which the unwinding function provides the discrepancy between the left- and right-hand sides. For example, for commuting A and B,

(AB)^t = A^t B^t e^{-2\pi t i U(\log A + \log B)}.
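
Here is a scalar spot check of this relation in MATLAB (the values of a, b, and t are arbitrary, chosen so that the unwinding number is nonzero). Omitting the exponential factor leaves an O(1) error, so the correction really is needed.

unwind = @(z) ceil( (imag(z) - pi)/(2*pi) );  % scalar unwinding number
a = -2 + 1i; b = -3 + 2i; t = 0.3;            % arbitrary test values
u = unwind(log(a) + log(b))                   % u = 1 here
(a*b)^t - a^t * b^t * exp(-2*pi*t*1i*u)       % zero to rounding error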

Mary Aprahamian and I have recently published the paper The Matrix Unwinding Function, with an Application to Computing the Matrix Exponential (SIAM J. Matrix Anal. Appl., 35, 88-109, 2014), in which we introduce the matrix unwinding function and develop its many interesting properties. We analyze the identities discussed above, along with various others. Thanks to the University of Manchester’s Open Access funds, the paper is available for anyone to download from the SIAM website, using the given link.

The matrix unwinding function has another use. Note that e^A=e^{\log e^A}=e^{A-2\pi i U(A)} and the matrix A-2\pi i U(A) has eigenvalues with imaginary parts in (-\pi,\pi]. The scaling and squaring method for computing the matrix exponential is at its most efficient when A has norm of order 1, and this argument reduction operation tends to reduce the norm of A when A has eigenvalues with large imaginary part. In the paper we develop this argument reduction and show that it can lead to substantial computational savings.
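
The scalar case shows the effect in miniature (with an arbitrary test value): subtracting 2\pi i times the unwinding number leaves the exponential unchanged but can greatly reduce the size of the argument.

unwind = @(z) ceil( (imag(z) - pi)/(2*pi) );  % scalar unwinding number
a = 1 + 50i;
b = a - 2*pi*1i*unwind(a);   % imag(b) now lies in (-pi, pi]
exp(a) - exp(b)              % zero to rounding error
[abs(a) abs(b)]              % 50.01 versus 1.03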

How can we compute U(A)? The following incomplete MATLAB code implements the Schur algorithm developed in the paper. The full code is available.

function U = unwindm(A,flag)
%UNWINDM  Matrix unwinding function.
%   UNWINDM(A) is the matrix unwinding function of the square matrix A.

%   Reference: M. Aprahamian and N. J. Higham.
%   The matrix unwinding function, with an application to computing the
%   matrix exponential.  SIAM J. Matrix Anal. Appl., 35(1):88-109, 2014.

%   Mary Aprahamian and Nicholas J. Higham, 2013.

if nargin < 2, flag = 1; end
[Q,T] = schur(A,'complex');

ord = blocking(T);
[ord, ind] = swapping(ord);  % Gives the blocking.
ord = max(ord)-ord+1;        % Since ORDSCHUR puts highest index top left.
[Q,T] = ordschur(Q,T,ord);
U = Q * unwindm_tri(T) * Q';

%%%%%%%%%%%%%%%%%%%%%%%%%%%
function F = unwindm_tri(T)
%UNWINDM_tri   Unwinding matrix of upper triangular matrix.

n = length(T);
F = diag( unwind( diag(T) ) );

% Compute off-diagonal of F by scalar Parlett recurrence.
for j=2:n
     for i = j-1:-1:1
         if F(i,i) == F(j,j)
            F(i,j) = 0;        % We're within a diagonal block.
         else   
            s = T(i,j)*(F(i,i)-F(j,j));
            if j-i >= 2
               k = i+1:j-1;
               s = s + F(i,k)*T(k,j) - T(i,k)*F(k,j);
            end
            F(i,j) = s/(T(i,i)-T(j,j));
         end
     end   
end

%%%%%%%%%%%%%%%%%%%%%%
function u = unwind(z)
%UNWIND  Unwinding number.
%   UNWIND(A) is the (scalar) unwinding number.

u = ceil( (imag(z) - pi)/(2*pi) );

% Remaining subfunctions (blocking, swapping) omitted; see the full code.

Here is an example. As it illustrates, the unwinding matrix of a real matrix is usually pure imaginary.

>> A = [1 4; -1 1]*4, U = unwindm(A)
A =
     4    16
    -4     4
U =
   0.0000 + 0.0000i   0.0000 - 2.0000i
   0.0000 + 0.5000i  -0.0000 + 0.0000i
>> residual = A - logm(expm(A))
residual =
   -0.0000   12.5664
   -3.1416   -0.0000
>> residual - 2*pi*i*U
ans =
   1.0e-15 *
  -0.8882 + 0.0000i   0.0000 + 0.0000i
  -0.8882 + 0.0000i  -0.8882 + 0.3488i

Footnotes:

[1] Robert Corless and David Jeffrey, The Unwinding Number, SIGSAM Bulletin, 30, 28-35, 1996.

[2] David Jeffrey, D. E. G. Hare and Robert Corless, Unwinding the Branches of the Lambert W Function, Math. Scientist, 21, 1-7, 1996.


Second Edition (2014) of Handbook of Linear Algebra edited by Hogben

One of the two or three largest books I have ever owned was recently delivered to me. The second edition of the Handbook of Linear Algebra, edited by Leslie Hogben (with the help of associate editors Richard Brualdi and G. W. (Pete) Stewart), comes in at over 1900 pages and 7cm thick. It is the same height and width as, but much thicker than, the fourth edition of Golub and Van Loan’s Matrix Computations.


The second edition is substantially expanded from the 1400-page first edition of 2007, with 95 articles as opposed to the original 77. The table of contents and list of contributors are available at the book’s website.

The handbook aims to cover the major topics of linear algebra at both undergraduate and graduate level, as well as numerical linear algebra, combinatorial linear algebra, applications to different areas, and software.

The distinguished list of about 120 authors has produced articles in the CRC handbook style, which requires everything to be presented as a definition, a fact (without proof), an algorithm, or an example. As the author of the chapter on Functions of Matrices, I didn’t find this a natural style to write in, but one benefit is that it encourages the presentation of examples, and the large number of illustrative examples is a characteristic feature of the book.

The 18 new chapters include

  • Tensors and Hypermatrices by Lek-Heng Lim
  • Matrix Polynomials by Joerg Liesen and Christian Mehl
  • Matrix Equations by Beatrice Meini
  • Invariant Subspaces by G. W. Stewart
  • Tournaments by T. S. Michael
  • Nonlinear Eigenvalue Problems by Heinrich Voss
  • Linear Algebra in Mathematical Population Biology and Epidemiology by Fred Brauer and Carlos Castillo-Chavez
  • Sage by Robert A. Beezer, Robert Bradshaw, Jason Grout, and William Stein

A notable absence from the applications chapters is network analysis, which in recent years has increasingly made use of linear algebra to define concepts such as centrality and communicability. However, it is impossible to cover every topic and in such a major project I would expect that some articles are invited but do not come to fruition by publication time.

The book is typeset in \LaTeX, like the first edition, but now using the Computer Modern fonts, which I feel give better readability than the font used previously.

A huge amount of thought has gone into the book. It has a 9-page initial section called Preliminaries that lists key definitions, a 51-page glossary, a 12-page notation index, and a 54-page main index.

For quite a while I was puzzled by index entries such as “50-12–17”. I eventually noticed that the second dash is an en-dash and realized that the notation means “pages 12 to 17 of article 50”. This should have been noted at the start of the index.

In fact my only serious criticism of the book is the index. It is simply too hard to find what you are looking for. For example, there is no entry for Gershgorin’s theorem, which appears on page 16-6. Nor is there one for Courant-Fischer, whose variational eigenvalue characterization theorem is on page 16-4. There is no index entry under “exponential”, but the matrix exponential appears under two other entries and they point to only one of the various pages where the exponential appears. The index entry for Loewner partial ordering points to Chapter 22, but the topic also has a substantial appearance in Section 9.5. Surprisingly, most of these problems were not present in the index to the first edition, which is also two pages longer!

Fortunately the glossary is effectively a high-level index with definitions of terms (and an interesting read in itself). So to get the best from the book use the glossary and index together!

An alternative book for reference is Bernstein’s Matrix Mathematics (second edition, 2009), which has an excellent 100+ page index, but no glossary. I am glad to have both books on my shelves (the first edition at home and the second edition at work, or vice versa—these books are too heavy to carry around!).

Overall, Leslie Hogben has done an outstanding job to produce a book of this size in a uniform style with such a high standard of editing and typesetting. Ideally one would have both the hard copy and the ebook version, so that one can search the latter. Unfortunately, the ebook appears to have the same relatively high list price as the hard copy (“unlimited access for $169.95”) and I could not see a special deal for buying both. Nevertheless, this is certainly a book to ask your library to order and maybe even to purchase yourself.


Cataloguing Software for Matrix Functions

I began working on functions of matrices just over thirty years ago, when I was an MSc student, my original interest being in the matrix square root. In those days relatively little research had been done on the topic and no software for evaluating matrix functions was generally available. Since then interest in matrix functions has grown greatly.

Functions of interest include the exponential, the logarithm, and real powers, along with all kinds of trigonometric functions and some less generally known functions such as the sign function and the unwinding function.

[Photo: Gil Strang at the Advances in Matrix Functions and Matrix Equations Workshop in Manchester, April 2013.]

A large amount of software for evaluating matrix functions now exists, covering many languages (C++, Fortran, Julia, Python, …) and problem solving environments (GNU Octave, Maple, Mathematica, MATLAB, R, Scilab, …). Some of it is part of a core product or package, while other codes are available individually on researchers’ web sites. It is hard to keep up with what is available.

Edvin Deadman and I therefore decided to produce a catalogue of matrix function software, which is available in the form of MIMS EPrint 2014.8. We have organized the catalogue by language/package and documented the algorithms that are implemented. The EPrint also contains a summary of the rapidly growing number of applications in which matrix functions are used, including some that we discovered only recently, and a list of what we regard as the best current algorithms.

Producing the catalogue took more work than I expected. Many packages are rather poorly documented and we sometimes had to delve deep into documentation or source code in order to find out which algorithms had been implemented.

One thing our survey shows is that the most complete and up-to-date collection of codes for matrix functions currently available is that in the NAG Library, which contains over 40 codes, all implementing state-of-the-art algorithms. This is no accident. We recently completed a three-year Knowledge Transfer Partnership (KTP) funded by the Technology Strategy Board, the University of Manchester, and EPSRC, whose purpose was to translate matrix function algorithms into software for the NAG Engine (the underlying code base from which all NAG products are built) and to embed processes and expertise in developing matrix function software into the company. Edvin was the Associate employed on the project and wrote all the codes, which are in Fortran.

A video about the KTP project has been made by the University of Manchester, and more information about the project can be obtained from Edvin’s posts at the NAG blog.

We intend to update the catalogue from time to time and welcome notification of errors and omissions.
