(6) 26G 16@ 1& 36G

(6a) 22sp 2. 1: 95 1& 15 1& 3S 2& 3: 1.

which say that on the 6th line you should type the letter G 26 times followed by 16 @ symbols, etc., then overwrite the line with 22 spaces, 2 full stops, etc.

This is an example of *ASCII art*, though ASCII art does not usually involve overwriting characters.

At the time I had a Commodore Pet microcomputer and it struck me that the painstaking process of typing the image would be better turned into a computer program. Once written and debugged, the program could be used to print multiple copies of the image. By switching the data set, the program could be used to print other photos. So I wrote a program in Commodore Basic that printed the image to a Commodore 4022 dot matrix printer.

I sent the program to Bob. He liked it and printed it in an appendix to his 1982 book *Bob Neill’s Book of Typewriter Art (With Special Computer Programme)*. That book contains instructions for typing 20 different images, including other members of the royal family, Elvis Presley, Telly Savalas (the actor who played Kojak in the TV series of the same name, which was popular at the time), and various animals. *Bob Neill’s Second Book of Typewriter Art*, published in 1984, reprinted my original program and included further celebrities such as Adam Ant, Benny from Crossroads, “J.R.” from Dallas and Barry Manilow.

I recently came across some articles describing Bob’s work, including one by his daughter, Barbara, one by Lori Emerson that includes a PDF scan of the first book, and an article *The Lost Ancestors of ASCII Art*. The latter pointed me to a recently published book, *Typewriter Art: A Modern Anthology*. This resurgence of interest in typewriter art prompted me to look again at my code.

I had revisited my original 1982 Basic code later in the 1980s, converting it to GW-Basic so it would run on IBM PCs with Epson printers. I had also added the data for *The Tabby Cat* from Bob’s second book. Here is an extract from the code, complete with GOTOs and GOSUBs (GW-Basic had few structured programming features).

```basic
10 REM TYPEART.BAS
20 REM Program by Nick Higham 1982 (Commodore Basic),
30 REM and 1988 (GW-Basic/Turbo Basic). (c) N.J. Higham 1982, 1988.
40 REM Designs by Bob Neill. (c) A.R. Neill 1982, 1984.
...
530 REM -----------------------------
540 REM ROUTINE TO PRINT OUT DATABASE
550 REM -----------------------------
560 DEV$ = "LPT"+PP$+":"
570 OPEN DEV$ FOR OUTPUT AS #1
580 PRINT #1, RESET.CODE$
590 WIDTH #1,255 ' this stops basic inserting unwanted carriage returns
600 GOSUB 800
610 L$=""
620 GOSUB 700:IF A$="/" THEN PRINT#1, NORMAL.LFEED$+L$: GOTO 610
630 IF A$="-" THEN PRINT#1, ZERO.LFEED$;L$: GOTO 610
640 A=ASC(A$):IF A>47 AND A<58 THEN A=A-48: GOTO 660
650 L$=L$+A$: GOTO 620
660 GOSUB 700:B=ASC(A$):IF B>47 AND B<58 THEN A=10*A+B-48: GOSUB 700
670 FOR I=1 TO A:L$=L$+A$:NEXT: GOTO 620
680 '
690 REM -- SUBROUTINE TO TAKE NEXT CHARACTER FROM Z$
700 A$=MID$(Z$,P,1):P=P+1: IF A$<>" " AND A$<>"" THEN 730
710 IF P>Z THEN GOSUB 800
720 GOTO 700
730 IF A$="]" THEN A$=" "
740 IF A$="#" THEN A$=CHR$(34)
750 IF A$="^" THEN A$=":"
760 IF P>Z THEN GOSUB 800
770 RETURN
780 '
790 REM -- SUBROUTINE TO READ NEXT LUMP OF DATA
800 READ Z$:Z=LEN(Z$):P=1
810 IF Z$="PAUSE" THEN FOR D=1 TO 20000:NEXT: GOTO 800
820 IF Z$="FINISH" THEN PRINT #1, CHR$(12)+RESET.CODE$: CLOSE #1:END
830 RETURN
840 '
850 REM -------------------------------------
860 REM * DATABASE1 - H.R.H. PRINCE CHARLES *
870 REM -------------------------------------
880 '
890 DATA "H.R.H. Prince Charles"
900 DATA 79G/79G/79G/79G
910 DATA /79G-25]2.2^2&^L2^2&3^2.
920 DATA /26G16@&36G-22]2.^9]&S&3S2&3^.
930 DATA /22G23@34G-20].^10&]3&^6Y2C&^.
...
4710 '
4720 REM -- EXPLANATION OF DATA --
4730 REM / MEANS NEWLINE
4740 REM - MEANS CONTINUATION LINE
4750 REM 29G MEANS PRINT 29 LETTER G'S.
4760 REM @ MEANS PRINT ONE @ CHARACTER.
4770 REM CHARACTERS : " AND 'SPACE'
4780 REM ARE REPRESENTED BY ^ # AND ]
4790 REM IN THE DATA STATEMENTS.
4800 REM ALL OTHER CHARACTERS ARE
4810 REM PRINTED OUT AS THEMSELVES.
```

The full code is available, along with documentation.

Like typewriters, dot matrix printers could carry out a carriage return without line feed. Today’s inkjet and laser printers cannot do that. I pose a challenge:

convert the program to a modern language (MATLAB or Python are natural choices) and modify it to render the images in some appropriate format.
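To give a flavour of such a conversion, here is a minimal Python sketch (my own, not part of the original program) that decodes strings in the DATA format described by the REM statements above. One simplification is hedged in the comments: where the original overprints a continuation line, this sketch simply lets the continuation's visible characters win.

```python
import re

# Substitutions from the REM statements: "]", "#" and "^" stand for
# space, double quote and colon in the DATA strings.
SUBS = {']': ' ', '#': '"', '^': ':'}

def expand(chunk):
    """Expand run-length codes such as '26G16@' into literal text."""
    return ''.join(SUBS.get(ch, ch) * (int(n) if n else 1)
                   for n, ch in re.findall(r'(\d*)(\S)', chunk))

def decode(data):
    """Decode a DATA string: '/' starts a new row and '-' overprints what
    follows onto the previous row. True overprinting puts both characters
    in the same cell; here the continuation's visible (non-space)
    characters simply replace what is underneath, a simplification."""
    rows, mode = [''], 'append'
    for piece in re.split(r'([/-])', data.replace(' ', '')):
        if piece == '/':
            rows.append('')
            mode = 'append'
        elif piece == '-':
            mode = 'overprint'
        elif piece:
            text = expand(piece)
            if mode == 'overprint':
                base = list(rows[-1].ljust(len(text)))
                for i, ch in enumerate(text):
                    if ch != ' ':
                        base[i] = ch
                rows[-1] = ''.join(base)
            else:
                rows[-1] += text
            mode = 'append'
    return rows

print('\n'.join(decode('3G2]A/5./3G-]X')))
```

Rendering the resulting rows is then a separate choice: print them in a monospaced font, or map characters to pixel intensities for an image.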

- A. R. Neill. *Bob Neill’s Book of Typewriter Art (With Special Computer Programme)*. The Weavers Press, 4 Weavers Cottages, Goudhurst, Kent, 1982, 176 pp. ISBN 0 946017 01 8.
- A. R. Neill. *Bob Neill’s Second Book of Typewriter Art*. The Weavers Press, 4 Weavers Cottages, Goudhurst, Kent, 1984. ISBN 0 946017 02 6.


My article *Sylvester’s Influence on Applied Mathematics* published in the August 2014 issue of Mathematics Today explains how Sylvester’s work continues to have a strong influence on mathematics. A version of the article with an extended bibliography containing additional historical references is available as a MIMS EPrint.

In the article I discuss how

- Many mathematical terms coined by Sylvester are still in use today, such as the words “matrix” and “Jacobian”.
- The Sylvester equation and the quadratic matrix equation that he studied have many modern applications and are the subject of ongoing research.
- Sylvester’s law of inertia, as taught in undergraduate linear algebra courses, continues to be a useful tool.
- Sylvester gave the first definition of a function of a matrix, the study of which has in recent years become a very active area of research.
- Sylvester’s resultant matrix, which provides information about the common roots of two polynomials, has important applications in computational geometry and symbolic algebra.
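To illustrate the second point, here is a small numerical sketch (the matrices are invented for the example) that solves a Sylvester equation AX + XB = C via the standard Kronecker-product formulation. This is only sensible for tiny matrices; practical codes use the Bartels-Stewart algorithm (for instance, scipy.linalg.solve_sylvester).

```python
import numpy as np

# Example Sylvester equation A X + X B = C: the solution is unique
# because A and -B have no eigenvalue in common.
A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
C = np.array([[1.0, 0.0], [2.0, 1.0]])

n, m = A.shape[0], B.shape[0]
# vec(A X + X B) = (I_m kron A + B^T kron I_n) vec(X), column-major vec.
K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
X = np.linalg.solve(K, C.flatten(order='F')).reshape((n, m), order='F')

print(np.allclose(A @ X + X @ B, C))  # residual check
```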

Sylvester’s collected works, totalling almost 3000 pages, are freely available online and are well worth perusing: Volume 1, Volume 2, Volume 3, Volume 4.

In a subsequent post I will write about Sylvester’s life.


David was a truly interdisciplinary mathematician and led the CICADA (Centre for Interdisciplinary Computational and Dynamical Analysis) project (2007-2011), a £3M centre funded by the University of Manchester and EPSRC, which explored new mathematical and computational methods for analyzing hybrid systems and asynchronous systems and developed adaptive control methods for these systems. The centre involved academics from the Schools of Mathematics, Computer Science, and Electrical and Electronic Engineering, along with four PhD students and six postdocs, all brought together by David’s inspirational leadership.

One of the legacies of CICADA is the burgeoning activity in Tropical Mathematics, which straddles the pure and applied mathematics groups in Manchester, and whose weekly seminars David managed to attend regularly until shortly before his death. Indeed one of David’s last papers is his Algebraic approach to time borrowing (2013), with Steve Furber and Marianne Johnson, which uses max-plus algebra to study an algorithmic approach to time borrowing in digital hardware.

Among the other things that David pioneered in the School, two stand out for me. First, he ran one of the EPSRC creativity workshop pilots in 2010 under the Creativity@Home banner, for the CICADA project team. The report from that workshop contains a limerick, which I remember David composing and reading out on the first morning:

One who works on Project CICADA

Has to be a conceptual trader

Who needs the theory of Morse

To tap into the Force -

A mathematically driven Darth Vader!

The workshop was influential in guiding the subsequent activities of CICADA and its success encouraged me to organize two further creativity workshops, for the numerical analysis group and for the EPSRC NA-HPC Network.

The second idea that David introduced to the School was the role of a *technology translator*. He had organized (with David Abrahams) a European Study Group with Industry in Manchester in 2005 and saw first-hand the important role played by technology translators in providing two-way communication between mathematicians and industry. David secured funding from the University’s EPSRC Knowledge Transfer Account and combined this with CICADA funds to create a technology translator post in the School of Mathematics. That role was very successful and the holder (Dr Geoff Evatt) is now a permanent lecturer in the School.

I’ve touched on just a few of David’s many contributions. I am sure other tributes to David will appear, and I will try to keep a record at the end of this post.

Photo credits: Nick Higham (1), Dennis Sherwood (2).

**Updates**

- Condolences page on School of Mathematics website.
- By Paul Glendinning.
- By Douglas Kell.
- By David Sumpter.
- By Kate Cooper.


The workshop was advertised to network members and we were able to accept all applicants. The 23 attendees comprised PhD students, postdoctoral researchers, faculty, and HPC support experts from Cambridge University, the University of Edinburgh, Imperial College, The University of Manchester, MIT, NAG Ltd., Queen’s University Belfast, STFC-RAL, UCL, and the University of Tennessee at Knoxville, along with an EPSRC representative.

The workshop was facilitated by creativity expert Dennis Sherwood. I explained the idea of these workshops in an earlier post about a creativity workshop we held for the Manchester Numerical Analysis Group last year. The procedure is for the attendees to work in groups tackling important questions using a structured approach that encourages innovative ideas to be generated and carefully assessed and developed. The key ingredients are

- a group of enthusiastic people,
- careful planning to produce a set of nontrivial questions that address the workshop goals and are of interest to the attendees,
- a willingness to adapt the schedule based on how the workshop progresses.

The workshop was targeted at researchers working at the interface between numerical analysis and high performance computing. The aims were to share ideas and experiences, make progress on research problems, and identify topics for research proposals and new collaborations.

The topics addressed by the groups were sensitivity in sparse matrix computations; programming languages; deployability, maintainability and reliability of software; fault-resilient numerical algorithms; and “16th April 2019”.

The notes for the last topic began “It’s 16th April 2019, and we’re celebrating the success of our network. What is it, precisely, that is so successful? And what was it about the decisions we took five years ago, in 2014, that, with hindsight, were so important?”. The discussion led to a number of ideas for taking the activities of the network forward over the coming years. These include

- organizing summer schools,
- producing a register of members’ interests and areas of expertise,
- exploiting opportunities for co-design across communities such as algorithm designers, NA specialists and domain scientists, and
- creating opportunities targeted at early career members of the network.

As an ice-breaker, and a way for the participants to get to know each other, everyone was asked to prepare a flip chart containing a summary of their key attributes, why they were attending, and something they have done that they feel particularly good about. These were presented throughout the two days.

Dennis Sherwood has produced a 166-page report that distills and organizes the ideas generated during the workshop. Attendees will find this very useful as a reminder of the event and of the various actions that resulted from it.

Chicheley Hall is a historic country house located near Milton Keynes. It was purchased a few years ago by the Royal Society, who turned it into a hotel and conference center, and it houses the Kavli Royal Society International Centre. It’s a terrific place to hold a small workshop. The main house and its meeting rooms have a wonderful ambience, the 80-acre grounds (complete with lake and dinosaur sculpture) are a delight to walk around, and each of the 48 bedrooms is named after a famous scientist.

Photo credits: Nick Higham (1,2,4,5,6), Dennis Sherwood (3).

**Addendum (July 29, 2014)**

- A 2011 article The Story of Chicheley Hall by Peter Collins and Stefanie Fischer describes the history of the hall, which goes back to the 1086 Domesday Book.
- The dinosaur model that I saw in the grounds is described in the blog post Milton Keynes: where giant pterosaurs go to die.


These include the five 2-hour lectures from my course on Functions of Matrices. Here is a summary of the contents of my lectures, with direct links to the videos hosted on YouTube.

- Lecture 1: History, definitions and some applications of matrix functions. Quiz.
- Lecture 2: Properties, more applications, Fréchet derivative, and condition number.
- Lecture 3: Exponential integrator application. Problem classification. Methods for $f(A)$: Schur-Parlett method, iterative methods for sign function and matrix square root.
- Lecture 4: Convergence and stability of iterative methods for sign function and square root. The $f(A)b$ problem. Software for matrix functions.
- Lecture 5: The method of Al-Mohy and Higham (2011) for the $e^Ab$ problem. Discussion of how to do research, reproducible research, workflow.

A written summary of the course is available as Matrix Functions: A Short Course (MIMS EPrint 2013.73).

The video team, visible in the photo below that I took of my audience, have done a great job. The music over the opening sequence is reminiscent of the theme from the film *Titanic*!

As a reminder, other relevant links are

- a report on the School by Sam Relton,
- a post by me describing my experience of the summer school,
- a post by me about the written summary of my course.


I still use Windows desktop machines, but switching between Mac and Windows machines is easy nowadays thanks to three things: almost all the software that I use runs on both systems, Dropbox allows easy sharing of files between machines, and Windows and Mac OS X have converged so as to have very similar features and capabilities.

Most of my core applications are open source: Emacs, Firefox, Thunderbird, Git for version control, Cyberduck (for ftp and ssh), and TeX Live. Mac-specific software includes iTerm2 (a replacement for Terminal), Path Finder (an enhanced Finder), Skim (PDF viewer) and Witch (app-switcher, Cmd-tab replacement). And for numerical and symbolic computation I use MATLAB.

A password manager is essential nowadays. I use 1Password, which runs on all my Apple hardware and Windows, and I sync it via Dropbox.

On the iPhone a couple of free apps are proving very useful. MapsWithMe gives offline maps downloadable by country, and since it only needs a GPS signal it’s great for finding where you are while on a train, or in a foreign country. As long as I have the iPhone in my pocket, Moves is good at counting my number of steps per day, which is sadly all too low, and records my time spent travelling. It also has the handy feature of showing on a map where you have been, which is useful if you are lost and want to retrace your steps.

On my MacBook Pro I have FileVault turned on, so that the hard disk is encrypted. I’m impressed with how little overhead this creates with the Core i7 Ivy Bridge chip and an SSD. I also like the way FileVault works with Find My Mac to trap thieves via the Guest account (as detailed in this article)!

I continue to use Windows desktop machines. Two particular reasons are that I have not found Mac programs that match the functionality of Xyplorer (file manager) and Fineprint (printer driver), which I use many times every day.

*This post is a modified version of an article titled “My Setup” that appeared in MacUser magazine, November 2013, page 126.*


This one-day workshop included talks by Mike Giles on computing logarithms and other special functions on GPUs, and Jacek Gondzio on the history of the logarithmic barrier function in linear and nonlinear optimization.

My interest is in the *matrix* logarithm. The earliest explicit occurrence that I am aware of is in an 1892 paper by Metzler On the Roots of Matrices, so we are only just into the second century of matrix logarithms.

In my talk The Matrix Logarithm: from Theory to Computation I explained how the inverse scaling and squaring (ISS) algorithm that we use today to compute the matrix logarithm is a direct analogue of the method Henry Briggs used to produce his 1624 tables Arithmetica Logarithmica, which give logarithms to the base 10 of the numbers 1–20,000 and 90,000–100,000 to 14 decimal places. Briggs’s impressive hand computations were done by using the formulas $\log_{10} x = 2^k \log_{10} x^{1/2^k}$ and $\log_{10}(1+y) \approx y/\ln 10$ (for small $y$) to write $\log_{10} x \approx 2^k (x^{1/2^k} - 1)/\ln 10$. The ISS algorithm for the matrix case uses the same idea, with the square roots being matrix square roots, but approximates the logarithm at a matrix argument using Padé approximants, evaluated using a partial fraction expansion. The Fréchet derivative of the logarithm can be obtained by Fréchet differentiating the formulas used in the ISS algorithm. For details see Improved Inverse Scaling and Squaring Algorithms for the Matrix Logarithm (2012) and Computing the Fréchet Derivative of the Matrix Logarithm and Estimating the Condition Number (2013).
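The flavour of the ISS idea can be seen in a few lines of scalar Python. This is a sketch only: the genuine algorithm works with matrix square roots, chooses the approximation degree adaptively, and uses a Padé approximant rather than the truncated Taylor series below.

```python
import math

def log_iss_scalar(x, terms=8):
    """Natural log of x > 0 by inverse scaling and squaring: take square
    roots until x is near 1, approximate log(1 + y) by a truncated series,
    then undo the roots via log x = 2^k log x^(1/2^k)."""
    k = 0
    while abs(x - 1) > 0.1:   # "inverse scaling": repeated square roots
        x = math.sqrt(x)
        k += 1
    y = x - 1
    series = sum((-1) ** (j + 1) * y ** j / j for j in range(1, terms + 1))
    return 2 ** k * series    # undo the square roots

print(abs(log_iss_scalar(100.0) - math.log(100.0)))
```

Briggs did the equivalent by hand, to 14 decimal places, square root after square root.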

As well as the logarithm itself, various log-like functions are of interest nowadays. One is the unwinding function, discussed in my previous post. Another is the Lambert W function, defined as the solution $w = W(z)$ of $we^w = z$. Its many applications include the solution of delay differential equations. Rob Corless and his colleagues produced a wonderful poster about the Lambert W function, which I have on my office wall. Cleve Moler has a recent blog post on the function.
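For scalars, W can be computed by a simple Newton iteration on $we^w = z$. The following rough sketch is my own illustration for the principal branch, for $z$ away from the branch point $-1/e$; robust implementations (such as scipy.special.lambertw) use better starting values and a higher-order iteration.

```python
import cmath

def lambertw0(z, tol=1e-13, maxit=50):
    """Principal branch of Lambert W by Newton's method on f(w) = w e^w - z.
    A rough sketch; not robust near the branch point z = -1/e."""
    w = cmath.log(1 + z)   # crude starting guess
    for _ in range(maxit):
        e = cmath.exp(w)
        step = (w * e - z) / (e * (w + 1))   # Newton step f(w)/f'(w)
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambertw0(1.0)
print(w, w * cmath.exp(w))   # w is the omega constant 0.567143...
```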

A few years ago I wrote a paper with Rob, Hui Ding and David Jeffrey about the *matrix* Lambert W function: The solution of S exp(S) = A is not always the Lambert W function of A. We show that as a primary matrix function the Lambert W function does not yield all solutions to $Se^S = A$, just as the primary logarithm does not yield all solutions to $e^X = A$. I am involved in some further work on the matrix Lambert W function and hope to have more to report in due course.


*True or false*:

- $\log(e^A) = A$ for all $A$; in other words, passing $A$ through the exponential then the logarithm takes us on a round trip.
- $\log(A^{1/2}) = \frac{1}{2}\log A$ for all $A$ (with no eigenvalues on the negative real axis, so that the principal functions are defined).
- $\log(AB) = \log A + \log B$ whenever $A$ and $B$ commute.

The answers are

- False. Yet $e^{\log A} = A$ is always true.
- True. Yet the similar identity $\log(A^2) = 2\log A$ is false.
- False.

At first sight these results may seem rather strange. How can we understand them? If you take the viewpoint that each occurrence of $\log$ and each power in the above expressions stands for the families of all possible logarithms and powers then the identities are all true. But from a computational viewpoint we are usually concerned with a particular branch of each function, the principal branch, so equality cannot be taken for granted.

An excellent tool for understanding these identities is a new matrix function called the *matrix unwinding function*. This function is defined for any square matrix $A$ by $\mathcal{U}(A) = (A - \log e^A)/(2\pi i)$, and it arises from the scalar *unwinding number* introduced by Corless, Hare and Jeffrey in 1996 ^{1}, ^{2}. There is nothing special about $A$ and $B$ being matrices in this quiz; the answers are the same if they are scalars. But the matrix unwinding function neatly handles the extra subtleties of the matrix case.

From the definition we have $\log e^A = A - 2\pi i\,\mathcal{U}(A)$, so the relation $\log(e^A) = A$ in the first quiz question is clearly valid when $\mathcal{U}(A) = 0$, which is the case when the eigenvalues of $A$ have imaginary parts lying in the interval $(-\pi,\pi]$. Each of the above identities can be understood by deriving an exact relation in which the unwinding function provides the discrepancy between the left and right-hand sides. For example, when $A$ and $B$ commute, $\log(AB) = \log A + \log B - 2\pi i\,\mathcal{U}(\log A + \log B)$.
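These relations are easy to check numerically in the scalar case, where the unwinding number is $\mathcal{U}(z) = \lceil (\mathrm{Im}\,z - \pi)/(2\pi) \rceil$, the same formula used in the MATLAB code at the end of this post. A small Python sketch:

```python
import cmath
import math

def unwind(z):
    """Scalar unwinding number U(z) = ceil((Im z - pi) / (2 pi))."""
    return math.ceil((z.imag - math.pi) / (2 * math.pi))

# log(e^z) = z - 2*pi*i*U(z) for any z, so the round trip recovers z
# exactly only when Im z lies in (-pi, pi].
for z in [1 + 0.5j, 3 + 7j, -2 - 9j]:
    lhs = cmath.log(cmath.exp(z))
    rhs = z - 2j * math.pi * unwind(z)
    assert abs(lhs - rhs) < 1e-10

print(unwind(1 + 0.5j), unwind(3 + 7j), unwind(-2 - 9j))  # prints: 0 1 -1
```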

Mary Aprahamian and I have recently published the paper The Matrix Unwinding Function, with an Application to Computing the Matrix Exponential (SIAM J. Matrix Anal. Appl., 35, 88-109, 2014), in which we introduce the matrix unwinding function and develop its many interesting properties. We analyze the identities discussed above, along with various others. Thanks to the University of Manchester’s Open Access funds, that paper is available for anyone to download from the SIAM website, using the given link.

The matrix unwinding function has another use. Note that $e^A = e^{A - 2\pi i\,\mathcal{U}(A)}$ and the matrix $A - 2\pi i\,\mathcal{U}(A)$ has eigenvalues with imaginary parts in $(-\pi,\pi]$. The scaling and squaring method for computing the matrix exponential is at its most efficient when $A$ has norm of order 1, and this *argument reduction* operation tends to reduce the norm of $A$ when $A$ has eigenvalues with large imaginary part. In the paper we develop this argument reduction and show that it can lead to substantial computational savings.

How can we compute $\mathcal{U}(A)$? The following incomplete MATLAB code implements the Schur algorithm developed in the paper. The full code is available.

```matlab
function U = unwindm(A,flag)
%UNWINDM  Matrix unwinding function.
%   UNWINDM(A) is the matrix unwinding function of the square matrix A.

%   Reference: M. Aprahamian and N. J. Higham.
%   The matrix unwinding function, with an application to computing the
%   matrix exponential. SIAM J. Matrix Anal. Appl., 35(1):88-109, 2014.

%   Mary Aprahamian and Nicholas J. Higham, 2013.

if nargin < 2, flag = 1; end

[Q,T] = schur(A,'complex');
ord = blocking(T);
[ord, ind] = swapping(ord);  % Gives the blocking.
ord = max(ord)-ord+1;        % Since ORDSCHUR puts highest index top left.
[Q,T] = ordschur(Q,T,ord);
U = Q * unwindm_tri(T) * Q';

%%%%%%%%%%%%%%%%%%%%%%%%%%%
function F = unwindm_tri(T)
%UNWINDM_tri   Unwinding matrix of upper triangular matrix.

n = length(T);
F = diag( unwind( diag(T) ) );

% Compute off-diagonal of F by scalar Parlett recurrence.
for j=2:n
   for i = j-1:-1:1
      if F(i,i) == F(j,j)
         F(i,j) = 0;   % We're within a diagonal block.
      else
         s = T(i,j)*(F(i,i)-F(j,j));
         if j-i >= 2
            k = i+1:j-1;
            s = s + F(i,k)*T(k,j) - T(i,k)*F(k,j);
         end
         F(i,j) = s/(T(i,i)-T(j,j));
      end
   end
end

%%%%%%%%%%%%%%%%%%%%%%
function u = unwind(z)
%UNWIND  Unwinding number.
%   UNWIND(A) is the (scalar) unwinding number.

u = ceil( (imag(z) - pi)/(2*pi) );

... Other subfunctions omitted
```

Here is an example. As it illustrates, the unwinding matrix of a real matrix is usually pure imaginary.

```matlab
>> A = [1 4; -1 1]*4, U = unwindm(A)
A =
     4    16
    -4     4
U =
   0.0000 + 0.0000i   0.0000 - 2.0000i
   0.0000 + 0.5000i  -0.0000 + 0.0000i
>> residual = A - logm(expm(A))
residual =
   -0.0000   12.5664
   -3.1416   -0.0000
>> residual - 2*pi*i*U
ans =
   1.0e-15 *
  -0.8882 + 0.0000i   0.0000 + 0.0000i
  -0.8882 + 0.0000i  -0.8882 + 0.3488i
```

1. Robert Corless and David Jeffrey, The Unwinding Number, SIGSAM Bull. 30, 28–35, 1996.
2. David Jeffrey, D. E. G. Hare and Robert Corless, Unwinding the Branches of the Lambert W Function, Math. Scientist 21, 1–7, 1996.


The second edition is substantially expanded from the 1400-page first edition of 2007, with 95 articles as opposed to the original 77. The table of contents and list of contributors are available at the book’s website.

The handbook aims to cover the major topics of linear algebra at both undergraduate and graduate level, as well as numerical linear algebra, combinatorial linear algebra, applications to different areas, and software.

The distinguished list of about 120 authors has produced articles in the CRC handbook style, which requires everything to be presented as a definition, a fact (without proof), an algorithm, or an example. As the author of the chapter on Functions of Matrices, I didn’t find this a natural style to write in, but one benefit is that it encourages the presentation of examples, and the large number of illustrative examples is a characteristic feature of the book.

The 18 new chapters include

- *Tensors and Hypermatrices* by Lek-Heng Lim
- *Matrix Polynomials* by Joerg Liesen and Christian Mehl
- *Matrix Equations* by Beatrice Meini
- *Invariant Subspaces* by G. W. Stewart
- *Tournaments* by T. S. Michael
- *Nonlinear Eigenvalue Problems* by Heinrich Voss
- *Linear Algebra in Mathematical Population Biology and Epidemiology* by Fred Brauer and Carlos Castillo-Chavez
- *Sage* by Robert A. Beezer, Robert Bradshaw, Jason Grout, and William Stein

A notable absence from the applications chapters is network analysis, which in recent years has increasingly made use of linear algebra to define concepts such as centrality and communicability. However, it is impossible to cover every topic and in such a major project I would expect that some articles are invited but do not come to fruition by publication time.

The book is typeset in LaTeX, like the first edition, but now using the Computer Modern fonts, which I feel give better readability than the font used previously.

A huge amount of thought has gone into the book. It has a 9-page initial section called *Preliminaries* that lists key definitions, a 51-page glossary, a 12-page notation index, and a 54-page main index.

For quite a while I was puzzled by index entries such as “50-12–17”. I eventually noticed that the second dash is an en-dash and realized that the notation means “pages 12 to 17 of article 50”. This should have been noted at the start of the index.

In fact my only serious criticism of the book is the index. It is simply too hard to find what you are looking for. For example, there is no entry for *Gershgorin’s theorem*, which appears on page 16-6. Nor is there one for *Courant-Fischer*, whose variational eigenvalue characterization theorem is on page 16-4. There is no index entry under “exponential”, but the matrix exponential appears under two other entries and they point to only one of the various pages where the exponential appears. The index entry for *Loewner partial ordering* points to Chapter 22, but the topic also has a substantial appearance in Section 9.5. Surprisingly, most of these problems were not present in the index to the first edition, which is also two pages longer!

Fortunately the glossary is effectively a high-level index with definitions of terms (and an interesting read in itself). So to get the best from the book **use the glossary and index together**!

An alternative book for reference is Bernstein’s Matrix Mathematics (second edition, 2009), which has an excellent 100+ page index, but no glossary. I am glad to have both books on my shelves (the first edition at home and the second edition at work, or vice versa—these books are too heavy to carry around!).

Overall, Leslie Hogben has done an outstanding job to produce a book of this size in a uniform style with such a high standard of editing and typesetting. Ideally one would have both the hard copy and the ebook version, so that one can search the latter. Unfortunately, the ebook appears to have the same relatively high list price as the hard copy (“unlimited access for $169.95”) and I could not see a special deal for buying both. Nevertheless, this is certainly a book to ask your library to order and maybe even to purchase yourself.


Functions of interest include the exponential, the logarithm, and real powers, along with all kinds of trigonometric functions and some less generally known functions such as the sign function and the unwinding function.

A large amount of software for evaluating matrix functions now exists, covering many languages (C++, Fortran, Julia, Python, …) and problem solving environments (GNU Octave, Maple, Mathematica, MATLAB, R, Scilab, …). Some of it is part of a core product or package, while other codes are available individually on researchers’ web sites. It is hard to keep up with what is available.

Edvin Deadman and I therefore decided to produce a catalogue of matrix function software, which is available in the form of MIMS EPrint 2014.8. We have organized the catalogue by language/package and documented the algorithms that are implemented. The EPrint also contains a summary of the rapidly growing number of applications in which matrix functions are used, including some that we discovered only recently, and a list of what we regard as the best current algorithms.

Producing the catalogue took more work than I expected. Many packages are rather poorly documented and we sometimes had to delve deep into documentation or source code in order to find out which algorithms had been implemented.

One thing our survey shows is that the most complete and up-to-date collection of codes for matrix functions currently available is that in the NAG Library, which contains over 40 codes, all implementing state-of-the-art algorithms. This is no accident. We recently completed a three-year Knowledge Transfer Partnership (KTP) funded by the Technology Strategy Board, the University of Manchester, and EPSRC, whose purpose was to translate matrix function algorithms into software for the NAG Engine (the underlying code base from which all NAG products are built) and to embed processes and expertise in developing matrix functions software into the company. Edvin was the Associate employed on the project and wrote all the codes, which are in Fortran.

A video about the KTP project has been made by the University of Manchester and more information about the project can be obtained from Edvin’s posts at the NAG blog:

- Matrix Functions in Parallel
- The Matrix Square Root, Blocking and Parallelism
- How Do I Know I’m Getting the Right Answer?

We intend to update the catalogue from time to time and welcome notification of errors and omissions.
