What’s New in MATLAB R2018a?

MATLAB R2018a was released in March 2018. With each biannual release I try to give a brief overview of the changes in MATLAB (not the toolboxes) that are of most interest to me. These are not comprehensive summaries of what’s new and you should check the release notes for full details.


From the MATLAB “R2018a at a Glance” page.

Complex empty arrays, such as the one generated by complex([]), now have an (empty) imaginary part instead of being real.

% R2017b
>> complex([])
ans =
     []
% R2018a
>> complex([])
ans =
  0×0 empty complex double matrix

This is a consequence of a change in the way MATLAB stores complex arrays. Prior to R2018a it stored the real parts together and the imaginary parts together. In R2018a it uses an interleaved format in which the real and imaginary parts of a number are stored together; this is the storage format used in the C and C++ languages. For more details see MATLAB Support for Interleaved Complex API in C MEX Functions. Unless you write MEX files or create complex empty arrays, this change should have no effect on you.
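The difference between the two layouts can be illustrated with a small Python sketch (this shows the memory formats only; it is not MATLAB code or MATLAB's internal implementation):

```python
import struct

zs = [1 + 2j, 3 - 4j, 5 + 6j]

# Split storage (pre-R2018a style): all real parts together,
# then all imaginary parts together.
split = struct.pack("<6d", *[z.real for z in zs], *[z.imag for z in zs])

# Interleaved storage (R2018a onwards, and C99/C++ complex types):
# each real part is immediately followed by its imaginary part.
interleaved = struct.pack("<6d", *[c for z in zs for c in (z.real, z.imag)])

print(struct.unpack("<6d", split))        # (1.0, 3.0, 5.0, 2.0, -4.0, 6.0)
print(struct.unpack("<6d", interleaved))  # (1.0, 2.0, 3.0, -4.0, 5.0, 6.0)
```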

The Live Editor continues to gain improved functionality, including embedded sliders and drop-down menus.

Legends can now have multiple columns, specified with the NumColumns property for Legend objects. I must admit that I had not realized that it was already possible to arrange legends in a horizontal (1-by-n) rather than vertical orientation (n-by-1) using the Orientation property.

Tall arrays have several new capabilities, including the ability to compute a least squares solution to an overdetermined linear system Ax = b with a tall A, which is done by QR factorization.

MATLAB starts up faster and has further optimizations in the execution engine.

The GraphPlot object has some additional options for the 'force', 'force3', and 'circle' layouts.

I noticed that the recent Windows 10 Spring Creators Update broke the Symbolic Toolbox. An update to fix this (and various other issues) is available at this page (MathWorks login required).

For a concise but wide-ranging introduction and reference to MATLAB, see MATLAB Guide (third edition, 2017).

Posted in software | Tagged | 3 Comments

How to Program log z

While Fortran was the first high-level programming language used for scientific computing, Algol 60 was the vehicle for publishing mathematical software in the early 1960s. Algol 60 had real arithmetic, but complex arithmetic had to be programmed by working with real and imaginary parts. Functions of a complex variable were not built-in, and had to be written by the programmer.


I’ve written a number of papers on algorithms to compute the (principal) logarithm of a matrix. The problem of computing the logarithm of a complex scalar—given a library routine that handles real arguments—might appear trivial, by comparison. That it is not can be seen by looking at early attempts to provide such a function in Algol 60.

The paper

J. R. Herndon (1961). Algorithm 48: Logarithm of a complex number. Comm. ACM, 4(4), 179.

presents an Algol 60 code for computing \log z, for a complex number z. It uses the arctan function to obtain the argument of a complex number.


The paper

A. P. Relph (1962). Certification of Algorithm 48: Logarithm of a complex number. Comm. ACM, 5(6), 347.

notes three problems with Herndon’s code: it fails for z with zero real part, the imaginary part of the logarithm is in the wrong range (it should be (-\pi,\pi] for the principal value), and the code uses log (log to the base 10) instead of ln (log to the base e). The latter error suggests to me that the code had never actually been run, as for almost any argument it would produce an incorrect value. This is perhaps not surprising, since Algol 60 compilers must have only just started to become available in 1961.

The paper

M. L. Johnson and W. Sangren (1962). Remark on Algorithm 48: Logarithm of a complex number. Comm. ACM, 5(7), 391.

contains more discussion about avoiding division by zero and getting signs correct. In

D. S. Collens (1964). Remark on remarks on Algorithm 48: Logarithm of a complex number. Comm. ACM, 7(8), 485.

Collens notes that Johnson and Sangren’s code wrongly gives \log 0 = 0 and has a missing minus sign in one statement. Finally, Collens gives in

D. S. Collens (1964). Algorithm 243: Logarithm of a complex number: Rewrite of Algorithm 48. Comm. ACM, 7(11), 660.

a rewritten algorithm that fixes all the earlier errors.
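With modern libraries the key ingredients are easy to state: \log z = \log|z| + i \arg z, where |z| is best computed by a hypotenuse function that avoids unnecessary overflow and \arg z \in (-\pi,\pi] by the two-argument arctangent, which gets the quadrant (and the zero real part case) right. Here is an illustrative sketch in Python, checked against the library routine:

```python
import cmath
import math

def log_z(x, y):
    """Principal logarithm of z = x + iy, built from real functions only.

    math.atan2 handles the quadrant and the zero-real-part case, so the
    imaginary part lies in (-pi, pi]; math.hypot computes |z| without
    unnecessary overflow or underflow.
    """
    if x == 0.0 and y == 0.0:
        raise ValueError("the logarithm of zero is undefined")
    return complex(math.log(math.hypot(x, y)), math.atan2(y, x))

# Check against the library routine, including points with zero real part
# (one of the failure cases of the original Algorithm 48).
for z in [1 + 1j, -1 + 1j, -3 - 4j, 2j, -2j, -1 + 0j]:
    assert abs(log_z(z.real, z.imag) - cmath.log(z)) < 1e-12
```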

So it took five papers over a three year period to produce a correct Algol 60 code for the complex logarithm! Had those authors had the benefit of today’s interactive computing environments that period could no doubt have been shortened, but working with multivalued complex functions is necessarily a tricky business, as I have explained in earlier posts here and here.

Posted in research | Leave a comment

Lectures on Multiprecision Algorithms in Kácov

At the end of May, I was one of four lecturers at the ESSAM school on Mathematical Modelling, Numerical Analysis and Scientific Computing, held in Kácov, about an hour’s drive south-east of Prague in the Czech Republic.

The event was superbly organized by Josef Málek, Miroslav Rozložník, Zdeněk Strakoš and Miroslav Tůma. This was a relaxed and friendly event, and the excellent weather enabled most meals to be taken on the terrace of the family-run Sporthotel Kácov in which we were staying.


Françoise Tisseur lecturing on the nonlinear eigenvalue problem.

I gave three lectures of about one hour each on Multiprecision Algorithms. The slides are available from this link. Here is an abstract for the lectures:

Today’s computing environments offer multiple precisions of floating-point arithmetic, ranging from quarter precision (8 bits) and half precision (16 bits) to double precision (64 bits) and even quadruple precision (128 bits, available only in software), as well as arbitrary precision arithmetic (again in software). Exploiting the available precisions is essential in order to reduce the time to solution, minimize energy consumption, and (when necessary) solve ill-conditioned problems accurately.

In this course we will describe the precision landscape, explain how we can exploit different precisions in numerical linear algebra, and discuss how to analyze the accuracy and stability of multiprecision algorithms.

  • Lecture 1. IEEE standard arithmetic and availability in hardware and software. Motivation for low precision from applications, including machine learning. Exploiting reduced communication cost of low precision. Issues relating to rounding error analyses in low precision. Simulating low precision for testing purposes. Challenges of implementing algorithms in low precision.
  • Lecture 2. Basics of rounding error analysis, illustrated with summation. Why increasing precision is not a panacea. Software for high precision and its cost. Case study: the matrix logarithm in high precision.
  • Lecture 3. Solving linear systems (possibly very ill conditioned and/or sparse) using mixed precision: iterative refinement in three precisions. A hybrid direct-iterative method: GMRES-IR.
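As a footnote to the “simulating low precision for testing purposes” item in Lecture 1: one simple device is to round each double precision number to the nearest half precision value and back. A Python sketch using the struct module’s IEEE 754 half precision format (an illustration only, not the tooling used in the lectures):

```python
import struct

def to_half(x):
    """Round a double to the nearest IEEE 754 half precision (fp16)
    value, returned as a double. fp16 has an 11-bit significand,
    roughly 3 significant decimal digits."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

print(to_half(1.0))   # 1.0: exactly representable in fp16
print(to_half(0.1))   # 0.1 is not: it rounds to 0.0999755859375
# The spacing of fp16 numbers exceeds 1 beyond 2048, so neighbouring
# integers become indistinguishable.
print(to_half(2048.0) == to_half(2049.0))
```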

I gave an earlier version of these lectures in March 2018 at the EU Regional School held at the Aachen Institute for Advanced Study in Computational Engineering Science (AICES), Germany. This single two and a half hour lecture was recorded and can be viewed on YouTube. The slides are available here.


Sporthotel Kácov, Czech Republic.

Posted in conferences | Leave a comment

Joan E. Walsh (1932–2017)

By Len Freeman and Nick Higham

Joan Eileen Walsh was born on 7 October 1932 and passed away on 30 December 2017 at the age of 85.

Joan obtained a First Class B.A. honours degree in Mathematics from the University of Oxford in 1954. She then spent three years working as an Assistant Mistress at Howell’s School in Denbigh, North Wales. In 1957 Joan left teaching and enrolled at the University of Cambridge to study for a Diploma in Numerical Analysis. This qualification was awarded, with Distinction, in 1958. At this point, Joan returned to the University of Oxford Computing Laboratory to study for a D.Phil. under the supervision of Professor Leslie Fox. She was Fox’s first doctoral student. Her D.Phil. was awarded in 1961.


Between October 1960 and March 1963, Joan worked as a Mathematical Programmer for the CEGB (Central Electricity Generating Board) Computing Department in London. In April 1963, she was appointed to a Lectureship in the Department of Mathematics at the University of Manchester. She progressed through the positions of Senior Lecturer (1966) and Reader (1971) before being appointed as Professor of Numerical Analysis at the University of Manchester in October 1974. For the academic year 1967-1968 Joan had leave of absence at the SRC Atlas Computer Laboratory—a joint appointment with St Hilda’s College, Oxford.

Joan led the Numerical Analysis group at the University of Manchester until 1985, after which Christopher Baker took over. This was a period of expansion both for the Numerical Analysis group at Manchester and, more generally, for numerical analysis in Britain. This expansion of British numerical analysis was supported by special grants from the SRC (Science Research Council) to provide additional funding for the subject at the Universities of Dundee, Manchester and Oxford, from 1973 until 1976. This funding supported one Senior Research Fellow and two Research Fellows at each Institution. Joan helped establish the Manchester group as one of the leading Numerical Analysis research centres in the United Kingdom (with eight permanent staff by 1987)—a position that is maintained to the present day.

Joan was Head of the Department of Mathematics between 1986 and 1989, and subsequently became Pro-Vice Chancellor of the University of Manchester in 1990. She held the latter role for four years, and was responsible for undergraduate affairs across the University. Joan’s tenure as Pro-Vice Chancellor coincided with substantial, and sometimes controversial, changes in undergraduate teaching—for example, the introduction of semesterisation and of credit-based degree programmes; Joan managed these major changes across the University with her customary tact, energy and determination. Joan was an efficient and effective administrator at a time when relatively few women occupied senior management roles in universities.

After 35 years’ service, Joan retired from the University in 1998 and was appointed Professor Emeritus.

In retirement, Joan returned to her studies; between 2000 and 2003 she studied for an MA in “Contemporary Theology in the Catholic Tradition” at Heythrop College of the University of London.

Over the years, and particularly during her tenure as Pro-Vice Chancellor, Joan sat on, and chaired, numerous University committees, far too many to list. She had a very long relationship with Allen Hall (a University Hall of Residence) where she was on the Hall Advisory Committee from 1975 until her retirement in 1998.

Joan served in leadership roles nationally, as well as in the University. She was Vice President of the IMA (1992-1993) and a member of the Council of the IMA (1990-1991 and 1994-1995). She was elected Fellow of the Institute of Mathematics and its Applications (IMA) in 1984. She was a member of the Computer Board for Universities and Research Councils for several years from the late 1970s. She encouraged the creation of its Software Provision Committee, formally constituted in 1980 with Joan as its first Chairman, which she led until 1985. She was also President of the National Conference of University Professors (1993–1994). Further, she was a member of the Board of Governors at Withington Girls’ School, a leading independent school, for six years between 1993 and 1999.

Nowadays, all computational scientists take for granted the existence of software libraries such as the NAG Library. It is unimaginable to undertake major computational tasks without them. In 1970, Joan was one of a group of four academics who founded the Nottingham Algorithms Group with the aim of developing a comprehensive mathematical software library for use by the group of universities that were running ICL 1906A mainframe computers. Subsequently, the Nottingham Algorithms Group moved from the University of Nottingham to the University of Oxford and the project was incorporated as the Numerical Algorithms Group (NAG) Ltd. Joan became the Founding Chairman of NAG Ltd. in 1976, a position she held for the next ten years. She was subsequently a member of the Council of NAG Ltd. from 1992 until 1996. In recognition of her contribution to the NAG project Joan was elected as a Founding Member of the NAG Life Service Recognition Award in 2011.

Joan’s research interests focused on the numerical solution of ordinary differential equation boundary value problems and the numerical solution of partial differential equations. She conducted much of her research in collaboration with PhD students, supervising the following PhD students at the University of Manchester, who obtained their degrees in the years shown:

  • Thomas Sag, 1966;
  • Les Graney, 1973;
  • David Sayers, 1973;
  • Geoffrey McKeown, 1977;
  • Roderick Cook, 1978;
  • Patricia Hanson, 1979;
  • Guy Lonsdale, 1985;
  • Chan Basaruddin, 1990;
  • Fathalla Rihan (supervised jointly with C. T. H. Baker), 2000.

Joan was an important figure in the development of Numerical Analysis and Scientific Computing at the University of Manchester and in the UK more generally. Her essay Numerical Analysis at the Victoria University of Manchester, 1957-1979 gives an interesting perspective on early developments at Manchester.

Brian Ford OBE, Founder Director of NAG, writes:

Joan had a brilliant career in Mathematics (particularly areas of Numerical Mathematics, Ordinary and Partial Differential Equations), Computing, University Education and Teaching, and was an excellent researcher, teacher, administrator, doctoral supervisor and colleague. But she was so much more than that!

Joan was invariably kind and thoughtful, intellectually gifted and generous with advice and guidance. Her profound Christian faith illuminated every aspect of her life. Joan’s deep reading and wide intellectual interests coupled with her prudence and clear thinking gave her profound knowledge and command. She was excellent company – amusing, modest, never belittling nor intimidating – and enjoyed fine wine and food in good company. She held firm beliefs, gently and persuasively seeking what she saw as the right way. Many people turned to her for help, advice and references and were grateful for her readily-offered help and support.

Joan was a private, even guarded, person. A devout Catholic, on her retirement she completed an MA in “Contemporary Theology in the Catholic Tradition” at Heythrop College, University of London. Fluent in Latin and reading regularly at services, she loved the traditional Tridentine Mass of the Church. Along with her local bishop in Salford and other like-minded Catholics, she therefore worked actively for the restitution of the Tridentine Mass to the liturgy of the world-wide Church (sidelined after Vatican II in favour of local languages), an involvement which culminated in her joining high-level discussions at the Vatican. This bore fruit, the Tridentine Latin Mass being officially declared the extraordinary form of the Roman Rite of Mass a few years later: Joan was thrilled. Such was Joan’s commitment to things she believed in and her endless thought and work for others.

Joan was an excellent contributor to the NAG Library, believing strongly in collaboration and sharing, with high quality standards for all aspects of our work and thorough checking and testing. She was an excellent first Chairman of NAG and invaluable colleague and advisor. We thoroughly enjoyed working together, invariably in an excellent spirit. We achieved much.

Posted in people | 2 Comments

Conference in Honour of Walter Gautschi

Last week I had the pleasure of attending and speaking at the Conference on Scientific Computing and Approximation (March 30-31, 2018) at Purdue University, held in honour of Walter Gautschi (Professor Emeritus of Computer Science and Mathematics at Purdue University) on the occasion of his 90th birthday.


The conference was expertly organized by Alex Pothen and Jie Shen. The attendees, numbering around 70, included many of Walter’s friends and colleagues.

The speakers made many references to Walter’s research contributions, particularly in the area of orthogonal polynomials. In my talk, Matrix Functions and their Sensitivity, I emphasized Walter’s work on conditioning of Vandermonde matrices.

A Vandermonde matrix V_n is an n\times n matrix depending on parameters x_1,x_2,\ldots,x_n that has jth column [1, x_j, \ldots, x_j^{n-1}]^T. It is nonsingular when the x_i are distinct. This is a notoriously ill conditioned class of matrices. Walter said that he first experienced the ill conditioning when he computed Gaussian quadrature formulas from moments of a weight function.

Walter has written numerous papers on Vandermonde matrices that give much insight into their conditioning. Here is a very brief selection of Walter’s results. For more, see my chapter Numerical Conditioning in Walter’s collected works.

In a 1962 paper he showed that

\displaystyle\|V_n^{-1}\|_{\infty} \le \max_i \prod_{j\ne i}\frac{ 1+|x_j| }{ |x_i-x_j| }.

In 1978 he obtained

\displaystyle\|V_n^{-1}\|_{\infty} \ge \max_i \prod_{j\ne i} \frac{ \max(1,|x_j|) }{ |x_i-x_j| },

which differs from the upper bound by at most a factor 2^{n-1}. A 1975 result is that for x_i equispaced on [0,1],

\displaystyle\kappa(V_n)_{\infty} \sim \frac{1}{\pi} e^{-\frac{\pi}{4}} (3.1)^n.

A 1988 paper returns to lower bounds, showing that for x_i \ge 0 and n\ge 2,

\displaystyle\kappa(V_n)_{\infty} > 2^{n-1}.

When some of the x_i coincide a confluent Vandermonde matrix can be defined, in which columns are “repeatedly differentiated”. Walter has obtained bounds for the confluent case, too.
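The 1962 upper bound and the 1978 lower bound are cheap to evaluate, so they are easy to experiment with. A minimal Python sketch (the nodes below are arbitrary illustrative choices; only the bound formulas are implemented, not \|V_n^{-1}\|_{\infty} itself):

```python
import math

def vandermonde_inv_bounds(x):
    """Gautschi's 1962 upper bound and 1978 lower bound on the
    infinity norm of the inverse of the Vandermonde matrix with
    distinct nodes x.  Returns (lower, upper)."""
    upper = max(math.prod((1 + abs(xj)) / abs(xi - xj)
                          for xj in x if xj != xi) for xi in x)
    lower = max(math.prod(max(1, abs(xj)) / abs(xi - xj)
                          for xj in x if xj != xi) for xi in x)
    return lower, upper

# For x = [0, 1], V_2 = [1 1; 0 1] and the true norm of the inverse is 2;
# the bounds bracket it as 1 <= 2 <= 2.
print(vandermonde_inv_bounds([0.0, 1.0]))  # (1.0, 2.0)

# Equispaced nodes on [0, 1]: the bounds grow rapidly with n, and their
# ratio never exceeds 2^(n-1).
for n in [5, 10, 15]:
    lo, hi = vandermonde_inv_bounds([i / (n - 1) for i in range(n)])
    print(n, lo, hi)
```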

These results quantify the extreme ill conditioning. I should note, though, that appropriate algorithms that exploit structure can nevertheless obtain accurate solutions to Vandermonde problems, as described in Chapter 22 of Accuracy and Stability of Numerical Algorithms.


Ron DeVore speaking on Optimal Data Assimilation


Photo of Nick Higham’s talk by David Gleich.

Posted in conferences | Tagged | 1 Comment

Palomino Blackwing Pencil Tribute to Ada Lovelace

Despite the deep penetration of digital tools into our lives, a lot of mathematics is still written by hand in pencil, and so it is appropriate that the Palomino Blackwing Volumes 16.2 pencil is a tribute to Ada Lovelace, the 19th century mathematician who worked on Charles Babbage’s proposed Analytical Engine.

The Palomino Blackwing, from California Cedar Products Company, is a modern version of the Blackwing pencil produced up until 1998 by the Eberhard Faber Pencil Company. The Blackwing was a favorite of luminaries such as John Steinbeck and Leonard Bernstein, and was much missed until CalCedar acquired the brand and started production of its own version of the pencil in 2011. Blackwing Volumes are limited editions “celebrating the people, places and events that have defined our creative culture”.


The 16.2 in the volume name refers to the Analytical Engine’s storage capacity of 16.2 kB (enough to hold one thousand 40 decimal digit numbers). The matt white finish and matt black ferrule are “inspired by the simple styling of early personal computers”. The rear of the pencil contains a pattern that represents in binary the initials AAL that Lovelace used to sign her work.


Blackwing pencils are available with four different graphite hardnesses, of which the 16.2 is the second firmest, roughly equivalent to a B, and the same as for the regular Blackwing 602. The following test compares the 16.2 with the Blackwing (no number, and the softest), the Dixon Ticonderoga HB, and the Staedtler Noris HB. The paper is Clairefontaine and the shaded area shows a smear test where I rubbed my thumb over the shaded rectangle.


The pencils come in packs of 12 and are available at, for example, Bureau Direct (UK), pencils.com (USA), and JetPens (USA). If you’re in New York City, pop into Caroline Weaver’s wonderful CW Pencil Enterprise store.

One review has suggested that a harder graphite (as in certain other limited editions) would be better for writing mathematics. For me the 16.2 core is fine, but I also enjoy using the softer Blackwing cores. For a mathematician, as for any writer, having to pause to sharpen a pencil is not necessarily a bad thing, especially as the shavings give off a wonderful odor of the California incense cedar from which the barrels are made.

Posted in writing | Tagged | 3 Comments

Photo Highlights of 2017

Here are some of my favourite photos taken at events that I attended in 2017.

Atlanta (January)

This was the first time I have attended the Joint Mathematics Meetings, which were held in Atlanta, January 4-7, 2017. It was a huge conference with over 6000 attendees. A highlight for me was the launch of the third edition of MATLAB Guide on the SIAM booth, with the help of The MathWorks. Elizabeth Greenspan and Bruce Bailey looked after the SIAM stand. If you are interested in writing a book for SIAM, Elizabeth would love to hear from you!

The conference was held in the Marriott Marquis Hotel and the Hyatt Regency Hotel, both of which have impressive atriums. This photo was taken with a fish-eye lens, looking up into the Marriott Marquis Hotel’s atrium. (For more photos, see Fuji Fisheye Photography: XT-2 and Samyang 8mm.)

Atlanta (March)

I was back in Atlanta for the SIAM Conference on Computational Science and Engineering, February 27-March 3, 2017. A highlight was a 70th birthday dinner celebration for Iain Duff, pictured here speaking at the Parallel Numerical Linear Algebra for Extreme Scale Systems minisymposium. Here is Sarah Knepper of Intel speaking in the Batched Linear Algebra on Multi/Many-Core Architectures symposium (a report on which is given in the blog post by Sam Relton). Torrential rain one night forced me to take shelter on the way back from dinner, allowing a moment to capture this image of Peach Tree Street.

Washington (April)

The National Math Festival was held at the Walter E. Washington Convention Center in Washington DC on April 22, 2017. I caught the March for Science on the same day.

Pittsburgh (July)

The SIAM Annual Meeting, held July 10-14, 2017 at the David Lawrence Convention Center in Pittsburgh, was very busy for me as SIAM president. Here is conference co-chair Des Higham speaking in the minisymposium “Advances in Mathematics of Large-Scale and Higher-Order Networks”. Emily Shuckburgh gave the I.E. Block Community Lecture “From Flatland to Our Land: A Mathematician’s Journey through Our Changing Planet”. The Princeton Companion to Applied Mathematics was on display on the Princeton University Press stand. Here are Des and I on the Roberto Clemente bridge over the Allegheny River, the evening before the conference started.

Posted in conferences | Tagged , | Leave a comment

Numerical Linear Algebra Group 2017

The Manchester Numerical Linear Algebra Group (many of whom are in the October 2017 photo below) was involved in a variety of activities this year. This post summarizes what we got up to. Publications are not included here, but many of them can be found on MIMS EPrints under the category Numerical Analysis.



Together with Jack Dongarra’s team at the University of Tennessee the group is one of the two partners involved in the development of PLASMA: Parallel Linear Algebra Software for Multicore Architectures.

PhD students Weijian Zhang, Steven Elsworth and Jonathan Deakin released Etymo—a search engine for machine learning research and development.

We continue to make our research codes available, which is increasingly done on GitHub; see the repositories of Fasi, Higham, Relton, Sego, Tisseur, Zhang. We also put MATLAB software on MATLAB Central File Exchange and on our own web sites, e.g., the Rational Krylov Toolbox (RKToolbox).

PhD Students

New PhD students Gian Maria Negri Porzio and Thomas McSweeney joined the group in September 2017.

Postdoctoral Research Associates (PDRAs)

Sam Relton, who was working on the Parallel Numerical Linear Algebra for Extreme Scale Systems (NLAFET) project, left in June 2017 to take up a position at the University of Leeds. Negin Bagherpour joined NLAFET in March 2017, leaving in November 2017.

Srikara Pranesh joined the project in November 2017. Pierre Blanchard joined us in October 2017 to work jointly on the ICONIC project (which started in June 2017) and NLAFET.

Presentations at Conferences and Workshops

UK and Republic of Ireland Section of SIAM Annual Meeting, University of Strathclyde, January 12, 2017: Fasi, Gwynne, Higham, Zemaityte, Zhang.

2017 Joint Mathematics Meetings, Atlanta, January 4-7, 2017: Higham.

Workshop on Matrix Equations and Tensor Techniques, Pisa, Italy, February 13-14, 2017: Fasi.

Due Giorni di Algebra Lineare Numerica, Como, Italy, February 16-17, 2017: Fasi.

International Conference on Domain Decomposition Methods DD24, Svalbard, Norway, February 6-10, 2017: Sistek.

Workshop on Batched, Reproducible, and Reduced Precision BLAS, Atlanta, February 23-25, 2017: Hammarling, Relton.

SIAM Conference on Computational Science and Engineering, Atlanta, February 27-March 3, 2017: Relton, Zounon. See the blog posts about the meeting by Nick Higham and Sam Relton.

High Performance Computing in Science and Engineering (HPCSE) 2017, Hotel Solan, Czech Republic, May 22-25, 2017: Sistek.

Advances in Data Science, Manchester, May 15-16, 2017: Zhang.

27th Biennial Conference on Numerical Analysis, Glasgow, June 27-30, 2017: Tisseur.

Householder Symposium XX on Numerical Linear Algebra, The Inn at Virginia Tech, June 18-23, 2017: Tisseur.

SIAM Annual Meeting, Pittsburgh, July 10-14, 2017: Zhang (see this SIAM News article about Weijian’s presentation). A Storify of the conference is available in PDF form.

ILAS 2017 Conference, Iowa State University, USA, July 24-28, 2017: Güttel.

24th IEEE Symposium on Computer Arithmetic, London, July 24-26, 2017: Higham (see this blog post by George Constantinides).

LMS-EPSRC Symposium on Model Order Reduction, Durham, UK, August 8-17, 2017: Güttel.

Euro-Par 2017, 23rd International European Conference on Parallel and Distributed Computing, August 28-September 1, 2017: Zounon.

INdAM Meeting Structured Matrices in Numerical Linear Algebra: Analysis, Algorithms and Applications, Cortona, Italy, September 4-8, 2017: Fasi, Tisseur.

2017 Woudschoten Conferences, Zeist, The Netherlands, October 4-6, 2017: Tisseur.

ICERM Workshop on Recent Advances in Seismic Modeling and Inversion, Brown University, USA, November 6-10, 2017: Güttel. A video recording of this talk is available.

Conference and Workshop Organization

Güttel co-organized the SIAM UKIE Annual Meeting 2017 at the University of Strathclyde, January 12, 2017, and the GAMM ANLA Workshop on High-Performance Computing at the University of Cologne, September 7-8, 2017.

The Manchester SIAM Student Chapter organized an Auto Trader Industry Problem Solving Event on February 22, 2017 and the 7th Manchester SIAM Student Chapter Conference on May 5, 2017.

The group organized three minisymposia at the SIAM Conference on Computational Science and Engineering, Atlanta, February 27-March 3, 2017.


Franco Zivcovic (Università degli Studi di Trento) visited the group from September 2017 to January 2018.

Knowledge Transfer

The Sabisu KTP project, which ended in December 2016, has been awarded the highest grade of “Outstanding” by the KTP Grading Panel. A new KTP project with Process Integration Ltd. is under way, led by Stefan Güttel.

The MSc project of Thomas McSweeney was sponsored by NAG and produced a code for modified Cholesky factorization that will appear in the NAG Library.

Recognition and Service

Stefan Güttel continued his terms as Secretary/Treasurer of the SIAM UKIE section and vice-chair of the GAMM Activity Group on Applied and Numerical Linear Algebra.

Nick Higham served the first year of his two-year term as President of SIAM.

Weijian Zhang was awarded a SIAM Student Travel Award to attend the SIAM Annual Meeting 2017 in Pittsburgh.

Massimiliano Fasi and Mante Zemaityte were selected to present posters at the SET for Britain 2017 competition, which took place at the House of Commons, London. Fasi’s poster was “Finding Communities in Large Signed Networks with the Weighted Geometric Mean of Laplacians” and Zemaityte’s was “A Shift-and-Invert Lanczos Algorithm for the Dynamic Analysis of Structures”.

Jakub Sistek served as treasurer of the eu-maths-in.cz Czech Network for Mathematics in Industry.

Tweets of the Year

Posted in research | Leave a comment

The Strange Case of the Determinant of a Matrix of 1s and -1s

By Nick Higham and Alan Edelman (MIT)

In a 2005 talk the second author noted that the MATLAB det function returns an odd integer for a certain 27-by-27 matrix composed of 1s and -1s:

>> A = edelman; % Set up the matrix.
>> format long g, format compact, det(A)
ans =
          839466457497601
However, the determinant is, from its definition, a sum of an even number (27 factorial) of odd numbers, so is even. Indeed the correct determinant is 839466457497600.

At first sight, this example is rather troubling, since while MATLAB returns an integer, as expected, it is out by 1. The determinant is computed as the product of the diagonal entries of the U factor in the LU factorization with partial pivoting of A, and these entries are not all integers. Standard rounding error analysis shows that the relative error from forming that product is bounded by nu/(1-nu), with n=27, where u \approx 1.1 \times 10^{-16} is the unit roundoff, and this is comfortably larger than the actual relative error (which also includes the errors in computing U) of 6 \times 10^{-16}. Therefore the computed determinant is well within the bounds of roundoff, and if the exact result had not been an integer the incorrect last decimal digit would hardly merit discussion.

However, this matrix has more up its sleeve. Let us compute the determinant using a different implementation of Gaussian elimination with partial pivoting, namely the function gep from the Matrix Computation Toolbox:

>> [Lp,Up,Pp] = gep(A,'p'); det(Pp)*det(Up)
ans =
          839466457497600
Now we get the correct answer! To see what is happening, we can directly form the products of the diagonal elements of the U factors:

>> [L,U,P] = lu(A); 
>> d = diag(U); dp = diag(Up); 
>> rel_diff_U_diags = norm((dp - d)./d,inf)
rel_diff_U_diags =
>> [prod(d), prod(dp)]
ans =
          -839466457497601          -839466457497600
>> [prod(d(end:-1:1)), prod(dp(end:-1:1))]
ans =
          -839466457497600          -839466457497600

We see that even though the diagonals of the two U factors differ by a small multiple of the unit roundoff, the computed products differ in the last decimal digit. If the product of the diagonal elements of U is accumulated in the reverse order then the exact answer is obtained in both cases. Once again, while this behaviour might seem surprising, it is within the error bounds of a rounding error analysis.
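The phenomenon at work is that floating-point arithmetic is not associative, so the order of accumulation can change the last digit (or more) of a result. Here is a self-contained illustration in Python, using summation rather than the Edelman products; the effect is the same in miniature:

```python
import math

terms = [0.1] * 10

# Left-to-right accumulation picks up rounding error in the last place...
print(sum(terms))        # 0.9999999999999999

# ...whereas math.fsum, which computes the exactly rounded sum,
# recovers the exact value.
print(math.fsum(terms))  # 1.0

# Non-associativity with just three numbers: two evaluation orders,
# two different doubles.
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
```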

The moral of this example is that we should not be misled by the integer nature of a result; in floating-point arithmetic it is relative error that should be judged.

Finally, we note that numerical evaluation of the determinant offers other types of interesting behaviour. Consider the Frank matrix: a matrix of integers that has determinant 1. What goes wrong here in the step from dimension 24 to 25?

>> A = gallery('frank',24); det(A)
ans =
>> A = gallery('frank',25); det(A)
ans =

The Edelman matrix is available in the MATLAB function in this gist, which is embedded below. A Julia notebook exploring the Edelman matrix is available here.

Posted in research | Tagged , | 2 Comments

Fun Books for Learning Programming

I learned Fortran from the TV course and book by Jeff Rohl. Some years later I came across A FORTRAN Coloring Book by Roger Emanuel Kaufman (MIT Press, 1978). The text is entirely handwritten (even the copyright page), is illustrated with numerous cartoons, and is full of witty wordplay. Yet it imparts the basics of Fortran very well and I could have happily learned Fortran from it. It even describes some simple numerical methods, such as the bisection method. The book is one continuous text, with no chapters or sections, but it has a good index. I’ve long been a fan of the book, and Des Higham and I include three quotes from it in MATLAB Guide.



Kaufman’s book has attracted attention in cultural studies. In the article Bend Sinister: Monstrosity and Normative Effect in Computational Practice, Simon Yuill describes it as “the first published computing text to use cartoon and comic strip drawings as a pedagogic medium” and goes on to say “and it could be argued, is the archetype to the entire For Dummies series and all its numerous imitators”. I would add that the use of cartoons within magazine articles on computing was prevalent in the 1970s, notably in Creative Computing magazine, though I don’t recall anything comparable with Kaufman’s book.


A page from Illustrating C.

A book in a similar vein, and from the same era, is the handwritten Illustrating Basic by Donald Alcock (Cambridge University Press, 1977). It’s a bit like Kaufman without the jokes, and is organized into sections. This was the first in a series of such books, culminating in Illustrating C (1992). Like Kaufman’s book, Alcock’s books contain nontrivial examples and are a good way for anyone to learn a programming language.

Thinking Forth by Leo Brodie, about the Forth language, is typeset but contains lots of cartoons and hand-drawn figures. It was originally published in 1984 and is now freely available under a Creative Commons license.

A more recent book with a similarly fun treatment is Land of Lisp by Conrad Barski (No Starch Press, 2011). It teaches Common Lisp, coding various games along the way. It’s typeset but is heavily illustrated with cartoons and finishes with a comic strip.

Posted in software | Tagged , , , , | Leave a comment