Photo Highlights of 2017

Here are some of my favourite photos taken at events that I attended in 2017.

Atlanta (January)

This was the first time I had attended the Joint Mathematics Meetings, which were held in Atlanta, January 4-7, 2017. It was a huge conference with over 6000 attendees. A highlight for me was the launch of the third edition of MATLAB Guide at the SIAM booth, with the help of The MathWorks: 170106-1123-14-5528.jpg Elizabeth Greenspan and Bruce Bailey looked after the SIAM stand: 170105-2056-12_5438.jpg If you are interested in writing a book for SIAM, Elizabeth would love to hear from you!

The conference was held in the Marriott Marquis Hotel and the Hyatt Regency Hotel, both of which have impressive atriums. This photo was taken with a fish-eye lens, looking up into the Marriott Marquis Hotel’s atrium: 170104-2015-50-5388.jpg (For more photos, see Fuji Fisheye Photography: XT-2 and Samyang 8mm).

Atlanta (March)

I was back in Atlanta for the SIAM Conference on Computational Science and Engineering, February 27-March 3, 2017. A highlight was a 70th birthday dinner celebration for Iain Duff, pictured here speaking at the Parallel Numerical Linear Algebra for Extreme Scale Systems minisymposium: 170228-1020-54-5783.jpg Here is Sarah Knepper of Intel speaking in the Batched Linear Algebra on Multi/Many-Core Architectures minisymposium (a report on which is given in the blog post by Sam Relton): 170227-1712-57_5700.jpg Torrential rain one night forced me to take shelter on the way back from dinner, allowing a moment to capture this image of Peachtree Street: 170301-1945-22_6376.jpg

Washington (April)

The National Math Festival was held at the Walter E. Washington Convention Center in Washington DC on April 22, 2017: 170422-1537-57_6202.jpg I caught the March for Science on the same day: 170422-1917-01_6325.jpg 170422-1944-58_6455.jpg

Pittsburgh (July)

The SIAM Annual Meeting, held July 10-14, 2017 at the David Lawrence Convention Center in Pittsburgh, was very busy for me as SIAM president. Here is conference co-chair Des Higham speaking in the minisymposium “Advances in Mathematics of Large-Scale and Higher-Order Networks”: 170713-1033-16_7413.jpg Emily Shuckburgh gave the I.E. Block Community Lecture “From Flatland to Our Land: A Mathematician’s Journey through Our Changing Planet”: 170712-1821-50_7337.jpg The Princeton Companion to Applied Mathematics was on display on the Princeton University Press stand: 170710-1728-15_7289.jpg Here are Des and me on the Roberto Clemente Bridge over the Allegheny River, the evening before the conference started: 170708-2121-33_7255.jpg


Numerical Linear Algebra Group 2017

The Manchester Numerical Linear Algebra Group (many of whom are in the October 2017 photo below) was involved in a variety of activities this year. This post summarizes what we got up to. Publications are not included here, but many of them can be found on MIMS EPrints under the category Numerical Analysis.

171017-1041-49_7702.jpg

Software

Together with Jack Dongarra’s team at the University of Tennessee, the group is one of the two partners involved in the development of PLASMA: Parallel Linear Algebra Software for Multicore Architectures.

PhD students Weijian Zhang, Steven Elsworth and Jonathan Deakin released Etymo—a search engine for machine learning research and development.

We continue to make our research codes available, which is increasingly done on GitHub; see the repositories of Fasi, Higham, Relton, Sego, Tisseur, Zhang. We also put MATLAB software on MATLAB Central File Exchange and on our own web sites, e.g., the Rational Krylov Toolbox (RKToolbox).

PhD Students

New PhD students Gian Maria Negri Porzio and Thomas McSweeney joined the group in September 2017.

Postdoctoral Research Associates (PDRAs)

Sam Relton, who was working on the Parallel Numerical Linear Algebra for Extreme Scale Systems (NLAFET) project, left in June 2017 to take up a position at the University of Leeds. Negin Bagherpour joined NLAFET in March 2017, leaving in November 2017.

Srikara Pranesh joined the project in November 2017. Pierre Blanchard joined us in October 2017 to work jointly on the ICONIC project (which started in June 2017) and NLAFET.

Presentations at Conferences and Workshops

UK and Republic of Ireland Section of SIAM Annual Meeting, University of Strathclyde, January 12, 2017: Fasi, Gwynne, Higham, Zemaityte, Zhang.

2017 Joint Mathematics Meetings, Atlanta, January 4-7, 2017: Higham.

Workshop on Matrix Equations and Tensor Techniques, Pisa, Italy, February 13-14, 2017: Fasi.

Due Giorni di Algebra Lineare Numerica, Como, Italy, February 16-17, 2017: Fasi.

International Conference on Domain Decomposition Methods DD24, Svalbard, Norway, February 6-10, 2017: Sistek.

Workshop on Batched, Reproducible, and Reduced Precision BLAS, Atlanta, February 23-25, 2017: Hammarling, Relton.

SIAM Conference on Computational Science and Engineering, Atlanta, February 27-March 3, 2017: Relton, Zounon. See the blog posts about the meeting by Nick Higham and Sam Relton.

High Performance Computing in Science and Engineering (HPCSE) 2017, Hotel Solan, Czech Republic, May 22-25, 2017: Sistek.

Advances in Data Science, Manchester, May 15-16, 2017: Zhang.

27th Biennial Conference on Numerical Analysis, Glasgow, June 27-30, 2017: Tisseur.

Householder Symposium XX on Numerical Linear Algebra, The Inn at Virginia Tech, June 18-23, 2017: Tisseur.

SIAM Annual Meeting, Pittsburgh, July 10-14, 2017: Zhang (see this SIAM News article about Weijian’s presentation). A Storify of the conference is available in PDF form.

ILAS 2017 Conference, Iowa State University, USA, July 24-28, 2017: Güttel.

24th IEEE Symposium on Computer Arithmetic, London, July 24-26, 2017: Higham (see this blog post by George Constantinides).

LMS-EPSRC Symposium on Model Order Reduction, Durham, UK, August 8-17, 2017: Güttel.

Euro-Par 2017, 23rd International European Conference on Parallel and Distributed Computing, August 28-September 1, 2017: Zounon.

INdAM Meeting Structured Matrices in Numerical Linear Algebra: Analysis, Algorithms and Applications, Cortona, Italy, September 4-8, 2017: Fasi, Tisseur.

2017 Woudschoten Conferences, Zeist, The Netherlands, October 4-6, 2017: Tisseur.

ICERM Workshop on Recent Advances in Seismic Modeling and Inversion, Brown University, USA, November 6-10, 2017: Güttel. A video recording of this talk is available.

Conference and Workshop Organization

Güttel co-organized the SIAM UKIE Annual Meeting 2017 at the University of Strathclyde, January 12, 2017, and the GAMM ANLA Workshop on High-Performance Computing at the University of Cologne, September 7-8, 2017.

The Manchester SIAM Student Chapter organized an Auto Trader Industry Problem Solving Event on February 22, 2017 and the 7th Manchester SIAM Student Chapter Conference on May 5, 2017.

The group organized three minisymposia at the SIAM Conference on Computational Science and Engineering, Atlanta, February 27-March 3, 2017.

Visitors

Franco Zivcovic (Università degli Studi di Trento) visited the group from September 2017 to January 2018.

Knowledge Transfer

The Sabisu KTP project, which ended in December 2016, has been awarded the highest grade of “Outstanding” by the KTP Grading Panel. A new KTP project with Process Integration Ltd. is under way, led by Stefan Güttel.

The MSc project of Thomas McSweeney was sponsored by NAG and produced a code for modified Cholesky factorization that will appear in the NAG Library.

Recognition and Service

Stefan Güttel continued his terms as Secretary/Treasurer of the SIAM UKIE section and vice-chair of the GAMM Activity Group on Applied and Numerical Linear Algebra.

Nick Higham served the first year of his two-year term as President of SIAM.

Weijian Zhang was awarded a SIAM Student Travel Award to attend the SIAM Annual Meeting 2017 in Pittsburgh.

Massimiliano Fasi and Mante Zemaityte were selected to present posters at the SET for Britain 2017 competition, which took place at the House of Commons, London. Fasi’s poster was “Finding Communities in Large Signed Networks with the Weighted Geometric Mean of Laplacians” and Zemaityte’s was “A Shift-and-Invert Lanczos Algorithm for the Dynamic Analysis of Structures”.

Jakub Sistek served as treasurer of the eu-maths-in.cz Czech Network for Mathematics in Industry.

Tweets of the Year


The Strange Case of the Determinant of a Matrix of 1s and -1s

By Nick Higham and Alan Edelman (MIT)

In a 2005 talk the second author noted that the MATLAB det function returns an odd integer for a certain 27-by-27 matrix composed of 1s and -1s:

>> A = edelman; % Set up the matrix.
>> format long g, format compact, det(A)
ans =
           839466457497601

However, the determinant is, from its definition, a sum of an even number (27 factorial) of odd numbers, so is even. Indeed the correct determinant is 839466457497600.

At first sight, this example is rather troubling, since while MATLAB returns an integer, as expected, it is out by 1. The determinant is computed as the product of the diagonal entries of the U factor in the LU factorization with partial pivoting of A, and these entries are not all integers. Standard rounding error analysis shows that the relative error from forming that product is bounded by nu/(1-nu), with n=27, where u \approx 1.1 \times 10^{-16} is the unit roundoff, and this is comfortably larger than the actual relative error (which also includes the errors in computing U) of 6 \times 10^{-16}. Therefore the computed determinant is well within the bounds of roundoff, and if the exact result had not been an integer the incorrect last decimal digit would hardly merit discussion.
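
As a rough check of the sizes involved, we can evaluate the bound in MATLAB (taking u = eps/2, the unit roundoff for IEEE double precision); it comes out comfortably above the observed error:

>> n = 27; u = eps/2;
>> n*u/(1 - n*u)   % the bound: approximately 3.0e-15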

However, this matrix has more up its sleeve. Let us compute the determinant using a different implementation of Gaussian elimination with partial pivoting, namely the function gep from the Matrix Computation Toolbox:

>> [Lp,Up,Pp] = gep(A,'p'); det(Pp)*det(Up)
ans =
           839466457497600

Now we get the correct answer! To see what is happening, we can directly form the products of the diagonal elements of the U factors:

>> [L,U,P] = lu(A); 
>> d = diag(U); dp = diag(Up); 
>> rel_diff_U_diags = norm((dp - d)./d,inf)
rel_diff_U_diags =
      7.37206353875273e-16
>> [prod(d), prod(dp)]
ans =
          -839466457497601          -839466457497600
>> [prod(d(end:-1:1)), prod(dp(end:-1:1))]
ans =
          -839466457497600          -839466457497600

We see that even though the diagonals of the two U factors differ by a small multiple of the unit roundoff, the computed products differ in the last decimal digit. If the product of the diagonal elements of U is accumulated in the reverse order then the exact answer is obtained in both cases. Once again, while this behaviour might seem surprising, it is within the error bounds of a rounding error analysis.

The moral of this example is that we should not be misled by the integer nature of a result; in floating-point arithmetic it is relative error that should be judged.

Finally, we note that numerical evaluation of the determinant offers other types of interesting behaviour. Consider the Frank matrix: a matrix of integers that has determinant 1. What goes wrong here in the step from dimension 24 to 25?

>> A = gallery('frank',24); det(A)
ans =
         0.999999999999996
>> A = gallery('frank',25); det(A)
ans =
          143507521.082525

The Edelman matrix can be generated by the MATLAB function available in this gist. A Julia notebook exploring the Edelman matrix is available here.


Fun Books for Learning Programming

I learned Fortran from the TV course and book by Jeff Rohl. Some years later I came across A FORTRAN Coloring Book by Roger Emanuel Kaufman (MIT Press, 1978). The text is entirely handwritten (even the copyright page), is illustrated with numerous cartoons, and is full of witty wordplay. Yet it imparts the basics of Fortran very well and I could have happily learned Fortran from it. It even describes some simple numerical methods, such as the bisection method. The book is one continuous text, with no chapters or sections, but it has a good index. I’ve long been a fan of the book, and Des Higham and I include three quotes from it in MATLAB Guide.

kauf78_cover.jpg

kauf78_p75.jpg

Kaufman’s book has attracted attention in cultural studies. In the article Bend Sinister: Monstrosity and Normative Effect in Computational Practice, Simon Yuill describes it as “the first published computing text to use cartoon and comic strip drawings as a pedagogic medium” and goes on to say “and it could be argued, is the archetype to the entire For Dummies series and all its numerous imitators”. I would add that the use of cartoons within magazine articles on computing was prevalent in the 1970s, notably in Creative Computing magazine, though I don’t recall anything comparable with Kaufman’s book.

alco92_p139.jpg

A page from Illustrating C.

A book in a similar vein, and from the same era, is the handwritten Illustrating Basic by Donald Alcock (Cambridge University Press, 1977). It’s a bit like Kaufman without the jokes, and is organized into sections. This was the first in a series of such books, culminating in Illustrating C (1992). Like Kaufman’s book, Alcock’s books contain nontrivial examples and are a good way for anyone to learn a programming language.

Thinking Forth by Leo Brodie, about the Forth language, is typeset but contains lots of cartoons and hand-drawn figures. It was originally published in 1984 and is now freely available under a Creative Commons license.

A more recent book with a similarly fun treatment is Land of Lisp by Conrad Barski (No Starch Press, 2011). It teaches Common Lisp, coding various games along the way. It’s typeset but is heavily illustrated with cartoons and finishes with a comic strip.


Org Mode Syntax Cheat Sheet

org_mode_logo.jpg

I’m a keen user of Emacs and Org mode for a variety of tasks, including

  • note taking,
  • generating documents for exporting to LaTeX, Word, or html,
  • creating blog posts (notably for this blog, using Org2blog).

Although Org mode is usually associated with Emacs, it is a markup language in its own right, and one that is far more powerful and more standardized than the Markdown language.

I recently came across the excellent blog post Org-Mode Is One of the Most Reasonable Markup Language to Use for Text by Org enthusiast Karl Voit. In the post he includes a simple example displaying some of the most important aspects of Org syntax. I was struck by how much information can be conveyed in a short piece of Org code. I have adapted Karl’s example into this longer version:

#+TITLE: Org Mode Syntax Cheat Sheet
#+OPTIONS: toc:nil 
# Adapted from http://karl-voit.at/2017/09/23/orgmode-as-markup-only/

* Top Level Heading
** Second Level Heading
*** Third Level Heading

# A comment line.  This line will not be exported.

Paragraphs are separated by at least one empty line.

*bold* /italic/ _underlined_ +strikethrough+ =monospaced=

[[https://nickhigham.wordpress.com/][Link description]]

https://nickhigham.wordpress.com/ A link without a description.

A DOI (digital object identifier) link: 
[[doi:10.1093/comnet/cnv016][Matching Exponential-Based and Resolvent-Based Centrality Measures]]

A horizontal line, fill-width across the page:
-----

- First item in a list.
- Second item.
  - Sub-item
    1. Numbered item.
    2. Another item.
- [ ] Item yet to be done.
- [X] Item that has been done.  

LaTeX macros can be included: $x_2 = \alpha + \beta^2 - \gamma$.

**** TODO A todo item.
**** DONE A todo item that has been done.

#+BEGIN_QUOTE
This text will be indented on both the left margin and the right margin.
#+END_QUOTE

: Text to be displayed verbatim (as-is), without markup 
: (*bold* does not change font), e.g., for source code. 
: Line breaks are respected. 

Some MATLAB source code:
#+BEGIN_SRC matlab
>> rand(1,3)
ans =
   5.5856e-01   7.5663e-01   9.9548e-01
#+END_SRC

Some arbitrary text to be typeset verbatim in monospace font:
#+BEGIN_SRC text
Apples, oranges,
cucumbers, tomatoes
#+END_SRC

# Table and spreadsheet.  The column headed "Ratio" is automatically
# calculated by hitting C-c C-c in Emacs on the #+TBLFM line.

|----------------+-----------+-----------+-------|
| Country        | Abstracts | Downloads | Ratio |
|----------------+-----------+-----------+-------|
| United States  |         7 |       497 |  71.0 |
| Unknown        |         4 |        83 |  20.8 |
| United Kingdom |         3 |        41 |  13.7 |
| Germany        |         3 |        29 |   9.7 |
| Netherlands    |         2 |        21 |  10.5 |
| Japan          |         1 |        18 |  18.0 |
|----------------+-----------+-----------+-------|
#+TBLFM: $4=$3/$2;%.1f

Include an image:
file:nickhighamwordpress.jpg

I have put the source on GitHub along with the results of exporting the file to txt, LaTeX, PDF (direct link), and html. I include conversions done two ways:

  • With Emacs: the recommended way.
  • With Pandoc. This is useful if you do not use Emacs or want an easy way to automate the conversions. However, Pandoc does not support all Org syntax and has different defaults, so the conversions are not identical.

For more about Org see my previous writings and videos such as Using Emacs 2 – org and Getting Started With Org Mode.


What’s New in MATLAB R2017b?

Following my earlier posts What’s New in MATLAB R2016b? and What’s New in MATLAB R2017a? I take a look here at the R2017b release of MATLAB. As before, this is not a comprehensive treatment (for which see the Release Notes), but rather a brief description of the changes in MATLAB (not the toolboxes) that are the most notable from my point of view.

Decomposition Object

MATLAB now has a way of avoiding unnecessarily repeating the factorization of a matrix. In the past, if we wanted to solve two linear systems Ax = b and Ay = c involving the same square, general nonsingular matrix A, writing

x = A\b;
y = A\c;

was wasteful because A would be LU factorized twice. The way to avoid this unnecessary work was to factorize A explicitly then re-use the factors:

[L,U] = lu(A);
x = U\(L\b);
y = U\(L\c);

This solution lacks elegance. It is also less than ideal from the point of view of code maintenance and re-use. If we want to adapt the same code to handle symmetric positive definite A we have to change all three lines, even though the mathematical concept is the same:

R = chol(A);
x = R\(R'\b);
y = R\(R'\c);

The new decomposition function creates an object that contains the factorization of interest and allows it to be re-used. We can now write

dA = decomposition(A);
x = dA\b;
y = dA\c;

MATLAB automatically chooses the factorization based on the properties of A (as the backslash operator has always done). So for a general square matrix it LU factorizes A, while for a symmetric positive definite matrix it takes the Cholesky factorization. The decomposition object knows to use the factors within it when it encounters a backslash. So this example is functionally equivalent to the first two.

The type of decomposition can be specified as a second input argument, for example:

dA = decomposition(A,'lu');
dA = decomposition(A,'chol');
dA = decomposition(A,'ldl');
dA = decomposition(A,'qr');

These usages make the intention explicit and save a little computation, as MATLAB does not have to determine the matrix type. Currently, 'svd' is not supported as a second input argument.

This is a very welcome addition to the matrix factorization capabilities in MATLAB. Tim Davis proposed this idea in his 2013 article Algorithm 930: FACTORIZE: An Object-Oriented Linear System Solver for MATLAB. In Tim’s approach, all relevant functions such as orth, rank, condest are overloaded to use a decomposition object of the appropriate type. MATLAB currently uses the decomposition object only in backslash, forward slash, negation, and conjugate transpose.
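
As a small illustration of those operations (my own sketch, not code from the documentation), the stored factorization is reused by backslash, forward slash, and the conjugate transpose:

A = randn(5) + 5*eye(5);   % a well-conditioned test matrix
b = randn(5,1); c = randn(5,1);
dA = decomposition(A);
x = dA\b;      % backslash: solves A*x = b using the stored factorization
y = c'/dA;     % forward slash: solves y*A = c'
z = dA'\b;     % conjugate transpose: solves A'*z = b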

The notation dA used in the MathWorks documentation for the factorization object associated with A grates a little with me, since dA (or \Delta A) is standard notation for a perturbation of the matrix A and of course the derivative of a function. In my usage I will probably write something more descriptive, such as decompA or factorsA.

String Conversions

Functions written for versions of MATLAB prior to 2016b might assume that certain inputs are character vectors (the old way of storing strings) and might not work with the new string arrays introduced in MATLAB R2016b. The function convertStringsToChars can be used at the start of a function to convert strings to character vectors or cell arrays of character vectors:

function y = myfun(x,y,z)
[x,y,z] = convertStringsToChars(x,y,z);

The pre-2016b function should then work as expected.

More functions now accept string arguments. For example, we can now write (with double quotes as opposed to the older single quote syntax for character vectors)

>> A = gallery("moler",3)
A =
     1    -1    -1
    -1     2     0
    -1     0     3

Tall Arrays

There is more support in R2017b for tall arrays (arrays that are too large to fit in memory). They can be indexed, with sorted indices in the first dimension; functions including median, plot, and polyfit now accept tall array arguments; and the random number generator used in tall array calculations can be independently controlled.
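
Here is a small sketch of a typical tall-array workflow (my own example, using the airlinesmall.csv file that ships with MATLAB; it is not taken from the release notes):

ds = datastore('airlinesmall.csv','TreatAsMissing','NA');
t = tall(ds);                % a tall table backed by the datastore
d = t.ArrDelay;
m = median(d(~isnan(d)));    % median now accepts tall arrays (evaluation is deferred)
m = gather(m)                % evaluate and bring the result into memory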

Word Clouds

The new wordcloud function produces word clouds. The following code runs the function on a file of words from The Princeton Companion to Applied Mathematics.

words = fileread('pcam_words.txt');
words = string(words);
words = splitlines(words);
words(strlength(words)<5) = [];
words = categorical(words);
figure
wordcloud(words)

I had prepared this file in order to generate a word cloud using the Wordle website. Here is the MATLAB version followed by the Wordle version: wordcloud_pcam.jpg wordle-edited2.jpg Wordle removes certain stop words (common but unimportant words) that wordcloud leaves in. Also, whereas Wordle produces a different layout each time it is called, different layouts can be obtained from wordcloud with the syntax wordcloud(words,'LayoutNum',n) with n a nonnegative integer.

Minima and Maxima of Discrete Data

New functions islocalmin and islocalmax find local minima and maxima within discrete data. Obviously targeted at data manipulation applications, such as analysis of time series, these functions offer a number of options to define what is meant by a minimum or maximum.

The code

A = randn(10); plot(A,'LineWidth',1.5); hold on
plot(islocalmax(A).*A,'r.','MarkerSize',25); 
hold off, axis tight

finds maxima down the columns of a random matrix and produces this plot: islocalmax_plot.jpg

Whereas min and max find the smallest and largest elements of an array, the new functions mink(A,k) and maxk(A,k) find the k smallest and largest elements of A, columnwise if A is a matrix.
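
For example (my own illustration, not from the release notes):

A = magic(4);
smallest2 = mink(A,2)   % two smallest elements of each column, in ascending order
largest2  = maxk(A,2)   % two largest elements of each column, in descending order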


Foundations of Applied Mathematics Book Series

171014-1019-53_8051.jpg

Foundations of Applied Mathematics, Volume 1: Mathematical Analysis was published by SIAM this summer. Written by Jeffrey Humpherys, Tyler J. Jarvis, and Emily J. Evans, all from Brigham Young University, this is the first of a four-book series, aimed at the upper undergraduate or first year graduate level. It lays the analysis and linear algebra foundations that the authors feel are needed for modern applied mathematics.

The next book, Volume 2: Algorithms, Approximation, and Optimization, is scheduled for publication in 2018.

At 689 pages, hardback, beautifully typeset on high quality paper, and with four colours used for the diagrams and boxed text, this is a physically impressive book. Unusually for a SIAM book, it comes with a dust jacket. I got a surprise when I took it off: underneath is a front cover very different to the dust jacket, containing a huge number 1. I like it and am leaving my book cover-free.

I always look to see what style decisions authors have made, as I ponder them a lot when writing my own books. One was immediately obvious: the preface uses five Oxford commas in the first six sentences!

I have only had time to dip into a few parts of the book, so this is not a review. But here are some interesting features.

  • Chapter 15, Rings and Polynomials, covers a lot of ground under this general framework, including the Euclidean algorithm, the Lagrange and Newton forms of the polynomial interpolant, the Chinese remainder theorem (of which several applications are mentioned), and partial fractions. This chapter has attracted a lot of interest on Reddit.
  • The authors say in the preface that one of their biggest innovations is treating spectral theory using the Dunford-Schwartz approach via resolvents.
  • There is a wealth of support material available here. More material is available here.

It’s great to see such an ambitious project, especially with SIAM as the publisher. If you are teaching applied mathematics with a computational flavour you should check it out.


Christopher T. H. Baker (1939–2017)

By Nick Higham and Neville Ford (University of Chester)

970623_1_cthb_cropped.jpg

Christopher Baker at the eighth Leslie Fox Prize Meeting, Dundee University, June 23, 1997.

Christopher Thomas Hale Baker died on August 20, 2017 at the age of 78. He was born on the Isle of Thanet, Kent, in 1939, and was educated at Colchester Royal Grammar School and Jesus College Oxford, where he held an Edwin Jones Scholarship and a State Scholarship.

His first full-time employment was between school and college, when he worked in the Physics Research Laboratory of BX Plastics. He obtained his BA in 1961 and his MA and DPhil in 1964, all from the University of Oxford. Between 1964 and 1966 he held a Fulbright Award and was Instructor and PG Research Mathematician at UC Berkeley. From 1966 Christopher was lecturer, senior lecturer and then reader at the University of Manchester, becoming professor in 1989. He had periods of leave at the University of Toronto (in 1972 and 1976) and Oxford University.

Christopher served as head of the numerical analysis group for around ten years and served as Head of Department for three years from September 1995. Following his retirement in 2004, Christopher joined the University of Chester as a part-time member of the department, retiring from that role in 2016. At the time of his death he held the title of Emeritus Professor at both the University of Manchester and the University of Chester.

Christopher was founding Director of the Manchester Centre for Computational Mathematics (MCCM), formed in 1992 by the numerical analysis groups at the University of Manchester and UMIST to build on existing collaborations. In his ten years as Director, the centre grew substantially in activity, as seen particularly in the Numerical Analysis Report series, and the M.Sc. in Numerical Analysis and Computing. Christopher was instrumental in involving external researchers in MCCM, notably the Chester numerical analysts.

His research interests included numerical solution of integral equations and functional differential equations (integro-differential and delay-differential equations), and parameter estimation in models. He is perhaps best-known for his monumental 1034-page monograph Numerical Treatment of Integral Equations (Clarendon Press, Oxford, 1977). He was able to expand some of the tools and techniques developed for integral equations into newly emerging fields of numerical dynamics and numerical methods for stochastic differential equations.

Christopher organized two Durham Symposia. The first, “Numerical Treatment of Integral Equations” (1982), was attended by 67 mathematicians from around the world. The second, “Evolutionary Problems: Continuous and Discretized Nonlinear Systems” (July 4-14, 1992), organized jointly with Ruth Thomas, had 92 attendees.

In his introduction to the 2000 annual report of MCCM, Christopher offered some thoughts on the nature of Numerical Analysis.

“To some, the emphasis should be on computational mathematics, to others the emphasis should be on a unifying perspective from the viewpoint of applied analysis. To the writer, numerical analysis is ideally a broad church and like other sections of applied mathematics should be informed by modelling considerations, investigations based on simulation or analysis, and the practicalities of modern computing. As an integrated part of applied mathematics, the skills developed in numerical analysis complement and are complemented by perspectives obtained from other areas; numerical analysis should be supported by insights from modelling, and from the more abstract areas of mathematics, and computer science.”

Those words strike us as just as valid today as when Christopher wrote them seventeen years ago.

Christopher was a member of the 1992 Mathematics Assessment Panel in the UFC Research Assessment Exercise and of the Applied Mathematics panel in the 1996 Research Assessment Exercise. He chaired the Applied Mathematics panel in the 2001 Research Assessment Exercise. Serving on three successive panels was a major service to the mathematics community. Some idea of this is given by Christopher’s comment in the 2002 MCCM annual report, “During most of 2001, every flat surface at home and in my office was covered with RAE paperwork”.

He was a Fellow of the Institute of Mathematics and its Applications and served as editor of the IMA Journal of Numerical Analysis from its foundation in 1981 to 1996. He was a dedicated editor, also giving long service to other journals including Journal of Computational and Applied Mathematics, Journal of Integral Equations and Applications, and Advances in Computational Mathematics.

Here is a list of his PhD students (complete as far as we know), with their last known affiliations.

  • Ian Gladwell (Southern Methodist University, Dallas)
  • Malcolm Keech (University of Bedfordshire)
  • Siamak Amini (University of Salford)
  • Athena Makroglou (University of Portsmouth)
  • Mishi Derakshan (Dassault Systems BIOVIA)
  • Neville Ford (University of Chester)
  • Christopher Paul (University of Manchester)
  • David Willé (Heidelberg, Germany)
  • Arsalang Tang (Concordia University, Montreal, Canada)
  • Fathalla A. Rihan (United Arab Emirates University)
  • Ali Filiz (Adnan Menderes University, Turkey)
  • Hongjiong Tian (Shanghai Normal University, China)
  • Yihong Song (Suzhou University, Jiangsu, China)
  • Ephraim O. Agyingi (Rochester Institute of Technology, NY)
  • Eugene Parmuzin (INMRAS, Moscow, Russia)

Christopher had heart bypass surgery in 1988 and the surgeon told him “We know these vein grafts last for 12 years”. Thankfully, that was a severe underestimate, and Christopher maintained all his usual activities right until the end.

Christopher will be remembered as a kind, generous, and sociable colleague as well as for his leadership in applied mathematics and numerical analysis in Manchester, Chester, across the UK, and beyond.

Christopher is survived by his wife Helen, his children Deborah and Mark, and four grandchildren.

Nick writes:

Christopher was a student at Oxford when Leslie Fox was Professor of Numerical Analysis and head of the Computing Laboratory. According to David Sayers, Fox used to refer to Christopher as “that pure mathematician”—presumably because of the rigorous mathematical approach that Christopher used in his research on the numerical treatment of integral equations. When I was a PhD student I remember hearing of a seminar where the speaker showed how to solve numerically an integral equation for which there was later shown to be no solution. Christopher would never fall into such a trap! He served for three years on the adjudicating committee for the Leslie Fox prize, chairing it in 1997. He dedicated a paper (“Parallel continuous Runge-Kutta methods and vanishing lag delay differential equations”, 1993) to the memory of Leslie Fox.

Christopher was a shrewd operator at faculty level. He secured many promotions in the department at a time when they were limited by university finances. He was responsible for mathematics being chosen as the location for a large PC cluster in the early days of the use of PCs for teaching. I found a 1993 email in which Christopher wrote, to colleagues in the department, many of whom were not au fait with computing:

“You may ask why it is thought that computers can assist teaching … Computers can be used to yield visual and numerical insight, if the right packages are used. They can also access databases (library catalogues, journal and book reviews, etc.) The international trends are quite clear. It is also quite clear that computers are a genuine aid to modern mathematical research and development; some of our graduates will become real mathematicians.”

Christopher was an enthusiastic early adopter of email, originally on the university CMS system. He was professor in charge of computing for many years in the 1990s: a major task in a time of rapid change.

Neville writes:

Christopher’s involvement with colleagues at the University of Chester stems from his collaboration with me that has lasted more than 30 years. My former pure mathematics tutor, Brian Steer (who had been a student with Christopher during his time at Jesus College), put me in touch with Christopher as somebody who could supervise me (my interests being in functional and classical analysis) and help me establish myself in numerical analysis. As Nick describes, Christopher was always shrewd. I recognise the way careful supervision encouraged students to play to their strengths and to answer research questions which other people would find to be interesting. He frequently reminded us all that no question is worth answering unless somebody other than the author of the paper is asking it. I also remember being challenged repeatedly by his question ‘what do you mean by …’ (stability, for example), reflecting his determination to understand the underlying mathematics before venturing an opinion on a numerical scheme.

Christopher was responsible for inviting members of the Chester mathematics team to join with the Manchester Centre for Computational Mathematics, a co-operation which worked very effectively for our emerging research group, and on his retirement from Manchester we were pleased to welcome him as a member of our team, so collaborations between Christopher and the Chester group continued to develop. Even though some new members of our group had known him only for a short time before his death, they describe how much he continued to help by challenging their thinking, suggesting interesting fresh insights and contributing to the seminar programme.

Updated October 4, 2017 to correct the list of PhD students.


How Fast is Quadruple Precision Arithmetic?

170724-2024-49-7501.jpg

The 24th IEEE Symposium on Computer Arithmetic included several talks on multiprecision arithmetic.

When I am testing an algorithm running in double precision arithmetic I often want to compare the computed solution with a reference solution: a solution that is fully accurate to double precision. To obtain one I solve the same problem in quadruple precision and round the result back to double precision. If the conditioning of the problem leads me to doubt that this reference solution will be correct to double precision, I compute it at a precision even higher than quadruple. Whether it is feasible to compute a reference solution in this way depends on the size of the problem and the speed of quadruple precision arithmetic. This is just one scenario where quadruple precision arithmetic is needed; others are discussed in my earlier post The Rise of Mixed Precision Arithmetic.
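
Here is a sketch of that reference-solution approach, using the Advanpix mp type discussed below (my own illustration, with a moderately ill-conditioned test problem):

A = gallery('randsvd',100,1e8);     % random matrix with condition number 1e8
b = randn(100,1);
x = A\b;                            % double precision solution
x_ref = double(mp(A,34)\mp(b,34));  % quadruple precision solution, rounded to double
relerr = norm(x - x_ref)/norm(x_ref)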

Roughly speaking, a quadruple precision number x can be represented by a pair of double precision numbers (x_1,x_2), where x = x_1 + x_2 and x_1 and x_2 represent the leading and trailing significant digits of x, respectively. Then a product x y can be written (x_1 + x_2)(y_1 + y_2) = x_1y_1 + x_1y_2 + x_2y_1 + x_2y_2, which involves four double precision multiplications and three double precision additions. We can therefore expect quadruple precision multiplication to be at least seven times slower than double precision multiplication. Almost all available implementations of quadruple precision arithmetic are in software (an exception is provided by the IBM z13 mainframe system), so we should expect quadruple precision to be slower than the theoretical prediction would suggest. Bailey and Borwein (2015) state that quadruple precision implemented in software typically runs five to ten times slower than hardware double precision arithmetic.
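
Written out naively in MATLAB, that product is simply the following; a genuine double-double implementation would use error-free transformations to capture the rounding error of each operation, which this sketch ignores:

x1 = 1.1;  x2 = 3.1e-17;    % hypothetical leading and trailing parts of x
y1 = 2.7;  y2 = -4.5e-17;   % hypothetical leading and trailing parts of y
p = x1*y1 + x1*y2 + x2*y1 + x2*y2;   % four multiplications and three additions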

Various aspects need to be taken into consideration when we compare double precision and quadruple precision arithmetics. First, the relative speed of the arithmetics may depend on the type of operations being performed (scalar versus vectorizable operations), memory traffic considerations, and to what extent the implementations exploit multicore processors. Second, are we to compare the same code running in different precisions or computational kernels coded specially for each precision? I will take the second approach, as it is more relevant to my usage of quadruple precision. However, this does mean that I am comparing algorithms and their implementations as well as the underlying arithmetic.

I have done some very simple comparisons in MATLAB using the VPA arithmetic of the Symbolic Math Toolbox and the MP arithmetic of the third party Multiprecision Computing Toolbox from Advanpix.

The Multiprecision Computing Toolbox supports IEEE-compliant quadruple precision arithmetic, which is invoked by using the function mp.Digits to set the precision to 34 decimal digits (in fact, this is the default). For the VPA arithmetic in the Symbolic Math Toolbox 34 decimal digit precision is specified with the command digits(34) (the default is 32 digits). According to the documentation, VPA arithmetic uses a few guard digits, that is, it computes with a few extra digits of precision. The number of guard digits cannot be set by the user. In the Multiprecision Computing Toolbox the number of guard digits can be set with the function mp.GuardDigits, and the default is to use no guard digits.
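
In code, the two settings look like this (a minimal illustration; subsequent conversions pick up the current precision):

mp.Digits(34);    % Advanpix toolbox: 34 decimal digits (the default)
digits(34)        % Symbolic Math Toolbox VPA: 34 decimal digits
A = randn(4);
A_mp  = mp(A);    % convert a double matrix to mp
A_vpa = vpa(A);   % convert a double matrix to vpa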

These experiments were performed on an Intel Broadwell-E Core i7-6800K CPU @ 3.40GHz, which has 6 cores. The software details are:

MATLAB Version: 9.2.0.538062 (R2017a)
Operating System: Microsoft Windows 10 Home Version 10.0
                  (Build 15063)
Advanpix Multiprecision Computing Toolbox   Version 4.4.1.12580   
Symbolic Math Toolbox                       Version 7.2  (R2017a)

Further details on the Multiprecision Computing Toolbox can be seen by typing mp.Info, which lists details of open source libraries that are used.

In these experiments I do not check that the computed results are correct. Such checks are done in more extensive tests available in the test scripts at https://www.advanpix.com/2015/02/12/version-3-8-0-release-notes/. I note that the timings vary from run to run, but it is the orders of magnitude of the ratios that are of interest, and these are correctly reflected by the reported results.

Test 1: LU Factorization

This code compares the speed of the arithmetics for LU factorization of a 250-by-250 matrix.

%QUAD_PREC_TEST1  LU factorization.
rng(1), n = 250;
A = randn(n); A_mp = mp(A,34); A_vpa = vpa(A,34);
t = clock; [L,U] = lu(A);     t_dp = etime(clock, t)
t = clock; [L,U] = lu(A_mp);  t_mp = etime(clock, t)
t = clock; [L,U] = lu(A_vpa); t_vpa = etime(clock, t)
fprintf('mp/double: %8.2e, vpa/double: %8.2e, vpa/mp: %8.2e\n',....
         t_mp/t_dp, t_vpa/t_dp, t_vpa/t_mp)

The output is

t_dp =
   1.0000e-03
t_mp =
   9.8000e-02
t_vpa =
   2.4982e+01
mp/double: 9.80e+01, vpa/double: 2.50e+04, vpa/mp: 2.55e+02

Here, the MP quadruple precision is 98 times slower than double precision, but VPA is 25,000 times slower than double precision.

Test 2: Complete Eigensystem of Nonsymmetric Matrix

%QUAD_PREC_TEST2  Complete eigensystem of nonsymmetric matrix.
rng(1), n = 125;
A = randn(n); A_mp = mp(A,34); A_vpa = vpa(A,34);
t = clock; [V,D] = eig(A);     t_dp = etime(clock, t)
t = clock; [V,D] = eig(A_mp);  t_mp = etime(clock, t)
t = clock; [V,D] = eig(A_vpa); t_vpa = etime(clock, t)
fprintf('mp/double: %8.2e, vpa/double: %8.2e, vpa/mp: %8.2e\n',....
         t_mp/t_dp, t_vpa/t_dp, t_vpa/t_mp)

The output is

t_dp =
   1.8000e-02
t_mp =
   1.3430e+00
t_vpa =
   1.0839e+02
mp/double: 7.46e+01, vpa/double: 6.02e+03, vpa/mp: 8.07e+01

Here, MP is 75 times slower than double, and VPA is closer to the speed of MP.

Test 3: Complete Eigensystem of Symmetric Matrix

%QUAD_PREC_TEST3  Complete eigensystem of symmetric matrix.
rng(1), n = 200;
A = randn(n); A = A + A'; A_mp = mp(A,34); A_vpa = vpa(A,34);
t = clock; [V,D] = eig(A);     t_dp = etime(clock, t)
t = clock; [V,D] = eig(A_mp);  t_mp = etime(clock, t)
t = clock; [V,D] = eig(A_vpa); t_vpa = etime(clock, t)
fprintf('mp/double: %8.2e, vpa/double: %8.2e, vpa/mp: %8.2e\n',....
         t_mp/t_dp, t_vpa/t_dp, t_vpa/t_mp)

The output is

t_dp =
   1.1000e-02
t_mp =
   3.5800e-01
t_vpa =
   1.2246e+02
mp/double: 3.25e+01, vpa/double: 1.11e+04, vpa/mp: 3.42e+02

Note that there are at least three different algorithms that could be used here (the QR algorithm, divide and conquer, and multiple relatively robust representations), so the three eig invocations may be using different algorithms.

Test 4: Componentwise Exponentiation

%QUAD_PREC_TEST4  Componentwise exponentiation.
rng(1), n = 1000;
A = randn(n); A_mp = mp(A,34); A_vpa = vpa(A,34);
t = clock; X = exp(A);     t_dp = etime(clock, t)
t = clock; X = exp(A_mp);  t_mp = etime(clock, t)
t = clock; X = exp(A_vpa); t_vpa = etime(clock, t)
fprintf('mp/double: %8.2e, vpa/double: %8.2e, vpa/mp: %8.2e\n',....
         t_mp/t_dp, t_vpa/t_dp, t_vpa/t_mp)

The output is

t_dp =
   7.0000e-03
t_mp =
   8.5000e-02
t_vpa =
   3.4852e+01
mp/double: 1.21e+01, vpa/double: 4.98e+03, vpa/mp: 4.10e+02

Both MP and VPA come closer to double precision on this problem of computing the scalar exponential at each matrix element.

Summary

Not too much emphasis should be put on the precise timings, which vary with the value of the dimension n. The main conclusions are that 34-digit MP arithmetic is 1 to 2 orders of magnitude slower than double precision and 34-digit VPA arithmetic is 3 to 4 orders of magnitude slower than double precision.

It is worth noting that the documentation for the Multiprecision Computing Toolbox states that the toolbox is optimized for quadruple precision.

It is also interesting to note that the authors of the GNU MPFR library (for multiple-precision floating-point computations with correct rounding) are working on optimizing the library for double precision and quadruple precision computations, as explained in a talk given by Paul Zimmermann at the 24th IEEE Symposium on Computer Arithmetic in July 2017; see the slides and the corresponding paper.

[Updated September 6, 2017 to clarify mp.Info.]


How and How Not to Compute a Relative Error

The relative error in a scalar y as an approximation to a scalar x is the absolute value of e = (x-y)/x. I recently came across a program in which e had been computed as 1 - y/x. It had never occurred to me to compute it this way. The second version is slightly easier to type, requiring no parentheses, and it has the same cost of evaluation: one division and one subtraction. Is there any reason not to use this parenthesis-free expression?

Consider the accuracy of the evaluation, using the standard model of floating point arithmetic, which says that fl(x \mathbin{\mathrm{op}} y) = (x \mathbin{\mathrm{op}} y)(1+\delta) with |\delta| \le u, where \mathrm{op} is any one of the four elementary arithmetic operations and u is the unit roundoff. For the expression e_1 = (x-y)/x we obtain, with a hat denoting a computed quantity,

\widehat{e_1} = \displaystyle\left(\frac{x-y}{x}\right) (1+\delta_1)(1+\delta_2),  \qquad |\delta_i| \le u, \quad i = 1, 2.

It follows that

\left| \displaystyle\frac{e - \widehat{e_1}}{e} \right| \le 2u + u^2.

Hence e_1 is computed very accurately.

For the alternative expression, e_2 = 1 - y/x, we have

\widehat{e_2} = \left(1 - \displaystyle\frac{y}{x}(1+\delta_1)\right) (1+\delta_2),  \qquad |\delta_i| \le u, \quad i = 1, 2.

After a little manipulation, using y/x = 1 - e, we obtain the bound

\left| \displaystyle\frac{e - \widehat{e_2}}{e} \right| \le          u + \left|\displaystyle\frac{1-e}{e}\right|(u + u^2).

The bound on the relative error in \widehat{e_2} is of order |(1-e)/e|u, and hence is very large when |e| \ll 1.

To check these bounds we carried out a MATLAB experiment. For 500 single precision floating point numbers y centered on x = 3, we evaluated the relative error of y as an approximation to x using the two formulas. The results are shown in this figure, where an ideal error is of order u \approx 6\times 10^{-8}. (The MATLAB script that generates the figure is available as this gist.)

rel-err-formula.jpg

As expected from the error bounds, the formula 1-y/x is very inaccurate when y is close to x, whereas (x-y)/x retains its accuracy as y approaches x.
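
For anyone who wants to reproduce the effect without the gist, here is a minimal sketch (my own code, assuming single precision and values of y just above x = 3):

x = single(3);
d = single(logspace(-7,-4,500));                % relative offsets of y from x
y = x.*(1 + d);
e_exact = (double(x) - double(y))./double(x);   % reference errors, computed in double
e1 = (x - y)./x;                                % recommended formula
e2 = 1 - y./x;                                  % parenthesis-free alternative
max(abs((e1 - e_exact)./e_exact))               % of order eps('single')
max(abs((e2 - e_exact)./e_exact))               % typically orders of magnitude larger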

Does this inaccuracy matter? Usually, we are concerned only with the order of magnitude of an error and do not require an approximation with many correct significant figures. However, as the figure shows, for the formula |1-y/x| even the order of magnitude is incorrect for y very close to x. The standard formula |x-y|/|x| should be preferred.
