For some time I have been collecting digital object identifiers (DOIs) in my BibTeX entries, as described in this blog post. When I use my own BST file to format the references, BibTeX creates hyperlinks to the published source via the DOI. If I use the SIAM BST file, the DOI is instead displayed as part of the reference.

The Crossref organization (which provides DOIs) has recently issued revised guidelines on the display of DOIs. Up until now, DOIs have typically been displayed as, for example, 10.1137/16M1057577, and linked to as http://dx.doi.org/10.1137/16M1057577. The new guidelines say that the link should be https://doi.org/10.1137/16M1057577 and that the DOI should always be displayed in this form, as a full URL. Note that the “dx” part of the URL has gone, and “https” has replaced “http”.

The main reason for the change is that the pure DOI on its own is not much use, as it can’t be clicked on or pasted into a browser address bar without first adding the https://doi.org/ prefix. Additionally, https provides more secure browsing than http, and Google gives a small ranking boost to sites that use https.
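For completeness, here is a sketch of how this works in practice: the BibTeX entry stores only the bare DOI in a `doi` field, and it is the BST file that prefixes it with https://doi.org/ when formatting the reference. The entry below is purely illustrative (the bibliographic details and DOI are made up):

```bibtex
@article{smith17,
  author  = {J. Smith},
  title   = {An Example Paper},
  journal = {SIAM J. Matrix Anal. Appl.},
  year    = {2017},
  doi     = {10.1234/example-doi}
}
```

With the revised guidelines, the BST file should emit the full URL https://doi.org/10.1234/example-doi rather than displaying the bare DOI.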

Crossref states that the old http://dx.doi.org/10.1137/16M1057577 form of URL will continue to work forever.

I have updated my BST file `myplain2-doi.bst` in this GitHub repository, which contains a BibTeX bibliography for all my outputs, so that it produces links in the required form.

SIAM has updated the BST file in its macro packages to implement the new guidelines.


A search of my hard disk reveals that I have always used the hyphen, probably because I don’t like consecutive w’s. Indeed, in 1999 I published a paper *Row-Wise Backward Stable Elimination Methods for the Equality Constrained Least Squares Problem*.

A bit more searching found recent SIAM papers containing “rowwise”, so it is clearly acceptable usage to omit the hyphen.

My dictionaries and usage guides don’t provide any guidance as far as I can tell. Here is what some more online searching revealed.

- The Oxford English Dictionary does not contain either form (in the entry for “row” or elsewhere), but the entry for “column” contains “column-wise” but not “columnwise”.
- The Google Ngram Viewer shows a clear prevalence of the hyphenated form, which was about three times as common as the unhyphenated form in the year 2000.
- A search for “row-wise” and “rowwise” at google.co.uk finds about 724,000 and 248,000 hits, respectively.
- A Google Scholar search for “row-wise” and “rowwise” finds 31,600 and 18,900 results, respectively. For each spelling, there are plenty of papers with that form in the title. The top hit for “rowwise” is a 1993 paper
*The rowwise correlation between two proximity matrices and the partial rowwise correlation*, which manages to include the word twice for good measure!

Since the book is about MATLAB, it also seemed appropriate to check how the MATLAB documentation hyphenates the term. I could only find the hyphenated form:

`doc flipdim`: When the value of dim is 1, the array is flipped row-wise down

But for columnwise I found that MATLAB R2016b is inconsistent, as the following extracts illustrate, the first being from the documentation for the Symbolic Math Toolbox version of the `reshape` function.

- `doc reshape`: The elements are taken column-wise from A ... Reshape a matrix row-wise by transposing the result.
- `doc rmmissing`: 1 for row-wise (default) | 2 for column-wise
- `doc flipdim`: When dim is 2, the array is flipped columnwise left to right.
- `doc unwrap`: If P is a matrix, unwrap operates columnwise.

So what is our conclusion? We’re sticking to “row-wise” because we think it is easier to parse, especially for those whose first language is not English.


The features below are discussed in greater detail in the third edition of MATLAB Guide, to be published by SIAM in December 2016.

The Live Editor, introduced in R2016a, provides an interactive environment for editing and running MATLAB code. When the code that you enter is executed the results (numerical or graphical) are displayed in the editor. The code is divided into sections that can be evaluated, and subsequently edited and re-evaluated, individually. The Live Editor works with live scripts, which have a .mlx extension. Live scripts can be published (exported) to HTML or PDF. R2016b adds more functionality to the Live Editor, including an improved equation editor and the ability to pan, zoom, and rotate axes in output figures.

The Live Editor is particularly effective with the Symbolic Math Toolbox, thanks to the rendering of equations in typeset form, as the following image shows.

The Live Editor is a major development, with significant benefits for teaching and for script-based workflows. No doubt we will see it developed further in future releases.

I have already written about this powerful generalization of the long-standing MATLAB feature of scalar expansion. See this blog post.

Local functions are what used to be called subfunctions: they are functions within functions or, to be more precise, functions that appear after the main function in a function file. What’s new in R2016b is that a script can have local functions. This is a capability I have long wanted. When writing a script I often find that a particular computation or task, such as printing certain statistics, needs to be repeated at several places in the script. Now I can put that code in a local function instead of repeating the code or having to create an external function for it.
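As a sketch of how this looks (the script name and the statistics printed are just an illustration), a script can now end with one or more local functions:

```
% stats_demo.m -- a script with a local function (R2016b or later).
x = randn(1,10);
report(x)           % Call the local function.
x = x.^2;
report(x)           % Reuse it later in the script.

function report(v)  % Local function: must appear after the script's code.
fprintf('mean = %9.4f, max = %9.4f\n', mean(v), max(v))
end
```

Note that, unlike in earlier releases, no separate function file is needed for `report`.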

MATLAB has always had strings, created with a single quote, as in

s = 'a string'

which sets up a 1-by-8 character vector. A new datatype `string` has been introduced. A string is an array for which each entry contains a character vector. The syntax

str = string('a string')

sets up a 1-by-1 string array, whereas

str = string({'Boston','Manchester'})

sets up a 1-by-2 string array via a cell array. String arrays are more memory efficient than cell arrays and allow for more flexible handling of strings. They are particularly useful in conjunction with tables. According to this MathWorks video, string arrays “have a multitude of new methods that have been optimized for performance”. At the moment, support for strings across MATLAB is limited and it is inconvenient to have to set up a string array by passing a cell array to the `string` function. No doubt string arrays will be integrated more seamlessly in future releases.

Tall arrays provide a way to work with data that does not fit into memory. Calculations on tall arrays are delayed until a result is explicitly requested with the `gather` function. MATLAB optimizes the calculations to try to minimize the number of passes through the data. Tall arrays are created with the `tall` function, which takes as its argument an array (numeric, string, datetime, or one of several other data types) or a datastore. Many MATLAB functions (but not the linear algebra functions, for example) work the same way with tall arrays as they do with in-memory arrays.
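A minimal sketch of the workflow (the CSV file name and the `ArrDelay` variable are hypothetical):

```
ds = datastore('flights.csv');    % Create a datastore for a large CSV file.
t = tall(ds);                     % Wrap it as a tall table.
m = mean(t.ArrDelay,'omitnan');   % Deferred: no computation happens yet.
m = gather(m);                    % Triggers the passes through the data.
```

Until `gather` is called, `m` is itself an unevaluated tall expression.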

Tables were introduced in R2013b. They store tabular data, with columns representing variables of possibly different types. The newly introduced timetable is a table for which each row has an associated date and time, stored in the first column. The `timerange` function produces a subscript that can be used to select rows corresponding to a particular time interval. These new features make MATLAB an even more powerful tool for data analysis.
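Here is a small illustrative example of creating a timetable and selecting rows with `timerange` (the data is made up):

```
Time = datetime(2016,10,1) + hours(0:3)';      % One reading per hour.
Temp = [20.1; 21.3; 22.0; 21.7];
TT = timetable(Time,Temp);                     % Row times form the first dimension.
S = timerange(datetime(2016,10,1,1,0,0), ...
              datetime(2016,10,1,3,0,0));      % Half-open interval [start,end).
TT(S,:)                                        % Rows at 01:00 and 02:00.
```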

MATLAB has new capabilities for dealing with missing values, which are defined according to the data type: NaNs (Not-a-Number) for `double` and `single` data, NaTs (Not-a-Time) for datetime data, `<undefined>` for categorical data, and so on. For example, the function `ismissing` detects missing data and `fillmissing` fills in missing data according to one of several rules.
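As a small illustration (a sketch, not taken from the documentation):

```
x = [1 NaN 3 NaN 5];
ismissing(x)              % Logical vector marking the NaNs.
fillmissing(x,'linear')   % Linear interpolation gives [1 2 3 4 5].
```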

I have Mac OS X Version 10.9.5 (Mavericks) on my Mac. Although this OS is not officially supported, R2016b installed without any problem and runs fine. The pre-release versions of R2016a and R2016b would not install on Mavericks, so it seems that compatibility with older operating systems is ensured only for the actual release.

At the time of writing, there are some compatibility problems with Mac OS Version 10.12 (Sierra) for certain settings of Language & Region.


Inspired by the book, I thought it might be useful to collect some information on early mathematical wordprocessing. Little information of this kind seems to be available online.

It is first worth noting that before the advent of word processors papers were typed on a typewriter and mathematical symbols were typically filled in by hand, as in this example from A Study of the Matrix Exponential (1975) by Charlie Van Loan:

Some institutions had the luxury of IBM Selectric typewriters, which had a “golf ball” that could be changed in order to type special characters. (See this page for informative videos about the Selectric.) Here is an example of output from the Selectric, taken from my MSc thesis. This illustrates some characteristic weaknesses of typewriter output: superscripts, subscripts, and operators are the same size as the main symbols, and the spacing between characters is fixed (as in the vertical bars making up the norm signs here).

In the 1980s the word processor Vuwriter was used in the Department of Mathematics at the University of Manchester. It was produced by a spin-off company, Vuman Computer Systems, which took its name from “Victoria University of Manchester”, the university’s full name. Vuwriter ran on the Apricot PC, produced by the British company Apricot Computers. At least one of my first papers was typed on Vuwriter by the office staff, and I still have the original 1984 technical report *Computing the Polar Decomposition—with Applications*, which I have scanned and made available here. A typical equation from the report is this one:

The article

Peter Dolton, Comparing Scientific Word Processor Packages: T3 and Vuwriter, The Economic Journal 100, 311-315, 1990

reviews a version of Vuwriter that ran under MS-DOS on IBM PC compatibles. Another review is

D. L. Mealand, Word Processing in Greek using Vuwriter Arts: A Test case for Foreign Language Word Processing, Literary and Linguistic Computing 2, 30-33, 1987

which describes a version of the program for use with foreign languages.

The Department of Mathematics at UMIST (which merged with the University of Manchester in 2004) used an MS-DOS word processor called ChiWriter.

In the same period I also prepared manuscripts on my own microcomputers: first on a Commodore 64 and then on a Commodore 128 (essentially a Commodore 64 with a screen 80 characters wide rather than 40 characters wide), using a wordprocessor called Vizawrite. For output I used an Epson FX-80 dot matrix printer, and later an Epson LQ 850 (which produced high resolution output thanks to its 24 pin print head, as opposed to the 9 pin print head of the FX-80). Vizawrite was able to take advantage of the Epson printers’ ability to produce subscripts and superscripts, Greek characters, and mathematical symbols. An earlier post links to a scan of my 1985 article *Matrix Computations in Basic on a Microcomputer* produced in Vizawrite.

In the 1980s some colleagues wrote papers on a Mac. An example (preserved at the Cornell eCommons digital repository) is A Storage Efficient WY Representation for Products of Householder Transformations (1987) by Charlie Van Loan. I think that report was prepared in MacWrite. Here is a sample equation:

Also in the 1980s I built a database of papers that I had read, and wrote a program that could extract the items that I wanted to cite, format them, and print a sorted list. This was a big time-saver for producing reference lists, especially for my PhD thesis. The database was originally held in Superbase for the Commodore C128, with the program written in Superbase’s own language, and was later transferred to PC-File running on an IBM PC-clone with the program converted to GW-Basic. I was essentially building my own much simpler version of BibTeX, which did not exist when I started the database.

I am aware of two good sources of information about technical word processors for the IBM PC. The first is the article

P. K. Wong, Choices for Mathematical Wordprocessing Software, SIAM News 17(6), pp. 8-9, 1984

This article notes that

“There are over 120 wordprocessing programs for the IBM PC alone and the machine is not yet three years old! Of this large number, however, less than half a dozen can claim scientific wordprocessing capabilities, and these have only been available within the past six to nine months.”

The other source is two articles published in the Notices of the American Mathematical Society in the 1980s.

PC Technical Group of IBM PC Users Group of the Boston Computer Society, Technical Wordprocessors for the IBM PC and Compatibles, Notices Amer. Math. Soc. 33, 8-37, 1986

PC Technical Group of IBM PC Users Group of the Boston Computer Society, Technical Wordprocessors for the IBM PC and Compatibles, Part IIB: Reviews, Notices Amer. Math. Soc. 34, 462-491, 1987

These two articles do not appear to be available online. The first of them includes a set of benchmarks, consisting of extracts from technical journals and books, which were used to test the abilities of the packages. The authors make the interesting comment that

“Microsoft chose not to answer our review request for Word, and based on discussion with Word owners, Word is not set up for equations.”

Finally, Roger Horn told me that his book Topics in Matrix Analysis (CUP, 1991), co-authored with Charlie Johnson, was produced from the camera-ready output of the T3 wordprocessing system (reviewed in this 1988 article and in the SIAM News article above). T3 was chosen because TeX was not available on PCs when work on the book began. It must have been a huge effort to produce this 607-page book in this way!

If you have any information to add, please put it in the “Leave a Reply” box below.

*Acknowledgement*: thanks to Christopher Baker and David Silvester for comments on a draft of this post.


>> A = spiral(2), B = A - 1
A =
     1     2
     4     3
B =
     0     1
     3     2

Here, MATLAB subtracts 1 from every element of `A`, which is equivalent to expanding the scalar 1 into a matrix of ones and then subtracting that matrix from `A`.

Implicit expansion takes this idea further by expanding vectors:

>> A = ones(2), B = A + [1 5]
A =
     1     1
     1     1
B =
     2     6
     2     6

Here, the result is the same as if the row vector had been replicated along the first dimension to produce the matrix `[1 5; 1 5]` and that matrix then added to `ones(2)`. In the next example a column vector is added and the replication is across the columns:

>> A = ones(2) + [1 5]'
A =
     2     2
     6     6

Implicit expansion also works with multidimensional arrays, though we will focus here on matrices and vectors.

So MATLAB now treats “matrix plus vector” as a legal operation. This is a controversial change, as it means that MATLAB now allows computations that are undefined in linear algebra.

Why have MathWorks made this change? A clue is in the R2016b Release Notes, which say

For example, you can calculate the mean of each column in a matrix `A`, then subtract the vector of mean values from each column with `A - mean(A)`.

This suggests that the motivation is, at least partly, to simplify the coding of manipulations that are common in data science.

Implicit expansion can also be achieved with the function `bsxfun`, which was introduced in release R2007a, though I suspect that few MATLAB users have heard of this function:

>> A = [1 4; 3 2], bsxfun(@minus,A,mean(A))
A =
     1     4
     3     2
ans =
    -1     1
     1    -1
>> A - mean(A)
ans =
    -1     1
     1    -1

Prior to the introduction of `bsxfun`, the `repmat` function could be used to carry out the expansion explicitly, though less efficiently and less elegantly:

>> A - repmat(mean(A),size(A,1),1)
ans =
    -1     1
     1    -1

An application where the new functionality is particularly attractive is multiplication by a diagonal matrix.

>> format short e
>> A = ones(3); d = [1 1e-4 1e-8];
>> A.*d   % A*diag(d)
ans =
   1.0000e+00   1.0000e-04   1.0000e-08
   1.0000e+00   1.0000e-04   1.0000e-08
   1.0000e+00   1.0000e-04   1.0000e-08
>> A.*d'  % diag(d)*A
ans =
   1.0000e+00   1.0000e+00   1.0000e+00
   1.0000e-04   1.0000e-04   1.0000e-04
   1.0000e-08   1.0000e-08   1.0000e-08

The `.*` expressions are faster than forming and multiplying by `diag(d)` (as is the syntax `bsxfun(@times,A,d)`). We can even multiply with the inverse of `diag(d)` with

>> A./d
ans =
           1       10000   100000000
           1       10000   100000000
           1       10000   100000000

It is now possible to add a column vector to a row vector, or to subtract them:

>> d = (1:3)'; d - d'
ans =
     0    -1    -2
     1     0    -1
     2     1     0

This usage allows very short expressions for forming the Hilbert matrix and Cauchy matrices (look at the source code for `hilb.m` with `type hilb` or `edit hilb`).
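For example, with implicit expansion the n-by-n Hilbert matrix, with (i, j) element 1/(i+j-1), can be formed in one short expression (a sketch along the lines of what `hilb` does, not its exact code):

```
n = 4; j = 1:n;
H = 1./(j' + j - 1)   % H(i,j) = 1/(i+j-1): the n-by-n Hilbert matrix.
```

Here `j' + j` expands to the matrix with (i, j) element i+j.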

The `max` and `min` functions support implicit expansion, so an elegant way to form the matrix with (i, j) element min(i,j) is with

d = (1:n); A = min(d,d');

and this is precisely what `gallery('minij',n)` now does.

Another function that can benefit from implicit expansion is `vander`, which forms a Vandermonde matrix. Currently the function forms the matrix in three lines, with calls to `repmat` and `cumprod`. Instead we can do it as follows, in a formula that is closer to the mathematical definition and hence easier to check.

A = v(:).^(n-1:-1:0); % Equivalent to A = vander(v)

The latter code is, however, slower than the current `vander` for large dimensions, presumably because exponentiating each element independently is slower than using repeated multiplication.

An obvious objection to implicit expansion is that it could cause havoc in linear algebra courses, where students will be able to carry out operations that the instructor and textbook have said are not allowed. Moreover, it will allow programs with certain mistyped expressions to run that would previously have generated an error, making debugging more difficult.

I can see several responses to this objection. First, MATLAB was already inconsistent with linear algebra in its scalar expansion. When a mathematician writes (with a common abuse of notation) A + σ, with a scalar σ, he or she usually means A + σI and not A + σE with E the matrix of ones.

Second, I have been using the prerelease version of R2016b for a few months, while working on the third edition of MATLAB Guide, and have not encountered any problems caused by implicit expansion, either with existing codes or with new code that I have written.

A third point in favour of implicit expansion is that it is particularly compelling with elementwise operations (those beginning with a dot), as the multiplication by a diagonal matrix above illustrates, and since such operations are not a part of linear algebra confusion is less likely.

Finally, it is worth noting that implicit expansion fits into the MATLAB philosophy of “useful defaults” or “doing the right thing”, whereby MATLAB makes sensible choices when a user’s request is arguably invalid or not fully specified. This is present in the many functions that have optional arguments. But it can also be seen in examples such as

% No figure is open and no parallel pool is running.
>> close        % Close figure.
>> delete(gcp)  % Shut down parallel pool.

where no error is generated even though there is no figure to close or parallel pool to shut down.

I suspect that people’s reactions to implicit expansion will be polarized: they will either be horrified or will regard it as natural and useful. Now that I have had time to get used to the concept, and especially now that I have seen the benefits both for clarity of code (the `minij` matrix) and for speed (multiplication by a diagonal matrix), I like it. It will be interesting to see the community’s reaction.


The first edition of *Learning LaTeX* (1997) is a popular introduction to LaTeX characterized by its brevity, its approach of teaching by example, and its humour. (Full disclosure: the second author is my brother.)

The second edition of the book is 25 percent longer than the first and has several key additions. The amsmath package, particularly of interest for its environments for typesetting multiline equations, is now described. I often struggle to remember the differences between align, alignat, gather, and multline, but three pages of concise examples explain these environments very clearly.

The book now reflects the modern PDF workflow, based on pdflatex as the engine. If you are still generating dvi files you should consider making the switch!

Other features new to this edition are a section on making bibliographies with BibTeX and appendices on making slides with Beamer and posters with the a0poster class, both illustrated with complete sample documents.

Importantly for a book to be used for reference, there is an excellent 10.5-page index which, at about 10% of the length of the book, is unusually thorough (see the discussion on index length in my A Call for Better Indexes).

The 1997 first edition was reviewed in TUGboat (the journal of the TeX Users Group) in 2013 by Boris Veytsman, who had only just become aware of the book. He says

When Karl Berry and I discussed the current situation with books for beginners, he mentioned the old text by Griffiths and Higham as an example of the one made “right”…

This is indeed an incredibly good introduction to LaTeX. Even today, when many good books are available for beginners, this one stands out.

No doubt Berry and Veytsman would be even more impressed by the improved and expanded second edition.

Given that I regard myself as an advanced LaTeX user, I was surprised to learn something I didn’t know from the book, namely that in an `\includegraphics` command it is not necessary to specify the extension of the file being included. If you write `\includegraphics{myfig}` then pdflatex will search for `myfig.png`, `myfig.pdf`, `myfig.jpg`, and `myfig.eps`, in that order. If you specify

\DeclareGraphicsExtensions{.pdf,.png,.jpg}

in the preamble, then pdflatex will search for the given extensions in the specified order. I usually save my figures in PDF form, but sometimes a MATLAB figure saved as a jpeg file is much smaller.

This major update of the first edition is beautifully typeset, printed on high quality, bright white paper, and weighs just 280g. It is an excellent guide for those new to LaTeX and a useful reference for experienced users.


At the Sunday evening Welcome Reception I captured this photo of Don and Rob Corless (whose graduate textbook on numerical analysis I discussed here).

Don and Rob are co-authors on the classic paper

Robert M. Corless, Gaston N. Gonnet, D. E. G. Hare, David J. Jeffrey and Donald E. Knuth, On the Lambert W Function, Adv. in Comput. Math. 5, 329-359, 1996

The Lambert W function is a multivalued function W(z), with a countably infinite number of branches, that solves the equation w e^w = z. According to Google Scholar this is Don’s most-cited paper. Here is a diagram of the ranges of the branches of W, together with selected values marked (+), (×), and (o).

This is to be compared with the corresponding plot for the logarithm, which consists of horizontal strips of height 2π with boundaries at odd multiples of π.

Following the Annual Meeting, Rob ran a conference Celebrating 20 years of the Lambert W function at the University of Western Ontario.

Rob co-authored with David Jeffrey an article on the Lambert W function for the Princeton Companion to Applied Mathematics. The article summarizes the basic theory of the function and some of its many applications, which include delay differential equations. Rob and David note that

The Lambert function crept into the mathematics literature unobtrusively, and it now seems natural there.

The article is one of the sample articles that can be freely downloaded from this page.

I have worked on generalizing the Lambert W function to matrices, as discussed in

Robert M. Corless, Hui Ding, Nicholas J. Higham and David J. Jeffrey, The solution of S exp(S) = A is not always the Lambert W function of A, in ISSAC ’07: Proceedings of the 2007 International Symposium on Symbolic and Algebraic Computation, ACM Publications, pp. 116-121, 2007.

Massimiliano Fasi, Nicholas J. Higham and Bruno Iannazzo, An Algorithm for the Matrix Lambert W Function, SIAM J. Matrix Anal. Appl., 36, 669-685, 2015.

The diagram above is from the latter paper.


10 PRINT CHR$(205.5+RND(1)); : GOTO 10

This is essentially what was printed in the section “Random Graphics” of the Commodore 64 User’s Guide (1982). The program prints a random maze that gradually builds up on-screen. The following video demonstrates it:

I recently came across a 309-page book by Montfort et al. (MIT Press, 2013) dedicated to discussing this program from every conceivable angle. The book, which can be freely downloaded in PDF form, has as its title the program itself and was written by a team of ten authors with backgrounds in digital media, art, literature, and computer science. I found the book an interesting read, not least for the memories it brought back of my days programming the Commodore PET and Commodore 64 machines in the early 1980s (discussed in this post about the Commodore PET and this post about the Commodore 64). I suspect that never has so much been written about a single line of code!

Various translations of the program into other languages have been done, but I could not find a MATLAB version. Here, then, is my MATLAB offering, which takes advantage of the MATLAB `char` function’s ability to produce Unicode characters:

while 1, fprintf('%s\n',char(rand(1,80)+9585.5)); pause(.2), end

The `pause` command is not necessary but helps to slow the printing down, and the argument of `pause` may need adjusting for your computer.

Are there other interesting MATLAB one-liners? This one is from Mike Croucher:

x=[-2:.001:2]; plot(x,(sqrt(cos(x)).*cos(200*x)+sqrt(abs(x))-0.7).*(4-x.*x).^0.01)

And here is one to compute the Mandelbrot set, condensed from the code `mandel` in MATLAB Guide:

[x,y]=ndgrid(-1.5:5e-3:1.5); c=x+1i*y; z=c; for k=1:50, z=z.^2+c; end, contourf(x,y,abs(z)<1e6)

If you know any other good one-liners please put them in the comment box below.


Charlie has been a huge inspiration to me and many others, not least through his book *Matrix Computations*, with Gene Golub, now in its fourth edition. I wrote about the book on the occasion of the publication of the fourth edition (2013) in this previous post.

Following his PhD at the University of Michigan, Charlie visited the Department of Mathematics at the University of Manchester in 1974–1975 as a Science Research Council Research Fellow. He wrote the department’s first Numerical Analysis Report as well as three more of the first ten reports, as explained in this post.

A 55-minute video interview with Charlie by his colleague Kavita Bala, recorded in 2015, is available at the Cornell University eCommons. In it, Charlie talks about his PhD, with Cleve Moler as advisor, life as a young Cornell faculty member, the “GVL” book, computer science education, and many other things.

A two-part minisymposium is being held in Charlie’s honor at the SIAM Annual Meeting in Boston, July 11-14, 2016, organized by David Bindel (Cornell University) and Ilse Ipsen (North Carolina State University). I will be speaking in the second part about Charlie’s work on the matrix exponential. The details are below. If you will be at the meeting, come and join us. I hope to provide links to the slides after the event.

**SIAM Annual Meeting 2016**.

**Numerical Linear and Multilinear Algebra: Celebrating Charlie Van Loan**.

**Wednesday, July 13**

**Part I: MS73, MS89: 10:30 AM – 12:30 PM. BCEC Room 254B**.

- 10:30-10:55 Parallel Tucker-Based Compression for Regular Grid Data, Tamara G. Kolda, Sandia National Laboratories, USA
- 11:00-11:25 Cancer Diagnostics and Prognostics from Comparative Spectral Decompositions of Patient-Matched Genomic Profiles, Orly Alter, University of Utah, USA
- 11:30-11:55 Exploiting Structure in the Simulation of Super Carbon Nanotubes, Christian H. Bischof, Technische Universität Darmstadt, Germany
- 12:00-12:25 A Revisit to the GEMM-Based Level 3 BLAS and Its Impact on High Performance Matrix Computations, Bo T. Kågström, Umeå University, Sweden

**Part II: MS92, 4:00 PM – 6:00 PM. BCEC Room 254B**.

- 4:00-4:25 Charlie Van Loan and the Matrix Exponential, Nicholas J. Higham, University of Manchester, United Kingdom
- 4:30-4:55 Nineteen Dubious Ways to Compute the Zeros of a Polynomial, Cleve Moler, The MathWorks, Inc., USA
- 5:00-5:25 The Efficient Computation of Dense Derivative Matrices in MATLAB Using ADMAT and Why Sparse Linear Solvers Can Help, Thomas F. Coleman, University of Waterloo, Canada
- 5:30-5:55 On Rank-One Perturbations of a Rotation, Robert Schreiber, Hewlett-Packard Laboratories, USA


- AIDS: acquired immune deficiency syndrome,
- laser: light amplification by stimulated emission of radiation,
- radar: radio detection and ranging,
- scuba: self-contained underwater breathing apparatus,
- snafu: situation normal all fouled up,
- sonar: sound navigation and ranging,
- UNESCO: United Nations Educational, Scientific, and Cultural Organization,
- WYSIWYG: what you see is what you get.

There is even a recursive acronym, GNU, standing for “GNU’s not Unix”.

On close inspection, the OED definition is imprecise in two respects. First, can we take more than one letter from each word? The definition doesn’t say, but the examples radar and sonar make it clear that we can. Second, do we have to take the initial letters from the words in their original order? This is clearly the accepted meaning. *Merriam Webster’s Collegiate Dictionary* (10th ed., 1993) provides a more precise definition that covers both points, by saying “formed from the initial letter or letters of each of the successive parts or major parts of a compound term”.

In common with many fields, applied mathematics has a lot of acronyms. It also has a good number of the most elegant of acronyms: those that take exactly one letter from each word, such as

- BLAS: basic linear algebra subprograms,
- DCT: discrete cosine transform,
- FSAL: first same as last,
- MIMO: multi-input multi-output,
- NaN: not a number,
- PDE: partial differential equation,
- SIRK: singly-implicit Runge-Kutta,
- SVD: singular value decomposition.

New acronyms are regularly formed in research papers. Non-native speakers are advised to be careful in doing so, as their constructions may have unsuspected meanings. The authors of this article in Chemical Communications managed to get two exceptionally inappropriate acronyms into print, and one wonders how these escaped the referees and editor.

Another question left open by the definitions mentioned above is whether an acronym has to be pronounceable. The big *Oxford English Dictionary* (3rd ed., 2015) lists two meanings, which allow an acronym to be pronounceable or unpronounceable. *The New York Times Manual of Style and Usage* (5th ed., 2015) says “unless pronounced as a word, an abbreviation is not an acronym”, while the *Style Guide* of *The Economist* (11th ed., 2015) also requires pronounceability, as do various other references.

Apart from SIAM (Society for Industrial and Applied Mathematics), not many mathematics societies have pronounceable acronyms. In the “pronounced by letter” camp we have, for example,

- AMS: American Mathematical Society
- AWM: Association for Women in Mathematics
- EMS: European Mathematical Society
- IMA: Institute of Mathematics and its Applications
- IMU: International Mathematical Union
- LMS: London Mathematical Society
- MAA: Mathematical Association of America
- MPS: Mathematical Programming Society

SIAM’s founders chose well when they named the society in 1952! Indeed the letters S, I, A, M have proved popular, forming in a different order the acronyms of the more recent bodies SMAI (La Société de Mathématiques Appliquées et Industrielles) and AIMS (the African Institute for Mathematical Sciences).

A situation where (near) acronyms are particularly prevalent is in research proposals, where a catchy acronym in the title is often felt to be an advantage. I suspect that in many cases the title is chosen to fit the acronym. Indeed there is now a word to describe this practice. In 2015 the OED added the word *backronym* (first occurrence in 1983), which refers to “a contrived explanation of an existing word’s origin, positing it as an acronym”. One backronym is “SOS”; see Wikipedia and this article by John Cook for more examples.

The Acronym Finder website does a good job of finding the meaning of an acronym, often returning multiple results. For SIAM it produces 17 definitions, of which the “top hit” is the expected one—and at least one is rather unexpected!
