## MATLAB Guide, Third Edition (2017)

The third edition of MATLAB Guide, which I co-wrote with Des Higham, has just been published by SIAM. It is a major update of the second edition (2005) to reflect the many changes in MATLAB over the last twelve years, and is 25 percent longer. There are new sections and chapters, and almost every page has changed.

The new chapters are

• Object-Oriented Programming: presents an introduction to object-oriented programming in MATLAB through two examples of classes.
• Graphs: describes the new MATLAB classes graph and digraph for representing and manipulating undirected graphs and directed graphs.
• Large Data Sets: describes MATLAB features for handling data sets so large that they do not fit into the memory of the computer.
• The Parallel Computing Toolbox: describes this widely used and increasingly important toolbox.

The chapter The Symbolic Math Toolbox has been revised to reflect the change of the underlying symbolic engine from Maple (at the time of the second edition) to MuPAD.

New sections include Empty Matrices, Matrix Properties, Argument Checking and Parsing, Fine Tuning the Display of Arrays, Live Editor, Unit Tests, String Arrays, Categorical Arrays, Tables and Timetables, and Timing Code.

Two other big changes are that figures are now printed in color and there are thirteen “Asides”, highlighted in gray boxes, which contain discussions of MATLAB-related topics, such as anonymous functions, reproducibility, and color maps.

The book was launched with a reception hosted by MathWorks and SIAM at the SIAM booth at the Joint Mathematics Meetings in Atlanta on January 6, 2017. Jim Rundquist (Senior Education Technical Evangelist) represented MathWorks, and several SIAM staff, including SIAM Publisher David Marshall, were present.

Two delicious cakes, one containing a representation of the cover of the book, were enjoyed by reception attendees. Inspired by MATLAB, the cakes were served using slice, deal, and input, and an occasional reshape or rotate, with a pool of workers consuming them asynchronously.

## Taking Up the SIAM Presidency

I am honored to be taking over the reins from Pam Cook as president of the Society for Industrial and Applied Mathematics (SIAM) for the next two years, starting January 1, 2017. Pam remains as past-president during 2017. I look forward to helping to address the challenges facing SIAM and to working with the excellent SIAM officers and staff.

Eighteen months ago I wrote a “candidate statement” for the fall 2015 SIAM elections. The comments I made then remain valid and so I thought it would be worth reproducing the statement here.

The January/February 2017 issue of SIAM News will contain my first From the SIAM President column, in which I give further thoughts on SIAM’s future.

I am happy to receive comments from SIAM members or potential members, either in the box below or by email.

Candidate Statement: SIAM is the leading international organization for applied mathematics and has been an important part of my professional life since I joined as a PhD student, 31 years ago. SIAM is the first place that many people turn to for publications, conferences, and news about applied mathematics and it represents the profession nationally and internationally.

I have been fortunate to be involved in the leadership for many years, having spent six years on the Council, eight years on the Board, and having recently served two terms as Vice President At Large (2010-2013).

SIAM faces a number of challenges that, if elected as President, I would relish helping to address, working with SIAM members, SIAM officers, and the excellent SIAM staff.

SIAM’s publications remain strong, but are vulnerable to changes in the way scholarly journals operate (open access, article processing charges, etc.). SIAM needs to monitor the situation and respond appropriately, while striving to provide an even greater service to authors, referees and editors, for example by better use of web tools.

SIAM’s membership is also healthy, but SIAM must continue to enhance membership benefits and work hard to attract and retain student members, who are the future of the society, and to provide value for its members in industry.

Book sales are declining globally and in academic publishing it is becoming harder to find authors with the time to write a book. Nevertheless, the SIAM book program is in a strong position and the 2015 review of the program that I chaired has produced a list of recommendations that should help it to thrive.

SIAM conferences are a terrific place to learn about the latest developments in the subject, meet SIAM staff, browse SIAM books, and attend a business meeting. Attendances continue to grow (the SIAM CSE meeting in Salt Lake City last March was the largest ever SIAM meeting, with over 1700 attendees), but in any given year, the majority of SIAM’s 14,000 members do not attend a SIAM conference. Audio and slide captures of selected lectures are made available on SIAM Presents, but we need to do more to help members engage in virtual participation.

The SIAM web site has provided sterling service for a number of years, but is in need of a major redesign, which is underway. This is an excellent opportunity to better integrate the many services (conferences, journals, books, membership, activity groups, chapters, sections, etc.) in a responsive design. Beyond the core website, SIAM has a strong social media presence, posts a wide variety of videos on its YouTube channel, hosts SIAM Blogs (which I was involved in setting up in 2013), has recently made SIAM News available online, and has SIAM Connect and SIAM Unwrapped as further outlets. Optimizing the use of all these communication tools will be an ongoing effort.

These are just some of the challenges facing SIAM in the future as it continues to play a global leadership role for applied mathematics.

July 2015

Posted in miscellaneous | Tagged | 1 Comment

## Numerical Linear Algebra Group 2016

The Manchester Numerical Linear Algebra group (some of whom are in the photo below) was very active in 2016. This post summarizes what we got up to. Publications are not included here, but many of them can be found on MIMS EPrints under the category Numerical Analysis.

## Software

The group has joined Jack Dongarra’s team at the University of Tennessee to become one of the two partners involved in the development of PLASMA: Parallel Linear Algebra Software for Multicore Architectures.

We continue to make our research codes available, which is increasingly done on GitHub; see the repositories of Higham, Relton, Sego, Tisseur, Zhang. We also put MATLAB software on MATLAB Central File Exchange and on our own web sites, e.g., the Rational Krylov Toolbox (RKToolbox).

Several algorithms have been incorporated in other software packages, such as, from Stefan Guettel, the NLEIGS solver which is now part of SLEPc, the Zolotarev quadrature approach which is now part of the FEAST eigenvalue solver package, and rational deferred correction which is now part of pySDC.

## PhD Students

After defending her thesis in March 2016, Nataša Strabić (2012-2016) left in May to take up a position as Teacher of Mathematics at Sevenoaks School, Kent.

Bahar Arslan defended her PhD thesis in December 2016.

Mario Berljafa defended his PhD thesis in November 2016. In September he took up a postdoctoral position in the Department of Computer Science at KU Leuven.

Weijian Zhang spent January-February visiting the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

New PhD students Jennifer Lau and Steven Elsworth joined the group in September 2016.

## Postdoctoral Research Associates (PDRAs)

Pedro Valero Lara and Mawussi Zounon joined us in January 2016 to work on the Parallel Numerical Linear Algebra for Extreme Scale Systems (NLAFET) project. Sam Relton, who had previously been working on the Functions of Matrices: Theory and Computation project, moved on to this project in March.

Jakub Sistek joined us in March 2016 to work on the Programming Model INTERoperability ToWards Exascale (INTERTWinE) project.

Mary Aprahamian (2011-2016) left in May to take up a position as Data Scientist at Bloom Agency in Leeds.

After several years as PhD student and then PDRA, James Hook left in April to take up a fellowship in the Institute for Mathematical Innovation at the University of Bath.

Prashanth Nadukandi joined the group in September 2016, supported by a Marie Skłodowska-Curie Individual Fellowship.

Timothy Butters (2013-2016), KTP Associate with Sabisu, has taken up a permanent position as Head of Research & Development with the company following the completion of the KTP in December 2016.

Pedro Valero Lara left in October to take up a position as Senior Researcher on the Human Brain project at Barcelona Supercomputer Centre.

David Stevens joined us in December 2016 to work on the Programming Model INTERoperability ToWards Exascale (INTERTWinE) project.

## Presentations

Members of the group gave presentations (talks or posters) at the following conferences and workshops.

SIAM UKIE Meeting 2016, January 7, 2016, University of Cambridge, UK: Strabić.

Bath–RAL Numerical Analysis Day, January 11, 2016, Didcot, UK: Guettel.

GAMM Annual Meeting, March 7-11, 2016, Braunschweig, Germany: Guettel.

SIAM Conference on Parallel Processing for Scientific Computing, April 12-15, 2016, Paris: Valero Lara, Zhang.

University of Strathclyde SIAM Student Chapter Meeting, Glasgow, May 3, 2016: Higham.

Workshop on Batched, Reproducible, and Reduced Precision BLAS, Innovative Computing Laboratory, University of Tennessee, May 18–19, 2016: Zounon. See the report on the workshop by Sven Hammarling.

ESSAM School on Mathematical Modelling, Numerical Analysis and Scientific Computing, Czech Republic, May 29-June 3, 2016: Sistek.

ECCOMAS Congress 2016, Crete, Greece, June 5-10, 2016: Sistek.

Programs and Algorithms of Numerical Mathematics, Czech Republic, June 19-24, 2016: Sistek.

SIAM Annual Meeting, Boston, USA, July 11-15, 2016: Fasi, Higham, Tisseur, Zemaityte, Zhang.

Fifth IMA Conference on Numerical Linear Algebra and Optimization, University of Birmingham, UK, September 7-9, 2016: Gwynne, Relton, Tisseur, Zounon.

GAMM Workshop on Applied and Numerical Linear Algebra, TU Hamburg–Harburg, Germany, September 15-16, 2016: Guettel.

4th Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE4), University of Manchester, September 12-14, 2016: Relton.

Chebyshev Day, University of Oxford, UK, November 14, 2016: Guettel.

SIAM Annual Student Chapter Conference, University of Warwick, November 23, 2016: Zhang.

## Conference and Workshop Organization

The Manchester SIAM Student Chapter organized their 6th Manchester SIAM Student Chapter Conference on May 4, 2016.

Jakub Sistek was one of the organizers of

Françoise Tisseur was on the organizing/scientific committees of

Mario Berljafa, Jonathan Deakin, Nick Higham, Matthew Gwynne, Mante Zemaityte, and Weijian Zhang organized the Manchester Julia Workshop, September 19-20, 2016 at the University of Manchester. Videos of the talks are available on YouTube.

Jakub Sistek and Maksims Abalenkovs were on the organizing committee of the European Exascale Applications Workshop held here in the School of Mathematics, October 11-12, 2016.

## Visitors

Vedran Sego visited the group until May 2016.

Peter Kandolf visited the group from September 2015 to March 2016.

Tomáš Gergelits visited the group from October 2015 to March 2016.

Meisam Sharify visited the group in September 2016.

## Knowledge Transfer

The three-year Knowledge Transfer Partnership with Sabisu (a data analytics platform for the oil and gas industries), involving KTP Associate Tim Butters, Stefan Guettel, Nick Higham, and Jon Shapiro (School of Computer Science) was completed in December 2016. Among other achievements, an alarm management system has been developed and launched as a product.

## Recognition and Service

Françoise Tisseur was elected SIAM Fellow.

Stefan Guettel was elected Secretary/Treasurer of the SIAM UKIE section, 2016–2018, and has also taken on the role of vice-chair of the GAMM Activity Group on Applied and Numerical Linear Algebra. He joined the editorial board of the SIAM Journal on Scientific Computing in January 2016.

Weijian Zhang won a bronze medal in the SET for Britain 2016 competition, which took place at the House of Commons, London, for his poster “Time-Dependent Network Modelling for Mining Scientific Literature”.

Photo from LMS Newsletter. April 2016. (l to r) Dr Stephen Benn (Royal Society of Biology), Sylaja Srinivasan (Bank of England), Professor Nick Woodhouse (Clay Mathematics Institute), Weijian Zhang (Bronze Award Winner), Dr Philip Pearce (Gold Award Winner), Dr Tom Montenegro-Johnson (Silver Award Winner), Professor Sir Adrian Smith (CMS), Stephen Metcalfe MP

Mario Berljafa won a SIAM Student Paper Prize for his work with Stefan Guettel entitled “Generalized Rational Krylov Decompositions with an Application to Rational Approximation”.

A poster “The Math behind Alarm Redundancy Detection” by Mario Berljafa, Massimiliano Fasi, Matthew Gwynne, Goran Malic, Mante Zemaityte, and Weijian Zhang won a prize in SIAM’s “Math Matters” contest and is featured on the Math Matters, Apply It! website.

Nick Higham served as president-elect of SIAM. He was also elected to Academia Europaea.

Weijian Zhang won a SIAM Travel Award to attend the SIAM Conference on Parallel Processing for Scientific Computing in Paris in April 2016.

Mario Berljafa, Massimiliano Fasi and Mante Zemaityte were awarded SIAM Student Travel Awards to attend the SIAM Annual Meeting 2016 in Boston. Weijian Zhang represented the Manchester Student Chapter at the meeting.

Jakub Sistek was re-elected in February as treasurer of the Czech Network for Mathematics in Industry EU-MATHS-IN.CZ.

## New Guidelines for DOI Linking and Display

For some time I have been collecting digital object identifiers (DOIs) in my BibTeX entries, as described in this blog post. When I use my own BST file to format the references BibTeX creates hyperlinks to the published source via the DOI. If I use the SIAM BST file the DOI is instead displayed as part of the reference.

The Crossref organization (which provides DOIs) has recently issued revised guidelines on the display of DOIs. Up until now, DOIs have typically been displayed as, for example, 10.1137/130920137, and linked to as http://dx.doi.org/10.1137/16M1057577. The new guidelines say that the link should be https://doi.org/10.1137/16M1057577 and that it should always be displayed in this form, as a full URL. Note that the “dx” part of the URL has gone, and “https” has replaced “http”.

The main reason for the change is that the pure DOI on its own is not much use, as it can’t be clicked on or pasted into a browser address bar without first adding the https://doi.org/ prefix. Additionally, https provides more secure browsing than http, and Google gives a small ranking boost to sites that use https.

Crossref states that the old http://dx.doi.org/10.1137/16M1057577 form of URL will continue to work forever.
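
To illustrate, a BibTeX entry carrying a doi field might look like the sketch below (the key and the elided fields are placeholders, not a real reference; the DOI is the one used as an example above). A DOI-aware BST file can then generate the full https://doi.org/... link from the doi field alone:

```
@article{mykey,
  author  = {...},
  title   = {...},
  journal = {...},
  year    = {...},
  doi     = {10.1137/16M1057577}
}
```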

I have updated my BST file myplain2-doi.bst in this GitHub repository, which contains a BibTeX bibliography for all my outputs, so that it produces links in the required form.

SIAM has updated the BST file in its macro packages to implement the new guidelines.

## Hyphenation Question: Row-wise or Rowwise?

Sam Clark of T&T Productions, the copy editor for the third edition of MATLAB Guide (co-authored with Des Higham and to be published by SIAM in December 2016), recently asked whether we would like to change “row-wise” to “rowwise”.

A search of my hard disk reveals that I have always used the hyphen, probably because I don’t like consecutive w’s. Indeed, in 1999 I published a paper Row-Wise Backward Stable Elimination Methods for the Equality Constrained Least Squares Problem.

A bit more searching found recent SIAM papers containing “rowwise”, so it is clearly acceptable usage to omit the hyphen.

My dictionaries and usage guides don’t provide any guidance as far as I can tell. Here is what some more online searching revealed.

• The Oxford English Dictionary does not contain either form (in the entry for “row” or elsewhere), but the entry for “column” contains “column-wise” but not “columnwise”.
• The Google Ngram Viewer shows a great prevalence of the hyphenated form, which was about three times as common as the unhyphenated form in the year 2000.
• A search for “row-wise” and “rowwise” at google.co.uk finds about 724,000 and 248,000 hits, respectively.
• A Google Scholar search for “row-wise” and “rowwise” finds 31,600 and 18,900 results, respectively. For each spelling, there are plenty of papers with that form in the title. The top hit for “rowwise” is a 1993 paper The rowwise correlation between two proximity matrices and the partial rowwise correlation, which manages to include the word twice for good measure!

Since the book is about MATLAB, it also seemed appropriate to check how the MATLAB documentation hyphenates the term. I could only find the hyphenated form:

doc flipdim:
When the value of dim is 1, the array is flipped row-wise down


But for columnwise I found that MATLAB R2016b is inconsistent, as the following extracts illustrate, the first being from the documentation for the Symbolic Math Toolbox version of the function.

doc reshape:
The elements are taken column-wise from A ...
Reshape a matrix row-wise by transposing the result.

doc rmmissing:
1 for row-wise (default) | 2 for column-wise

doc flipdim:
When dim is 2, the array is flipped columnwise left to right.

doc unwrap:
If P is a matrix, unwrap operates columnwise.


So what is our conclusion? We’re sticking to “row-wise” because we think it is easier to parse, especially for those whose first language is not English.


## What’s New in MATLAB R2016b

MATLAB R2016b was released in the middle of September 2016. In this post I discuss some of its new features (I will not consider the toolboxes). This is a personal selection of highlights; for a complete overview see the Release Notes.

The features below are discussed in greater detail in the third edition of MATLAB Guide, to be published by SIAM in December 2016.

## Live Editor

The Live Editor, introduced in R2016a, provides an interactive environment for editing and running MATLAB code. When the code that you enter is executed, the results (numerical or graphical) are displayed in the editor. The code is divided into sections that can be evaluated, and subsequently edited and re-evaluated, individually. The Live Editor works with live scripts, which have a .mlx extension. Live scripts can be published (exported) to HTML or PDF. R2016b adds more functionality to the Live Editor, including an improved equation editor and the ability to pan, zoom, and rotate axes in output figures.

The Live Editor is particularly effective with the Symbolic Math Toolbox, thanks to the rendering of equations in typeset form, as the following image shows.

The Live Editor is a major development, with significant benefits for teaching and for script-based workflows. No doubt we will see it developed further in future releases.

## Local Functions

Local functions are what used to be called subfunctions: they are functions within functions or, to be more precise, functions that appear after the main function in a function file. What’s new in R2016b is that a script can have local functions. This is a capability I have long wanted. When writing a script I often find that a particular computation or task, such as printing certain statistics, needs to be repeated at several places in the script. Now I can put that code in a local function instead of repeating the code or having to create an external function for it.
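
As a minimal sketch (the script name and the statistics printed are invented for illustration), a script with a local function looks like this; the local function must come after all the other code in the script file:

```matlab
% stats_script.m: a script with a local function (R2016b or later).
A = magic(4);
print_stats(A)        % call the local function
print_stats(A - 8)

function print_stats(X)  % local function: must follow the script code
fprintf('max = %g, min = %g\n', max(X(:)), min(X(:)))
end
```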

## String Arrays

MATLAB has always had strings, created with a single quote, as in

s = 'a string'


which sets up a 1-by-8 character vector. A new datatype string has been introduced. A string is an array for which each entry contains a character vector. The syntax

str = string('a string')


sets up a 1-by-1 string array, whereas

str = string({'Boston','Manchester'})


sets up a 1-by-2 string array via a cell array. String arrays are more memory efficient than cell arrays and allow for more flexible handling of strings. They are particularly useful in conjunction with tables. According to this MathWorks video, string arrays “have a multitude of new methods that have been optimized for performance”. At the moment, support for strings across MATLAB is limited and it is inconvenient to have to set up a string array by passing a cell array to the string function. No doubt string arrays will be integrated more seamlessly in future releases.
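
For example, once the string array exists it can be indexed like any other array, and (in the behaviour I observe in R2016b) individual strings can be compared for equality with ==:

```matlab
str = string({'Boston','Manchester'});  % 1-by-2 string array
city = str(2);                          % 1-by-1 string array
tf = (city == 'Manchester');            % logical 1: == compares contents
```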

## Tall Arrays

Tall arrays provide a way to work with data that does not fit into memory. Calculations on tall arrays are delayed until a result is explicitly requested with the gather function. MATLAB optimizes the calculations to try to minimize the number of passes through the data. Tall arrays are created with the tall function, which takes as its argument an array (numeric, string, datetime, or one of several other data types) or a datastore. Many MATLAB functions (but not the linear algebra functions, for example) work the same way with tall arrays as they do with in-memory arrays.
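
A minimal sketch of the workflow follows; here the tall array is created from an in-memory vector for simplicity, though in practice it would usually be backed by a datastore:

```matlab
t = tall((1:10)');   % tall array (here from a small in-memory array)
m = mean(t);         % deferred: m is an unevaluated tall expression
mval = gather(m);    % triggers the optimized passes over the data
```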

## Timetables

Tables were introduced in R2013b. They store tabular data, with columns representing variables of possibly different types. The newly introduced timetable is a table for which each row has an associated date and time, stored in the first column. The timerange function produces a subscript that can be used to select rows corresponding to a particular time interval. These new features make MATLAB an even more powerful tool for data analysis.
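
As a small sketch (the dates and variable are invented for illustration): create a timetable with three daily readings, then extract a time interval with timerange, which by default selects rows in the half-open interval [start, end):

```matlab
times = datetime(2016,9,14) + days(0:2)';        % three daily times
TT = timetable(times, [20;23;21], 'VariableNames', {'Temp'});
S = TT(timerange('2016-09-14','2016-09-16'), :); % first two rows
```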

## Missing Values

MATLAB has new capabilities for dealing with missing values, which are defined according to the data type: NaNs (Not-a-Number) for double and single data, NaTs (Not-a-Time) for datetime data, <undefined> for categorical data, and so on. For example, the function ismissing detects missing data and fillmissing fills in missing data according to one of several rules.
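
For example (a small sketch):

```matlab
x = [1 NaN 3 NaN 5];
idx = ismissing(x)            % logical mask: [0 1 0 1 0]
y = fillmissing(x,'linear')   % linear interpolation: [1 2 3 4 5]
```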

## Mac OS Version

I have Mac OS X Version 10.9.5 (Mavericks) on my Mac. Although this OS is not officially supported, R2016b installed without any problem and runs fine. The pre-release versions of R2016a and R2016b would not install on Mavericks, so it seems that compatibility with older operating systems is ensured only for the actual release.

At the time of writing, there are some compatibility problems with Mac OS Version 10.12 (Sierra) for certain settings of Language & Region.

## Mathematical Word Processing: Historical Snippets

Matthew G. Kirschenbaum’s recent book Track Changes: A Literary History of Word Processing contains a lot of interesting detail about the early days of word processing, covering the period 1964 to 1984. Most of the book concerns non-scientific writing, though $\TeX$ gets a brief mention.

Inspired by the book, I thought it might be useful to collect some information on early mathematical wordprocessing. Little information of this kind seems to be available online.

It is first worth noting that before the advent of word processors papers were typed on a typewriter and mathematical symbols were typically filled in by hand, as in this example from A Study of the Matrix Exponential (1975) by Charlie Van Loan:

Some institutions had the luxury of IBM Selectric typewriters, which had a “golf ball” that could be changed in order to type special characters. (See this page for informative videos about the Selectric.) Here is an example of output from the Selectric, taken from my MSc thesis:

This illustrates some characteristic weaknesses of typewriter output: superscripts, subscripts, and operators are of the same size as the main symbols and spacing between characters is fixed (as in the vertical bars making up the norm signs here).

In the 1980s, the Department of Mathematics at the University of Manchester used the word processor Vuwriter. It was produced by a spin-off company, Vuman Computer Systems, which took its name from “Victoria University of Manchester”, the university’s full name. Vuwriter ran on the Apricot PC, produced by the British company Apricot Computers. At least one of my first papers was typed on Vuwriter by the office staff and I still have the original 1984 technical report Computing the Polar Decomposition—with Applications, which I have scanned and made available here. A typical equation from the report is this one:

The article

Peter Dolton, Comparing Scientific Word Processor Packages: $T^3$ and Vuwriter, The Economic Journal 100, 311-315, 1990

reviews a version of Vuwriter that ran under MS-DOS on IBM PC compatibles. Another review is

D. L. Mealand, Word Processing in Greek using Vuwriter Arts: A Test case for Foreign Language Word Processing, Literary and Linguistic Computing 2, 30-33, 1987

which describes a version of the program for use with foreign languages.

The Department of Mathematics at UMIST (which merged with the University of Manchester in 2004) used an MS-DOS word processor called ChiWriter.

In the same period I also prepared manuscripts on my own microcomputers: first on a Commodore 64 and then on a Commodore 128 (essentially a Commodore 64 with a screen 80 characters wide rather than 40 characters wide), using a wordprocessor called Vizawrite. For output I used an Epson FX-80 dot matrix printer, and later an Epson LQ 850 (which produced high resolution output thanks to its 24 pin print head, as opposed to the 9 pin print head of the FX-80). Vizawrite was able to take advantage of the Epson printers’ ability to produce subscripts and superscripts, Greek characters, and mathematical symbols. An earlier post links to a scan of my 1985 article Matrix Computations in Basic on a Microcomputer produced in Vizawrite.

In the 1980s some colleagues wrote papers on a Mac. An example (preserved at the Cornell eCommons digital repository) is A Storage Efficient WY Representation for Products of Householder Transformations (1987) by Charlie Van Loan. I think that report was prepared in Mac Write. Here is a sample equation:

Also in the 1980s I built a database of papers that I had read, and wrote a program that could extract the items that I wanted to cite, format them, and print a sorted list. This was a big time-saver for producing reference lists, especially for my PhD thesis. The database was originally held in Superbase for the Commodore C128, with the program written in Superbase’s own language, and was later transferred to PC-File running on an IBM PC-clone with the program converted to GW-Basic. I was essentially building my own much simpler version of BibTeX, which did not exist when I started the database.

I am aware of two good sources of information about technical word processors for the IBM PC. The first is the article

P. K. Wong, Choices for Mathematical Wordprocessing Software, SIAM News 17(6), pp. 8-9, 1984

“There are over 120 wordprocessing programs for the IBM PC alone and the machine is not yet three years old! Of this large number, however, less than half a dozen can claim scientific wordprocessing capabilities, and these have only been available within the past six to nine months.”

The other source is two articles published in the Notices of the American Mathematical Society in the 1980s.

PC Technical Group of IBM PC Users Group of the Boston Computer Society, Technical Wordprocessors for the IBM PC and Compatibles, Notices Amer. Math. Soc. 33, 8-37, 1986

PC Technical Group of IBM PC Users Group of the Boston Computer Society, Technical Wordprocessors for the IBM PC and Compatibles, Part IIB: Reviews, Notices Amer. Math. Soc. 34, 462-491, 1987

These two articles do not appear to be available online. The first of them includes a set of benchmarks, consisting of extracts from technical journals and books, which were used to test the abilities of the packages. The authors make the interesting comment that

“Microsoft chose not to answer our review request for Word, and based on discussion with Word owners, Word is not set up for equations.”

Finally, Roger Horn told me that his book Topics in Matrix Analysis (CUP, 1991), co-authored with Charlie Johnson, was produced from the camera-ready output of the $T^3$ wordprocessing system (reviewed in this 1988 article and in the SIAM News article above). $T^3$ was chosen because $\TeX$ was not available on PCs when work on the book began. It must have been a huge effort to produce this 607-page book in this way!

Acknowledgement: thanks to Christopher Baker and David Silvester for comments on a draft of this post.


## Implicit Expansion: A Powerful New Feature of MATLAB R2016b

The latest release of MATLAB, R2016b, contains a feature called implicit expansion, which is an extension of the scalar expansion that has been part of MATLAB for many years. Scalar expansion is illustrated by

>> A = spiral(2), B = A - 1
A =
1     2
4     3
B =
0     1
3     2


Here, MATLAB subtracts 1 from every element of A, which is equivalent to expanding the scalar 1 into a matrix of ones then subtracting that matrix from A.

Implicit expansion takes this idea further by expanding vectors:

>> A = ones(2), B = A + [1 5]
A =
1     1
1     1
B =
2     6
2     6


Here, the result is the same as if the row vector was replicated along the first dimension to produce the matrix [1 5; 1 5] then that matrix was added to ones(2). In the next example a column vector is added and the replication is across the columns:

>> A = ones(2) + [1 5]'
A =
2     2
6     6


Implicit expansion also works with multidimensional arrays, though we will focus here on matrices and vectors.

So MATLAB now treats “matrix plus vector” as a legal operation. This is a controversial change, as it means that MATLAB now allows computations that are undefined in linear algebra.

Why have MathWorks made this change? A clue is in the R2016b Release Notes, which say

For example, you can calculate the mean of each column in a matrix A, then subtract the vector of mean values from each column with A - mean(A).

This suggests that the motivation is, at least partly, to simplify the coding of manipulations that are common in data science.

Implicit expansion can also be achieved with the function bsxfun that was introduced in release R2007a, though I suspect that few MATLAB users have heard of this function:

>> A = [1 4; 3 2], bsxfun(@minus,A,mean(A))
A =
1     4
3     2
ans =
-1     1
1    -1

>> A - mean(A)
ans =
-1     1
1    -1


Prior to the introduction of bsxfun, the repmat function could be used to explicitly carry out the expansion, though less efficiently and less elegantly:

>> A - repmat(mean(A),size(A,1),1)
ans =
-1     1
1    -1


An application where the new functionality is particularly attractive is multiplication by a diagonal matrix.

>> format short e
>> A = ones(3); d = [1 1e-4 1e-8];
>> A.*d  %  A*diag(d)
ans =
1.0000e+00   1.0000e-04   1.0000e-08
1.0000e+00   1.0000e-04   1.0000e-08
1.0000e+00   1.0000e-04   1.0000e-08
>> A.*d' % diag(d)*A
ans =
1.0000e+00   1.0000e+00   1.0000e+00
1.0000e-04   1.0000e-04   1.0000e-04
1.0000e-08   1.0000e-08   1.0000e-08


The .* expressions are faster than forming and multiplying by diag(d) (as is the syntax bsxfun(@times,A,d)). We can even multiply with the inverse of diag(d) with

>> A./d
ans =
1       10000   100000000
1       10000   100000000
1       10000   100000000


It is now possible to add a column vector to a row vector, or to subtract them:

>> d = (1:3)'; d - d'
ans =
0    -1    -2
1     0    -1
2     1     0


This usage allows very short expressions for forming the Hilbert matrix and Cauchy matrices (look at the source code for hilb.m with type hilb or edit hilb).
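
For example (a sketch; hilb itself uses essentially the same idea), the Hilbert matrix, with $(i,j)$ element $1/(i+j-1)$, and more generally a Cauchy matrix, with $(i,j)$ element $1/(x_i+y_j)$, can be formed by

```matlab
n = 5; j = 1:n;
H = 1./(j' + j - 1);   % Hilbert matrix: h(i,j) = 1/(i+j-1)
x = (1:n)'; y = (0:n-1)';
C = 1./(x + y');       % Cauchy matrix: c(i,j) = 1/(x(i)+y(j))
err = norm(H - hilb(n))   % should be zero
```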

The max and min functions support implicit expansion, so an elegant way to form the matrix $A$ with $(i,j)$ element $\min(i,j)$ is with

d = (1:n); A = min(d,d');


and this is precisely what gallery('minij',n) now does.

Another function that can benefit from implicit expansion is vander, which forms a Vandermonde matrix. Currently the function forms the matrix in three lines, with calls to repmat and cumprod. Instead we can do it as follows, in a formula that is closer to the mathematical definition and hence easier to check.

A = v(:) .^ (n-1:-1:0);   % Equivalent to A = vander(v)


The latter code is, however, slower than the current vander for large dimensions, presumably because exponentiating each element independently is slower than using repeated multiplication.
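A rough way to check this is the following sketch (timings are machine dependent, and n = 2000 is an arbitrary illustrative size):

```matlab
% Rough timing comparison of vander versus implicit expansion.
n = 2000; v = randn(n,1);
tic, A1 = vander(v);           t1 = toc;
tic, A2 = v(:) .^ (n-1:-1:0);  t2 = toc;
fprintf('vander: %.3g s, implicit expansion: %.3g s\n', t1, t2)
norm(A1 - A2, 1)/norm(A1, 1)   % relative difference; should be tiny
```

The two matrices agree only to roundoff, since elementwise exponentiation and cumulative multiplication round differently.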

An obvious objection to implicit expansion is that it could cause havoc in linear algebra courses, where students will be able to carry out operations that the instructor and textbook have said are not allowed. Moreover, it will allow programs with certain mistyped expressions to run that would previously have generated an error, making debugging more difficult.

I can see several responses to this objection. First, MATLAB was already inconsistent with linear algebra in its scalar expansion. When a mathematician writes (with a common abuse of notation) $A - \sigma$, with a scalar $\sigma$, he or she usually means $A - \sigma I$ and not $A - \sigma E$ with $E$ the matrix of ones.
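The distinction is easy to see at the command line (magic(3) is just an illustrative matrix):

```matlab
% Scalar expansion: A - sigma subtracts sigma from every element.
A = magic(3); sigma = 2;
A - sigma            % equals A - sigma*ones(3), not A - sigma*eye(3)
A - sigma*eye(3)     % the shifted matrix A - sigma*I of linear algebra
```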

Second, I have been using the prerelease version of R2016b for a few months, while working on the third edition of MATLAB Guide, and have not encountered any problems caused by implicit expansion—either with existing codes or with new code that I have written.

A third point in favour of implicit expansion is that it is particularly compelling with elementwise operations (those beginning with a dot), as the multiplication by a diagonal matrix above illustrates, and, since such operations are not part of linear algebra, confusion is less likely.

Finally, it is worth noting that implicit expansion fits into the MATLAB philosophy of “useful defaults” or “doing the right thing”, whereby MATLAB makes sensible choices when a user’s request is arguably invalid or not fully specified. This is present in the many functions that have optional arguments. But it can also be seen in examples such as

% No figure is open and no parallel pool is running.
>> close         % Close figure.
>> delete(gcp)   % Shut down parallel pool.


where no error is generated even though there is no figure to close or parallel pool to shut down.

I suspect that people’s reactions to implicit expansion will be polarized: they will either be horrified or will regard it as natural and useful. Now that I have had time to get used to the concept—and especially now that I have seen the benefits both for clarity of code (the minij matrix) and for speed (multiplication by a diagonal matrix)—I like it. It will be interesting to see the community’s reaction.


## Second Edition (2016) of Learning LaTeX by David Griffiths and Des Higham

What is the best way to learn $\LaTeX$? Many free online resources are available, including “getting started” guides, FAQs, references, Wikis, and so on. But in my view you can’t beat using a (physical) book. A book can be read anywhere. You can write notes in it, stick page markers on it, and quickly browse it or look something up in the index.

The first edition of Learning $\LaTeX$ (1997) is a popular introduction to $\LaTeX$ characterized by its brevity, its approach of teaching by example, and its humour. (Full disclosure: the second author is my brother.)

The second edition of the book is 25 percent longer than the first and has several key additions. The amsmath package, particularly of interest for its environments for typesetting multiline equations, is now described. I often struggle to remember the differences between align, alignat, gather, and multline, but three pages of concise examples explain these environments very clearly.
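To illustrate the basic distinction with a minimal sketch: align lines up a group of equations at the & markers, while gather simply centers each line with no alignment.

```latex
\begin{align}
  a + b &= c \\
      d &= e + f
\end{align}

\begin{gather}
  a + b = c \\
  d = e + f
\end{gather}
```

Both environments come from the amsmath package and number each line; the starred forms align* and gather* suppress the numbers.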

The book now reflects the modern PDF workflow, based on pdflatex as the $\TeX$ engine. If you are still generating dvi files you should consider making the switch!

Other features new to this edition are a section on making bibliographies with Bib$\TeX$ and appendices on making slides with Beamer and posters with the a0poster class, both illustrated with complete sample documents.

Importantly for a book to be used for reference, there is an excellent 10.5-page index which, at about 10% of the length of the book, is unusually thorough (see the discussion on index length in my A Call for Better Indexes).

The 1997 first edition was reviewed in TUGboat (the journal of the TeX Users Group) in 2013 by Boris Veytsman, who had only just become aware of the book. He says

When Karl Berry and I discussed the current situation with $\LaTeX$ books for beginners, he mentioned the old text by Griffiths and Higham as an example of the one made “right”…

This is indeed an incredibly good introduction to $\LaTeX$. Even today, when many good books are available for beginners, this one stands out.

No doubt Berry and Veytsman would be even more impressed by the improved and expanded second edition.

Given that I regard myself as an advanced $\LaTeX$ user I was surprised to learn something I didn’t know from the book, namely that in an \includegraphics command it is not necessary to specify the extension of the file being included. If you write \includegraphics{myfig} then pdflatex will search for myfig.png, myfig.pdf, myfig.jpg, and myfig.eps, in that order. If you specify

\DeclareGraphicsExtensions{.pdf,.png,.jpg}


in the preamble, $\LaTeX$ will search for the given extensions in the specified order. I usually save my figures in PDF form, but sometimes a MATLAB figure saved as a jpeg file is much smaller.

This major update of the first edition is beautifully typeset, printed on high quality, bright white paper, and weighs just 280g. It is an excellent guide for those new to $\LaTeX$ and a useful reference for experienced users.


## Corless, Knuth and Lambert W

Attendees at the SIAM Annual Meeting in Boston last month had the opportunity to meet Donald Knuth. He was there to give the John von Neumann lecture, about which I reported at SIAM News.

At the Sunday evening Welcome Reception I captured this photo of Don and Rob Corless (whose graduate textbook on numerical analysis I discussed here).

Don and Rob are co-authors on the classic paper

Robert M. Corless, Gaston H. Gonnet, D. E. G. Hare, David J. Jeffrey, and Donald E. Knuth, On the Lambert $W$ Function, Adv. Comput. Math. 5, 329–359, 1996.

The Lambert W function is a multivalued function $W_k(x)$, with a countably infinite number of branches, defined so that $w = W_k(x)$ solves the equation $we^w = x$. According to Google Scholar this is Don’s most-cited paper. Here is a diagram of the ranges of the branches of $W_k(x)$, together with values of $W_k(1)$ (+), $W_k(10 + 10i)$ (×), and $W_k(-0.1)$ (o).
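If the Symbolic Math Toolbox is available, the defining relation can be checked numerically with its lambertw function, which takes the branch index as its first argument (a minimal sketch):

```matlab
% Check w*exp(w) = x on two branches (requires Symbolic Math Toolbox).
w = lambertw(0, 1)        % W_0(1), the omega constant, approx 0.5671
w*exp(w)                  % recovers x = 1 to roundoff

wm = lambertw(-1, -0.1)   % a real value on the branch k = -1
wm*exp(wm)                % recovers x = -0.1 to roundoff
```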

This is to be compared with the corresponding plot for the logarithm, which consists of horizontal strips of height $2\pi$ with boundaries at odd multiples of $\pi$.

Following the Annual Meeting, Rob ran a conference Celebrating 20 years of the Lambert W function at the University of Western Ontario.

Rob co-authored with David Jeffrey an article on the Lambert W function for the Princeton Companion to Applied Mathematics. The article summarizes the basic theory of the function and some of its many applications, which include delay differential equations. Rob and David note that

The Lambert $W$ function crept into the mathematics literature unobtrusively, and it now seems natural there.