The latest release of MATLAB, R2016b, contains a feature called implicit expansion, which is an extension of the scalar expansion that has been part of MATLAB for many years. Scalar expansion is illustrated by

```
>> A = spiral(2), B = A - 1
A =
     1     2
     4     3
B =
     0     1
     3     2
```

Here, MATLAB subtracts 1 from every element of `A`, which is equivalent to expanding the scalar 1 into a matrix of ones and then subtracting that matrix from `A`.

Implicit expansion takes this idea further by expanding vectors:

```
>> A = ones(2), B = A + [1 5]
A =
     1     1
     1     1
B =
     2     6
     2     6
```

Here, the result is the same as if the row vector were replicated along the first dimension to produce the matrix `[1 5; 1 5]` and that matrix then added to `ones(2)`. In the next example a column vector is added and the replication is across the columns:

```
>> A = ones(2) + [1 5]'
A =
     2     2
     6     6
```

Implicit expansion also works with multidimensional arrays, though we will focus here on matrices and vectors.
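The rule that implicit expansion follows (in MATLAB, and in systems that call the idea broadcasting) is that in each dimension the two operands' sizes must either agree or one of them must be 1, with missing trailing dimensions treated as size 1. As a rough, NumPy-free Python sketch of that compatibility test (the helper name `compatible` is my own, not a MATLAB function):

```python
def compatible(shape_a, shape_b):
    """Return True if two array shapes are compatible for
    implicit expansion: in each dimension the sizes must be
    equal, or one of them must be 1. Missing trailing
    dimensions are treated as size 1 (as in MATLAB)."""
    n = max(len(shape_a), len(shape_b))
    # Pad the shorter shape with trailing singleton dimensions.
    a = tuple(shape_a) + (1,) * (n - len(shape_a))
    b = tuple(shape_b) + (1,) * (n - len(shape_b))
    return all(p == q or p == 1 or q == 1 for p, q in zip(a, b))

# A 2x2 matrix plus a 1x2 row vector: sizes 2 vs 1 and 2 vs 2.
print(compatible((2, 2), (1, 2)))  # True
# A 2x3 matrix plus a 1x2 row vector: 3 vs 2 clash in dimension 2.
print(compatible((2, 3), (1, 2)))  # False
```

Under this rule a column vector and a row vector, such as shapes `(5, 1)` and `(1, 5)`, are compatible and expand to a full matrix.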

So MATLAB now treats “matrix plus vector” as a legal operation. This is a controversial change, as it means that MATLAB now allows computations that are undefined in linear algebra.

Why have MathWorks made this change? A clue is in the R2016b Release Notes, which say

For example, you can calculate the mean of each column in a matrix `A`, then subtract the vector of mean values from each column with `A - mean(A)`.

This suggests that the motivation is, at least partly, to simplify the coding of manipulations that are common in data science.
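To make the Release Notes idiom concrete, here is a plain Python sketch (no NumPy; the function name `center_columns` is my own) of what the expanded subtraction `A - mean(A)` computes:

```python
def center_columns(A):
    """Subtract each column's mean from that column, mimicking
    MATLAB's A - mean(A) under implicit expansion, where the
    1-by-n row of column means is replicated down the rows."""
    ncols = len(A[0])
    means = [sum(row[j] for row in A) / len(A) for j in range(ncols)]
    return [[row[j] - means[j] for j in range(ncols)] for row in A]

print(center_columns([[1, 4], [3, 2]]))  # [[-1.0, 1.0], [1.0, -1.0]]
```

Each column of the result has mean zero, which is the usual first step in centering data.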

Implicit expansion can also be achieved with the function `bsxfun`, introduced in release R2007a, though I suspect that few MATLAB users have heard of this function:

```
>> A = [1 4; 3 2], bsxfun(@minus,A,mean(A))
A =
     1     4
     3     2
ans =
    -1     1
     1    -1
>> A - mean(A)
ans =
    -1     1
     1    -1
```

Prior to the introduction of `bsxfun`, the `repmat` function could be used to carry out the expansion explicitly, though less efficiently and less elegantly:

```
>> A - repmat(mean(A),size(A,1),1)
ans =
    -1     1
     1    -1
```

An application where the new functionality is particularly attractive is multiplication by a diagonal matrix.

```
>> format short e
>> A = ones(3); d = [1 1e-4 1e-8];
>> A.*d  % A*diag(d)
ans =
   1.0000e+00   1.0000e-04   1.0000e-08
   1.0000e+00   1.0000e-04   1.0000e-08
   1.0000e+00   1.0000e-04   1.0000e-08
>> A.*d' % diag(d)*A
ans =
   1.0000e+00   1.0000e+00   1.0000e+00
   1.0000e-04   1.0000e-04   1.0000e-04
   1.0000e-08   1.0000e-08   1.0000e-08
```

The `.*` expressions are faster than forming and multiplying by `diag(d)` (as is the syntax `bsxfun(@times,A,d)`). We can even multiply by the inverse of `diag(d)` with

```
>> A./d
ans =
           1       10000   100000000
           1       10000   100000000
           1       10000   100000000
```

It is now possible to add a column vector to a row vector, or to subtract them:

```
>> d = (1:3)'; d - d'
ans =
     0    -1    -2
     1     0    -1
     2     1     0
```

This usage allows very short expressions for forming the Hilbert matrix and Cauchy matrices (look at the source code for `hilb.m` with `type hilb` or `edit hilb`).

The `max` and `min` functions support implicit expansion, so an elegant way to form the matrix with `(i,j)` element `min(i,j)` is with

```
d = (1:n); A = min(d,d');
```

and this is precisely what `gallery('minij',n)` now does.

Another function that can benefit from implicit expansion is `vander`, which forms a Vandermonde matrix. Currently the function forms the matrix in three lines, with calls to `repmat` and `cumprod`. Instead we can do it as follows, in a formula that is closer to the mathematical definition and hence easier to check.

```
A = v(:).^(n-1:-1:0); % Equivalent to A = vander(v)
```

The latter code is, however, slower than the current `vander` for large dimensions, presumably because exponentiating each element independently is slower than using repeated multiplication.
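To illustrate the difference between the two constructions, here is a hedged, plain-Python sketch (the function names are my own): a cumulative product builds each successive power with a single multiplication, while the implicit-expansion formula computes each power independently.

```python
from itertools import accumulate
from operator import mul

def powers_by_cumprod(x, n):
    """Powers x^(n-1), ..., x^1, x^0 built by repeated
    multiplication (one multiply per new entry), the idea
    behind a cumprod-based Vandermonde construction."""
    ps = [1] + list(accumulate([x] * (n - 1), mul))  # x^0, ..., x^(n-1)
    return ps[::-1]

def powers_by_exponent(x, n):
    """The same powers computed by independent exponentiation,
    as in the one-line implicit-expansion formula."""
    return [x ** k for k in range(n - 1, -1, -1)]

print(powers_by_cumprod(3, 4))   # [27, 9, 3, 1]
print(powers_by_exponent(3, 4))  # [27, 9, 3, 1]
```

Both give the same row of the Vandermonde matrix; the cumulative-product version does less arithmetic per entry, which is consistent with the timing difference noted above.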

An obvious objection to implicit expansion is that it could cause havoc in linear algebra courses, where students will be able to carry out operations that the instructor and textbook have said are not allowed. Moreover, it will allow programs with certain mistyped expressions to run that would previously have generated an error, making debugging more difficult.

I can see several responses to this objection. First, MATLAB was already inconsistent with linear algebra in its scalar expansion. When a mathematician writes (with a common abuse of notation) `A - s`, with a scalar `s`, he or she usually means `A - s*I`, with `I` the identity matrix, and not `A - s*E` with `E` the matrix of ones.

Second, I have been using the prerelease version of R2016b for a few months, while working on the third edition of MATLAB Guide, and have not encountered any problems caused by implicit expansion—either with existing codes or with new code that I have written.

A third point in favour of implicit expansion is that it is particularly compelling with elementwise operations (those beginning with a dot), as the multiplication by a diagonal matrix above illustrates, and since such operations are not a part of linear algebra, confusion is less likely.

Finally, it is worth noting that implicit expansion fits into the MATLAB philosophy of “useful defaults” or “doing the right thing”, whereby MATLAB makes sensible choices when a user’s request is arguably invalid or not fully specified. This is present in the many functions that have optional arguments. But it can also be seen in examples such as

```
% No figure is open and no parallel pool is running.
>> close        % Close figure.
>> delete(gcp)  % Shut down parallel pool.
```

where no error is generated even though there is no figure to close or parallel pool to shut down.

I suspect that people’s reactions to implicit expansion will be polarized: they will either be horrified or will regard it as natural and useful. Now that I have had time to get used to the concept—and especially now that I have seen the benefits both for clarity of code (the `minij` matrix) and for speed (multiplication by a diagonal matrix)—I like it. It will be interesting to see the community’s reaction.

What you’ve called “implicit expansion” is often called “broadcasting.” It’s in many systems, including GNU Octave since 3.6.0 (released in 2012, iirc).

Thanks for the excellent description, including both pros and cons. I am definitely pro. I have been advocating for the addition of this feature to MATLAB for a long time.

— Cleve

I’ll continue using `bsxfun` because it shows clearly that I am using a dimension expansion (and it is not a “normal” linear algebra expression, so you don’t have to know the variables’ dimensions all the time).

It would have been better to allow optional aliases like ‘,*’ so that ‘a ,* b’ is evaluated as ‘a * b’, that way, it satisfies users who want to clearly indicate that they are doing dimension expansion.

See also the post by Steve Eddins at MathWorks blogs: http://blogs.mathworks.com/loren/2016/10/24/matlab-arithmetic-expands-in-r2016b/

I am definitely pro, but I think that some extra suffix for operations with dimension expansion is extremely important. Why not use “A,*B”, as Alex and many, many others on MATLAB-oriented forums have suggested?

Also worth reading for further insight into the thinking behind the introduction of implicit expansion is Loren Shure’s blog post http://blogs.mathworks.com/loren/2016/11/10/more_thoughts_about_implicit_expansion/

This is a horribly misguided attempt to make a few operations more efficient or, more likely, just easier to write. So for convenience we are making it much harder to debug Matlab programs. It is easy to mix together incompatible vectors when working in three-dimensional spaces (gradients versus tangent vectors, if you will). Adding a gradient (row vector) to a column vector does not make sense. However, with release R2016b the semantics changed and now this is not reported as an error. Consequently, when debugging, code like this is not necessarily going to come up as an error. For instance, norm(a+b) will blithely be computed incorrectly when the matrices (vectors) a, b are incompatible. Hard to debug. So the Matlab language was made harder to debug in the name of the convenience of a few? Where is the logic in that?

It gets a lot worse:

Many people, I’m sure, have written a function that assumed an input matrix to be supplied in a certain orientation. If the orientation (dimensions) was wrong, such as a column vector substituted for a row vector, the code would have raised an error. Easy for someone using the function to see that they made a mistake supplying the wrong input.

Now with implicit expansion, all of these error checks are no longer functional. Have you supplied the correct input? Who knows, if the code appears to work but internally does the wrong thing?

So now I have to go and inspect every single function I want to use to check that my input is going to be used in the right way? (Remember, even documentation can be wrong. There will be no substitute for looking at the code itself.) This is really bad news to anyone who uses nontrivial functions written by someone else.

I hope MathWorks is looking at these comments and taking note. I’m surprised they haven’t thought of the thousands of users that will be affected by this bad decision!

Agreed with Petr Krysal. This takes one of Matlab’s most frequent coding bugs and hides it all away. What would before have been easy to debug, because the program crashes, can now go completely unnoticed.

This seems like a bad decision on Mathworks part. There should have been some requirement to express intent. Perhaps the original author is right that Matlab wasn’t perfectly consistent with scalar vs. array operations, but that’s no excuse to make things worse.

I agree with Kenz Dale and Petr Krystal. It will hide away errors in coding. Just makes my work harder….

I think these extensions are certainly useful. However, given that so many people seem to have depended on the errors raised by mismatching dimensions in earlier versions of Matlab, I wonder why there isn’t at least a warning when such expansion happens. Ideally, such a warning could exist for a few subsequent versions of Matlab to alert old users to the change, and could be optionally switched off once someone has gotten used to the new feature. This would help old-timers in the transition and warn of bugs! I really don’t get why TMW didn’t implement this.

There’s something I don’t get, however, regarding division:

```
ones(5,1)/ones(1,5)   % error
ones(1,5)/ones(5,1)   % error
ones(1,5)/ones(1,5)   % = 1
ones(5,1)/ones(5,1)   % = repmat([1 0 0 0 0],[5 1])
1./ones(1,5)          % = [1 1 1 1 1]
1./ones(5,1)          % = [1 1 1 1 1]'
1/ones(1,5)           % error: matrix dimensions should agree
1/ones(5,1)           % = [1 0 0 0 0] ???
```

I don’t see consistency in this system. Has it been always like this (just me not being aware) or is this part of the new feature? Could you explain to me these results, please, especially where the last one came from?

I’ve read several of the linked pages in this article and found comments which explain why warnings were not possible due to performance issues. I’m not a fan of the current way implicit expansion was implemented, but I guess this is what we’ll have to live with now.

I still don’t understand why division works like it does in my above examples. I wouldn’t want to go off-topic so if my problem is unrelated to implicit expansion, I’d appreciate if you could point me in the right direction where to find more info on this.

There’s a good reason behind all of these examples. Implicit expansion of “A op B” only applies if one of the dimensions of A and B matches, which is not the case in ones(5,1)/ones(1,5) or ones(1,5)/ones(5,1). The other examples all follow from the definitions of backslash and forward slash. For example, the penultimate example gives an error because it is asking for a solution of the linear system X*[1 1 1 1 1] = 1, which has mismatching dimensions so does not make sense.

Thanks, Nick, it actually makes sense. And thanks again for explaining despite it not being on-topic.