The relative error in a scalar $y$ as an approximation to a scalar $x$ is the absolute value of $e = (x-y)/x$. I recently came across a program in which $e$ had been computed as $1 - y/x$. It had never occurred to me to compute it this way. The second version is slightly easier to type, requiring no parentheses, and it has the same cost of evaluation: one division and one subtraction. Is there any reason not to use this parenthesis-free expression?
Consider the accuracy of the evaluation, using the standard model of floating point arithmetic, which says that $fl(a\,\mathrm{op}\,b) = (a\,\mathrm{op}\,b)(1+\delta)$ with $|\delta| \le u$, where $\mathrm{op}$ is any one of the four elementary arithmetic operations and $u$ is the unit roundoff. For the expression $(x-y)/x$ we obtain, with a hat denoting a computed quantity,

$$\widehat{e} = \frac{(x-y)(1+\delta_1)}{x}(1+\delta_2) = e(1+\delta_1)(1+\delta_2).$$
It follows that

$$\left|\frac{\widehat{e} - e}{e}\right| \le 2u + u^2.$$
Hence $e$ is computed very accurately.
For the alternative expression, $1 - y/x$, we have

$$\widehat{e} = \Bigl(1 - \frac{y}{x}(1+\delta_1)\Bigr)(1+\delta_2) = e(1+\delta_2) - \frac{y}{x}\delta_1(1+\delta_2).$$
After a little manipulation we obtain the bound

$$\left|\frac{\widehat{e} - e}{e}\right| \le u + (u + u^2)\left|\frac{y}{x-y}\right|.$$
The bound on the relative error in $\widehat{e}$ is of order $u|y/(x-y)|$, and hence is very large when $y$ is close to $x$.
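This behavior is easy to observe directly. The following Python sketch is a double precision analogue (unit roundoff $u = 2^{-53}$); the choice $x = 3$ is a hypothetical value picked for illustration, not one from the original experiment. We move $y$ a few ulps away from $x$ and evaluate both formulas:

```python
import math

# Double precision illustration of the two formulas for the relative error.
# x = 3 is a hypothetical choice; any x that is not a power of two works.
x = 3.0
ulp = math.ulp(x)  # spacing of adjacent doubles at x (2**-51 for x = 3)

print(f"{'n':>4} {'(x-y)/x':>14} {'1 - y/x':>14}")
for n in [1, 2, 5, 10]:
    y = x + n * ulp      # y differs from x by exactly n ulps
    e1 = (x - y) / x     # standard formula: x - y is computed exactly here,
                         # so only the division rounds (relative error <= u)
    e2 = 1.0 - y / x     # parenthesis-free formula: y/x rounds with absolute
                         # error up to u, which is enormous relative to e
                         # when y is close to x
    print(f"{n:4d} {e1:14.6e} {e2:14.6e}")
```

For $n = 1$ the two computed values already disagree in the first significant digit, exactly as the $u|y/(x-y)|$ term in the bound predicts.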
To check these bounds we carried out a MATLAB experiment. For 500 single precision floating point numbers $y$ centered on $x$, we evaluated the relative error of $y$ as an approximation to $x$ using the two formulas. The results are shown in this figure, where an ideal error is of order $u \approx 6 \times 10^{-8}$. (The MATLAB script that generates the figure is available as this gist.)
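For readers without MATLAB, a rough analogue of the experiment can be sketched in Python. This is not the original script: it works in double precision rather than single, the center $x = 3$ is an assumption of the sketch, and the reference values are obtained with exact rational arithmetic instead of a higher-precision computation.

```python
import math
from fractions import Fraction

x = 3.0              # hypothetical center (the original experiment used single precision)
u = 2.0 ** -53       # double precision unit roundoff
ulp = math.ulp(x)    # spacing of adjacent doubles at x

worst1 = worst2 = 0.0
for k in range(-250, 251):   # 500 values of y centered on x
    if k == 0:
        continue             # skip y == x, where the exact error is zero
    y = x + k * ulp          # exact: k*ulp is representable and in range
    # Exact relative error, in rational arithmetic (floats convert to
    # Fraction exactly, so the reference value involves no rounding).
    e = (Fraction(x) - Fraction(y)) / Fraction(x)
    err1 = abs(Fraction((x - y) / x) - e) / abs(e)
    err2 = abs(Fraction(1.0 - y / x) - e) / abs(e)
    worst1 = max(worst1, float(err1))
    worst2 = max(worst2, float(err2))

print(f"worst relative error, (x-y)/x: {worst1 / u:10.2f} * u")
print(f"worst relative error, 1 - y/x: {worst2 / u:10.2e} * u")
```

The first formula stays within a couple of units roundoff throughout, while the second loses essentially all accuracy for the values of $y$ closest to $x$, mirroring the figure.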
As expected from the error bounds, the formula $1 - y/x$ is very inaccurate when $y$ is close to $x$, whereas $(x-y)/x$ retains its accuracy as $y$ approaches $x$.
Does this inaccuracy matter? Usually, we are concerned only with the order of magnitude of an error and do not require an approximation with many correct significant figures. However, as the figure shows, for the formula $1 - y/x$ even the order of magnitude is incorrect for $y$ very close to $x$. The standard formula $(x-y)/x$ should be preferred.