Sharper Probabilistic Backward Error Bounds
Most backward error bounds for numerical linear algebra algorithms are of the form $nu$, for a machine precision $u$ and a problem size $n$. The dependence on $n$ of these bounds is known to be pessimistic: together with Nick Higham, our recent probabilistic analysis [SIAM J. Sci. Comput., 41 (2019), pp. A2815–A2835], which assumes rounding errors to be independent random variables of mean zero, proves that $n$ can be replaced by a small multiple of $\sqrt{n}$ with high probability. However, even these smaller bounds can still be pessimistic, as the figure below illustrates.
The figure plots the backward error for summation (in single precision) of $n$ floating-point numbers randomly sampled from a uniform distribution. For numbers in the $[0,1]$ distribution, the $\sqrt{n}u$ bound is almost sharp and accurately predicts the error growth. However, for the $[-1,1]$ distribution, the error is much smaller, seemingly not growing with $n$. This strong dependence of the backward error on the data cannot be explained by the existing bounds, which do not depend on the values of the data.
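This experiment is easy to reproduce. The following sketch (not the code used for the figure; the recursive summation loop, the seed, and the sample sizes are illustrative choices) computes the backward error $|\hat{s} - s| / \sum_i |x_i|$ of single-precision recursive summation for both distributions and compares it with the $\sqrt{n}u$ bound:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed

def backward_error(x32):
    # Backward error of summation: |computed - exact| / sum(|x_i|).
    s_hat = np.float32(0.0)
    for xi in x32:  # recursive summation, accumulating in single precision
        s_hat = s_hat + xi
    s_exact = np.sum(x32, dtype=np.float64)  # accurate reference sum
    return abs(float(s_hat) - s_exact) / np.sum(np.abs(x32), dtype=np.float64)

u = 2.0**-24  # unit roundoff in IEEE single precision
for n in (10**4, 10**5, 10**6):
    x01 = rng.uniform(0.0, 1.0, n).astype(np.float32)
    x11 = rng.uniform(-1.0, 1.0, n).astype(np.float32)
    print(f"n={n:>8}  sqrt(n)*u={np.sqrt(n)*u:.1e}  "
          f"[0,1]: {backward_error(x01):.1e}  [-1,1]: {backward_error(x11):.1e}")
```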
In our recent preprint, we perform a new probabilistic analysis that combines a probabilistic model of the rounding errors with a second probabilistic model of the data. Our analysis reveals a strong dependence of the backward error on the mean of the data $\mu$: indeed, our new backward error bounds are proportional to $|\mu|\sqrt{n}u + u$. Therefore, for data with small or zero mean, these new bounds are much sharper as they bound the backward error by a small multiple of the machine precision $u$, independent of the problem size $n$.
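To put these magnitudes in perspective (illustrative numbers, not taken from the preprint): in single precision, $u = 2^{-24} \approx 6\times 10^{-8}$, so for $n = 10^6$ the worst-case bound $nu$ is about $6\times 10^{-2}$ and the probabilistic bound $\sqrt{n}u$ about $6\times 10^{-5}$, whereas for zero-mean data the new bound remains a small multiple of $6\times 10^{-8}$, regardless of $n$.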
Motivated by this observation, we also propose new algorithms that transform the data to have zero mean, so as to benefit from these more favorable bounds. We implement this idea for matrix multiplication and show that our new algorithm can produce significantly more accurate results than standard matrix multiplication.
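To illustrate the zero-mean transformation, here is a minimal sketch under assumptions of our own (a single global mean shift per matrix, with rank-one corrections recovered from row and column sums; the algorithm in the preprint may differ in its details). Writing $A = \tilde{A} + \mu_A ee^T$ and $B = \tilde{B} + \mu_B ee^T$ gives $AB = \tilde{A}\tilde{B} + \mu_A e(e^T\tilde{B}) + \mu_B(\tilde{A}e)e^T + n\mu_A\mu_B ee^T$, so the dominant $O(n^3)$ product is performed on zero-mean data while the corrections cost only $O(n^2)$:

```python
import numpy as np

def matmul_zero_mean(A, B):
    # Shift A and B to (near) zero mean, multiply the shifted matrices,
    # then add back rank-one corrections from row/column sums (O(n^2) cost).
    n = A.shape[1]
    mu_A = A.mean(dtype=np.float32)
    mu_B = B.mean(dtype=np.float32)
    At = A - mu_A                 # zero-mean data: favorable error bound
    Bt = B - mu_B
    C = At @ Bt                   # dominant O(n^3) cost on shifted data
    C += mu_A * Bt.sum(axis=0)                 # mu_A * e (e^T Bt)
    C += mu_B * At.sum(axis=1, keepdims=True)  # mu_B * (At e) e^T
    C += np.float32(n) * mu_A * mu_B           # n * mu_A * mu_B * e e^T
    return C

rng = np.random.default_rng(0)  # arbitrary seed
n = 2000
A = rng.uniform(0.0, 1.0, (n, n)).astype(np.float32)
B = rng.uniform(0.0, 1.0, (n, n)).astype(np.float32)
C_ref = A.astype(np.float64) @ B.astype(np.float64)  # accurate reference
err_std = np.max(np.abs(A @ B - C_ref) / np.abs(C_ref))
err_new = np.max(np.abs(matmul_zero_mean(A, B) - C_ref) / np.abs(C_ref))
print(f"standard: {err_std:.1e}   zero-mean shifted: {err_new:.1e}")
```

In a quick test of this kind with $[0,1]$ data, the shifted product tends to be noticeably more accurate than the standard one, though the exact gains depend on the data and on how the underlying BLAS accumulates the float32 product.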