## Sharper Probabilistic Backward Error Bounds

Most backward error bounds for numerical linear algebra algorithms are of the form $nu$, where $u$ is the machine precision and $n$ is the problem size. The dependence of these bounds on $n$ is known to be pessimistic: our recent probabilistic analysis with Nick Higham [SIAM J. Sci. Comput., 41 (2019), pp. A2815–A2835], which models rounding errors as independent random variables of mean zero, proves that $n$ can be replaced by a small multiple of $\sqrt{n}$ with high probability. However, even these smaller bounds can still be pessimistic, as the figure below illustrates.

The figure plots the backward error for the summation (in single precision) of $n$ floating-point numbers sampled from a uniform distribution. For numbers sampled uniformly from $[0,1]$, the bound $\sqrt{n}u$ is almost sharp and accurately predicts the error growth. However, for numbers sampled uniformly from $[-1,1]$, the error is much smaller and seemingly does not grow with $n$. This strong dependence of the backward error on the data cannot be explained by the existing bounds, which do not depend on the values of the data.
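This experiment is easy to reproduce. The sketch below (not the code behind the figure itself) sums $n$ single-precision numbers by recursive summation and measures the backward error against a double-precision reference, for data drawn from $[0,1]$ and from $[-1,1]$:

```python
import numpy as np

def backward_error_sum(x):
    """Backward error of recursive (sequential) summation in single precision."""
    x32 = x.astype(np.float32)
    s = np.float32(0.0)
    for xi in x32:
        s = np.float32(s + xi)  # accumulate in single precision
    exact = x32.astype(np.float64).sum()          # reference sum in double
    return abs(float(s) - exact) / np.abs(x32).astype(np.float64).sum()

rng = np.random.default_rng(0)
n = 100_000
u = 2.0**-24  # unit roundoff in single precision (binary32)

err_pos = backward_error_sum(rng.uniform(0.0, 1.0, n))    # data with mean 1/2
err_zero = backward_error_sum(rng.uniform(-1.0, 1.0, n))  # data with mean 0

print(f"[0,1]  data: backward error = {err_pos:.2e}  (sqrt(n)*u = {np.sqrt(n)*u:.2e})")
print(f"[-1,1] data: backward error = {err_zero:.2e}  (u = {u:.2e})")
```

With this setup the $[0,1]$ error sits near the $\sqrt{n}u$ scale, while the $[-1,1]$ error stays near the level of $u$ itself, mirroring the figure.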

In our recent preprint, we perform a new probabilistic analysis that combines a probabilistic model of the rounding errors with a second probabilistic model of the data. Our analysis reveals a strong dependence of the backward error on the mean $\mu$ of the data: indeed, our new backward error bounds are proportional to $\mu\sqrt{n}u + u$. Therefore, for data with small or zero mean, these new bounds are much sharper: they bound the backward error by a small multiple of the machine precision, independent of the problem size $n$.
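To see the gap concretely, here is the bound $\mu\sqrt{n}u + u$ evaluated for the two distributions in the figure, with the leading constant taken to be 1 purely for illustration:

```python
import numpy as np

u = 2.0**-24  # unit roundoff in single precision
n = 100_000

# Evaluate mu*sqrt(n)*u + u for uniform data on [0,1] (mu = 1/2)
# and on [-1,1] (mu = 0); constant set to 1 for illustration only.
bounds = {}
for mu, name in [(0.5, "uniform on [0,1]"), (0.0, "uniform on [-1,1]")]:
    bounds[name] = mu * np.sqrt(n) * u + u
    print(f"{name}: mu = {mu}, bound = {bounds[name]:.1e}")
```

For zero-mean data the $\mu\sqrt{n}u$ term vanishes and the bound collapses to $u$, about two orders of magnitude below the $[0,1]$ bound at this value of $n$.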

Motivated by this observation, we also propose new algorithms that transform the data to have zero mean, so as to benefit from these more favorable bounds. We implement this idea for matrix multiplication and show that our new algorithm can produce significantly more accurate results than standard matrix multiplication.