A Class of Fast and Accurate Summation Algorithms
Summing n numbers is a key computational task at the heart of many numerical algorithms. When performed in floating-point arithmetic, summation is subject to rounding errors: for a machine precision u, the error bound for the most basic summation algorithms, such as recursive summation, is proportional to nu.
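To make the growth of the error with n concrete, here is a minimal Python sketch (NumPy assumed) comparing basic recursive summation in single precision against a double-precision reference; the function name `recursive_sum` and the random test data are illustrative, not from the paper.

```python
import numpy as np

def recursive_sum(x):
    # Basic recursive (left-to-right) summation in the working
    # precision of x; each addition can incur a rounding error of
    # up to the unit roundoff u, so the worst-case bound grows like nu.
    s = x.dtype.type(0)
    for xi in x:
        s = s + xi
    return s

rng = np.random.default_rng(0)
x = rng.random(10**5).astype(np.float32)    # n = 10^5, u ~ 6e-8 in single

exact = float(np.sum(x, dtype=np.float64))  # double-precision reference
err = abs(float(recursive_sum(x)) - exact) / exact
# err stays below the worst-case bound nu (~6e-3 here) but typically
# grows with n, which is the behaviour FABsum is designed to avoid.
```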
Nowadays, with the growing interest in low floating-point precisions and ever increasing n in applications, such error bounds have become unacceptably large. While summation algorithms with smaller error bounds are known (compensated summation is an example), they are computationally expensive.
In our recent preprint, Pierre Blanchard, Nick Higham, and I propose a class of fast and accurate summation algorithms called FABsum. We show that FABsum has an error bound of the form bu + O(u^2), where b is the block size, which is independent of n to first order. As illustrated by the figure below, which plots the measured error in single precision as a function of n, FABsum can deliver substantially more accurate results than recursive summation. Moreover, FABsum can easily be incorporated into high-performance numerical linear algebra kernels to boost accuracy with only a modest drop in performance, as we demonstrate in the paper with the PLASMA library.
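As a rough illustration of the blocked idea (a sketch, not the paper's implementation), the following Python code sums each block of size b with a fast method (NumPy's `np.sum` stands in for the fast algorithm) and combines the block sums with compensated (Kahan) summation as the accurate algorithm; all function names here are illustrative.

```python
import numpy as np

def kahan_sum(values):
    # Compensated (Kahan) summation in single precision: plays the
    # role of the accurate algorithm applied across the block sums.
    s = np.float32(0.0)
    c = np.float32(0.0)  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

def fab_sum(x, b=32):
    # FABsum-style blocked summation (sketch): sum each block of size
    # b with a fast in-block method, then combine the n/b block sums
    # with an accurate method. The in-block errors dominate, giving a
    # bound of order bu, independent of n to first order.
    block_sums = [np.float32(np.sum(x[i:i + b])) for i in range(0, len(x), b)]
    return kahan_sum(block_sums)

rng = np.random.default_rng(0)
x = rng.random(10**5).astype(np.float32)
exact = float(np.sum(x, dtype=np.float64))  # double-precision reference
err = abs(float(fab_sum(x)) - exact) / exact
```

The block size b trades accuracy for speed: a larger b means more of the work is done by the fast in-block sums, while a smaller b pushes more work into the accurate (but slower) combining step.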