by Sven Hammarling, Nick Higham, and Françoise Tisseur

Jack Dongarra

July 18, 2020 is the 70th birthday of Professor Jack Dongarra, who holds appointments at the University of Tennessee, Oak Ridge National Laboratory, and the University of Manchester.

Jack has made seminal contributions to algorithms for numerical linear algebra and the design and development of high performance mathematical software for machines ranging from workstations to the largest parallel computers. His recent honours include election as a Foreign Member of the Royal Society and receipt of the SIAM/ACM Prize in Computational Science and Engineering (2019) and the IEEE Computer Society Computer Pioneer Award (2020).

Most backward error bounds for numerical linear algebra algorithms are of the form nu, for a machine precision u and a problem size n. The dependence on n of these bounds is known to be pessimistic: together with Nick Higham, our recent probabilistic analysis [SIAM J. Sci. Comput., 41 (2019), pp. A2815–A2835], which assumes rounding errors to be independent random variables of mean zero, proves that nu can be replaced by a small multiple of √n u with high probability. However, even these smaller bounds can still be pessimistic, as the figure below illustrates.

The figure plots the backward error for the summation (in single precision) of n floating-point numbers randomly sampled from a uniform distribution. For numbers from the uniform [0,1] distribution, the √n u bound is almost sharp and accurately predicts the error growth. However, for the uniform [-1,1] distribution, the error is much smaller, seemingly not growing with n. This strong dependence of the backward error on the data cannot be explained by the existing bounds, which do not depend on the values of the data.
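The experiment behind the figure can be reproduced with a short sketch (sequential single-precision summation; the sample size and seed here are illustrative choices, not those of the original experiment):

```python
import numpy as np

def recursive_sum_f32(x):
    """Sum x sequentially in single precision (recursive summation)."""
    s = np.float32(0.0)
    for xi in x:
        s += xi  # float32 + float32 stays in single precision
    return float(s)

rng = np.random.default_rng(0)
n = 10**6
for low in (0.0, -1.0):
    x = rng.uniform(low, 1.0, size=n).astype(np.float32)
    exact = np.sum(x, dtype=np.float64)          # reference sum in double
    backward_err = abs(recursive_sum_f32(x) - exact) / np.sum(np.abs(x), dtype=np.float64)
    print(f"uniform[{low:g},1]: backward error = {backward_err:.1e}")
```

For the uniform [0,1] data the printed error grows with n roughly as the probabilistic bound predicts, while for the zero-mean uniform [-1,1] data it stays near the single-precision unit roundoff.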

In our recent preprint, we perform a new probabilistic analysis that combines a probabilistic model of the rounding errors with a second probabilistic model of the data. Our analysis reveals a strong dependence of the backward error on the mean μ of the data: indeed, our new backward error bounds are proportional to |μ|. Therefore, for data with small or zero mean, these new bounds are much sharper, as they bound the backward error by a small multiple of the machine precision u independent of the problem size n.

Motivated by this observation, we also propose new algorithms that transform the data to have zero mean, so as to benefit from these more favorable bounds. We implement this idea for matrix multiplication and show that our new algorithm can produce significantly more accurate results than standard matrix multiplication.
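The mean-subtraction idea can be sketched as follows (a simplified illustration of the principle, not the authors' exact algorithm from the preprint): split each matrix into a zero-mean part plus a constant shift, so that the O(n^3) product involves only zero-mean data, and restore the exact result with cheap rank-one corrections.

```python
import numpy as np

def zero_mean_matmul(A, B):
    """Multiply A and B after shifting each to zero mean.

    Writes A = A0 + mu_A * ones and B = B0 + mu_B * ones, so the dominant
    product A0 @ B0 involves only zero-mean data; the remaining terms are
    cheap rank-one corrections.
    """
    mu_A, mu_B = A.mean(), B.mean()
    A0, B0 = A - mu_A, B - mu_B
    n = A.shape[1]                              # inner dimension
    colsum_B0 = B0.sum(axis=0, keepdims=True)   # 1 x p, from mu_A * ones @ B0
    rowsum_A0 = A0.sum(axis=1, keepdims=True)   # m x 1, from mu_B * A0 @ ones
    return A0 @ B0 + mu_A * colsum_B0 + mu_B * rowsum_A0 + mu_A * mu_B * n
```

In exact arithmetic this returns A @ B; the point is that in floating-point the dominant A0 @ B0 term now operates on zero-mean data and so benefits from the sharper bounds.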

Professors Jack Dongarra and Nick Higham, together with Dr Laura Grigori (Inria Paris), have edited the issue Numerical Algorithms for High-Performance Computational Science of the journal Philosophical Transactions of the Royal Society A. The issue is now available online.

The issue contains papers from a Discussion meeting of the same title organized at the Royal Society in April 2019. A report on that meeting, along with photos from it, is available here. The content of the issue, with links to the papers, is as follows.

Professor Jack Dongarra, a member of the Manchester Numerical Linear Algebra Group who also holds appointments at the University of Tennessee and Oak Ridge National Laboratory, has been named as recipient of the IEEE Computer Society’s 2020 Computer Pioneer Award.

The award is given for significant contributions to early concepts and developments in the electronic computer field that have clearly advanced the state-of-the-art in computing. Dongarra is being recognized “for leadership in the area of high-performance mathematical software.”

Dongarra will receive his award at the Computer Society’s annual awards dinner and presentation to be held on Wednesday 27 May 2020 at the Hilton McLean Tysons Corner during the IEEE Computer Society Board of Governors meeting. The award consists of a silver medal and an invitation to speak at the award presentation.

This article is based on an IEEE news release. Further information about the award is available here.

Several members of the group attended the SIAM UKIE Section Meeting held at the University of Edinburgh on Friday January 10, 2020. Françoise Tisseur, President of the Section and one of the co-organizers, chaired the morning session.

Congratulations to Sri and Michael, who won the first and second best poster prizes, respectively. Each received a cheque for £75, and Sri also received a copy of the book 50 Visions of Mathematics.

Embedded systems are based on low-power, low-performance processors and can be found in medical devices, smart watches, communication devices, cars, planes, mobile phones and many other places. These systems come as a hardware and software package optimized for specific computational tasks, most commonly with real-time constraints. Since they usually have energy and cost constraints too, sophisticated numerical hardware that can process floating-point data is not included; only integer arithmetic is provided, which is simpler in terms of processor area and power.

ISO/IEC TR 18037:2008 is a standard for embedded C programming language support. It lays out various rules that C compilers should support to make embedded systems easier to program in a high-level language. Among its most important definitions are fixed-point arithmetic data types and operations. Support for fixed-point arithmetic is highly desirable: without it, integers with scaling factors have to be used, which makes code hard to maintain and debug and most commonly requires assembler-level changes or completely new implementations for each platform.

The GCC compiler provides some support for the fixed-point arithmetic defined in this standard for ARM processors. However, in my recent technical report (https://arxiv.org/abs/2001.01496) I demonstrated various numerical pitfalls that programmers of embedded systems based on ARM and using GCC can run into. The issues demonstrated include

errors larger than half the machine epsilon in rounding decimal constants to fixed-point data types,

errors in conversions between different data types,

incorrect pre-rounding of arguments of mixed-format arithmetic operations before the operation is performed, and

lack of rounding of the outputs of arithmetic operations.

These findings can be used to improve the accuracy of various embedded numerical libraries that might be using this compiler. To demonstrate one of the issues, here is a piece of test code:
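(The original listing is not reproduced in this extract; the following is a reconstruction consistent with the description below. It assumes GCC's stdfix.h fixed-point types and constant suffixes, compiles only for targets that support ISO/IEC TR 18037 fixed-point types, such as with arm-none-eabi-gcc, and the constant values are illustrative.)

```c
#include <stdfix.h>
#include <stdio.h>

int main(void) {
    unsigned long fract a = 0.00000095ulr; /* ~2^-20: tiny but strictly positive */
    accum b = 2.5k;
    accum c = 1.0k;
    if (a > 0 && b > 1) {
        /* Mixed-format multiply of unsigned long fract by accum: GCC
           pre-rounds a to accum precision, which flushes it to 0. */
        c = a * b;
    }
    printf("%f\n", (double)c); /* expected nonzero, but prints 0.000000 */
    return 0;
}
```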

The multiplication is a mixed-format operation, since it multiplies an unsigned long fract argument by an accum argument; it is therefore subject to pre-rounding of the unsigned long fract argument, as described in the report. Since the comparison in the if () finds that a is greater than zero and b is greater than 1, the multiplication is executed in the expectation that c will not be set to zero. However, in the arithmetic operation a is incorrectly pre-rounded to 0, which yields c = 0*b: an unexpected outcome and a bug that is hard to detect and fix.