## 2020 NLA group photo

This year’s group photo was taken on March 5, 2020 after the NLA group meeting. Most group members are in the photo; those missing include Jack Dongarra, Stefan Güttel, Ramaseshan Kannan and Marcus Webb.

The banner on this website has also been updated with the new group photo.  A high resolution version of the photo is available here.

By row from the back: Craig Lucas, Nick Higham, Xinye Chen, Steven Elsworth, Xiaobo (Bob) Liu, Michael Connolly, Mantas Mikaitis, Len Freeman, Massimiliano Fasi, Pierre Blanchard, Sven Hammarling, Asad Raza Aitor Mehasi Mehasi, Stephanie Lai, Gian Maria Negri Porzio, Thomas McSweeney, Mawussi Zounon, Françoise Tisseur, Srikara Pranesh, Yuqing (Mila) Zhang, Eleni Vlachopoulou.

## NLA Group at the SIAM Conference on Parallel Processing for Scientific Computing

Several members of the group attended the SIAM Conference on Parallel Processing for Scientific Computing held in Seattle on February 12-15, 2020.

The presentations given are as follows:

Nick Higham and Srikara Pranesh also organised a two-part mini-symposium (Advances in Algorithms Exploiting Low Precision Floating-Point Arithmetic, MS10 and MS21) at the conference.

Max Fasi, Mantas Mikaitis, Mawussi Zounon, Sri Pranesh, and Theo Mary at the SIAM Conference on Parallel Processing for Scientific Computing, Seattle, Washington, February 12-15, 2020.

## Conference Celebrating the 70th Birthday of Jack Dongarra

by Sven Hammarling, Nick Higham, and Françoise Tisseur

Jack Dongarra

July 18, 2020 is the 70th birthday of Professor Jack Dongarra, who holds appointments at the University of Tennessee, Oak Ridge National Laboratory, and the University of Manchester.

Jack has made seminal contributions to algorithms for numerical linear algebra and the design and development of high performance mathematical software for machines ranging from workstations to the largest parallel computers. His recent honours include election as a Foreign Member of the Royal Society and receipt of the
SIAM/ACM Prize in Computational Science and Engineering (2019) and the IEEE Computer Society Computer Pioneer Award (2020).

To celebrate Jack’s birthday we are organizing a conference New Directions in Numerical Linear Algebra and High Performance Computing: Celebrating the 70th Birthday of Jack Dongarra at The University of Manchester, July 17, 2020.  Registration is now open and we welcome submission of posters.

## Sharper Probabilistic Backward Error Bounds

Most backward error bounds for numerical linear algebra algorithms are of the form $nu$, for a machine precision $u$ and a problem size $n$. The dependence of these bounds on $n$ is known to be pessimistic: our recent probabilistic analysis with Nick Higham [SIAM J. Sci. Comput., 41 (2019), pp. A2815–A2835], which models rounding errors as independent random variables of mean zero, shows that $n$ can be replaced by a small multiple of $\sqrt{n}$ with high probability. However, even these smaller bounds can still be pessimistic, as the figure below illustrates.

The figure plots the backward error for summation (in single precision) of $n$ floating-point numbers randomly sampled from a uniform distribution. For numbers sampled from $[0,1]$, the bound $\sqrt{n}u$ is almost sharp and accurately predicts the error growth. However, for numbers sampled from $[-1,1]$, the error is much smaller, seemingly not growing with $n$. This strong dependence of the backward error on the data cannot be explained by the existing bounds, which do not depend on the values of the data.
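This behaviour is easy to reproduce. The sketch below (illustrative only, not the code behind the figure) sums $n$ single-precision numbers by recursive summation and measures the backward error $|\hat{s}-s|/\sum_i |x_i|$ for the two distributions:

```python
# Illustrative sketch: backward error of recursive summation in single
# precision for data drawn from [0,1] (mean 1/2) versus [-1,1] (mean 0).
import math
import numpy as np

def recursive_sum_f32(x):
    """Sum the entries left to right in IEEE single precision."""
    s = np.float32(0.0)
    for xi in x.astype(np.float32):
        s = s + xi               # float32 + float32 stays float32
    return float(s)

def backward_error(x):
    """Normwise backward error |s_hat - s| / sum(|x_i|) for summation."""
    s_hat = recursive_sum_f32(x)
    s = math.fsum(x)             # exact (compensated) reference sum
    return abs(s_hat - s) / math.fsum(abs(x))

rng = np.random.default_rng(0)
n = 100_000
u = 2.0**-24                     # unit roundoff for single precision

err_pos = backward_error(rng.uniform(0.0, 1.0, n))    # mean 1/2
err_zero = backward_error(rng.uniform(-1.0, 1.0, n))  # mean 0

print(f"[0,1]  data: {err_pos:.2e}  (sqrt(n)*u = {math.sqrt(n) * u:.2e})")
print(f"[-1,1] data: {err_zero:.2e}")
```

With data from $[0,1]$ the measured error is of order $\sqrt{n}u$, while for $[-1,1]$ it stays at a small multiple of $u$, independent of $n$, matching the figure.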

In our recent preprint, we perform a new probabilistic analysis that combines a probabilistic model of the rounding errors with a second probabilistic model of the data. Our analysis reveals a strong dependence of the backward error on the mean of the data $\mu$: indeed, our new backward error bounds are proportional to $\mu\sqrt{n}u + u$. Therefore, for data with small or zero mean, these new bounds are much sharper as they bound the backward error by a small multiple of the machine precision independent of the problem size $n$.
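A back-of-the-envelope heuristic for recursive summation (a sketch, not the preprint's full analysis) shows where the shape $\mu\sqrt{n}u + u$ comes from. The computed sum satisfies

```latex
\hat{s} - s = \sum_{i=2}^{n} s_i \delta_i + O(u^2), \qquad |\delta_i| \le u,
```

where $s_i$ is the $i$th partial sum. Treating the $\delta_i$ as independent with mean zero, the error is of order $u\bigl(\sum_i s_i^2\bigr)^{1/2}$ with high probability. For data with mean $\mu \ge 0$ and standard deviation $\sigma$ we have $s_i \approx i\mu + O(\sigma\sqrt{i})$, so

```latex
u\Bigl(\sum_{i=2}^{n} s_i^2\Bigr)^{1/2}
\approx u\bigl(\mu^2 n^3/3 + O(\sigma^2 n^2)\bigr)^{1/2}.
```

Dividing by $\sum_i |x_i| \approx n\,\mathbb{E}|x_i|$, which is of order $n\max(\mu,\sigma)$, gives a backward error of order $\mu\sqrt{n}u + u$ up to constants: for $\mu = 0$ the $\sqrt{n}$ growth disappears entirely.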

Motivated by this observation, we also propose new algorithms that transform the data to have zero mean, so as to benefit from these more favorable bounds. We implement this idea for matrix multiplication and show that our new algorithm can produce significantly more accurate results than standard matrix multiplication.
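The following sketch illustrates the zero-mean idea for $C = AB$ (a hedged illustration; the preprint's algorithms may differ in detail). Writing $A = A_0 + \mu e^T$ with $\mu$ the vector of row means of $A$, and $B = B_0 + e\nu^T$ with $\nu$ the column means of $B$, the cross terms vanish because each row of $A_0$ and each column of $B_0$ sums to zero, leaving the exact identity $AB = A_0B_0 + n\,\mu\nu^T$:

```python
# Hedged sketch of a zero-mean matrix multiplication: multiply mean-
# subtracted factors, then add back an exact rank-one correction.
import numpy as np

def zero_mean_matmul(A, B):
    """Compute A @ B via zero-mean data plus a rank-one correction."""
    n = A.shape[1]
    mu = A.mean(axis=1, keepdims=True)   # row means of A,    shape (m, 1)
    nu = B.mean(axis=0, keepdims=True)   # column means of B, shape (1, p)
    A0 = A - mu                          # rows of A0 sum to zero
    B0 = B - nu                          # columns of B0 sum to zero
    return A0 @ B0 + n * (mu @ nu)       # exact identity in real arithmetic

rng = np.random.default_rng(1)
n = 500
A = rng.uniform(0.0, 1.0, (n, n)).astype(np.float32)  # nonzero-mean data
B = rng.uniform(0.0, 1.0, (n, n)).astype(np.float32)

C_exact = A.astype(np.float64) @ B.astype(np.float64)
err_direct = np.linalg.norm(A @ B - C_exact) / np.linalg.norm(C_exact)
err_zm = np.linalg.norm(zero_mean_matmul(A, B) - C_exact) / np.linalg.norm(C_exact)
print(f"direct float32:    {err_direct:.2e}")
print(f"zero-mean float32: {err_zm:.2e}")
```

The products inside `A0 @ B0` now involve zero-mean data, so by the new bounds their backward error is a small multiple of $u$ rather than $\mu\sqrt{n}u$; the rank-one correction costs only $O(n^2)$ extra flops.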

## Numerical Algorithms for High-Performance Computational Science Issue of Phil Trans R Soc A

Professors Jack Dongarra and Nick Higham, together with Dr Laura Grigori (Inria Paris), have edited the issue Numerical Algorithms for High-Performance Computational Science of the journal Philosophical Transactions of the Royal Society A. The issue is now available online.

The issue contains papers from a Discussion meeting of the same title organized at the Royal Society in April 2019.  A report on that meeting, along with photos from it, is available here.  The content of the issue, with links to the papers, is as follows.

Numerical algorithms for high-performance computational science by Jack Dongarra, Laura Grigori and Nicholas J. Higham.

The future of computing beyond Moore’s Law by John Shalf.

Hierarchical algorithms on hierarchical architectures by D. E. Keyes, H. Ltaief and G. Turkiyyah.

Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations by Michael Hopkins, Mantas Mikaitis, Dave R. Lester and Steve Furber.

Preparing sparse solvers for exascale computing by Hartwig Anzt, Erik Boman, Rob Falgout et al.

On the cost of iterative computations by Erin Carson and Zdeněk Strakoš.

Rethinking arithmetic for deep neural networks by G. A. Constantinides.

Machine learning and big scientific data by Tony Hey, Keith Butler, Sam Jackson and Jeyarajan Thiyagalingam.

Exascale applications: skin in the game by Francis Alexander, Ann Almgren, John Bell et al.

Big telescope, big data: towards exascale with the Square Kilometre Array by A. M. M. Scaife.

Optimal memory-aware backpropagation of deep join networks by Olivier Beaumont, Julien Herrmann, Guillaume Pallez (Aupy) and Alena Shilova.

A survey of algorithms for transforming molecular dynamics data into metadata for in situ analytics based on machine learning methods by Michela Taufer, Trilce Estrada and Travis Johnston.

The parallelism motifs of genomic data analysis by Katherine Yelick, Aydın Buluç, Muaaz Awan et al.

## Jack Dongarra Selected to Receive the 2020 IEEE Computer Society’s Computer Pioneer Award

Jack Dongarra

Professor Jack Dongarra, a member of the Manchester Numerical Linear Algebra Group who also holds appointments at the University of Tennessee and Oak Ridge National Laboratory, has been named as recipient of the IEEE Computer Society’s 2020 Computer Pioneer Award.

The award is given for significant contributions to early concepts and developments in the electronic computer field that have clearly advanced the state-of-the-art in computing.  Dongarra is being recognized “for leadership in the area of high-performance mathematical software.”

Dongarra will receive his award at the Computer Society’s annual awards dinner and presentation to be held on Wednesday 27 May 2020 at the Hilton McLean Tysons Corner during the IEEE Computer Society Board of Governors meeting. The award consists of a silver medal and an invitation to speak at the award presentation.