Author Archives: Nick Higham

Numerical Linear Algebra Group Activities 2018

The Numerical Linear Algebra Group, many of whose members appear in the October 2018 photo below, had a busy year. This post summarizes what we got up to. Publications are not included here, but many of them can be found on MIMS EPrints under the category Numerical Analysis.

As well as the new group members mentioned below, Stephanie Lai joined the group in September as Executive Assistant to Nick Higham. Stephanie, who holds an MMath degree from the University of Manchester, helps to maintain this website, which launched this autumn. Martin Lotz left the group in August 2018 and is now an Associate Professor at The University of Warwick.

[Photo: the Numerical Linear Algebra Group, October 2018.]

Software

Together with Jack Dongarra’s team at the University of Tennessee, the group is one of the two partners involved in the development of PLASMA: Parallel Linear Algebra Software for Multicore Architectures.

PhD students Weijian Zhang, Steven Elsworth and Jonathan Deakin continued to develop Etymo—a technology for developing knowledge graphs, including Etymo Scholar, a discovery engine for AI research.

We make our research codes available as open source, principally on GitHub; see the repositories of Connolly, Fasi, Higham, Liu, Pranesh, Relton, Sego, Tisseur, Zhang, and Zounon.

We also put MATLAB software on MATLAB Central File Exchange and on our own web sites, e.g., the Rational Krylov Toolbox (RKToolbox).

PhD Students

We welcomed new PhD students Michael Connolly, Xiaobo Liu, and Conor Rogers.

Weijian Zhang successfully defended his PhD thesis Evolving Graphs and Similarity-Based Graphs with Applications in October 2018.

Postdoctoral Research Associates (PDRAs)

Theo Mary joined us in January 2018 to work jointly on the ICONIC project.

Mawussi Zounon, who was working on the Parallel Numerical Linear Algebra for Extreme Scale Systems (NLAFET) project, moved over in November 2018 to the position of KTP Associate in a three-year Knowledge Transfer Partnership with NAG.

Jakub Sistek returned to the Czech Academy of Sciences after the completion of the project Programming Model INTERoperability ToWards Exascale (INTERTWinE).

Prashanth Nadukandi completed his Marie Skłodowska-Curie Individual Fellowship in September 2018 and is now a Technical Advisor in Advanced Mathematics at the Repsol Technology Lab, Móstoles, Spain.

Maksims Abalenkovs completed his work on the EPSRC project SERT: Scale-free, Energy-aware, Resilient and Transparent Adaptation of CSE Applications to Mega-core Systems.

Presentations at Conferences and Workshops

UK and Republic of Ireland Section of SIAM Annual Meeting, National Oceanography Centre, Southampton, January 11, 2018: Blanchard, Nadukandi.

Waseda Research Institute for Science and Engineering, Institute for Mathematical Science, Kickoff Meeting, Waseda University, Tokyo, March 6, 2018: Higham.

SIAM Conference on Parallel Processing for Scientific Computing, Waseda University, Tokyo, Japan, March 7-10, 2018: Hammarling, Higham, Mary, Zounon.

Conference on Scientific Computing and Approximation, Purdue University, March 30-31, 2018: Higham.

SIAM Conference on Applied Linear Algebra, May 4-8, 2018, Hong Kong Baptist University: Elsworth, Güttel, Zemaityte.

Structured Matrix Days 2018, May 14-15, 2018, ENS Lyon, France: Mary, Tisseur.

Mathematical Modelling, Numerical Analysis and Scientific Computing, Kácov, Czech Republic, May 27-June 1, 2018: Higham, Tisseur.

International Conference on Approximation and Matrix Functions, May 31 and June 1, 2018, Université de Lille, France: Güttel.

Bohemian Matrices and Applications, University of Manchester, June 20-22, 2018: Higham.

International Supercomputing Conference (ISC18), Frankfurt, June 24-28, 2018: Hammarling, Zounon.

10th International Workshop on Parallel Matrix Algorithms and Applications (PMAA18), June 27-29, 2018, ETH Zurich: Zounon.

6th IMA Conference on Numerical Linear Algebra and Optimization, June 27-29, 2018, University of Birmingham: Blanchard, Mary, Tisseur.

SIAM Annual Meeting, Portland, USA, July 9-13, 2018: Blanchard, Higham, Tisseur.

JuliaCon 2018, London, August 7-11, 2018: Higham.

Sparse Day Meeting 2018, September 26-27, 2018, Toulouse, France: Mary.

Recognition and Service

• Nick Higham gave the Samuel D. Conte Distinguished Lecture at the Department of Computer Science, Purdue University, USA, in March 2018.
• Massimiliano Fasi was presented with the SIAM Student Chapter Certificate of Recognition 2018.

A New Approach to Probabilistic Rounding Error Analysis

by Nick Higham and Theo Mary

[Figure: backward error for a random linear system of dimension n.]

James Wilkinson developed a systematic way of analyzing rounding errors in numerical algorithms in the 1960s—a time when computers typically used double precision arithmetic and a matrix of dimension n = 100 was considered large. As a result, an error bound with a dominant term p(n)u, where p is a low-degree polynomial and u is the unit roundoff, was acceptably small.

Today, the supercomputers heading the TOP500 list solve linear systems of equations of dimension 10^8, and half precision arithmetic (accurate to about 4 significant decimal digits) is increasingly available in hardware, notably on graphics processing units (GPUs) from AMD and NVIDIA. Traditional rounding error bounds cannot guarantee accurate results when the dimension is very large or the precision is low. Yet computations are often successful in such cases—for example in machine learning algorithms making use of half, or even lower, precision.

This discrepancy between theory and practice stems from the fact that traditional rounding error bounds are worst-case bounds and so tend to be pessimistic. Indeed, while a single floating-point operation incurs a rounding error bounded in modulus by u, the composition of n operations leads to a worst-case error bound with dominant term proportional to nu. But this worst-case error bound is attained only when each rounding error is of maximal magnitude and identical sign, which is very unlikely. Since the beginning of the digital computer era many researchers have modelled rounding errors as random variables in an attempt to obtain better estimates of how the error behaves on average. This line of thinking has led to the well-known rule of thumb, based on informal arguments and assumptions, that constants in rounding error bounds can be replaced by their square roots.
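
To see informally why randomness helps, note that the sum of n independent, mean-zero errors of size at most u typically grows like \sqrt{n}u rather than nu. The toy MATLAB experiment below (ours, for intuition only; it samples artificial errors rather than actual rounding errors) illustrates the cancellation:

    % Sum n independent mean-zero "errors" of size at most u: the typical
    % size of the sum is close to sqrt(n)*u, far below the worst case n*u.
    u = 2^(-53);                      % unit roundoff for IEEE double precision
    n = 1e6;
    delta = u*(2*rand(n,1) - 1);      % artificial errors, uniform on [-u, u]
    fprintf('|sum| = %.1e,  n*u = %.1e,  sqrt(n)*u = %.1e\n', ...
            abs(sum(delta)), n*u, sqrt(n)*u)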

In our EPrint A New Approach to Probabilistic Rounding Error Analysis we make minimal probabilistic assumptions on the rounding errors and make use of a tool from probability theory called a concentration inequality. We show that in several core linear algebra algorithms, including matrix-vector products and LU factorization, the backward error can be bounded with high probability by a relaxed constant proportional to \sqrt{n\log n}u instead of nu. Our analysis provides the first rigorous justification of the rule of thumb.

This new bound is illustrated in the figure above, where we consider the solution of a linear system Ax = b by LU factorization. The matrix A and vector x have entries from the random uniform [0,1] distribution and b is formed as Ax. We compare the backward error with its traditional worst-case bound and our relaxed probabilistic bound. The figure shows that the probabilistic bound is in very good agreement with the actual backward error and is much smaller than the traditional bound. Moreover, it successfully captures the asymptotic behavior of the error growth, which follows \sqrt{n} rather than n.
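
The experiment is easy to reproduce in outline. The following MATLAB sketch (our simplified version of the setup above, with an arbitrary choice of dimensions) measures the normwise backward error \|b - A\hat{x}\| / (\|A\| \|\hat{x}\| + \|b\|) and compares it with the two bounds:

    % Backward error of an LU-based solve of A*x = b versus the worst-case
    % bound n*u and the probabilistic bound sqrt(n*log(n))*u.
    rng(1)
    u = eps/2;                        % unit roundoff for IEEE double precision
    for n = round(logspace(2, 3.5, 4))
        A = rand(n); x = rand(n,1);
        b = A*x;
        xhat = A\b;                   % solve by LU factorization with pivoting
        berr = norm(b - A*xhat, inf) / ...
               (norm(A, inf)*norm(xhat, inf) + norm(b, inf));
        fprintf('n = %4d: backward error %.1e, n*u = %.1e, sqrt(n*log(n))*u = %.1e\n', ...
                n, berr, n*u, sqrt(n*log(n))*u)
    end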

The assumptions underlying our analysis—that the rounding errors are independent random variables of mean zero—do not always hold, as we illustrate with examples in the paper. Nevertheless, our experiments show that the bounds do correctly predict the backward error for a selection of real-life matrices from the SuiteSparse collection.

Fast Solution of Linear Systems via GPU Tensor Cores’ FP16 Arithmetic and Iterative Refinement

[Photo: NVIDIA Founder and CEO Jensen Huang talking about the work reported here in his special address at Supercomputing 2018 (8:30 onwards).]

Over the last 30 years, hierarchical computer memories, multicore processors and graphics processing units (GPUs) have all necessitated the redesign of numerical linear algebra algorithms, and in doing so have led to algorithmic innovations. Mixed precision arithmetic—a concept going back to the earliest computers, which had the ability to accumulate inner products in extra precision—attracted renewed interest in the late 1990s once Intel chips were able to execute single precision at twice the rate of double precision. Now the increasing availability of low precision arithmetic is offering new opportunities.

In the paper Harnessing GPU Tensor Cores for Fast FP16 Arithmetic to Speed up Mixed-Precision Iterative Refinement Solvers presented at SC18 (the leading supercomputing conference), Azzam Haidar, Stanimire Tomov, Jack Dongarra and Nick Higham show how to exploit the half precision (fp16) arithmetic that is now available in hardware. Whereas fp16 arithmetic can be expected to run at twice the rate of fp32 (single precision) arithmetic, the NVIDIA V100 GPU has tensor cores that can execute half precision at up to eight times the speed of single precision and can deliver the results to single precision accuracy. Developing algorithms that can exploit half precision arithmetic is important both for a workstation connected to a single V100 GPU and for the world’s fastest computer (as of November 2018): Summit at Oak Ridge National Laboratory, which contains 27,648 V100 GPUs.

The paper shows that a dense n-by-n double precision linear system Ax = b can be solved using mixed precision iterative refinement at a rate up to four times faster than a highly optimized double precision solver and with a reduction in energy consumption by a factor of five.

The key idea is to LU factorize the matrix A in a mix of half precision and single precision and then apply iterative refinement. The update equations in the refinement process are solved by an inner GMRES iteration that uses the LU factors as preconditioners. This GMRES-IR algorithm was proposed by Carson and Higham in two (open access) papers in SIAM J. Sci. Comput. (2017 and 2018). In the form used here, the algorithm converges for matrices with condition numbers up to about 10^8. It provides a backward stable, double precision solution while carrying out almost all the flops at lower precision.
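
The structure of the algorithm can be conveyed in a few lines of MATLAB. The sketch below is ours, not the authors' code: it substitutes single precision for the fp16 used on the GPU (half precision is not available in base MATLAB) and uses MATLAB's built-in gmres with the low precision LU factors as preconditioners.

    % Sketch of iterative refinement with a low precision LU factorization
    % and GMRES applied to the update equation, with single precision
    % standing in for the fp16 used on the GPU.
    n = 1000; rng(1)
    A = rand(n) + n*eye(n);           % a well-conditioned test matrix
    b = rand(n,1);
    [L, U, p] = lu(single(A), 'vector');    % low precision LU: A(p,:) ~= L*U
    x = double(U) \ (double(L) \ b(p));     % initial solve from the LU factors
    for k = 1:4
        r = b - A*x;                  % residual in the working (double) precision
        fprintf('step %d: relative residual %.1e\n', k-1, norm(r)/norm(b))
        % Solve the update equation A*d = r by GMRES, preconditioned with
        % the low precision LU factors (promoted to double for the call).
        [d, flag] = gmres(A(p,:), r(p), [], 1e-12, 20, double(L), double(U));
        x = x + d;
    end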

Codes implementing this work will be released through the open-source MAGMA library.

Lecturer, Senior Lecturer or Reader in Applied Mathematics

The School of Mathematics is seeking mathematical scientists of outstanding ability or potential for appointments at Lecturer, Senior Lecturer or Reader level.

Applicants working in any area of applied mathematics are welcome, particularly those working in areas that complement and enhance the applied mathematics research of the School, which spans numerical linear algebra, uncertainty quantification, dynamical systems, data science, mathematics in the life sciences, industrial mathematics, inverse problems and continuum mechanics.

Applicants with research experience in numerical linear algebra, or at the interfaces with pure mathematics or probability and statistics, are strongly encouraged.

These positions provide an excellent opportunity to join the Numerical Linear Algebra Group.

The closing date is January 11, 2019. For the advert and more details see here.

Welcome to the NLA Group Website

Welcome to the new website of the Numerical Linear Algebra group at the University of Manchester! The group, numbering around 20 and listed here, comprises four permanent faculty—Nick Higham, Françoise Tisseur, Stefan Güttel and Jack Dongarra (part-time)—and PhD students, research associates, research fellows, visitors and other associated researchers.

The website reports our activities, including papers, software and presentations. It contains regularly updated news items as well as blog posts by members of the group on research topics of interest.

We hold weekly group meetings and organize workshops in Manchester and elsewhere. Some of our conference talks are available here.

We give a number of courses in the School of Mathematics in the undergraduate mathematics degree and at M.Sc. level, and also regularly give courses externally, especially at summer schools.

We are always looking to recruit, and current opportunities are listed here. Feel free to contact us if you would like to enquire about future opportunities.

Follow us on Twitter and watch the videos on our YouTube channel.

Primary Solutions of Matrix Equations

Massimiliano Fasi and Bruno Iannazzo have shown how to compute all primary solutions of a matrix equation f(X) = A for rational functions f. A primary solution is one that can be written as a polynomial in A. The proposed algorithm exploits the Schur decomposition and generalizes earlier algorithms for matrix roots.
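
The simplest familiar instance is f(X) = X^2, whose primary solutions are the primary square roots of A. A small MATLAB illustration (ours, for intuition only; it is not the algorithm of the paper):

    % The principal square root computed by sqrtm is a primary solution of
    % X^2 = A, that is, it can be written as a polynomial in A.
    A = [4 1; 0 9];
    X = sqrtm(A);
    disp(norm(X^2 - A))               % small residual
    % By contrast, Y below satisfies Y^2 = eye(2) but is not a polynomial
    % in eye(2) (any such polynomial is a multiple of the identity), so it
    % is a nonprimary square root of the identity.
    Y = [1 0; 0 -1];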

Fasi and Iannazzo’s paper Computing Primary Solutions of Equations Involving Primary Matrix Functions appears in Linear Algebra Appl. 560, 17-42, 2019.
