Our Alumni – Craig Lucas

In this blog post, we asked one of our alumni, Craig Lucas, a few questions about his time with the Numerical Linear Algebra Group.

Craig Lucas

Please can you introduce yourself and tell us a bit about your experience before attending University of Manchester?

I came to study Mathematics a little later than usual. I was a civil engineering technician working in land reclamation in Staffordshire and needed a change! I was always told I was good at maths and thought at 27 I should get a degree. I am very grateful to Graham Bowtell at City University who took a chance on someone without A-levels. I developed an interest in Numerical Analysis and computing and wanted to take my study as far as I could. That brought me to Manchester for an MSc, and ultimately a PhD.

What was your PhD thesis on?

My thesis, supervised by Nick Higham, was “Algorithms for Cholesky and QR Factorizations, and the Semidefinite Generalized Eigenvalue Problem.” It was arguably a ragbag of algorithms building on my MSc experience with symmetric matrices. I also met and worked with Sven Hammarling on QR updating. He was working for NAG then, as I do now.

Why did you choose to study your PhD in Manchester?

During my MSc I realised I was working with world leaders in their field. It wasn’t a difficult decision to stay on for a PhD, in fact, I felt incredibly lucky to have that opportunity.

How did you find Manchester?

I hated it! I had come up from London and it felt like a whole new world. I wasn’t used to strangers talking to me in the street! However, after about 18 months the place really started to grow on me, and now, nearly 20 years later, I can’t imagine living anywhere else. We have an incredible arts scene, fantastic restaurants, brilliant transport links and a cost of living that makes living back in London seem ridiculous.

Can you tell us about your career since leaving Manchester?

Firstly, I never really left. In the 15 years since I finished my PhD I have taught on my old MSc, supervised students and several KTP projects jointly with the Numerical Linear Algebra Group. After my PhD I first went to work in research computing at Manchester, in high performance computing (HPC). Then, just over 10 years ago, I joined NAG, where I could use both my numerical analysis and HPC skills.

What is your current role?

I run NAG’s Manchester Office, which is a rather nice penthouse on Portland Street with a roof terrace, and I lead the HPC team here. I am supervising my third KTP, am involved in running NAG’s contribution to the EU POP project, and every now and then I write some mathematical software.

For a Few Equations More

On the occasion of the Gilles Kahn prize award ceremony, I was asked to write an article about my PhD thesis for the popular science blog Binaire from the French newspaper Le Monde (the French version of the article is available here). You can read the English translation below.


Can you tell what the following problems have in common: predicting tomorrow’s weather, building crash-resistant cars, scanning the bottom of the oceans in search of oil? These are all difficult problems that are too costly to be tackled physically. Importantly, they can also be described by a fundamental tool of mathematics: linear equations. Therefore, the solution of these physical problems can be numerically simulated by solving systems of linear equations instead.

A system of linear equations (in matrix form).
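For illustration (this small example is mine, not the one from the original figure), a system of two equations in two unknowns can be written in matrix form $Ax = b$:

```latex
\begin{aligned}
2x_1 + x_2 &= 5\\
x_1 - 3x_2 &= -1
\end{aligned}
\qquad\Longleftrightarrow\qquad
\underbrace{\begin{pmatrix} 2 & 1\\ 1 & -3 \end{pmatrix}}_{A}
\underbrace{\begin{pmatrix} x_1\\ x_2 \end{pmatrix}}_{x}
=
\underbrace{\begin{pmatrix} 5\\ -1 \end{pmatrix}}_{b}
```

Its solution is $x_1 = 2$, $x_2 = 1$; the methods discussed below solve systems of exactly this shape, only with millions of unknowns.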

You probably remember from math class in high school how tedious solving these systems could get, even when they had a small number of equations. In practice, it is actually quite common to face systems with thousands or even millions of equations. While computers can fortunately solve these systems for us, the computational cost of the solution can become very high for such large numbers of equations.
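As a concrete illustration of that cost, here is a minimal Python sketch of my own (not tied to the original article) that solves a dense system with NumPy; the underlying factorization costs on the order of $n^3$ operations for $n$ equations, which is why the cost explodes as systems grow.

```python
# Minimal illustration: solve a dense linear system Ax = b with NumPy.
# np.linalg.solve performs an LU factorization (Gaussian elimination),
# whose cost grows like O(n^3) with the number of equations n --
# doubling n multiplies the work by roughly a factor of 8.
import numpy as np

n = 1000                           # number of equations
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))    # a random dense matrix
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)          # factorize A and solve in one call
print(np.linalg.norm(A @ x - b))   # residual: tiny, at roundoff level
```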

To meet this need, vast amounts of resources and money have been dedicated to the construction of supercomputers of great computational power, equipped with a large number of computing units called “cores”. For example, while your personal computer is likely to have fewer than a dozen cores, the most powerful supercomputer in the world has several million of them. Nevertheless, the size of the problems that we must tackle today is so great that even these supercomputers are not sufficient.

The Intrepid supercomputer, equipped with 164,000 cores. © Argonne National Laboratory

To take up this challenge, during my PhD thesis I worked on new algorithms for solving systems of linear equations at a greatly reduced computational cost. A crucial property of such an algorithm is how fast its cost grows with the number of equations: this is referred to as its complexity. Methods of very low complexity (so-called “hierarchical” methods) have been proposed since the 2000s. However, hierarchical methods are quite intricate and sophisticated, which prevents them from attaining high performance on supercomputers: their large reduction in theoretical complexity translates into only very modest gains in actual computing time.

For this reason, my PhD thesis focused on another method (so-called “Block Low Rank”) that is better suited than hierarchical methods to high performance computing. My first achievement was to determine the complexity of this method, which was previously unknown. I proved that, even if its complexity is slightly higher than that of hierarchical methods, it is still low enough to tackle systems of very large dimension. In the second part of my thesis, I worked on implementing this method efficiently on supercomputers, so as to translate the theoretical reduction in complexity into actual time gains.
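The central idea, sketched below in a small Python example of my own (not code from the thesis), is that many matrix blocks arising in these applications are numerically low rank: a block $B$ can be replaced by a product $UV^T$ with few columns, which slashes both storage and arithmetic.

```python
# Hedged sketch of the low-rank compression idea behind "Block Low Rank"
# methods: compress a matrix block with a truncated SVD, keeping only the
# singular values above a relative tolerance.
import numpy as np

def compress_block(B, tol=1e-8):
    """Return U, V such that B is approximated by U @ V.T at tolerance tol."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))        # numerical rank of the block
    return U[:, :k] * s[:k], Vt[:k, :].T   # absorb singular values into U

# A smooth kernel sampled on two well-separated point clusters yields a
# block whose numerical rank is far smaller than its dimensions.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(5.0, 6.0, 200)
B = 1.0 / np.abs(x[:, None] - y[None, :])  # 200 x 200 block
U, V = compress_block(B)
print(U.shape[1])                                       # rank kept, far below 200
print(np.linalg.norm(B - U @ V.T) / np.linalg.norm(B))  # relative error near tol
```

Storing $U$ and $V$ takes $2nk$ numbers instead of $n^2$ for an $n \times n$ block of rank $k$, and multiplying a vector by the block costs $O(nk)$ instead of $O(n^2)$; applied systematically across the blocks of a large matrix, this is what drives the complexity reduction.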

By significantly reducing the cost of solving systems of linear equations, this work allowed us to solve several physical problems that were previously too large to be tackled. For instance, it took less than an hour to solve a system of 130 million equations arising in a geophysical application, using a supercomputer equipped with 2400 cores.

Computing the Wave-Kernel Matrix Functions

The wave-kernel functions $\cosh\sqrt{A}$ and $\operatorname{sinhc}\sqrt{A}$ arise in the solution of second order differential equations such as $u''(t) - Au(t) = b(t)$ with initial conditions at $t=0$. Here, $A$ is an arbitrary square matrix and $\operatorname{sinhc}(z) = \sinh(z)/z$. The square root in these formulas is illusory, as both functions can be expressed as power series in $A$, so there are no questions about the existence of the functions.
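Concretely, substituting $z = \sqrt{A}$ into the scalar series for $\cosh z$ and $\operatorname{sinhc} z$ leaves only integer powers of $A$ (a standard computation, shown here for illustration):

```latex
\cosh\sqrt{A} = \sum_{k=0}^{\infty} \frac{A^k}{(2k)!},
\qquad
\operatorname{sinhc}\sqrt{A} = \sum_{k=0}^{\infty} \frac{A^k}{(2k+1)!}.
```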

How can these functions be computed efficiently? In Computing the Wave-Kernel Matrix Functions (SIAM J. Sci. Comput., 2018) Prashanth Nadukandi and I develop an algorithm based on Padé approximation and the use of double angle formulas. The amount of scaling and the degree of the Padé approximant are chosen to minimize the computational cost subject to achieving backward stability for $\cosh\sqrt{A}$ in exact arithmetic.
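The double angle formulas in question are the matrix versions of standard hyperbolic identities; stated here for illustration (my paraphrase of the scaling-and-recovery idea, not the paper's notation), they allow values approximated at the scaled argument $4^{-s}A$ to be doubled back up $s$ times:

```latex
\cosh\sqrt{4A} = 2\left(\cosh\sqrt{A}\right)^{2} - I,
\qquad
\operatorname{sinhc}\sqrt{4A} = \operatorname{sinhc}\sqrt{A}\,\cosh\sqrt{A}.
```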

In the derivation we show that the backward error of any approximation to $\cosh\sqrt{A}$ can be explicitly expressed in terms of a hypergeometric function. To bound the backward error we derive and exploit a new bound for $\|A^k\|^{1/k}$ in terms of the norms of lower powers of $A$; this bound is sharper than one previously obtained by Al-Mohy and Higham.

Numerical experiments show that the algorithm behaves in a forward stable manner in floating-point arithmetic and is superior in this respect to the general purpose Schur–Parlett algorithm applied to these functions.

The fundamental regions of the function $\cosh\sqrt{z}$, needed for the backward error analysis underlying the algorithm.

Our Alumni – Lijing Lin

In this blog post, we asked one of our alumni, Lijing Lin, a few questions about her time with the Numerical Linear Algebra Group.

Lijing Lin at PhD graduation

Please can you introduce yourself and tell us a bit about your experience before attending University of Manchester?

I obtained my BSc from Nanjing University of Aeronautics and Astronautics and my MSc from Fudan University in China, before coming to Manchester to study for my PhD in 2007.

What was your PhD thesis on?

The title of my thesis is Roots of Stochastic Matrices and Fractional Matrix Powers. The problem of computing roots of stochastic matrices arises from Markov chain models in finance and healthcare, where a transition over a certain time interval is needed but only a transition over a longer time interval may be available. Besides developing new theory, we also developed a package for computing stochastic roots. Fractional matrix powers are more general functions than matrix roots; we developed a new algorithm for computing arbitrary real powers of matrices.
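For example (an illustration of mine, not taken from the thesis): if $P$ is the transition matrix of a Markov chain over one year and a monthly model is wanted, one seeks a matrix $X$ with

```latex
X^{12} = P, \qquad \text{i.e.,} \qquad X = P^{1/12}.
```

For $X$ to make probabilistic sense it must itself be stochastic (nonnegative entries, with each row summing to 1); deciding when such a stochastic root exists, and computing it, is the hard part.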

Why did you choose to study your PhD in Manchester?

I had developed an interest in doing research in Numerical Linear Algebra during my MSc. The NLA group in Manchester is renowned for its world-leading expertise in this area, and is one of the best places in the world to study and do research.

How did you find Manchester?

I have studied, worked and lived in Manchester for over 11 years now. It is exciting, diverse and welcoming, a city that keeps growing and never stops surprising me.

Can you tell us about your career since leaving Manchester?

After graduating, I continued working in Manchester as a Research Associate. Building on a solid background in NLA, my research has now moved toward machine learning, probabilistic modelling, and statistics.

What is your current role?

I am currently a Turing PDRA in predictive healthcare. We are building prognostic models that allow consideration of “what if” scenarios to explore the effects of interventions, e.g., how would a person’s risk of a heart attack change if they started or quit smoking now?

Celebrating the Centenary of James H. Wilkinson’s Birth

by Sven Hammarling and Nick Higham

James Hardy Wilkinson

September 27, 2019 is the 100th anniversary of the birth of James Hardy Wilkinson, the renowned numerical analyst who died in 1986. We are marking this special anniversary year in several ways.

The tag wilkinson lists all the posts in this series.

Jack Dongarra Awarded SIAM/ACM Prize in Computational Science and Engineering

Congratulations to Jack Dongarra who recently received the SIAM/ACM Prize in Computational Science and Engineering.

Jack Dongarra will receive the SIAM/ACM Prize in Computational Science and Engineering at the SIAM Conference on Computational Science and Engineering (CSE19) held February 25 – March 1, 2019 in Spokane, Washington. He will receive the award and deliver his prize lecture, “The Singular Value Decomposition: Anatomy of an Algorithm, Optimizing for Performance,” on February 28, 2019.

SIAM and the Association for Computing Machinery (ACM) jointly award the SIAM/ACM Prize in Computational Science and Engineering every two years at the SIAM Conference on Computational Science and Engineering for outstanding contributions to the development and use of mathematical and computational tools and methods for the solution of science and engineering problems. With this award, SIAM and ACM recognize Dongarra for his key role in the development of software and software standards, software repositories, performance and benchmarking software, and in community efforts to prepare for the challenges of exascale computing, especially in adapting linear algebra infrastructure to emerging architectures.

When asked about his research for which the prize was awarded, Dongarra said “I have been involved in the design and development of high performance mathematical software for the past 35 years, especially regarding linear algebra libraries for sequential, parallel, vector, and accelerated computers. Of course, the work that led to this award could not have been achieved without the help, support, collaboration, and interactions of many people over the years. I have had the good fortune of working on a number of high profile projects: in the area of mathematical software, EISPACK, LINPACK, LAPACK, ScaLAPACK, ATLAS and today with PLASMA, MAGMA, and SLATE; community de facto standards such as the BLAS, MPI, and PVM; performance analysis and benchmarking tools such as the PAPI, LINPACK benchmark, the Top500, and HPCG benchmarks; and the software repository netlib, arguably the first open source repository for publicly available mathematical software.”

This article was extracted from SIAM News. Further information is available here.

Professor Jack Dongarra

Numerical Algorithms for High-Performance Computational Science, Royal Society, London, April 8-9, 2019

This 2-day scientific discussion meeting is being held at the Royal Society, London, UK, and is organized by Jack Dongarra, Laura Grigori and Nick Higham.

The programme consists of invited talks and contributed posters. The invited speakers are

  1. Guillaume Aupy, INRIA Bordeaux
  2. Erin Carson, Charles University, Prague
  3. George Constantinides, Imperial College
  4. Steve Furber, FRS, University of Manchester
  5. Mike Heroux, Sandia National Laboratories
  6. Tony Hey, Science and Technology Facilities Council
  7. David Keyes, King Abdullah University of Science and Technology, Saudi Arabia
  8. Doug Kothe, Director of the DOE Exascale Computing Project (ECP)
  9. Satoshi Matsuoka, RIKEN Center for Computational Science, Tokyo
  10. Tim Palmer, FRS, Oxford University
  11. Jack Poulson, Hodge Star Scientific Computing, Toronto
  12. Anna Scaife, University of Manchester
  13. John Shalf, Lawrence Berkeley National Laboratory
  14. Rick Stevens, Argonne National Laboratory
  15. Michela Taufer, University of Delaware
  16. Kathy Yelick, Lawrence Berkeley National Laboratory

The deadline for submission of poster abstracts is Friday March 1, 2019 (extended from Monday 4 February 2019).

Attendance is free, but places are limited and advance registration is essential.

For more information, including the programme and registration, see https://royalsociety.org/science-events-and-lectures/2019/04/high-performance-computing/
