Our Alumni – Sam Relton

In this blog post, we asked one of our alumni, Sam Relton, a few questions about his time with the Numerical Linear Algebra Group.


Please can you introduce yourself and tell us a bit about your experience before attending the University of Manchester?

I was always pretty good at maths, because I liked understanding how things worked, and so I went to Manchester for my BSc. During that course I really enjoyed the numerical analysis and linear algebra modules because they underpin how all other mathematics is implemented in practice. I loved living in Manchester so I wanted to stick around, and I was lucky enough to be able to skip an MSc and go straight to a PhD in the NLA group.

What was your PhD thesis on?

My thesis was supervised by Nick Higham and called “Algorithms for Matrix Functions, their Fréchet Derivatives, and Condition Numbers”. It consisted of four research papers, woven together, covering theoretical and algorithmic advances in the computation of matrix functions. A few of these papers were co-authored not only with Nick but also with Awad Al-Mohy (a previous PhD student of Nick’s who was interested in similar problems).

Why did you choose to study your PhD in Manchester?

Manchester has a world-leading research group for numerical linear algebra, and it was a privilege to learn from (and work with) the greatest researchers in the field. This also opens up a lot of opportunities in terms of attending conferences, visiting other institutions, and looking for postdoctoral positions. Manchester is also a fantastic place to live, with plenty going on and a thriving community of PhD and post-doc researchers. I also shared a house during my undergraduate degree and PhD with a few friends who were studying other courses.

How did you find Manchester?

I loved Manchester: it’s a large, busy city full of interesting things to see and do, whilst the cost of living is nowhere near that of London. Despite that, you can easily get into the countryside with a 30-minute drive! The maths department was brilliant, with plenty of strong research groups to chat with, lots of seminars to attend, and a friendly and open atmosphere between all the staff and students.

Can you tell us about your career since leaving Manchester?

After doing a BSc, PhD, and 2 post-docs in high-performance computing at Manchester I decided to try something new. I now work in the School of Medicine at Leeds, applying complex statistical models and machine learning to electronic healthcare records (taken from GP and hospital databases) with collaborators in the School of Computing. Statistics and machine learning are really just a practical application of linear algebra / HPC, so much of what I learnt during my years in Manchester is still very relevant! Working with large interdisciplinary teams of doctors and nurses is an interesting change, and it’s nice to have direct impact on NHS policy decisions.

Version 4.0 of NLEVP Collection of Nonlinear Eigenvalue Problems

A new release, version 4.0, of the NLEVP MATLAB toolbox is available. The toolbox provides a collection of nonlinear eigenvalue problems and has become a standard tool for testing algorithms for solving nonlinear eigenvalue problems.

When it was originally released in 2008, the toolbox contained 26 problems.  The new release contains 74 problems. It is now distributed via GitHub and is available at https://github.com/ftisseur/nlevp.
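
To give a flavour of typical usage, here is a minimal sketch in MATLAB. It assumes the toolbox’s single-function interface as described in its documentation; acoustic_wave_1d is one of the problems in the collection.

    % Sketch of typical NLEVP usage; the exact interface is documented
    % in the toolbox itself.
    names = nlevp('query', 'names');                % list the available problems
    [coeffs, fun] = nlevp('acoustic_wave_1d', 10);  % coefficient matrices of a
                                                    % quadratic eigenvalue problem
    % For a polynomial problem, the matrices in coeffs define
    % P(lambda) = coeffs{1} + lambda*coeffs{2} + lambda^2*coeffs{3} + ...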

Further details are given in An Updated Set of Nonlinear Eigenvalue Problems. The collection will grow and contributions are welcome.

The following table shows the 22 new problems in version 4.0 of the toolbox.

[Table: the 22 new problems in NLEVP version 4.0]

SIAM CSE19 Minisymposium on “Advances in Analyzing Floating-point Errors in Computational Science”

by Pierre Blanchard, Nick Higham, and Theo Mary

Last February the SIAM Computational Science and Engineering (CSE19) conference took place in Spokane, WA, USA. We organized a two-part minisymposium on recent Advances in Analyzing Floating-point Errors in Computational Science (see links to part 1 and part 2). Below is the list of the eight talks, along with the slides, which we have made available.

Simulating Low Precision Floating-Point Arithmetics

In earlier blog posts, I wrote about the benefits of using half precision arithmetic (fp16) and about the problems of overflow and underflow in fp16 and how to avoid them. But how can one experiment with fp16, or other low precision formats such as bfloat16, in order to study how algorithms behave in these arithmetics? (For an accessible introduction to fp16 and bfloat16 see the blog post by Nick Higham.)

As of now, fp16 is supported by several GPUs, but these are specialist devices and they can be very expensive. Moreover, architectures that support bfloat16 have not yet been released. Therefore software that simulates these floating-point formats is needed.

In our latest EPrint, Nick Higham and I investigate algorithms for simulating fp16, bfloat16 and other low precision formats. We have also written a MATLAB function chop that can be incorporated into other MATLAB codes to simulate low precision arithmetic.  It can easily be used to study the effect of low precision formats on various algorithms.
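
To give a flavour of how chop can be used, here is a minimal sketch of an inner product computed with every operation rounded to fp16. The option names follow the description in the EPrint and should be treated as an assumption here.

    % Sketch: an inner product with every operation rounded to fp16.
    options.format = 'h';      % target format: IEEE half precision (fp16)
    options.round = 1;         % round to nearest
    options.subnormal = 1;     % subnormal numbers are supported
    chop([], options)          % store the options for subsequent calls

    n = 1000;
    x = chop(rand(n,1));       % round the data to fp16
    y = chop(rand(n,1));
    s = 0;
    for i = 1:n
        s = chop(s + chop(x(i)*y(i)));   % round after every multiply and add
    end
    disp(s - x'*y)   % accumulated rounding error relative to a double
                     % precision inner product of the same fp16 data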

Imagine a hypothetical situation in which a computer can represent only integers. How, then, do we represent a number like 4/3? An obvious answer is to use the integer closest to it, 1 in this case. However, one then has to adopt a convention for the case where the number lies exactly halfway between two integers. Now replace the integers in this example with floating-point numbers, and a similar question arises. This process of converting a given number to a floating-point number is called rounding. If we adopt the rule of choosing the closest floating-point number (as above), the rule is formally called ‘round to nearest’. There are other ways to round as well, and different rounding modes can yield different results for the same code.

However, meddling with the parameters of a floating-point format without a proper understanding of the consequences can be a recipe for disaster. Cleve Moler, in his blog post on subnormal numbers, makes this point by warning ‘don’t try this at home’. The MATLAB software we have written provides a safe environment in which to experiment with the effects of changing any parameter of a floating-point format (such as the rounding mode and support for subnormal numbers) on the output of a code. All the technical details can be found in the EPrint and our MATLAB codes.
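
For instance, here is a minimal sketch of two rounding modes giving different answers for the same number (again assuming the chop interface; the numbering of the rounding modes is an assumption based on the function’s documentation).

    % Sketch: one number, two rounding modes, two different fp16 results.
    opt.format = 'h';      % fp16
    x = 1 + 3*2^(-12);     % lies between the fp16 numbers 1 and 1 + 2^(-10)
    opt.round = 1;         % round to nearest
    chop(x, opt) - 1       % 2^(-10): x rounds up to the nearer neighbour
    opt.round = 4;         % round toward zero (truncation)
    chop(x, opt) - 1       % 0: x is chopped down to 1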

Our Alumni – Edvin Hopkins

In this blog post, we asked one of our alumni, Edvin Hopkins, a few questions about his time with the Numerical Linear Algebra Group.


Please can you introduce yourself and tell us a bit about your experience before attending the University of Manchester?

I obtained my BA in Mathematics from the University of Cambridge in 2005 and remained there for a few more years to do a PhD in numerical relativity. My association with the University of Manchester began in 2010, when I joined the NLA group as a KTP Associate, working on a joint project with NAG to implement some of the NLA group’s matrix function algorithms for the NAG Library.

Why did you choose to work with the University of Manchester?

The project I was involved in was a great opportunity to bridge the gap between academia and industry and to work with world leaders in their fields.

How did you find Manchester?

Well, I’m still there! It has really grown on me in the past few years, and is a great place to work.

Can you tell us about your career since leaving Manchester?

At the end of the KTP project I continued in the NLA group as a postdoctoral research associate, working with Professor Nick Higham for a year and a half on his ERC-funded project on matrix functions. I then returned to work for NAG (in their Manchester office), which is where I am now. NAG still has very strong links with the University of Manchester and with the NLA group in particular.

What is your current role?

I am a Technical Consultant at NAG. My work involves implementing mathematical algorithms for the NAG Library, and high performance computing consultancy projects.

Wilkinson and Backward Error Analysis

by Sven Hammarling and Nick Higham

It is often thought that Jim Wilkinson developed backward error analysis because of his early involvement in solving systems of linear equations. In his 1970 Turing lecture [5] he described an experience, during World War II at the Armament Research Department, of solving a system of twelve linear equations on a desk computer, using Gaussian elimination. (He doesn’t say how long it took, but it must surely have been several days.) The coefficients were of order unity and, using ten decimal digit computation, he found that the coefficients of the reduced equation determining x_{12} had four leading zeros, so he felt that the solution could surely have no more than six correct figures. As a check on his calculations, he then computed the residuals and, to his surprise, the left hand sides agreed with the right hand sides to the full ten figures.


A slide of Wilkinson’s describing the solution of the system of 18 equations.

After the war Wilkinson joined the Mathematics Division at the National Physical Laboratory. Soon after his arrival, a system of eighteen equations was given to the Mathematics Division. Solving it required a joint effort, undertaken by Leslie Fox, Eric Goodwin, Alan Turing and Wilkinson. Again, the system was somewhat ill conditioned, as revealed by the final reduced equation, but again, on computing the residuals, the right and left hand sides agreed to full accuracy. Incidentally, Wilkinson and his colleagues used iterative refinement, which convinced them that the first solution had been accurate to six figures.

These experiences did not straightforwardly lead Wilkinson to develop backward error analysis. In [7] he says that he first used backward error analysis in connection with simple programs for computing zeros of polynomials soon after PILOT ACE came into use, specifically a program for evaluating a polynomial by nested multiplication and a program for carrying out polynomial deflation. Nevertheless, he did not at that stage recognise backward error analysis as a general tool. Wilkinson explains:

It is natural to ask why I did not immediately set about using this type of error analysis as a general purpose tool. In retrospect it seems amazing that I did not try it on Gaussian elimination and on various eigenvalue algorithms in which I was keenly interested at the time. The truth is that it did not occur to me for one moment to do so.

His explicit recognition of a tool that he decided to call backward error analysis soon came through his experience of solving eigenvalue problems on PILOT ACE. He states further in [7]:

Because of the small storage capacity of the PILOT ACE virtually the only algorithm that could be used for dealing with large unsymmetric eigenvalue problems was the power method supplemented by various techniques for accelerating convergence. After each eigenvalue/eigenvector was determined this pair was removed by deflation. Now at that time deflation was generally held to be extremely unstable and accordingly I used it at first with great trepidation. However, it soon became evident that it was being remarkably effective.

As with the linear equation problem, Wilkinson computed residuals, r = A\hat{x} - \hat{\lambda} \hat{x}, where \hat{x} and \hat{\lambda} are the computed values, with \hat{x} normalised so that \hat{x}^T \hat{x} = 1, and, even after many deflations, he found that the residuals were remarkably small. He then realised that

(A - r \hat{x}^T) \hat{x} = \hat{\lambda} \hat{x} \;\;\; \mbox{exactly},

and this led him directly to backward error analysis since, if we put E = -r \hat{x}^T, then the normalisation \hat{x}^T \hat{x} = 1 gives (A + E) \hat{x} = A \hat{x} - r = \hat{\lambda} \hat{x}, so that \hat{\lambda} and \hat{x} are an exact eigenvalue and eigenvector of the matrix A + E. He now recognised that the process could be widely used and this, of course, led to his 1963 book Rounding Errors in Algebraic Processes [3] and, soon after, to The Algebraic Eigenvalue Problem [4].
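
As an illustration (a sketch of the idea, not taken from Wilkinson), the construction can be checked numerically in MATLAB, using a symmetric matrix so that the computed eigenpairs are real:

    % Sketch: the backward error matrix E = -r*x' makes a computed
    % eigenpair exact for the perturbed matrix A + E.
    n = 8;
    A = randn(n); A = A + A';     % symmetric, so the eigenpairs are real
    [V, D] = eig(A);
    x = V(:,1);                   % computed eigenvector, with x'*x = 1
    lambda = D(1,1);              % corresponding computed eigenvalue
    r = A*x - lambda*x;           % residual
    E = -r*x';                    % backward error matrix
    norm((A + E)*x - lambda*x)    % zero to within roundoff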

It should be noted that Wilkinson did not claim to be the first to perform a backward error analysis. He attributes the first analysis to von Neumann and Goldstine in their 1947 paper [2], which, as Wilkinson said in his von Neumann prize paper, “is not exactly bedtime reading” [6]. Wilkinson also gives great credit to Givens for his backward error analysis of orthogonal tridiagonalisation in his sadly unpublished technical report [1].

References

[1]   W. Givens. Numerical computation of the characteristic values of a real symmetric matrix. Technical Report ORNL-1574, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA, 1954.

[2]   J. von Neumann and H. H. Goldstine. Numerical inverting of matrices of high order. Bull. Amer. Math. Soc., 53:1021–1099, 1947.

[3]   J. H. Wilkinson. Rounding Errors in Algebraic Processes. Notes on Applied Science, No. 32. HMSO, London, UK, 1963. (Also published by Prentice-Hall, Englewood Cliffs, NJ, USA, 1964. Reprinted by Dover Publications, New York, 1994.)

[4]   J. H. Wilkinson. The Algebraic Eigenvalue Problem. Oxford University Press, Oxford, UK, 1965.

[5]   J. H. Wilkinson. Some comments from a numerical analyst. J. ACM, 18:137–147, 1971. (The 1970 A. M. Turing lecture).

[6]   J. H. Wilkinson. Modern error analysis. SIAM Review, 13:548–568, 1971. (The 1970 von Neumann lecture).

[7]   J. H. Wilkinson. The state of the art in error analysis. NAG Newsletter, 2/85:5–28, 1985. (Invited lecture for the NAG 1984 Annual General Meeting).

Our Alumni – Craig Lucas

In this blog post, we asked one of our alumni, Craig Lucas, a few questions about his time with the Numerical Linear Algebra Group.


Please can you introduce yourself and tell us a bit about your experience before attending the University of Manchester?

I came to study Mathematics a little later than usual. I was a civil engineering technician working in land reclamation in Staffordshire and needed a change! I was always told I was good at maths and thought at 27 I should get a degree. I am very grateful to Graham Bowtell at City University, who took a chance on someone without A-levels. I developed an interest in Numerical Analysis and computing and wanted to take my study as far as I could. That brought me to Manchester for an MSc, and ultimately a PhD.

What was your PhD thesis on?

My thesis, supervised by Nick Higham, was “Algorithms for Cholesky and QR Factorizations, and the Semidefinite Generalized Eigenvalue Problem”, arguably a rag-bag of algorithms building on my MSc experience with symmetric matrices. I also met and worked with Sven Hammarling on QR updating. He was working for NAG at the time, as I do now.

Why did you choose to study your PhD in Manchester?

During my MSc I realised I was working with world leaders in their field. It wasn’t a difficult decision to stay on for a PhD, in fact, I felt incredibly lucky to have that opportunity.

How did you find Manchester?

I hated it! I had come up from London and it felt like a whole new world. I wasn’t used to strangers talking to me in the street! However, after about 18 months the place really started to grow on me, and now, nearly 20 years later, I can’t imagine living anywhere else. We have an incredible arts scene, fantastic restaurants, brilliant transport links and a cost of living that makes living back in London seem ridiculous.

Can you tell us about your career since leaving Manchester?

Firstly, I never really left. In the 15 years since I finished my PhD I have taught on my old MSc, supervised students, and worked on several KTP projects jointly with the Numerical Linear Algebra group. After my PhD I first went to work in research computing at Manchester, in high performance computing (HPC). Then, just over 10 years ago, I joined NAG, where I could use both my numerical analysis and HPC skills.

What is your current role?

I run NAG’s Manchester Office (a rather nice penthouse on Portland Street with a roof terrace) and the HPC team here. I am supervising my third KTP, am involved in running NAG’s contribution to the EU POP project, and every now and then I write some mathematical software.
