Monthly Archives: October 2018

Dame Kathleen Ollerenshaw Fellowships (8 Positions)

The Faculty of Science and Engineering at The University of Manchester has eight Dame Kathleen Ollerenshaw Fellowships available. Each Fellowship is for an initial 5-year period, leading to full academic tenure on completion, subject to performance and probation. The salary is £40,792 to £50,132 per annum (according to relevant experience).

The flagship Dame Kathleen Ollerenshaw Research Fellowships are aimed at outstanding scientists and engineers at an early stage in their academic careers. Fellows should show a high level of creativity and ambition in their ideas and want to develop potentially transformative research.

The Fellowship is for Early Career Researchers (ECRs). Applicants are expected to hold a PhD by the start date of the Fellowship or to have equivalent research experience. There are no eligibility rules based on years of post-doctoral experience or on whether the applicant holds a permanent academic position. The ethos of the Early Career Researcher scheme is to support candidates who have a track record of outstanding research and of delivering impact.

The closing date is November 23, 2018. More information about the Dame Kathleen Ollerenshaw Research Fellowships is available in the advert and further particulars here.



ICIAM 2019 Invited Speaker – Nick Higham


Professor Nick Higham has been selected as an invited speaker at the International Congress on Industrial and Applied Mathematics (ICIAM) in Valencia, Spain, July 2019.

ICIAM is an international congress in applied mathematics held every four years. ICIAM 2019 will serve as a showcase for the most recent advances in industrial and applied mathematics.

More information about ICIAM 2019 is available here.

Heilbronn Fellowships in Mathematics (3 positions)

The School of Mathematics at the University of Manchester has three Heilbronn Fellowships in Mathematics available, in association with the Heilbronn Institute for Mathematical Research. Experience in Algebra or Numerical Linear Algebra, interpreted broadly, is preferred. The Fellowships last for three years, starting in October 2019 or at a mutually agreed alternative date. The salary is £37,345 to £42,036 per annum (according to relevant experience) plus a supplement of £3,500 per annum, and at least £2,500 per annum is available for research expenses.

The Heilbronn Institute for Mathematical Research (HIMR) is a major national centre which works in collaboration with universities and Government Communication Headquarters (GCHQ) to support mathematics research. It employs more than 30 Heilbronn Fellows, who divide their time between academic research and work for GCHQ. The Institute also runs a highly successful programme of events to promote and further the cause and understanding of advanced mathematical research. These include conferences, focused research groups and workshops. It is named after Professor Hans Heilbronn FRS, who was a major contributor to UK mathematics.

The Heilbronn Fellowship holders will divide their time equally between their own academic research (in the School of Mathematics at the University) and the research programme of the Heilbronn Institute. The Institute’s work offers opportunities to engage in collaborative work as well as individual projects.

The closing date is November 11, 2018. More information about the Heilbronn Fellowship in Mathematics is available in the advert and further particulars here, which also describe security requirements attached to these posts.

Steven Elsworth wins second SIAM Student Travel Award

Steven Elsworth has been awarded a SIAM Student Travel Award to attend the 2019 SIAM Conference on Computational Science and Engineering (CSE19) in Spokane, Washington. These travel awards support strong PhD students to present their work at SIAM conferences with $650 for domestic travel and $800 for intercontinental travel, as well as free registration. The application process is competitive, with a strong record of scholarship being one of the criteria.

Steven is a third-year PhD student on a CASE Studentship with Sabisu, a long-standing industry partner of our School. In Spokane, Steven will give a minisymposium talk about the RKToolbox, which he develops with his supervisor Dr Stefan Güttel. The RKToolbox is written in MATLAB and collects scientific computing tools based on rational Krylov methods. This is already Steven’s second SIAM Student Travel Award: in May 2018 he presented work at the SIAM Conference on Applied Linear Algebra in Hong Kong.

Steven Elsworth

Half Precision Arithmetic in Numerical Linear Algebra

In whatever shape or form, the solution of a linear system of equations is the workhorse for many applications in scientific computing. Therefore our ability to solve these linear systems accurately and efficiently plays a pivotal role in advancing the boundaries of science and technology. Here, efficiency is measured with respect to time: if we have an algorithm that solves a linear system quickly, then we can aspire to solve larger and more difficult problems. The purpose of this post is to describe one of the latest developments in numerical linear algebra in which our group is actively involved, specifically in the solution of systems of linear equations.
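To fix ideas, here is a minimal instance of the workhorse problem, solving Ax = b for a small matrix with NumPy (the specific matrix and right-hand side are illustrative, not taken from any application):

```python
import numpy as np

# A small instance of the workhorse problem: solve Ax = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)
print(x)  # [2. 3.]
```

Under the hood, `np.linalg.solve` calls a LAPACK routine that computes an LU factorization of A in double precision; the rest of this post is about what changes when we can no longer assume double precision throughout.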

Metaphorically one can think of the algorithm used for the solution of a linear system as an engine in a motor vehicle and the underlying application as the body of the vehicle built around it. Throughout this post I will use this metaphor to explain the course of development and current trends in the algorithms for the solution of a linear system of equations.

In scientific computing there are two components: developing an algorithm and implementing it on a computer. If we again draw a parallel with the engine of a motor vehicle, algorithm developers do the job of designing the various components of the engine and the criteria to check that the parts are working as they should, while those implementing the algorithm on a computer do the job of manufacturing the parts, assembling them, and making sure the engine works as expected. Finally, the most important requirement for the engine to work is the fuel, and for mathematical algorithms the fuel is numbers. In a computer these numbers are stored in a specific format; they are called floating point numbers.

Until very recently computers were getting faster every year, and computer scientists were devising intelligent ways of using existing algorithms to solve larger and larger problems on the new computers. One important point to note is that, even though the computers were becoming more powerful, the underlying mathematics of the algorithms did not change. Drawing a parallel with the engine analogy again: the engine became more powerful and therefore the motor vehicle built around it became bigger, but the basic design of the engine parts and the fuel used remained the same. But this was soon about to change!

Traditionally, double precision or single precision floating point numbers are used for computation. A double precision number occupies 64 bits of memory and a single precision number occupies 32 bits. Double and single precision numbers carry a lot of information, but at the same time they cause congestion in communication channels! Think of a communication channel in a computer as a road connecting point A to point B: the width of the road is fixed, so if we send too many trucks down it they will cause a traffic jam, even though each truck can carry a lot of goods. The natural solution is to use smaller vehicles instead, but this drastically reduces the amount of goods that can be transported. This is exactly the solution proposed for relieving congestion in communication channels, at the cost of the amount of information that can be transferred. The resulting floating point format is called half precision, and a single half precision number occupies 16 bits of memory.

The development of half precision as a floating point format was kick-started by developments in machine learning, where it was found that, for accurate prediction, machine learning models did not require a very accurate representation of the input data. Because of this development in the machine learning community, hardware vendors such as NVIDIA and AMD started developing chips that support half precision. The world's fastest supercomputer, Summit at Oak Ridge National Laboratory, can perform 3.3 × 10^18 operations per second when half precision is used. However, when a linear system of equations is solved on the same computer using the High Performance LINPACK benchmark, which works in double precision, it performs 122.3 × 10^15 operations per second. Therefore, to capitalise on the recent advances in hardware, we need to exploit half precision in the solution of linear systems of equations.
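The trade-off between memory and accuracy across the three formats can be made concrete with NumPy, which provides an IEEE half precision type, `np.float16`. This short sketch prints the width and unit roundoff (machine epsilon) of each format and shows how much accuracy is lost when a simple number like 1/3 is stored in half precision:

```python
import numpy as np

# Bit width and machine epsilon for the three IEEE formats.
for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__}: {info.bits} bits, eps = {info.eps}")

# Half precision carries only about 3-4 significant decimal digits,
# so rounding 1/3 to float16 already loses accuracy visibly.
x = np.float64(1.0) / 3.0
x_half = np.float16(x)
print(abs(x - np.float64(x_half)))  # error of about 8e-5
```

The epsilon values (roughly 1e-3 for half, 1e-7 for single, 2e-16 for double) quantify the "goods per truck" in the road analogy: a half precision number is a quarter of the size of a double precision one, but carries far fewer correct digits.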

[Image: the Summit supercomputer. Credit: Oak Ridge National Lab Summit Gallery]


Returning to our engine analogy, using half precision is like changing the fuel. As we are all aware, we cannot use diesel in a petrol engine, so the engine has to be redesigned; in the context of linear systems, the algorithms have to be redesigned. This has been precisely one of the core research efforts in our group. To conclude, we are at a very interesting point in the development of algorithms for the solution of linear systems of equations, where the emerging architectures provide interesting opportunities for numerical analysts to rethink old algorithms.
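One well-known way of "redesigning the engine" is mixed-precision iterative refinement: factorize and solve the system cheaply in low precision, then recover higher accuracy through inexpensive correction steps carried out in higher precision. The following is a simplified sketch of that idea in NumPy, not the group's actual method; it uses `float32` as the stand-in for the low precision (NumPy's LAPACK routines do not run in `float16`), and it re-solves with the low precision matrix rather than reusing stored LU factors as a real implementation would:

```python
import numpy as np

def ir_solve(A, b, iters=5):
    """Solve Ax = b: initial solve in single precision,
    then refine the solution in double precision."""
    A32 = A.astype(np.float32)
    # Low-precision solve; stands in for an LU factorization
    # computed and reused in reduced precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x  # residual computed in double precision
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d         # correction step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
x_true = rng.standard_normal(100)
b = A @ x_true

x = ir_solve(A, b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
# close to double-precision accuracy, despite the low-precision solves
```

The expensive O(n^3) factorization work happens in the fast, cheap precision, while the O(n^2) residual and correction steps in double precision drive the error down; the same scheme, with half precision in place of single, is what makes the half precision hardware described above usable for solving linear systems.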

For recent relevant work in the group, see

Further work is in progress and will be reported here soon.