Meetings

Group meetings are on Thursdays at 11am in Frank Adams 1. If you would like to give a presentation, please contact Stephanie Lai.

Forthcoming Meetings

Thursday November 28

Thomas McSweeney on “An efficient new static scheduling heuristic for accelerated architectures”.
Abstract: Heterogeneous architectures comprising multicore CPUs and GPUs are increasingly common, both in high-performance computing and beyond. However, many of the existing methods for scheduling precedence-constrained tasks on such platforms, such as the classic Heterogeneous Earliest Finish Time (HEFT) heuristic, were originally intended for clusters comprising many diverse nodes. In this talk I briefly outline HEFT before introducing a new static scheduling heuristic called Heterogeneous Optimistic Finish Time (HOFT), which has the same structure but exploits the low degree of heterogeneity of accelerated environments. Using custom software for simulating task scheduling problems on user-defined CPU-GPU platforms, I present results showing that HOFT can obtain schedules at least 5% shorter than HEFT’s for certain numerical linear algebra applications.
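
For readers unfamiliar with HEFT, the following minimal Python sketch illustrates the general list-scheduling pattern it follows: rank tasks by a mean-cost upward rank, then greedily assign each one to the processor giving the earliest finish time. The task graph, processor names and costs are made up purely for illustration, communication costs are ignored, and this is not the speaker's HOFT or simulation software.

    # Minimal list-scheduling sketch in the spirit of HEFT (communication costs
    # ignored; task DAG and costs are hypothetical, for illustration only).
    import functools

    succ = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"], "t4": []}  # task -> successors
    cost = {  # execution time of each task on each processor
        "t1": {"cpu0": 4.0, "cpu1": 4.0, "gpu0": 1.0},
        "t2": {"cpu0": 3.0, "cpu1": 3.0, "gpu0": 2.0},
        "t3": {"cpu0": 5.0, "cpu1": 5.0, "gpu0": 1.5},
        "t4": {"cpu0": 2.0, "cpu1": 2.0, "gpu0": 0.5},
    }
    procs = ["cpu0", "cpu1", "gpu0"]
    pred = {t: [u for u in succ if t in succ[u]] for t in succ}  # task -> predecessors

    @functools.lru_cache(maxsize=None)
    def upward_rank(task):
        # Mean execution time of the task plus the largest rank among its successors.
        mean = sum(cost[task].values()) / len(cost[task])
        return mean + max((upward_rank(s) for s in succ[task]), default=0.0)

    ready = {p: 0.0 for p in procs}  # time at which each processor becomes free
    finish = {}                      # finish time of each scheduled task

    # Schedule in decreasing upward rank (which respects precedence), placing each
    # task on the processor that minimises its earliest finish time (EFT).
    for task in sorted(succ, key=upward_rank, reverse=True):
        data_ready = max((finish[p] for p in pred[task]), default=0.0)
        best = min(procs, key=lambda p: max(ready[p], data_ready) + cost[task][p])
        start = max(ready[best], data_ready)
        finish[task] = start + cost[task][best]
        ready[best] = finish[task]
        print(f"{task} -> {best}: start {start:.1f}, finish {finish[task]:.1f}")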

 

Past Meetings 2019-2020

Thursday November 21

Srikara Pranesh on “Exploiting Lower Precision Arithmetic in Solving Symmetric Positive Definite Linear Systems”.

Thursday November 14

Xiaobo Liu on “On the Computation of the Scalar and the Matrix Mittag-Leffler Functions”.
Abstract: In this talk, I will introduce existing methods for computing the scalar Mittag-Leffler (ML) function and discuss their generalizability to the matrix case. I will also present our algorithm (based on numerical inversion of the Laplace transform by the trapezoidal rule) for computing the matrix function on the real line, along with some numerical experiments. Finally, I will share the difficulties we are facing in the computation and our ideas for addressing them.
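
For context, the scalar ML function is defined by the power series E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1). The short sketch below simply evaluates a truncation of that series; it is adequate only for modest |z| and is not the Laplace-transform-based algorithm described in the talk.

    # Scalar Mittag-Leffler function via its defining power series; a naive sketch
    # for modest |z| only, not the Laplace-transform algorithm from the talk.
    from math import exp, gamma

    def mittag_leffler(z, alpha, n_terms=100):
        return sum(z**k / gamma(alpha * k + 1) for k in range(n_terms))

    # Sanity check: E_1(z) reduces to exp(z).
    print(mittag_leffler(1.0, 1.0), exp(1.0))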

Thursday November 07

Marcus Webb on “The infinite dimensional QL factorisation”.
Abstract: Finite dimensional matrices sometimes come from truncating a highly structured matrix with infinitely many elements. The most common approach for computing the eigenvalues (and more generally, the spectrum) of such an infinite dimensional matrix is to take the n x n principal submatrix and compute the eigenvalues of that. In principle, if n is sufficiently large then the spectrum of the main object of interest is well approximated, but this approach can fail catastrophically for some embarrassingly simple examples. As an alternative, Sheehan Olver (Imperial College London) and I have been exploring computing spectra via the QL factorisation of highly structured infinite dimensional matrices, with some surprising and mind-bending results. Examples will be demonstrated live in Julia.
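
To give a flavour of how truncation can mislead, here is a standard example (not necessarily one from the talk): every finite section of the shift operator is nilpotent, so its only eigenvalue is 0, whereas the spectrum of the infinite-dimensional operator is the closed unit disc.

    # Finite sections of the shift operator: every n x n principal submatrix is
    # strictly triangular, so its eigenvalues are all 0 no matter how large n is,
    # while the spectrum of the infinite-dimensional operator is the closed unit disc.
    import numpy as np

    for n in (10, 100, 1000):
        S = np.diag(np.ones(n - 1), k=1)                # n x n truncation of the shift
        print(n, np.max(np.abs(np.linalg.eigvals(S))))  # ~0 for every n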

Thursday October 31

Massimiliano Fasi on “Generating large matrices with pre-assigned 2-norm condition number”. Slides.

Thursday October 24

Mantas Mikaitis on “Solving neural ODEs using fixed-point arithmetic with stochastic rounding”. Slides.
Abstract: In this talk I will go through some of the experimental results with ODE solvers in fixed-point arithmetic, using stochastic rounding in multiplications. This work was carried out as part of my PhD in the Department of Computer Science. The main goal was to improve the accuracy of the Izhikevich neuron model, which is described by an ODE that does not have a closed-form solution. The neuron model is simulated on the SpiNNaker neuromorphic computer, a large-scale neuromorphic platform (1 million ARM968 cores) designed in Manchester.
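
As an illustration of the rounding mode involved, here is a minimal Python model of stochastic rounding to a fixed-point grid: a value is rounded up with probability equal to its fractional part, so the rounding is unbiased in expectation. This is only a model of the idea, not SpiNNaker's actual arithmetic.

    # Stochastic rounding to a fixed-point grid with `frac_bits` fractional bits.
    # Illustrative model only; not SpiNNaker's fixed-point hardware or software.
    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_round(x, frac_bits=15):
        y = np.asarray(x) * 2.0**frac_bits
        lo = np.floor(y)
        round_up = rng.random(y.shape) < (y - lo)  # P(round up) = fractional part
        return (lo + round_up) / 2.0**frac_bits

    x = np.full(100_000, 1e-6)          # far below the grid spacing 2^-15
    print(stochastic_round(x).mean())   # ~1e-6 on average; round-to-nearest would give 0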

Thursday October 17

Nick Higham on “Tips and Tricks for Research Workflow”. Slides.
Abstract: I will discuss various tools and websites that can help us in our research.

Thursday October 10

Michael Connolly on “Stochastic rounding of floating-point arithmetic”. Slides.

Thursday October 3

Gian Maria Negri Porzio on “The AAA algorithm and its variations to approximate matrix-valued functions”. Slides.

Past Meetings 2018-2019

Wednesday September 11

Theo Mary on “Numerical Stability of Block Low-Rank LU Factorization”.
Abstract: Block low-rank (BLR) matrices exploit blockwise low-rank approximations to reduce the complexity of numerical linear algebra algorithms. The impact of these approximations on the numerical stability of the algorithms in floating-point arithmetic has not previously been analyzed. We present a rounding error analysis for the solution of a linear system by LU factorization of BLR matrices. We prove backward stability, assuming that a stable pivoting scheme is used, and obtain new insights into the numerical behavior of BLR variants. We show that the predictions from the analysis are realized in practice by testing them numerically on a wide range of matrices coming from various real-life applications.
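
For a taste of the compression idea underlying BLR matrices (not the LU factorization or its error analysis), the sketch below compresses a single block with a truncated SVD, discarding singular values below a relative tolerance. The block used here is a made-up example.

    # Low-rank compression of a single block via a truncated SVD, the building block
    # of BLR representations; illustrative only, not the BLR LU factorization itself.
    import numpy as np

    def compress_block(B, tol):
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))      # keep singular values above tol * sigma_max
        return U[:, :r] * s[:r], Vt[:r, :]   # factors X, Y with B ~= X @ Y

    rng = np.random.default_rng(0)
    B = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))  # rank-8 block
    B += 1e-10 * rng.standard_normal((64, 64))                       # small perturbation
    X, Y = compress_block(B, tol=1e-8)
    print(X.shape[1], np.linalg.norm(B - X @ Y) / np.linalg.norm(B))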

Wednesday July 31

Elisa Riccietti on “On the iterative solution of systems of the form A^TAx=A^Tb+c”. Abstract, Slides.

Wednesday June 19

Theo Mary on “Sharper and smaller rounding error bounds for low precision scientific computing”.

Wednesday May 22

Conor Rogers on “Some experiments on rational approximation with noisy data”. Slides.

Wednesday May 8

Florent Lopez on “Performance and accuracy of the LU factorisation on GPU using Tensor Cores”.
Abstract: We currently observe the emergence of new hardware supporting low precision floating-point formats, such as the IEEE half precision format. This is, for example, the case for recent NVIDIA GPUs such as the Volta and Turing architectures, which dramatically improve the performance of half-precision computations with the introduction of new processing units called Tensor Cores. In this talk we discuss the use of Tensor Cores for performing a reduced precision LU factorisation and we show the benefits of exploiting such processing units, in terms of both performance and accuracy, compared with traditional single and half precision variants.
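
As a rough illustration of the arithmetic involved (a NumPy emulation, not actual GPU code), the sketch below rounds the inputs of a matrix product to fp16 but accumulates in fp32, mimicking the fp16-input / fp32-accumulate mode of Tensor Cores, and compares the result with a double precision reference.

    # Crude NumPy emulation of a Tensor-Core-style GEMM: inputs rounded to fp16,
    # products accumulated in fp32. Not actual GPU code; for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 500))
    B = rng.standard_normal((500, 500))

    A16, B16 = A.astype(np.float16), B.astype(np.float16)     # rounding error in the inputs
    C_mixed = A16.astype(np.float32) @ B16.astype(np.float32) # accumulation in fp32
    C_exact = A @ B                                           # double precision reference

    print(np.linalg.norm(C_exact - C_mixed) / np.linalg.norm(C_exact))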

Wednesday May 1

Srikara Pranesh on “Simulating Low Precision Floating-Point Arithmetic”.

Wednesday April 3

Michael Connolly on “Multiprecision Inverse Computation”. Slides.
Abstract: In this talk we discuss the Newton-Schulz iteration for computing the generalized inverse of a matrix. We discuss numerical results arising from a multiprecision implementation of this algorithm, as well as the use of a low precision solution to this problem as a preconditioner for mixed precision iterative refinement.
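
For reference, the iteration in question is X_{k+1} = X_k(2I - A X_k). The sketch below runs it in plain double precision from the standard starting matrix X_0 = A^T / (||A||_1 ||A||_inf), which guarantees convergence for nonsingular A; it is not the multiprecision implementation discussed in the talk.

    # Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k) for the matrix inverse,
    # in plain double precision; a sketch, not the multiprecision code from the talk.
    import numpy as np

    def newton_schulz(A, n_iter=60):
        n = A.shape[0]
        # Standard starting matrix guaranteeing convergence for nonsingular A.
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        for _ in range(n_iter):
            X = X @ (2 * np.eye(n) - A @ X)
        return X

    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 10))
    X = newton_schulz(A)
    print(np.linalg.norm(X @ A - np.eye(10)))   # small residual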

Wednesday March 27

Xiaobo Liu on “Computing the Matrix Mittag-Leffler Function”. Slides.
Abstract: Just as the matrix exponential arises in the solution of systems of linear differential equations, the matrix Mittag-Leffler (ML) function plays the analogous role for systems of linear fractional differential equations, which are widely used to model physical and engineering processes. In this talk, I will mainly review a recent paper on computing the matrix ML function based on the Schur-Parlett algorithm.

Wednesday March 19

Mawussi Zounon on PLASMA and the NAG Library.

Wednesday March 13

Xuelei Lin from Hong Kong Baptist University
Title: Crank-Nicolson ADI method for space-fractional diffusion equations with non-separable coefficients
Abstract: The alternating direction implicit (ADI) scheme is a popular and efficient solver for linear systems arising from the discretization of high-dimensional fractional PDEs. However, the ADI schemes developed in previous works require strong assumptions, such as separable or even constant coefficients. In this talk, I will introduce our newly developed theory for the Crank-Nicolson ADI scheme, which does not require the separable-coefficients assumption. Also, an optimal preconditioning technique is applied to the one-dimensional problems arising from the ADI scheme, for which the condition numbers of the preconditioned matrices are proven to be independent of the discretization step sizes.

Wednesday March 6

Chris Armstrong from Arm
Title: Arm Performance Libraries FFT development
Abstract: In this talk I will give a quick overview of Arm Performance Libraries before describing the work we have undertaken to improve the performance of our Fast Fourier Transform (FFT) functions. I will outline the algorithms we use and our results to date.

Wednesday January 30

Pierre Blanchard on “Yet Another Fast And Accurate Summation Algorithm”.

Wednesday January 23

Sven Hammarling on “Machines for the Solution of Linear Equations”. Notes, Slides and References.

Tuesday December 11

Massimiliano Fasi on “Efficient and Accurate Evaluation of Polynomials of Matrices”. Notes and Slides.

Tuesday December 4

Theo Mary on “A New Approach to Rounding Error Analysis”. Paper and Slides.

Tuesday November 20

Lijing Lin on “Phenotyping immune responses in asthma and respiratory infections using clustering techniques”. An abstract for the talk is available here.

Tuesday November 13

Srikara Pranesh on “Squeezing a Matrix Into Half Precision, with an Application to Solving Linear Systems”.

Tuesday November 6

Elisa Riccietti on “High-order multilevel optimization strategies and their application to the training of artificial neural networks”. An abstract for the talk is available here.

Tuesday October 30

Gian Maria Negri Porzio on “The contour integral approach for the nonlinear eigenvalue problem”.

Tuesday October 23

Nick Higham on “The adjugate matrix”. From the meeting: Notes.

Tuesday October 16

Tom McSweeney on “Task scheduling in high-performance computing”. From the meeting: Slides, Notes.

Tuesday October 9

Nick Higham on “Optimizing the Wilson Matrix”. From the meeting: Notes.

Here is the group photo from the meeting (hi-res version).

Tuesday October 2

Pierre Blanchard on “Optimized Polar Decomposition for Modern Computer Architectures”. From the meeting: Slides, Notes.

Tuesday September 25

Introductions, new website, and plans for the academic year, including conferences. From the meeting: Notes.