Meetings

1 May 2020: For the time being, meetings are on Wednesdays at 10.30 via Zoom.

Group meetings are on Thursdays at 11am in Frank Adams 1. If you would like to give a presentation, please contact Stephanie Lai.

Forthcoming Meetings

Wednesday May 27, 2020

Nick Higham on “Random Matrices Generating Large Growth in LU Factorization with Pivoting”.

Past Meetings

Wednesday May 20, 2020

Michael Connolly on “Comparator Precision in Stochastic Rounding”.

Wednesday May 13, 2020

Mantas Mikaitis on “Numerical Behavior of the NVIDIA Tensor Cores”.  Slides.

Wednesday May 6, 2020

Theo Mary on “GMRES-based Iterative Refinement in Five Precisions for Sparse Direct Solvers”. Slides.

Thursday March 26, 2020 – Cancelled

Theo Mary on “GMRES-based Iterative Refinement in Three and Four Precisions for Sparse Direct Solvers”.
Abstract: We will discuss the potential of LU-based and GMRES-based iterative refinement in multiple precisions for the solution of large, sparse systems of linear equations. Compared with dense linear solvers, sparse solvers have key characteristics that lead to different tradeoffs between robustness, performance, and memory consumption. These characteristics motivate new GMRES-based iterative refinement (IR) variants, some of which use up to four different precisions. Preliminary results using the MUMPS solver will illustrate the discussion.
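
The basic idea of mixed-precision iterative refinement can be sketched as follows. This is a toy dense LU-based variant with a direct low-precision solve standing in for GMRES; the sparse, MUMPS-based algorithms of the talk are considerably more sophisticated.

```python
import numpy as np

# Minimal sketch: solve in single precision, but compute residuals and
# accumulate the solution in double precision.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

A32 = A.astype(np.float32)                        # "low precision" copy
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
for _ in range(5):
    r = b - A @ x                                 # residual in double precision
    d = np.linalg.solve(A32, r.astype(np.float32))  # correction in single precision
    x += d.astype(np.float64)                     # update in double precision

print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # near double-precision level
```

In practice one would factorize A once in low precision and reuse the factors for every correction solve; the repeated `solve` calls here are only to keep the sketch short.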

Thursday March 19, 2020 – Cancelled

Steven Elsworth on “Time Series Forecasting Using LSTM Networks: A Symbolic Approach”.

Thursday March 5, 2020

Nick Higham on “Computing Matrix Factorizations with Matrix Multiplication”.

Thursday February 27, 2020

Mawussi Zounon on “Multiprecision algorithms for sparse matrix computations”. Slides.

Thursday February 20, 2020

Françoise Tisseur on “Min-Max Elementwise Backward Error for Roots of Polynomials and a Corresponding Backward Stable Root Finder”.

Thursday February 6, 2020

Oliver Sheridan-Methven (University of Oxford) on “Numerical simulations using approximate random numbers”.
Abstract: Treating random number generation as a computational bottleneck, we introduce approximate random variables which are computationally cheap. We substitute these approximate random variables into the Euler–Maruyama scheme for stochastic differential equations and show that the scheme still converges under the substitution. We incorporate this into a multilevel Monte Carlo framework and show that the modified scheme converges, with the errors introduced coupling together. We also highlight the added benefits of combining this approach with low-precision formats. Slides.
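
The flavour of the idea can be illustrated with a crude stand-in for the approximate random variables: draws from a small precomputed table, substituted into Euler–Maruyama for geometric Brownian motion. The table-based sampler below is a hypothetical toy, not the approximations of the talk.

```python
import numpy as np

# Euler-Maruyama for dX = mu*X dt + sigma*X dW, with normal increments
# replaced by cheap lookups into a small table (normalized to mean 0,
# variance 1 so the drift is not biased).
rng = np.random.default_rng(42)
mu, sigma, T, M, paths = 0.05, 0.2, 1.0, 64, 100_000
dt = T / M

table = rng.standard_normal(1024)
table = (table - table.mean()) / table.std()      # enforce mean 0, variance 1

def approx_normal(size):
    return table[rng.integers(0, table.size, size)]   # cheap table lookup

X = np.ones(paths)
for _ in range(M):
    X += mu * X * dt + sigma * np.sqrt(dt) * X * approx_normal(paths)

print(X.mean())   # should stay close to the exact mean exp(mu*T), about 1.051
```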

Thursday January 30, 2020

Massimiliano Fasi on “Generating matrices with a given infinity-norm condition number”.

Thursday January 23, 2020

Srikara Pranesh on “Three Precision GMRES-Based Iterative Refinement for Least Squares Problems”. Slides.

Thursday November 28, 2019

Thomas McSweeney on “An efficient new static scheduling heuristic for accelerated architectures”. Slides.
Abstract: Heterogeneous architectures comprising multicore CPUs and GPUs are increasingly common, both in high-performance computing and beyond. However, many of the existing methods for scheduling precedence-constrained tasks on such platforms, such as the classic Heterogeneous Earliest Finish Time (HEFT) heuristic, were originally intended for clusters comprising many diverse nodes. In this talk I briefly outline HEFT before introducing a new static scheduling heuristic, Heterogeneous Optimistic Finish Time (HOFT), which has the same structure but exploits the low degree of heterogeneity of accelerated environments. Using custom software for simulating task scheduling problems on user-defined CPU–GPU platforms, I present results showing that HOFT can obtain schedules at least 5% shorter than HEFT’s for certain numerical linear algebra applications.
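
The two-phase structure shared by HEFT and HOFT (prioritize tasks by upward rank, then greedily pick the processor giving the earliest finish time) can be sketched on a hypothetical two-processor diamond DAG. The costs below are made up for illustration, and insertion-based gap filling and HOFT's modified estimates are omitted.

```python
import numpy as np

# cost[t][p] = execution time of task t on processor p (0 = CPU, 1 = GPU).
cost = {0: [4, 1], 1: [3, 3], 2: [2, 5], 3: [4, 1]}
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
pred = {0: [], 1: [0], 2: [0], 3: [1, 2]}
comm = 1.0  # communication cost when parent and child sit on different processors

# Phase 1: upward rank = mean cost plus the largest rank among successors.
rank = {}
for t in sorted(succ, reverse=True):
    rank[t] = np.mean(cost[t]) + max((comm + rank[s] for s in succ[t]), default=0.0)

# Phase 2: in decreasing rank order, assign each task to the processor
# that minimizes its earliest finish time.
free = [0.0, 0.0]              # earliest free time of each processor
finish, where = {}, {}
for t in sorted(succ, key=lambda t: -rank[t]):
    best = None
    for p in range(2):
        ready = max((finish[q] + (0.0 if where[q] == p else comm) for q in pred[t]),
                    default=0.0)
        f = max(free[p], ready) + cost[t][p]
        if best is None or f < best[0]:
            best = (f, p)
    finish[t], where[t] = best
    free[best[1]] = best[0]

print(max(finish.values()))    # makespan of the greedy schedule
```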

Thursday November 21, 2019

Srikara Pranesh on “Exploiting Lower Precision Arithmetic in Solving Symmetric Positive Definite Linear Systems”. Slides.

Thursday November 14, 2019

Xiaobo Liu on “On the Computation of the Scalar and the Matrix Mittag-Leffler Functions”. Slides.
Abstract: In this talk, I will introduce existing methods for computing the scalar Mittag-Leffler (ML) function and discuss how well they generalize to the matrix case. I will also present our algorithm, based on numerical inversion of the Laplace transform by the trapezoidal rule, for computing the matrix function on the real line, with some numerical experiments. Finally, I will share the difficulties we are facing in the computation and our ideas for addressing them.
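
For orientation, the scalar ML function is defined by the series E_{α,β}(z) = Σ_k z^k / Γ(αk + β), which can be summed naively for moderate |z|. The sketch below is only this naive series, not the Laplace-inversion algorithm of the talk, which is designed to be reliable where the series is not.

```python
from math import gamma

def ml_series(z, alpha, beta=1.0, terms=60):
    """Truncated series for the scalar Mittag-Leffler function
    E_{alpha,beta}(z); only trustworthy for moderate |z|."""
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

# Sanity checks against special cases:
# E_{1,1}(z) = exp(z) and E_{2,1}(z) = cosh(sqrt(z)) for z > 0.
print(ml_series(1.0, 1.0))   # close to e = 2.71828...
print(ml_series(1.0, 2.0))   # close to cosh(1) = 1.54308...
```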

Thursday November 7, 2019

Marcus Webb on “The infinite dimensional QL factorisation”.
Abstract: Finite dimensional matrices sometimes come from truncating a highly structured matrix with infinitely many elements. The most common approach for computing the eigenvalues (and more generally, the spectrum) of such an infinite dimensional matrix is to take the n × n principal submatrix and compute its eigenvalues. In principle, if n is sufficiently large then the spectrum of the main object of interest is well approximated, but this approach can fail catastrophically for some embarrassingly simple examples. As an alternative, Sheehan Olver (Imperial College London) and I have been exploring computing spectra via the QL factorisation of highly structured infinite dimensional matrices, with some surprising and mind-bending results. Examples will be demonstrated live in Julia.
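
One classical example of the failure mode (not necessarily the one from the talk): the bi-infinite shift operator has spectrum equal to the unit circle, yet every n × n principal submatrix is a nilpotent Jordan block, so truncation reports every eigenvalue as 0 no matter how large n is taken.

```python
import numpy as np

# Principal n x n submatrix of the shift operator: ones on the
# superdiagonal, zeros elsewhere. Its exact eigenvalues are all 0,
# while the infinite operator's spectrum is the unit circle.
n = 200
S = np.diag(np.ones(n - 1), k=1)
print(np.max(np.abs(np.linalg.eigvals(S))))   # approximately 0, not 1, for every n
```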

Thursday October 31, 2019

Massimiliano Fasi on “Generating large matrices with pre-assigned 2-norm condition number”. Slides.

Thursday October 24, 2019

Mantas Mikaitis on “Solving neural ODEs using fixed-point arithmetic with stochastic rounding”. Slides.
Abstract: In this talk I will go through some of our experimental results with ODE solvers in fixed-point arithmetic, using stochastic rounding in multiplications. This work was carried out as part of my PhD in the Department of Computer Science. The main goal was to improve the accuracy of the Izhikevich neuron model, which is described by an ODE that does not have a closed-form solution. The neuron model is simulated on the SpiNNaker neuromorphic computer, a large-scale neuromorphic platform (one million ARM968 cores) designed in Manchester.
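
Stochastic rounding itself is easy to demonstrate: a value is rounded up with probability equal to its fractional remainder on the target grid, which makes the rounding unbiased on average. The sketch below rounds to a fixed-point grid with a given number of fractional bits; the SpiNNaker hardware details of the talk are not modelled.

```python
import numpy as np

rng = np.random.default_rng(1)

def sr_fixed(x, frac_bits):
    """Stochastically round x to a fixed-point grid with frac_bits
    fractional bits: round up with probability = fractional remainder."""
    scale = 2.0 ** frac_bits
    y = np.asarray(x, dtype=np.float64) * scale
    lo = np.floor(y)
    round_up = rng.random(y.shape) < (y - lo)   # P(round up) = fractional part
    return (lo + round_up) / scale

# 0.1 is not representable with 8 fractional bits, yet the *average* of
# many stochastic roundings recovers it, unlike round-to-nearest.
samples = sr_fixed(np.full(100_000, 0.1), 8)
print(samples.mean())   # close to 0.1
```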

Thursday October 17, 2019

Nick Higham on “Tips and Tricks for Research Workflow”. Slides.
Abstract: I will discuss various tools and websites that can help us in our research.

Thursday October 10, 2019

Michael Connolly on “Stochastic rounding of floating-point arithmetic”. Slides.

Thursday October 3, 2019

Gian Maria Negri Porzio on “The AAA algorithm and its variations to approximate matrix-valued functions”. Slides.

Wednesday September 11, 2019

Theo Mary on “Numerical Stability of Block Low-Rank LU Factorization”.
Abstract: Block low-rank (BLR) matrices exploit blockwise low-rank approximations to reduce the complexity of numerical linear algebra algorithms. The impact of these approximations on the numerical stability of the algorithms in floating-point arithmetic has not previously been analyzed. We present a rounding error analysis for the solution of a linear system by LU factorization of a BLR matrix. We prove backward stability, assuming that a stable pivoting scheme is used, and obtain new insights into the numerical behavior of BLR variants. We show that the predictions from the analysis are realized in practice by testing them numerically on a wide range of matrices coming from various real-life applications.
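
The compression step underlying the BLR format can be sketched as follows: each block is replaced by a truncated SVD at a tolerance eps whenever that is cheaper than dense storage. The kernel matrix and parameters below are illustrative only, not the test problems or the solver of the talk.

```python
import numpy as np

n, bs, eps = 256, 32, 1e-8                  # matrix size, block size, tolerance
x = np.linspace(0.0, 1.0, n)
A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))   # kernel with low-rank blocks

entries = 0
Ablr = np.zeros_like(A)
for i in range(0, n, bs):
    for j in range(0, n, bs):
        B = A[i:i+bs, j:j+bs]
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        k = max(1, np.count_nonzero(s > eps * s[0]))   # numerical rank at eps
        if 2 * k * bs < bs * bs:                       # low-rank form is smaller
            Ablr[i:i+bs, j:j+bs] = (U[:, :k] * s[:k]) @ Vt[:k]
            entries += 2 * k * bs                      # store U_k and V_k
        else:                                          # keep the block dense
            Ablr[i:i+bs, j:j+bs] = B
            entries += bs * bs

print(entries / A.size)                                # storage ratio, below 1
print(np.linalg.norm(A - Ablr) / np.linalg.norm(A))    # error of order eps
```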
Abstract: Block low-rank (BLR) matrices exploit blockwise low-rank approximations to reduce the complexity of numerical linear algebra algorithms. The impact of these approximations on the numerical stability of the algorithms in floating-point arithmetic has not previously been analyzed. We present rounding error analysis for solution of a linear system by LU factorization of BLR matrices. We prove backward stability, assuming that a stable pivoting scheme is used, and obtain new insights into the numerical behavior of BLR variants. We show that the predictions from the analysis are realized in practice by testing them numerically on a wide range of matrices coming from various real-life applications.