Group meetings are on Thursday at 11am in Frank Adams 1. If you would like to give a presentation please contact Stephanie Lai.
Thursday February 06, 2020
Françoise Tisseur – TBA
Thursday January 30, 2020
Massimiliano Fasi on “Generating matrices with a given infinity-norm condition number”.
Thursday January 23, 2020
Srikara Pranesh on “Three Precision GMRES-Based Iterative Refinement for Least Squares Problems”. Slides.
Thursday November 28, 2019
Thomas McSweeney on “An efficient new static scheduling heuristic for accelerated architectures”. Slides.
Abstract: Heterogeneous architectures comprising multicore CPUs and GPUs are increasingly common, both in high-performance computing and beyond. However, many of the existing methods for scheduling precedence-constrained tasks on such platforms, such as the classic Heterogeneous Earliest Finish Time (HEFT) heuristic, were originally intended for clusters comprising many diverse nodes. In this talk I briefly outline HEFT before introducing a new static scheduling heuristic called Heterogeneous Optimistic Finish Time (HOFT), which has the same structure but exploits the low degree of heterogeneity of accelerated environments. Using custom software for simulating task scheduling problems on user-defined CPU-GPU platforms, I present results which show that HOFT can obtain schedules at least 5% shorter than HEFT’s for certain numerical linear algebra applications.
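To make the HEFT structure mentioned above concrete, here is a minimal sketch of a mean-based list scheduler in its style: tasks are ranked by "upward rank" (mean execution cost plus the critical path to an exit task) and then greedily assigned to whichever processor gives the earliest finish time. The DAG, costs, and uniform communication cost are hypothetical toy data, and HOFT itself replaces the mean-based ranking and selection with optimistic finish-time estimates, so this is an illustration of the baseline only.

```python
from functools import lru_cache

# Toy task DAG (hypothetical): task -> children/parents; costs[t] = [CPU, GPU] times.
children = {0: [1, 2], 1: [3], 2: [3], 3: []}
parents = {0: [], 1: [0], 2: [0], 3: [1, 2]}
costs = {0: [3, 1], 1: [4, 2], 2: [5, 3], 3: [2, 1]}
comm = 1  # uniform communication cost between different processors

@lru_cache(maxsize=None)
def upward_rank(task):
    # Mean execution cost plus the longest (mean) path to an exit task.
    w = sum(costs[task]) / len(costs[task])
    return w + max((comm + upward_rank(c) for c in children[task]), default=0)

order = sorted(costs, key=upward_rank, reverse=True)  # decreasing rank

finish = {}          # task -> (processor, finish time)
proc_free = [0, 0]   # next free time on each processor
for t in order:
    best = None
    for p in range(2):
        # Task can start once all parents' data has arrived on processor p.
        ready = max((finish[u][1] + (comm if finish[u][0] != p else 0)
                     for u in parents[t]), default=0)
        f = max(ready, proc_free[p]) + costs[t][p]
        if best is None or f < best[1]:
            best = (p, f)
    finish[t] = best
    proc_free[best[0]] = best[1]

print(finish)  # task -> (processor, finish time); makespan is the largest time
```

For this toy instance the entry task is scheduled first (it has the largest upward rank) and the resulting makespan is 8 time units.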
Thursday November 21, 2019
Srikara Pranesh on “Exploiting Lower Precision Arithmetic in Solving Symmetric Positive Definite Linear Systems”. Slides.
Thursday November 14, 2019
Xiaobo Liu on “On the Computation of the Scalar and the Matrix Mittag-Leffler Functions”. Slides.
Abstract: In this talk, I will introduce existing methods for computing the scalar Mittag-Leffler (ML) function and discuss their generalizability to the matrix case. Also, I will present our algorithm (based on the numerical inversion of the Laplace transform by the trapezoidal rule) for computing the matrix function on the real line, with some numerical experiments. Finally, I will share the difficulties we are facing in the computation and our ideas for addressing them.
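For readers unfamiliar with the function in question: the scalar ML function is defined by the series E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1), which reduces to exp(z) for alpha = 1 and cosh(sqrt(z)) for alpha = 2. A naive truncated-series evaluation, shown below, is reliable only for small |z|; that limitation is precisely why more robust schemes such as the Laplace-transform inversion mentioned in the abstract are needed. This sketch is an illustration of the definition, not the talk's algorithm.

```python
import math

def ml_series(z, alpha, terms=50):
    # Truncated series for the scalar Mittag-Leffler function
    # E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1).
    # Adequate only for small |z|: for large |z| the terms grow before
    # decaying and catastrophic cancellation destroys the sum.
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))
```

Sanity checks: ml_series(1.0, 1.0) agrees with e, and ml_series(1.0, 2.0) agrees with cosh(1).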
Thursday November 07, 2019
Marcus Webb on “The infinite dimensional QL factorisation”.
Abstract: Finite dimensional matrices sometimes come from truncating a highly structured matrix with infinitely many elements. The most common approach for computing the eigenvalues (and more generally, the spectrum) of such an infinite dimensional matrix is to take the n x n principal submatrix and compute the eigenvalues of that. In principle, if n is sufficiently large then the spectrum of the main object of interest is sufficiently well approximated, but this principle can fail catastrophically for some embarrassingly simple examples. As an alternative, Sheehan Olver (Imperial College London) and I have been exploring computing spectra via the QL factorisation of highly structured infinite dimensional matrices, with some surprising and mind-bending results. Examples will be demonstrated live in Julia.
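One classic "embarrassingly simple" failure of the finite-section approach (an illustrative example, not necessarily the one from the talk) is the backward shift operator: every n x n principal submatrix is nilpotent, so all of its eigenvalues are exactly 0, yet the infinite-dimensional operator has every lambda with |lambda| < 1 as an eigenvalue, with eigenvector (1, lambda, lambda^2, ...). A quick check of both facts:

```python
import numpy as np

n = 12
B = np.diag(np.ones(n - 1), k=1)   # n x n finite section of the backward shift
# The truncation is nilpotent: B^n = 0, so its spectrum is exactly {0}.
assert np.all(np.linalg.matrix_power(B, n) == 0)
# Yet for any |lam| < 1 the infinite operator has eigenvector (1, lam, lam^2, ...);
# the truncation satisfies the eigen-equation in every entry except the last.
lam = 0.5
v = lam ** np.arange(n)
residual = np.linalg.norm(B @ v - lam * v)
print(residual)   # equals lam**n: tiny, but invisible to eigenvalues of B
```

No matter how large n is, the eigenvalues of the truncation stay at 0 and give no hint of the unit disc of spectrum of the infinite operator.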
Thursday October 31, 2019
Massimiliano Fasi on “Generating large matrices with pre-assigned 2-norm condition number”. Slides.
Thursday October 24, 2019
Mantas Mikaitis on “Solving neural ODEs using fixed-point arithmetic with stochastic rounding”. Slides.
Abstract: In this talk I will go through some of the experimental results with ODE solvers in fixed-point arithmetic, using stochastic rounding in multiplications. This work was carried out as part of my PhD in the Department of Computer Science. The main goal was to improve the accuracy of the Izhikevich neuron model, which is described by an ODE that does not have a closed-form solution. The neuron model is simulated on the SpiNNaker neuromorphic computer – a large-scale neuromorphic platform (1 million ARM968 cores) designed in Manchester.
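The key idea behind stochastic rounding is that a value lying between two fixed-point grid points is rounded up with probability proportional to its distance from the lower point, making the rounding unbiased in expectation and preventing the systematic drift that round-to-nearest can cause in long ODE integrations. A minimal sketch (a hypothetical illustration, not the SpiNNaker implementation):

```python
import math
import random

def fixed_point_sr(x, frac_bits):
    # Stochastically round x to a fixed-point grid with frac_bits fractional
    # bits: round up with probability equal to the fractional residue, so
    # E[fixed_point_sr(x)] = x (unbiased).
    scale = 1 << frac_bits
    y = x * scale
    lo = math.floor(y)
    p = y - lo                       # distance to the lower grid point, in [0, 1)
    up = lo + 1 if random.random() < p else lo
    return up / scale

random.seed(0)
vals = [fixed_point_sr(0.1, 4) for _ in range(20000)]
print(sum(vals) / len(vals))   # close to 0.1, although 0.1 is not representable
```

With 4 fractional bits, 0.1 always rounds to either 1/16 or 2/16, yet the average over many roundings recovers 0.1; an exactly representable value such as 1/16 is returned unchanged.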
Thursday October 17, 2019
Nick Higham on “Tips and Tricks for Research Workflow”. Slides.
Abstract: I will discuss various tools and websites that can help us in our research.
Thursday October 10, 2019
Michael Connolly on “Stochastic rounding of floating-point arithmetic”. Slides.
Thursday October 3, 2019
Gian Maria Negri Porzio on “The AAA algorithm and its variations to approximate matrix-valued functions”. Slides.
Wednesday September 11, 2019
Theo Mary on “Numerical Stability of Block Low-Rank LU Factorization”.
Abstract: Block low-rank (BLR) matrices exploit blockwise low-rank approximations to reduce the complexity of numerical linear algebra algorithms. The impact of these approximations on the numerical stability of the algorithms in floating-point arithmetic has not previously been analyzed. We present a rounding error analysis for the solution of a linear system by LU factorization of BLR matrices. We prove backward stability, assuming that a stable pivoting scheme is used, and obtain new insights into the numerical behavior of BLR variants. We show that the predictions from the analysis are realized in practice by testing them numerically on a wide range of matrices coming from various real-life applications.
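The blockwise approximation underlying BLR matrices can be sketched as follows: each off-diagonal block is replaced by a truncated SVD at a prescribed relative accuracy, and stored as a pair of thin factors. This is an illustrative sketch only (the block, threshold, and function names are hypothetical); the talk analyses the LU factorization built on top of such compressed blocks, which is not shown here.

```python
import numpy as np

def compress_block(B, tol):
    # Replace block B by a low-rank approximation X @ Yt accurate to a
    # relative threshold tol, via the truncated SVD.
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))        # rank needed for relative tol
    return U[:, :k] * s[:k], Vt[:k]        # thin factors X (m x k), Yt (k x n)

rng = np.random.default_rng(0)
# A Cauchy-like kernel block with well-separated point sets, the kind of
# numerically low-rank off-diagonal block that arises in applications:
x = rng.random(64)
y = rng.random(64) + 2.0
B = 1.0 / (x[:, None] + y[None, :])
X, Yt = compress_block(B, 1e-8)
err = np.linalg.norm(B - X @ Yt) / np.linalg.norm(B)
print(X.shape[1], err)   # rank well below 64, error near the threshold
```

Storing and operating on the thin factors instead of the full 64 x 64 block is where the complexity reduction comes from; the stability question is how the tol-sized perturbations propagate through the LU factorization.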