13 May 2022
Associate Professor, Cornell University
Title : Building a bridge between numerical linear algebra and theoretical computer science
Abstract : The numerical linear algebra (NLA) and theoretical computer science (TCS) research communities work on similar linear algebra problems from strikingly different viewpoints. During the pandemic, we organized an online fortnightly seminar to explore the connections between these two communities. In the first half of the talk, I will discuss some observations on how the two communities formulate problems, design and analyse algorithms, and publicise their findings. In the second half, I will give a personal account of how TCS ideas have impacted my research. The aim of the seminar was to foster future collaborations between NLA and TCS and generally bring about a greater appreciation for each other’s work. The organizing committee for the seminar was Ilse Ipsen (NCSU), Mike Mahoney (Berkeley), Yuji Nakatsukasa (University of Oxford), Nikhil Srivastava (Berkeley), Alex Townsend (Cornell), and Joel Tropp (Caltech), along with founding members Daniel Kressner (EPFL) and Cleve Moler (MathWorks).
Time : 2:00 PM to 3:00 PM.
Venue : Frank Adams 1, Alan Turing Building.
6 May 2022
Associate Professor in Predictive Modelling, University of Warwick and Alan Turing Institute
Title : Γ-convergence of Onsager–Machlup functionals and MAP estimation in non-parametric Bayesian inverse problems
Abstract : The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e. a MAP estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as minimisation of the Onsager–Machlup functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the Γ-convergence of Onsager–Machlup functionals, and apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors.
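In finite dimensions, the link between MAP estimation and the variational approach described in the abstract can be made concrete: for a Gaussian prior the Onsager–Machlup functional is a Tikhonov-regularised least-squares objective, and the MAP estimator is its minimiser. A minimal sketch, in which the forward operator, noise level and prior covariance are all illustrative stand-ins rather than details from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear inverse problem y = A x + noise, with Gaussian prior x ~ N(0, C).
A = rng.standard_normal((20, 5))      # forward operator (illustrative)
x_true = rng.standard_normal(5)
sigma = 0.1                           # noise standard deviation
y = A @ x_true + sigma * rng.standard_normal(20)
C = np.eye(5)                         # prior covariance (illustrative)

def om_functional(x):
    """Onsager-Machlup functional of the Gaussian posterior: a
    Tikhonov-regularised least-squares objective."""
    return (np.sum((A @ x - y) ** 2) / (2 * sigma**2)
            + x @ np.linalg.solve(C, x) / 2)

# The MAP estimator minimises the functional, i.e. it solves the
# regularised normal equations (A^T A / sigma^2 + C^{-1}) x = A^T y / sigma^2.
x_map = np.linalg.solve(A.T @ A / sigma**2 + np.linalg.inv(C),
                        A.T @ y / sigma**2)
```

Perturbing `x_map` in any direction increases the functional, which is exactly the sense in which the Bayesian mode and the variational solution coincide here.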
Time : 2:00 PM to 3:00 PM.
Venue : Frank Adams 1, Alan Turing Building.
29 Apr 2022
Lecturer in Bio-Inspired Computing, University of Manchester
Title : Neuromorphic Computing
Abstract : This talk will introduce the field of neuromorphic computing: building machines to explore brain function, and using our enhanced understanding of the brain to build better computer hardware and algorithms. Specifically, it will discuss spiking neural networks, including how they can be used to model neural circuits, and how these models can be employed to develop low-power bio-inspired AI systems.
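As a flavour of the spiking models mentioned in the abstract, a leaky integrate-and-fire (LIF) neuron, the basic unit of many spiking neural network simulations, can be simulated in a few lines. All parameter values below are generic textbook choices, not specifics of any system discussed in the talk:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    v_rest, integrates the input, and emits a spike (then resets) whenever
    it crosses v_thresh."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Forward-Euler step of  tau * dv/dt = -(v - v_rest) + i_in
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# Constant supra-threshold drive produces a regular spike train.
spike_times = simulate_lif(np.full(1000, 1.5))
```

Information is carried by the timing of the spikes rather than by continuous activations, which is what makes event-driven neuromorphic hardware so power-efficient.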
Time : 2:00 PM to 3:00 PM.
Venue : Frank Adams 1, Alan Turing Building.
1 Apr 2022
Lecturer in Applied Mathematics, University of Manchester
Title : A Random Walk from Semi-Supervised Learning to Neural Networks
Abstract : Semi-supervised learning is the problem of finding missing labels; more precisely, one has a data set of feature vectors of which an (often small) subset is labelled. The semi-supervised learning assumption is that similar feature vectors should have similar labels, which implies one needs a geometry on the set of feature vectors. A typical way to represent this geometry is via a graph, where the nodes are the feature vectors and the edges are weighted by some measure of similarity. Laplace learning is a popular graph-based method for solving the semi-supervised learning problem which essentially requires one to minimise a Dirichlet energy defined on the graph (hence the Euler-Lagrange equation is Laplace’s equation). However, at low labelling rates Laplace learning typically performs poorly. This is due to the lack of regularity, or the ill-posedness, of solutions to Laplace’s equation in any dimension greater than or equal to two. The random walk interpretation allows one to characterise how close one is to entering the ill-posed regime. In particular, it allows one to give a lower bound on the number of labels required and even provides a route for correcting the bias. Correcting the bias leads to a new method, called Poisson learning. Finally, the ideas behind correcting the bias in Laplace learning have motivated a new graph neural network architecture which does not suffer from the over-smoothing phenomenon. In particular, this type of neural network, which we call GRAND++ (GRAph Neural Diffusion with a source term), enables one to employ deep architectures. This is joint work with Jeff Calder, Dejan Slepčev, Brendan Cook, Tan Nguyen, Hedi Xia, Thomas Strohmer, Andrea Bertozzi, Stanley Osher and Bao Wang.
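A minimal sketch of Laplace learning as described in the abstract: build a similarity graph on the feature vectors, then minimise the graph Dirichlet energy subject to the known labels, which reduces to solving Laplace's equation on the unlabelled nodes. The Gaussian similarity weights and two-cluster data below are illustrative choices, not the experimental set-up from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated clusters of feature vectors; label one point in each.
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
labeled = np.array([0, 50])               # indices of the labelled points
g = np.array([-1.0, 1.0])                 # their labels

# Similarity graph with Gaussian weights and graph Laplacian L = D - W.
d2 = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
W = np.exp(-d2 / (2 * 0.5**2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# Minimising the Dirichlet energy u^T L u with u fixed on the labelled set
# gives (L u) = 0 at the unlabelled nodes: Laplace's equation on the graph.
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)
u = np.zeros(len(X))
u[labeled] = g
u[unlabeled] = np.linalg.solve(L[np.ix_(unlabeled, unlabeled)],
                               -L[np.ix_(unlabeled, labeled)] @ g)

labels = np.sign(u)     # predicted label for every feature vector
```

With only two labels this works because the clusters are well separated; the degeneracy discussed in the talk appears when the number of labels is too small relative to the graph size and dimension.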
Time : 2:00 PM to 3:00 PM.
Venue : Frank Adams 1, Alan Turing Building.
24 March 2022
Junior Research Fellow, Trinity College, Cambridge
Title : Koopman operators and the computation of spectral properties in infinite dimensions
Abstract : Koopman operators are infinite-dimensional operators that globally linearise nonlinear dynamical systems, making their spectral information valuable for understanding dynamics. Their increasing popularity, dubbed “Koopmania”, has produced tens of thousands of articles over the last decade. However, Koopman operators can have continuous spectra and lack finite-dimensional invariant subspaces, making the computation of their spectral properties a considerable challenge. This talk describes data-driven algorithms with rigorous convergence guarantees for computing spectral properties of Koopman operators from trajectory data. We introduce residual dynamic mode decomposition (ResDMD), the first scheme for computing the spectra and pseudospectra of general Koopman operators from trajectory data without spectral pollution. By combining ResDMD and the resolvent, we compute smoothed approximations of spectral measures associated with measure-preserving dynamical systems. When computing the continuous and discrete spectra, explicit convergence theorems provide high-order convergence, even for chaotic systems. Kernelized variants of our algorithms allow for dynamical systems with a high-dimensional state-space. These infinite-dimensional numerical linear algebra algorithms are placed within a broader programme on the foundations of infinite-dimensional spectral computations. We end by computing the spectral measures of a protein molecule (20,046-dimensional state-space) and nonlinear Koopman modes, with error bounds, for chaotic turbulent flow past aerofoils with Reynolds number greater than 100,000 (295,122-dimensional state-space).
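For context, standard dynamic mode decomposition (DMD) approximates the Koopman operator by a least-squares fit between snapshot matrices; ResDMD augments this with a computable residual that detects spectral pollution. A sketch of the plain DMD step on a linear toy system, where the system matrix and trajectory length are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Trajectory data x_{k+1} = F(x_k) for a linear toy system F(x) = A x,
# a decaying rotation whose Koopman eigenvalues are 0.95 e^{±0.3i}.
theta = 0.3
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
X = np.empty((2, 101))
X[:, 0] = rng.standard_normal(2)
for k in range(100):
    X[:, k + 1] = A @ X[:, k]

# DMD: least-squares fit K = argmin ||Y - K X0||_F over the snapshot pairs,
# a finite-dimensional approximation of the Koopman operator acting on
# linear observables.
X0, Y = X[:, :-1], X[:, 1:]
K = Y @ np.linalg.pinv(X0)

eigvals = np.linalg.eigvals(K)   # approximate Koopman eigenvalues
```

For this linear system the fit recovers A exactly; for nonlinear systems one works in a dictionary of observables, and the residual computed by ResDMD certifies which of the resulting eigenvalues are trustworthy.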
Time : 2:00 PM to 3:00 PM.
Venue : Frank Adams 1, Alan Turing Building.
18 Mar 2022
Postdoctoral Associate, University of Edinburgh
Title : Novel preconditioners for saddle point weak-constraint 4D-Var
Abstract : The saddle point formulation of weak-constraint four-dimensional data assimilation offers the possibility of exploiting modern computer architectures and algorithms due to its underlying block structure. Developing good preconditioners which retain the highly structured nature of the saddle point system has been an area of recent research interest, especially for applications in numerical weather prediction. In this talk I will present two new approaches to preconditioning the model terms within the saddle point system. Firstly, we consider including a small number of model terms within the preconditioner, which preserves a user-determined level of parallelisability. In contrast, our second approach exploits inherent Kronecker structure within a matrix GMRES implementation. I will present theoretical results comparing our new preconditioners to existing standard choices of preconditioners. Finally, I will present two numerical experiments, for the heat equation and the Lorenz 96 problem, and show that our new approaches are competitive with current state-of-the-art preconditioners.
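To illustrate why block-structured preconditioners matter in this setting (with generic stand-in blocks, not the 4D-Var covariance, observation and model terms from the talk), the classical "ideal" block-diagonal preconditioner for a saddle point system clusters the spectrum onto three values, so a Krylov method such as GMRES converges in a handful of iterations:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 40, 10

# A generic saddle point system K = [[A, B^T], [B, 0]]; in weak-constraint
# 4D-Var the blocks would carry covariance, observation and model terms.
M0 = rng.standard_normal((n, n))
A = M0 @ M0.T + n * np.eye(n)            # symmetric positive definite block
B = rng.standard_normal((m, n))          # full-rank constraint block
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

# "Ideal" block-diagonal preconditioner P = diag(A, S) with the Schur
# complement S = B A^{-1} B^T.
S = B @ np.linalg.solve(A, B.T)
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

# Classical result (Murphy, Golub & Wathen): P^{-1} K has only the three
# eigenvalues 1 and (1 ± sqrt(5))/2, so GMRES converges in at most three
# iterations regardless of the problem size.
eigs = np.linalg.eigvals(np.linalg.solve(P, K))
targets = np.array([1.0, (1 + 5**0.5) / 2, (1 - 5**0.5) / 2])
```

In practice the exact Schur complement is too expensive, which is why approximations that retain structure (and, here, parallelisability) are the object of study.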
Time : 2:00 PM to 3:00 PM.
Venue : Frank Adams 1, Alan Turing Building.
4 Mar 2022
Research Associate, University of Manchester
Title : Numerical Behavior of GPU Matrix Multiply-Accumulate Hardware
Abstract : Tensor cores and matrix engines are hardware units on the latest GPUs that perform dot product or matrix multiply-accumulate (MMA) operations. 127 of the TOP500 supercomputers contain these units, and many numerical libraries are beginning to utilize them in various scientific computing algorithms. Tensor cores and similar arithmetic units are targeted at low-precision machine learning algorithms and therefore are not necessarily compliant with the IEEE 754 standard: features such as rounding, normalization, order of operations, and subnormal number support can differ from a standard software implementation of matrix multiplication. In this talk I will discuss our recent work on determining various numerical features of MMAs, using NVIDIA tensor cores as an example test case. We determined the features of three generations of tensor cores with carefully constructed numerical test cases on the NVIDIA V100, T4 and A100 GPUs, and explored the effects those features have on matrix multiplication algorithms, comparing real runs on the GPUs with the theoretical rounding error bounds that we have derived.
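The style of test case used in this work can be mimicked in software: choose inputs whose dot product distinguishes, for example, 16-bit from 32-bit accumulation. The snippet below is a plain NumPy emulation of such a probe, not a GPU run:

```python
import numpy as np

# Probe: does the accumulator keep more precision than the fp16 inputs?
# 1 + 2^-11 is exactly representable in fp32 but rounds to 1 in fp16,
# because the fp16 significand has 11 bits (the ulp at 1.0 is 2^-10).
x = np.array([1.0, 2.0**-11], dtype=np.float16)
y = np.array([1.0, 1.0], dtype=np.float16)

# Emulated fp16 accumulation: every partial sum is rounded back to fp16.
acc16 = np.float16(0.0)
for xi, yi in zip(x, y):
    acc16 = np.float16(acc16 + np.float16(xi * yi))

# fp32 accumulation of the same fp16 inputs keeps the 2^-11 term.
acc32 = float(np.sum(x.astype(np.float32) * y.astype(np.float32)))

# Feeding such inputs to a hardware MMA reveals its accumulator width by
# which of the two answers it returns: 1.0 (fp16) or 1.00048828125 (fp32).
```

Analogous hand-crafted inputs can expose the rounding mode, the order of operations, and whether subnormal inputs are flushed to zero.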
Time : 2:00 PM to 3:00 PM.
Venue : Frank Adams 1, Alan Turing Building.
18 Feb 2022
Professor in Applied Mathematics, University of Manchester
Title : Stochastic Galerkin Finite Element Approximation for Linear Elasticity and Poroelasticity Problems with Uncertain Inputs
Abstract : The study of theoretical and computational aspects of stochastic Galerkin (SG) methods, also known as intrusive polynomial chaos methods, is now mature for several classes of model problems, such as scalar elliptic PDEs with parameter-dependent diffusion coefficients. For such problems, state-of-the-art multilevel adaptive SG methods have been developed and are competitive with more widely used non-intrusive approximation schemes.
Since intrusive methods, in contrast to sampling methods, require the solution of large, highly complex linear systems, much work remains to be done to determine whether SG methods can provide a computationally feasible framework for facilitating forward UQ in more complex PDE models with uncertain inputs that arise in engineering applications. In this talk, we will discuss recent work on developing stochastic Galerkin mixed finite element schemes and solvers for linear elasticity and poroelasticity models, with parameter-dependent Young’s modulus and hydraulic conductivity fields.
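One structural reason the intrusive linear systems are so large: for an affinely parameter-dependent coefficient, the SG Galerkin matrix is a sum of Kronecker products of small stochastic (polynomial chaos) matrices with deterministic finite element matrices, and matrix-vector products can exploit this structure without ever forming the full matrix. A toy illustration with random symmetric positive definite stand-ins for the true chaos and FEM matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
n_x, n_xi = 8, 4          # sizes of the spatial and stochastic bases (toy)

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

# Stand-ins: K_k play the role of deterministic (finite element) matrices,
# G_k the stochastic Galerkin matrices built from the chaos basis.
K = [random_spd(n_x) for _ in range(3)]
G = [random_spd(n_xi) for _ in range(3)]

# The intrusive SG system couples all chaos modes at once:
#   A = sum_k kron(G_k, K_k),  of dimension n_xi * n_x.
A = sum(np.kron(Gk, Kk) for Gk, Kk in zip(G, K))

# Matrix-vector products never need A explicitly: storing the coefficients
# as a matrix U of shape (n_xi, n_x), row-major vectorisation gives
#   A @ U.ravel() == sum_k (G_k @ U @ K_k^T).ravel().
U = rng.standard_normal((n_xi, n_x))
matvec = sum((Gk @ U @ Kk.T).ravel() for Gk, Kk in zip(G, K))
```

Matrix-free products of this kind, combined with good preconditioners, are what make iterative solution of the coupled SG system feasible as the chaos basis grows.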
Time : 2:00 PM to 3:00 PM.
Venue : Frank Adams 1, Alan Turing Building.