## 2020-2021

### Wednesday July 21, 2021

Michael Connolly on “Mixed Precision Randomized SVD”.

Massimiliano Fasi on “Determinants of Normalized Bohemian Upper Hessenberg Matrices”.

### Wednesday May 19, 2021

Xiaobo Liu on “A Multiprecision Derivative-Free Schur-Parlett Algorithm for Computing Matrix Functions”.

### Wednesday April 28, 2021

Xinye Chen on “Rational approximation of matrix-valued functions in nonlinear eigenvalue problems”.

### Wednesday April 13, 2021

At 11.30: Emacs discussion led by Nick Higham. File: demo21a.org

### Wednesday March 24, 2021

Nick Higham and Mantas Mikaitis on “Demonstration of AnyMatrix”.

### Wednesday March 17, 2021

Gian Maria Negri Porzio on “Influence of the quadrature approximation on contour integral algorithms”.

### Wednesday March 10, 2021

Massimiliano Fasi on “On the solution of some matrix equations”.

### Friday February 26, 2021

Mantas Mikaitis on “Algorithms for Stochastically Rounded Elementary Arithmetic Operations in IEEE 754 Floating-Point Arithmetic”.

### Wednesday February 24, 2021

Srikara Pranesh on “Three-Precision GMRES-Based Iterative Refinement for Least Squares Problems”.

Michael Connolly on “Stochastic Rounding and its Probabilistic Backward Error Analysis”.

### Monday February 8, 2021

Stefan Güttel on “Rational Krylov: A Toolkit for Scientific Computing”.

### Wednesday January 13, 2021

Nick Higham on “When Is Positive Definiteness Preserved Under Zeroing of Elements?”

### Monday December 7, 2020

Matteo Croci from the University of Oxford on “Solving parabolic PDEs in half precision”.

Abstract:

Motivated by the advent of machine learning, the last few years saw the return of hardware-supported low-precision computing. Computations with fewer digits are faster and more memory and energy efficient, but can be extremely susceptible to rounding errors. An application that can benefit greatly from the advantages of low-precision computing is the numerical solution of partial differential equations (PDEs), but a careful implementation and rounding error analysis are required to ensure that sensible results can still be obtained.

In this talk we study the accumulation of rounding errors in the solution of the heat equation, a proxy for parabolic PDEs, via Runge-Kutta finite difference methods using round-to-nearest (RtN) and stochastic rounding (SR). We demonstrate how to implement the numerical scheme to reduce rounding errors and we present *a priori* estimates for local and global rounding errors. Let $u$ be the roundoff unit. While the worst-case local errors are $O(u)$ with respect to the discretization parameters, the RtN and SR error behaviour is substantially different. We show that the RtN solution is discretization, initial condition and precision dependent, and always stagnates for small enough $\Delta t$. Until stagnation, the global error grows like $O(u\Delta t^{-1})$. In contrast, the leading order errors introduced by SR are zero-mean, independent in space and mean-independent in time, making SR resilient to stagnation and rounding error accumulation. In fact, we prove that for SR the global rounding errors are only $O(u\Delta t^{-1/4})$ in 1D and are essentially bounded (up to logarithmic factors) in higher dimensions.
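The stagnation effect the abstract describes is easy to reproduce in a toy experiment. The sketch below (our illustration, not code from the talk) emulates stochastic rounding to float16 in NumPy and compares it with round-to-nearest when repeatedly adding an increment smaller than half the float16 spacing at 1.0:

```python
import numpy as np

rng = np.random.default_rng(0)

def sr_to_float16(x):
    """Stochastically round a Python float to float16: round up with
    probability proportional to the distance from the lower neighbour."""
    fl = np.float16(x)  # round-to-nearest as a starting point
    if float(fl) == x:
        return fl
    if float(fl) < x:
        lo, hi = fl, np.nextafter(fl, np.float16(np.inf))
    else:
        lo, hi = np.nextafter(fl, np.float16(-np.inf)), fl
    p = (x - float(lo)) / (float(hi) - float(lo))  # probability of rounding up
    return hi if rng.random() < p else lo

# Each increment dt is below half the float16 spacing at 1.0 (~4.9e-4),
# so RtN rounds every partial sum back to 1.0 and stagnates, while the
# zero-mean SR errors let the sum make unbiased progress on average.
n, dt = 10_000, 1e-4
s_rtn, s_sr = np.float16(1.0), np.float16(1.0)
for _ in range(n):
    s_rtn = np.float16(float(s_rtn) + dt)   # round-to-nearest
    s_sr = sr_to_float16(float(s_sr) + dt)  # stochastic rounding

print(f"exact 2.0000  RtN {float(s_rtn):.4f}  SR {float(s_sr):.4f}")
```

With RtN the sum never leaves 1.0 even though the exact answer is 2.0, while the SR sum lands close to 2.0, mirroring the $O(u\Delta t^{-1})$ versus $O(u\Delta t^{-1/4})$ contrast in the abstract.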


### Monday November 23, 2020

Andrew Horning from Cornell University on “Twice is enough for dangerous eigenvalues”.

### Wednesday September 30, 2020

Marcus Webb on “Abstract Krylov Methods and Fast Adaptive Poisson Solvers”.

### Wednesday September 16, 2020

Mantas Mikaitis on “Specification of a new MATLAB matrix collection”.

### Friday September 11, 2020

Theo Mary on “Mixed Precision Low Rank Compression of Data Sparse Matrices”.

Abstract: Modern hardware increasingly supports low precision arithmetics, that provide unprecedented speed, communication, and energy benefits. Mixed precision algorithms are being developed that combine these lower precision arithmetics with higher precision ones, so as to avoid compromising the accuracy of the computations. In this talk, we present an approach to compute low rank approximations with multiple precisions. For low rank matrices with rapidly decaying singular values, we show that only a fraction of the entries need to be stored in the higher, target precision. We apply this idea to the compression of data sparse matrices, which possess a block low rank structure, and obtain significant gains in storage for a variety of applications. We conclude by discussing ongoing research on how to factorize data sparse matrices in mixed precision.