## A new preconditioner exploiting low-rank factorization error

The solution of a linear system $Ax = b$ is a fundamental task in scientific computing. Two main classes of methods to solve such a system exist.

• Direct methods compute a factorization of the matrix $A$, such as an LU factorization, and then obtain the solution $x=U^{-1}L^{-1}b$ directly by triangular substitution; they are very reliable but have a high computational cost, which limits the size of problems that can be tackled.
• Iterative methods compute a sequence of iterates $x_k$ converging towards the solution $x$; they are inexpensive, but their convergence, and thus their reliability, strongly depends on the properties of the matrix, which limits the scope of problems that can be tackled.
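The two approaches can be contrasted in a few lines of SciPy; the sketch below (an illustration with a small random test matrix, not taken from the paper) solves the same well-conditioned system both ways.

```python
# Illustrative sketch (not from the paper): solving the same Ax = b with a
# direct method (LU factorization) and an iterative method (GMRES) in SciPy.
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

# Direct: factor A = LU once, then x = U^{-1} L^{-1} b by two triangular solves.
lu, piv = lu_factor(A)
x_direct = lu_solve((lu, piv), b)

# Iterative: GMRES builds a sequence of iterates x_k converging towards x.
x_iter, info = gmres(A, b, atol=1e-8)

print(np.linalg.norm(A @ x_direct - b))  # residual of the direct solve
print(info)                              # 0 means GMRES converged
```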

A current major challenge in the field of numerical linear algebra is therefore to develop methods that are able to tackle a large scope of problems of large size.

To accelerate the convergence of iterative methods, one usually uses a preconditioner, that is, a matrix $M$ ideally satisfying three conditions: (1) $M$ is cheap to compute; (2) $M$ can be easily inverted; (3) $M^{-1}$ is a good approximation to $A^{-1}$. With such a matrix $M$, the preconditioned system $M^{-1}Ax=M^{-1}b$ is cheap to solve with an iterative method and often requires only a small number of iterations. A widely used class of preconditioners computes $M$ as a low-accuracy LU factorization of $A$.
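As an illustration of this last point, the sketch below (my own minimal example, not from the paper) uses SciPy's incomplete LU routine `spilu` to build such an $M$ and passes it to GMRES; the two triangular solves behind `ilu.solve` are what makes condition (2) hold.

```python
# Illustrative sketch (not from the paper): a low-accuracy (incomplete) LU
# factorization of A used as the preconditioner M inside GMRES, via SciPy.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, spilu

n = 400
# 1-D Poisson matrix: sparse, symmetric positive definite, ill conditioned.
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# M is a low-accuracy LU factorization of A; drop_tol controls its accuracy.
# Applying M^{-1} amounts to two triangular solves, so M is easy to "invert".
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator((n, n), matvec=ilu.solve)

iters = []
x, info = gmres(A, b, M=M, atol=1e-10,
                callback=lambda pr_norm: iters.append(pr_norm),
                callback_type="pr_norm")
print(info, len(iters))  # info == 0 means GMRES converged
```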

Unfortunately, for many important problems it is quite difficult to find a preconditioner that is both of good quality and cheap to compute, especially when the matrix $A$ is ill conditioned, that is, when the ratio between its largest and smallest singular values is large.

In our paper A New Preconditioner that Exploits Low-rank Approximations to Factorization Error, with Nick Higham, which recently appeared in the SIAM Journal on Scientific Computing, we propose a novel class of general preconditioners that builds on an existing low-accuracy preconditioner $M=A-\Delta A$.

This class of preconditioners is based on the following key observation: ill-conditioned matrices that arise in practice often have a small number of small singular values. The inverse of such a matrix has a small number of large singular values and so is numerically low rank. This observation suggests that the error matrix $E = M^{-1}A - I = M^{-1}\Delta A \approx A^{-1}\Delta A$ is of interest, because we may expect $E$ to retain the numerically low-rank property of $A^{-1}$.
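This observation is easy to check numerically; the following sketch (an illustrative construction, not an experiment from the paper) builds a matrix with five small singular values and counts the large singular values of its inverse.

```python
# Illustrative check (not from the paper): a matrix with a small number of
# small singular values has a numerically low-rank inverse.
import numpy as np

rng = np.random.default_rng(1)
n = 100
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.ones(n)
s[-5:] = 1e-8   # five small singular values: cond(A) = 1e8
A = U @ np.diag(s) @ V.T

# The singular values of A^{-1} are the reciprocals 1/s: five values of
# size 1e8 and ninety-five of size 1, so A^{-1} has numerical rank 5.
s_inv = np.linalg.svd(np.linalg.inv(A), compute_uv=False)
print(np.sum(s_inv > 1e4))  # -> 5
```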

In the paper, we first investigate theoretically and experimentally whether $E$ is indeed numerically low rank; we then describe how to exploit this property to accelerate the convergence of iterative methods by building an improved preconditioner $M(I+\widetilde{E})$, where $\widetilde{E}$ is a low-rank approximation to $E$. This new preconditioner is equal to $A-M(E-\widetilde{E})$ and is thus almost a perfect preconditioner if $\widetilde{E}\approx E$. Moreover, since $\widetilde{E}$ is a low-rank matrix, $(I+\widetilde{E})^{-1}$ can be cheaply computed via the Sherman–Morrison–Woodbury formula, and so the new preconditioner can be easily inverted.
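The construction above can be sketched in a few lines of dense NumPy. This is an illustrative toy version under assumed scales: the test matrix, the perturbation $\Delta A$, and the helper `apply_new_precond_inv` are all mine, not the paper's implementation.

```python
# Illustrative toy construction of the improved preconditioner M(I + E_tilde).
# The matrix, perturbation, and helper names are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(2)
n, k = 100, 5

# Ill-conditioned A with k small singular values.
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.ones(n)
s[-k:] = 1e-6
A = Q1 @ np.diag(s) @ Q2.T

# Low-accuracy preconditioner M = A - dA (a small generic perturbation here).
dA = 1e-8 * rng.standard_normal((n, n))
M = A - dA

# Error matrix E = M^{-1}A - I and its rank-k truncated SVD E_tilde.
E = np.linalg.solve(M, A) - np.eye(n)
U, sv, Vt = np.linalg.svd(E)
Uk, sk, Vk = U[:, :k], sv[:k], Vt[:k, :].T

# Sherman-Morrison-Woodbury:
# (I + Uk diag(sk) Vk^T)^{-1} = I - Uk (diag(1/sk) + Vk^T Uk)^{-1} Vk^T,
# so inverting I + E_tilde only needs a small k-by-k system.
core = np.linalg.inv(np.diag(1.0 / sk) + Vk.T @ Uk)

def apply_new_precond_inv(y):
    z = np.linalg.solve(M, y)             # M^{-1} y
    return z - Uk @ (core @ (Vk.T @ z))   # (I + E_tilde)^{-1} M^{-1} y

# With M alone the preconditioned matrix differs from I by ||E||; with the
# low-rank correction it should be nearly the identity.
P = np.column_stack([apply_new_precond_inv(A[:, j]) for j in range(n)])
print(np.linalg.norm(E), np.linalg.norm(P - np.eye(n)))
```

In a practical implementation one would of course factor $M$ once and reuse the factors instead of calling `solve(M, y)` repeatedly; the point of the sketch is that the correction costs only $k$ extra stored vectors and a small $k\times k$ solve per application.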

We apply this new preconditioner to three different types of approximate LU factorizations: half-precision LU factorization, incomplete LU factorization (ILU), and block low-rank (BLR) LU factorization. In our experiments with GMRES-based iterative refinement, we show that the new preconditioner can achieve a significant reduction in the number of iterations required to solve a variety of real-life $Ax=b$ problems.