## Solving Block Low-Rank Linear Systems by LU Factorization is Numerically Stable

In many applications requiring the solution of a linear system $Ax=b$, the matrix $A$ possesses a block low-rank (BLR) property: most of its off-diagonal blocks are of low numerical rank and can therefore be well approximated by low-rank matrices. This property arises for example in the solution of discretized partial differential equations, because the numerical rank of a given off-diagonal block is closely related to the distance between the two subdomains associated with this block. BLR matrices have been exploited to significantly reduce the cost of solving $Ax=b$, in particular in the context of sparse direct solvers such as MUMPS, PaStiX, and STRUMPACK.
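As a hedged illustration of the BLR property (my own toy example, not taken from the preprint or from any of the solvers named above), the sketch below builds a dense matrix with smooth off-diagonal decay, $A_{ij} = 1/(1+|i-j|)$, which mimics the behavior of matrices arising from discretized PDEs, and measures the numerical rank of a well-separated off-diagonal block at several truncation thresholds:

```python
# Toy illustration (assumed example, not from the paper): the numerical
# rank of an off-diagonal block of a matrix with smooth off-diagonal
# decay is small, and grows only slowly as the threshold is tightened.
import numpy as np

n, bs = 256, 64                         # matrix size and block size
I, J = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = 1.0 / (1.0 + np.abs(I - J))         # smooth kernel, mimics a PDE-type matrix

block = A[:bs, n - bs:]                 # off-diagonal block: well-separated indices
s = np.linalg.svd(block, compute_uv=False)   # singular values decay fast

for eps in (1e-4, 1e-8, 1e-12):
    rank = int(np.sum(s > eps * s[0]))  # numerical rank at relative threshold eps
    print(f"eps = {eps:.0e}: numerical rank {rank} of full size {bs}")
```

The numerical rank stays far below the block size even for tight thresholds, which is exactly what makes the low-rank compression of such blocks worthwhile.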

However, the impact of these low-rank approximations on the numerical stability of the solution in floating-point arithmetic has not been previously analyzed. The difficulty of such an analysis lies in the fact that, unlike for classical algorithms without low-rank approximations, there are two kinds of errors to analyze: floating-point errors (which depend on the unit roundoff $u$) and low-rank truncation errors (which depend on the low-rank threshold $\varepsilon$, the parameter controlling the accuracy of the blockwise low-rank approximations). Moreover, the two kinds of error cannot easily be isolated in the analysis, because the BLR compression and factorization stages are often interlaced.

In our recent preprint with Nick Higham, we present a rounding error analysis for the solution of a linear system by LU factorization of BLR matrices. Assuming that a stable pivoting scheme is used, we prove backward stability: the relative backward error is bounded by a modest constant times $\varepsilon$ and does not depend on the unit roundoff $u$, as long as $u$ is safely smaller than $\varepsilon$. This is a very desirable theoretical guarantee that can now be given to users, who can therefore control the numerical behavior of BLR solvers simply by setting $\varepsilon$ to the target accuracy. We illustrate this key result in the figure below, which shows a strong correlation between the backward error and the threshold $\varepsilon$ for 26 matrices from the SuiteSparse collection coming from a wide range of real-life applications.
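The behavior can be reproduced on a small scale with the following hedged sketch (my own simplified experiment, not the paper's algorithm or test set): compress every off-diagonal block of a well-conditioned test matrix by truncated SVD at threshold $\varepsilon$, solve with the compressed matrix, and check that the relative backward error with respect to the original matrix tracks $\varepsilon$:

```python
# Simplified illustration (assumed setup, not from the paper): the
# backward error of a solve with a blockwise-compressed matrix tracks
# the truncation threshold eps, independently of the unit roundoff.
import numpy as np

rng = np.random.default_rng(0)
n, bs = 128, 32
I, J = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = 1.0 / (1.0 + np.abs(I - J)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

def blr_approx(A, bs, eps):
    """Replace each off-diagonal block by its truncated SVD at threshold eps."""
    At = A.copy()
    m = A.shape[0]
    for i in range(0, m, bs):
        for j in range(0, m, bs):
            if i == j:
                continue
            U, s, Vt = np.linalg.svd(A[i:i+bs, j:j+bs])
            k = int(np.sum(s > eps * s[0]))          # numerical rank of the block
            At[i:i+bs, j:j+bs] = (U[:, :k] * s[:k]) @ Vt[:k]
    return At

norm_A = np.linalg.norm(A, np.inf)
errors = {}
for eps in (1e-2, 1e-6, 1e-10):
    x = np.linalg.solve(blr_approx(A, bs, eps), b)   # LU solve on compressed matrix
    # relative backward error with respect to the *original* matrix A
    errors[eps] = (np.linalg.norm(b - A @ x, np.inf)
                   / (norm_A * np.linalg.norm(x, np.inf)
                      + np.linalg.norm(b, np.inf)))
    print(f"eps = {eps:.0e}: backward error {errors[eps]:.1e}")
```

In this toy setting the backward error decreases in lockstep with $\varepsilon$, consistent with a bound of the form (modest constant) $\times\ \varepsilon$; the double-precision unit roundoff only becomes visible once $\varepsilon$ approaches it.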