The need to compute matrix functions arises in many applications in science and engineering. Specialized methods exist for evaluating particular matrix functions, for example the well-known scaling and squaring algorithm for the matrix exponential, Newton’s method for the matrix sign function, and the inverse scaling and squaring method for the matrix logarithm. When no specialized method is available, a general purpose algorithm is needed. The Schur-Parlett algorithm computes a general function f of a matrix, with the function dependence restricted to the evaluation of f on the diagonal blocks of the reordered and blocked Schur form. It evaluates f on the nontrivial diagonal blocks via a Taylor series, so it requires the derivatives of f and it requires the Taylor series to have a sufficiently large radius of convergence. However, the derivatives are not always available or accurately computable.
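To illustrate the distinction between specialized and general purpose methods, here is a hedged sketch in Python rather than MATLAB: scipy.linalg.funm implements a simple Schur-Parlett style method (Schur form plus the Parlett recurrence, without blocking), and we can compare it with the specialized scaling and squaring code scipy.linalg.expm on a generic matrix.

```python
import numpy as np
from scipy.linalg import expm, funm

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))   # generic matrix: eigenvalues well separated

F_special = expm(A)               # specialized: scaling and squaring
F_general = funm(A, np.exp)       # general purpose: Schur form + Parlett recurrence

rel_diff = np.linalg.norm(F_special - F_general) / np.linalg.norm(F_special)
print(rel_diff)                   # small when the eigenvalues are well separated
```

The general purpose route needs only function values here, but the unblocked Parlett recurrence loses accuracy when eigenvalues are close, which is what the reordered and blocked variant addresses.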

In our recent preprint, Nick Higham and I present a version of the Schur-Parlett algorithm that requires only function values and uses higher precision arithmetic to evaluate f on the diagonal blocks of order greater than 2 (if there are any) of the reordered and blocked Schur form. Our algorithm is inspired by Davies’s randomized approximate diagonalization method, but we show that the measure of error underlying randomized approximate diagonalization makes it an unreliable numerical method for computing matrix functions. The key idea in our algorithm is to compute by diagonalization the function of a small random diagonal perturbation of each triangular block, where the perturbation ensures that the diagonalization will succeed. Multiprecision algorithms have already been developed for the matrix exponential and the matrix logarithm, but those algorithms are tightly coupled to the functions in question, whereas here we place no restrictions on the function. Nor is the algorithm restricted to a working precision of double, since its framework is precision independent. We test our algorithm in double precision on a variety of test problems, and the results show that it generally behaves in a numerically stable fashion. We apply it to the matrix Mittag-Leffler function and show that it yields results of accuracy similar to, and in some cases much greater than, those of the state-of-the-art algorithm for this function.
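A minimal sketch of the key idea (in Python rather than MATLAB, and not the authors' code): perturb the diagonal of a triangular block so the eigenvalues become distinct, diagonalize, and apply f to the eigenvalues. In the actual algorithm the perturbed block is diagonalized in higher precision; working purely in double, as here, the perturbation size delta must balance the perturbation error against the ill conditioning of the eigenvector matrix.

```python
import numpy as np

def funm_rand_diag(T, f, delta):
    """Approximate f(T) for upper triangular T by diagonalizing a
    randomly perturbed T + diag(d), which has distinct eigenvalues
    with probability 1."""
    rng = np.random.default_rng(1)
    d = delta * np.linalg.norm(T, np.inf) * rng.standard_normal(T.shape[0])
    w, V = np.linalg.eig(T + np.diag(d))
    return (V * f(w)) @ np.linalg.inv(V)   # V @ diag(f(w)) @ inv(V)

# Jordan block: defective, so plain diagonalization of T itself fails.
T = np.array([[1.0, 1.0], [0.0, 1.0]])
F = funm_rand_diag(T, np.exp, 1e-8)        # close to the exact e*[[1, 1], [0, 1]]
```

With delta of order the unit roundoff, as the perturbation analysis suggests, cond(V) is of order 1/delta and double precision diagonalization loses essentially all accuracy; that is why the algorithm evaluates this step in higher precision.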

In the following example we employ the algorithm to compute in double precision the matrix Airy function, and compare the result with that produced by a spectral decomposition method (which breaks down if A is defective, and will be inaccurate if A is close to being defective).
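Since the blog's MATLAB script is not reproduced here, the following hedged Python sketch shows what the spectral decomposition method amounts to for the matrix Airy function Ai(A): diagonalize A = V*Lambda*inv(V) and form V*Ai(Lambda)*inv(V). As a stand-in for the comparison with our algorithm, it is checked against scipy.linalg.funm supplied with only function values.

```python
import numpy as np
from scipy.linalg import funm
from scipy.special import airy

def airym_spectral(A):
    """Ai(A) = V diag(Ai(w)) inv(V); breaks down if A is defective."""
    w, V = np.linalg.eig(A)
    return (V * airy(w)[0]) @ np.linalg.inv(V)   # airy(...)[0] is Ai

A = np.array([[0.0, 1.0], [-1.0, 0.0]])          # normal, so eig is reliable here
F1 = airym_spectral(A)
F2 = funm(A, lambda z: airy(z)[0])               # general purpose, function values only
print(np.linalg.norm(F1 - F2) / np.linalg.norm(F2))   # normwise relative difference
```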

We observe that the normwise relative difference between the result from our algorithm and that from the spectral decomposition method is tiny. The matrix Airy function is just one of many matrix functions for which no specialized algorithm is available and which the MATLAB ‘funm’ function, which implements the Schur-Parlett algorithm, is unable to compute. Our new algorithm greatly expands the class of readily computable matrix functions, and its MATLAB code is available here.


  • Congratulations on the important result, and thank you for publishing the code!
    (I read the paper a week ago, but the code wasn’t available then.)

    A few questions, though.

    The ‘trim_diagpertub.m’ function uses the ‘digits()’ function from the Symbolic Math Toolbox (lines 54 and 77),
    but I couldn’t find any code doing actual computations with the toolbox. It looks like the calls to ‘digits’ are not needed.

    The same function sets the elevated precision using ‘mp.Digits’ (lines 56 and 69) but doesn’t restore the precision after the computations. For example, it does ‘digits(d_old);’, but there is no equivalent ‘mp.Digits(d_old);’. Maybe it is not that important, but still…
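    (The missing restore is the usual save/restore pattern; a minimal sketch in Python’s mpmath, used here as an assumed stand-in for the toolbox’s mp.Digits, with try/finally guaranteeing the reset:)

```python
import mpmath

def with_digits(d, func, *args):
    """Evaluate func(*args) at d decimal digits, then restore the old precision."""
    d_old = mpmath.mp.dps          # save current precision (cf. d_old = mp.Digits())
    mpmath.mp.dps = d              # elevate precision (cf. mp.Digits(d))
    try:
        return func(*args)
    finally:
        mpmath.mp.dps = d_old      # the restore step in question (cf. mp.Digits(d_old))

root2 = with_digits(50, mpmath.sqrt, 2)   # computed with 50 digits
print(mpmath.mp.dps)                      # back to the default, 15
```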

    How does the algorithm compare to the specialized multiprecision logm/expm algorithms when used at various non-double precision levels?

    Would appreciate your reply.


    • Pavel, many thanks for your comments! You’re right:
      I confused the use of the ‘digits()’ function with ‘mp.Digits()’.
      I’ve corrected the mistake in an updated version of the codes on GitHub;
      the function now restores mp.Digits() to the value it had at the start.
      I’ve also rerun all the experiments in the paper using the new codes,
      and the results don’t change.

      Regarding your last question, in this work we focus on a working
      precision of double. The framework is precision independent (as stated in
      the paper), but the parameters in the algorithm are tuned for double. For
      other working precisions more work is needed to choose the parameters –
      this is future work. A comparison can then be made between funm_nd and
      the specialized multiprecision algorithms on different functions.


      • Thank you for the reply.

        @”For other working precisions more work is needed to choose the parameters – this is future work.”

        Is there any chance you could describe all these parameters in the paper (or in a readme file), with some basic suggestions on how to choose them for a given precision N? This is absolutely essential information for any algorithm with “multiprecision” in its title.



    • We will prepare some information on using the algorithms with an arbitrary
      working precision. We note, though, that we use “multiprecision algorithm”
      in the sense that the algorithm employs one or more arbitrary precisions in
      the computation; it does not necessarily mean that the target precision is
      arbitrary. This is distinct from a “mixed precision algorithm”, which uses
      two or more precisions chosen from a small number of available precisions,
      typically half, single, double, and quad.


      • I see, thank you for the clarification of the meaning of “multiprecision” in the paper.

        I would say it is quite different from other papers published by NLA members. See, for example, the papers on multiprecision algorithms for EXPM and LOGM (Fasi & Higham), where the meaning is exactly that the algorithms are targeted at, and able to work with, arbitrary precision.

        @”We will prepare some information on using the algorithms with an arbitrary working precision.”
        This update would be of great importance and much appreciated. Thank you in advance.

