## Extreme-Scale Test Matrices with Specified 2-Norm Condition Number

The supercomputers in the TOP500 list are ranked using the High-Performance Linpack (HPL) Benchmark, which gauges the performance of a computer by solving a large dense linear system by Gaussian elimination with partial pivoting. The size of the coefficient matrix depends on the computational power of the machine being assessed, because more powerful systems require larger benchmark problems in order to reach their peak performance.

The test matrices used by HPL have random entries uniformly distributed on the interval $[-1/2, 1/2]$. The 2-norm condition number of such matrices depends on their size, and can potentially be very large for the matrices required by today's most powerful computers: the largest linear systems solved on Summit, the machine that leads the November 2019 TOP500 ranking, have order of magnitude $10^7$, and even larger systems will be needed to benchmark the coming generations of exascale systems.
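
As a small-scale illustration of this growth (a NumPy sketch, not the Fortran-based HPL matrix generator), one can estimate the 2-norm condition number of matrices with entries uniform on $[-1/2, 1/2]$ for increasing order:

```python
import numpy as np

rng = np.random.default_rng(1)

# Entries uniform on [-1/2, 1/2], mimicking the HPL test matrices.
for n in (100, 400, 1600):
    A = rng.uniform(-0.5, 0.5, size=(n, n))
    print(f"n = {n:5d}: cond_2(A) = {np.linalg.cond(A, 2):.2e}")
```

The condition number of such matrices typically grows at least linearly with $n$, which is what makes a priori control of the conditioning desirable at extreme scale.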

An $n \times n$ matrix $A$ with specified 2-norm condition number $\kappa \geq 1$ can be generated as $A = U \Sigma V^T$, where $U$ and $V$ are random real orthogonal matrices from the Haar distribution (a natural uniform probability distribution on the orthogonal matrices) and $\Sigma$ is diagonal with nonnegative entries $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n \geq 0$. It is well known that $A$ has 2-norm condition number $\kappa_2(A) = \sigma_1 / \sigma_n$ if $\sigma_n > 0$ and $\kappa_2(A) = \infty$ otherwise. This technique, which is used by the `gallery('randsvd', ...)` function, requires $O(n^3)$ floating-point operations to generate a test matrix of order $n$, and would thus be impractical in an extreme-scale setting.
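
The following NumPy sketch of this construction (an illustration, not the `gallery` code itself) draws the Haar-distributed factors from the QR factorization of a Gaussian matrix, which is exactly the $O(n^3)$ step:

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Haar-distributed orthogonal matrix via QR of a Gaussian matrix.

    Normalizing the factorization so that R has positive diagonal makes
    the distribution of Q Haar; the QR step costs O(n^3) flops.
    """
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))  # scale columns by sign(r_ii)

def randsvd(n, kappa, rng):
    """n-by-n matrix with 2-norm condition number kappa.

    Here the singular values decay geometrically from 1 to 1/kappa
    (one of several modes offered by gallery('randsvd',...)).
    """
    U = haar_orthogonal(n, rng)
    V = haar_orthogonal(n, rng)
    sigma = kappa ** (-np.arange(n) / (n - 1))  # sigma_1 = 1, sigma_n = 1/kappa
    return (U * sigma) @ V.T                    # U @ diag(sigma) @ V.T

rng = np.random.default_rng(0)
A = randsvd(100, 1e6, rng)
print(np.linalg.cond(A, 2))  # close to 1e6
```

Since the 2-norm is orthogonally invariant, the condition number of the result is $\sigma_1/\sigma_n = \kappa$ regardless of how $U$ and $V$ are chosen, which is the freedom the methods below exploit.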

In our recent EPrint *Generating extreme-scale matrices with specified singular values or condition numbers*, Nick Higham and I present four methods that, by giving up the requirement that the matrices $U$ and $V$ be from the Haar distribution, reduce the cost of the approach above from $O(n^3)$ to $O(n^2)$ floating-point operations. The matrices generated retain a number of desirable properties that make them a suitable choice for testing linear solvers at scale.
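
To see why dropping the Haar requirement makes quadratic cost achievable, consider the following toy construction (an illustration of the flop-count argument only, not Algorithm 3.1 or 4.1 of the EPrint): if $U = I - 2uu^T$ and $V = I - 2vv^T$ are random Householder reflectors, then $A = U \Sigma V^T$ can be assembled with two rank-one updates:

```python
import numpy as np

def householder_randsvd(n, kappa, rng):
    """n-by-n matrix with 2-norm condition number kappa in O(n^2) flops.

    Toy illustration (not the EPrint's algorithms): U = I - 2uu^T and
    V = I - 2vv^T are Householder reflectors with random unit vectors
    u and v, so U @ diag(sigma) @ V.T needs only two rank-one updates.
    """
    u = rng.standard_normal(n); u /= np.linalg.norm(u)
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    sigma = kappa ** (-np.arange(n) / (n - 1))   # sigma_1/sigma_n = kappa
    # M = U @ diag(sigma) = diag(sigma) - 2 u (sigma*u)^T   -- O(n^2) flops
    M = np.diag(sigma) - 2.0 * np.outer(u, u * sigma)
    # A = M @ V.T = M - 2 (M v) v^T                         -- O(n^2) flops
    return M - 2.0 * np.outer(M @ v, v)

rng = np.random.default_rng(0)
A = householder_randsvd(500, 1e6, rng)
print(np.linalg.cond(A, 2))  # close to 1e6
```

The orthogonal invariance of the 2-norm guarantees $\kappa_2(A) = \kappa$ even though $U$ and $V$ are far from Haar-distributed; the price of cheap orthogonal factors is precisely what the EPrint analyzes.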

These cheaper algorithms are particularly well suited to distributed-memory environments, since all communication between the nodes can be avoided at the price of a negligible increase in the overall computational cost. They are appealing to MATLAB users too, as the following example demonstrates.

```
n = 10000; kappa = 1e6; mode = 2; rng(1)

% gallery('randsvd',...)
fprintf('gallery(''randsvd'',...): %5.2f seconds elapsed.\n',...
    timeit(@()gallery('randsvd',n,kappa,mode,[],[],1)));

% Algorithm 3.1 in our EPrint.
method = 1; matrix = 0;
fprintf('Alg. 3.1 with Haar U: %5.2f seconds elapsed.\n',...
    timeit(@()randsvdfast(n,kappa,mode,method,matrix)));
matrix = 2;
fprintf('Alg. 3.1 with U=gallery(''orthog'',n,2): %5.2f seconds elapsed.\n',...
    timeit(@()randsvdfast(n,kappa,mode,method,matrix)));

% Algorithm 4.1 in our EPrint.
method = 3; matrix = 0;
fprintf('Alg. 4.1 with Haar U: %5.2f seconds elapsed.\n',...
    timeit(@()randsvdfast(n,kappa,mode,method,matrix)));
matrix = 2;
fprintf('Alg. 4.1 with U=gallery(''orthog'',n,2): %5.2f seconds elapsed.\n',...
    timeit(@()randsvdfast(n,kappa,mode,method,matrix)));
```

In the listing above, `randsvdfast` is an implementation of our algorithms available on the MATLAB Central File Exchange. Setting the argument `matrix` to 0 tells our function to pick $U$ from the Haar distribution, whereas setting it to 2 causes $U$ to be chosen in a non-random way. Running this script in MATLAB R2019b on a machine equipped with an Intel Core i5-6500 processor running at 3.20GHz produces the output

```
gallery('randsvd',...): 79.52 seconds elapsed.
Alg. 3.1 with Haar U: 19.70 seconds elapsed.
Alg. 3.1 with U=gallery('orthog',n,2): 1.90 seconds elapsed.
Alg. 4.1 with Haar U: 19.28 seconds elapsed.
Alg. 4.1 with U=gallery('orthog',n,2): 1.43 seconds elapsed.
```

Therefore, for matrices of order $n = 10000$, `randsvdfast` is up to 56 times faster than `gallery('randsvd',...)`.