Exploiting the MATLAB Language in Implementing a Matrix Collection

“Concise code” may not be a commonly used or defined term, but, as in natural language, conciseness (communicating information in as few words as feasible) can be a good guide in programming. Concise code, meaning short and self-explanatory lines or blocks of code, is easier to achieve in higher-level languages such as Python or MATLAB, and harder when writing, for example, in RISC assembly languages whose short instructions each do very little.

In developing the recently released matrix collection Anymatrix [1,2] (joint work with Nick Higham), we used MATLAB and exploited its string handling routines to perform complicated string manipulations in a few lines of code. This post describes some details of the implementation; further information can be found in the users’ guide [3].

Groups of Matrices

Anymatrix enforces the organization of matrices into groups: a new matrix must be placed either into one of the existing groups or into a new group. For example, the built-in groups are contest, core, gallery, hadamard, matlab, nessie, and regtools. Each of these comes with around 20 matrices, which may be fixed-size matrices or matrix generators.

Matrices are identified by <group_name>/<matrix_name>. For example, core/beta is a matrix called beta in the core group. Matrix names are their file names minus the extension .m. Group names are directory names of the groups. Since the IDs of matrices are derived from the directory and file names, uniqueness is guaranteed. This also means that multiple matrices can have the same name as long as they are in different groups.
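Since an ID is just the group directory name and the file name joined by a forward slash, splitting one back into its components is a one-liner. The sketch below is purely illustrative and not part of the Anymatrix API:

```matlab
% Split a matrix ID into its group and matrix names (illustrative only).
matrix_ID = 'core/beta';
parts = split(matrix_ID, '/');
group_name = parts{1};   % 'core'
matrix_name = parts{2};  % 'beta'
```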

Around the time we started working on this collection, MATLAB R2020b introduced pattern, which provides a convenient way to build, search for, and match text patterns. An example usage in Anymatrix is as follows.

% Matrix ID pattern
matrix_ID_pat = ...
  asManyOfPattern(alphanumericsPattern | ...
    characterListPattern("_") | ...
    characterListPattern("-")) + ...
  '/' + ...
  asManyOfPattern(alphanumericsPattern | ...
    characterListPattern("_") | ...
    characterListPattern("-"));

This code creates a pattern for matrix IDs, which matches a sequence of letter, digit, underscore, or hyphen characters, followed by a single forward slash, followed by another such sequence. Such a pattern can then be used, for example, to match strings (using matches) and extract substrings (using extract):

>> S = '...core/beta...';
>> extract(S, matrix_ID_pat)
ans =
  1×1 cell array
    {'core/beta'}

The pattern functionality helped us write concise code. We used it in various places in Anymatrix, for example to detect matrix IDs in commands and, when parsing Boolean search expressions, to separate properties from brackets and the operators and/or/not.

Properties of Matrices

Matrix properties can be specified in two ways, either as an assignment properties = {...} inside the matrix .m files, or in the am_properties.m files in group folders. With the latter option we may specify properties of multiple matrices in one assignment. Anymatrix contains a list of recognized properties, such as ill conditioned or banded. When Anymatrix scans the file structure, it gives a warning if a matrix contains an unrecognized property.
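For example, a matrix .m file might declare its properties as follows; this assignment is an illustrative sketch, and the particular property names shown are our choice rather than a listing from the built-in collection:

```matlab
% Inside a matrix .m file: declare the properties of this matrix.
properties = {'integer', 'symmetric', 'ill conditioned'};
```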

Properties can be searched by Boolean expressions with operators and/or/not and brackets for specifying precedence. For example:

>> anymatrix('p', 'integer and ill conditioned')
ans =
  3×1 cell array
    {'core/wilson'     }
    {'gallery/dramadah'}
    {'matlab/pascal'   }

The following function, which transforms the supplied Boolean expression into one that can be evaluated in MATLAB, demonstrates many of the MATLAB features that we made use of.

function IDs = search_by_properties(expression)
  IDs = {};
  % Replace 'and', 'or', and 'not' by corresponding MATLAB symbols.
  expression = replace(expression, ' and ', ' & ');
  expression = replace(expression, ' or ', ' | ');
  expression = replace(expression, ' not ', ' ~');
  expression = replace(expression, '(not ', '(~');
  if startsWith(expression, 'not')
    expression = expression(4:length(expression));
    expression(1) = '~';
  end
  % Assume properties are made up of letters, can include a hyphen
  % or a white space character, and are case insensitive.
  pat = (lettersPattern + whitespacePattern + lettersPattern) ...
        | (lettersPattern + characterListPattern('-') ...
          + lettersPattern) ...
        | lettersPattern;
  % Extract properties from the logical expression and replace
  % them by function calls to test for membership.
  ex_props = extract(expression, pat);
  ind = 1;
  new_expression = '';
  for p = ex_props.'
    mod_prop = strcat('ismember(''', ...
        strrep(lower(p{1}), '-', ' '), ...
        ''', strrep(lower(matrix_properties{1}), ''-'', '' ''))');
    trunc_exp = expression(ind:end);
    % Find where the property is in the expression.
    prop_index = strfind(trunc_exp, p{1});
    % Take anything before the property and append the modified
    % version of the property.
    new_expression = strcat(new_expression, ...
        trunc_exp(1:prop_index(1)-1), ...
        mod_prop);
    % Move the index after the property that was replaced.
    ind = ind + prop_index(1) + length(p{1}) - 1;
  end
  new_expression = strcat(new_expression, expression(ind:end));

  % Find matrices whose properties satisfy the specified logical
  % expression.
  k = 0;
  for matrix_properties = properties.'
    k = k + 1;
    % Test if the expression is true for this matrix
    % and, if so, add its ID.
    if eval(new_expression)
      IDs = [IDs; matrix_IDs{k}];
    end
  end
end

In the first few lines we replace the operators and/or/not by the corresponding MATLAB operators &/|/~. We then extract the properties from the expression, making use of patterns, modify each one into a membership test, and splice the modified versions back into the expression. Finally, we use eval to run the resulting Boolean expression as MATLAB code for every matrix, and return the list of matrices that satisfy it.

In the example command anymatrix('p', 'integer and ill conditioned'), the expression 'integer and ill conditioned' is transformed internally into 'ismember('integer', strrep(lower(matrix_properties{1}), '-', ' ')) & ismember('ill conditioned', strrep(lower(matrix_properties{1}), '-', ' '))', which can then be evaluated using eval. Notice that Anymatrix replaces hyphens by white spaces, to allow two ways of writing a property, and converts properties to lower case, to make the matching case insensitive. In this way, ill conditioned and Ill-conditioned, for example, are equivalent properties in Anymatrix.
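To see the transformed expression in action, we can build it by hand and evaluate it against a sample property list; the property values below are made up for illustration, with matrix_properties mimicking one entry of the properties cell array used in search_by_properties:

```matlab
% Hand-built version of the transformed expression, evaluated against an
% illustrative property list.
matrix_properties = {{'Integer', 'Ill-conditioned', 'symmetric'}};
new_expression = ['ismember(''integer'', strrep(lower(matrix_properties{1}), ''-'', '' '')) & ' ...
                  'ismember(''ill conditioned'', strrep(lower(matrix_properties{1}), ''-'', '' ''))'];
eval(new_expression)   % displays logical 1 (true): both properties are present
```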

Testing in Anymatrix

There are two ways to implement testing in Anymatrix. One way is to add a function in the file test_run.m in the group folder, which can be invoked by an appropriate Anymatrix command. Another is to test matrices for their properties; this facility is provided in the directory testing/ in the root folder of Anymatrix.

It is worth noting that not all supported properties can be tested, so only a subset of those in the built-in collection are. Each property that is feasible to test has a corresponding test_<property>.m file. Functions in these files must return true or false, given a matrix. Anymatrix utilizes MATLAB’s unit testing framework and automatically generates function-based unit tests for every matrix in the collection. When a new matrix is added, or a new property is added to an existing matrix, the unit testing framework picks that up automatically when run. The following is an example function-based unit test that is automatically generated by Anymatrix. It tests the properties of the core/fourier matrix for a selection of dimensions.

function test_core_fourier(testcase)
  A = anymatrix('core/fourier',3);
  anymatrix_check_props(A, 'core/fourier', testcase);
  A = anymatrix('core/fourier',5);
  anymatrix_check_props(A, 'core/fourier', testcase);
  A = anymatrix('core/fourier',8);
  anymatrix_check_props(A, 'core/fourier', testcase);
  A = anymatrix('core/fourier',10);
  anymatrix_check_props(A, 'core/fourier', testcase);
  A = anymatrix('core/fourier',15);
  anymatrix_check_props(A, 'core/fourier', testcase);
  A = anymatrix('core/fourier',24);
  anymatrix_check_props(A, 'core/fourier', testcase);
  A = anymatrix('core/fourier',25);
  anymatrix_check_props(A, 'core/fourier', testcase);
  A = anymatrix('core/fourier',30);
  anymatrix_check_props(A, 'core/fourier', testcase);
  A = anymatrix('core/fourier',31);
  anymatrix_check_props(A, 'core/fourier', testcase);
end

Above, the function anymatrix_check_props looks for files test_<property>.m and runs those tests that it finds on the given matrix:

for prop = P.'
  test_func_name = strcat('test_', ...
    strrep(strrep(lower(prop{1}), ...
    '-', '_'), ' ', '_'));
  if isfile(strcat(root_path, '/private/', test_func_name, '.m'))
    handle = str2func(test_func_name);
    verifyTrue(testcase, handle(M), ...
      strcat("Matrix ", matrix_ID, " is not ", prop{1}, "."));
  end
end
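Each test_<property>.m file contains a predicate of this shape. The example below, a hypothetical test_symmetric.m, is our own sketch rather than a file from the built-in collection:

```matlab
% Sketch of a property test in the style of the test_<property>.m files.
% Returns true if the given matrix is symmetric.
function t = test_symmetric(M)
  t = isequal(M, M.');
end
```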

Using this, Anymatrix adds a level of almost automatic testing, invoked by adding new matrices or new properties to matrices. A good example is the MATLAB built-in gallery group, which we made available in Anymatrix: by tagging each matrix with properties we can check that MATLAB’s gallery matrices have the expected properties.

Summary

Anymatrix is both a collection of matrices and a tool for organizing them. It is worth noting that much of the infrastructure is not specific to matrices, so it can be reused to organize into groups, search, and access through IDs any kind of objects tagged with properties.

Anymatrix 1.0 was released in October 2021, but development continues. We welcome contributions to the collection, either in the form of remote groups that can be downloaded into Anymatrix given a Git URL (an example is the matrices-mp-cosm group by X. Liu), or suggestions of matrices or groups to include in the built-in collection. We also welcome reports of bugs and recommendations for improving the clarity, correctness, and conciseness of the source code.

References

[1] N. J. Higham and M. Mikaitis. Anymatrix: An Extensible MATLAB Matrix Collection. MIMS EPrint 2021.16, Manchester Institute for Mathematical Sciences, The University of Manchester, UK. Oct. 2021 (to appear in Numer. Algorithms).

[2] N. J. Higham. Anymatrix: An Extensible MATLAB Matrix Collection. Nov. 2021.

[3] N. J. Higham and M. Mikaitis. Anymatrix: An Extensible MATLAB Matrix Collection. Users’ Guide. MIMS EPrint 2021.15, Manchester Institute for Mathematical Sciences, The University of Manchester, UK. Oct. 2021.

Nick Higham Awarded the 2022 Hans Schneider Prize

Professor Nick Higham has been awarded the 2022 Hans Schneider Prize by the International Linear Algebra Society (ILAS). The Hans Schneider Prize in Linear Algebra is awarded every three years by ILAS for research, contributions, and achievements at the highest level of Linear Algebra.

Nick is cited for his fundamental contributions in the analysis of a wide range of numerical linear algebra problems and matrix functions. He will present his lecture at the 25th ILAS Conference in Madrid, Spain, June 5-9, 2023.


Professor Nicholas J. Higham, The University of Manchester


Plotting Circles to Test Low Precision Arithmetics and Rounding Modes

As part of the 28th IEEE Symposium on Computer Arithmetic (held in July 2021), a special issue of IEEE Transactions on Emerging Topics in Computing on “Emerging and Impacting Trends on Computer Arithmetic” was organized, and it has recently been published [1]. Massimiliano Fasi and I contributed an article [2] (available open access) on simulating stochastic rounding [3] in floating-point arithmetics, which we have written about on this blog before [4].

One interesting numerical example, which we used to test various arithmetics and rounding modes [Sec. VIII C2, 2], is the ordinary differential equation (ODE)

u'(t)=v(t), \quad v'(t)=-u(t).

With the initial conditions u(0)=1 and v(0)=0, the solution of this ODE lies on the unit circle in the uv-plane [p. 51, 5]. When solving it numerically, our aim is to obtain a solution that lies as close as possible to the unit circle. A similar example is approximating the orbits of asteroids in physics [Sec. II A, 8], although that is more complicated since it additionally includes velocity and acceleration.

Using a forward Euler scheme we obtain:

u_{k+1}=u_k+hv_k, \quad v_{k+1}=v_k-hu_k,

where h=\frac{2\pi}{n} is the step size. Solving this with n=32 in bfloat16 arithmetic [6], with round to nearest (RN) and with stochastic rounding (SR) in the arithmetic operations, we obtain the solutions in Figure 1 (dotted line: exact unit circle; dashed: RN; solid: SR), which spiral out as previously shown [p. 51, 5]. To improve this we could use a different solver, such as the trapezium rule or leapfrog [p. 51, 5], but for the purpose of testing arithmetics rather than solvers we stick with Euler and try different step sizes.

Figure 1: Unit circle ODE solutions with n=32. The x-axis represents u and the y-axis represents v. Dotted line: exact unit circle; dashed: RN; solid: SR.
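For reference, the forward Euler iteration can be sketched in a few lines of MATLAB. This version runs in ordinary double precision for simplicity; the experiments in the paper instead round every arithmetic operation to bfloat16 with RN or SR:

```matlab
% Forward Euler for u' = v, v' = -u, starting on the unit circle.
n = 32;
h = 2*pi/n;              % step size
u = zeros(1, n+1);
v = zeros(1, n+1);
u(1) = 1;  v(1) = 0;     % initial conditions
for k = 1:n
  u(k+1) = u(k) + h*v(k);
  v(k+1) = v(k) - h*u(k);
end
% Even in exact arithmetic forward Euler spirals outward here, since the
% radius grows by a factor sqrt(1 + h^2) at every step.
plot(u, v)
```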

Next, we tried n=2^9; the plot is shown on the left-hand side of Figure 2. Visually, this step size looks ideal for bfloat16 arithmetic with RN, since the solution quite closely approximates the unit circle and does not spiral out as before. SR also performs quite well here, but noticeably worse than RN.

Next, we reduce the time step further, to n=2^{11}; the resulting solution is plotted on the right-hand side of Figure 2. In this case the solution with RN has been affected by rounding errors: the approximation of the unit circle looks like an octagon rather than a circle!

So what happened there? In the paper we explain this through the problem of stagnation in floating-point arithmetic. Stagnation happens when many small addends to a relatively large value are each lost to rounding, so the sum stays at its initial or some intermediately reached value. In this case, starting from the initial point u=1 and v=0, we expect both u and v to start to decrease. However, only v does so, since in u_{k+1}=u_k+hv_k the term hv_k is initially too small to change u_k.

The same pattern repeats throughout the solution, with u and v taking turns to suffer from stagnation. Since SR is immune to stagnation [7], this issue does not appear there.

Figure 2: Unit circle ODE solutions with n=2^{9} (left) and n=2^{11} (right).
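Stagnation is easy to reproduce in isolation. Here is a minimal illustration (our own, not from the paper) in single precision, where an addend smaller than half an ulp of the running value is lost entirely:

```matlab
% Minimal stagnation example in binary32: 1e-8 is below half an ulp of 1
% (eps('single')/2 is about 6e-8), so under round to nearest the addition
% has no effect and the sum stays at 1.
x = single(1);
disp(x + single(1e-8) == x)   % displays 1 (true): the update is lost
```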

Finally, we reduce the time step by a further factor of 100 (Figure 3). This reveals that with RN the bfloat16 solution hardly moves away from the initial conditions, while with SR it still computes the approximation of the unit circle quite accurately.

Figure 3: Unit circle ODE solutions with n=2^{13}.

The unit circle ODE constitutes an easy-to-run experiment for observing stagnation in floating-point arithmetic, which is usually demonstrated through recursive summation (as in our paper [Sec. VIII A-B, 2]). It is a visually convenient benchmark for testing low precision arithmetics and alternative rounding modes, such as stochastic rounding, and perhaps for teaching students about floating point and stagnation.

Further detail on the unit circle experiments, as well as other experiments with different ODEs and solvers, can be found in Section VIII of our paper [2]. The MATLAB code for the experiments is available at https://github.com/mmikaitis/stochastic-rounding-evaluation.

References

[1] M. Joldes, F. Lamberti, and A. Nannarelli, Special Section on “Emerging and Impacting Trends on Computer Arithmetic”. IEEE Trans. Emerg. Topics Comput., 9:3. Sep. 2021

[2] M. Fasi and M. Mikaitis, Algorithms for Stochastically Rounded Elementary Arithmetic Operations in IEEE 754 Floating-Point Arithmetic. IEEE Trans. Emerg. Topics Comput., 9:3. Sep. 2021

[3] N. J. Higham, What is Stochastic Rounding?. Jul. 2020

[4] M. Mikaitis, Simulating Stochastically Rounded Floating-Point Arithmetic Efficiently. Nov. 2020

[5] N. J. Higham, “Goals of applied mathematical research” in The Princeton Companion to Applied Mathematics, N. J. Higham, M. R. Dennis, P. Glen- dinning, P. A. Martin, F. Santosa, and J. Tanner, Eds. Princeton, NJ, USA: Princeton Univ. Press, 2015, pp. 48–55.

[6] N. J. Higham, What is Bfloat16 Arithmetic?. Jun. 2020

[7] M. P. Connolly, N. J. Higham, and T. Mary, Stochastic rounding and its probabilistic backward error analysis. SIAM J. Sci. Comput., 43(1), A566–A585. Feb. 2021

[8] D. A. Faux and J. Godolphin. The floating point: Tales of the unexpected. Am. J. Phys., 89(8). Aug. 2021

SIAM AN21 Minisymposium on Computational Frontiers in Numerical Linear Algebra

The SIAM Annual Meeting 2021 was held virtually, July 19 – 23, 2021. Nick Higham and I organised a two-part minisymposium “Computational Frontiers in Numerical Linear Algebra” that addressed recent algorithmic and software advances in numerical linear algebra. Links to slides from some of the talks are given below.

Minisymposium description: Numerical linear algebra (NLA) is fundamental to many applications in scientific computing. Therefore developing fast algorithms for various NLA problems is crucial to enhance our ability to tackle bigger scientific challenges. Furthermore, NLA software is used as a black box in various applications and hence theoretical guarantees on the computed results are important. Algorithmic development in NLA needs to work in tandem with the ongoing advances in computer hardware. This minisymposium will give a broad overview of various theoretical, algorithmic and software ideas that are being pursued to accelerate NLA problems.

  • Part 1
    • When Floating-Point Error Matters: the Hazards and Challenges of Low-Precision Computation. Erin C. Carson, Charles University, Czech Republic. Abstract. Slides.
    • Randomization for Solving Large Systems of Linear Equations. Laura Grigori, Oleg Balabanov, and Matthias Beaupere, Inria Paris, France. Abstract.
    • Mixed Precision Algorithms for Pushing the Performance Limits of Modern HPC Architectures. Hartwig Anzt, University of Tennessee, U.S. Fritz Goebel, Thomas Gruetzmacher, and Terry Cojean, Karlsruhe Institute of Technology, Germany. Andres Tomas and Enrique S. Quintana-Orti, Universitat Politècnica de València, Spain. Abstract. Slides.
    • HeFFTe: FFT Computations Towards Exascale. Alan F. Ayala, University of Tennessee, U.S. Miroslav Stoyanov, Oak Ridge National Laboratory, U.S. Stanimire Tomov and Sebastien Cayrols, University of Tennessee, Knoxville, U.S. Jack J. Dongarra, University of Tennessee and Oak Ridge National Laboratory, U.S. Abstract. Slides.
  • Part 2
    • Replacing Pivoting in Distributed Gaussian Elimination with Randomized Transforms. Neil Lindquist and Piotr Luszczek, University of Tennessee, U.S. Jack J. Dongarra, University of Tennessee and Oak Ridge National Laboratory, U.S. Abstract. Slides.
    • Data-Aware Mixed Precision Algorithms. Theo Mary, Sorbonne Universités and CNRS, France. Abstract. Slides.
    • Random Matrices Generating Large Growth in LU Factorization with Pivoting. Srikara Pranesh and Nicholas J. Higham, The University of Manchester, United Kingdom; Desmond John Higham, University of Edinburgh, United Kingdom. Abstract. Slides.
    • Mixed Precision Randomized SVD. Michael P. Connolly, Nicholas J. Higham, and Srikara Pranesh, The University of Manchester, United Kingdom. Abstract.

NLA Group Presentations at SIAM Annual Meeting 2021

Members of the Numerical Linear Algebra Group will be giving six presentations at the upcoming SIAM Annual Meeting (AN21). They are also organizing the two-part minisymposia Computational Frontiers in Numerical Linear Algebra (Part 1, Part 2) and Bohemian Matrices and Applications (Part 1, Part 2).

The conference will be held virtually, from 19th July to 23rd July, 2021.

Here are the dates and times (in Eastern Time, UTC-4) at which members of our group will be giving their presentations:

Tuesday 20 July
18:30 – 20:30 Massimiliano Fasi
Cpfloat: A C Library for Emulating Low-Precision Arithmetic (poster)

Wednesday 21 July
14:30 – 15:30 Nick Higham
Past President’s Address – Mixed Precision Numerical Linear Algebra with More Realistic Error Bounds

Thursday 22 July
16:45 – 17:10 Massimiliano Fasi
Determinants of Normalized Bohemian Upper Hessenberg Matrices

Friday 23 July
10:15 – 10:40 Theo Mary
Data-Aware Mixed Precision Algorithms
10:45 – 11:10 Srikara Pranesh 
Random Matrices Generating Large Growth in LU Factorization with Pivoting
11:15 – 11:40 Michael Connolly
Mixed Precision Randomized SVD

More information on AN21 is available here.

Nick Higham Awarded the SIAM George Pólya Prize for Mathematical Exposition

The Society for Industrial and Applied Mathematics (SIAM) has chosen Nick Higham, Royal Society Research Professor and Richardson Professor of Applied Mathematics, as the 2021 recipient of the George Pólya Prize for Mathematical Exposition.

The prize, which is awarded for expository work that communicates mathematics effectively, is named after George Pólya, the Hungarian mathematician who wrote the million-selling book How to Solve It, first published in 1945.

“Pólya was a brilliant researcher, teacher, and expositor of mathematics,” said Higham. “It is an honor to receive this SIAM prize named after him, especially as most of my research and expository writing has been published with SIAM.”

Higham was cited for “the crisp clarity, elegance and accessibility of his mathematical and popular exposition on a broad range of topics in applied mathematics.”

The award ceremony will be held at the 2021 SIAM Annual Meeting, scheduled to be held in a virtual format during July 19-23, 2021. See also the SIAM announcement of the award.

Professor Nicholas J. Higham, University of Manchester. Photo: Rob Whitrow

The BLAS named as one of the “Ten Codes that Transformed Science”

By Sven Hammarling.

An article in a recent issue of the journal Nature has the title “Ten Codes that Transformed Science” [1].

Two of the codes are purely numerical, namely the fast Fourier transform (FFT) and the Basic Linear Algebra Subprograms (BLAS). The BLAS evolved in three stages: the Level 1 BLAS [2] for scalar and vector operations, the Level 2 BLAS [3] for matrix-vector operations, and the Level 3 BLAS [4] for matrix-matrix operations. The first draft proposal for the Level 1 BLAS appeared in the ACM Signum Newsletter in 1973, so the project as a whole spanned about seventeen years.

Two members of the Numerical Linear Algebra Group were directly involved in the development of the BLAS. Jack Dongarra wrote a number of the Level 1 BLAS routines, was involved in testing the routines on an IBM 370/195 and provided several efficient implementations of the routines, all of which was acknowledged in the published paper. Both Jack Dongarra and Sven Hammarling were authors of the Level 2 and 3 BLAS, together with Jeremy Du Croz and Richard Hanson for the Level 2 BLAS, and Jeremy Du Croz and Iain Duff for the Level 3 BLAS.

To take two quotes by Jack Dongarra from the Nature article:

“In effect, BLAS reduced matrix and vector mathematics to a basic unit of computation as fundamental as addition and subtraction … It provides the fabric on which we do computing.”

As well as providing a standard interface, it was hoped that computer manufacturers would provide optimised implementations and that hope has certainly been realised.

It should be said that the BLAS were very much a community project, with input from many people. The BLAS, especially the Level 3 BLAS, provided the basic high performance operations, particularly for the block partitioned methods, used by the LAPACK [5] project, which started in 1987, before the actual publication of the Level 3 BLAS.

More recently, driven by modern parallel machines and the desire to solve larger and larger problems, a standard for Batched BLAS (BBLAS) operations has been proposed and has been accepted for publication in ACM TOMS [6]. The development of the BBLAS was done by the Numerical Linear Algebra Group as part of an EU project, NLAFET [7], in collaboration with Jack Dongarra’s Innovative Computing Laboratory at the University of Tennessee.

[1] J. M. Perkel. Ten Codes that Transformed Science. Nature, 589:344–349, January 2021.
[2] C. L. Lawson, R. J. Hanson, D. Kincaid, and F. T. Krogh. Basic Linear Algebra Subprograms for FORTRAN usage. ACM Trans. Math. Software, 5:308–323, 1979.
[3] J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson. An extended set of FORTRAN Basic Linear Algebra Subprograms. ACM Trans. Math. Software, 14:1–32, 399, 1988.
[4] J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling. A set of Level 3 Basic Linear Algebra Subprograms. ACM Trans. Math. Software, 16:1–28, 1990.
[5] E. Anderson, Z. Bai, C. H. Bischof, S. Blackford, J. Demmel, J. J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. C. Sorensen. LAPACK Users’ Guide. SIAM, Philadelphia, PA, USA, 3rd edition, 1999. ISBN 0-89871-447-8. (http://www.netlib.org/lapack/lug/).
[6] A Set of Batched Basic Linear Algebra Subprograms. ACM Trans. Math. Software. To appear.
[7] Deliverable D7.6 (https://www.nlafet.eu/public-deliverables/).

 

 

SIAM CSE21 Minisymposium on “Mixed Precision Algorithms for High Performance Scientific Computing”

The biennial SIAM Conference on Computational Science and Engineering (CSE) was held virtually, March 1–5, 2021. Theo Mary and I organised a two-part minisymposium on recent algorithmic and software advances in mixed precision methods in scientific computing. Below are links to the slides of the talks.

Minisymposium description: The increasing support of lower precision arithmetics in hardware, such as fp16 and bfloat16, provides new opportunities for high performance scientific computing. However, even though low precision arithmetics can provide significant speed, communication, and energy benefits, their use in scientific computing poses the challenge of preserving the accuracy and stability of the computation. To address this issue, a variety of mixed precision algorithms that combine low and high precisions have emerged.

Nick Higham Named 2020 ACM Fellow

Professor Nick Higham has been named among the 2020 Association for Computing Machinery (ACM) Fellows, who are recognised for work underpinning contemporary computing.

The accomplishments of the 2020 ACM Fellows have driven innovations that have ushered in significant improvements across many areas of technology, industry, and personal life.

Nick has been recognised for his contributions to numerical linear algebra, numerical stability analysis, and communication of mathematics.

He is among 95 ACM Fellows, representing universities, corporations and research centres around the world, who are celebrated for their wide-ranging and fundamental contributions in areas including artificial intelligence, cloud computing, computer graphics, virtual reality, and more.

The ACM Fellows programme recognises the top 1% of ACM members for their outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community. Fellows are nominated by their peers, with nominations reviewed by a distinguished selection committee.

ACM President Gabriele Kotsis said: “The 2020 ACM Fellows have demonstrated excellence across many disciplines of computing. These men and women have made pivotal contributions to technologies that are transforming whole industries, as well as our personal lives. We fully expect that these new ACM Fellows will continue in the vanguard in their respective fields.”

Professor Nicholas J. Higham, University of Manchester

Numerical Linear Algebra Group Activities 2020

The Numerical Linear Algebra Group had a productive year in 2020, despite working remotely from March onwards because of the pandemic. This post summarizes what we got up to. Publications are not included here, but many of them can be found on MIMS EPrints under the category Numerical Analysis; see also these news stories about our publications.


Craig Lucas, Nick Higham, Xinye Chen, Steven Elsworth, Xiaobo Liu, Michael Connolly, Mantas Mikaitis, Len Freeman, Massimiliano Fasi, Pierre Blanchard, Sven Hammarling, Asad Raza Aitor Mehasi Mehasi, Stephanie Lai, Gian Maria Negri Porzio, Thomas McSweeney, Mawussi Zounon, Françoise Tisseur, Srikara Pranesh, Yuqing Zhang, Eleni Vlachopoulou, March 2020.

Software

We make our research codes available as open source, principally on GitHub; see the repositories of Connolly, Fasi, Higham, Liu, Pranesh, Tisseur, and Zounon.

We also put MATLAB software on MATLAB Central File Exchange and on our own web sites, e.g., the Rational Krylov Toolbox (RKToolbox).

PhD Students

We welcomed new PhD students Xinye Chen and Thomas Seleiro.

Steven Elsworth successfully defended his PhD thesis Rational Krylov Methods and Machine Learning Approaches to Time Series Forecasting in March 2020.

Michael Connolly took an internship with MathWorks from July to September 2020.

Postdoctoral Research Associates (PDRAs)

Mantas Mikaitis, previously an EPSRC Doctoral Prize Fellow in the group, is now working on the ICONIC project in the group. During the year he successfully defended his PhD thesis Arithmetic Accelerators for a Digital Neuromorphic Processor in the Department of Computer Science.

Massimiliano Fasi left the group in September 2020 and is now working at Örebro University in Sweden.

Roberto Cahuantzi was a member of the group from March to September 2020, working with Stefan Güttel.

Recognition and Service

Jack Dongarra received the 2020 IEEE Computer Society’s Computer Pioneer Award.

Srikara Pranesh and Michael Connolly won first and second best poster prizes, respectively, at the SIAM UKIE Section Meeting, Edinburgh, January 2020.

Françoise Tisseur received the London Mathematics Society’s Fröhlich Prize.

Theo Mary received an honourable mention for the 2020 Householder Prize and the 2021 SIAG/LA Early Career Prize. He also received a grant from the Faculty of Engineering Sciences of Sorbonne University for a project on “Mixed precision algorithms for HPC”.

Sixteen publications by members of the NLA group feature among the top 40 most read articles in two SIAM journals, both leading venues for publications in numerical linear algebra.

Stefan Güttel was awarded the 2021 James H. Wilkinson Prize in Numerical Analysis and Scientific Computing.

Nick Higham received the IMA Gold Medal 2020.


Grants

Stefan Güttel’s and Nick Higham’s Alan Turing Fellowships have been extended by one year to September 2021.

Stefan Güttel received a Small Project Grant from the Alan Turing Institute.

Nick Higham and Françoise Tisseur received funding for work on multi-precision algorithms from Lawrence Livermore National Laboratory under the Exascale Computing Project.

Nick Higham and Françoise Tisseur received funding from The MathWorks, Inc. to support a PhD student to work on exploiting multiprecision arithmetic.

Massimiliano Fasi is one of the participants of the 2020 INdAM-GNCS project “Low-rank methods for linear algebra problems with data-sparse structure” funded by the Scientific Computing Group of the Istituto Nazionale di Alta Matematica “Francesco Severi”.

External Presentations

SIAM UKIE Annual Meeting 2020, Edinburgh, January 10: Connolly, Liu, Negri Porzio, Pranesh, Higham, Pranesh and Tisseur.
SIAM Conference on Parallel Processing for Scientific Computing (PP20) in Seattle, Washington, US, February 12 – 15: Fasi, Mary, Mikaitis, Pranesh and Zounon.
Theo Mary, Performance and Accuracy of Mixed-Precision Matrix Factorizations, SIAM PP20, February, 2020.
Srikara Pranesh, Point Arithmetic for the Solution of Linear System of Equations, SIAM PP20, February, 2020.
Mawussi Zounon, Opportunities for Multi Precision Computation in Memory Bound Applications, SIAM PP20, February, 2020.
Nick Higham, Are Numerical Linear Algebra Algorithms Accurate at Extreme Scale and at Low Precisions?, in E-NLA Online seminar series on Numerical Linear Algebra, April 29, 2020.
Nick Higham, Random Orthogonal Matrices in High Performance Computing, Masked Guest Speaker, King Abdullah University of Science and Technology, 2020.
Nick Higham, The Anatomy of the World’s Fastest Linear Equation Solver, Online Distinguished Lecture, The Hong Kong Polytechnic University, September 2020.
Theo Mary, Mixed Precision Low Rank Compression of Data Sparse Matrices, Communications in NLA, online, September 2020.
Nick Higham, Rehabilitating Correlations, Leicester Actuarial Science Society and Students Union Mathematical Society, 2020.
Theo Mary, Mixed precision arithmetic: hardware, algorithms and analysis, London Mathematical Society Virtual Graduate Student Meeting, November, 2020.
Nick Higham, Mathematics of today’s floating-point arithmetic, London Mathematical Society Naylor Lecture, November 20, 2020.
Françoise Tisseur, Towards Reliable Eigensolvers for Nonlinear Eigenvalue Problems, in E-NLA Online seminar series on Numerical Linear Algebra, November 25, 2020.
