## Our Alumni – Vanni Noferini

In this blog post, we asked one of our alumni, Vanni Noferini, a few questions about his time with the Numerical Linear Algebra Group.

Please can you introduce yourself and tell us a bit about your experience before attending University of Manchester?

I was born in Florence, Tuscany. I did my undergraduate studies there, obtaining an M.Sc. in Theoretical Physics. Later, I got my Ph.D. in Mathematics in Pisa, jointly supervised by Dario Bini and Luca Gemignani. While in Italy I also played volleyball at a somewhat competitive level, and I had a number of part-time and student jobs; for example, I wrote science popularisation articles for a Swiss newspaper.

Why did you choose to do your postdoc in Manchester?

While writing my PhD thesis I applied to an open call for a postdoc position with Nick Higham, to work on functions of a matrix. Nick is a world-famous superstar in the area, so it sounded like a great opportunity. Moreover, I liked Nick, Françoise, and the rest of the group from the day of my interview; they cannot have disliked me too much either, because I was offered the job. Accepting it was a great decision: I learnt a lot, and my time there has certainly shaped my career to a significant extent. I started in September 2012 (technically as a predoc: I flew to Pisa for my viva in October 2012) and stayed until August 2015.

During my Manchester years I worked not only with Nick but also with many other people who were then in the group: for example Yuji Nakatsukasa, Javi Perez, Meisam Sharify, and Françoise Tisseur. Plus there were other fantastic mathematicians around to talk with, such as Stefan Güttel or Martin Lotz — not to mention the frequent international visitors. Those were the days and Manchester was just the place to be… Now I manage my own research group at Aalto, and I am doing my best to create a similarly fruitful environment: my inspiration for this goal is definitely the NLA group in Manchester during my postdoc!

How did you find Manchester?

A little humid, but fun. In my first couple of weeks I actually stayed in the Peak District (the husband of a friend of my then girlfriend had rented me a room in Congleton, Cheshire), which was beautiful but would not have been a very convenient long-term address for working in the Alan Turing Building. Thus, I soon moved to Didsbury, where I lived for most of my three years in Manchester. I lived on Austin Drive, not far from Burnage railway station, and apparently very close to where Charles Van Loan had stayed during his own Manchester postdoc (at least, so he once told me). Later, more NLA group members figured out that Didsbury was a rather nice place, so eventually we grew a small community there. In fact, fellow Didsbury resident Martin Lotz and I used to refer to Manchester as “Greater Didsbury”. On the occasional sunny weekend I liked to drive around, so I got to know quite well the broader area of Greater Dids… I mean, Manchester, and North-West England.

Can you tell us about your career since leaving University of Manchester?

I have continued my career in academia. I was a Lecturer (Research) at the University of Essex in Colchester for about four years. I obtained tenure in Essex, and also indefinite leave to remain in the UK post-Brexit, having been granted “settled status” as a long-term UK resident and EU citizen. However, even though I enjoyed my seven years in England, I was offered an attractive position by Aalto University (Finland) that would give me more research time and better funding opportunities. So I moved here in May 2019.

Currently I am an Associate Professor in Mathematics at Aalto University, which is located in Espoo (Finland). Here I am the leader of a research group on matrix theory and its applications. At the time of this interview, my group includes one visiting professor (Paul Van Dooren), two postdocs (Giovanni Barbarino and Manme Quintana), two PhD students (Lauri Nyman and Ryan Wood) and one MSc student by research (Antti Haavikko). We are having fun!

## Jack Dongarra Receives 2021 ACM A.M. Turing Award

Congratulations to Jack Dongarra who has received the 2021 ACM A.M. Turing Award. He is cited “for pioneering contributions to numerical algorithms and libraries that enabled high performance computational software to keep pace with exponential hardware improvements for over four decades.”

Dongarra has been a Turing Fellow in the Department of Mathematics since 2007, and he also holds appointments at the University of Tennessee and Oak Ridge National Laboratory.

In summer 2021 the NLA Group organized a workshop New Directions in Numerical Linear Algebra and High Performance Computing: Celebrating the 70th Birthday of Jack Dongarra. Videos of talks, including one by Dongarra, are available here and provide much insight into Dongarra’s career and achievements.

The photo below shows Dongarra speaking at the 2019 Manchester workshop Advances in Numerical Linear Algebra: Celebrating the Centenary of the Birth of James H. Wilkinson. Wilkinson, seen at the top right-hand corner of the slide, won the Turing Award in 1971.

Jack Dongarra

## NLA Group Work on Stochastic Rounding Featured in Science News

The NLA group’s work on stochastic rounding has been featured in a recent article in Science News.

In an interview, Mantas Mikaitis talks about the advantages of stochastic rounding and his work on devising a method to simulate stochastic rounding.

The full article is available here. Other news outlets also picked up the story, such as Tech Xplore (article).

## SIAM PP22 Minisymposium on Understanding and Exploiting Mixed-Precision Accelerators for High-Performance Computing

This double minisymposium, organized by Mantas Mikaitis and Nick Higham, took place during the SIAM Conference on Parallel Processing for Scientific Computing (PP22), held virtually on February 23-26, 2022. Below we provide slides from some of the talks delivered in the minisymposium.

Abstract: The growth of domain-specific hardware devices, such as low- and mixed-precision Matrix-Multiply Accumulate (MMA) accelerators (for example Tensor Processing Units and Tensor Cores), motivates several strands of research in scientific computing. First, algorithm designers aim to benefit from the speedup these hardware devices make possible by adapting algorithms, or parts of them, to run in low or mixed precisions. Second, we need to understand the low level details of how the devices implement floating-point arithmetic and to what extent they satisfy floating-point arithmetic standards. Third, new rounding error analysis is being developed to further support the task of finding the best ways to use the accelerators in order to maximize the accuracy of the results. This minisymposium gathers researchers in scientific computing, numerical analysis, and the standardization and testing of floating-point arithmetics to report the latest research on applying and understanding the MMA hardware.

Numerical Behavior of GPU Matrix Multiply-Accumulate Hardware. Massimiliano Fasi, Örebro University, Sweden; Nicholas J. Higham, Mantas Mikaitis, and Srikara Pranesh, The University of Manchester, United Kingdom; Florent Lopez, ANSYS, Inc., U.S.; Theo Mary, Sorbonne Universités and CNRS, France. Abstract.

Mixed Precision in Linear Algebra. Jack J. Dongarra, University of Tennessee and Oak Ridge National Laboratory, U.S.

Challenges of Mixed-Precision Fast Fourier Transforms from the Instruction Level to at Scale Computations. Lukasz Ligowski, NVIDIA, U.S. Abstract.

Double-Word Arithmetic and Accurate Calculation of Euclidean Norms. Vincent Lefèvre, Inria Paris, France; Nicolas Louvet, Université Claude Bernard Lyon 1, France; Jean-Michel Muller, CNRS, France; Joris Picot, Ecole Normale Superieure de Lyon, France; Laurence Rideau, Inria Sophia Antipolis, France. Abstract.

Online and Offline Precision Tuning in Hardware Accelerators. George A. Constantinides, Imperial College London, United Kingdom. Abstract.

Reducing Data Movement in Mixed Precision LU Factorization with GPU Tensor Cores. Atef Dorai, LIP-ENS Lyon, France; Roman Iakymchuk, Sorbonne Université, France and Fraunhofer ITWM, Germany; Florent Lopez, ANSYS, Inc., U.S.; Theo Mary, Sorbonne Universités and CNRS, France. Abstract.

BLIS: Mixing, Matching, and Extending Precision. Greg Henry, Intel Corporation, U.S.; Devin Matthews, Southern Methodist University, U.S.; Maggie E. Myers, Devangi N. Parikh, Robert A. van de Geijn, and Field G. Van Zee, University of Texas at Austin, U.S. Abstract.

Fluid Simulations Accelerated with 16 bit: Approaching 4x Speedup on A64FX by Squeezing ShallowWaters.jl into Float16. Milan Kloewer, University of Oxford, United Kingdom; Sam Hatfield, European Center for Medium-Range Weather Forecasts (ECMWF); Matteo Croci, University of Oxford, United Kingdom; Peter D. Dueben, European Weather Centre, United Kingdom; Tim Palmer, University of Oxford, United Kingdom. Abstract.

## Numerical Linear Algebra Group Activities 2021

The Numerical Linear Algebra Group had a busy year in 2021, despite the pandemic limiting our travel. This post summarizes what we got up to. Publications are not included here, but many of them can be found on MIMS EPrints under the category Numerical Analysis; see also these news stories about our publications.

## Software

Nick Higham and Mantas Mikaitis released Anymatrix, a MATLAB toolbox that provides an extensible collection of matrices, organized in groups, with the ability to search the collection by matrix properties. See this blog post by Nick and this blog post by Mantas.

We make our research codes available as open source, principally on GitHub; see the repositories of Connolly, Fasi, Higham, Liu, Mikaitis, Pranesh, and Zounon.

We also put MATLAB software on MATLAB Central File Exchange and on our own web sites, e.g., the Rational Krylov Toolbox (RKToolbox).

## PhD Students

We welcomed new PhD students Ayana Mussabayeva and Alban Bloor Riley.

Thomas McSweeney (Scheduling with Precedence Constraints in Heterogeneous Parallel Computing) and Gian Maria Negri Porzio (Nonlinear Eigenvalue Problems: Theory and Algorithms) successfully defended their PhD dissertations.

## Postdoctoral Research Associates (PDRAs)

Srikara Pranesh left the group in October 2021 and is now working at V-Labs in Bengaluru. He joined us in 2017 to work on the NLAFET (Parallel Numerical Linear Algebra for Extreme Scale Systems) project and then moved on to the ICONIC project.

Mawussi Zounon left the group in June 2021 to become a senior software engineer at Arm Manchester. He joined us in 2016 to work on the NLAFET project. At the end of the NLAFET project, he continued working with the Numerical Linear Algebra Group as a Knowledge Transfer Partnership (KTP) associate, in partnership with NAG. The KTP, which ended in June 2021, was deemed “Outstanding” by the KTP Grading Panel for its achievement in meeting the KTP Objectives.

Ian McInerney joined the group in December 2021, having completed his PhD in the High Performance Embedded and Distributed Systems (HiPEDS) Centre for Doctoral Training at Imperial College, London, under the supervision of Eric Kerrigan and George Constantinides.

## Grants

Nick Higham and Françoise Tisseur received funding for work on multi-precision algorithms from Lawrence Livermore National Laboratory under the Exascale Computing Project.

Françoise Tisseur received EPSRC funding to work on mixed precision symmetric eigensolvers.

## Presentations at Conferences and Workshops

SIAM Conference on Computational Science and Engineering (virtual), March 1-5: Connolly, Fasi, Higham, Mikaitis, Pranesh, Tisseur.

SIAM Conference on Applied Linear Algebra (virtual), May 17-21: Fasi, Liu.

New Directions in Numerical Linear Algebra and High Performance Computing (virtual) July 7-8, 2021: Hammarling, Higham, Tisseur.

SIAM Annual Meeting 2021 (virtual), July 19-23: Connolly, Fasi, Higham, Mary, and Pranesh.

IEEE International Conference on Data Mining (ICDM) 2021, Auckland, New Zealand (virtual), December 7-10: Chen.

## Conference and Workshop Organization

Stefan Güttel continued to co-organize the E-NLA online seminar series dedicated to topics in Numerical Linear Algebra.

Nick Higham organized a minisymposium on Reduced Precision Arithmetic and Stochastic Rounding at the SIAM Conference on Computational Science and Engineering, March 1-5, 2021.

Theo Mary and Srikara Pranesh organised a two-part minisymposium on Mixed Precision Algorithms for High Performance Scientific Computing at the SIAM Conference on Computational Science and Engineering, March 1-5, 2021.

Massimiliano Fasi and Xiaobo Liu organised a two-part minisymposium on Computing Functions of Matrices at the SIAM Conference on Applied Linear Algebra, May 17-21, 2021.

Sven Hammarling, Nick Higham and Françoise Tisseur organized the New Directions in Numerical Linear Algebra and High Performance Computing workshop which was held virtually on July 7-8, 2021, celebrating the 70th birthday of Jack Dongarra. Videos from the workshop are available here.

Rob Corless and Nick Higham organized a two-part minisymposium Bohemian Matrices and Applications at the SIAM Annual Meeting (virtual), July 2021.

Nick Higham and Sri Pranesh organized a two-part minisymposium Computational Frontiers in Numerical Linear Algebra at the SIAM Annual Meeting (virtual), July 2021.

Max Fasi was on the organizing committee of the 9th Matrix Equations and Tensor Techniques Workshop, held September 9-10, 2021, in Perugia, Italy.

## Recognition and Service

Nick Higham was named a 2020 ACM Fellow.

Stefan Güttel was awarded the 2021 James H. Wilkinson Prize in Numerical Analysis and Scientific Computing.

Nick Higham received the George Pólya Prize for Mathematical Exposition from the Society for Industrial and Applied Mathematics (SIAM).

Nick Higham was awarded the 2022 Hans Schneider Prize by the International Linear Algebra Society (ILAS).

## External Members

Max Fasi took up a Lectureship in Computer Science at Durham University in October 2021.

## Nick Higham Featured in People of ACM

Professor Nick Higham is featured in a People of ACM profile. “People of ACM” is a bulletin of the Association for Computing Machinery (ACM) highlighting “the unique scientific accomplishments and compelling personal attributes of ACM members who are making a difference in advancing computing as a science and a profession”.

Professor Jack Dongarra was also featured in People of ACM in 2013.

Professor Nicholas J. Higham, University of Manchester

## Exploiting the MATLAB Language in Implementing a Matrix Collection

Conciseness, the quality of communicating information in as few words as feasible, is as valuable a guide in programming as it is in natural language. Concise code, meaning short and self-explanatory lines or blocks of code, is easier to achieve in higher-level languages such as Python or MATLAB, and more difficult when writing in, for example, a RISC assembly language whose short instructions each do very little.

In developing the recently released matrix collection Anymatrix [1,2] (joint work with Nick Higham), we used MATLAB and exploited its string handling functions to carry out complicated string manipulations in a few lines of code. This post describes some details of the implementation; further information can be found in the users’ guide [3].

### Groups of Matrices

Anymatrix enforces the organization of matrices into groups: a new matrix has to be placed either into one of the existing groups or into a new group. For example, the built-in groups are contest, core, gallery, hadamard, matlab, nessie, and regtools. Each of these comes with around 20 matrices, which can be fixed-size matrices or matrix generators.

Matrices are identified by <group_name>/<matrix_name>. For example, core/beta is a matrix called beta in the core group. Matrix names are their file names minus the extension .m. Group names are directory names of the groups. Since the IDs of matrices are derived from the directory and file names, uniqueness is guaranteed. This also means that multiple matrices can have the same name as long as they are in different groups.

Around the time we started working on this collection, MATLAB R2020b introduced pattern, which provides a convenient way to search and match strings. An example usage in Anymatrix is as follows.

% Matrix ID pattern
matrix_ID_pat = ...
    asManyOfPattern(alphanumericsPattern | ...
                    characterListPattern("_") | ...
                    characterListPattern("-")) + ...
    '/' + ...
    asManyOfPattern(alphanumericsPattern | ...
                    characterListPattern("_") | ...
                    characterListPattern("-"));

This code creates a pattern for matrix IDs, which matches a sequence of letter, digit, underscore, or hyphen characters, followed by a single forward slash, followed by another such sequence. This pattern can then be used, for example, to match strings (using matches) and to extract substrings (using extract):

>> S = '...core/beta...';
>> extract(S, matrix_ID_pat)
ans =
1×1 cell array
{'core/beta'}

The pattern functionality helped in writing concise code. We used it in various places in Anymatrix, for example to detect matrix IDs in commands and to extract substrings in Boolean search expressions, separating properties from brackets and the operators and/or/not.
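For readers more familiar with regular expressions, the same matching can be sketched with Python's re module. This is an illustrative analogue of the MATLAB pattern above, not part of Anymatrix; the variable name is my own.

```python
import re

# Analogue of the MATLAB matrix ID pattern: a sequence of letters,
# digits, underscores, or hyphens, then a forward slash, then
# another such sequence.
matrix_id_pat = re.compile(r"[A-Za-z0-9_\-]+/[A-Za-z0-9_\-]+")

s = "...core/beta..."
print(matrix_id_pat.findall(s))  # ['core/beta']
```

The character class plays the role of `alphanumericsPattern | characterListPattern("_") | characterListPattern("-")`, and `findall` plays the role of extract.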

### Properties of Matrices

Matrix properties can be specified in two ways, either as an assignment properties = {...} inside the matrix .m files, or in the am_properties.m files in group folders. With the latter option we may specify properties of multiple matrices in one assignment. Anymatrix contains a list of recognized properties, such as ill conditioned or banded. When Anymatrix scans the file structure, it gives a warning if a matrix contains an unrecognized property.

Properties can be searched by Boolean expressions with operators and/or/not and brackets for specifying precedence. For example:

>> anymatrix('p', 'integer and ill conditioned')
ans =
3×1 cell array
{'core/wilson'     }
{'matlab/pascal'   }

The following function, which transforms the supplied Boolean expression into an expression that can be evaluated in MATLAB, demonstrates many of the MATLAB features we made use of.

function IDs = search_by_properties(expression)
    IDs = {};
    % Replace 'and', 'or', and 'not' by corresponding MATLAB symbols.
    expression = replace(expression, ' and ', ' & ');
    expression = replace(expression, ' or ', ' | ');
    expression = replace(expression, ' not ', ' ~');
    expression = replace(expression, '(not ', '(~');
    if startsWith(expression, 'not')
        expression = expression(4:length(expression));
        expression(1) = '~';
    end
    % Assume properties are made up of letters, can include a hyphen
    % or a white space character, and are not case sensitive.
    pat = (lettersPattern + whitespacePattern + lettersPattern) ...
          | (lettersPattern + characterListPattern('-') ...
             + lettersPattern) ...
          | lettersPattern;
    % Extract properties from the logical expression and replace
    % them by function calls to test for membership.
    ex_props = extract(expression, pat);
    ind = 1;
    new_expression = '';
    for p = ex_props.'
        mod_prop = strcat('ismember(''', ...
            strrep(lower(p{1}), '-', ' '), ...
            ''', strrep(lower(matrix_properties{1}), ''-'', '' ''))');
        trunc_exp = expression(ind:end);
        % Find where the property is in the expression.
        prop_index = strfind(trunc_exp, p{1});
        % Take anything before the property and append the modified
        % version of the property.
        new_expression = strcat(new_expression, ...
            trunc_exp(1:prop_index(1)-1), ...
            mod_prop);
        % Move the index after the property that was replaced.
        ind = ind + prop_index(1) + length(p{1}) - 1;
    end
    new_expression = strcat(new_expression, expression(ind:end));

    % Find matrices whose properties satisfy the specified logical
    % expression.
    k = 0;
    for matrix_properties = properties.'
        k = k + 1;
        % Test if the expression is true for this matrix.
        if eval(new_expression)
            IDs = [IDs; matrix_IDs{k}];
        end
    end
end

On lines 4 to 11 we replace the operators and/or/not by the corresponding MATLAB operators &/|/~. We then extract the properties from the expression, making use of patterns, modify them into membership tests, and place them back into the expression (lines 14 to 38). Finally, on lines 42 to 50 we use eval to run the Boolean expression as MATLAB code for every matrix, and return a list of the matrices that satisfy it.

In the example command anymatrix('p', 'integer and ill conditioned'), the expression 'integer and ill conditioned' is transformed internally to 'ismember('integer', strrep(lower(matrix_properties{1}), '-', ' ')) & ismember('ill conditioned', strrep(lower(matrix_properties{1}), '-', ' '))' which can then be evaluated using eval. Notice that Anymatrix replaces hyphens by white spaces to allow two ways to specify properties, and converts properties to lower case to avoid case sensitivity. This way, ill conditioned and Ill-conditioned, for example, are equivalent properties in Anymatrix.
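The normalization step can be illustrated on its own. Here is a hypothetical Python sketch of the same idea (Anymatrix itself does this in MATLAB with strrep and lower; the function name is my own):

```python
def normalize(prop):
    # Lower-case the property and replace hyphens with spaces,
    # mirroring strrep(lower(...), '-', ' ') in the MATLAB code.
    return prop.lower().replace("-", " ")

# 'Ill-conditioned' and 'ill conditioned' normalize to the same string.
print(normalize("Ill-conditioned") == normalize("ill conditioned"))  # True
```

Both spellings map to the canonical form 'ill conditioned', which is the string actually tested for membership in a matrix's property list.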

### Testing in Anymatrix

There are two ways to implement testing in Anymatrix. One way is to add a function in file test_run.m in the group folder which can be invoked by an appropriate Anymatrix command. Another way is to test matrices for their properties. This is provided in the directory testing/ in the root folder of Anymatrix.

It is worth noting that not all supported properties can be tested, so only a subset of those in the built-in collection are. Each property that is feasible to test has a corresponding test_<property>.m file; the functions in these files must return true or false, given a matrix. Anymatrix uses MATLAB’s unit testing framework and automatically generates function-based unit tests for every matrix in the collection. When a new matrix is added, or a new property is added to an existing matrix, the unit testing framework picks it up automatically on the next run. The following is an example function-based unit test generated automatically by Anymatrix. It tests the properties of the core/fourier matrix for a selection of dimensions.

function test_core_fourier(testcase)
    A = anymatrix('core/fourier',3);
    anymatrix_check_props(A, 'core/fourier', testcase);
    A = anymatrix('core/fourier',5);
    anymatrix_check_props(A, 'core/fourier', testcase);
    A = anymatrix('core/fourier',8);
    anymatrix_check_props(A, 'core/fourier', testcase);
    A = anymatrix('core/fourier',10);
    anymatrix_check_props(A, 'core/fourier', testcase);
    A = anymatrix('core/fourier',15);
    anymatrix_check_props(A, 'core/fourier', testcase);
    A = anymatrix('core/fourier',24);
    anymatrix_check_props(A, 'core/fourier', testcase);
    A = anymatrix('core/fourier',25);
    anymatrix_check_props(A, 'core/fourier', testcase);
    A = anymatrix('core/fourier',30);
    anymatrix_check_props(A, 'core/fourier', testcase);
    A = anymatrix('core/fourier',31);
    anymatrix_check_props(A, 'core/fourier', testcase);
end

Above, the function anymatrix_check_props looks for files test_<property>.m and runs those tests that it finds on the given matrix:

for prop = P.'
    test_func_name = strcat('test_', ...
        strrep(strrep(lower(prop{1}), ...
            '-', '_'), ' ', '_'));
    if isfile(strcat(root_path, '/private/', test_func_name, '.m'))
        handle = str2func(test_func_name);
        verifyTrue(testcase, handle(M), ...
            strcat("Matrix ", matrix_ID, " is not ", prop{1}, "."));
    end
end

In this way, Anymatrix provides almost automatic testing, triggered simply by adding new matrices or new properties to existing matrices. A good example is the MATLAB built-in gallery group, which we made available in Anymatrix: by annotating each matrix with properties we can check that MATLAB’s gallery matrices have the expected properties.

### Summary

Anymatrix is both a collection of matrices and a tool for organizing them. It is worth noting that much of the infrastructure is not specific to matrices, so it can be reused to organize into groups, search, and access through IDs any kind of objects annotated with properties.

Anymatrix 1.0 was released in October 2021, but development continues. We welcome contributions to the collection, either in the form of remote groups that can be downloaded into Anymatrix given a Git URL (an example is the matrices-mp-cosm group by X. Liu), or as suggestions of matrices or groups to include in the built-in collection. We also welcome reports of bugs and recommendations for improvements to the clarity, correctness, and conciseness of the source code.

### References

[1] N. J. Higham and M. Mikaitis. Anymatrix: An Extensible MATLAB Matrix Collection. MIMS EPrint 2021.16, Manchester Institute for Mathematical Sciences, The University of Manchester, UK. Oct. 2021 (to appear in Numer. Algorithms).

[2] N. J. Higham. Anymatrix: An Extensible MATLAB Matrix Collection. Nov. 2021.

[3] N. J. Higham and M. Mikaitis. Anymatrix: An Extensible MATLAB Matrix Collection. Users’ Guide. MIMS EPrint 2021.15, Manchester Institute for Mathematical Sciences, The University of Manchester, UK. Oct. 2021.

## Nick Higham Awarded the 2022 Hans Schneider Prize

Professor Nick Higham has been awarded the 2022 Hans Schneider Prize by the International Linear Algebra Society (ILAS). The Hans Schneider Prize in Linear Algebra is awarded every three years by ILAS for research, contributions, and achievements at the highest level of Linear Algebra.

Nick is cited for his fundamental contributions in the analysis of a wide range of numerical linear algebra problems and matrix functions. He will present his lecture at the 25th ILAS Conference in Madrid, Spain, June 5-9, 2023.

Professor Nicholas J. Higham, The University of Manchester

## Plotting Circles to Test Low Precision Arithmetics and Rounding Modes

As part of the 28th IEEE Symposium on Computer Arithmetic (held in July 2021), a special issue of IEEE Transactions on Emerging Topics in Computing on “Emerging and Impacting Trends on Computer Arithmetic” was organized, and it has recently been published [1]. Massimiliano Fasi and I contributed an article [2] (available Open Access) on simulating stochastic rounding [3] in floating-point arithmetics, which we have written about on this blog before [4].

One interesting numerical example which we used to test various arithmetics and rounding modes [Sec. VIII C2, 4] is an ordinary differential equation (ODE)

$u'(t)=v(t), \\ v'(t)=-u(t).$

With the initial conditions $u(0)=1$ and $v(0)=0$, the solution of this ODE lies on the unit circle in the $uv$-plane [p. 51, 5]. When solving it numerically, our aim is to obtain a solution that lies as close as possible to the unit circle. A similar example is the approximation of asteroid orbits in physics [Sec. II A, 8], although that one is more complicated since it additionally includes velocity and acceleration.

Using a forward Euler scheme we obtain:

$u_{k+1}=u_k+hv_k, \\ v_{k+1}=v_k-hu_k,$

where $h=\frac{2\pi}{n}$ is the step size. Solving this with $n=32$ in bfloat16 arithmetic [6], with round to nearest (RN) and with stochastic rounding (SR) in the arithmetic operations, we obtain the solution in Figure 1 (dotted line: exact unit circle; dashed: RN; solid: SR), which spirals out as previously shown [p. 51, 5]. To improve this we could use a different solver, such as the trapezium rule or leapfrog [p. 51, 5], but since our purpose is to test arithmetics rather than solvers, we stick with Euler and try different step sizes.
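Incidentally, the spiralling out is inherent to forward Euler for this ODE even in exact arithmetic: each step multiplies the distance from the origin by $\sqrt{1+h^2}$, since $(u+hv)^2+(v-hu)^2=(1+h^2)(u^2+v^2)$. The following Python sketch (my own illustration, run in double precision where rounding error is negligible) shows the growth after one revolution with $n=32$:

```python
import math

n = 32               # number of Euler steps per revolution
h = 2 * math.pi / n  # step size
u, v = 1.0, 0.0      # initial conditions on the unit circle
for _ in range(n):
    u, v = u + h * v, v - h * u  # one forward Euler step
radius = math.hypot(u, v)
# In exact arithmetic the radius is (1 + h^2)^(n/2), about 1.83,
# rather than the 1.0 of the true solution.
print(radius)
```

Reducing $h$ shrinks the per-step growth factor, which is why larger $n$ hugs the circle more closely, until rounding effects of the low precision arithmetic take over.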

Next, we tried $n=2^9$; the plot is shown on the left-hand side of Figure 2. Visually, this step size looks ideal for bfloat16 arithmetic with RN, since the solution closely approximates the unit circle and no longer spirals out. SR also performs quite well here, but noticeably worse than RN.

Next, we reduce the step size further, to $n=2^{11}$; the corresponding solution is plotted on the right-hand side of Figure 2. In this case the solution with RN has been visibly affected by rounding errors: the approximation to the unit circle looks like an octagon rather than a circle!

So what happened there? In the paper, we explain this through the problem of stagnation in floating-point arithmetic. Stagnation occurs when many addends to a relatively large value are so small that they are all lost to rounding, and the sum remains stuck at its initial or some intermediate value. In this case, starting from the initial point $u=1$ and $v=0$, we expect both $u$ and $v$ to start decreasing. However, only $v$ does, because in $u_{k+1}=u_k+hv_k$ the term $hv_k$ is initially too small to change $u_k$.
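Stagnation is easy to reproduce in any precision. The following Python sketch (my own illustration in double precision, not the bfloat16 experiment from the paper) adds 1.0 to $10^{16}$ a thousand times; the unit in the last place of the sum is 2, so each addend of 1.0 is exactly half an ulp, every addition rounds back to the same value (ties round to even), and the sum never moves, whereas the exact result would be $10^{16}+1000$:

```python
s = 1.0e16  # ulp(s) = 2 in double precision
for _ in range(1000):
    s += 1.0  # half an ulp: the tie rounds back to 1e16 every time
print(s == 1.0e16)  # True: the sum has stagnated
```

With stochastic rounding, each such addition would instead round up with probability proportional to the discarded fraction, so the sum drifts toward the exact result on average, which is the immunity to stagnation referred to below.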

The same pattern repeats throughout the solution, with $u$ and $v$ taking turns to suffer from stagnation. Since SR is immune to stagnation [7], this issue does not appear with it.

Finally, we reduce the step size by a further factor of 100 (Figure 3). This reveals that with RN the bfloat16 solution hardly moves away from the initial conditions, while with SR it still computes quite an accurate approximation of the unit circle.

The unit circle ODE is an easy-to-run experiment for observing stagnation in floating-point arithmetic, which is usually demonstrated through recursive summation (as we also do in our paper [Sec. VIII A-B, 2]). It is a visually convenient benchmark for testing low precision arithmetics and alternative rounding modes, such as stochastic rounding, and perhaps for teaching students about floating point and stagnation.

Further detail on the unit circle experiments, as well as other experiments with different ODEs and solvers, can be found in Section VIII of our paper [2]. The MATLAB code for the experiments is available at https://github.com/mmikaitis/stochastic-rounding-evaluation.

## References

[1] M. Joldes, F. Lamberti, and A. Nannarelli, Special Section on “Emerging and Impacting Trends on Computer Arithmetic”. IEEE Trans. Emerg. Topics Comput., 9:3. Sep. 2021

[2] M. Fasi and M. Mikaitis, Algorithms for Stochastically Rounded Elementary Arithmetic Operations in IEEE 754 Floating-Point Arithmetic. IEEE Trans. Emerg. Topics Comput., 9:3. Sep. 2021

[3] N. J. Higham, What is Stochastic Rounding?. Jul. 2020

[4] M. Mikaitis, Simulating Stochastically Rounded Floating-Point Arithmetic Efficiently. Nov. 2020

[5] N. J. Higham, “Goals of applied mathematical research” in The Princeton Companion to Applied Mathematics, N. J. Higham, M. R. Dennis, P. Glendinning, P. A. Martin, F. Santosa, and J. Tanner, Eds. Princeton, NJ, USA: Princeton Univ. Press, 2015, pp. 48–55.

[6] N. J. Higham, What is Bfloat16 Arithmetic?. Jun. 2020

[7] M. P. Connolly, N. J. Higham, and T. Mary, Stochastic rounding and its probabilistic backward error analysis. SIAM J. Sci. Comput., 43(1), A566–A585. Feb. 2021

[8] D. A. Faux and J. Godolphin, The floating point: Tales of the unexpected. Am. J. Phys., 89 (8). Aug. 2021

## SIAM AN21 Minisymposium on Computational Frontiers in Numerical Linear Algebra

The SIAM Annual Meeting 2021 was held virtually, July 19 – 23, 2021. Nick Higham and I organised a two-part minisymposium “Computational Frontiers in Numerical Linear Algebra” that addressed recent algorithmic and software advances in numerical linear algebra. Links to slides from some of the talks are given below.

Minisymposium description: Numerical linear algebra (NLA) is fundamental to many applications in scientific computing. Therefore developing fast algorithms for various NLA problems is crucial to enhance our ability to tackle bigger scientific challenges. Furthermore, NLA software is used as a black box in various applications and hence theoretical guarantees on the computed results are important. Algorithmic development in NLA needs to work in tandem with the ongoing advances in computer hardware. This minisymposium will give a broad overview of various theoretical, algorithmic and software ideas that are being pursued to accelerate NLA problems.

• Part 1
• When Floating-Point Error Matters: the Hazards and Challenges of Low-Precision Computation. Erin C. Carson, Charles University, Czech Republic. Abstract. Slides.
• Randomization for Solving Large Systems of Linear Equations. Laura Grigori, Oleg Balabanov, and Matthias Beaupere, Inria Paris, France. Abstract.
• Mixed Precision Algorithms for Pushing the Performance Limits of Modern HPC Architectures. Hartwig Anzt, University of Tennessee, U.S. Fritz Goebel, Thomas Gruetzmacher, and Terry Cojean, Karlsruhe Institute of Technology, Germany. Andres Tomas and Enrique S. Quintana-Orti, Universitat Politècnica de València, Spain. Abstract. Slides.
• HeFFTe: FFT Computations Towards Exascale. Alan F. Ayala, University of Tennessee, U.S. Miroslav Stoyanov, Oak Ridge National Laboratory, U.S. Stanimire Tomov and Sebastien Cayrols, University of Tennessee, Knoxville, U.S. Jack J. Dongarra, University of Tennessee and Oak Ridge National Laboratory, U.S. Abstract. Slides.
• Part 2
• Replacing Pivoting in Distributed Gaussian Elimination with Randomized Transforms. Neil Lindquist and Piotr Luszczek, University of Tennessee, U.S. Jack J. Dongarra, University of Tennessee and Oak Ridge National Laboratory, U.S. Abstract. Slides.
• Data-Aware Mixed Precision Algorithms. Theo Mary, Sorbonne Universités and CNRS, France. Abstract. Slides.
• Random Matrices Generating Large Growth in LU Factorization with Pivoting. Srikara Pranesh and Nicholas J. Higham, The University of Manchester, United Kingdom; Desmond John Higham, University of Edinburgh, United Kingdom. Abstract. Slides.
• Mixed Precision Randomized SVD. Michael P. Connolly, Nicholas J. Higham, and Srikara Pranesh, The University of Manchester, United Kingdom. Abstract.