Numerical Linear Algebra Group Activities 2022

During 2022 the Numerical Linear Algebra Group introduced a new logo, designed by Massimiliano Fasi, former PhD student and Research Associate in the group.

We published interviews with alumni Younes Chahlaoui and Vanni Noferini.

In addition to our papers in journals, we published articles in SIAM News and a book.

The award of the ACM A. M. Turing Award to Jack Dongarra was a highlight.

PhD Students

Zhengbo Zhou started as a PhD student, working with Nick Higham and Françoise Tisseur.

Xiaobo Liu (Computing Matrix Functions in Arbitrary Precision Arithmetic) and Michael Connolly (Probabilistic Rounding Error Analysis for Numerical Linear Algebra) successfully defended their PhD dissertations. Michael is now working as a data scientist at Bet365.

Postdoctoral Research Associates (PDRAs)

Xiaobo Liu and Bastien Vieuble took up PDRA positions towards the end of the year to work with Nick Higham.

Mantas Mikaitis left the group on September 1 to take up a lectureship in the School of Computing at the University of Leeds. He joined us in 2019 on an EPSRC Doctoral Prize Fellowship and had been a PDRA in the group since 2020.

Presentations at Conferences and Workshops

International Congress of Mathematicians (ICM), July 6–14, 2022: Higham, invited sectional talk Numerical Stability of Algorithms at Extreme Scale and Low Precisions (slides, video).

A Journey in Numerical Linear Algebra: A Workshop in Honor of Michele Benzi’s 60th Birthday, University of Pisa, June 10–11, 2022: Higham.

Linear Algebra, Matrix Analysis and Applications (ALAMA 2022–ALN2gg), Madrid, 1–3 June 2022: Tisseur.

Householder Symposium XXI on Numerical Linear Algebra, Selva di Fasano, Italy, 12–17 June 2022: Fasi, Tisseur.

Computational Mathematics for Quantum Technologies, Bath, August 1–5, 2022: Güttel.

7th IMA Conference on Numerical Linear Algebra and Optimization, University of Birmingham, June 29–July 1, 2022: Liu, McInerney.

2022 SIAM Annual Meeting, July 11–15, 2022: Liu.

Conference and Workshop Organization

Sixteen members of the group attended a two-day creativity workshop held in April 2022, facilitated by Dennis Sherwood, an expert in creativity. See this report on the workshop.

Mantas Mikaitis and Nick Higham organized a double minisymposium Understanding and Exploiting Mixed-Precision Accelerators for High-Performance Computing at the SIAM Conference on Parallel Processing for Scientific Computing (PP22) in February 2022. Slides of some of the talks are available.

The conference Advances in Numerical Linear Algebra: Celebrating the 60th Birthday of Nick Higham was held at the University of Manchester, July 6–8, 2022, organized by Stefan Güttel, Sven Hammarling, Stephanie Lai, Françoise Tisseur and Marcus Webb. Most of the talks are available on the NLA Group YouTube channel and links to them are available on the conference web page. Photos are available here.

Recognition and Service

Jack Dongarra received the 2021 ACM A.M. Turing Award.

Nick Higham was elected a Fellow of the Royal Academy of Engineering.

Stefan Güttel was awarded the 2023 ILAS Taussky–Todd Prize.

Stefan Güttel was reappointed for another term as a member of the SIAM Membership Committee, 1/1/2023–12/31/2025.

Stefan Güttel Awarded 2023 ILAS Taussky–Todd Prize

Professor Stefan Güttel has been awarded the Taussky–Todd Prize by the International Linear Algebra Society (ILAS).

The ILAS Taussky–Todd Prize is given every three years at an ILAS Conference to an outstanding mid-career researcher in the field of linear algebra, for distinguished contributions to the field within about fifteen years of receiving the PhD or equivalent degree.

The 2023 ILAS Taussky–Todd Prize honours Professor Güttel’s “deep and impactful work on rational Krylov methods for nonlinear eigenvalue problems and matrix functions, in all aspects: analysis, software development, and applications”.

The prize is named for Olga Taussky and John Todd, who had a decisive impact on the development of theoretical and numerical linear algebra for over half a century. It honours them for their many and varied mathematical achievements and for their efforts in promoting linear algebra and matrix theory.

Professor Güttel will deliver a prize lecture at the 25th ILAS Conference in Madrid, 12 to 16 June 2023.

Computational Graphs for Matrix Functions

Numerical algorithms for evaluating a function f at an n \times n matrix A are based on a variety of different approaches, but for a large class of methods the approximation is formed by relying on three basic operations:

  • linear combination of matrices Z \gets c_X X + c_Y Y,
  • matrix multiplication Z \gets X \cdot Y, and
  • solution of a linear system with n right-hand sides Z \gets X^{-1} \cdot Y,

where X and Y are n \times n matrices, and c_X and c_Y are scalars.

Algorithms that combine only these three basic building blocks are particularly attractive, as they correspond to functions that are easy to work with: if an expression for the scalar (n = 1) function g features only linear combinations, multiplications, and inversions, and g is defined on the spectrum of A, then a formula for g(A) can be obtained by replacing all occurrences of z in the formula for g(z) with A.
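As a small illustration (a sketch of the idea, not code from our papers or software), consider g(z) = (1 + z/2)/(1 - z/2), the [1/1] Padé approximant to the exponential: it is built from linear combinations and one inversion, so its matrix counterpart is obtained simply by substituting A for z.

```julia
# Minimal sketch: a scalar formula built from the three basic operations and
# its matrix counterpart obtained by replacing z with A. The choice of g, the
# [1/1] Pade approximant to exp, is purely illustrative.
using LinearAlgebra

g(z) = (1 + z/2) / (1 - z/2)            # scalar: linear combinations and one inversion

function g(A::AbstractMatrix)
    Id = Matrix{eltype(A)}(I, size(A)...)
    return (Id - A/2) \ (Id + A/2)      # same formula with z replaced by A
end

A = [0.0 0.1; -0.1 0.0]
norm(g(A) - exp(A))                     # small, since the spectrum of A is close to 0
```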

The choice of considering only three operations may seem restrictive, but, in fact, we are looking at a large class of methods—a class that comprises most algorithms based on polynomial or rational approximation and includes, among others, the scaling and squaring algorithm for the matrix exponential and many iterative algorithms for matrix roots.
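For instance, the Denman–Beavers iteration for the matrix square root uses only matrix inversions and linear combinations. A minimal Julia sketch (assuming A has no eigenvalues on the closed negative real axis) is:

```julia
# Sketch of the Denman-Beavers iteration, built entirely from the basic
# operations above: X_k converges to sqrt(A) and Y_k to inv(sqrt(A)).
using LinearAlgebra

function denman_beavers_sqrt(A; steps = 10)
    X, Y = A, Matrix{eltype(A)}(I, size(A)...)
    for _ in 1:steps
        X, Y = (X + inv(Y)) / 2, (Y + inv(X)) / 2   # simultaneous update
    end
    return X
end

A = [4.0 1.0; 0.0 9.0]
X = denman_beavers_sqrt(A)
norm(X^2 - A)          # close to zero
```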

These algorithms can be interpreted as graphs, and in particular as directed acyclic graphs (DAGs). Each node in the graph represents a matrix: the inputs are stored in the leaves (nodes without incoming edges), the internal nodes (with both incoming and outgoing edges) hold the intermediate results, and the output of the computation can be read off the root nodes (nodes without outgoing edges). In such a graph, the arcs represent data dependencies and tell us in what order the matrices ought to be computed for the algorithm to return the desired result. Two examples of such computational graphs are in the figure below.

Examples of computational graphs: ten steps of the Denman–Beavers iteration for the square root (left) and the scaling and squaring algorithm for the matrix exponential using a degree-13 rational approximant and five squaring steps (right). Blue arcs represent linear combinations, red arcs represent matrix multiplications, and black dash-dotted arcs represent system solves. The input nodes have a white background, and all other nodes have the same color as the arc of the operation that produces them. The output node (result of the computation) has a double border. Nodes with only one visible parent are the result of either a squaring (A^2, A^4, A^6, Z^{2}, Z^{4}, Z^{8}, Z^{16}, and Z^{32}) or a trivial operation involving the identity matrix: a matrix inversion, represented as C = A^{-1}I, or a multiplication by a scalar, represented as C = \alpha A + 0 I.
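The DAG view is easy to prototype. The sketch below uses an illustrative data structure (not the one in GraphMatFun.jl) to encode the [1/1] Padé approximant from the earlier example as a list of nodes and to evaluate it at a matrix argument by walking the nodes in topological order; the evaluator handles all three node types, even though this particular graph uses only linear combinations and a system solve.

```julia
# Hypothetical DAG encoding of a computational graph. Leaves :I and :A hold the
# inputs; every other node records its operation and its two parents, and
# :lincomb nodes also store their scalar coefficients.
using LinearAlgebra

nodes = [
    (name = :N,   op = :lincomb, parents = (:I, :A), coeffs = (1.0,  0.5)),  # N = I + A/2
    (name = :D,   op = :lincomb, parents = (:I, :A), coeffs = (1.0, -0.5)),  # D = I - A/2
    (name = :out, op = :ldiv,    parents = (:D, :N)),                        # out = D \ N
]

# Evaluate the graph at a matrix argument by visiting the nodes in an order
# consistent with the data dependencies (the list above is already topological).
function eval_graph(nodes, A)
    vals = Dict{Symbol,Matrix{Float64}}(:A => A,
                                        :I => Matrix{Float64}(I, size(A)...))
    for nd in nodes
        X, Y = vals[nd.parents[1]], vals[nd.parents[2]]
        vals[nd.name] = nd.op == :mult    ? X * Y :                          # matrix product
                        nd.op == :lincomb ? nd.coeffs[1]*X + nd.coeffs[2]*Y :
                                            X \ Y                            # system solve
    end
    return vals[nodes[end].name]
end

A = [0.0 0.1; -0.1 0.0]
eval_graph(nodes, A)     # the same [1/1] approximant to exp(A) as in the earlier sketch
```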

Computational graphs are arguably an elegant way of looking at matrix functions and at algorithms to compute them. In recent work, we have shown that they can also be a practical tool to study existing techniques, improve them, and ultimately derive new ones. The accuracy of an algorithm for evaluating a matrix function is determined by the accuracy of its scalar counterpart, and thus the design of new methods for functions of matrices can be seen as a scalar-valued optimization problem. Computational graphs provide a particularly compelling way of carrying out the optimization, as the scalar derivatives can be evaluated automatically by exploiting the structure of the graph. This technique is akin to backpropagation, the approach customarily used to train neural networks in machine learning.

The workflow for designing a new algorithm to approximate f by optimization consists of three steps.

First, we choose the topology of the graph, that is, we fix how many linear combinations, matrix multiplications, and matrix inversions the algorithm is allowed to use, and in which order these operations are to be combined. In practice, we take an existing algorithm for evaluating f at a matrix argument, we recast it as a computational graph, and we add nodes representing linear combinations wherever possible. These extra nodes are beneficial, because they increase the number of available degrees of freedom without changing the computational cost of the underlying algorithm.

Next, we optimize the coefficients of the graph. Let us denote by g the function represented by the computational graph. We write g(z; c) to stress that the value of g at z depends on the set of parameters c, which is a vector containing the coefficients of all the linear combinations that appear in the computational graph. These coefficients are the scalars c_X and c_Y that appear in all linear combinations of the form c_X X + c_Y Y in the graph, and they are the quantities that we want to optimize. The goal of the optimization is to find a vector of coefficients c^* such that g(\cdot; c^*) approximates a given function f in a domain \Omega \subset \mathbb C. In practice, we set \Omega to a disc where f and g are both analytic, so that we only need to minimize the error |g(z; c) - f(z)| over (a discretization of) the boundary of \Omega. This produces a surrogate nonlinear least-squares problem, which we solve by means of the Gauss–Newton algorithm, a simple choice that already produces good results.
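The following sketch illustrates this optimization step under simplifying assumptions of our own, not those of the actual method: the graph is the toy parameterization g(z; c) = (c_1 + c_2 z + c_3 z^2)^2 + c_4 + c_5 z (one squaring plus linear combinations), f is the exponential, \Omega is the disc of radius 1/2 centred at 0, and the Jacobian is approximated by finite differences rather than computed by backpropagation through the graph.

```julia
# Illustrative Gauss-Newton fit of the graph coefficients on a discretization
# of the boundary of Omega. Everything here is a toy stand-in for the real
# graph-based machinery.
using LinearAlgebra

f(z) = exp(z)                             # target function
g(z, c) = (c[1] + c[2]*z + c[3]*z^2)^2 + c[4] + c[5]*z

m  = 200
zs = 0.5 .* cis.(2π .* (0:m-1) ./ m)      # points on the boundary of Omega

# Stack real and imaginary parts so the least-squares problem is real valued.
residual(c) = vcat(real.(g.(zs, Ref(c)) .- f.(zs)),
                   imag.(g.(zs, Ref(c)) .- f.(zs)))

function jacobian(c; h = 1e-7)            # forward-difference Jacobian
    J, r0 = zeros(2m, length(c)), residual(c)
    for k in eachindex(c)
        e = zeros(length(c)); e[k] = h
        J[:, k] = (residual(c .+ e) .- r0) ./ h
    end
    return J
end

function gauss_newton(c0; iters = 20)
    c = copy(c0)
    for _ in 1:iters
        c -= jacobian(c) \ residual(c)    # Gauss-Newton step via least squares
    end
    return c
end

c = gauss_newton([1.0, 0.5, 0.1, 0.0, 0.0])    # crude starting guess
maximum(abs.(g.(zs, Ref(c)) .- f.(zs)))        # max error on the boundary of Omega
```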

Finally, we use error analysis to bound the error of g as an approximant to f. By relying on higher precision, we are able to guarantee that within \Omega the forward and backward errors are below a tolerance parameter \varepsilon. By choosing a suitable set of domains \Omega_1 \subset \ldots \subset \Omega_\ell, we can obtain a set of functions g_1, \ldots, g_\ell that approximate f to a specified accuracy and at increasing computational cost. Given these functions and a matrix A, we can then choose, inexpensively and at runtime, the function g_i that represents the best trade-off between accuracy and speed, and we can use it to compute f(A) \approx g_i(A).
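A hypothetical sketch of the runtime selection step is given below. It assumes that each g_i has been certified to the tolerance \varepsilon on a disc of radius r_i and that the inexpensive test \|A\|_1 \le r_i decides whether g_i may be applied to A; this is one possible criterion, not necessarily the one used in practice.

```julia
# Toy runtime selection among certified approximants of increasing cost.
# The radii, costs, and truncated-Taylor family below are made up for the
# example; only the selection logic is the point.
using LinearAlgebra

struct Approximant
    radius::Float64            # radius r_i of the certified disc Omega_i
    cost::Int                  # e.g. number of matrix multiplications
    evaluate::Function         # X -> g_i(X) for a matrix argument X
end

# Cheapest certified approximant whose disc is large enough for A.
function select(approximants, A)
    r = opnorm(A, 1)
    valid = filter(g -> g.radius >= r, approximants)
    isempty(valid) && error("no certified approximant covers this matrix")
    return argmin(g -> g.cost, valid)
end

taylor(d) = X -> sum(X^k / factorial(k) for k in 0:d)   # stand-in g_i for f = exp
family = [Approximant(0.1, 1, taylor(4)),
          Approximant(0.5, 2, taylor(8)),
          Approximant(2.0, 4, taylor(16))]

A  = [0.0 0.3; -0.3 0.0]
gi = select(family, A)
gi.evaluate(A)             # f(A) ≈ g_i(A) with the cheapest certified g_i
```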

Numerical results show that this process has the potential to improve state-of-the-art algorithms when f is the matrix exponential or the square root of a matrix. But care is needed in assessing the quality of the new algorithms, as their accuracy depends on both the topology of the graph and the quality of the starting guess. This is illustrated in the figure below for two algorithms for the computation of \sqrt{z+1} within the disk of radius 1/2 centered at 0: optimizing the coefficients of a graph representing four steps of the Denman–Beavers iteration brings down the maximum absolute approximation error by over nine orders of magnitude, but for a graph constructed using the Paterson–Stockmeyer evaluation of the degree-13 truncated Taylor approximant the error only improves by three orders of magnitude.

Absolute forward error of four schemes for approximating the function f(x) = \sqrt{x+1}. The plots on the left correspond to four steps of the Denman–Beavers iteration (top) and to the Paterson–Stockmeyer evaluation of the degree-13 truncated Taylor approximant (bottom). The plots on the right correspond to graphs with optimized coefficients (the computational graphs on the left in the same row were used as starting value for the optimization). The blue circles have radius 1/2 and z-value equal to the maximum relative error within the disk: this value drops from 1.5 \cdot 10^{-6} to 1.4 \cdot 10^{-15} for the graphs in the first row, but only from 1.9 \cdot 10^{-6} to 1.5 \cdot 10^{-9} for those in the second.

As part of this project, we developed a Julia package, GraphMatFun.jl, which offers tools to generate and manipulate computational graphs, modify their topology, optimize their coefficients, and generate C, Julia, and MATLAB code to evaluate them efficiently at scalar and matrix arguments. The software is released under the terms of the MIT license and is freely available on GitHub.

Nick Higham Elected Fellow of Royal Academy of Engineering

Congratulations to Professor Nick Higham, who has been elected a Fellow of the Royal Academy of Engineering in recognition of developing “theory, algorithms and software that have had major impacts on engineering practice in fields ranging from structural engineering to financial modelling”.

The full citation is at https://raeng.org.uk/about-us/fellowship/new-fellows-2022/professor-nicholas-higham-freng-frs

Nick Higham

A Logo for the Numerical Linear Algebra Group

The new Numerical Linear Algebra Group logo.

The Numerical Linear Algebra Group recently held a competition amongst its members to design a logo for the group. Chosen from among four excellent entries, the winner was designed by Max Fasi, former PhD student and Research Associate in the group and now an external member of it.

The logo contains a matrix with a sparsity pattern that depicts a bee, the symbol of the city of Manchester since 1842. The bee symbolized the hard work of Mancunians during the industrial revolution.

You’ll see the logo on our slides and our social media accounts, and perhaps on physical items in due course.

Our Alumni – Younes Chahlaoui

In this blog post, we asked one of our alumni, Younes Chahlaoui, a few questions about his time with the Numerical Linear Algebra Group.
Younes Chahlaoui

Please can you introduce yourself and tell us a bit about your experience before working at the University of Manchester?

I obtained my licence in Applied Mathematics from the University Chouaib Doukkali in my hometown, El Jadida, Morocco. Then I got a CIUF grant to do a DEA and a Ph.D. in Mathematical Engineering at INMA, ICTEAM, the Polytechnic School at UCLouvain, Belgium. I worked with Prof. Paul Van Dooren.

The title of my Ph.D. thesis is “Low-rank approximation and model reduction.” The central part of my research dealt with developing recursive algorithms for computing low-rank approximations of Gramians of LTI systems that we can use to form reduced models or to compute some system norms. Some of these algorithms are in the SLICOT package.

I did a postdoc at FSU, Tallahassee, working with Prof. Kyle Gallivan, before working as a visiting researcher at different Moroccan universities for three years.

I moved to Manchester in January 2008 for a four-year postdoc within the CICADA EPSRC project.

Why did you choose to take a postdoctoral position in Manchester?

I received the CICADA job announcement and applied. Prof. Nick Higham contacted me for the interview. Working with the NLA group and all CICADA teams was a challenge. The international reputation of the NLA group helped me a lot then and after.

How did you find Manchester?

It was an invaluable experience for my family and me. Even after leaving Manchester in December 2011, we still have many good friends there. My boys are hoping to do their studies at the University of Manchester.

Can you tell us about your career since leaving Manchester?

Since January 2012, I have been working at the Department of Mathematics at King Khalid University, Abha, KSA. Since 2014, I have supervised 4 M.Sc. students, and I am actively participating in developing new academic programs within the department.

What is your current role?

Since the beginning of 2021, I have overseen the Quality & Academic Development (Q&D) unit at the College of Science. We have applied to the National Center for Academic Accreditation and Evaluation for accreditation of 4 B.Sc. and 4 M.Sc. programs, and we are awaiting a decision soon (June 2022).

I am accredited by the American Institute of Professional Studies (AIPS) as a consultant in Quality Management Systems (ISO 9001:2015) and the European Foundation for Quality Management (EFQM).

Now I am a member of many committees at different levels at the university related to Learning Outcomes, KPIs, and accreditations.

Nick Higham to give International Congress of Mathematicians Lecture

Professor Nick Higham will give an invited sectional talk at the International Congress of Mathematicians (ICM), to be held July 6–14, 2022.

The 2022 ICM is taking place virtually, and Nick will deliver his lecture, titled “Numerical Stability of Algorithms at Extreme Scale and Low Precisions”, in person at an overlay conference at Imperial College London.

The ICM is the largest conference on mathematics, takes place every four years, and was first held in 1897.

Professor Nicholas J. Higham, University of Manchester

2022 NLA group photo

A new group photo was taken on May 16, 2022—our first in over two years. Most group members are in the photo.

A high resolution version of the photo is available here.

Numerical Linear Algebra Group. By row from the back: Mantas Mikaitis, Xinye Chen, Alban Bloor Riley, Xiaobo (Bob) Liu, Michael Connolly, Nick Higham, Sven Hammarling, Stefan Güttel, Ian McInerney, Bastien Vieuble, Ayana Mussabayeva, Thomas Seleiro, Françoise Tisseur, Marcus Webb.

Our Alumni – Vanni Noferini

In this blog post, we asked one of our alumni, Vanni Noferini, a few questions about his time with the Numerical Linear Algebra Group.

Please can you introduce yourself and tell us a bit about your experience before attending University of Manchester?

I was born in Florence, Tuscany. I did my undergraduate studies there, obtaining an M.Sc. in Theoretical Physics. Later, I got my Ph.D. in Mathematics in Pisa. During my Ph.D. I was jointly supervised by Dario Bini and Luca Gemignani. While in Italy I also played volleyball at a somewhat competitive level, and I had a number of part-time and student jobs; for example, I wrote science popularisation articles for a Swiss newspaper.

Why did you choose to do your postdoc in Manchester?

While I was writing my PhD thesis, I applied to an open call for a postdoc position to work on functions of a matrix with Nick Higham. Nick is a world-famous superstar in the area, so that sounded like a great opportunity. Moreover, I liked Nick and Françoise, and the rest of the group, from the day of my interview; they probably did not dislike me too much either, because I was offered the job. Accepting it was a great decision: I learnt a lot, and my time there has certainly shaped my career to a significant extent. I started in September 2012 (technically as a predoc: I flew to Pisa for my viva in October 2012) and stayed until August 2015.

During my Manchester years, I worked not only with Nick but also with many other people who were then in the group: for example Yuji Nakatsukasa, Javi Perez, Meisam Sharify and Françoise Tisseur. Plus there were other fantastic mathematicians around to talk with, such as Stefan Güttel or Martin Lotz — not to mention the frequent international visitors. Those were the days and Manchester was just the place to be… Now I manage my own research group at Aalto, and I am doing my best to create a similarly fruitful environment: my inspiration for this goal is definitely the NLA group in Manchester during my postdoc!

How did you find Manchester?

A little humid, but fun. In my first couple of weeks I actually stayed in the Peak District (the husband of a friend of my then girlfriend had rented me a room in Congleton, Cheshire), which was beautiful but would not have been a very convenient long-term address for working in the Alan Turing Building. Thus, I soon moved to Didsbury, and I lived there for most of my three years in Manchester. I was living in Austin Drive, not far from Burnage railway station, and apparently very close to where Charles Van Loan had stayed during his own Manchester postdoc (at least, so he once told me). Later, more NLA group members figured out that Didsbury was a rather nice place, so eventually we grew a small community there. In fact, fellow Didsbury resident Martin Lotz and I used to refer to Manchester as “Greater Didsbury”. On the occasional sunny weekends I liked to go around by car, so I got to know quite well the broader area of Greater Dids… I mean, Manchester, and North-West England.

Can you tell us about your career since leaving University of Manchester?

I have continued a career in academia. I was a Lecturer (Research) at the University of Essex in Colchester for about four years. I obtained tenure in Essex, and also indefinite leave to remain in the UK post-Brexit, having got “settled status” as a long-enough UK resident and EU citizen. However, even though I enjoyed my seven years in England, I was offered an attractive position by Aalto University (Finland) that would give me more research time and better funding opportunities. So I moved here in May 2019.

What is your current role?

Currently I am an Associate Professor in Mathematics at Aalto University, which is located in Espoo (Finland). Here I am the leader of a research group on matrix theory and its applications. At the time of this interview, my group includes one visiting professor (Paul Van Dooren), two postdocs (Giovanni Barbarino and Manme Quintana), two PhD students (Lauri Nyman and Ryan Wood) and one MSc student by research (Antti Haavikko). We are having fun!
