Xiaobo Liu and Bastien Vieuble took up PDRA positions towards the end of the year to work with Nick Higham.
Mantas Mikaitis left the group on September 1st to take up a lectureship in the School of Computing at the University of Leeds. He joined us in 2019 as an EPSRC Doctoral Prize Fellow and had been a PDRA in the group since 2020.
Professor Stefan Güttel has been awarded the Taussky–Todd Prize by the International Linear Algebra Society (ILAS).
The ILAS Taussky–Todd Prize is given every three years at an ILAS Conference to an outstanding mid-career researcher in the field of linear algebra, for distinguished contributions to the field within about fifteen years of receiving the PhD or equivalent degree.
The 2023 ILAS Taussky–Todd Prize honours Professor Güttel’s “deep and impactful work on rational Krylov methods for nonlinear eigenvalue problems and matrix functions, in all aspects: analysis, software development, and applications”.
The prize is named for Olga Taussky and John Todd, who had a decisive impact on the development of theoretical and numerical linear algebra for over half a century. It honours them for their many and varied mathematical achievements and for their efforts in promoting linear algebra and matrix theory.
Numerical algorithms for evaluating a function f at an n × n matrix A are based on a variety of different approaches, but for a large class of methods the approximation is formed by relying on three basic operations:
linear combination of matrices, C = αA + βB,
matrix multiplication, C = AB, and
solution of a linear system with n right-hand sides, AC = B,
where A, B, and C are n × n matrices, and α and β are scalars.
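In a language with matrix support, the three building blocks are one line each. The NumPy sketch below is purely illustrative (the variable names are ours); note that the third operation is a linear solve, not an explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
alpha, beta = 2.0, -0.5

# 1. Linear combination of matrices: C = alpha*A + beta*B
C1 = alpha * A + beta * B

# 2. Matrix multiplication: C = A*B
C2 = A @ B

# 3. Solution of a linear system with n right-hand sides: A*C = B
C3 = np.linalg.solve(A, B)
```

Here `C3` satisfies `A @ C3 ≈ B`, which is the numerically preferred way of applying the inverse of A.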
Algorithms that combine only these three basic building blocks are particularly attractive, as they correspond to functions that are easy to work with: if an expression for the scalar function f(z) features only linear combinations, multiplications, and inversions, and f is defined on the spectrum of A, then a formula for f(A) can be obtained by replacing all occurrences of z in the formula for f(z) with A.
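As a concrete illustration (our own example, not one from the text), the scalar rational function r(z) = (1 + z/2)/(1 − z/2), the [1/1] Padé approximant to the exponential, lifts to the matrix formula r(A) = (I − A/2)⁻¹(I + A/2), with the scalar division becoming a linear solve:

```python
import numpy as np

def r_scalar(z):
    # Scalar [1/1] Pade approximant to exp(z)
    return (1 + z / 2) / (1 - z / 2)

def r_matrix(A):
    # Same formula with z replaced by the matrix A: the division
    # by (1 - z/2) becomes the solution of a linear system
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - A / 2, I + A / 2)

A = np.array([[0.1, 0.02],
              [0.0, 0.05]])
approx = r_matrix(A)
# Since A is triangular, the diagonal of r(A) is r applied to the
# diagonal of A, and for small ||A|| it is close to exp applied there.
```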
The choice of considering only three operations may seem restrictive, but, in fact, we are looking at a large class of methods—a class that comprises most algorithms based on polynomial or rational approximation and includes, among others, the scaling and squaring algorithm for the matrix exponential and many iterative algorithms for matrix roots.
These algorithms can be interpreted as graphs, and in particular as directed acyclic graphs (DAGs). Each node in the graph represents a matrix: the inputs are stored in the leaves (nodes without incoming edges), the internal nodes (with both incoming and outgoing edges) hold the intermediate results, and the output of the computation can be read off the root nodes (nodes without outgoing edges). In such a graph, the arcs represent data dependencies and tell us in what order the matrices ought to be computed for the algorithm to return the desired result. Two examples of such computational graphs are in the figure below.
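A graph of this kind can be encoded very simply. The sketch below (a minimal encoding of our own devising, not the package's format) evaluates the polynomial p(A) = I + A + A² by computing each node from its parents, i.e., by walking the DAG in topological order:

```python
import numpy as np

def eval_graph(A):
    # Leaves hold the inputs I and A; internal nodes hold intermediate
    # results; the root node P holds the output of the computation.
    n = A.shape[0]
    nodes = {"I": np.eye(n), "A": A}
    # A2 = A * A                  (matrix multiplication)
    nodes["A2"] = nodes["A"] @ nodes["A"]
    # T  = 1*I + 1*A              (linear combination)
    nodes["T"] = nodes["I"] + nodes["A"]
    # P  = 1*T + 1*A2             (linear combination, root node)
    nodes["P"] = nodes["T"] + nodes["A2"]
    return nodes["P"]

A = np.diag([1.0, 2.0])
P = eval_graph(A)  # p(1) = 3 and p(2) = 7 on the diagonal
```

The arcs are implicit in the order of the assignments: each node is computed only after the nodes it depends on.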
Computational graphs are arguably an elegant way of looking at matrix functions and at algorithms to compute them. In recent work, we have shown that they can also be a practical tool to study existing techniques, improve them, and ultimately derive new ones. The accuracy of an algorithm for evaluating a matrix function is determined by the accuracy of its scalar counterpart, so the design of new methods for functions of matrices can be seen as a scalar-valued optimization problem. Computational graphs provide a particularly compelling way of carrying out the optimization, as the scalar derivatives can be evaluated automatically by exploiting the structure of the graph. This technique is akin to backpropagation, the approach customarily used to train neural networks in machine learning.
The workflow to design by optimization a new algorithm to approximate f consists of three steps.
First, we choose the topology of the graph, that is, we fix how many linear combinations, matrix multiplications, and matrix inversions the algorithm is allowed to use, and in which order these operations are to be combined. In practice, we take an existing algorithm for evaluating at a matrix argument, we recast it as a computational graph, and we add nodes representing linear combinations wherever possible. These extra nodes are beneficial, because they increase the number of available degrees of freedom without changing the computational cost of the underlying algorithm.
Next, we optimize the coefficients of the graph. Let us denote by g the function represented by the computational graph. We write g(z; c) to stress that the value of g at z depends on the set of parameters c, which is a vector containing the coefficients of all the linear combinations that appear in the computational graph. These coefficients are the scalars α and β that appear in all linear combinations of the form αA + βB in the graph, and they are the quantities that we want to optimize. The goal of the optimization is to find a vector of coefficients c such that g(z; c) approximates a given function f in a domain Ω. In practice, we set Ω to a disc where f and g are both analytic, so that we only need to minimize the error over (a discretization of) the boundary of Ω. This produces a surrogate nonlinear least-squares problem, which we solve by means of the Gauss–Newton algorithm, a simple choice that already produces good results.
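The optimization step can be sketched in a few lines. The illustrative NumPy code below (all names are ours) fits the coefficients of a degree-3 polynomial g(z; c) to f(z) = exp(z) on a discretization of the boundary of the unit disc, using a hand-rolled Gauss–Newton iteration; because this toy g is linear in c, Gauss–Newton reduces to a single linear least-squares solve:

```python
import numpy as np

f = np.exp
m, deg = 200, 3
# Discretize the boundary of the disc of radius 1 centered at 0
z = np.exp(2j * np.pi * np.arange(m) / m)

def g(z, c):
    # g(z; c) = c[0] + c[1]*z + ... + c[deg]*z**deg
    return sum(ck * z**k for k, ck in enumerate(c))

def residual(c):
    return g(z, c) - f(z)

def jacobian(c):
    # dg/dc_k = z**k; in general these derivatives would be
    # accumulated automatically by traversing the graph
    return np.column_stack([z**k for k in range(deg + 1)])

c = np.zeros(deg + 1, dtype=complex)
for _ in range(5):  # Gauss-Newton: linearize, solve, update
    J, r = jacobian(c), residual(c)
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    c = c + step

max_err = np.max(np.abs(residual(c)))  # maximum error on the boundary
```

For a genuine computational graph, g is nonlinear in c and the Jacobian is obtained by differentiating through the graph, but the outer Gauss–Newton loop has exactly this shape.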
Finally, we use error analysis to bound the error of g as an approximant to f. By relying on higher precision, we are able to guarantee that within Ω the forward and backward errors are below a tolerance parameter ε. By choosing a suitable set of domains, we can obtain a set of functions g₁, …, gₖ that approximate f to a specified accuracy and at increasing computational cost. Given these functions and a matrix A, we can then choose, inexpensively and at runtime, the function gᵢ that represents the best trade-off between accuracy and speed, and we can use it to compute gᵢ(A) ≈ f(A).
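Given such a family, the runtime choice amounts to scanning for the cheapest member whose guaranteed error bound meets the requested tolerance. A minimal sketch (the names, costs, and bounds below are made up for illustration):

```python
# Hypothetical family of approximants, sorted by increasing cost:
# (name, cost in matrix multiplications, guaranteed error bound)
family = [
    ("g1", 2, 1e-4),
    ("g2", 3, 1e-8),
    ("g3", 5, 1e-13),
]

def choose(tol):
    # Cheapest approximant whose error bound is below the tolerance
    for name, cost, err in family:
        if err <= tol:
            return name
    return family[-1][0]  # fall back to the most accurate member

best = choose(1e-6)  # selects "g2": g1 is too inaccurate, g3 too costly
```

In practice the bound for each gᵢ also depends on the matrix at hand (for instance through its norm), but the selection logic is this simple.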
Numerical results show that this process has the potential to improve state-of-the-art algorithms when f is the matrix exponential or the square root of a matrix. But care is needed in assessing the quality of the new algorithms, as their accuracy depends on both the topology of the graph and the quality of the starting guess. This is illustrated in the figure below for two algorithms for the computation of the square root within a disk centered at 0: optimizing the coefficients of a graph representing four steps of the Denman–Beavers iteration brings down the maximum absolute approximation error by over nine orders of magnitude, but for a graph constructed using the Paterson–Stockmeyer evaluation of the degree-13 truncated Taylor approximant the error improves by only three orders of magnitude.
As part of this project, we developed a Julia package, GraphMatFun.jl, which offers tools to generate and manipulate computational graphs, modify their topology, optimize their coefficients, and generate C, Julia, and MATLAB code to evaluate them efficiently at scalar and matrix arguments. The software is released under the terms of the MIT license and is freely available on GitHub.
Congratulations to Professor Nick Higham, who has been elected a Fellow of the Royal Academy of Engineering, in recognition of developing “theory, algorithms and software that have had major impacts on engineering practice in fields ranging from structural engineering to financial modelling”.
The Numerical Linear Algebra Group recently held a competition amongst its members to design a logo for the group. Chosen from among four excellent entries, the winner was designed by Max Fasi, former PhD student and Research Associate in the group and now an external member of it.
The logo contains a matrix with a sparsity pattern that depicts a bee, the symbol of the city of Manchester since 1842. The bee symbolizes the hard work of Mancunians during the Industrial Revolution.
You’ll see the logo on our slides and our social media accounts, and perhaps on physical items in due course.
In this blog post, we asked one of our alumni, Younes Chahlaoui, a few questions about his time with the Numerical Linear Algebra Group.
Please can you introduce yourself and tell us a bit about your experience before working at the University of Manchester?
I obtained my licence in Applied Mathematics from the University Chouaib Doukkali in my hometown El Jadida, Morocco. Then I got a CIUF grant to do a DEA and a Ph.D. in Mathematical Engineering at INMA, ICTEAM, Polytechnical school at UCLouvain, Belgium. I worked with Prof. Paul Van Dooren.
The title of my Ph.D. is “Low-rank approximation and model reduction.” The central part of my research dealt with developing recursive algorithms for computing low-rank approximations of Gramians of LTI systems that we can use to form reduced models or compute some system norms. Some of these algorithms are in the SLICOT package.
I did a Postdoc at FSU, Tallahassee, working with Prof. Kyle Gallivan, before working as visiting researcher in different Moroccan universities for three years.
I moved to Manchester in January 2008 for a four-year postdoc within the CICADA EPSRC project.
Why did you choose to take a postdoctoral position in Manchester?
I received the CICADA job announcement and applied. Prof. Nick Higham contacted me for the interview. Working with the NLA group and all CICADA teams was a challenge. The international reputation of the NLA group helped me a lot then and after.
How did you find Manchester?
It was an invaluable experience for my family and me. Even after leaving Manchester in December 2011, we still have many good friends there. My boys are hoping to do their studies at the University of Manchester.
Can you tell us about your career since leaving Manchester?
Since January 2012, I have been working at the Department of Mathematics at King Khalid University, Abha, KSA. Since 2014, I have supervised 4 M.Sc. students, and I am actively participating in developing new academic programs within the department.
The 2022 ICM is taking place virtually, and Nick will deliver his lecture, titled “Numerical Stability of Algorithms at Extreme Scale and Low Precisions”, in person at an overlay conference at Imperial College, London.
The ICM is the largest conference on mathematics, takes place every four years, and was first held in 1897.
Professor Nicholas J. Higham, University of Manchester
A new group photo was taken on May 16, 2022—our first in over two years. Most group members are in the photo.
A high resolution version of the photo is available here.
Numerical Linear Algebra Group. By row from the back: Mantas Mikaitis, Xinye Chen, Alban Bloor Riley, Xiaobo (Bob) Liu, Michael Connolly, Nick Higham, Sven Hammarling, Stefan Güttel, Ian McInerney, Bastien Vieuble, Ayana Mussabayeva, Thomas Seleiro, Françoise Tisseur, Marcus Webb.
In this blog post, we asked one of our alumni, Vanni Noferini, a few questions about his time with the Numerical Linear Algebra Group.
Please can you introduce yourself and tell us a bit about your experience before joining the University of Manchester?
I was born in Florence, Tuscany. I did my undergraduate studies there, obtaining an M.Sc. in Theoretical Physics. Later, I got my Ph.D. in Mathematics in Pisa. During my Ph.D. I was jointly supervised by Dario Bini and Luca Gemignani. While in Italy I also played volleyball at a somewhat competitive level, and I had a number of part-time/student jobs. For example, I wrote science popularisation articles for a Swiss newspaper.
Why did you choose to do your postdoc in Manchester?
While I was writing my PhD thesis I had applied to an open call for a postdoc position to work on functions of a matrix, with Nick Higham. Nick is a world-famous superstar in the area, so that sounded like a great opportunity. Moreover, I liked Nick and Françoise, and the rest of the group, since the day of my interview; they probably have also not disliked me too much because I was offered the job. Accepting it was a great decision: I learnt a lot and my time there has certainly shaped my career to a significant extent. I started in September 2012 (technically as a predoc: I flew to Pisa to defend my viva in October 2012) and stayed until August 2015.
During my Manchester years, I have worked not only with Nick but also with many other people who then were in the group: for example Yuji Nakatsukasa, Javi Perez, Meisam Sharify, Françoise Tisseur. Plus there were other fantastic mathematicians around to talk with, such as Stefan Güttel or Martin Lotz — not to mention the frequent international visitors. Those were the days and Manchester was just the place to be… Now I manage my own research group at Aalto, and I am doing my best to create a similarly fruitful environment: my inspiration for this goal is definitely the NLA group in Manchester during my postdoc!
How did you find Manchester?
A little humid, but fun. In my first couple of weeks I actually stayed in the Peak District (the husband of a friend of my then girlfriend had rented me a room in Congleton, Cheshire) which was beautiful but would not have been a very convenient long-term address to work in the Alan Turing Building. Thus, I soon moved to Didsbury and I lived there for most of my 3 years in Manchester. I was living in Austin Drive, not far from the Burnage railway station, and apparently very close to where Charles Van Loan had stayed during his own Manchester postdoc (at least, so he once told me). Later, more NLA group members figured out that Didsbury was a rather nice place, so eventually we had grown a small community there. In fact, fellow Didsbury resident Martin Lotz and I used to refer to Manchester as “Greater Didsbury”. On the occasional sunny weekends I liked to go around by car, so I got to know quite well the broader area of Greater Dids… I mean, Manchester, and North-West England.
Can you tell us about your career since leaving University of Manchester?
I have continued a career in academia. I was a Lecturer (Research) at the University of Essex in Colchester for about 4 years. I obtained tenure in Essex, and also indefinite leave to remain in the UK post-Brexit, having got “settled status” as a long-enough UK resident and EU citizen. However, even though I enjoyed my 7 years in England, I was offered an attractive position by Aalto University (Finland) that would give me more research time and better funding opportunities. So I moved here in May 2019.
What is your current role?
Currently I am an Associate Professor in Mathematics at Aalto University, which is located in Espoo (Finland). Here I am the leader of a research group on matrix theory and its applications. At the time of this interview, my group includes one visiting professor (Paul Van Dooren), two postdocs (Giovanni Barbarino and Manme Quintana), two PhD students (Lauri Nyman and Ryan Wood) and one MSc student by research (Antti Haavikko). We are having fun!