Category Archives: news

Françoise Tisseur elected as SIAM UKIE Section President

Professor Françoise Tisseur has been elected as the new President of the SIAM United Kingdom and Republic of Ireland Section (SIAM UKIE), for a two-year term starting in May 2019. She previously served as the section's Vice President from 2013 to 2015.

In her candidate statement for the election, Françoise said “I would like to grow SIAM membership in our region. I will support the 14 SIAM student chapters, encourage the establishment of new ones, continue the recent SIAM-IMA cooperation and promote joint meetings between the two societies. I will work to ensure that the annual meeting is successful and has a suitably diverse program that includes industrial involvement.”

The SIAM UKIE Section was formed in 1996 by the Society for Industrial and Applied Mathematics (SIAM) with the aim of promoting and supporting applied and industrial mathematics in the UK and the Republic of Ireland.


Françoise Tisseur

Jack Dongarra elected as Foreign Member of the Royal Society

Professor Jack Dongarra

Jack Dongarra, Professor and Turing Fellow in the School of Mathematics and member of the Numerical Linear Algebra Group, has been elected as a Foreign Member of the Royal Society.  This honour recognizes his seminal contributions to algorithms for numerical linear algebra and the design and development of high performance mathematical software for machines ranging from workstations to the largest parallel computers.

Dongarra’s software and libraries, which include LINPACK, EISPACK, LAPACK, the BLAS, MPI, ATLAS, PLASMA, MAGMA, and PAPI, are universally regarded as standards in both academia and industry. They excel in the accuracy of the underlying numerical algorithms and the reliability and performance of the software. They benefit a very wide range of users through their incorporation into software including MATLAB, Maple, Mathematica, Octave, R, SciPy, and vendor libraries.
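As a small, hedged illustration of that reach (not part of the original announcement): SciPy exposes LAPACK directly through its low-level wrappers, so the LAPACK routine DGESV can be called from Python to solve a linear system.

```python
# Illustrative sketch: calling the LAPACK routine DGESV (LU-based linear
# solver) through SciPy's low-level wrappers.
import numpy as np
from scipy.linalg import lapack

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
b = rng.standard_normal((5, 1))

lu, piv, x, info = lapack.dgesv(A, b)   # DGESV: solve A x = b via LU with pivoting
print("info =", info)                   # 0 indicates success
print("solution:", x.ravel())
```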

The Royal Society is the oldest scientific academy in continuous existence, dating back to 1660. Each year the Royal Society elects up to 52 new Fellows and up to 10 new Foreign Members. Fellows and Foreign Members are elected for life on the basis of excellence in science. Each candidate is considered on their merits and can be proposed from any sector of the scientific community.

The full list of the newly elected Fellows and Foreign Members of the Royal Society is available here.

Version 4.0 of NLEVP Collection of Nonlinear Eigenvalue Problems

A new release, version 4.0, is available of the NLEVP MATLAB toolbox, which provides a collection of nonlinear eigenvalue problems. The toolbox has become a standard tool for testing algorithms for solving nonlinear eigenvalue problems.

When it was originally released in 2008, the toolbox contained 26 problems.  The new release contains 74 problems. It is now distributed via GitHub and is available at https://github.com/ftisseur/nlevp.

Further details are given in An Updated Set of Nonlinear Eigenvalue Problems. The collection will grow and contributions are welcome.

The following table shows the 22 new problems in version 4.0 of the toolbox.

[Table: new problems added in version 4.0 of NLEVP]
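To give a flavour of the kind of problems in the collection, the sketch below (illustrative only, not taken from the toolbox, and using hypothetical random coefficient matrices) solves a small quadratic eigenvalue problem, the simplest nonlinear eigenvalue problem, by linearizing it to a generalized eigenvalue problem of twice the size.

```python
# Illustrative sketch: solve the quadratic eigenvalue problem
# (lambda^2*M + lambda*C + K) x = 0 by companion linearization.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 4
M = np.eye(n)
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# Companion linearization A z = lambda B z with z = [x; lambda*x].
Z = np.zeros((n, n))
I = np.eye(n)
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])

eigvals, eigvecs = eig(A, B)

# Check one eigenpair against the original quadratic problem; the residual
# should be small (near machine precision for this well-scaled example).
lam = eigvals[0]
x = eigvecs[:n, 0]
print("residual norm:", np.linalg.norm((lam**2 * M + lam * C + K) @ x))
```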

SIAM CSE19 Minisymposium on “Advances in Analyzing Floating-point Errors in Computational Science”

by Pierre Blanchard, Nick Higham, and Theo Mary

Last February the SIAM Computational Science and Engineering (CSE19) conference took place in Spokane, WA, USA. We organized a two-part minisymposium on recent Advances in Analyzing Floating-point Errors in Computational Science (see links to part 1 and part 2). We have made the slides of the eight talks available.
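As a taste of the kind of effect these analyses study, here is a minimal, illustrative Python sketch (not one of the minisymposium talks) showing how rounding errors accumulate when many numbers are summed naively in single precision.

```python
# Illustrative sketch: rounding error in naive recursive summation.
# Summing 10^6 copies of 0.1 should give 100000; single precision drifts
# visibly, while double precision and compensated summation stay accurate.
import math
import numpy as np

n = 10**6
x32 = np.full(n, 0.1, dtype=np.float32)
x64 = np.full(n, 0.1, dtype=np.float64)

s32 = np.float32(0.0)
for v in x32:            # naive left-to-right summation in single precision
    s32 += v

print("single precision, naive:", s32)        # noticeably far from 100000
print("double precision:", x64.sum())         # very close to 100000
print("compensated (fsum):", math.fsum(x64))  # accurate sum of the float64 data
```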

For a Few Equations More

On the occasion of the Gilles Kahn prize award ceremony, I was asked to write an article about my PhD thesis for the popular science blog Binaire from the French newspaper Le Monde (the French version of the article is available here). You can read the English translation below.


Can you tell what the following problems have in common: predicting tomorrow’s weather, building crash-resistant cars, scanning the ocean floor in search of oil? These are all difficult problems that are too costly to tackle by physical experiment. Importantly, they can also be described by a fundamental tool of mathematics: linear equations. Therefore, these physical problems can be simulated numerically by solving systems of linear equations instead.

A system of linear equations (in matrix form).

You probably remember from math class in high school how tedious solving these systems could get, even when they had a small number of equations. In practice, it is actually quite common to face systems with thousands or even millions of equations. While computers can fortunately solve these systems for us, the computational cost of the solution can become very high for such large numbers of equations.
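As a tiny concrete illustration (added here, not part of the original article), the two-equation system below is written in matrix form and solved numerically; real applications replace this 2-by-2 matrix with one having millions of rows and columns.

```python
# Illustrative sketch: the system
#   2x + 3y = 5
#    x -  y = 0
# written in matrix form A [x, y]^T = b and solved numerically.
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
b = np.array([5.0, 0.0])

solution = np.linalg.solve(A, b)   # Gaussian elimination (LU) under the hood
print(solution)                    # [1. 1.], i.e. x = 1, y = 1
```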

To meet this need, vast resources have been devoted to building supercomputers of great computational power, equipped with large numbers of processing units called “cores”. For example, while your personal computer likely has fewer than a dozen cores, the most powerful supercomputer in the world has several million of them. Nevertheless, the size of the problems that we must tackle today is so great that even these supercomputers are not sufficient.

The Intrepid supercomputer, equipped with 164,000 cores. © Argonne National Laboratory

To take up this challenge, I worked during my PhD thesis on new algorithms for solving systems of linear equations at greatly reduced computational cost. More precisely, a crucial property of these algorithms is how their cost grows with the number of equations: this is referred to as their complexity. Methods of very low complexity (so-called “hierarchical” methods) have been proposed since the 2000s. However, hierarchical methods are quite intricate and sophisticated, which prevents them from attaining high performance on supercomputers: their large reduction in theoretical complexity translates into only modest gains in actual computing time.
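To make the notion of complexity concrete, the following illustrative sketch (not from the thesis) times a standard dense solver as the number of equations grows; its cost grows roughly like the cube of the number of equations, which is precisely what low-complexity methods aim to beat.

```python
# Illustrative sketch: the classical dense (LU-based) solve costs O(n^3)
# operations, so doubling n multiplies the run time by roughly 8.
import time
import numpy as np

rng = np.random.default_rng(0)
for n in (500, 1000, 2000, 4000):
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    np.linalg.solve(A, b)
    print(f"n = {n:5d}: {time.perf_counter() - t0:.3f} s")
```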

For this reason, my PhD thesis focused on another method (so-called “block low-rank”), which is better suited than hierarchical methods to high performance computing. My first achievement was to determine the complexity of this method, which was previously unknown. I proved that, even if its complexity is slightly higher than that of hierarchical methods, it is still low enough to tackle systems of very large dimensions. In the second part of my thesis, I worked on efficiently implementing this method on supercomputers, so as to translate this theoretical reduction into actual time gains.
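The idea underlying block low-rank (and hierarchical) methods is that many off-diagonal blocks of the matrices arising in these applications can be compressed by low-rank approximation. The sketch below, an illustration on hypothetical data rather than the implementation developed in the thesis, compresses such a block with a truncated SVD and reports the storage saving.

```python
# Illustrative sketch: compressing a numerically low-rank block B with a
# truncated SVD, the basic building block of block low-rank (BLR) methods.
import numpy as np

n = 256
# A smooth kernel evaluated away from the diagonal gives a numerically
# low-rank block (hypothetical data, chosen only for illustration).
s = np.linspace(0.0, 1.0, n)
t = np.linspace(2.0, 3.0, n)
B = 1.0 / (s[:, None] - t[None, :])

U, sigma, Vt = np.linalg.svd(B, full_matrices=False)
tol = 1e-8 * sigma[0]
k = int(np.sum(sigma > tol))            # numerical rank at this tolerance
B_lr = (U[:, :k] * sigma[:k]) @ Vt[:k]  # rank-k approximation of B

print("rank kept:", k, "of", n)
print("relative error:", np.linalg.norm(B - B_lr) / np.linalg.norm(B))
print("storage ratio:", (2 * n * k) / (n * n))
```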

By significantly reducing the cost of solving systems of linear equations, this work allowed us to solve several physical problems that were previously too large to be tackled. For instance, it took less than an hour to solve a system of 130 million equations arising in a geophysical application, using a supercomputer equipped with 2400 cores.

Celebrating the Centenary of James H. Wilkinson’s Birth

by Sven Hammarling and Nick Higham


September 27, 2019 is the 100th anniversary of the birth of James Hardy Wilkinson, the renowned numerical analyst who died in 1986. We are marking this special anniversary year in several ways.

The tag wilkinson lists all the posts in this series.

Jack Dongarra Awarded SIAM/ACM Prize in Computational Science and Engineering

Congratulations to Jack Dongarra who recently received the SIAM/ACM Prize in Computational Science and Engineering.

Jack Dongarra will receive the SIAM/ACM Prize in Computational Science and Engineering at the SIAM Conference on Computational Science and Engineering (CSE19) held February 25 – March 1, 2019 in Spokane, Washington. He will receive the award and deliver his prize lecture, “The Singular Value Decomposition: Anatomy of an Algorithm, Optimizing for Performance,” on February 28, 2019.

SIAM and the Association for Computing Machinery (ACM) jointly award the SIAM/ACM Prize in Computational Science and Engineering every two years at the SIAM Conference on Computational Science and Engineering for outstanding contributions to the development and use of mathematical and computational tools and methods for the solution of science and engineering problems. With this award, SIAM and ACM recognize Dongarra for his key role in the development of software and software standards, software repositories, performance and benchmarking software, and in community efforts to prepare for the challenges of exascale computing, especially in adapting linear algebra infrastructure to emerging architectures.

When asked about his research for which the prize was awarded, Dongarra said “I have been involved in the design and development of high performance mathematical software for the past 35 years, especially regarding linear algebra libraries for sequential, parallel, vector, and accelerated computers. Of course, the work that led to this award could not have been achieved without the help, support, collaboration, and interactions of many people over the years. I have had the good fortune of working on a number of high profile projects: in the area of mathematical software, EISPACK, LINPACK, LAPACK, ScaLAPACK, ATLAS and today with PLASMA, MAGMA, and SLATE; community de facto standards such as the BLAS, MPI, and PVM; performance analysis and benchmarking tools such as the PAPI, LINPACK benchmark, the Top500, and HPCG benchmarks; and the software repository netlib, arguably the first open source repository for publicly available mathematical software.”

This article was extracted from SIAM News. Further information is available here.

Professor Jack Dongarra
