
The Contribution of Dr. J. H. Wilkinson to Numerical Analysis


President HRH Duke of Edinburgh presenting honorary fellowship of The Institute of Mathematics and its Applications to James Wilkinson in 1977 (© The IMA).

The title of this post is the same as that of a symposium organized by Michael J. D. Powell and the Institute of Mathematics and its Applications (IMA) at the Royal Society in London on July 6th, 1977. The meeting commemorated the election of James Hardy Wilkinson to an Honorary Fellowship of the IMA.

The proceedings of the meeting were published by the IMA in a 91-page A5 booklet. As far as I am aware, few copies of the booklet survive and its contents have not previously been made available online. I am grateful to David Youdan, Executive Director of the IMA, for giving me permission to provide here a scan of the booklet. It is timely to do so, because this year marks the 100th anniversary of the birth of Wilkinson.

Here are the individual chapters, with comments from Mike Powell’s preface in quotes.

  • About Jim Wilkinson, with a Commemorative Snippet on Backward Error Analysis, L. Fox (Oxford University Computing Laboratory). “Leslie Fox describes many of Jim Wilkinson’s achievements that have not been published before and he exposes the accuracy of some ill-conditioned least squares calculations.”
  • Inverse Iteration, Newton’s Method, and Non-linear Eigenvalue Problems, M. R. Osborne (Australian National University). “Mike Osborne unifies the convergence properties of a main class of iterative methods for calculating eigenvalues.”
  • A New Look at Error Analysis, C. W. Clenshaw (University of Lancaster). “Charles Clenshaw develops an idea, due to Frank Olver, for treating the accumulation of errors in floating point arithmetic.”
  • A Problem in Numerical Linear Algebra, J. H. Wilkinson (National Physical Laboratory). “Jim Wilkinson shows the relevance in practice of the equivalence of repeated matrix eigenvalues, the ill-conditioning of the matrix eigenvector calculation, and the orthogonality of left and right hand eigenvectors that have a common eigenvalue.”

For more information about Wilkinson, see this web page that Sven Hammarling and I have created.


Our Alumni – Pythagoras Papadimitriou

In this blog post, we asked one of our alumni, Pythagoras Papadimitriou, a few questions about his time with the Numerical Linear Algebra Group.

Please can you introduce yourself and tell us a bit about your experience before attending University of Manchester?

I was born in Athens and grew up there. I studied Mathematics (B.Sc.) at Athens University. Then I did my national service, and I moved to Manchester in September 1990 to attend the M.Sc. course “Numerical Analysis & Computing” at Manchester University.

What was your PhD thesis on?

The title of my Ph.D. thesis is “Parallel Solution of SVD-Related Problems, with Applications”. The main part of my research dealt with the development of parallel algorithms for computing the Singular Value Decomposition and the Polar Decomposition. The algorithms were designed for the particular architecture of the KSR1, a virtual shared-memory parallel computer that was a leading-edge technology of the 1990s.

Why did you choose to study your PhD in Manchester?

My initial plan was to complete my M.Sc. and work in industry. But when Nick Higham asked me to do a Ph.D. with him, with a scholarship from S.E.R.C., I did not think twice. I am very proud that Nick was my supervisor.

How did you find Manchester?

It was an invaluable experience, three wonderful years. I made good friends and I am still in touch with most of them. When I left Manchester in October 1993 I took only good memories with me, including some of the best moments in my life. They say that Manchester has changed and the city is more beautiful now. I hope my sons will have the opportunity to study in Manchester.

Can you tell us about your career since leaving Manchester?

I joined the IT industry as a software engineer a few days after the submission of my Ph.D. thesis in October 1993. My first employer was a start-up in Greece. I joined Intrasoft SA, the then largest Systems Integrator in southeast Europe, in 1995. At Intrasoft SA, I moved from SW Development to Project Management in 1997 and to Sales in 1999. I joined Nortel Networks in Athens in January 2001 as Sales Director for Greece and Cyprus. A dream came true in October 2005 when I joined Sun Microsystems, a company that I had admired since my Manchester days, when I used Sun Unix workstations for carrying out my research. I headed the SW Business Unit of Sun Microsystems for 5 years, being responsible for a huge geography, from Saint Petersburg to Cape Town and from Vienna to Vladivostok! In January 2011 I joined HP in Athens as Managing Director. I moved to Vienna in September 2014 to join Oracle as Senior Sales Director responsible for Central Eastern Europe, Russia, Turkey and Central Asia. Today my organisation is responsible for the Systems partners of Oracle in this region, as well as in Italy, France and Iberia.

What is your current role?

I head the Systems Alliance and Channel organisation of Oracle for Italy, France, Iberia, Central Eastern Europe, Russia, Turkey and Central Asia. We are responsible for driving with our channel partners (Value Added Distributors, Value Added Resellers, Systems Integrators and Independent Software Vendors) the hardware business of Oracle in this region.

Wilkinson Quotes


by Sven Hammarling and Nick Higham

We collect here some quotes from the work of Jim Wilkinson. These reflect his unique perspective as a mathematician who was involved in designing and building one of the first digital computers and who subsequently developed and analyzed a variety of numerical algorithms.

We have arranged the quotes under the following headings:

Program libraries | Floating-point arithmetic | Rounding error analysis | Conditioning | Backward error analysis | Polynomials | Interaction on Pilot ACE | Communication avoidance | Linear algebra on Pilot ACE

Program libraries

Since the programming is likely to be the main bottleneck in the use of an electronic computer we have given a good deal of thought to the preparation of standard routines of considerable generality for the more important processes involved in computation. By this means we hope to reduce the time taken to code up large-scale computing problems, by building them up, as it were, from prefabricated units. [W48, p. 286]

In spite of the self-contained nature of the linear algebra field, experience has shown that even here the preparation of a fully tested set of algorithms is a far greater task than had been anticipated. [W71a, p. v]

Floating-point arithmetic

At a time when the arithmetic provided on modern computers is often so disappointing, it is salutary to recall that the subroutines included provision for accumulating inner products in double-precision floating-point arithmetic and all rounding was immaculate! [W80, p. 105]

Rounding error analysis

The two main classes of rounding error analysis are not, as my audience might imagine, ‘backwards’ and ‘forwards’, but rather ‘one’s own’ and ‘other people’s’. One’s own is, of course, a model of lucidity; that of others serves only to obscure the essential simplicity of the matter in hand. [W85, p. 5]

In general, the statistical distribution of the rounding errors will reduce considerably the function of n occurring in the relative errors. We might expect in each case that this function should be replaced by something which is no bigger than its square root and is usually appreciably smaller. [W61, p. 38]

For me, then, the primary purpose of the rounding error analysis was insight. [W86, p. 197]

Conditioning

The system was mildly ill-conditioned, though we were not so free with such terms of abuse in those days. [W71, p. 144]

Backward error analysis

“You have been solving these damn problems better than I can pose them.” Sir Edward Bullard, Director NPL, in a remark to Wilkinson (mid 1950s) [W85, p. 11]

I first used backward error analysis in connection with simple programs for computing zeros of polynomials soon after the PILOT ACE came into use. [W85, p. 8]

There does seem to be some misunderstanding about the purpose of an a priori backward error analysis. All too often, too much attention is paid to the precise error bound that has been established. The main purpose of such an analysis is either to establish the essential numerical stability of an algorithm or to show why it is unstable and in doing so to expose what sort of change is necessary to make it stable. The precise error bound is not of great importance. [W74, p. 356]

The great stability of unitary transformations in numerical analysis springs from the fact that both the \ell_2-norm and the Frobenius norm are unitarily invariant. This means in practice that even when rounding errors are made, no substantial growth takes place in the norms of the successive transformed matrices. [W65, p. 77]

Although backward analysis is a perfectly straightforward concept there is strong evidence that a training in classical mathematics leaves one unprepared to adopt it. … I have even detected a note of moral disapproval in the attitude of many to its use and there is a tendency to seek a forward error analysis even when a backward error analysis has been spectacularly successful. [W85, p. 5]

Polynomials

The Fundamental Theorem of Algebra asserts that every polynomial equation over the complex field has a root. It is almost beneath the dignity of such a majestic theorem to mention that in fact it has precisely n roots. [W84, p. 21]

The cosy relationship that mathematicians enjoyed with polynomials suffered a severe setback in the early fifties when electronic computers came into general use. Speaking for myself I regard it as the most traumatic experience in my career as a numerical analyst. [W84, p. 3]

Interaction on Pilot ACE

Since the use of the punched-card equipment required the use of an operator, it encouraged user participation generally, and this was a distinctive feature of Pilot ACE operation. For example, various methods of accelerating the convergence of matrix iterative processes were left under the control of operators, and the skill with which these stratagems were used by young women with no more than high school mathematics qualifications was most impressive. Speaking for myself I gained a great deal of experience from user participation, and it was this that led to my own conversion to backward error analysis. [W80, p. 112]

Communication avoidance

Since all machines have stores of finite size often divided up into high speed and auxiliary sections, storage considerations often have a vitally important part to play. [W55, p. 188]

Linear algebra on Pilot ACE

An interesting feature of these codes is that they make a very intensive use of subroutines; the addition of two vectors, multiplication of a vector by a scalar, inner products, etc., are all coded in this way. [W80, p. 105]

From 1946–1948 a great deal of quite detailed coding was done.… The subroutines for floating-point arithmetic were … produced by Alway and myself in 1947 … They were almost certainly the earliest floating-point subroutines. [W80, pp. 104–105]

References

[W48]   J. H. Wilkinson, The Automatic Computing Engine at the National Physical Laboratory, Proc. Roy. Soc. London Ser. A 195, 285–286, 1948.

[W55]   J. H. Wilkinson, The use of iterative methods for finding the latent roots and vectors of matrices, Mathematical Tables and Other Aids to Computation 9, 184–191, 1955.

[W61]   J. H. Wilkinson, Error analysis of direct methods of matrix inversion, J. ACM 8, 281–330, 1961.

[W65]   J. H. Wilkinson, Error Analysis of Transformations Based on the Use of Matrices of the Form I-2ww^H, pages 77–101, in Louis Rall, ed., Error in Digital Computation, vol. 2, Wiley, 1965.

[W71]   J. H. Wilkinson, Some comments from a numerical analyst, J. ACM 18, 137–147, 1971. (The 1970 A. M. Turing lecture.)

[W71a]   J. H. Wilkinson and C. Reinsch, eds, Linear Algebra, vol. II of Handbook for Automatic Computation, Springer, 1971.

[W74]   J. H. Wilkinson, Numerical linear algebra on digital computers, IMA Bull. 10, 354–356, 1974.

[W80]   J. H. Wilkinson, Turing’s work at the National Physical Laboratory and the construction of Pilot ACE, DEUCE, and ACE, pages 101–114, in N. Metropolis, J. Howlett and G.-C. Rota, eds, A History of Computing in the Twentieth Century: A Collection of Essays, Academic Press, 1980.

[W84]   James Wilkinson, The Perfidious Polynomial, pages 1–28, in G. H. Golub, ed., Studies in Numerical Analysis, vol. 24, Mathematical Association of America, Washington, D.C., 1984.

[W85]   J. H. Wilkinson, The state of the art in error analysis, NAG Newsletter 2/85, 5–28, 1985. (Invited lecture for the NAG 1984 Annual General Meeting.)

[W86]   J. H. Wilkinson, Error Analysis Revisited, IMA Bull. 22, 192–200, 1986.

Our Alumni – Ramaseshan Kannan

In this blog post, we asked one of our alumni, Ramaseshan Kannan, a few questions about his time with the Numerical Linear Algebra Group.


Please can you introduce yourself and tell us a bit about your experience before attending University of Manchester?

I did my undergraduate and Master’s degrees in Civil and Structural Engineering at the Indian Institute of Technology in Chennai. Upon graduation I started working for Arup in India, developing finite element-based structural engineering software. In due course I became very interested in the “solver” stack of the code, which is the linear algebra layer. I asked if my employer would support my PhD in this area and, to my surprise, they agreed. However, the funding I was being offered only covered a fraction of my tuition and expenses as I was a non-EU candidate. At this point the School of Maths, and in particular my supervisors, helped out and I was offered a school scholarship to do a collaborative PhD in the NLA group. That’s how I landed up in sunny Manchester.

What was your PhD thesis on?

I worked on a range of sparse linear algebra problems that originated in the software I was developing. The mainstay of my research was a new eigenvector clustering algorithm that allowed engineers to debug errors in their mathematical models. Other parts concerned the execution performance of matrix algorithms on parallel computers.

During the PhD I continued working for Arup as a technology translator implementing my own research back into commercial software. As a result I was able to see most parts of my PhD being used on real world problems, which was very satisfying.

Why did you choose to study your PhD in Manchester?

Having secured my employer’s funding, we started scouting for research groups and centres of expertise around the world in the area of eigenvalue problems. Very soon it became clear that Manchester was a leader in both theory and numerical software, so it was an obvious choice. In addition, my supervisors Nick Higham and Françoise Tisseur were open to making my rather bespoke arrangement work, all of which contributed to the decision.

How did you find Manchester?

I liked it so much that I haven’t left! I find it a great mix of practicality and opportunity. We have some of the best schools in the country. Plus we have the Peak District, the Lake District, and the Yorkshire Dales all within a day-trip’s distance.

Can you tell us about your career since leaving Manchester?

After finishing my PhD I have continued to work for Arup in our Manchester office. Over the years I have been involved in a gamut of activities: internal and external research (including sponsored MScs and PhDs), writing numerical software, publishing and peer reviewing, and consulting with engineers to understand their technical problems, to name a few.

What is your current role?

As above, I don multiple hats, although my primary role is centred on developing numerical software with the eventual aim of making simulations faster, more accurate, or more productive for the end-user. I am also tasked with blue-sky activities so, as an example, I’m looking at ways in which machine learning can be used symbiotically with traditional numerical analysis and engineering simulation to help engineers.

Our Alumni – Sam Relton

In this blog post, we asked one of our alumni, Sam Relton, a few questions about his time with the Numerical Linear Algebra Group.


Please can you introduce yourself and tell us a bit about your experience before attending University of Manchester?

I was always pretty good at maths, because I liked understanding how things worked, and so I went to Manchester for my BSc. During that course I really enjoyed the numerical analysis and linear algebra modules because they underpin how all other mathematics is implemented in practice. I loved living in Manchester so I wanted to stick around, and I was lucky enough to be able to skip an MSc and go straight to a PhD in the NLA group.

What was your PhD thesis on?

My thesis was supervised by Nick Higham and called “Algorithms for Matrix Functions, their Fréchet Derivatives, and Condition Numbers”. It consisted of four research papers covering theoretical and algorithmic advances in the computation of matrix functions, all woven together. A few of these papers were co-authored, along with Nick, by Awad Al-Mohy (a previous PhD student of Nick’s who was interested in similar problems).

Why did you choose to study your PhD in Manchester?

Manchester has a world-leading research group for numerical linear algebra and it was a privilege to learn from (and work with) the greatest researchers in the field. This also opens up a lot of opportunities in terms of attending conferences, visiting other institutions, and looking for postdoctoral positions. Manchester is also a fantastic place to live, with plenty going on and a thriving community of PhD and post-doc researchers. I also shared a house with a few friends studying other courses during my undergraduate degree and PhD.

How did you find Manchester?

I loved Manchester: it’s a large, busy city full of interesting things to see and do, whilst the cost of living is nowhere near that of London. Despite that, you can easily get into the countryside with a 30-minute drive! The maths department was brilliant, with plenty of strong research groups to chat with, lots of seminars to attend, and a friendly and open atmosphere among all the staff and students.

Can you tell us about your career since leaving Manchester?

After doing a BSc, PhD, and 2 post-docs in high-performance computing at Manchester I decided to try something new. I now work in the School of Medicine at Leeds, applying complex statistical models and machine learning to electronic healthcare records (taken from GP and hospital databases) with collaborators in the School of Computing. Statistics and machine learning are really just a practical application of linear algebra / HPC, so much of what I learnt during my years in Manchester is still very relevant! Working with large interdisciplinary teams of doctors and nurses is an interesting change, and it’s nice to have direct impact on NHS policy decisions.

Our Alumni – Edvin Hopkins

In this blog post, we asked one of our alumni, Edvin Hopkins, a few questions about his time with the Numerical Linear Algebra Group.


Please can you introduce yourself and tell us a bit about your experience before attending University of Manchester?

I obtained my BA in Mathematics from the University of Cambridge in 2005 and remained there for a few more years to do a PhD in numerical relativity. My association with the University of Manchester began in 2010, when I joined the NLA group as a KTP Associate, working on a joint project with NAG to implement some of the NLA group’s matrix function algorithms for the NAG Library.

Why did you choose to work with the University of Manchester?

The project I was involved in was a great opportunity to bridge the gap between academia and industry and to work with world leaders in their fields.

How did you find Manchester?

Well, I’m still there! It has really grown on me in the past few years, and is a great place to work.

Can you tell us about your career since leaving Manchester?

At the end of the KTP project I continued in the NLA group as a postdoctoral research associate, working with Professor Nick Higham for a year and a half on his ERC-funded project on matrix functions. I then returned to work for NAG (in their Manchester office), which is where I am now. NAG still has very strong links with the University of Manchester, and with the NLA group in particular.

What is your current role?

I am a Technical Consultant at NAG. My work involves implementing mathematical algorithms for the NAG Library, and high performance computing consultancy projects.

Wilkinson and Backward Error Analysis

by Sven Hammarling and Nick Higham

It is often thought that Jim Wilkinson developed backward error analysis because of his early involvement in solving systems of linear equations. In his 1970 Turing lecture [5] he described an experience, during World War II at the Armament Research Department, of solving a system of twelve linear equations on a desk computer, using Gaussian elimination. (He doesn’t say how long it took, but it must surely have been several days.) The coefficients were of order unity and, using ten decimal digit computation, he found that the coefficients of the reduced equation determining x_{12} had four leading zeros, so he felt that the solution could surely have no more than six correct figures. As a check on his calculations, he then computed the residuals and, to his surprise, the left hand sides agreed with the right hand sides to the full ten figures.
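Wilkinson’s observation is easy to reproduce today. Here is a minimal NumPy/SciPy sketch (our illustration, not Wilkinson’s calculation): solving an ill-conditioned system by Gaussian elimination with partial pivoting yields a computed solution with few correct figures, yet a residual of the order of the unit roundoff.

```python
# A minimal sketch (our illustration, not Wilkinson's calculation):
# an ill-conditioned system solved by Gaussian elimination has a large
# forward error but a tiny relative residual.
import numpy as np
from scipy.linalg import hilbert

n = 12
A = hilbert(n)              # Hilbert matrix: notoriously ill conditioned
x_true = np.ones(n)
b = A @ x_true

x = np.linalg.solve(A, b)   # Gaussian elimination with partial pivoting

print("condition number:  %.1e" % np.linalg.cond(A))           # ~1e16
print("forward error:     %.1e" % np.linalg.norm(x - x_true))  # large
res = np.linalg.norm(b - A @ x) / (np.linalg.norm(A, 2) * np.linalg.norm(x))
print("relative residual: %.1e" % res)                         # ~1e-16
```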


A slide of Wilkinson’s describing the solution of the system of 18 equations.

After the war Wilkinson joined the Mathematics Division at the National Physical Laboratory. Soon after his arrival, a system of eighteen equations was given to the Mathematics Division. Solving it required a joint effort by Leslie Fox, Eric Goodwin, Alan Turing and Wilkinson. Again, the system was somewhat ill conditioned, as revealed by the final reduced equation, but again in computing the residuals the right and left hand sides agreed to full accuracy. Incidentally, Wilkinson and his colleagues used iterative refinement, which convinced them that the first solution had been accurate to six figures.
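Iterative refinement reuses the factorization of the matrix to correct a computed solution. The following sketch illustrates the structure of the iteration; note that Wilkinson and his colleagues accumulated the residual in higher precision, whereas everything here is in double precision, so this shows only the shape of the method, not its full power.

```python
# A minimal sketch of iterative refinement (all double precision;
# Wilkinson computed the residual in extra precision).
import numpy as np
from scipy.linalg import hilbert, lu_factor, lu_solve

n = 10
A = hilbert(n)
b = A @ np.ones(n)

lu, piv = lu_factor(A)          # factorize A = LU once
x = lu_solve((lu, piv), b)      # initial computed solution

for _ in range(3):              # a few refinement steps
    r = b - A @ x               # residual (ideally in higher precision)
    d = lu_solve((lu, piv), r)  # correction: solve A d = r with old factors
    x = x + d
    print(np.linalg.norm(b - A @ x))  # residual remains tiny
```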

These experiences did not straightforwardly lead Wilkinson to develop backward error analysis. In [7] he says that he first used backward error analysis in connection with simple programs for computing zeros of polynomials soon after PILOT ACE came into use, specifically a program for evaluating a polynomial by nested multiplication and a program for carrying out polynomial deflation. Nevertheless, he did not recognise backward error analysis as a general tool. Wilkinson explains:

It is natural to ask why I did not immediately set about using this type of error analysis as a general purpose tool. In retrospect it seems amazing that I did not try it on Gaussian elimination and on various eigenvalue algorithms in which I was keenly interested at the time. The truth is that it did not occur to me for one moment to do so.

His explicit recognition of a tool that he decided to call backward error analysis soon came through his experience of solving eigenvalue problems on PILOT ACE. He states further in [7]:

Because of the small storage capacity of the PILOT ACE virtually the only algorithm that could be used for dealing with large unsymmetric eigenvalue problems was the power method supplemented by various techniques for accelerating convergence. After each eigenvalue/eigenvector was determined this pair was removed by deflation. Now at that time deflation was generally held to be extremely unstable and accordingly I used it at first with great trepidation. However, it soon became evident that it was being remarkably effective.

As with the linear equation problem, Wilkinson computed residuals, r = A\hat{x} - \hat{\lambda} \hat{x}, where \hat{x} and \hat{\lambda} are the computed values, with \hat{x} normalised so that \hat{x}^T \hat{x} = 1, and, even after many deflations, he found that the residuals were remarkably small. He then realised that

(A - r \hat{x}^T) \hat{x} = \hat{\lambda} \hat{x} \;\;\; \mbox{exactly},

and this led him directly to backward error analysis since, if we put E = -r \hat{x}^T, then \hat{\lambda} and \hat{x} are an exact eigenvalue and eigenvector of the matrix A + E. He now recognised that the process could be widely used and this, of course, led to his 1963 book Rounding Errors in Algebraic Processes [3] and, soon after, to The Algebraic Eigenvalue Problem [4].
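This identity is easy to verify numerically. Here is a minimal NumPy check (our illustration; we use a symmetric matrix so that the eigenpairs are real, whereas Wilkinson’s Pilot ACE computations concerned unsymmetric matrices and the power method):

```python
# A minimal check that with E = -r x^T the computed pair (lambda, x)
# is an exact eigenpair of A + E, and that ||E||_2 = ||r||_2.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B + B.T                       # symmetric, so the eigenpairs are real

lam, V = np.linalg.eigh(A)
l, x = lam[0], V[:, 0]            # computed eigenpair, with x^T x = 1

r = A @ x - l * x                 # residual
E = -np.outer(r, x)               # backward error perturbation E = -r x^T

print(np.linalg.norm((A + E) @ x - l * x))      # ~1e-16: exact to roundoff
print(np.linalg.norm(E, 2), np.linalg.norm(r))  # equal: ||E||_2 = ||r||_2
```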

It should be noted that Wilkinson did not claim to be the first to perform a backward error analysis. He attributes the first analysis to von Neumann and Goldstine in their 1947 paper [2], which, as Wilkinson said in his von Neumann prize paper “is not exactly bedtime reading” [6]. Wilkinson also gives great credit to Givens for his backward error analysis of orthogonal tridiagonalisation in his, sadly, unpublished technical report [1].

References

[1]   W. Givens. Numerical computation of the characteristic values of a real symmetric matrix. Technical Report ORNL-1574, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA, 1954.

[2]   J. von Neumann and H. H. Goldstine. Numerical inverting of matrices of high order. Bull. Amer. Math. Soc., 53:1021–1099, 1947.

[3]   J. H. Wilkinson. Rounding Errors in Algebraic Processes. Notes on Applied Science, No. 32. HMSO, London, UK, 1963. (Also published by Prentice-Hall, Englewood Cliffs, NJ, USA, 1964. Reprinted by Dover Publications, New York, 1994.)

[4]   J. H. Wilkinson. The Algebraic Eigenvalue Problem. Oxford University Press, Oxford, UK, 1965.

[5]   J. H. Wilkinson. Some comments from a numerical analyst. J. ACM, 18:137–147, 1971. (The 1970 A. M. Turing lecture).

[6]   J. H. Wilkinson. Modern error analysis. SIAM Review, 13:548–568, 1971. (The 1970 von Neumann lecture).

[7]   J. H. Wilkinson. The state of the art in error analysis. NAG Newsletter, 2/85:5–28, 1985. (Invited lecture for the NAG 1984 Annual General Meeting).
