Iterative methods of linear algebra I
Due Friday Oct 29, 2004 - three weeks
The problems you solved in the last set were initial-value problems in ordinary differential equations. The methods used were explicit methods, which compute each time update from information at previous times. For boundary-value problems and for partial differential equations we often need more sophisticated methods, called implicit methods. But before we can apply implicit methods, we must learn to solve problems in linear algebra.
Two of the most common problems in linear algebra are
solving a large set of linear equations and finding the
eigenvalues of a large matrix. There are many standard
computer packages for these calculations -- Mathematica
has some -- but most are not optimized for physics
problems. Physics problems often involve sparse
matrices, in which most of the entries are zero. In that
case it is important to design algorithms that operate
only on the non-zero elements of the matrix. Sparse linear
algebra problems arise in solving partial differential
equations using implicit methods, as will be illustrated
in the next worksheet.
In this worksheet you will develop a C++ code for a simple
iterative method for finding the largest eigenvalue of
a real symmetric matrix. In the next worksheet you will apply this
method to find the lowest energy state of a quantum particle in
a box.
Structuring of code
You should strive to write code that is self-explanatory.
So, the code you hand in for this assignment needs to have,
(i) Comments that explain each step in
the algorithm;
(ii) Variable names that are descriptive -- e.g.,
do NOT just use ``m'' to represent a matrix.
The ``iterative power method''
The eigenvalue of largest magnitude of a
real symmetric square matrix can be found by taking an
arbitrary vector and applying the matrix to it many times.
(This is sometimes called the ``iterative power method.'') So
if A is a matrix and |u0> is a nearly arbitrary
normalized starting vector, then
(1)  A^n |u0>  -->  (constant) x lambda1^n |u1>   as n --> infinity,
where lambda1 is the eigenvalue of A with the largest magnitude and
|u1> is its normalized eigenvector (assuming <u1|u0> is non-zero).
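Why repeated multiplication isolates the dominant eigenvector can be seen by expanding |u0> in the orthonormal eigenbasis of A; a brief sketch, assuming the largest-magnitude eigenvalue is non-degenerate and <u1|u0> is non-zero:

```latex
A^{n}\lvert u_0\rangle
  = \sum_i c_i\,\lambda_i^{n}\,\lvert u_i\rangle
  = \lambda_1^{n}\Bigl( c_1\lvert u_1\rangle
      + \sum_{i\ge 2} c_i\Bigl(\tfrac{\lambda_i}{\lambda_1}\Bigr)^{n}\lvert u_i\rangle \Bigr)
  \;\longrightarrow\; c_1\,\lambda_1^{n}\,\lvert u_1\rangle ,
\qquad \Bigl\lvert \tfrac{\lambda_i}{\lambda_1} \Bigr\rvert < 1 \ \ (i\ge 2).
```

Every term with i >= 2 is suppressed by a factor |lambda_i/lambda_1|^n, so only the dominant eigenvector survives.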
To find the largest eigenvalue, use the following steps:
1. Create an "arbitrary" normalized vector |ulast>.
2. Multiply by the matrix:
|uprod>=A|ulast>.
3. Calculate the approximate eigenvalue:
eapprox = (<uprod|uprod>)^(1/2).
Print out eapprox.
4. Generate the next normalized eigenvector approximation
|ulast>=|uprod>/eapprox.
5. Quit if the iteration has converged (e.g., when successive
estimates of eapprox agree to within a chosen tolerance).
6. Return to step 2.
On exit, |ulast> approximates the eigenvector belonging to the
eigenvalue of largest magnitude.
When writing your code, create two separate functions to handle the
matrix product and the inner product.
Demonstrate
that your algorithm works by checking it
on a couple of matrices for which you know the largest
eigenvalue. You can do this by finding the eigenvalues
of your test matrices using Mathematica.
Hand in a hardcopy of your code, one of your test matrices and its eigenvalues (e.g. from Mathematica), and the output from your C++ code for this matrix (i.e. the sequence of estimates eapprox). Plot the sequence of estimates on a log-linear plot. Note that at large iteration number the convergence is exponential. Why?