3.5 problem hints

Take heart: these are much easier than chapter 2. However, the back-of-the-book hints refer to the outdated linalg package commands; the worksheets you have gone through for this chapter use the updated LinearAlgebra package instead.

1. associativity (relatively easy).

2. distributive law (relatively easy).

3. matrix -> Matrix (short as well).

4. outdated (short and easy).
[evalf can now be applied to vectors and matrices, etc.]
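
For instance, a minimal illustration (the 2x2 matrix here is just an arbitrary example):
> with(LinearAlgebra):
> evalf(Matrix(2, 2, (i,j) -> 1/(i+j-1)));   # floating point entries of a small Hilbert-type matrix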

5. Null and column spaces of Hilbert matrices: bases

Finally an interesting problem, but a routine use of Maple commands. Note, however, that Maple now refuses to display matrices larger than 10x10 directly; you have to double-click the output placeholder to browse the table of entries in a separate popup window. The idea is to look at the square Hilbert matrix and at the two non-square cases where there are respectively more rows or more columns. To view the matrices directly, change the dimensions of the 3 suggested cases to
10x10, 10x4, 6x10. Start by defining:
> restart: with(LinearAlgebra): h := (i,j) -> 1/(i+j-1);
> Hm_n := Matrix(m, n, h);    # with m and n replaced by the chosen dimensions

The columns of a square Hilbert matrix are sufficiently complicated that they are linearly independent. If we delete some columns, those that remain are still linearly independent, so the NullSpace in either of those cases remains empty (the null space, i.e. the kernel of the matrix, records the linear relationships among the columns; the Maple command NullSpace gives us a set of basis vectors of this subspace). However, if we delete some rows at the bottom, we have too few equations for too many variables in the homogeneous linear system with that coefficient matrix, so we get a nontrivial NullSpace, but only because we have too few equations, not because of any redundancy in those equations. The Rank is the number of linearly independent columns.
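
For instance, continuing from the definitions above, a minimal sketch of this comparison (the names H1, H2, H3 are just placeholders):
> H1 := Matrix(10, 10, h):  H2 := Matrix(10, 4, h):  H3 := Matrix(6, 10, h):
> NullSpace(H1); NullSpace(H2); NullSpace(H3);   # empty, empty, nontrivial
> Rank(H1), Rank(H2), Rank(H3);                  # number of independent columns in each case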

There are two ways of identifying a basis of the ColumnSpace.

1) The subset of columns of the original matrix corresponding to the leading (pivot) columns of the row reduced matrix is a basis of the column space.

2) Take the transpose to convert the columns into rows, then row reduce to get a set of linearly independent combinations of these columns, discard the zero rows at the bottom which represent redundancies in the set, and then take the transpose to convert them back into column vectors. This is what ColumnSpace does, providing us with a list of basis vectors of the column space. To see that this agrees with the previous method, you can take the transpose of the subset of columns found in approach 1) and row reduce it. The result should agree with the ColumnSpace output modulo permutations.
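
A minimal sketch of that comparison, using the tall 10x4 case and the entry function h defined above (the name H is just a placeholder; the other cases work the same way):
> H := Matrix(10, 4, h):
> ReducedRowEchelonForm(H);              # a pivot in every column, so all 4 columns form a basis
> ColumnSpace(H);                        # method 2: Maple's list of basis vectors
> ReducedRowEchelonForm(Transpose(H));   # method 1 check: transpose the chosen columns and row
>                                        # reduce; the nonzero rows should match the ColumnSpace
>                                        # vectors up to ordering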
 

6. orthogonal complement

The set of all vectors orthogonal to a given vector a (defined as a Vector in Maple, by default a column vector) is the orthogonal complement of the line it spans, and any such vector x (also defined as a Vector in Maple) must satisfy DotProduct(a,x,conjugate=false) = 0. How can this be written as a linear matrix equation A x = b? By the definition of matrix multiplication the 1xn matrix A is the row vector Transpose(a), but for it to act as the coefficient matrix in the ReducedRowEchelonForm / BackwardSubstitute row reduction of the augmented matrix (the coefficient matrix augmented by the 1x1 zero right hand side), you have to first convert it: A := convert(Transpose(a), Matrix).

Here we have a single linear condition on 8 unknowns. We can solve this with Maple in two different ways and compare them.

First use the sequence of Maple row reduction and back substitution commands to solve this matrix equation. Then compare with the direct solution LinearSolve(A,0). Remember that 0 must be a 1-component zero Vector here, and ReducedRowEchelonForm must be applied to the augmented matrix <A|0> representing the full homogeneous system in order to use the BackwardSubstitute command. What is the difference in the way parameters are introduced in these two equivalent ways of solving the system?
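
A minimal sketch of that setup (the vector a here is only a placeholder; use the one specified in the problem):
> restart: with(LinearAlgebra):
> a := Vector(8, i -> i):                            # placeholder 8-component vector
> A := convert(Transpose(a), Matrix):                # 1x8 coefficient matrix of a . x = 0
> b := Vector(1):                                    # 1-component zero right hand side
> BackwardSubstitute(ReducedRowEchelonForm(<A|b>));  # way 1: row reduce the augmented matrix
> LinearSolve(A, b);                                 # way 2: direct solve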

7. orthogonal projection and least squares geometry

When the system A x = b is inconsistent, it means that the vector b does not belong to the column space of A, i.e., it cannot be expressed as a linear combination of the columns of A with coefficients x. But if one orthogonally projects b down onto the subspace which is the column space (think of a plane in 3-space, with b off the plane), then the resulting vector can be so expressed and the system with that right hand side does have a solution. The difference between b and its projection is the shortest possible error vector b - A x over all values of x (none of which solve the original system), so the x which produces the projected vector minimizes the sum of squared errors from the target vector b. This is the least squares solution x.

So one can find a basis of the column space, orthogonalize it with the GramSchmidt procedure (Maple command), and then evaluate the projection of b by summing the vector components along each basis vector using the calc3 projection formula: u = sum over i of [ innerprod(v_i, b)/innerprod(v_i, v_i) ] v_i. Solve A x = u instead to get the least squares solution and compare with the LeastSquares command result. Finally, solve the normal equations described in the main text, obtained by multiplying the original system by the transpose of A. Show that this third solution is also the same.

DotProduct(u,v,conjugate=false) is unfortunately necessary to work with symbolic matrices because of the extra complex conjugate operation on the left factor.
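
A minimal sketch of the whole comparison, using a small made-up inconsistent system just to show the command sequence (the exercise supplies its own A and b; cb, ob, u, x1, x2, x3 are illustrative names):
> restart: with(LinearAlgebra):
> A := Matrix([[1,0],[1,1],[1,2]]):  b := Vector([1,1,3]):    # placeholder inconsistent system
> cb := ColumnSpace(A):                                       # basis of the column space
> ob := GramSchmidt(cb):                                      # orthogonalized basis
> u := add(DotProduct(ob[i],b,conjugate=false)/DotProduct(ob[i],ob[i],conjugate=false)*ob[i], i=1..nops(ob)):
> x1 := LinearSolve(A, u);                                    # solve A x = u (projected target)
> x2 := LeastSquares(A, b);                                   # built-in least squares solution
> x3 := LinearSolve(Transpose(A).A, Transpose(A).b);          # normal equations solution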

8. Cayley-Hamilton theorem: characteristic polynomial

The displayed 3x3 matrix equation represents a system of 9 equations for the three unknown coefficients c2, c1, c0, which is 6 too many! The book hint says this matrix equation must be true for each of its columns, so why not just evaluate the first column of the left hand side and get a single 3-vector equation, i.e., 3 scalar equations, containing 3 unknowns, and solve that. You have to extract the coefficient columns for the coefficient matrix of this new 3x3 system, as well as the right hand side 3-vector, then solve by LinearSolve or by row reduction and back substitution. Don't just copy their code; try to do this yourself. Then evaluate the full matrix equation with the coefficients you found to see if the left hand side reduces to the zero matrix. Finally compare with the CharacteristicPolynomial of the matrix. The Cayley-Hamilton theorem says that every square matrix satisfies its own characteristic equation. We have just found the characteristic polynomial of a specific 3x3 matrix (formed by taking 21 and the next 8 consecutive integers to fill out its rows) by finding the only monic polynomial equation of degree 3 that it satisfies.
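
A minimal sketch of one way to organize this, assuming the matrix is filled row by row with 21 through 29 and that the displayed equation has the monic form M^3 + c2*M^2 + c1*M + c0*I = 0 (the names M, Id, A1, rhs1, c are just placeholders):
> restart: with(LinearAlgebra):
> M := Matrix(3, 3, (i,j) -> 20 + 3*(i-1) + j):         # rows 21..23, 24..26, 27..29
> Id := IdentityMatrix(3):
> A1 := < Column(M^2,1) | Column(M,1) | Column(Id,1) >: # coefficient columns for c2, c1, c0
> rhs1 := -Column(M^3, 1):                              # right hand side: minus first column of M^3
> c := LinearSolve(A1, rhs1);                           # c = <c2, c1, c0>
> M^3 + c[1]*M^2 + c[2]*M + c[3]*Id;                    # should evaluate to the zero matrix
> CharacteristicPolynomial(M, lambda);                  # compare with these coefficients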

9. programming the projection operation

Figure out first on a piece of paper the sequence of steps one needs to project a vector along another vector. Then try it on a pair of vectors using Maple commands and check that your result is orthogonal to its difference with the original vector (the perpendicular part is the original vector minus the part parallel to the given direction). Then try to make this into a procedure based on some of the procedure examples from chapter 1. Then check that it works on your example pair.

DotProduct(u,v,conjugate=false) is unfortunately necessary to work with symbolic matrices because of the extra complex conjugate operation on the left factor. If you have concrete real number entries, you don't need it.
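
A minimal sketch of such a procedure and its check (proj, u, w, p are just illustrative names, and the example pair is arbitrary):
> restart: with(LinearAlgebra):
> proj := proc(u::Vector, w::Vector)                        # component of u along the direction w
>    DotProduct(w, u, conjugate=false)/DotProduct(w, w, conjugate=false) * w;
> end proc:
> u := <1, 2, 3>:  w := <1, 1, 0>:                          # placeholder example pair
> p := proj(u, w):
> DotProduct(u - p, p, conjugate=false);                    # should return 0: the difference is orthogonal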