{VERSION 6 0 "IBM INTEL NT" "6.0" }
{USTYLETAB {CSTYLE "Maple Input" -1 0 "Courier" 0 1 255 0 0 1 0 1 0 0
1 0 0 0 0 1 }{CSTYLE "2D Math" -1 2 "Times" 0 1 0 0 0 0 0 0 2 0 0 0 0
0 0 1 }{CSTYLE "2D Comment" 2 18 "" 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 }
{PSTYLE "Normal" -1 0 1 {CSTYLE "" -1 -1 "Times" 1 12 0 0 0 1 2 2 2 2
2 2 1 1 1 1 }1 1 0 0 0 0 1 0 1 0 2 2 0 1 }{PSTYLE "Heading 1" -1 3 1
{CSTYLE "" -1 -1 "Times" 1 18 0 0 0 1 2 1 2 2 2 2 1 1 1 1 }1 1 0 0 8
4 1 0 1 0 2 2 0 1 }{PSTYLE "Title" -1 18 1 {CSTYLE "" -1 -1 "Times" 1
18 0 0 0 1 2 1 1 2 2 2 1 1 1 1 }3 1 0 0 12 12 1 0 1 0 2 2 19 1 }}
{SECT 0 {EXCHG {PARA 18 "" 0 "" {TEXT -1 61 "motivating the SVD (singular value decomposition) of a matrix" }}}{EXCHG {PARA 0 "" 0 "" {TEXT
-1 84 "We look at the simpler case of a diagonalizable square matrix t
o show the main idea." }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 13 "wi
th(linalg):" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 144 "By working backwa
rds, I create a simple matrix which has the columns of the matrix B as
eigenvectors, and with exact eigenvalues 1, 1/10, 1/100." }}}{SECT 1
{PARA 3 "" 0 "" {TEXT -1 7 "details" }}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 37 "lambda:=[1.,.1,.01];\nLambda:=[a,b,c];" }}}{EXCHG
{PARA 0 "> " 0 "" {MPLTEXT 1 0 38 "B:=augment([1,1,1],[-2,1,1],[0,1,-1
]);" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 11 "inverse(B);" }}}
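As a sanity check outside Maple, the observation below can be sketched in plain Python with exact rational arithmetic (a hypothetical stand-alone script, not part of the worksheet; the matrix B is copied from the commands above): because the columns of B are mutually orthogonal, each row of inverse(B) is the corresponding column of B divided by its squared length, i.e. by the corresponding diagonal entry of transpose(B)&*B.

```python
from fractions import Fraction as F

# Columns of B are the chosen eigenvectors (1,1,1), (-2,1,1), (0,1,-1).
B = [[F(1), F(-2), F(0)],
     [F(1), F(1),  F(1)],
     [F(1), F(1),  F(-1)]]

def matmul(X, Y):
    """Exact 3x3 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Squared lengths of the columns of B: the diagonal of transpose(B)*B.
sq_norms = [sum(B[r][c] ** 2 for r in range(3)) for c in range(3)]

# Because the columns are orthogonal, row i of inverse(B) is
# column i of B divided by its squared length.
B_inv = [[B[r][c] / sq_norms[c] for r in range(3)] for c in range(3)]

identity = [[F(1 if i == j else 0) for j in range(3)] for i in range(3)]
assert matmul(B_inv, B) == identity  # exact inverse, no roundoff
```

The script names (`matmul`, `sq_norms`) are illustrative only; the check reproduces the diagonal values 3, 6, 2 that Maple's `evalm(transpose(B)&*B)` displays.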
{EXCHG {PARA 0 "" 0 "" {TEXT -1 223 "Notice that the eigenvectors are orthogonal, and that the rows of the inverse matrix equal the eigenvectors divided by the squared length of the corresponding eigenvector, namely the diagonal values of the following matrix:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 23 "evalm(transpose(B)&*B
);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 295 "If we had normalized these
eigenvectors, then the matrix would be orthogonal and its inverse wou
ld be equal to its transpose. However, the matrix product of the corre
sponding row of the inverse matrix and the column of the first matrix \+
gives the same result as the normalized vectors would have." }}{PARA
0 "" 0 "" {TEXT -1 0 "" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 83 "Now we \+
create the desired matrix by working backwards from the diagonalized m
atrix:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 25 "A_D:=diag(1, 1/10
,1/100);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 40 "Here it is and its nu
merical equivalent:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 44 "A:=e
valm(B&*A_D&*inverse(B));\nA_N:=evalf(%);" }}}}{EXCHG {PARA 0 "> " 0 "
" {MPLTEXT 1 0 91 "A := matrix([[2/5, 3/10, 3/10], [3/10, 71/200, 69/2
00], [3/10, 69/200, 71/200]]); evalf(%);" }}}{EXCHG {PARA 0 "" 0 ""
{TEXT -1 66 "As advertised, it is diagonalized by its matrix B of eige
nvectors:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 24 "evalm(inverse(
B)&*A&*B);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 78 "or conversely can b
e represented in terms of its eigenvectors and eigenvalues:" }}}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 52 "evalm(B)&*evalm(A_D)&*evalm(
inverse(B));\nA=evalm(%);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 71 "Mapl
e gives the same eigenvectors as we chose (we rigged the problem!):" }
}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 16 "eigenvectors(A);" }}}
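The reconstruction can also be checked outside Maple in plain Python with exact rational arithmetic (an illustrative stand-alone script; B, its inverse, and the diagonal matrix of eigenvalues are copied from the worksheet above): A = B &* A_D &* inverse(B) comes out exactly as the rational matrix printed above, and each column of B is indeed an eigenvector.

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Exact matrix product for small rational matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

B = [[F(1), F(-2), F(0)],
     [F(1), F(1),  F(1)],
     [F(1), F(1),  F(-1)]]
B_inv = [[F(1, 3),  F(1, 3), F(1, 3)],
         [F(-1, 3), F(1, 6), F(1, 6)],
         [F(0),     F(1, 2), F(-1, 2)]]
Lam = [[F(1), F(0),     F(0)],
       [F(0), F(1, 10), F(0)],
       [F(0), F(0),     F(1, 100)]]

A = matmul(matmul(B, Lam), B_inv)

# Exactly the rational matrix displayed in the worksheet:
assert A == [[F(2, 5),  F(3, 10),   F(3, 10)],
             [F(3, 10), F(71, 200), F(69, 200)],
             [F(3, 10), F(69, 200), F(71, 200)]]

# Each column of B is an eigenvector: A b_i = lambda_i b_i.
eigs = [F(1), F(1, 10), F(1, 100)]
for i in range(3):
    b = [[B[r][i]] for r in range(3)]  # column i as a 3x1 matrix
    assert matmul(A, b) == [[eigs[i] * b[r][0]] for r in range(3)]
```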
{EXCHG {PARA 0 "" 0 "" {TEXT -1 187 "Now we show the three separate contributions to the matrix, one from each term associated with an eigenvalue, after extracting the columns from the matrix B and the rows from its inverse:" }}}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 122 "for i from 1 to 3 do\nU[i]:=convert(col(evalf(B),i),
matrix);\nV[i]:=transpose(convert(row(evalf(inverse(B)),i),matrix));\n
od;" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 17 "Since the matrix " }
{XPPEDIT 18 0 "A_D;" "6#%$A_DG" }{TEXT -1 200 " is diagonal, the matrix product expressing the original matrix in terms of its eigenvalues can be written as a sum of three terms, one associated with each diagonal value (eigenvalue) of this matrix:" }}}{EXCHG {PARA 0 "> " 0 ""
{MPLTEXT 1 0 61 "evalm(add(lambda[i]*evalm((U[i])&*V[i]),i=1..3));\nev
alm(%-A);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 106 "These are the three
contributions with the eigenvalues kept as variables to see their mat
rix coefficients:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 43 "add(La
mbda[i]*evalm((U[i])&*V[i]),i=1..3);\n" }}}{EXCHG {PARA 0 "" 0 ""
{TEXT -1 233 "When we add them together we get the original matrix, without any roundoff error! On the other hand, if we neglect the term multiplied by the smallest eigenvalue, we almost reproduce the original matrix, up to terms of order 10^(-2):" }}}{EXCHG {PARA 0 "> \+
" 0 "" {MPLTEXT 1 0 61 "evalm(add(lambda[i]*evalm((U[i])&*V[i]),i=1..2
));\nevalm(%-A);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 447 "This is the idea of reconstructing a matrix from fewer data by neglecting the contribution of small eigenvalues. For general matrices which, unlike this one, are not diagonalizable, the best we can do is use the singular value decomposition, but neglecting the smaller singular values (like the small eigenvalues here) still approximately reproduces the matrix we started with. This is a powerful idea, as the example with image compression shows." }}}{SECT 1 {PARA 3 "" 0 "" {TEXT -1 52 "interpreting
the matrix product that makes this work" }}{EXCHG {PARA 0 "" 0 ""
{TEXT -1 47 "When we multiply the first two factors together" }}}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 40 "evalm(B)&*evalm(A_D)&*evalm(
inverse(B));" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 7 "we get:" }}}
{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 239 "matrix([[1, -2, 0], [1, 1, \+
1], [1, 1, -1]])&*matrix([[1, 0, 0], [0, 1/10, 0], [0, 0, 1/100]])\n=
\n1*matrix([[1, 0, 0], [1, 0, 0], [1, 0, 0]])\n+1/10*matrix([[0, -2, 0
], [0, 1, 0], [0, 1, 0]])\n+1/100*matrix([[0, 0, 0], [0, 0, 1], [0, 0,
 -1]]);" }}}{EXCHG {PARA 0 "" 0 "" {TEXT -1 439 "i.e., only the corresponding column contributes to the product. In the full product with three factors, only the corresponding row of the third factor contributes in each of these three separate terms, so the result is a sum, over the eigenvalues, of each eigenvalue times the matrix product of the corresponding column of the left factor and the corresponding row of the right factor. This is much easier to express using index notation:\n" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 8 "restart:" }}}
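The eigenvalue-by-eigenvalue decomposition and the truncation step can likewise be sketched in plain Python with exact rational arithmetic (an illustrative script; the eigenvalues, the columns of B, and the rows of inverse(B) are copied from the worksheet above): summing all three rank-one pieces reproduces A exactly, while dropping the piece belonging to the smallest eigenvalue leaves an error of order 10^(-2).

```python
from fractions import Fraction as F

# Eigenvalues, columns of B, and rows of inverse(B), as in the worksheet.
lams = [F(1), F(1, 10), F(1, 100)]
U = [(F(1), F(1), F(1)), (F(-2), F(1), F(1)), (F(0), F(1), F(-1))]
V = [(F(1, 3),  F(1, 3), F(1, 3)),
     (F(-1, 3), F(1, 6), F(1, 6)),
     (F(0),     F(1, 2), F(-1, 2))]

def reconstruct(n_terms):
    """Sum of the first n_terms rank-one pieces lambda_m * u_m v_m^T."""
    return [[sum(lams[m] * U[m][i] * V[m][j] for m in range(n_terms))
             for j in range(3)] for i in range(3)]

A = [[F(2, 5),  F(3, 10),   F(3, 10)],
     [F(3, 10), F(71, 200), F(69, 200)],
     [F(3, 10), F(69, 200), F(71, 200)]]

# All three terms reproduce A exactly, with no roundoff at all.
assert reconstruct(3) == A

# Dropping the smallest eigenvalue still gives A up to order 10^(-2):
A2 = reconstruct(2)
max_err = max(abs(A2[i][j] - A[i][j]) for i in range(3) for j in range(3))
assert max_err == F(1, 200)  # largest entry of (1/100) * u_3 v_3^T
```

This is the same truncation idea that the singular value decomposition applies to general matrices.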
{EXCHG {PARA 0 "" 0 "" {TEXT -1 19 "The matrix product " }{XPPEDIT 18
0 "B*A_D*B_inv;" "6#*(%\"BG\"\"\"%$A_DGF%%&B_invGF%" }{TEXT -1 147 " reduces to the sum of the eigenvalues multiplied by the matrix product of the corresponding column of B on the left and row of B_inv on the right:" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 93 "Sum(Sum(B[i,m]*A_D
[m,n]*B_inv[n,j],m=1..3),n=1..3)\n=Sum(B[i,m]*lambda[m]*B_inv[m,j],m=1
..3)\n;" }}}{EXCHG {PARA 0 "> " 0 "" {MPLTEXT 1 0 0 "" }}}}}{MARK "0 0
0" 59 }{VIEWOPTS 1 1 0 1 1 1803 1 1 1 1 }{PAGENUMBERS 0 1 2 33 1 1 }