Vector fields as first order derivative operators?

A tempting look forward:

How do we understand a linear transformation with a given square matrix A? Through eigenvector analysis, which decomposes the space into a direct sum of eigenspaces, on each of which matrix multiplication by A reduces to scalar multiplication by the corresponding eigenvalue. When a matrix group acts on a vector space as a group of linear transformations, we can understand its action by doing an eigenvector analysis to decompose the whole space into a direct sum of subspaces on each of which the matrices act simply. For a 1-dimensional Abelian matrix group, there is only one independent matrix generator in its Lie algebra, and its eigenvectors in the representation are enough to decompose the whole space on which it acts into eigenspaces. [See the Maple worksheet above.]
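As a concrete sketch of the 1-parameter Abelian case (in Python with NumPy rather than Maple), take the single generator of rotations about the z-axis acting on C^3: its eigenvectors decompose the space into three eigenspaces, and on each one every group element exp(tA) acts by the scalar e^(t*lambda):

```python
import numpy as np

# Generator of rotations about the z-axis, acting on C^3
A = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 0.]])
vals, vecs = np.linalg.eig(A)   # eigenvalues 0, +i, -i

# The rotation by angle t is exp(t A); on each eigenvector it is
# just scalar multiplication by e^(t*lambda)
t = 0.7
R = np.array([[np.cos(t), -np.sin(t), 0.],
              [np.sin(t),  np.cos(t), 0.],
              [0.,         0.,        1.]])
for lam, v in zip(vals, vecs.T):
    assert np.allclose(R @ v, np.exp(t * lam) * v)
```

The eigenvalues 0, ±i are purely imaginary because A is antisymmetric; the zero eigenspace is the rotation axis, left fixed by every rotation in the group.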

Matrices that do not commute cannot be simultaneously diagonalized, i.e., cannot have a common basis of eigenvectors, since on common eigenvectors matrix multiplication reduces to scalar multiplication, which does commute. For a matrix group generated by more than one basis matrix, where the generators don't commute, only one can be "diagonalized" (given an eigenvector basis), but the others can be chosen to have a simple behavior on each such eigenspace. The three generating "spin angular momentum" matrices S_1, S_2, S_3 of the rotation group do not commute, so only one of them can be diagonalized by an eigenvector analysis. But we can introduce the total spin angular momentum operator S^2 = S_1^2 + S_2^2 + S_3^2, which commutes with all the S_i's, and decompose the space in terms of simultaneous eigenvectors of the pair S^2, S_3.
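These claims are easy to check numerically. A minimal sketch, assuming the real antisymmetric spin-1 matrices (S_k)_ij = -epsilon_kij (one common convention), for which the Casimir S^2 has eigenvalue -l(l+1) = -2 on the whole 3-dimensional space:

```python
import numpy as np

# Spin-1 generators in the (real, antisymmetric) vector representation,
# chosen so that [S_1, S_2] = S_3 and cyclic permutations
S1 = np.array([[0., 0.,  0.], [0., 0., -1.], [0., 1., 0.]])
S2 = np.array([[0., 0.,  1.], [0., 0.,  0.], [-1., 0., 0.]])
S3 = np.array([[0., -1., 0.], [1., 0.,  0.], [0., 0., 0.]])

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(comm(S1, S2), S3)       # the generators do not commute

# Total spin: here a multiple of the identity, so it trivially
# commutes with every S_i (Casimir eigenvalue -l(l+1) with l = 1)
S_sq = S1 @ S1 + S2 @ S2 + S3 @ S3
assert np.allclose(S_sq, -2. * np.eye(3))
```

In a reducible representation S^2 is not a multiple of the identity, and its distinct eigenvalues label the irreducible pieces; that is exactly what happens for (0,2)-tensors below.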

For the representation of the rotation group on (0,2)-tensors over R^3 (i.e., matrices), this leads to the spin 0, 1, 2 subspaces of total spin angular momentum, and we can further introduce bases of each eigenspace of total spin angular momentum whose elements are also eigenvectors of S_3. These are called tensor harmonics.
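The spin 0, 1, 2 decomposition of a 3x3 matrix is just the familiar split into its trace part, antisymmetric part, and symmetric traceless part (dimensions 1 + 3 + 5 = 9), and each piece is preserved by the rotation action T -> R T R^T. A quick numerical sketch (the random matrix is just an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))          # an arbitrary (0,2)-tensor over R^3

spin0 = np.trace(T) / 3 * np.eye(3)  # trace part:              1-dimensional
spin1 = (T - T.T) / 2                # antisymmetric part:      3-dimensional
spin2 = (T + T.T) / 2 - spin0        # symmetric traceless:     5-dimensional

assert np.allclose(spin0 + spin1 + spin2, T)
assert np.isclose(np.trace(spin2), 0.)

# Each subspace is invariant under the rotation action T -> R T R^T,
# e.g. the antisymmetric piece stays antisymmetric:
t = 0.3
R = np.array([[np.cos(t), -np.sin(t), 0.],
              [np.sin(t),  np.cos(t), 0.],
              [0.,         0.,        1.]])
rotated = R @ spin1 @ R.T
assert np.allclose(rotated, -rotated.T)
```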

But wait. Suppose we want to understand how a function on the sphere deviates from rotational symmetry. The infinite-dimensional vector space of functions on the sphere (add functions to get new functions, multiply by numbers to get new functions; presto, it's a vector space) is also a representation of the rotation group under composition with the rotation, which rotates the function itself. It is the corresponding partial derivative operators (now called orbital angular momentum operators) L_1, L_2, L_3 and L^2 which represent S_1, S_2, S_3 and S^2 on that representation (the first three are the corresponding vector fields on R^3, of course tangent to the sphere; the last is related to the angular part of the Laplacian, as we will see later). If we decompose the space of functions into eigenvectors (i.e., eigenfunctions) of the pair L^2, L_3, then on each subspace the eigenvalue pairs are (-ℓ(ℓ+1), i m), where ℓ = 0, 1, 2, ... and, for each such value, the second eigenvalue takes the 2ℓ + 1 values m = -ℓ, -ℓ+1, ..., ℓ-1, ℓ. (Half-integer ℓ occurs only for spinor representations, not for functions.) The lowest eigenvalue pair (0, 0) corresponds to invariance, and as one increases ℓ, the deviation from invariance gets more and more pronounced, with more detailed structure on smaller and smaller length scales compared to the radius of the sphere. The eigenfunctions are the spherical harmonics, usually complex because we are interested in complex wave functions in quantum mechanics, but they can be taken to be real for analyzing real functions on the sphere, as Cole was doing with the temperature distribution on the surface of a star.
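A symbolic sketch of the L^2 eigenvalue claim, using SymPy's built-in spherical harmonics: applying the angular part of the Laplacian to Y_ℓm should return -ℓ(ℓ+1) times it (here ℓ = 2, m = 1, so the eigenvalue is -6; the particular ℓ, m are an arbitrary illustrative choice):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
l, m = 2, 1
Y = sp.Ynm(l, m, theta, phi).expand(func=True)   # spherical harmonic Y_lm

# Angular part of the Laplacian on the unit sphere
lap = (sp.diff(sp.sin(theta) * sp.diff(Y, theta), theta) / sp.sin(theta)
       + sp.diff(Y, phi, 2) / sp.sin(theta)**2)

# Y_lm is an eigenfunction with eigenvalue -l(l+1)
assert sp.simplify(lap / Y + l * (l + 1)) == 0
```

Changing l and m above to any other valid pair gives the same eigenvalue structure, with the same Y_lm for fixed l sharing the L^2 eigenvalue while their L_3 eigenvalues run over the 2l + 1 values of m.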

Finally, for vector or spinor or tensor fields on the sphere or in space, we can consider the combined action of spin on the indices (use the symbol S_i) and orbital angular momentum on the functional behavior under rotations (use the symbol L_i); the vector sum of the two yields the "total angular momentum" J_i = L_i + S_i, which we will later see is the Lie derivative with respect to the vector field generators of the rotations. Because rotations leave the dot product inner product invariant, all of these decompositions of the representations are orthogonal with respect to the natural inner products on the representation spaces.

So while there is not time to dwell on these exciting implications, they should motivate why we take seriously the interpretation of vector fields as first order derivative operators.
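As a final sanity check on that interpretation, here is a short SymPy sketch verifying that the orbital angular momentum operators, written purely as first order derivative operators acting on a generic function, obey the same commutation relation [L_1, L_2] = L_3 as the spin matrices (the sign conventions are again an illustrative assumption):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

# Orbital angular momentum operators as first order derivative operators
# (vector fields on R^3; signs chosen so that [L1, L2] = L3)
L1 = lambda g: z * sp.diff(g, y) - y * sp.diff(g, z)
L2 = lambda g: x * sp.diff(g, z) - z * sp.diff(g, x)
L3 = lambda g: y * sp.diff(g, x) - x * sp.diff(g, y)

# The second derivatives cancel: the commutator of two first order
# derivative operators is again a first order derivative operator
comm = sp.expand(L1(L2(f)) - L2(L1(f)))
assert sp.simplify(comm - L3(f)) == 0
```

The cancellation of all second derivative terms in the commutator is the key structural fact: it is what makes the vector fields close into a Lie algebra mirroring the matrix one.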