This course seeks to familiarize students with the general ideas of ordinary differential equations and linear systems of such equations, while introducing elementary notions of linear algebra which facilitate the understanding of linear differential equations, either alone or in systems. Emphasis is on first and second order differential equations, those which most frequently occur in applications.

First order differential equations provide a view of the geometric setting of differential equations through the direction field, and a taste of solution techniques through the two most commonly occurring first order types: separable and linear. Students should be able to interpret a direction field plot, recognize whether a first order differential equation is separable or linear (in either variable) or neither, apply the corresponding solution algorithms when they are appropriate and the algebra/calculus operations involved are reasonable, and understand the difference between explicit and implicit solutions.
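As a concrete illustration of the separable case (a sketch in Python rather than the Maple used in the course, with an arbitrarily chosen example equation): the separable equation *y*' = *xy* with *y*(0) = 1 has the explicit solution *y* = e^{x²/2}, which a crude Euler approximation confirms numerically.

```python
import math

def euler(f, x0, y0, x1, n):
    """Approximate y(x1) for y' = f(x, y), y(x0) = y0, using n Euler steps."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# Separable equation y' = x*y:  dy/y = x dx  =>  y = C * exp(x**2/2).
exact = math.exp(0.5 * 1.0**2)          # explicit solution at x = 1 with y(0) = 1
approx = euler(lambda x, y: x * y, 0.0, 1.0, 1.0, 100000)
print(exact, approx)                    # the two agree to several decimal places
```

The same numerical check works whether the analytic solution came out explicit or implicit, which is one way to make that distinction concrete for students.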

Students should be able to solve homogeneous second order constant coefficient differential equations and understand the different kinds of solutions which are possible, including complex characteristic values. [If the current text handles differential equations of any order in this context, emphasis should be on the less exotic possibilities, as in the method of undetermined coefficients, for example.] They should be able to use the method of undetermined coefficients for reasonable driving functions in the nonhomogeneous case, and understand the difference between the undetermined coefficients of the technique and the arbitrary constants which appear in the solution. Since the harmonically driven damped harmonic oscillator is the single most important application of this material and connects to students' intuition and prior knowledge from high school physics, it is a natural example to use. The opportunity to discuss resonance also sets the stage for real applications of eigenvalue analysis (modal analysis) in the linear system case, while the interpretation of the steady state and transient parts of the solution gives a physical setting to the mathematical distinction between the homogeneous and nonhomogeneous terms.
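For the sinusoidally driven damped oscillator *x*'' + *c x*' + *k x* = *F* cos(Ω*t*), undetermined coefficients with trial *x_p* = *a* cos(Ω*t*) + *b* sin(Ω*t*) reduces to a 2×2 linear system for *a* and *b*. A small Python check (parameter values chosen arbitrarily for illustration) confirms that the resulting steady-state solution satisfies the equation:

```python
import math

# Steady-state (particular) solution of  x'' + c x' + k x = F cos(W t)
# by undetermined coefficients: substituting the trial solution and matching
# cos and sin coefficients gives  (k - W^2) a + c W b = F,
#                                 (k - W^2) b - c W a = 0.
def steady_state(c, k, F, W):
    D = (k - W**2)**2 + (c * W)**2      # determinant of the 2x2 system
    a = F * (k - W**2) / D
    b = F * c * W / D
    return a, b

c, k, F, W = 0.4, 4.0, 1.0, 1.5         # illustrative values only
a, b = steady_state(c, k, F, W)
amplitude = math.hypot(a, b)            # amplitude response F / sqrt(D)

# Check that x_p really satisfies the equation at an arbitrary time:
t = 0.7
x   = a * math.cos(W*t) + b * math.sin(W*t)
xp  = -a * W * math.sin(W*t) + b * W * math.cos(W*t)
xpp = -W**2 * x
residual = xpp + c*xp + k*x - F*math.cos(W*t)
print(amplitude, residual)              # residual is ~0
```

Plotting `amplitude` as a function of the driving frequency `W` is then the natural route to the resonance discussion.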

In order to appreciate systems of differential equations as well as the nature of higher order linear differential equations, students must acquire some familiarity with matrix algebra and matrix techniques for solving linear systems of equations, including row reduced echelon form with backsubstitution, the determinant (without exaggerating computational techniques), and inverse matrices. Students should then digest the ideas of vector spaces, subspaces, span, linear independence and dependence, bases and dimension, with the emphasis on *R*^{n} spaces only (with perhaps a passing mention of polynomial spaces in connection with differential equations), and, while discussing higher order differential equations, the solution spaces of linear homogeneous differential equations as examples. The goal is to understand the linear aspects of the eigenvalue technique for solving linear (homogeneous) systems of first order ordinary differential equations (with diagonalizable coefficient matrices). It seems reasonable and expected that students should be able to handle complex eigenvalues and eigenvectors in this process. Optionally, some exposure to simple nonhomogeneous systems can give them insight into more complicated applications of this process.
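A minimal Python sketch of the eigenvalue technique for a 2×2 system *x*' = *Ax*, including a complex eigenvalue pair (the matrix here is an arbitrary illustrative choice, not taken from any particular text):

```python
import cmath

# Eigenvalue solution of the 2x2 linear system x' = A x, using the
# characteristic polynomial  lambda^2 - tr(A) lambda + det(A) = 0.
A = [[1.0, 2.0],
     [-2.0, 1.0]]                       # eigenvalues 1 +/- 2i

tr  = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
lam = (tr + cmath.sqrt(tr*tr - 4*det)) / 2

# Eigenvector for lam from the first row: (A[0][0] - lam) v1 + A[0][1] v2 = 0.
v = [A[0][1], lam - A[0][0]]

# Check A v = lam v, then check that x(t) = e^(lam t) v solves x' = A x:
Av = [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]
t = 0.3
x    = [cmath.exp(lam*t)*vi for vi in v]
xdot = [lam*xi for xi in x]             # d/dt e^(lam t) v = lam e^(lam t) v
Ax   = [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]
print(lam)                              # 1 + 2i
```

Taking real and imaginary parts of `x` then yields the two real solutions students need in practice.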

A valuable capstone topic is the treatment of a coupled system of harmonic oscillators, where reduction of order shows them how one handles higher order linear systems, the second order case being extremely important for dynamics. Whether or not this is required material for testing, it gives an arena where their intuition about vibrating objects helps them see this eigenvalue technique as a more concrete activity, building on the single harmonic oscillator problem.
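The coupled-oscillator capstone can be made equally concrete. For two equal masses joined by three identical unit springs (an illustrative configuration), the system is *x*'' = -*Kx*, and each eigenpair (μ, *v*) of *K* gives a normal mode *x*(*t*) = cos(√μ *t*) *v*; a short Python check:

```python
import math

# Two equal masses coupled by three identical springs:  x'' = -K x.
# Each eigenpair (mu, v) of K gives a normal mode x(t) = cos(sqrt(mu) t) v,
# the same eigenvalue problem used for first order systems.
K = [[2.0, -1.0],
     [-1.0, 2.0]]

# Eigenvalues of this symmetric 2x2 via the characteristic polynomial:
tr   = K[0][0] + K[1][1]
det  = K[0][0]*K[1][1] - K[0][1]*K[1][0]
disc = math.sqrt(tr*tr - 4*det)
mus   = [(tr - disc)/2, (tr + disc)/2]   # 1 and 3
freqs = [math.sqrt(mu) for mu in mus]    # mode frequencies 1 and sqrt(3)

# In-phase mode v = (1, 1) for mu = 1; out-of-phase v = (1, -1) for mu = 3.
t = 0.9
for mu, v in [(mus[0], (1.0, 1.0)), (mus[1], (1.0, -1.0))]:
    x   = [math.cos(math.sqrt(mu)*t)*vi for vi in v]
    xpp = [-mu*xi for xi in x]           # second derivative of cos(sqrt(mu) t) v
    Kx  = [K[0][0]*x[0] + K[0][1]*x[1], K[1][0]*x[0] + K[1][1]*x[1]]
    assert abs(xpp[0] + Kx[0]) < 1e-12 and abs(xpp[1] + Kx[1]) < 1e-12
print(freqs)
```

The in-phase and out-of-phase modes match the physical intuition about the two masses swinging together or against each other.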

**Matrix Linear Algebra Note**

In addition to supporting the linear foundations of the differential equations
studied in this course, the matrix mathematics covered here also provides many
students their only exposure to a careful treatment of solving linear systems of
equations using row reduction methods. They should thoroughly understand the
reduction process, and the backsubstitution step from the reduced matrix to the
parametrized solution of a linear system of equations *A x = b*, utilizing
technology for the row echelon reduction process once understood. Going further,
they should also understand the interpretation of a nonhomogeneous linear system *A x =
b* as the problem of finding the most general coefficients of a linear
combination of the columns of the coefficient matrix *A*
which equals the vector *b*:

*x*^{1} Col(*A*,1) + ... + *x*^{n} Col(*A*,*n*) = *b*.

Correspondingly, they should understand the interpretation of the solution of a homogeneous linear system *A x = 0* as the coefficients of all linear combinations of the columns of *A* which equal the zero vector.
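The row reduction process and the column-combination interpretation can be sketched together (Python here purely as an illustrative stand-in for the Maple workflow the course actually uses, on a hypothetical 3×3 system):

```python
# Row reduction of the augmented matrix [A | b] to solve A x = b, then a
# check of the column interpretation: x1*Col(A,1) + ... + xn*Col(A,n) = b.
def gauss_jordan_solve(A, b):
    """Solve A x = b for a square invertible A by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivoting
        M[col], M[pivot] = M[pivot], M[col]
        piv = M[col][col]
        M[col] = [m / piv for m in M[col]]          # scale pivot row to 1
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [mr - factor*mc for mr, mc in zip(M[r], M[col])]
    return [row[n] for row in M]

A = [[2.0, 1.0, -1.0],
     [-3.0, -1.0, 2.0],
     [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
x = gauss_jordan_solve(A, b)             # (2, 3, -1) up to rounding

# The solution really is the coefficient list of a column combination:
combo = [sum(x[j]*A[i][j] for j in range(3)) for i in range(3)]
print(x, combo)
```

Gauss-Jordan (full reduction) is used here in place of echelon form plus backsubstitution; the two routes reach the same parametrized solution.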

As a mathematician/physicist, I have used both linear algebra and differential equations in solving physical problems, first in an undergraduate and then in a graduate program in theoretical physics, and throughout my three-decade career of research in general relativity and mathematical cosmology. Some of the things we have taught regularly in this course I have never used in my entire career. This is not to say that they should not be taught, but it should help set some priorities in a one semester course combining two subjects which each deserve their own individual semester course.

While mathematical reform has done some nice things to the study of (mostly nonlinear) differential equations, there is no time for that here where the basics of these two topics must be covered in combination.

To give some concrete examples: for constant coefficient linear differential equations, about the only practical use of this topic is the second order case of the driven damped harmonic oscillator, where at most the constant or sinusoidal driving function cases are ever used in practice without needing more sophistication than this course has time for (Laplace transforms, etc.). The ideas of resonance and amplitude response are perhaps the most important results of this analysis to convey to students. Thus, for example, techniques which overemphasize the most general annihilator cases for arbitrarily high order are in practice wasting the students' time.

Also, one can justify spending a lot more time on many aspects of the course, but as it is packaged (differential equations with linear algebra), it seems that most priority should be given to reaching the solution of linear systems of differential equations early enough that students get real practice with both real and complex eigenvectors, and not, as I suspect often occurs, reaching the topic only at the very end of the course, when little time is left to develop familiarity with actually using the solution technique, let alone understanding some interpretational issues.

bob jantzen January 2001, updated April 2007

Only the Standard Maple should be used in this course, not Classic Maple.

All students should learn quickly how to input any combination of differential equations and initial conditions in standard prime derivative notation and how to solve them by right-click menu choice. The course should test the solution algorithms, how to interpret what the solutions mean, and what structure they have. Knowing the solution is a good check.

Similarly any matrix is easy to input from the Matrix palette and right-clicking on the output enables the student to find its row reduced form, calculate its determinant and inverse if appropriate (simply a "-1" superscript on the matrix!), and eigenvectors and eigenvalues. The LinearSolveTutor with the Gauss-Jordan reduction choice from the Student[LinearAlgebra] package (available directly from the Tutors menu) should be highlighted in learning the matrix reduction process and in solving linear systems. Matrix multiplication to confirm diagonalization of a square matrix using the matrix of eigenvectors and its inverse is trivial in standard math notation in the Maple input region.

**2011: These remarks may be outdated by attempts to improve MAT3400 over
recent years. No one has ever given me feedback, and I have not taught it.**

Instructors should resist the temptation to spend too much time on the linear
algebra topics which are only intended in this course to support the
understanding of the solution of higher order linear differential equations and
systems of first and second order linear differential equations. ** We are
failing in this regard**, since many students never really get much
practice working seriously with the latter topic: real and complex eigenvectors
in solving both first and second order linear differential equations. The
Edwards and Penney textbook is deficient in not explaining how the
eigenvectors diagonalize the coefficient matrix of these systems through a
linear change of variables, so I try to repair this weakness by giving students
practice in visualizing the new grids associated with such a linear change of
coordinates in the plane, which enables them to interpret the linear
transformation associated with the coefficient matrix and what the eigenvectors
mean in terms of a direction field plot superimposed with the eigenvectors and
their new coordinate grid. No one else does this.

If MAT3400 were successful in my opinion, it would start by reviewing, in the context of *R*^{n}, the linear algebra ideas introduced in MAT2705.

By introducing the new idea of the dot product and its associated geometry, which is not part of the MAT2705 syllabus, one can then discuss the orthogonal decomposition of the domain space of a linear transformation into the row space and null space of the coefficient matrix, and the transposed linear transformation whose matrix is the transpose matrix, so that the column space of the original matrix becomes the row space of the transposed matrix, and the null space of the transposed matrix is its orthogonal complement. These four subspaces associated with a matrix and the dot product are emphasized by Strang at MIT. Projection and least squares naturally follow, then the pseudoinverse of noninvertible or nonsquare matrices, and then the singular value decomposition.
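Least squares as orthogonal projection can be sketched very compactly: for an overdetermined *A x = b*, the normal equations *A*^T*A x = *A*^T*b* project *b* onto the column space of *A*, and the residual is orthogonal to every column. A tiny Python line fit (with made-up data points) shows this:

```python
# Least squares as projection: fit y = c0 + c1*t through four points by the
# normal equations A^T A x = A^T b, solved by hand since A^T A is only 2x2.
ts = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.9, 5.1, 7.0]               # roughly y = 1 + 2t (invented data)

# A has columns (1, 1, 1, 1) and (t values); form A^T A and A^T b directly.
n = len(ts)
S1, St, Stt = float(n), sum(ts), sum(t*t for t in ts)
Sy, Sty = sum(ys), sum(t*y for t, y in zip(ts, ys))

det = S1*Stt - St*St                     # determinant of A^T A
c0 = (Sy*Stt - St*Sty) / det             # Cramer's rule for the 2x2 system
c1 = (S1*Sty - St*Sy) / det

# The residual b - A x is orthogonal to both columns of A:
res = [y - (c0 + c1*t) for t, y in zip(ts, ys)]
print(c0, c1, sum(res), sum(t*r for t, r in zip(ts, res)))
```

The two printed sums are the dot products of the residual with the columns of *A*, and both vanish, which is exactly the projection statement.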

It would also be natural to study linear transformations more abstractly together with their matrix representations, and how a linear transformation of a vector space into itself can be interpreted by finding its eigenvectors and eigenvalues, i.e., in terms of stretching, contracting and possible reflections within eigenspaces, as well as rotations and more general complex eigenvalue transformations. The Jordan canonical form is a natural completion of this topic. Direct sums and orthogonal direct sums are also natural to discuss in this context since the eigenvalue problem decomposes a vector space into a direct sum of eigenspaces when the matrix is diagonalizable, and this is an orthogonal direct sum when the matrix is symmetric. The Jordan canonical form also decomposes a vector space into a direct sum of subspaces. Projection operations are associated with any direct sum.

With only a little effort, one can complete the eigenvector application to first order differential equation systems started in MAT2705 by talking about the matrix exponential, thus seeing the unity of mathematics in the direct generalization of the exponential function solution of a single linear constant coefficient homogeneous differential equation.
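A minimal sketch of the matrix exponential via its power series, for the standard rotation generator (chosen here because the exact answer is a rotation matrix, making the generalization of the scalar exponential solution visible):

```python
import math

# Matrix exponential for a 2x2 system x' = A x via the power series
# e^(At) = sum over k of (At)^k / k!, truncated after enough terms. For the
# rotation generator A = [[0, 1], [-1, 0]] this reproduces
# [[cos t, sin t], [-sin t, cos t]].
def mat_mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(A, t, terms=30):
    """Truncated power series for e^(A t), 2x2 only."""
    At = [[a*t for a in row] for row in A]
    result = [[1.0, 0.0], [0.0, 1.0]]    # identity = (At)^0 / 0!
    term   = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_mul(term, At)
        term = [[x/k for x in row] for row in term]   # now (At)^k / k!
        result = [[r + s for r, s in zip(rr, tr)] for rr, tr in zip(result, term)]
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]
t = 1.2
E = expm2(A, t)
print(E)   # approximately [[cos 1.2, sin 1.2], [-sin 1.2, cos 1.2]]
```

The columns of `E` are exactly the solutions of *x*' = *Ax* with the standard basis vectors as initial conditions, directly generalizing *x* = e^{at} x₀.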

A number of interesting applications of all of these mathematical tools are available.

At present little of this collection of next-step topics seems to be covered in MAT3400. However, these remarks reflect only my personal opinion as a mathematical physicist who has employed the tools of linear algebra his entire working life.

in progress March, 2008