Systems of Linear Equations; Brief Introduction to Laplace Transforms


Each video corresponds to two links, one of the form diff_**_** and the other of the form http://youtu.be/*******.

The first probably won't work, as it points to a file on a DVD you might not have. 

The second should work, and is played through YouTube.


We solve the system of equations

y_1 ' = 5 y_1 + 2 y_2

y_2 ' = 4 y_1 + 3 y_2.

This system can be written in matrix notation as

[y_1 ' ; y_2 '] = [5 , 2  ;  4 , 3] * [y_1;y_2]

where the right-hand side represents the matrix product [5 , 2 ; 4 , 3] * [y_1; y_2].

By the rules of matrix multiplication, which it is assumed you know, this product is calculated as

[5 , 2  ;  4 , 3] * [y_1; y_2] = [5 y_1 + 2 y_2 ; 4 y_1 + 3 y_2].

This equality, written in standard notation, is just the original pair of equations y_1 ' = 5 y_1 + 2 y_2 and y_2 ' = 4 y_1 + 3 y_2, where y_1 and y_2 are regarded as functions of t.

(If you don't know or don't remember basic matrix operations such as the matrix product, determinants, basic properties of matrices, and the solution of systems of linear equations using matrices, you need to review and, if necessary, learn them.  This is standard material in a precalculus or high school analysis course.  If you have had linear algebra, you will know this and much more; however, while precalculus is in the line of prerequisites for this course and you are responsible for the associated subject matter, linear algebra is not a prerequisite.  There are many online resources through which you can learn the basics of matrices.)

The expressions [y_1; y_2] and [y_1 ' ; y_2 ' ] are column vectors, which we will denote as

y = [y_1; y_2] and y ' = [y_1 ' ; y_2 ' ]

or, if we wish to emphasize that these are functions of t, by

y(t) = [y_1 (t) ; y_2 (t) ] and y ' (t)  = [y_1 ' (t) ; y_2 ' (t) ]

With this notation our matrix equation for the original system becomes

y ' = [5, 2; 4, 3] * y

We postulate that there is a solution to this equation of the form

y  = e^(lambda t) * [x_1; x_2],

where x_1 and x_2 are regarded as constant numbers.

Substituting this 'trial solution' into the original matrix equation we fairly easily obtain the equivalent equation

[ (5 - lambda),   2; 4, (3 - lambda) ] * [x_1; x_2] = [ 0; 0 ].

This matrix equation embodies the two linear equations (5 - lambda) x_1 + 2 x_2 = 0 and 4 x_1 + (3 - lambda) x_2 = 0.  So we have two equations in three unknowns.

The matrix equation introduces a third condition.  If the matrix A = [ (5 - lambda), 2; 4, (3 - lambda) ] were invertible, then the matrix equation

A x =  0

would have only the solution

x = A^-1 * 0  = 0 ,

meaning that x_1 and x_2 would both be zero.  For a nontrivial solution, A must be noninvertible, so that its determinant must be 0.

Our third condition is therefore that

det ( A ) = 0.

diff_06_001

http://youtu.be/Ke1jwLnY9fs

diff_06_002

http://youtu.be/Gii3Xfn77L0

The equation

[ (5 - lambda),   2; 4, (3 - lambda) ] * [x_1; x_2] = [ 0; 0 ]

has nontrivial solutions for x_1 and x_2 if, and only if, the determinant of the matrix

[ (5 - lambda),   2; 4, (3 - lambda) ]

is zero.

det ( [ (5 - lambda),   2; 4, (3 - lambda) ] ) = (5 - lambda)(3 - lambda) - 2 * 4 = lambda^2 - 8 lambda + 7

so this condition holds if

lambda^2 - 8 lambda + 7 = 0

We easily solve this equation, obtaining two solutions

lambda = 1

or

lambda = 7.

These values are called eigenvalues.
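If you have Python with numpy available, you can sanity-check these eigenvalues numerically; a minimal sketch (numpy is an assumption here, not part of the videos):

```python
# Numerical check of the eigenvalues of the coefficient matrix.
import numpy as np

A = np.array([[5.0, 2.0],
              [4.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues))  # expected: [1. 7.]
```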

Substituting eigenvalue lambda = 1 into our matrix equation

[ (5 - lambda),   2; 4, (3 - lambda) ] * [x_1; x_2] = [ 0; 0 ]

we obtain the equation

[ 4,   2; 4, 2 ] * [x_1; x_2] = [ 0; 0 ],

which translates to the two linear equations

4 x_1 + 2 x_2 = 0

4 x_1 + 2 x_2 = 0.

These equations are identical, both having solution

x_2 = -2 * x_1.

Since the equations are identical the system is degenerate and we do not obtain specific values for x_1 and x_2, as we would if the system was not degenerate.

(In general we expect the two equations to be multiples of one another, in which case they are still degenerate and still have identical solutions).

This means that we are free to pick any pair of values x_1 and x_2 that satisfy x_2 = - 2 x_1.  We pick what is probably the simplest pair, letting x_1 = 1 so that x_2 = -2.  Thus our solution is

[x_1; x_2] = [1; -2].

Recalling that our original assumption was that

y = e^(lambda * t) [x_1; x_2]

we obtain the solution

y = [y_1; y_2] = e^(lambda * t) [1; -2] = [ e^(lambda * t) ; -2 e^(lambda * t) ] = [ e^t; -2 e^t]

so that

y_1 = e^t

y_2 = -2 e^t.

If we substitute these functions into our original system

y_1 ' = 5 y_1 + 2 y_2

y_2 ' = 4 y_1 + 3 y_2.

we will verify that these functions do constitute a solution.
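This check can also be done symbolically; a minimal sketch assuming Python with sympy (not part of the videos), substituting y_1 = e^t and y_2 = -2 e^t into both equations:

```python
# Verify that y_1 = e^t, y_2 = -2 e^t satisfies the system.
import sympy as sp

t = sp.symbols('t')
y1 = sp.exp(t)
y2 = -2 * sp.exp(t)
# Both residuals should simplify to zero.
print(sp.simplify(sp.diff(y1, t) - (5*y1 + 2*y2)))  # 0
print(sp.simplify(sp.diff(y2, t) - (4*y1 + 3*y2)))  # 0
```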

diff_06_003

http://youtu.be/S-QVZm6VRI8

diff_06_004

http://youtu.be/iFJcCsGiWbU

We have solved the system for the eigenvalue lambda = 1, obtaining solution

y = [ e^t; -2 e^t] .

We also need to solve the system for eigenvalue lambda = 7.  To distinguish our two solutions we will denote our first solution with subscript 1, saying that

y_1 = [ e^t; -2 e^t]

It is important to distinguish y_1 , the column vector function, from y_1, the single-variable function that we previously used to represent the first-row entry of our column vector [y_1; y_2].  y_1 is a column vector function whereas y_1 is just a plain old function.  (There is no intent to confuse you; notation in this context is a challenge with no ideal resolution.)

If we repeat the process for eigenvalue lambda = 7, we obtain the solution

y_2 = [ e^(7 t); e^(7 t)] .

The general solution to our system is a linear combination of these solution vectors.  Using y to now represent our general solution, we have

y(t) = c_1 y_1 (t) + c_2 y_2 (t) = c_1 [ e^t; -2 e^t] + c_2 [e^(7 t); e^(7 t)] = [ c_1 e^t + c_2 e^(7 t); -2 c_1 e^t + c_2 e^(7 t) ]

Thus our solution is of the form

y(t) = [y_1; y_2] = [ c_1 e^t + c_2 e^(7 t); -2 c_1 e^t + c_2 e^(7 t) ]

with our single-variable functions y_1 and y_2 of the form

y_1 (t) = c_1 e^t + c_2 e^(7 t)

and

y_2(t) = -2 c_1 e^t + c_2 e^(7 t).

Substituting these functions into our original system of equations yields a solution, as you should verify. 

diff_06_005

http://youtu.be/hN7UOx9eba4

The process presented above consists of the following steps:

1.  Write the system as a matrix equation.

2.  Assume solution form e^(lambda * t) [ x_1; x_2], where lambda, x_1 and x_2 are regarded as constants for which we must solve.

3.  Subtract lambda from each of the diagonal elements of our 2 x 2 matrix.

4.  Set the determinant of the resulting matrix equal to zero and solve for lambda.

5.  Substitute each value of lambda into the original matrix equation and solve for one of the variables x_1 or x_2 in terms of the other.  Pick a convenient value for one of the variables, and find the corresponding value of the other.

6.  Write out the solution e^(lambda * t) [ x_1; x_2] for each value of lambda.

7.  Form an arbitrary linear combination of these solutions, which provides the general solution.

8.  Substitute the resulting general form of the solution functions y_1(t) and y_2(t) into the original equation to verify the solution.

The same steps generalize to systems of more than two equations.  The number of functions y_1, y_2, etc. matches the number of equations, as does the number of entries in each column vector.  The matrix will have a number of rows equal to the number of equations, and each row will have a number of entries equal to the number of equations.  The degree of the characteristic equation will be equal to the number of equations, so the number of eigenvalues, counting repetitions, will equal the number of equations.
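For distinct real eigenvalues the whole procedure can be compressed into a few lines of numerical code; a sketch assuming numpy (note that the constants c_k here multiply numpy's normalized eigenvectors, so they differ from the hand-computed c_1, c_2 by scale factors):

```python
# Evaluate y(t) = sum_k c_k e^(lambda_k t) x_k for y' = A y,
# assuming A has distinct real eigenvalues.
import numpy as np

def general_solution(A, c, t):
    lams, X = np.linalg.eig(A)         # eigenvalues, eigenvectors as columns
    return X @ (c * np.exp(lams * t))  # linear combination of eigen-solutions

A = np.array([[5.0, 2.0], [4.0, 3.0]])
c = np.array([1.0, 1.0])               # arbitrary constants
print(general_solution(A, c, 0.5))
```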

Eigenvalues may be real or complex.  Complex eigenvalues are easily dealt with.

Eigenvalues may be repeated.  We can and will deal with this situation, but repeated eigenvalues do complicate the process.

The equations we have seen here are homogeneous.  Nonhomogeneous equations are also important.  It will turn out that the method of undetermined coefficients becomes unmanageably complicated and is not usually used with systems, but that variation of parameters turns out to be quite simple.

diff_06_006

http://youtu.be/WasJcbDwmQs

Having seen a reasonably thorough example, we turn to an example of a situation in which systems of linear differential equations can arise, we consider the theory so we will know both our limitations and what to expect, and we develop some of the formal notation and procedures we will need to go further.

A mixing problem with two tanks, in which solution flows into and out of each of the tanks, with mixed solution also being pumped from each tank to the other, naturally leads to a system of equations.  A typical system might be

Q_1 ' = a Q_1 + b Q_2 + c

Q_2 ' = d Q_1 + f Q_2 + g

with a, b, c, d, f, g depending on concentrations of various solutions, the various pumping rates, container volumes and perhaps time.

diff_06_007

http://youtu.be/CJQTL3qOY_0

Matrix functions are of the form

A(t) = [ a_11(t), a_12(t), ..., a_1n(t); a_21(t), a_22(t), ..., a_2n(t); ... ; a_n1(t), a_n2(t), ..., a_nn(t) ]

where n is an integer.  We abbreviate this as

A(t) = [ a_ij (t) ], 1 <= i <= n, 1 <= j <= n.

The derivative of a matrix function is defined by

A ' (t) = limit [ h -> 0 ] ( (A(t + h) - A(t) ) / h ) = limit [ h -> 0 ] [ ( a_ij (t + h) - a_ij(t) ) / h ],

applying the limiting process to each entry of the matrix.  We conclude that

A ' (t) = [ a_ij ' (t) ],

the derivative of the matrix is just the matrix consisting of the derivatives of its entries.  The matrix is differentiable on any interval on which all of its entry functions are differentiable.

It is natural enough to understand that

(A + B) ' = A ' + B '

( f (t) A) ' = f ' (t) A + f(t) A ' (a product rule)

(A B) ' = A ' B + A B ' (another product rule)

integral(A(t) with respect to t) = [ integral(a_ij (t) with respect to t) ], 1 <= i <= n, 1 <= j <= n.

These and other properties are easily proved.
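A sketch illustrating the entrywise derivative and integral, assuming sympy (the particular entries are made up for the illustration):

```python
# Differentiate and integrate a matrix function entry by entry.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[sp.exp(t), sp.sin(t)],
               [t**2,      sp.cos(t)]])
print(A.diff(t))                                  # derivative, entry by entry
print(A.applyfunc(lambda a: sp.integrate(a, t)))  # antiderivative, entry by entry
```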

diff_06_008

http://youtu.be/tgZFMkSGWu8

Returning for a moment to the example of the system obtained from two-tank mixing, we see that it is easy to express the system in the form

Q ' (t) = A * Q  + b ,

with A being a 2 x 2 matrix and

Q(t) = [ Q_1(t); Q_2(t) ]

Q ' (t) = [ Q_1 ' (t); Q_2 ' (t) ]

b = [ b_1(t); b_2(t) ].

More generally we will use the notation

y_1 ' = p_11 y_1 + p_12 y_2 + ... + p_1n y_n + g_1

y_2 ' = p_21 y_1 + p_22 y_2 + ... + p_2n y_n + g_2

...

y_n ' = p_n1 y_1 + p_n2 y_2 + ... + p_nn y_n + g_n

for a system of n linear differential equations, and we will write the system in matrix notation as

y ' (t) = P * y  + g ,

with P being the n x n matrix

P(t) = [ p_11, p_12, ..., p_1n; p_21, p_22, ..., p_2n; ... ; p_n1, p_n2, ..., p_nn ]

and

y(t) = [ y_1(t); y_2(t); ...; y_n(t) ]

y ' (t) = [ y_1 ' (t); y_2 ' (t); ...; y_n ' (t) ]

g = [ g_1(t); g_2(t); ...; g_n(t) ].

We also impose an initial condition on the system:

y (t_0) = y_0

where y_0 = [ y_01; y_02; ...; y_0n ] is a constant vector (i.e., y_01, ..., y_0n are just constant numbers).

We ask under what conditions we can expect the system, with the given initial condition, to have a solution of the form y(t) = [ y_1(t); y_2(t); ...; y_n(t) ].

The answer:

It is so, provided there is an interval a < t < b containing t_0, on which the matrix function P(t) and the vector function g(t)  are continuous.  The solution is valid for all t in the interval (a, b).

diff_06_009

http://youtu.be/rddgG6bLrK4

Our equation

y ' (t) = P * y  + g , y (t_0) = y_0

is nonhomogeneous.  The corresponding homogeneous equation is

y ' (t) = P * y .

Returning to the first example of this chapter, with the equations

y_1 ' = 5 y_1 + 2 y_2

y_2 ' = 4 y_1 + 3 y_2

we obtained solutions

y_1(t) and y_2(t).

In the process of verifying that the resulting general function

c_1 y_1(t) + c_2 y_2(t)

is a solution to the original equation you likely used, without much thought, the fact that

( c_1 y_1(t) + c_2 y_2(t) ) ' = c_1 y_1 ' (t) + c_2 y_2 ' (t)

This fact is called the linearity property of derivatives. 

In general, it follows that if y_1(t) and y_2(t) are solutions to a homogeneous linear equation, so is c_1 y_1(t) + c_2 y_2(t) .

More generally, it is clear that this applies to any number of solutions.  That is, if y_1(t) , ..., y_n(t) are solutions to an equation, so is c_1 y_1(t) + ... + c_n y_n(t) .

We call

 c_1 y_1(t) + ... + c_n y_n(t)

a linear combination of the functions y_1(t) , ..., y_n(t) . 

What we have been talking about here is called the superposition principle.  The above linear combination is called a superposition of the individual functions y_1 (t), ..., y_n (t).

The set { y_1(t) , ..., y_n(t) } is called a solution set for our equation.

If every possible solution of the equation can be expressed as a linear combination of functions in the solution set, then we say that the set { y_1(t) , ..., y_n(t) } is a fundamental set of solutions.

In our original example, we obtained the functions

y_1(t) = [e^t; -2 e^t] and y_2(t) = [e^(7 t); e^(7 t)]. 

The set { y_1(t) , y_2(t)  } = { [ e^(t); -2 e^t ], [ e^(7 t); e^(7 t) ] } is a solution set for that system and for the equivalent matrix equation.

As we will see, this solution set is also a fundamental set, in that every possible solution of that system is a linear combination of those functions.

diff_06_010

http://youtu.be/BTa6x6YWRRw

Using the solution set

{ y_1(t) , y_2(t)  } = { [ e^(t); -2 e^t ], [ e^(7 t); e^(7 t) ] }

of our original system, we form the solution matrix

psi(t) = [ y_1(t) , y_2(t) ], whose columns consist of the two vector functions, so that

psi(t) = [ e^(t), e^(7 t);  -2 e^t ,  e^(7 t)] .

If the solution matrix psi(t) is formed from a fundamental set, it is called a fundamental matrix.

If we multiply the psi(t) matrix above by the constant column vector  c = [ c_1; c_2 ] we obtain

psi(t) * c =  [ e^(t), e^(7 t);  -2 e^t ,  e^(7 t)] * [ c_1; c_2 ] = [ c_1 e^t + c_2 e^(7 t); -2 c_1 e^t + c_2 e^(7 t) ]

whose rows consist of the general solution functions c_1 e^t + c_2 e^(7 t) and -2 c_1 e^t + c_2 e^(7 t). 

diff_06_011

http://youtu.be/VbSGCiC4DQw

If the matrix equation

y ' (t) = P * y

is defined on an interval (a, b) on which the P matrix is continuous, and if

{ y_1(t) , ..., y_n(t) } is a fundamental set, then

psi(t) = [ y_1(t) , ..., y_n(t) ]

is a fundamental matrix and for any constant vector c the product

psi(t) * c = c_1 y_1(t) + ... + c_n y_n(t)

is a solution to the system.

We have seen that for any t_0 in the interval (a, b) and any constant vector y_0 there is a solution to the system satisfying y( t_0) = y_0.

Since every solution is of the form psi(t) * c, there must be a constant column vector c satisfying the equation

psi(t_0) * c = y_0.

Such a c exists for every choice of y_0 only if the psi(t_0) matrix is invertible.  It follows that

det(psi(t_0)) is nonzero.

The determinant of the psi matrix is the Wronskian, so this result says that

W(t_0) is nonzero.

This sequence of reasoning ensures the following, which we state as a theorem:

 

Theorem:  If { y_1(t) , ..., y_n(t) } is a fundamental set for the matrix equation

y ' (t) = P * y , a < t < b

then for any t_0 in the interval (a, b) the corresponding Wronskian W(t_0) is nonzero.

 

This occurs for every t_0 between a and b, so that the Wronskian is nonzero for every t in the interval (a, b).

 

Recalling that { y_1(t) , ..., y_n(t) } is a fundamental set provided there exists a point t_0 at which the Wronskian is nonzero, we conclude the following:

Corollary to above Theorem:

{ y_1(t) , ..., y_n(t) } is a fundamental set if, and only if, the Wronskian is nonzero at all points of the interval (a, b).
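For the fundamental set of our running example the Wronskian is easy to compute; a minimal sympy sketch:

```python
# Wronskian of the solution matrix from our example system.
import sympy as sp

t = sp.symbols('t')
psi = sp.Matrix([[sp.exp(t),    sp.exp(7*t)],
                 [-2*sp.exp(t), sp.exp(7*t)]])
print(sp.simplify(psi.det()))  # 3*exp(8*t), nonzero for every t
```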

 

Now suppose that

{ y_1(t) , ..., y_n(t) }

is a fundamental set and psi(t) the corresponding fundamental matrix [ y_1(t) , ..., y_n(t) ].

If we multiply the psi(t) matrix by any constant matrix C, we get a solution matrix.  Furthermore, if C is invertible, the resulting solution matrix is a fundamental matrix.

There's more:

Suppose that

{ y_1_hat (t) , ..., y_n_hat (t) }

is a solution set, which may or may not be a fundamental set, and psi_hat(t) the corresponding solution matrix. 

There is then a matrix C such that

psi_hat(t) = psi(t) * C.

The matrix C is unique.  For a given fundamental matrix psi(t) and solution matrix psi_hat(t) there is one and only one such matrix C.

Furthermore, psi_hat(t) is a fundamental matrix if, and only if, C is invertible (i.e., if and only if the determinant of C is nonzero).

 

In terms of our original system, the psi matrix was seen to be

psi(t) = [ e^t, e^(7 t); -2 e^t, e^(7 t) ]

and psi(t) * [ c_1; c_2 ] was seen to be [ c_1 e^t + c_2 e^(7t); -2 c_1 e^t + c_2 e^(7 t) ].

If we multiply psi(t) by a matrix C = [ a, b ; c, d ] we get

psi_hat(t) = [ e^t, e^(7 t); -2 e^t, e^(7 t) ] * [ a, b; c, d] = [ a e^t + c e^(7 t), b e^t + d e^(7 t) ; -2 a e^t + c e^(7 t), -2 b e^t + d e^(7 t) ].

The columns of this matrix are

[ a e^t + c e^(7 t) ; -2 a e^t + c e^(7 t) ]

and

[  b e^t + d e^(7 t) ; -2 b e^t + d e^(7 t) ],

the corresponding solution set is

{ [ a e^t + c e^(7 t); -2 a e^t + c e^(7 t) ], [ b e^t + d e^(7 t); -2 b e^t + d e^(7 t) ] }

and the matrix can be written as

psi_hat(t) = [ a e^t + c e^(7 t), b e^t + d e^(7 t); -2 a e^t + c e^(7 t), -2 b e^t + d e^(7 t) ]

If, as is in fact the case, our psi(t) matrix is a fundamental matrix and our matrix C is invertible, which is the case provided (ad - b c) is nonzero, then our solution set is a fundamental set consisting of the two columns of our matrix, and the matrix psi_hat(t) is a fundamental matrix for our system of equations.

 

It might be worth noting that if psi_hat and psi are both fundamental matrices, there is a matrix which we can call D such that

psi_hat * D = psi.

It should be easy for you to verify that this matrix D is just C^-1, the matrix inverse to the matrix C for which psi * C = psi_hat.
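A sketch checking both claims on our example, assuming sympy and an arbitrarily chosen invertible C:

```python
# psi * C is again a solution matrix, and psi_hat * C**(-1) recovers psi.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[5, 2], [4, 3]])
psi = sp.Matrix([[sp.exp(t),    sp.exp(7*t)],
                 [-2*sp.exp(t), sp.exp(7*t)]])
C = sp.Matrix([[1, 2], [3, 4]])                  # det = -2, so C is invertible
psi_hat = psi * C
print(sp.simplify(psi_hat.diff(t) - A*psi_hat))  # zero matrix: columns solve y' = A y
print(sp.simplify(psi_hat * C.inv() - psi))      # zero matrix: D = C**(-1)
```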

diff_06_012

http://youtu.be/cPjv0wdLckw

If we assume a solution of the form

y = e^(lambda * t) [x_1; x_2; ...; x_n]

to the matrix equation

y ' = A y

where A is an n x n matrix, we obtain the equation

( A - lambda I ) x  = 0, where x = [x_1; x_2; ...; x_n], which can be represented as a set of n equations in x_1, x_2, ..., x_n.

These equations, along with the equation det( A - lambda I) = 0, provide us with n + 1 equations in the n+1 unknowns x_1, ..., x_n and lambda.

The equation det( A - lambda I) = 0 is an nth degree polynomial in lambda and may have up to n solutions.  These solutions might have any of the following characteristics:

All the solutions for lambda might be real and distinct, in which case the n values of lambda will turn out to give us n linearly independent vectors x = [x_1; ...; x_n], where x_1, ..., x_n are real numbers.  We call these values of lambda our eigenvalues.  For each eigenvalue the corresponding solution vector will be called its eigenvector.  Multiplying e^(lambda t) by the corresponding eigenvector x = [x_1; ...; x_n] we get solution vector e^(lambda t) x = e^(lambda t) [ x_1; ...; x_n].  The n values of lambda will give us n solution vectors, which form a linearly independent set of solutions.  This set of solutions will be our fundamental set.

Some of our solutions might occur in complex conjugate pairs.  It is still possible that all our solutions are distinct.  A typical complex conjugate pair will be of the form a +- b i, representing the two solutions a + b i and a - b i, corresponding to two values of e^(lambda t).  One value will be e^(a t) ( cos(b t) + i sin( b t) ) and the other e^(a t) ( cos(b t) - i sin( b t) ).  Our eigenvectors will again be of form x = [ x_1; ...; x_n], where now x_1, ..., x_n will typically be a mix of real and complex numbers.  The solutions will be of form e^(a t) ( cos(b t) +- i sin(b t) ) x = e^(a t) ( cos(b t) +- i sin(b t) ) [ x_1; ...; x_n].  If all the solutions, real and complex, are distinct then all the eigenvectors x will be linearly independent.

In the above two cases, where all solutions to det ( A - lambda I) = 0 are distinct, we will obtain a fundamental set without complications.

However if some of the solutions to det( A - lambda I) = 0 are repeated, the process is more complicated.  In many cases we can still obtain a linearly independent set of n eigenvectors, but sometimes this is not possible.  For a repeated eigenvalue, the number of linearly independent eigenvectors it contributes is called its geometric multiplicity, while the number of times it occurs as a root of det( A - lambda I ) = 0 is called its algebraic multiplicity.  The geometric multiplicity may be less than the algebraic multiplicity, and the algebraic multiplicities of all the eigenvalues add up to n.

diff_06_013 http://youtu.be/TO9KWu-XCV  http://youtu.be/zw-VSmLxY9A

diff_06_014 http://youtu.be/JlwrNRoJtRo

The matrix A = [3, 5; 1, 7] stands for the matrix system y ' = A y, which represents the equations y_1 ' = 3 y_1 + 5 y_2, y_2 ' = y_1 + 7 y_2.

The equation det ( A - lambda I ) = 0 yields solutions lambda_1 = 8 and lambda_2 = 2.  These are our eigenvalues.

For lambda = lambda_1 = 8:

Our equation   (A - lambda I )  x = 0  will be [-5, 5; 1, -1]  x = 0  so that x = [x_1; x_2] yields equal values x_1 and x_2. 

Arbitrarily choosing x_1 = 1, we find that x_2 = 1 so that our eigenvector is x = [1; 1] , our eigenvalue and eigenvector give us the eigenpair ( 8, [1; 1] ), and solution y_1 = e^(lambda t) [x_1; x_2] = e^(8 t) [ 1; 1 ], which can also be written as [e^(8t); e^(8t)].

For lambda = lambda_2 = 2 we obtain, by a similar process, the eigenpair (2, [-5; 1]) leading to solution y_2 = e^(lambda t) [x_1; x_2] = e^(2 t) [ -5; 1 ] = [-5 e^(2 t); e^(2 t)].

Our eigenvalues are distinct, so the corresponding solutions are linearly independent, and our fundamental set for this system is

{ y_1 , y_2 } = { [e^(8t); e^(8t)],  [-5 e^(2 t); e^(2 t)] }.

The general solution of the equation is

y = c_1 [e^(8t); e^(8t)] + c_2 [-5 e^(2 t); e^(2 t)] = [ c_1 e^(8t) - 5 c_2 e^(2t); c_1 e^(8t) + c_2 e^(2 t)]

corresponding to the two functions

y_1(t) = c_1 e^(8t) -5 c_2 e^(2t)

y_2(t) = c_1 e^(8t) + c_2 e^(2 t)

Our fundamental matrix for this system is

psi(t) = [e^(8t), -5 e^(2 t); e^(8t), e^(2 t)].

diff_06_015 http://youtu.be/ro7PkTxkzts

If we apply the initial condition  y(0) =   y_0 = [-2; 3], corresponding to y_1(0) = -2 and y_2(0) = 3, we obtain the vector equation

[ c_1 e^(8*0) - 5 c_2 e^(2*0); c_1 e^(8*0) + c_2 e^(2*0) ] = [ c_1 - 5 c_2; c_1 + c_2 ] = [-2; 3]

The solution can be obtained either by inverting the matrix psi(0) = [1, -5; 1, 1] and multiplying the result by [-2; 3], or by writing out the two simultaneous equations c_1 - 5 c_2 = -2 and c_1 + c_2 = 3, and solving by elimination or substitution.  You should be able to solve the equation either way.

Our solution is c_1 = 13/6, c_2 = 5/6, so that the specific solution to the equation defined by the matrix A = [3, 5; 1, 7] with the given initial condition is

y = [ 13/6 e^(8t) - 25/6 e^(2t); 13/6 e^(8t) + 5/6 e^(2 t)]

corresponding to the two functions

y_1(t) = 13/6 e^(8t) - 25/6 e^(2t)

y_2(t) = 13/6 e^(8t) + 5/6 e^(2 t)

You can easily verify that these solutions satisfy the given system.
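The linear system for c_1 and c_2 can also be solved numerically; a minimal numpy sketch, using psi(0) = [1, -5; 1, 1]:

```python
# Solve psi(0) * c = y_0 for the constants c_1, c_2.
import numpy as np

psi0 = np.array([[1.0, -5.0],
                 [1.0,  1.0]])
y0 = np.array([-2.0, 3.0])
print(np.linalg.solve(psi0, y0))  # expected: [13/6, 5/6] ~ [2.1667, 0.8333]
```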

diff_06_016 http://youtu.be/oiUPKP0egCg

We solve the equation defined by the matrix A = [5, 2, 3; 3, 6, -7; 0, 0, 2]. 

We obtain eigenvalues lambda_1 = 8, lambda_2 = 3 and lambda_3 = 2.

lambda_1 = 8 yields the matrix equation

[-3, 2, 3; 3, -2, -7; 0, 0, -6] * [x_1; x_2; x_3] = [0; 0; 0], corresponding to the equations

-3 x_1 + 2 x_2 + 3 x_3 = 0

3 x_1 - 2 x_2 - 7 x_3 = 0

                         -6 x_3 = 0

from which we find that x_3 = 0, and x_2 = 3/2 x_1.  Letting x_1 = 2 we obtain eigenvector

[ 2; 3; 0 ], giving us the eigenpair

( 8, [2; 3; 0] ) and solution y_1 = e^(8 t) [ 2; 3; 0] = [2 e^(8t); 3 e^(8t); 0]

Similar steps give us solutions

y_2 = e^(3 t) [ -1; 1; 0] = [-e^(3t); e^(3t); 0]

y_3 = e^(2 t) [ -13/3; 5; 1] = [-13/3 e^(2t); 5 e^(2t); e^(2t)], where the value -13/3 is forced: taking x_3 = 1 gives x_2 = 5, and substituting into either of the first two equations gives x_1 = -13/3.

Our eigenvalues are distinct, so our eigenvectors are linearly independent, and our solution set

{ [2 e^(8t); 3 e^(8t); 0],  [-e^(3t); e^(3t); 0], [-13/3 e^(2t); 5 e^(2t); e^(2t)] }

is a fundamental set and we can therefore write our general solution as

y(t) = c_1 y_1 +  c_2 y_2 +  c_3 y_3

= c_1 [2 e^(8t); 3 e^(8t); 0] + c_2 [-e^(3t); e^(3t); 0] + c_3 [-13/3 e^(2t); 5 e^(2t); e^(2t)]

= [ 2 c_1 e^(8t) - c_2 e^(3 t) - 13/3 c_3 e^(2 t); 3 c_1 e^(8 t) + c_2 e^(3 t) + 5 c_3 e^(2 t); c_3 e^(2 t) ]
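As before, the eigenvalues can be checked numerically; a minimal numpy sketch:

```python
# Numerical check of the eigenvalues of the 3 x 3 coefficient matrix.
import numpy as np

A = np.array([[5.0, 2.0,  3.0],
              [3.0, 6.0, -7.0],
              [0.0, 0.0,  2.0]])
lams, X = np.linalg.eig(A)
print(np.sort(lams))  # expected: [2. 3. 8.]
```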

diff_06_017  http://youtu.be/zw-VSmLxY9A

diff_06_018  http://youtu.be/zEuV33Lh6pk

The system defined by matrix A = [2, 5; -2, 4] leads to the eigenvalue equation det( [ 2 - lambda, 5; -2, 4 - lambda ] ) = lambda^2 - 6 lambda + 18 = 0, which has conjugate complex solutions

lambda_1 = 3 + 3 i

lambda_2 = 3 - 3 i

The first of these eigenvalues leads to the matrix equation

[ 2 - (3 + 3 i), 5; -2, 4 - (3 + 3 i) ] [x_1; x_2] = [0; 0]

with solution

x_2 = ( 1 + 3 i ) / 5 * x_1.

We verify that this solution applies to either of the two equations obtained from the above matrix equation, which is good confirmation that we have done the algebra of the complex numbers correctly.

Choosing x_1 = 1 we obtain eigenvector [ 1; (1 + 3 i) / 5 ], yielding eigenpair (3 + 3 i , [1; (1 + 3 i ) / 5 ] ) and associated solution

y_1 = e^((3 + 3 i) t) [ 1; (1 + 3 i) / 5 ] = e^(3 t) ( cos(3t) + i sin(3t) ) * [ 1; (1 + 3 i) / 5 ].

Similar calculations with lambda_2 = 3 - 3 i yield a second solution

y_2 = e^((3 - 3 i) t) [ 1; (1 - 3 i) / 5 ] =  e^(3 t) ( cos(3 t) - i sin(3 t) ) * [ 1; (1 - 3 i) / 5 ].

Our eigenvalues being distinct, our solution set

{ e^(3 t) ( cos(3t) + i sin(3t) ) * [ 1; (1 + 3 i) / 5 ],  e^(3 t) ( cos(3 t) - i sin(3 t) ) * [ 1; (1 - 3 i) / 5 ] }

is a fundamental set.

Our first solution

y_1 = e^(3 t) ( cos(3t) + i sin(3t) ) * [ 1; (1 + 3 i) / 5 ]

can be multiplied out to obtain

y_1 = [ e^(3 t) ( cos(3t) + i sin(3t) ); e^(3 t) / 5 * ( cos(3 t) - 3 sin(3 t) + i ( 3 cos(3 t) + sin(3 t) ) ) ]

This solution has a real and an imaginary part:

The real part of y_1  is

real part = [ e^(3 t) cos(3t); e^(3 t) / 5 * ( cos(3 t) - 3 sin(3 t) ) ]

and the imaginary part is

imaginary part = [ e^(3 t) sin(3t); e^(3 t) / 5 * ( 3 cos(3 t) + sin(3 t) ) ]

diff_06_019  http://youtu.be/BVh7ZObgA0k

diff_06_020 http://youtu.be/WqxiVxET-uY

We show that the real part

real part = [y_1; y_2] = [ e^(3 t) cos(3t); e^(3 t) / 5 * ( cos(3 t) - 3 sin(3 t) ) ]

of our first solution y_1 = [ e^(3 t) ( cos(3t) + i sin(3t) ); e^(3 t) / 5 * ( cos(3 t) - 3 sin(3 t) + i ( 3 cos(3 t) + sin(3 t) ) ) ]

is a solution to our original system.  Our original system was defined by matrix A = [2, 5; -2, 4], so the system is

y_1 ' = 2 y_1 + 5 y_2

y_2 ' = -2 y_1 + 4 y_2.

The real part of our solution gives us the functions

y_1 ( t ) =  e^(3 t) cos(3t)

y_2 ( t ) =  e^(3 t) / 5 * ( cos(3 t) - 3 sin(3 t) )

We can easily calculate y_1 ' and y_2 ' and substitute, along with y_1 and y_2, into the system.  The solutions do check out, as shown in the clip.

The same is true of the imaginary part.  That is, if we had done the same with

[y_1; y_2] =  [ e^(3 t) sin(3t); e^(3 t) / 5 * ( 3 cos(3 t) + sin(3 t) ) ]

the solution would have again checked out.
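Both checks can be done symbolically; a sympy sketch for the real part (the imaginary part is handled the same way):

```python
# Verify that the real part solves y' = [2, 5; -2, 4] y.
import sympy as sp

t = sp.symbols('t')
y1 = sp.exp(3*t) * sp.cos(3*t)
y2 = sp.exp(3*t) / 5 * (sp.cos(3*t) - 3*sp.sin(3*t))
print(sp.simplify(sp.diff(y1, t) - (2*y1 + 5*y2)))   # 0
print(sp.simplify(sp.diff(y2, t) - (-2*y1 + 4*y2)))  # 0
```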

diff_06_021 http://youtu.be/wXko37yCRj4

We show why in general the real and imaginary parts of a complex-valued solution must both be solutions to the original equation.  This is a simple consequence of the linearity of matrix multiplication and the derivative operation.

diff_06_022 http://youtu.be/9-pn6lp4PLA

We show furthermore that our real and imaginary parts form a fundamental set.  This is easily shown, since our solution matrix

psi(t) = [ real part, imaginary part ] = [  [ e^(3 t) cos(3t); e^(3 t) / 5 * ( cos(3 t) - 3 sin(3 t) ) ],  [ e^(3 t) sin(3t); e^(3 t) / 5 * ( 3 cos(3 t) + sin(3 t) ) ] ]

has nonzero determinant.  The determinant is easily calculated, giving us

1/5 e^(6 t) ( 3 cos^2(3 t) + cos(3t) sin(3t) - cos(3 t) sin(3 t) + 3 sin^2(3 t) ),

which very easily simplifies to just 3/5 e^(6 t), which is nonzero for all values of t.

We furthermore indicate that the resulting general solution

y(t) = c_1 * real part + c_2 * imaginary part

is equal, for appropriate values of c_1 and c_2, to our original solution

y_2 = e^( (3 - 3 i) t) [ 1; (1 - 3 i) / 5 ].

It is left to you to verify this, as a suggested exercise.

diff_06_023 http://youtu.be/ItQBcZpCWjU

We solve our system subject to the initial condition y(0) = [3; 2 i - 1 ], using our fundamental set

{  [ e^(3 t) cos(3t); e^(3 t) / 5 * ( cos(3 t) - 3 sin(3 t) ) ],  [ e^(3 t) sin(3t); e^(3 t) / 5 * ( 3 cos(3 t) + sin(3 t) ) ] }.

Assuming solution

y(t) = c_1 [ e^(3 t) cos(3t); e^(3 t) / 5 * ( cos(3 t) - 3 sin(3 t) ) ]  +  c_2 [ e^(3 t) sin(3t); e^(3 t) / 5 * ( 3 cos(3 t) + sin(3 t) ) ]

we quickly obtain

y(0) = [ c_1; 1/5 c_1 + 3/5 c_2 ] = [ 3; 2 i - 1 ]

which has solutions c_1 = 3, c_2 = 5/3 ( 2 i - 8/5 ).

Our solution is therefore

y(t) = 3 [ e^(3 t) cos(3t); e^(3 t) / 5 * ( cos(3 t) - 3 sin(3 t) ) ]  +  5/3 ( 2 i - 8/5 ) [ e^(3 t) sin(3t); e^(3 t) / 5 * ( 3 cos(3 t) + sin(3 t) ) ]

We multiply this out to put it into a simplified form in which our solution is expressed as a column vector with two components, each the sum of a real and an imaginary part.

diff_06_024 http://youtu.be/bW8T0Dd9zNM

The matrix A = [3, -2; 2, 7] yields eigenvalue equation

lambda^2 - 10 lambda + 25 = 0

which factors into

(lambda - 5)^2 = 0

producing a repeated solution

lambda_1 = 5, lambda_2 = 5.

We find an eigenpair for lambda_1, obtaining

(lambda_1, x) = (5, [1; -1] )

and solution

y_1(t) = e^(5 t) [ 1; -1].

diff_06_025  http://youtu.be/WvypwDJHdaA

We cannot follow the same procedure to find a second solution for lambda_2 = 5, since we would simply obtain the same result as before, or a multiple thereof.

We instead try solution

y_2(t) = t e^(5 t) v_1 + e^(5 t) v_2 ,

where v_1 and v_2 are constant vectors.

We easily see that

y_2 ' = e^(5 t) ( 1 + 5 t) v_1 + 5 e^(5 t) v_2

Substituting into the equation

y_2 ' = A y_2

we obtain the equation

A ( t e^(5 t) v_1 + e^(5 t) v_2 ) = e^(5 t) ( 1 + 5 t) v_1 + 5 e^(5 t) v_2

which by the linearity of matrix multiplication is equivalent to

( t A v_1 + A v_2 ) =  ( 1 + 5 t) v_1 + 5 v_2

The coefficients of t must be equal, giving us

A v_1 = 5 v_1  , or

(A - 5 I) v_1 = 0.

The other terms must be equal, so we also have

A v_2 = v_1 + 5 v_2 , or

(A - 5 I) v_2 = v_1

From our first equation (A - 5 I) v_1 = 0 we obtain solution

v_1 = [1; -1].

This is the same eigenvector we obtained for lambda_1 = 5, but it is now part of our new solution y_2(t) = t e^(5 t) v_1 + e^(5 t) v_2 .

Our second equation (A - 5 I) v_2 = v_1 becomes

(A - 5 I) v_2 = [1; -1]   

which yields matrix equation [-2, -2; 2, 2] [x_1; x_2] = [1; -1].

One solution to this equation is [x_1; x_2] = [0; -1/2] (any pair satisfying x_1 + x_2 = -1/2 will do).

Thus the second solution to our equation is

y_2 = t e^(5 t) v_1 + e^(5 t) v_2 = e^(5 t) [ t ; -t - 1/2 ]

giving us solution set

{ e^(5 t) [ 1; -1], e^(5 t) [ t ; -t - 1/2 ] }

The Wronskian is easily shown to be nonzero, so this is indeed a fundamental set and our general solution is

y(t) = c_1 e^(5 t) [ 1; -1] + c_2 e^(5 t) [ t ; -t - 1/2 ] .
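A sympy sketch verifying the second solution (using v_2 = [0; -1/2] from above):

```python
# Check y_2 = e^(5t) [t; -t - 1/2] against y' = A y for A = [3, -2; 2, 7].
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[3, -2], [2, 7]])
y2 = sp.exp(5*t) * sp.Matrix([t, -t - sp.Rational(1, 2)])
print(sp.simplify(y2.diff(t) - A*y2))  # zero vector
```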

diff_06_026 http://youtu.be/BJFXgp0X9OQ

In general if the equation

y ' = A  y

for y = [y_1; y_2]

leads to repeated eigenvalues lambda_1 = lambda_2, we can follow the usual method to obtain a solution

y_1 = e^(lambda_1 t) x_1

then use trial solution

y_2 = t e^(lambda_2 t) v_1 + e^(lambda_2 t) v_2

in a manner completely analogous to the above to obtain the equations

(A - lambda_1 I) v_1 = 0

(A - lambda_1 I) v_2 = v_1

The first equation can be solved with v_1 = x_1, the same as the eigenvector we obtained for lambda_1.

The second equation then leads to a degenerate system of equations to be solved for the components of v_2 .

diff_06_027 http://youtu.be/0q0eB7h330U

diff_06_028 http://youtu.be/QaNVaRB25D8

( Nonworking link diff_06_029 )

Suppose we have a nonhomogeneous equation

y ' = P(t) y + g(t) ,   y(t_0) = y_0.

If we have a fundamental set { y_1, y_2 } for the corresponding homogeneous equation, then our fundamental matrix

psi(t) = [ y_1, y_2 ]

is itself a solution to the matrix equation

psi ' (t) = P(t) psi(t).

Our goal will be to find

u(t) =  [u_1; u_2; ...; u_n]

such that

psi(t) u(t) solves the original equation.

This is a variation of parameters method, analogous to the variation of parameters we used with second-order linear equations.

Our equation will be

( psi(t) u(t) ) ' = P(t) (psi(t) u(t) ) + g(t)

which upon taking the derivative becomes

psi' (t) u(t) + psi(t) u ' (t) = P(t) (psi(t) u(t)) + g(t)

Since psi ' (t) u(t) = (P(t) psi(t)) u(t) this reduces to

 psi(t) u ' (t) = g(t)

with solution

 u ' (t) =  psi^-1(t) g(t)

 

diff_06_030 http://youtu.be/mxpYoqyM044

Letting  

u_0 = psi^-1(t_0) y_0

we can write our solution to

 u ' (t) =  psi^-1(t) g(t)

as

u (t) =  integral( psi^-1(s) g(s) with respect to s, s from t_0 to t) +  u_0.

Our particular solution is thus

psi(t)  u(t) = psi(t) * integral( psi^-1(s) g(s) with respect to s, s from t_0 to t) + psi(t) *  u_0.

diff_06_031 http://youtu.be/6LSINc3AEBw

We apply this method to the equation

 y ' (t) = [3, 3; 2, 4]  y (t) + [t; e^t],  y(0) = [1; 1].

Eigenvalues are lambda_1 = 1 and lambda_2 = 6 and the fundamental matrix is

psi(t) = [-3 e^t, e^(6 t); 2 e^t, e^(6t) ].

Assuming a solution of the form

psi(t) u(t), we will obtain

 u ' (t) = psi^-1(t) *  g(t)

The inverse matrix is easily found and multiplied by the function g(t) and we obtain

 u ' (t) = [ -1/5 t e^(-t) + 1/5; 2/5 t e^(-6 t) + 3/5 e^(-5 t) ].
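A sympy sketch of this computation (inverting psi and multiplying by g):

```python
# Compute u'(t) = psi^(-1)(t) g(t) for the example.
import sympy as sp

t = sp.symbols('t')
psi = sp.Matrix([[-3*sp.exp(t), sp.exp(6*t)],
                 [ 2*sp.exp(t), sp.exp(6*t)]])
g = sp.Matrix([t, sp.exp(t)])
print(sp.simplify(psi.inv() * g))
# Matrix([[-t*exp(-t)/5 + 1/5], [2*t*exp(-6*t)/5 + 3*exp(-5*t)/5]])
```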

diff_06_032 http://youtu.be/T1MiIndHoHA

We easily calculate  u_0 = psi^-1(0) * y_0 and perform the integration to obtain

u(t) = integral( psi^-1(s) g(s) with respect to s, s from 0 to t) +  u_0.

Erroneous solution:  The result is erroneously found to be

u(t) = [ 1/5 t ( e^(-t) + 1) + 1/5 e^(-t) - 1/5; 1/75 ( -5 e^(-6 t) - 9 e^(-5 t) - e^(-6 t) - 3 e^(-5 t) ) + 2749 / 2250 ]

Our corresponding solution is

y(t) = psi(t)  u(t) = [ -3 e^t, e^(6 t); 2 e^t, e^(6t) ] u = [y_1; y_2].

Correction shown in subsequent video clip:

The correct solution for the u vector is

u(t) = [ 1/5 t ( e^(-t) + 1) + 1/5 e^(-t) - 1/5; -1/90 ( 6 t + 1) e^(-6 t) - 3/25 e^(-5 t) + 509 / 450 ]

diff_06_033 http://youtu.be/j-_MtmqezC4

y(t) = psi(t) u(t) = [ -3 e^t, e^(6 t); 2 e^t, e^(6t) ] * [ 1/5 t ( e^(-t) + 1) + 1/5 e^(-t) - 1/5; -1/90 ( 6 t + 1) e^(-6 t) - 3/25 e^(-5 t) + 509 / 450 ]

= [ -3/5 (t + 1) - 3/5 t e^t + 3/5 e^t - 1/90 ( 6 t + 1) - 3/25 e^t + 509/450 e^(6 t);  2/5 (t + 1) + 2/5 t e^t - 2/5 e^t - 1/90 ( 6 t + 1) - 3/25 e^t + 509/450 e^(6 t) ] = [y_1; y_2]

with

y_1 = -3/5 (t + 1) - 3/5 t e^t + 3/5 e^t - 1/90 ( 6 t + 1) - 3/25 e^t + 509/450 e^(6 t)

y_2 = 2/5 (t + 1) + 2/5 t e^t - 2/5 e^t - 1/90 ( 6 t + 1) - 3/25 e^t + 509/450 e^(6 t)

This can be checked in the original system, which in expanded form is

y_1 ' = 3 y_1 + 3 y_2 + t

y_2 ' = 2 y_1 + 4 y_2 + e^t

You should check that the solution does satisfy this system.
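A sympy sketch of that check, substituting y_1 and y_2 into both equations:

```python
# Verify the final solution of y' = [3, 3; 2, 4] y + [t; e^t].
import sympy as sp

t = sp.symbols('t')
R = sp.Rational
y1 = (-R(3,5)*(t + 1) - R(3,5)*t*sp.exp(t) + R(3,5)*sp.exp(t)
      - R(1,90)*(6*t + 1) - R(3,25)*sp.exp(t) + R(509,450)*sp.exp(6*t))
y2 = (R(2,5)*(t + 1) + R(2,5)*t*sp.exp(t) - R(2,5)*sp.exp(t)
      - R(1,90)*(6*t + 1) - R(3,25)*sp.exp(t) + R(509,450)*sp.exp(6*t))
print(sp.simplify(sp.diff(y1, t) - (3*y1 + 3*y2 + t)))          # 0
print(sp.simplify(sp.diff(y2, t) - (2*y1 + 4*y2 + sp.exp(t))))  # 0
print(y1.subs(t, 0), y2.subs(t, 0))                             # 1 1
```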

diff_06_034 http://youtu.be/3fUkn-xCnR4

Nonworking links:  diff_06_035  diff_06_036   diff_06_037  diff_06_038

The propagator matrix for a homogeneous system

y ' = P(t) y

is defined to be the matrix psi(t) psi^-1(s), where t is regarded as our variable and s as the value of some fixed parameter.

The matrix can be represented as phi(t, s), indicating that it is a function of the variable t and the value of the chosen parameter s.  We write

phi(t, s) = psi(t) psi^-1(s).

This matrix has a number of important properties.  Among others:

Perhaps the most useful property is that if y(t) is a solution,

y(t) = phi(t, s) * y(s).

This is easily shown to be the case. 

In this sense, our matrix is said to 'propagate' the solution from t = s to arbitrary t.
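A sympy sketch of the propagator for our first example (note that phi(s, s) is the identity, as it must be):

```python
# Build phi(t, s) = psi(t) psi^(-1)(s) for the first example system.
import sympy as sp

t, s = sp.symbols('t s')
psi = sp.Matrix([[sp.exp(t),    sp.exp(7*t)],
                 [-2*sp.exp(t), sp.exp(7*t)]])
phi = sp.simplify(psi * psi.subs(t, s).inv())
print(sp.simplify(phi.subs(t, s)))  # identity matrix
```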

Applied to the system

y ' = P(t) y ,   y(t_0) = y_0

we can say that

y(t) = phi(t, t_0) * y(t_0)

= phi(t, t_0) * y_0.

As an example, the equation

y ' (t) = [2, 3; -1, 6] * y(t) with y(0) = [1; 1] has fundamental matrix

psi(t) = [ e^(3 t), e^(5 t); 3 e^(3 t), -e^(5 t) ].

psi(t) has determinant -4 e^(8 t) and inverse

psi^-1(t) = -1 / (4 e^(8t) ) * [ -e^(5t), -e^(5 t);  -3 e^(3 t), e^(3 t) ]

The propagator matrix is shown in the video clip to be as follows, though there are errors in the signs of a couple of terms (see if you can spot the errors):

phi(t, s) = [ 1/4 e^(3(t-s)) + 3/4 e^(5(t-s)), 1/4 e^(3(t-s)) + 1/4 e^(5(t-s)); 3/4 e^(3(t-s)) + 3/4 e^(5(t-s)), 3/4 e^(3(t-s)) + 1/4 e^(5(t-s)) ]

To get y(t), we would multiply phi(t, 0) by y(0):

y(t) = phi(t, 0) * y(0).

For the matrix as shown in the clip,

phi(t, 0) = [1/4 e^(3t) + 3/4 e^(5 t), 1/4 e^(3 t) + 1/4 e^(5 t); 3/4 e^(3 t) + 3/4 e^(5 t), 3/4 e^(3 t) + 1/4 e^(5 t) ]

so that

y(t) = phi(t, 0) * y_0 = [ 1/4 e^(3t) + 3/4 e^(5 t) + 1/4 e^(3 t) + 1/4 e^(5 t); 3/4 e^(3 t) + 3/4 e^(5 t) + 3/4 e^(3 t) + 1/4 e^(5 t) ]

= [1/2 e^(3 t) + e^(5 t);  3/2 e^(3 t) + e^(5 t) ].

However this is clearly in error, since for t = 0 this matrix, which should be equal to y_0 = [1; 1] reduces to [3/2; 5/2].

The problem is careless multiplication of the psi (t) and psi^-1 (s) matrices.  The errors were in the signs of some terms.

If those errors are corrected, the initial conditions will check out and the solution y(t) will be correct.

 

diff_06_039 http://youtu.be/8BdQ7_R474s

diff_06_040 http://youtu.be/4VQm2p8i4ik

 

We motivate our introduction to Laplace Transforms with an example related to the familiar equation for oscillatory motion with drag force:

m y '' + gamma y ' + k y = f(t), y(0) = y_0, y ' (0) = v_0.

Given a real system we can drive it with a known force function and measure the resulting position function y(t). 

The question is, from this information can we infer the values of m, gamma and k?

For certain common and easily applied forcing functions f(t) we could solve the equation by variation of parameters, and attempt to infer the values of m, gamma and k from the resulting solution function.  However this would prove to be difficult or impossible.

A more tractable approach is to transform the equation to a system of simpler algebraic equations which can be solved for m, gamma and k, then apply the inverse transform to obtain a usable solution.

The Laplace transform, defined by F(s) = integral ( f(t) e^(-s t) dt, t from 0 to infinity), can be used to transform a time-dependent function f(t) to an algebraic function F(s).  We will explore the basic properties of the Laplace transform, see how it can be used to solve differential equations, and see how it can allow us to infer the parameters m, gamma and k for the oscillatory system.

diff_07_001 http://youtu.be/adD1BFkU3VU

As a simple example we compute the Laplace transform of f(t) = e^(3 t).  We obtain

F(s) = integral ( e^(3 t) e^(-s t) dt, t from 0 to infinity).

The integral is easily calculated.  We get

F(s) = 1 / (s - 3), provided s > 3.

(If s <= 3 the integral is divergent and the Laplace transform is not defined.)

 

Similarly we compute the Laplace transform of the more general function f(t) = e^(alpha t).  The integral and the integration are almost identical to the above, with 3 replaced by alpha, and we get

F(s) = 1 / (s - alpha), s > alpha.

 

We call the function f(t) and its Laplace transform F(s) a 'Laplace transform pair'.

Thus

f(t) = e^(alpha t) and F(s) = 1 / (s - alpha) , s > alpha

form a Laplace transform pair.
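sympy can reproduce this pair directly; a minimal sketch:

```python
# Laplace transform of e^(alpha t).
import sympy as sp

t, s, alpha = sp.symbols('t s alpha', positive=True)
print(sp.laplace_transform(sp.exp(alpha*t), t, s, noconds=True))
# 1/(s - alpha), valid for s > alpha
```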

diff_07_002 http://youtu.be/sn-lOHu2Gz0

We ask

"Why are we doing this"?

and the related question

"What good is this"?

To begin to answer these questions, let's consider the following question:

If f(t) has Laplace transform F(s) = L(f(t)), then what is the Laplace transform of f ' (t)?

The question is fairly easy to answer.  All we need to do is take the first obvious step, then take a good look at what we have.

The first step is to apply the definition of the Laplace transform to our derivative function f ' (t):

L( f ' (t) ) = integral ( f ' (t) * e^(-s t) dt, t from 0 to infinity).

In this course we have often encountered integrals where the integrand is the product of some function and an exponential function (we have for example integrated expressions like sin(t) e^(k t) or t^2 e^(k t)).  The method of choice has been integration by parts.  So we think about how we might apply integration by parts to the present integral.

It turns out that letting u = e^(-s t) and dv = f ' (t) dt gives us du = - s e^(-s t) dt, and v = f(t).

The integral of v du will be the integral of - s * f(t) e^(- s t).  The integral of f(t) e^(-s t) is just F(s), which we assume we already know.

The details are shown in the video.  The result:

L( f ' (t) ) = s F(s) - f(0),

where F(s) is the Laplace transform of f(t).
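A concrete sympy check of this rule, using f(t) = e^(3 t) as an example (a sketch; L(f ') and s F(s) - f(0) should agree):

```python
# Check L(f') = s F(s) - f(0) for f(t) = e^(3t).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(3*t)
F = sp.laplace_transform(f, t, s, noconds=True)
Fprime = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
print(sp.simplify(Fprime - (s*F - f.subs(t, 0))))  # 0
```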

(Note:  my hands aren't really as quick as they appear to be in part of this video; there was an obvious malfunction of the camera.  Other than the rapid hand movements, the clip was not affected and the audio was fortunately intact).

diff_07_003 http://youtu.be/r3HyDCCCfyE

At this point we know that 1 / (s - alpha), s > alpha is the Laplace transform of f(t) = e^(alpha t).

We also know that

L ( f ' (t) ) = s F(s) - f(0), where F(s) is the Laplace transform of f(t).

We can express the preceding for a function y(t) and its derivative y ' (t):

L ( y ' (t) ) = s F(s) - y(0), where F(s) is the Laplace transform of y(t).

This can also be written as

L ( y ' (t) ) = s L(y) - y(0)

We will eventually know many more Laplace transform pairs, and we will easily enough develop rules for the Laplace transforms of higher derivatives.  But for now, we know enough to illustrate the application of Laplace transforms to solving differential equations:

Let's consider the equation

y ' - 2 y = e^(3 t), y(0) = y_0.

This equation can be solved using integrating factor e^(-2 t), as in the early part of the course.  Here we will solve it using what we already know about Laplace transforms.

We transform the equation, using the linearity of the Laplace transform (which follows very easily from the linearity of the integration operation), obtaining

L ( y ' ) - 2 L (y) = L ( e^(3 t) ).

This becomes

s L(y) - y_0 - 2 L(y) = 1 / (s - 3), s > 3.

We need to solve this equation for L(y).

The equation is easily rearranged to the form

(s - 2) L(y) = 1 / (s - 3) + y_0  so that

L(y) = 1 / ( (s - 2) (s - 3) ) + y_0 / (s - 2).

We can already recognize that 1 / (s - 2) is the Laplace transform of e^(2 t), which is a solution of the homogeneous equation and is therefore to be expected.  The (s - 2) and (s - 3) factors in the first denominator suggest more of the function e^(2 t), and the function e^(3 t).  We will proceed with the analysis in the next clip, and see how these functions do indeed follow from our expression for L(y).

diff_07_004 http://youtu.be/HPicAtA1DCI

The main task will be to apply partial fractions to the expression 1 / ( ( s - 2) ( s - 3) ).

We write

1 / ( ( s - 2) ( s - 3) ) = A / (s - 2) + B / (s - 3) and proceed to solve for A and B.  The details are straightforward and are shown in the clip.

We obtain A = -1, B = 1, so we can now write

L(y) = 1 / (s - 3) - 1 / (s - 2) + y_0 / (s - 2).

Knowing that the Laplace transform 1 / (s - alpha) corresponds to the function y(t) = e^(alpha t) we conclude that y(t) is

y(t) = e^(3 t) + (y_0 - 1) e^(2 t).

We check to verify that y(0) = y_0:

y(0) = e^(3 * 0) + (y_0 - 1) * e^(2 * 0) = 1 + y_0 - 1 = y_0.

Substituting our function y(t) into our original equation also leads to an identity, completing the verification of the solution.
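A sympy sketch of this verification:

```python
# Verify y(t) = e^(3t) + (y_0 - 1) e^(2t) against y' - 2y = e^(3t), y(0) = y_0.
import sympy as sp

t, y0 = sp.symbols('t y_0')
y = sp.exp(3*t) + (y0 - 1)*sp.exp(2*t)
print(sp.simplify(sp.diff(y, t) - 2*y - sp.exp(3*t)))  # 0
print(sp.simplify(y.subs(t, 0)))                       # y_0
```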

diff_07_005  http://youtu.be/S_biMSi9DHY

diff_07_006 http://youtu.be/3oE9kbtKUWs

It is not difficult to verify that

L ( f '' (t) ) = s^2 L(f) - s f(0) - f ' (0).

Applying this to the equation

m y '' + gamma y ' + k y = f(t), y(0) = 0, y ' (0) = 0

we obtain

m s^2 L(y) + gamma s L(y) + k L(y) = L(f(t))

so that

L(y) = L( f ) / (m s^2 + gamma s + k)

so that

m s^2 + gamma s + k = L(f) / L(y).

If we know the forcing function f(t) and the response function y(t), we can find L(f) and L(y), do the division, and solve the resulting equation for m, gamma and k.

diff_07_007  http://youtu.be/rMtcau7rqMA

We obtain the Laplace transforms of cos(omega t) and sin(omega t).

The transform for sin(omega t) is the integral

integral( sin(omega t) e^(-s t) dt, t from 0 to infinity).

We use integration by parts with u = sin(omega t) and dv = e^(-s t) dt, so that du = omega cos(omega t) dt and v = -1/s e^(-s t).  Our integral becomes

u v - integral(v du).

u v = -1 / s * sin(omega t) e^(-s t).  The change in this quantity between integration limits 0 and infinity is zero, since sin(0) = 0 and e^(-s t) approaches zero as t approaches infinity.

integral( v du) = integral( -omega/s cos(omega t) e^(-s t) dt )

The latter integral is integrated by parts, this time with u = -omega/s cos(omega t) and dv = e^(-s t) dt, so that du = omega^2/s sin(omega t) dt and v = -1/s e^(-s t).

The rest of the details are covered on the video clip.  We end up with

integral ( sin(omega t) e^(-s t) dt) = omega / s^2 - omega^2 / s^2 * integral ( sin(omega t) e^(-s t) dt)

so that

(1 + omega^2 / s^2) integral ( sin(omega t) e^(-s t) dt) = omega / s^2

and

integral ( sin(omega t) e^(-s t) dt) = omega / s^2 * 1 / ( (s^2 + omega^2) / s^2 ) = omega / (s^2 + omega^2).

Thus the Laplace transform of sin(omega t) is seen to be omega / (s^2 + omega^2).

We thus have the Laplace transform pair

sin(omega t),  omega / (s^2 + omega^2).

Similar operations give us the pair for the cosine function:

L(cos(omega t) ) = s / (s^2 + omega^2) so the Laplace transform pair is

cos(omega t), s / (s^2 + omega^2).
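Both pairs can be reproduced with sympy; a minimal sketch:

```python
# Laplace transforms of sin(omega t) and cos(omega t).
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)
print(sp.laplace_transform(sp.sin(w*t), t, s, noconds=True))  # omega/(omega**2 + s**2)
print(sp.laplace_transform(sp.cos(w*t), t, s, noconds=True))  # s/(omega**2 + s**2)
```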

diff_07_008 http://youtu.be/cGkuse2jlM0

We solve the equation

y '' + 4 y = cos(3 t), y(0) = y ' (0) = 0

We can easily see from previous work that the solution of the homogeneous equation will contain sines and cosines of 2 t, while the particular solution will contain sines and/or cosines of 3 t.

Let's see how this works out using Laplace transforms.

The Laplace transform of y '' is s^2 L(y) - s y (0) - y ' (0), and the transform of 4 y is just 4 L(y). 

The Laplace transform of cos(3 t) is s / (s^2 + 3^2) = s / (s^2 + 9).

Noting that y (0) = y ' (0) = 0 we obtain the transformed equation

s^2 L(y) + 4 L(y) = s / (s^2 + 9).

Factoring out L(y) and dividing both sides by s^2 + 4 we obtain

L(y) = s / ( (s^2 + 9) * (s^2 + 4) ).

Before continuing our analysis we observe that the two denominator terms imply sines or cosines of 3 t and of  2 t, as expected.

To separate our expression into terms to which we can apply our knowledge of the transform pairs, we use partial fractions:

s / ( (s^2 + 9) * (s^2 + 4) ) = (A s + B) / (s^2 + 4) + (C s + D) / (s^2 + 9).

The usual analysis, which is shown in detail on the video clips, gives us

A = 1/5, C = -1/5, B = D = 0

so that

L(y) = 1/5 * s / (s^2 + 4) - 1/5 * s / (s^2 + 9)

s / (s^2 + omega^2) is the Laplace transform of cos(omega t), so the inverse transform yields solution

y(t) = 1/5 cos( 2 t ) - 1/5 cos(3 t).

Note that y(0) = 0, since cos(0) = 1 gives us 1/5 - 1/5 = 0, and y ' (0) = 0 since sin(0) = 0.  So this solution does satisfy the initial conditions.

If we plug our y(t) solution into our original equation

y '' + 4 y = cos(3 t)

we easily see that we get an identity, verifying our solution.
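A sympy sketch of this final check:

```python
# Verify y(t) = 1/5 cos(2t) - 1/5 cos(3t) against y'' + 4y = cos(3t).
import sympy as sp

t = sp.symbols('t')
y = sp.Rational(1, 5)*sp.cos(2*t) - sp.Rational(1, 5)*sp.cos(3*t)
print(sp.simplify(sp.diff(y, t, 2) + 4*y - sp.cos(3*t)))  # 0
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))             # 0 0
```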

diff_07_009 http://youtu.be/mKLP1qpahoI