Solving a second-order homogeneous equation with constant coefficients using trial solution e^(r t)
We know from previous experience that the general solution to the equation
y ' = 4 y
is
y = A e^(4 t),
as can be easily verified.
We could have solved the equation by assuming a solution of the form
y = A e^(k t)
and substituting it into the equation, obtaining
k A e^(k t) = 4 A e^(k t),
which simplifies to
k = 4
which when substituted back into our assumed form y = A e^(k t) yields the solution y = A e^(4 t).
Our arbitrary constant A allows us to place one condition on the solution. For example, if we want y(0) to take the value -2, we can accommodate this as follows:
y(0) = A e^(4 * 0)
so if y(0) = -2 we have
A e^(4 * 0) = -2, which simplifies to
A = -2.
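The calculation above is easy to confirm with a computer algebra system. Here is a minimal check using sympy (an assumption of mine; the notes themselves use no software):

```python
import sympy as sp

t = sp.symbols('t')
y = -2 * sp.exp(4 * t)      # the solution y = A e^(4 t) with A = -2

residual = sp.simplify(sp.diff(y, t) - 4 * y)   # y' - 4y, should be 0
initial_value = y.subs(t, 0)                    # should be -2
```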
Now suppose we have the second-order equation
y '' = 4 y.
It doesn't take much thought to realize that one solution of this equation is y = A e^(2 t).
There is another solution as well, which may or may not be obvious from inspection. You can easily verify that y = A e^(-2 t) is also a solution to our equation.
As you know, solutions to differential equations aren't always as obvious as these. So let's begin to develop a strategy for discovering solutions.
We suspect that the solutions to equations of this nature are exponential. So when we come across an equation of this type, it doesn't hurt to use a trial solution of the form y = A e^(k t).
If we do so with our current equation y '' = 4 y we get
k^2 A e^(k t) = 4 A e^(k t)
which simplifies to
k^2 = 4.
This equation has two solutions, k = 2 and k = -2, corresponding to solutions of the form
y = A e^(2 t) and y = A e^(- 2 t).
Since A is arbitrary, we can in fact write our two solutions as
y = A e^(2 t) and y = B e^(-2 t).
We can even combine these two solutions to get
y = A e^(2 t) + B e^(- 2 t).
You can easily verify that this function is a solution to the equation y '' = 4 y, for any values of A and B.
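If you prefer to let a machine do the verification, a short sympy sketch (sympy assumed available) confirms the combined solution for symbolic A and B:

```python
import sympy as sp

t, A, B = sp.symbols('t A B')
y = A * sp.exp(2 * t) + B * sp.exp(-2 * t)

# y'' - 4y should vanish identically, for arbitrary A and B
residual = sp.simplify(sp.diff(y, t, 2) - 4 * y)
```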
A more general method for solving an equation of this type is similar to that used in our first example: Let y = A e^(k t) and solve for k.
Applying this method to our equation y '' = 4 y:
If y = A e^(k t) then y '' = k^2 A e^(k t) so our equation becomes
k^2 A e^(k t) = 4 A e^(k t).
This simplifies to just
k^2 = 4
with solutions
k = 2 and k = -2.
We write our solutions as
y_1(t) = A e^(2 t)
and
y_2(t) = B e^(- 2 t)
(Since there is no reason to use the same constant for both solutions, we write the constant for the second solution as B).
We observe that in leading us to the equation k^2 = 4 our method made it less likely that we would miss the solution k = -2.
Once more we can verify that our general solution is
y(t) = y_1(t) + y_2(t) = A e^(2 t) + B e^(-2 t).
If k is a complex number we get sines and cosines
We now apply our method to the equation
y '' = - 6 y.
Letting y = A e^(k t) we obtain the equation
k^2 A e^(kt) = -6 A e^(kt)
which simplifies to
k^2 = -6
with solutions
k = sqrt(6) i and k = -sqrt(6) i,
where i = sqrt(-1).
This yields our general solution
y(t) = A e^(sqrt(6) i t) + B e^(-sqrt(6) i t).
This is fine, but what does it mean to raise e to a power which is a multiple of i?
Digression: reasoning for e^(i theta) = cos(theta) + i sin(theta)
We recall that the function y = e^t has Taylor Series expansion
y = e^t = 1 + t + t^2 / 2! + t^3 / 3! + ...
so that y = e^(i * theta) would have expansion
y = 1 + i * theta + i^2 * theta^2 / 2! + i^3 * theta^3 / 3! + ...
= 1 + i * theta - theta^2 / 2! - i * theta^3 / 3! + ...
= (1 - theta^2 / 2! + ...) + i ( theta - theta^3 / 3! + ...)
Recall (and preferably verify) that
sin(theta) = theta - theta^3 / 3! + theta^5 / 5! - ...
and
cos(theta) = 1 - theta^2 / 2! + theta^4 / 4! - ...
If we write out additional terms of the series for e^(i theta) we find that
y = (1 - theta^2 / 2! + ...) + i ( theta - theta^3 / 3! + ...)
is just
y = cos(theta) + i sin(theta).
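The series argument can be checked numerically: a partial sum of the exponential series at z = i theta should agree with cos(theta) + i sin(theta). This sketch uses only the Python standard library (the function name is my own):

```python
from math import factorial, cos, sin

def exp_series(z, n_terms=30):
    """Partial sum of the Taylor series 1 + z + z^2/2! + z^3/3! + ..."""
    return sum(z**k / factorial(k) for k in range(n_terms))

theta = 1.3                                  # an arbitrary test angle
lhs = exp_series(1j * theta)                 # series value of e^(i theta)
rhs = complex(cos(theta), sin(theta))        # cos(theta) + i sin(theta)
err = abs(lhs - rhs)                         # should be essentially zero
```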
Applying this to our solution y = A e^(sqrt(6) i t) + B e^(-sqrt(6) i t) we get
y(t) = A ( cos(sqrt(6) t) + i sin(sqrt(6) t) ) + B ( cos(-sqrt(6) t) + i sin (-sqrt(6) t) ).
Using the fact that the cosine is an even function and the sine an odd function we can rearrange this to the form
y(t) = (A + B) cos(sqrt(6) t) + i * (A - B) sin(sqrt(6) t)
which we write in the form
y(t) = C cos(sqrt(6) t) + i * D sin(sqrt(6) t)
where
C = A + B and
D = A - B.
These equations are solvable for C and D in that we can always find values A and B which will yield our desired values of C and D.
We note that this is so even if C and D are complex numbers.
We conclude that the function
y(t) = C cos(sqrt(6) t) + i * D sin(sqrt(6) t)
is a solution to our equation y '' = - 6 y for any real or complex values of C and D.
If we let C = c_1 + c_2 i and D = d_1 + d_2 i our solution becomes
y(t) = (c_1 + c_2 i) cos(sqrt(6) t) + i * (d_1 + d_2 i) sin(sqrt(6) t)
which simplifies algebraically to
y(t) = ( c_1 cos(sqrt(6) t) - d_2 sin(sqrt(6) t) ) + i ( c_2 cos(sqrt(6) t) + d_1 sin(sqrt(6) t) ).
By basic trigonometric identities (which, incidentally, follow rather easily from the Euler identity e^(i theta) = cos(theta) + i sin(theta)), both c_1 cos(sqrt(6) t) - d_2 sin(sqrt(6) t) and c_2 cos(sqrt(6) t) + d_1 sin(sqrt(6) t) are of the form
alpha * cos( sqrt(6) t + phi )
so we can write our solution as
y(t) = alpha_1 cos(sqrt(6) t + phi_1) + i * alpha_2 cos(sqrt(6) t + phi_2).
It is easy to verify the following:
y(t) = ( c_1 cos(sqrt(6) t) - d_2 sin(sqrt(6) t) ) + i ( c_2 cos(sqrt(6) t) + d_1 sin(sqrt(6) t) ) is a solution to the equation y '' = - 6 y.
y = c_1 cos(sqrt(6) t) - d_2 sin(sqrt(6) t) and y = c_2 cos(sqrt(6) t) + d_1 sin(sqrt(6) t) are each independently solutions to the same equation.
y(t) = alpha_1 cos(sqrt(6) t + phi_1) + i * alpha_2 cos(sqrt(6) t + phi_2) is a solution to the equation y '' = - 6 y.
y(t) = alpha_1 cos(sqrt(6) t + phi_1) and y(t) = alpha_2 cos(sqrt(6) t + phi_2) are both independently solutions to the equation.
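These verifications can also be delegated to sympy (assumed available). For instance, the phase-amplitude form and the C, D form both satisfy y '' = -6 y for symbolic constants:

```python
import sympy as sp

t, alpha, phi, C, D = sp.symbols('t alpha phi C D')

y1 = alpha * sp.cos(sp.sqrt(6) * t + phi)                            # phase form
y2 = C * sp.cos(sp.sqrt(6) * t) + sp.I * D * sp.sin(sp.sqrt(6) * t)  # C, D form

# each residual y'' + 6y should simplify to zero
r1 = sp.simplify(sp.diff(y1, t, 2) + 6 * y1)
r2 = sp.simplify(sp.diff(y2, t, 2) + 6 * y2)
```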
Application to motion of a simple pendulum
A simple pendulum has mass m and length L. At position x << L with respect to equilibrium the mass experiences a restoring force F = -(m g / L) * x, directed back toward the equilibrium position.
For a freely swinging pendulum this is the net force, so the motion of the pendulum is described by the differential equation
m x '' = - (m g / L) * x.
The mass m divides out, leaving us
x '' = -(g / L) * x.
We solve this equation using the scheme developed above:
Let x = A e^(k t). Then x '' = k^2 A e^(k t) and our equation becomes
k^2 A e^(k t) = -(g / L) A e^(k t). Dividing by A e^(k t) we get
k^2 = -g / L so that
k = +-sqrt( g / L ) * i.
For notational simplicity we let omega = sqrt( g / L ) so that
k = +- omega * i.
This yields solutions
x_1(t) = A e^(omega * i * t) and
x_2(t) = B e^(-omega * i * t)
where A and B are arbitrary constants, which may be real or complex.
The general solution to our equation is therefore
x(t) = x_1(t) + x_2(t) = A e^(omega * i * t) + B e^(-omega * i * t).
Applying Euler's Identity we get
x(t) = (A + B) cos(omega * t) + i * (A - B) sin(omega * t)
which as before can be written
x(t) = C cos(omega * t) + i * D sin(omega * t)
or as
x(t) = alpha_1 cos(omega * t + phi_1) + i * alpha_2 cos(omega * t + phi_2)
where C and D are arbitrary complex constants, and alpha_1, phi_1, alpha_2 and phi_2 are arbitrary real constants.
As before the real or imaginary part of either of these solutions is also a solution.
It follows that the motion of the pendulum can be modeled by the function
x(t) = alpha cos(omega * t + phi),
where alpha and phi are arbitrary constants.
This solution is easily interpreted. Since the maximum value of the cosine function is 1, the maximum value of x(t) is alpha. So alpha is identified with the amplitude of the pendulum's motion.
The expression cos(omega t + phi) can be interpreted as the x component of a point moving about the unit circle with angular velocity omega, with the point at angular position phi when t = 0. The angular position can be chosen so that the pendulum is initially at any specified initial position between x = -alpha and x = alpha, and unless | x | = alpha its initial direction of motion can also be specified.
The motion of any simple pendulum, as represented with respect to any chosen clock, is completely characterized by its amplitude and initial phase (i.e., the angular position of the reference point at t = 0).
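As a concrete illustration (with sample values of g and L that are my own, not from the text), we can compute omega and confirm that the motion repeats after one period:

```python
from math import sqrt, pi, cos

g = 9.8     # gravitational acceleration in m/s^2 (sample value)
L = 1.0     # pendulum length in meters (sample value)

omega = sqrt(g / L)         # angular frequency, rad/s
period = 2 * pi / omega     # time for one complete oscillation

def x(t, alpha=0.1, phi=0.0):
    """Pendulum position: amplitude alpha, initial phase phi (sample values)."""
    return alpha * cos(omega * t + phi)

x0 = x(0.0)                 # initial position, equal to alpha * cos(phi)
```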
http://youtu.be/jNeNolIjWA8
Application to damped simple pendulum
The oscillation of a real pendulum is not completely free. Typically the pendulum will move through some medium, which resists its motion, with the resistance increasing with increasing pendulum speed.
One common form of the resisting force, which we will call the 'drag force', is
F_drag = -gamma * v
where v is the velocity of the pendulum and gamma is a constant that depends on the pendulum and the medium through which it moves. The negative sign indicates that the direction of the force is opposite to that of the velocity.
Since the velocity of the pendulum is the first derivative, with respect to time, of its position function, we can write the drag force as
F_drag = - gamma * x '.
The net force on this pendulum will therefore be the sum of the restoring force and the drag force, so that
F_net = -m g / L * x - gamma * x '.
Replacing F_net by m x '' we obtain the equation
m x '' = - m g / L * x - gamma * x '
which can be simplified and rearranged to the form
x '' + gamma / m * x ' + g / L * x = 0.
To solve this equation we again assume a solution of the form x = A e^(r t) (we previously used A e^(k t), but r is more standard in this context, so from this point on we'll use r rather than k).
Substituting the resulting expressions x ' = r A e^(r t) and x '' = r^2 A e^(r t) we obtain the equation
r^2 A e^(r t) + gamma / m * r A e^(r t) + g / L * A e^(r t) = 0
Dividing through by A e^(r t) our equation becomes
r^2 + gamma / m * r + g / L = 0.
We recognize this as a quadratic equation of the form a r^2 + b r + c = 0, with a = 1, b = gamma / m and c = g / L.
Using the quadratic formula we obtain solutions
r = (-gamma / m +- sqrt( gamma^2 / m^2 - 4 g / L) ) / 2,
which can also be expressed as
r = -gamma / (2 m) +- 1/2 sqrt( gamma^2 / m^2 - 4 g / L ).
We'll more fully investigate the nature of this solution later, but for now we make the following observations:
When gamma = 0 we get r = +- sqrt( g / L ) * i, which agrees with the solutions we obtained earlier for the ideal undamped pendulum.
When gamma is not zero, the nature of our solutions depends on the discriminant gamma^2 / m^2 - 4 g / L.
· If gamma^2 / m^2 - 4 g / L = 0, we have a single repeated real solution r = -gamma / (2 m). Our solution function will be x(t) = A e^(-gamma / (2 m) * t). The pendulum will approach its equilibrium position exponentially, with a constant 'half-life' (i.e., taking the same time to move twice as close to equilibrium from any point of its motion).
· If gamma^2 / m^2 - 4 g / L > 0 we have two real solutions, both of which yield negative values of r. The result is a linear combination of two exponential functions, both approaching zero, one more rapidly than the other. The resulting behavior of the pendulum, depending on its initial conditions, can be to approach the equilibrium position strictly from one side, or to 'overshoot' the equilibrium position one time then approach zero from the other side.
· If gamma^2 / m^2 - 4 g / L < 0 we have two complex-valued solutions, both with real part -gamma / (2 m) and with imaginary parts +- 1/2 sqrt( 4 g / L - gamma^2 / m^2 ).
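The three regimes can be explored numerically. The sketch below (function name and parameter values are my own samples) applies the quadratic formula to r^2 + (gamma/m) r + (g/L) = 0 and reports the discriminant:

```python
import cmath

def pendulum_roots(gamma, m, g, L):
    """Roots of r^2 + (gamma/m) r + g/L = 0 via the quadratic formula."""
    disc = (gamma / m) ** 2 - 4 * g / L
    root = cmath.sqrt(disc)                  # complex sqrt handles disc < 0
    r1 = (-gamma / m + root) / 2
    r2 = (-gamma / m - root) / 2
    return r1, r2, disc

# sample values illustrating two of the regimes
under = pendulum_roots(gamma=0.2, m=1.0, g=9.8, L=1.0)  # disc < 0: oscillatory decay
over = pendulum_roots(gamma=8.0, m=1.0, g=9.8, L=1.0)   # disc > 0: two real roots
```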
http://youtu.be/8oJqvkwweFE
Existence of solutions
In general if we have a differential equation of the form
a y '' + b y ' + c y = 0
with constant coefficients a, b and c, we can use the above method to solve it. Simply use trial solution y(t) = A e^(r t), rewrite the equation using this function, take the derivatives, and divide out the common factor A e^(r t). This results in a quadratic equation with a discriminant which can be positive, negative or zero.
The corresponding solutions of the equation will be of the types encountered above.
The coefficients of a second-order equation aren't necessarily constant. However if they aren't constant, the equation will usually be very difficult, if not impossible, to solve.
This doesn't mean that no solution exists, it just means that very often we won't be able to find the solution or express it in terms of familiar functions.
Let's consider the question of whether a solution to an equation exists.
We pretty much know from the examples we've already seen that if a, b and c are constant numbers, then a solution exists (i.e., we can always get that quadratic equation, which always has solutions, so we always get a solution). We haven't really proved that, but it's pretty clear that it will always be so.
Let's therefore consider a second-order differential equation for which a, b and c are functions of t, at least some being nonconstant functions.
Our equation will have the form
a(t) y '' + b(t) y ' + c(t) y = 0.
We also assume initial conditions of the form y(t_0) = y_0, y ' (t_0) = v_0.
On a graph of y vs. t, this corresponds to an initial point (t_0, y_0) and slope v_0. A solution curve satisfying these conditions must pass through (t_0, y_0) and have slope v_0.
We want to see under what conditions the equation has a solution y(t) satisfying these conditions.
Geometrically, we wish to see whether there exists a solution curve through (t_0, y_0) having at that point the slope v_0.
Any solution curve will be tangent at the point (t_0, y_0) to a 'slope segment' having slope v_0. We don't yet know whether the graph will be concave up, concave down, or will perhaps have a point of inflection at the point of tangency, so we don't yet have much of a clue as to the shape of the curve, but our equation will allow us to figure that out.
With the initial conditions, then, we know the values of y and y ' when t = t_0. If our functions a(t), b(t) and c(t) can also be evaluated at t = t_0, then we can write down an equation in which only y ''(t_0) is unknown. So these values y_0, v_0, t_0 should be all we need in order to find the value of y '' when t = t_0.
Specifically we know the values of the following:
t_0, the t coordinate of our initial point
y(t_0), which is equal to y_0
y ' (t_0), which is equal to v_0
so, provided a(t_0), b(t_0) and c(t_0) are defined, we can write the equation
a(t_0) y '' (t_0) + b(t_0) y ' (t_0) + c(t_0) y(t_0) = 0
in which the only value we don't know or can't calculate is y ''(t_0).
So we can solve the equation for y ''(t_0). This will give us the value of y '' at our point of tangency, which will give us a lot of information about the shape of the curve at that point.
Our solution is easily seen to be
y ''(t_0) = ( -b(t_0) v_0 - c(t_0) y_0 ) / a(t_0).
If our functions a(t_0), b(t_0) and c(t_0) are defined at this point, then everything on the right-hand side will be known, and our value of y ''(t_0) will exist, provided only that our denominator a(t_0) isn't zero.
So, given our equation and initial conditions, we can proceed as follows:
Starting from our point (t_0, y_0) we move along our initial 'slope segment' to a new point with coordinate t_0 + `dt. The slope of our segment is v_0, so our y value will change by approximately `dy = v_0 `dt.
This puts us at the point (t_1, y_1) = (t_0 + `dt, y_0 + v_0 `dt).
In moving `dt units to the right, our slope will have changed. The rate at which the slope changes is given by the second-derivative function y ''(t).
Having solved our differential equation for y ''(t_0), as seen previously, we can therefore approximate the corresponding change in y ':
On the interval from t_0 to t_0 + `dt, our value of y ' will change by about
change in y ' = y ''(t_0) * `dt
so that our slope at the new point (t_1, y_1) will be
new slope = old slope + change in slope = v_0 + y ''(t_0) * `dt.
Putting this another way,
y ' (t_1) = v_0 + y ''(t_0) * `dt, approx.
Now we can estimate the change in y if we move another `dt units from our point (t_1, y_1):
`dy = y ' (t_1) * `dt
so that our new y coordinate will be
y_2 = y_1 + `dy = y_1 + y ' (t_1) * `dt.
Our new point will thus be
(t_2, y_2) = (t_1 + `dt, y_1 + y ' (t_1) * `dt).
It is worth reminding ourselves that our value of y ' (t_1) depended on our value of y ''(t_0), which came from our differential equation. We need to remember that it was possible to get this value of y '' only because our functions a(t), b(t) and c(t) were defined at t_0, and a(t) was nonzero at t_0.
Now we have our point (t_2, y_2), and we want to project it forward through another increment `dt. All we need to do so is a slope.
To estimate our new slope, we need to go back to (t_1, y_1) and find the value of y '' (t_1), which we will use to predict the change in y ' . This will require that we go back to our differential equation again.
We will plug in our values y(t_1) = y_1 , y ' (t_1), a(t_1), b(t_1) and c(t_1), giving us the equation
a(t_1) y ''( t_1) + b(t_1) y ' (t_1) + c(t_1) y(t_1) = 0,
which we solve for y '' (t_1).
Our new slope will therefore be
new slope = old slope + change in slope
which we express as
y ' (t_2) = y ' (t_1) + y ''(t_1) * `dt, approx.
From the point (t_2, y_2) we then follow the new slope as t changes by `dt, obtaining a new point (t_3, y_3) = (t_2 + `dt, y_2 + y ' (t_2) * `dt).
Once more we see that this is possible, as long as a(t), b(t) and c(t) satisfy the usual requirements (all functions defined, the a function nonzero at t_2).
It should be clear that we can follow this process as far as we like, doing as many steps as we wish for as small an increment `dt as we choose, generating a series of points on an approximate solution curve, provided only that we don't get to a place where a(t) is zero, or where any of our functions a(t), b(t) and c(t) are not defined.
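The stepping scheme just described translates directly into code. The sketch below is a minimal implementation (the names are my own) checked against the equation y '' = 4 y, whose exact solution with y(0) = 1, y '(0) = 2 is e^(2 t):

```python
def step_solution(a, b, c, t0, y0, v0, dt, n_steps):
    """Approximate a(t) y'' + b(t) y' + c(t) y = 0 by the stepping scheme:
    solve for y'' at the current point, then project y and y' forward by dt.
    Requires a(t) nonzero along the way."""
    t, y, v = t0, y0, v0
    points = [(t, y)]
    for _ in range(n_steps):
        ypp = (-b(t) * v - c(t) * y) / a(t)  # the equation solved for y''
        y = y + v * dt                       # follow the current slope
        v = v + ypp * dt                     # update the slope
        t = t + dt
        points.append((t, y))
    return points

# y'' = 4y means a = 1, b = 0, c = -4; start on the e^(2t) solution
pts = step_solution(lambda t: 1.0, lambda t: 0.0, lambda t: -4.0,
                    t0=0.0, y0=1.0, v0=2.0, dt=0.001, n_steps=1000)
t_end, y_end = pts[-1]                       # y_end should be near e^2
```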
We could make this argument more rigorous, but that would go beyond the scope of the present course. We state the result as follows:
The equation
a(t) y '' + b(t) y ' + c(t) y = 0
with initial conditions
y(t_0) = y_0, y ' (t_0) = v_0
has a solution within the interval alpha < t < beta as long as t_0 is within this interval, a(t) is nonzero within the interval, and a(t), b(t) and c(t) are all defined everywhere within the interval.
http://youtu.be/jLGNGhC9dQQ
Examples of solutions of constant-coefficient equations
We now return to the case of the constant-coefficient equation.
One more quick summary:
If a, b and c are constants the equation
a y '' + b y ' + c y = 0 with initial condition y(t_0) = y_0, y ' (t_0) = v_0
is solved by using trial function y = A e^(r t), as has been demonstrated by considering a freely swinging pendulum, and a pendulum with a linear drag force.
The solution process leads us to a quadratic characteristic equation whose discriminant might be positive, negative or zero, as we have previously seen.
We will look at three more examples, one illustrating each of these three possibilities:
Example 1: Equation y '' + 5 y ' - 6 y = 0 with y(0) = 1 and y ' (0) = -1.
We let y = A e^(r t), obtaining the characteristic equation
r^2 + 5 r - 6 = 0
with solution set {-6, 1}.
The corresponding solutions to our equation are therefore
y_1(t) = A e^(-6 t)
and
y_2(t) = B e^t
and our general solution is
y(t) = A e^(-6 t) + B e^t.
To satisfy our initial conditions y(0) = 1 and y ' (0) = -1 we plug t = 0 into our expressions for y(t) and y ' (t).
y(t) = A e^(-6 t) + B e^t, so
y(0) = A e^0 + B e^0 = A + B.
Our initial condition on y becomes
A + B = 1.
y ' (t) = -6 A e^(-6 t) + B e^t so
y ' (0) = -6 A e^0 + B e^0 = -6 A + B.
Our initial condition on y ' is therefore
-6 A + B = -1.
Our initial conditions therefore yield two simultaneous equations in A and B:
A + B = 1
-6 A + B = -1.
These equations are easily solved. We get
A = 2/7, B = 5/7
so that our solution function is
y = A e^(-6 t) + B e^t = 2/7 e^(-6 t) + 5/7 e^t.
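A sympy check of this result (sympy assumed available):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Rational(2, 7) * sp.exp(-6 * t) + sp.Rational(5, 7) * sp.exp(t)

residual = sp.simplify(sp.diff(y, t, 2) + 5 * sp.diff(y, t) - 6 * y)  # should be 0
y0 = y.subs(t, 0)                      # should equal 1
v0 = sp.diff(y, t).subs(t, 0)          # should equal -1
```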
Example 2: Equation y '' + 5 y ' + 7 y = 0 with y (0) = 1, y ' (0) = -1.
Letting y(t) = A e^( r t ) we get characteristic equation
r^2 + 5 r + 7 = 0
with solutions
r = -5/2 + sqrt(3) / 2 * i
r = -5/2 - sqrt(3) / 2 * i
Our solutions to the differential equation will therefore be
y = A e^( (-5/2 + sqrt(3) / 2 * i) t )
and
y = B e^( (-5/2 - sqrt(3) / 2 * i) t ),
which can be written in the form
y = A e^(-5/2 t) e^(sqrt(3) / 2 * t * i)
y = B e^(-5/2 t) e^(-sqrt(3) / 2 * t * i)
Using Euler's identity e^(i theta) = cos(theta) + i sin(theta) our solutions become
y = A e^(-5/2 t) ( cos(sqrt(3)/2 * t) + i sin(sqrt(3) / 2 * t) )
y = B e^(-5/2 t) ( cos(sqrt(3)/2 * t) - i sin(sqrt(3) / 2 * t) )
so that our general solution is
y = e^(-5/2 t) * ( (A + B) cos(sqrt(3) / 2 * t) + i * (A - B) sin(sqrt(3) / 2 * t) ).
As we have previously seen this solution can be put into the form
y = e^(-5/2 t) * (c_1 cos(sqrt(3) / 2 * t) + c_2 sin(sqrt(3) / 2 * t) )
where c_1 and c_2 are arbitrary constants
or the form
y = C e^(-5/2 t) * cos( sqrt(3) / 2 * t + phi )
where C and phi are arbitrary constants.
Our initial conditions are more easily applied with the first form y = e^(-5/2 t) * (c_1 cos(sqrt(3) / 2 * t) + c_2 sin(sqrt(3) / 2 * t) ). With this form we note that
y ' = -5/2 e^(-5/2 t) * (c_1 cos(sqrt(3) / 2 * t) + c_2 sin(sqrt(3) / 2 * t) ) + e^(-5/2 t) * sqrt(3) / 2 * (-c_1 sin(sqrt(3) / 2 * t) + c_2 cos(sqrt(3) / 2 * t) ).
Now our initial conditions give us
y(0) = e^0 * (c_1 cos(0) + c_2 sin(0) ) = c_1
and
y ' (0) = -5/2 e^0 * (c_1 cos(0) + c_2 sin(0) ) + e^0 * sqrt(3) / 2 * (-c_1 sin(0) + c_2 cos(0) ) = -5/2 * c_1 + sqrt(3) / 2 * c_2
Since y(0) = 1 and y ' (0) = -1 we have the simultaneous equations
c_1 = 1
-5/2 c_1 + sqrt(3) / 2 * c_2 = -1
The solutions to this system are
c_1 = 1
c_2 = (-1 + 5/2) * 2 / sqrt(3) = 3 sqrt(3) / 3 = sqrt(3)
so our solution to the equation is
y(t) = e^(-5/2 t) ( cos(sqrt(3) / 2 * t) + sqrt(3) sin(sqrt(3) / 2 * t) ).
We note that this equation could be put into the form
y(t) = C e^(-5/2 t) cos( sqrt(3) / 2 * t + phi)
with values of C and phi determined by straightforward trigonometric identities.
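Again a sympy verification (sympy assumed available) confirms both the equation and the initial conditions:

```python
import sympy as sp

t = sp.symbols('t')
w = sp.sqrt(3) / 2
y = sp.exp(sp.Rational(-5, 2) * t) * (sp.cos(w * t) + sp.sqrt(3) * sp.sin(w * t))

residual = sp.simplify(sp.diff(y, t, 2) + 5 * sp.diff(y, t) + 7 * y)  # should be 0
y0 = sp.simplify(y.subs(t, 0))                 # should equal 1
v0 = sp.simplify(sp.diff(y, t).subs(t, 0))     # should equal -1
```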
Example 3: y '' + 4 y ' + 4 y = 0, y(0) = 1, y ' (0) = -1
Following the usual procedure we get characteristic equation
r^2 + 4 r + 4 = 0.
The discriminant of this equation is zero, so there is only one solution, which we find to be r = -2.
The factored form of the equation is (r + 2) (r + 2) = 0; each factor yields the solution r = -2 so we say that this is a 'repeated root' of the equation.
The solution
y = A e^(-2 t)
follows and is easily verified.
However this is a second-order equation, and so far we have only one solution and one arbitrary constant. We need another solution.
We will see shortly how this solution can be reasoned out, using a technique called 'reduction of order'. However for the moment we will just state the following:
If r is a repeated root of the characteristic equation for a differential equation of the form a y '' + b y' + c y = 0, then the functions
y = A e^(r t)
and
y = B t e^(r t)
are solutions, and the general solution can be written
y = A e^(r t) + B t e^(r t).
Applying this to the present equation we find that our solution is
y(t) = A e^(-2 t) + B t e^(- 2 t ).
It is easy to verify that this is a solution to the equation.
We apply the initial conditions:
y(0) = 1 yields the equation
A e^0 + 0 = 1
so that A = 1.
y ' (t) = -2 A e^(-2 t) + B e^(-2 t) - 2 B t e^(-2 t) so that
y ' (0) = -2 A + B = -1
Since A = 1, we conclude that B = 1.
Our solution is therefore
y(t) = e^(-2 t) + t e^(-2 t).
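Once more a sympy check (sympy assumed available):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.exp(-2 * t) + t * sp.exp(-2 * t)

residual = sp.simplify(sp.diff(y, t, 2) + 4 * sp.diff(y, t) + 4 * y)  # should be 0
y0 = y.subs(t, 0)                      # should equal 1
v0 = sp.diff(y, t).subs(t, 0)          # should equal -1
```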
Solution for repeated roots; reduction of order
We've seen that the 'trick' of using B t e^(-2 t) as one of our solutions worked for the example of the preceding problem.
What could have motivated such a trick?
The answer leads us to the more subtle 'trick' entitled 'reduction of order'.
The main idea is that once we have our repeated solution e^(r t) (in the case of the preceding example, y = e^(-2 t); where for now we leave off our arbitrary constant A) we look for another solution of the form u(t) e^(r t).
For the equation of the preceding example, then, we have the solution y_1(t) = e^(-2 t) and we look for a solution of the form
y_2(t) = u(t) e^(-2 t).
Our derivatives of y_2(t) are
y_2 ' = u ' e^(-2 t) + u (e^(-2 t) ) '
and
y_2 '' = u '' e^(-2 t) + u ' (e^(-2 t)) ' + u ' (e^(-2 t) ) ' + u ( e^(-2 t) ) ''.
We could of course take the indicated derivatives of e^(-2 t) but for now we leave them as written above.
Substituting into the equation y '' + 4 y ' + 4 y = 0 we get
u '' e^(-2 t) + u ' (e^(-2 t)) ' + u ' (e^(-2 t) ) ' + u ( e^(-2 t) ) '' + 4 ( u ' e^(-2 t) + u (e^(-2 t) ) ' ) + 4 u e^(-2 t) = 0,
which with application of the distributive law gives us
u '' e^(-2 t) + u ' (e^(-2 t)) ' + u ' (e^(-2 t) ) ' + u ( e^(-2 t) ) '' + 4 u ' e^(-2 t) + 4 u (e^(-2 t) ) ' + 4 u e^(-2 t) = 0.
Knowing that e^(-2 t) is a solution to the differential equation (so that (e^(-2t)) '' + 4 (e^(-2 t)) ' + 4 e^(-2 t) = 0) we can separate out the terms
u (e^(-2 t) )'' + 4 u (e^(-2 t)) ' + 4 u e^(-2 t) , which are equal to
u ( (e^(-2t)) '' + 4 (e^(-2 t)) ' + 4 e^(-2 t) ) and are thus equal to 0, leaving us with
u '' e^(-2 t) + u ' (e^(-2 t)) ' + u ' (e^(-2 t) ) ' + 4 u ' e^(-2 t) = 0.
We can now take the derivatives (e^(-2 t) ) ' to obtain
u '' e^(-2 t) - 2 u ' e^(-2 t) - 2 u ' e^(-2 t) + 4 u ' e^(-2 t) = 0.
- 2 u ' e^(-2 t) - 2 u ' e^(-2 t) + 4 u ' e^(-2 t) adds up to 0, so all we are left with is
u '' e^(-2 t) = 0.
Since e^(-2 t) is never 0, we conclude that
u '' = 0.
u '' = 0 implies that u is a linear function of t, of the form
u = c_1 t + c_2
so our solution y_2 will be
y_2 = u e^(-2t) = (c_1 t + c_2) e^(-2 t).
The general solution of our differential equation is therefore
y(t) = A e^(-2 t) + B ( c_1 t + c_2) e^(-2 t)
which can be simplified to the form
y(t) = (A + c_2 B) e^(-2 t) + c_1 B t e^(-2 t).
Since A, B, c_1 and c_2 are arbitrary constants, (A + c_2 B) and c_1 B can take any values we wish. So we're just going to call these two values A and B, allowing the arbitrary constants A and B to 'absorb' the other constant. We will then say that our solution is
y(t) = A e^(-2 t) + B t e^(-2 t)
thereby justifying what we treated as a 'trick' in our solution of the equation in Example 3.
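The whole reduction-of-order computation can be reproduced symbolically: substituting u(t) e^(-2 t) into y '' + 4 y ' + 4 y should leave nothing but u '' e^(-2 t). A sympy sketch (sympy assumed available):

```python
import sympy as sp

t = sp.symbols('t')
u = sp.Function('u')

y = u(t) * sp.exp(-2 * t)
expr = sp.expand(sp.diff(y, t, 2) + 4 * sp.diff(y, t) + 4 * y)

# every term should cancel except u'' e^(-2t)
leftover = sp.simplify(expr - sp.diff(u(t), t, 2) * sp.exp(-2 * t))
```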
Damped harmonic motion generalized; nature of solutions
We previously saw that the motion of a damped pendulum might be modeled by the solutions of the equation
m y '' + gamma y ' + k y = 0,
where k = m g / L.
More generally any mass subjected to a linear restoring force F_restoring = -k y and a damping force F_damping = -gamma y ' can be similarly modeled.
As we have seen, the nature of the solution to the equation depends on the discriminant gamma^2 - 4 m k.
Solutions might be in any of the following forms:
y(t) = A e^(lambda_1 * t) + B e^(lambda_2 * t), where lambda_1 and lambda_2 are real constants
y(t) = e^(lambda_1 * t) * (A cos(omega t) + B sin(omega t)), where omega = sqrt( 4 m k - gamma^2 ) / (2 m)
y(t) = A e^(lambda_1 * t) * cos(omega t + phi), where omega = sqrt( 4 m k - gamma^2 ) / (2 m)
y(t) = A cos(omega t + phi), where omega = sqrt(k / m)
y(t) = A cos(omega t) + B sin(omega t), where omega = sqrt(k / m)
Solving nonhomogeneous equations
The following are direct results of the familiar linearity of differential operators (e.g., in this case the fact, which should be obvious by now, that if you plug y(t) = y_1 (t) + y_2 (t) into the expression y '' - y you get the same thing as if you first plugged y_1 (t) into the expression, then plugged in y_2 (t), and added the two results; also, if you plug c y_1 (t) into the expression you get the same thing as if you had plugged in just y_1 (t), then multiplied by c).
The general solution to the equation
y '' - y = t^2
consists of the sum of the general solution y_C (t), called the characteristic solution, to the homogeneous equation
y '' - y = 0
and any solution y_P (t), called a particular solution, to the equation
y '' - y = t^2.
These statements generalize to any differential equation of the form
a y '' + b y ' + c y = g(t)
or in fact to any equation of the form
a(t) y '' + b(t) y ' + c(t) y = g(t).
diff 04 036 http://youtu.be/iRvIvumfdLk
Undetermined Coefficients
To solve the equation
y '' - y = t^2
we add the known solution to the homogeneous equation, i.e., the general characteristic solution
y_C (t) = c_1 e^t + c_2 e^(-t)
to any particular solution of the nonhomogeneous equation.
To find a particular solution of this equation we can use the general form
y_P (t) = A t^2 + B t + C,
which is motivated by knowing that y and its derivatives can combine to yield the desired expression t^2, which is an integer power of t, only if y_P consists of some linear combination of integer powers of t. We also use the fact that the highest power present in y_P will be included in the expression y_P '' - y_P, which dictates that t^2 is the highest power we need include in our expression for y_P.
Plugging a trial solution into the appropriate equation should allow us to solve for the undetermined coefficients A, B and C, yielding a specific particular solution which we add to our characteristic solution to obtain the general solution to the equation.
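Carrying out that plan for y '' - y = t^2 with sympy (assumed available) shows how the undetermined coefficients fall out:

```python
import sympy as sp

t, A, B, C = sp.symbols('t A B C')
y_p = A * t**2 + B * t + C               # trial particular solution

# plug into y'' - y - t^2 and require each power of t to vanish
eqn = sp.expand(sp.diff(y_p, t, 2) - y_p - t**2)
coeffs = sp.solve([eqn.coeff(t, k) for k in range(3)], [A, B, C])

particular = y_p.subs(coeffs)            # the resulting particular solution
```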
To solve the equation
y '' - y = cos(t)
we consider that cos(t) comes from derivatives of the functions sin(t) and cos(t), which motivates a trial solution of the form
y_P (t) = A cos(t) + B sin(t).
Once more we plug the trial solution into the equation and evaluate the coefficients A and B to obtain a specific particular solution which we add to our characteristic solution to obtain the general solution to the equation.
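The same procedure for y '' - y = cos(t), sketched in sympy (assumed available):

```python
import sympy as sp

t, A, B = sp.symbols('t A B')
y_p = A * sp.cos(t) + B * sp.sin(t)      # trial particular solution

residual = sp.expand(sp.diff(y_p, t, 2) - y_p - sp.cos(t))
# require the coefficients of cos(t) and sin(t) to vanish
sol = sp.solve([residual.coeff(sp.cos(t)), residual.coeff(sp.sin(t))], [A, B])

particular = y_p.subs(sol)               # works out to -cos(t)/2
```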
diff 04 037
http://youtu.be/kveQSIIZ10o
diff 04 038
http://youtu.be/Y9d-qBhuo10
diff 04 039 http://youtu.be/2tTdKRFYGFY
diff 04 040 http://youtu.be/QpEdRdXY93E
diff 04 041
http://youtu.be/wp6mtc4J_qU
The equation
y '' - y = 3 e^t
has nonhomogeneous part (3 e^t) which is itself a solution to the homogeneous equation, so a trial solution of form y_P = A e^t will not work. When plugged into y '' - y this expression will yield 0, not 3 e^t.
We therefore try particular solution
y_P(t) = A t e^t,
which when plugged into the equation gives us A = 3/2, yielding the particular solution y_P = 3/2 t e^t and the general solution
y(t) = c_1 e^t + c_2 e^(-t) + 3/2 t e^t.
We proceed to consider the equation
y '' - y = t^2 e^t + cos(t).
By the linearity of differential operators we can find particular solutions y_P1 (t) and y_P2 (t) to the separate equations
y '' - y = t^2 e^t
and
y '' - y = cos(t).
The sum of these particular solutions will be a particular solution to the original equation.
We have already solved the second of these equations, obtaining particular solution y_P2 (t) = -1/2 cos(t). [error note: this is erroneously written as 1/2 cos(t) in the video]
To solve the first we might think our trial solution should be of the form y_P1 = (A t^2 + B t + C) e^t, but the term C e^t is a solution to the homogeneous equation and will not provide us with any information. We therefore multiply this expression by t to get the form
y_P1 = t ( A t^2 + B t + C) e^t.
Plugging this in and evaluating the coefficients A, B and C we get
y_P1 (t) = (t^3 / 6 - t^2 / 4 + t / 4) e^t.
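This particular solution is easy to verify with sympy (assumed available):

```python
import sympy as sp

t = sp.symbols('t')
y_p1 = (t**3 / 6 - t**2 / 4 + t / 4) * sp.exp(t)

# y'' - y should reduce to t^2 e^t
residual = sp.simplify(sp.diff(y_p1, t, 2) - y_p1 - t**2 * sp.exp(t))
```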
diff 04 042
http://youtu.be/Mnrd6m0YXxM
diff 04 043
http://youtu.be/VzBS1gyrCKQ
diff 04 044 http://youtu.be/GPqLAuJawag
Undetermined coefficients don't always work. Hence variation of parameters.
The equation
y '' - y = tan(t)
defies our attempts to construct a particular solution. There are no simple functions whose second derivatives combined with those functions will yield multiples of tan(t).
The equation
y '' - t y = cos(t)
cannot be solved by the current method either. The trial solution y_P = A cos(t) + B sin(t) might seem to help, but in the end we conclude that our parameters A and B are not constants, contradicting our assumption that they are.
diff 04 045 http://youtu.be/8QfqvB_O_m8
Given a fundamental set {y_1 (t), y_2 (t)} for the homogeneous equation associated with
y '' + p(t) y ' + q(t) y = g(t),
we know that the general solution to the homogeneous equation y '' + p(t) y ' + q(t) y = 0 is of the form
y(t) = c_1 y_1(t) + c_2 y_2(t).
To obtain a solution for the nonhomogeneous equation we attempt to form a particular solution of the form
y_p (t) = u_1 (t) y_1 (t) + u_2 (t) y_2 (t).
We will later show why the method works, but the end result is that we can find a matrix equation for the derivatives u_1 ' (t) and u_2 ' (t) of the functions u_1 and u_2. If we can then integrate these derivatives we will obtain our particular solution.
The matrix equation turns out to be
[ y_1 (t) , y_2 (t); y_1 ' (t), y_2 ' (t) ] * [ u_1 ' (t); u_2 ' (t) ] = [ 0 ; g(t) ]
which is the same as the equation
W(t) * [ u_1 ' (t); u_2 ' (t) ] = [ 0 ; g(t) ]
where W(t) is the Wronskian of the fundamental set.
The solution to the equation is just
[ u_1 ' (t); u_2 ' (t) ] = W^-1 (t) * [ 0 ; g(t) ]
where W^-1 is the inverse of the Wronskian. This inverse exists because the Wronskian of a fundamental set has a nonzero determinant.
Note: I've used incorrect terminology here, which is a bit embarrassing. The Wronskian W(t) is the determinant of the matrix, not the matrix itself. This language will recur in the notes and in the videos. If you are aware of this, it shouldn't be confusing, but be careful you don't make this mistake in your work, especially outside of this course.
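The 2 x 2 system above is small enough to solve by hand, but a short sketch may help make the procedure concrete (the function `vp_derivatives` and the sample values below are my own illustration, not part of the notes):

```python
import math

def vp_derivatives(y1, y2, dy1, dy2, g, t):
    """Solve W(t) [u1'; u2'] = [0; g(t)] for u1'(t) and u2'(t)
    using the inverse of the 2x2 Wronskian matrix."""
    a, b = y1(t), y2(t)      # first row of W: y1, y2
    c, d = dy1(t), dy2(t)    # second row of W: y1', y2'
    det = a * d - b * c      # the Wronskian (determinant); nonzero for a fundamental set
    # W^-1 [0; g] = (1/det) [d, -b; -c, a] [0; g]
    return -b * g(t) / det, a * g(t) / det

# fundamental set {e^t, e^(-t)} for y'' - y = 0, with g(t) = tan(t)
u1p, u2p = vp_derivatives(math.exp, lambda t: math.exp(-t),
                          math.exp, lambda t: -math.exp(-t),
                          math.tan, 1.0)
assert abs(u1p - 0.5 * math.exp(-1.0) * math.tan(1.0)) < 1e-12
assert abs(u2p + 0.5 * math.exp(1.0) * math.tan(1.0)) < 1e-12
```

The asserts confirm the closed forms u_1 ' = 1/2 e^(-t) tan(t) and u_2 ' = -1/2 e^t tan(t) derived below.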
diff 04 046 http://youtu.be/Ip6MJwaOgEM
We review how to form the inverse 1 / (a d - b c) * [ d, -b; -c, a ] of the 2 x 2 matrix [a, b; c, d], and apply this to find the inverse
W^-1 (t) = 1 / (y_1 y_2 ' - y_2 y_1 ' ) * [ y_2 ', -y_2; -y_1 ', y_1 ]
of the 2 x 2 Wronskian matrix [ y_1, y_2; y_1 ', y_2 '].
diff 04 047 http://youtu.be/75Gu6oZic3o
We apply the above method to obtain the solution for u_1 ' and u_2 ' to solve the equation
y '' - y = tan(t).
Our Wronskian is
[e^t, e^(-t); e^t, -e^(-t)]
and its inverse is
1 / (- e^t * e^(-t) - e^(-t) * e^t) * [ -e^(-t), -e^(-t); -e^t, e^t ]
so that
[u_1 '; u_2 '] = 1 / (-2) * [ -e^(-t), -e^(-t); -e^t, e^t ] * [0; tan(t)]
yielding
u_1 ' = 1/2 e^(-t) tan(t)
u_2 ' = -1/2 e^t tan(t).
If we can integrate these expressions, our particular solution will be
y_P (t) = u_1(t) e^t + u_2(t) e^(-t).
Unfortunately, in this case neither substitution nor integration by parts works for either of these integrals.
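Even without elementary antiderivatives, the integrals can be evaluated numerically. As an illustration (my own sketch; the quadrature routine, step sizes and test point are arbitrary choices), we can integrate u_1 ' and u_2 ' with Simpson's rule and check that the resulting y_P satisfies y '' - y = tan(t):

```python
import math

def simpson(f, a, b, n=400):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def u1p(s): return 0.5 * math.exp(-s) * math.tan(s)
def u2p(s): return -0.5 * math.exp(s) * math.tan(s)

def y_p(t):
    # particular solution built from numerically integrated u1, u2
    u1 = simpson(u1p, 0.0, t)
    u2 = simpson(u2p, 0.0, t)
    return u1 * math.exp(t) + u2 * math.exp(-t)

t, h = 0.7, 1e-3
residual = (y_p(t + h) - 2 * y_p(t) + y_p(t - h)) / h**2 - y_p(t) - math.tan(t)
assert abs(residual) < 1e-3, residual
```

The residual of the differential equation is tiny at the test point, confirming that variation of parameters produced a genuine particular solution even though we cannot write it in closed form.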
diff 04 048 http://youtu.be/4C5tbrshfXk
We apply the above method to the solution of the equation
y '' - y = cos(t),
for which we already know that a particular solution is
y_P (t) = -1/2 cos(t).
The same steps followed above yield the equations
u_1 ' = 1/2 e^(-t) cos(t)
u_2 ' = -1/2 e^t cos(t).
Both of these functions can be integrated, and we find that
u_1 = integral( u_1 ' (t) dt) = int( 1/2 e^(-t) cos(t) dt) = 1/4 e^(-t) ( sin(t) - cos(t))
u_2 = integral( u_2 ' (t) dt) = int( -1/2 e^(t) cos(t) dt) = -1/4 e^(t) ( sin(t) + cos(t))
so that our particular solution is
y_P(t) = u_1 y_1 + u_2 y_2
= 1/4 e^(-t) ( sin(t) - cos(t)) * e^t - 1/4 e^t ( sin(t) + cos(t) ) * e^(-t)
= -1/2 cos(t),
the same as the particular solution found previously using undetermined coefficients.
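A quick numerical check (my own sketch; the test points are arbitrary) confirms both the antiderivative for u_1 and the final simplification to -1/2 cos(t):

```python
import math

def u1(t): return 0.25 * math.exp(-t) * (math.sin(t) - math.cos(t))
def u2(t): return -0.25 * math.exp(t) * (math.sin(t) + math.cos(t))

def y_p(t):
    # y_P = u1 y1 + u2 y2 with y1 = e^t, y2 = e^(-t)
    return u1(t) * math.exp(t) + u2(t) * math.exp(-t)

h = 1e-6
for t in [0.0, 0.5, 1.3, 2.9]:
    # the combination collapses to -1/2 cos(t)
    assert abs(y_p(t) + 0.5 * math.cos(t)) < 1e-12
    # check u1' = 1/2 e^(-t) cos(t) by central differences
    assert abs((u1(t + h) - u1(t - h)) / (2 * h)
               - 0.5 * math.exp(-t) * math.cos(t)) < 1e-6
```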
diff 04 049
http://youtu.be/IDEoXXzNfZs
diff 04 050
http://youtu.be/_BdHiLH9yX8
Nonhomogeneous equation for driven mass-on-spring system
For a low-amplitude pendulum or an ideal mass-on-spring system we obtain the force rule
F_net = - k x
so that
m x '' = - k x
and x '' + (k/m) x = 0
with fundamental set
{ sin(omega_0 t), cos( omega_0 t) },
where omega_0 = sqrt( k/m ).
This leads to general solution
x(t) = c_1 cos(omega_0 * t) + c_2 sin(omega_0 * t),
indicating an oscillating system with natural angular frequency omega_0 = sqrt(k/m).
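As a sanity check (a sketch with arbitrary sample values of k, m, c_1 and c_2, not taken from the notes), any such combination of sine and cosine should satisfy x '' + (k/m) x = 0:

```python
import math

k, m = 3.0, 0.5                 # hypothetical spring constant and mass
omega0 = math.sqrt(k / m)       # natural angular frequency

def x(t):
    # general solution with sample constants c1 = 1.2, c2 = 0.7
    return 1.2 * math.cos(omega0 * t) + 0.7 * math.sin(omega0 * t)

h = 1e-4
for t in [0.0, 0.9, 2.4]:
    # central-difference second derivative
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    assert abs(xpp + (k / m) * x(t)) < 1e-3
```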
diff 04 051 http://youtu.be/h8zZnZ9E4xA
If we drive our oscillator with a force function F(t) then our equation becomes
F_net = - k x + F(t)
leading to the form
x '' + (k/m) x = F(t) / m.
The homogeneous equation is as in the preceding, so the solution of the homogeneous equation is
x(t) = c_1 cos(omega_0 * t) + c_2 sin(omega_0 * t).
diff 04 052 http://youtu.be/cmD343wH-EY
If the pendulum is driven by a force function
F(t) = F cos( omega_1 t)
where omega_1 differs from omega_0, we get the equation
x '' + (k/m) x = F cos(omega_1 t) / m.
Our particular solution can be found using the trial solution
x_P (t) = A cos(omega_1 t) + B sin(omega_1 t).
Substituting this solution into our equation we find that our constants A and B are
A = F / m / (omega_0^2 - omega_1^2)
and
B = 0
leading to particular solution
x_P (t) = (F / m) / (omega_0^2 - omega_1^2) * cos(omega_1 t)
and general solution
x(t) = c_1 cos(omega_0 * t) + c_2 sin(omega_0 * t) + (F / m) / (omega_0^2 - omega_1^2) * cos(omega_1 t).
Subjecting this solution to the initial conditions
x(0) = 0
x ' (0) = v_0
we obtain
c_1 = (-F/m) / (omega_0 ^ 2 - omega_1 ^ 2) and c_2 = v_0 / omega_0
so that
x(t) = v_0 / omega_0 * sin(omega_0 t) + (F/m) / (omega_0 ^ 2 - omega_1 ^ 2) * ( cos(omega_1 t) - cos(omega_0 t) )
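We can check this solution numerically (a sketch; the values of m, F, omega_0, omega_1 and v_0 below are arbitrary sample choices) by confirming that it satisfies both the differential equation and the initial conditions:

```python
import math

m, F = 1.0, 2.0
w0, w1, v0 = 3.0, 2.0, 0.5
C = (F / m) / (w0**2 - w1**2)   # amplitude of the particular solution

def x(t):
    # candidate solution with x(0) = 0, x'(0) = v0
    return v0 / w0 * math.sin(w0 * t) + C * (math.cos(w1 * t) - math.cos(w0 * t))

h = 1e-4
for t in [0.0, 1.1, 3.7]:
    # residual of x'' + w0^2 x = (F/m) cos(w1 t)
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    assert abs(xpp + w0**2 * x(t) - (F / m) * math.cos(w1 * t)) < 1e-3
assert abs(x(0.0)) < 1e-12                      # x(0) = 0
assert abs((x(h) - x(-h)) / (2 * h) - v0) < 1e-6  # x'(0) = v0
```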
diff 04 053 http://youtu.be/IspDNYCDbHg
The factor
cos(omega_1 t) - cos(omega_0 t)
can be understood by using the trigonometric identity for cos(theta_1 + theta_2).
The necessary identity can be easily derived using the Euler formula e^(i * theta) = cos(theta) + i sin(theta).
The identity is
cos(theta_1 + theta_2) = cos(theta_1) cos(theta_2) - sin(theta_1) sin(theta_2)
diff 04 054 http://youtu.be/kGKpFFk0O58
Letting omega_bar be the average of omega_1 and omega_0, with beta equal to half the difference omega_1 - omega_0, our two frequencies can be expressed as
omega_0 = omega_bar - beta
omega_1 = omega_bar + beta,
with omega_0 and omega_1 equidistant from omega_bar, at distances beta to the left and right of omega_bar.
Using the identity derived previously we obtain
cos(omega_1 t) - cos(omega_0 t) = -2 sin(omega_bar t) * sin(beta t).
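The sum-to-product identity is easy to verify numerically (a sketch; the frequencies and test points are arbitrary samples):

```python
import math

w0, w1 = 3.0, 2.0
wbar = (w0 + w1) / 2    # average of the two frequencies
beta = (w1 - w0) / 2    # half the difference omega_1 - omega_0

for t in [0.1, 0.8, 2.5, 6.3]:
    lhs = math.cos(w1 * t) - math.cos(w0 * t)
    rhs = -2 * math.sin(wbar * t) * math.sin(beta * t)
    assert abs(lhs - rhs) < 1e-12
```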
diff 04 055 http://youtu.be/PCsqIrFfrCo
diff 04 056
http://youtu.be/BJQatOKk7MI
diff 04 057
http://youtu.be/ddM_LAuVv5s
After some corrections we conclude that due to a sign error the correct solution to our nonhomogeneous equation, with initial conditions, should have been
x(t) = v_0 / omega_0 * sin(omega_0 t) + (F/m) / (omega_0 ^ 2 - omega_1 ^ 2) * ( cos(omega_1 t) - cos(omega_0 t) )
and that
cos(omega_1 t) - cos(omega_0 t) = -2 sin(omega_bar t) * sin(beta t)
so that the solution becomes
x(t) = v_0 / omega_0 * sin(omega_0 t) + 2 (F/m) / (omega_1 ^ 2 - omega_0 ^ 2) * ( sin(omega_bar t) * sin(beta t) ).
diff 04 058 http://youtu.be/s5Me_MNVdrU
The oscillations of our system therefore consist of a steady oscillation
x(t) = v_0 / omega_0 sin(omega_0 t)
with a superimposed solution
x(t) = 2 (F/m) / (omega_1 ^ 2 - omega_0 ^ 2) * ( sin(omega_bar t) * sin(beta t) ).
If omega_0 and omega_1 are fairly close together, beta is much smaller than omega_bar and the latter consists of an oscillation with angular frequency omega_bar, occurring within an 'envelope' defined by | sin(beta t) |.
diff 04 059 http://youtu.be/pVc7i1JzBd4
If our driving frequency approaches the natural frequency of the oscillator, the length of a half-cycle of the 'envelope' approaches infinity.
This 'envelope' has a near-linear domain within which it is very close to the tangent lines formed by x = sin(beta t) and x = -sin(beta t) at the origin. The result is that the oscillation driven by sin(omega_bar t) increases almost linearly in amplitude within this linear domain. The linear domain approaches infinite extent as omega_1 approaches omega_0, and omega_bar approaches omega_0, so that the term
2 (F/m) / (omega_1 ^ 2 - omega_0 ^ 2) * ( sin(omega_bar t) * sin(beta t) )
approaches a linearly increasing oscillation at angular frequency omega_0.
The magnitude of the coefficient (F/m) / (omega_0 ^ 2 - omega_1 ^ 2) increases as omega_1 approaches omega_0 in such a way that the slopes of the tangent lines remain constant as omega_1 approaches omega_0.
The increasing oscillation amplitude is the defining characteristic of the resonance that occurs when the driving frequency omega_1 is equal to the natural frequency omega_0.
diff 04 060 http://youtu.be/aqYj0Nu6pB8
diff 04 061
http://youtu.be/XahhyEvBQT4
In reality any physical system has limited energy, so resonance does not result in a never-ending linear increase in amplitude: either the driving force runs out of energy, or forces other than the driving force act on the system. Typically these are drag forces, which are often proportional to the velocity of the system.
The equation that represents this situation for the forcing function F cos(omega_0 t) would be
y '' + 2 delta y ' + omega_0^2 y = F / m cos(omega_0 t).
The fundamental sets solving the homogeneous equation are as follows:
If delta > omega_0: { e^( (-delta + sqrt( delta^2 - omega_0^2)) * t ), e^( (-delta - sqrt( delta^2 - omega_0^2)) * t ) }
If delta < omega_0: { e^(-delta * t) sin( sqrt( omega_0^2 - delta^2 ) * t ), e^(-delta * t) cos( sqrt( omega_0^2 - delta^2 ) * t ) }
If delta = omega_0: { e^(-delta * t), t e^(-delta * t) }.
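As a check on the underdamped case (a sketch; delta and omega_0 below are arbitrary sample values with delta < omega_0), the function e^(-delta t) sin( sqrt(omega_0^2 - delta^2) t ) should satisfy the homogeneous equation y '' + 2 delta y ' + omega_0^2 y = 0:

```python
import math

delta, w0 = 0.4, 2.0                  # underdamped: delta < w0
wd = math.sqrt(w0**2 - delta**2)      # damped angular frequency

def y(t):
    return math.exp(-delta * t) * math.sin(wd * t)

h = 1e-4
for t in [0.3, 1.5, 4.0]:
    # central-difference first and second derivatives
    yp = (y(t + h) - y(t - h)) / (2 * h)
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    assert abs(ypp + 2 * delta * yp + w0**2 * y(t)) < 1e-3
```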
diff 04 062 http://youtu.be/ZEFiR0b0loQ
We can find a particular solution to the equation
y '' + 2 delta y ' + omega_0^2 y = F / m cos(omega_0 t)
by using trial solution
y_P = A cos(omega_0 t) + B sin(omega_0 t).
We can do this because cos(omega_0 t) is no longer a solution to the homogeneous equation, due to the introduction of the 'drag force' term 2 delta y ' in the original differential equation.
This leads to A = 0 and B = F / (2 delta omega_0 m), giving particular solution
y_P = (F / m) / (2 delta omega_0) sin(omega_0 t) = F / (2 delta omega_0 m) sin(omega_0 t)
and for the case where delta is not too large, so that delta < omega_0, to general solution
y(t) = c_1 e^(-delta * t) sin(sqrt( omega_0^2 - delta^2 ) * t) + c_2 e^(-delta * t) cos(sqrt( omega_0^2 - delta^2 ) * t) + F / (2 delta omega_0 m) sin(omega_0 t).
The first two terms can be expressed as
e^(-delta * t) ( c_1 sin(sqrt( omega_0^2 - delta^2 ) * t) + c_2 cos(sqrt( omega_0^2 - delta^2 ) * t) ).
Since the sine and cosine functions are bounded and e^(-delta * t) approaches 0 exponentially, the contribution of these terms approaches zero exponentially, and we call this the transient part of our solution.
The last term
F / (2 delta omega_0 m) sin(omega_0 t)
represents a steady oscillation with a constant frequency and amplitude, and this is called the steady-state solution. As t increases, the transient solution eventually (and often quickly) disappears and the steady-state solution persists.
The transient solution oscillates with decreasing amplitude and angular frequency sqrt( omega_0 ^ 2 - delta ^ 2), which is close to omega_0 for small delta and further from omega_0 for larger delta (though for the solution to be valid delta must still be less than omega_0). This means that the transient solution will come in and out of phase with the steady-state solution, doing so more and more frequently as delta (the drag coefficient) increases.
During the transient phase, and especially when the transient and steady-state solutions have comparable amplitudes, these parts of the solution will interfere in such a way as to produce beats. These beats will disappear as the transient contribution decreases.
Note that small delta and small omega_0 both produce large values of F / (2 delta omega_0 m) and hence a steady-state solution F / (2 delta omega_0 m) cos(omega_0 t) of large amplitude. This is easily understood for small delta, which corresponds to a small drag force and a system that behaves somewhat like an undamped system (which produces an 'ideal' resonance when driven by the natural frequency omega_0 of the system).
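The steady-state response can be verified numerically (a sketch; the values of m, F, delta and omega_0 are arbitrary samples with delta < omega_0):

```python
import math

m, F, delta, w0 = 1.0, 3.0, 0.25, 2.0
B = F / (2 * delta * w0 * m)   # steady-state amplitude

def y(t):
    # steady-state response to forcing (F/m) cos(w0 t) at the natural frequency
    return B * math.sin(w0 * t)

h = 1e-4
for t in [0.2, 1.0, 2.8]:
    # residual of y'' + 2 delta y' + w0^2 y = (F/m) cos(w0 t)
    yp = (y(t + h) - y(t - h)) / (2 * h)
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    assert abs(ypp + 2 * delta * yp + w0**2 * y(t) - (F / m) * math.cos(w0 * t)) < 1e-3
```

Note that the steady-state oscillation is a sine although the forcing is a cosine: at resonance the response lags the driving force by a quarter cycle.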
diff 04 063 http://youtu.be/MVaAw0n3V8w
diff 04 064
http://youtu.be/nswq8oO-nJo
diff 04 065
http://youtu.be/r6UioHc6Ef4
If delta > omega_0 we get a combination of two decreasing exponential functions, which can result in a direct approach to equilibrium or an approach where the oscillator first approaches and moves beyond the equilibrium position before 'relaxing' back toward equilibrium. This can be seen by graphical analysis of various combinations of the fundamental solutions
e^( (-delta + sqrt( delta^2 - omega_0^2)) * t ) and e^( (-delta - sqrt( delta^2 - omega_0^2)) * t ).
diff 04 066 http://youtu.be/cw_G9tSK3d0
Application to Electrical Circuits
A constant voltage source connected across a capacitor tends to move charge from one plate of the capacitor to the other, accumulating positive charge in one location and negative charge in another. As it does so the accumulated charges, with their tendency to repel one another and to be attracted to opposite charges, develop a voltage in opposition to that of the source.
In a series circuit with an initially uncharged capacitor, a constant source and a resistor, the voltage across the resistor will at any instant be equal to the difference between the voltage of the source and that of the capacitor, and charge will flow through the circuit at a rate proportional to the voltage across the resistor.
If V_s is the voltage of the source, C the capacitance of the capacitor, V_c the voltage of the capacitor, R the resistance and Q the charge on the capacitor, we therefore have
dQ/dt = constant * (V_s - V_c)
with the constant equal to the reciprocal of the resistance and V_c = Q / C, giving us the first-order linear equation
dQ/dt = 1/R * (V_s - Q / C),
which is easily solved. The charge on the capacitor is seen to exponentially approach the charge at which its voltage is equal to that of the source.
If Q_eq stands for the capacitor charge at which the capacitor voltage equals source voltage (that charge is V_s * C), then for an initially uncharged capacitor the charge is
Q(t) = Q_eq * ( 1 - e^(-t / (R C)) ).
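A numerical check (a sketch; the source voltage, resistance and capacitance below are arbitrary sample component values) confirms that this charging curve satisfies dQ/dt = (1/R)(V_s - Q/C):

```python
import math

V_s, R, C = 9.0, 1000.0, 1e-6   # sample source voltage, resistance, capacitance
Q_eq = V_s * C                   # equilibrium charge

def Q(t):
    # charge on an initially uncharged capacitor
    return Q_eq * (1 - math.exp(-t / (R * C)))

h = 1e-9
for t in [0.0, 5e-4, 2e-3]:
    # central-difference dQ/dt compared with the circuit equation
    dQdt = (Q(t + h) - Q(t - h)) / (2 * h)
    assert abs(dQdt - (V_s - Q(t) / C) / R) < 1e-6
```

After several time constants R C, the charge is essentially at its equilibrium value Q_eq.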
diff 04 068 http://youtu.be/zHkGEt8A_Ao
If a series circuit is set up to be powered by two equal but oppositely oriented sources and a toggle switch which switches between the sources, then if we toggle the switch the charge on the capacitor will alternately approach the positive and negative charges dictated by the positively and negatively oriented sources. The result is an oscillation of the charge and hence the voltage of the capacitor. Faster toggling gives the charge little time to approach its limit before the source is reversed, and hence an oscillation of less amplitude than would result from slower toggling.
With every toggle the capacitor charge reverses sign, but not right away, as it takes time to discharge before building the opposite charge. So the capacitor voltage lags somewhat behind the source voltage. The voltage across the resistor, being low when the capacitor voltage is high and vice versa, leads the voltage of the capacitor by 1/4 cycle, and hence the current (i.e., rate of transfer of charge) in the circuit does the same.
A similar but smoother oscillation of capacitor voltage results when the circuit is driven by a sinusoidally oscillating voltage.
diff 04 069 http://youtu.be/3rsK4gqi_0o
diff 04 070
http://youtu.be/rVOD3G0Tlbw
An inductor resists the buildup of current. (Quick reference to the physics: Current creates a magnetic field inside of a coil. Changing magnetic field in a coil creates a voltage in the coil that tends to oppose the change.)
So if a circuit consisting of an inductor, a source and a resistor in series is closed, we end up with an exponential approach of the current to its maximum. If the polarity of the source is reversed, the direction of the current tends to reverse, and the inductor will immediately reverse its voltage to compensate. The behavior of the inductor is in some important ways opposite that of the capacitor. Specifically, the current in the capacitor circuit tends to precede the voltage of the source while the current in the inductor circuit tends to lag that of the source.
diff 04 071
http://youtu.be/RXUGHfu4tPQ
In a series circuit consisting of an oscillating source, a capacitor, a resistor and an inductor the charge on the capacitor satisfies the equation
Q '' + (R / L) * Q ' + Q / (L C) = V_s (t) / L
where V_s (t) is the source voltage.
This equation is of the same form as the equation
y '' + 2 delta y ' + omega_0^2 y = F(t) / m
that governs the motion of a system subject to a linear restoring force and a drag force proportional to velocity.
The techniques for solving these equations are identical. The only difference is in interpretation of the solutions.
Solutions of the homogeneous equation
Q '' + R / L * Q ' + Q / (L C) = 0
have real characteristic roots if (R / (2 L))^2 - 1 / (L C) is positive, leading to exponentially decaying solutions, and complex conjugate roots if it is negative, leading to sinusoidally oscillating solutions with exponentially decaying amplitude.
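A small sketch makes this classification concrete (the function name `rlc_behavior` and the component values are my own illustration, not from the notes):

```python
def rlc_behavior(R, L, C):
    # discriminant of the characteristic equation r^2 + (R/L) r + 1/(L C) = 0
    disc = (R / (2 * L))**2 - 1 / (L * C)
    if disc > 0:
        return "overdamped"        # two real roots: decaying exponentials
    elif disc < 0:
        return "underdamped"       # complex conjugate roots: decaying oscillation
    return "critically damped"

# large resistance dissipates energy before any oscillation can occur
assert rlc_behavior(R=2000.0, L=1.0, C=1e-3) == "overdamped"
# small resistance lets the charge ring at roughly 1/sqrt(L C)
assert rlc_behavior(R=10.0, L=1.0, C=1e-3) == "underdamped"
```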
diff 04 072 http://youtu.be/pLh6oRS7MQU