The sum in the figure below consists of the i = 0 to i = 10 terms of the geometric sequence defined by a r^n = 4(0.91)^n, with a = 4 and r = 0.91.
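As a quick numerical sketch (not part of the original figure), we can add the eleven terms directly and compare the result with the closed form a(1 - r^(N+1))/(1 - r) for a finite geometric sum:

```python
# Sum the i = 0 .. 10 terms of the geometric sequence a * r^n with a = 4, r = 0.91.
a, r = 4, 0.91
terms = [a * r**n for n in range(11)]
direct = sum(terms)

# Closed form for a finite geometric sum: a * (1 - r^(N+1)) / (1 - r)
closed = a * (1 - r**11) / (1 - r)
print(direct, closed)  # the two values agree up to rounding
```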
To calculate the sum in the first line of the figure below, we note that every term is the product of 11 and a power of 1.1.
At first, the expression in the first line below doesn't look exactly like a geometric series because many powers are missing.
The sum of a geometric series converges if |r| < 1 and diverges if |r| >= 1.
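For |r| < 1 the partial sums approach a/(1 - r); a minimal numerical check, using the series above with a = 4 and r = 0.91:

```python
# Partial sums of a * r^n approach a / (1 - r) when |r| < 1.
a, r = 4, 0.91
limit = a / (1 - r)

partial = 0.0
for n in range(500):  # enough terms that r^n is negligibly small
    partial += a * r**n
print(partial, limit)  # the partial sum has essentially reached the limit
```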
Our definition of convergence is just the one used implicitly above: a series converges provided that the limit of its partial sums as n grows large exists.
From the geometric series we see that, at least for this series, convergence is associated with individual terms a r^n that approach 0.
We recapitulate the argument showing that the sum of the harmonic series, for which a_n = 1/n, diverges.
If, however, we position the rectangles as indicated in the figure below, we see that the area in the rectangles, which corresponds to the sum of the harmonic series, is greater than the area under the curve.
We conclude that, though the terms 1/n of the harmonic series certainly approach 0 as n approaches infinity, they don't approach 0 fast enough to prevent the series from diverging.
We see that though the limits of 1/n and 1/n^2 are both 0 for increasing n, the series formed from the 1/n^2 terms converges while the harmonic series formed from the 1/n terms diverges.
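A numerical sketch of this contrast (the cutoff N is arbitrary): the harmonic partial sum exceeds ln(N + 1) by the rectangle comparison above and so grows without bound, while the partial sums of 1/n^2 settle toward a finite value (pi^2/6, a fact not needed for the argument but convenient for checking):

```python
import math

# Both sequences 1/n and 1/n^2 tend to 0, but only one of the series converges.
N = 100000
harmonic = sum(1.0 / n for n in range(1, N + 1))
p_series = sum(1.0 / n**2 for n in range(1, N + 1))

# Rectangle/integral comparison: H_N exceeds ln(N + 1), so it grows without bound.
print(harmonic, math.log(N + 1))   # harmonic partial sum > ln(N + 1)
print(p_series, math.pi**2 / 6)    # partial sums of 1/n^2 approach pi^2 / 6
```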
The figure below summarizes a fundamental theorem about the convergence of series in terms of the limit of the ratios a_(n+1) / a_n.
Again we emphasize that for a geometric series the limit of the ratio is r, and that the series converges when and only when |r| < 1.
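A small sketch of this ratio computation (the series with terms 1/n! is my own illustration, not from the text): for the geometric series every ratio equals r, while for 1/n! the ratios tend to 0, so the ratio test says that series converges as well:

```python
import math

# For the geometric series 4 * r^n, the ratio of successive terms is exactly r.
r = 0.91
geo = [4 * r**n for n in range(20)]
ratios = [geo[n + 1] / geo[n] for n in range(19)]
print(ratios[0], ratios[-1])  # both equal r = 0.91 (up to rounding)

# For the series with terms 1/n!, the ratio a_(n+1)/a_n is 1/(n+1),
# which tends to 0, so the ratio test gives convergence.
fact = [1.0 / math.factorial(n) for n in range(20)]
fact_ratios = [fact[n + 1] / fact[n] for n in range(19)]
print(fact_ratios[-1])  # approximately 1/19
```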
For a Taylor polynomial of degree 1 (i.e., a tangent-line approximation) about a = 0, we define the error to be the difference f(x) - P1(x) between the function f(x) and the Taylor polynomial P1(x).
As we will show next time, if the second derivative f''(x) for x within some distance d of the origin lies between a lower limit L and an upper limit M, then the error will lie between (L/2) x^2 and (M/2) x^2.
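A sketch of this error trap, taking f(x) = e^x as a hypothetical example (so P1(x) = 1 + x, f''(x) = e^x, and on |x| <= d the second derivative lies between L = e^(-d) and M = e^d):

```python
import math

# Illustration with f(x) = e^x about 0: P1(x) = 1 + x, f''(x) = e^x.
d = 0.5
L = math.exp(-d)   # lower bound for f'' on [-d, d]
M = math.exp(d)    # upper bound for f'' on [-d, d]

for i in range(-10, 11):
    x = d * i / 10
    error = math.exp(x) - (1 + x)
    # The error is trapped between (L/2) x^2 and (M/2) x^2.
    assert L / 2 * x**2 <= error <= M / 2 * x**2
print("error trapped between (L/2) x^2 and (M/2) x^2 at all sampled x")
```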
If we let E1(x) stand for the error of the first-degree polynomial at x, E2(x) for the error of the second-degree polynomial at x, and in general En(x) for the error of the nth-degree polynomial at x, we have the sequence of error limits depicted below.
If the Taylor polynomial is expanded about x = a, then the maximum errors are given by the expressions below, which follow from the previous expressions by replacing x by (x - a).
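To illustrate the substitution of (x - a) for x (again with the hypothetical choice f(x) = e^x, now expanded about a = 1, so P1(x) = e + e(x - 1)):

```python
import math

# Degree-1 expansion of e^x about a = 1: P1(x) = e + e * (x - 1); f''(x) = e^x.
a, d = 1.0, 0.5
L = math.exp(a - d)  # minimum of f'' on [a - d, a + d]
M = math.exp(a + d)  # maximum of f'' on [a - d, a + d]

for i in range(-10, 11):
    x = a + d * i / 10
    error = math.exp(x) - (math.exp(a) + math.exp(a) * (x - a))
    # Same trap as before, with x replaced by (x - a).
    assert L / 2 * (x - a)**2 <= error <= M / 2 * (x - a)**2
print("error trapped between (L/2)(x-a)^2 and (M/2)(x-a)^2 at all sampled x")
```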
The figure below depicts a function f(x) and a possible Taylor polynomial Pn(x), expanded about x = a.
The Error Theorem states that if the (n+1)st derivative of f has an absolute value bounded by M for all x in the interval, then the error bound is valid for every x in the interval.
As an example we find an error bound for the degree-5 approximation to the cosine function, expanded about x = 0, for values of x that lie between -0.5 and 0.5.
- Note that it is not always necessary to use the best possible upper limit; in many cases any reasonable limit will allow us to make reasonable estimates of the possible error.
We now write down the bound for the error E5(x), using the formula for En(x) with n = 5.
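As a numerical check of this degree-5 bound (a sketch: every derivative of cosine is a sine or cosine, so |f^(6)(x)| <= 1 and we may take M = 1 in the formula with n = 5, giving |error| <= |x|^6 / 6!):

```python
import math

# Degree-5 Taylor polynomial of cos about 0 (the x^5 term is absent).
def P5(x):
    return 1 - x**2 / 2 + x**4 / 24

# With M = 1 and n = 5 the theorem bounds the error by |x|^6 / 6!,
# which on [-0.5, 0.5] is largest at the endpoints.
M = 1.0
bound = M * 0.5**6 / math.factorial(6)

# Worst observed error over a grid of x values in [-0.5, 0.5].
worst = max(abs(math.cos(k / 1000) - P5(k / 1000)) for k in range(-500, 501))
print(worst, bound)  # the observed error stays below the bound
```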