# Chapter 11: Sequences and Series

Consider the following sum:

1/2 + 1/4 + 1/8 + 1/16 + ··· + 1/2^i + ···

The dots at the end indicate that the sum goes on forever. Does this make sense? Can we assign a numerical value to an infinite sum? While at first it may seem difficult or impossible, we have certainly done something similar when we talked about one quantity getting “closer and closer” to a fixed quantity. Here we could ask whether, as we add more and more terms, the sum gets closer and closer to some fixed value. That is, look at

1/2
1/2 + 1/4 = 3/4
1/2 + 1/4 + 1/8 = 7/8
1/2 + 1/4 + 1/8 + 1/16 = 15/16

and so on, and ask whether these values have a limit. It seems pretty clear that they do, namely 1. In fact, as we will see, it’s not hard to show that

1/2 + 1/4 + 1/8 + ··· + 1/2^i = (2^i − 1)/2^i = 1 − 1/2^i,

and then

lim_{i→∞} (1 − 1/2^i) = 1 − 0 = 1.
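These partial sums are easy to examine numerically. Here is a small Python sketch (an illustration added alongside the text, not part of its argument):

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... : each equals 1 - 1/2^n,
# so the sums approach 1 as n grows.
def partial_sum(n):
    """Sum of 1/2^i for i = 1..n."""
    return sum(1 / 2**i for i in range(1, n + 1))

for n in [1, 2, 3, 4, 20]:
    # matches the closed form 1 - 1/2^n derived in the text
    assert abs(partial_sum(n) - (1 - 1 / 2**n)) < 1e-12
```

Running the loop for larger n shows the sums settling ever closer to 1.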
There is one place that you have accepted this notion of infinite sum without really thinking of it as a sum: the decimal representation of numbers. For example,

0.3333… = 3/10 + 3/100 + 3/1000 + 3/10000 + ··· = 1/3,

or

3.14159… = 3 + 1/10 + 4/100 + 1/1000 + 5/10000 + 9/100000 + ··· = π.

Our first task, then, to investigate infinite sums, called series, is to investigate limits of **sequences** of numbers. That is, we officially call

Σ_{i=1}^∞ a_i = a_1 + a_2 + a_3 + ···

a series, while

a_1, a_2, a_3, …

is a sequence, and

Σ_{i=1}^∞ a_i = lim_{n→∞} Σ_{i=1}^n a_i,

that is, the value of a series is the limit of a particular sequence.

**11.1 Sequences**

While the idea of a sequence of numbers, a_1, a_2, a_3, …, is straightforward, it is useful to think of a sequence as a function. We have up until now dealt with functions whose domains are the real numbers, or a subset of the real numbers, like *f*(*x*) = sin *x*. A sequence is a function with domain the natural numbers ℕ = {1, 2, 3, …} or the non-negative integers ℤ^{≥0} = {0, 1, 2, 3, …}. The range of the function is still allowed to be the real numbers; in symbols, we say that a sequence is a function *f*: ℕ → ℝ. Sequences are written in a few different ways, all equivalent; these all mean the same thing:

a_1, a_2, a_3, …    {a_n}_{n=1}^∞    {f(n)}_{n=1}^∞

As with functions on the real numbers, we will most often encounter sequences that can be expressed by a formula. We have already seen the sequence a_i = f(i) = 1 − 1/2^i, and others are easy to come by.

Frequently these formulas will make sense if thought of either as functions with domain ℝ or ℕ, though occasionally one will make sense only for integer values.

Faced with a sequence we are interested in the limit

lim_{i→∞} f(i) = lim_{i→∞} a_i.

We already understand

lim_{x→∞} f(x)

when x is a real-valued variable; now we simply want to restrict the “input” values to be integers. No real difference is required in the definition of limit, except that we specify, perhaps implicitly, that the variable is an integer. Compare this definition to definition 4.10.2.

DEFINITION 11.1.1 Suppose that {a_n}_{n=1}^∞ is a sequence. We say that lim_{n→∞} a_n = L if for every ε > 0 there is an N > 0 so that whenever n > N, |a_n − L| < ε. If lim_{n→∞} a_n = L we say that the sequence converges, otherwise it diverges.

If f(i) defines a sequence, and f(x) makes sense, and lim_{x→∞} f(x) = L, then it is clear that lim_{i→∞} f(i) = L as well, but it is important to note that the converse of this statement is not true. For example, since lim_{x→∞} (1/x) = 0, it is clear that also lim_{i→∞} (1/i) = 0, that is, the numbers

1/1, 1/2, 1/3, 1/4, 1/5, 1/6, …

get closer and closer to 0. Consider this, however: let f(n) = sin(nπ). This is the sequence

sin(0π), sin(1π), sin(2π), sin(3π), … = 0, 0, 0, 0, …,

since sin(nπ) = 0 when n is an integer. Thus lim_{n→∞} f(n) = 0. But lim_{x→∞} f(x), when x is real, does not exist: as x gets bigger and bigger, the values sin(xπ) do not get closer and closer to a single value, but take on all values between −1 and 1 over and over. In general, whenever you want to know lim_{n→∞} f(n) you should first attempt to compute lim_{x→∞} f(x), since if the latter exists it is also equal to the first limit. But if for some reason lim_{x→∞} f(x) does not exist, it may still be true that lim_{n→∞} f(n) exists, but you’ll have to figure out another way to compute it.
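The contrast between the integer-valued sequence and the real function is easy to see numerically; a brief sketch (illustrative only):

```python
import math

# At integer inputs, sin(n*pi) is zero (up to floating-point rounding).
assert all(abs(math.sin(n * math.pi)) < 1e-9 for n in range(50))

# At large non-integer inputs, sin(x*pi) still takes values near 1 and -1,
# so the real-variable limit as x -> infinity does not exist.
assert abs(math.sin(100.5 * math.pi)) > 0.99
```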

It is occasionally useful to think of the graph of a sequence. Since the function is defined only for integer values, the graph is just a sequence of dots. In figure 11.1.1 we see the graphs of two sequences and the graphs of the corresponding real functions.

Figure 11.1.1 Graphs of sequences and their corresponding real functions

Not surprisingly, the properties of limits of real functions translate into properties of sequences quite easily. Theorem 2.3.6 about limits becomes:

THEOREM 11.1.2

Suppose that lim_{n→∞} a_n = L and lim_{n→∞} b_n = M and k is some constant. Then

lim_{n→∞} k·a_n = k·lim_{n→∞} a_n = kL

lim_{n→∞} (a_n + b_n) = lim_{n→∞} a_n + lim_{n→∞} b_n = L + M

lim_{n→∞} (a_n − b_n) = L − M

lim_{n→∞} (a_n·b_n) = LM

lim_{n→∞} a_n/b_n = L/M, provided M is not 0.

Likewise the Squeeze Theorem (4.3.1) becomes:

THEOREM 11.1.3

Suppose that a_n ≤ b_n ≤ c_n for all n > N, for some N. If lim_{n→∞} a_n = lim_{n→∞} c_n = L, then lim_{n→∞} b_n = L.

And a final useful fact:

THEOREM 11.1.4

lim_{n→∞} |a_n| = 0 if and only if lim_{n→∞} a_n = 0.

This says simply that the size of a_n gets close to zero if and only if a_n gets close to zero.

**EXAMPLE 11.1.5**

Determine whether {(n+1)/n}_{n=1}^∞ converges or diverges. If it converges, compute the limit. Since this makes sense for real numbers we consider

lim_{x→∞} (x+1)/x = lim_{x→∞} (1 + 1/x) = 1 + 0 = 1.

Thus the sequence converges to 1.

**EXAMPLE 11.1.6**

Determine whether {(ln n)/n}_{n=1}^∞ converges or diverges. If it converges, compute the limit. We compute

lim_{x→∞} (ln x)/x = lim_{x→∞} (1/x)/1 = 0,

using L’Hôpital’s Rule. Thus the sequence converges to 0.
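Assuming the sequence in this example is ln(n)/n (consistent with the use of L’Hôpital’s rule), a quick numerical check:

```python
import math

# The terms ln(n)/n decrease toward 0 as n grows.
def a(n):
    return math.log(n) / n

assert a(10) > a(100) > a(10**6)
assert a(10**6) < 1.5e-5  # already tiny by n = 1,000,000
```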

**EXAMPLE** 11.1.7

Determine whether {(−1)^n}_{n=0}^∞ converges or diverges. If it converges, compute the limit. This does not make sense for all real exponents, but the sequence is easy to understand: it is

1, −1, 1, −1, 1, …

and clearly diverges.

EXAMPLE 11.1.8

Determine whether {(−1)^n/n}_{n=1}^∞ converges or diverges. If it converges, compute the limit. We consider the sequence of absolute values, {|(−1)^n/n|} = {1/n}. Since this converges to 0, by theorem 11.1.4 the sequence converges to 0.
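Taking the sequence here to be (−1)^n/n (consistent with the appeal to theorem 11.1.4), a short sketch:

```python
# a_n = (-1)^n / n alternates in sign, but |a_n| = 1/n -> 0,
# so by theorem 11.1.4 the signed sequence converges to 0 as well.
def a(n):
    return (-1) ** n / n

assert a(9) < 0 < a(10)       # signs alternate
assert abs(a(10**6)) < 1e-5   # sizes shrink toward 0
```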

**EXAMPLE 11.1.9**

Determine whether {(sin n)/√n}_{n=1}^∞ converges or diverges. If it converges, compute the limit. Since |sin n| ≤ 1,

0 ≤ |(sin n)/√n| ≤ 1/√n,

and we can use theorem 11.1.3 with a_n = 0 and c_n = 1/√n: since lim_{n→∞} a_n = lim_{n→∞} c_n = 0, |(sin n)/√n| converges to 0, and by theorem 11.1.4 the sequence converges to 0.

**EXAMPLE 11.1.10**

A particularly common and useful sequence is {r^n}_{n=0}^∞, for various values of *r*. Some are quite easy to understand: if *r* = 1 the sequence converges to 1 since every term is 1, and likewise if *r* = 0 the sequence converges to 0. If *r* = −1 this is the sequence of example 11.1.7 and diverges. If *r* > 1 or *r* < −1 the terms r^n get large without limit, so the sequence diverges. If 0 < *r* < 1 then the sequence converges to 0. If −1 < *r* < 0 then |r^n| = |r|^n and 0 < |r| < 1, so the sequence {|r|^n} converges to 0, so also {r^n} converges to 0. In summary, {r^n} converges precisely when −1 < *r* ≤ 1, in which case

lim_{n→∞} r^n = 0 if −1 < r < 1, and lim_{n→∞} r^n = 1 if r = 1.
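The case analysis for r^n can be illustrated with a few sample values of r (illustrative only):

```python
# Behavior of r^n in each regime of r.
def nth_power(r, n=200):
    return r ** n

assert abs(nth_power(0.5)) < 1e-50    # 0 < r < 1: terms go to 0
assert abs(nth_power(-0.5)) < 1e-50   # -1 < r < 0: terms go to 0
assert nth_power(1) == 1              # r = 1: constant sequence
assert nth_power(1.1) > 1e8           # r > 1: grows without bound
```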

Sometimes we will not be able to determine the limit of a sequence, but we still would like to know whether it converges. In some cases we can determine this even without being able to compute the limit.

A sequence is called **increasing** or sometimes **strictly increasing** if a_i < a_{i+1} for all i. It is called **non-decreasing** or sometimes (unfortunately) **increasing** if a_i ≤ a_{i+1} for all i. Similarly a sequence is **decreasing** if a_i > a_{i+1} for all i and **non-increasing** if a_i ≥ a_{i+1} for all i. If a sequence has any of these properties it is called **monotonic**.

**EXAMPLE 11.1.11**

The sequence {(2^i − 1)/2^i}_{i=1}^∞ = 1/2, 3/4, 7/8, 15/16, … is increasing, and {(n+1)/n}_{n=1}^∞ = 2/1, 3/2, 4/3, 5/4, … is decreasing.

A sequence is bounded above if there is some number N such that a_n ≤ N for every n, and bounded below if there is some number N such that a_n ≥ N for every n. If a sequence is bounded above and bounded below it is bounded. If a sequence is increasing or non-decreasing it is bounded below (by its first term), and if it is decreasing or non-increasing it is bounded above (by its first term). Finally, with all this new terminology we can state an important theorem.

THEOREM 11.1.12

If a sequence is bounded and monotonic then it converges.

We will not prove this; the proof appears in many calculus books. It is not hard to believe: suppose that a sequence is increasing and bounded, so each term is larger than the one before, yet never larger than some fixed value N. The terms must then get closer and closer to some value between the first term and N. It need not be N, since N may be a “too-generous” upper bound; the limit will be the smallest number that is above all of the terms a_n.

**EXAMPLE 11.1.13**

All of the terms (2^i − 1)/2^i are less than 2, and the sequence is increasing. As we have seen, the limit of the sequence is 1; 1 is the smallest number that is bigger than all the terms in the sequence. Similarly, all of the terms (n+1)/n are bigger than 1/2, and the limit is 1; 1 is the largest number that is smaller than the terms of the sequence.

We don’t actually need to know that a sequence is monotonic to apply this theorem; it is enough to know that the sequence is “eventually” monotonic, that is, that at some point it becomes increasing or decreasing. For example, the sequence 10, 9, 8, 15, 3, 21, 4, 3/4, 7/8, 15/16, 31/32, … is not decreasing, because among the first few terms it is not. But starting with the term 3/4 it is increasing, so the theorem tells us that the sequence 3/4, 7/8, 15/16, 31/32, … converges. Since convergence depends only on what happens as n gets large, adding a few terms at the beginning can’t turn a convergent sequence into a divergent one.

**EXAMPLE 11.1.14**

Show that {n^{1/n}}_{n=1}^∞ converges. We first show that this sequence is decreasing, that is, that n^{1/n} > (n+1)^{1/(n+1)}. Consider the real function f(x) = x^{1/x} when x ≥ 1. We can compute the derivative, f′(x) = x^{1/x}(1 − ln x)/x², and note that when x ≥ 3 this is negative. Since the function has negative slope, n^{1/n} > (n+1)^{1/(n+1)} when n ≥ 3. Since all terms of the sequence are positive, the sequence is decreasing and bounded when n ≥ 3, and so the sequence converges. (As it happens, we can compute the limit in this case, but we know it converges even without knowing the limit; see exercise 1.)

**EXAMPLE 11.1.15**

Show that {n!/n^n}_{n=1}^∞ converges. Again we show that the sequence is decreasing, and since each term is positive the sequence converges. We can’t take the derivative this time, as x! doesn’t make sense for x real. But we note that if a_{n+1}/a_n < 1 then a_{n+1} < a_n, which is what we want to know. So we look at a_{n+1}/a_n:

a_{n+1}/a_n = ((n+1)!/(n+1)^{n+1}) · (n^n/n!) = ((n+1)!/n!) · (n^n/(n+1)^{n+1}) = (n+1)·n^n/(n+1)^{n+1} = n^n/(n+1)^n = (n/(n+1))^n < 1.

Since a_{n+1}/a_n < 1, the sequence is decreasing.

(Again it is possible to compute the limit; see exercise 2.)
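Taking a_n = n!/n^n as in this example, the ratio argument can be checked directly (a numerical sketch):

```python
import math

# For a_n = n!/n^n, the ratio a_{n+1}/a_n equals (n/(n+1))^n < 1,
# so the sequence is decreasing (and positive, hence bounded below by 0).
def a(n):
    return math.factorial(n) / n ** n

for n in range(1, 20):
    ratio = a(n + 1) / a(n)
    assert 0 < ratio < 1
    assert abs(ratio - (n / (n + 1)) ** n) < 1e-9
```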

**11.2 SERIES**

While much more can be said about sequences, we now turn to our principal interest, series. Recall that a series, roughly speaking, is the sum of a sequence: if {a_n}_{n=1}^∞ is a sequence then the associated series is

Σ_{n=1}^∞ a_n = a_1 + a_2 + a_3 + ···

Associated with a series is a second sequence, called the sequence of **partial sums** {s_n}_{n=1}^∞:

s_n = Σ_{i=1}^n a_i.

So s_1 = a_1, s_2 = a_1 + a_2, s_3 = a_1 + a_2 + a_3, and so on.

A series converges if the sequence of partial sums converges, and otherwise the series diverges.

EXAMPLE 11.2.1

If a_n = k·x^n, then Σ_{n=0}^∞ a_n is called a **geometric series**. A typical partial sum is

s_n = k + kx + kx² + kx³ + ··· + kx^n = k(1 + x + x² + x³ + ··· + x^n).

We note that

s_n(1 − x) = k(1 + x + x² + ··· + x^n)(1 − x) = k(1 − x^{n+1}),

so

s_n = k(1 − x^{n+1})/(1 − x),

if x ≠ 1. If |x| < 1, lim_{n→∞} x^n = 0, so

lim_{n→∞} s_n = lim_{n→∞} k(1 − x^{n+1})/(1 − x) = k/(1 − x).

Thus, when |x| < 1 the geometric series converges to k/(1 − x). When, for example, k = 1 and x = 1/2:

s_n = (1 − (1/2)^{n+1})/(1 − 1/2) = 2 − 1/2^n, and Σ_{n=0}^∞ (1/2)^n = 2.

We began the chapter with the series Σ_{n=1}^∞ 1/2^n, namely, the geometric series without the first term 1. Each partial sum of this series is 1 less than the corresponding partial sum for the geometric series, so of course the limit is also one less than the value of the geometric series, that is, Σ_{n=1}^∞ 1/2^n = 1.
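The geometric series formula is easy to confirm numerically for k = 1 and x = 1/2 (a sketch, not part of the text’s derivation):

```python
# Partial sums of sum k*x^n match the closed form k(1 - x^(n+1))/(1 - x)
# and, for k = 1, x = 1/2, approach 2.
def geometric_partial_sum(k, x, n):
    return sum(k * x**i for i in range(n + 1))

assert abs(geometric_partial_sum(1, 0.5, 10) - (1 - 0.5**11) / (1 - 0.5)) < 1e-12
assert abs(geometric_partial_sum(1, 0.5, 60) - 2) < 1e-12
# Dropping the first term (which is 1) leaves the chapter's opening series,
# whose value is therefore 2 - 1 = 1.
```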

It is not hard to see that the following theorem follows from theorem 11.1.2.

THEOREM 11.2.2

Suppose that Σ a_n and Σ b_n are convergent series, and c is a constant. Then

- Σ c·a_n is convergent and Σ c·a_n = c·Σ a_n
- Σ (a_n + b_n) is convergent and Σ (a_n + b_n) = Σ a_n + Σ b_n

The two parts of this theorem are subtly different. Suppose that Σ a_n diverges; does Σ c·a_n also diverge if c is non-zero? Yes: suppose instead that Σ c·a_n converges; then by the theorem, Σ (1/c)·(c·a_n) converges, but this is the same as Σ a_n, which by assumption diverges. Hence Σ c·a_n also diverges. Note that we are applying the theorem with a_n replaced by c·a_n and c replaced by (1/c).

Now suppose that Σ a_n and Σ b_n diverge; does Σ (a_n + b_n) also diverge? Now the answer is no: let a_n = 1 and b_n = −1, so certainly Σ a_n and Σ b_n diverge. But Σ (a_n + b_n) = Σ (1 + (−1)) = Σ 0 = 0. Of course, sometimes Σ (a_n + b_n) will also diverge, for example, if a_n = b_n = 1, then Σ (a_n + b_n) = Σ 2 diverges.

In general, the sequence of partial sums s_n is harder to understand and analyze than the sequence of terms a_n, and it is difficult to determine whether series converge and, if so, to what. Sometimes things are relatively simple, starting with the following.

THEOREM 11.2.3

If Σ_{n=1}^∞ a_n converges then lim_{n→∞} a_n = 0.

Proof. Since Σ a_n converges, lim_{n→∞} s_n = L and lim_{n→∞} s_{n−1} = L, because this really says the same thing but “renumbers” the terms. By theorem 11.1.2,

lim_{n→∞} (s_n − s_{n−1}) = lim_{n→∞} s_n − lim_{n→∞} s_{n−1} = L − L = 0.

But s_n − s_{n−1} = a_n, so lim_{n→∞} a_n = 0, as desired.

This theorem presents an easy divergence test: if given a series Σ a_n the limit lim_{n→∞} a_n does not exist or has a value other than zero, the series diverges. Note well that the converse is not true: if lim_{n→∞} a_n = 0 then the series does not necessarily converge.

**EXAMPLE 11.2.4**

Show that Σ_{n=1}^∞ n/(n+1) diverges.

We compute the limit: lim_{n→∞} n/(n+1) = 1 ≠ 0.

Looking at the first few terms perhaps makes it clear that the series has no chance of converging:

1/2 + 2/3 + 3/4 + 4/5 + ···

will just get larger and larger; indeed, after a bit longer the series starts to look very much like ··· + 1 + 1 + 1 + 1 + ···, and of course if we add up enough 1’s we can make the sum as large as we desire.

**EXAMPLE 11.2.5**

Show that Σ_{n=1}^∞ 1/n diverges.

Here the theorem does not apply: lim_{n→∞} 1/n = 0, so it looks like perhaps the series converges. Indeed, if you have the fortitude (or the software) to add up the first 1000 terms you will find that Σ_{n=1}^{1000} 1/n ≈ 7.49, so it might be reasonable to speculate that the series converges to something in the neighborhood of 10. But in fact the partial sums do go to infinity; they just get big very, very slowly. Consider the following:

1 + 1/2 + 1/3 + 1/4 > 1 + 1/2 + 1/4 + 1/4 = 1 + 1/2 + 1/2

1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 > 1 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/8 + 1/8 = 1 + 1/2 + 1/2 + 1/2

and so on. By swallowing up more and more terms we can always manage to add at least another 1/2 to the sum, and by adding enough of these we can make the partial sums as big as we like. In fact, it’s not hard to see from this pattern that

1 + 1/2 + 1/3 + ··· + 1/2^n > 1 + n/2,

so to make sure the sum is over 100, for example, we’d add up terms until we get to around 1/2^{198}, that is, about 4·10^{59} terms. This series, Σ 1/n, is called the **harmonic series**.
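Both observations, the sum of the first 1000 terms being only about 7.49 and the 1 + n/2 lower bound from the grouping argument, can be checked directly:

```python
# The harmonic partial sums grow without bound, but very slowly.
def harmonic(n):
    return sum(1 / k for k in range(1, n + 1))

assert 7.48 < harmonic(1000) < 7.50        # about 7.49 after 1000 terms
# The grouping argument: the first 2^n terms already reach at least 1 + n/2.
for n in range(1, 11):
    assert harmonic(2**n) >= 1 + n / 2
```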

Exercises 11.2.

1. Explain why
2. Explain why
3. Explain why
4. Compute
5. Compute
6. Compute
7. Compute
8. Compute
9. Compute

11.3 THE INTEGRAL TEST

It is generally quite difficult, often impossible, to determine the value of a series exactly. In many cases it is possible at least to determine whether or not the series converges, and so we will spend most of our time on this problem.

If all of the terms a_n in a series are non-negative, then clearly the sequence of partial sums s_n is non-decreasing. This means that if we can show that the sequence of partial sums is bounded, the series must converge. We know that if the series converges, the terms a_n approach zero, but this does not mean that a_n ≥ a_{n+1} for every n. Many useful and interesting series do have this property, however, and they are among the easiest to understand. Let’s look at an example.

**EXAMPLE 11.3.1** Show that Σ_{n=1}^∞ 1/n² converges.

The terms 1/n² are positive and decreasing, and since lim_{x→∞} 1/x² = 0, the terms 1/n² approach zero. We seek an upper bound for all the partial sums, that is, we want to find a number N so that s_n ≤ N for every n. The upper bound is provided courtesy of integration, and is inherent in figure 11.3.1.

Figure 11.3.1 Graph of *y* = 1/*x*² with rectangles.

The figure shows the graph of y = 1/x² together with some rectangles that lie completely below the curve and that all have base length one. Because the heights of the rectangles are determined by the height of the curve, the areas of the rectangles are 1/1², 1/2², 1/3², and so on; in other words, exactly the terms of the series. The partial sum s_n is simply the sum of the areas of the first n rectangles. Because the rectangles all lie between the curve and the x-axis, any sum of rectangle areas is less than the area under the entire curve, that is, all the way to infinity. There is a bit of trouble at the left end, where there is an asymptote, but we can work around that easily. Here it is:

s_n = 1/1² + 1/2² + 1/3² + ··· + 1/n² < 1 + ∫_1^n (1/x²) dx < 1 + ∫_1^∞ (1/x²) dx = 1 + 1 = 2,

recalling that we computed this improper integral in section 9.7. Since the sequence of partial sums s_n is increasing and bounded above by 2, we know that lim_{n→∞} s_n = L < 2, so the series converges to some number less than 2. In fact, it is possible, though difficult, to show that L = π²/6 ≈ 1.6.
We already know that Σ 1/n diverges. What goes wrong if we try to apply this technique to it? Here’s the calculation:

s_n = 1/1 + 1/2 + 1/3 + ··· + 1/n < 1 + ∫_1^n (1/x) dx < 1 + ∫_1^∞ (1/x) dx = 1 + ∞.

The problem is that the improper integral doesn’t converge. Note well that this does *not* prove that Σ 1/n diverges, just that this particular calculation fails to prove that it converges. A slight modification, however, allows us to prove in a second way that Σ 1/n diverges.

**Example 11.3.2** Consider a slightly altered version of figure 11.3.1, shown in figure 11.3.2.

Figure 11.3.2 Graph of *y* = 1/*x* with rectangles.

The rectangles this time are above the curve, that is, each rectangle completely contains the corresponding area under the curve. This means that

s_n = 1/1 + 1/2 + ··· + 1/n > ∫_1^{n+1} (1/x) dx = ln(n+1).

As *n* gets bigger, ln(*n*+1) goes to infinity, so the sequence of partial sums s_n must also go to infinity, so the harmonic series diverges.
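The rectangle bound, partial sums dominating ln(n + 1), can be verified numerically (illustrative sketch):

```python
import math

# Harmonic partial sums dominate ln(n+1), which forces them to infinity.
def harmonic(n):
    return sum(1 / k for k in range(1, n + 1))

for n in [1, 10, 100, 1000]:
    assert harmonic(n) > math.log(n + 1)
```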

The important fact that clinches this example is that

lim_{n→∞} ∫_1^{n+1} (1/x) dx = ∞,

which we can rewrite as

∫_1^∞ (1/x) dx = ∞.

So these two examples taken together indicate that we can prove that a series converges or prove that it diverges with a single calculation of an improper integral. This is known as the **integral test**, which we state as a theorem.

**THEOREM 11.3.3** Suppose that *f*(*x*) > 0 and is decreasing on the infinite interval [*k*, ∞) (for some *k* ≥ 1) and that a_n = f(n). Then the series Σ_{n=k}^∞ a_n converges if and only if the improper integral ∫_k^∞ f(x) dx converges.

The two examples we have seen are called p-series; a **p-series** is any series of the form Σ 1/n^p. If *p* ≤ 0, lim_{n→∞} 1/n^p ≠ 0, so the series diverges. For positive values of *p* we can determine precisely which converge.

**THEOREM 11.3.4** A *p*-series with *p* > 0 converges if and only if *p* > 1.

**Proof**. We use the integral test; we have already done *p* = 1, so assume that *p* ≠ 1.

∫_1^∞ (1/x^p) dx = lim_{D→∞} [x^{1−p}/(1−p)]_1^D = lim_{D→∞} (D^{1−p}/(1−p) − 1/(1−p)).

If *p* > 1 then 1 − *p* < 0 and lim_{D→∞} D^{1−p} = 0, so the integral converges. If 0 < *p* < 1 then 1 − *p* > 0 and lim_{D→∞} D^{1−p} = ∞, so the integral diverges.

**EXAMPLE 11.3.5** Show that Σ_{n=1}^∞ 1/n³ converges.

We could of course use the integral test, but now that we have the theorem we may simply note that this is a p-series with p = 3 > 1.

**EXAMPLE 11.3.6** Show that Σ_{n=1}^∞ 5/n⁴ converges.

We know that if Σ_{n=1}^∞ 1/n⁴ converges then Σ_{n=1}^∞ 5/n⁴ also converges, by theorem 11.2.2. Since Σ_{n=1}^∞ 1/n⁴ is a convergent p-series, Σ_{n=1}^∞ 5/n⁴ converges also.

**EXAMPLE 11.3.7** Show that Σ_{n=1}^∞ 5/√n diverges.

This also follows from theorem 11.2.2: since Σ_{n=1}^∞ 1/√n is a p-series with p = 1/2, it diverges, and so does Σ_{n=1}^∞ 5/√n.

Since it is typically difficult to compute the value of a series exactly, a good approximation is frequently required. In a real sense, a good approximation is only as good as we know it is, that is, while an approximation may in fact be good, it is only valuable in practice if we can guarantee its accuracy to some degree. This guarantee is usually easy to come by for series with decreasing positive terms.

**EXAMPLE 11.3.8** Approximate Σ 1/n² to two decimal places.

Referring to figure 11.3.1, if we approximate the sum by Σ_{n=1}^N 1/n², the error we make is the total area of the remaining rectangles, all of which lie under the curve 1/x² from x = N out to infinity. So we know the true value of the series is larger than the approximation, and no bigger than the approximation plus the area under the curve from N to infinity. Roughly, then, we need to find N so that

∫_N^∞ (1/x²) dx < 1/100.

We can compute the integral: ∫_N^∞ (1/x²) dx = 1/N, so N = 100 is a good starting point. Adding up the first 100 terms gives approximately 1.634983900, and that plus 1/100 is 1.644983900, so approximating the series by the value halfway between these will be at most 1/200 = 0.005 in error. The midpoint is 1.639983900, but while this is correct to ±0.005, we can’t tell if the correct two-decimal approximation is 1.63 or 1.64. We need to make N big enough to reduce the guaranteed error, perhaps to around 0.004 to be safe, so we would need 1/N ≈ 0.008, or N = 125. Now the sum of the first 125 terms is approximately 1.636965982, and that plus 0.008 is 1.644965982, and the point halfway between them is 1.640965982. The true value is then 1.640965982 ± 0.004, and all numbers in this range round to 1.64, so 1.64 is correct to two decimal places. We have mentioned that the true value of this series can be shown to be π²/6 ≈ 1.644934068, which rounds down to 1.64 (just barely) and is indeed below the upper bound of 1.644965982, again just barely. Frequently approximations will be even better than the “guaranteed” accuracy, but not always, as this example demonstrates.
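The arithmetic in this example can be reproduced directly; the reference values below are the ones quoted in the text:

```python
# Sum the first 125 terms of 1/n^2, then trap the true value using the
# integral tail bound: the series lies between the partial sum and the
# partial sum plus 1/N.
N = 125
partial = sum(1 / n**2 for n in range(1, N + 1))
lower, upper = partial, partial + 1 / N
midpoint = (lower + upper) / 2

assert abs(partial - 1.636965982) < 5e-8
assert abs(midpoint - 1.640965982) < 5e-8
assert round(midpoint, 2) == 1.64  # correct to two decimal places
```

The true value π²/6 ≈ 1.644934068 does fall inside the interval from `lower` to `upper`, as the text observes.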

Exercises 11.3.

Determine whether each series converges or diverges.

1. 2.

3. 4.

5. 6.

7. 8.

9. Find an N so that is between and + 0.005.

10. Find an N so that is between and +10^{-4}

11. Find an N so that is between and +0.005.

12. Find an N so that is between and + 0.005.

**11.4 Alternating series**

Next we consider series with both positive and negative terms, but in a regular pattern: they alternate, as in the **alternating harmonic series** for example:

Σ_{n=1}^∞ (−1)^{n−1}/n = 1/1 + (−1)/2 + 1/3 + (−1)/4 + ··· = 1/1 − 1/2 + 1/3 − 1/4 + ···.

In this series the sizes of the terms decrease, that is, {|a_n|} forms a decreasing sequence, but this is not required in an alternating series. As with positive term series, however, when the terms do have decreasing sizes it is easier to analyze the series, much easier, in fact, than positive term series. Consider pictorially what is going on in the alternating harmonic series, shown in figure 11.4.1. Because the sizes of the terms a_n are decreasing, the partial sums s_1, s_3, s_5, and so on, form a decreasing sequence that is bounded below by s_2, so this sequence must converge. Likewise, the partial sums s_2, s_4, s_6, and so on, form an increasing sequence that is bounded above by s_1, so this sequence also converges. Since all the even numbered partial sums are less than all the odd numbered ones, and since the “jumps” (that is, the a_i terms) are getting smaller and smaller, the two sequences must converge to the same value, meaning the entire sequence of partial sums s_1, s_2, s_3, … converges as well.

Figure 11.4.1 The alternating harmonic series.

There’s nothing special about the alternating harmonic series; the same argument works for any alternating series with decreasing size terms. The alternating series test is worth calling a theorem.

**THEOREM 11.4.1** Suppose that {a_n}_{n=1}^∞ is a non-increasing sequence of positive numbers and lim_{n→∞} a_n = 0. Then the alternating series Σ_{n=1}^∞ (−1)^{n−1} a_n converges.

**Proof.** The odd numbered partial sums, s_1, s_3, s_5, and so on, form a non-increasing sequence, because

s_{2k+3} = s_{2k+1} − a_{2k+2} + a_{2k+3} ≤ s_{2k+1},

since a_{2k+2} ≥ a_{2k+3}. This sequence is bounded below by s_2, so it must converge, say lim_{k→∞} s_{2k+1} = L. Likewise, the partial sums s_2, s_4, s_6, and so on, form a non-decreasing sequence that is bounded above by s_1, so this sequence also converges, say lim_{k→∞} s_{2k} = M. Since lim_{n→∞} a_n = 0 and s_{2k+1} = s_{2k} + a_{2k+1},

L = lim_{k→∞} s_{2k+1} = lim_{k→∞} (s_{2k} + a_{2k+1}) = lim_{k→∞} s_{2k} + lim_{k→∞} a_{2k+1} = M + 0 = M,

so L = M; the two sequences of partial sums converge to the same limit, and this means the entire sequence of partial sums also converges to L.

Another useful fact is implicit in this discussion. Suppose that

L = Σ_{n=1}^∞ (−1)^{n−1} a_n

and that we approximate *L* by a finite part of this sum, say

L ≈ Σ_{n=1}^N (−1)^{n−1} a_n.

Because the terms are decreasing in size, we know that the true value of *L* must be between this approximation and the next one, that is, between

Σ_{n=1}^N (−1)^{n−1} a_n and Σ_{n=1}^{N+1} (−1)^{n−1} a_n.

Depending on whether *N* is odd or even, the second will be larger or smaller than the first.

**EXAMPLE 11.4.2** Approximate the alternating harmonic series to one decimal place. We need to go roughly to the point at which the next term to be added or subtracted is 1/10. Adding up the first nine and the first ten terms we get approximately 0.746 and 0.646. These are 1/10 apart, but it is not clear how the correct value would be rounded. It turns out that we are able to settle the question by computing the sums of the first eleven and first twelve terms, which give 0.737 and 0.653, so correct to one place the value is 0.7.
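The partial sums quoted in this example are easy to recompute (a sketch):

```python
# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - ...
def alt_harmonic(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

assert round(alt_harmonic(9), 3) == 0.746
assert round(alt_harmonic(10), 3) == 0.646
assert round(alt_harmonic(11), 3) == 0.737
assert round(alt_harmonic(12), 3) == 0.653
# Eleven and twelve terms bracket the value tightly enough to round it:
assert round(alt_harmonic(11), 1) == round(alt_harmonic(12), 1) == 0.7
```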

We have considered alternating series with first index 1, and in which the first term is positive, but a little thought shows this is not crucial. The same test applies to any similar series, such as Σ_{n=0}^∞ (−1)^n a_n, Σ_{n=1}^∞ (−1)^n a_n, Σ_{n=17}^∞ (−1)^n a_n, etc.

*Exercises 11.4.*

Determine whether the following series converge or diverge.

1. 2.

3. 4.

5. Approximate to two decimal places.

6. Approximate to two decimal places.

**11.5 COMPARISON TESTS **

As we begin to compile a list of convergent and divergent series, new ones can sometimes be analyzed by comparing them to ones that we already understand.

**EXAMPLE 11.5.1** Does Σ_{n=2}^∞ 1/(n² ln n) converge?

The obvious first approach, based on what we know, is the integral test. Unfortunately, we can’t compute the required antiderivative. But looking at the series, it would appear that it must converge, because the terms we are adding are smaller than the terms of a p-series, that is,

1/(n² ln n) < 1/n²,

when *n* ≥ 3. Since adding up the terms 1/n² doesn’t get “too big”, the new series “should” also converge. Let’s make this more precise.

The series Σ_{n=2}^∞ 1/(n² ln n) converges if and only if Σ_{n=3}^∞ 1/(n² ln n) converges; all we’ve done is dropped the initial term. We know that Σ_{n=3}^∞ 1/n² converges. Looking at two typical partial sums:

s_n = 1/(3² ln 3) + 1/(4² ln 4) + 1/(5² ln 5) + ··· + 1/(n² ln n) < 1/3² + 1/4² + 1/5² + ··· + 1/n² = t_n.

Since the p-series converges, say to *L*, and since the terms are positive, t_n < *L*. Since the terms of the new series are positive, the s_n form an increasing sequence and s_n < t_n < *L* for all *n*. Hence the sequence {s_n} is bounded and so converges.

Sometimes, even when the integral test applies, comparison to a known series is easier, so it’s generally a good idea to think about doing a comparison before doing the integral test.

**EXAMPLE 11.5.2** Does Σ_{n=1}^∞ |sin n|/n² converge?

We can’t apply the integral test here, because the terms of this series are not decreasing. Just as in the previous example, however,

|sin n|/n² ≤ 1/n²,

because |sin n| ≤ 1. Once again the partial sums are non-decreasing and bounded above by Σ 1/n² = *L*, so the new series converges.

Like the integral test, the comparison test can be used to show both convergence and divergence. In the case of the integral test, a single calculation will confirm whichever is the case. To use the comparison test we must first have a good idea as to convergence or divergence and pick the sequence for comparison accordingly.

**EXAMPLE 11.5.3** Does Σ_{n=2}^∞ 1/√(n² − 3) converge?

We observe that the −3 should have little effect compared to the n² inside the square root, and therefore guess that the terms are enough like 1/√(n²) = 1/n that the series should diverge. We attempt to show this by comparison to the harmonic series. We note that

1/√(n² − 3) > 1/√(n²) = 1/n,

so that

s_n = 1/√(2² − 3) + 1/√(3² − 3) + ··· + 1/√(n² − 3) > 1/2 + 1/3 + ··· + 1/n = t_n,

where t_n is 1 less than the corresponding partial sum of the harmonic series (because we start at *n* = 2 instead of *n* = 1). Since lim_{n→∞} t_n = ∞, lim_{n→∞} s_n = ∞ as well.

So the general approach is this: if you believe that a new series is convergent, attempt to find a convergent series whose terms are larger than the terms of the new series; if you believe that a new series is divergent, attempt to find a divergent series whose terms are smaller than the terms of the new series.

**EXAMPLE 11.5.4** Does Σ_{n=1}^∞ 1/√(n² + 3) converge?

Just as in the last example, we guess that this is very much like the harmonic series and so diverges. Unfortunately, 1/√(n² + 3) < 1/n, so we can’t compare the series directly to the harmonic series. A little thought leads us to

1/√(n² + 3) > 1/√(n² + 3n²) = 1/(2n),

so if Σ 1/(2n) diverges then the given series diverges. But since Σ 1/(2n) = (1/2) Σ 1/n, theorem 11.2.2 implies that it does indeed diverge.

For reference we summarize the comparison test in a theorem.

**THEOREM 11.5.5** Suppose that a_n and b_n are non-negative for all n and that a_n ≤ b_n when n ≥ N, for some N.

- If Σ b_n converges, so does Σ a_n.
- If Σ a_n diverges, so does Σ b_n.

**Exercises 11.5**

Determine whether the series converge or diverge.

**11.6 ABSOLUTE CONVERGENCE**

Roughly speaking there are two ways for a series to converge: as in the case of Σ 1/n², the individual terms get small very quickly, so that the sum of all of them stays finite, or, as in the case of Σ (−1)^{n−1}/n, the terms don’t get small fast enough (Σ 1/n diverges), but a mixture of positive and negative terms provides enough cancellation to keep the sum finite. You might guess from what we’ve seen that if the terms get small fast enough to do the job, then whether or not some terms are negative and some positive the series converges.

**THEOREM 11.6.1** If Σ_{n=0}^∞ |a_n| converges, then Σ_{n=0}^∞ a_n converges.

**Proof.** Note that 0 ≤ a_n + |a_n| ≤ 2|a_n|, so by the comparison test Σ_{n=0}^∞ (a_n + |a_n|) converges. Now

Σ_{n=0}^∞ (a_n + |a_n|) − Σ_{n=0}^∞ |a_n| = Σ_{n=0}^∞ a_n

converges by theorem 11.2.2.

So given a series Σ a_n with both positive and negative terms, you should first ask whether Σ |a_n| converges. This may be an easier question to answer, because we have tests that apply specifically to series with non-negative terms. If Σ |a_n| converges then you know that Σ a_n converges as well. If Σ |a_n| diverges then it still may be true that Σ a_n converges; you will have to do more work to decide the question. Another way to think of this result is: it is (potentially) easier for Σ a_n to converge than for Σ |a_n| to converge, because the latter series cannot take advantage of cancellation.

If Σ |a_n| converges we say that Σ a_n **converges absolutely**; to say that a series converges absolutely is to say that any cancellation that happens to come along is not really needed, as the terms already get small so fast that convergence is guaranteed by that alone. If Σ a_n converges but Σ |a_n| does not, we say that Σ a_n **converges conditionally**. For example Σ_{n=1}^∞ (−1)^{n−1}/n² converges absolutely, while Σ_{n=1}^∞ (−1)^{n−1}/n converges conditionally.

**EXAMPLE 11.6.2** Does Σ_{n=1}^∞ (sin n)/n² converge?

In example 11.5.2 we saw that Σ_{n=1}^∞ |sin n|/n² converges, so the given series converges absolutely.

**EXAMPLE 11.6.3** Does Σ_{n=0}^∞ (−1)^n (3n+4)/(2n²+3n+5) converge?

Taking the absolute value, Σ_{n=0}^∞ (3n+4)/(2n²+3n+5) diverges by comparison to Σ_{n=1}^∞ 3/(10n), so if the series converges it does so conditionally. It is true that lim_{n→∞} (3n+4)/(2n²+3n+5) = 0, so to apply the alternating series test we need to know whether the terms are decreasing. If we let f(x) = (3x+4)/(2x²+3x+5) then f′(x) = −(6x²+16x−3)/(2x²+3x+5)², and it is not hard to see that this is negative for x ≥ 1, so the series is decreasing and by the alternating series test it converges.

**Exercises 11.6**

Determine whether each series converges absolutely, converges conditionally, or diverges.

4.

5.

6.

7.

8.

**11.7 THE RATIO AND ROOT TESTS**

Does the series Σ_{n=0}^∞ n⁵/5ⁿ converge? It is possible, but a bit unpleasant, to approach this with the integral test or the comparison test, but there is an easier way. Consider what happens as we move from one term to the next in this series:

··· + n⁵/5ⁿ + (n+1)⁵/5^{n+1} + ···

The denominator goes up by a factor of 5, 5^{n+1} = 5·5ⁿ, but the numerator goes up by much less: (n+1)⁵ = n⁵ + 5n⁴ + 10n³ + 10n² + 5n + 1, which is much less than 5·n⁵ when n is large, because 5n⁴ is much less than n⁵. So we might guess that in the long run it begins to look as if each term is 1/5 of the previous term. We have seen series that behave like this: Σ_{n=0}^∞ 1/5ⁿ = 5/4, a geometric series. So we might try comparing the given series to some variation of this geometric series. This is possible, but a bit messy. We can in effect do the same thing, but bypass most of the unpleasant work.

The key is to notice that

lim_{n→∞} a_{n+1}/a_n = lim_{n→∞} ((n+1)⁵/5^{n+1}) · (5ⁿ/n⁵) = lim_{n→∞} (1/5)·((n+1)/n)⁵ = 1/5.

This is really just what we noticed above, done a bit more officially: in the long run, each term is one fifth of the previous term. Now pick some number between 1/5 and 1, say 1/2.

Because a_{n+1}/a_n approaches 1/5, when n is big enough, say n ≥ N for some N, a_{n+1}/a_n < 1/2 and a_{n+1} < a_n/2.

So a_{N+1} < a_N/2, a_{N+2} < a_{N+1}/2 < a_N/4, a_{N+3} < a_{N+2}/2 < a_N/8, and so on. The general form is a_{N+k} < a_N/2^k. So if we look at the series

a_N + a_{N+1} + a_{N+2} + a_{N+3} + ···,

its terms are less than or equal to the terms of the sequence

a_N + a_N/2 + a_N/4 + a_N/8 + ··· = Σ_{k=0}^∞ a_N/2^k = 2a_N.

So by the comparison test, Σ_{k=0}^∞ a_{N+k} converges, and this means that Σ_{n=0}^∞ a_n converges, since we’ve just added the fixed number a_0 + a_1 + ··· + a_{N−1}.

Under what circumstances could we do this? What was crucial was that the limit of a_{n+1}/a_n, say L, was less than 1 so that we could pick a value r so that L < r < 1. The fact that L < r (1/5 < 1/2 in our example) means that we can compare the series Σ a_n to Σ r^n, and the fact that r < 1 guarantees that Σ r^n converges. That’s really all that is required to make the argument work. We also made use of the fact that the terms of the series were positive; in general we simply consider the absolute values of the terms and we end up testing for absolute convergence.

**THEOREM 11.7.1 The Ratio Test** Suppose that lim_{n→∞} |a_{n+1}|/|a_n| = L. If L < 1 the series Σ a_n converges absolutely, if L > 1 the series diverges, and if L = 1 this test gives no information.

**Proof.** The example above essentially proves the first part of this, if we simply replace 1/5 by L and 1/2 by r. Suppose that L > 1, and pick r so that 1 < r < L. Then for n ≥ N, for some N,

|a_{n+1}|/|a_n| > r and |a_{n+1}| > r|a_n|.

This implies that |a_{N+k}| > r^k |a_N|, but since r > 1 this means that lim_{k→∞} |a_{N+k}| ≠ 0, which means also that lim_{n→∞} a_n ≠ 0. By the divergence test, the series diverges.

To see that we get no information when L = 1, we need to exhibit two series with L = 1, one that converges and one that diverges. It is easy to see that Σ 1/n² and Σ 1/n do the job.

**EXAMPLE 11.7.2** The ratio test is particularly useful for series involving the factorial function. Consider Σ_{n=0}^∞ 5ⁿ/n!.

Since

lim_{n→∞} (5^{n+1}/(n+1)!) · (n!/5ⁿ) = lim_{n→∞} (5^{n+1}/5ⁿ) · (n!/(n+1)!) = lim_{n→∞} 5/(n+1) = 0 < 1,

the series converges.
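Taking the series to be Σ 5ⁿ/n! as in this example, the ratio can be checked exactly with rational arithmetic (a sketch):

```python
from fractions import Fraction
import math

# For a_n = 5^n / n!, the ratio a_{n+1}/a_n is exactly 5/(n+1),
# which tends to 0 < 1, so the ratio test gives convergence.
def a(n):
    return Fraction(5**n, math.factorial(n))

for n in [1, 10, 100]:
    assert a(n + 1) / a(n) == Fraction(5, n + 1)
```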

A similar argument, which we will not do, justifies a similar test that is occasionally easier to apply.

**THEOREM 11.7.3 The Root Test** Suppose that lim_{n→∞} |a_n|^{1/n} = *L*. If *L* < 1 the series Σ a_n converges absolutely, if *L* > 1 the series diverges, and if *L* = 1 this test gives no information.

The proof of the root test is actually easier than that of the ratio test, and is a good exercise.

**EXAMPLE 11.7.4** Analyze Σ_{n=1}^∞ 5ⁿ/nⁿ.

The ratio test turns out to be a bit difficult on this series (try it). Using the root test:

lim_{n→∞} (5ⁿ/nⁿ)^{1/n} = lim_{n→∞} 5/n = 0.

Since 0 < 1, the series converges.

The root test is frequently useful when *n* appears as an exponent in the general term of the series.


*Exercises 11.7.*

1. Compute lim_{n→∞} |a_{n+1}/a_n| for the series
2. Compute lim_{n→∞} |a_{n+1}/a_n| for the series
3. Compute lim_{n→∞} |a_n|^{1/n} for the series
4. Compute lim_{n→∞} |a_n|^{1/n} for the series

Determine whether the series converge.

5.
6.
7.
8.

9. Prove theorem 11.7.3, the root test.

**11.8 Power Series**

Recall that we were able to analyze all geometric series “simultaneously” to discover that

Σ_{n=0}^∞ k·xⁿ = k/(1 − x),

if |x| < 1, and that the series diverges when |x| ≥ 1. At the time, we thought of *x* as an unspecified constant, but we could just as well think of it as a variable, in which case the series

Σ_{n=0}^∞ k·xⁿ

is a function, namely, the function *k*/(1 − *x*), as long as |x| < 1. While *k*/(1 − *x*) is a reasonably easy function to deal with, the more complicated Σ k·xⁿ does have its attractions: it appears to be an infinite version of one of the simplest function types, a polynomial.

This leads naturally to the questions: do other functions have representations as series?

Is there an advantage to viewing them in this way?

The geometric series has a special feature that makes it unlike a typical polynomial: the coefficients of the powers of $x$ are the same, namely $k$. We will need to allow more general coefficients if we are to get anything other than the geometric series.

**DEFINITION 11.8.1** A power series has the form

$$\sum_{n=0}^\infty a_n x^n,$$

with the understanding that $a_n$ may depend on $n$ but not on $x$.

**EXAMPLE 11.8.2** $\sum_{n=1}^\infty \dfrac{x^n}{n}$ is a power series. We can investigate convergence using the ratio test:

$$\lim_{n\to\infty} \frac{|x|^{n+1}}{n+1}\,\frac{n}{|x|^n} = \lim_{n\to\infty} |x|\frac{n}{n+1} = |x|.$$

Thus when $|x| < 1$ the series converges and when $|x| > 1$ it diverges, leaving only two values in doubt. When $x = 1$ the series is the harmonic series and diverges; when $x = -1$ it is the alternating harmonic series (actually the negative of the usual alternating harmonic series) and converges. Thus, we may think of $\sum_{n=1}^\infty x^n/n$ as a function from the interval $[-1,1)$ to the real numbers.

A bit of thought reveals that the ratio test applied to a power series will always have the same nice form. In general, we will compute

$$\lim_{n\to\infty} \frac{|a_{n+1}||x|^{n+1}}{|a_n||x|^n} = \lim_{n\to\infty} |x|\frac{|a_{n+1}|}{|a_n|} = |x|\lim_{n\to\infty} \frac{|a_{n+1}|}{|a_n|} = L|x|,$$

assuming that $\lim |a_{n+1}|/|a_n|$ exists. Then the series converges if $L|x| < 1$, that is, if $|x| < 1/L$, and diverges if $|x| > 1/L$. Only the two values $x = \pm 1/L$ require further investigation. Thus the series will definitely define a function on the interval $(-1/L, 1/L)$, and perhaps will extend to one or both endpoints as well. Two special cases deserve mention: if $L = 0$ the limit is $0$ no matter what value $x$ takes, so the series converges for all $x$ and the function is defined for all real numbers. If $L = \infty$, then no matter what value $x$ takes the limit is infinite and the series converges only when $x = 0$. The value $1/L$ is called the radius of convergence of the series, and the interval on which the series converges is the interval of convergence.
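The coefficient-ratio computation can be sketched numerically. This is an illustration of my own (not from the text), using coefficients $a_n = 1/n$, for which $L = 1$ and the radius of convergence is $1/L = 1$:

```python
# Illustration (my own, not from the text): estimating the radius of
# convergence 1/L from coefficient ratios, here for coefficients a_n = 1/n.
def coeff(n):
    return 1.0 / n

n = 10**6
L = coeff(n + 1) / coeff(n)   # |a_{n+1}|/|a_n| = n/(n+1) -> L = 1
radius = 1.0 / L              # so the radius of convergence is 1/L = 1
print(L, radius)
```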

Consider again the geometric series,

$$\sum_{n=0}^\infty x^n = \frac{1}{1-x}.$$

Whatever benefits there might be in using the series form of this function are only available to us when $x$ is between $-1$ and $1$. Frequently we can address this shortcoming by modifying the power series slightly. Consider this series:

$$\sum_{n=0}^\infty \frac{(x+2)^n}{3^n} = \sum_{n=0}^\infty \left(\frac{x+2}{3}\right)^n = \frac{1}{1-\frac{x+2}{3}} = \frac{3}{1-x},$$

because this is just a geometric series with $x$ replaced by $(x+2)/3$. Multiplying both sides by $1/3$ gives

$$\sum_{n=0}^\infty \frac{(x+2)^n}{3^{n+1}} = \frac{1}{1-x},$$

the same function as before. For what values of $x$ does this series converge? Since it is a geometric series, we know that it converges when

$$\left|\frac{x+2}{3}\right| < 1, \quad\text{that is, } |x+2| < 3, \quad\text{or } -5 < x < 1.$$

So we have a series representation for $1/(1-x)$ that works on a larger interval than before, at the expense of a somewhat more complicated series. The endpoints of the interval of convergence now are $-5$ and $1$, but note that they can be more compactly described as $-2 \pm 3$. We say that $3$ is the radius of convergence, and we now say that the series is centered at $-2$.
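A numeric check of my own confirms that the re-centered series really does represent $1/(1-x)$ at points the original series cannot reach:

```python
# Numeric check (mine): partial sums of sum (x+2)**n / 3**(n+1) should match
# 1/(1-x) for -5 < x < 1, e.g. at x = -4, which the plain series sum x**n
# cannot reach.
def shifted_partial(x, k):
    return sum((x + 2.0)**n / 3.0**(n + 1) for n in range(k))

x = -4.0
approx = shifted_partial(x, 200)
exact = 1.0 / (1.0 - x)       # = 0.2
print(approx, exact)
```

At $x = -4$ the common ratio is $(x+2)/3 = -2/3$, comfortably inside the interval of convergence, so the partial sums converge quickly.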

**DEFINITION 11.8.3** A power series centered at $a$ has the form

$$\sum_{n=0}^\infty a_n(x-a)^n,$$

with the understanding that $a_n$ may depend on $n$ but not on $x$.

**Exercises 11.8**

Find the radius and interval of convergence for each series. In exercises 3 and 4, do not attempt to determine whether the endpoints are in the interval of convergence.

1.
2.
3.
4.
5.
6.

**11.9 Calculus with Power Series**

Now we know that some functions can be expressed as power series, which look like infinite polynomials. Since calculus, that is, computation of derivatives and antiderivatives, is easy for polynomials, the obvious question is whether the same is true for infinite series. The answer is yes.

**THEOREM 11.9.1** Suppose the power series $f(x) = \sum_{n=0}^\infty a_n(x-a)^n$ has radius of convergence $R$. Then

$$f'(x) = \sum_{n=0}^\infty n a_n(x-a)^{n-1}, \qquad \int f(x)\,dx = C + \sum_{n=0}^\infty \frac{a_n}{n+1}(x-a)^{n+1},$$

and these two series have radius of convergence $R$ as well.
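Term-by-term differentiation is easy to check numerically. The sketch below (my own, not from the text) differentiates the geometric series term by term and compares with the known derivative of $1/(1-x)$:

```python
# A quick check (my own) of term-by-term differentiation: differentiating
# the geometric series sum x**n gives sum n*x**(n-1), which should equal
# d/dx [1/(1-x)] = 1/(1-x)**2 inside the radius of convergence.
def deriv_partial(x, k):
    return sum(n * x**(n - 1) for n in range(1, k))

x = 0.5
approx = deriv_partial(x, 200)
exact = 1.0 / (1.0 - x)**2    # = 4.0
print(approx, exact)
```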

**EXAMPLE 11.9.2** Starting with the geometric series:

$$\frac{1}{1-x} = \sum_{n=0}^\infty x^n$$

$$\int \frac{1}{1-x}\,dx = -\ln|1-x| = \sum_{n=0}^\infty \frac{1}{n+1}x^{n+1}$$

$$\ln|1-x| = \sum_{n=0}^\infty -\frac{1}{n+1}x^{n+1}$$

when $|x| < 1$. The series does not converge when $x = 1$ but does converge when $x = -1$ or $1-x = 2$. The interval of convergence is $[-1,1)$, or $0 < 1-x \le 2$, so we can use the series to represent $\ln(x)$ when $0 < x \le 2$. For example

$$\ln(3/2) = \ln(1-(-1/2)) = \sum_{n=0}^\infty (-1)^n \frac{1}{n+1}\frac{1}{2^{n+1}}$$

and so

$$\ln(3/2) \approx \frac12 - \frac18 + \frac1{24} - \frac1{64} + \frac1{160} - \frac1{384} + \frac1{896} = \frac{909}{2240} \approx 0.406.$$

Because this is an alternating series with decreasing terms, we know that the true value is between $909/2240$ and $909/2240 - 1/2048 = 29053/71680 \approx 0.4053$, so correct to two decimal places the value is $0.41$.
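The numbers 909/2240 and the 1/2048 error bound can be verified with exact rational arithmetic:

```python
from fractions import Fraction
import math

# Exact-arithmetic check of the estimate in the text: the partial sum of
# the series for ln(3/2) through the 1/896 term is 909/2240, and subtracting
# the next term 1/2048 gives the alternating-series lower bound.
partial = sum(Fraction((-1)**n, (n + 1) * 2**(n + 1)) for n in range(7))
lower = partial - Fraction(1, 2048)
print(partial, float(lower), math.log(1.5))
```

Since the series alternates with decreasing terms, the true value of $\ln(3/2)$ must lie between `lower` and `partial`, which the printed values confirm.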

What about $\ln(9/4)$? Since $9/4$ is larger than $2$ we cannot use the series directly, but

$$\ln(9/4) = \ln((3/2)^2) = 2\ln(3/2),$$

so in fact we get a lot more from this one calculation than first meets the eye. To estimate the true value accurately we actually need to be a bit more careful. When we multiply by two we know that the true value is between $0.8106$ and $0.812$, so rounded to two decimal places the true value is $0.81$.

**Exercises 11.9**

1. Find a series representation for $\ln 2$.
2. Find a power series representation for $1/$
3. Find a power series representation for $2/$
4. Find a power series representation for $1/$ What is the radius of convergence?
5. Find a power series representation for

**11.10 Taylor Series**

We have seen that some functions can be represented as series, which may give valuable information about the function. So far, we have seen only those examples that result from manipulation of our one fundamental example, the geometric series. We would like to start with a given function and produce a series to represent it, if possible.

Suppose that $f(x) = \sum_{n=0}^\infty a_n x^n$ on some interval of convergence. Then we know that we can compute derivatives of $f$ by taking derivatives of the terms of the series. Let's look at the first few in general:

$$f'(x) = \sum_{n=1}^\infty n a_n x^{n-1} = a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + \cdots$$

$$f''(x) = \sum_{n=2}^\infty n(n-1) a_n x^{n-2} = 2a_2 + 3\cdot 2\,a_3x + 4\cdot 3\,a_4x^2 + \cdots$$

$$f'''(x) = \sum_{n=3}^\infty n(n-1)(n-2) a_n x^{n-3} = 3\cdot 2\,a_3 + 4\cdot 3\cdot 2\,a_4x + \cdots$$

By examining these it's not hard to discern the general pattern. The $k$th derivative must be