4 Absolute and Conditional Convergence
4.1 Absolute Convergence and Alternating Series
In the previous chapters, we established the Cauchy criterion as the fundamental test for series convergence. A series \sum a_n converges if and only if its tail sums \sum_{k=n+1}^{m} a_k can be made arbitrarily small for sufficiently large n. This criterion requires no knowledge of the limit—only control over tail behavior.
But the Cauchy criterion, while theoretically complete, can be difficult to verify directly. For series with nonnegative terms, the situation simplifies: partial sums form a monotone sequence, and various comparison arguments become available. We will develop these comparison tests systematically later in this chapter.
The present chapter addresses a different problem: series with both positive and negative terms. Consider \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots
The partial sums oscillate—they do not increase monotonically, and monotonicity-based arguments fail. Yet the series might converge through systematic cancellation between positive and negative terms. How can we test for convergence when signs alternate or vary irregularly?
This chapter introduces two approaches. First, we consider absolute convergence: ignore the signs and test whether the series of absolute values converges. This reduces the problem to nonnegative series, which we learn to test in the sections that follow. Second, we develop the alternating series test, which exploits systematic cancellation when signs alternate in a controlled pattern. The distinction between absolute and conditional convergence will reveal fundamental properties of how infinite sums behave.
4.2 Ignoring Signs
When faced with a series of mixed signs, a natural first approach is to consider whether the series of absolute values converges.
Consider \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2} = 1 - \frac{1}{4} + \frac{1}{9} - \frac{1}{16} + \cdots. The series of absolute values is \sum_{n=1}^{\infty} \left|\frac{(-1)^{n+1}}{n^2}\right| = \sum_{n=1}^{\infty} \frac{1}{n^2}.
Intuitively, if the terms are becoming small rapidly enough that their absolute values sum to a finite number, then adding signs should not disrupt convergence—the positive and negative contributions should still produce a finite total. This intuition is correct, and it motivates our first definition.
Definition 4.1 (Absolute Convergence) A series \sum a_n converges absolutely if the series of absolute values \sum |a_n| converges.
The definition simply formalizes the strategy: replace every term with its absolute value and test the resulting nonnegative series. Whether that nonnegative series converges can be determined by comparison tests, ratio tests, and other techniques developed in the sections that follow. For now, we take absolute convergence as a defined property and explore its consequences.
4.3 Absolute Convergence Implies Convergence
The definition of absolute convergence states a condition on \sum |a_n|. It says nothing yet about whether \sum a_n itself converges. The relationship between the two must be established.
Intuitively, if the absolute values sum to a finite number, then the terms themselves—whether positive or negative—cannot contribute more than that finite amount. The signs might cause partial cancellation, reducing the sum, but they cannot make it diverge. This intuition is precise.
Theorem 4.1 (Absolute Convergence Implies Convergence) If \sum a_n converges absolutely, then \sum a_n converges.
Suppose \sum |a_n| converges. We verify that \sum a_n satisfies the Cauchy criterion.
Since \sum |a_n| converges, its partial sums form a Cauchy sequence. Given \varepsilon > 0, there exists N such that for all m > n \geq N, \sum_{k=n+1}^{m} |a_k| < \varepsilon.
Now consider the tail sum of the original series. By the triangle inequality, \left|\sum_{k=n+1}^{m} a_k\right| \leq \sum_{k=n+1}^{m} |a_k| < \varepsilon.
The tail sums of \sum a_n can be made arbitrarily small. By the Cauchy criterion for series, \sum a_n converges. \square
The proof shows that absolute convergence controls tail sums of \sum |a_n|, and the triangle inequality transfers this control to tail sums of \sum a_n. The signs cannot amplify the tails—they can only reduce them through cancellation.
To prove convergence of a series with mixed signs, it suffices to prove absolute convergence. Test \sum |a_n| using whatever methods apply to nonnegative series. If \sum |a_n| converges, the original series converges automatically.
But how do we test whether \sum |a_n| converges? The remainder of this chapter develops three tests: the comparison, ratio, and root tests. These apply to any series with nonnegative terms, giving us tools to verify absolute convergence.
4.4 Comparison Test
When partial sums are monotone increasing (as they are for nonnegative series), convergence reduces to boundedness. If we can bound the partial sums of one series by those of a known convergent series, we obtain convergence.
Theorem 4.2 (Comparison Test) Let 0 \leq a_n \leq b_n for all n \geq N for some N.
(i) If \sum b_n converges, then \sum a_n converges.
(ii) If \sum a_n diverges, then \sum b_n diverges.
Since convergence depends only on tail behavior, we may assume 0 \leq a_n \leq b_n for all n.
(i) Suppose \sum b_n converges to B. The partial sums s_n = \sum_{k=1}^{n} a_k satisfy s_n = \sum_{k=1}^{n} a_k \leq \sum_{k=1}^{n} b_k \leq B.
The sequence \{s_n\} is increasing (since a_n \geq 0) and bounded above by B. By the Monotone Convergence Theorem, s_n converges. Therefore \sum a_n converges.
(ii) This is the contrapositive of (i). \square
The comparison test requires finding a suitable comparison series. For series involving n^p, we often compare to p-series.
Definition 4.2 (p-Series) A series of the form \sum_{n=1}^{\infty} \frac{1}{n^p} is called a p-series.
Example 4.1 (Convergence of \sum 1/n^2) The series \sum \frac{1}{n^2} converges. To see this, use the comparison from earlier:
\frac{1}{n^2} < \frac{1}{n(n-1)} = \frac{1}{n-1} - \frac{1}{n} \quad \text{for } n \geq 2.
The right side telescopes:
\sum_{n=2}^{m} \left(\frac{1}{n-1} - \frac{1}{n}\right) = 1 - \frac{1}{m} < 1.
By comparison, \sum_{n=2}^{\infty} \frac{1}{n^2} converges. We will establish the general convergence criterion for p-series shortly.
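For readers who wish to experiment, the telescoping bound can be checked numerically. The sketch below (the helper name is illustrative, not from the text) confirms that the partial sums of \sum 1/n^2 increase but never exceed 1 + 1 = 2: the first term contributes 1, and the telescoping comparison caps everything after it below 1.

```python
def partial_sum(m):
    """Partial sum s_m = sum_{n=1}^{m} 1/n^2."""
    return sum(1.0 / (n * n) for n in range(1, m + 1))

# Increasing partial sums, all below the bound 2 from the
# telescoping comparison (the actual limit is pi^2/6 ~ 1.6449).
values = [partial_sum(m) for m in (10, 100, 1_000, 10_000)]
```

The monotone, bounded behavior of these partial sums is exactly what the Monotone Convergence Theorem turns into a convergence proof.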
Example 4.2 (Divergence of \sum 1/\sqrt{n}) Test \sum_{n=1}^{\infty} \frac{1}{\sqrt{n}} for divergence.
For all n \geq 1, we have \sqrt{n} \leq n, so \frac{1}{\sqrt{n}} \geq \frac{1}{n}. The harmonic series \sum \frac{1}{n} diverges. By comparison, \sum \frac{1}{\sqrt{n}} diverges.
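The divergence in Example 4.2 is visible numerically, though it is slow. The following sketch compares partial sums of \sum 1/\sqrt{n} against the harmonic series; the former dominates the latter term by term, and its partial sums grow roughly like 2\sqrt{N}.

```python
import math

def sqrt_series_partial(n):
    """Partial sum of 1/sqrt(k) for k = 1..n."""
    return sum(1.0 / math.sqrt(k) for k in range(1, n + 1))

def harmonic_partial(n):
    """Partial sum of the harmonic series 1/k for k = 1..n."""
    return sum(1.0 / k for k in range(1, n + 1))
```

By n = 10{,}000 the partial sum of \sum 1/\sqrt{n} has already passed 100, while the harmonic partial sum is still below 10—both diverge, but at very different rates.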
4.4.1 The Limit Comparison Test
Often, finding a strict inequality a_n \leq b_n is difficult, but we can show that a_n and b_n have the same growth rate. The limit comparison test handles this situation.
Theorem 4.3 (Limit Comparison Test) Let a_n, b_n > 0 for all n. If
\lim_{n \to \infty} \frac{a_n}{b_n} = L
where 0 < L < \infty, then \sum a_n and \sum b_n either both converge or both diverge.
Since \lim_{n \to \infty} \frac{a_n}{b_n} = L with 0 < L < \infty, there exists N such that for all n \geq N,
\frac{L}{2} < \frac{a_n}{b_n} < 2L.
This gives \frac{L}{2} b_n < a_n < 2L b_n for n \geq N.
If \sum b_n converges, then \sum 2L b_n converges, so by comparison \sum a_n converges.
If \sum a_n converges, then since \frac{L}{2} b_n < a_n for n \geq N, comparison shows \sum \frac{L}{2} b_n converges, and hence \sum b_n converges. \square
Example 4.3 (Limit Comparison Test Applied) Test \sum_{n=1}^{\infty} \frac{2n + 3}{n^2 + 1} for convergence.
The dominant term in the numerator is 2n, and in the denominator is n^2. Compare to b_n = \frac{n}{n^2} = \frac{1}{n}: \lim_{n \to \infty} \frac{(2n+3)/(n^2+1)}{1/n} = \lim_{n \to \infty} \frac{n(2n+3)}{n^2+1} = \lim_{n \to \infty} \frac{2n^2+3n}{n^2+1} = 2.
Since \sum \frac{1}{n} diverges and the limit is finite and positive, \sum \frac{2n+3}{n^2+1} diverges.
Example 4.4 (Limit Comparison with 1/n^2) Test \sum_{n=1}^{\infty} \frac{1}{n^2 + \sqrt{n}} for convergence.
Compare to b_n = \frac{1}{n^2}: \lim_{n \to \infty} \frac{1/(n^2 + \sqrt{n})}{1/n^2} = \lim_{n \to \infty} \frac{n^2}{n^2 + \sqrt{n}} = \lim_{n \to \infty} \frac{1}{1 + 1/n^{3/2}} = 1.
Since \sum \frac{1}{n^2} converges and the limit equals 1, the series \sum \frac{1}{n^2 + \sqrt{n}} converges.
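The limits computed in Examples 4.3 and 4.4 can be sanity-checked numerically. The sketch below (helper names are illustrative) evaluates the ratio a_n/b_n at a large index and confirms it is close to the claimed limit in each case.

```python
def ratio_ex43(n):
    """a_n / b_n for Example 4.3: a_n = (2n+3)/(n^2+1), b_n = 1/n."""
    return (n * (2 * n + 3)) / (n * n + 1)

def ratio_ex44(n):
    """a_n / b_n for Example 4.4: a_n = 1/(n^2+sqrt(n)), b_n = 1/n^2."""
    return (n * n) / (n * n + n ** 0.5)
```

At n = 10^6 the first ratio sits within 10^{-5} of 2 and the second within 10^{-6} of 1, consistent with the limits used in the test.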
4.5 The Ratio Test
For series where consecutive terms involve factorials or exponentials—including geometric series (Section 2.6)—the ratio of successive terms often reveals convergence behavior. This leads to the ratio test.
Theorem 4.4 (Ratio Test) Let a_n > 0 for all n. Suppose \lim_{n \to \infty} \frac{a_{n+1}}{a_n} = L.
(i) If L < 1, then \sum a_n converges.
(ii) If L > 1 (or L = \infty), then \sum a_n diverges.
(iii) If L = 1, the test is inconclusive.
(i) Suppose L < 1. Choose r such that L < r < 1. Since \frac{a_{n+1}}{a_n} \to L, there exists N such that for all n \geq N, \frac{a_{n+1}}{a_n} < r.
This gives a_{n+1} < r a_n for n \geq N. Iterating: a_{N+k} < r^k a_N.
The series \sum_{k=0}^{\infty} r^k a_N is geometric with ratio r < 1, hence converges. By comparison, \sum_{n=N}^{\infty} a_n converges. Therefore \sum a_n converges.
(ii) Suppose L > 1. There exists N such that for n \geq N, we have \frac{a_{n+1}}{a_n} > 1, so a_{n+1} > a_n. The terms increase from index N onward, so a_n \geq a_N > 0 for all n \geq N, and therefore a_n \not\to 0. By the divergence test, \sum a_n diverges.
(iii) Examples show the test fails when L = 1. The harmonic series has \frac{a_{n+1}}{a_n} = \frac{n}{n+1} \to 1 and diverges. The series \sum \frac{1}{n^2} has \frac{a_{n+1}}{a_n} = \frac{n^2}{(n+1)^2} \to 1 and converges. \square
Example 4.5 (Ratio Test with Factorial) Test \sum_{n=1}^{\infty} \frac{n!}{2^n} for convergence.
Compute the ratio: \frac{a_{n+1}}{a_n} = \frac{(n+1)!/2^{n+1}}{n!/2^n} = \frac{n+1}{2} \to \infty.
By the ratio test, the series diverges.
Example 4.6 (Ratio Test with Exponential over Factorial) Test \sum_{n=1}^{\infty} \frac{2^n}{n!} for convergence.
Compute the ratio:
\frac{a_{n+1}}{a_n} = \frac{2^{n+1}/(n+1)!}{2^n/n!} = \frac{2}{n+1} \to 0 < 1.
By the ratio test, the series converges.
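A numerical sketch of Example 4.6: the successive ratios equal 2/(n+1), which fall below 1 and tend to 0, and the partial sums stabilize quickly. (The full series in fact sums to e^2 - 1, a fact from the power series for e^x developed later; it is used here only as a check, not as part of the argument.)

```python
import math

def term(n):
    """a_n = 2^n / n! for Example 4.6."""
    return 2.0 ** n / math.factorial(n)

# Successive ratios a_{n+1}/a_n = 2/(n+1), tending to 0.
ratios = [term(n + 1) / term(n) for n in range(1, 20)]

# Partial sum of the first 50 terms; the tail beyond n = 50 is
# astronomically small, so this agrees with e^2 - 1 to high precision.
partial = sum(term(n) for n in range(1, 51))
```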
4.6 The Root Test
The root test generalizes geometric series (Section 2.6) by examining the n-th root of terms. It is particularly effective when terms involve n-th powers.
Theorem 4.5 (Root Test) Let a_n \geq 0 for all n. Suppose \lim_{n \to \infty} \sqrt[n]{a_n} = L.
(i) If L < 1, then \sum a_n converges.
(ii) If L > 1 (or L = \infty), then \sum a_n diverges.
(iii) If L = 1, the test is inconclusive.
(i) Suppose L < 1. Choose r such that L < r < 1. Since \sqrt[n]{a_n} \to L, there exists N such that for all n \geq N, \sqrt[n]{a_n} < r.
This gives a_n < r^n for n \geq N. The series \sum r^n is geometric with ratio r < 1, hence converges. By comparison, \sum a_n converges.
(ii) Suppose L > 1. Then \sqrt[n]{a_n} > 1 for all sufficiently large n, so a_n > 1. The terms do not approach zero. By the divergence test, \sum a_n diverges.
(iii) As with the ratio test, both \sum \frac{1}{n} and \sum \frac{1}{n^2} have \sqrt[n]{a_n} \to 1 but behave differently. \square
Example 4.7 (Root Test Convergence) Test \sum_{n=1}^{\infty} \left(\frac{n}{2n+1}\right)^n for convergence.
Compute the root: \sqrt[n]{a_n} = \frac{n}{2n+1} = \frac{1}{2 + 1/n} \to \frac{1}{2} < 1.
By the root test, the series converges.
Example 4.8 (Root Test Divergence) Test \sum_{n=1}^{\infty} \left(\frac{2n+1}{n}\right)^n for convergence.
Compute the root: \sqrt[n]{a_n} = \frac{2n+1}{n} = 2 + \frac{1}{n} \to 2 > 1.
By the root test, the series diverges.
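The n-th roots in Examples 4.7 and 4.8 can be recovered numerically: build a_n, take its n-th root, and compare with the limiting values 1/2 and 2. This sketch uses a moderate index so the intermediate powers stay within floating-point range.

```python
def nth_root_convergent(n):
    """n-th root of a_n = (n/(2n+1))^n from Example 4.7."""
    a_n = (n / (2 * n + 1)) ** n
    return a_n ** (1.0 / n)

def nth_root_divergent(n):
    """n-th root of a_n = ((2n+1)/n)^n from Example 4.8."""
    a_n = ((2 * n + 1) / n) ** n
    return a_n ** (1.0 / n)
```

At n = 500 the computed roots are within 0.01 of 1/2 and 2 respectively, matching the limits \frac{1}{2+1/n} and 2 + \frac{1}{n}.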
4.7 Applying the Tests
Now that we have tests for nonnegative series, we can test for absolute convergence.
Example 4.9 (Testing for Absolute Convergence) Test \sum_{n=1}^{\infty} \frac{(-1)^n n}{2^n} for absolute convergence.
Examine \sum \frac{n}{2^n}. Apply the ratio test: \frac{a_{n+1}}{a_n} = \frac{(n+1)/2^{n+1}}{n/2^n} = \frac{n+1}{2n} \to \frac{1}{2} < 1.
The series converges absolutely.
Example 4.10 (Absolute Convergence with Sine) Test \sum_{n=1}^{\infty} \frac{\sin(n)}{n^2} for absolute convergence.
We have \left|\frac{\sin(n)}{n^2}\right| \leq \frac{1}{n^2}. The series \sum \frac{1}{n^2} converges. By comparison, \sum \left|\frac{\sin(n)}{n^2}\right| converges. The series converges absolutely.
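The comparison in Example 4.10 is easy to verify numerically: each term |\sin(n)|/n^2 is dominated by 1/n^2, so the partial sums of the absolute series stay below those of \sum 1/n^2, hence below \pi^2/6. A quick sketch:

```python
import math

N = 100_000
terms_abs = [abs(math.sin(n)) / (n * n) for n in range(1, N + 1)]
bound = [1.0 / (n * n) for n in range(1, N + 1)]
```

The termwise domination is exactly the hypothesis of the comparison test, and the bounded partial sums are what it converts into convergence.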
4.8 Conditional Convergence
Absolute convergence is a sufficient condition for convergence, not a necessary one. A series can converge without converging absolutely. When this occurs, something subtle is happening: the series converges through cancellation between positive and negative terms, not because the terms themselves are becoming negligible in absolute size.
Consider the alternating harmonic series \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots
To test absolute convergence, we examine \sum_{n=1}^{\infty} \left|\frac{(-1)^{n+1}}{n}\right| = \sum_{n=1}^{\infty} \frac{1}{n}.
This is the harmonic series. We proved its divergence using Theorem 3.5: the tail sum from a_{n+1} to a_{2n} satisfies \sum_{k=n+1}^{2n} \frac{1}{k} \geq n \cdot \frac{1}{2n} = \frac{1}{2}, so the Cauchy condition fails with \varepsilon = 1/4. The series does not converge absolutely.
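The tail-sum bound above can be observed directly. The sketch below computes \sum_{k=n+1}^{2n} 1/k for several values of n; each tail exceeds 1/2 (in fact the tails increase toward \ln 2), so the Cauchy criterion can never be satisfied.

```python
def harmonic_tail(n):
    """Tail sum 1/(n+1) + ... + 1/(2n) of the harmonic series."""
    return sum(1.0 / k for k in range(n + 1, 2 * n + 1))
```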
Yet we might still ask: does the original series \sum \frac{(-1)^{n+1}}{n} converge? The divergence of \sum |a_n| does not immediately answer this question. It tells us only that convergence, if it occurs, cannot be explained by the absolute sizes of terms becoming small. The alternating signs might produce enough cancellation to yield convergence even though the absolute series diverges.
This motivates a new concept.
Definition 4.3 (Conditional Convergence) A series \sum a_n converges conditionally if \sum a_n converges but \sum |a_n| diverges.
Conditional convergence is inherently more delicate than absolute convergence. It represents a situation where the infinite sum exists only because positive and negative contributions balance in a precise way. Remove this balance—say, by rearranging the terms—and convergence can be destroyed. We return to this phenomenon at the chapter’s end for the curious reader.
For now, we face a practical question: how do we test for convergence when absolute convergence fails?
4.9 The Alternating Series Test
An alternating series is one whose terms switch between positive and negative in a regular pattern. The standard form is \sum_{n=1}^{\infty} (-1)^{n+1} b_n = b_1 - b_2 + b_3 - b_4 + \cdots where each b_n > 0. The factors (-1)^{n+1} impose the alternation: positive, negative, positive, negative, and so on.
When an alternating series has terms that decrease steadily toward zero, the partial sums oscillate but converge. The positive terms pull the sum in one direction, the negative terms pull it back, and the oscillations dampen because each term is smaller than the last. This controlled cancellation ensures convergence.
To see why, consider the even partial sums: s_{2n} = (b_1 - b_2) + (b_3 - b_4) + \cdots + (b_{2n-1} - b_{2n}).
Each parenthesis contains a positive term minus a smaller positive term (since b_k \geq b_{k+1}), yielding a positive contribution. The even partial sums increase. But we can also write s_{2n} = b_1 - (b_2 - b_3) - (b_4 - b_5) - \cdots - (b_{2n-2} - b_{2n-1}) - b_{2n}.
Each parenthesis is now positive (since b_k > b_{k+1}), and we subtract them from b_1. This shows s_{2n} \leq b_1. The even partial sums form an increasing sequence bounded above—they must converge by the Monotone Convergence Theorem.
The odd partial sums s_{2n+1} = s_{2n} + b_{2n+1} then converge to the same limit, since b_{2n+1} \to 0. This reasoning establishes the alternating series test.
Theorem 4.6 (Alternating Series Test (Leibniz Test)) Let b_n > 0 for all n. If
(i) b_{n+1} \leq b_n for all n (decreasing terms), and
(ii) \lim_{n \to \infty} b_n = 0 (terms approach zero),
then the alternating series \sum_{n=1}^{\infty} (-1)^{n+1} b_n converges.
Let s_n = \sum_{k=1}^{n} (-1)^{k+1} b_k. We show that the even and odd subsequences converge to the same limit.
Even partial sums. Write s_{2n} = (b_1 - b_2) + (b_3 - b_4) + \cdots + (b_{2n-1} - b_{2n}).
Since b_k \geq b_{k+1}, each term is nonnegative, so the sequence \{s_{2n}\} is increasing. Moreover, s_{2n} = b_1 - (b_2 - b_3) - (b_4 - b_5) - \cdots - (b_{2n-2} - b_{2n-1}) - b_{2n} \leq b_1.
The sequence \{s_{2n}\} is increasing and bounded above by b_1. By the Monotone Convergence Theorem, s_{2n} \to L for some L \in [0, b_1].
Odd partial sums. We have s_{2n+1} = s_{2n} + b_{2n+1}. Since b_{2n+1} \to 0 and s_{2n} \to L, s_{2n+1} = s_{2n} + b_{2n+1} \to L + 0 = L.
Both subsequences converge to L. Every index is either even or odd, so the full sequence converges to L. \square
The two conditions are essential. Condition (i) ensures that the oscillations are controlled—each swing is smaller than the last. Condition (ii) ensures that the terms eventually become negligible, allowing the partial sums to stabilize. Together, they guarantee that the alternating series converges through systematic cancellation. Note that condition (ii) is necessary for any convergent series by the divergence test, but here it combines with the decreasing condition to produce the stronger conclusion.
Apply the test to \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}. Set b_n = \frac{1}{n}. We verify both conditions: first, b_{n+1} = \frac{1}{n+1} < \frac{1}{n} = b_n, so the terms decrease. Second, \lim_{n \to \infty} \frac{1}{n} = 0. The alternating series test confirms convergence.
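The bracketing behavior from the proof is visible numerically for the alternating harmonic series: the even partial sums climb, the odd partial sums descend, and every even sum sits below every odd sum. A sketch (helper name illustrative):

```python
def alt_harmonic_partial(n):
    """s_n = sum_{k=1}^{n} (-1)^(k+1) / k."""
    return sum((-1.0) ** (k + 1) / k for k in range(1, n + 1))

even = [alt_harmonic_partial(2 * n) for n in range(1, 51)]      # s_2..s_100
odd = [alt_harmonic_partial(2 * n + 1) for n in range(0, 50)]   # s_1..s_99
```

The two monotone subsequences squeeze the limit between them, which is the content of the Monotone Convergence argument in the proof.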
Since the harmonic series \sum \frac{1}{n} diverges, the convergence is conditional. The alternating harmonic series converges to \ln(2), though proving this requires techniques from power series that we develop later.
Not every alternating series satisfies the hypotheses. Consider \sum_{n=1}^{\infty} \frac{(-1)^{n+1} n}{n+1}. Set b_n = \frac{n}{n+1}. We check condition (ii): \lim_{n \to \infty} \frac{n}{n+1} = 1 \neq 0. The test does not apply. In fact, the divergence test immediately shows the series diverges, since the terms do not approach zero.
4.10 Error Bounds
The alternating series test provides more than just a convergence criterion—it yields a precise error bound when approximating the series by partial sums. This is a noteworthy feature not shared by most convergence tests.
The proof of the alternating series test showed that even and odd partial sums approach the limit from opposite sides. The even partial sums increase toward the limit; the odd partial sums, which satisfy s_{2n+1} = s_{2n-1} - (b_{2n} - b_{2n+1}) \leq s_{2n-1}, decrease toward it. This means the true sum s lies between any two consecutive partial sums: s_{2n} \leq s \leq s_{2n+1} and s_{2n+2} \leq s \leq s_{2n+1}.
The distance from s_n to s is therefore at most the distance to the next partial sum, which equals |s_{n+1} - s_n| = b_{n+1}, the absolute value of the next term. This observation gives the error bound.
Theorem 4.7 (Error Bound for Alternating Series) Under the hypotheses of the alternating series test, let s = \sum_{n=1}^{\infty} (-1)^{n+1} b_n and s_n = \sum_{k=1}^{n} (-1)^{k+1} b_k. Then |s - s_n| \leq b_{n+1}.
The error in approximating s by s_n is at most the absolute value of the first omitted term.
From the proof of the alternating series test, the limit s lies between consecutive partial sums. For any n, either s_n \leq s \leq s_{n+1} or s_{n+1} \leq s \leq s_n, depending on whether n is even or odd.
In either case, |s - s_n| \leq |s_{n+1} - s_n| = |(-1)^{n+2} b_{n+1}| = b_{n+1}. \quad \square
This error bound is useful. It allows us to determine how many terms are needed for a desired accuracy without knowing the limit s. We need only examine the size of the terms themselves.
The series \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = \ln(2) illustrates the application. Suppose we approximate using the first 10 terms: s_{10} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots + \frac{1}{10}.
The error satisfies |\ln(2) - s_{10}| \leq b_{11} = \frac{1}{11} \approx 0.091. To guarantee error less than 0.001, we need b_{n+1} < 0.001, which requires n + 1 > 1000, hence n \geq 1000. At least 1000 terms are needed.
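The error bound is easy to check against the known limit \ln(2). The sketch below confirms that the error after 10 terms is within the bound b_{11} = 1/11, and that 1000 terms bring the error below 0.001.

```python
import math

def alt_partial(n):
    """Partial sum s_n of the alternating harmonic series."""
    return sum((-1.0) ** (k + 1) / k for k in range(1, n + 1))

err_10 = abs(math.log(2) - alt_partial(10))
err_1000 = abs(math.log(2) - alt_partial(1000))
```

Notice that the actual error after n terms is roughly 1/(2n), about half the guaranteed bound b_{n+1}; the bound is conservative but of the right order.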
4.11 Rearrangement
A note on scope. The results that follow lie beyond the scope of Calculus 2 and belong properly to analysis in the stricter sense. They are not intended for routine use, and it is not necessary to retain them in detail. They are included to indicate a fundamental feature of infinite series: the distinction between absolute and conditional convergence imposes genuine limitations on which operations are permissible. The reader should regard this discussion as supplementary, intended to convey the nature of the phenomenon rather than to supply additional tools.
We conclude with a theorem that reveals the deep difference between absolute and conditional convergence. The statement concerns what happens when we rearrange the terms of a series—that is, when we sum the same terms in a different order.
For finite sums, order is irrelevant: 1 + 2 + 3 = 3 + 1 + 2 = 6. The commutative property of addition ensures that any rearrangement yields the same result. One might expect the same for infinite series. After all, if \sum a_n represents “the sum of all the terms a_n,” shouldn’t the order in which we add them be immaterial?
For absolutely convergent series, this expectation is correct.
Theorem 4.8 (Rearrangement of Absolutely Convergent Series) If \sum a_n converges absolutely to s, then every rearrangement of \sum a_n converges to s.
We omit the proof, which requires careful bookkeeping of partial sums. The key idea: when \sum |a_n| < \infty, the tail of the series—the sum of all terms beyond any finite point—can be made arbitrarily small, regardless of the order in which we sum them. Any rearrangement eventually captures the same contributions, differing only in the order of accumulation.
In stark contrast, conditionally convergent series behave differently under rearrangement.
Theorem 4.9 (Riemann Rearrangement Theorem (Statement Only)) If \sum a_n converges conditionally, then for any L \in \mathbb{R} \cup \{\pm \infty\}, there exists a rearrangement of \sum a_n that converges to L (or diverges to \pm \infty).
This result shows that conditionally convergent series have no well-defined sum independent of order. By rearranging the terms, we can make the series converge to any value we choose, or diverge to infinity in either direction. The sum depends fundamentally on the order of summation.
The intuition behind Riemann’s theorem is that in a conditionally convergent series, both the positive terms and the negative terms form divergent series when summed separately. We have an “infinite reservoir” of both positive and negative contributions. By strategically selecting when to add positive terms versus negative terms, we can steer the partial sums toward any target value.
The alternating harmonic series \sum \frac{(-1)^{n+1}}{n} = \ln(2) provides a concrete example. This series converges conditionally. Riemann’s theorem guarantees we can rearrange it to converge to, say, \pi or 100 or to diverge to \infty.
A concrete rearrangement that changes the sum: take two positive terms, one negative term, two positive, one negative, and so on: 1 + \frac{1}{3} - \frac{1}{2} + \frac{1}{5} + \frac{1}{7} - \frac{1}{4} + \frac{1}{9} + \frac{1}{11} - \frac{1}{6} + \cdots
This rearrangement converges to \frac{3}{2} \ln(2) \neq \ln(2). The order matters fundamentally for conditionally convergent series. Absolute convergence is precisely the condition that prevents such behavior.
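The two-positive-one-negative rearrangement can be simulated directly. The sketch below sums whole blocks of the rearranged series—block j contributes 1/(4j-3) + 1/(4j-1) - 1/(2j)—and the partial sums settle near \frac{3}{2}\ln(2) \approx 1.0397, visibly far from \ln(2) \approx 0.6931.

```python
import math

def rearranged_partial(blocks):
    """Partial sum of the rearranged alternating harmonic series:
    two positive terms (odd denominators), then one negative term
    (even denominator), repeated for the given number of blocks."""
    total = 0.0
    for j in range(1, blocks + 1):
        total += 1.0 / (4 * j - 3)   # first positive term of block j
        total += 1.0 / (4 * j - 1)   # second positive term of block j
        total -= 1.0 / (2 * j)       # negative term of block j
    return total
```

The same terms, summed in a different order, converge to a different value—a numerical illustration of why conditionally convergent series have no order-independent sum.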