10 Elementary Integrals
10.1 The Existence-Construction Gap
The Fundamental Theorem establishes that continuous functions possess antiderivatives. Theorem 9.1 guarantees existence: given f continuous on [a,b], the function F(x) = \int_a^x f(t)\,dt satisfies F' = f. Yet this is an existence proof—it provides an antiderivative implicitly, as an accumulation function, without offering an explicit formula.
For evaluation purposes, Theorem 9.4 requires an antiderivative in closed form. Given F with F' = f, we compute \int_a^b f = F(b) - F(a). But how do we find F?
For many functions—polynomials, exponentials, trigonometric functions—we know antiderivatives by inspection, reversing differentiation rules. For f(x) = x^n with n \neq -1, we have F(x) = x^{n+1}/(n+1). For f(x) = e^x, we have F(x) = e^x. For f(x) = \sin x, we have F(x) = -\cos x.
But this is ad hoc. Can we systematize antiderivative computation? Can we derive all elementary antiderivatives from a unified principle?
The answer lies in analyticity and term-by-term integration.
10.2 Analytic Functions and Series Representations
Recall from Section 7.5 that a function f is analytic at a if it equals its Taylor series in some neighborhood of a: f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n for all x in an interval (a-R, a+R) with R > 0. The function is completely determined by its derivatives at a—all information about f is encoded in the sequence \{f^{(n)}(a)\}.
Most functions encountered in elementary calculus are analytic:
Polynomials (everywhere)
e^x (everywhere)
\sin x, \cos x (everywhere)
\ln(1+x) (for |x| < 1)
(1+x)^\alpha (for |x| < 1, any \alpha)
Rational functions (away from poles)
These functions possess convergent power series representations. This structure is useful because, as it turns out, an analytic function can be integrated term-by-term.
10.3 Term-by-Term Integration
Theorem 10.1 (Term-by-Term Integration) Let f(x) = \sum_{n=0}^{\infty} c_n (x-a)^n converge absolutely for |x-a| < R. Then f is integrable on any closed subinterval of (a-R, a+R), and \int_a^x f(t)\,dt = \sum_{n=0}^{\infty} c_n \int_a^x (t-a)^n\,dt = \sum_{n=0}^{\infty} c_n \frac{(x-a)^{n+1}}{n+1} for |x-a| < R.
Proof. Fix x with |x-a| < R and choose r with |x-a| < r < R. The interval from a to x is contained in [a - r, a + r]. By hypothesis the series converges absolutely at radius r, so \sum |c_n| r^n < +\infty; since |c_n(t-a)^n| \leq |c_n|r^n for all t between a and x, the Weierstrass M-test shows that \sum c_n(t-a)^n converges uniformly on this interval. Uniform convergence of continuous functions permits interchange of limit and integral, so \int_a^x f(t)\,dt = \int_a^x \lim_{N\to\infty} \sum_{n=0}^N c_n(t-a)^n\,dt = \lim_{N\to\infty} \sum_{n=0}^N c_n \int_a^x (t-a)^n\,dt = \sum_{n=0}^{\infty} c_n \frac{(x-a)^{n+1}}{n+1}.\qquad\square
The proof requires uniform convergence—a condition ensuring that limits and integrals commute.
The intuition is straightforward: if f is a sum of functions, and each term can be integrated, we integrate term-by-term. The series \sum_{n=0}^{\infty} c_n (x-a)^n becomes \sum_{n=0}^{\infty} c_n \frac{(x-a)^{n+1}}{n+1}.
Each monomial (x-a)^n integrates to (x-a)^{n+1}/(n+1), and the series structure is preserved. The antiderivative is itself a power series, converging on the same interval.
To find \int f(x)\,dx for an analytic function, expand f as a power series, integrate term-by-term, and add the constant of integration.
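As a concrete illustration, here is a minimal numerical sketch of this recipe in Python. The helper names integrate_coeffs and eval_series are ours, chosen for illustration; they are not standard library functions.

```python
import math

# A sketch of term-by-term integration for a power series centered at a = 0.
# integrate_coeffs and eval_series are illustrative helper names, not library calls.

def integrate_coeffs(c):
    """Coefficients of sum c[n] x^n  ->  coefficients of the term-by-term
    antiderivative sum c[n] x^(n+1)/(n+1), normalized to have constant term 0."""
    return [0.0] + [cn / (n + 1) for n, cn in enumerate(c)]

def eval_series(c, x):
    """Evaluate the partial sum sum c[n] x^n by Horner's rule."""
    acc = 0.0
    for cn in reversed(c):
        acc = acc * x + cn
    return acc

# Example: f(x) = e^x with coefficients c_n = 1/n!, truncated at 20 terms.
c = [1.0 / math.factorial(n) for n in range(21)]
F = integrate_coeffs(c)        # antiderivative with F(0) = 0
x = 0.5
print(eval_series(F, x))       # ~0.6487..., the partial sum of the integrated series
print(math.exp(x) - 1)         # exact value of the accumulation integral from 0 to x
```

The same two helpers work for every series in this chapter; only the coefficient list changes.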
Note on uniform convergence. The technical justification for term-by-term integration invokes uniform convergence: if f_n \to f uniformly on [a,b] and each f_n is continuous, then \int_a^b f = \lim_{n \to \infty} \int_a^b f_n.

[Animation: partial sums f_n converging uniformly to f. An \varepsilon-tube (the shaded region) surrounds the limit function; as n increases, the curve of f_n settles entirely within this tube. Uniform convergence means the entire graph fits inside the tube, for all x simultaneously, once n is large enough.]
For power series within their radius of convergence, uniform convergence holds automatically. We defer the precise statement to the optional section on uniform convergence or to a course on real analysis, but the upshot is clear: for functions we encounter in this course—polynomials, exponentials, trigonometric functions, and their compositions—term-by-term integration is valid.
Functions admitting Taylor series representations are “sufficiently smooth” that integration and summation commute. We need not verify uniform convergence each time; analyticity guarantees it.
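A quick numerical illustration of uniform convergence (a sketch, using the series for e^x from Section 7.5): measure the worst-case error of the partial sums over an entire closed interval and watch it shrink as N grows.

```python
import math

# Uniform convergence of the partial sums of e^x = sum x^n / n! on [-r, r]:
# the supremum of the error over the whole interval shrinks as N grows.
r = 2.0
xs = [-r + 2 * r * i / 400 for i in range(401)]   # grid on [-r, r]

def partial_sum(x, N):
    return sum(x**n / math.factorial(n) for n in range(N + 1))

for N in (2, 5, 10, 15):
    sup_err = max(abs(math.exp(x) - partial_sum(x, N)) for x in xs)
    print(f"N = {N:2d}   sup error on [-{r}, {r}] = {sup_err:.2e}")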
10.4 Deriving Elementary Antiderivatives
We now derive antiderivatives of elementary functions using their power series.
10.4.1 Polynomials
A polynomial f(x) = \sum_{k=0}^n a_k x^k is already a finite power series. Integrate term-by-term: \int f(x)\,dx = \sum_{k=0}^n a_k \int x^k\,dx = \sum_{k=0}^n a_k \frac{x^{k+1}}{k+1} + C.
Each monomial x^k integrates to x^{k+1}/(k+1), the reverse of the power rule (x^{k+1})' = (k+1)x^k.
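This can be checked mechanically; the sketch below uses the third-party sympy library, one standard tool for such checks (not part of the text's development).

```python
import sympy as sp

x = sp.symbols('x')
f = 3*x**2 - 4*x + 7
# Term-by-term integration of a polynomial; sympy omits the constant C.
print(sp.integrate(f, x))   # x**3 - 2*x**2 + 7*x
```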
10.4.2 The Exponential Function
From Section 7.5, we know e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} with radius of convergence R = \infty. Integrate term-by-term: \int e^x\,dx = \int \sum_{n=0}^{\infty} \frac{x^n}{n!}\,dx = \sum_{n=0}^{\infty} \frac{1}{n!} \int x^n\,dx = \sum_{n=0}^{\infty} \frac{x^{n+1}}{(n+1) \cdot n!} + C.
Since (n+1) \cdot n! = (n+1)!, this becomes \int e^x\,dx = \sum_{n=0}^{\infty} \frac{x^{n+1}}{(n+1)!} + C = \sum_{m=1}^{\infty} \frac{x^m}{m!} + C, where m = n+1. But \sum_{m=1}^{\infty} \frac{x^m}{m!} = e^x - 1, hence \int e^x\,dx = e^x - 1 + C.
Redefining the constant as C' = C - 1, we obtain \int e^x\,dx = e^x + C'.
The exponential function is its own antiderivative. This is immediate from the series: shifting the index increases each term’s power by one while multiplying by the same factor that appears in the factorial denominator.
10.4.3 Sine and Cosine
From Section 7.5: \sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!}, \quad \cos x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!}.
Integrate sine term-by-term: \int \sin x\,dx = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} \int x^{2n+1}\,dx = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+2}}{(2n+2) \cdot (2n+1)!} + C.
Since (2n+2) \cdot (2n+1)! = (2n+2)!, we obtain \int \sin x\,dx = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+2}}{(2n+2)!} + C = \sum_{m=1}^{\infty} \frac{(-1)^{m-1} x^{2m}}{(2m)!} + C, where m = n+1. The series \sum_{m=1}^{\infty} \frac{(-1)^{m-1} x^{2m}}{(2m)!} equals -\sum_{m=1}^{\infty} \frac{(-1)^m x^{2m}}{(2m)!}. Adding the missing zeroth term (which equals 1) yields the cosine series: \sum_{m=0}^{\infty} \frac{(-1)^m x^{2m}}{(2m)!} = \cos x.
Thus \sum_{m=1}^{\infty} \frac{(-1)^m x^{2m}}{(2m)!} = \cos x - 1, whence \int \sin x\,dx = -(\cos x - 1) + C = -\cos x + C', redefining C' = C + 1.
Similarly, integrate cosine: \int \cos x\,dx = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!} \int x^{2n}\,dx = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!} + C = \sin x + C.
These derivations confirm the antiderivatives. The structure is transparent: shifting indices in the series corresponds to integrating term-by-term.
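As a numerical check of the sine computation (a sketch; the truncation at 30 terms is an arbitrary choice), the integrated series should reproduce \int_0^x \sin t\,dt = 1 - \cos x:

```python
import math

# Partial sum of the integrated sine series: sum (-1)^n x^(2n+2) / (2n+2)!
# Term-by-term integration predicts this equals 1 - cos(x).
x = 2.0
series = sum((-1)**n * x**(2*n + 2) / math.factorial(2*n + 2) for n in range(30))
print(series)               # partial sum of the integrated series
print(1 - math.cos(x))      # exact value of the accumulation integral from 0
```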
10.4.4 The Natural Logarithm
The logarithm arises as the antiderivative of 1/x, but we derive it via series. Consider \ln(1+x) = \int_0^x \frac{1}{1+t}\,dt.
From the geometric series (Section 7.5): \frac{1}{1+t} = \sum_{n=0}^{\infty} (-1)^n t^n for |t| < 1. Integrate term-by-term: \ln(1+x) = \int_0^x \sum_{n=0}^{\infty} (-1)^n t^n\,dt = \sum_{n=0}^{\infty} (-1)^n \int_0^x t^n\,dt = \sum_{n=0}^{\infty} \frac{(-1)^n x^{n+1}}{n+1}.
This is the Mercator series (1668): \ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots
converging for |x| < 1 (and also at x = 1 by the alternating series test).
Differentiating term-by-term recovers \frac{1}{1+x}, confirming that \ln(1+x) is the antiderivative of \frac{1}{1+x}.
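Partial sums of the Mercator series make this concrete; the short sketch below also shows the convergence slowing as x approaches the boundary of the interval of convergence.

```python
import math

def mercator(x, N):
    """Partial sum x - x^2/2 + ... + (-1)^(N-1) x^N / N of the Mercator series."""
    return sum((-1)**(n - 1) * x**n / n for n in range(1, N + 1))

for x in (0.1, 0.5, 0.9):
    err = abs(mercator(x, 20) - math.log(1 + x))
    print(f"x = {x}: 20-term error = {err:.2e}")   # error grows as x approaches 1
```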
For the general logarithm, use substitution: with u = 1 + x we have du = dx, and the computation above becomes \int \frac{1}{u}\,du = \ln u + C, valid for u > 0.
More precisely, \ln|x| is the antiderivative of 1/x for x \neq 0. The absolute value accounts for negative arguments.
10.4.5 Power Functions
For f(x) = x^\alpha with \alpha\in\mathbb{R} \setminus\{-1\}, we have \int x^\alpha\,dx = \frac{x^{\alpha+1}}{\alpha+1} + C.
Again, this follows from the power rule (x^{\alpha+1})' = (\alpha+1)x^\alpha applied in reverse. The case \alpha = -1 yields the logarithm, as shown above.
For fractional or negative exponents, the result remains valid provided x > 0 (or appropriate domain restrictions). For instance, \int x^{1/2}\,dx = \frac{2x^{3/2}}{3} + C, \quad \int x^{-2}\,dx = -x^{-1} + C.
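These cases, too, can be verified symbolically (a sketch with sympy; declaring x positive lets sympy drop the absolute value in the \alpha = -1 case):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.integrate(sp.sqrt(x), x))    # 2*x**(3/2)/3
print(sp.integrate(x**(-2), x))       # -1/x
print(sp.integrate(1/x, x))           # log(x): the alpha = -1 case
```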
10.5 Summary: The Standard Antiderivatives
We have derived the following antiderivatives from first principles using power series:
| Function f(x) | Antiderivative F(x) + C | Domain |
|---|---|---|
| x^n (n \neq -1) | \frac{x^{n+1}}{n+1} + C | \mathbb{R} (x \neq 0 if n < 0; x > 0 for non-integer n) |
| \frac{1}{x} | \ln\|x\| + C | \mathbb{R} \setminus \{0\} |
| e^x | e^x + C | \mathbb{R} |
| \sin x | -\cos x + C | \mathbb{R} |
| \cos x | \sin x + C | \mathbb{R} |
These are the elementary antiderivatives—the building blocks for integration. All other antiderivatives are obtained via:
Linearity: \int [af(x) + bg(x)]\,dx = a\int f(x)\,dx + b\int g(x)\,dx (see the sketch after this list)
Substitution (change of variables, covered in the next chapter)
Integration by parts (covered subsequently)
Partial fractions (for rational functions)
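Linearity, in particular, is easy to verify mechanically; a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
# Linearity: the integral of a*f + b*g equals a*int(f) + b*int(g).
lhs = sp.integrate(5*sp.sin(x) + 2*sp.exp(x), x)
rhs = 5*sp.integrate(sp.sin(x), x) + 2*sp.integrate(sp.exp(x), x)
print(sp.simplify(lhs - rhs))   # 0
```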
Remark. Certain elementary integrands, such as 1/(1+x^2) and 1/\sqrt{1-x^2}, have inverse-trigonometric antiderivatives (\arctan x and \arcsin x, respectively). We postpone these cases until the treatment of trigonometric substitution, which explains why such antiderivatives arise.
10.6 Remarks on Analyticity
As mentioned, the functions we integrate in this course—polynomials, exponentials, trigonometric functions, logarithms, and combinations thereof—are analytic on appropriate domains. This has two consequences:
Term-by-term operations are valid. We can differentiate and integrate power series term-by-term without verifying technical conditions each time. Analyticity guarantees that limits and operations commute.
Antiderivatives exist and are computable. Every analytic function has an antiderivative that is itself analytic. The antiderivative may not always be expressible in terms of elementary functions (e.g., \int e^{-x^2}\,dx has no elementary closed form), but it exists as a power series.
The machinery we developed in Section 7.5 now pays dividends: Taylor series are not just approximations—they provide exact representations. When we write e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!},
this is not an approximation. It is the function e^x, expressed as an infinite polynomial. Integrating this series term-by-term yields the antiderivative exactly.
Contrast with non-analytic functions. The function f(x) = \begin{cases} e^{-1/x^2} & x \neq 0 \\ 0 & x = 0 \end{cases} is smooth but not analytic at x = 0 (see Section 7.8). Its Taylor series at 0 is identically zero, failing to represent f. Such pathologies do not arise for the functions we encounter in elementary calculus.
This is why we emphasize analyticity. It is not pedantry—it is the structural property that makes calculus tractable. Functions admitting power series representations are “well-behaved” in a precise sense: they can be manipulated algebraically, integrated term-by-term, and differentiated freely. The interplay between local information (derivatives at a point) and global behavior (the function everywhere) is “seamless.”
10.7 Beyond Elementary Antiderivatives
We revisit the function f(x) = e^{-x^2} (the Gaussian), which, as noted above, has no antiderivative expressible in terms of polynomials, exponentials, logarithms, and trigonometric functions. Yet it does have an antiderivative as a power series: \int e^{-x^2}\,dx = \int \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{n!}\,dx = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)n!} + C.
Up to the normalizing factor 2/\sqrt{\pi}, this series defines the error function \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt, central to probability theory and statistics. It is a perfectly well-defined function—differentiable, integrable, analytic—despite lacking an elementary closed form.
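As a sketch, we can compare the truncated series, scaled by the normalizing factor 2/\sqrt{\pi}, with Python's built-in math.erf:

```python
import math

def erf_series(x, N=40):
    """erf(x) = (2/sqrt(pi)) * sum (-1)^n x^(2n+1) / ((2n+1) n!), truncated at N terms."""
    s = sum((-1)**n * x**(2*n + 1) / ((2*n + 1) * math.factorial(n)) for n in range(N))
    return 2.0 / math.sqrt(math.pi) * s

for x in (0.5, 1.0, 2.0):
    print(f"x = {x}: series = {erf_series(x):.12f}, math.erf = {math.erf(x):.12f}")
```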
The takeaway is that antiderivatives always exist for continuous functions (by Theorem 9.1), but they may not be elementary. Power series provide a way to compute and represent such antiderivatives, extending our toolkit beyond the standard table of integrals.
Subsequent chapters develop techniques—substitution, integration by parts, partial fractions—that expand the class of functions for which we can find closed-form antiderivatives. But the foundational insight remains: integration is the inverse of differentiation, and analytic functions admit term-by-term integration. This is the principle underlying all antiderivative computations.