Integration by parts
In calculus, and more generally in mathematical analysis, integration by parts is a rule that transforms the integral of products of functions into other, possibly simpler, integrals. The rule arises from the product rule of differentiation.
The rule
Suppose f(x) and g(x) are two continuously differentiable functions. Then the integration by parts rule states that given an interval with endpoints a, b, one has
- <math>\int_a^b f(x) g'(x)\,dx = \left[ f(x) g(x) \right]_{a}^{b} - \int_a^b f'(x) g(x)\,dx</math>
where we use the common notation
- <math>\left[f(x) g(x) \right]_{a}^{b} = f(b) g(b) - f(a) g(a).</math>
The rule follows from the product rule for derivatives and the fundamental theorem of calculus:
<math> f(b)g(b) - f(a)g(a)\, </math> <math>= \int_a^b \frac{d}{dx} ( f(x) g(x) ) \, dx </math> <math>=\int_a^b f'(x) g(x) \, dx + \int_a^b f(x) g'(x) \, dx </math>
Rearranging gives the rule stated above.
In the traditional calculus curriculum, this rule is often stated using indefinite integrals in the form
- <math>\int f(x) g'(x)\,dx = f(x) g(x) - \int g(x) f'(x)\,dx</math>
or, in an even shorter form: if we let u = f(x), v = g(x), with differentials du = f′(x) dx and dv = g′(x) dx, the rule takes the form in which it is most often seen:
- <math>\int u\,dv = u v - \int v\,du</math>
Note that the original integral contains the derivative of g; in order to be able to apply the rule, the antiderivative g must be found, and then the resulting integral ∫g f′ dx must be evaluated.
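The definite-integral form of the rule can be checked numerically for sample functions. The sketch below (the choice f(x) = x², g(x) = sin x on [0, 1] and the quadrature routine are my own, purely for illustration) compares the two sides using a composite Simpson rule:

```python
import math

def simpson(func, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    dx = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * func(a + i * dx)
    return s * dx / 3

# Sample continuously differentiable functions (any choice would do):
f  = lambda x: x * x          # f(x) = x^2
fp = lambda x: 2 * x          # f'(x)
g  = math.sin                 # g(x)
gp = math.cos                 # g'(x)

a, b = 0.0, 1.0
lhs = simpson(lambda x: f(x) * gp(x), a, b)
rhs = f(b) * g(b) - f(a) * g(a) - simpson(lambda x: fp(x) * g(x), a, b)
assert abs(lhs - rhs) < 1e-9  # the two sides agree to quadrature accuracy
```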
One can also formulate a discrete analogue for sequences, called summation by parts.
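The discrete rule holds exactly for any sequences. A minimal sketch (the sequences and index range below are arbitrary choices of mine) checks the summation-by-parts identity, with Δh_k denoting the forward difference h_{k+1} − h_k:

```python
# Summation by parts:
#   sum_{k=m}^{n} f_k (g_{k+1} - g_k)
#     = f_{n+1} g_{n+1} - f_m g_m - sum_{k=m}^{n} g_{k+1} (f_{k+1} - f_k)
f = [k * k for k in range(12)]   # arbitrary integer test sequences
g = [2 ** k for k in range(12)]
m, n = 1, 9

lhs = sum(f[k] * (g[k + 1] - g[k]) for k in range(m, n + 1))
rhs = (f[n + 1] * g[n + 1] - f[m] * g[m]
       - sum(g[k + 1] * (f[k + 1] - f[k]) for k in range(m, n + 1)))
assert lhs == rhs  # exact identity, no rounding involved
```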
An alternative notation has the advantage that the factors of the original expression are identified as f and g, but the drawback of a nested integral:
- <math>\int f g\,dx = f \int g\,dx - \int \left ( f' \int g\,dx \right )dx</math>
This formula is valid whenever f is continuously differentiable and g is continuous.
Examples
In order to calculate:
- <math>\int x\cos (x) \,dx</math>
Let:
- u = x, so that du = dx,
- dv = cos(x) dx, so that v = sin(x).
Then:
<math>\int x\cos (x) \,dx </math> <math>= \int u \,dv </math> <math>= uv - \int v \,du</math>
- <math>\int x\cos (x) \,dx = x\sin (x) - \int \sin (x) \,dx</math>
- <math>\int x\cos (x) \,dx = x\sin (x) + \cos (x) + C</math>
where C is an arbitrary constant of integration.
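As a quick sanity check (not part of the derivation), a central-difference derivative of the claimed antiderivative x sin(x) + cos(x) should reproduce the integrand x cos(x):

```python
import math

F = lambda x: x * math.sin(x) + math.cos(x)   # claimed antiderivative
f = lambda x: x * math.cos(x)                 # original integrand

h = 1e-6
for x in (0.3, 1.0, 2.5):
    dF = (F(x + h) - F(x - h)) / (2 * h)      # numeric derivative of F
    assert abs(dF - f(x)) < 1e-6
```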
By repeatedly using integration by parts, integrals such as
- <math>\int x^{3} \sin (x) \,dx \quad \mbox{and} \quad \int x^{2} e^{x} \,dx</math>
can be computed in the same fashion: each application of the rule lowers the power of x by one.
An interesting example that is commonly seen is:
- <math>\int e^{x} \cos (x) \,dx</math>
where, curiously, the answer is obtained in the end without actually carrying out the remaining integration.
This example uses integration by parts twice. First let:
- u = cos(x); thus du = -sin(x) dx
- dv = e^x dx; thus v = e^x
Then:
- <math>\int e^{x} \cos (x) \,dx = e^{x} \cos (x) + \int e^{x} \sin (x) \,dx</math>
Now, to evaluate the remaining integral, we use integration by parts again, with:
- u = sin(x); du = cos(x) dx
- dv = e^x dx; v = e^x
Then:
<math>\int e^{x} \sin (x) \,dx </math> <math>= e^{x} \sin (x) - \int e^{x} \cos (x) \,dx </math>
Putting these together, we get
- <math>\int e^{x} \cos (x) \,dx = e^{x} \cos (x) + e^x \sin (x) - \int e^{x} \cos (x) \,dx</math>
Notice that the same integral shows up on both sides of this equation. So we can simply add the integral to both sides to get:
- <math>2 \int e^{x} \cos (x) \,dx = e^{x} ( \sin (x) + \cos (x) )</math>
- <math>\int e^{x} \cos (x) \,dx = {e^{x} ( \sin (x) + \cos (x) ) \over 2} + C</math>
where C is an arbitrary constant of integration.
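The closed form can likewise be verified numerically (an illustrative sketch only): differentiating e^x(sin x + cos x)/2 should recover the integrand e^x cos x.

```python
import math

F = lambda x: math.exp(x) * (math.sin(x) + math.cos(x)) / 2  # claimed antiderivative
f = lambda x: math.exp(x) * math.cos(x)                      # integrand

h = 1e-6
for x in (0.0, 1.2, 2.5):
    dF = (F(x + h) - F(x - h)) / (2 * h)    # central-difference derivative
    assert abs(dF - f(x)) < 1e-6
```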
Two other well-known examples arise when integration by parts is applied to a function written as the product of 1 and the function itself. This works when the derivative of the function is known and the integral of that derivative times x is also known.
The first example is ∫ ln(x) dx. We write this as:
- <math>\int \ln (x) \cdot 1 \,dx</math>
Let:
- u = ln(x); du = 1/x dx
- dv = 1·dx; v = x
Then:
<math>\int \ln (x) \,dx </math> <math>= x \ln (x) - \int \frac{x}{x} \,dx </math> <math>= x \ln (x) - \int 1 \,dx</math>
- <math>\int \ln (x) \,dx = x \ln (x) - {x} + {C}</math>
- <math>\int \ln (x) \,dx = x ( \ln (x) - 1 ) + C</math>
where, again, C is an arbitrary constant of integration.
The second example is ∫ arctan(x) dx, where arctan(x) is the inverse tangent function. Re-write this as:
- <math>\int 1 \cdot \arctan (x) \,dx</math>
Now let:
- u = arctan(x); du = 1/(1 + x^2) dx
- dv = 1·dx; v = x
Then:
<math>\int \arctan (x) \,dx </math> <math>= x \arctan (x) - \int \frac{x}{1 + x^2} \,dx </math> <math>= x \arctan (x) - {1 \over 2} \ln \left( 1 + x^2 \right) + C</math>
where the remaining integral is evaluated with the substitution w = 1 + x^2 (the reverse chain rule), which produces the natural-logarithm term.
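Both closed forms predict exact values for definite integrals, which gives an easy numerical check (an illustrative sketch; the intervals are my own choices): ∫₁^e ln x dx = [x(ln x − 1)]₁^e = 1, and ∫₀¹ arctan x dx = π/4 − ½ ln 2.

```python
import math

def simpson(func, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    dx = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * func(a + i * dx)
    return s * dx / 3

# ∫_1^e ln x dx = [x(ln x - 1)] from 1 to e = 1 exactly
assert abs(simpson(math.log, 1.0, math.e) - 1.0) < 1e-9

# ∫_0^1 arctan x dx = [x arctan x - ln(1 + x^2)/2] from 0 to 1
expected = math.pi / 4 - math.log(2) / 2
assert abs(simpson(math.atan, 0.0, 1.0) - expected) < 1e-9
```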
The LIPET rule
A rule of thumb for choosing which of two functions is to be u and which is to be dv is to choose u by whichever function comes first in this list:
L :the logarithmic function: ln x
I :inverse trigonometric functions: arctan x , arcsec x, etc.
P :polynomial functions: <math>x^2, 3x^{50}</math>, etc.
E :exponential functions: <math>e^x</math>, <math>13^x</math>, etc.
T :trigonometric functions: sin x, tan x, etc.
Then make dv the other function. The list can be remembered with the mnemonic LIPET. The rationale is that functions lower in the list generally have easier antiderivatives than the functions above them.
To demonstrate this rule, consider the integral
- <math>\int x\cos x \,dx.\,</math>
Following the LIPET rule, u = x and dv = cos x dx , hence du = dx and v = sin x , which makes the integral become
- <math> x\sin x - \int 1\sin x \,dx,\,</math>
which equals
- <math> x\sin x + \cos x+C. \, </math>
In general, one tries to choose u and dv such that du is simpler than u and dv is easy to integrate. If instead cos x were chosen as u and x dx as dv, we would have the integral
- <math> \frac{x^2}{2}\cos x + \int \frac{x^2}{2}\sin x\,dx,\,</math>
which, under repeated application of the integration by parts formula, raises the power of x at each step and leads nowhere.
Recursive formulation
Integration by parts can often be applied recursively on the <math>\int v\,du</math> term to provide the following formula
- <math>\int u v\,dx = u v_1 - u' v_2 + u'' v_3 - \cdots + (-1)^{n}\ u^{(n)} \ v_{n+1} </math>
Here, <math> u' </math> is the first derivative of <math> u </math> and <math> u'' </math> is the second derivative of <math> u </math>. Further, <math> u^{(n)} </math> denotes its nth derivative (with respect to the variable that u and v are functions of). Another notation is adopted for the repeated antiderivatives of v:
<math> v_{n+1}(x)=\int\! \int\ \cdots \int v \ (dx)^{n+1}.</math>
There are n + 1 integrals.
Note that the integrand above (<math> uv </math>) differs from that in the previous equation: the <math> dv </math> factor has been written as <math> v </math> purely for convenience.
This form is convenient because it can be evaluated by differentiating the first factor and integrating the second (with a sign reversal each time), starting from <math> u v_1 </math>. It is especially useful when <math> u^{(k+1)} </math> vanishes for some k, since the evaluation can then stop at the <math> u^{(k)} \ v_{k+1} </math> term.
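The alternating sum can be turned into a short routine. The sketch below (the function names and the polynomial representation are my own, assuming u is a polynomial and v = sin x, whose repeated antiderivatives cycle through −cos, −sin, cos, sin) evaluates an antiderivative of p(x)·sin x and checks it by numeric differentiation:

```python
import math

def poly_eval(c, x):
    """Evaluate a polynomial given as ascending coefficients c."""
    return sum(ck * x ** k for k, ck in enumerate(c))

def poly_deriv(c):
    """Coefficient list of the derivative polynomial."""
    return [k * ck for k, ck in enumerate(c)][1:] or [0]

def antideriv_poly_times_sin(c, x):
    """Antiderivative of p(x)*sin(x) at x via sum_k (-1)^k u^(k) v_{k+1},
    where u = p and v = sin; the sum terminates once u^(k) is zero."""
    # repeated antiderivatives of sin: -cos, -sin, cos, sin, then repeat
    vs = [lambda t: -math.cos(t), lambda t: -math.sin(t),
          math.cos, math.sin]
    total, sign, u = 0.0, 1.0, list(c)
    for k in range(len(c)):          # deg(p)+1 terms suffice
        total += sign * poly_eval(u, x) * vs[k % 4](x)
        u = poly_deriv(u)
        sign = -sign
    return total

# check: the derivative of the result should be x^3 sin x
c = [0, 0, 0, 1]                     # p(x) = x^3
h = 1e-5
for x in (0.7, 1.9):
    dF = (antideriv_poly_times_sin(c, x + h)
          - antideriv_poly_times_sin(c, x - h)) / (2 * h)
    assert abs(dF - x ** 3 * math.sin(x)) < 1e-6
```

For p(x) = x³ this reproduces the hand computation −x³ cos x + 3x² sin x + 6x cos x − 6 sin x.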
Higher dimensions
The formula for integration by parts can be extended to functions of several variables. Instead of an interval, one integrates over an n-dimensional set, and the derivative is replaced by a partial derivative.
More specifically, suppose Ω is an open bounded subset of <math>\mathbb{R}^n</math> with a piecewise smooth boundary ∂Ω. If u and v are two continuously differentiable functions on the closure of Ω, then the formula for integration by parts is
- <math> \int_{\Omega} \frac{\partial u}{\partial x_i} v \,dx = \int_{\partial\Omega} u v \, \nu_i \,d\sigma - \int_{\Omega} u \frac{\partial v}{\partial x_i} \, dx</math>
where ν is the outward unit surface normal to ∂Ω, νi is its i-th component, and i ranges from 1 to n. Replacing v in the above formula with vi and summing over i gives the vector formula
- <math> \int_{\Omega} \nabla u \cdot \mathbf{v}\, dx = \int_{\partial\Omega} u\, \mathbf{v}\cdot\nu\, d\sigma - \int_\Omega u\, \nabla\cdot \mathbf{v}\, dx</math>
where v is a vector-valued function with components v1, ..., vn.
Setting u equal to the constant function 1 in the above formula gives the divergence theorem. For <math>\mathbf{v}=\nabla v</math> where <math>v\in C^2(\bar{\Omega})</math>, one gets
- <math> \int_{\Omega} \nabla u \cdot \nabla v\, dx = \int_{\partial\Omega} u\, \nabla v\cdot\nu\, d\sigma - \int_\Omega u\, \Delta v\, dx</math>
which is the first Green's identity.
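The component formula can be spot-checked numerically on the unit square (an illustrative sketch; u and v below are arbitrary smooth choices of mine). For Ω = [0,1]² and i = 1 (the x-direction), ν₁ = +1 on the edge x = 1, −1 on x = 0, and 0 on the other two edges:

```python
# Check  ∫_Ω (∂u/∂x) v dA = ∫_∂Ω u v ν₁ dσ − ∫_Ω u (∂v/∂x) dA  on [0,1]².
u  = lambda x, y: x * x * y        # sample smooth u
ux = lambda x, y: 2 * x * y        # ∂u/∂x
v  = lambda x, y: x + y * y        # sample smooth v
vx = lambda x, y: 1.0              # ∂v/∂x

N = 400                            # midpoint rule in each direction
pts = [(i + 0.5) / N for i in range(N)]

def dbl(f):
    """Midpoint-rule approximation of the double integral over [0,1]^2."""
    return sum(f(x, y) for x in pts for y in pts) / N ** 2

lhs = dbl(lambda x, y: ux(x, y) * v(x, y))
# boundary term: only the edges x = 1 (nu_1 = +1) and x = 0 (nu_1 = -1) contribute
boundary = sum(u(1, y) * v(1, y) - u(0, y) * v(0, y) for y in pts) / N
rhs = boundary - dbl(lambda x, y: u(x, y) * vx(x, y))
assert abs(lhs - rhs) < 1e-4
```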
The regularity requirements of the theorem can be relaxed. For instance, the boundary ∂Ω need only be Lipschitz continuous. In the first formula above, only <math>u,v\in H^1(\Omega)</math> is necessary (where H1 is a Sobolev space); the other formulas have similarly relaxed requirements.
For reference, consult Appendix C of Evans or the applied math notes of Arbogast and Bona.
References
External links
- Integration by Parts - From MathWorld
- Also useful is the technique of tabular integration