I have often noted that calculus class is where you really learn algebra. Certain techniques in calculus demand algebraic skills that either were not taught in algebra classes (because they are not needed until you get to calculus), or have been forgotten. Chief among these is the method of partial fractions. I have here put together an early answer explaining how to do it, and two later discussions of why it works, both in general and in detail.
How to find partial fractions
First, from 1998, we have a question that just asks how to solve some examples:
Partial Fractions

How do I express the following in partial fractions?

(i)  3/(1 - x^3)

(ii) 2x/(x^3 + 1)
If you have not seen partial fractions, it is a process of splitting a complex fraction (rational expression) into a sum of simpler fractions, reversing the process of adding fractions using a common denominator. In calculus, the “simpler fractions” are specifically intended to be as easy as possible to integrate; but the process can be understood without knowing any calculus.
Commonly, at this point we would ask to see where the student is stuck, so we can give appropriate help without just doing the work. Doctor Rob chose to treat the first as an example to work completely, and leave the second “as an exercise for the reader”, perhaps to discuss further when the student had tried it.
First you have to factor the denominators into linear or quadratic factors. In this case

1 - x^3 = (1 - x)*(1 + x + x^2)
1 + x^3 = (1 + x)*(1 - x + x^2)

Those factors will be the denominators of the partial fractions. The numerators will be of lower degree, with unknown constant coefficients, so the numerator of a fraction with a degree-1 denominator will just be an unknown constant, and the numerator of a fraction with a degree-2 denominator will be a degree-1 polynomial with unknown constant coefficients. Thus

3/(1-x^3) = A/(1-x) + (C+B*x)/(1+x+x^2)

where A, B, and C are constants, to be determined. This must be an identity, that is, true for all values of x. Now clear fractions, and you get

3 = A*(1+x+x^2) + (C+B*x)*(1-x)

Expand and collect terms, bringing everything to one side:

0 = -3 + A + A*x + A*x^2 + C - C*x + B*x - B*x^2
0 = (A - B)*x^2 + (A + B - C)*x + (A + C - 3)

This must still be an identity. Now there are two ways to proceed.
What is often new to students here is the idea of an equation being an identity. We are solving for A, B, and C, so that this equation will be true for all values of x, not just for some particular value. An important fact to know is that if two polynomials are equal for all x, then they must be exactly the same polynomial – their coefficients must be the same.
Another point on which some students stumble is the fact that A, B, and C are in the end going to be constants; but for now, they are the variables that we are solving for. You have to keep your mind flexible!
One more point: I notice that Doctor Rob wrote “C + B*x” rather than the more usual “Bx + C”. This may be because all the other polynomials are written “backward”, in ascending order of degree, and he felt consistency would be helpful.
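If you want to check the expansion step for yourself, a computer algebra system will do it; here is a minimal sketch of mine using SymPy (assuming it is installed), which just expands the right side after clearing fractions and collects powers of x:

from sympy import symbols, expand, collect

x, A, B, C = symbols('x A B C')

# Clear fractions: multiply both sides of
#   3/(1 - x^3) = A/(1 - x) + (C + B*x)/(1 + x + x^2)
# by (1 - x)*(1 + x + x^2) = 1 - x^3.
rhs = A*(1 + x + x**2) + (C + B*x)*(1 - x)

# Collect powers of x to see the coefficients that must match the constant 3.
print(collect(expand(rhs), x))
# Expected, up to ordering: (A - B)*x**2 + (A + B - C)*x + A + C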
Doctor Rob continues, showing two commonly taught methods from this point.
First, if this is true for all x values, we can pick a few and substitute them in, and get a system of linear equations in A, B, and C to solve:

x = 0:  A + C - 3 = 0.
x = 1:  3*A - 3 = 0.
x = -1: A - 2*B + 2*C - 3 = 0.

From the second equation A = 1, so from the first equation, C = 2, and then from the last equation, B = 1.
Often these systems can be solved by a sequence of carefully chosen substitutions like these, because not all variables appear in every equation. This one was particularly clean, with one equation with one variable, one with two variables, and one with all three. I call these “1-2-3 systems”, and they are as easy as 1-2-3. In particular, the choice of x = 1 was a good one because (x – 1) was a factor of a term on the right, which therefore eliminated B and C from the resulting equation. An alternate method that is often taught focuses on this idea, but Doctor Rob didn’t use that because it is less useful here.
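As a quick illustration (mine, not part of the original answer), the same substitution idea can be reproduced in SymPy, if you have it handy, by evaluating the cleared-fractions identity at the three chosen x values and solving the resulting linear system:

from sympy import symbols, solve, Eq

x, A, B, C = symbols('x A B C')

# The identity obtained after clearing fractions.
identity = Eq(3, A*(1 + x + x**2) + (C + B*x)*(1 - x))

# Substitute x = 0, 1, -1 to get three linear equations in A, B, C.
equations = [identity.subs(x, v) for v in (0, 1, -1)]

print(solve(equations, (A, B, C)))
# Expected: {A: 1, B: 1, C: 2}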
The second method is to observe that if this is an identity, then the coefficients of each power of x must all be zero. This gives the system of equations

A - B = 0,
A + B - C = 0,
A + C - 3 = 0.

The solution is again A = B = 1, C = 2.
This system is not quite as easily solved; here I might start by adding the first two equations, or just by solving the first for B and substituting into the second.
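For comparison, here is the same kind of sketch for the coefficient-matching method, again with SymPy (an assumption of mine, not something in the original answer), letting it read off the coefficients and solve:

from sympy import symbols, solve, Poly

x, A, B, C = symbols('x A B C')

# Bring everything to one side and view it as a polynomial in x.
p = Poly(A*(1 + x + x**2) + (C + B*x)*(1 - x) - 3, x)

# Each coefficient must be zero for the identity to hold for all x.
print(solve(p.all_coeffs(), (A, B, C)))
# Expected: {A: 1, B: 1, C: 2}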
Either way, you have the identity

3/(1-x^3) = 1/(1-x) + (x+2)/(x^2+x+1)

The second problem is similar. Using the above as a model, you should be able to solve it yourself.
As always, you could check this by starting on the right and combining the fractions, to see that you get the left-hand side.
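If you would rather let software do that check, SymPy’s apart and together functions (assuming SymPy is available; this is my aside, not part of the original answer) can confirm the result for problem (i), and the same calls will check your own answer to problem (ii) once you have worked it out:

from sympy import symbols, apart, together, simplify

x = symbols('x')

# Decompose problem (i) directly.
print(apart(3/(1 - x**3), x))

# Or verify the answer by recombining it over a common denominator.
answer = 1/(1 - x) + (x + 2)/(x**2 + x + 1)
print(simplify(together(answer) - 3/(1 - x**3)))   # should print 0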
Doctor Rob didn’t explain why he used Bx + C in his work, beyond saying that is what we do. That is usually sufficient; but knowing why may help some students to do the right thing.
Why it works (for linear factors)
In 2007, Ramiro asked the following question, carefully stating the general theory of partial fractions (followed by examples I am omitting here), but then asking that favorite question, “Why?”:
Integrals of Rational Functions by Partial Fractions

It is theoretically possible to write any rational expression f(x)/g(x) as a sum of rational expressions whose denominators involve powers of polynomials of degree not greater than two. Specifically, if f(x) and g(x) are polynomials and the degree of f(x) is less than the degree of g(x), then it can be proved that

f(x)/g(x) = F_1 + F_2 + ... + F_r

such that each term F_k of the sum has one of the forms

A/(ax + b)^n   or   (Ax + B)/(ax^2 + bx + c)^n

for real numbers A and B and a nonnegative integer n, where ax^2 + bx + c is irreducible in the sense that this quadratic polynomial has no real zeros (that is, b^2 - 4ac < 0). In this case, ax^2 + bx + c cannot be expressed as a product of two first-degree polynomials with real coefficients. The sum F_1 + F_2 + ... + F_r is the partial fraction decomposition of f(x)/g(x), and each F_k is a partial fraction.

...

I know how to do this, but I just don't see why it works.
I first referred Ramiro to a page containing a detailed proof; but proofs are not always satisfying at a human level:
"Why" has many different meanings, so you may have to clarify what kind of answer you want--a proof, or a method, or a feeling for the reasonableness of the process. You can find a couple levels of answer here: Proof of the Partial Fractions Theorem for Quadratic Factors http://mathforum.org/library/drmath/view/51687.html The first question there focuses on quadratic factors, but the second answer deals with all cases. That, however, is rather formal and may not satisfy you. A more basic answer, just waving my hands a little rather than actually proving anything, goes like this:
If you want a proof, read the reference (which is too long to quote here). But see if what follows helps in understanding the essential ideas; my goal here was to show why one might invent the details of the method, rather than to prove that it always works:
Partial fraction decomposition is a way of reversing the process of adding fractions. If we were to do the same thing with numbers, we might try to break up a fraction with a composite denominator into a sum of fractions whose denominators are primes or powers of primes:

1/18 = 1/(2*3^2) = a/2 + b/3^2 = 1/2 + (-4)/9

To add, we find an LCD by taking appropriate powers of each factor, and then adjust to use that LCD. To reverse this, we split up factors of the denominator and find appropriate numerators. It's a little more complicated with rational expressions, because the goal is not just a number but an equivalent expression--one that is equal to the original for all x. That will be the key to the whole concept.
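As a tiny aside of my own (not part of the quoted answer), you can confirm that numeric splitting with Python’s exact fractions:

from fractions import Fraction

# 1/18 split over the prime-power denominators 2 and 3^2.
print(Fraction(1, 2) + Fraction(-4, 9) == Fraction(1, 18))   # True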
This first idea motivates the use of each factor of the denominator as a denominator itself. But there’s going to be more to it.
We first look for a set of simple denominators whose LCD will be the given one. The easiest way to do that would look like what we did with 1/18:

(3x^3 - 18x^2 + 29x - 4)/((x + 1)(x - 2)^3) = ?/(x+1) + ?/(x-2)^3

The numerators, in order to make these proper fractions, can have any degree less than that of the denominator, so to cover all possibilities we would have to allow this:

(3x^3 - 18x^2 + 29x - 4)/((x + 1)(x - 2)^3) = A/(x+1) + (Bx^2 + Cx + D)/(x-2)^3

This could be done; we'd multiply both sides by the denominator and set coefficients of each power of x equal (so that they are actually the same polynomial, equal for all x), and solve the resulting four equations for the four unknowns. But this would not yield something easy to integrate. In order to accomplish that, we'd rather have the second fraction have only a constant in the numerator (that is, we'd like to drop B and C). But if you try this, you find that when you try solving for the unknowns you would have four equations but only two variables--not enough to expect a solution. We need those four variables, but we want them in a nicer form.
Note the relevance of the context: Our motivation for partial fractions is their use in calculus, and that drives the required form of the fractions. What I suggested above would otherwise be perfectly valid, and we would not need all the specific rules that are taught. But then the calculus would be harder. (I will not go into the details, but you might like to try integrating an expression of the form above, to see why.)
So what if we wrote the big numerator in terms of powers of (x-2) rather than of x:

(3x^3 - 18x^2 + 29x - 4)/((x + 1)(x - 2)^3) = A/(x+1) + (B(x-2)^2 + C(x-2) + D)/(x-2)^3

Now we still have something we can solve, and that covers all possible numerators; but when we simplify that last fraction we get something easy to integrate:

(3x^3 - 18x^2 + 29x - 4)/((x + 1)(x - 2)^3) = A/(x+1) + B/(x-2) + C/(x-2)^2 + D/(x-2)^3

And that's the standard form we're looking for. So, why does it work? It allows for all possible numerators and provides enough variables to solve for, while yielding a useful form. Let me know if you'd like to discuss this further, because it's something that isn't explained enough--too often we accept it without questioning. In writing this, I discovered some details I'd probably never thought about!
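Here, too, a quick SymPy check (my addition, under the assumption that SymPy is available) reproduces that standard form for this example:

from sympy import symbols, apart

x = symbols('x')

numerator = 3*x**3 - 18*x**2 + 29*x - 4
denominator = (x + 1)*(x - 2)**3

print(apart(numerator/denominator, x))
# Expected, up to ordering of terms:
#   2/(x + 1) + 1/(x - 2) - 3/(x - 2)**2 + 2/(x - 2)**3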
Ramiro’s reply asked about details deep within the theorems used in the proof I referred to, so I will omit them here. As I commented at the end,
Note that what I gave you gives more of a sense of what partial fractions are all about, without going through any of these details. If you want a real proof, keep working through this, but if not, you don't really have to. It is good preparation for later math, though, so I encourage you to go as far as you can.
Quadratic factors
In the explanation above, I dealt only with linear factors. I did that, at least in part, because the case of quadratic factors had already been answered in the page I referred to.
Now let’s look at the first (less formal) part of that page, from 2001, where that question was answered:
Proof of the Partial Fractions Theorem for Quadratic Factors

I was reviewing a chapter in my calculus book about Integration using Partial Fractions. The concept seems to be clear for linear factors and repeated linear factors, but why is it that when you have a non-reducible quadratic factor, you have to let the numerator of the partial fraction be Ax+B? Could you please show me a proof or explain to me why it is Ax+B instead of just A?
The basic idea will be similar to what I later did with linear factors. Doctor Fenton replied:
Hi William,

Thanks for writing to Dr. Math. Essentially, the reason you have to allow such terms is that the theorem isn't true unless you do. Partial fractions "undoes" the operation of adding rational functions by finding a common denominator. To decompose P(x)/Q(x), you first make sure the rational function is "proper": P(x)/Q(x) is proper if the degree of P(x) is strictly less than the degree of Q(x). If the degree of P(x) is greater than or equal to the degree of Q(x), you carry out long division, dividing P(x) by Q(x) to get a quotient A(x) (a polynomial) and a remainder R(x), whose degree is (strictly) less than the degree of Q(x), so that

P(x)/Q(x) = A(x) + R(x)/Q(x),

and you apply partial fractions decomposition to R(x)/Q(x).
The “Division Algorithm” theorem shows that it is always possible to do this, reducing any improper rational function to a polynomial plus a proper rational function, which is what the general theorem requires.
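To see the division step concretely, here is a small sketch (my own made-up example, not one from the original page) using SymPy’s polynomial division on an improper fraction:

from sympy import symbols, div

x = symbols('x')

P = x**3 + 2*x + 1      # degree 3
Q = x**2 + 1            # degree 2, so P/Q is improper

quotient, remainder = div(P, Q, x)
print(quotient, remainder)   # x and x + 1, so P/Q = x + (x + 1)/(x**2 + 1)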
When decomposing a proper fraction into partial fractions, you must allow enough proper fractions to occur in the decomposition. For example,

(Ax+B)/(x^2+Cx+D)

is a proper fraction, and you must allow for its occurrence. There is no way to write

x/(x^2 + 1) = A/(x^2 + 1).

If this equation were true for all x, you could multiply by x^2+1 (which is never 0) and get x = A, which means that x can have only one value, contradicting the claim that the equation is true for ALL x. So, if you don't allow for enough types of proper fractions in the decomposition into partial fractions, you won't be able to decompose the original rational function. However, you can prove that if you do allow partial fraction terms of the form

(Ax+B)/(x^2+Cx+D)   and   (Ex+F)/(x^2+Cx+D)^k

(for repeated irreducible factors) then the decomposition is always possible.
So, just as in the linear case, what drives the details is the need to have enough flexibility to handle any fraction. The fractions that result in this case are all (with a lot of work!) suitable for integrating.
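A short SymPy sketch (my own example; the fraction x/((x - 1)(x^2 + 1)) is not from the page) shows such an Ax + B numerator appearing in practice:

from sympy import symbols, apart

x = symbols('x')

# One linear factor and one irreducible quadratic factor.
print(apart(x/((x - 1)*(x**2 + 1)), x))
# Expected, up to ordering: 1/(2*(x - 1)) - (x - 1)/(2*(x**2 + 1))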
The rest of this page answers a request for the proof that was mentioned; it opens with this comment from Doctor Fenton:
I looked up the proof in B. L. van der Waerden's _Modern Algebra_, and I also found one in Birkhoff and MacLane's _A Survey of Modern Algebra_. I don't think I had ever actually read a proof of the result. Every calculus book I have is content to show the method and state that there is a proof. Actually, it's not too hard to follow, but it depends upon some division properties of polynomials that may not be familiar.
If you’re interested, dig in!