Numerical Approximation Methods: When Algebra Doesn’t Work

The problems students see in class are usually only those that can be solved by the methods they have been taught. Too many students conclude that algebra can solve anything! But the reality is that if you just wrote an equation at random, it probably could not be solved algebraically. When students ask us about such a problem, we refer them to what are called “numerical methods”, which give approximate answers. Here I want to give some examples of such methods.

Survey of iterative methods

First, an overview, based on a question from Joanne in 2005:

How to Solve Equations with No Analytic Solution Method

One of my friends gave me this question several days ago:
 
  If x(e^x) = 3, find the value of x.

I realized that even if I changed the base to "ln" form, it would not help as the two x's are in different levels:

  x(e^x)= 3
  ln x + x = ln 3
  x = ln 3 - ln x
  x = ln (3/x)

Using "ln", "e", or any sort of combination seems fruitless for this case.  Please help.

It’s a good exercise to try solving this as Joanne did; but eventually you will agree that it is fruitless. When I look at the problem, I know ahead of time that it probably can’t be solved algebraically, because the variable occurs both inside and outside of an exponent. Doctor Fenton took this question, showing some of the basic methods that can be used to get an approximate solution:

Thanks for writing to Dr. Math.  Simple-looking problems can be difficult to solve.  This equation can't be solved analytically, that is, by finding a formula and substituting 3 into it.

Instead, you have to use an iterative method, which makes an initial estimate ("guess"), and then finds improved estimates by some algorithm.

Unlike algebraic solutions, you will not get an exact answer. But the more you repeat the algorithm, the closer you will come to the exact value. In the real world, that is enough.

Bisection (Binary search)

There are many such algorithms, some better (faster) than others.  The simplest, which always works when the problem has a solution, but which is usually the slowest, is bisection.  You first write the problem in the form f(x) = 0, so that you are looking for a root of the function f.  Then you need to find two numbers a and b such that f(a) and f(b) have different signs.  (I am assuming that f is a continuous function, so that it must cross the axis between two values where it has different signs.)  Next, you evaluate f at the midpoint m=(a+b)/2 of the interval.  If this should happen to be 0, you are done.  Otherwise, f(m) will have the same sign as one of the old endpoints.  Replace that old endpoint with m, and you have a new interval on which f has different signs at the endpoints, and the new interval is half as large as the original interval.

Then you repeat the above process with the new interval.  After 10 steps, the interval will be 1/1024 as large as the original interval; after 20, less than 1/1000000 of the original length.

This method is typically slow, as he mentions; but is by far the easiest to explain! We’ll see an example later.
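Doctor Fenton’s description translates almost directly into code. Here is a minimal Python sketch of bisection applied to Joanne’s equation (the function names are my own, not from the original answer):

```python
import math

def bisect(f, a, b, steps):
    """Bisection: halve [a, b] repeatedly, keeping the half where f changes sign."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(steps):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # sign change is in [a, m]
            b = m
        else:                  # sign change is in [m, b]
            a = m
    return (a + b) / 2

# Joanne's equation x*e^x = 3, rewritten as f(x) = x*e^x - 3 = 0
f = lambda x: x * math.exp(x) - 3
root = bisect(f, 1, 2, 20)   # after 20 halvings the interval is about a millionth as wide
```

After 20 steps the estimate agrees with the true root, about 1.049909, to five decimal places.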

False position (linear interpolation)

"False Position" is a method which takes the straight line through the two points (a,f(a)) and (b,f(b)) and finds where it crosses the axis, say, at x = c.  Evaluate f(c).  Again, unless f(c) = 0 (possible but extremely unlikely), you get a new, smaller interval on which f has opposite signs at the endpoints.  It generally finds an approximate root faster than bisection.

This method follows the same basic procedure as bisection, but finds a different “midpoint” that is better suited to a particular problem. I will examine it more closely in a later post.
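The only change from bisection is how the next test point is chosen: instead of the midpoint, we use the x-intercept of the chord joining the endpoints. Here is a minimal Python sketch (my own code, not from the original answer):

```python
import math

def false_position(f, a, b, steps):
    """False position: replace the midpoint with the x-intercept of the
    chord through (a, f(a)) and (b, f(b))."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(steps):
        c = b - fb * (b - a) / (fb - fa)   # where the chord crosses the axis
        fc = f(c)
        if fa * fc <= 0:       # sign change is in [a, c]
            b, fb = c, fc
        else:                  # sign change is in [c, b]
            a, fa = c, fc
    return c

f = lambda x: x * math.exp(x) - 3
root = false_position(f, 1, 2, 8)   # reproduces the 8-step run shown below
```

Eight steps land near 1.0496, matching the table later in this post.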

Newton’s method

The best method is usually Newton's method (although it sometimes doesn't work), and it works extremely well here.  It takes calculus to derive the formulas, but for this problem, you make an initial estimate x_0.  Given an estimate x_n, you make an improved estimate x_(n+1) by using

                xe^x - 3
   next x = x - --------  .
                (x+1)e^x

The function f(x) = xe^x - 3, and f(1) < 0, while f(2) > 0 .  Using a spreadsheet, with x_0 = 1, I got 10 decimal place accuracy in 5 steps. With x_0 = 2, I got 8 decimal place accuracy in 5 steps.

Newton's method generally doubles the number of correct decimals in each step, once you are "close" to the root.
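If you want to reproduce Doctor Fenton’s spreadsheet experiment, the iteration is only a few lines of Python (a sketch of my own; the names are not from the original answer):

```python
import math

def newton(f, fprime, x, steps):
    """Newton's method: repeatedly slide down the tangent line at the current estimate."""
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

f      = lambda x: x * math.exp(x) - 3
fprime = lambda x: (x + 1) * math.exp(x)   # derivative of x*e^x, by the product rule

root = newton(f, fprime, 1.0, 5)   # x_0 = 1, five improvement steps
```

Starting from x_0 = 1, five steps already pin the root, about 1.0499089, to many decimal places.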

Here are the results of applying each of the three methods to this particular problem; you can see how much more quickly Newton’s method reaches a good answer:

First, bisection:

a         b         mid       f(a)      f(b)      f(mid)
1         2         1.5       -0.28172  11.77811  3.722534
1         1.5       1.25      -0.28172  3.722534  1.362929
1         1.25      1.125     -0.28172  1.362929  0.465244
1         1.125     1.0625    -0.28172  0.465244  0.074446
1         1.0625    1.03125   -0.28172  0.074446  -0.10779
1.03125   1.0625    1.046875  -0.10779  0.074446  -0.01773
1.046875  1.0625    1.054688  -0.01773  0.074446  0.02809
1.046875  1.054688  1.050781  -0.01773  0.02809   0.005113

After 8 steps, our answer is 1.050781, where \(xe^x = 3.005112\).

Next, false position (where c is the new value for x, which replaces one of the previous values):

a         b  f(a)      f(b)      c         f(c)
1         2  -0.28172  11.77811  1.02336   -0.15247
1.02336   2  -0.15247  11.77811  1.035841  -0.08154
1.035841  2  -0.08154  11.77811  1.042471  -0.04333
1.042471  2  -0.04333  11.77811  1.04598   -0.02294
1.04598   2  -0.02294  11.77811  1.047835  -0.01213
1.047835  2  -0.01213  11.77811  1.048815  -0.0064
1.048815  2  -0.0064   11.77811  1.049332  -0.00338
1.049332  2  -0.00338  11.77811  1.049604  -0.00178

After 8 steps, we have 1.049604, where \(xe^x = 2.99821\), just a little closer but on the other side. (A modified version of the method could have been much faster.)

Now, Newton’s method:

x_n       f(x_n)
1         -0.28172
1.051819  0.011205
1.049912  0.0000159
1.049909  0.000000000032

Only 4 steps take us to 1.049909, where \(xe^x = 3.000000615\). Not bad!

Now let’s look at more detailed explanations, with different examples, for two of these methods.

Bisection

A student in 1998 asked specifically about this method in the context of polynomials, though it can be used with any continuous function:

Roots and the Bisection Method

What is the Bisectional Method? How would I solve 2x^3+x^2+5x+2 = 0 or any other polynomial? Can the method be used by something greater than the 3rd degree?

Doctor Benway answered, starting with a familiar example in a guessing game:

The bisection method is one of my favorite surefire methods for finding roots of polynomials. The great thing about this method is that it works for everything, at least everything normal that you're likely to run into at this point in your mathematical education.

Have you ever watched the game show "The Price is Right"? That show actually uses a form of the bisection method, in a way. For example:

   Bob Barker: This lovely toaster oven/can
               opener from General Electric can
               be yours if you can guess the
               price within thirty seconds.
               Start the clock.
   Contestant: Ummm....$20
   Bob Barker: Higher
   Contestant: $100
   Bob Barker: Lower
   Contestant: $60
   Bob Barker: Lower
   Contestant: $40
   Bob Barker: Higher
   Contestant: $50
   Bob Barker: Correct!  You win!
   Contestant: Wheeeeeeeee!

As you notice, the contestant made two initial guesses, $20 and $100, which were too low and too high respectively. This told her that the price was between these two guesses. She then guessed halfway between, $60, and found that was too high, so it must be between $20 and $60.  Her next guess, $40, was too low, so she knew it was between $40 and $60. She guessed $50, halfway between, and she was correct!

The method essentially consists of a series of guesses, narrowing down the range.

This example turns out to be very similar to how the bisection method works. The usual method is to first plot a function to get an idea about where the roots are. Say f(x) has a root somewhere between 2 and 3 when you graph it. Your goal is to guess numbers as above until you get a number that turns out to be zero when you plug it in. Your initial "guesses" are 2 and 3. To find out whether you need to go "higher" or "lower," you plug the numbers you are guessing into the function. In this case, f(x) being positive or negative will tell you whether you need to go higher or lower. The best way to do this is to look at the graph to tell which way you should go. If you have an increasing function on that interval, a negative number means you guessed too low, and a positive number means you guessed too high (see below):  

                /   +
               /   too high
   ___________*__________
             /    
     -      /
    too    /
    low   /

If the function is decreasing, then the opposite rule applies.

This is like searching for a hidden object while a friend tells you when you are getting “warmer” or “colder” — but here the friend is the function itself.

Where "bisection" comes in is that the best way to do this is to pick the midpoint of the interval you know the root to lie in, just as the contestant picked midway between the price range she knew. Eventually you will come closer and closer to the answer, and when you feel you have an x where f(x) is close enough to being zero, then you can put it down as an approximation for the root. 

This works for all continuous functions. It's not the best method, but it is certainly the easiest. If you take a class such as Numerical Analysis, you'll learn better methods for approximating roots.

Of course, the midpoint is not the “best” next point to try; but it is always going to be closer than one of our previous guesses, and it is easy to do. When I look at False Position next time, it will be clear that that method is essentially Bisection with a better way to make the next guess.
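To see the method on the student’s cubic, here is a short Python sketch (my own code, not from the original answer); note that bisection works exactly the same way for a polynomial of any degree, or for any continuous function, once you have bracketed a sign change:

```python
def bisect(f, a, b, steps):
    """Shrink [a, b] by half each step, keeping the half with the sign change."""
    for _ in range(steps):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

# The student's cubic 2x^3 + x^2 + 5x + 2; it changes sign
# between -1 (where f = -4) and 0 (where f = 2).
p = lambda x: 2*x**3 + x**2 + 5*x + 2
root = bisect(p, -1, 0, 50)   # root is near -0.406
```

A graph (or a few trial evaluations) supplies the initial bracket; the loop does the rest.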

Newton’s Method

This is the method we most often recommend, for those who have some knowledge of calculus, or who can just use a formula we provide. In the following example from 2006, a student wants to know how to solve a transcendental equation, and Doctor Jerry demonstrates this method:

Solving Trig Equations with Newton's Method

What is the technique to solve a trigonometric equation like this?  Do I need to use calculus?  I've studied a bit of that.

  3sin(x) = x + 1  (0 < x < 2pi)

What if the above equation is in quadratic or some higher order, what is the technique to do that?

Here again we have the x both inside and outside a “transcendental function”, this time a trigonometric function. Doctor Jerry first pointed out the need for an approximation method:

This equation is of "higher order;" it is called a "transcendental equation."  The needed technique is either numerical or graphical.  I'll assume that you can do the graphical with the help of a calculator.  Actually, most "scientific" calculators these days can also solve this kind of equation numerically.

The typical graphing calculator can both show you the graph (as he does below), and find a solution more accurately than you can by reading the graph. It likely uses Newton’s method or something like it. Since Sam mentioned knowing some calculus, this was the right method to explain here.

If you want to do it "by hand," you can use something like Newton's Method, which uses calculus.  Take a look at this figure:

[Figure: at left, the graphs of y = 3sin(x) and y = x + 1; at right, the graph of f(x) = 3sin(x) - (x+1), whose zeros we want.]

Notice that I've graphed in the left figure the two original graphs.  In the right figure I've graphed the function f(x) = 3sin(x) - (x+1). We want to find its two zeros.  I'll concentrate on the left one, near 0.53.

Newton’s Method is designed specifically to find zeros of a function; that is, it finds a value of x such that f(x) = 0. This is why he has defined f as the difference of the two functions that are to be equal. The method also depends on an initial guess near the desired solution, which makes an initial (rough) graph useful. We can see that one solution is near 0.5; in order to make the process more visible, Doctor Jerry chose to start a little farther away.

In Newton's Method, one makes a guess as to where the root is.  In this case, I'll guess that the root is near 0.75.  Call this x1.  We work out a sequence of guesses, which usually improve and converge to the root.

To find x2 we do this:  From (x1,0) we go vertically until we hit the graph of f.  At that point we draw a tangent line.  Call the x-intercept of this tangent line x2.  Once we have x2, we repeat this procedure.

It's not difficult to show that

  x2 = x1 - f(x1)/f'(x1)

  x3 = x2 - f(x2)/f'(x2) 

and so on.

So what we have is a formula to find the next “guess”, \(x_{n+1}\), given the current “guess”, \(x_{n}\). The formula in general is \(\displaystyle x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\), where \(f'(x)\) is the derivative of \(f(x)\).

For this particular function, \(f(x) = 3\sin(x) - (x+1)\), the derivative is \(f'(x) = 3\cos(x) - 1\), so the formula becomes \(\displaystyle x_{n+1} = x_n - \frac{3\sin(x_n) - (x_n+1)}{3\cos(x_n) - 1}\).

The results are:

If x1 = 0.75, then
   x2 = 0.503222
   x3 = 0.537907
   x4 = 0.53847

We may be reasonably confident that the root is near 0.538.  We can check this by calculating f(0.538).  We find f(0.538) = -0.000741359. We can continue generating new approximations until we achieve the accuracy we want.

Note that the last two guesses both rounded to 0.538, so the solution will, also. In four steps, we found a solution to three decimal places.
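You can check these numbers yourself with a short Python sketch of the iteration (my own code, not Doctor Jerry's):

```python
import math

def newton(f, fprime, x, steps):
    """One tangent-line step per iteration: x <- x - f(x)/f'(x)."""
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

f      = lambda x: 3*math.sin(x) - (x + 1)
fprime = lambda x: 3*math.cos(x) - 1

# Doctor Jerry's starting guess x1 = 0.75; three steps produce x2, x3, x4
root = newton(f, fprime, 0.75, 3)
```

Three steps give x4, about 0.53847, agreeing with the sequence above.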

For another example, explained in great detail, see:

Using Newton's Method to Solve an Implicit Equation

Next time, I’ll discuss the Method of False Position, which has two sides to discuss, only one of which really fits the present discussion, but both of which are interesting.
