(An archive question of the week)
Many calculus courses start out with a chapter on limits; or they may be introduced in a “precalculus” course. But too often the concept is not sufficiently motivated. What good are limits? Why did they have to be invented? Are they as simple as they seem? Why is an epsilon-delta proof necessary?
Are limits useless?
The following question arrived in 2001:
Understanding the Need for Limits

I searched all over your site and couldn't find anything about understanding limits. Basically I'm trying to find any extra help I can on understanding the concept of a limit. Most of the time I get the same vague description: "The value that f(x) approaches as x approaches some number."

I can see the need for a limit on things like:

lim (x->0) [sin x / x]

because you need to know what the value "approaches," not what it actually equals, because at 0 it basically doesn't exist. But why do they give us problems like:

lim (x->3) [x^2]

Is this just for practice, or am I missing something?

Basically what I'm asking:

1. Should the concept of a limit be applicable to all functions, or is it really only useful for certain situations?
2. Are there any other ways of describing a limit?
3. Why is this concept so vague?

One more thing. The definition of a derivative is:

lim (h->0) [f(x+h) - f(x)] / h

but I don't see the need for a limit here because all you do is plug in the value of 0 for h once you've reduced the function into its simplest form. So why the limit? Why not just say, "when h = 0"?

I really appreciate any time you spend on this. A service like this is very useful and informative.
Josh basically understands what a limit is, but sees it as unnecessary or trivial in many cases. So, beyond the mechanics of finding or proving limits, what is the concept really about?
I answered this one:
Hi, Josh. These are good questions, and too easily overlooked when this subject is taught too formally. I think it's important to have not only a good understanding of the formal definitions, but also a feel for how things work and what they are all about.

But why do they give us problems like:

lim (x->3) [x^2]

is this just for practice, or am I missing something?

Exactly: This is for practice with the concept of limits and the methods for proving them. But it also introduces the concept of continuity. It is possible that such a function might NOT approach its value at 3, so you have to prove that it does. In case you haven't been introduced to continuity yet, a continuous function is one whose limit is in fact equal to its value; the fact that most familiar functions are continuous is the reason the concept seems a little silly to you.
This happens in many fields of math: to keep it simple, we have to give problems where the answer seems almost trivial. As Josh recognized, some limit questions, like \(\displaystyle \lim_{x\rightarrow 0}\frac{\sin(x)}{x}\), are important and non-obvious; but others seem hardly worth doing, because you can just plug in the number. I’m not sure all calculus students need to be able to prove limits (that’s for the math majors), but seeing and understanding such a proof is intended to help clarify the point of limits, which is, in part, that approaching a value is not the same as actually having that value.
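For concreteness (this worked proof is an added illustration, not part of the original exchange), here is what such a proof might look like for Josh's "trivial" example, \(\displaystyle \lim_{x\rightarrow 3} x^2 = 9\): given any \(\varepsilon > 0\), choose \(\delta = \min\left(1, \frac{\varepsilon}{7}\right)\). Then whenever \(0 < |x - 3| < \delta\), we have \(|x - 3| < 1\), so \(|x + 3| < 7\), and therefore \(|x^2 - 9| = |x - 3|\,|x + 3| < 7\delta \le \varepsilon\). Even here, where the answer is obvious, the proof is genuinely about approaching 9, not about plugging in 3.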
1. Should the concept of a limit be applicable to all functions, or is it really only useful for certain situations? As I suggested in my mention of continuity, the limit concept is applicable to all functions, but is only "interesting" for those peculiar functions that either are not continuous, or are not defined at some points.
For a continuous function, evaluating a limit requires just evaluating the function (once you know it really is continuous). It just happens, as we’ll see, that the primary purpose of limits in calculus, the derivative, is a case where you can’t (just) plug in a number; that’s the interesting case.
2. Are there any other ways of describing a limit? The general idea of "approaching," and the formal definition using delta and epsilon, are the two main ways to discuss limits. The latter is just a precise statement of the former. Actually, there's an even more formal and general (and therefore hard to follow) definition that takes it beyond functions of a single real variable, which generalizes the idea of a delta to "open balls" or "neighborhoods." If you didn't find "Formal Definition of a Limit" in our archives, you may want to read it, because it attempts to demystify the formal definition.
That link is one that I gave last time, but didn’t quote.
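For reference (this restatement is an addition here), the precise definition being demystified there is the usual one: \(\displaystyle \lim_{x\rightarrow c} f(x) = L\) means that for every \(\varepsilon > 0\) there is a \(\delta > 0\) such that \(0 < |x - c| < \delta\) implies \(|f(x) - L| < \varepsilon\); the more general "neighborhood" version simply replaces the intervals \(|x - c| < \delta\) and \(|f(x) - L| < \varepsilon\) with open sets around \(c\) and \(L\).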
3. Why is this concept so vague? When calculus was first being developed, the concepts of differentiation and integration were far more vague, and needed careful definition before mathematicians could be comfortable working with them - there was no way to prove anything about such unformed concepts as "sliding two points on a curve together." The delta-epsilon definition was introduced precisely so that these concepts did not have to be so vague. I'm not sure exactly what about limits seems vague to you. I think probably you don't mean "vaguely defined," but something like "vaguely related to anything else," or "not clearly needed." I hope my answers to your other questions deal with that.
Limits and derivatives
The derivative, before the concept of limit, sounded like magic: the slope of the line formed by two points that have moved toward one another until they are really the same point, and so can’t determine a line at all? That’s “vague”! This is what the philosopher George Berkeley famously mocked as “ghosts of departed quantities”, pointing to the inadequate mathematical foundations of calculus as it then stood. The limit concept is what rescued the sanity of mathematics.
The definition of a derivative is:
lim (h->0) [f(x+h) - f(x)] / h
I don't see the need for a limit here because all you do is plug in the value of 0 for h once you've reduced the function into its simplest form. So why the limit? Why not just say, "when h = 0?"
Because "reducing a function to its simplest form" is not something we can do arbitrarily, or even define clearly in all cases. (You admitted that there is a need to use a limit for sin(x)/x, and that is exactly what you get if you work out the derivative of the sine at zero.) If we want a definition of the derivative, it has to apply to any differentiable function, not just to those we can work with easily. And in fact the limit concept clarifies exactly what you mean when you talk about the "simplest form," in a way that is precisely defined.
Think about the geometric model of the derivative as the slope of a curve. This makes it the limit of the slope of chords, as the endpoints move together. This can't be defined at all for h = 0; there is no chord in that case. So it has to be defined as a limit. The "h" form of the definition merely formalizes this definition; we can't drop the need for a limit just because it's now written in algebraic form.
Feel free to continue this conversation - your questions are definitely worth asking, and I'd like to keep talking until you're confident that you understand the concepts.
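A short worked example (an addition here, using the simple function \(f(x) = x^2\)) shows exactly what the algebraic form is hiding:

\(\displaystyle \frac{f(x+h) - f(x)}{h} = \frac{(x+h)^2 - x^2}{h} = \frac{2xh + h^2}{h} = 2x + h \quad (h \ne 0).\)

The difference quotient is undefined at \(h = 0\) (it would be \(0/0\)), but for every \(h \ne 0\) it equals \(2x + h\), and so \(\displaystyle \lim_{h\rightarrow 0}\frac{f(x+h) - f(x)}{h} = 2x\). "Reducing to simplest form and plugging in 0" is really this two-step process: rewrite the quotient as a function that is defined at \(h = 0\), and then use the continuity of that new function. The limit is what justifies both steps.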
In common introductory examples of finding derivatives from the definition, the limit is a simple one that can be found by simplifying the expression in a way that “fills the hole” of a removable discontinuity (see below). Such an example makes it look as if the limit is a trivial part of the definition. But that is not true of the derivative of the sine function, for which the previously mentioned limit of sin(x)/x is needed.
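To see that concretely (a sketch of the standard computation, added here for illustration), expand the difference quotient for the sine using the angle-addition formula:

\(\displaystyle \frac{\sin(x+h) - \sin(x)}{h} = \sin(x)\,\frac{\cos(h) - 1}{h} + \cos(x)\,\frac{\sin(h)}{h}.\)

No algebraic simplification removes the \(h\) from the denominators here; the derivative \(\cos(x)\) comes only from the two limits \(\displaystyle \lim_{h\rightarrow 0}\frac{\sin(h)}{h} = 1\) and \(\displaystyle \lim_{h\rightarrow 0}\frac{\cos(h) - 1}{h} = 0\).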
Josh replied, to check his understanding:
Let me start by thanking you for your response. I can't stress enough how great this service is! Now, on to more questions.

I have read the simplified formal definition of the limit and it certainly makes things more clear. Let's see if I have this straight: A limit is when you take a value for an independent variable and show that the function produces a value for the dependent variable that is consistently close to (whatever value it produces) even if the independent variable's value is only close to its initial value? So if I put in a 3 and get out a 5, the limit exists if a 3.0001 produces a number close to 5, and 2.9999 produces a number close to 5? Which is basically like saying that the value was approached from both sides.

Then you take this concept and use it to evaluate "interesting" functions where you need to know what value a function approaches but not what it equals at certain points. Or to show that a function does, or does not, remain continuous for all independent values. Am I close?

I guess the hard part about limits is understanding what, or when, they are used. Our Calculus book covered them in the first couple of chapters, but then we haven't really seen them again since. Other than using the definition of a derivative, which now we just use the shortcut rules rather than the formal definition, when differentiating.

Thank you again. I hope I'm not coming off too thickheaded. I've always been the type to prefer understanding things thoroughly, not superficially.
What a limit is
Excellent questions and comments! I responded:
Never apologize for trying to understand something thoroughly! It's wonderful to see someone who isn't satisfied with the shallow stuff.
I think you've basically got it. Of course, you just gave examples, so what you said is far from complete. It wouldn't be enough just to show that the value at 3.00001 is close to the limit; you have to show that this is true no matter how you define "close." I forget just how the answer I pointed you to stated it, but here's how I explain the definition of a limit:
You don't believe I'm right about the limit, and challenge me to prove that the value of the function, f(x), stays close to 5 whenever x is close to 3. Since "close" is not well defined, you're not satisfied to let me decide how close is close enough; in order to convince you, I let you decide how close it has to be. You say, okay, I want you to make f(x) within 0.00000000001 of 5. I do some calculating and say, as long as x is within 0.000000000000000001 of 3, it will work. Then you say, well, maybe I should have picked a smaller number, and then you couldn't do it! But I turn around and show you that I have a formula I can use to pick my distance, which will work no matter how small your definition of "close" is. At that point you give up. I've convinced you. We don't need to play the game any more.
So the epsilon-delta proof says, no matter how close you consider close, there will be some range of x around the target value for which f(x) will be that close to the proposed limit.
This is similar to the explanations of the epsilon-delta formulation that I referred to last time.
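Played with an explicit (and admittedly tame) function chosen just for illustration, say \(f(x) = x + 2\) with \(c = 3\) and proposed limit \(L = 5\), the game goes like this: whatever closeness \(\varepsilon > 0\) you demand, I reply with \(\delta = \varepsilon\); then whenever \(0 < |x - 3| < \delta\), we have \(|f(x) - 5| = |(x + 2) - 5| = |x - 3| < \varepsilon\). My formula \(\delta = \varepsilon\) wins no matter how small your \(\varepsilon\) is, so \(\displaystyle \lim_{x\rightarrow 3}(x + 2) = 5\).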
Where a limit is needed
You're right that limits are used mostly at the definition level in elementary calculus; once you've been convinced that the shortcuts give the correct derivative or integral, you can ignore them. Whenever you come across a new type of function, of course, you'll have to go back to the definition; and there are some types of problems where the limit is an essential part of the problem (such as infinite series, which involve a related sort of limit). But the main use of the limit, as far as you are concerned right now, is just to make it possible to do calculus with confidence that everything is working right behind the scenes. Calculus was done before the limit concept was clearly defined; but mathematicians had a queasy feeling that their foundations weren't quite stable until that work was done. Now that the foundation has been stabilized, you can just shut the basement door and not worry what's going on down there (most of the time). It's still good to understand it, though, so you'll know what to think about when an earthquake hits.
He asked some final questions:
I can see where you're coming from on the definition you gave me. It makes perfect sense. All I need to do is get more comfortable with the mathematical proof side of the limit concept (the delta-epsilon one). I just need to let it sink in. Can I ask you a semi-personal question? At what point in your mathematics career did you first feel like you understood limits? Is this concept something that a first semester Calculus student should feel comfortable with? Or is it something that comes with time and more advanced knowledge?
I really liked your last description of a limit. That makes a lot more sense.
To which I replied:
I don't specifically recall being troubled by the limit concept when I first learned it (though it was a long time ago); but my understanding has certainly deepened with time and further courses. I suppose I've used it too much since then to be able to recall what that first exposure was like - just as you probably don't recall how amazing the concept of talking felt when you said your first word.
Why not just substitute?
I’d like to add another answer, from half a year later, in which I referred to this one; it provides an example of the sort of removable discontinuity that Josh found uninspiring:
Graphing Limits

I was hoping you could explain the concept of "limits" and how to read them from graphs, and why substituting the "c" number into the equation doesn't always work; i.e.

lim (x^2-1)/(x-1)
x-->1

The answer is 2, but I didn't get 2, I got undefined. Help, please!
Here, you can’t just substitute 1 into the formula, because it results in 0/0. Justin here is expecting all limits to be entirely trivial, the opposite of Josh’s concern! So I started by answering the broader question, about the meaning of limits, once again explaining the epsilon-delta “game”:
Here's the basic idea of a limit: We have a function, which may or may not be defined at x = c. We want to know how it behaves near x = c, so we look at values of x closer and closer to c. Suppose the graph of y = f(x) looks like this:
y
| o :
| o :
| o :
L+ - - - - - - - - - -?- - - - - - -
| : o
| : o
| :
+--------------------+-------------- x
c
The closer x gets to c, the closer y gets to L. Then we say that f(x) approaches L as x approaches c; that is, L is the limit of f(x) as x approaches c. For the sort of functions we usually work with, L is in fact the value of f(c), so this seems like a waste of effort. But in some very important cases (those for which this concept was invented), f(c) is not even defined, so the limit is the only way we can talk about this idea of closeness. In a sense, the limit "fills the gap" in the definition of the function, allowing us to assign a value to f(c) that makes the function "continuous."
The trick is to define "closer and closer" precisely. We do this by seeing it as sort of a game, the sort of game you might have between two gamblers who don't trust one another. You say it's close - but how do I know that what you mean by "close" is really close enough?
So I make a challenge: I'll believe that L is the limit, if for ANY definition of "close" I choose (that is, any distance I call "close enough"), you can give me a range around x = c within which the function is always that close to L. That's what the "epsilon-delta definition" of a limit means. If _you_ can keep the function as close to L as _I_ want it to be, I'll believe you. (It's sort of like having one kid cut the cake and the other choose the larger piece.)
Filling the hole
Then I took a look specifically at the example:
Your problem is
lim (x^2-1)/(x-1)
x-->1
To find this limit, you have to simplify the expression so that it is defined for x = 1, and then substitute. Without simplifying, you get 0/0, which is indeterminate. (That is, it might have any value at all, and can't be determined without the additional information in the limit.) So this is an example where the function is not defined at x = 1, yet we can talk about its limit there. What we are doing is showing that the function is equal to a new function everywhere except at x = 1, so we can "fill the gap" by saying it is also equal to that function at x = 1. This value is the limit.
Note that when we simplify such a rational function, we change its domain: we factor the numerator, making it \(\displaystyle \frac{(x+1)(x-1)}{x-1} = x+1\), and find that the new function is defined for all values of x, but is equal to the original wherever it was defined. Therefore (since we know the new function is continuous, having previously proved this for all linear functions), we can find the limit by mere substitution. It looks almost trivial, as Josh saw it; but there is a lot going on behind the scenes to make it work.
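As a quick numerical check (added here for illustration), note that since \(\displaystyle \frac{x^2-1}{x-1} = x+1\) for every \(x \ne 1\), the values near 1 are easy to compute: \(f(0.9) = 1.9\), \(f(0.99) = 1.99\), \(f(1.01) = 2.01\), and \(f(1.1) = 2.1\). The outputs close in on 2 from both sides, even though \(f(1)\) itself is undefined; 2 is exactly the value that "fills the hole."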