Last week we looked at numbers raised to the zero power, as part of our series on oddities of zero. We’ve looked at zero divided by zero in the past, and just recently observed how 0 to the 0 power relates to degree in polynomials, which is part of the motivation for this series. But there’s more to say about it.
Indeterminate, in terms of limits
We’ll start with a 1995 question to introduce the ideas:
Questions about Zero - Undefined or Indeterminate? 1/ Is 0,0,0,... a geometric sequence ? 2/ What is the value of 0/0 ? (is it really undefined or are there an infinite number of values) 3/ What is the value of 0^0 (^ means exponent)? Can it be considered to be the limit of a^0 as a approaches 0 ? Any thoughts on these would be appreciated. Thanx Norman Rogers
The third question is what we are concerned with, but the others introduce important concepts.
Doctor Ken answered each question. First, is the sequence \(0,0,0,\dots\) a geometric sequence?
1. Well, sort of. You could say that it's a geometric sequence with common ratio 4, or whatever, but I wouldn't. The reason is that you can't find the common ratio by looking at the sequence, dividing one term by the previous term. So I guess I'd say no. What you might say is that this is a degenerate case of a geometric sequence.
A geometric sequence is one in which each term is a fixed multiple of the term before it (e.g. \(5,15,45,135,\dots\), where we multiply by 3, the common ratio, each time); as he says, this sequence both is and isn’t one, because any number at all would work as the ratio, so there is no one common ratio! It’s just on the edge of fitting the definition, so I agree that “degenerate” is an appropriate term. We discussed that concept here.
Second, what is the value of \(\frac{0}{0}\)?
2. There's a special word for stuff like this, where you could conceivably give it any number of values. That word is "indeterminate." It's not the same as undefined. It essentially means that if it pops up somewhere, you don't know what its value will be in your case. For instance, if you have the limit as x->0 of x/x and of 7x/x, the expression will have a value of 1 in the first case and 7 in the second case. Indeterminate.
We discussed the idea of indeterminate expressions when we talked about zero divided by zero. There we looked at it both from an arithmetical perspective, and from the calculus perspective using limits, as here. As Norman said, we can think of an indeterminate expression as having an “infinite number of values”.
Third, our question: What is the value of \(0^0\)?
3. 0^0 is indeed indeterminate. It turns out that you could make it have any value between 0 and 1, inclusive. You could have 0 if it's the limit as a->0 of 0^a, you could have 1 if it's the limit as a->0 of a^0, and for x in between 0 and 1, (and this is the neat part from Dr. Shimimoto) look at the expression (x^n)^(1/n). This just equals x for all positive values of n. As n->Infinity, this fraction goes to 0^0, but if it's just x the whole time, the limit of the expression as it goes to 0^0 is x. So we could make it anything in between 0 and 1, so it's got to be indeterminate.
The trouble is that both the base and the exponent could be replaced by variables approaching zero, and depending on how they jointly approach zero, you get different values. If you think of it as \(x^y\), then
- If x is fixed at \(x=0\), then \(x^y=0^y=0\rightarrow0\) as \(y\rightarrow0\).
- If y is fixed at \(y=0\), then \(x^y=x^0=1\rightarrow1\) as \(x\rightarrow0\).
- If \(x=a^n\) where \(0<a<1\), and \(y=\frac{1}{n}\), then as \(n\rightarrow\infty\), \(x\rightarrow0\) and \(y\rightarrow0\),
but \(x^y=\left(a^n\right)^\frac{1}{n}=a^1=a\rightarrow a\), whatever a might be.
So you can get any limit from 0 to 1.
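The three bullets above are easy to watch numerically. Here is a minimal Python sketch of the same three paths (the sample points and variable names are mine, not from the original answer):

```python
# Three paths to (0, 0), matching the bullets above:
#   path 1: x = 0 fixed, y -> 0+          => x^y stays 0
#   path 2: y = 0 fixed, x -> 0+          => x^y stays 1
#   path 3: x = a^n, y = 1/n, n -> inf    => x^y stays a (here a = 0.5)
a = 0.5
for n in (1, 10, 100, 1000):
    y = 1.0 / n
    x = a ** n                    # x -> 0 as n grows
    print(n, 0.0 ** y, x ** 0.0, x ** y)
    # prints: n, then 0.0, 1.0, and approximately 0.5 every time
```

The three columns never budge from 0, 1, and \(a\), even as both base and exponent shrink to zero, which is exactly why no single limit can be assigned.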
Indeterminate, in more basic terms
Now, a question from 2001:
What is 0^0? I know you've answered this before, but I really don't understand your answer. What is 0^0? We are doing exponents in school and we were talking about how 9^0=1, 10^0=1, etc., and I asked what 0^0 is. My teacher didn't know so I decided to find out. Your answer to this question in your archives confuses me, so could you explain it better?
It isn’t clear which answer Molly looked at; it could be Doctor Ken’s above (which assumes knowledge of calculus), or one of several others, including the FAQ we’ll see later.
Doctor Jeremiah answered, keeping it more basic:
Hi Molly, This following section of the Dr. Math FAQ is where all the good information is: http://www.mathforum.org/dr.math/faq/faq.0.to.0.power.html This is why 0^0 is called an indeterminate form: Anything times zero is zero: 0^1 = 0 = 0 0^2 = 0*0 = 0 0^3 = 0*0*0 = 0 0^4 = 0*0*0*0 = 0 But as you mentioned earlier, anything to a power of zero is one: 1^0 = 1 2^0 = 1 3^0 = 1 4^0 = 1 So would 0^0 be 1 or would it be 0? Well, there is no right or wrong answer to this, and since there are two right answers, we say that it's not answerable. (We call it "indeterminate" because indeterminate means that the answer can't be determined.)
Rather than talk about limits, we can just observe two conflicting rules: zero to any (non-zero) power is 0, while anything (other than zero) to the zero power is 1. Which do we extend to this case? There is no reason to choose between them (yet!), so we have to say, “We can’t say”.
There are other things you can do with 0 that are indeterminate; 0/0 is one of them. All you can really say about 0^0 is that there is no way to know what the answer is. That's why it's indeterminate, but you can sometimes find out what answer it might be by taking the limit of the top and dividing it by the limit of the bottom. This is called L'Hopital's Rule and is something you learn much later on. So sometimes there is an answer, and sometimes you can figure out what it might be, but usually there is just no way to know (which is why it's called "indeterminate").
This is discussed here, which takes us back into calculus. The idea is that for a particular expression that looks like \(\frac{0}{0}\), we can sometimes determine a specific value, not by merely dividing 0 by 0, but from the behavior of the specific functions involved.
The Ask Dr. Math FAQ: Sometimes we say it’s 1!
Before we move on, it will be helpful to see that FAQ, which will be mentioned again, and which goes beyond what we’ve said so far:
What is 0 to the 0 power?
This answer is adapted from an entry in the sci.math Frequently Asked Questions file, which is Copyright (c) 1994 Hans de Vreught (hdev@cp.tn.tudelft.nl).
According to some Calculus textbooks, 0^0 is an “indeterminate form.” What mathematicians mean by “indeterminate form” is that in some cases we think about it as having one value, and in other cases we think about it as having another.
When evaluating a limit of the form 0^0, you need to know that limits of that form are “indeterminate forms,” and that you need to use a special technique such as L’Hopital’s rule to evaluate them. For instance, when evaluating the limit Sin[x]^x (which is 1 as x goes to 0), we say it is equal to x^x (since Sin[x] and x go to 0 at the same rate, i.e. limit as x->0 of Sin[x]/x is 1). Then we can see from the graph of x^x that its limit is 1.
In the context of calculus and limits, then, we must consider \(0^0\) as an indeterminate form, which means that when the limits of the base and the exponent are both 0, we can’t (yet) say what the limit of the power is.
But there are other contexts:
Other than the times when we want it to be indeterminate, 0^0 = 1 seems to be the most useful choice for 0^0 . This convention allows us to extend definitions in different areas of mathematics that would otherwise require treating 0 as a special case. Notice that 0^0 is a discontinuity of the function f(x,y) = x^y, because no matter what number you assign to 0^0, you can’t make x^y continuous at (0,0), since the limit along the line x=0 is 0, and the limit along the line y=0 is 1.
This means that depending on the context where 0^0 occurs, you might wish to substitute it with 1, indeterminate or undefined/nonexistent.
Some people feel that giving a value to a function with an essential discontinuity at a point, such as x^y at (0,0), is an inelegant patch and should not be done. Others point out correctly that in mathematics, usefulness and consistency are very important, and that under these parameters 0^0 = 1 is the natural choice.
Because the limit under different conditions varies, no value for \(0^0\) will make the function \(f(x,y)=x^y\) continuous; this is still calculus. But there are contexts in which we can usefully ignore that. Three justifications follow: the first is more calculus (showing why this limit is different from other indeterminate limits); the second is algebra, pointing out one of several contexts where taking \(0^0=1\) is essential; and the third largely repeats the first.
The following is a list of reasons why 0^0 should be 1.
Rotando & Korn show that if f and g are analytic functions and f > 0, then f(x)^g(x) approaches 1 as x approaches 0 from the right.
From Concrete Mathematics p.162 (R. Graham, D. Knuth, O. Patashnik):
Some textbooks leave the quantity 0^0 undefined, because the functions 0^x and x^0 have different limiting values when x decreases to 0. But this is a mistake. We must define x^0=1 for all x, if the binomial theorem is to be valid when x=0, y=0, and/or x=-y. The theorem is too important to be arbitrarily restricted! By contrast, the function 0^x is quite unimportant.
Published by Addison-Wesley, 2nd printing Dec, 1988.
As a rule of thumb, one can say that 0^0 = 1, but 0.0^(0.0) is undefined, meaning that when approaching from a different direction there is no clearly predetermined value to assign to 0.0^(0.0); but Kahan has argued that 0.0^(0.0) should be 1, because if f(x), g(x) → 0 as x approaches some limit, and f(x) and g(x) are analytic functions, then f(x)^g(x) → 1.
The discussion of 0^0 is very old. Euler argues for 0^0 = 1 since a^0 = 1 for a not equal to 0. The controversy raged throughout the nineteenth century, but was mainly conducted in the pages of the lesser journals: Grunert’s Archiv and Schlomilch’s Zeitschrift. Consensus has recently been built around setting the value of 0^0 = 1.
The binomial theorem says that $$(1+x)^n=\sum_{k=0}^n{n\choose k}x^k,$$ so if \(x=0\), $$1=(1+0)^n=\sum_{k=0}^n{n\choose k}0^k\\={n\choose 0}{\color{Red}{0^0}}+{n\choose 1}0^1+\dots+{n\choose n}0^n={\color{Red}{1}}+0+\dots+0,$$ and \(0^0\) must be 1 to make this work.
So \(0^0=1\) appears to be a common choice, at least for algebra and combinatorics; but how can that make sense? Some students read that and are confused, so they write to us.
Clarifying the FAQ
That is what happened, perhaps, in this question from 2006:
Proof That 0/0 = 1 Based on x^0 Equaling 1? You wrote that the reason x^0 = 1 is because of the laws of exponents: (3^4)/(3^4) = 3^(4-4) = 3^0. You also wrote that 0^0 = 1 while maintaining that 0/0 doesn't make sense. If the proof that any x^0 is 1 is through exponents, then how do we prove 0^0 = 1? (0^3)/(0^3) = 0^(3-3) = 0^0 but (0^3) = 0 so: 0/0 = 1 I am specifically targeting Euler's argument to make all x^0 = 1. Does he have a different method of proof besides exponents?
The mention of Euler suggests that Joseph may be basing this on the last paragraph of the FAQ, or something similar. As he says, and we saw last week, we can prove that for any \(x\ne0\), \(x^0=1\); but this proof doesn’t apply when x is 0. The FAQ says that \(0^0=1\) is taken to be true for particular purposes, not that it is proved to be so in general.
But he sees a contradiction: If we take that as actually true, then we can “prove” that \(0\div0=1\), though we know that it is indeterminate. So these “facts” aren’t consistent. We need to explain the inconsistency. (We’ll look at his “proof” in detail below.)
I answered:
Hi, Joseph. It sounds like you may not have seen our FAQ on this topic: 0 to 0 Power http://mathforum.org/dr.math/faq/faq.0.to.0.power.html The fact is, we DON'T prove that 0^0 is 1! When x is not 0, x^0 is equal to 1, because that definition is consistent with the rules for exponents, as you mentioned: x^0 = x^(1-1) = x^1 / x^1 = x/x = 1 These rules don't work when x = 0, however. In order to choose a reasonable definition for this, we have to consider limits: what happens to x^y when both x and y approach zero? It turns out that the answer depends on how they approach zero: if x goes to zero first, then x^y = 0^y = 0 for all y, so the limit is 0; but if y goes to zero first, then x^y = x^0 = 1 for all x except zero, and the limit is 1. We call such a limit "indeterminate", meaning that as it stands there is no way to choose one correct value.
This summarizes what we’ve said above. As far as proof is concerned, all we can do is to show that \(0^0\) does not have a single value. But …
That is all we can do in general. However, 0^0 is an unusual indeterminate case, in that for most purposes the value 1 is appropriate. When working with those formulas, we arbitrarily define 0^0 as equal to 1, so that the formulas work neatly. This is not something we prove, but just a definition that works in these cases. The FAQ gives some examples of the reason for this choice.
What I mean by “for most purposes” is, essentially, “in algebra” (and not in limits). In particular, in algebra and in combinatorics, the exponent is typically an integer, so limits where the exponent goes to zero are not present even in the background. When the exponent is zero, it is solidly zero, and the expression is solidly 1!
Note that although 0^0 is therefore defined as 1 for use in certain formulas, it is still true that 0^0 as a limit is still indeterminate, and must be resolved specially in each problem in which it occurs. Also, we have to be aware that 0^0 can't be treated as 1 in every situation, as illustrated by your "proof" that 0/0 = 1.
There are some definitions that are merely arbitrary conventions, but always make sense. This, on the other hand, is a limited convention, which must be dropped when you get into dangerous territory (namely, when you are working with limits). That’s the reason for the comment in the FAQ about “an inelegant patch”. It doesn’t fit smoothly everywhere!
Let’s take a moment to look more closely at Joseph’s “proof”. He says that, on one hand, the laws of exponents show that $$\frac{0^3}{0^3}=0^{3-3}=0^0=1,$$ with this definition; but on the other hand, $$\frac{0^3}{0^3}=\frac{0}{0}.$$ Therefore, the two quantities must be equal, and \(\frac{0}{0}=1\). There are no explicit limits involved, so this seems to be in an algebraic (non-limit) context where \(0^0=1\) is acceptable. The best way I see to explain the problem is that algebra only applies where operations are defined, and since \(\frac{0}{0}\) is not defined, we can’t claim it is equal to anything. It is illegal to use this in a proof!
Why I believe 0^0 = 1
Finally, from 2007:
Why Does 0^0 = 1 and Not Undefined? Your proof for why x^0 = 1 uses a law which breaks down at x = 0. Then in your definition for 0^0 you side significantly in favor of 0^0 = 1 based on your rule for x^0 = 1 (which was based on a law that breaks down at 0). Based on what I've read I would side in favor of undefined. Are there any more conclusive reasons for siding with 0^0 = 1?
I answered:
Hi, Jesse. I presume you are referring to our FAQs: N to 0 power http://mathforum.org/dr.math/faq/faq.number.to.0power.html 0 to 0 power http://mathforum.org/dr.math/faq/faq.0.to.0.power.html You can also find a restatement of the 0^0 issue here: Proof That 0/0 = 1 Based on x^0 Equaling 1? http://www.mathforum.org/library/drmath/view/69917.html Ultimately, the answer really depends on your context, as both 0^0 discussions above recognize to different extents. It HAS to be taken as an indeterminate form in calculus, because different limits that reduce to 0^0 have different values; we can't just say, "Oh, 0^0 is defined as 1, so that's the limit". But in many specific formulas or types of equations, taking 0^0 as 1 is necessary in order to write the formula without exceptions.
We saw my conversion to the latter belief in the recent post Polynomials: A Matter of Degrees, where a 2001 question made this more real to me; I repeated that idea here:
For me, the clincher was when I realized that I've always talked about the constant term in a polynomial as the zero degree term, and yet if you write ax^2 + bx + c = ax^2 + bx^1 + cx^0 and don't take 0^0 as 1, you've changed the polynomial, which is defined for all x, into something that is undefined for x=0! I've never intended to do that, and never even thought about it, until a student pointed it out. Now I'm a believer: 0^0 not only must be, but simply IS, taken as 1 in many cases in ordinary algebra.
But, again, that is not a “must” for all cases:
But that doesn't mean we can automatically assume that it makes sense to define it that way in any context we come across. We have to keep our eyes open, and determine whether a new context is one in which this definition fits.
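The polynomial point from the quote above is easy to see in code; here is a minimal sketch (the helper `poly` is mine, written just for this illustration):

```python
# Evaluate a polynomial written with explicit powers: sum of c_k * x^k.
# If 0^0 were not taken as 1, the constant term would vanish (or be
# undefined) at x = 0, silently changing the polynomial.
def poly(x, coeffs):
    """coeffs[k] is the coefficient of x^k."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

# 3x^2 + 2x + 7 at x = 0 must give the constant term, 7:
print(poly(0, [7, 2, 3]))  # -> 7, because Python evaluates 0 ** 0 as 1
```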
One way to see why it makes sense to define \(0^0\) as \(1\) in the context of combinatorics is to think about counting the possible functions from a finite set \(S\) to a finite set \(T\).
If we use (as mathematicians sometimes do) the notation \(T^S\) to denote the set of all functions from \(S\) to \(T\) and the notation \(|{X}|\) to denote the number of elements in a set \(X\), then the equation
(*) \(|{T^S}| = |{T}|^{|{S}|}\)
(where the superscripting on the left side of the \(=\) sign is the notation I’ve just described for sets of functions and the superscripting on the right side of the \(=\) sign denotes ordinary exponentiation) applies in all cases, including cases where \(S\), \(T\), or both are empty, provided that we take \(0^0\) to be \(1\). Let’s take a closer look at how this works, first when \(S\) and \(T\) are nonempty, and then when \(S\), \(T\), or both are empty.
• As an example of Equation (*) applying where \(S\) and \(T\) are nonempty, suppose we want to count the number of possible ways to distribute a set \(S\) of \(5\) objects among a set \(T\) of \(3\) bins. Then we have \(3\) choices of bin for the first object, \(3\) choices for the second object, etc., giving a total of \(3\times 3\times 3\times 3\times 3 = 3^5 = |{T}|^{|{S}|}\) possibilities altogether.
• If \(S\) is nonempty but \(T\) is empty, then the number of functions from \(S\) to \(T\) is zero. [Proof: Choose some element \(s\in S\). Then for any function \(f\) from \(S\) to \(T\), we must have \(f(s)\in T\). But this is impossible because \(T\) is the empty set.] So we see that \(|{T}^{S}|=|{\emptyset^S}| = 0\), and this is indeed the same as \(|T|^{|{S}|} = |{\emptyset}|^{|{S}|} = 0^{|{S}|} = 0\). That is, the rule \(|{T^S}| = |{T}|^{|{S}|}\) indeed applies in cases where \(S\) is nonempty but \(T\) is empty. Notice, however, that our proof that \(|{\emptyset^S}| = 0\) does not apply to the case \(S=\emptyset\) (equivalently, \(|{S}| = 0\)), since it depends on the possibility of choosing some \(s\in S\).
• Finally, if \(S\) is empty, then there is exactly one function from \(S\) to \(T\), namely the empty function, and this is so regardless of whether \(T\) is empty or nonempty. Thus, we are led to the conclusion that it makes sense to say \(|{T}|^0 = |{T}|^{|{\emptyset}|} = |{T}^{\emptyset}| = 1\) for every possible value of \(|{T}|\) including \(0\).
To recap, the preceding analysis shows that the equation \(|{T^S}| = |{T}|^{|{S}|}\) applies for all finite sets \(S\) and \(T\), empty or not, provided that we define \(0^0\) as \(1\). I think it also provides some insight into why it makes sense, in this context, to extend the rule \(n^0=1\), but not the rule \(0^n=0\), from the case where \(n>0\) to the case \(n=0\).
One might say, “Okay, I see why it makes sense to say \(0^0=1\) in this context, but how important is it? How often do we care about counting functions from the empty set to the empty set, anyway?” Without going into details, I’ll simply remark that in combinatorics we often make use of recurrence relations that give answers to various counting problems as functions of the answers to simpler counting problems, and that it is often convenient to state such recurrences so that the base cases are trivial counting problems involving sets, functions, strings, etc. of size \(0\).
One final, slightly off-topic, comment: Mathematicians have developed a number of different ways to define addition, multiplication, and exponentiation when one or both arguments are infinite. In one of those systems, known as cardinal arithmetic, exponentiation is defined so that Equation (*) applies to all sets \(S\) and \(T\), whether finite or infinite.