To close out this series that started with postulates and theorems in geometry, let’s look at different kinds of facts elsewhere in math. What is commonly called a postulate in geometry is typically an axiom in other fields (or in more modern geometry); but what about those things we call properties (in, say, algebra)?
What does it take to make a property?
First, we’ll look at this question from 1999:
Properties and Postulates

When people discover (or create) a property, do they just discover it ONCE and then know that from then on it applies to all similar situations, or do they just happen to keep discovering the property until they decide to call it a property? In other words, how long does it take for something to become a property?

COMMUTATIVE PROPERTY:

1. The commutative property of addition seems so intuitive and fundamental - (obviously if you have an apple, an orange, and a lemon in a box, you could also call them a lemon, an apple, and an orange and still be describing the same box) - that it is almost like a postulate. What distinguishes postulates from properties such as the commutative property of addition?

2. How can people be sure that properties such as the commutative property of multiplication, which are not as intuitive as the commutative property of addition, work in every case? (I think I found a way to prove this property - but I want to know if it is something that even NEEDS to be proven.)
Doctor Ian took this one, first looking at the history question (the answer to which, of course, varies a lot):
In theory, once you've discovered a property - that is, proved that some theorem is true - then you never need to discover it again. And anyone who is playing the same game as you (for example, standard number theory) can use your discovery to make more discoveries of his own.

But that's in theory. In practice, you would have to publish your result, and other people would have to read and verify it. Gauss made several major discoveries that he wrote in his diary, and which only became known decades after his death, after some of them had been discovered independently by other mathematicians.

Similarly, Isaac Newton invented the Calculus in order to prove to his own satisfaction that an inverse square law of force would result in an elliptical orbit, but he didn't tell anyone until Edmund Halley asked him about it many years later. In Germany, Leibniz, not knowing what Newton had done, invented it on his own, using a different notation.

The Indian mathematician Ramanujan 'discovered' many results by some intuitive process that no one understood, but which clearly didn't involve the notion of 'proof'. So while he was able to report interesting discoveries, some turned out to be wrong, others had to be proven by mathematicians who couldn't have dreamed them up, and some remain unproven today.
The ideal process, however, is simple:
But these are all exceptional cases. Usually, a property becomes known when a mathematician either observes or guesses that some kind of pattern exists, proves that it does, tells other mathematicians about it, and has his proof verified by independent mathematicians. At that point, other mathematicians can treat it as if it were 'obviously' true.
Now, what about that commutative property? First, addition:
You're right that many properties seem so basic that it's tempting to think of them as postulates. But one of the primary differences between postulates and properties is that the number of postulates in a given formal system either stays the same or decreases, while the number of properties continues to grow.

Properties are knowledge, and knowledge is power, so you want to have as many properties as you can find. But postulates are assumptions, so you want to have as few of them as you can get away with.

That's why, for centuries, mathematicians tried to 'prove' Euclid's parallel postulate using the other postulates as a starting point. And that's why mathematicians find it preferable to prove things like the commutative property of addition, even though from a certain point of view, proving something so obvious seems like a waste of time.
As we saw previously, postulates are the facts you take as your starting point; ideally, they should be minimal, so that as much as possible is proved. Theorems are properties or facts that have been proved from the postulates or other theorems.
Kiki has the common impression that a postulate is any fact that is obvious, so that it doesn’t need proof to be accepted. She sees the commutative property of addition as obvious, but not so for multiplication. But really, both can be demonstrated in almost the same way, by just looking at the same object from two different perspectives:
By the way, I don't agree with you that the commutative property of multiplication is any less 'intuitive' than the commutative property of addition. Visually, you can represent a sum of two numbers like this:

    +--+---+
    |**|***|
    +--+---+

Flip it around, and you get

    +---+--+
    |***|**|
    +---+--+

Since it's the same object, the order of the operation can't matter. Similarly, you can represent the product of two numbers like this:

    * * * * * *
    * * * * * *
    * * * * * *

Rotate it 90 degrees, and you get

    * * *
    * * *
    * * *
    * * *
    * * *
    * * *

Again, since it's the same object, the order of the operation can't matter.
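The same "one object, two views" idea can be put in set language, for those who like it: if \(A\) and \(B\) are disjoint finite sets with \(a\) and \(b\) elements respectively, then

\[a + b = |A \cup B| = |B \cup A| = b + a \quad\text{and}\quad a \times b = |A \times B| = |B \times A| = b \times a,\]

where the second uses the pairing \((x, y) \mapsto (y, x)\) to match \(A \times B\) with \(B \times A\) one for one. Of course, this restatement still leans on facts about sets that would themselves need justification.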
But a proof has to be based on previously known facts within math, so (a) the demonstrations above can't be thought of as proofs that would make the properties theorems, and (b) both might instead be thought of merely as justifications for accepting these properties as postulates.
But it's important to remember that a picture isn't a proof. A picture shows that something is true for one particular case, while a proof shows that it is true for all possible cases. When you want to convince yourself of something, a picture is often good enough. But when you want to prove it to someone else - especially someone who might be using it as a starting point for discoveries of his own - you have to meet a higher standard. Also, if you remember how Russell's paradox led to a re-examination of the foundations of mathematics, you'll see that we often learn the most by trying to really understand the simplest cases, rather than the more complicated ones.
We don’t really have any postulates yet on which to build proofs, because the choice of postulates is not a matter of believing what’s “obvious”, but of constructing a system with a minimal set of postulates. Are there any more basic ideas from which both of these properties might be derived?
In fact, this has been done. Here are a few answers (all by Doctor Rob) about one well-known set of axioms for the natural numbers, how they are used to prove theorems such as the commutative property, and how to extend that to other numbers:
Proof that 1 + 1 = 2
Proving the Properties of Natural Numbers
Real Numbers
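To give the flavor of the first of those: taking the Peano postulates with successor function \(S\), we define \(1 = S(0)\) and \(2 = S(1)\), and define addition by \(a + 0 = a\) and \(a + S(b) = S(a + b)\). The proof is then a one-line calculation:

\[1 + 1 = 1 + S(0) = S(1 + 0) = S(1) = 2.\]

Proving commutativity takes more work (induction, more than once), but it is built from the same ingredients.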
Postulates, and other kinds of fact
These explanations were the basis of further questions in 2003, similar to questions we have previously looked at for geometry:
Flavors of Facts

Is it a fact that 1 + 1 = 2? I have seen your proof using the Peano postulate. Is the postulate a hypothesis which is unproven, or is it proven, i.e., a fact?

For example, 1 + 1 = 10 in base 2. So is the value of 1 + 1 open to interpretation?

I think I find some of the terminology confusing, e.g., what do we really mean by the terms 'fact', 'premise', 'assumption', 'axiom', 'postulate', and so on?
Karen is asking about the set of postulates Doctor Rob had explained (the Peano postulates for the natural numbers), and the resulting theorem that \(1 + 1 = 2\). Does the fact that using a different base changes the result mean that the postulates aren’t universally true? And how are all these different kinds of fact related?
I answered this one:
You'll have to decide for yourself what you mean by "actual fact"; philosophers can have trouble pinning that down! But consider that every thought has to be based on something else; there has to be some starting point, since you reason ABOUT something. So in order to say something is true, you have to believe that its premises are true first. That might be an observation (but how do you know that your observations are true?); or it can be an "assumption", something we take as true as the basis of our reasoning.
This is the same thing we have said about postulates; they state the defining assumptions of a field of math. We “postulate” them (take them to be true for the sake of argument) either because they appear to be basic facts from our observation, or because we just decide to “suppose” them.
That is how we think of math: we choose some set of axioms (or postulates, which are the same thing) and definitions as our starting point, the things we are thinking about. We can choose different axioms and come up with different mathematical systems (such as Euclidean or non-Euclidean geometry). But once we choose them, we consider them to be true -- within the particular system we are working on.

An axiom or postulate can't be proven, since there is nothing before it on which to build a proof; it stands at the base of the mathematical system that is built on it. So it is an assumption. But that doesn't make it untrue; it is the truest thing there is _within that system_, the basis of the whole construction. Outside of that system, there is really nothing to tell you whether it is true or false.

But we are not "assuming" something about some entity outside of the system, that might really be true or false; rather, we are "assuming" something only in the sense of deciding what it is that our system is about.
As we’ve seen before, when we apply our theory to something in the real world, we need some basis for thinking that the postulates apply to that something; but within the math itself, we don’t concern ourselves with that.
But your example of 1+1=10 is not really an illustration of any problem with axioms and assumptions. It is nothing more than notation: the numeral 10 in binary is just a different way to WRITE the number 2. The fact you are stating MEANS exactly the same thing as 1+1=2.

On the other hand, there is a system (modulo 2 arithmetic) in which 1+1=0. That is no less true than 1+1=2, within its system; but the meanings of 1, +, and = are different than in normal arithmetic. We are talking about different things, based on different definitions. One is a fact about integers, the other is a fact about modulo-2 numbers. One doesn't contradict the other; they just live in different worlds of thought, which are built on different definitions and axioms.
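The first case is just positional notation at work:

\[10_{\text{two}} = 1\cdot 2^1 + 0\cdot 2^0 = 2,\]

so "\(1+1=10\) in base 2" and "\(1+1=2\)" are one fact in two costumes. The second case is genuinely a different system: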
Modulo 2 arithmetic works with a different kind of “number”; it has different definitions and assumptions than integer arithmetic. There are only two numbers: 0 represents any even number, and 1 represents any odd number. In effect, \(1 + 1 = 0\) means “odd + odd = even”. The addition is different from what we are used to, because we are adding different things than we are used to.
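If you want to experiment, a few lines of code make the contrast concrete. This is just an illustrative sketch (in Python, one language among many that could show it):

```python
# Compare ordinary integer addition with addition modulo 2.
# (a + b) % 2 is the remainder on division by 2, i.e. mod-2 addition,
# so 1 + 1 "wraps around" to 0 instead of reaching 2.
for a in (0, 1):
    for b in (0, 1):
        print(f"integers: {a} + {b} = {a + b}    mod 2: {a} + {b} = {(a + b) % 2}")
```

The last line printed shows the two facts side by side: \(1 + 1 = 2\) among the integers and \(1 + 1 = 0\) in the mod-2 world, each true in its own system.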
So to answer your basic question, yes, 1+1=2 is a fact--given that 1 and 2 refer to the integers 1 and 2, and that + and = have their normal meanings. All the axioms and definitions on which the real number system is based are assumed when I say that! If you don't make some such assumptions, then "1+1=2" has no meaning; it is just a string of symbols on your computer screen, and can't be said to be either true or false.
Karen wrote back and asked about my use of the words “premise” and “assumption”; I just looked them up in the dictionary, and found that they mean what I meant: a premise is the basis of a logical argument, and an assumption is a fact that is not proved. As I concluded,
They're two halves of the same fact, that something is assumed without proof, so that something else can be proved.
When is an axiom not an axiom?
Karen presumably asked her question from the perspective of a student who has only been taught about these facts as “properties”, and is wondering how they fit into the larger mathematical world of axioms and theorems that she has just discovered. The following question, from 2014, takes us to the higher perspective of someone who is learning about “abstract algebra”, which studies things similar to numbers that follow some subset of the rules of real numbers, called “groups”, “rings”, and “fields”. (This includes modular arithmetic, and many other ideas.)
Properties? Axioms? What to Call Characteristics of Field, and When

Are the characteristics of fields and rings best referred to as axioms or as properties?

This is an odd question, I suppose; but I have looked at several sources -- and have found different answers. It seems like many high school and college math texts begin with the number system, and present these "characteristics" to students right away. A college algebra textbook I have right now, for example, calls these properties. That is the term I use for associativity, commutativity, identity, inverse, and distributivity. On the other hand, the Wolfram math website uses the term "field axioms."

So, what exactly are they? I never paid much attention to this stuff in the past, but after reading Berlinski's book on "absolutely" elementary mathematics, I found it fascinating. I have no illusions about mastering group/field theory, but this little problem of terminology is like having a stone in my shoe. My guess is that these are properties because they can be proved with mathematical induction. Or not?
I responded:
I think what you are finding is that the same topic can be approached from several different directions: it is not that these are NOT axioms, but rather that not everyone needs to talk about axioms. Properties only need to be called axioms when you are taking an axiomatic approach to the subject.

And axioms are a starting point in developing a mathematical concept abstractly; they tell us what we are working with. In this case, a field is defined as anything that satisfies a certain set of properties, which we call axioms because they are the basis of proofs, and are not proved WITHIN the development of the concept, but are used to prove theorems about these entities. We can abstractly prove theorems that apply to ALL fields by relying only on the axioms that define a field in our proofs.
So, in field theory, we start with a set of axioms that define what a field is; any set of entities that satisfy those axioms is a field, and all theorems about fields apply to it. In proving these theorems, we take the axioms as given.
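A tiny example of such a theorem: the additive identity of a field is unique. If \(0\) and \(0'\) both act as additive identities, then

\[0 = 0 + 0' = 0',\]

using first that \(0'\) is an identity and then that \(0\) is. That one short argument settles the matter in every field at once, which is the payoff of the axiomatic approach.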
One example of a field is the set of real numbers.
But when we APPLY the concept of a field more concretely, we show that some particular entity IS a field by proving that it satisfies all the axioms. For example, we show that the real numbers form a field by proving that the axioms are true OF the set of real numbers; for this, we might start with some more fundamental axioms that define natural numbers, and work up to rational and then real numbers by introducing additional definitions and axioms. In this process, we do not think of the properties we are proving as axioms, but just say these are provable properties of the particular field we are talking about.

Furthermore, in relatively elementary presentations of algebra -- not abstract algebra, which deals with general entities such as fields, but just working with real numbers and variables -- we don't need to even mention the word "field," or talk about axioms. That would only scare off most students who are not ready for high levels of abstraction. We would just say these are properties of the real numbers, and not bother to prove them. "College algebra" is still elementary in this sense, so they would not need to use the word axiom (unless they choose to mention in a sidebar that these properties apply to other kinds of systems that students will meet later).
So within a course in (real number) algebra, we just talk about properties; in a course of “analysis” we prove these properties; and in a course on abstract algebra, we take the properties as axioms, which can’t be proved (and perhaps use the real numbers as examples).
You might diagram these ideas in this way:

     ________________________
    (                        )
    (         field          )
    (________________________)
       ^        ^        ^
       |        |        |
     axiom    axiom    axiom
       |        |        |
     __|________|________|___
    |  |        |        |   |
    |  ppty    ppty    ppty  |
    |                        |
    |      real numbers      |
    |________________________|
       ^        ^        ^
       |        |        |
     axiom    axiom    axiom

What we think of as an axiom when we are looking from above, using it as a foundation for an abstract concept, is a property when we look at it from within the study of a particular example.
The idea here is that the concept of field is represented by a cloud, up in the air, abstract, whose properties are defined by the axioms that undergird it; the real numbers themselves have properties that can be proved by their own axioms (as I described above), but which in turn make them a field, so that all theorems proved about fields apply to them.
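Finally, to see the "apply the concept" direction in action on an example small enough to check exhaustively: modular arithmetic was mentioned above as one of the systems abstract algebra covers, and because the integers mod 5 form a finite set, we can verify every field axiom by sheer brute force. Here is a sketch of that check in Python, purely for illustration; the helper names and the choice of modulus are mine:

```python
# Exhaustively verify the field axioms for arithmetic modulo 5.
# A finite system lets us test literally every case -- the
# "prove that this particular entity IS a field" step, by brute force.

n = 5                              # a prime modulus, so inverses will exist
elements = range(n)

def add(a, b): return (a + b) % n  # addition mod n
def mul(a, b): return (a * b) % n  # multiplication mod n

# Commutativity of + and *
assert all(add(a, b) == add(b, a) for a in elements for b in elements)
assert all(mul(a, b) == mul(b, a) for a in elements for b in elements)

# Associativity of + and *
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in elements for b in elements for c in elements)
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in elements for b in elements for c in elements)

# Distributivity of * over +
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in elements for b in elements for c in elements)

# Identities: 0 for addition, 1 for multiplication
assert all(add(a, 0) == a and mul(a, 1) == a for a in elements)

# Inverses: every element has an additive inverse, and
# every NONZERO element has a multiplicative inverse
assert all(any(add(a, b) == 0 for b in elements) for a in elements)
assert all(any(mul(a, b) == 1 for b in elements) for a in elements if a != 0)

print("The integers mod 5 satisfy all the field axioms checked above.")
```

Change n to 6 and the very last assertion fails (2 has no multiplicative inverse mod 6): the integers mod 6 form a ring but not a field. That is exactly the situation the diagram describes: once a particular system passes all the checks, every theorem proved from the field axioms applies to it for free.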