What does it mean in mathematics for two things to be equal, or for two things to have the same "identity"? For example, because anything divided by zero is "undefined", can we say that 1/0 = 2/0? What about the relational database concept of "null" which is supposed to stand for "unknown"? In relational algebra, they say NULL is not equal to NULL, but doesn't that violate the law of identity that everything is equal to itself?

I think it is important to distinguish here between the meanings of expressions and the things that those expressions denote. Peter is right that the expressions "2+2" and "4" are different expressions, and they are not synonymous. But they both denote the same thing, namely the number 4. Now, in the equation "2+2=4", is the equal sign being used to express a relationship between meanings of expressions, or does it express a relationship between what is denoted by those expressions? (In other words, is this an intensional context or an extensional context?) I would say it expresses a relationship between what is denoted by the expressions, and the relationship is identity: what is denoted by the two sides of the equation is one and the same thing, namely the number 4. So I disagree with Peter's conclusion about what "=" means in mathematics. I would say "=" in mathematics means "is identical with". I would say that the situation here is very much like the situation in the sentence "The morning...

Is an event which has zero probability of occurring but which is nonetheless conceivably possible rightly termed "impossible"? For instance, is it "impossible" that I could be the EXACT same height as another person? I take it that the chance of this is zero in that there are infinitely many heights I could be (6 ft, 6.01 ft, 6.001 ft, 6.0001 ft, etc.) but only one which could match that of a given other person exactly; at the same time, I have no problem at all imagining a world in which I really am exactly as tall as this other person.

I agree that there's nothing paradoxical here; surprising, perhaps, but not paradoxical. The only kind of additivity that is usually assumed in probability theory is countable additivity, and there's no violation of that here. But you do have uncountably many non-overlapping outcomes, each with probability zero, such that the probability of at least one of those outcomes happening is one. So uncountable additivity doesn't work. I would agree that an outcome with probability zero need not be impossible. Consider, for example, flipping a coin infinitely many times. Each infinite sequence of heads and tails has probability zero of occurring, but one of them has to occur, so it wouldn't make sense to say that they're all impossible. (Notice that there are uncountably many possible sequences of heads and tails.) But of course this is not a realistic experiment--no one can actually flip a coin infinitely many times. The original example proposed also seems unrealistic to me--according to...
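To see numerically why any single sequence gets probability zero, note that the probability of matching a fixed target sequence on the first n flips is (1/2)^n, which shrinks toward zero as n grows. A minimal sketch in Python (the choice of n values is arbitrary, just for illustration):

# Probability that n fair coin flips match one fixed target sequence: (1/2)^n.
# This tends to 0 as n grows, which is why each fixed infinite sequence has
# probability 0 -- even though some infinite sequence must occur.
for n in [1, 10, 50, 100]:
    print(n, 0.5 ** n)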

In a critical thinking textbook I’m trying to study from, there is an exercise which gives groups of three different independent reasons from which I must select the one which supports a stated conclusion. For example: Conclusion: Blood donors should be paid for giving blood. (a) The blood donor service is expensive to administer. (b) People who give blood usually do so because they want to help others. (c) There is a shortage of blood donors, and payment would encourage more people to become donors. (Anne Thomson, Critical Reasoning: A Practical Introduction.) For each question I must pick the answer which could be a reason for a conclusion, say why it is the right answer, and why the other options are wrong. I’ve had absolutely no problems selecting the correct answer, but I can’t seem to say why. It would seem that I could easily say THAT a particular reason gives or doesn’t give support to a conclusion, but I can’t seem to put into words HOW or WHY. So my question is, why and how do...

One way of further spelling out Alex's standard for deductive inference ("if the reasons are true then the conclusion must be as well") is to use the idea of "possible worlds"--different ways that the world might be. To say that if the reasons are true then the conclusion must also be true means that in all possible worlds in which the reasons are true, the conclusion is true. The idea of possible worlds can be helpful in thinking about many kinds of reasoning. For example, consider the problem you presented about blood donors. One way to think about the problem is to consider different possible worlds. What would the world be like if blood donors were paid? What would the world be like if they weren't? Choice (c) says that in a world in which blood donors are paid there would be less of a shortage of blood donors than in a world in which they aren't, so it supports the conclusion. The other two choices don't give any sense in which a world in which blood donors are paid would be better than one...
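To make the "all possible worlds" test concrete, here is a minimal sketch in Python that treats each possible world as an assignment of True/False to two sentence letters and checks whether the conclusion holds in every world where the premises hold. The particular argument tested (from "P" and "if P then Q", infer "Q") is just an illustration, not part of the original example:

from itertools import product

# A possible world, in miniature: an assignment of True/False to P and Q.
# An argument is deductively valid if the conclusion is true in every
# world where all of the premises are true.
def valid(premises, conclusion):
    for P, Q in product([True, False], repeat=2):
        world = {"P": P, "Q": Q}
        if all(premise(world) for premise in premises) and not conclusion(world):
            return False  # a world with true premises and a false conclusion
    return True

# Example: from "P" and "if P then Q", infer "Q" (modus ponens).
premises = [lambda w: w["P"], lambda w: (not w["P"]) or w["Q"]]
print(valid(premises, lambda w: w["Q"]))  # True: the inference is valid

Inductive support, like choice (c) in the blood donor example, is exactly what this test does not capture: the premises make the conclusion more likely without guaranteeing it in every world.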

My girlfriend and I had a discussion about probability as it relates to a weekly lottery draw. She argued that the probability of winning remains the same from draw to draw, and because of this anyone who plays the lottery more than once stands no greater chance of winning than someone who only plays it on one occasion. Against this, I argued that because any lottery operates with a finite series of numbers, given enough draws all possible combinations will eventually have appeared at least once, and as such someone who plays more than once stands a greater chance of winning. I also claimed that the probability relating to each draw is different from that which relates to a succession of draws (again because of the finite series of numbers). Which of us is right?

Marc is right that if you play the lottery more than once, your chance of winning at least once is higher than if you only play once. However, there is another possible interpretation of your question. Suppose you have played the lottery many times and lost every time. Is your chance of winning the next time higher than if you hadn't played before? Some people think that your chance of winning is now higher because the lottery "owes" you a win. But this is wrong; your chance of winning the next time is exactly the same as if you hadn't played before. This does not contradict Marc's answer, because probabilities depend on the precise situation, and when the situation changes, the probabilities can change. Let's consider again Marc's case of rolling a die twice. Before you start rolling the die, the probability of getting at least one 6 is, as Marc says, 11/36. But now suppose you do your first roll of the die, and you don't get a 6. Now the only way of getting a 6 on one of the two...
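Both numbers can be checked by brute force over the 36 equally likely pairs of rolls. A minimal sketch in Python: before rolling, 11 of the 36 pairs contain a 6; after a non-6 first roll, only 5 of the remaining 30 pairs do, which is 1/6.

from fractions import Fraction
from itertools import product

pairs = list(product(range(1, 7), repeat=2))  # all 36 equally likely outcomes

# Before any rolls: the probability of at least one 6 in two rolls.
at_least_one = sum(1 for a, b in pairs if 6 in (a, b))
print(Fraction(at_least_one, len(pairs)))   # 11/36

# After the first roll comes up non-6: condition on that event.
given = [(a, b) for a, b in pairs if a != 6]
hits = sum(1 for a, b in given if b == 6)
print(Fraction(hits, len(given)))           # 1/6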

I have been studying axiomatic set theory as a foundation of mathematics and am stuck on the definition of a relation as a subset of a Cartesian product. I have two problems. The first is that a large number of relations seem to be presupposed prior to this definition: the truth-functional relations of logic, for example, or the relations of set-membership and subset. Doesn't this make the definition circular? Second, in specifying which subset of the Cartesian product is intended, a polyadic predicate is usually invoked; but isn't a polyadic predicate a relation, thus giving a second circularity? Furthermore, these are vicious circles, not harmless ones.

All theorems about relations in axiomatic set theory are proven just from the axioms, using the rules of first-order logic. Thus, no facts about relations are presupposed in these proofs--at least, not if by "presupposed" you mean "used to justify a step in a proof." But perhaps this is not the sense of "presupposed" that you have in mind. Perhaps what you are thinking is that in order to really understand what's going on in the development of the theory of relations in axiomatic set theory, you have to have an intuitive understanding of what a relation is. Or perhaps you mean that no one would believe that the rules of first-order logic represent correct reasoning, or that the axioms of set theory are true statements about sets, if they weren't familiar with certain relations, such as the truth-functional relations of logic or the set-membership relation. You may be right about this. Is this a problem for the idea that axiomatic set theory is a foundation for mathematics? Not necessarily. ...
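For readers who want the definition from the question in front of them, here is a minimal sketch in Python: the "less than" relation on the set {0, 1, 2} is literally a subset of the Cartesian product of that set with itself. The choice of set and relation is arbitrary, just for illustration:

from itertools import product

A = {0, 1, 2}
cartesian = set(product(A, A))                       # A x A: all ordered pairs
less_than = {(x, y) for x, y in cartesian if x < y}

print(less_than)               # {(0, 1), (0, 2), (1, 2)}, in some order
print(less_than <= cartesian)  # True: the relation is a subset of A x A

Note that the comprehension uses "<" to pick out which pairs belong, which is just the circularity the question points at; in axiomatic set theory, the subset would instead be specified by a formula of first-order logic, as described above.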

Hello philosophers. I was just wondering about Gödel's Incompleteness Theorem. What exactly is it and does it limit what we are capable of knowing? I have no training in mathematics or formal logic so if you could reply in lay terms, I would appreciate that. Thanks, Tim.

Gödel's Incompleteness Theorem is a theorem about formal axiomatic theories: theories in which there is a collection of axioms, and all of the theorems are deduced from those axioms by applying rules of logic. It applies to a wide range of theories, but to start off it might be helpful to focus on one such theory, so let's consider Peano Arithmetic, often abbreviated PA. This is an axiomatic theory of the properties of addition and multiplication of the natural numbers 0, 1, 2, ... (Peano Arithmetic is named after Giuseppe Peano.) Gödel's Incompleteness Theorem says that if PA is consistent--that is, if the axioms don't contradict each other--then there are statements about the arithmetic of the natural numbers that are neither provable nor disprovable from the PA axioms. Thus, the axioms are not powerful enough to settle every question of number theory. Now, you might think that all this shows is that Peano must have forgotten an axiom...

To Whom it May Concern: Mathematical results are assumed to be precise. But how can mathematics be precise if results are rounded up or down? Don't such small incremental "roundings" add up to imprecision? So, in general, don't "roundings", in some way, betray the advertised precision of mathematics? Sincerely, Alexander

You're right, mathematical results will not be precise if they are rounded off -- which is why mathematicians usually don't round off their results. I think such rounding is much more common among people who are applying mathematics to real world problems than among mathematicians doing theoretical work. For example, consider the question: What is the height of an equilateral triangle whose sides have length 1? A mathematician would most likely say that the answer is sqrt(3)/2, which is exactly correct. But someone who needed the answer to this question in order to apply it to a real world situation might prefer to have a decimal value, and so they might round it off to 0.87.
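A minimal sketch in Python of the gap between the exact answer and its rounded decimal (bearing in mind that even the floating-point value shown is itself only an approximation of the irrational number sqrt(3)/2):

import math

exact = math.sqrt(3) / 2   # height of an equilateral triangle with side 1
rounded = round(exact, 2)  # the kind of value one might use in practice

print(exact)            # 0.8660254037844386
print(rounded)          # 0.87
print(rounded - exact)  # the rounding error, roughly 0.004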

How can we say that a variable such as x exists as a number, or at all, in an equation when by using a variable we claim to know nothing of what "goes in there" to complete the equation?

I would add that it is important to distinguish between a variable and what the variable stands for. You ask how we can say that "a variable such as x exists as a number." I would say the variable is a letter, not a number, but it stands for a number. As Richard has explained, what number it stands for, or whether there is a single number that it stands for, may depend on the context.

Is infinity a number or not and why?

I think it's worth pointing out that there are many different number systems, which mathematicians use for different purposes, so your question is really ambiguous. If you are interested in determining how many things of some particular kind there are, then the appropriate numbers to use are the cardinal numbers, and as Richard has explained, there are indeed infinite cardinal numbers. On the other hand, if you're a student in a calculus class, then the numbers you are using are probably the real numbers, and all of the real numbers are finite. Although the symbol for infinity is used in calculus, it is not used as a name for a real number.

Do computers defy the law of conservation of mass? Because if a computer copies a program, there is twice the amount of space taken up. But how can you just duplicate an amount of space (MB, KB, GB, etc.) if you add nothing to it?

The only mass involved in computer memory is the mass of the electronic components that make up the memory, and this mass is unchanged when information is stored in memory. If your computer has, say, 256 MB of memory, then it has memory chips inside it that are capable of holding 256 MB of information. When information is stored in memory, the physical state of those memory chips is changed, but no new mass is added. Perhaps an analogy will help. Imagine a row of ten coins sitting on a table. By letting tails represent 0 and heads represent 1, you could think of this row of coins as a primitive kind of memory, capable of storing a sequence of ten 0's and 1's. You would store such a sequence of 0's and 1's by flipping over some of the coins in order to get the appropriate sequence of heads and tails. Flipping the coins changes their physical arrangement, but not their mass. So mass is conserved during this operation.
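Here is a minimal sketch in Python of the coin analogy: storing a new pattern changes which coins show heads, but the number of coins--the analogue of mass--never changes. The particular pattern stored is arbitrary:

# Ten coins on a table: False = tails (0), True = heads (1).
coins = [False] * 10

def store(pattern):
    # Flip coins so the row shows the given sequence of 0's and 1's.
    for i, bit in enumerate(pattern):
        coins[i] = bool(bit)

store([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
print(coins)       # the new arrangement of heads and tails
print(len(coins))  # still 10 coins: "mass" is conserved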
