When we write "one and a half" in decimal notation as 1.5, do we really mean 1.5000... (with an infinity of zeros)? If so, how do we know there's no "3" lurking at the 10 millionth decimal place? Is this a problem of converting an analogue world into digital mathematics?

Yes, "one and a half" means 1.5000..., with infinitely many zeros. How do we know there's no "3" lurking at the 10 millionth decimal place? Well, it depends on how the number is being specified. If you are the one specifying the number, and you say that the number you have in mind is one and a half (exactly), or you say that it is 1.5000... and you add (as you did) that the decimal expansion you have in mind is the one in which the zeros go on forever, then you've already answered your own question--you have specified that there's no 3 lurking at the 10 millionth decimal place. If, on the other hand, the number was the result of a physical measurement, then there will surely be some amount of error in the measurement. So if the readout on the measuring instrument says "1.5000", then most likely the quantity being measured is not exactly one and a half, and there is some nonzero digit lurking somewhere--perhaps in the very next digit. If someone else tells you something about a number, and he...

This one is mathematical, but seems to address philosophical issues regarding definition and the nature of mathematical truth. So: If, for any x, x^0 = 1, and, for any y, 0^y = 0, then what is the value of 0^0?

There are no right or wrong answers when it comes to making definitions in mathematics. We can define things however we please. However, mathematicians generally try to make definitions that will be useful, and one way to do this is to preserve general rules. Now, you have identified a case in which this strategy leads to conflicting answers. One general rule, x^0 = 1, suggests that we should define 0^0 to be 1. Another rule, 0^y = 0, suggests that we should define 0^0 to be 0. There is no way to define 0^0 and preserve both rules, so we have to make a choice. I don't think there is universal agreement among mathematicians about how to define 0^0--sometimes it is regarded as being undefined. However, in many contexts mathematicians take 0^0 to be 1. In choosing which definition of 0^0 is most useful or natural, it may help to think about why the rules x^0 = 1 and 0^y = 0 make sense: x^0 can be thought of as representing an empty product, but what does that...
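As a small aside, many programming languages adopt the empty-product convention. A minimal Python sketch (Python's behavior here is one common choice; other systems differ):

```python
import math

# Python's integer exponentiation follows the empty-product convention:
print(0**0)                # 1
print(math.prod([]))       # 1 -- a product of no factors at all
print(math.pow(0.0, 0.0))  # 1.0 -- the C library's pow() agrees
```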

What do you believe to be the square root of -1? Is it a flaw in our complex math system? Is it really a non-existent and incomprehensible thing? Or are our brains simply too basic to understand this notion?

Mathematicians have defined many different number systems--integers, rational numbers, real numbers, complex numbers, quaternions, etc. The answer to your question will be different depending on which number system you use. In the real numbers, there is no number whose square is -1. In the complex numbers, there are two numbers whose squares are -1; in the usual notation for complex numbers, they are called i and -i. Just as the real numbers are often represented as the points on a line, the complex numbers are represented as the points on a plane. In this picture, the real numbers lie along a horizontal axis, and the numbers i and -i are 1 unit above and below 0. But this answer may not satisfy you. Perhaps you want to know what the square root of -1 is really, independent of any choice of number system. I am inclined to say that that question is meaningless. To make sense of your question, you have to have a number system containing the number -1, in which there is a...
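To make the complex-number answer concrete, here is a minimal sketch using Python's built-in complex type, in which 1j plays the role of i:

```python
import cmath

# Python's built-in complex numbers: 1j plays the role of i.
print((1j) * (1j))     # (-1+0j) -- i squared is -1
print((-1j) * (-1j))   # (-1+0j) -- and so is (-i) squared
print(cmath.sqrt(-1))  # 1j -- the conventional "principal" root
```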

5 divided by 0? Personally, I believe that it is infinite, based on the idea that division is just repeated subtraction, just as multiplication is repeated addition. For example, 4/2 is pretty much asking how many times you can subtract 2 from 4 before you get to 0.

I would give a slightly different moral to Peter's story. Mathematicians could have defined 5 divided by 0 to be infinity--one of the wonderful things about mathematics is that we can define things however we want. However, what Peter's proof shows is that if you define division by 0, then some of the familiar algebraic laws aren't going to work anymore. (It is an interesting exercise to identify the algebraic law used in the proof that would stop working if we defined division by 0.) So it would actually be quite inconvenient to change the usual definition of division, according to which division by 0 is undefined.
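For concreteness, here is a minimal Python sketch of the questioner's repeated-subtraction picture of division. It also shows where the picture breaks down at 0: subtracting 0 makes no progress, so the loop would run forever.

```python
# A sketch of division as repeated subtraction. For divide(5, 0) the
# loop would never finish: subtracting 0 makes no progress, which is
# one way to see why 5/0 is left undefined rather than given a value.
def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("repeated subtraction never terminates")
    count = 0
    while a >= b:
        a -= b
        count += 1
    return count  # the quotient (any remainder is discarded)

print(divide(4, 2))  # 2 -- subtract 2 from 4 twice to reach 0
```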

Does the square root of 2 exist?

If I interpret your question as a straightforward mathematical question--Is there a number whose square is 2?--then the answer is of course yes. The numerical value of the number is approximately 1.41421. But perhaps that isn't what you meant. Perhaps your question is about the sense in which mathematical objects like numbers exist. Do they really exist (whatever that means), or are they just, in some sense, figments of our imaginations? You might want to look at the answers to question 139, and also the links to the Stanford Encyclopedia in those answers, for a discussion of two different ways that people have interpreted mathematical existence, namely platonism and intuitionism. It is perhaps worth mentioning that both a platonist and an intuitionist would agree that the square root of 2 exists, but they would mean different things by that. The platonist would mean that the square root of 2 is one of the objects in a world of mathematical objects that exists independent of us and our...
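As a concrete aside, the approximation 1.41421 can be obtained by any standard root-finding scheme. A minimal Python sketch using Newton's method (one common choice, not necessarily how the value above was computed):

```python
# A minimal sketch: Newton's method for a root of x**2 - 2 = 0.
# Each step replaces x with the average of x and 2/x.
x = 1.0
for _ in range(6):
    x = (x + 2 / x) / 2
print(x)      # 1.4142135623730951
print(x * x)  # 2.0000000000000004 -- as close to 2 as floats allow
```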

In math class at my school they drill into our heads that a real line goes on forever, while a line segment is a line with two ends. So my question is: if a line can go on forever, wouldn't it take up every possible space? Can such a line even exist in the universe? If it can't, then why do mathematicians use that term?

Yes, lines in mathematics go on forever, and such things most likely don't exist in our universe. So why do mathematicians study them? There are most likely only a finite number of elementary particles in the universe. Should mathematicians say that the positive integers end at some finite number? The study of the positive integers would actually be a lot more difficult (and a lot less attractive) if we placed some bound on the numbers to be studied, based on the number of particles in the universe. It is easier (and more interesting) to study the infinite collection of all positive integers, even if most of those numbers will never be used for counting objects in the physical universe. In general, mathematicians don't think of themselves as studying things that exist in the physical universe. Rather, they study abstractions, like infinite lines or the positive integers. These abstractions may be motivated by things in the physical universe, such as the lines we draw on paper or the process of...

Suppose we decide to let 'Steve' name the successor of the largest number anyone has ever thought about before next Tuesday. Can I now think about Steve? For example can I think (or even know) that Steve is greater than 2? If not, why not? If so, wouldn't that mean that some numbers are greater than themselves?

It is tempting to think that the phrase "the successor of the largest number anyone has ever thought about before next Tuesday" unambiguously defines a number. After all, it seems that we could compute the value of "Steve" as follows: Wait until next Tuesday, and then make a list of all the numbers anyone has ever thought about (a finite list, given the finite history of humans thinking about numbers), find the largest number on the list, and add one. But what counts as "thinking about a number"? From your question, it appears that you want to count thinking about a number by means of a description of that number, including descriptions like ... the definition of Steve! So the computation of the value of Steve isn't as simple as it sounds. What will happen next Tuesday when we sit down to compute the value of Steve? Well, you and I have been thinking about Steve, so when we go to make our list of all the numbers anyone has ever thought about, Steve will be on the list. This means that as part of...
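The circularity can be caricatured in code. In this toy Python sketch the other numbers on the list are invented for illustration; the point is only that evaluating Steve requires a list that already contains Steve, so the computation never bottoms out:

```python
# A toy sketch of the circularity (the numbers 2, 7, and 100 are made
# up for illustration). To evaluate Steve we need the list of numbers
# thought about -- but Steve is on that list, so the computation
# never bottoms out.
def steve():
    numbers_thought_about = [2, 7, 100, steve()]
    return max(numbers_thought_about) + 1

steve()  # RecursionError: maximum recursion depth exceeded
```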

If we changed the way we count, could 2+2 fail to equal 4? If, for example, we started counting with zero, we might count X X X as 0, 1, 2 X's. Then 2 + 2 would equal X X X X X X, and when we counted the X's, we would count, "zero, one, two, three, four, five." So 2+2=5. So, does this example show that 2 + 2 doesn't necessarily equal 4? Or, would we have to say that we were speaking another language when we truthfully say that 2+2=5?

I'd say we're speaking another language. In this language the numeral "2" means 3 and the numeral "5" means 6, so "2+2=5" means 3+3=6, which is true. But 2+2 is still equal to 4. This isn't really a fact about mathematics; it's a fact about language. If we changed the names we use to refer to people so that "John Kerry" referred to George W. Bush, then in that new language the sentence "John Kerry is president of the U.S." would be true. But it wouldn't change the facts about who the president is. George W. Bush would still be president, we'd just be using a different name to refer to him. If you want to overturn the results of the last election, this isn't the way to do it.
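To see the point in miniature, here is a toy Python sketch; the dictionary is hypothetical, encoding the count-from-zero language in which each numeral names the number one greater than usual:

```python
# A toy sketch of the relabeled language (the dictionary is
# hypothetical): counting from zero makes the numeral "2" name the
# number three and "5" name the number six.
denotes = {"0": 1, "1": 2, "2": 3, "3": 4, "4": 5, "5": 6}

total = denotes["2"] + denotes["2"]  # 3 + 3 = 6
print(total)                         # 6
print([name for name, n in denotes.items() if n == total])  # ['5']
# "2+2=5" is true in the new language, but the underlying fact is 3+3=6.
```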

I'm sure the mathematical anomaly that .999 repeating equals 1 has been brought up, but I was wondering what you think of it. Why is this possible? Let x = .999 (repeating); then 10x = 9.999 (repeating). Subtract one x from the 10x: 10x - x = 9.999... - 0.999..., and you get 9x = 9. Divide both sides by 9: x = 1. I was wondering if you could explain why this happens. Does it show a flaw in our math system? Or is it just a strange occurrence that should be overlooked? Or is it true?

Yes, it is true that .9999... = 1, and there's nothing paradoxical about it. But to see why that is, you need to think about the meaning of decimal notation. Consider a decimal number of the form: 0.d1 d2 d3 d4 ... where each of d1, d2, d3, ... is one of the digits from 0 to 9. Of course, d1 is in the tenths place, d2 is in the hundredths place, and so on. What this means is that the number represented by this decimal notation is: d1/10 + d2/100 + d3/1000 + ... But if the list of digits goes on forever, then this summation goes on forever, so now we have to ask what an infinite summation means. You can never finish adding up infinitely many numbers, so we can't just say that this is what you get when you finish adding up all of the infinitely many numbers. Here's how mathematicians define this infinite sum: Start with d1/10, then add on d2/100, then add on d3/1000, and so on. The process never ends, so you will never actually get to the answer. However, as you add more and more...
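Here is a minimal Python sketch of that limit definition applied to 0.999..., using exact fractions so no rounding intrudes; after k terms the partial sum falls short of 1 by exactly 1/10^k:

```python
from fractions import Fraction

# A sketch of the limit definition above, applied to 0.999...:
# after k terms the partial sum falls short of 1 by exactly 1/10**k.
partial = Fraction(0)
for k in range(1, 8):
    partial += Fraction(9, 10**k)
    print(k, partial, 1 - partial)
# The gap 1/10**k shrinks toward 0, so the infinite sum is defined
# to be 1 -- which is why 0.999... and 1 name the same number.
```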
