Even if determinism has been somewhat refuted by quantum uncertainty (a fact that is peddled by the layman, and never acknowledged by the leading scientists, Einstein, Bohr, etc.), isn't it still the case that all events on a slightly larger scale are still determined? After all, a gust of wind isn't random (in the sense of transcending causation). Determinism is, in part, a prerequisite of sanity, as none of us expects the Earth to stop turning or our cars to stop working for no mechanically justified reason. As a note of interest, a computer cannot be programmed to do something truly random.
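A minimal Python sketch of the questioner's last remark: a computer's "random" numbers come from a deterministic algorithm, so the same seed always yields the same sequence.

```python
import random

# Two generators given the same seed produce the same "random" sequence:
a = random.Random(42)
b = random.Random(42)
seq_a = [a.randint(0, 9) for _ in range(5)]
seq_b = [b.randint(0, 9) for _ in range(5)]
print(seq_a == seq_b)  # True
```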

I'd just like to add one comment to Peter's response. His statement that "most large scale systems behave as if they were governed by deterministic laws" is not only compatible with quantum mechanics, it is actually predicted by quantum mechanics. In most cases, quantum mechanics predicts that a large scale system will, with probability only slightly less than 1, behave according to the laws of classical mechanics.

What is the exact purpose of math? Why is it that math was created? I know that some math has a purpose like finding out how much you may owe someone, but how about Linear Equations or Polynomials? What is the purpose of all this?

Math has many applications. To take one of your examples, a physicist might use a linear equation to describe the position of an object moving at a constant velocity. Some areas of mathematics have applications to other areas of mathematics (which may then have applications outside of mathematics). Polynomials may be a good example of this. Polynomials are sometimes used to approximate other, more complicated functions, and such approximations may be useful for solving a variety of mathematical problems (which may then have applications outside of mathematics). But you also ask why math was created. Often when mathematicians create mathematics, they are not thinking about applications, they are just trying to solve a problem because they find it intriguing. You might say that in many cases mathematicians solve mathematical problems for the same reason that mountain climbers climb mountains: because they are there. The applications often come later. A good example of this might be prime...
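To illustrate the remark about polynomials approximating more complicated functions, here is a small Python sketch using one standard example of such an approximation, the Taylor polynomial of sin about 0 (the function name is mine, chosen for the example):

```python
import math

def sin_taylor(x, terms=5):
    """Partial sum of the Taylor series sin(x) = x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# Near 0, the polynomial is an excellent stand-in for sin itself:
print(abs(sin_taylor(0.5) - math.sin(0.5)) < 1e-9)  # True
```

Evaluating a polynomial requires only addition and multiplication, which is one reason such approximations are so useful in practice.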

It was said [in a Google Groups post] that "If a [mathematical] proof requires the checking of a very large but finite number of cases, far too many for a human to check, and we use a computer to perform that check, should we count the proposition as proved?" is an open question of mathematical philosophy. Why would anyone think the answer is anything but "yes"? The proof may not have the desired aesthetic qualities, but no mathematician would deny its validity, even though she may try to create a more pleasing proof.

It depends on what you think a proof is supposed to be. If you think a proof is just a certain kind of abstract object--say, a finite arrangement of symbols that satisfies the rules of logic--then it seems that a computer-checked proof is a proof just as much as a human-checked proof is. But another way of looking at it is to think about the role proofs play. When you read and understand a proof, you become convinced that a certain mathematical statement is true, and the proof is what convinces you. Now, suppose you are convinced by a computer-checked proof--say, the proof of the 4-color theorem. What is it that convinced you that the 4-color theorem is true? Not the proof that the computer checked--you never saw that proof, and it's far too long for you to read yourself. No, what convinced you of the truth of the 4-color theorem is a certain physical experiment: connecting a box full of wires to a source of electricity and seeing what happens. It seems that your belief in the 4-color theorem...

The numbers e, i and pi are related. Is this natural or a consequence of the way we do our mathematics? Iain Nicholson

I assume the relationship you have in mind is e^(iπ) = -1. I wouldn't go quite as far as Richard in saying that this is "independent of how we choose to do mathematics," because it does depend on how we choose to define exponentiation for complex numbers. Mathematicians were faced with a choice when they were deciding how to define exponentiation with complex numbers, and there was no one definition that was the unique "right" definition. However, the definition that mathematicians chose is a very natural one, and once you choose that definition the relation e^(iπ) = -1 follows. Perhaps it would be of interest for me to give an explanation, for those who are familiar with power series, of why the definition of exponentiation with complex numbers is natural. If you studied power series in a calculus class, then you are probably familiar with the following formulas: e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ..., cos(x) = 1 - x^2/2! + x^4/4! - ..., sin( ...
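For readers who want to experiment, the power-series definition can be checked numerically. The sketch below uses a partial sum of the series (not the exact infinite series), extended to a complex argument, and finds e^(iπ) very close to -1:

```python
import math

def complex_exp(z, terms=60):
    """Partial sum of the power series e^z = 1 + z + z^2/2! + z^3/3! + ..."""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)  # next term z^(n+1)/(n+1)! from the previous one
    return total

# The series, applied to the complex argument i*pi, gives approximately -1:
print(abs(complex_exp(1j * math.pi) - (-1)) < 1e-12)  # True
```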

Richard makes a good point, but I still think that I had a good point also, although I may not have expressed it as well as I might have. It is often said that Euler proved that e^(ix) = cos(x) + i sin(x), but it seems to me this is somewhat misleading. Many (most?) modern complex analysis books present Euler's equation as a definition, not a theorem--it is the definition of e^(ix). On this view, what Euler did is not to prove that e^(ix) is equal to cos(x) + i sin(x), but rather to show that it would be a good idea to define it that way. If you want to regard Euler's equation as a theorem, then you have to take something else to be the definition of e^(ix). You could do that--you could, for example, take the power series for e^x, which was derived for x a real number, and extend it into the complex numbers to define e^z when z is a complex number, and then Euler's equation would be a theorem. But this requires...

Does the equation "e to the power i times pi = -1" have any physical meaning? Is there a meaning waiting to be discovered?

Euler's equation is important in quantum mechanics. In quantum mechanics, the state of a particle is given by a function, and the formula for that function generally includes a term of the form e^(ix), which, according to Euler's equation, is equal to cos(x) + i sin(x). The appearance of cos(x) and sin(x) here introduces an oscillatory term into the formula. This is why quantum mechanics predicts wave-like behavior of particles. There is a geometrical interpretation of Euler's equation. The complex numbers are usually pictured as a plane. (See question 316 for more about this.) In this plane, the number e^(ix) always lies on a circle of radius 1, centered at 0. When x = 0, e^(ix) = e^0 = 1, which is the rightmost point on the circle. As x increases, the point e^(ix) moves counterclockwise around the circle, completing a full revolution every time x increases by 2π. Although Euler's equation has applications in science, I'm...
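The geometric picture can be verified numerically with Python's standard cmath library, a small sketch:

```python
import cmath
import math

# e^(ix) always lies on the unit circle |z| = 1, for any real x:
for x in [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]:
    z = cmath.exp(1j * x)
    print(abs(z))  # 1.0, up to floating-point rounding

# And Euler's equation at x = pi:
print(cmath.exp(1j * math.pi))  # very close to -1
```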

When we write "one and a half" in decimal notation as 1.5, do we really mean 1.5000... (with an infinity of zeros)? If so, how do we know there's no "3" lurking at the 10 millionth decimal place? Is this a problem of converting an analogue world into digital mathematics?

Yes, "one and a half" means 1.5000..., with infinitely many zeros. How do we know there's no "3" lurking at the 10 millionth decimal place? Well, it depends on how the number is being specified. If you are the one specifying the number, and you say that the number you have in mind is one and a half (exactly), or you say that it is 1.5000... and you add (as you did) that the decimal expansion you have in mind is the one in which the zeros go on forever, then you've already answered your own question--you have specified that there's no 3 lurking at the 10 millionth decimal place. If, on the other hand, the number was the result of a physical measurement, then there will surely be some amount of error in the measurement. So if the readout on the measuring instrument says "1.5000", then most likely the quantity being measured is not exactly one and a half, and there is some nonzero digit lurking somewhere--perhaps in the very next digit. If someone else tells you something about a number, and he...
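A side note on the digital-representation angle of the question: in the binary floating point used by computers, 1.5 happens to be stored exactly, while a number like 0.1 is not, so nonzero digits really do "lurk" further out in its stored value. A small Python illustration using the standard decimal module:

```python
from decimal import Decimal

# Decimal(f) shows the exact value a binary float actually stores.
# 1.5 is representable exactly in binary (it is 1 + 1/2):
print(Decimal(1.5))  # 1.5

# 0.1 is not; the stored value has nonzero digits far down the expansion:
print(Decimal(0.1))
```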

I am reading a logic book which discussed the differences between Aristotelian Logic and Boole-Russell (modern) Logic. If the Boole-Russell logic leaves 5 valid moods out, which Aristotelian Logic covers, why do we continue to use Boole-Russell logic if it is "incomplete" per se?

I'm not sure what you (or your book) are referring to when you say that modern logic "leaves 5 valid moods out". But modern logic is complete. To explain what this means, the following terminology is helpful: We say that a conclusion is a semantic consequence of a collection of premises if, in every situation in which the premises are true, the conclusion is also true. Then it is possible to prove the following statement: Whenever a conclusion is a semantic consequence of a collection of premises, it is possible to derive the conclusion from the premises using the rules of modern logic. To put it another way: If you cannot derive a conclusion from a collection of premises using the rules of modern logic, then there must be some possible situation in which all the premises would be true and the conclusion false. This is the completeness theorem of logic, proved by Gödel in his doctoral dissertation in 1929.
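For propositional logic, semantic consequence can be made concrete by brute-force truth tables: check every assignment of truth values. A small illustrative sketch (the function name is mine, not standard terminology from any library):

```python
from itertools import product

def semantic_consequence(premises, conclusion, variables):
    """True iff every assignment making all premises true makes the conclusion true."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # a counterexample situation exists
    return True

# Modus ponens: from P and "P implies Q", the conclusion Q follows.
premises = [lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]]
conclusion = lambda e: e["Q"]
print(semantic_consequence(premises, conclusion, ["P", "Q"]))  # True
```

When the function returns False, that is exactly the "possible situation in which all the premises would be true and the conclusion false" described above.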

This one is mathematical, but seems to address philosophical issues regarding definition and the nature of mathematical truth. So: If, for any x, x^0 = 1, and, for any y, 0^y = 0, then what is the value of 0^0?

There are no right or wrong answers when it comes to making definitions in mathematics. We can define things however we please. However, mathematicians generally try to make definitions that will be useful, and one way to do this is to preserve general rules. Now, you have identified a case in which this strategy leads to conflicting answers. One general rule, x^0 = 1, suggests that we should define 0^0 to be 1. Another rule, 0^y = 0, suggests that we should define 0^0 to be 0. There is no way to define 0^0 and preserve both rules, so we have to make a choice. I don't think there is universal agreement among mathematicians about how to define 0^0--sometimes it is regarded as being undefined. However, in many contexts mathematicians take 0^0 to be 1. In choosing which definition of 0^0 is most useful or natural, it may help to think about why the rules x^0 = 1 and 0^y = 0 make sense: x^0 can be thought of as representing an empty product, but what does that...
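As a side note, many programming languages adopt the empty-product convention here. In Python, for instance:

```python
# Python follows the convention 0^0 = 1, for integers and for floats:
print(0 ** 0)         # 1
print(pow(0.0, 0.0))  # 1.0
```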

What do you believe to be the square root of -1? Is it a flaw in our complex math system? Is it really a non-existent and incomprehensible thing? Or are our brains simply too basic to understand this notion?

Mathematicians have defined many different number systems--integers, rational numbers, real numbers, complex numbers, quaternions, etc. The answer to your question will be different depending on which number system you use. In the real numbers, there is no number whose square is -1. In the complex numbers, there are two numbers whose squares are -1; in the usual notation for complex numbers, they are called i and -i. Just as the real numbers are often represented as the points on a line, the complex numbers are represented as the points on a plane. In this picture, the real numbers lie along a horizontal axis, and the numbers i and -i are 1 unit above and below 0. But this answer may not satisfy you. Perhaps you want to know what the square root of -1 is really, independent of any choice of number system. I am inclined to say that that question is meaningless. To make sense of your question, you have to have a number system containing the number -1, in which there is a...
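A small numerical illustration using Python's standard cmath library, which works in the complex number system and so has no trouble with the square root of -1:

```python
import cmath

r = cmath.sqrt(-1)  # principal square root, the number i
print(r)            # 1j

# Both i and -i square to -1, the "two numbers whose squares are -1":
print(r * r)        # -1 (as a complex number)
print((-r) * (-r))  # also -1
```

By contrast, math.sqrt(-1), which works only in the real number system, raises an error, matching the answer's point that the result depends on the number system chosen.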
