Consider the following game that costs $2 to play: You roll a fair, six-sided die. You are awarded a $6 prize if, and only if, you roll a six; otherwise, you get nothing. Should you play the game? Well, considering the odds, the average payout, or "expected utility," is (1/6) × $6 = $1, which is *less* than the $2 cost of playing. Therefore, since over many trials you would lose out, you should not play this game. That line of reasoning sounds OK. But let's say you are given a chance to play only once. What bearing does this "average payout" argument have on this special "one-shot" case? If you are in this for a single trial, isn't the trend "over many trials" simply irrelevant?

Good question. My own view is that what happens in the long run is irrelevant to the rationality of betting (or in your case not betting) according to the odds in the single case. I think that it is a basic principle of practical rationality that your choices should be guided by the probabilities and that, surprisingly, there is no further justification for this. A first point. You say "over many trials you would lose out." Well, if you are talking about a finite number of trials, that's not guaranteed. It is possible--indeed there will be a positive probability--that in a finite number of trials you will win even if you bet against the odds. All we can say is that the probability of winning over many trials is low. So now we are just back with the original problem. Why is it rational to avoid doing something just because the probability of success is low? Does the situation change if we think about an infinite number of trials? Well, it's not even obvious that you are guaranteed to...
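The point that a finite run of bets against the odds is not a guaranteed loss can be checked directly. Here is a small Python sketch (the round structure and the 10-round run length are illustrative choices, not from the discussion above) that computes the expected value of one round of the $2 dice game and estimates, by simulation, the chance of finishing ahead after 10 rounds:

```python
import random

random.seed(0)

def play_once():
    # One round: pay $2; a roll of six pays $6, anything else pays nothing.
    return (6 if random.randint(1, 6) == 6 else 0) - 2

# Expected value per round: (1/6) * $6 - $2 = -$1.
ev = (1 / 6) * 6 - 2

# Finite runs are not a sure loss: estimate the chance of finishing ahead
# after 10 rounds (which requires at least 4 sixes) by simulation.
runs = 100_000
ahead = sum(1 for _ in range(runs) if sum(play_once() for _ in range(10)) > 0)
print(ev)            # -1.0
print(ahead / runs)  # small but clearly positive (around 0.07)
```

So even though each round has a negative expected value, roughly 7% of 10-round runs come out ahead, which is exactly the "positive probability" of winning against the odds mentioned above.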

Is an event which has zero probability of occurring but which is nonetheless conceivably possible rightly termed "impossible"? For instance, is it "impossible" that I could be the EXACT same height as another person? I take it that the chance of this is zero in that there are infinitely many heights I could be (6 ft, 6.01 ft, 6.001 ft, 6.0001 ft, etc.) but only one which could match that of a given other person exactly; at the same time, I have no problem at all imagining a world in which I really am exactly as tall as this other.

Probability theorists often consider random trials with infinitely many possible outcomes each with probability zero--for example, the probability that a quantum particle will be at some particular point in space. In such cases, the probability that the result falls within an (infinite) set of such outcomes need not be zero--the probability that the particle is in some region of space, say. I don't see that there is anything paradoxical here. It's true that cases like this violate the 'additivity' assumption that the probability of a disjunction of non-overlapping outcomes is the sum of the probabilities of the individual outcomes. But there's nothing mandatory about this assumption when we are dealing with infinite sets of outcomes, and probability theories covering this kind of case are perfectly consistent. The question asked whether we should use the term 'impossible' for probability zero outcomes like the particle being at some particular point in space. I'd say not, ...
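The idea that single points get probability zero while regions get positive probability can be made concrete with a toy "length equals probability" model. In the sketch below, a quantity is uniformly distributed on [0, 1] (a stand-in for the continuum of possible heights in the question); the model and the function name `prob_interval` are illustrative assumptions, not anything from the discussion above:

```python
from fractions import Fraction

# Toy model: for a quantity uniform on [0, 1], the probability of landing
# in [a, b] is just the length of that interval (clipped to [0, 1]).
def prob_interval(a, b):
    a, b = Fraction(a), Fraction(b)
    return max(Fraction(0), min(b, Fraction(1)) - max(a, Fraction(0)))

exact_point = prob_interval(Fraction(1, 2), Fraction(1, 2))  # one exact value
small_range = prob_interval(Fraction(2, 5), Fraction(3, 5))  # a band of values
print(exact_point)  # 0 -- probability zero, yet not inconceivable
print(small_range)  # 1/5 -- an interval made up of zero-probability points
```

The interval [2/5, 3/5] consists entirely of points that each have probability zero, yet the interval itself has probability 1/5, which is why summing pointwise probabilities (additivity) breaks down in the continuous case.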

I have a question about probability (and baseball). Say that a hitter has consistently hit .300 for many years. Now, suppose that he begins a new season in a slump, and hits only .200 for the first half; should we infer that he will hit well above .300 for the second half (and so finish with the year-end .300 average we have reason to expect of him), or would this be an instance of the gambler's fallacy?

If the hitter were just a batting machine that averages .300 in the long run, then it would indeed be an instance of the gambler's fallacy to think he would end up with his normal .300 even though he's .200 halfway through the season (just as it would be a fallacy to suppose a fair coin that has come down heads 5 times in a row is more likely to come down tails than heads over the next 5 throws). But our hitter isn't a batting machine, and one respect in which this may matter is that he may try harder in the second half of the season so as to keep up his record of hitting .300 each season, and this may itself make him score well above .300 for the second half. What we have here is an instance of the 'reference class problem'. Should we consider the hitter's second-half performance as an instance of (a) the class of all his half-season performances (in which case we should expect him to average only .300), or should we consider it as an instance of (b) the class of his second-half-season...
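The coin-flip version of the gambler's fallacy mentioned above is easy to test by simulation. This Python sketch (the streak length and trial count are arbitrary choices for illustration) waits for five heads in a row and then counts heads in the next five flips; if the fallacy were right, the average would dip below 2.5:

```python
import random

random.seed(1)

# After observing HHHHH from a fair coin, how many heads come up in the
# next 5 flips? The past streak doesn't "owe" us any tails.
def heads_after_streak():
    while True:
        if all(random.random() < 0.5 for _ in range(5)):         # wait for HHHHH
            return sum(random.random() < 0.5 for _ in range(5))  # heads in next 5

trials = 20_000
avg = sum(heads_after_streak() for _ in range(trials)) / trials
print(avg)  # close to 2.5, not tilted toward tails
```

The average stays at about 2.5 heads, just as it would without any preceding streak: for a genuine "batting machine," past results carry no corrective force on future ones.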