I am an undergraduate student who is interested in attending medical school. My primary reason for wanting to work in the medical field is to improve access to medical care in underserved communities further along my career path. However, attending medical school costs quite a bit. While I am fortunate enough to likely be able to pay for med school without crippling debt, I can't help but think that the money going towards my education could go towards better causes, such as improving infrastructure in rural, underserved communities and improving vaccination rates. Would the most moral option here be to donate the money going towards my education to these causes, or to go to medical school and use my education to improve access to healthcare in underserved populations?

Some people hold the view that if we're doing what we really ought to, we'll give up to the point where giving more would decrease the overall good that our giving produces. The most obvious arguments for that sort of view come from utilitarianism, according to which the right thing to do is the action that maximizes overall utility (good). If I could give more and overall utility would rise on that account, giving more is what I should do. Other views are less demanding. A Kantian would say that our most important duty is to avoid acting in ways that treat others as mere means to our own ends. Kantians also think we have a duty to do some positive good, but how much and in what way is left open. I'm not aware of any Kantians who think we're obliged to give up to the point where it would begin to hurt. Who's right? I do think there's real wisdom in the idea that a system of morality won't work well if it's so demanding that few people will be able to follow it, and so I'm not persuaded by the point of...

Is it strange that you can't divide by zero?

It may seem strange at first blush, but there's a pretty good reason why division by 0 isn't defined: if it were, we'd get an inconsistency. You can find many discussions of this point with a bit of googling, but the idea is simple. Suppose x = y/z. Then we must have y = x*z. That means that if y = 2, for example, and z = 0, we must have 2 = x*0. But if we multiply a number by 0, we get 0. That's part of what it is to be 0. So no matter what x we pick, we get x*0 = 0, not x*0 = 2. Is it still strange that we can't divide by 0? If by "strange" you mean "feels peculiar," then it's strange from at least some people's point of view. But this sense of "strange" isn't a very good guide to the truth. On the other hand, if by "strange" you mean "paradoxical" or something like that, it's not strange at all. On the contrary: we get paradox (or worse: outright contradiction) if we insist that division by zero is defined.
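To make the little argument above fully explicit, here is a minimal sketch in symbols (nothing beyond what was just said in prose): assuming that 2/0 names some number x leads straight to a contradiction.

    % Minimal sketch (requires amsmath): suppose division by zero were defined.
    \begin{align*}
      x &= 2/0 && \text{(assume 2/0 names some number } x\text{)}\\
      2 &= x \cdot 0 && \text{(the defining property of division)}\\
      x \cdot 0 &= 0 && \text{(true for every } x\text{)}\\
      2 &= 0 && \text{(contradiction)}
    \end{align*}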

When asked to choose between two competing theories, A and B, each of which fits the facts, people will sometimes resort to asking questions like, "Which theory is the more probable?" or "Which theory is simpler?" or even "Which theory involves the least upset to all my other beliefs?" Well, what about, "Which is the less weird theory?" Could weirdness (that is, something like distance from everyday experience) count as a good criterion on which to endorse one theory over another? Einstein seems to be appealing to some idea like this in the comment that God doesn't play dice. And would it be fair to say that many philosophers appeal to something like this when they reject panpsychism?

Philosopher John Haugeland once offered a sort of counterpart to Ockham's Razor: "Don't get weird beyond necessity" (from "Ontological Supervenience," Southern Journal of Philosophy, 1984, pp. 1–12). Of course, the hard part is spelling out what weirdness amounts to and why it counts against a hypothesis. For example: Ockham's Razor tells us not to multiply entities beyond necessity; it stands in favor of parsimonious theories. Panpsychism is certainly weird, but from one point of view it's parsimonious: it says that there aren't actually two kinds of physical things (conscious and unconscious) but only one. Does the weirdness swamp the parsimony? If so, why? So as a quick and dirty rule of thumb, "Pick the less weird theory" seems fine. As a serious methodological rule, it may need some work.

What is the panel's response to the philosophic community's ad hominem attacks on Rebecca Tuvel and her article in Hypatia? There was no engagement of her ideas at all, and the editors of Hypatia were forced to remove her article and publish an apology, merely because Ms Tuvel asked uncomfortable questions.

I just wanted to clear up an important point. The article was not removed. It is still in the journal, including the online edition, and it will stay there. I'd suggest reading this piece, http://www.chronicle.com/article/A-Journal-Article-Provoked-a/240021, which gives a clearer picture of the journal's own review process. In particular, it makes clear that the associate editorial board doesn't make decisions about what gets published and isn't involved in the day-to-day operation of the journal. I will leave it to others to discuss more substantive issues.

Many people, like myself, think of Ayn Rand when we think of philosophy, having read her books when young, etc. Coming from this sort of background, it was surprising to me, recently, to be told that the majority of professional philosophers don't regard her as a philosopher at all, or, if they do, take little notice of her. Is that truly the attitude amongst philosophers? If so, is there any particular reason for it? For instance, is it to do with resistance to ideas that come from outside the university?

I don't know if most philosophers would say that she's no philosopher at all, but I suspect many would say she's a marginal philosopher. One reason is that however influential she may have been, many philosophers don't think she's a very good philosopher—not very careful or original or analytically deep—even if they happen to be broadly sympathetic to her views. The fact that she came from outside the academy by itself wouldn't be disqualifying, but in one sense, philosophers are not just people who engage with philosophical issues; they're people who are part of a community whose members read and respond to one another (even when they disagree deeply) and interact in a variety of particular ways. Being outside the academy tends to put you outside the ongoing conversation of that community. Whether that's good, bad, or neutral is another story, but to whatever extent "philosopher" means "someone who's a member of a certain intellectual community," the fact that she was outside the academy is part,...

It seems to me that there are two kinds of numbers: the kind whose concept we can grasp by imagining a case that instantiates it, and the kind we cannot imagine. For example, we can grasp the concept of 1 by imagining one object. The same goes for 2, 3, 0.5 or 0, and pretty much all the most common numbers. But there is this second kind that we cannot imagine. For example, i (the square root of -1) or 532,740,029. It seems to me that nobody can really imagine what 532,740,029 objects or i object(s) are like (you see, I don't even know whether I should put 'object' or 'objects' here, because I don't know whether i is singular or plural; I don't know what i is). So, Q1) if I cannot imagine a case that instantiates concepts like '532,740,029', do I really know the concept, and if so, how do I know the concept? Q2) is there a fundamental difference between numbers whose instances I can imagine and those I cannot? (I lean towards there being no difference, but I don't know how to account...

I'd suggest that while there may be differences in how easy it is for us to "picture" or "imagine" different numbers, this isn't a difference in the numbers themselves; it's a rather variable fact about us. I can mentally picture 5 things with no trouble. If I try for ten, it's harder (I have to think of five pairs of things). If I try for 100, it's pretty hopeless, though you might be better at it than I am. But I'm pretty sure that there's no interesting mathematical difference behind that. I'm also pretty sure that I understand the number 100 quite well. I don't need to be able to imagine 100 things to be able to see that 2x2x5x5 is the prime factorization of 100, for example, nor to see that 100 is a perfect square. But that may still be misleading. I have no idea offhand whether 532,740,029 is prime. But I know what it would mean for it to be prime -- or not prime. And in fact, a bit of googling for the right calculators tells me that 532,740,029 = 43 x 1621 x 7643. I can't verify that by doing the...
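For what it's worth, here is a minimal sketch (my own illustration, not part of the original answer) of how one could check that factorization without trusting an online calculator. It uses simple trial division, which is plenty fast for a nine-digit number.

    # Sketch: verify the factorization 532,740,029 = 43 x 1621 x 7643
    # by elementary trial division (no external libraries needed).

    def is_prime(n: int) -> bool:
        """Return True if n is prime, checking divisors up to sqrt(n)."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    n = 532_740_029
    factors = [43, 1621, 7643]

    print(43 * 1621 * 7643 == n)              # True: the product really is n
    print(all(is_prime(f) for f in factors))  # True: each factor is prime
    print(is_prime(n))                        # False: so n itself is not prime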

Is it morally acceptable to hate a crime but not the criminal?

I'm having a bit of trouble understanding why it wouldn't be. Among possible reasons why it would be just fine, here are a few.

1) People, and more generally sentient beings, occupy a very different place in the moral universe than mere things (including events and abstract ideas). Moral notions don't even get a grip unless they refer back one way or another to beings as opposed to things. There's simply no reason to think that our attitudes toward people should be in lock-step with our attitudes toward non-sentient things.

2) Moreover, you might think that hating people is almost always not a good thing. It makes it harder to see their humanity, it makes you more likely to treat them less fairly, and it fills you up with emotional bile. Hating a crime might not be emotionally healthy either, but given the distinction you're interested in, it's not personal; it's strong moral disapproval of a certain kind of action, and that might be both appropriate and productive.

3) Suppose someone you care...

When I read most discussions about free will, it seems that there is an implicit unspoken assumption that might not be accurate once it is brought forward and addressed explicitly. We know from research (and for me, from some personal experiences) that we make decisions before we are consciously aware that we have made that decision. The discussions about free will all seem to assume that one of the necessary conditions of free will is that we be aware that we are exercising it in order to have it. (Sorry if I did not phrase that very well.) In other words, if we are not consciously aware that we are exercising free will in the moment that we are making a decision, then it is assumed that we do not have free will, merely because of that absence of conscious awareness. Suppose we do have free will, and we exercise it without being consciously aware that we are doing so at that particular moment. That might merely be an artifact that either we are using our awareness to do something that requires...

Part of the problem with this debate is that it's not always clear what's really at issue. Take the experiments in which subjects are asked to "freely" choose when to push a button and we discover that the movement began before the subject was aware of any urge to act. The conclusion is supposed to be that the movement was not in response to a conscious act of willing and so wasn't an act of free will. But the proper response seems to be "Who cares?" What's behind our worries about free will has more or less nothing to do with the situation of the subjects in Libet's experiment. Think about someone who's trying to make up their mind about something serious—maybe whether to take a job or go to grad school. Suppose it's clear that the person is appropriately sensitive to reasons, able to reconsider in the light of relevant evidence and so on. There may not even be any clear moment we can point to and say that's when the decision was actually made. I'd guess that if most of us thought about it, we'd...

Lately, I have been hearing many arguments of the form: A is better than B, therefore A should be more like B. This is despite B being considered the less desirable option (often by the one posing the argument). For example: The poor in our country have plenty of food and places to live. In other countries, the poor go hungry and have little to no shelter. It is then implied that the poor in our country should go hungry and have little to no shelter. I was thinking this was a fallacy of suppressed correlative, but that doesn't quite seem to fit. What is the error or fallacy in this form of argument? How might one refute such an argument?

Years ago, I used to teach informal reasoning. One of the things I came to realize was that my students and I were in much the same position when it came to names of fallacies: I'd get myself to memorize them during the term, but not long after, I'd forget most of the names, just as my students presumably did. Still, I think that in this case we can come up with a name that may even be helpful. Start here: the conclusion is a complete non sequitur; it doesn't even remotely follow from the premises. How do we get from "The poor in some countries are worse off than the poor in our country" to "The poor in our country should be immiserated until they are as wretched as the poor in those other countries"? Notice that the premise is a bald statement of fact, while the conclusion tells us what we ought to do about the fact. By and large, an "ought" doesn't simply follow from an "is", and so we have a classic "is/ought" fallacy. However, pointing this out isn't really enough. After all, in some cases...

If the basis of morality is evolutionary and species-specific (for instance, tit-for-tat behaviour proving reproductively successful for humans; cannibalism proving reproductively successful for arachnids), is it thereby delegitimised? After all, different environmental considerations could have favoured the development of different moral principles.

There's an ambiguity in the words "basis of morality." It might be about the natural history of morality, or it might be about its justification. The problem is that there's no good way to draw conclusions about one from the other. In particular, the history of morality doesn't tell us anything about whether our moral beliefs and practices are legitimate. Even more particularly, the question of how morality figured in reproductive success isn't a question about the correctness of moral conclusions. Here's a comparison. When we consider a question and try to decide what the best answer is, we rely on a background practice of reasoning. That practice has a natural history. I'd even dare say that reproductive success is part of the story. But whether our reasoning has a natural history and whether a particular way of reasoning is correct are not the same. Modus ponens (from "A" and "If A then B," conclude "B") is a correct principle of reasoning, whatever the story of how we came to it. On the other...
