I'm a lawyer. One of my previous clients asked me for specific legal advice that he later used to commit financial fraud. I strongly suspected at the time that he was going to use my advice for that very purpose but I told him anyway because I like him as a person and I also disagree with the law that prohibits the particular type of fraud that he committed. Have I acted immorally according to virtue ethics?

First, a thought about the question: you ask whether you've "acted immorally according to virtue ethics." You might be trying to understand what light virtue ethics in particular casts on a case like this, or you might be interested in whether what you did was wrong, period. In either case, I don't think we have enough information to say. But let's take the cases in turn. Some views provide what's supposed to be a criterion that we might be able to use rather like an algorithm to figure out what's right or wrong. Utilitarianism would tell us to do a sort of cost/benefit analysis, toting up the goods and the harms and deciding whether one action is better than another by seeing how the arithmetic works out. Kantianism would direct us to apply the Categorical Imperative in one or another of its forms. (For example: we might ask whether what we're considering would call for treating someone merely as a means to an end.) Virtue ethics doesn't work that way. It's often understood as telling us to do what a virtuous person would characteristically do in the circumstances...

Is it possible to disprove or refute the seemingly indubitable Cogito ergo sum? Is it possible for even that to be doubted? Is it possible for something to think but not exist? In my opinion, the only "thing" of which anything that "thinks" could be certain is that "there is something," or simply "it is." I feel that for one even to doubt that "there is something," there has to be "something," or one could not doubt at all; indeed, there could not even have been a "one" in the first place to do anything, let alone doubt. I have just confused myself, and I apologize for not explaining this better. I am trying to go beyond René Descartes and "truly" find "something" that could not be doubted at all. Or is it possible to doubt anything or everything, even that statement itself, ad infinitum, and even that?

There are two questions here: first, can Descartes' cogito be doubted—is it open to doubt that "I" exist? Second, more generally, is there anything that's not open to legitimate or reasonable or rational doubt? (What people are psychologically capable of doubting may be another matter.) On the first, many philosophers would say yes. Even if it's certain that there's thinking going on, it doesn't follow that there's some one or some thing doing the thinking. Consider the Buddhist/Humean "no self" view. On that way of understanding things, there's no substantial self. There is, as the Humean might say, just a bundle of perceptions. "I think," on this account, is just a manner of speaking. We can't get from "there is thinking" to "I exist." So maybe it's open to doubt that I exist. Is it open to doubt that there's thinking going on when it seems that there is? Maybe not, though I don't doubt that some clever philosopher could offer an interesting argument to the contrary. What else? Can it...

What happens, morally speaking, if I promise to do something that happens to be slightly immoral? Do I still have some kind of obligation to do it?

Nice question! On some views, there's no judging intrinsically whether doing what you promised is immoral—slightly or otherwise. If you're a consequentialist (someone who thinks consequences always decide what's right), the question is what, overall, produces the best consequences, and it might be that, overall, it's better to do what you promised, even if it's something we'd normally say you shouldn't do. Someone else could say that the case contains a moral dilemma by its very nature. On the one hand, someone might say, it's wrong to break a promise. On the other hand, we've assumed that what you've promised to do is also wrong. On that way of looking at it, we have a dilemma, and on one way of understanding dilemmas, you will do wrong no matter what. That said, you may still be obliged, all things considered, to do what you promised—or not to, depending on the case. We could add other theoretical possibilities here, but for anyone who faces a situation like this in real life, the answer is "It depends."

Hello. Listening to a radio programme about utilitarianism I was struck by the difficulty of making a universal framework fit in our relationship-driven world, and how a concept of egoistic or relative utilitarianism might do this. That is, we maximise utility not equally over everyone but across those with whom we feel a relationship, and to the extent that we do. So, where a utilitarian sacrifices his children to make a small dent in third world poverty but ignores his newly unemployed neighbour because she is not starving, an "R.utilitarian" buys his children the cheaper laptop, using the balance to contribute to the starving and to help his neighbour out with an interest-free loan while she gets back on her feet. I googled every combination of relationship/relative/egoistic and utilitarianism that I could find, and came up blank. Please can you tell me what this theory is called, and who came up with it 200 years before I did? If not, please don't steal it before I write it up ;-)

Interesting. Here's a possible way of thinking about it. Utilitarianism (capital "U") as a philosophical view says that the right thing to do is what maximizes utility, where "utility" is characterized in a very particular way: roughly, the sum total of well-being among sentient creatures (or something like that). That may or may not be the right account of right and wrong, but most people probably don't have a view on that question one way or another. However, it's at least somewhat plausible that people are utilitarians (small "u") in a different sense: they try to maximize utility, understood as what they value. Whether it makes us good or bad, many of us actually do value the well-being of our children more than we value the well-being of strangers, and our actions reflect that. A small-"u" utilitarian, then, might well behave as what you call an R-utilitarian. That would be because the small-"u" utilitarian is maximizing over what s/he values. In any case, there's been a fair bit written...

Justice Scalia famously stated that crosses on graves have, well, crossed over from an overtly religious symbol to one that may represent any dead soldier. How do philosophers treat such claims? How do we establish when religious practices, symbols, rituals, etc. have entered the secular public domain to the extent that the law can recognize them as such?

I'll have to admit that I think Justice Scalia is full of prunes on this one, as my grandmother would have said. And I think the case was decided wrongly by the Supreme Court. (Here's an account of the decision that's not exactly neutral, but still: http://www.patheos.com/blogs/friendlyatheist/2010/04/28/supreme-court-rules-that-a-cross-is-not-a-symbol-of-christianity/ ) As for your question, it has an empirical component and a conceptual one. The conceptual part calls for deciding what it would mean for a symbol not to have a religious meaning, and the empirical part would be finding out if crosses on graves now have a secular meaning. The answer to the conceptual question might call for some bells and filigrees, but the basic idea is pretty clear: do most people, including in this case most non-Christian people, agree that a particular symbol (in this case, a cross on a grave) has no religious meaning? If the answer is yes, then Justice Scalia is right. If the answer is no, then he's wrong.

If two people share a thought influenced by their shared experiences, would this be considered telepathy? For example, if two people see a stimulus and instantly link that stimulus to a situation experienced with the other person, does it become telepathy because they both think it at the same time and have some type of relationship?

No. At least, not if by "telepathy" you mean what most people mean. Usually when people talk about telepathy, what they have in mind is one person's thoughts influencing another person's thoughts without the usual means of influence such as speaking, telephoning, etc. What you've described is a case of "common cause." It's not a matter of one person's thoughts influencing another person's. It's a matter of a common stimulus influencing each person's thoughts. To give a clearer example: suppose you and I are, as it happens, both watching the same TV program, though in different cities. An image of a mushroom cloud appears onscreen and we both think of Hiroshima. That's not telepathy. Nor would it be telepathy if the two of us had also once met and talked about the history of the atomic bomb.

I am sometimes struck by how we use language in an exaggerated manner. We often say "That is SO GOOD!" when it is not that good; we say "it has been a pleasure to talk to you" simply out of convention, regardless of whether we derive any pleasure from the conversation. I am troubled by this because, first, when I hear people say those words I cannot help doubting their sincerity. Second, those words become devalued: when I want to express my genuine praise by saying "this is really good," it just sounds like what everybody else will say no matter what. So how should we view those uses of words?

If I'm writing a letter to someone I don't know very well, I might begin it "Dear _____" and end it "Yours truly." But nobody is under the slightest impression that the recipient really is dear to me, nor that I'm declaring any sort of fealty. I said "nobody," but of course that's not quite right. Nobody who's even noddingly familiar with the conventions of letter writing will be confused, though someone from a very different culture might be. What someone means by using certain words isn't just a matter of what you find when you look the words up in a dictionary. Or suppose I run into a nodding acquaintance by chance. I hug them and say "Good to see you." Is the hug an expression of intimacy? Am I really pleased to see this person? Maybe, or maybe not, but at least in my part of the world, this is how people greet one another. I don't make judgments about people's overall sincerity based on interactions like this, because in following the conventions of polite greeting, sincerity isn't the issue. Do...

Would it be ethically sound to love a machine that is a perfect replica of a human? For example: if it were impossible for anyone to tell the difference, would it be wrong? If this robot were programmed to have human feelings and to think in a manner that is indistinguishable from a human's, would it be moral to love them as though they were human? (Apologies if this is unclear; English is not my first language.)

To get to the conclusion first, I think that the answer is yes, broadly speaking. But I'd like to add a few qualifications. The first is that I'm not sure the root question is about whether it would be ethically right or wrong. It's more like: would it be some kind of confusion to love this sort of machine in this way? Suppose a child thinks that a fancy stuffed animal really has feelings and thoughts, but in fact that's not true at all. The toy seems superficially to have emotions and a mind, but it's really a matter of a few simple preprogrammed responses of a highly mechanical kind. This might produce strong feelings in the child—feelings that seem like her love for her parents or her siblings or her friends. But (so we're imagining) the feelings are based on a mistake: the toy is just a toy. On the other hand, if an artificial device (let's call it an android) actually has thoughts and feelings and is able to express them and to respond to what people like us feel or think, then it's hard to see...

The probability in my mind that I am correct in attributing extensive moral personhood to other humans is very high. With non-human vertebrates, I attribute slightly less extensive but still quite broad moral personhood, and in this, too, I am quite confident. But I accept that, given I am a fallible human being, I might be wrong: perhaps I should give them no moral personhood, or moral personhood of the kind I ascribe to humans. Continuing in the same line, I ascribe almost no moral personhood to bacteria and viruses. But again, given I am fallible, mustn't I accept some non-zero probability that they deserve human-like personhood? If so, and I am a utilitarian, given the extremely large number of bacteria and viruses on the planet, it seems that even if I am very sure that bacteria are of only minimal moral importance, I still must make serious concessions to them, because it seems doubtful that my certainty is so high as to overcome the vast numbers of bacteria and viruses on this planet. (I am aware it is not entirely clear how...

It's a very interesting question. It's about what my colleague Dan Moller calls moral risk. And it's a problem not just for utilitarians. The general problem is this: I might have apparently good arguments for thinking it's okay to act in a certain way. But there may be arguments to the contrary—arguments that, if correct, show that I'd be doing something very wrong if I acted as my arguments suggest. Furthermore, it might be that the moral territory here is complex. Putting all that together, I have a reason to pause. If I simply follow my arguments, I'm taking a moral risk. Now there may be costs to taking the risks seriously. The costs might be non-moral (say, monetary) or, depending on the case, there may be potential moral costs. There's no easy answer. Moller explores the issue at some length, using the case of abortion to focus the arguments. You might want to have a look at his paper. A final note: when we get to bacteria, I think the moral risks are low enough to be discounted. I...
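The questioner's worry has a simple expected-value shape, and it may help to set it out explicitly. What follows is a rough sketch with invented numbers, not anything Moller (or anyone else here) commits to:

\[
\mathbb{E}[\text{moral weight}] = p \times N \times w
\]

Here \(p\) is the probability that bacteria have human-like moral status, \(N\) is how many bacteria there are (on some estimates, on the order of \(10^{30}\) on Earth), and \(w\) is the moral weight each would have if they did. The worry is that even a minuscule \(p\), say \(10^{-9}\), gets swamped when multiplied by a factor of \(10^{30}\). The reply above amounts to saying that we may reasonably set \(p\), and hence the whole product, low enough to discount.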

Almost all ethical theories have some problem or other, whether it's an internal inconsistency, a lack of any answer for a certain scenario, or whatever. How can anyone accept an ethical theory that they know is flawed? Don't the flaws mean we need to keep looking and thinking?

There are two sorts of things that might be at issue here, and they call for different answers. If I want the best ethical theory we can come up with, and the available alternatives all seem flawed, then that's a reason to keep looking and thinking—especially if the goal is to get as close as possible to the (probably unattainable) ideal theory. But if "accept an ethical theory" means something like "use it as the basis for making ethical judgments," then the issue changes. That's because it's debatable, to say the least, that the best way to make ethical judgments is to come up with an ethical theory and apply it. What's the alternative? Here's one. Assume that, by and large, we're able to make reasonable ethical judgments. The job of an ethical theory, on this view, is to provide a coherent account of what makes those judgments right or wrong (or true or false, or whatever the appropriate contrast may be). It could very well be that even though we have the capacity to make sound moral judgments,...
