Are mandatory school vaccinations ethical from a deontological perspective, assuming parents could still choose to homeschool their children?

Since the word "deontological" covers a lot of territory, I'm going to start with an assumption: I'm assuming that for you, the deontological point of view is non-consequentialist and broadly Kantian: it says that we can't treat people as mere means to an end, even if the end is otherwise a good one. (If that's not what you mean, my apologies.) If we have a mandatory vaccination rule, on the surface it's all about consequences: it's there to protect people from preventable illnesses. But if we make people get their kids vaccinated when they'd rather not, that sounds like treating them as means to the end of protecting others, whatever other values may be at issue from their point of view. So your question is whether letting them opt out by homeschooling their children is enough to make the rule acceptable. We already have such a rule for parents who prefer not to send their children to public school. If that's okay, is there any reason to think a similar rule wouldn't do in the case of vaccinations? ...

Is it right to value the life of a family member over that of a random person of equal moral worth?

It depends on what you mean. I'd be mistaken if I thought that members of my family were more valuable or worthy than other people just because they're my family. In the general scheme of things, my children's well-being is not a bit more important than the well-being of anyone else's children. The same goes, of course, for parents, siblings, spouses, lovers, friends, fellow citizens. But I have a feeling that's not the question you're asking. I'm guessing you're asking whether it's okay to treat one's own family, friends, etc. preferentially. In some cases the answer is no. Suppose I'm in charge of hiring a new employee. It would be wrong to hire my daughter instead of another candidate simply because she's my daughter. That would be giving my daughter an unfair advantage. The right thing to do, if at all possible, would be to turn the decision over to someone who doesn't have a personal stake in the outcome. On the other hand, to use a sort of example well-known among philosophers, if I have to...

I recently read that the majority of philosophers are moral realists. I either do not understand moral realism or, if I do understand it, I don't buy it. Below I describe how I view the ideas of 'right' and 'wrong.' Is my understanding incompatible with moral realism? And how would you critique my understanding? Also, if you want to give a version of moral realism that is easy to understand, that would be greatly appreciated. Let’s say that I find test taking difficult. I declare: test taking is difficult. This statement is relational in nature. I am saying that because of various elements of my personal makeup, the action of taking a test is difficult for me. It would be incorrect of me to say that test taking was objectively difficult. Some, as a result of various differing elements of their personal makeup, may find test taking easy. It is hypothetically possible to enumerate all of the events in my life as a child and the specific neuroanatomical structures that cause test taking to be difficult for me...

Let's start with "test taking is difficult." There's a difference between "test taking is difficult for me" and "test taking is difficult." If what I mean is just that I personally find test taking difficult, then simply saying "test taking is difficult" is a recipe for being misunderstood. Now of course if I say something is hard, I'm usually not saying it's hard for literally everyone. I mean, roughly, that it's hard for most people. That's fine, but then the claim can be true whether or not the task is hard for me personally. This gives us our first point: if a statement doesn't seem just to be about the speaker, then don't read it that way unless there's a good reason to. Now let's turn to moral statements. When people say that punching a child in the face is wrong, they don't just mean something about themselves. In fact, most people, including most philosophers, would reject that reading out of hand. If I say "it's wrong to punch a child in the face" and you say "what you really mean is that you have...

Whenever ethics and aesthetics come into conflict, is it always aesthetics that must give way? What is so bad about killing ugly people to decrease the net ugliness in the world?

I have to wonder: are you trolling? If not, I'm not sure whether any possible reply is likely to satisfy you. That said, since it can be useful to try to articulate things we normally take for granted, here are a handful of comments. If someone thinks that getting rid of ugly people trumps not killing people, there's an obvious point: perhaps you're beautiful now, or at least, perhaps you're not ugly. But that can change. It might change slowly through the depredations of aging, or it might change in an instant because of some horrific accident. If you think it would be okay to kill someone because they're ugly, you should agree that it would also be okay to kill you if you become ugly. Now the reply might be: this amounts to begging the question; it implicitly puts ethics above aesthetics. The test I've offered is near kin to the Golden Rule or, at least, the Silver Rule, or in any case to Kant's Categorical Imperative. But that misses the point. If Jack thinks it would be okay to kill someone else just because...

A postscript: the larger question was whether ethics always trumps aesthetics. A closely related question is whether a life that always puts moral considerations above all other considerations, no matter how apparently trivial the issue, is a good one. Susan Wolf had interesting things to say about this some years ago in her paper "Moral Saints" (Journal of Philosophy, August 1982). Here's a link to her essay: http://philosophyfaculty.ucsd.edu/faculty/rarneson/Courses/susanwolfessay1982.pdf

Is lying by omission really a form of lying?

If the question is about the word "lying," then there's probably no clear answer. But what word we use isn't the interesting question. Suppose X isn't true, but it's to my advantage that the person I'm talking to should think it's true. For example: maybe I'm talking to my boss, who would reasonably expect that I would have carried out a certain task. In fact, I didn't, and the result was not good. By answering questions artfully, I may be able to leave my boss with the impression that I actually carried out whatever the task was, without ever saying so. I've left a crucial detail out of what I said. Have I actually lied? Maybe not; it depends on how you want to use the word. Have I deliberately tried to deceive my boss? The whole point of the story is that I did. Of course my boss might not just assume that I did my job properly, but I'm hoping that's what she thinks, and I'm trying to make that as likely as I can short of outright saying something false. Is this as bad as an outright...

I'm a lawyer. One of my previous clients asked me for specific legal advice that he later used to commit financial fraud. I strongly suspected at the time that he was going to use my advice for that very purpose, but I told him anyway because I like him as a person and I also disagree with the law that prohibits the particular type of fraud that he committed. Have I acted immorally according to virtue ethics?

First, a thought about the question: you ask whether you've "acted immorally according to virtue ethics." You might be trying to understand what light virtue ethics in particular casts on a case like this, or you might be interested in whether what you did was wrong, period. In either case, I don't think we have enough information to say. But let's take the cases in turn. Some views provide what's supposed to be a criterion that we might be able to use rather like an algorithm to figure out what's right or wrong. Utilitarianism would tell us to do a sort of cost/benefit analysis, toting up the goods and the harms and deciding whether one action is better than another by seeing how the arithmetic works out. Kantianism would direct us to apply the Categorical Imperative in one or another of its forms. (For example: we might ask whether what we're considering would call for treating someone merely as a means to an end.) Virtue ethics doesn't work that way. It's often understood as telling us to do...

What happens, morally speaking, if I promise to do something that happens to be slightly immoral? Do I still have some kind of obligation to do it?

Nice question! On some views, there's no judging intrinsically whether doing what you promised is immoral—slightly or otherwise. If you're a consequentialist (someone who thinks consequences always decide what's right), the question is what, overall, produces the best consequences, and it might be that, on balance, it's better to do what you promised, even if it's something you'd normally be expected not to do. Someone else could say that the case contains a moral dilemma by its very nature. On the one hand, such a person might say, it's wrong to break a promise. On the other hand, we've assumed that what you've promised to do is also wrong. On that way of looking at it, we have a dilemma, and on one way of understanding dilemmas, you will do wrong no matter what. That said, you may still be obliged, all things considered, to do what you promised—or not to, depending on the case. We could add other theoretical possibilities here, but for anyone who faces a situation like this in real life, the answer is "It...

Hello. Listening to a radio programme about utilitarianism I was struck by the difficulty of making a universal framework fit in our relationship-driven world, and how a concept of egoistic or relative utilitarianism might do this. That is, we maximise utility not equally over everyone but across those with whom we feel a relationship, and to the extent that we do. So, where a utilitarian sacrifices his children to make a small dent in third world poverty but ignores his newly unemployed neighbour because she is not starving, an "R.utilitarian" buys his children the cheaper laptop, using the balance to contribute to the starving and to help his neighbour out with an interest-free loan while she gets back on her feet. I googled every combination of relationship/relative/egoistic and utilitarianism that I could find, and came up blank. Please can you tell me what this theory is called, and who came up with it 200 years before I did? If not, please don't steal it before I write it up ;-)

Interesting. Here's a possible way of thinking about it. Utilitarianism (capital "U") as a philosophical view says that the right thing to do is what maximizes utility, where "utility" is characterized in a very particular way: roughly, the sum total of well-being among sentient creatures (or something like that). That may or may not be the right account of right and wrong, but most people probably don't have a view on that question one way or another. However, it's at least somewhat plausible that people are utilitarians (small "u") in a different sense: they try to maximize utility, understood as what they value. Whether it makes us good or bad, many of us actually do value the well-being of our children more than we value the well-being of strangers, and our actions reflect that. A small-"u" utilitarian, then, might well behave like what you call an R-utilitarian. That would be because the small-"u" utilitarian is maximizing over what s/he values. In any case, there's been a fair bit written...
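
In case it helps to see the contrast laid out, here is one rough way to write it down; the notation is my own sketch, not anything standard from the literature. Classical Utilitarianism asks you to maximize an unweighted sum of everyone's well-being, while your R-utilitarian maximizes a sum weighted by strength of relationship:

U(a) = \sum_i u_i(a) \qquad \text{versus} \qquad U_R(a) = \sum_i w_i \, u_i(a), \quad w_i \ge 0

where u_i(a) is person i's well-being if you perform action a, and w_i is the weight you attach to person i: large for your own children, smaller for the unemployed neighbour, smaller still for a distant stranger. Setting every w_i equal to 1 recovers the classical, impartial view.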

Would it be ethically sound to love a machine that is a perfect replica of a human? For example, if it were impossible for anyone to tell the difference, would it be wrong? If this robot were programmed to have human feelings and to think in a manner indistinguishable from a human's, would it be moral to love them as though they were human? (apologies if this is unclear; English is not my first language)

To get to the conclusion first, I think that the answer is yes, broadly speaking. But I'd like to add a few qualifications. The first is that I'm not sure the root question is about whether it would be ethically right or wrong. It's more like: would it be some kind of confusion to love this sort of machine in this way? Suppose a child thinks that a fancy stuffed animal really has feelings and thoughts, but in fact that's not true at all. The toy seems superficially to have emotions and a mind, but it's really a matter of a few simple preprogrammed responses of a highly mechanical kind. This might produce strong feelings in the child—feelings that seem like her love for her parents or her siblings or her friends. But (so we're imagining) the feelings are based on a mistake: the toy is just a toy. On the other hand, if an artificial device (let's call it an android) actually has thoughts and feelings and is able to express them and to respond to what people like us feel or think, then it's hard to see...

The probability in my mind that I am correct in attributing extensive moral personhood to other humans is very high. With non-human vertebrates, I attribute slightly less extensive but still quite broad moral personhood, and in this, too, I am quite confident. But I accept that, since I am a fallible human being, I might be wrong and should perhaps give them no moral personhood, or else moral personhood of the kind I ascribe to humans. Continuing in the same line, I ascribe almost no moral personhood to bacteria and viruses. But again, given that I am fallible, mustn't I accept some non-zero probability that they deserve human-like personhood? If so, and if I am a utilitarian, then given the extremely large number of bacteria and viruses on the planet, it seems that even if I am very sure bacteria are of only minimal moral importance, I must still make serious concessions to them, because it seems doubtful that my certainty is high enough to outweigh their vast numbers. (I am aware it is not entirely clear how...

It's a very interesting question. It's about what my colleague Dan Moller calls moral risk. And it's a problem not just for utilitarians. The general problem is this: I might have apparently good arguments for thinking it's okay to act in a certain way. But there may be arguments to the contrary—arguments that, if correct, show that I'd be doing something very wrong if I acted as my arguments suggest. Furthermore, it might be that the moral territory here is complex. Putting all that together, I have a reason to pause. If I simply follow my arguments, I'm taking a moral risk. Now there may be costs of taking the risks seriously. The costs might be non-moral (say, monetary) or, depending on the case, there may be potential moral costs. There's no easy answer. Moller explores the issue at some length, using the case of abortion to focus the arguments. You might want to have a look at his paper HERE. A final note: when we get to bacteria, I think the moral risks are low enough to be discounted. I...
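
To put the arithmetic behind the worry a little more explicitly (this is my own rough sketch, not anything from Moller's paper): if p is the probability that bacteria have human-like moral status, N is the number of bacteria on the planet (often estimated to be on the order of 10^{30}), and v is the moral weight each would have if so, then the expected moral stake is roughly

E[\text{moral weight}] \approx p \cdot N \cdot v

and even a very small p multiplied by an N that large seems to swamp everything else; that is the questioner's worry. The reply above is, in effect, that when we get to bacteria the relevant probability is low enough that the resulting risk can reasonably be discounted rather than acted on.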
