Recent Responses

Any reason someone could give for why they love me renders me replaceable. For instance, if they love me for my appearance, intelligence, kindness, well, there's always someone more attractive, smarter, kinder. So, all things being equal, they ought to trade up to a better model if presented with the choice; or if God is the most perfect example of all desirable traits, then they ought to love God and no one else. I'd like to ask the panel: in contrast to loving someone because of some quality that they might or might not be the best exemplar of, does it at all make sense to love someone in their particularity, i.e., simply because they occupy a certain position in the time-space continuum? Or does that make a nonsense of the concept of love? Or is it silly, in the first place, to look for reasons for love?

I don't know about loving someone thanks to their position in the time-space continuum, but yes, we can and do love people for their particularity, as you put it. It's important to distinguish what sparks love for someone from what sustains it. It's certainly true that we cannot love just anyone. We vary in the traits we find attractive or lovable. When we first encounter someone, their attributes are what spark our love for them. But love has a history, and as loving relationships develop or evolve, we often come to love someone less for their attributes than for the person that they are. It's then that the beloved seems irreplaceable, for only that person has just that history with us and that distinctive set of attributes.

Christopher Grau's article "Love and history" (https://philpapers.org/archive/GRALAH.pdf) is very good on this subject. Bennett Helm's article on love in the Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/entries/love/) is a good place to get a foothold on the philosophical issues love raises.

Suppose some man is absolutely shy in romantic matters. Still, he loves to talk to beautiful women about all kinds of non-romantic, non-sexual subjects, and people like to talk to him. The main reason why he likes to talk to beautiful women is that it secretly arouses him sexually. Moreover, when talking to women he gets to see them at a close distance, to hear their voices clearly and to smell them. Perhaps on some occasions women will even touch him in a friendly manner. When he is alone at home, this man will remember those conversations and masturbate while thinking about those women and their physical closeness. My question is whether this is wrong (assuming that masturbation is not generally wrong). I think it is not wrong, but I have some doubts. My first problem is that this man is using those women without their full consent. They don’t know his real reason for talking with them nor what he will do “with” their conversation. I think Kant said something like we should not use other people as means for our interests without their consent. Secondly, I think it is not impossible that this practice will, over the years, have some bad effect on this man. Perhaps it will make him come to see women, in general, as “objects”, and perhaps someday he will do something nasty because of that. This may be improbable, but it seems possible. All in all, do you think it is wrong for him to “use” women in this way?

You've asked an interesting question. I'm not going to say much directly about whether this person is doing wrong. I'm going to say some things more in line with a remark of J. L. Austin's in a very different context: "If only we could forget for a while about the beautiful and get down instead to the dainty and the dumpy." (From Austin's "A Plea for Excuses.") What seems interesting here is more at the level of moral psychology than broad moral judgments. The counterpart for daintiness or dumpiness that came to mind was creepiness.

I suspect I'm not the only panelist who found your first few sentences creepy. I'd stress that this isn't a way of saying that you are creepy, but let's try to bring the creepiness reaction into clearer focus.

I don't know of any philosophical literature on creepiness, but this piece from a website called Family Share gets the basics right:

https://familyshare.com/21482/7-signs-youre-a-creep

(Thanks to Taimur Khan for that link.)

What you describe triggers the creepiness response, but the character you describe doesn't fit the stereotype of a creep. The classic creep is someone who makes people uncomfortable because he's either clueless about or indifferent to personal and social boundaries. Staring, sitting too close, asking inappropriately personal questions, oversharing, not backing off when he should: these are the sorts of things that make someone a classic creep. But the person you describe isn't like that. He doesn't creep people out; they like talking with him. And yet at another level the creepiness doesn't go away.

Why is that? One reason is that the matter-of-fact description itself feels invasive—feels like it's crossing boundaries in the way that the classic creep does. It has a meta-creepy feel. That said, clinical descriptions of a good many kinds of behavior bring out the same reaction, and so let's set that aside. Let me offer a three-type picture of deepening creepitude.

The first type is the person who is well-meaning but bad at picking up on social cues. If you explained what made his behavior creepy, he'd feel bad and would want to do better. This is someone you might even try to befriend once you understood them.

The second type may or may not have good social perception, but doesn't care. The creepy behavior wouldn't change if you brought it to his attention, and he may get some sort of pleasure from creeping people out.

The third type is closer to the case you present. This is someone who is skilled at presenting a front that hides his real intentions. As you describe your example, he can simulate innocence and friendliness so that the women he talks to serve the role of pornography for him. His inner attitude is sexual objectification, but he's able to disguise that. This is what makes the portrait creepy. He's crossing boundaries while hiding his tracks in plain sight. What keeps things from adding up to psychopathy is that what drives his behavior is a certain kind of "shyness" as you describe it, or perhaps a sense of inadequacy. If you have a real person in mind, making an all-things-considered judgment of him would call for a lot more information. I can imagine him as someone I'd feel sorry for just as easily as I can imagine him as contemptible.

A fourth type, of course, would be a genuine psychopath: someone who is deeply manipulative and has no empathy at all for the people he interacts with. This person wouldn't care at all about the women he manipulates, but understands the social rules exquisitely well and is skilled at using them for his particular form of gratification.

Where does this leave your question about wrongness? You've identified two reasons for moral worry. That said, many (perhaps even most?) people have entertained fantasies that would not be pleasing to their objects were they revealed. Drawing the lines here is complicated, though what you describe suggests someone who's stepped over at least some of them. Rather than trying to sort all that out, I'd say that the person you describe is morally out of tune, as it were. You depict him as stunted in various ways. He's emotionally stunted, but it goes beyond that. Even if he's not directly harming anyone else, he is manipulative, inauthentic and dishonest. None of those are good ways for a person to be. If this is a real person, we might hope that a skilled, sensitive therapist could help him learn to relate to people more meaningfully. But whether or not we want to use the word wrong for what he does, it's off, and off in ways that have a significant moral dimension.

Good day! I would just like to ask: is truth relative? Personally, I don't think it is, because the question begs you to believe there are instances where it is false, which means it is not constantly applicable, which makes me question it. However, I find a flaw that I can't quite answer. Let's say something that is true in a specific culture is false in another; if this is the case, then how could truth be absolute? Or is truth actually relative? Thank you!

I can't make sense of the idea that truth could be relative. Suppose that I find some dish spicy, while you find it mild. We might be inclined to say that

(R) "This dish is spicy" is true relative to me and false relative to you,

but I think that way of speaking is by no means forced on us and, in fact, is misleading. For if R itself were true, its truth would have to be explained in terms of the truth of this non-relative claim:

(NR) This dish is spicy relative to my taste but not yours.

NR neither is nor implies the claim that truth is relative. Rather, perceived spiciness is. So too with

(P) "Polygamy is acceptable" is true relative to culture A but false relative to culture B.

P is an avoidable and misleading way of making the non-relative claim that culture A accepts polygamy whereas culture B doesn't. The acceptance of polygamy is relative to culture, and that's a non-relative truth.

Logic plays an important role in reasoning because it helps us evaluate the soundness of an argument. But logic doesn't help us in the search for truth. Does philosophy have a method or methods for finding truth? Is something like truth possible in philosophy? I just would like to know because, as a guy who studies the subject, I have tried to answer these questions without success. I lack the necessary resource to answer such a question (a definition of truth). By the way, I'm sorry for the bad English; it's not my native language.

I respectfully disagree with your claim that logic doesn't help in the search for truth. On the contrary, we need logic in order to find out what any proposition P implies -- what other propositions must be true if P is true -- which, in turn, is essential for verifying that P itself is true. This holds as much in science as in philosophy or any other kind of inquiry.

You suggest that you need a definition of the word truth before you can answer the question whether philosophy can find truth. But if that's a problem, it isn't a problem just for philosophy: it affects science and any other kind of inquiry just as much as it affects philosophy. You could say to a physicist, "Until I have a definition of truth, how can I know whether physics can find truth?" The only difference here between philosophy and physics is that a philosopher will take your question seriously.

I don't think you need a definition of truth -- or at any rate not an interesting definition -- in order to see whether philosophy, physics, or anything else can find truth. Indeed, the concept of truth may be so fundamental that it can't be analyzed in terms of more basic concepts. Perhaps the most we can say about truth is platitudes like these:

"Mass-energy is always conserved" is true if and only if mass-energy is always conserved.
"Determinism is compatible with moral responsibility" is true if and only if determinism is compatible with moral responsibility.

The first quoted claim is from physics, while the second quoted claim is from philosophy. But as far as the concept of truth is concerned, they're on a par. To find out if mass-energy is always conserved, we make observations, perform experiments, and extrapolate (on a grand scale!) from the results. To find out if determinism is compatible with moral responsibility, we think as carefully as we can about the concepts of determinism and moral responsibility. Substantial progress has been made on both issues. Indeed, I think the second issue has been resolved.

Some psychologists believe, based on empirical research, that people tend first to make a decision intuitively and then afterwards find a way to provide logical justification for why it was a good decision. I think they use the term "heuristic" as a way to describe an analog process in which we use experience, memory, and pattern recognition as tools with which to make that initial intuitive decision. If this description of the process of how we decide is based on how our minds actually do work, what are the implications for philosophy, which seems to imply that our decision-making process is rational? Isn't the "rational" part of our brain a fairly late evolutionary development, in which it was grafted on top of our nervous system?

If the evidence favors the view that we don't always make decisions by reasoning, then philosophy needn't disagree. If the truth of the matter were that all of our decisions—including decisions about which views are more plausible—amounted to post-hoc "rationalizations," then it would be hard to see how philosophy as we usually understand it would be possible. But the evidence doesn't come close to showing that. Anyone seriously engaged in doing philosophy implicitly assumes that s/he is capable of giving reasons and being swayed by them. But that's different from assuming that we always exercise that capacity or that it never misfires.

A related thought: even if the reasons we give are often after-the-fact rationalizations, it wouldn't follow that our decision or our belief is unreasonable. The underlying mechanism that brought us to our decision or belief may be well-tuned and suited to the task it was performing, even if we have little or no conscious access to how the mechanism really works. Being reasonable doesn't require being able to give an explicit, articulate account of one's reasons. Indeed, it's not at all unusual for someone to have sound judgment about one sort of thing or another and yet not be good at putting the basis for the judgment into words. (Shopworn example: being good at telling whether something is grammatical is one thing; being able to explain or defend the judgment is another.)

Still, it's tempting to assume that when we're doing philosophy, conscious reasoning isn't just an incidental part of the process but is the most important part of the story. And so there's an interesting meta-level question here. If we're just as prone to after-the-fact rationalization when we're doing philosophy as we are in other circumstances, how should this affect our conception of what doing philosophy amounts to? I think you're onto an interesting issue, and I'd be wary of people who offer glib answers. That said, the question isn't really just about philosophy. It seems equally important for science, and for a good many other activities. In the case of science, one common reply is that what's important is not so much the rationality of individual scientists but of the overall enterprise. On this view, science is essentially a social activity and knowledge emerges from something like the wisdom of the group. We're a little less inclined to think of philosophy that way, but maybe we should.

In mathematics, it is commonly accepted that it is impossible to divide any number by zero. But I don't see why this necessarily has to be the case. For example, it used to be thought impossible to take the square root of a negative number, until imaginary numbers were invented. If one could create another set of numbers to account for the square root of negatives, then what is stopping anyone from creating another set of numbers to account for division by zero?

It's actually easy to invent a system of numbers in which division by zero is possible. Just take the usual non-negative rational numbers, say, and add one new number, "infinity". Then we can let anything divided by zero be infinity. Infinity plus or times anything is infinity. Infinity minus or divided by any rational is still infinity. We have a bit more choice about what to say about infinity minus infinity or infinity divided by infinity. But we can let those be infinity, too, if we like. So infinity kind of "swallows" everything else. (Oh, and any rational divided by infinity should be 0.)

Note, however, that many of the usual laws concerning multiplication and division now fail. For example, it's true in the usual case that, if a/b exists, then a = (a/b) x b. But (3/0) x 0 = infinity, not 3; of course, you can carve out an exception for 0, if you wish, but there's no way to make that work in all cases. This is not a fatal flaw, though. In the reals, a x a is never negative; not so when we add imaginary numbers. So we would expect some old generalizations to fail in the new case.

The real question is: Is there anything useful we can do with these new numbers? So far as I'm aware, the answer is "no". There are, in fact, good and useful theories of infinite numbers, but there doesn't seem to be much use for a notion of division involving them.
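To make the construction concrete, here is a small sketch of the extended system in Python. The names (`INF`, `div`, `mul`) and the representation are my own illustrative choices, not a standard library; the rules encoded are just the ones stated above.

```python
from fractions import Fraction

# A toy model of the extended system: the non-negative rationals
# plus one new element, "infinity".

INF = object()  # sentinel for the new number "infinity"

def div(a, b):
    """Division in the extended system."""
    if a is INF:
        return INF              # infinity divided by anything is infinity
    if b is INF:
        return Fraction(0)      # any rational divided by infinity is 0
    if b == 0:
        return INF              # the new rule: a / 0 = infinity
    return Fraction(a) / Fraction(b)

def mul(a, b):
    """Multiplication: infinity times anything is infinity."""
    if a is INF or b is INF:
        return INF
    return Fraction(a) * Fraction(b)
```

The breakdown of the familiar law shows up directly: `mul(div(3, 0), 0)` evaluates to infinity rather than 3, so a = (a/b) x b no longer holds once b can be zero.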

What's the difference between saying that the burden of proof is on one's opponent, and simply saying that they are likely wrong? The idiom of "burden of proof" is used in a way that suggests that it's somehow different from ordinary, straightforward evaluations of evidence and arguments, but I can't think of what that difference could be.

You often do hear people in philosophy say that the 'burden of proof' is on their opponent. And you sometimes hear people argue about who has the 'burden of proof'. I think that what this usually is about is which position is antecedently more plausible, or which position presently has the best arguments in favor of it. It's kind of like the game "King of the Hill". Whoever's on top of the hill is king, and someone else has to knock them off.

Personally, I don't find this way of thinking about philosophical arguments very helpful. It's not that I don't think there is a 'truth' to these matters, but philosophical progress tends not to happen in a linear manner. The fact that something seems plausible today may not be a very good guide to whether it is true. More generally, I tend to think that understanding an issue is in a way more important than knowing how to solve it, so telling me that you've given an argument and now someone else has the 'burden of proof' just sounds gratuitous. You gave an argument. Period.

The law mandates that people must wait until they are 21 years of age in order to consume alcohol, on the grounds that that is the age at which the body is fully capable of handling alcohol. But it is well understood in biology and physiology that people's bodies grow and develop at different rates depending on any number of factors: environment, genetics, etc. And that is just the physical aspect of it. There is also the mental aspect of understanding the potential dangers of alcohol and knowing how much is safe to consume. Many 18 - 20 year old college students consume alcohol without any harm resulting. Is it accurate to draw a line in the sand and say "this is when you are ready for alcohol"? Sartre says that "existence precedes essence" which I interpret to mean that people are responsible for determining the course of their own lives. So shouldn't we have the freedom to determine for ourselves when we are ready for alcohol? Why should the government make that decision for us? If a person is both physically and mentally capable of drinking alcohol, but under the age of 21, then to enforce the minimum drinking age against that person you would be relying on argumentum ad baculum, wouldn't you? It seems like a violation of human dignity to deny me autonomy over my own digestive system.

And people are ready to drive cars at different ages. But I'm going to guess that it would be a bad thing overall if 12-year-olds were allowed to drive. And people are intellectually capable of entering into contracts at different ages, but even the 10-year-olds who think they are probably aren't.

In America, we tend to favor laws that aren't paternalistic. We tend to think that we should err on the side of treating adults as able to make responsible decisions, even though there are lots of cases where they aren't. But in America (and most places), we tend to think that paternalism about non-adults is another matter. It's not just that, on average, non-adults are less ready to make decisions than adults. It's also that there are plenty of adults who would be quite happy to exploit the over-confidence, lack of experience and impulsivity of many non-adults. They'd be happy to sell whisky to 10-year-olds. They'd be happy to hire children for bad wages to do dangerous work.

If you want to argue that there just shouldn't be any differences at all in what the law allows adults and non-adults to do, even if some of the non-adults are, well, children, then you've picked an interesting row to hoe, but that's not how your question is phrased. You seem to agree that not everyone is ready to decide if they should drink. It's just that you think the law should leave alone the people who are ready to make adult decisions. But as long as you agree that it's okay for the law to put some age- or maturity-related restrictions on people's actions, then we need to think about how laws like that can work. It would be an enormous waste of time and resources to have the police or the liquor store employee or whomever apply some kind of test to figure out if the person in front of them is really ready. So the law does what legal systems need to do to function: it creates rules that can be applied and enforced relatively straightforwardly and don't stray too far from common sense. In the case of alcohol, it's an age-based criterion. It's rough. It lets some people drink who aren't ready to make that decision. And it prevents others from drinking who are. But it does at least rough justice to the facts about people's ability to decide this sort of thing.

Now I suppose we could have a drinking license that anyone can get if they can pass some kind of biological/psychological test. And we could even say that unless you've passed the test, you don't get to drink. Almost everyone would agree that this system has way too high a cost in individual liberty. But almost everyone would agree that letting 10-year-olds use their allowances to buy gin doesn't pay enough attention to the fact that children aren't simply small adults.

Maybe you're (I don't know) 17 and mature enough to make wise decisions about alcohol. And so maybe in some attenuated sense, the laws as they stand don't fully respect your dignity. But one likely cost of rejiggering the rules to carve out an exception for you would be a fair bit of harm to people who actually aren't ready to decide these sorts of things.

Legal systems don't have to produce perfect justice in every case to be legitimate. Offhand, laws that put age-based restrictions on drinking don't seem like the sorts of examples that would help make a strong case for anarchy. They seem more like the kind of trade-off that any workable legal system entails.

Many people build their moral beliefs out of deep-seated gut feelings that themselves have no rational grounding. What I wanted to ask is: is this a good way to construct a belief system? If so, could any feeling at all serve as a foundational principle? For instance, would a moral system that takes a deep-seated racism as a building block be any less justified than one that relies on deep-seated empathy?

If I want to construct a sound system of beliefs, then there's not much to be said for merely relying on gut instinct. That's not because gut instincts are necessarily wrong or unreliable. It's because if I'm trying to construct a system as opposed to simply enumerating my commitments, critique, evaluation and adjustment are part of the process. But most people don't have a system of beliefs, and even to the extent that they do, it's bound to be a limited system. My beliefs about some things are much more systematic and reflective than about other things. Given that none of us have endless resources to commit to working out our beliefs, that's inevitable.

But you say that "many people build their moral beliefs out of deep-seated gut feelings that themselves have no rational grounding." I'm worried about that way of putting things. If by "rational grounding" you mean something like "argument from explicit reasons," then I'd disagree that this is what's always needed. It's not just that giving reasons has to stop somewhere on pain of infinite regress. It's that, to use Alvin Plantinga's phrase, some beliefs are "properly basic." I don't need to give reasons to be reasonable in believing that there's a floor under my feet. It's enough that I notice that it's there. (I could be mistaken, of course. Or dreaming. Or crazy. But I don't need to chase down those blind alleys to be reasonable.) I don't need to give reasons to be reasonable in thinking it would be wrong to slip out of the restaurant without paying my bill. It's not that there are no reasons to be given; it's that people who aren't good at identifying and articulating the reasons can still be reasonable in thinking that stealing is wrong.

This sort of thing is true in pretty much every area of knowledge or belief. Being reasonable or justified is one thing. Being able to articulate reasons and justifications is another. There's a much bigger story to be told here, but part of the takeaway will be that we aren't epistemic islands. Knowledge is social; gut instincts don't come from thin air; being reasonable is partly a matter of being plugged into a reasonable community in the right sort of way.

Turning to your specific example: if someone's moral outlook rests on deep-seated racism, it's not reasonable. It's not reasonable because racism is wrong, and moral beliefs built on racism are very likely to be wrong. (Yes; it's quite possible to give a raft of reasons for that claim. But that's a whole other topic.) People whose guts tell them to hold racist attitudes have badly trained guts. People whose guts lead them to reject racism are more fortunate, even if they get tongue-tied when they find themselves arguing with glib-gabbing nasties. But this doesn't mean that the anti-racists are just mindlessly parroting what they've been told. It's quite possible—indeed quite common—to have well-tuned judgment that outstrips one's ability to justify the judgments.

This doesn't at all mean that looking for explicit reasons is bad. It also doesn't mean that societies should do without people who spend time reflecting, investigating, criticizing. What it means is that none of us need do this about everything we think, and for many people, it's fine if they do it in a relatively limited way.

I am reading "How Physics Makes Us Free" and have a question about the central Daniel Dennett thought experiment in the opening chapter. The experiment treats body parts, crucially the brain, as a component of the body like a spark plug in a car (brain in a vat). It is, rather, part of an organism and in my mind indivisible from the nervous system. Even when higher brain function is dead a body will still reject a donated organ and attack it as alien. A thousand same-model spark plugs will work in a car without any issues. It is at the level of biology that identity first appears. Yet the thought experiment treats physics and psychology as the only relevant domains. If the thought experiment were true to biology it would not be enough to replicate all the synapses and nerves but the entire body as the biological instantiation of identity. Am I overstating a life-science claim to some part of this scenario?

You give an interesting argument that the ground of one's identity is biological rather than (just) physical and/or psychological. But it may run into a problem. Not only can one's body reject organs transplanted from someone else. It can also, in the case of autoimmune disease, "reject" (i.e., attack) one's own cells and tissues: sometimes the body doesn't "know its own." Yet it seems incorrect to say that sufferers of autoimmune disease have a "compromised" identity. Does this problem cast doubt on your proposal?
