Recent Responses

Why did all the ancient philosophers seem so fascinated by astronomy? Their interest in math and "physics" is understandable: math can be seen as very similar to certain branches of philosophy in that it is not the study of a particular existence but rather the study of "existence," and physics is the study of the seemingly occult laws that govern everything, which is also very similar to philosophy in a sense. But astronomy is just the extrapolation of those two fields to "arbitrarily chosen" pieces of mass. Math, and even physics to a large extent, are "implicit" (for lack of a better term) in existence, while astronomy is wholly explicit.

You are completely right to notice the early philosophers' absorption in astronomy. I have heard people say that “Greek philosophy began on May 28, 585 BC, at 6:13 in the evening” – because of astronomy. Thales, who is often called the first Greek philosopher, predicted a solar eclipse that we now know to have taken place on that date.

Not only did Thales thereby establish the credentials of philosophers as “ones who know” by being able to predict a coming natural event; he also proved a point about the natural world that encapsulates early philosophy’s turn away from religion. If astonishing events like solar eclipses are not the capricious actions of mysterious gods but rather quite regular events in the natural world, then the world can be adequately studied through rational methods and without dependence on old stories handed down about divine action.

Mind you, this is only one way to understand the earliest philosophers. My point is that interest in astronomy is part of this picture and even one of the central concerns that those philosophers had, not at all something extraneous to their interests. We’d want to say somewhat different things about what astronomy meant to Anaximander and the Pythagoreans, because they had theories unlike anything ascribed to Thales.

Later in the ancient tradition, i.e. with Plato and Aristotle, astronomy came to take on additional significance. But rather than jumping ahead to them, let me back up to the state of astronomy before the philosophers came along.

First of all, it’s highly relevant that astronomy was one of the first success stories in ancient science. The astronomy we encounter in, e.g., the Babylonian records is almost entirely observational. People noted each night what phase the moon was in and which constellations were visible. Thales, who must have gotten his data from Babylon, was able to draw on that long history of watching the night sky. Astronomy let the earliest agricultural civilizations organize their calendars: not only observing that the weather had begun to cool off in the fall, but knowing when to expect it to cool, and therefore when the best time to plant and harvest would be. The first calendars began with the observation of recurring patterns in the skies, along with the special problem of coordinating the lunar calendar with the solar year.

It should be obvious why astronomy provides the basis for a calendar. The movements of the sun, moon, and stars not only fall into patterns once you observe them long enough, but are also quite independent of anything that happens on earth. Earthquakes, floods, droughts, and fires – to say nothing of merely human events like war and migration – have no discernible effect on what we see in the stars. Measuring time calls for something that changes in a quantifiable way without being changed by the events one is using the calendar to measure. Nothing else accessible to those ancient civilizations could serve that purpose the way the observable sky could.

This fact also guides us to the second feature of astronomy that would have appealed to philosophers, in addition to the regularity that made the world feel natural and subject to human knowledge. Studying astronomy seemed to bring human beings into contact with relations and events that remained untouched by lesser natural processes.

We know too much today to think this way. We know that the stars are made of matter like the matter found on earth, and that the observable patterns in our night sky are only the accidental effect of where our little sun is riding around a non-central part of an unimpressive galaxy. But ancient observers who knew none of that perceived what they saw in the sky as close to what we’d call a priori truth.

Ultimately my reply is to reject your assumption about the relative status of mathematics, physics, and astronomy. Astronomy struck a philosopher like Plato as much closer to mathematical truth than anything in the subject that Aristotle called “physics.” It mattered to philosophers (for Plato in particular) precisely because of how close it came to being mathematics.

Plato does distinguish the two subjects, though. In Book 7 of the Republic he has Socrates tell Glaucon that even what we see in the starry sky is visible and hence to some degree subject to the failings of all material objects. Astronomy comes closest of all the sciences to giving us patterns of the abstract truths about geometry and motion, but it still isn’t mathematics. Plato would conclude that although philosophers need to study astronomy as they progress toward higher kinds of knowledge, it is not their final object of inquiry.

Humans can apparently commit to beliefs that are ultimately contradictory or incompatible. For instance, one and the same person, unless they're shown a reason to think otherwise, could believe that both quantum mechanics and relativity correspond to reality. What I wanted to ask is -- the ability to hold contradictory beliefs might sometimes be an advantage; for instance, both lines of inquiry could be pursued simultaneously. Is this an advantage that only organic brains have? Is there any good reason a computer couldn't be designed to hold, and act on, contradictory beliefs?

Fantastic question. Just a brief reply (and only one mode of several possible replies). Suppose you take away the word "belief" from your question. That we can "hold" or "consider" contradictory thoughts or ideas is no big deal -- after all, whenever you decide which of multiple mutually exclusive beliefs to adopt, you continuously weigh all of them as you work your way to your decision. Having that capacity is all you really need to obtain (say) the specific benefit you mention (pursuing multiple lines of inquiry simultaneously). When does a "thought" become a "belief"? Well, that's a super complicated question, particularly when you add in complicating factors such as the ability to believe "subconsciously" or implicitly. On top of that, let's throw in some intellectual humility, which might take the form (say) of (always? regularly? occasionally?) being willing to revisit your beliefs, reconsider them, and consider new opposing arguments and objections. Plus the fact that we may easily change our minds as new evidence arises. That said, it seems to me that in general there's not much incentive to determine exactly how/when a "thought" becomes a "belief." Maybe that happens when you "commit" (to use your word) to the thought in some strong sense, but then again, when/how does that occur? How often must you "declare" what you believe? So with these mitigating considerations in mind, I'd agree with you: yes, we easily hold and consider contradictory thoughts, there may well be advantages to doing so (there may well also be disadvantages, worth thinking about), and though I can't say much about artificial intelligence/cognition, if computers can be designed to express thoughts in the first place, it's hard to imagine what would inhibit them from expressing contradictory thoughts ...

A couple of good primary sources relevant to your question: Daniel Dennett's book Consciousness Explained (he develops a theory wherein the brain expresses many different, often contradictory, thoughts simultaneously), and some work by Tamar Gendler, particularly a paper comparing "beliefs" with "aliefs" (where the latter are thoughts that don't quite rise to the level of beliefs) .... I don't have the title handy, but you should be able to find it.

hope that helps--
ap

What philosophical works have been dedicated to the topic of rational decision making, the adoption of values, or how people choose their purposes in life?

A slim, accessible book on part of this question (and only part!) is Decision Theory and Rationality by José Luis Bermúdez (Oxford University Press, 2009). It requires little or no technical knowledge of decision theory, and it shows why decision theory can't possibly be an exhaustive account or explication of rationality. Bermúdez makes a good case, in simplest terms, that rationality plays at least three key roles: the guidance of action (answering the question of what counts as a rational solution to a decision problem), normative judgement (answering the question of whether a decision problem was set up in a way that reflects the situation it addresses), and explanation (answering the question of how rational actors behave and why). He argues that no form of decision theory (there are lots, and he explores only a few of the more common ones) can perform all three of these roles, and yet that if rationality has one of these roles or dimensions, it has to have all three of them. So decision theory can't be the whole story.

But it sounds as if your question goes well beyond decision theory and rationality in the narrow sense. The question of how people arrive at their values and purposes, or how they should arrive at them in the first place (decision theory takes for granted that people already have settled and stable values), is not one that philosophers since the ancient world have had much useful to say about. For the first couple of thousand years or so after their time, religion was the main source of answers to that question. Since the Enlightenment (i.e. for the past quarter millennium or so), it has been explored mostly not by philosophers strictly speaking, but by literary types with philosophical inclinations such as Diderot, Goethe, Dostoevsky, Robert Musil, Marcel Proust, Thomas Mann and so on (not much in English since about George Eliot). Not that religions have given up (including now various brands of secular religion as well), and there's also a huge self-help literature on the subject, much of it religious or quasi-religious, and most of it of little or no value. If a philosopher (in the modern sense, i.e. an academic philosopher, not in the 18th-century French sense) tries to tell you she has an answer to this question, that is the point where you should stop listening and walk away.

If the basis of morality is evolutionary and species-specific (for instance, tit-for-tat behaviour proving reproductively successful for humans; cannibalism proving reproductively successful for arachnids), is it thereby delegitimised? After all, different environmental considerations could have favoured the development of different moral principles.

There's an ambiguity in the words "basis of morality." It might be about the natural history of morality, or it might be about its justification. The problem is that there's no good way to draw conclusions about one from the other. In particular, the history of morality doesn't tell us anything about whether our moral beliefs and practices are legitimate. Even more particularly, the question of how morality figured in reproductive success isn't a question about the correctness of moral conclusions.

Here's a comparison. When we consider a question and try to decide what the best answer is, we rely on a background practice of reasoning. That practice has a natural history. I'd even dare say that reproductive success is part of the story. But whether our reasoning has a natural history and whether a particular way of reasoning is correct are not the same question. Modus ponens (from "A" and "If A then B," conclude "B") is a correct principle of reasoning whatever the story of how we came to it. On the other hand, affirming the consequent (concluding "A" from "B" and "If A then B") is invalid reasoning even if it turns out that often, in typical human circumstances, there's some sort of advantage to reasoning this way. (Reasoning heuristics can be invalid and yet still be useful rules of thumb, though don't bet on this one being a good example.)
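
Since the contrast between these two argument forms is doing the work in the comparison, here is a minimal illustrative sketch of what "valid" and "invalid" amount to for them. This is just an illustration in Python (the function names are my own, not anything drawn from the discussion above): it enumerates the four possible truth-value assignments to A and B and checks whether the premises can all be true while the conclusion is false.

```python
# Truth-table check of the two argument forms mentioned above.
# A form is invalid if some assignment makes all premises true and the conclusion false.
from itertools import product

def implies(p, q):
    # Material conditional: "If p then q" is false only when p is true and q is false.
    return (not p) or q

def counterexamples(premises, conclusion):
    """Return assignments (A, B) where every premise holds but the conclusion fails."""
    return [
        (a, b)
        for a, b in product([True, False], repeat=2)
        if all(prem(a, b) for prem in premises) and not conclusion(a, b)
    ]

# Modus ponens: from A and "If A then B", conclude B.
mp = counterexamples([lambda a, b: a, lambda a, b: implies(a, b)], lambda a, b: b)

# Affirming the consequent: from B and "If A then B", conclude A.
ac = counterexamples([lambda a, b: b, lambda a, b: implies(a, b)], lambda a, b: a)

print("Modus ponens counterexamples:", mp)             # []              -- none: valid
print("Affirming the consequent counterexamples:", ac) # [(False, True)] -- invalid
```

Modus ponens turns up no counterexample, while affirming the consequent fails in the case where B is true and A is false. That difference in validity is entirely independent of any story about how creatures like us came to reason in either way.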

I assume the larger point is obvious. We say that stealing is wrong, and there's presumably an origins story about how we came to that principle. But that doesn't give us a reason to doubt that stealing really is wrong.

Not quite the same point, but still relevant. There's no such thing as spider morality. Spiders don't subscribe to a code of cannibalism; they just (sometimes) eat their mothers. (BTW: rabbits sometimes eat their young. Happy Easter!) The reason we don't talk about spider morality is that spiders can't step back and ponder whether it's really okay to eat Momma, but we can. Even if eating your mother might be in your reproductive interest, a little thought should suggest that it's probably still not okay.*

The causal basis of a belief is one thing; the soundness of the belief is another. For some reason, this point often seems less obvious for morality than for arithmetic, but it holds all the same. Knowing where a belief came from doesn't tell us whether it's true.

-------------
* Not under any circumstances? No. A bit of imagination will let you come up with Bizarro cases where eating dear Momma would be the best thing to do; details left as an exercise for the reader. But this actually reinforces the point. We can reason about what we ought to do. If we couldn't, there'd be no such thing as morality. And the conclusions delivered by our reasoning won't always be what you'd expect if you simply looked to evolutionary or social history.

Is there any problem, moral or otherwise, in mixing money and enlightenment? For instance, asking people to pay for spiritual guidance. Should philosophers receive a salary?

Even spiritual teachers have to eat. One might be suspicious of someone who withheld "enlightenment" unless the seeker paid, though in many traditions, support for spiritual guidance comes from voluntary donations.

Whatever one thinks about people who explicitly claim to be providing spiritual help, counselors and psychotherapists offer something that's at least in a ballpark not too many miles away. For instance: there are interesting similarities between what one might learn from Buddhist practice and from cognitive behavioral therapy. I, for one, would be puzzled if someone thought a therapist shouldn't charge for her services. Exactly how the lines get drawn here and what, if anything, underlies the difference is an interesting question. If gurus shouldn't be paid, should doctors? How about artists? After all, insofar as I count any of my own experiences as spiritual, some of the more profound ones came from paintings, works of literature, pieces of music.

In any case, I'd suggest caution about lumping philosophers together with spiritual teachers. Although there are some exceptions, most of what philosophers actually do isn't much like spiritual guidance at all. Here's a passage from a classic 20th century philosophical essay:

"Confusion of meaning with extension, in the case of general terms, is less common than confusion of meaning with naming in the case of singular terms." (W. V. O. Quine, 'Two Dogmas of Empiricism.')

I think the author would have been surprised if anyone had thought this was part of a piece of spiritual guidance. Perhaps he might have been less surprised that some people would be puzzled that he received a salary, but the world is full of wonders.

When a person, and especially a talented one, dies young, people sometimes mourn not just what they have in fact lost, but what might have been. But is mourning what might have been predicated on the belief that things could have been otherwise? And if someone is a thoroughgoing determinist and thinks that there's only one way things ever could have turned out, would it be irrational for such a person to mourn what might have been?

One way to interpret the mourner's state of mind is this: the mourner is thinking (optimistically) about the life the young person would have led had he/she not died young. That state of mind is consistent with believing that the young person's death was fully determined by the initial conditions of the universe in combination with the laws of nature.

The deterministic mourner might even recognize that, in mourning the young person's death, the mourner is committed to regretting that the Big Bang occurred just the way it did or that the laws of nature are just as they are: for only if the Big Bang or the laws of nature (or both) had been appropriately different would the young person not have died young. Furthermore, determinism allows that they could have been different. Determinism doesn't say that the initial conditions and the laws of nature are themselves causally determined; that would require causation to occur before any causation could occur.

Although the deterministic mourner's regret may sound odd, it doesn't strike me as irrational. The young person's early death is a painful but deterministic result of the laws of nature and the initial conditions of the universe -- and therefore one reason to regret that the laws and conditions were not appropriately different.

Is there a particular philosophical discipline that deals with large numbers of people doing something innocuous, but having a deleterious effect on a much smaller number of people? If so, does it have a name? Like blame-proration, guilt-apportionment, or anything? Thanks!

Perhaps an example would help, but I think I have the idea. We might want to start by modifying your description a bit. You wrote of large numbers of people doing something innocuous but having a bad effect on a small number of people. If you think about it, however, that means the word "innocuous" isn't really right. And so I'm guessing you have something like this in mind: there's a certain sort of action (call it X-ing) that large numbers of people perform that has something like this handful of features. First, it doesn't harm most people at all. Second, though X-ing is potentially harmful to some people, the harm would be minimal or maybe even non-existent if only a few people X-ed, and only occasionally. Third, however, enough people actually do X that it causes palpable harm to the small minority. And given your suggested terms ("blame-proration," "guilt-apportionment") I take your question to be about just how culpable the people who X actually are.

If that's right, it's a nice question. The broad discipline is ethics, though giving a name to the right subdivision is a bit tricky (especially for someone like me who's interested in the field but doesn't work in it). We're not up in the stratosphere of meta-ethics where people work if they're interested in whether there are objective moral truths, for instance. We're also not at the level of normative ethics where people propose and/or defend frameworks such as utilitarianism or virtue ethics. But we're also not as close to the ground as work in applied ethics. Your issue is theoretical, with both conceptual and normative components, and with potential implications for practical or applied ethics. There's actually a lot of work of this sort in the field of ethics. If I were going to tackle this sort of question, I'd start by assembling some examples to clarify the issues. I wouldn't restrict the examples to cases that are quite as focussed as the question you raise. For instance: I'd also look at cases in which the number of people potentially harmed may not be small, but where the individual actions, taken one at a time, don't seem objectionable. If I drive to work rather than taking the bus, I've increased my carbon footprint—not by a lot in absolute terms, and it's not as though there are no reasons for taking my car rather than the bus. (I'll get to my office earlier and may get more work done, for instance.) But if enough people drive rather than take the bus, the cumulative effect is significant. Your problem is even more specific, but you can see that there's an important connection. And the sort of problem I've identified is one that's been widely discussed. (It's a close cousin of what's sometimes called "the problem of the commons.")

So anyway: the broad discipline is ethics, and the question has both theoretical and practical elements. It's also an interesting issue to think about. Perhaps other panelists who actually work in ethics will have more to say.

1. Stella is a woman and she is mortal. 2. Joan is a woman and she is mortal. 3. Liz is a woman and she is mortal... etc. How many instances of women being mortal do I need before I can come to the general conclusion that all women are mortal?

the short answer: you need as many instances as there are (or have been, or will be) women.

a longer answer: if what you're asking is how many instances do you need before it might be reasonable to infer that all women are mortal -- well there's no absolute answer to such a question (I would say). Partly it's about all such similar forms of reasoning -- in general, how many instances do you need in any inductive argument before it's reasonable to draw the general conclusion. Partly it's about the specific case -- what are the specific biological facts about womanhood (assuming that's a biological category) and mortality, which might govern how many instances are required before the general conclusion is reasonable. Partly it's a matter of social norms -- in the community you inhabit, how many instances will people require of you before they decide you are reasonable etc .....

the short answer has the benefit not merely of being correct but also of being clear!

hope that helps ---
Andrew

Does a stereotype need to be largely false to be objectionable? Many people seem to think so, as when they respond to criticism of stereotypes by replying, "Some stereotypes exist for a reason."

"Largely false" is an interesting phrase -- and there are several different things one might mean by a stereotype, and it's being "true" or "somewhat/largely" true ... plus there are different sorts of "offenses" one may commit when using stereotypes -- but to be brief: Let's assume some stereotype is largely true, i.e. true of many/most of the members of the relevant category. One might still proceed objectionably when using that stereotype simply for assuming that what's true of many/most is in fact true of all. Indeed, we sometimes say that fail to treat an individual with appropriate respect when you simply classify that individual as a member of some category and are disinterested in the particular details that might characterize that individual. So even if the stereotype is true of that individual, it may still be wrong to ASSUME it is true of that individual; and all the more so if it turns out the stereotype is not true of that individual. So a short answer to your excellent question is no: even "largely true" stereotypes might be objectionable.

Now there are all sorts of ways to start qualifying this -- but I'll leave it at that.

hope that helps...
Andrew

What is the difference between a marital relationship and a committed relationship in all aspects except the legal bond? Is there really a difference?

The difference is exactly that marriage is a legal bond, and it involves certain obligations and requirements (for example, those having to do with property) that may not be implied by the "committed relationship." It is, as a result, a more serious affair. There is also the historically related fact that marriage is often taken to have a religious dimension, which the committed relationship may or may not have. What some people dislike about marriage is that in the past it has existed in a hierarchical setting, so that a priest or other official, at a particular moment, says the words, 'I pronounce you man and wife.' It may be that in a particular committed relationship there is such a moment, but it may also not be the case.
