I recently watched a TV show that produced a line of questioning in my head on the nature of reality. How do we define reality? What's the difference between reality and a world that is the perfect replication of reality? What would be the difference between the two worlds? Is it truly possible to know when we are living in reality? I guess I'm mostly asking if there is work from past philosophers that I could read on the subject?

A perfect replica of reality would be like reality in all respects. It would contain trees—real trees. It would contain people—real people. It would contain fake butter—real fake butter. And if it were a perfect replica, everything in reality would be in the replica. So in every sense that matters, it would be real. But I have the feeling you're worried about how you can know that you're not systematically deluded or deceived about more or less everything. This was Descartes' question in the Meditations. He thought that there was one thing he couldn't be deceived about: that he was having doubts, and therefore that he, the doubter, existed. From there to anything substantial, like trees and people and electrons and burritos, is a long way. Descartes thought that just by reasoning about it, he could prove that there's a God who is not a deceiver, and therefore that even though he was no doubt wrong about some things, he wasn't systematically wrong. Most philosophers don't think his argument was very good....

How do we justify our knowledge of the external world? Knowledge of the external world seems to be fallible in any case if we put the threshold of success at the highest level, namely 100% certainty. But this still raises a question: if we want to avoid complete skepticism, how can we be certain that our knowledge is at least likely to be true? In order to assign a probability to the validity of our knowledge of the external world we need to start from perception. The problem is that we can be certain of the existence of perception but not of its source (the matrix/the real world), and that is essential for knowledge of the external world. In order to calculate our probability we then need the number of possible events E and the one favourable event F we're looking for:

E = 2 possible events: external source or non-external source (matrix, hallucination, dream, etc.)
F = 1 favourable event, i.e. external source
P(F) = F/E = 1/2 = 50%

It seems to me that both possibilities are equally likely....

Setting external world skepticism aside for a moment, suppose I'm about to roll a die. Now there are two possibilities: it will come up 1 or it won't. If I reason as you did, I will conclude that the probability is 1/2 that the die will come up 1. Something has gone wrong here. For one thing, we can't get the answers to probability questions just by counting. There are many ways to slice up the space of possibilities, and if we use your rule, the answer we get will depend on how we do the slicing. This is a well-known problem, and there is no simple fix. But there's another problem: the probabilities here aren't chances. They are degrees of belief. Even if we thought (though we shouldn't) that the right way to slice things up is that our experience has an external source or it doesn't, without adding anything more fine-grained, we don't have to agree that the two possibilities are equally probable. You say "it seems to me that both possibilities are equally likely." It's worth wondering whether you...
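The die example above can be made concrete with a short simulation. This is just an illustrative sketch (the trial count and seed are arbitrary choices, not anything from the exchange): the naive "two-cell" rule assigns 1/2 to "comes up 1," while counting the six equally likely faces assigns 1/6, and simulated frequencies side with the latter.

```python
import random

random.seed(0)

# Two ways to "slice" the space of possibilities for one die roll:
#   Partition A: {comes up 1, doesn't come up 1} -> 2 cells; the naive
#                counting rule assigns probability 1/2 to each cell.
#   Partition B: {1, 2, 3, 4, 5, 6}             -> 6 cells; each face
#                gets 1/6, which matches how fair dice actually behave.

trials = 100_000
ones = sum(1 for _ in range(trials) if random.randint(1, 6) == 1)
freq = ones / trials

naive_estimate = 1 / 2     # "it comes up 1 or it doesn't"
uniform_estimate = 1 / 6   # six equally likely outcomes

print(f"observed frequency of rolling a 1: {freq:.3f}")
print(f"naive two-cell estimate:           {naive_estimate:.3f}")
print(f"six-outcome estimate:              {uniform_estimate:.3f}")
```

The observed frequency lands near 1/6, not 1/2: the answer you get from bare counting depends entirely on which partition you count, which is exactly the objection above.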

I am just starting to study philosophy and I am not understanding the claim that all knowledge comes from science. Could you please give me some practical examples?

The reason you feel you don't understand the claim is because it's nonsense. I know that there are three pillows on the bed behind me, but no science was committed in finding that out. I just turned around, looked and counted. I know that I had dinner with friends yesterday. No sciencing there either. I just remember. I know that one of our neighbors recently quit a committee he'd been a member of. Once again, no science; someone in a position to know told me. In fact, it's safe to say that by far most of what we know we know without anyone doing science. Of course people doing science make observations, consult their memories and get information from other people. But so does everyone, and most of us are not scientists and don't do science. The fact that X has important things in common with Y doesn't make X a Y.

Can one have delusional knowledge?

Depends on what you mean. If "delusional knowledge" is supposed to mean that what the person "knows" isn't true, then the usual answer (with which I would agree) is no. We can't know what isn't so. If "delusional knowledge" means beliefs produced by the person's delusion that happen by luck to be true, the answer is no according to most philosophers. The problem is that even though the belief is true, it isn't connected to the facts in the right way. To put it a bit too simply, the fact that what the person believes is true doesn't have anything to do with the fact that they believe it; they would believe it even if it were false. If the question is whether a person who suffers from delusions can know some things, the answer is yes. A deluded person might know their own name; they might know where they live; they might know that hydrogen is the first element in the periodic table. But due to their delusions, they might believe that astral beings are whispering the secrets of the universe in their ear....

Given a particular conclusion, we can, normally, trace it back to the very basic premises that constitute it. The entire process of reaching such a conclusion (or stripping it to its basic constituents) is based on logic (reason). So, however primitive a premise may be, we don't seem to reach the "root" of a conclusion. Do you believe that goes on to show that we are never to acquire "pure knowledge"? That is, do you think there is a way around perceiving truths through a, so to say, prism of reasoning, in which case, nothing is to be trusted?

There's a lot going on here. You begin this way: "Given a particular conclusion, we can, normally, trace it back to the very basic premises that constitute it." If by "conclusion" you mean a statement that we accept on the basis of explicit reasoning, then we can trace it back to the premises we reasoned from simply because we've supposed that there are such premises. On the other hand, most of what we believe doesn't come from explicit reasoning. (I don't reason to the conclusion that I had a burrito for lunch. I just remember what I ate.) And even when it does, the premises don't usually constitute the conclusion. The easiest way to see this is to consider non-deductive reasoning. A detective may conclude that Lefty was the culprit because a number of clues point in that direction. Maybe a witness saw someone who looks like him; maybe he had a particular motive for the crime. But the clues don't constitute Lefty's being the criminal; they merely make it likely. After all, even given all the...

Is certainty a requirement for truth? We know that certainty is not a requirement for knowledge, but how about for truth?

No; truth doesn't require certainty. Whether something is true is a matter of how things are, regardless of whether anyone is certain about it or even aware of it. For example: I have a file cabinet in my office with some papers in it. No one (certainly not me) is certain exactly how many pieces of paper are in the cabinet, though there's a truth of the matter. The truth is determined simply by what's in the cabinet, whether anyone knows or bothers to check. In the case of my file cabinet, it's at least possible to find out how many pieces of paper are in it, and so someone might suggest modifying the view you're asking about. Perhaps there's a truth about a matter only if it's at least possible for someone to become certain of it. And indeed, people have defended views like that. They go under the umbrella of verificationism. There are even some cases where something like verificationism is plausible. For example: we don't believe there's such a thing as absolute uniform (inertial) motion because our physics...

What is right and what is wrong? Who can say what is right and what is wrong? How can we know what it is? Does it really matter, does it make a difference to know what the right thing and what the wrong thing is? I'm talking about stuff like sexism, racism, money, society etc.

Well, things are wrong if we shouldn't do them; they're right if we should. As for which specific things, there are many. Some people think they can boil it down to a simple principle or two (e.g. things are right if they produce the largest balance of good consequences over bad.) Other people think right and wrong are too varied for anything more than rules of thumb. Who can say what's right and what's wrong? If you mean who's qualified to pass judgment, then pretty much all of us are—at least about some things. It's wrong to mock people's infirmities. It's wrong to beat someone up because you're annoyed by something he said. It's wrong to kill someone so that you can collect on her insurance policy. And so on. You're in just as good a position as I am to make those claims. (Of course if you're asking who can make something right or wrong by declaring it right or wrong, there's a pretty good case that no one can. What's right and wrong isn't up to us.) Does it make a difference to know the...

In war memoirs, there is sometimes talk about a feeling of invulnerability among soldiers new to combat: it never occurs to many people that they themselves might be killed. But then something punctures the feeling: it might be that a friend dies, or it might be the sheer quantity or awfulness of death, but at that point the recruit "sees the elephant" and gains a sense of their own mortality. Well, if someone "sees the elephant", how would philosophers characterise the change in epistemological status? For instance, would it be fair to say that the person has gained new knowledge, ie now knows that they're mortal, whereas they didn't know this before? Or is just a case of probability weightings of possible outcomes having changed in the light of new data?

It's a fascinating question. When the recruit "sees the elephant," as you put it, they seem to gain something that calls out for an epistemological characterization, but just what they gain is harder to say. The problem is that the obvious suggestions don't seem to work. The recruit already knew that they are mortal. Likewise, their probabilities haven't shifted. The recruit presumably already thought that death is certain. So what might the recruit have gained if not knowledge or improved probability judgments? One answer is salience. It's one thing to know something; it's another for it to figure significantly in your outlook. If something is salient for me, it plays a different role in guiding my actions than it does for someone who knows it's true but gives it little thought. On one model, our actions are guided by probabilities and judgments of importance or value/disvalue. But not everything that we know or believe plays a role in our decision-making, and likewise not everything we see as good or...

If we all have personal biases (ie. every individual, being unique, perceives the same event slightly differently), how can we trust anyone to provide the real truth?

An incomplete answer, but relevant, I hope. Suppose the question is: did Prof. Geisler show up for class on Monday? We ask students enrolled in the class. All the students who were there in the room at class time say yes: Prof. Geisler was there. In fact, she arrived on time, and taught a full class. Let's grant that every person in the room had a slightly different take on exactly what went on in the room at that time. Let's also grant that some of what some people would say happened will be inaccurate, and may reflect their biases and psychological idiosyncrasies. The question, however, is whether Prof. Geisler showed up. There's no reason to think these differences in perception got in the way of judging that. In one way this is a trivial example, but it reflects something extremely common. Even with our very real quirks and biases, there's an enormous amount of what we perceive and believe for which those quirks and biases are simply irrelevant. Individually, most of these facts may be...

How much does one have to "know about" a person to "know" a person? When does a stranger become an associate or acquaintance, an associate or acquaintance become a friend, and a friend become an intimate? When is a stranger no longer a stranger? How does one know when one is "close" to someone? Those questions have bothered me for quite some time. If I read a biography of a celebrity whom I have never met, and I am able to memorize the entire contents of the biography, could it be argued that I "know" the celebrity without ever actually having met the celebrity? Since no human being has complete knowledge about any other human being, do we truly know anyone except for ourselves?

I think the best answer is that there's no one answer. Let's start with the easiest of your questions: you've read a biography of someone you've never met. Do you know them? Most people would say "No" because when we say things like "I know Robin," we generally mean that we are acquainted with Robin—have actually met Robin. Knowing about someone is knowledge by description but not by acquaintance, to borrow Bertrand Russell's terms. In the other cases, there's no simple answer because the terms "mere acquaintance," "friend," "close friend" and so on aren't precise; there's no cut-off. It's like the case of baldness. There's no exact point at which a formerly hirsute person becomes unequivocally bald. The case you've focused on is an instance of a very general phenomenon. Some people are definitely tall, some are not tall, and some are on the border. Some bananas are definitely ripe, some are not, and for some there's no definite right answer. As you can see, it would be easy to make a very...
