I'd say yes. One way would be if (for whatever reason) you ceased to believe some proposition P that you formerly knew to be true. If belief is a precondition for knowledge, then you'd no longer know that P. Another way might be, while retaining your belief of P, to come to believe (or indeed even know) some proposition Q, belief in which undermines your justification for believing P and thereby deprives you of the knowledge of P that you formerly had. See Carl Ginet's article "Knowing Less By Knowing More" (linked here).
I think of philosophers as people who describe and debate in order to reach defensible positions concerning ethics, truth, etc. But what is uniquely philosophical about such practices? Philosophers identify fallacies, but so do logicians. Philosophers are trained in intellectual movements, but so are historians. As Rorty put it, where is the Fach in philosophy? Is philosophy more about excellence in argumentation than content?
I myself would answer "Yes" to your final question. I think you've put your finger on what distinguishes good philosophers from good practitioners of other disciplines: the desire and the ability to attain the highest standards of argumentative attentiveness and rigor regardless of the topic at hand. We can rely on good mathematicians, physicists, and biologists (for example) to be careful and rigorous about math, physics, and biology. But get them outside their scientific specialties, and the results are very much hit-or-miss, with a lot of miss: Richard Feynman, Stephen Hawking, Neil deGrasse Tyson, Lawrence Krauss, Jerry Coyne, and Bill Nye on the nature and value of philosophy; Sam Harris on free will and on the ability of science to resolve ethical questions; Coyne on free will; Richard Dawkins on free will and on the sorites paradox; Krauss on why there's something rather than nothing. Those are examples of sloppy argumentation that come readily to mind from just my own reading. It's not...
My dictionary's definition of "definition" is an "exact description of a thing". But that definition doesn't itself contain an exact description of a thing; it only mentions one. So it doesn't qualify as a definition. Paradox?
I'm intrigued by your suggestion that there may be a paradox here, but I'm having trouble reconstructing your reasoning. As far as I can tell, your reasoning relies on the premise that any definition must contain whatever it defines. But I'm not sure that premise is plausible. Merriam-Webster.com defines "walrus" as follows: "a large gregarious marine mammal (Odobenus rosmarus of the family Odobenidae) of arctic waters that is related to the seals and has long ivory tusks, a tough wrinkled hide, and stiff whiskers and that feeds mainly on bivalve mollusks." That definition (more precisely, the definiens) doesn't contain the word ("walrus") being defined, which is good: otherwise the definition would be unhelpful. Nor does the definition contain a walrus; we'd need a cage to do that. Instead, the definition gives us a string of words meant to pick out the walrus from among other things. If it's a genuine definition, then the principle on which it seems you're relying is false. But I may...
We know that for now, at least, it's impossible to go back in time scientifically. But what if you really needed to? Say you had done something really bad and were desperate to go back in time and correct what you did, so you wouldn't suffer the consequences you're suffering in the present. Provided you would not cause a disaster by going back in time, and that you would only change the bad things you did, it is an interesting concept. With this context, if you could be given a drug that would leave you asleep for the rest of your life (a coma), would you take it? Read on, there's more. In this sleep, you will have a dream set from just before your mistake. So essentially, it causes you to simulate the past and the rest of your life in your head. It seems real, but it isn't. My question is, would this be the same as going back in time and changing things in reality? Does reality matter more, or our interpretation of it?
First a terminological quibble. By "scientifically impossible," I take it you really mean just "technologically infeasible," i.e., impossible given the limits of current technology. As I see it, what's scientifically possible or impossible depends only on the laws of nature, which are standardly regarded as unchanging over time (or at least over any time that humans will experience). I think the jury's still out on whether backward time-travel is scientifically impossible in this latter sense. To your question: I think there's something self-contradictory in the idea of "correcting what you did" if that means "bringing it about that you never did what you in fact did." Either (1) you did it, or (~1) you never did it. I can't see how any consistent story features both (1) and (~1). In that sense, then, there's no such thing as (2) "going back in time and changing things in reality" and therefore nothing that's "the same as" (2). See section 1.2 of the SEP article on time-travel.
I had a brief chat with a work colleague today about the nature of reality and our perception of it. Essentially, his contention was that because we all basically agree on our external physical reality (e.g. when I hand him a cup of tea we both agree that I've just passed him a cup of hot tea), there must be an external reality because we both seem to agree on what it's like. If there wasn't such an external reality and we didn't essentially agree on it, he pointed out, we wouldn't be able to even ask for a cup of tea because my idea of what a cup of tea actually is would be totally different (or at least different enough to make meaningful communication difficult). Therefore, he concluded, it's common sense that we must be talking about and looking at the same "real" things and that we both experience them in the same -- or very similar -- way. Age-old philosophical problem solved!
But it can't be that simple. So my question is what are the main problems with this "consensus" view of reality? Or, to put...
I've seen only one of the Matrix films, the first one. You might ask your colleague how he can be certain that things in our world aren't as they're portrayed in that film: that is, you and he merely believe you're conversing in the ordinary way about an ordinary teacup, when in fact you're both hooked up to a computer that's simulating the conversation, the teacup, and your surroundings. Is there some internal indication, something about the way things feel to him during such a conversation, that rules out a Matrix-style simulation? What could that be? Granted, neither you nor your colleague is at all inclined to believe that you're living a simulated existence, but that's just how the Matrix wants it!
It sounds to me like the arguments about the existence of God are displaced from what the essence of the argument is "really" about.
It seems pretty clear from the equations of quantum mechanics that there is a Deity. However, whether She takes any interest in human beings, let alone the quotidian details of our everyday lives, is another matter.
That is where the argument "really" seems to be: if we posit that there is a Deity, what reasons do we have to believe that She cares about our everyday lives or intercedes in response to a prayer? It may well be that She is like a parent with grown children: "I took care of you and raised you to adulthood and gave you all the skills and abilities you need to take care of yourself on your own. Good luck!"
Isn't that the basis of the argument in favor of free will? If we do have free will, then why would God respond to our prayers?
You write: "It seems pretty clear from the equations of quantum mechanics that there is a Deity." I must say: That's as striking a statement as I can recall reading in quite a while! I wonder if it's the view of most of those who do QM for a living. Indeed, aren't there aspects of QM (indeterminacy, randomness, the Measurement Problem, the difficulty of reconciling QM with General Relativity, etc.) that suggest that no Designer is responsible for QM? Anyway, you draw an analogy between the Deistic God and a parent of grown children. But parents of grown children don't take the totally hands-off attitude toward their children that Deism attributes to God. Not if they're decent parents. What decent parent would deliberately choose not to call for help if she saw her adult child clutch his chest and collapse on the pavement? The Deistic God is a puzzling figure: knowledgeable and powerful enough to create a universe of mind-boggling size and complexity but morally callous enough not to care if the universe She...
One classification of evil is natural evil: those evils that are explained by laws of nature, without need for a personal agent. But is it appropriate to call natural disasters evil? The usual connotation of evil is something that pertains to personal agents, so it seems to me that to classify natural disasters as evil would be misleading. If my argument is sound, why has "natural evil" become a common term in the discussion of the problem of evil?
My hunch is that the term "natural evil" arose from the older label "the problem of evil" as a way to divide the data into events caused by agents and events not caused by agents. I don't think the choice of terminology is significant. One can refer to the problem of evil as the "problem of suffering" and then distinguish suffering caused by agents from suffering not caused by agents. The background assumption in any case is that suffering -- unlike, say, breathing -- isn't morally neutral: all else being equal, suffering is something undesirable that any morally sensitive person tries to prevent or relieve. So I don't think that substituting "suffering" for "evil" makes a difference to the problem or its solution. From my perspective, the important point is that if an omniscient and omnipotent God exists, then any suffering that occurs anywhere, regardless of its cause, is suffering that God chooses to permit .
Some philosophers have said that the very static nature of concepts excludes the dynamic and thus undefinable nature of reality. I don't know how strong that argument is. They say that to conceptualize change by reference to unchanging principles is somehow wrongheaded. Can philosophers understand change while avoiding the problems these critics raise, and which philosophers in the analytic tradition of thought have grappled with these criticisms?
For an accessible and up-to-date discussion of these issues, I recommend Time, Language, and Ontology (OUP, 2015) by Joshua Mozersky. The author's interview at 3:AM Magazine contains a good overview of the book.
I have two questions about logic that have vexed me for a long time.
Smith has written two great books of philosophy. Now he has come out with a third book. Therefore, that book will probably be good too.
Smith has flipped a coin twice, and both times it has come up tails. Now Smith will flip the coin a third time. Therefore, that flip will probably end up 'tails' too.
The logical form of inductive arguments seems to contribute nothing; the premises seem to do no logical work supporting the conclusion - is that right?
Smith has written two great books of philosophy. Now he has written a third. Any author that has written two great books of philosophy, and then writes a third, has probably written a third great book. Therefore, Smith has probably written a third great book.
That seems a deductive argument, because the general premise was added. And if the premises are true, they do seem to support the conclusion with necessity, even though the conclusion is probable; it is the knowledge of the world and not...
I think both arguments can be analyzed as inductive arguments and still distinguished in terms of their quality. The book argument is a stronger inductive argument than the coin-toss argument for a simple reason: the probability that Smith's book C is great isn't independent of whether Smith's books A and B are great. That is, Smith's having written great books A and B makes the probability that Smith's book C is great higher than it would be had Smith not already written two great books. Important: higher than it would be otherwise, which needn't mean higher than one-half. Even though Smith's track-record raises the probability that book C is great, the track-record needn't make it more probable than not that book C is great. By contrast, the probability of tails on any given toss of a fair coin is independent of whether the coin came up tails twice already: that history of tosses neither increases nor decreases the probability of tails on a third toss.
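The dependence point can be made concrete with a small Bayesian sketch. The numbers below are invented purely for illustration (a hypothetical 80%/20% split between "strong" and "weak" authors); the point is only that, unlike coin tosses, a track record of great books is evidence about the author and so raises the probability for the next book:

```python
from fractions import Fraction

# Independence: for a fair coin, the probability of tails on the third
# toss is unaffected by the first two results.
p_tails = Fraction(1, 2)
p_tails_given_two_tails = Fraction(1, 2)  # the history is irrelevant
assert p_tails == p_tails_given_two_tails

# Dependence (toy model, made-up numbers): suppose authors are either
# "strong" (a great book 80% of the time) or "weak" (20%), equally common.
p_strong = Fraction(1, 2)
p_great_if_strong = Fraction(8, 10)
p_great_if_weak = Fraction(2, 10)

# Prior probability that any one book is great:
p_great = p_strong * p_great_if_strong + (1 - p_strong) * p_great_if_weak

# Posterior probability the author is strong, given two great books (Bayes):
evidence = (p_strong * p_great_if_strong**2
            + (1 - p_strong) * p_great_if_weak**2)
p_strong_given_two = p_strong * p_great_if_strong**2 / evidence

# Probability that book three is great, given the track record:
p_third_great = (p_strong_given_two * p_great_if_strong
                 + (1 - p_strong_given_two) * p_great_if_weak)

print(p_great)        # 1/2
print(p_third_great)  # 13/17 -- higher than the 1/2 prior
```

Two great books shift the posterior toward "strong author," so the conditional probability for book three rises above the prior, whereas the coin's conditional probability never moves. (As the answer notes, the raised probability needn't exceed one-half in general; it does here only because of the particular numbers chosen.)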