I would really like to know what logic is. The Stanford Encyclopedia of Philosophy has TOO MANY articles on logic for someone like me. Let me list most of them: action logic, algebraic propositional logic, classical logic, combinatory logic, combining logic, connexive logic, deontic logic, dependence logic, dialogical logic, dynamic epistemic logic, epistemic logic, free logic, fuzzy logic, hybrid logic, independence friendly logic, inductive logic, infinitary logic, informal logic, intensional logic, intuitionistic logic, justification logic, linear logic, logic of belief revision, logic of conditionals, logical consequence, logical pluralism, logical truth, many-valued logic, modal logic, non-monotonic logic, normative status of logic, paraconsistent logic, propositional dynamic logic, provability logic, relevance logic, second-order and higher-order logic, substructural logic, temporal logic. I have started reading some of these articles, but I still haven't found an answer to my basic question. In...

At the risk of a bit of self-promotion, readers might find my introductory article on logic for the Encyclopedia of Artificial Intelligence to be helpful. You can read it online at http://www.cse.buffalo.edu/~rapaport/Papers/logic.pdf.

Dear Madam or Sir, this is not a question but a request: Is there an introduction to philosophy that you would recommend? Hoping to get an answer, I remain, sincerely yours, Matthias

There are basically two kinds of intro philosophy texts: general intros and anthologies of "classic" papers. As Andrew mentioned in his reply to your question, a search on amazon.com will turn up many good anthologies. But the two general intros that I heartily recommend are: Nagel, Thomas (1987), What Does It All Mean? A Very Short Introduction to Philosophy (Oxford: Oxford University Press); and an "oldie but goodie": Russell, Bertrand (1912), The Problems of Philosophy, various editions and publishers.

Do philosophers use computers to find logical proofs? Or are there good reasons the task of programming a computer to do so is difficult (perhaps because of the complexity of the proof required, or perhaps because you need a human for some sort of creative step)? Just from my experience of undergrad logic, it seemed to me there was a lot of repetition in what I was doing, and that it was a task I could learn and get better at -- i.e., it wasn't down to pure creativity, but there were learnable, repeatable methods of searching that perhaps could be codified, made systematic.

The short answer to your first question is "not usually". The short answer to your second question is: It is difficult because of the complexity of the proof. Verifying a proof is, indeed, "codifiable" ("computable" is the technical term) and relatively easy to program (with an emphasis on "relatively"!). Creating proofs is rather more difficult but can also be done, especially if the formula to be proved is already known to be provable. Finding new proofs of unproved propositions has also been done, but is considerably more difficult and is the focus of much research in what is called "automated theorem proving". One of the first AI programs, if not the first, was the Logic Theorist, developed by Nobel Prize winner Herbert Simon, Allen Newell, and Cliff Shaw, in 1955. So this is an area that has indeed been looked at. A rule of inference known as "resolution" is used in automated theorem proving and lies at the foundation of the Prolog programming language ("Prolog" = "PROgramming in LOGic"). When...
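
To give a concrete flavor of the resolution rule just mentioned, here is a minimal sketch of a propositional resolution prover. It is my own toy encoding for illustration only, nothing like the Logic Theorist or the machinery inside a real Prolog system, and the clause representation and function names are invented for the example:

    from itertools import combinations

    def negate(lit):
        """Return the complementary literal: 'P' <-> '~P'."""
        return lit[1:] if lit.startswith('~') else '~' + lit

    def resolve(c1, c2):
        """Yield every resolvent of two clauses (sets of literals)."""
        for lit in c1:
            if negate(lit) in c2:
                yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

    def provable_by_resolution(premises, goal_literal):
        """Refutation: add the negated goal and saturate with resolution
        until the empty clause (a contradiction) appears."""
        clauses = {frozenset(c) for c in premises} | {frozenset({negate(goal_literal)})}
        while True:
            new = set()
            for c1, c2 in combinations(clauses, 2):
                for resolvent in resolve(c1, c2):
                    if not resolvent:            # empty clause derived
                        return True
                    new.add(resolvent)
            if new <= clauses:                   # nothing new: stop searching
                return False
            clauses |= new

    # From P and (~P or Q), resolution derives Q.
    print(provable_by_resolution([{'P'}, {'~P', 'Q'}], 'Q'))   # prints True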

On April 10, 2014, in response to a question, Stephen Maitzen wrote: "I can't see how there could be any law more fundamental than the law of non-contradiction (LNC)." I thought that there were entire logical systems developed in which the law of non-contradiction was assumed not to be valid, and it also seems like "real life" suggests that the law of non-contradiction does not necessarily apply to physical systems. Perhaps I am not understanding the law correctly? Is it that at most one of these statements is true: either "P is true" or "P is not true"? Or is it that at most one of these statements is true: either "P is true" or "~P is true"? In physics, if you take filters that polarize light, and place two at right angles to each other, no light gets through. Yet if you take a third filter at a 45 degree angle to the first two, and insert it between the two existing filters, then some light gets through. Based on this experiment, it seems like the law of non-contradiction cannot be true in...

There are real-life situations in which contradictions can appear. Consider a deductive AI knowledge base that can use classical logic to infer new information from stored information (think of IBM's Watson, for example). Suppose that a user tells the system that, say, the Yankees won the World Series in 1998. (It doesn't matter whether this is true or false.) Suppose that another user tells it that the Yankees lost that year. Now the system "believes" a contradiction. So, by classical logic, it could infer anything whatsoever. This is not a good situation for it to be in. One way out is to replace its classical-logic inference engine with a relevance-logic inference engine that can handle contradictions. For example, the SNePS Belief Revision system will detect contradictions and ask the user to remove one of the contradictory propositions. (It can also try to do this itself, if it has supplementary information about the relative trustworthiness of the sources of the...
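
Here is a tiny, hypothetical sketch of the detection step, written only to make the idea concrete; it bears no resemblance to how SNePS actually represents beliefs, and the class and proposition encodings are made up for the example:

    # A toy knowledge base that refuses to quietly absorb a direct contradiction:
    # when a new assertion is the negation of a stored one, it reports the clash
    # instead of letting classical logic infer anything whatsoever.

    class TinyKB:
        def __init__(self):
            self.beliefs = set()            # propositions as strings, '~' for negation

        @staticmethod
        def _negation(p):
            return p[1:] if p.startswith('~') else '~' + p

        def tell(self, proposition):
            clash = self._negation(proposition)
            if clash in self.beliefs:
                # Belief revision would go here: ask the user (or consult source
                # trustworthiness) to decide which of the two to retract.
                print(f"Contradiction detected: {proposition!r} vs {clash!r}")
                return False
            self.beliefs.add(proposition)
            return True

    kb = TinyKB()
    kb.tell("Yankees won the 1998 World Series")
    kb.tell("~Yankees won the 1998 World Series")   # flagged, not silently believed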

As technology develops, do you think it will ever make sense to say that a computer "knows" things?

That depends, of course, on what you mean by "know". On one well-known, though flawed, definition, to know that P is to have a justified true belief that P. Suppose that you are willing to say that a computer believes-true a proposition P if P is true and the computer has (a representation of) P stored in its database. And suppose that a computer would be justified in such a belief if it deduced it from axioms (also stored in its database). Because some computers can deduce propositions according to rules of logic, such a computer could, on this definition of "know", know that P. Sometimes, people feel that to know something is also to be aware that one knows it, or that one knows that one knows it. So, could a computer know something in this sense? That, of course, depends on what might be meant by "aware". But it is certainly possible for a computer to believe that it believes something (for example, in the sense that it not only has P stored in its database, but also has something like "I...
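
To make the suggestion concrete, here is a toy rendering of those three conditions as they might apply to a database agent. The encoding is entirely my own invention for illustration, not a serious proposal:

    # The agent "knows" P iff P is stored in its database (belief), P is in fact
    # the case in the world model (truth), and the agent holds a record of having
    # deduced P from its axioms (justification).

    def knows(agent_db, axioms, derivations, world, p):
        believes  = p in agent_db                 # (a representation of) P is stored
        is_true   = world.get(p, False)           # P is in fact the case
        justified = p in derivations and set(derivations[p]) <= set(axioms)
        return believes and is_true and justified

    world       = {"2+2=4": True}
    agent_db    = {"2+2=4"}
    axioms      = {"Peano axioms"}
    derivations = {"2+2=4": ["Peano axioms"]}     # a record of how P was deduced

    print(knows(agent_db, axioms, derivations, world, "2+2=4"))   # prints True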

Is it possible for two tautologies to not be logically equivalent?

Stephen Maitzen raises some interesting philosophical issues, but, of course, his response is not the "textbook" answer to the question (but, then, isn't that what philosophy is all about: questioning "textbook" answers? :-) The "textbook" answer would go something like this: By definition, a tautology is a "molecular" sentence (or proposition---textbooks differ on this) that, when evaluated by truth tables, comes out true no matter what truth values are assigned to its "atomic" constituents. So, for example, "P or not-P" is a tautology, because, if P is true, then not-P is false, and their disjunction is true; and if P is false, then not-P is true, and their disjunction is still true. Furthermore, by definition, two sentences (or propositions) are logically equivalent if and only if they have the same truth values (no matter what truth values their atomic constituents, if any, have). So, because tautologies always have the same truth value (namely, true), they are always logically...
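
For readers who like to see the "textbook" definitions mechanized, here is a small sketch: it enumerates every assignment of truth values to the atomic constituents, checks whether a sentence is a tautology, and checks whether two sentences are logically equivalent. The encoding of sentences as Python functions of their atomic truth values is my own illustrative choice:

    from itertools import product

    def assignments(atoms):
        """All combinations of truth values for the given atomic sentences."""
        for values in product([True, False], repeat=len(atoms)):
            yield dict(zip(atoms, values))

    def is_tautology(sentence, atoms):
        return all(sentence(v) for v in assignments(atoms))

    def equivalent(s1, s2, atoms):
        return all(s1(v) == s2(v) for v in assignments(atoms))

    # "P or not-P" and "if P then P" are both tautologies...
    excluded_middle  = lambda v: v['P'] or not v['P']
    self_implication = lambda v: (not v['P']) or v['P']         # P -> P

    print(is_tautology(excluded_middle, ['P']))                 # True
    print(is_tautology(self_implication, ['P']))                # True
    # ...and, since both are true under every assignment, they are equivalent:
    print(equivalent(excluded_middle, self_implication, ['P'])) # True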

Is there a role of mathematics in the development of human consciousness?

In addition to Hofstadter's wonderful writings, you might also be interested in work done on the relationships between mathematics and cognition (more generally than just consciousness). Take a look at these classics in that area: Rochel Gelman & C. R. Gallistel, The Child's Understanding of Number (Harvard University Press, 1986); George Lakoff & Rafael Núñez, Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being (Basic Books, 2001); Stanislas Dehaene, The Number Sense: How the Mind Creates Mathematics (Oxford University Press, 2011).

Is adultery really immoral? The act itself is mostly legal, so why can't it be mostly moral? I'm a male bachelor, so I can only argue from my point of view. Adultery is a simple biological urge that manifests itself onto two persons, one or both of whom are married. Marriage today is becoming more and more a simple legal contract, routinely terminated and routinely redefined by judges and plebiscites. The ease with which marriages can be terminated either on paper or in practice is just a reflection of the fact that people often change in their feelings towards one another--love fades within marriage and sometimes erupts outside marriage. Making it with a married woman can be very thrilling and the same woman would not be equally exciting if she were single; the supposedly unavailable is always more desirable than the easily attainable. Married women accept advances because their husbands can no longer give them excitement, romance or adventure, so why not a net utilitarian gain for two people, and no...

Because, for every X, there is a philosophy of X, it should come as no surprise that a well-known philosopher has written a book on this subject! I refer you to Richard Taylor's Having Love Affairs (Buffalo: Prometheus Books, 1982), ISBN 0-87975-186-X, http://www.amazon.com/Having-Love-Affairs-Richard-Taylor/dp/087975186X

I have a question about "solved" games, and the significance of games to artificial intelligence. I take it games provide one way to assess artificial intelligence: if a computer is able to win at a certain game, such as chess, this provides evidence that the computer is intelligent. Suppose that in the future scientists manage to solve chess, and write an algorithm to play chess according to this solution. By hypothesis, then, a computer running this algorithm wins every game whenever possible. Would we conclude on this basis that the computer is intelligent? I have an intuition that intelligence cannot be reduced to any such algorithm, however complex. But that seems quite strange in a way, because it suggests that imperfect play might somehow demonstrate greater intelligence or creativity than perfect play. [If the notion of "solving" chess is problematic, another approach is to consider a computer which plays by exhaustively computing every possible sequence of moves. This is unfeasible with...

Update: An interesting article about one of my computer science colleagues, on the subject of cheating in chess and touching on the nature of "intelligence" in chess, just appeared in Chess Life magazine.

The ability to play a game such as chess intelligently is certainly one partial measure of "intelligence", but an agent that could only play very good chess would hardly be considered "intelligent" in any general sense. So I don't think that a winning chess program would be "intelligent". There is also the question of how the program plays chess. Deep Blue and other computer chess programs usually do well by brute-force search through a game tree, not by simulating/mimicking/using the kinds of "intelligent" pattern-recognition behaviors that professional human chess players use. But some might argue that that doesn't matter: If the external behavior is judged to be "intelligent", then perhaps it shouldn't matter how it is internally accomplished. For some illuminating remarks on this issue (not in the context of games, however), take a look at: Dennett, Daniel (2009), "Darwin's 'Strange Inversion of Reasoning' ", Proceedings of the National Academy of Sciences 106, suppl. 1 (16 June): 10061-10065 ...
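
To illustrate what "brute-force search through a game tree" amounts to, here is a toy minimax search over a hand-built tree. It is only a sketch of the basic idea; real chess engines such as Deep Blue add alpha-beta pruning, sophisticated evaluation functions, and (in Deep Blue's case) special-purpose hardware, and the tree and scores below are made up for the example:

    def minimax(node, maximizing):
        """Return the best achievable score from this node with perfect play."""
        if isinstance(node, (int, float)):      # leaf: the score of a final position
            return node
        scores = [minimax(child, not maximizing) for child in node]
        return max(scores) if maximizing else min(scores)

    # Each inner list is a position; numbers are scores of terminal positions
    # from the maximizing player's point of view.
    game_tree = [
        [3, 5],        # positions reachable after our first option
        [2, [9, 1]],   # after our second option (one branch goes deeper)
    ]

    print(minimax(game_tree, maximizing=True))   # prints 3: best guaranteed outcome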
