1. Peirce’s Method for Teaching Logic

Peirce’s existential graphs (EGs) are the simplest, most elegant, and easiest-to-learn system of logic ever ...
Some teachers who would like to adopt EGs complain that there is no suitable textbook. But they can use a traditional textbook: explain the text in terms of EGs and ask the students to use EGs to do the exercises.
The students learn to think in EGs and to translate other notations to the more natural EGs.
In my own experience, I had taught logic in the traditional way long before I discovered Peirce’s EGs. After I learned EGs, I still used the traditional methods and textbooks in teaching, but I added two hours on EGs at the end of the course: one hour on the syntax and one hour on proofs. The students’ major complaint was that I “had saved the good wine for last.” Since then, I have found that students at all levels master the fundamental concepts of logic faster and more thoroughly with the EG notation. Later, they can transfer their skills to any notation, including their native language. Transfer is the ultimate goal. As Peirce (1878) said, “The very first lesson that we have a right to demand that logic shall teach us is, how to make our ideas clear.”
6. Advanced Topics

Students find the EG graphic notation easy to learn and use, and professional logicians enjoy a novel way of viewing familiar topics. But the symmetry of the EG structure and the simplicity of the rules of inference also support a proof theory that is simpler and more powerful than traditional methods. All the common proof procedures can be derived from Peirce’s rules, but the converse does not hold: some proofs by Peirce’s rules can be orders of magnitude shorter than proofs by other methods. The symmetry properties, for example, imply the reversibility theorem.
• Reversibility Theorem. Any proof of p ⊢ q by Peirce’s rules of inference for existential graphs can be converted to a proof of ~q ⊢ ~p by negating the EGs for each step and reversing their order.
• Proof. Let s0, ..., sn be the steps of the proof, with s0 as the EG that represents p, sn as the EG that represents q, and each ri for i from 1 to n as the rule that converts si-1 to si. For the reverse sequence of negated EGs, note that ~[sn] represents ~q, and ~[s0] represents ~p. Furthermore, each ~[si] is converted to ~[si-1] by the inverse of rule ri. The only question is whether the inverse conversions are permissible. Since the rules 2e, 2i, 3e, and 3i are equivalence operations, their inverses are permissible in any area, positive or negative. But 1e can only be performed in a positive area, and its inverse 1i can only be performed in a negative area. Since each step of the reverse proof is negated, the polarity (negative or positive) of every nested area in each ~[si] is reversed. Therefore, any application of 1i or 1e in the forward direction can be undone by its inverse (1e or 1i) in the reverse direction.
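The mechanical part of this construction — negating each step and pairing each rule with its exact inverse — can be sketched in a few lines of Python. This is an illustrative sketch, not an EG proof checker: the function names are ours, graphs are plain EGIF strings, and no permissibility checking is performed.

```python
# Each of Peirce's rules paired with its exact inverse.
INVERSE = {'1i': '1e', '1e': '1i',
           '2i': '2e', '2e': '2i',
           '3i': '3e', '3e': '3i'}

def negate(eg):
    """Enclose an EGIF graph (a plain string here) in one more negation."""
    return '~[' + eg + ']'

def reverse_proof(steps):
    """Convert a proof in one direction to a proof in the reverse direction:
    negate every graph, reverse the order, and replace each rule by its
    inverse.  A proof is a list of (rule, graph) pairs, where steps[i] records
    the rule that produced graph i; the starting graph has rule None."""
    rev = [(None, negate(steps[-1][1]))]
    for i in range(len(steps) - 1, 0, -1):
        rule = steps[i][0]
        rev.append((INVERSE[rule], negate(steps[i - 1][1])))
    return rev
```

For example, reversing a two-step proof that inserts (p) after drawing a double negation on a blank sheet yields a proof that starts from the negation of the final graph and undoes 1i by 1e and 3i by 3e.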
In most versions of logic, a proof of p ⊢ q can be converted to a proof of ~q ⊢ ~p. But the conversions are more complex, because traditional rules of inference don’t have exact inverses and they can’t be applied inside a negated area. For some kinds of proofs, Peirce’s method is much shorter because of a property that is not shared by other common proof procedures: his rules can perform surgical operations (insertions or erasures) in an area of a graph or formula that is nested arbitrarily deep. Furthermore, the rules depend only on whether the area is positive or negative. Those properties imply the cut-and-paste theorem (Sowa 2000),
which can be adapted to any notation for first-order logic:
• Cut-and-Paste Theorem. If a proof of p ⊢ q is possible on a blank sheet of assertion, then in any positive area of an EG where p occurs, q may be substituted for p.
• Proof. Since the nested area in which p occurs is positive, every step of a proof on a blank sheet can be carried out in that area. Therefore, it is permissible to “cut out” the steps of a proof from p to q in the outer area and “paste” them into the nested area. After q has been derived, Rule 1e can be used to erase the original p and any remaining steps of the proof other than q.
From the cut-and-paste theorem (CAPT), other important theorems can be proved as corollaries. One
example is the deduction theorem:
• Deduction Theorem. If a proof of p ⊢ q is possible, then p⊃q is provable.
• Proof. To show that p⊃q is provable, use CAPT to derive ~[(p) ~[(q)]] from a blank:
1. By 3i, draw a double negation around a blank: ~[ ~[ ]]
2. By 1i, insert (p) in the negative area: ~[(p) ~[ ]]
3. By 2i, iterate (p) to derive the equivalent of p ⊃ p: ~[(p) ~[(p)]]
4. By CAPT, replace the inner (p) with (q): ~[(p) ~[(q)]]
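The four steps above are simple enough to replay as literal string rewrites on EGIF text. The following sketch assumes p and q are single atoms so that each pattern occurs exactly once; the function name deduction_eg is ours, not Peirce’s.

```python
def deduction_eg(p, q):
    """Build ~[(p) ~[(q)]], the EG for p implies q, by the four steps above."""
    g = '~[ ~[ ]]'                                # 1. 3i: double negation around a blank
    g = g.replace('~[ ~[ ]]', f'~[({p}) ~[ ]]')   # 2. 1i: insert (p) in the negative area
    g = g.replace('~[ ]]', f'~[({p})]]')          # 3. 2i: iterate (p) into the inner area
    g = g.replace(f'~[({p})]]', f'~[({q})]]')     # 4. CAPT: replace the inner (p) with (q)
    return g
```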
The constructive dilemma is another corollary of CAPT that is difficult to prove in many systems:
• Constructive Dilemma. If p1 ⊢ q and p2 ⊢ q, then p1∨p2 ⊢ q.
• Proof: Use two applications of CAPT.
0. Starting graph: ~[ ~[(p1)] ~[(p2)] ]
1. By CAPT, replace (p1) with q: ~[ ~[(q)] ~[(p2)] ]
2. By CAPT, replace (p2) with q: ~[ ~[(q)] ~[(q)] ]
3. By 2e, deiterate one copy of ~[(q)]: ~[ ~[(q)] ]
4. By 3e, erase the double negation: (q)

Converting various proof procedures to EG form provides fundamental insights into the nature of the proofs and their interrelationships. Gentzen (1934) developed two proof procedures: natural deduction and the sequent calculus. A proof by either one can be systematically converted to a proof by Peirce’s rules. The converse, however, does not hold, because Gentzen’s rules cannot operate on deeply nested expressions. For some proofs, many steps are needed to bring an expression to the surface of a formula before those rules can be applied. An example is the cut-free version of the sequent calculus, in which proofs can sometimes be exponentially longer than proofs in the usual version. Dau (2006) showed that with Peirce’s rules, the corresponding cut-free proofs are longer by just a polynomial factor.
Like Peirce’s rules, Gentzen’s rules for natural deduction also come in pairs: one rule of each pair introduces an operator, and the other eliminates it. Gentzen, however, required six pairs of rules, one pair for each of his six operators: ∧, ∨, ⊃, ~, ∀, and ∃. Peirce had only three operators and three pairs of rules. Even when limited to the same three operators, Gentzen’s rules are highly irregular. They lack the symmetry of paired rules that are exact inverses of one another. Four of the twelve rules even require a provability test, expressed by the ⊢ operator. Yet Gentzen’s rules can all be proved as derived rules of inference in terms of Peirce’s rules. In fact, two of them were just proved above: the rule for ⊃-introduction is the deduction theorem, and the rule for ∨-elimination is the constructive dilemma. The following table shows Gentzen’s six pairs of rules.
All the rules in Gentzen’s table can be proved as derived rules of inference in terms of Peirce’s rules, but some of them require further explanation. The symbol ⊥ in the rules for introducing or eliminating the negation symbol ~ represents a proposition that is always false. In EGs, a single oval with nothing inside or ~[ ] represents ⊥. Peirce called ~[ ] the pseudograph because it is always false. In the resolution method for theorem proving, it is called the empty clause. From the pseudograph, any proposition A can be derived: start with ~[ ]; by 1i, insert ~[(A)] into the negative area to derive ~[~[(A)]]; by 3e, erase the double negation to derive (A).
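The three-step derivation of (A) from the pseudograph can also be replayed as string rewrites on EGIF text. As before, this is a minimal sketch that assumes A is an atom; the function name ex_falso is ours.

```python
def ex_falso(a):
    """Derive any proposition A from the pseudograph ~[ ]."""
    g = '~[ ]'                                   # 0. the pseudograph (empty clause)
    g = g.replace('~[ ]', f'~[~[({a})]]')        # 1. 1i: insert ~[(A)] in the negative area
    g = g.replace(f'~[~[({a})]]', f'({a})')      # 2. 3e: erase the double negation
    return g
```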
Two other special symbols are the arbitrary term t in the rules for ∃-introduction and ∀-elimination and the arbitrary value or free variable a in the rules for ∀-introduction and ∃-elimination. The arbitrary term t is an ordinary expression with no free variables, and it can be represented as —t or [*x] (t ?x), where t is any relation or graph with one peg designated to represent the value of the term. The rule for ∀-elimination, also
called universal instantiation, was proved in Section 3. The proof for ∃-introduction takes one step:
0. Starting graph: [*x] (t ?x) (A ?x)
1. By 1e, erase (t ?x): [*x] (A ?x)

Since EGs don’t have variables, they don’t have free variables. But proof procedures that use arbitrary values or free variables have been a source of controversy for years. Fine (1985) interpreted Gentzen’s rule of ∀-introduction as an assumption that A(a) had been previously derived from some statement φ and the existence of some arbitrary value a, which could be represented by [*a] in EGIF. Following is a proof that
uses the cut-and-paste theorem:
0. Starting assumption: φ, [*a] (A ?a)
1. By 3i, insert a double negation around a blank: ~[ ~[ ]]
2. By 1i, insert [*x] into the negative area: ~[ [*x] ~[ ]]
3. By 2i, iterate φ from step 0 into the doubly nested area: ~[ [*x] ~[φ]]
4. By 2i, iterate [?x] into the doubly nested area: ~[ [*x] ~[φ [?x]]]
5. By CAPT, replace φ [?x] with (A ?x): ~[ [*x] ~[ (A ?x)]]

For Gentzen’s rule of ∃-elimination, A(a) is true of some arbitrary values, but not all. The conclusion B is provable from A(a) only for values of a for which A(a) is true.
0. Starting assumptions: [*x] (A ?x); [*a] (A ?a) ⊢ (B)
1. Let [*a] be some [*x] that makes (A ?x) true. Then: (B)

Although each of Gentzen’s rules can be derived from Peirce’s rules, proofs in the two systems take different paths. EG proofs proceed in a straight line from a blank sheet to the conclusion: each step inserts or erases one subgraph in the immediately preceding graph. No bookkeeping is required to keep track of information from any previous step. But Gentzen’s proofs interrupt the flow of a proof at every rule that contains the pattern p ⊢ q. When deriving q from p, those rules use a method of bookkeeping called discharging an assumption:
1. Whenever the symbol ⊢ appears in a rule, some proposition p is assumed.
2. The current proof is suspended, and the state of the current rule is recorded.
3. A side proof of q from p is initiated.
4. When the conclusion q is derived, the assumption p is discharged by resuming the previous proof and using q in the rule of inference that had been interrupted.
Such side proofs might be invoked recursively to an arbitrary depth.
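The four steps above can be sketched as a stack of suspended proof states, which naturally handles recursive side proofs. All class and method names here are illustrative assumptions of this sketch, not Gentzen’s terminology beyond “assume” and “discharge”.

```python
class NaturalDeduction:
    """Minimal bookkeeping for Gentzen-style discharged assumptions."""

    def __init__(self):
        self.suspended = []   # stack of (interrupted_rule, outer_steps) pairs
        self.steps = []       # steps of the current (sub)proof

    def assume(self, p, interrupted_rule):
        # Steps 1-3: record the state of the current rule, suspend the
        # current proof, and start a side proof from the assumption p.
        self.suspended.append((interrupted_rule, self.steps))
        self.steps = [p]

    def discharge(self, q):
        # Step 4: the side proof has derived q; discharge the assumption
        # by resuming the interrupted proof and handing q back to the
        # rule that was waiting for it.
        interrupted_rule, outer_steps = self.suspended.pop()
        self.steps = outer_steps
        return interrupted_rule, q
```

Because suspended states are kept on a stack, side proofs may be nested to any depth, matching the recursive invocation described above.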
EG proofs avoid that bookkeeping by an option that most notations for logic can’t represent: drawing a double negation around a blank. In the proof of the Praeclarum Theorema, for example, the first step is to draw a double negation on a blank sheet, and the second step is to insert the hypothesis into the shaded area.
The result is a well-formed EG, and no bookkeeping is necessary to keep track of how that EG had been derived. In Gentzen’s method, the first step is to assume the hypothesis, and a record of that assumption must be retained until the very end of the proof, when it is finally discharged to form the conclusion. This observation is the basis for converting a proof by Gentzen’s method of natural deduction to a proof by Peirce’s rules:
• Replace each of Gentzen’s rules that does not contain the symbol ⊢ with one or more of Peirce’s rules that produce an equivalent result.
• Whenever one of Gentzen’s rules containing ⊢ is invoked, apply rule 3i to insert a double negation around the blank, copy the hypothesis into the negative area by rule 1i, and continue.
An EG proof generated by this procedure will be longer than a direct proof that uses Peirce’s rules from the start. As an exercise, use Gentzen’s table of rules and his method of discharging assumptions to prove the Praeclarum Theorema. To avoid the bookkeeping, introduce a nest of two ovals whenever the symbol ⊢ appears in a rule.
Insert the hypothesis into the shaded area, but continue to use the algebraic notation for the formulas. Finally, translate each formula to an EG, and add any intermediate steps needed to complete the proof according to Peirce’s rules. Proofs by Gentzen’s other method, the sequent calculus, can be converted to EG proofs more directly because they don’t require extra steps to discharge assumptions.
In computational systems, the widely used method of resolution applies a single rule of inference to a set of clauses (Robinson 1965). Each clause is a disjunction of positive or negative literals, which are single atoms or their negations. In the following example, the two clauses on the left of the ⊢ are combined by resolution to
derive the clause on the right:
~p ∨ ~q ∨ r, ~r ∨ u ∨ v ⊢ ~p ∨ ~q ∨ u ∨ v

The leftmost clause has a positive literal r, and the middle clause has a negated copy ~r. Resolution is a cancellation method that erases both r and ~r and merges the remaining literals to form the clause on the
right. Following are the equivalent EGs for the three clauses:
With EGs, resolution takes four steps: by 2i, iterate the middle EG into the doubly nested (unshaded) area of the EG on the left; by 2e, erase the innermost copy of r; by 1e, erase the remaining copy of r; by 3e, erase the double negation. The result is the EG on the right. Resolution is commonly used in a refutation procedure,
which attempts to prove a statement s by deriving a contradiction from ~s:
1. Negate the statement to be proved: ~s.
2. Convert ~s to clause form.
3. Use repeated steps of resolution to derive the empty clause, ~[ ].
4. Since the empty clause is false, the original s must be true.
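The resolution rule and the refutation loop above can be sketched in a few lines. Modeling clauses as frozensets of string literals, with negation as a '~' prefix, is an assumption of this sketch rather than part of Robinson’s formulation.

```python
from itertools import combinations

def neg(lit):
    """Negate a literal: 'r' <-> '~r'."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """Yield all resolvents of two clauses: erase a complementary pair
    of literals and merge the rest."""
    for lit in c1:
        if neg(lit) in c2:
            yield (c1 - {lit}) | (c2 - {neg(lit)})

def refute(clauses):
    """Steps 3-4: resolve repeatedly; return True if the empty clause
    (the pseudograph ~[ ]) is derived, False if no new clauses appear."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True          # empty clause: contradiction found
                new.add(frozenset(r))
        if new <= clauses:
            return False                 # nothing new: no refutation
        clauses |= new
```

Applied to the example above, resolve cancels r and ~r and merges the remaining literals into ~p ∨ ~q ∨ u ∨ v.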
Many logicians have observed that proofs by resolution are “almost” the inverse of proofs by natural deduction. They aren’t exact inverses, however, because the rule of resolution is simple and regular, but Gentzen’s six pairs of insertion and elimination rules are highly irregular. But the proof that resolution is the inverse of Peirce’s form of natural deduction is a variant of the reversibility theorem. For propositional logic, start by negating each step. The negation of the empty clause in step 4 is a double negation of a blank sheet, which is always true. Each application of the resolution rule corresponds to four EG rules, which can be reversed inside a negation. A conversion to clause form is an equivalence that can be performed in any context; and the double negation ~~s is equivalent to s.