
Introduction to Logic

Tools for Thought

Throughout this book, we have been writing sentences in English about sentences in Logic, and we have been writing informal proofs in English about formal proofs in Logic. A natural question to ask is whether it is possible to formalize Logic within Logic. The answer is yes, and the limits of what can be done are very interesting. In this section, we look at a small subset of this problem, viz. using Herbrand Logic to formalize information about Propositional Logic.

The first step in formalizing Propositional Logic in Herbrand Logic is to represent the syntactic components of Propositional Logic.
In what follows, we make each proposition constant in our Propositional Logic language an object constant in our Herbrand Logic formalization. For example, if our Propositional Logic language has proposition constants p, q, and r, then p, q, and r are object constants in our formalization.

Next, we introduce function constants to represent the constructors of complex sentences. There is one function constant for each logical operator: not for ¬, and for ∧, or for ∨, if for ⇒, and iff for ⇔. Using these function constants, we represent Propositional Logic sentences as terms in our language. For example, we use the term and(p,q) to represent the Propositional Logic sentence (p ∧ q), and we use the term if(and(p,q),r) to represent the Propositional Logic sentence (p ∧ q ⇒ r).
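As an illustration (not part of the formalization itself), this term representation can be mimicked in Python by encoding each constructor as a tagged tuple. The function names neg, conj, disj, impl, and bicond are hypothetical stand-ins for not, and, or, if, and iff, since several of those words are reserved or built in in Python.

```python
# Hypothetical Python encoding of the Herbrand terms described above.
# A term is either a proposition-constant string, or a tuple whose first
# element names the constructor ("not", "and", "or", "if", "iff").

def neg(x):        # stands in for not(x)
    return ("not", x)

def conj(x, y):    # stands in for and(x, y)
    return ("and", x, y)

def disj(x, y):    # stands in for or(x, y)
    return ("or", x, y)

def impl(x, y):    # stands in for if(x, y)
    return ("if", x, y)

def bicond(x, y):  # stands in for iff(x, y)
    return ("iff", x, y)

# (p ∧ q) is represented by the term and(p, q):
pq = conj("p", "q")               # ("and", "p", "q")

# (p ∧ q ⇒ r) is represented by the term if(and(p, q), r):
pqr = impl(conj("p", "q"), "r")   # ("if", ("and", "p", "q"), "r")
```

Nested terms come out as nested tuples, so equality of terms is just tuple equality.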

Finally, we introduce a selection of unary relation constants to express the types of the expressions in our Propositional Logic language. The relation constant proposition asserts that an expression is a proposition; the relation constants negation, conjunction, disjunction, implication, and biconditional assert that an expression is a negation, a conjunction, a disjunction, an implication, or a biconditional, respectively; and the relation constant sentence asserts that an expression is a sentence.

With this vocabulary, we can characterize the syntax of our language as follows. We start with declarations of our proposition constants.
proposition(p) 
proposition(q) 
proposition(r) 

Next, we define the types of expressions involving our various logical operators.
∀x.(sentence(x) ⇒ negation(not(x))) 
∀x.∀y.(sentence(x) ∧ sentence(y) ⇒ conjunction(and(x,y))) 
∀x.∀y.(sentence(x) ∧ sentence(y) ⇒ disjunction(or(x,y))) 
∀x.∀y.(sentence(x) ∧ sentence(y) ⇒ implication(if(x,y))) 
∀x.∀y.(sentence(x) ∧ sentence(y) ⇒ biconditional(iff(x,y))) 

Finally, we define sentences as expressions of these types.
∀x.(proposition(x) ⇒ sentence(x)) 
∀x.(negation(x) ⇒ sentence(x)) 
∀x.(conjunction(x) ⇒ sentence(x)) 
∀x.(disjunction(x) ⇒ sentence(x)) 
∀x.(implication(x) ⇒ sentence(x)) 
∀x.(biconditional(x) ⇒ sentence(x)) 
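These typing axioms can be sketched in Python, assuming a tagged-tuple encoding of terms such as ("not", x) and ("and", x, y). One caveat: the axioms above only constrain the relations, whereas these hypothetical predicates define them completely.

```python
# Hypothetical Python rendering of the typing axioms, assuming terms
# are encoded as tagged tuples: ("not", x), ("and", x, y), etc.

PROPOSITIONS = {"p", "q", "r"}   # the declared proposition constants

def proposition(x):
    return x in PROPOSITIONS

def negation(x):
    return (isinstance(x, tuple) and len(x) == 2
            and x[0] == "not" and sentence(x[1]))

def _binary(x, op):              # helper shared by the binary operators
    return (isinstance(x, tuple) and len(x) == 3
            and x[0] == op and sentence(x[1]) and sentence(x[2]))

def conjunction(x):
    return _binary(x, "and")

def disjunction(x):
    return _binary(x, "or")

def implication(x):
    return _binary(x, "if")

def biconditional(x):
    return _binary(x, "iff")

def sentence(x):
    return (proposition(x) or negation(x) or conjunction(x)
            or disjunction(x) or implication(x) or biconditional(x))
```

The mutual recursion between sentence and the operator predicates mirrors the mutual dependence of the two groups of axioms.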

Note that these sentences constrain the types of various expressions but do not define them completely. For example, we have not said that not(p) is not a conjunction. It is possible to make our definitions more complete by writing negative sentences. However, they are a little messy, and we do not need them for the purposes of this section.

With a solid characterization of syntax, we can formalize our rules of inference. We start by representing each rule of inference as a relation constant. For example, we use the ternary relation constant ai to represent And Introduction, and we use the binary relation constant ae to represent And Elimination. With this vocabulary, we can define these relations as shown below.
∀x.∀y.(sentence(x) ∧ sentence(y) ⇒ ai(x,y,and(x,y))) 
∀x.∀y.(sentence(x) ∧ sentence(y) ⇒ ae(and(x,y),x)) 
∀x.∀y.(sentence(x) ∧ sentence(y) ⇒ ae(and(x,y),y)) 
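Under the same hypothetical tagged-tuple encoding of terms, the ai and ae relations can be sketched as Python predicates that test whether a candidate triple or pair is an instance of the rule. The sentence check here is a minimal well-formedness test, not the axiomatic definition.

```python
# Hypothetical sketch of the ai and ae relations, assuming terms are
# encoded as tagged tuples: ("not", x), ("and", x, y), etc.

PROPOSITIONS = {"p", "q", "r"}
ARITY = {"not": 1, "and": 2, "or": 2, "if": 2, "iff": 2}

def sentence(x):
    # Minimal well-formedness check for encoded sentences.
    if x in PROPOSITIONS:
        return True
    return (isinstance(x, tuple) and len(x) > 0 and x[0] in ARITY
            and len(x) == ARITY[x[0]] + 1
            and all(sentence(a) for a in x[1:]))

def ai(x, y, z):
    # And Introduction: from x and y we may infer and(x, y).
    return sentence(x) and sentence(y) and z == ("and", x, y)

def ae(x, y):
    # And Elimination: from and(a, b) we may infer a, and we may infer b.
    return (isinstance(x, tuple) and len(x) == 3 and x[0] == "and"
            and sentence(x[1]) and sentence(x[2])
            and (y == x[1] or y == x[2]))
```

Note that ae holds for either conjunct, matching the two axioms that define it.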

In similar fashion, we can define proofs, both linear and structured. We can even define truth assignments, satisfaction, and the properties of validity, satisfiability, and so forth. Having done all of this, we can use the proof methods discussed in the next chapters to prove metatheorems about Propositional Logic.
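For instance, satisfaction can be sketched in Python, again assuming a tagged-tuple encoding of sentences and representing a truth assignment as a dictionary from proposition constants to booleans; the function name satisfies is hypothetical.

```python
# Hypothetical sketch of satisfaction, assuming sentences are encoded as
# tagged tuples and a truth assignment is a dict from proposition
# constants to booleans.

def satisfies(assignment, x):
    if isinstance(x, str):           # proposition constant
        return assignment[x]
    op = x[0]
    if op == "not":
        return not satisfies(assignment, x[1])
    if op == "and":
        return satisfies(assignment, x[1]) and satisfies(assignment, x[2])
    if op == "or":
        return satisfies(assignment, x[1]) or satisfies(assignment, x[2])
    if op == "if":
        return (not satisfies(assignment, x[1])) or satisfies(assignment, x[2])
    if op == "iff":
        return satisfies(assignment, x[1]) == satisfies(assignment, x[2])
    raise ValueError("not a sentence: %r" % (x,))

# (p ∧ q ⇒ r) is falsified by the assignment {p: true, q: true, r: false}:
a = {"p": True, "q": True, "r": False}
print(satisfies(a, ("if", ("and", "p", "q"), "r")))   # False
```

From satisfies one could go on to define validity (true under every assignment) and satisfiability (true under some assignment) by enumerating assignments.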

We can use a similar approach to formalize Herbrand Logic within Herbrand Logic. However, in that case, we need to be very careful. If done incorrectly, we can write paradoxical sentences, i.e. sentences that are neither true nor false. For example, a careless formalization leads to formal versions of sentences like "This sentence is false", which is self-contradictory, i.e. it cannot be true and cannot be false. Fortunately, with care it is possible to avoid such paradoxes and thereby get useful work done.

