
Introduction to Logic

Tools for Thought

Imagine an infinite set of dominoes placed in a line so that, when one falls, the next domino in the line also falls. If the first domino falls, then the second domino falls. If the second domino falls, then the third domino falls. And so forth. By continuing this chain of reasoning, it is easy for us to convince ourselves that every domino eventually falls. This is the intuition behind a technique called Linear Induction.

Consider a language with a single object constant a and a single unary function constant s. There are infinitely many ground terms in this language, arranged in what resembles a straight line. See below. Starting from the object constant, we move to a term in which we apply the function constant to that constant, then to a term in which we apply the function constant to that term, and so forth.
a → s(a) → s(s(a)) → s(s(s(a))) → ...
In this section, we concentrate on languages that have linear structure of this sort. Hereafter, we call these linear languages. In all cases, there is a single object constant and a single unary function constant. In talking about a linear language, we call the object constant the base element of the language, and we call the unary function constant the successor function.
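To make this picture concrete, here is a small Python sketch (the helper name term is ours, purely illustrative) that builds the nth ground term of a linear language by applying the successor function n times to the base element:

```python
def term(n):
    """The nth ground term of the linear language:
    the successor function s applied n times to the base element a."""
    t = "a"
    for _ in range(n):
        t = f"s({t})"
    return t

print(term(0))  # a
print(term(3))  # s(s(s(a)))
```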

Although there are infinitely many ground terms in any linear language, we can still generate finite proofs that are guaranteed to be correct. The trick is to use the structure of the terms in the language in expressing the premises of an inductive rule of inference known as Linear Induction. See below for a statement of the induction rule for the language introduced above. In general, if we know that a schema holds of our base element and if we know that, whenever the schema holds of an element, it also holds of the successor of that element, then we can conclude that the schema holds of all elements.
Linear Induction

φ[a]
∀μ.(φ[μ] ⇒ φ[s(μ)])
--------------------
∀ν.φ[ν]
A bit of terminology before we go on. The first premise in this rule is called the base case of the induction, because it refers to the base element of the language. The second premise is called the inductive case. The antecedent of the inductive case is called the inductive hypothesis, and the consequent is called, not surprisingly, the inductive conclusion. The conclusion of the rule is sometimes called the overall conclusion to distinguish it from the inductive conclusion.
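To see why the rule is plausible, note that its two premises already let us reach φ of any particular ground term in finitely many steps: we instantiate the inductive case and apply Implication Elimination once for each application of s. The following Python sketch (names illustrative) replays that chaining:

```python
def chain_to(n):
    """Derive phi of the nth ground term: start from the base case phi[a],
    then apply the inductive case n times (one Universal Elimination plus
    one Implication Elimination per application of s)."""
    term = "a"
    fact = "phi(a)"            # base case
    for _ in range(n):
        term = f"s({term})"
        fact = f"phi({term})"  # from phi[mu], infer phi[s(mu)]
    return fact

print(chain_to(2))  # phi(s(s(a)))
```

The rule of Linear Induction packages all of these finite derivations, one per ground term, into a single universal conclusion.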

For the language introduced above, our rule of inference is sound. Suppose we know that a schema is true of a and suppose that we know that, whenever the schema is true of an arbitrary ground term τ, it is also true of the term s(τ). Then, the schema must be true of everything, since there are no other terms in the language.
The requirement that the signature contain no other object constants or function constants is crucial. If this were not the case (say there were another object constant b), then we would have trouble. It would still be true that φ holds for every element in the set {a, s(a), s(s(a)), ...}. However, because there are other elements in the Herbrand universe, e.g. b and s(b), ∀x.φ(x) would no longer be guaranteed.
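The failure can be made concrete. Under a hypothetical signature with two object constants a and b, induction starting from a never touches the terms built from b, as the following Python sketch shows:

```python
def chain(base, depth):
    """Ground terms reachable from base by applying s, up to the given depth."""
    terms, t = [], base
    for _ in range(depth + 1):
        terms.append(t)
        t = f"s({t})"
    return terms

# Fragment of the Herbrand universe for a signature with constants a and b
universe = chain("a", 3) + chain("b", 3)
reachable_by_induction = set(chain("a", 3))
print([t for t in universe if t not in reachable_by_induction])
# terms like b and s(b) are untouched by induction from a
```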

Here is an example of induction in action. Recall the formalization of Arithmetic introduced in Lesson 9. Using the object constant 0 and the unary function constant s, we represent each number n by applying the function constant to 0 exactly n times. For the purposes of this example, let's assume we have just one ternary relation constant, viz. plus, which we use to represent the addition table.
The following axioms describe plus in terms of 0 and s. The first sentence here says that adding 0 to any element produces that element. The second sentence states that adding the successor of a number to another number yields the successor of their sum. The third sentence is a functionality axiom for plus.
∀y.plus(0,y,y)
∀x.∀y.∀z.(plus(x,y,z) ⇒ plus(s(x),y,s(z)))
∀x.∀y.∀z.∀w.(plus(x,y,z) ∧ ¬same(z,w) ⇒ ¬plus(x,y,w))
It is easy to see that any table that satisfies these axioms includes all of the usual addition facts. The first axiom ensures that all cases with 0 as first argument are included. From this fact and the second axiom, we can see that all cases with s(0) as first argument are included. And so forth.
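The argument in the last paragraph can be replayed concretely by forward chaining on the first two axioms: start with the instances of the first axiom and repeatedly apply the second. A Python sketch (the helper names num and closure are ours) follows; we write the number n as the term s(...s(0)) with n applications of s:

```python
def num(n):
    """The term s(...s(0)) with n applications of s."""
    t = "0"
    for _ in range(n):
        t = f"s({t})"
    return t

def closure(depth):
    """Addition facts forced by the first two axioms, up to the given depth."""
    # first axiom: plus(0, y, y) for every y
    facts = {("0", num(k), num(k)) for k in range(depth + 1)}
    # second axiom: plus(x, y, z) => plus(s(x), y, s(z)), applied repeatedly
    for _ in range(depth):
        facts |= {(f"s({x})", y, f"s({z})") for (x, y, z) in facts}
    return facts

facts = closure(4)
print((num(2), num(2), num(4)) in facts)  # True: the fact plus(2, 2, 4)
```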

The first axiom above tells us that 0 is a left identity for addition: 0 added to any number produces that number. As it turns out, given these definitions, 0 must also be a right identity, i.e., it must be the case that ∀x.plus(x,0,x).

We can use induction to prove this result as shown below. We start with our premises. We use Universal Elimination on the first premise to derive the sentence on line 3. This takes care of the base case of our induction.
1. ∀y.plus(0,y,y)                                  Premise
2. ∀x.∀y.∀z.(plus(x,y,z) ⇒ plus(s(x),y,s(z)))      Premise
3. plus(0,0,0)                                     UE: 1
Next, we start a subproof and assume the antecedent of the inductive case. Three applications of Universal Elimination on the second premise give the sentence on line 5. Implication Elimination on this sentence and our assumption yields the conclusion on line 6. We then discharge our assumption to form the implication shown on line 7 and universalize it to get the result on line 8.
4.   plus(x,0,x)                                   Assumption
5.   plus(x,0,x) ⇒ plus(s(x),0,s(x))               3 x UE: 2
6.   plus(s(x),0,s(x))                             IE: 5, 4
7. plus(x,0,x) ⇒ plus(s(x),0,s(x))                 II: 4, 6
8. ∀x.(plus(x,0,x) ⇒ plus(s(x),0,s(x)))            UI: 7

Finally, we use Linear Induction to derive our overall conclusion.
9. ∀x.plus(x,0,x)                                  Ind: 3, 8
Most inductive proofs have this simple structure. We prove the base case. We assume the inductive hypothesis; we prove the inductive conclusion; and, based on this proof, we have the inductive case. From the base case and the inductive case, we infer the overall conclusion.
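As a sanity check on the result, we can read the first two axioms as rewrite rules and compute additions directly. This only confirms finitely many instances of ∀x.plus(x,0,x); the universal claim itself comes from the induction above. A Python sketch (helper names num and plus are ours):

```python
def num(n):
    """The term s(...s(0)) with n applications of s."""
    t = "0"
    for _ in range(n):
        t = f"s({t})"
    return t

def plus(x, y):
    """Compute the z with plus(x, y, z), reading the two axioms as rewrite rules."""
    if x == "0":
        return y                     # plus(0, y, y)
    return f"s({plus(x[2:-1], y)})"  # plus(s(x'), y, s(z')) from plus(x', y, z')

# base case of the induction: plus(0, 0, 0)
assert plus("0", "0") == "0"
# instances of the overall conclusion for the first few terms
for n in range(10):
    assert plus(num(n), "0") == num(n)
print("right identity verified for the first 10 terms")
```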

