Introduction to Logic
Tools for Thought

Lesson 7 - Relational Logic
7.1 Introduction
Propositional Logic allows us to talk about relationships among individual propositions, and it gives us the machinery to derive logical conclusions based on these relationships. Suppose, for example, we believe that, if Jack knows Jill, then Jill knows Jack. Suppose we also believe that Jack knows Jill. From these two facts, we can conclude that Jill knows Jack using a simple application of Implication Elimination.
Unfortunately, when we want to say things more generally, we find that Propositional Logic is inadequate. Suppose, for example, that we wanted to say that, in general, if one person knows a second person, then the second person knows the first. Suppose, as before, that we believe that Jack knows Jill. How do we express the general fact in a way that allows us to conclude that Jill knows Jack? Here, Propositional Logic is inadequate; it gives us no way of succinctly encoding this more general belief in a form that captures its full meaning and allows us to derive such conclusions.
Relational Logic is an alternative to Propositional Logic that solves this problem. The trick is to augment our language with two new linguistic features, viz. variables and quantifiers. With these new features, we can express information about multiple objects without enumerating those objects; and we can express the existence of objects that satisfy specified conditions without saying which objects they are.
In this chapter, we proceed through the same stages as in the introduction to Propositional Logic. We start with syntax and semantics. We then discuss evaluation and satisfaction. We look at some examples. Then, we talk about properties of Relational Logic sentences and logical entailment for Relational Logic. Finally, we say a few words about the equivalence of Relational Logic and Propositional Logic and its decidability.
7.2 Syntax
In Propositional Logic, sentences are constructed from a basic vocabulary of propositional constants. In Relational Logic, there are no propositional constants; instead we have object constants, relation constants, and variables.
In our examples here, we write both variables and constants as strings of letters, digits, and a few non-alphanumeric characters (e.g. "_"). By convention, variables begin with letters from the end of the alphabet (viz. u, v, w, x, y, z). Examples include x, ya, and z_2. By convention, all constants begin with either alphabetic letters (other than u, v, w, x, y, z) or digits. Examples include a, b, 123, comp225, and barack_obama.
Note that there is no distinction in spelling between object constants and relation constants. The type of each such word is determined by its usage or, in some cases, in an explicit specification.
As we shall see, relation constants are used in forming complex expressions by combining them with an appropriate number of arguments. Accordingly, each relation constant has an associated arity, i.e. the number of arguments with which that relation constant can be combined. A relation constant that can be combined with a single argument is said to be unary; one that can be combined with two arguments is said to be binary; one that can be combined with three arguments is said to be ternary; more generally, a relation constant that can be combined with n arguments is said to be n-ary.
A vocabulary consists of a finite, non-empty set of object constants, a finite, non-empty set of relation constants, and an assignment of arities for each of the relation constants in the vocabulary. (Note that this definition here is slightly non-traditional. In many textbooks, a vocabulary (sometimes called a signature) includes a specification of relation constants but not object constants, whereas our definition here includes both types of constants.)
A term is defined to be a variable or an object constant. Terms typically denote objects presumed or hypothesized to exist in the world; and, as such, they are analogous to noun phrases in natural language, e.g. Joe or someone.
There are three types of sentences in Relational Logic, viz. relational sentences (the analog of propositions in Propositional Logic), logical sentences (analogous to the logical sentences in Propositional Logic), and quantified sentences (which have no analog in Propositional Logic).
A relational sentence is an expression formed from an n-ary relation constant and n terms. For example, if q is a relation constant with arity 2 and if a and y are terms, then the expression shown below is a syntactically legal relational sentence. Relational sentences are sometimes called atoms to distinguish them from logical and quantified sentences.
q(a, y)
Logical sentences are defined as in Propositional Logic. There are negations, conjunctions, disjunctions, implications, and equivalences. See below for examples.
Negation: (¬p(a))
Conjunction: (p(a) ∧ q(b, c))
Disjunction: (p(a) ∨ q(b, c))
Implication: (p(a) ⇒ q(b, c))
Biconditional: (p(a) ⇔ q(b, c))
Note that the syntax here is exactly the same as in Propositional Logic except that the elementary components are relational sentences rather than proposition constants.
Quantified sentences are formed from a quantifier, a variable, and an embedded sentence. The embedded sentence is called the scope of the quantifier. There are two types of quantified sentences in Relational Logic, viz. universally quantified sentences and existentially quantified sentences.
A universally quantified sentence is used to assert that all objects have a certain property. For example, the following expression is a universally quantified sentence asserting that, if p holds of an object, then q holds of that object and itself.
(∀x.(p(x) ⇒ q(x,x)))
An existentially quantified sentence is used to assert that some object has a certain property. For example, the following expression is an existentially quantified sentence asserting that there is an object that satisfies p and, when paired with itself, satisfies q as well.
(∃x.(p(x) ∧ q(x,x)))
Note that quantified sentences can be nested within other sentences. For example, in the first sentence below, we have quantified sentences inside of a disjunction. In the second sentence, we have a quantified sentence nested inside of another quantified sentence.
((∀x.p(x)) ∨ (∃x.q(x,x)))
(∀x.(∃y.q(x,y)))
As with Propositional Logic, we can drop unneeded parentheses in Relational Logic, relying on precedence to disambiguate the structure of unparenthesized sentences. In Relational Logic, the precedence relations of the logical operators are the same as in Propositional Logic, and quantifiers have higher precedence than logical operators.
The following examples show how to parenthesize sentences with both quantifiers and logical operators. The sentences on the right are partially parenthesized versions of the sentences on the left. (To be fully parenthesized, we would need to add parentheses around each of the sentences as a whole.)
∀x.p(x) ⇒ q(x)        (∀x.p(x)) ⇒ q(x)
∃x.p(x) ∧ q(x)        (∃x.p(x)) ∧ q(x)
Notice that, in each of these examples, the quantifier does not apply to the second relational sentence, even though, in each case, that sentence contains an occurrence of the variable being quantified. If we want to apply the quantifier to a logical sentence, we must enclose that sentence in parentheses, as in the following examples.
∀x.(p(x) ⇒ q(x))
∃x.(p(x) ∧ q(x))
An expression in Relational Logic is ground if and only if it contains no variables. For example, the sentence p(a) is ground, whereas the sentence ∀x.p(x) is not.
An occurrence of a variable is free if and only if it is not in the scope of a quantifier of that variable. Otherwise, it is bound. For example, y is free and x is bound in the following sentence.
∃x.q(x,y)
A sentence is open if and only if it has free variables. Otherwise, it is closed. For example, the first sentence below is open and the second is closed.
p(y) ⇒ ∃x.q(x,y)
∀y.(p(y) ⇒ ∃x.q(x,y))
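To make the notions of free and bound occurrences concrete, here is a short Python sketch. The tuple encoding of sentences and the function names are illustrative conventions of our own, not part of the text: a relational sentence is written (relation, term, ...), a quantified sentence as ('forall' or 'exists', variable, scope), and logical sentences as ('not', s), ('and', s1, s2), and so on.

```python
def is_variable(term):
    # By convention, variables begin with letters from the end of the alphabet.
    return term[0] in 'uvwxyz'

def free_variables(sentence, bound=frozenset()):
    """Return the set of variables with at least one free occurrence."""
    op = sentence[0]
    if op in ('forall', 'exists'):
        _, var, scope = sentence
        # Occurrences of var inside the scope are bound by this quantifier.
        return free_variables(scope, bound | {var})
    if op in ('not', 'and', 'or', 'imp', 'iff'):
        result = set()
        for sub in sentence[1:]:
            result |= free_variables(sub, bound)
        return result
    # Otherwise a relational sentence: (relation, term, ...).
    return {t for t in sentence[1:] if is_variable(t) and t not in bound}

# p(y) ⇒ ∃x.q(x,y): y is free, x is bound, so the sentence is open.
open_sentence = ('imp', ('p', 'y'), ('exists', 'x', ('q', 'x', 'y')))
print(free_variables(open_sentence))   # {'y'}

# ∀y.(p(y) ⇒ ∃x.q(x,y)): every variable occurrence is bound; closed.
closed_sentence = ('forall', 'y', open_sentence)
print(free_variables(closed_sentence))   # set()
```

A sentence is then open exactly when free_variables returns a non-empty set.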
7.3 Semantics
The semantics of Relational Logic presented here is termed Herbrand semantics. It is named after the logician Herbrand, who developed some of its key concepts. As Herbrand is French, it should properly be pronounced "air-brahn". However, most people resort to the Anglicization of this, instead pronouncing it "her-brand". (One exception is Stanley Peters, who has been known at times to pronounce it "hare-brained".)
The Herbrand base for a vocabulary is the set of all ground relational sentences that can be formed from the constants of the language. Said another way, it is the set of all sentences of the form r(t1,...,tn), where r is an n-ary relation constant and t1, ... , tn are object constants.
For a vocabulary with object constants a and b and relation constants p and q where p has arity 1 and q has arity 2, the Herbrand base is shown below.
{p(a), p(b), q(a,a), q(a,b), q(b,a), q(b,b)}
It is worthwhile to note that, for a given relation constant and a finite set of object constants, there is an upper bound on the number of ground relational sentences that can be formed using that relation constant. In particular, for a set of object constants of size b, there are bⁿ distinct n-tuples of object constants; and hence there are bⁿ ground relational sentences for each n-ary relation constant. Since the number of relation constants in a vocabulary is finite, this means that the Herbrand base is also finite.
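This count is easy to check mechanically. The following Python sketch (the function name and string representation are our own, not part of the text) enumerates the Herbrand base for the vocabulary above.

```python
from itertools import product

def herbrand_base(object_constants, relation_arities):
    """Enumerate all ground relational sentences for a vocabulary.

    relation_arities maps each relation constant to its arity; for each
    n-ary relation constant there are b**n ground relational sentences."""
    base = []
    for rel, arity in relation_arities.items():
        for args in product(object_constants, repeat=arity):
            base.append('%s(%s)' % (rel, ','.join(args)))
    return base

# Object constants a and b; p has arity 1, q has arity 2.
base = herbrand_base(['a', 'b'], {'p': 1, 'q': 2})
print(base)       # ['p(a)', 'p(b)', 'q(a,a)', 'q(a,b)', 'q(b,a)', 'q(b,b)']
print(len(base))  # 2**1 + 2**2 = 6
```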
A truth assignment for a Relational Logic language is a function that maps each ground relational sentence in the Herbrand base to a truth value. As in Propositional Logic, we use the digit 1 as a synonym for true and 0 as a synonym for false; and we refer to the value assigned to a ground relational sentence by writing the relational sentence with the name of the truth assignment as a superscript. For example, the truth assignment shown below is an example for the case of the language mentioned a few paragraphs above.
p(a) → 1
p(b) → 0
q(a,a) → 1
q(a,b) → 0
q(b,a) → 1
q(b,b) → 0
As with Propositional Logic, once we have a truth assignment for the ground relational sentences of a language, the semantics of our operators prescribes a unique extension of that assignment to the complex sentences of the language.
The rules for logical sentences in Relational Logic are the same as those for logical sentences in Propositional Logic. A truth assignment satisfies a negation ¬φ if and only if it does not satisfy φ. A truth assignment satisfies a conjunction (φ1 ∧ ... ∧ φn) if and only if it satisfies every φi. A truth assignment satisfies a disjunction (φ1 ∨ ... ∨ φn) if and only if it satisfies at least one φi. A truth assignment satisfies an implication (φ ⇒ ψ) if and only if it does not satisfy φ or does satisfy ψ. A truth assignment satisfies an equivalence (φ ⇔ ψ) if and only if it satisfies both φ and ψ or it satisfies neither φ nor ψ.
In order to define satisfaction of quantified sentences, we need the notion of instances. An instance of an expression is an expression in which all free variables have been consistently replaced by ground terms. Consistent replacement here means that, if one occurrence of a variable is replaced by a ground term, then all occurrences of that variable are replaced by the same ground term.
A universally quantified sentence is true for a truth assignment if and only if every instance of the scope of the quantified sentence is true for that assignment. An existentially quantified sentence is true for a truth assignment if and only if some instance of the scope of the quantified sentence is true for that assignment.
A truth assignment satisfies a sentence with free variables if and only if it satisfies every instance of that sentence. A truth assignment satisfies a set of sentences if and only if it satisfies every sentence in the set.
7.4 Evaluation
Evaluation for Relational Logic is similar to evaluation for Propositional Logic. The only difference is that we need to deal with quantifiers. In order to evaluate a universally quantified sentence, we check that all instances of the scope are true. (We are in effect treating it as the conjunction of all those instances.) In order to evaluate an existentially quantified sentence, we check that at least one instance of the scope is true. (We are in effect treating it as the disjunction of those instances.)
Once again, assume we have a language with a unary relation constant p, a binary relation constant q, and object constants a and b; and consider our truth assignment from the last section.
What is the truth value of the sentence ∀x.(p(x) ⇒ q(x,x)) under this assignment? There are two instances of the scope of this sentence. See below.
p(a) ⇒ q(a,a)
p(b) ⇒ q(b,b)
We know that p(a) is true and q(a,a) is true, so the first instance is true. q(b,b) is false, but so is p(b) so the second instance is true as well.
(p(a) ⇒ q(a,a)) → 1
(p(b) ⇒ q(b,b)) → 1
Since both instances are true, the original quantified sentence is true as well.
∀x.(p(x) ⇒ q(x,x)) → 1
Now let's consider a case with nested quantifiers. Is ∀x.∃y.q(x,y) true or false for the truth assignment shown above? As before, we know that this sentence is true if every instance of its scope is true. The two possible instances are shown below.

∃y.q(a,y)
∃y.q(b,y)
To determine the truth of ∃y.q(a,y), we must find at least one instance of q(a,y) that is true. The possibilities are shown below.

q(a,a)
q(a,b)
Looking at our truth assignment, we see that the first of these is true and the second is false.
Since one of these instances is true, the existential sentence as a whole is true.
∃y.q(a,y) → 1
Now, we do the same for the second existentially quantified sentence. The possible instances in this case are shown below.

q(b,a)
q(b,b)
Of these, the first is true, and the second is false.
Again, since one of these instances is true, the existential sentence as a whole is true.
∃y.q(b,y) → 1
At this point, we have truth values for our two existential sentences. Since both instances of the scope of our original universal sentence are true, the sentence as a whole must be true as well.
∀x.∃y.q(x,y) → 1
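The evaluation procedure just described can be sketched in Python. The tuple encoding of sentences and the function names are illustrative assumptions of our own, not part of the text: a truth assignment maps ground relational sentences (as tuples) to 1 or 0, a universal sentence is treated as the conjunction of the instances of its scope, and an existential sentence as the disjunction of those instances.

```python
def substitute(sentence, var, const):
    """Replace every free occurrence of var in sentence by const."""
    op = sentence[0]
    if op in ('forall', 'exists'):
        _, v, scope = sentence
        if v == var:                 # var is rebound here; leave scope alone
            return sentence
        return (op, v, substitute(scope, var, const))
    if op in ('not', 'and', 'or', 'imp', 'iff'):
        return (op,) + tuple(substitute(s, var, const) for s in sentence[1:])
    # Relational sentence: (relation, term, ...).
    return (op,) + tuple(const if t == var else t for t in sentence[1:])

def evaluate(sentence, assignment, constants):
    """Evaluate a sentence under a truth assignment on the Herbrand base."""
    op = sentence[0]
    if op == 'not':
        return not evaluate(sentence[1], assignment, constants)
    if op == 'and':
        return all(evaluate(s, assignment, constants) for s in sentence[1:])
    if op == 'or':
        return any(evaluate(s, assignment, constants) for s in sentence[1:])
    if op == 'imp':
        return (not evaluate(sentence[1], assignment, constants)
                or evaluate(sentence[2], assignment, constants))
    if op == 'iff':
        return (evaluate(sentence[1], assignment, constants)
                == evaluate(sentence[2], assignment, constants))
    if op == 'forall':   # true iff every instance of the scope is true
        return all(evaluate(substitute(sentence[2], sentence[1], c),
                            assignment, constants) for c in constants)
    if op == 'exists':   # true iff some instance of the scope is true
        return any(evaluate(substitute(sentence[2], sentence[1], c),
                            assignment, constants) for c in constants)
    return bool(assignment[sentence])   # ground relational sentence

# The truth assignment from the evaluation examples above.
tv = {('p', 'a'): 1, ('p', 'b'): 0,
      ('q', 'a', 'a'): 1, ('q', 'a', 'b'): 0,
      ('q', 'b', 'a'): 1, ('q', 'b', 'b'): 0}

s = ('forall', 'x', ('imp', ('p', 'x'), ('q', 'x', 'x')))
print(evaluate(s, tv, ['a', 'b']))   # True
```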
7.5 Satisfaction
As in Propositional Logic, it is in principle possible to build a truth table for any set of sentences in Relational Logic. This truth table can then be used to determine which truth assignments satisfy a given set of sentences.
As an example, let us assume we have a language with just two object constants a and b and two unary relation constants p and q. Now consider the sentences shown below, and assume our job is to find a truth assignment that satisfies these sentences.
p(a) ∨ p(b)
∀x.(p(x) ⇒ q(x))
∃x.q(x)
A truth table for this problem is shown below. Each of the four columns on the left represents one of the elements of the Herbrand base for this language. The three columns on the right represent our sentences.
p(a) | p(b) | q(a) | q(b) | p(a) ∨ p(b) | ∀x.(p(x) ⇒ q(x)) | ∃x.q(x)
1 | 1 | 1 | 1 | 1 | 1 | 1
1 | 1 | 1 | 0 | 1 | 0 | 1
1 | 1 | 0 | 1 | 1 | 0 | 1
1 | 1 | 0 | 0 | 1 | 0 | 0
1 | 0 | 1 | 1 | 1 | 1 | 1
1 | 0 | 1 | 0 | 1 | 1 | 1
1 | 0 | 0 | 1 | 1 | 0 | 1
1 | 0 | 0 | 0 | 1 | 0 | 0
0 | 1 | 1 | 1 | 1 | 1 | 1
0 | 1 | 1 | 0 | 1 | 0 | 1
0 | 1 | 0 | 1 | 1 | 1 | 1
0 | 1 | 0 | 0 | 1 | 0 | 0
0 | 0 | 1 | 1 | 0 | 1 | 1
0 | 0 | 1 | 0 | 0 | 1 | 1
0 | 0 | 0 | 1 | 0 | 1 | 1
0 | 0 | 0 | 0 | 0 | 1 | 0
Looking at the table, we see that there are twelve truth assignments that make the first sentence true, nine that make the second sentence true, twelve that make the third sentence true, and five that make them all true (rows 1, 5, 6, 9, and 11).
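These counts can be verified by brute force. The sketch below (our own illustrative code, not part of the text) enumerates all sixteen truth assignments over the Herbrand base and evaluates the three sentences directly.

```python
from itertools import product

base = ['p(a)', 'p(b)', 'q(a)', 'q(b)']
constants = ['a', 'b']

def holds(tv):
    """Truth values of the three sentences under truth assignment tv."""
    s1 = tv['p(a)'] or tv['p(b)']                                 # p(a) ∨ p(b)
    s2 = all(not tv['p(%s)' % c] or tv['q(%s)' % c]               # ∀x.(p(x) ⇒ q(x))
             for c in constants)
    s3 = any(tv['q(%s)' % c] for c in constants)                  # ∃x.q(x)
    return s1, s2, s3

# All 16 rows of the truth table, in the same 1-before-0 order.
rows = [dict(zip(base, bits)) for bits in product([1, 0], repeat=4)]

counts = [sum(1 for tv in rows if holds(tv)[i]) for i in range(3)]
print(counts)                                    # [12, 9, 12]
print(sum(1 for tv in rows if all(holds(tv))))   # 5
```

The counts agree with the table: twelve, nine, and twelve assignments satisfy the individual sentences, and five satisfy all three.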
7.6 Example - Sorority World
Consider the interpersonal relations of a small sorority. There are just four members - Abby, Bess, Cody, and Dana. Some of the girls like each other, but some do not. Our goal is to represent information about who likes whom.
The table below shows one set of possibilities. The checkmark in the first row here means that Abby likes Cody, while the absence of a checkmark means that Abby does not like the other girls (including herself). Bess likes Cody too. Cody likes everyone but herself. And Dana also likes the popular Cody.
     | Abby | Bess | Cody | Dana
Abby |      |      |  ✓   |
Bess |      |      |  ✓   |
Cody |  ✓   |  ✓   |      |  ✓
Dana |      |      |  ✓   |
In order to encode this information in Relational Logic, we adopt a vocabulary with four object constants (abby, bess, cody, dana) and one binary relation constant (likes).
If we had complete information about the likes and dislikes of the girls, we could completely characterize the state of affairs as a set of ground relational sentences or negations of ground relational sentences, like the ones shown below, with one literal for each member of the Herbrand base.
¬likes(abby,abby)    ¬likes(abby,bess)     likes(abby,cody)    ¬likes(abby,dana)
¬likes(bess,abby)    ¬likes(bess,bess)     likes(bess,cody)    ¬likes(bess,dana)
 likes(cody,abby)     likes(cody,bess)    ¬likes(cody,cody)     likes(cody,dana)
¬likes(dana,abby)    ¬likes(dana,bess)     likes(dana,cody)    ¬likes(dana,dana)
To make things more interesting, let's assume that we do not have complete information, only fragments of information about the girls' likes and dislikes. Let's see how we can encode such fragments in Relational Logic.
Let's start with a simple disjunction. Bess likes Cody or Dana. Encoding a sentence with a disjunctive noun phrase (such as Cody or Dana) is facilitated by first rewriting the sentence as a disjunction of simple sentences. Bess likes Cody or Bess likes Dana. In Relational Logic, we can express this fact as a simple disjunction with the two possibilities as disjuncts.
likes(bess,cody) ∨ likes(bess,dana)
Abby likes everyone Bess likes. Again, paraphrasing helps translate. If Bess likes a girl, then Abby also likes her. Since this is a fact about everyone, we use a universal quantifier.
∀y.(likes(bess,y) ⇒ likes(abby,y))
Cody likes everyone who likes her. In other words, if some girl likes Cody, then Cody likes that girl. Again, we use a universal quantifier.
∀y.(likes(y,cody) ⇒ likes(cody,y))
Bess likes somebody who likes her. The word somebody here is a tip-off that we need to use an existential quantifier.
∃y.(likes(bess,y) ∧ likes(y,bess))
Nobody likes herself. The use of the word nobody here suggests a negation. A good technique in such cases is to rewrite the English sentence as the negation of a positive version of the sentence before translating to Relational Logic.
¬∃x.likes(x,x)
Everybody likes somebody. Here we have a case requiring two quantifiers, one universal and one existential. The key to this case is getting the quantifiers in the right order. Reversing them leads to a very different statement.
∀x.∃y.likes(x,y)
There is someone everyone likes. The preceding sentence tells us that everyone likes someone, but that someone can be different for different people. This sentence tells us that everybody likes the same person.
∃y.∀x.likes(x,y)
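The difference in quantifier order can be checked directly against the likes table from the start of this section. In the sketch below (our own illustrative code, not part of the text), the relation is represented as a set of pairs, ∀x.∃y is a conjunction over an inner disjunction, and ∃y.∀x is a disjunction over an inner conjunction.

```python
girls = ['abby', 'bess', 'cody', 'dana']

# The complete likes relation from the table: everyone likes Cody,
# and Cody likes everyone but herself.
likes = {('abby', 'cody'), ('bess', 'cody'), ('dana', 'cody'),
         ('cody', 'abby'), ('cody', 'bess'), ('cody', 'dana')}

# ∀x.∃y.likes(x,y): everybody likes somebody.
print(all(any((x, y) in likes for y in girls) for x in girls))   # True

# ∃y.∀x.likes(x,y): there is someone everyone likes.
# False here: Cody is liked by the other three, but not by herself.
print(any(all((x, y) in likes for x in girls) for y in girls))   # False
```

So in this state of the world the ∀∃ sentence is true while the ∃∀ sentence is false, confirming that reversing the quantifiers produces a genuinely different statement.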
7.7 Example - Blocks World
The Blocks World is a popular application area for illustrating ideas in the field of Artificial Intelligence. A typical Blocks World scene is shown in Figure 1.
Figure 1 - One state of Blocks World.
Most people looking at this figure interpret it as a configuration of five toy blocks. Some people conceptualize the table on which the blocks are resting as an object as well; but, for simplicity, we ignore it here.
In order to describe this scene, we adopt a vocabulary with five object constants, as shown below, with one object constant for each of the five blocks in the scene. The intent here is for each of these object constants to represent the block marked with the corresponding capital letter in the scene.
{a, b, c, d, e}
In a spatial conceptualization of the Blocks World, there are numerous meaningful relations. For example, it makes sense to think about the relation that holds between two blocks if and only if one is resting on the other. In what follows, we use the relation constant on to refer to this relation. We might also think about the relation that holds between two blocks if and only if one is anywhere above the other, i.e. the first is resting on the second or is resting on a block that is resting on the second, and so forth. In what follows, we use the relation constant above to talk about this relation. There is the relation that holds of three blocks that are stacked one on top of the other. We use the relation constant stack as a name for this relation. We use the relation constant clear to denote the relation that holds of a block if and only if there is no block on top of it. We use the relation constant table to denote the relation that holds of a block if and only if that block is resting on the table. The set of relation constants corresponding to this conceptualization is shown below.
{on, above, stack, clear, table}
The arities of these relation constants are determined by their intended use. Since on is intended to denote a relation between two blocks, it has arity 2. Similarly, above has arity 2. The stack relation constant has arity 3. Relation constants clear and table each have arity 1.
Given this vocabulary, we can describe the scene in Figure 1 by writing ground literals that state which relations hold of which objects or groups of objects. Let's start with on. The following sentences tell us directly for each ground relational sentence whether it is true or false.
¬on(a,a)    on(a,b)   ¬on(a,c)   ¬on(a,d)   ¬on(a,e)
¬on(b,a)   ¬on(b,b)    on(b,c)   ¬on(b,d)   ¬on(b,e)
¬on(c,a)   ¬on(c,b)   ¬on(c,c)   ¬on(c,d)   ¬on(c,e)
¬on(d,a)   ¬on(d,b)   ¬on(d,c)   ¬on(d,d)    on(d,e)
¬on(e,a)   ¬on(e,b)   ¬on(e,c)   ¬on(e,d)   ¬on(e,e)
We can do the same for the other relations. However, there is an easier way. Each of the remaining relations can be defined in terms of on. These definitions together with our facts about the on relation logically entail every other ground relational sentence or its negation. Hence, given these definitions, we do not need to write out any additional data.
A block satisfies the clear relation if and only if there is nothing on it.
∀y.(clear(y) ⇔ ¬∃x.on(x,y))
A block satisfies the table relation if and only if it is not on any block.
∀x.(table(x) ⇔ ¬∃y.on(x,y))
Three blocks satisfy the stack relation if and only if the first is on the second and the second is on the third.
∀x.∀y.∀z.(stack(x,y,z) ⇔ on(x,y) ∧ on(y,z))
The above relation is a bit tricky to define correctly. One block is above another block if and only if the first block is on the second block or it is on another block that is above the second block. Given a complete definition for the on relation, this axiom determines a unique above relation.
∀x.∀z.(above(x,z) ⇔ on(x,z) ∨ ∃y.(on(x,y) ∧ above(y,z)))
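To see that the on facts really do determine the other relations, here is a Python sketch (our own illustrative code, not part of the text) that derives clear, table, stack, and above from the complete on relation for the scene in Figure 1, where a is on b, b is on c, and d is on e. Since above is defined recursively, we compute it as the least relation closed under its defining rule, iterating until no new pairs are produced.

```python
# Complete on facts for the Figure 1 scene.
on = {('a', 'b'), ('b', 'c'), ('d', 'e')}
blocks = ['a', 'b', 'c', 'd', 'e']

# clear(y) ⇔ ¬∃x.on(x,y): nothing is on the block.
clear = {y for y in blocks if not any((x, y) in on for x in blocks)}

# table(x) ⇔ ¬∃y.on(x,y): the block is not on any block.
table = {x for x in blocks if not any((x, y) in on for y in blocks)}

# stack(x,y,z) ⇔ on(x,y) ∧ on(y,z).
stack = {(x, y, z) for (x, y) in on for (y2, z) in on if y == y2}

# above(x,z) ⇔ on(x,z) ∨ ∃y.(on(x,y) ∧ above(y,z)):
# iterate the rule to a fixed point (a transitive closure of on).
above = set(on)
changed = True
while changed:
    changed = False
    for (x, y) in list(above):
        for (y2, z) in on:
            if y == y2 and (x, z) not in above:
                above.add((x, z))
                changed = True

print(sorted(clear))   # ['a', 'd']
print(sorted(table))   # ['c', 'e']
print(sorted(stack))   # [('a', 'b', 'c')]
print(sorted(above))   # [('a', 'b'), ('a', 'c'), ('b', 'c'), ('d', 'e')]
```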
One advantage to defining relations in terms of other relations is economy. If we record on information for every object and encode the relationship between the on relation and these other relations, there is no need to record any ground relational sentences for those relations.
Another advantage is that these general sentences apply to Blocks World scenes other than the one pictured here. It is possible to create a Blocks World scene in which none of the on sentences we have listed is true, but these general definitions are still correct.
7.8 Example - Modular Arithmetic
In this example, we show how to characterize Modular Arithmetic in Relational Logic. In Modular Arithmetic, there are only finitely many objects. For example, in Modular Arithmetic with modulus 4, we would have just four integers - 0, 1, 2, 3 - and that's all. Our goal here is to define the addition relation. Admittedly, this is a modest goal; but, once we see how to do this, we can use the same approach to define other arithmetic relations.
Let's start with the same relation, which is true of every number and itself and is false for numbers that are different. We can completely characterize the same relation by writing ground relational sentences, one positive sentence for each number and itself and negative sentences for all of the other cases.
 same(0,0)   ¬same(0,1)   ¬same(0,2)   ¬same(0,3)
¬same(1,0)    same(1,1)   ¬same(1,2)   ¬same(1,3)
¬same(2,0)   ¬same(2,1)    same(2,2)   ¬same(2,3)
¬same(3,0)   ¬same(3,1)   ¬same(3,2)    same(3,3)
Now, let's axiomatize the next relation, which, for each number, gives the next larger number, wrapping back to 0 after we reach 3.
next(0,1)
next(1,2)
next(2,3)
next(3,0)
Properly, we should write out the negative literals as well. However, we can save that work by writing a single axiom asserting that next is a functional relation, i.e. each number has just one successor.
∀x.∀y.∀z.(next(x,y) ∧ next(x,z) ⇒ same(y,z))
In order to see why this saves us the work of writing out the negative literals, we can write this axiom in the equivalent form shown below.
∀x.∀y.∀z.(next(x,y) ∧ ¬same(y,z) ⇒ ¬next(x,z))
The addition table for Modular Arithmetic is the usual addition table for arbitrary numbers except that we wrap around whenever we get past 3. For such a small arithmetic, it is easy to write out the ground relational sentences for addition, as shown below.
plus(0,0,0)   plus(1,0,1)   plus(2,0,2)   plus(3,0,3)
plus(0,1,1)   plus(1,1,2)   plus(2,1,3)   plus(3,1,0)
plus(0,2,2)   plus(1,2,3)   plus(2,2,0)   plus(3,2,1)
plus(0,3,3)   plus(1,3,0)   plus(2,3,1)   plus(3,3,2)
As with next, we avoid writing out the negative literals by writing a suitable functionality axiom for plus.
∀x.∀y.∀z.∀w.(plus(x,y,z) ∧ ¬same(z,w) ⇒ ¬plus(x,y,w))
That's one way to do things, but we can do better. Rather than writing out all of those relational sentences, we can use Relational Logic to define plus in terms of same and next and use that axiomatization to deduce the ground relational sentences. The definition is shown below. First, we have an identity axiom: adding 0 to any number results in the same number. Second, we have a successor axiom: if z is the sum of x and y, then the sum of the successor of x and y is the successor of z. Finally, we have our functionality axiom once again.
∀y.plus(0,y,y)
∀x.∀y.∀z.∀x2.∀z2.(plus(x,y,z) ∧ next(x,x2) ∧ next(z,z2) ⇒ plus(x2,y,z2))
∀x.∀y.∀z.∀w.(plus(x,y,z) ∧ ¬same(z,w) ⇒ ¬plus(x,y,w))
One advantage of doing things this way is economy. With these sentences, we do not need the ground relational sentences about plus given above. They are all logically entailed by our sentences about next and the definitional sentences. A second advantage is versatility. Our sentences define plus in terms of next for arithmetic with any modulus, not just modulus 4.
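We can check that the definition really does entail all sixteen ground plus facts. The sketch below (our own illustrative code, not part of the text) starts from the identity axiom and applies the successor axiom as a derivation rule until no new facts are produced.

```python
numbers = ['0', '1', '2', '3']

# The next relation: each number's successor, wrapping back to 0 after 3.
next_rel = {('0', '1'), ('1', '2'), ('2', '3'), ('3', '0')}

# Identity axiom: plus(0, y, y) for every number y.
plus = {('0', y, y) for y in numbers}

# Successor axiom: plus(x,y,z) ∧ next(x,x2) ∧ next(z,z2) ⇒ plus(x2,y,z2).
# Apply the rule repeatedly until a fixed point is reached.
changed = True
while changed:
    changed = False
    for (x, y, z) in list(plus):
        for (x1, x2) in next_rel:
            for (z1, z2) in next_rel:
                if x1 == x and z1 == z and (x2, y, z2) not in plus:
                    plus.add((x2, y, z2))
                    changed = True

print(len(plus))                  # 16: every ground plus fact is derived
print(('3', '1', '0') in plus)    # True: 3 + 1 = 0 with modulus 4
```

Because the derivation only consults next_rel, changing the modulus amounts to changing numbers and next_rel; the axioms themselves stay the same, which is the versatility advantage noted above.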