## 12.1 Introduction

In our tour of resolution, we look first at unification, the process of finding substitutions that make expressions identical. We then look at clausal form, the Resolution Principle itself, and its use in reasoning.

## 12.2 Clausal Form

As with Propositional Resolution, Resolution works only on expressions in *clausal form*. The procedure for converting relational sentences to clausal form is similar to that for Propositional Logic. Some of the rules are the same. However, there are a few additional rules to deal with the presence of variables and quantifiers. The conversion rules are summarized below and should be applied in order. In the first step (Implications out), we eliminate all occurrences of the ⇒, ⇐, and ⇔ operators by substituting equivalent sentences involving only the ∧, ∨, and ¬ operators.
In the second step (Negations in), negations are distributed over other logical operators and quantifiers until each negation applies directly to an atomic sentence. The following replacement rules do the job.
In the third step (Standardize variables), we rename variables so that each quantifier has a unique variable, i.e. the same variable is not quantified more than once within the same sentence. The following transformation is an example.
In the fourth step (Existentials out), we eliminate all existential quantifiers. The method for doing this is a little complicated, and we describe it in two stages. If an existential quantifier does not occur within the scope of a universal quantifier, we simply drop the quantifier and replace all occurrences of the quantified variable by a new constant, i.e. one that does not occur anywhere else in our database. The constant used to replace the existential variable in this case is called a *Skolem constant*.
If an existential quantifier is within the scope of any universal quantifiers, there is the possibility that the value of the existential variable depends on the values of the associated universal variables. Consequently, we cannot replace the existential variable with a constant. Instead, the general rule is to drop the existential quantifier and to replace the associated variable by a term formed from a new function symbol applied to the variables associated with the enclosing universal quantifiers. Any function defined in this way is called a *Skolem function*.
In the fifth step (Alls out), we drop all universal quantifiers. Because the remaining variables at this point are universally quantified, this does not introduce any ambiguities.
In the sixth step (Disjunctions in), we put the expression into *conjunctive normal form*, i.e. a conjunction of disjunctions of literals.
In the seventh step (Operators out), we eliminate operators by separating each conjunction into its conjuncts and writing each disjunction as a separate clause.
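The trickiest of these steps is Existentials out. The following is a minimal Python sketch of that step alone, under a hypothetical representation: formulas are nested tuples such as `('forall', '?x', body)` and `('exists', '?y', body)`, atoms are tuples like `('loves', '?x', '?y')`, variables are '?'-prefixed strings, and Skolem symbols `sk1`, `sk2`, ... are generated fresh. It is an illustration, not a full converter.

```python
# Sketch of the "Existentials out" step (step 4) on a hypothetical
# tuple representation of formulas.
from itertools import count

_fresh = count(1)  # generator of Skolem symbol indices: sk1, sk2, ...

def substitute(expr, var, term):
    """Replace every occurrence of var in expr by term (assumes the
    variables were already standardized apart in step 3)."""
    if expr == var:
        return term
    if isinstance(expr, tuple):
        return tuple(substitute(e, var, term) for e in expr)
    return expr

def skolemize(formula, universals=()):
    """Drop existential quantifiers, replacing each existential variable
    with a Skolem constant (no enclosing universals) or a Skolem function
    applied to the enclosing universal variables."""
    op = formula[0]
    if op == 'forall':
        _, var, body = formula
        return ('forall', var, skolemize(body, universals + (var,)))
    if op == 'exists':
        _, var, body = formula
        sk = f'sk{next(_fresh)}'
        term = sk if not universals else (sk,) + universals
        return skolemize(substitute(body, var, term), universals)
    if op == 'not':
        return ('not', skolemize(formula[1], universals))
    if op in ('and', 'or'):
        return (op,) + tuple(skolemize(f, universals) for f in formula[1:])
    return formula  # an atom: nothing left to do

# forall x. exists y. loves(x, y)  ==>  forall x. loves(x, sk1(x))
result = skolemize(('forall', '?x', ('exists', '?y', ('loves', '?x', '?y'))))
print(result)  # ('forall', '?x', ('loves', '?x', ('sk1', '?x')))
```

Because the existential lies inside a universal, the variable is replaced by a Skolem function of the enclosing universal variable rather than a constant.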
As an example of this conversion process, consider the problem of transforming the following expression to clausal form. The initial expression appears on the top line, and the expressions on the labeled lines are the results of the corresponding steps of the conversion procedure.
Here is another example. In this case, the starting sentence is almost the same. The only difference is the leading ¬, but the result looks quite different.
In Propositional Logic, the clause set corresponding to any sentence is logically equivalent to that sentence. In Relational Logic, this is not necessarily the case. The clausal form of a sentence containing an existential quantifier contains a Skolem term and so is not logically equivalent to the original sentence. However, the clausal form is satisfiable if and only if the original sentence is satisfiable, and that is all we need for the refutation-based reasoning described later in this chapter.

## 12.3 Unification

What differentiates Resolution from propositional resolution is unification. In propositional resolution, two clauses resolve if they contain complementary literals, i.e. the positive literal is identical to the target of the negative literal. The same idea underlies Resolution, except that the criterion for complementarity is relaxed. The positive literal does not need to be identical to the target of the negative literal; it is sufficient that the two can be made identical by substitutions for their variables.

Unification is the process of determining whether two expressions can be *unified*, i.e. made identical by appropriate substitutions for their variables. A *substitution* is a finite mapping of variables to terms, written as a set of replacement rules like the one shown below.

{x←a, y←f(b), z←v}
The variables being replaced together constitute the *domain* of the substitution, and the terms replacing them constitute the *range*. A substitution is *pure* if and only if none of its replacement terms contains a variable from its domain. The substitution shown above is pure. By contrast, the substitution below is impure, because the replacement for z contains the domain variable x.

{x←a, y←f(b), z←x}
The result of applying a substitution σ to an expression φ is the expression φσ obtained from the original expression by replacing every occurrence of every variable in the domain of the substitution by the term with which it is associated.
Note that, if a substitution is pure, application is idempotent, i.e. applying a substitution a second time has no effect.
However, this is not the case for impure substitutions, as illustrated by the following example. Applying the substitution once leads to an expression containing an occurrence of a domain variable, so applying the substitution a second time changes the expression again.
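The pure/impure distinction can be made concrete in code. Below is a minimal Python sketch of substitution application, under a hypothetical convention: variables are '?'-prefixed strings, constants are plain strings, and compound expressions are tuples.

```python
# Substitution application on a hypothetical tuple representation.

def apply_sub(expr, sub):
    """Replace every variable of expr in the domain of sub by its term."""
    if isinstance(expr, str):
        return sub.get(expr, expr)
    return tuple(apply_sub(e, sub) for e in expr)

expr = ('p', '?x', '?y', '?z')
pure = {'?x': 'a', '?y': ('f', 'b'), '?z': '?v'}    # range avoids the domain
impure = {'?x': 'a', '?y': ('f', 'b'), '?z': '?x'}  # ?x is also being replaced

# A pure substitution is idempotent: a second application changes nothing.
once_pure = apply_sub(expr, pure)                   # ('p', 'a', ('f', 'b'), '?v')
assert apply_sub(once_pure, pure) == once_pure

# An impure one need not be: the ?x it introduces gets replaced next time.
once = apply_sub(expr, impure)                      # ('p', 'a', ('f', 'b'), '?x')
twice = apply_sub(once, impure)                     # ('p', 'a', ('f', 'b'), 'a')
assert once != twice
```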
Given two or more substitutions, it is possible to define a single substitution, called their *composition*, that has the same effect as applying those substitutions in sequence. Computing the composition of a substitution σ and a substitution τ is easy: we apply τ to the replacements of σ and then add to the result the rules of τ for variables that are not in the domain of σ. As an example, consider the composition shown below. In the right hand side of the first equation, we have applied the second substitution to the replacements in the first substitution. In the second equation, we have combined the rules from this new substitution with the non-conflicting rules from the second substitution.

- {*x*←*a*, *y*←*f*(*u*), *z*←*v*}{*u*←*d*, *v*←*e*, *z*←*g*}
- = {*x*←*a*, *y*←*f*(*d*), *z*←*e*}{*u*←*d*, *v*←*e*, *z*←*g*}
- = {*x*←*a*, *y*←*f*(*d*), *z*←*e*, *u*←*d*, *v*←*e*}
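The two-step recipe can be sketched in Python, under a hypothetical convention in which variables are '?'-prefixed strings and compound terms are tuples.

```python
# Composition of substitutions: apply the second substitution to the
# replacements of the first, then add the non-conflicting rules of the
# second. Hypothetical tuple-and-'?'-variable representation.

def apply_sub(expr, sub):
    if isinstance(expr, str):
        return sub.get(expr, expr)
    return tuple(apply_sub(e, sub) for e in expr)

def compose(s1, s2):
    """Return the substitution equivalent to applying s1 and then s2."""
    composed = {v: apply_sub(t, s2) for v, t in s1.items()}
    composed.update({v: t for v, t in s2.items() if v not in s1})
    return composed

s1 = {'?x': 'a', '?y': ('f', '?u'), '?z': '?v'}
s2 = {'?u': 'd', '?v': 'e', '?z': 'g'}
s12 = compose(s1, s2)
print(s12)  # {'?x': 'a', '?y': ('f', 'd'), '?z': 'e', '?u': 'd', '?v': 'e'}

# Sanity check: composing first and applying once agrees with applying
# the two substitutions in sequence.
expr = ('p', '?x', '?y', '?z', '?u', '?v')
assert apply_sub(apply_sub(expr, s1), s2) == apply_sub(expr, s12)
```

The printed result matches the worked example above: the rule z←g is dropped because z is already in the domain of the first substitution.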
It is noteworthy that composition does not necessarily preserve substitutional purity. The composition of two impure substitutions may be pure, and the composition of two pure substitutions may be impure. This problem does not occur if the substitutions are *composable*. A substitution σ is composable with a substitution τ if and only if no variable in the domain of σ occurs in any replacement term of τ. For example, the following substitutions are composable.

{x←a, y←b, z←v}{x←u, v←b}
By contrast, the following substitutions are noncomposable. Here, the variable x occurs both in the domain of the first substitution and in a replacement term of the second.

{x←a, y←b, z←v}{x←u, v←x}
The importance of composability is that it ensures preservation of purity. The composition of composable pure substitutions must be pure. In the sequel, we look only at compositions of composable pure substitutions.

A substitution σ is a *unifier* for expressions φ and ψ if and only if φσ = ψσ, in which case φ and ψ are said to be *unifiable*. The expressions p(x, y) and p(a, v) have a unifier, e.g. {x←a, y←b, v←b}, as shown below.

p(x, y){x←a, y←b, v←b} = p(a, b)
p(a, v){x←a, y←b, v←b} = p(a, b)
Note that, although this substitution unifies the two expressions, it is not the only unifier. We do not have to substitute b for both y and v; we could have used any other term, or we could simply have replaced y by v. In considering these alternatives, it should be clear that some substitutions are more general than others. We say that a substitution σ is as general as or more general than a substitution τ if and only if there is a third substitution δ such that τ = σδ. For example, {x←a, y←v} is more general than {x←a, y←f(c)}, since composing the former with {v←f(c)} yields the latter extended with one additional rule.

{x←a, y←v}{v←f(c)} = {x←a, y←f(c), v←f(c)}
In resolution, we are interested only in unifiers with maximum generality. A *most general unifier* of two expressions is a unifier σ such that, for every unifier τ of those expressions, there is a substitution δ with τ = σδ. Although it is possible for two expressions to have more than one most general unifier, all of these most general unifiers are structurally the same, i.e. they are unique up to variable renaming. For example, p(x) and p(y) can be unified by either {x←y} or {y←x}, and each is obtainable from the other by renaming.

One good thing about our language is that there is a simple and inexpensive procedure for computing a most general unifier of any two expressions if it exists. The procedure assumes a representation of expressions as sequences of subexpressions. For example, an expression of the form p(a, f(b)) is viewed as a sequence whose elements are p, a, and the subsequence representing the term f(b).

We start the procedure with two expressions and a substitution, which is initially the empty substitution. We then recursively process the two expressions, comparing the subexpressions at each point. Along the way, we expand the substitution with variable assignments as described below. If we fail to unify any pair of subexpressions at any point in this process, the procedure as a whole fails. If we finish this recursive comparison of the expressions, the procedure as a whole succeeds, and the accumulated substitution at that point is the most general unifier.

In comparing two subexpressions, we first apply the substitution to each of the two expressions; we then execute the following procedure on the two modified expressions.

1. If the modified expressions are identical, then nothing more needs to be done, and the procedure succeeds.
2. If the modified expressions are not identical and one is a variable, we check whether the second modified expression is a term containing the variable. If the variable occurs within the expression, we fail; otherwise, we update our substitution to the composition of the old substitution and a new substitution in which we bind the variable to the second modified expression.
3. If the modified expressions are not identical and neither is a variable and at least one is a constant, then we fail, since there is no way to make them look alike.
4. The only remaining possibility is that the two modified expressions are both sequences. In this case, we simply iterate across the expressions, comparing corresponding subexpressions as described above.

As an example, consider the computation of the most general unifier for the two expressions introduced above, p(x, y) and p(a, v).
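The procedure can be sketched compactly in Python, including the occurs check, under a hypothetical representation in which variables are '?'-prefixed strings, constants are plain strings, and expressions are tuples of subexpressions.

```python
# A compact sketch of the unification procedure described above.

def is_var(x):
    return isinstance(x, str) and x.startswith('?')

def apply_sub(expr, sub):
    """Apply sub to expr, chasing chains of variable bindings."""
    if is_var(expr):
        return apply_sub(sub[expr], sub) if expr in sub else expr
    if isinstance(expr, tuple):
        return tuple(apply_sub(e, sub) for e in expr)
    return expr

def occurs(var, expr):
    return var == expr or (isinstance(expr, tuple)
                           and any(occurs(var, e) for e in expr))

def unify(x, y, sub=None):
    """Return a most general unifier of x and y, or None on failure."""
    sub = {} if sub is None else sub
    x, y = apply_sub(x, sub), apply_sub(y, sub)
    if x == y:                                   # case 1: identical
        return sub
    if is_var(x):                                # case 2: bind a variable
        return None if occurs(x, y) else {**sub, x: y}
    if is_var(y):
        return None if occurs(y, x) else {**sub, y: x}
    if not (isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y)):
        return None                              # case 3: constants clash
    for xi, yi in zip(x, y):                     # case 4: walk the sequences
        sub = unify(xi, yi, sub)
        if sub is None:
            return None
    return sub

mgu = unify(('p', '?x', '?y'), ('p', 'a', '?v'))
print(mgu)                                       # {'?x': 'a', '?y': '?v'}
print(unify(('p', '?x'), ('p', ('f', '?x'))))    # None: the occurs check fires
```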
As another example, consider the process of unifying two expressions in which the same variable occurs more than once. In this case, the binding chosen while comparing one occurrence constrains the comparison of the remaining occurrences.
One especially noteworthy part of the unification procedure is the test for whether a variable occurs within an expression before the variable is bound to that expression. This test is called an *occurs check*. Without it, the procedure could bind a variable to a term containing that variable, e.g. binding x to f(x), which no substitution can make identical.

## 12.4 Resolution Principle

The Relational Resolution Principle is analogous to the Propositional Resolution Principle. The main difference is the use of unification to match literals before applying the rule. Although the rule is simple, there are a couple of complexities, so we start with a simple version and then refine it to deal with these complexities.

A simple version of the Resolution Principle for Relational Logic is shown below. Given a clause with a literal φ and a second clause with a literal ¬ψ such that φ and ψ have a most general unifier σ, we can derive a conclusion by applying σ to the clause consisting of the remaining literals from the two original clauses.
Consider the example shown below. The first clause contains a positive literal, and the second contains a complementary negative literal whose target unifies with it. Applying the most general unifier to the remaining literals of the two clauses produces the resolvent.
Unfortunately, this simple version of the Resolution Principle is not quite good enough. Consider the two clauses shown below. Given the meaning of these two clauses, it should be possible to resolve them to produce the empty clause. However, the two atomic sentences do not unify. The variable the two clauses share cannot be bound consistently within a single unification, even though each clause is implicitly quantified separately.
Fortunately, this problem can easily be fixed by extending the Resolution Principle slightly as shown below. Before trying to resolve two clauses, we select one of the clauses and rename any variables the clause has in common with the other clause.
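The renaming step can be sketched in Python, using a hypothetical representation in which a clause is a tuple of literals, a negative literal is `('not', atom)`, and variables are '?'-prefixed strings.

```python
# Sketch of the renaming (standardize-apart) extension: give one clause
# fresh variables before resolving.
from itertools import count

_fresh = count(1)

def variables_in(expr):
    if isinstance(expr, str):
        return {expr} if expr.startswith('?') else set()
    return set().union(*(variables_in(e) for e in expr)) if expr else set()

def rename(clause):
    """Return a copy of clause in which every variable is fresh."""
    mapping = {v: f'?v{next(_fresh)}' for v in sorted(variables_in(clause))}
    def walk(expr):
        if isinstance(expr, str):
            return mapping.get(expr, expr)
        return tuple(walk(e) for e in expr)
    return walk(clause)

# A hypothetical pair: {p(x)} and {~p(f(x))} share x, which blocks their
# unification; renaming one clause removes the accidental clash.
renamed = rename((('not', ('p', ('f', '?x'))),))
print(renamed)  # (('not', ('p', ('f', '?v1'))),)
```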
Renaming solves this problem. Unfortunately, we are still not quite done. There is one more technicality that must be addressed to finish the story. As stated, even with the extension mentioned above, the rule is not quite good enough. Given the clauses shown below, we should be able to infer the empty clause {}; however, this is not possible with the preceding definition. The clauses can be resolved in various ways, but the result is never the empty clause.
The good news is that we can solve this additional problem with one last modification to our definition of the Resolution Principle. If a subset of the literals in a clause Φ has a most general unifier γ, then the clause Φ' obtained by applying γ to Φ is called a *factor* of Φ. Using the notion of factors, we can give a complete definition for the Resolution Principle: given a factor of one clause containing a literal φ and a factor of a second clause (with variables renamed apart) containing a literal ¬ψ such that φ and ψ have a most general unifier σ, we can derive the result of applying σ to the remaining literals of the two factors.
Using this enhanced definition of resolution, we can solve the problem mentioned above. Each premise clause has a factor in which its two unifiable literals collapse into one; the resulting unit clauses then resolve with each other to produce the empty clause.

## 12.5 Resolution Reasoning

Reasoning with the Resolution Principle is analogous to reasoning with the Propositional Resolution Principle. We start with premises; we apply the Resolution Principle to those premises; we apply the rule to the results of those applications; and so forth until we get to our desired conclusion or until we run out of things to do. As with Propositional Resolution, we define a *resolution derivation* of a conclusion from a set of premises to be a sequence of clauses in which each item is either a premise or the result of applying the Resolution Principle to earlier items in the sequence.

As an example, consider a problem in the area of kinship relations. Suppose we know that Art is the parent of Bob and Bud; suppose that Bob is the parent of Cal; and suppose that Bud is the parent of Coe. Suppose we also know that grandparents are parents of parents. Starting with these premises, we can use resolution to conclude that Art is the grandparent of Coe. The derivation is shown below. We start with our five premises: four simple clauses for the four facts about the parent relation and one clause capturing the rule that grandparents are parents of parents.
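The key steps of the grandparent derivation can be mimicked in code. The sketch below uses the hypothetical conventions from earlier in this chapter's sketches (tuple literals, `('not', atom)` for negation, '?'-prefixed variables); the occurs check and factoring are omitted for brevity.

```python
# Sketch of two resolution steps deriving grandparent(art, coe).

def is_var(x):
    return isinstance(x, str) and x.startswith('?')

def subst(e, s):
    if is_var(e):
        return subst(s[e], s) if e in s else e
    return tuple(subst(x, s) for x in e) if isinstance(e, tuple) else e

def unify(x, y, s):
    if s is None:
        return None
    x, y = subst(x, s), subst(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            s = unify(xi, yi, s)
            if s is None:
                return None
        return s
    return None

def resolve(c1, c2):
    """Yield resolvents of clauses c1 and c2 (no renaming or factoring)."""
    for lit in c1:
        for other in c2:
            pos, neg = (lit, other) if other[0] == 'not' else (other, lit)
            if neg[0] != 'not' or pos[0] == 'not':
                continue
            s = unify(pos, neg[1], {})
            if s is not None:
                rest = [l for l in c1 if l != lit] + [l for l in c2 if l != other]
                yield tuple(subst(l, s) for l in rest)

rule = (('not', ('parent', '?x', '?y')),   # grandparents are parents of parents
        ('not', ('parent', '?y', '?z')),
        ('grandparent', '?x', '?z'))
step1 = next(resolve(rule, (('parent', 'art', 'bud'),)))
step2 = next(resolve(step1, (('parent', 'bud', 'coe'),)))
print(step2)  # (('grandparent', 'art', 'coe'),)
```

Resolving the rule with parent(art, bud) binds x and y; resolving the result with parent(bud, coe) binds z and leaves the desired unit clause.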
One thing to notice about this derivation is that there are some dead ends. We first tried resolving the fact about Art and Bob before getting around to trying the fact about Art and Bud. Resolution does not eliminate all search. However, at no time did we ever have to make an arbitrary assumption or an arbitrary choice of a binding for a variable. The absence of such arbitrary choices is why Resolution is so much more focused than natural deduction systems like Fitch.

Another worthwhile observation about Resolution is that, unlike Fitch, Resolution frequently terminates even when there is no derivation of the desired result. Suppose, for example, we were interested in deriving a clause that does not follow from the premises by resolution steps alone; after generating every possible resolvent, the procedure would simply stop. Unfortunately, like Propositional Resolution, Resolution is not generatively complete: there are logical consequences of a set of premises that cannot be derived directly with the Resolution Principle. Problems like this one are solved by negating the goal and demonstrating that the resulting set of sentences is unsatisfiable.
## 12.6 Unsatisfiability

One common use of resolution is in demonstrating unsatisfiability. In clausal form, a contradiction takes the form of the empty clause, which is equivalent to a disjunction of no literals. Thus, to automate the determination of unsatisfiability, all we need to do is use resolution to derive consequences from the set to be tested, terminating whenever the empty clause is generated.

Let's start with a simple example. See the derivation below. We have four premises. The derivation in this case is particularly easy. We resolve the first clause with the second to get the clause shown on line 5. Next, we resolve the result with the third clause to get the unit clause on line 6. Finally, resolving this unit clause with the remaining premise yields the empty clause.
Here is a more complicated derivation, one that illustrates renaming and factoring. Again, we have four premises. Line 5 results from resolution between the clauses on lines 1 and 3. This one is easy. Line 6 results from resolution between the clauses on lines 2 and 4. In this case, renaming is necessary in order for the unification to take place. Line 7 results from renaming and factoring the clause on line 5 and resolving with the clause on line 6. Finally line 8 results from factoring line 5 again and resolving with the clause on line 7. Note that we cannot just factor 5 and factor 6 and resolve the results in one step. Try it and see what happens.
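The factoring operation used in this derivation can be sketched in Python, again under the hypothetical conventions of the earlier sketches ('?'-variables, tuple literals, `('not', atom)` for negation; occurs check omitted).

```python
# Sketch of factoring: collapse unifiable literals within one clause.

def is_var(x):
    return isinstance(x, str) and x.startswith('?')

def subst(e, s):
    if is_var(e):
        return subst(s[e], s) if e in s else e
    return tuple(subst(x, s) for x in e) if isinstance(e, tuple) else e

def unify(x, y, s):
    if s is None:
        return None
    x, y = subst(x, s), subst(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            s = unify(xi, yi, s)
            if s is None:
                return None
        return s
    return None

def factors(clause):
    """Yield the factors obtained by unifying a pair of literals."""
    for i in range(len(clause)):
        for j in range(i + 1, len(clause)):
            s = unify(clause[i], clause[j], {})
            if s is not None:
                merged = []
                for lit in clause:
                    lit = subst(lit, s)
                    if lit not in merged:   # duplicates collapse
                        merged.append(lit)
                yield tuple(merged)

# {p(u), p(v)} has the factor {p(v)}; {~p(x), ~p(y)} has the factor {~p(y)}.
print(list(factors((('p', '?u'), ('p', '?v')))))                    # [(('p', '?v'),)]
print(list(factors((('not', ('p', '?x')), ('not', ('p', '?y'))))))  # [(('not', ('p', '?y')),)]
```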
In demonstrating unsatisfiability, Resolution and Fitch without DC are equally powerful. Given a set of sentences, Resolution can derive the empty clause from the clausal form of the sentences if and only if Fitch can find a proof of a contradiction. The benefit of using Resolution is that the search space is smaller.

## 12.7 Logical Entailment

As with Propositional Logic, we can use a test for unsatisfiability to test logical entailment as well. Suppose we wish to show that the set of sentences Δ logically entails the formula φ. We can do this by finding a proof of φ from Δ, i.e. by establishing Δ |- φ. By the refutation theorem, we can establish that Δ |- φ by showing that Δ∪{¬φ} is unsatisfiable. Thus, if we show that the set of formulas Δ∪{¬φ} is unsatisfiable, we have demonstrated that Δ logically entails φ.

To apply this technique of establishing logical entailment by establishing unsatisfiability using resolution, we first negate φ and add it to Δ to yield Δ'. We then convert Δ' to clausal form and apply resolution. If the empty clause is produced, the original Δ' was unsatisfiable, and we have demonstrated that Δ logically entails φ. This process is called a *resolution refutation*.

As an example of using Resolution to determine logical entailment, let's consider a case we saw earlier. The premises are shown below. We know that everybody loves somebody and everybody loves a lover.
Our goal is to show that everybody loves everybody.

∀x.∀y.loves(x,y)
In order to solve this problem, we add the negation of our desired conclusion to the premises and convert to clausal form, leading to the clauses shown below. Note the use of a Skolem function in the first clause and the use of Skolem constants in the clause derived from the negated goal.
Starting from these initial clauses, we can use resolution to derive the empty clause and thus prove the result.
As another example of resolution, once again consider the problem of Harry and Ralph introduced in the preceding chapter. We know that every horse can outrun every dog. Some greyhounds can outrun every rabbit. Greyhounds are dogs. The relationship of being faster is transitive. Harry is a horse. Ralph is a rabbit.
We desire to prove that Harry is faster than Ralph. In order to do this, we negate the desired conclusion.

¬f(harry, ralph)
To do the proof, we take the premises and the negated conclusion and convert to clausal form. The resulting clauses are shown below. Note that the second premise has turned into two clauses.
From these clauses, we can derive the empty clause, as shown in the following derivation.
Don't be misled by the simplicity of these examples. Resolution can and has been used in proving complex mathematical theorems, in proving the correctness of programs, and in various other applications.

## 12.8 Answer Extraction

In a previous section, we saw how to use resolution in answering true-or-false questions (e.g. *Is Art the grandparent of Coe?*). In this section, we show how to use resolution in answering fill-in-the-blank questions as well (e.g. *Who is a parent of Jon?*).

A fill-in-the-blank question is a sentence with free variables specifying the blanks to be filled in. The goal is to find bindings for the free variables such that the database logically entails the sentence obtained by substituting the bindings into the original question. For example, to ask about Jon's parent, we would write the question with a free variable in the parent position.

An *answer literal* for a question is a literal formed from a new relation constant applied to the free variables of the question. In converting the negated question to clausal form, we append the answer literal to each resulting clause; the answer literal then records the bindings made for the question's variables during the refutation.

To get answers, we use resolution as described above, except that we change the termination test. Rather than waiting for the empty clause to be produced, the procedure halts as soon as it derives a clause consisting of only answer literals. The following resolution derivation shows how we compute the answer to a question of this sort.
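The mechanism can be sketched in code, reusing the hypothetical conventions of the earlier sketches ('?'-variables, tuple literals, `('not', atom)` for negation). Here `goal` is the answer-literal relation, and the fact parent(art, jon) is invented for illustration.

```python
# Sketch of answer extraction via resolution with answer literals.

def is_var(x):
    return isinstance(x, str) and x.startswith('?')

def subst(e, s):
    if is_var(e):
        return subst(s[e], s) if e in s else e
    return tuple(subst(x, s) for x in e) if isinstance(e, tuple) else e

def unify(x, y, s):
    if s is None:
        return None
    x, y = subst(x, s), subst(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            s = unify(xi, yi, s)
            if s is None:
                return None
        return s
    return None

def resolve(c1, c2):
    for lit in c1:
        for other in c2:
            pos, neg = (lit, other) if other[0] == 'not' else (other, lit)
            if neg[0] != 'not' or pos[0] == 'not':
                continue
            s = unify(pos, neg[1], {})
            if s is not None:
                rest = [l for l in c1 if l != lit] + [l for l in c2 if l != other]
                yield tuple(subst(l, s) for l in rest)

# Question "who is a parent of Jon?": the negated question carries an
# answer literal recording the binding for its free variable.
question = (('not', ('parent', '?x', 'jon')), ('goal', '?x'))
fact = (('parent', 'art', 'jon'),)

answer = next(resolve(question, fact))
assert all(lit[0] == 'goal' for lit in answer)  # only answer literals: halt
print(answer)  # (('goal', 'art'),)
```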
If this procedure produces a clause containing just one answer literal, the terms it contains constitute an answer to the question. In some cases, the result of a fill-in-the-blank resolution depends on the refutation by which it is produced. In general, several different refutations can result from the same query, leading to multiple answers. Suppose, for example, that we knew the identities of both the father and mother of Jon and that we asked for a parent of Jon. One refutation would produce Jon's father as an answer, and a different refutation would produce Jon's mother.
Unfortunately, we have no way of knowing whether or not the answer statement from a given refutation exhausts the possibilities. We can continue to search for answers until we find enough of them. However, due to the undecidability of logical entailment, we can never know in general whether we have found all the possible answers.

Another interesting aspect of fill-in-the-blank resolution is that in some cases the procedure can result in a clause containing more than one answer literal. The significance of this is that no one answer is guaranteed to work, but one of the answers must be correct. The following resolution trace illustrates this fact. The database in this case is a disjunction asserting that either Art or Bob is the father of Jon, but we do not know which man it is. The goal is to find a parent of Jon. After resolving the goal clause with the sentence about fathers and parents, we resolve the result with the database disjunction, obtaining a clause that can be resolved a second time, yielding a clause with two answer literals. This answer indicates not two answers but rather uncertainty as to which is the correct answer.
In such situations, we can continue searching in hope of finding a more specific answer. However, given the undecidability of logical entailment, we can never know in general whether we can stop and say that no more specific answer exists.

## 12.9 Strategies

One of the disadvantages of using the resolution rule in an unconstrained manner is that it leads to many useless inferences. Some inferences are redundant in that their conclusions can be derived in other ways. Some inferences are irrelevant in that they do not lead to derivations of the desired result. This section presents a number of strategies for eliminating useless work. In reading this material, it is important to bear in mind that we are concerned here not with the order in which inferences are done, but only with the size of a resolution graph and with ways of decreasing that size by eliminating useless deductions.

## Pure Literal Elimination

A literal occurring in a clause set is *pure* if and only if it has no instance that is complementary to an instance of another literal in the clause set. A clause containing a pure literal is useless for the purposes of refutation, since the pure literal can never be resolved away. In demonstrating that a clause set is unsatisfiable, we can therefore ignore any clauses containing pure literals.
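The check can be sketched in Python. For brevity the sketch is propositional: a literal's complement is just its '~'-prefixed (or unprefixed) counterpart, whereas the full relational check compares instances under unification. The clause set is invented for illustration; its last clause contains the pure literal `r`.

```python
# Sketch of pure-literal elimination on a propositional clause set.

def complement(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def drop_pure(clauses):
    """One pass: delete clauses that contain a pure literal."""
    literals = {lit for clause in clauses for lit in clause}
    pure = {lit for lit in literals if complement(lit) not in literals}
    return [c for c in clauses if not (set(c) & pure)]

clauses = [['p', 'q'], ['~p', 'q'], ['p', '~q'], ['~p', '~q'], ['r', '~q']]
pruned = drop_pure(clauses)
print(pruned)  # [['p', 'q'], ['~p', 'q'], ['p', '~q'], ['~p', '~q']]
```

The remaining four clauses are still unsatisfiable, so nothing needed for the refutation has been lost.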
Note that, if a database contains no pure literals, there is no way we can derive any clauses with pure literals using resolution. The upshot is that we do not need to apply the strategy to a database more than once, and in particular we do not have to check each clause as it is generated.

## Tautology Elimination

A *tautology* is a clause containing a pair of complementary literals. As it turns out, the presence of tautologies in a set of clauses has no effect on that set's satisfiability. A satisfiable set of clauses remains satisfiable, no matter what tautologies we add. An unsatisfiable set of clauses remains unsatisfiable, even if we remove all tautologies. Therefore, we can remove tautologies from a database, because we need never use them in subsequent inferences. The corresponding deletion strategy is called *tautology elimination*.

Note that the literals in a clause must be exact complements for tautology elimination to apply. We cannot remove non-identical literals just because they are complements under unification. For example, a clause containing both ¬p(a) and p(x) is not a tautology, even though the two literals are complementary under the substitution x←a.

## Subsumption Elimination

In *subsumption elimination*, we remove clauses that are subsumed by other clauses in the database. A clause Φ subsumes a clause Ψ if and only if there is a substitution σ such that Φσ is a subset of Ψ. If one member in a set of clauses subsumes another member, then the set remaining after eliminating the subsumed clause is satisfiable if and only if the original set is satisfiable. Therefore, subsumed clauses can be eliminated. Since the resolution process itself can produce tautologies and subsuming clauses, we need to check for tautologies and subsumptions as we perform resolutions.

## Unit Resolution

A *unit resolvent* is one in which at least one of the parent clauses is a unit clause, i.e. one containing a single literal. A *unit derivation* is one in which all derived clauses are unit resolvents, and a *unit refutation* is a unit derivation of the empty clause.

As an example of a unit refutation, consider the following proof. In the first two inferences, unit clauses from the initial set are resolved with binary clauses to produce two new unit clauses. These are resolved with the first clause to produce two additional unit clauses. The elements in these two sets of results are then resolved with each other to produce the contradiction.
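The restriction can be sketched in Python, shown propositionally for brevity ('~' marks negation); the clause sets below are invented for illustration.

```python
# Sketch of unit resolution: every step uses at least one unit clause.

def complement(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def unit_refute(clauses):
    """Return True if unit resolution alone derives the empty clause."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        units = [c for c in clauses if len(c) == 1]
        new = set()
        for (u,) in units:
            for c in clauses:
                if complement(u) in c:
                    new.add(c - {complement(u)})
        if frozenset() in new:
            return True
        if new <= clauses:      # nothing new: give up
            return False
        clauses |= new

# A Horn example: p, p=>q, q=>r, and ~r together are unsatisfiable.
print(unit_refute([['p'], ['~p', 'q'], ['~q', 'r'], ['~r']]))             # True

# Without any unit clauses, unit resolution cannot even start, although
# the set is unsatisfiable: unit resolution is incomplete in general.
print(unit_refute([['p', 'q'], ['~p', 'q'], ['p', '~q'], ['~p', '~q']]))  # False
```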
Note that the proof contains only a subset of the possible uses of the resolution rule. Resolutions in which neither parent is a unit clause are simply never performed.

Inference procedures based on unit resolution are easy to implement and are usually quite efficient. It is worth noting that, whenever a clause is resolved with a unit clause, the conclusion has fewer literals than the non-unit parent does. This helps to focus the search toward producing the empty clause and thereby improves efficiency.

Unfortunately, inference procedures based on unit resolution generally are not complete. For example, the set consisting of the clauses {p, q}, {¬p, q}, {p, ¬q}, and {¬p, ¬q} is unsatisfiable, but unit resolution cannot derive the empty clause from it, since the set contains no unit clause with which to begin.

On the other hand, if we restrict our attention to Horn clauses (i.e. clauses with at most one positive literal), the situation is much better. In fact, it can be shown that there is a unit refutation of a set of Horn clauses if and only if it is unsatisfiable.

## Input Resolution

An *input resolvent* is one in which at least one of the parent clauses is a member of the initial database, and an *input refutation* is a refutation in which every resolution is an input resolution. It can be shown that unit resolution and input resolution are equivalent in inferential power in that there is a unit refutation from a set of sentences whenever there is an input refutation and vice versa. One consequence of this fact is that input resolution is complete for Horn clauses but incomplete in general. The unsatisfiable set of clauses {p, q}, {¬p, q}, {p, ¬q}, and {¬p, ¬q} again provides an example of a set for which no input refutation exists.

## Linear Resolution
Linear resolution takes its name from the linear shape of the proofs it generates. A linear deduction starts with a clause in the initial database (called the *top clause*) and, at each subsequent step, resolves the most recent conclusion with either a clause from the initial database or an earlier conclusion in the deduction.

Much of the redundancy in unconstrained resolution derives from the resolution of intermediate conclusions with other intermediate conclusions. The advantage of linear resolution is that it avoids many useless inferences by focusing deduction at each point on the ancestors of each clause and on the elements of the initial database.

Linear resolution is known to be refutation complete. Furthermore, it is not necessary to try every clause in the initial database as top clause. It can be shown that, if a set of clauses Δ is satisfiable and Δ∪{Φ} is unsatisfiable, then there is a linear refutation with Φ as top clause. So, if we know that a particular set of clauses is consistent, we need not attempt refutations with the elements of that set as top clauses.

## Set of Support Resolution

If we examine resolution traces, we notice that many conclusions come from resolutions between clauses contained in a portion of the database that we know to be satisfiable. For example, in many cases the set of premises is satisfiable, yet many of the conclusions are obtained by resolving premises with other premises rather than with the negated conclusion.

As it turns out, we can eliminate these resolutions without affecting the refutation completeness of resolution. A subset Γ of a set Δ is called a *set of support* for Δ if and only if Δ−Γ is satisfiable. A set of support refutation is one in which every resolution has at least one parent drawn from the set of support or from its descendants. The following derivation is a set of support refutation in which the set of support is the singleton set containing the clause derived from the negated goal.
Obviously, this strategy would be of little use if there were no easy way of selecting the set of support. Fortunately, there are several ways this can be done at negligible expense. For example, in situations where we are trying to prove conclusions from a consistent database, the natural choice is to use the clauses derived from the negated goal as the set of support. This set satisfies the definition as long as the database itself is truly satisfiable. With this choice of set of support, each resolution must have a connection to the overall goal, so the procedure can be viewed as working backward from the goal. This is especially useful for databases in which the number of conclusions possible by working forward is larger than the number possible by working backward. Furthermore, the goal-oriented character of such refutations often makes them more understandable than refutations using other strategies.

## Recap

Resolution for Relational Logic generalizes Propositional Resolution by using unification to match literals. Sentences are first converted to clausal form, with Skolem constants and functions replacing existentially quantified variables. The Resolution Principle, extended with variable renaming and factoring, supports refutation-based reasoning: logical entailment is established by negating the goal and deriving the empty clause, and answer extraction uses answer literals to record variable bindings. Strategies such as pure literal elimination, tautology elimination, subsumption elimination, unit resolution, input resolution, linear resolution, and set of support resolution reduce the size of the search space.

## Exercises

Exercise 12.1: Consider a language with two object constants.
Exercise 12.2: For each of the following pairs of sentences, say whether the sentences are unifiable and give a most general unifier for those that are unifiable.
Exercise 12.3: Give all resolvents, if any, for each of the following pairs of clauses.
Exercise 12.4: Given the clauses {

Exercise 12.5: Given the premises ∀

Exercise 12.6: Given ∀

Exercise 12.7: Use Resolution to prove ∀

Exercise 12.8: Given ∀

Exercise 12.9: Use resolution to show that the clauses {¬

Exercise 12.10: Given |