
Fine points of matching: limited higher-order matching and commutative matching

This section is devoted to two refinements of matching, one integral to Watson and the other optional.

A theorem in the current Watson library is the unsurprising

FORALLDIST:  

forall @ [(?P @ ?1) & ?Q @ ?1]  =  
(forall @ [?P @ ?1]) & forall @ [?Q @ ?1]

A term to which this theorem ought to apply is
forall@[(?1=?1)&?1=?1]; application of the theorem should yield
(forall@[?1=?1])&forall@[?1=?1]. And indeed it does:

- s "forall@[(?1=?1)&?1=?1]";

{forall @ [(?1 = ?1) & ?1 = ?1]}
forall @ [(?1 = ?1) & ?1 = ?1]

- ri "FORALLDIST"; ex();

{(forall @ [?1 = ?1]) & forall @ [?1 = ?1]}
(forall @ [?1 = ?1]) & forall @ [?1 = ?1]

However, this would not have worked in early versions of Mark2 (the precursor of the present Watson prover), where this proof would have taken the following form:

- s "forall@[(?1=?1)&?1=?1]";

{forall @ [(?1 = ?1) & ?1 = ?1]}
forall @ [(?1 = ?1) & ?1 = ?1]

- right();right();

forall @ [{(?1 = ?1) & ?1 = ?1}]
(?1 = ?1) & ?1 = ?1

- right(); ri "BIND@?1"; up(); left(); ri "BIND@?1"; ex();

forall @ [{([?2 = ?2] @ ?1) & [?2 = ?2] @ ?1}]
([?2 = ?2] @ ?1) & [?2 = ?2] @ ?1

- top(); ri "FORALLDIST"; ex();

{(forall @ [[?2 = ?2] @ ?1]) & forall @ [[?2 = ?2] @ ?1]}
(forall @ [[?2 = ?2] @ ?1]) & forall @ [[?2 = ?2] @ ?1]

- left();right();right();ri "EVAL";
	top();right();right();right();ri "EVAL"; top(); ex();

{(forall @ [?1 = ?1]) & forall @ [?1 = ?1]}
(forall @ [?1 = ?1]) & forall @ [?1 = ?1]

The difficulty with the old version of the prover (and with the intuition that leads one to believe that the application shown is a direct application of the theorem) is that ?P@?1 and ?Q@?1 are each expected to match ?1=?1, which they do not in any obvious syntactic sense. This was a serious problem for fluent reasoning with quantifiers, and it also caused unnecessary work in other applications of functions. It is also important to note that forall@[?P&?Q] is not a candidate to match forall@[(?1=?1)&?1=?1] at all, because [?P&?Q] is a constant function; the variables ?P and ?Q can only match terms which do not depend on ?1 at all.

The solution was to adopt a limited form of higher-order matching: in a context where ?n is the highest-numbered bound variable, the expression ?P@?n, where ?P is a free variable, will match any expression in ?n which can be expressed as a function of ?n (stratification restrictions are at work here), and the free variable ?P is matched to a suitable abstraction term. A modification of substitution is also needed, which causes reductions to occur in terms of the form ?P@?n when an abstract is substituted for ?P. With these modifications of matching and substitution, we get the natural behaviour illustrated in the first example above.
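
To make the refinement concrete, here is a minimal sketch in OCaml (chosen purely for readability; the term type, the function names, and the omission of stratification checks below are illustrative assumptions, not Watson's actual data structures or matching code) of the difference between purely first-order matching and the limited higher-order refinement:

type term =
  | Bound of int                  (* the bound variables ?1, ?2, ... *)
  | Free of string                (* free variables such as ?P, ?Q *)
  | App of term * term            (* application: f @ x *)
  | Abs of term                   (* abstraction: [body] *)
  | Op of string * term * term    (* infix operators such as = and & *)

(* A purely first-order matcher binds ?P only when the target is
   literally an application whose argument is ?n; against ?1 = ?1 the
   pattern ?P @ ?1 simply fails. *)
let match_first_order (p : string) (n : int) (target : term) =
  match target with
  | App (f, Bound m) when m = n -> Some (p, f)
  | _ -> None

(* The limited higher-order refinement: when ?n is the highest-numbered
   bound variable in scope, ?P @ ?n matches any target expressible as a
   function of ?n, and ?P is bound to the abstraction of the target over
   ?n.  Stratification checks and the renumbering of bound variables
   inside the new abstract are elided in this sketch. *)
let match_higher_order (p : string) (_n : int) (target : term) =
  Some (p, Abs target)

(* Matching ?P @ ?1 against ?1 = ?1: the first-order matcher fails,
   while the higher-order matcher binds ?P to [?1 = ?1], so FORALLDIST
   applies directly as in the first session above. *)
let target = Op ("=", Bound 1, Bound 1)
let _ = match_first_order "P" 1 target                      (* None *)
let _ = match_higher_order "P" 1 target                     (* Some ("P", Abs ...) *)
let _ = match_first_order "P" 1 (App (Free "f", Bound 1))   (* Some ("P", Free "f") *)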

Recently, we have implemented further refinements of higher-order matching and corresponding improvements in substitution, which allow natural treatment of situations involving multiple bound variables as arguments, both when the arguments are curried and when they occur in tuples.

A further point is that new variables introduced by the application of theorems or tactics inside bracket terms may have an unexpected form: under certain conditions, a ``new variable'' introduced at level n will take the form ?x_1 @ ?n rather than the form ?x_1 which one expects. It is better to avoid the use of theorems which introduce new variables in such contexts (in fact, it is always better to avoid the introduction of new variables except in interactive ``scratch work''). A stratification error may result from such a theorem application, because the type checking uses an imperfect heuristic to guess whether the new variable has the correct relative type; such errors are caught by the prover and cause the theorem application to fail, and they can always be avoided by avoiding the introduction of new variables (by using parameterized versions of theorems and their converses, either user-developed or constructed using the !@ and !$ built-in operators). It would be possible to redesign Watson to avoid stratification errors of this kind, but it would require a considerable elaboration of the data structures representing context information in the prover, for a very limited benefit.

An optional refinement of matching is commutative matching, which can be turned on (or off) with the cmatch command. With commutative matching on, the prover will in effect attempt to apply commutative laws wherever possible in the attempt to apply a theorem. A simple example:

- s "?x+?y+?z";

{?x + ?y + ?z}
?x + ?y + ?z

- ri "ASSOC"; ex();      (* ASSOC does not apply *)

{?x + ?y + ?z}
?x + ?y + ?z

- cmatch();  (* this turns on commutative matching *)

- ri "ASSOC"; ex();       (* an application of COMM 
                          gives a term to
                          which ASSOC can be applied *)

{?y + ?z + ?x}
?y + ?z + ?x
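
Abstractly, the effect of commutative matching can be sketched as follows (again a minimal OCaml illustration with a toy term type; neither the code nor the exact statement of ASSOC assumed in the comments is Watson's): when an operator is declared commutative and a match fails in the strict orientation, the matcher retries with that operator's arguments swapped.

type term =
  | Free of string                 (* free variables ?x, ?y, ... *)
  | Op of string * term * term     (* infix terms such as ?x + ?y *)

let commutative op = op = "+" || op = "&"

(* Try to extend a substitution (an association list from free-variable
   names to terms) so that the pattern matches the target.  When an
   operator is commutative and the strict orientation fails, retry with
   its arguments swapped. *)
let rec cmatch subst pattern target =
  match pattern, target with
  | Free p, t ->
      (match List.assoc_opt p subst with
       | Some t' -> if t' = t then Some subst else None
       | None -> Some ((p, t) :: subst))
  | Op (o1, l1, r1), Op (o2, l2, r2) when o1 = o2 ->
      let try_pair a b c d =
        match cmatch subst a b with
        | Some s -> cmatch s c d
        | None -> None
      in
      (match try_pair l1 l2 r1 r2 with
       | Some _ as ok -> ok
       | None -> if commutative o1 then try_pair l1 r2 r1 l2 else None)
  | _ -> None

(* Matching a left-hand side of the form (?a + ?b) + ?c (an
   associativity law like ASSOC, whose exact statement is not shown
   here) against ?x + (?y + ?z): the strict orientation fails, but
   swapping the arguments of the outer + succeeds, with ?a := ?y,
   ?b := ?z and ?c := ?x, which is why the session above produces
   ?y + ?z + ?x. *)
let assoc_lhs = Op ("+", Op ("+", Free "a", Free "b"), Free "c")
let example   = Op ("+", Free "x", Op ("+", Free "y", Free "z"))
let _ = cmatch [] assoc_lhs example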

An important fact about commutative matching is that it is not applied in certain contexts even when it is turned on: for example, the alternative and on-success rule infixes use strict matching to determine whether a theorem succeeds or fails. The additional freedom of commutative matching would disrupt the control structures of the internal programming language too much (for example, the generalized associativity tactic given above would fail, because it depends on ASSOC to fail on sums whose left term is not a sum).

We are not convinced that commutative matching is a useful refinement of the prover (largely because of its conflicts with the needs of the tactic language), but we continue to experiment with it.

