Artificial Intelligence and Symbolic Computation (English edition)



Bruno Buchberger, John A. Campbell (Eds.): Artificial Intelligence and Symbolic Computation. 7th International Conference, AISC 2004, Linz, Austria, September 2004. Proceedings

Lecture Notes in Artificial Intelligence 3249. Edited by J. G. Carbonell and J. Siekmann. Subseries of Lecture Notes in Computer Science


Bruno Buchberger, John A. Campbell (Eds.): Artificial Intelligence and Symbolic Computation. 7th International Conference, AISC 2004, Linz, Austria, September 22-24, 2004. Proceedings. Springer

eBook ISBN: 3-540-30210-7. Print ISBN: 3-540-23212-5. ©2005 Springer Science + Business Media, Inc. Print ©2004 Springer-Verlag Berlin Heidelberg. All rights reserved. No part of this eBook may be reproduced or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, without written consent from the Publisher. Created in the United States of America. Visit Springer's eBookstore at: http://ebooks.springerlink.com and the Springer Global Website Online at: http://www.springeronline.com

Preface

AISC 2004, the 7th International Conference on Artificial Intelligence and Symbolic Computation, was the latest in the series of specialized biennial conferences founded in 1992 by Jacques Calmet of the Universität Karlsruhe and John Campbell of University College London with the initial title Artificial Intelligence and Symbolic Mathematical Computing (AISMC). The M disappeared from the title between the 1996 and 1998 conferences. As the editors of the AISC 1998 proceedings said, the organizers of the current meeting decided to drop the adjective 'mathematical' and to emphasize that the conference is concerned with all aspects of symbolic computation in AI: mathematical foundations, implementations, and applications, including applications in industry and academia. This remains the intended profile of the series, and will figure in the call for papers for AISC 2006, which is intended to take place in China.

The distribution of papers in the present volume over all the areas of AISC happens to be rather noticeably mathematical, an effect that emerged because we were concerned to select the best relevant papers that were offered to us in 2004, irrespective of their particular topics; hence the title on the cover. Nevertheless, we encourage researchers over the entire spectrum of AISC, as expressed by the 1998 quotation above, to be in touch with us about their interests and the possibility of eventual submission of papers on their work for the next conference in the series.

The papers in the present volume are evidence of the health of the field of AISC. Additionally, there are two reasons for optimism about the continuation of this situation. The first is that almost all the items in the list of useful areas for future research that the editors of the proceedings of the first conference in 1992 suggested in a 'state of the field' paper there are represented in AISC 2004.
Many have of course been present in other AISC conferences too, but never so many as in this year's conference: theorem proving, expert systems, qualitative reasoning, Gröbner bases, differential and integral expressions, computational group theory, constraint-based programming, specification (implying verification), and instances of automated learning, for example. The only major items from the 1992 list that would be needed here to make up what poker players might call a full house are knowledge representation and intelligent user interfaces for mathematical tasks and mathematical reasoning - but while a word search in this volume may not find them, ingredients of both are undoubtedly present this year. (For a hint, see the next paragraph.)

The second of our reasons for an optimistic view of AISC is the maturation of a scientific proposal or prediction that dates back to 1985. In founding the Journal of Symbolic Computation in that year, one of us proposed that SC should encompass both exact mathematical algorithmics (computer algebra) and automated reasoning. Only in recent years has an integration and interaction of these two fields started to materialize. Since 2001 in particular, this has given

rise to the MKM (mathematical knowledge management) 'movement', which considers seriously the automation of the entire process of mathematical theory exploration. This is now one of the most promising areas for the application of AI methods in general (for invention or discovery of mathematical concepts, problems, theorems and algorithms) to mathematics/SC and vice versa.

We are happy to be continuing the fruitful collaboration with Springer which started with the first AISMC conference in 1992 and which permitted the publication of the proceedings in the Lecture Notes in Computer Science (LNCS 737, 958, 1138) series from 1992 to 1996 and the Lecture Notes in Artificial Intelligence (LNAI 1476, 1930, 2385) series subsequently.

We, the AISC steering committee, and the organizers of the conference are grateful to the following bodies for their financial contributions towards its operation and success: Linzer Hochschulfonds, Upper Austrian Government, FWF (Austrian Science Foundation), Raiffeisenlandesbank Upper Austria, Siemens Austria, IBM Austria, and CoLogNET. Our thanks are also due to the members of the program committee and several additional anonymous referees, and to those who ensured the effective running of the actual conference and its Web sites.

In this latter connection, we administered the submission and selection of papers for AISC 2004 entirely through special-purpose conference software for the first time in the history of AISC, using the START V2 conference manager described at www.softconf.com. This contributed substantially to the efficiency of the whole process, and allowed us to respect an unusually tight set of deadlines. We appreciate the prompt and helpful advice on using this software that we received from Rich Gerber whenever we needed it.

The effectiveness of the final stage of production of this volume was due mainly to the intensive work of Theodoros Pagtzis. We express our gratitude to him.
July 2004 Bruno Buchberger John Campbell

Organization

AISC 2004, the 7th International Conference on Artificial Intelligence and Symbolic Computation, was held at Schloss Hagenberg, near Linz, during 22-24 September 2004. The Research Institute for Symbolic Computation (RISC) of the Johannes Kepler Universität Linz, and the Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences (Österreichische Akademie der Wissenschaften), Linz were jointly responsible for the organization and the local arrangements for the conference.

Conference Direction
Conference Chair: Bruno Buchberger (Johannes Kepler Universität Linz)
Program Chair: John Campbell (University College London, UK)

Local Arrangements
Local Organizers: Betina Curtis, Ramona Pöchlinger
Websites and Publicity: Camelia Kocsis, Koji Nakagawa, Florina Piroi
Logic Programming Tutorial: Klaus Truemper (University of Texas at Dallas)

Program Committee
Luigia Carlucci Aiello (Università 'La Sapienza', Rome)
Michael Beeson (San Jose State University)
Belaid Benhamou (Université de Provence)
Bruno Buchberger (Johannes Kepler Universität Linz)
Jacques Calmet (Universität Karlsruhe)
Bill Farmer (McMaster University)
Jacques Fleuriot (University of Edinburgh)
Laurent Henocque (Université de la Méditerranée)
Tetsuo Ida (Tsukuba University)
Michael Kohlhase (Universität Bremen)
Steve Linton (University of St Andrews)
Aart Middeldorp (Universität Innsbruck)
Eric Monfroy (Université de Nantes)

John Perram (Syddansk Universitet)
Jochen Pfalzgraf (Universität Salzburg)
Zbigniew Ras (University of North Carolina)
Eugenio Roanes-Lozano (Universidad Complutense de Madrid)
Tomás Recio (Universidad de Cantabria)
Volker Sorge (University of Birmingham)
John Stell (University of Leeds)
Carolyn Talcott (SRI International)
Dongming Wang (Université de Paris 6)
Wolfgang Windsteiger (Johannes Kepler Universität Linz)

Sponsoring Institutions
Linzer Hochschulfonds
Upper Austrian Government
FWF (Austrian Science Foundation)
Raiffeisenlandesbank Upper Austria
Siemens Austria
IBM Austria
CoLogNET

Table of Contents

Invited Presentations
The Algorithmization of Physics: Math Between Science and Engineering (M. Rosenkranz) 1
Finite Algebras and AI: From Matrix Semantics to Stochastic Local Search (Z. Stachniak) 8
Proof Search in Minimal Logic (H. Schwichtenberg) 15
Planning and Patching Proof (A. Bundy) 26

Papers
A Paraconsistent Higher Order Logic (J. Villadsen) 38
Abstraction Within Partial Deduction for Linear Logic (P. Küngas) 52
A Decision Procedure for Equality Logic with Uninterpreted Functions (O. Tveretina) 66
Generic Hermitian Quantifier Elimination (A. Dolzmann and L.A. Gilch) 80
Extending Finite Model Searching with Congruence Closure Computation (J. Zhang and H. Zhang) 94
On the Combination of Congruence Closure and Completion (C. Scharff and L. Bachmair) 103
Combination of Nonlinear Terms in Interval Constraint Satisfaction Techniques (L. Granvilliers and M. Ouabiba) 118
Proving and Constraint Solving in Computational Origami (T. Ida, D. Tepeneu, B. Buchberger, and J. Robu) 132
An Evolutionary Local Search Method for Incremental Satisfiability (M. El Bachir Menaï) 143

Solving Equations Involving Sequence Variables and Sequence Functions (T. Kutsia) 157
Verified Computer Algebra in ACL2: Gröbner Bases Computation (I. Medina-Bulo, F. Palomo-Lozano, J.A. Alonso-Jiménez, and J.L. Ruiz-Reina) 171
Polynomial Interpretations with Negative Coefficients (N. Hirokawa and A. Middeldorp) 185
New Developments in Symmetry Breaking in Search Using Computational Group Theory (T. Kelsey, S. Linton, and C. Roney-Dougal) 199
Recognition of Whitehead-Minimal Elements in Free Groups of Large Ranks (A.D. Miasnikov) 211
Four Approaches to Automated Reasoning with Differential Algebraic Structures (J. Aransay, C. Ballarin, and J. Rubio) 222
Algorithm-Supported Mathematical Theory Exploration: A Personal View and Strategy (B. Buchberger) 236
An Expert System on Detection, Evaluation and Treatment of Hypertension (E. Roanes-Lozano, E. López-Vidriero Jr., L.M. Laita, E. López-Vidriero, V. Maojo, and E. Roanes-Macías) 251

Short Presentations
Two Revision Methods Based on Constraints: Application to a Flooding Problem (M. Khelfallah and B. Benhamou) 265
Abstraction-Driven Verification of Array Programs (D. Déharbe, A. Imine, and S. Ranise) 271
Singularities in Qualitative Reasoning (B. Gottfried) 276
From a Computer Algebra Library to a System with an Equational Prover (S. Mechveliani) 281

Author Index 285

The Algorithmization of Physics: Math Between Science and Engineering*

Markus Rosenkranz
Johann Radon Institute for Computational and Applied Mathematics
Austrian Academy of Sciences, A-4040 Linz, Austria
Markus.Rosenkranz@oeaw.ac.at

Abstract. I give a concise description of my personal view on symbolic computation, its place within mathematics and its relation to algebra. This view is exemplified by a recent result from my own research: a new symbolic solution method for linear two-point boundary value problems. The essential features of this method are discussed with regard to a potentially novel line of research in symbolic computation.

1 Physics: The Source and Target of Math

What is the nature of mathematics? Over the centuries, philosophers and mathematicians have proposed various different answers to this elusive and intriguing question. Any reasonable attempt to systematically analyze these answers is a major epistemological endeavor. The goal of this presentation is more modest: I want to give you a personal (partial) answer to the question posed above, an answer that highlights some aspects of mathematics that I consider crucial from the perspective of symbolic computation. At the end of my presentation, I will substantiate my view by a recent example from my own research.

According to [4], humankind has cultivated the art of rational problem solving in a fundamental three-step rhythm:
1. Observation: The problem of the real world is specified by extracting relevant data in an abstract model.
2. Reasoning: The model problem is solved by suitable reasoning steps, carried out solely in the abstract model.
3. Action: The model solution is applied in the real world by effectuating the desired result.
In this view, mathematics is not limited to any particular objects like numbers or figures; it is simply "reasoning in abstract models" (item number 2 in the enumeration above).
For highlighting its place in the overall picture, let us take up the example of physics - of course, one can make similar observations for other disciplines like chemistry, biology, economics or psychology.

We can see physics as a natural science that deals with observations about matter and energy (item number 1 in the three-step rhythm). In doing so, it

* This work is supported by the Austrian Science Foundation FWF in project F1322.

B. Buchberger and J.A. Campbell (Eds.): AISC 2004, LNAI 3249, pp. 1-7, 2004. © Springer-Verlag Berlin Heidelberg 2004

extracts various patterns from the mass of empirical data and tabulates them in natural laws. It is here that it comes in contact with mathematics, which provides a rich supply of abstract structures for clothing these laws. In this process, physicists have often stimulated deep mathematical research for establishing the concepts asked for (e.g. distributions for modeling point sources); sometimes they have also found ready-made material in an unexpected corner of "pure mathematics" (e.g. Rogers-Ramanujan identities for kinetic gas theory); most of the time, however, we see a parallel movement of mutual fertilization.

As a branch of technical engineering, physics is utilized for constructing the machines that we encounter in the world of technology (item number 3 in the three-step rhythm). Engineers are nowadays equipped with a huge body of powerful applied mathematics - often hidden in the special-purpose software at their disposal - for controlling some processes of nature precisely in the way desired at a certain site (e.g. the temperature profile of a chemical reactor). If we are inclined to look down on this "down-to-earth math", we should not forget that it is not only the most prominent source of our money but also the immediate reason for our present-day prosperity.

Of course, the above assignment 1 - natural sciences (e.g. theoretical physics), 2 - formal science (i.e. mathematics), 3 - technical sciences (e.g. engineering physics) must be understood cum grano salis: abstract "mathematical" reasoning steps are also employed in the natural and technical sciences, and a mathematician will certainly benefit from understanding the physical context of various mathematical structures. Besides this, the construction of models is also performed within mathematics when powerful concepts are invented for solving math-internal problems in the same three-step rhythm (e.g. 1 - extension fields, 2 - Galois groups, 3 - solvability criterion).
2 Algebraization: The Commitment to Computing

The above view of mathematics prefers its dynamical side (problem solving) over its static one (knowledge acquisition), but actually the two sides are intimately connected (knowledge is needed for solving problems, and problems are the best filter for building up relevant knowledge bases). The dynamic view of mathematics is also the natural starting point for symbolic computation, as I will explicate in the next section. In fact, one can see symbolic computation as its strictest realization, deeply embedded in the overall organism of less constructive or "structural" mathematics.

Within symbolic computation, I will focus on computer algebra in this presentation. Strictly speaking, this means that we restrict our interest to algebraic structures (domains with functional signatures and equational axioms, like rings). But this is not a dogmatic distinction, rather a point of emphasis; e.g. fields are also counted among the algebraic structures despite their non-equational axiom on reciprocals. In some sense computer algebra is the most traditional branch of symbolic computation, since rewriting functions along equational chains is maybe the most natural form of "computation with symbols". What is more important, though, is the axiomatic description of the algebraic structures in use.

Judging from our present understanding, it may seem obvious that one should proceed in this way. Looking at the historical development, however, we perceive a movement of increasing abstraction that has not stopped at the point indicated above [9]. We can see four distinct stages in this abstraction process:
1. Concrete Algebra (Weber, Fricke): Virtually any ring considered was a subring of the integers or of the complex polynomials or of some algebraic number field (similarly for other domains). Various "identical" results were proved separately for different instances.
2. Abstract Algebra (Noether, van der Waerden): Rings are described by their axioms; the classical domains mentioned above are subsumed as examples. All the proofs are now done once, within "ring theory".
3. Universal Algebra (Grätzer, Cohn): Classes of algebraic structures are considered collectively; rings are just one instance of an algebraic structure. Results like the homomorphism theorem can be proved for generic algebraic structures that specialize to rings, groups, and the like.
4. Category Theory (Mac Lane, Eilenberg): Categories are any collection of objects (like algebraic or non-algebraic structures) connected through arrows (like homomorphisms in algebraic structures). The objects need not have a set-theoretic texture (as the carriers of structures have).
The role of mathematics as reasoning in abstract models becomes very clear in the process of algebraization: the mathematical models are now specified precisely by way of axioms and we need no longer rely on having the same intuition about them. Let me detail this by looking at one of the most fundamental structures used in physics - the notion of the continuum, which provides a scale for measuring virtually all physical quantities.
Its axiomatization as the complete ordered field of reals needed centuries of focused mathematical research culminating in the categoricity result. Proceeding with computer algebra, we would now strip off the topological aspects (the "complete ordered" part) from the algebraic ones (the "field" part) and then study its computable subfields (finite extensions of the rationals).

If one models physical quantities by real numbers, analyzing their mutual dependence amounts to studying real functions (real analysis), and the natural laws governing them are written as differential equations. Their application to specific situations is controlled by adjusting some data like various parameters and initial/boundary conditions. Since the latter are the most frequent data in physical problems [10], boundary value problems (BVPs) will serve as a fitting key example in the last section of this presentation.

3 Algorithmization: The Realization of Computing

In order to actually "compute" the solution of a problem in an abstract model, algebraization alone is not enough. We have already observed this in the above example of the continuum: the field of real numbers is regarded as an algebraic domain, but it is clearly uncomputable because of its uncountable carrier. In

view of these facts, Buchberger [7, 3, 5] has defined symbolic computation (in particular: computer algebra) as that part of algorithmic mathematics (in particular: algorithmic algebra) which solves problems stated in non-algorithmic domains by translating them into isomorphic algorithmic representations. Let us look at three immediate examples:
1. The traditional definition of polynomials introduces them as certain (infinite!) congruence classes over some term algebra in the variety of unital commutative rings [13]. The modern definition starts from a monoid ring [12], which turns out to be an isomorphic translation of the former, basically encoding the canonical representatives of the congruence classes.
2. As another example, consider the cardinal question of ideal membership in algebraic geometry: as infinite sets, ideals cannot directly be treated by algorithmic methods; representing them via Gröbner bases allows a finitary description and a solution of the ideal membership problem [1, 2, 6].
3. Finally, let us consider an example from a traditional domain of symbolic computation that does not belong to computer algebra, namely automated theorem proving. It is based on translating the non-algorithmic (semantic) concept of consequence into the algorithmic (syntactic) concept of deducibility, the isomorphism being guaranteed by Gödel's Completeness Theorem.
Returning to the example of boundary value problems introduced above, we should also note that there are actually two famous approaches to algorithmization: symbolic computation takes the path through algebraization, whereas numerical computation goes through approximation. Simplifying matters a bit, we could say that symbolic computation hunts down the algebraic structure of the continuum, numerical computation its topological structure¹.
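The second example above, ideal membership via Gröbner bases, is easy to make concrete. The following sketch (using Python's sympy, an assumption on my part; any Gröbner-basis implementation would do) decides membership by reducing a polynomial to its normal form modulo a Gröbner basis of the ideal: the normal form is zero exactly when the polynomial lies in the ideal.

```python
from sympy import symbols, groebner, reduced

x, y = symbols('x y')

# An ideal given by two generators: as an infinite set it cannot be
# manipulated directly, but its Groebner basis gives a finitary handle.
F = [x**2 + y**2 - 1, x - y]
G = groebner(F, x, y, order='lex')

# Membership test: f lies in the ideal iff its normal form w.r.t. G is 0.
f = (x - y) * (x + 5)
_, remainder = reduced(f, list(G.exprs), x, y, order='lex')
print(remainder == 0)  # True: f is a multiple of x - y, hence in the ideal
```

The same reduction with a polynomial outside the ideal (e.g. x + 5 alone) leaves a nonzero remainder, so the procedure is a genuine decision method, which is exactly the "isomorphic algorithmic representation" the definition asks for.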
But while industrial mathematics is virtually flooded with numerical solvers (mostly of finite-element type), it is strange to notice that computer algebra systems like Maple or Mathematica do not provide any command for attacking those BVPs that have a symbolic solution. My own research on symbolic functional analysis can be seen as an endeavor to change this situation.

4 Symbolic Functional Analysis: Conquering a New Territory for Computing

I will now sketch my own contribution to the exciting process of conquering more and more territory through algebraization and algorithmization. As mentioned before, it deals with certain boundary value problems. More precisely, we are

¹ The ideal algorithmic approach to problem solving would be to combine the best parts of both symbolic and numerical computation, which is the overall objective of a 10-year special research project (SFB) at Johannes Kepler University. My own research there takes place in the subproject [17] on symbolic functional analysis; for some details, see the next section.

given a function f in C^∞[0,1], say², and we want to find a solution u in C^∞[0,1] such that

    Tu = f,
    B_0 u = u_0, ..., B_{n-1} u = u_{n-1}.    (1)

Here T is a linear differential operator like T = x²D² - 2eˣD + 1 and B_0, ..., B_{n-1} are boundary operators like u ↦ u'(0) - 2u(1), whose number n should coincide with the order of T. Furthermore, we require the boundary conditions to be such that the solution u exists and is unique for every choice of f; in other words, we consider only regular BVPs.

In my own understanding, BVPs are the prototype for a new kind of problem in symbolic computation: whereas "computer algebra" focuses on algorithmically solving for numbers (typical example: Gröbner bases for triangularizing polynomial systems) and "computer analysis" does the same for functions (typical example: differential algebra for solving differential equations), the proper realm of "computer operator-theory" or symbolic functional analysis would be solving for operators. See [17] for more details on this three-floor conception of the algebraic part of symbolic computation.

Why are BVPs an instance of solving for operators? The reason is that the forcing function f in (1) is understood as a symbolic parameter: one wants to have the solution u as a term that contains f as a free variable. In other words, one needs an operator G that maps any given f to u. For making this explicit, let us rewrite the traditional formulation (1) as

    TG = 1,
    B_0 G = 0, ..., B_{n-1} G = 0.    (2)

Here 1 and 0 denote the identity and zero operator, respectively. Note also that I have passed to homogeneous boundary conditions (it is always possible to reduce a fully inhomogeneous problem to such a semi-inhomogeneous one).

The crucial idea of my solution method for (2) is to model the above operators as noncommutative polynomials and to extract their algebraically relevant properties into a collection of identities.
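The operator view can be made tangible on the simplest regular BVP. For T = D² with boundary conditions u(0) = u(1) = 0, the Green's operator G sends f to u(x) = ∫₀¹ G(x,ξ) f(ξ) dξ with the textbook Green's function as kernel; the sympy check below (my own illustration, not the author's code or method) verifies Tu = f and the boundary conditions for a sample forcing function.

```python
from sympy import symbols, integrate, diff, simplify

x, xi = symbols('x xi')

# Textbook Green's function for u'' = f with u(0) = u(1) = 0,
# written branch by branch (the kernel is piecewise in xi).
G_lower = xi * (x - 1)   # for 0 <= xi <= x
G_upper = x * (xi - 1)   # for x <= xi <= 1

f = xi**2  # a sample forcing function

# Apply the Green's operator: u = integral of G(x, xi) * f(xi) over [0, 1]
u = integrate(G_lower * f, (xi, 0, x)) + integrate(G_upper * f, (xi, x, 1))

print(simplify(diff(u, x, 2)))               # x**2, i.e. Tu = f
print(u.subs(x, 0), simplify(u.subs(x, 1)))  # 0 0, the boundary conditions
```

Here the Green's operator is written down in advance; the point of the symbolic method described next is that such operators come out as normal forms of a noncommutative polynomial computation rather than being looked up.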
For details, please refer to my PhD thesis [14]; see also [16, 15]. The outcome of this strategic plan is the noncommutative polynomial ring ℂ⟨{D, A, B, L, R} ∪ {[f] : f ∈ F#}⟩ together with a collection of 36 polynomial equalities. The indeterminates D, A, B, L, R and [f] stand for differentiation, integral, cointegral, left boundary value, right boundary value and the multiplication operator induced by f; here f may range over any so-called analytic algebra (a natural extension of a differential algebra), e.g. the exponential polynomials. The 36 polynomial identities express properties like the product rule of differentiation, the fundamental theorem of calculus, and integration by parts.

² The smoothness conditions can be dispensed with by passing to distributions on [0,1]. Of course one can also choose an arbitrary finite interval [a,b] instead of the unit interval. See [15] for details.

I have proved that the 36 polynomials on the left-hand side of the respective identities actually constitute a noncommutative Gröbner basis with terminating reduction (a fact that is not guaranteed in the noncommutative case, in particular not if one deals with infinitely many polynomials like the multiplication operators [f] above). Since I have retrieved a generic formula for G, the resulting polynomial has just to be normalized with respect to this Gröbner basis; the corresponding normal form turns out to be the classical Green's operator with the Green's function as its kernel - a well-known picture for every physicist!

As a conclusion, let me briefly reflect on the solution scheme sketched above. First of all, we observe the algebraization involved in transforming the topological concepts of differentiation and integration into fitting algebraic counterparts; the creation of ℂ⟨{D, A, B, L, R} ∪ {[f] : f ∈ F#}⟩ can also be seen as an instance of math-internal modeling in the sense described in the first section. Second, we note the crucial role of noncommutative Gröbner bases in providing the classical Green's operator G: without a confluent and noetherian system of identities the search for normal forms would not be an algorithmic process. Third, we may also notice the advantage of working purely on the level of operators, as opposed to traditional solution methods on the function level that use costly determinant computations; see e.g. page 189 in [11].

Finally, let me point out that solving BVPs in this way could be the first step into a new territory for symbolic computation that we have baptized "symbolic functional analysis" in [17]. Its common feature would be the algorithmic study (and inversion) of some crucial operators occurring in functional analysis; its main tool would be noncommutative polynomials.
Besides more general BVPs (PDEs rather than ODEs, nonlinear rather than linear, systems of equations rather than single equations, etc.), some potentially interesting problems could be: single layer potentials, exterior Dirichlet problems, inverse problems like the backwards heat equation, computation of principal symbols, eigenvalue problems.

References
1. Buchberger, B.: An Algorithm for Finding a Basis for the Residue Class Ring of a Zero-Dimensional Polynomial Ideal (in German). PhD Thesis, University of Innsbruck, Institute for Mathematics (1965)
2. Buchberger, B.: An Algorithmic Criterion for the Solvability of Algebraic Systems of Equations (in German). Aequationes Mathematicae 4 (1970) 374-383; in [8] 535-545
3. Buchberger, B.: Editorial. Journal of Symbolic Computation 1/1 (1985)
4. Buchberger, B.: Logic for Computer Science. Lecture Notes, Johannes Kepler University, Linz, Austria (1991)
5. Buchberger, B.: Symbolic Computation (in German). In: Gustav Pomberger and Peter Rechenberg, Handbuch der Informatik, Birkhäuser, München (1997) 955-974
6. Buchberger, B.: Introduction to Gröbner Bases. In [8] 3-31
7. Buchberger, B., Loos, Rüdiger: Algebraic Simplification. In: Bruno Buchberger, George Edwin Collins and Rüdiger Loos (eds.), Computer Algebra - Symbolic and Algebraic Computation, Springer, Wien (1982)

8. Buchberger, B., Winkler, F.: Gröbner Bases and Applications. London Mathematical Society Lecture Notes 251, Cambridge University Press, Cambridge, UK (1998)
9. Corry, L.: Modern Algebra and the Rise of Mathematical Structures. Birkhäuser, Basel (1996)
10. Courant, R., Hilbert, D.: Die Methoden der mathematischen Physik, Volumes 1 & 2. Springer, Berlin (1993)
11. Kamke, E.: Differentialgleichungen und Lösungsmethoden (Volume 1). Teubner, Stuttgart, tenth edition (1983)
12. Lang, S.: Algebra. Springer, New York (2002)
13. Lausch, H., Nöbauer, W.: Algebra of Polynomials. North-Holland, Amsterdam (1973)
14. Rosenkranz, M.: The Green's Algebra - A Polynomial Approach to Boundary Value Problems. PhD Thesis, Johannes Kepler University, Linz, Austria (2003)
15. Rosenkranz, M.: A New Symbolic Method for Solving Linear Two-Point Boundary Value Problems on the Level of Operators. Journal of Symbolic Computation, submitted (2004). Also available as SFB Report 2003-41, Johannes Kepler University, Linz, Austria (2003)
16. Rosenkranz, M., Buchberger, B., Engl, Heinz W.: Solving Linear Boundary Value Problems via Non-commutative Gröbner Bases. Applicable Analysis 82 (2003) 655-675
17. Rosenkranz, M., Buchberger, B., Engl, Heinz W.: Computer Algebra for Pure and Applied Functional Analysis. An FWF Proposal for F1322, Subproject of the SFB F013, Johannes Kepler University, Linz, Austria (2003)

Finite Algebras and AI: From Matrix Semantics to Stochastic Local Search

Zbigniew Stachniak
Department of Computer Science, York University
Toronto, Canada

1 Introduction

Universal algebra has underpinned the modern research in formal logic since Garrett Birkhoff's pioneering work in the 1930's and 1940's. Since the early 1970's, the entanglement of logic and algebra has been successfully exploited in many areas of computer science, from the theory of computation to Artificial Intelligence (AI). The scientific outcome of the interplay between logic and universal algebra in computer science is rich and vast (cf. [2]).

In this presentation I shall discuss some applications of universal algebra in AI with an emphasis on Knowledge Representation and Reasoning (KRR). A brief survey, such as this, of possible ways in which the universal algebra theory could be employed in research on KRR systems has to be necessarily incomplete. It is primarily for this reason that I shall concentrate almost exclusively on propositional KRR systems. But there are other reasons too. The outburst of research activities on stochastic local search for propositional satisfiability that followed the seminal paper A New Method for Solving Hard Satisfiability Problems by Selman, Levesque, and Mitchell (cf. [11]) provides some evidence that propositional techniques could be surprisingly effective in finding solutions to 'realistic' instances of hard problems.

2 Propositional KRR Systems

One of the main objectives of Knowledge Representation is the development of adequate and, preferably, tractable formal representational frameworks for modeling intelligent behaviors of AI agents. In the symbolic approach to knowledge representation, a KRR system consists of at least a formal knowledge representational language L and of an inference operation ⊢ on L.
Such a system may involve additional operations and relations besides ⊢ (such as plan generation and evaluation, belief revision, or diagnosis); for some domains, some of these additional operations can be defined or implemented in terms of 'basic' logical operations: logical inference, consistency verification, and satisfiability checking. Representing reasoning tasks as instances of logical inference, consistency, and satisfiability problems is discussed below.

B. Buchberger and J.A. Campbell (Eds.): AISC 2004, LNAI 3249, pp. 8-14, 2004.
© Springer-Verlag Berlin Heidelberg 2004

Syntax. In the propositional case, a representational language L, defined by a set of propositional variables Var and logical connectives f0, ..., fn, can be viewed as a term algebra (or Lindenbaum algebra of formulas) (Terms(Var), f0, ..., fn) generated by Var, where Terms(Var) denotes the set of all well-formed formulas of L. Syntactically richer languages can be adequately modeled using, for instance, partial and many-sorted term algebras.

Inference Systems. Given a propositional language L, a relation ⊢ between sets of formulas of L and formulas of L is called an inference operation on L if for every set X of formulas:
(c1) X ⊆ C(X) (inclusion);
(c2) C(C(X)) ⊆ C(X) (idempotence);
where C(X) = {β : X ⊢ β}. An inference system on L is a pair (L, ⊢), where ⊢ is an inference operation on L. Further conditions on ⊢ can be imposed: for every X, Y ⊆ Terms(Var),
(c3) X ⊆ Y ⊆ C(X) implies C(X) = C(Y) (cumulativity);
(c4) X ⊆ Y implies C(X) ⊆ C(Y) (monotonicity);
(c5) for every endomorphism e of L, e(C(X)) ⊆ C(e(X)) (structurality).
Every inference system satisfying (c1)-(c5) is called a propositional logic. Since Tarski's axiomatization of the concept of a consequence operation in formalized languages, algebraic properties of monotonic and non-monotonic inference operations have been extensively studied in the literature (cf. [1,10,13,16]).

Matrix Semantics. The central idea behind classical matrix semantics is to view algebras similar to a language L as models of L. Interpretations of formulas of L in an algebra A similar to L are homomorphisms of L into A. When A is augmented with a subset d of the universe of A, the resulting structure M = (A, d), called a logical matrix for L, determines the inference operation ⊢_M defined in the following way: for every set X ∪ {α} of formulas of L,

X ⊢_M α iff for every homomorphism h of L into A, if h(X) ⊆ d then h(α) ∈ d.
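For a finite matrix, the relation ⊢_M can be decided by brute force over all valuations. The following Python sketch (all names are our own; the two-valued tables for 'and', 'or', 'not' are purely illustrative, and any finite operation tables and designated set would do) spells out the definition above: enumerate every homomorphism h, and check that whenever h sends all premises into d, it sends the conclusion into d as well.

```python
from itertools import product

# A finite logical matrix M = (A, d): a set of truth-values, operation
# tables for the connectives, and a set d of designated values.
OPS = {
    "and": lambda a, b: min(a, b),
    "or":  lambda a, b: max(a, b),
    "not": lambda a: 1 - a,
}
VALUES = (0, 1)
DESIGNATED = {1}

def variables(formula, acc=None):
    """Collect the propositional variables of a formula.  Formulas are
    strings (variables) or tuples (connective, arguments...)."""
    acc = set() if acc is None else acc
    if isinstance(formula, str):
        acc.add(formula)
    else:
        for arg in formula[1:]:
            variables(arg, acc)
    return acc

def value(formula, h):
    """Homomorphic extension of a valuation h : Var -> A."""
    if isinstance(formula, str):
        return h[formula]
    op, *args = formula
    return OPS[op](*(value(a, h) for a in args))

def entails(premises, conclusion):
    """X |-_M alpha: every valuation sending X into d sends alpha into d."""
    vs = sorted(set().union(*(variables(f) for f in premises + [conclusion])))
    for vals in product(VALUES, repeat=len(vs)):
        h = dict(zip(vs, vals))
        if all(value(p, h) in DESIGNATED for p in premises):
            if value(conclusion, h) not in DESIGNATED:
                return False
    return True
```

For example, entails([("and", "p", "q")], "p") holds, while entails(["p"], "q") does not. The same loop works unchanged for any finitely-valued matrix once OPS, VALUES and DESIGNATED are replaced.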
The research on logical matrices has been strongly influenced by universal algebra and model theory. Wójcicki's monograph [16] contains a detailed account of the development of matrix semantics since its inception in the early 20th century. In AI, matrix semantics (and the closely related discipline of many-valued logics) has been successfully exploited in the areas of Automated Reasoning, KRR, and Logic Programming (cf. [3,4,5,6,9,13,15]).

Monotone Calculi. The inference operation ⊢_M defined by a logical matrix M satisfies not only (c1)-(c3) but also (c4) and (c5). Furthermore, for every propositional calculus (L, ⊢) there exists a class K of logical matrices for L such that ⊢ = ∩{⊢_M : M ∈ K}.

Beyond Structurality: Admissible Valuations. One way of extending matrix semantics to cover non-structural inference systems is to define the semantic entailment in terms of 'admissible interpretations', i.e., to consider generalized matrices of the form (A, d, H), where A and d are as above, and H is a subset of the set of all interpretations of L into A. In this semantic framework, every inference operation that satisfies (c1)-(c4) can be defined by a class of generalized matrices. A similar approach of admitting only some interpretations to model non-structural nonmonotonic inference systems has also been developed for preferential model semantics (cf. [7]).

Beyond Monotonicity: Preferential Matrices. The notion of cumulativity arose as a result of the search for desired and natural formal properties of nonmonotonic inference systems. A desired 'degree' of nonmonotonicity can be semantically modeled in terms of logical matrices of the form M = (A, D, H, ≺), where A and H are as in a generalized matrix, D is a family of subsets of the universe of A, and ≺ is a binary (preference) relation on D. The inference operation ⊢_M is defined as follows:

X ⊢_M α iff for every h ∈ H and every d ∈ D, if d is a minimal element of D (with respect to ≺) such that h(X) ⊆ d, then h(α) ∈ d.

Preferential matrices have the same semantic scope as preferential model structures (cf. [8,14]).

Logical Matrices with Completion. It is not essential to interpret the underlying algebra A of a logical matrix M for a language L as a space of truth-values for the formulas of L.
The elements of A can be interpreted as propositions, events, and even infons of the Situation Theory of Barwise and Perry. If one views subsets of the universe of A as situations (partial or complete), then preferential matrices can be replaced by structures of the form M = (A, H, cl) called matrices with completion, where A and H are as above and cl is a function that maps 2^|A| into 2^|A| such that for every B ⊆ |A|, B ⊆ cl(B) = cl(cl(B)). In the language of universal algebra, cl is a closure operator on A. This operation can be thought of as a completion function that assigns an actual and complete situation cl(B) to a (possibly partial) situation B, which is a part of cl(B). The inference operation ⊢_M associated with such a matrix is defined as follows: for every set X ∪ {α} of formulas,

X ⊢_M α iff for every h ∈ H, h(α) ∈ cl(h(X)).

Matrices with completion can be used to semantically model cumulativity without any explicit reference to preference.
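The two ingredients of this definition are easy to make concrete. A minimal sketch, with an entirely hypothetical universe {0, 1, 2}, completion function, and admissible interpretations of our own choosing (none of this is from the paper), checks the closure-operator laws and then decides the entailment just defined; for simplicity the 'formulas' here are bare variable names.

```python
# Matrices with completion M = (A, H, cl): an algebra A, a set H of
# admissible interpretations, and a closure operator cl on subsets of A.
UNIVERSE = frozenset({0, 1, 2})

def cl(b):
    """A hypothetical completion function: upward closure in the usual
    order on {0, 1, 2}.  Any extensive, idempotent map 2^|A| -> 2^|A| works."""
    return frozenset(x for x in UNIVERSE if any(x >= y for y in b))

# cl is a closure operator: B <= cl(B) and cl(cl(B)) == cl(B).
for b in [frozenset(), frozenset({0}), frozenset({1, 2})]:
    assert b <= cl(b) and cl(cl(b)) == cl(b)

# H: a hypothetical set of admissible interpretations sending the two
# propositional variables "p", "q" to elements of the universe.
H = [{"p": 0, "q": 2}, {"p": 1, "q": 1}]

def entails(xs, a):
    """X |-_M a  iff  for every h in H, h(a) lies in cl(h(X)).
    Formulas are just variable names in this toy setting."""
    return all(h[a] in cl(frozenset(h[x] for x in xs)) for h in H)
```

With these choices entails(["q"], "q") holds but entails(["q"], "p") fails, since the first interpretation completes {2} only to {2}, which misses h(p) = 0.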

Beyond Matrix Semantics. The interplay between logic and universal algebra goes far beyond matrix semantics; a wealth of results harvested in disciplines such as type theory, term rewriting, algebraic logic, or fuzzy logic, and subjects such as bilattices, dynamic logics, or unification, have had and will continue to have a significant impact on AI research.

3 Problem Solving as Consistency Verification

Automated Reasoning deals with the development and application of computer programs to perform a variety of reasoning tasks, frequently represented as instances of the consistency verification problem.

Refutational Principle. Refutational theorem proving methods, such as resolution, rely on a correspondence between valid inferences and finite inconsistent sets. The refutational principle for an inference system P = (L, ⊢) states that there is an algorithm that transforms every finite set X ∪ {α} of formulas into another finite set X_α of formulas in such a way that

(ref) X ⊢ α iff X_α is inconsistent in P (i.e., for every formula β, X_α ⊢ β).

In the light of (ref), a refutational automated reasoning system answers a query X ⊢ α by determining the consistency status of X_α.

Resolution Algebras. Let L = (Terms(Var), f0, ..., fn) be a propositional language (let us assume that the disjunction, denoted by ∨, is among the connectives of L). A resolution algebra for L is a finite algebra of the form

Rs = (({v0, ..., vk}, f0, ..., fn), F)

where {v0, ..., vk} is a set V of formulas of L called verifiers; for every i ≤ n, the operation fi of Rs and the corresponding connective fi of L are of the same arity; and F is a subset of V. Rs defines two types of inference rules. The resolution rule is the case analysis on truth of a common variable p expressed using verifiers. The other inference rules are the simplification rules defined by the operations f0, ..., fn (see [13]).
A set X of formulas is refutable in Rs if and only if one of the verifiers from F can be derived from X using the inference rules defined by Rs.

Resolution Logics. A propositional logic P = (L, ⊢) is said to be a resolution logic if there exists a resolution algebra Rs such that for every finite set X of formulas (which do not share variables with the verifiers),

X is inconsistent in P iff X is refutable in Rs.
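The verifier-based construction is beyond a short example, but the refutational idea it generalizes can be illustrated with ordinary clausal resolution for classical logic: saturate a clause set under the resolution rule and declare it refutable exactly when the empty clause appears. The sketch below is our own illustration of that classical special case, not the resolution-algebra construction of [13].

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses.  Clauses are frozensets of literals;
    a literal is a pair (variable, sign), e.g. ('p', True) or ('p', False)."""
    out = []
    for (v, s) in c1:
        if (v, not s) in c2:
            out.append((c1 - {(v, s)}) | (c2 - {(v, not s)}))
    return out

def refutable(clauses):
    """Saturate under resolution; refutable iff the empty clause is derived."""
    known = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(known, 2):
            for r in resolve(c1, c2):
                if not r:          # empty clause: refutation found
                    return True
                if r not in known:
                    new.add(r)
        if not new:                # saturated without the empty clause
            return False
        known |= new
```

For instance, {p} together with {¬p} is refutable, while {p} alone is not; this is the classical instance of deciding inconsistency of X_α in (ref).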

Additional conditions to guarantee the soundness of the refutation process should also be imposed (cf. [13]). The class of resolution logics consists of those calculi which are indistinguishable on inconsistent sets from logics defined by finite matrices. Furthermore, resolution algebras for logics defined by finite logical matrices can be effectively constructed from the defining matrices (cf. [13]).

Lattices of Resolution Logics. For a logic P = (L, ⊢), let K_P denote the class of all logics on L which have the same inconsistent sets as P. K_P is a bounded lattice under the ordering ≤ defined as follows: if P_i = (L, ⊢_i), i = 0, 1, then P_0 ≤ P_1 iff P_1 is inferentially at least as strong as P_0. The lattice (K_P, ≤) is a convenient tool to discuss the scope of the resolution method defined in terms of resolution algebras: if P is a resolution logic, then so are all the logics in K_P. From the logical standpoint, the systems in K_P can be quite different; from the refutational point of view, they can all be defined by the same resolution algebra.

Nonmonotonic Resolution Logics. Resolution algebras can also be used to implement some nonmonotonic inference systems. Let P = (L, ⊢) be an arbitrary cumulative inference system. The monotone base of P is the greatest logic P_B on L (with respect to ≤) such that P_B ≤ P. The monotone base of the so-called supraclassical inference systems is classical propositional logic (cf. [8]). The consistency preservation property limits the inferential power by which P and P_B can differ (cf. [8,13]). It states that both P and P_B have to have the same inconsistent sets of formulas. Every cumulative, structural, and proper inference system satisfies the consistency preservation property. Hence, every such system can be provided with a resolution algebra based proof system, provided that its monotone base is a resolution logic.
4 Problem Solving as Satisfiability

A reasoning task, such as a planning problem, can be solved by first expressing it as a satisfiability problem in some logical matrix M and then solving it using one of the satisfiability solvers for M. In spite of the fact that for many finite matrices (A, d) the satisfiability problem

(SAT_M) for every formula α, determine whether or not there exists an interpretation h such that h(α) ∈ d

is NP-complete, a number of complete and incomplete SAT_M solvers have been developed, and their good performance in finding solutions to instances of many problems in real-world domains has been empirically demonstrated.

Searching for a Satisfying Interpretation. Given a matrix M = (A, d) for a language L, a stochastic local search algorithm for satisfiability in M starts by generating a random interpretation h restricted to the variables of an input formula α. Then, it locally modifies h by selecting a variable p of α, using some selection heuristic select_var(α, h), and changing its truth-value from h(p) to

some new truth-value using another selection heuristic select_val(α, p, h). Such selections of variables and such changes of their truth-values are repeated until either h(α) ∈ d or the time allocated to modify h into a satisfying valuation has elapsed. The process is repeated (if needed) up to a specified number of times. The above procedure informally defines an incomplete SAT_M solver (clearly, it cannot be used to determine the unsatisfiability of a formula).

Polarity and SAT_M. The classical notion of polarity of a variable p in a formula α(p) captures the monotonic behavior of the term operation f_α(p) induced by α(p) over p in a partially ordered algebra of truth-values. The selection heuristics select_var(α, h) and select_val(α, p, h) of a SAT_M solver can be defined in terms of polarity. This is done in the non-clausal solver polSAT for classical propositional logic as well as in its extensions to finitely-valued logics (cf. [12]).

Improving the Efficiency of Resolution with SAT_M Solvers. An unrestricted use of the resolution rule during the deductive process may very quickly result in a combinatorial explosion of the set of deduced resolvents, making the completion of a reasoning task unattainable in an acceptable amount of time. In an efficient resolution-based reasoning program, the generation of resolvents that would evidently have no impact on the completion of a reasoning task must be blocked. Tautological resolvents are just that sort of formula. For many resolution logics the tautology problem is coNP-complete. For some of these logics, SAT_M solvers can be used to guide the search for refutation so that the use of tautologies during the refutation process is unlikely. At the same time the refutational completeness of the deductive process is preserved.

References

1. Blok, W.J. and Pigozzi, D. Algebraizable Logics. Memoirs of the American Mathematical Society, vol.
77, no. 396 (1989).
2. Denecke, K. and Wismath, S.L. Universal Algebra and Applications in Computer Science. Chapman & Hall/CRC (2002).
3. Fitting, M. and Orlowska, E. (eds.) Beyond Two: Theory and Applications of Multiple-Valued Logic. Studies in Fuzziness and Soft Computing, Physica-Verlag (2003).
4. Fitting, M. Bilattices and the semantics of logic programming. Journal of Logic Programming 11, pp. 91-116 (1991).
5. Ginsberg, M.L. Multivalued logics: a uniform approach to reasoning in artificial intelligence. Comput. Intell., vol. 5, pp. 265-316.
6. Hähnle, R. Automated Deduction in Multiple-Valued Logics. Clarendon Press (1993).
7. Kraus, S., Lehmann, D. and Magidor, M. Nonmonotonic Reasoning, Preferential Models and Cumulative Logics. Artificial Intelligence 44, pp. 167-207 (1990).
8. Makinson, D. General Patterns in Nonmonotonic Reasoning. In Handbook of Logic in Artificial Intelligence and Logic Programming. Vol. 3: Nonmonotonic and Uncertain Reasoning (Gabbay, D.M., Hogger, C.J., and Robinson, J.A. eds.). Oxford University Press (1994).

9. Przymusinski, T. Three-Valued Non-Monotonic Formalisms and Logic Programming. In Proc. of the First Int. Conf. on Principles of Knowledge Representation and Reasoning (KR'89), Toronto, pp. 341-348 (1989).
10. Rasiowa, H. An Algebraic Approach to Non-Classical Logics. North-Holland (1974).
11. Selman, B., Levesque, H., and Mitchell, D. A new method for solving hard satisfiability problems. In Proc. AAAI-92, pp. 440-446 (1992).
12. Stachniak, Z. Polarity-based Stochastic Local Search Algorithm for Non-clausal Satisfiability. In [3], pp. 181-192 (2002).
13. Stachniak, Z. Resolution Proof Systems: An Algebraic Theory. Kluwer (1996).
14. Stachniak, Z. Nonmonotonic Theories and Their Axiomatic Varieties. J. of Logic, Language, and Computation, vol. 4, no. 4, pp. 317-334 (1995).
15. Thistlewaite, P.B., McRobbie, M.A., and Meyer, R.K. Automated Theorem-Proving in Non-Classical Logics. Pitman (1988).
16. Wójcicki, R. Theory of Logical Calculi: Basic Theory of Consequence Operations. Kluwer (1988).

Proof Search in Minimal Logic

Helmut Schwichtenberg
Mathematisches Institut der Universität München
schwicht@mathematik.uni-muenchen.de
http://www.mathematik.uni-muenchen.de/~schwicht/

1 Introduction

We describe a rather natural proof search algorithm for a certain fragment of higher order (simply typed) minimal logic. This fragment is determined by requiring that every higher order variable Y can only occur in a context Yx, where x are distinct bound variables in the scope of the operator binding Y, and of opposite polarity. Note that for first order logic this restriction does not mean anything, since there are no higher order variables. However, when designing a proof search algorithm for first order logic only, one is naturally led into this fragment of higher order logic, where the algorithm works as well.

In doing this we rely heavily on Miller [1], who has introduced this type of restriction to higher order terms (called patterns by Nipkow [2]), noted its relevance for extensions of logic programming, and showed that the unification problem for patterns is solvable and admits most general unifiers. The present paper was motivated by the desire to use Miller's approach as a basis for an implementation of a simple proof search engine for (first and higher order) minimal logic. This goal prompted us into several simplifications, optimizations and extensions, in particular the following.
- Instead of arbitrarily mixed prefixes we only use those of the form ∀∃∀. Nipkow in [2] already had presented a version of Miller's pattern unification algorithm for such prefixes, and Miller in [1, Section 9.2] notes that in such a situation any two unifiers can be transformed into each other by a variable renaming substitution. Here we restrict ourselves to ∀∃∀-prefixes throughout, i.e., in the proof search algorithm as well.
- The order of events in the pattern unification algorithm is changed slightly, by postponing the raising step until it is really needed.
This avoids unnecessary creation of new higher type variables; already Miller noted in [1, p. 515] that such optimizations are possible.
- The extensions concern the (strong) existential quantifier, which has been left out in Miller's treatment, and also conjunction. The latter can be avoided in principle, but of course is a useful thing to have.
Moreover, since part of the motivation to write this paper was the necessity to have a guide for our implementation, we have paid particular attention to write at least the parts of the proofs with algorithmic content as clearly and completely as possible.

B. Buchberger and J.A. Campbell (Eds.): AISC 2004, LNAI 3249, pp. 15-25, 2004.
© Springer-Verlag Berlin Heidelberg 2004

The paper is organized as follows. Section 2 defines the pattern unification algorithm, and in Section 3 its correctness and completeness is proved. Section 4 presents the proof search algorithm, and again its correctness and completeness is proved. The final Section 5 contains what we have to say about extensions to ∧ and ∃.

2 The Unification Algorithm

We work in the simply typed λ-calculus, with the usual conventions. For instance, whenever we write a term we assume that it is correctly typed. Substitutions are denoted by φ, ψ, ρ. The result of applying a substitution φ to a term t or a formula A is written as tφ or Aφ, with the understanding that after the substitution all terms are brought into long normal form.

Q always denotes a ∀∃∀-prefix, say ∀x∃y∀z, with distinct variables. We call x the signature variables, y the flexible variables and z the forbidden variables of Q, and write Q∃ for the existential part ∃y of Q.

Q-terms are inductively defined by the following clauses.
- If u is a universally quantified variable in Q or a constant, and r are Q-terms, then ur is a Q-term.
- For any flexible variable y and distinct forbidden variables z from Q, yz is a Q-term.
- If t is a Q∀z-term, then λz t is a Q-term.
Explicitly, t is a Q-term iff all its free variables are in Q, and for every subterm yr of t with y free in t and flexible in Q, the r are distinct variables either λ-bound in t (such that yr is in the scope of this λ) or else forbidden in Q.

Q-goals and Q-clauses are simultaneously defined by
- If r are Q-terms, then Pr is a Q-goal as well as a Q-clause.
- If D is a Q-clause and G is a Q-goal, then D → G is a Q-goal.
- If G is a Q-goal and D is a Q-clause, then G → D is a Q-clause.
- If G is a Q∀x-goal, then ∀x G is a Q-goal.
- If D[y := Yz] is a ∀x∃y,Y∀z-clause, then ∀y D is a ∀x∃y∀z-clause.
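The restriction that defines Q-terms above, namely that each flexible variable is applied only to a list of distinct λ-bound or forbidden variables, is Miller's pattern condition, and it is cheap to check syntactically. A small sketch under a term representation of our own devising (not from the paper): terms are tuples ("var", name), ("app", head, args) or ("lam", name, body), and we walk the tree keeping track of the λ-bound names.

```python
def is_pattern(term, flexible, forbidden, bound=()):
    """Check the pattern condition: every occurrence of a flexible head is
    applied to pairwise distinct variables, each lambda-bound above the
    occurrence or forbidden (outer-universal) in the prefix Q."""
    kind = term[0]
    if kind == "var":
        return True
    if kind == "lam":
        _, x, body = term
        return is_pattern(body, flexible, forbidden, bound + (x,))
    _, head, args = term                      # kind == "app"
    if head in flexible:
        names = [a[1] for a in args if a[0] == "var"]
        return (len(names) == len(args)       # arguments are variables
                and len(set(names)) == len(names)   # pairwise distinct
                and all(n in forbidden or n in bound for n in names))
    return all(is_pattern(a, flexible, forbidden, bound) for a in args)
```

For example, λz. Y z is a pattern when Y is flexible, whereas Y z z (duplicated argument) and Y (f z) (non-variable argument) are not; rigid heads like f impose no restriction on their arguments beyond recursion.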
Explicitly, a formula A is a Q-goal iff all its free variables are in Q, and for every subterm yr of A with y either existentially bound in A (with yr in the scope) or else free in A and flexible in Q, the r are distinct variables either λ- or universally bound in A (such that yr is in the scope) or else free in A and forbidden in Q.

A Q-substitution is a substitution of Q-terms.

A unification problem U consists of a ∀∃∀-prefix Q and a conjunction C of equations between Q-terms of the same type, i.e., ⋀_{i=1}^{n} r_i = s_i. We may assume that each such equation is of the form λx r = λx s with the same x (which may be empty) and r, s of ground type. A solution to such a unification problem U is a Q-substitution φ such that for every i, r_iφ = s_iφ holds (i.e., r_iφ and s_iφ have the same normal form). We

sometimes write C as r = s, and (for obvious reasons) call it a list of unification pairs.

We now define the unification algorithm. It takes a unification problem U = QC and returns a substitution ρ and another pattern unification problem U' = Q'C'. Note that ρ will be neither a Q-substitution nor a Q'-substitution, but will have the property that
- ρ is defined on flexible variables of Q only, and its value terms have no free occurrences of forbidden variables from Q,
- if G is a Q-goal, then Gρ is a Q'-goal, and
- whenever φ' is a U'-solution, then (ρ ∘ φ') restricted to Q∃ is a U-solution.

To define the unification algorithm, we distinguish cases according to the form of the unification problem, and either give the transition done by the algorithm, or else state that it fails.

Case identity, i.e., Q.r = r ∧ C. Then Q.r = r ∧ C ⟹_ε QC.

Case ξ, i.e., Q.λx r = λx s ∧ C. We may assume here that the bound variables x are the same on both sides. Q.λx r = λx s ∧ C ⟹_ε Q∀x.r = s ∧ C.

Case rigid-rigid, i.e., Q.fr = fs ∧ C. Q.fr = fs ∧ C ⟹_ε Q.r = s ∧ C.

Case flex-flex with equal heads, i.e., Q.uy = uz ∧ C. Q.uy = uz ∧ C ⟹_ρ Q'.Cρ with ρ = [u := λy.u'w], where Q' is Q with ∃u replaced by ∃u', and w an enumeration of {y_i | y_i = z_i} (note λy.u'w = λz.u'w).

Case flex-flex with different heads, i.e., Q.uy = vz ∧ C. Q.uy = vz ∧ C ⟹_ρ Q'Cρ, where ρ and Q' are defined as follows. Let w be an enumeration of the variables both in y and in z. Then ρ = [u, v := λy.u'w, λz.u'w], and Q' is Q with ∃u, ∃v removed and ∃u' inserted.

Case flex-rigid, i.e., Q.uy = t ∧ C with t rigid, i.e., not of the form vz with flexible v.

Subcase occurrence check: t contains (a critical subterm with head) u. Fail.

Subcase pruning: t contains a subterm vw₁zw₂ with ∃v in Q, and z free in t but not in y. Q.uy = t ∧ C ⟹_ρ Q'.uy = tρ ∧ Cρ where ρ = [v := λw₁,z,w₂.v'w₁w₂], and Q' is Q with ∃v replaced by ∃v'.

Subcase pruning impossible: λy t (after all pruning steps are done) still has a free occurrence of a forbidden variable z. Fail.

Subcase explicit definition: otherwise. Q.uy = t ∧ C ⟹_ρ Q'Cρ where ρ = [u := λy t], and Q' is obtained from Q by removing ∃u.

This concludes the definition of the unification algorithm. Our next task is to prove that this algorithm indeed has the three properties stated above. The first one (ρ is defined on flexible variables of Q only, and its value terms have no free occurrences of forbidden variables from Q) is obvious from the definition. We now prove the second one; the third one will be proved in the next section.

Lemma 1. If Q ⟹_ρ Q' and G is a Q-goal, then Gρ is a Q'-goal.

Proof. We distinguish cases according to the definition of the unification algorithm.

Cases identity, ξ and rigid-rigid. Then ρ = ε and the claim is trivial.

Case flex-flex with equal heads. Then ρ = [u := λy.u'w] with w a sublist of y, and Q' is Q with ∃u replaced by ∃u'. Then clearly G[u := λy.u'w] is a Q'-goal (recall that after a substitution we always normalize).

Case flex-flex with different heads. Then ρ = [u, v := λy.u'w, λz.u'w] with w an enumeration of the variables both in y and in z, and Q' is Q with ∃u, ∃v removed and ∃u' inserted. Again clearly G[u, v := λy.u'w, λz.u'w] is a Q'-goal.

Case flex-rigid, subcase pruning: Then ρ = [v := λw₁,z,w₂.v'w₁w₂], and Q' is Q with ∃v replaced by ∃v'. Suppose G is a Q-goal. Then clearly G[v := λw₁,z,w₂.v'w₁w₂] is a Q'-goal.

Case flex-rigid, subcase explicit definition: Then ρ = [u := λy t] with a Q-term λy t without free occurrences of forbidden variables, and Q' is obtained from Q by removing ∃u. Suppose G is a Q-goal. Then clearly G[u := λy t] (brought into normal form) is a Q'-goal.

Let Q →_ρ Q' mean that for some C, C' we have QC ⟹_ρ Q'C'. Write Q →*_ρ Q' if there are ρ₁, ..., ρ_n and Q₁, ..., Q_{n-1} such that Q →_{ρ₁} Q₁ →_{ρ₂} ... →_{ρ_{n-1}} Q_{n-1} →_{ρ_n} Q', and ρ = ρ₁ ∘ ... ∘ ρ_n.

Corollary 1.
If Q →*_ρ Q' and G is a Q-goal, then Gρ is a Q'-goal.

3 Correctness and Completeness of the Unification Algorithm

Lemma 2. Let a unification problem U consisting of a ∀∃∀-prefix Q and a list r = s of unification pairs be given. Then either