09:00 | Georg Gottlob (University of Oxford) Datalog+-: A Family of Logical Query Languages for New Applications. I will report on a recently introduced family of Datalog-based languages, called Datalog+-, which is a new framework for tractable ontology querying, and for a variety of other applications. Datalog+- extends plain Datalog by features such as existentially quantified rule heads and, at the same time, restricts the rule bodies so as to achieve decidability and tractability. I will review a number of theoretical results and show how Datalog+- relates to both Database Dependency Theory and the Guarded Fragment of first-order logic. I will show that popular tractable description logics translate into Datalog+- and illustrate how this formalism can be used in the context of web data extraction, data exchange, and other applications. |
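For illustration (the predicate names here are my own, not taken from the talk), a Datalog+- rule may existentially quantify a variable in its head, which plain Datalog forbids:

```latex
% A tuple-generating dependency expressible in Datalog+- but not in
% plain Datalog: every person has a father, who is again a person.
\mathit{person}(X) \;\rightarrow\; \exists Y \, \mathit{father}(Y, X)
\qquad
\mathit{father}(Y, X) \;\rightarrow\; \mathit{person}(Y)
```

Reasoning with unrestricted rules of this shape is undecidable; the restrictions on rule bodies mentioned in the abstract are what recover decidability and tractability.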

10:30 | Joerg Endrullis, Clemens Grabmayer, Dimitri Hendriks, Jan Willem Klop and Vincent van Oostrom Unique Normal Forms in Infinitary Weakly Orthogonal Term Rewriting The theory of finite and infinitary term rewriting is extensively developed for orthogonal rewrite systems, but to a lesser degree for weakly orthogonal rewrite systems. In this paper we present some contributions to the case of weak orthogonality, where critical pairs are admitted provided they are trivial. We devise a refinement of the compression lemma as a tool for establishing infinitary confluence, and hence the infinitary unique normal form property, for weakly orthogonal TRSs that do not contain collapsing rewrite rules. That the restriction to collapse-free systems is crucial is shown by an elaboration of a simple TRS that is weakly orthogonal but has two collapsing rules; there, the theory for orthogonal systems breaks down dramatically. We conclude by establishing a positive fact: the diamond property for infinitary developments of weakly orthogonal TRSs, obtained by extending an earlier cluster analysis for the finite case. |
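For context, a collapsing rule is one whose right-hand side is a single variable. A minimal weakly orthogonal system with two collapsing rules (my rendering of the standard example with two unary symbols, not necessarily the paper's exact notation) is:

```latex
P(S(x)) \to x \qquad S(P(x)) \to x
% The overlaps, e.g. P(S(P(x))), reduce to P(x) by either rule,
% so all critical pairs are trivial: the system is weakly orthogonal.
```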

11:00 | Patrick Bahr Abstract Models of Transfinite Reductions We investigate transfinite reductions in abstract reduction systems. To this end, we study two abstract models for transfinite reductions: a metric model generalising the usual metric approach to infinitary term rewriting and a novel partial order model. For both models we distinguish between a weak and a strong variant of convergence as known from infinitary term rewriting. Furthermore, we introduce an axiomatic model of reductions that is general enough to cover all of these models of transfinite reductions as well as the ordinary model of finite reductions. It is shown that, in this unifying axiomatic model, many basic relations between termination and confluence properties known from finite reductions still hold. The introduced models are applied not only to term rewriting but also to term graph rewriting. We can show that for both term rewriting and term graph rewriting the partial order model forms a conservative extension of the metric model. |
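The "usual metric approach" referred to here is, in standard notation, the metric on finite and infinite terms given by

```latex
d(s, t) \;=\;
\begin{cases}
  0      & \text{if } s = t,\\
  2^{-k} & \text{otherwise, where } k \text{ is the least depth at which } s \text{ and } t \text{ differ.}
\end{cases}
```

Weak convergence asks that the sequence of terms in a reduction converges in this metric; strong convergence additionally requires that the depths of the contracted redexes tend to infinity.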

11:30 | Stefan Kahrs Infinitary Rewriting: Foundations Revisited Infinitary term rewriting makes it possible to express infinitary terms and infinitary reductions that converge to them. As notions of transfinite reduction in general, and as binary relations in particular, two concepts have been studied in the past: strongly and weakly convergent reductions; in the last decade research has mostly focused on the former. Finitary rewriting has a strong connection to the equational theory of its rule set: if the rewrite system is confluent, this (implies consistency of the theory and) gives rise to a semi-decision procedure for the theory, and if the rewrite system is in addition terminating, this becomes a decision procedure. This connection is the original reason for the study of these properties in rewriting. For infinitary rewriting there is barely an established notion of an equational theory. The reason this issue is not trivial is that such a theory would need to include some form of "getting to limits", and there are different options one can pursue. These options are examined here, as well as several alternatives for the notion of reduction relation and their relationships to these equational theories. |

12:00 | Jakob Grue Simonsen Weak Convergence and Uniform Normalization in Infinitary Rewriting We study infinitary term rewriting systems containing finitely many rules. For these, we show that if a weakly convergent reduction is not strongly convergent, it contains a term that reduces to itself in one step (but the step itself need not be part of the reduction). Using this result, we prove the starkly surprising result that for any orthogonal system with finitely many rules, the system is weakly normalizing under weak convergence iff it is strongly normalizing under weak convergence iff it is weakly normalizing under strong convergence iff it is strongly normalizing under strong convergence. As further corollaries, we derive a number of new results for weakly convergent rewriting: Systems with finitely many rules enjoy unique normal forms, and acyclic orthogonal systems are confluent. Our results suggest that it may be possible to recover some of the positive results for strongly convergent rewriting in the setting of weak convergence, if systems with finitely many rules are considered. Finally, we give a number of counterexamples showing failure of most of the results when infinite sets of rules are allowed. |

14:00 | J Strother Moore (University of Texas) Theorem Proving for Verification: The Early Days. Since Turing, computer scientists have understood that the question "does this program satisfy its specifications?" could be reduced to the question "are these formulas theorems?" But the theorem proving technology of the 50s and 60s was inadequate for the task. In 1971, here in Edinburgh, Boyer and I started building the first general-purpose theorem prover designed for a computational logic. This project continues today, with Matt Kaufmann as a partner; the current version of the theorem prover is ACL2 (A Computational Logic for Applicative Common Lisp). In this talk I'll give a highly personal view of the four-decade-long "Boyer-Moore Project", including our mechanization of inductive proof, support for recursive definitions, rewriting with previously proved lemmas, integration of decision procedures, efficient representation of logical constants, fast execution, and other proof techniques. Along the way we'll see several interesting side roads: the founding of the Edinburgh school of logic programming, a structure-shared text editor that played a role in the creation of Word, and perhaps most surprisingly, the use of our "Lisp theorem prover" to formalize and prove theorems about commercial microprocessors and virtual machines via deep embeddings, including parts of processors by AMD, Centaur, IBM, Motorola, Rockwell-Collins, Sun, and others. The entire project helps shed light on the dichotomy between general-purpose theorem provers and special-purpose analysis tools. |

15:30 | Harald Zankl and Martin Korp Modular Complexity Analysis via Relative Complexity All current investigations of derivational complexity analysis of term rewrite systems are based on a single termination proof, possibly preceded by transformations. In this paper we introduce a modular framework for proving (derivational) complexity that allows one to infer (feasible) upper bounds by combining different criteria. We prove that the modular framework is strictly more powerful than the conventional setting. Furthermore, the results have been implemented, and experiments show tremendous gains in power. |

16:00 | Martin Avanzini and Georg Moser Closing the Gap Between Runtime Complexity and Polytime Computability Recently, many techniques have been introduced that allow the (automated) classification of the runtime complexity of term rewrite systems (TRSs for short). In earlier work, the authors have shown that for confluent TRSs, innermost polynomial runtime complexity induces polytime computability of the functions defined. In this paper, we generalise the above result to full rewriting. Following our previous work, we exploit graph rewriting. We prove the adequacy of graph rewriting for full rewriting, where we precisely control the way resources are copied. Our result strengthens the evidence that the complexity of a rewrite system is faithfully represented by the length of derivations. Moreover, our result allows the classification of non-deterministic polytime computation based on runtime complexity analysis of rewrite systems. |

16:30 | Johannes Waldmann Polynomially Bounded Matrix Interpretations Matrix interpretations can be used to bound the derivational complexity of rewrite systems. We present a criterion that completely characterizes matrix interpretations that are polynomially bounded. It includes the method of upper triangular interpretations as a special case, and we prove that the inclusion is strict. The criterion can be expressed as a finite domain constraint system. It translates to a Boolean constraint system with a size that is polynomial in the dimension of the interpretation. We report on the performance of an implementation. |
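As a sketch of the underlying intuition (my own illustration, not the paper's criterion): in a matrix interpretation, derivation lengths are bounded by the growth of entries in products of the interpreting matrices. For the upper triangular special case mentioned in the abstract, with diagonal entries at most 1, the entries of M^n grow only polynomially in n:

```python
# Illustration only: entry growth of powers of an upper triangular
# matrix with ones on the diagonal is polynomial in the exponent,
# which is why such matrix interpretations yield polynomial
# derivational complexity bounds.

def mat_mul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(m, p):
    """Raise a square matrix to the p-th power by repeated multiplication."""
    n = len(m)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(p):
        result = mat_mul(result, m)
    return result

# Upper triangular, diagonal entries equal to 1: M^n has entries
# n and n*(n-1)/2 above the diagonal -- polynomial, not exponential.
M = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]

for p in (1, 2, 5, 10):
    print(p, mat_pow(M, p))
```

By contrast, any diagonal entry greater than 1 immediately gives exponential entry growth, which is the kind of case a complete characterization has to rule out.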

17:00 | Sarah Winkler, Haruhiko Sato, Aart Middeldorp and Masahito Kurihara Optimizing mkbTT (System Description) We describe performance enhancements that have been added to mkbTT, a modern completion tool combining multi-completion with the use of termination tools. |