09:00 | Mikolaj Bojanczyk (Warsaw University)
Invited Talk: Automata for Data Words and Data Trees
Data words and data trees appear in verification and XML processing. The term "data" means that positions of the word, or tree, are decorated with elements of an infinite set of data values, such as natural numbers or ASCII strings. This talk is a survey of the various automaton models that have been developed for data words and data trees.
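To make the notion concrete, here is a minimal sketch (illustrative, not from the talk): a data word represented as a list of (letter, datum) pairs, together with the textbook example language "some data value occurs at two distinct positions", which is recognizable by a nondeterministic one-register automaton. The function name and representation are assumptions for illustration.

```python
def some_datum_repeats(word):
    """word: a data word, i.e. a list of (letter, datum) pairs,
    where letters come from a finite alphabet and data values
    from an infinite domain.

    Returns True iff some data value occurs at two positions --
    the classic language accepted by a nondeterministic
    one-register automaton (guess a datum, store it, accept on
    seeing it again)."""
    seen = set()
    for _letter, datum in word:
        if datum in seen:
            return True
        seen.add(datum)
    return False

w = [("a", 7), ("b", 3), ("a", 7)]  # the datum 7 occurs twice
print(some_datum_repeats(w))
```

Note that the finite-alphabet part of each position is ignored here; richer models (register, pebble, data automata) constrain both components jointly.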

10:30 | Alessio Guglielmi, Tom Gundersen and Michel Parigot
A Proof Calculus Which Reduces Syntactic Bureaucracy
In usual proof systems, like the sequent calculus, only a very limited way of combining proofs is available through the tree structure. We present in this paper a logic-independent proof calculus, where proofs can be freely composed by connectives, and prove its basic properties. The main advantage of this proof calculus is that it allows one to avoid certain types of syntactic bureaucracy inherent to all usual proof systems, in particular the sequent calculus. Proofs in this system closely reflect their atomic flow, which traces the behaviour of atoms through structural rules. The general definition is illustrated by the standard deep-inference system for propositional logic, for which there are known rewriting techniques that achieve cut elimination based only on the information in atomic flows.

11:00 | Claus Appel, Vincent van Oostrom and Jakob Grue Simonsen
Higher-Order (Non-)Modularity
We show that, contrary to the situation in first-order term rewriting, almost none of the usual properties of rewriting are modular for higher-order rewriting, irrespective of the higher-order rewriting format. We show that for the particular format of simply-typed applicative term rewriting systems, modularity of confluence, normalization, and termination can be recovered by imposing suitable linearity constraints.

11:30 | Ken-etsu Fujita and Aleksy Schubert
The Undecidability of Type-Related Problems in Type-Free Style System F
We consider here a variation of System F, an impredicative second-order system whose terms are intermediate between the Curry style and the Church style. These terms contain the information on where universal quantifier elimination and introduction must take place in the type inference process, as in Church-style terms; however, they omit the information on which types are involved in the rules, as in Curry-style terms. In this paper we prove the undecidability of the type-checking, type inference and typability problems for the system. Moreover, the proof works for the predicative version of the system with finitely stratified polymorphic types. The result includes bounds on Leivant's level numbers for the types used in the instances leading to undecidability.

12:00 | Manfred Schmidt-Schauss, David Sabel and Elena Machkasova
Simulation in the Call-by-Need Lambda-Calculus with letrec
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.

14:00 | Vincent van Oostrom (Utrecht University)
Invited Talk: Realising Optimal Sharing

16:00 | Samuel Mimram
Computing Critical Pairs in 2-Dimensional Rewriting Systems
Rewriting systems on words are very useful for building oriented presentations of monoids: they sometimes give finite descriptions of those monoids, thus allowing their manipulation by a computer, and, when the presentation is confluent and terminating, they provide a notion of canonical representative for the elements of the presented monoid. Polygraphs are a higher-dimensional generalization of this notion of presentation, from the setting of monoids to the much more general setting of n-categories. Here, we are interested in proving confluence for polygraphs presenting 2-categories, which can be seen as a generalization of term rewriting systems. For this purpose, we propose an adaptation of the usual algorithm for computing critical pairs. Interestingly, this setting is much richer than term rewriting systems and requires the elaboration of a new theoretical framework for representing critical pairs, based on compact 2-categories.
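For readers unfamiliar with the 1-dimensional case the abstract builds on, here is a minimal sketch (an assumption for illustration, not the paper's 2-categorical algorithm) of computing critical pairs of a word rewriting system: two left-hand sides overlap when a nonempty proper suffix of one equals a prefix of the other, and rewriting the superposed word with either rule gives the two sides of a critical pair. Inclusion overlaps (one left-hand side strictly inside another) are omitted for brevity.

```python
def critical_pairs(rules):
    """Critical pairs of a word (string) rewriting system.

    rules: list of (lhs, rhs) pairs of strings.
    Only proper suffix/prefix overlaps are considered here."""
    pairs = set()
    for l1, r1 in rules:
        for l2, r2 in rules:
            for k in range(1, min(len(l1), len(l2))):
                if l1[-k:] == l2[:k]:
                    # Superposed word: l1 glued with the rest of l2.
                    # Rewriting it with rule 1 or rule 2 yields:
                    pairs.add((r1 + l2[k:], l1[:-k] + r2))
    return pairs

# The system {ab -> a, ba -> b}: "ab" and "ba" overlap on "b",
# superposing to "aba", which rewrites to "aa" or to "ab".
print(critical_pairs([("ab", "a"), ("ba", "b")]))
```

If every critical pair of a terminating system is joinable, the system is confluent (Knuth-Bendix); the paper generalizes this kind of analysis from words (presentations of monoids) to polygraphs presenting 2-categories.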

16:30 | Jordi Levy and Mateu Villaret
An Efficient Nominal Unification Algorithm
Nominal Logic is an extension of first-order logic with equality, name-binding, freshness of names and mechanisms for renaming via name-swapping. Nominal Unification is an extension of first-order unification where terms can contain binders and unification is performed modulo $\alpha$-equivalence. Here we prove that the existence of nominal unifiers can be decided in quasi-linear time. First, we linearly reduce nominal unification problems to a sequence of freshness constraints and equalities between atoms, modulo a permutation, using ideas of Paterson and Wegman for first-order unification. Second, we prove that solvability of these reduced problems can be checked in quasi-linear time, using ideas of Brown and Tarjan for unbalanced merge-sorting.
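The nominal primitives the abstract refers to (name-swapping, freshness, and $\alpha$-equivalence via swapping) can be sketched as follows. This is an illustrative assumption, not the authors' quasi-linear algorithm: the term representation and function names are invented, and the naive recursive check below makes no efficiency claims. The key rule is the nominal characterization of binders: lam a.t is $\alpha$-equivalent to lam b.u iff a is fresh for u and t is $\alpha$-equivalent to (a b)·u.

```python
def swap(a, b, x):
    """Apply the transposition (a b) to an atom x."""
    return b if x == a else a if x == b else x

def swap_term(a, b, t):
    """Extend the swapping (a b) to terms.
    Terms: ("var", atom) | ("lam", atom, term) | ("app", term, term)."""
    tag = t[0]
    if tag == "var":
        return ("var", swap(a, b, t[1]))
    if tag == "lam":
        return ("lam", swap(a, b, t[1]), swap_term(a, b, t[2]))
    return ("app", swap_term(a, b, t[1]), swap_term(a, b, t[2]))

def fresh(a, t):
    """Freshness a # t: the atom a does not occur free in t."""
    tag = t[0]
    if tag == "var":
        return t[1] != a
    if tag == "lam":
        return t[1] == a or fresh(a, t[2])
    return fresh(a, t[1]) and fresh(a, t[2])

def alpha_eq(t, u):
    """Alpha-equivalence, using swapping instead of substitution:
    lam a.t == lam b.u  iff  a # u  and  t == (a b).u"""
    if t[0] != u[0]:
        return False
    if t[0] == "var":
        return t[1] == u[1]
    if t[0] == "app":
        return alpha_eq(t[1], u[1]) and alpha_eq(t[2], u[2])
    a, t1 = t[1], t[2]
    b, u1 = u[1], u[2]
    if a == b:
        return alpha_eq(t1, u1)
    return fresh(a, u1) and alpha_eq(t1, swap_term(a, b, u1))

# lam x. x  is alpha-equivalent to  lam y. y
print(alpha_eq(("lam", "x", ("var", "x")), ("lam", "y", ("var", "y"))))
```

Using swapping rather than capture-avoiding substitution is what keeps the binder rule simple and involutive, and it is this structure that the paper's reduction to atom equalities modulo a permutation exploits.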