09:00 | David Basin (ETH Zurich) Policy Monitoring in First-order Temporal Logic In security and compliance, it is often necessary to ensure that agents and systems comply with complex policies. An example from financial reporting is the requirement that every transaction t of a customer c who has, within the last 30 days, been involved in a suspicious transaction t', must be reported as suspicious within 2 days. We present an approach to monitoring such policies formulated in an expressive fragment of metric first-order temporal logic. We also report on case studies in security and compliance monitoring and use these to evaluate both the suitability of this fragment for expressing complex, realistic policies and the efficiency of our monitoring algorithm. (Joint work with Felix Klaedtke and Samuel Mueller) |
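The example policy admits a rendering in metric first-order temporal logic roughly along the following lines; the predicate names trans, suspicious, and report and the exact formulation are illustrative assumptions, not taken from the talk:

```latex
% Illustrative MFOTL sketch of the policy (predicate names are ours):
% whenever a transaction t of customer c occurs and, within the past
% 30 days, some transaction t' of c was flagged suspicious, then t
% must be reported within 2 days.
\Box\, \forall c.\, \forall t.\;
  \mathit{trans}(c,t) \,\wedge\,
  \blacklozenge_{[0,30]} \bigl(\exists t'.\, \mathit{trans}(c,t')
    \wedge \mathit{suspicious}(t')\bigr)
  \;\rightarrow\; \Diamond_{[0,2]}\, \mathit{report}(t)
```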

10:30 | Mike Gordon (University of Cambridge) Remembering Robin Milner |

10:45 | Anders Schack-Nielsen and Carsten Schürmann Curry-Style Explicit Substitutions for the Linear and Affine Lambda Calculus We introduce a calculus of explicit substitutions for the $\lambda$-calculus with linear, affine, and intuitionistic variables and meta-variables. Using a Curry-style formulation, we redesign and extend previously suggested type systems for linear explicit substitutions. This way, we obtain a fine-grained small-step reduction semantics suitable for efficient implementation. We prove that subject reduction, confluence, and termination hold. All theorems have been formally verified in the Twelf proof assistant. |

11:15 | Brigitte Pientka and Joshua Dunfield Beluga: A Framework for Programming and Reasoning with Deductive Systems (System Description) Beluga is an environment for programming and reasoning about formal systems given by axioms and inference rules. It implements the logical framework LF for specifying and prototyping formal systems via higher-order abstract syntax. It also supports reasoning: the user implements inductive proofs about formal systems as dependently typed recursive functions. A distinctive feature of Beluga is that it not only represents binders using higher-order abstract syntax, but directly supports reasoning with contexts. Contextual objects represent hypothetical and parametric derivations, leading to compact and elegant proofs. Our test suite includes standard examples such as the Church-Rosser theorem, type uniqueness, proofs about compiler transformations, and preservation and progress for various ML-like languages. We also implemented proofs of structural properties of expressions and paths in expressions. Stating these properties requires nesting of quantifiers and implications, demonstrating the expressive power of Beluga. |

11:30 | Silvio Ghilardi and Silvio Ranise MCMT: A Model Checker Modulo Theories We describe \textsc{mcmt}, a fully declarative and deductive symbolic model checker for safety properties of infinite state systems whose state variables are arrays. Theories specify the properties of the indexes and the elements of the arrays. Sets of states and transitions of a system are described by quantified first-order formulae. The core of the system is a backward reachability procedure which symbolically computes pre-images of the set of unsafe states and checks for safety and fix-point by solving Satisfiability Modulo Theories problems. Besides standard SMT technology, efficient complete (and specifically tailored to model checking) heuristics for quantifier instantiation constitute the specific deductive engine of the system. MCMT has been successfully applied to the verification of imperative programs, parameterized systems, and distributed systems. |
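The backward reachability loop at the core of the abstract can be sketched over a finite-state abstraction; the real tool represents state sets as quantified first-order formulae and discharges the safety and fix-point checks via SMT, whereas here plain Python sets stand in for them and all names are our own:

```python
# Finite-state illustration (not MCMT's actual implementation) of the
# backward-reachability loop: accumulate pre-images of the unsafe
# states until an initial state is hit (unsafe) or a fix-point is
# reached (safe).

def backward_reachable(unsafe, pre, initial):
    """Return True iff some initial state can reach an unsafe state.

    unsafe  -- set of unsafe states
    pre     -- function mapping a state set to its predecessors
    initial -- set of initial states
    """
    reached = set(unsafe)
    frontier = set(unsafe)
    while frontier:
        if reached & initial:        # safety check: initial state reached
            return True
        new = pre(frontier) - reached
        if not new:                  # fix-point: no new pre-images
            break
        reached |= new
        frontier = new
    return bool(reached & initial)

# Toy transition system 0 -> 1 -> 2, with state 2 unsafe.
succ = {0: {1}, 1: {2}, 2: set()}

def pre(states):
    return {s for s, ts in succ.items() if ts & states}

print(backward_reachable({2}, pre, {0}))  # True: 0 reaches 2
```

Replacing the set operations with formula manipulation and SMT entailment checks yields the symbolic procedure the abstract describes.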

11:45 | Carsten Ihlemann and Viorica Sofronie-Stokkermans On efficient reasoning in combinations of theories In this paper we study theory combinations over non-disjoint signatures in which efficient hierarchical and modular reasoning is possible. We use a notion of locality of a theory extension parameterized by a closure operator on ground terms. We give criteria for recognizing these types of theory extensions. We then show that combinations of extensions of theories which are local in this extended sense also have a locality property and hence allow modular and hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many (combinations of) theories important in verification. |

14:00 | Rajeev Goré, Clemens Kupke, Dirk Pattinson and Lutz Schröder Global Caching for Coalgebraic Description Logics Coalgebraic description logics offer a common semantic umbrella for extensions of description logics with reasoning principles outside relational semantics, e.g. quantitative uncertainty, non-monotonic conditionals, or coalitional power. Specifically, we work in coalgebraic logic with global assumptions (i.e. a general TBox), nominals, and satisfaction operators, and prove soundness and completeness of an associated tableau algorithm of optimal complexity EXPTIME. The algorithm is based on global caching, which raises hopes of a practically feasible implementation. Instantiation of this result to concrete logics yields new algorithms in all cases, including standard relational hybrid logic. |

14:30 | Despoina Magka, Yevgeny Kazakov and Ian Horrocks Tractable Extensions of the Description Logic EL with Numerical Datatypes We consider extensions of the lightweight description logic (DL) EL with numerical datatypes such as naturals, integers and reals equipped with relations such as equality and inequalities. It is well-known that the main reasoning problems for such DLs are decidable in polynomial time provided that the datatypes enjoy the so-called convexity property. Unfortunately, many combinations of the numerical relations violate convexity, which makes the usage of these datatypes rather limited in practice. In this paper, we carry out a more fine-grained complexity analysis of these DLs by considering restrictions not only on the kinds of relations that can be used in ontologies but also on their occurrences, such as allowing certain relations to appear only on the left-hand side of the axioms. To this end, we introduce a notion of safety for a numerical datatype with restrictions (NDR) which guarantees tractability, extend the EL reasoning algorithm to these cases and provide a complete classification of safe NDRs for natural numbers, integers and reals. |
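The convexity property mentioned in the abstract can be illustrated with a standard example of our own choosing: a set of relations is convex when any entailed disjunction of relational atoms already has one entailed disjunct, and mixing strict inequalities with equality over the integers breaks this.

```latex
% Non-convexity over the integers with < and = (illustrative example,
% not from the paper): the premise entails the disjunction, but
% entails neither disjunct on its own.
(x > 1) \wedge (x < 4) \;\models\; (x = 2) \vee (x = 3),
\qquad\text{yet}\quad
(x > 1) \wedge (x < 4) \not\models (x = 2)
\quad\text{and}\quad
(x > 1) \wedge (x < 4) \not\models (x = 3).
```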

15:30 | Julian Backes and Chad Brown Analytic Tableaux for Higher-Order Logic with Choice While many higher-order interactive theorem provers include a choice operator, higher-order automated theorem provers currently do not. As a step towards supporting automated reasoning in the presence of a choice operator, we present a cut-free ground tableau calculus for Church's simple type theory with choice. The tableau calculus is designed with automated search in mind. In particular, the rules only operate on the top level structure of formulas. Additionally, we restrict the instantiation terms for quantifiers to a universe that depends on the current branch. At base types the universe of instantiations is finite. We prove completeness of the tableau calculus relative to Henkin models. |

16:00 | Jasmin Christian Blanchette and Alexander Krauss Monotonicity Inference for Higher-Order Formulas Formulas are often monotonic in the sense that if the formula is satisfiable for given domains of discourse, it is also satisfiable for all larger domains. Monotonicity is undecidable in general, but we devised two calculi that infer it in many cases for higher-order logic. The stronger calculus has been implemented in Isabelle's model finder Nitpick, where it is used to prune the search space, leading to dramatic speed improvements for formulas involving many atomic types. |
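A minimal illustration of the notion of monotonicity, chosen by us rather than taken from the paper:

```latex
% Monotone: if satisfiable in some domain, satisfiable in all larger
% domains (any domain of size >= 2 works).
\exists x\, y.\; x \neq y
% Non-monotone: satisfiable only in singleton domains, so enlarging
% the domain destroys satisfiability.
\forall x\, y.\; x = y
```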

16:30 | Sascha Böhme and Tobias Nipkow Sledgehammer: Judgement Day Sledgehammer, a component of the interactive theorem prover Isabelle, allows goals in higher-order logic to be proved by calling the first-order automated provers E, SPASS, and Vampire. This paper is the largest and most detailed empirical evaluation of such a link to date. Our test data consists of 1240 proof goals arising in 7 diverse Isabelle theories, thus representing typical Isabelle proof obligations. We measure not only the effectiveness of Sledgehammer but also many other parameters, such as run time and complexity of proofs. A facility for minimizing the number of facts needed to prove a goal is presented and analyzed. |