LOLA on Friday, July 9th

08:50‑10:00 Opening and invited talk
Location: IF 4.31+4.33
08:50 Opening
09:00 Gérard Berry (INRIA, Collège de France)
What could be the right balance between abstract and fine-grain computational properties?
10:30‑11:30 Invited tutorial
Location: IF 4.31+4.33
10:30 Dan Ghica (University of Birmingham)
Geometry of Synthesis: Semantics-directed hardware compilation
11:30‑12:30 Session 1
Location: IF 4.31+4.33
11:30 Magnus O. Myreen and Michael J. C. Gordon
Machine code: architecture-independent formal verification and proof-producing compilation

Modern processors support a large number of instructions and a multitude of features; as a result, detailed formal models of instruction set architectures (ISAs) are long and hard to understand. Established approaches to proving functional properties on top of these models tie proofs to a specific model and require expert knowledge of the underlying model as well as substantial manual effort from those performing the proofs.

In this talk, we explain a novel approach to the verification of machine code which addresses these issues. We also describe how our approach can be applied to the automatic synthesis of correct code from functional specifications, and how verification combined with synthesis has been used in case studies.
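
To give a flavour of the kind of statement involved (our illustrative rendering, not necessarily the notation of the talk), functional properties of machine code are typically expressed as total-correctness Hoare triples over a detailed ISA model, e.g. for an ARM-style addition at address $p$ (arithmetic modulo the word size):

\[
\{\, \mathtt{R0}\ a * \mathtt{R1}\ b * \mathtt{PC}\ p \,\}\;\; p:\ \mathtt{ADD\ r0,r0,r1} \;\;\{\, \mathtt{R0}\ (a+b) * \mathtt{R1}\ b * \mathtt{PC}\ (p+4) \,\}
\]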

12:00 Ugo Dal Lago
On the Role of Interaction in Implicit Computational Complexity

In this short tutorial, we show how interactive methods from semantics can be useful in the quantitative analysis of computation. On the one hand, game semantics and the geometry of interaction make it possible to infer the time complexity of cut-elimination in linear logic. On the other hand, bidirectional and space-efficient computation can be structured around the Int construction.
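
To make the last point concrete, here is a small Haskell sketch (ours, not the speaker's) of the Int construction in "particle style": an object is a pair of message sets, a morphism is a message-passing function, and composition feeds messages back and forth across the shared interface.

```haskell
-- Sketch of the Int construction over sets and (partial) functions.
-- An object is a pair (positive messages, negative messages); a morphism
-- (ap,am) -> (bp,bm) is a function  ap + bm  ->  am + bp.
newtype GoI ap am bp bm = GoI (Either ap bm -> Either am bp)

-- Identity: messages pass straight through.
idGoI :: GoI a a' a a'
idGoI = GoI $ \m -> case m of
  Left a   -> Right a    -- query from the left exits on the right
  Right a' -> Left a'    -- answer from the right exits on the left

-- Composition: messages bounce across the shared interface until one
-- escapes at an outer port.  The feedback loop is the trace of the
-- underlying category; in general it may diverge, which is why the
-- construction is usually carried out over partial functions.
compGoI :: GoI bp bm cp cm -> GoI ap am bp bm -> GoI ap am cp cm
compGoI (GoI g) (GoI f) = GoI run
  where
    run (Left a)     = fromF (f (Left a))
    run (Right c)    = fromG (g (Right c))
    fromF (Left a')  = Left a'               -- answer to the outer left
    fromF (Right b)  = fromG (g (Left b))    -- forward to g
    fromG (Left b')  = fromF (f (Right b'))  -- send back into f
    fromG (Right c') = Right c'              -- output on the outer right
```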

14:00‑15:00 Invited talk
Location: IF 4.31+4.33
14:00 Alex Simpson (LFCS, University of Edinburgh)
Linear types for continuations
15:30‑16:30 Session 2
Location: IF 4.31+4.33
15:30 Nick Benton and Chung-kil Hur
Step-Indexing: The Good, the Bad and the Ugly

Over the last decade, step-indices have been widely used in the construction of operationally-based logical relations in the presence of various kinds of recursion. We first argue that step-indices, or something like them, seem to be required for defining realizability relations between high-level source languages and low-level targets, in the case where the low-level language allows egregiously intensional operations such as reflection or comparison of code pointers. We then show how, much to our annoyance, step-indices also seem to prevent us from exploiting such operations as aggressively as we would like when proving program transformations.
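
For orientation, a unary step-indexed interpretation of types typically has the following shape (the standard textbook form, not the specific relations of the talk); the index bounds how many further reduction steps the definition is accountable for:

\[
\begin{array}{rcl}
\mathcal{V}_k[\![\tau_1 \to \tau_2]\!] & = & \{\, \lambda x.e \mid \forall j \le k.\ \forall v \in \mathcal{V}_j[\![\tau_1]\!].\ e[v/x] \in \mathcal{E}_j[\![\tau_2]\!] \,\} \\
\mathcal{E}_k[\![\tau]\!] & = & \{\, e \mid \forall j < k.\ \forall e'.\ e \longrightarrow^{j} e' \not\longrightarrow \;\Rightarrow\; e' \in \mathcal{V}_{k-j}[\![\tau]\!] \,\}
\end{array}
\]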

15:50 Guilhem Jaber and Nicolas Tabareau
Krivine realizability for compiler correctness

We propose a semantic type soundness result, formalized in the Coq proof assistant, for a compiler from a simple functional language to SECD machine code. Our result is largely independent of the source language, as it uses Krivine's realizability to give a denotational semantics to SECD machine code using only the type system of the source language. We use realizability to prove the correctness of both a call-by-name (CBN) and a call-by-value (CBV) compiler with the same notion of orthogonality. We abstract over the notion of observation (e.g. divergence or termination) and derive an operational correctness result that relates the reduction of a term to the execution of its compiled SECD machine code.
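
Schematically, the orthogonality at work here has the usual Krivine shape (our rendering): fix a pole $\mathord{\perp\!\!\!\perp}$ of "good" processes, closed under anti-reduction, and define

\[
t \perp \pi \;\Longleftrightarrow\; t \star \pi \in \mathord{\perp\!\!\!\perp},
\qquad
|A| \;=\; \{\, t \mid \forall \pi \in \|A\|.\ t \perp \pi \,\},
\qquad
\|A \to B\| \;=\; \{\, v \cdot \pi \mid v \in |A|,\ \pi \in \|B\| \,\}.
\]

In the paper the processes $t \star \pi$ are executions of SECD machine code, and varying the pole is what abstracts over the notion of observation.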

16:10 Shin-ya Katsumata and Rasmus Møgelberg
Fullness of monadic translation by TT-lifting

Biorthogonality, embodied in the TT-closure operator, is a technique for deriving closure operators from the Galois connection induced by the interaction between terms and continuations. It is frequently used in the study of programming languages and logics: proving cut-elimination for linear logic, capturing admissible relations in operational semantics, structuring types from untyped terms, characterising contextual equivalence by logical relations, establishing logical relations between denotational semantics and abstract machines, etc. Later, Lindley and Stark introduced TT-lifting, a modification of the TT-closure operator, to lift reducibility to computation types $T\tau$ in their proof of strong normalisation for the computational metalanguage $\lambda_{ml}$.

Inspired by Lindley and Stark's work, the first author semantically formulated TT-lifting as a method for constructing logical predicates for monadic types and applied it to the $\lambda$-definability problem for the $\lambda$-calculus with sums. The characterisation of $\lambda$-definability given there combines Jung and Tiuryn's Kripke logical predicates with the TT-closure operator, and holds in any bi-Cartesian closed category. In this talk, we provide further evidence of the effectiveness of TT-lifting and the TT-closure operator for definability problems: we show the fullness of Moggi's monadic translation from the computational lambda calculus $\lambda_c$ with sums to the computational metalanguage $\lambda_{ml}$ with sums, using TT-lifting and the TT-closure operator.
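
Schematically (notation varies between papers), for a predicate $P$ on terms of type $\tau$, a pole $\mathord{\perp\!\!\!\perp}$ of "good" outcomes (e.g. the strongly normalising terms), and continuations $K$ consuming computations of type $T\tau$, TT-lifting has the shape

\[
P^{\top} \;=\; \{\, K \mid \forall v \in P.\ K \mathbin{@} [v] \in \mathord{\perp\!\!\!\perp} \,\},
\qquad
P^{\top\top} \;=\; \{\, M : T\tau \mid \forall K \in P^{\top}.\ K \mathbin{@} M \in \mathord{\perp\!\!\!\perp} \,\},
\]

where $[v]$ denotes the unit of the monad.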

17:00‑18:00 Session 3
Location: IF 4.31+4.33
17:00 Rasmus Ejlers Møgelberg and Sam Staton
Full abstraction in a metalanguage for state

We propose the enriched effect calculus (EEC) of Egger, Møgelberg and Simpson as a metalanguage for reasoning about state effects. To support our claim that this is a language capturing stateful computation, we show that the linear state monad translation from a call-by-value language with store into EEC is fully complete in the sense that any term in EEC of a translated type corresponds to a unique program via the translation. The result is not specific to store, but can be applied to any computational effect expressible using algebraic operations in the sense of Plotkin and Power, even to effects that are not usually thought of as stateful. We give a semantic proof based on the observation that any model of EEC contains a state object and that algebraic operations in the EEC model correspond to certain operations on the state object, which we call "state access operations".

As a detailed example we treat local store, i.e., store in which fresh cells can be allocated. We axiomatize a notion of state object for local store and show how to model this in Pitts' category of nominal sets with restriction structure.
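
For orientation, one common presentation of the linear-use state monad underlying this translation is (our notation, which may differ from the paper's; $S$ is a linear type of states, $\multimap$ the linear function space and $!$ the exponential):

\[
T\,A \;=\; S \multimap (\,!A \otimes S\,).
\]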

17:20 Antoine Madet, Roberto M. Amadio and Patrick Baillot
An Affine-Intuitionistic System of Types and Effects: Confluence and Termination

We present an affine-intuitionistic system of types and effects which can be regarded as an extension of Barber-Plotkin Dual Intuitionistic Linear Logic to multi-threaded programs with effects. In the system, dynamically generated values such as references or channels are abstracted into a finite set of regions. We introduce a discipline of region usage that entails the confluence (and hence determinacy) of the typable programs. Further, we show that a discipline of region stratification guarantees termination.
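
As a generic illustration only (not the paper's actual rules), type-and-effect systems of this kind annotate judgements with the set of regions a term may act on; a hypothetical assignment rule might read

\[
\frac{\Gamma \vdash M : \mathsf{Ref}_r\,\alpha,\; e_1 \qquad \Gamma \vdash N : \alpha,\; e_2}
     {\Gamma \vdash M := N : \mathsf{Unit},\; e_1 \cup e_2 \cup \{r\}}
\]

and the usage and stratification disciplines of the paper are then stated as constraints on such region annotations.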

17:40 Nathaniel Charlton and Bernhard Reus
A deeper understanding of the deep frame axiom (extended abstract)

Separation logic is well known to provide a local reasoning principle for local store: the frame rule. Local reasoning is extended to programs with modules and higher-order functions by the hypothetical frame rule and the higher-order frame rule, respectively. This means, for instance, that the private state of a library module can be hidden, so that clients need not know or worry about it; this leads to modular proofs.

Higher-order functions can be expressed in low-level languages using code pointers and recursion through the store. However, local reasoning for such languages is surprisingly tricky: in (Schwinghammer et al., CSL 2009) a version of the (higher-order) deep frame rule is observed to hold only as a rule and not as an axiom. This means one can only apply the rule to the top-level triple under consideration, but not to nested triples appearing inside assertions. Assuming the axiom version allows one to "launder" specifications and thus derive memory safety of programs that actually crash. On the other hand, it appears that without a reasoning principle resembling the axiom version, there are safe programs whose correctness cannot be shown.
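
Concretely, the distinction is between the rule form and the axiom form of deep framing (our rendering; see Schwinghammer et al. for the precise system), where $\otimes R$ adds the invariant $R$ to every pre- and postcondition, including those of triples nested inside assertions:

\[
\frac{\vdash \varphi}{\vdash \varphi \otimes R}\ \text{(deep frame rule)}
\qquad\text{versus}\qquad
\varphi \Rightarrow \varphi \otimes R\ \text{(deep frame axiom)},
\qquad
\{P\}\,C\,\{Q\} \otimes R \;=\; \{P * R\}\,C\,\{Q * R\}.
\]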

In this paper we investigate and analyse the technique of "laundering" in more detail and propose a remedy: an idiom for writing specifications (Hoare triples) that "promise" that applications of the deep frame axiom are sound. This is done with the help of second-order logic (quantification over assertions). We demonstrate the utility of this technique for proving real programs by means of an example: a generic memoiser for recursive functions.