ICLP on Friday, July 16th

09:00‑10:00 FLoC Plenary Talk: David Basin
Location: George Square Lecture Theatre
09:00 David Basin (ETH Zurich)
Policy Monitoring in First-order Temporal Logic

In security and compliance, it is often necessary to ensure that agents and systems comply with complex policies. An example from financial reporting is the requirement that every transaction t of a customer c, who has within the last 30 days been involved in a suspicious transaction t', must be reported as suspicious within 2 days. We present an approach to monitoring such policies formulated in an expressive fragment of metric first-order temporal logic. We also report on case studies in security and compliance monitoring and use these to evaluate both the suitability of this fragment for expressing complex, realistic policies and the efficiency of our monitoring algorithm.
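
As a rough illustration (our own rendering, with hypothetical predicate names trans, susp and report; the authors' formalization may differ), the example policy can be written in metric first-order temporal logic along the lines of

  \Box\, \forall c, t.\; \big( trans(c,t) \wedge \blacklozenge_{[0,30]}\, \exists t'.\; susp(c,t') \big) \rightarrow \Diamond_{[0,2]}\, report(t)

where \blacklozenge_{[0,30]} reads "at some point within the last 30 days" and \Diamond_{[0,2]} "within the next 2 days".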

(Joint work with Felix Klaedtke and Samuel Müller)

10:30‑12:30 Analysis and Implementation
Chair: Enrico Pontelli
Location: AT LT3
10:30 Peter Schneider-Kamp, Jürgen Giesl, Thomas Stroeder, Alexander Serebrenik and René Thiemann
Automated Termination Analysis for Logic Programs with Cut

Termination is an important and well-studied property for logic programs. However, almost all approaches for automated termination analysis focus on definite logic programs, whereas real-world Prolog programs typically use the cut operator. We introduce a novel pre-processing method which automatically transforms Prolog programs into logic programs without cuts, where termination of the cut-free program implies termination of the original program. Hence after this pre-processing, any technique for proving termination of definite logic programs can be applied. We implemented this pre-processing in our termination prover AProVE and evaluated it successfully with extensive experiments.
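
To see why the cut matters for termination (a toy example of ours, not the paper's automated transformation), consider:

  % with the cut, ?- f(0) has a finite SLD tree: the second clause is
  % pruned once the first one commits; without the cut, backtracking
  % enters the looping second clause
  f(0) :- !.
  f(X) :- f(X).

  % a hand-made cut-free counterpart in which the pruning is made
  % explicit by a complementary guard, so that standard termination
  % techniques for definite programs apply
  f_nc(0).
  f_nc(X) :- X \== 0, f_nc(X).

The paper's pre-processing performs this kind of cut elimination automatically and for far more general programs.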

11:00 Alberto Pettorossi, Maurizio Proietti and Valerio Senni
Transformations of Logic Programs on Infinite Lists

We consider an extension of the class of locally stratified logic programs, called omega-programs, that can be used to define predicates over infinite lists. The semantics of these programs is an extension of the perfect model semantics of locally stratified logic programs. omega-programs allow us to specify properties of the behavior of reactive systems and, in general, properties of infinite sequences of events. We present a more general version of the familiar unfold/fold rules which can be used for transforming omega-programs, and we show that their application preserves the semantics and, in this sense, that they are correct. Through some examples, we demonstrate that our transformation rules can be applied to verify properties of Büchi automata and omega-regular expressions.
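
As a flavour of the setting (our own toy clauses, to be read under the model-theoretic semantics of omega-programs rather than as terminating Prolog code):

  % the infinite list N, N+1, N+2, ...
  from(N, [N|Xs]) :- M is N + 1, from(M, Xs).

  % every element of an infinite list is even
  all_even([X|Xs]) :- 0 is X mod 2, all_even(Xs).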

11:30 Pablo Chico de Guzmán, Manuel Carro Liñares and David S. Warren
Swapping Evaluation: A Memory-Scalable Solution for Answer-On-Demand Tabling

One of the differences among the various approaches to suspension-based tabled evaluation is the scheduling strategy. The two most popular strategies are local and batched evaluation. The former collects all the solutions to a tabled predicate before making any one of them available outside the tabled computation. The latter returns answers one by one before computing them all, which in principle is better if only one answer (or a subset of the answers) is desired. Batched evaluation is closer to SLD evaluation in that it computes solutions lazily as they are demanded, but it may need arbitrarily more memory than local evaluation, which is able to reclaim memory sooner. Some programs, which in practice can be executed under the local strategy, quickly run out of memory under batched evaluation. This has generally led to the adoption of local evaluation at the expense of the more depth-first batched strategy. In this paper we study the reasons for the high memory consumption of batched evaluation and propose a new scheduling strategy which we have termed swapping evaluation. Swapping evaluation also returns answers one by one before completing a tabled call, but its memory usage can be orders of magnitude less than batched evaluation. An experimental implementation in the XSB system shows that swapping evaluation is a feasible memory-scalable strategy that need not compromise execution speed.
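
For a concrete feel (standard tabled-Prolog syntax; the example is ours), consider the usual tabled reachability predicate:

  :- table path/2.

  path(X, Y) :- edge(X, Y).
  path(X, Y) :- edge(X, Z), path(Z, Y).

  % under local evaluation, ?- path(a, Y) computes and tables every
  % reachable Y before the first answer is returned; under batched
  % (and swapping) evaluation, answers flow back as they are found,
  % which pays off for queries like ?- path(a, Y), good(Y), !.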

12:00 Vítor Santos Costa, Inês Castro Dutra and Ricardo Rocha
Threads and Or-Parallelism Unified

Motivated by the intrinsic and strong potential that Logic Programming environments have for implicit parallelism, in this work we investigate novel techniques to efficiently exploit parallelism from real-world applications on low-cost multi-core architectures. To achieve these goals, we revive and redesign the YapOr system to exploit or-parallelism based on a multi-threaded implementation. Our new approach takes full advantage of the state-of-the-art fast and optimized Yap Prolog engine and shares the underlying execution environment, scheduler and most of the data structures used to support YapOr's model. Initial experiments with our new approach consistently achieve almost linear speedups for most of the applications, showing it to be a good alternative for exploiting implicit parallelism on currently available low-cost multi-core architectures.
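
Or-parallelism exploits the independent alternatives of nondeterministic goals. In a toy search like the following (our own example), each select/3 choice opens an or-branch that an or-parallel engine such as YapOr can hand to a different thread:

  perm([], []).
  perm(L, [X|Xs]) :- select(X, L, R), perm(R, Xs).

  % ?- perm([1,2,3,4], P) explores 4! leaves, largely independently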

14:00‑15:00 Probabilistic Programs
Chair: Tom Schrijvers
Location: AT LT3
14:00 Jon Sneyers, Wannes Meert, Joost Vennekens, Yoshitaka Kameya and Taisuke Sato
CHR(PRISM)-based Probabilistic Logic Learning

PRISM is an extension of Prolog with probabilistic predicates and built-in support for expectation-maximization learning. Constraint Handling Rules (CHR) is a high-level programming language based on multi-headed multiset rewrite rules.

In this paper, we introduce a new probabilistic logic formalism, called CHRiSM, based on a combination of CHR and PRISM. It can be used for high-level rapid prototyping of complex statistical models by means of "chance rules". The underlying PRISM system can then be used for several probabilistic inference tasks, including probability computation and parameter learning. We define the CHRiSM language in terms of syntax and operational semantics, and illustrate it with examples. We introduce the notion of ambiguous programs and define a distribution semantics for unambiguous programs. Next, we describe an implementation of CHRiSM, based on CHR(PRISM). We discuss the relation between CHRiSM and other probabilistic logic programming languages, in particular PCHR. Finally, we identify potential application domains.
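
As a flavour of chance rules (our own toy example; the concrete CHRiSM syntax and semantics are approximated here), a biased coin might look like:

  % the first rule fires with probability 0.6; if the probabilistic
  % experiment fails, the deterministic second rule applies instead
  0.6 ?? toss(C) <=> result(C, head).
  toss(C) <=> result(C, tail).

PRISM's expectation-maximization learning can then estimate the 0.6 parameter from observed outcomes.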

14:30 Henning Christiansen, Christian Theil Have, Ole Torp Lassen and Matthieu Petit
Inference with Constrained Hidden Markov Models in PRISM

A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper, we show how HMMs can be extended with side-constraints and present constraint solving techniques to compute inference efficiently. Defining HMMs with side-constraints in Constraint Logic Programming has advantages in terms of more compact expression and pruning opportunities during inference. We present a PRISM-based framework for HMMs extended with side-constraints and show how well-known constraints such as cardinality and all_different are integrated. We validate our approach experimentally on a biologically motivated problem: global pairwise alignment.
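
For reference, the unconstrained core of such a model in standard PRISM notation (our own sketch, not the paper's constrained framework) is just a few clauses:

  values(init, [s1, s2]).     % initial-state switch
  values(tr(_), [s1, s2]).    % transition switches
  values(out(_), [a, b]).     % emission switches

  hmm(Obs) :- msw(init, S), hmm(S, Obs).
  hmm(_, []).
  hmm(S, [O|Os]) :- msw(out(S), O), msw(tr(S), Next), hmm(Next, Os).

A side-constraint such as a cardinality bound on visits to s2 would be threaded through hmm/2, letting constraint propagation prune runs that violate it early during inference.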

15:30‑17:00 Technical Communications - I
Chair: Vítor Santos Costa
Location: AT LT3
15:30 Michael Maher
Contractible Approximations of Soft Global Constraints

Most global constraints are closed: they apply to a static collection of variables. But in constraint logic programming variables may be created at any stage of the execution, and it is desirable to avoid delaying the application of a constraint until all variables have been created. Recent work on open global constraints has sought to address this problem. In particular, previous work has shown that the property of contractibility allows filtering algorithms for closed global constraints to be re-used for their open counterpart, and that contractible approximations provide a basis for open implementations of uncontractible global constraints.

In this paper we give a broad definition of soft global constraints and address the problem of defining tight contractible approximations for them. Among other results, we show that, unlike hard constraints, whose tight contractible approximations can often be easily adapted from the original constraint, for many soft constraints a tight contractible approximation cannot be defined within the same framework used to define the soft constraint.
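
For orientation, contractibility (in the sense of Maher's earlier work; our paraphrase) requires roughly that

  C(x_1, \ldots, x_{n+1}) \models C(x_1, \ldots, x_n) \quad \text{for all } n,

i.e. extending the sequence of variables can never rescue a violated prefix, which is what allows a filtering algorithm to run safely before all variables have been created.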

15:42 Jose Santos and Stephen Muggleton
Subsumer: A Prolog theta-subsumption engine

State-of-the-art theta-subsumption engines like Django (C++) and Resumer2 (Java) are implemented in low-level languages. Since theta-subsumption is inherently a logic problem, in this paper we explore how it can be efficiently implemented in Prolog.

Theta-subsumption is an important problem in computational logic and particularly relevant to the Inductive Logic Programming (ILP) community, as it is at the core of the hypothesis coverage test which is often the bottleneck of an ILP system. Also, since most of those systems are implemented in Prolog, they can immediately take advantage of a Prolog-based theta-subsumption engine.

In this paper we present a relatively simple (<1000 lines of Prolog) but efficient and general theta-subsumption engine, Subsumer. Crucial to Subsumer's performance is the dynamic and recursive decomposition of a clause into sets of independent components. Also important are ideas borrowed from constraint programming, which enable Subsumer to work efficiently on clauses with up to several thousand literals and several dozen distinct variables.

Using the notoriously challenging Phase Transition problem we show that, in terms of CPU time, Subsumer clearly outperforms the Django subsumption engine and is competitive with the more sophisticated state-of-the-art Resumer2 engine. Furthermore, Subsumer's memory requirements are only a small fraction of those of either system.
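
For context, the naive baseline (our own few-line version, not Subsumer's decomposition-based algorithm) fits in a handful of Prolog clauses, with clauses represented as lists of literals:

  % C theta-subsumes D if some substitution maps every literal of C to a
  % literal of D; numbervars/3 grounds D so matching is one-directional,
  % and the double negation undoes all bindings afterwards
  theta_subsumes(C, D) :-
      \+ \+ ( copy_term(D, D1),
              numbervars(D1, 0, _),
              subset_lits(C, D1) ).

  subset_lits([], _).
  subset_lits([L|Ls], D) :- member(L, D), subset_lits(Ls, D).

This naive test is exponential in the worst case, which is exactly what Subsumer's clause decomposition and constraint-programming techniques are designed to mitigate.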

15:54 Pedro Lopez-Garcia, Luthfi Darmawan and Francisco Bueno
A Framework for Verification and Debugging of Resource Usage Properties

We present a framework for (static) verification of general resource usage program properties. The framework extends the criterion of correctness to conformance of a program to a specification expressing non-functional global properties (e.g., upper and lower bounds on execution time, memory, power, or user-defined resources), with such bounds given as functions on input data sizes. A given specification can include both lower and upper bound resource usage functions, i.e., it can express intervals within which the resource usage is supposed to lie. We have defined an abstract semantics for resource usage properties and operations to compare the (approximated) intended semantics of a program (i.e., the specification) with the approximated semantics inferred by static analysis. These operations include the comparison of arithmetic functions (e.g., polynomial, exponential, or logarithmic functions). A novel aspect of our framework is that the static checking of assertions generates answers that include conditions under which a given specification can be proved or disproved. For example, these conditions can express intervals of input data sizes such that a given specification can be proved for some intervals but disproved for others. We have implemented our techniques within the Ciao/CiaoPP system in a natural way, so that the novel resource usage verification blends in with the CiaoPP framework that unifies static verification and static debugging (as well as run-time verification and unit testing).
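
As an illustration (assertion syntax approximated from the Ciao assertion language; the exact resource-property notation may differ), a specification bounding the steps of naive reverse might look like:

  % illustrative quadratic upper bound on resolution steps
  :- check comp nrev(A, B) : list(A)
     + cost(ub, steps, length(A) * length(A)).

  nrev([], []).
  nrev([X|Xs], R) :- nrev(Xs, T), append(T, [X], R).

The checker would then report the conditions (e.g., intervals of length(A)) under which the stated bound is proved or disproved.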

16:06 Nicolas Guenot
Focused Proof Search for Linear Logic in the Calculus of Structures

The proof-theoretic approach to logic programming has benefited from the introduction of focused proof systems, through the non-determinism reduction and control they provide when searching for proofs in the sequent calculus. However, this technique was not available in the calculus of structures, which is known for inducing even more non-determinism than other logical formalisms. This work in progress aims at translating the notion of focusing into the presentation of linear logic in this setting, and at using some of its specific features, such as depth and fine granularity, to improve proof search procedures. The starting point for this research line is the multiplicative fragment of linear logic, for which a very simple and elegant focused proof system can be built.

16:18 Neda Saeedloei and Gopal Gupta
Timed Definite Clause Omega-Grammars

We propose timed context-free grammars (TCFGs) and show how parsers for such grammars can be developed using definite clause grammars (DCGs) coupled with constraints over reals (CLP(R)). Timed context-free grammars describe timed context-free languages (TCFLs). We next extend timed context-free grammars to timed context-free omega-grammars (omega-TCFGs for brevity) and incorporate co-inductive logic programming in DCGs to obtain parsers for them. Timed context-free omega-grammars describe timed context-free languages containing infinite words, and are a generalization of the timed omega-regular languages recognized by timed automata. We show a practical application of omega-TCFGs to the well-known generalized railroad crossing problem.
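
As a flavour of the encoding (our own sketch: timed words as (Symbol, Timestamp) pairs, with timing enforced by CLP(R) constraints inside the DCG):

  :- use_module(library(clpr)).

  % an 'a' followed by a 'b' occurring within 2 time units of it
  ab --> [(a, Ta)], [(b, Tb)], { {Tb - Ta =< 2} }.

  % e.g. phrase(ab, [(a, 0.5), (b, 1.9)]) succeeds,
  %      phrase(ab, [(a, 0.5), (b, 4.0)]) fails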

16:30 Theofrastos Mantadelis and Gerda Janssens
Dedicated Tabling for a Probabilistic Setting

ProbLog is a probabilistic framework that extends Prolog with probabilistic facts. To compute the probability of a query, the complete SLD proof tree of the query is collected as a sum of products. ProbLog applies advanced techniques to make this feasible and to compute the correct probability.

Tabling is a well-known technique to avoid repeated subcomputations and to terminate loops. We investigate how tabling can be used in ProbLog. The challenge is that we have to reconcile tabling with the advanced ProbLog techniques. While standard tabling collects only the answers for the calls, we do need the SLD proof tree. Finally, we discuss how to deal with loops in our probabilistic framework.

By avoiding repeated subcomputations, our tabling approach not only improves the execution time of ProbLog programs, but also decreases accordingly the memory consumption. We obtain promising results for standard ProbLog programs using exact probability inference.
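
For a concrete instance (standard ProbLog probabilistic-fact syntax; the toy program is ours), tabling matters exactly where SLD proofs repeat subgoals or loop:

  0.6::edge(a, b).
  0.5::edge(b, c).
  0.7::edge(b, a).    % closes a cycle a -> b -> a

  path(X, Y) :- edge(X, Y).
  path(X, Y) :- edge(X, Z), path(Z, Y).

  % without tabling, proofs of path(a, c) re-derive the path(b, _)
  % subtree and can wander around the a-b cycle; tabling reuses the
  % collected proof structure instead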

16:42 Daan Fierens
Improving the Efficiency of Gibbs Sampling for Probabilistic Logical Models by Means of Program Specialization

There is currently a large interest in probabilistic logical models. A popular algorithm for approximate probabilistic inference with such models is Gibbs sampling. From a computational perspective, Gibbs sampling boils down to repeatedly executing certain queries on a knowledge base composed of a static part and a dynamic part. The larger the static part of the knowledge base, the more redundancy there is in these repeated calls. This is problematic since inefficient Gibbs sampling yields poor approximations.

We show how to apply program specialization to make Gibbs sampling more efficient. Concretely, we develop an algorithm that specializes the definitions of the query-predicates with respect to the static part of the knowledge base. In experiments on real-world benchmarks we obtain speedups of up to an order of magnitude.
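
Schematically, the specialization step looks as follows (our own toy illustration of the idea, not the paper's algorithm):

  % original query predicate, consulting static and dynamic facts
  q(X) :- static_edge(X, Y), dynamic_on(Y).

  static_edge(a, b).
  static_edge(c, b).

  % specialized with respect to the static part: the static_edge/2
  % calls are unfolded away, so the repeated Gibbs-sampling queries
  % no longer re-scan the static knowledge base
  q_spec(a) :- dynamic_on(b).
  q_spec(c) :- dynamic_on(b).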