ICLP-DC on Wednesday, July 21st

09:00‑10:00 Session I: Concurrent Programming
Chair: Marcello Balduccini
Location: AT 2.12
09:00 Romain Demeyer
Program Analysis to Support Concurrent Programming in Declarative Languages

In recent years, processor manufacturers have been focusing on parallel architectures in order to increase performance. This shift in hardware evolution is provoking a fundamental turn towards concurrency in software development. Unfortunately, developing concurrent programs that are correct and efficient is hard, as the underlying programming model is much more complex than for simple sequential programs. The goal of this research is to study and develop program analyses to support and improve concurrent software development in declarative languages. The characteristics of these languages offer opportunities: they are good candidates for building concurrent applications, while their simple and uniform data representation, together with a small and formally defined semantics, makes them well suited to automatic program analysis techniques. In our work, we focus primarily on developing static analysis techniques for detecting race conditions at the application level in Mercury and Prolog programs. A further step is to derive (semi-)automatically the location and granularity of the critical sections using a data-centric approach.
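
As a simple illustration of the kind of application-level race such an analysis should flag (the predicate names below are purely illustrative, not taken from the thesis), consider a Prolog system with threads, such as SWI-Prolog:

    % retract/1 and assert/1 are individually atomic, but the read-modify-write
    % sequence below is not: two concurrent calls to increment/0 may both read
    % the same value of counter/1, so one increment is lost.
    :- dynamic counter/1.
    counter(0).

    increment :-
        retract(counter(N)),
        N1 is N + 1,
        assert(counter(N1)).

    % Wrapping the sequence in a critical section (e.g. with_mutex/2 in
    % SWI-Prolog) removes the race; inferring where such sections are needed,
    % and how coarse they must be, is the goal of the analysis described above.
    safe_increment :-
        with_mutex(counter_lock, increment).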

09:20 Thierry Martinez
Design and Implementation of a Concurrent Logic Programming Language with Linear Logic Constraints

Prolog is originally rooted in logic, with the elegant mapping "programs = formulas, execution = proof search". Constraint solving, however, requires either a built-in solving engine or the addition of coroutining mechanisms that fall outside this logical reading. Implementations add other non-logical features, such as assert/retract and mutable variables, to mimic an imperative programming style.

Concurrent constraint programming enjoys a logical semantics and is expressive enough to describe constraint propagators. Furthermore, reading constraints as resources in linear logic allows the semantics to capture non-monotonic traits of imperative programming such as mutability. LCC enjoys the mapping "programs = linear-logic formulas, execution = logical deduction": observables are the logical consequences of a program, as opposed to logical resolution in the Prolog setting.
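
As an illustration of this resource reading (the constraint names here are ours, not from the thesis), updating a mutable cell can be expressed by a linear implication that consumes the old content and produces the new one:

    \mathit{cell}(k,v) \otimes \mathit{set}(k,w) \multimap \mathit{cell}(k,w)

In classical concurrent constraint programming, where the store only grows monotonically, such an update is not expressible; in the linear setting the resources on the left are consumed by the deduction step.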

My thesis aims at designing a practical language as close as possible to the linear concurrent constraint (LCC) theory. The main contribution is a new operational semantics which behaves as an angelic scheduler with a tractable algorithmic complexity. This operational semantics is sound and complete with respect to the logical semantics and allows the construction of a rich language over a very simple kernel.

09:40 Andrés A. Aristizábal
Bisimilarity in Concurrent Constraint Programming

In this doctoral work we aim at developing a new approach to labelled semantics and equivalences for Concurrent Constraint Programming (CCP) which will enable a broader capture of behavioural equivalence of processes. Moreover, we work towards exploiting the strong connection between first-order logic and CCP, which will allow us to represent logical formulae as CCP processes and to verify their logical equivalence by means of our notion of bisimilarity. Finally, following the lines of the Concurrency Workbench, we plan to implement a CCP Workbench based on our theoretical structure.

10:30‑12:30 Session II: Logic Programming
Chair: Alessandro Dal Palù
Location: AT 2.12
10:30 Mario Alviano
Dynamic Magic Sets for Disjunctive Datalog Programs

Answer set programming (ASP) is a powerful formalism for knowledge representation and common-sense reasoning that allows disjunction in rule heads and nonmonotonic negation in bodies. Magic Sets are a technique for optimizing query answering over logic programs and were originally defined for standard Datalog, that is, ASP without disjunction and negation. Essentially, the input program is rewritten in order to identify a subset of the program instantiation which is sufficient for answering the query.
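
As a reminder of the basic, static technique, consider the textbook reachability program with the query path(a,Y). The magic rewriting restricts the relevant instantiation to nodes reachable from a:

    % original program and query
    path(X,Y) :- edge(X,Y).
    path(X,Y) :- edge(X,Z), path(Z,Y).
    % query: path(a,Y)?

    % magic-set rewriting for the binding pattern path(bound, free)
    magic_path(a).                                   % seed from the query
    magic_path(Z) :- magic_path(X), edge(X,Z).       % push bindings into the recursion
    path(X,Y) :- magic_path(X), edge(X,Y).
    path(X,Y) :- magic_path(X), edge(X,Z), path(Z,Y).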

Dynamic Magic Sets (DMS) are an extension of this technique to ASP. The optimization provided by DMS can be exploited also during the nondeterministic phase of ASP systems. In particular, after some assumptions have been made during the computation, parts of the program may become irrelevant to a query (because of these assumptions). This allows for dynamic pruning of the search space, which may result in exponential performance gains.

DMS has been implemented in the DLV system and experimental results confirm the effectiveness of the technique.

10:50 Fabio Parisini
Local Branching in a Constraint Programming Framework

Local branching is a general purpose heuristic method which searches locally around the best known solution by employing tree search. It has been successfully used in Mixed Integer Programming where local branching constraints are used to model the neighborhood of an incumbent solution and improve the bound. We propose the integration of local branching in Constraint Programming (CP). This integration is not simply a matter of implementation, but requires a number of significant extensions. The original contributions of this paper are: the definition of an efficient and incremental bound computation for the neighborhood, a cost-based filtering algorithm for the local branching constraint and a novel diversification strategy that can explore arbitrarily far regions of the search tree w.r.t. the already found solutions.
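
For reference, in the original 0-1 MIP setting of Fischetti and Lodi, the neighbourhood of an incumbent solution $\bar{x}$ is modelled by the local branching constraint, which bounds the Hamming distance from $\bar{x}$:

    \Delta(x,\bar{x}) \;=\; \sum_{j \in S} (1 - x_j) \;+\; \sum_{j \notin S} x_j \;\le\; k, \qquad S = \{\, j : \bar{x}_j = 1 \,\}

Branching on $\Delta(x,\bar{x}) \le k$ versus $\Delta(x,\bar{x}) \ge k+1$ splits the search space into a neighbourhood of the incumbent and the remainder; the CP integration proposed here attaches filtering and bounding reasoning to this constraint.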

11:10 Céline Dandois
Program Analysis for Code Duplication in Logic Programs

In this PhD project, we deal with the issue of code duplication in logic programs. In particular, semantic duplication, or redundancy, is generally viewed as a possible source of problems in all phases of the program lifecycle, from development to maintenance. The core of this research is the elaboration of a theory of semantic duplication and of an automated program analysis capable of detecting such duplication, which could also steer, to some extent, automatic refactoring of program code.
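
A tiny invented example of the kind of semantic duplication targeted here: two predicates defining the same relation, differing only in name and argument order, which purely textual clone detection would miss.

    % list length, twice
    len([], 0).
    len([_|Xs], N) :- len(Xs, N0), N is N0 + 1.

    size(0, []).
    size(N, [_|Xs]) :- size(N0, Xs), N is N0 + 1.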

11:30 Zachary Snow
Realizing the Dependently Typed Lambda Calculus

Dependently typed lambda calculi such as the Edinburgh Logical Framework (LF) can encode relationships between terms in types and can naturally capture correspondences between formulas and their proofs. Such calculi can also be given a logic programming interpretation: the Twelf system is based on such an interpretation of LF. We have considered whether a conventional logic programming language can also provide the benefits of a Twelf-like system for encoding type and term dependencies through dependent typing, and whether it can do so in an efficient manner. In particular, we have developed a simple mapping from LF specifications to a set of formulas in the higher-order hereditary Harrop (hohh) language that relates derivations and proof search between the two frameworks. We have shown that this encoding can be improved by exploiting knowledge of the well-formedness of the original LF specifications to elide much redundant type-checking information. The resulting logic program has a structure that closely follows the original specification, thereby allowing LF specifications to be viewed as meta-programs that generate hohh programs. We have proven that this mapping is correct, and, using the Teyjus implementation of Lambda Prolog, we have shown that our translation provides an efficient means for executing LF specifications, complementing the ability the Twelf system provides for reasoning about them. In addition, the translation offers new avenues for reasoning about such specifications, via reasoning over the generated hohh programs.
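
To give a flavour of such a translation (a simplified, hypothetical rendering, eliding λProlog type declarations and the optimizations developed in the thesis), an LF signature for natural numbers and its naive image as hohh clauses over an assumed hastype predicate:

    % LF signature (Twelf syntax)
    nat : type.
    z   : nat.
    s   : nat -> nat.

    % naive hohh / λProlog image: a constant c : A1 -> ... -> An -> a becomes
    % a clause deriving (hastype (c X1 ... Xn) a) from (hastype Xi Ai)
    hastype z nat.
    hastype (s N) nat :- hastype N nat.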

11:50 Abdulla Alqaddoumi
Towards a Parallel Virtual Machine for Functional Logic Programming

Functional logic programming is a multi-paradigm programming style that combines the best features of functional programming and logic programming. Functional programming provides mechanisms for demand-driven evaluation, higher-order functions and polymorphic typing. Logic programming deals with non-determinism, partial information and constraints. Both paradigms fall under the umbrella of declarative programming. For the most part, current implementations of functional logic languages belong to one of two categories: (1) implementations that add logic programming features to a functional language, and (2) implementations that extend logic languages with functional programming features. In this paper we describe ongoing research efforts to build a parallel virtual machine that performs functional logic computations. The virtual machine will address several issues that other implementations do not: (1) sharing of sub-terms among different terms, especially when such sub-terms evaluate to more than one value (non-determinism); (2) exploitation of all forms of parallelism present in computations. The evaluation strategy used to evaluate functional logic terms is needed narrowing, a sound and complete strategy.

12:10 Niels Pahlavi
Higher-order Logic Learning and λProgol

We present our research on Higher-order Logic Learning (HOLL), which consists of adapting First-order Logic Learning (FOLL), such as Inductive Logic Programming (ILP), to a Higher-order Logic (HOL) context. We describe a first working implementation of λProgol, a HOLL system adapting the ILP system Progol to the HOL formalism of λProlog. We compare λProgol with Progol on the learning of recursive theories, showing that HOLL can, in these cases, outperform FOLL.

14:00‑15:00 Session III: Solvers and Applications
Chair: Marcello Balduccini
Location: AT 2.12
14:00 Christian Drescher
Constraint Answer Set Programming Systems

We present an integration of answer set programming and constraint processing as an interesting approach to constraint logic programming. Although our research is at a very early stage, we motivate constraint answer set programming and report on related work, our research objectives, the preliminary results we have achieved, and future work.

14:20 Sarah Alice Gaggl
Towards a General Argumentation System based on Answer-Set Programming

Over the last years, especially since the work of Dung in 1995, argumentation has emerged as a central issue in Artificial Intelligence. So-called argumentation frameworks (AFs) make it possible to represent statements (arguments) together with a binary attack relation between them. The conflicts between the statements are solved on a semantic level by selecting acceptable sets of arguments. An increasing amount of data requires an automated computation of such solutions. Logic programming, in particular Answer-Set Programming (ASP), has turned out to be adequate for solving problems associated with such AFs. In this work we use ASP to design a sophisticated system for the evaluation of several types of argumentation frameworks.
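
For instance, the stable extensions of an AF given as facts over hypothetical input predicates arg/1 and att/2 can be computed with an ASP encoding along the following lines, in the spirit of such systems:

    % guess a set of accepted arguments
    in(X) :- arg(X), not out(X).
    out(X) :- arg(X), not in(X).

    % conflict-freeness: no two accepted arguments attack each other
    :- in(X), in(Y), att(X,Y).

    % stability: every non-accepted argument is attacked by an accepted one
    defeated(X) :- in(Y), att(Y,X).
    :- out(X), not defeated(X).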

14:40 Jakob Zwirchmayr
Cutting-Edge Timing Analysis Techniques

This text gives an overview of my current research in timing analysis at the Vienna University of Technology. A short introduction to the topic is followed by the description of an approach relying on CLP, the implicit path enumeration technique (IPET). This technique is also used in a tool developed at the Institute of Computer Languages (TuBound). Current timing analysis tools suffer from a few flaws that are worth further investigation in order to improve on the current state of the art.
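
In IPET, the worst-case execution time is obtained from an integer linear program over the execution counts of the program's basic blocks (notation ours): maximize the total cost subject to structural flow conservation and loop-bound constraints,

    \mathrm{WCET} \;=\; \max \sum_{i} c_i\, x_i
    \quad \text{s.t.} \quad
    x_i \;=\; \sum_{e \in \mathrm{in}(i)} f_e \;=\; \sum_{e \in \mathrm{out}(i)} f_e, \qquad f_e \;\le\; b_e,

where c_i is the locally analysed cost of block i, x_i its execution count, f_e the count of edge e, and b_e a flow bound such as a loop bound.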

15:30‑16:30 Session IV: Applications
Chair: Marcello Balduccini
Location: AT 2.12
15:30 Mauricio Toro-Bermudez
Structured Interactive Musical Scores

Interactive Scores is a formalism for the design and performance of interactive scenarios that provides temporal relations (TRs) among the objects of the scenario. TRs among objects can be modelled in Time Stream Petri nets, but it is difficult to represent global constraints there; this can be done explicitly in the Non-deterministic Timed Concurrent Constraint (ntcc) calculus. We want to formalize a heterogeneous system in which one subsystem controls the concurrent execution of the objects using ntcc, and the other handles audio and video processing. We also plan to develop an automatic verifier for ntcc.

15:50 Neda Saeedloei
Logic Programming Foundations of Cyber-Physical Systems

Cyber-physical systems (CPS) are becoming ubiquitous. Almost every device today has a controller that reads inputs through sensors, does some processing and then performs actions through actuators. These controllers are discrete digital systems whose inputs are continuous physical quantities and whose outputs control physical (analog) devices. Thus, CPS involve both digital and analog data. In addition, CPS are assumed to run forever, and many CPS may run concurrently with each other. We will develop techniques for faithfully and elegantly modeling CPS. Our approach is based on using constraint logic programming over the reals, co-induction, and coroutining.
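
A minimal sketch of the flavour of this combination, assuming a co-LP extension that provides a coinductive/1 declaration on top of CLP(R) (the predicate names are ours): an infinite trace of sensor readings modelled as a coinductive predicate whose elements are constrained over the reals.

    :- use_module(library(clpr)).
    :- coinductive(safe_trace/1).      % assumed co-LP declaration

    % an infinite stream of temperature readings kept within bounds
    safe_trace([T|Ts]) :-
        { T >= 18.0, T =< 22.0 },      % continuous quantity, CLP(R) constraint
        safe_trace(Ts).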

16:10 Hugo A. López
Models for Trustworthy Services and Process Oriented Systems

Service and process-oriented systems promise to provide more effective business and work processes and more flexible and adaptable enterprise IT systems. However, the technologies and standards are still young and unstable, making research in their theoretical foundations increasingly important. Our studies focus on two dichotomies: the global/local views of service interactions, and their imperative/declarative specification. A global view of service interactions describes a process as a protocol for interactions, e.g. a UML sequence diagram or a WS-CDL choreography. A local view describes the system as a set of processes, e.g. specified as pi calculus or WS-BPEL processes, each implementing a participant in the process. While the global view is what is usually provided as a specification, the local view is a necessary step towards a distributed implementation. If processes are defined imperatively, the control flow is defined explicitly, e.g. as a sequence or flow graph of interactions/commands. In a declarative approach, processes are described as a collection of conditions they should fulfill in order to be considered correct. The two approaches have evolved rather independently from each other. Our thesis is that we can provide a theoretical framework, based on typed concurrent process and concurrent constraint calculi, for the specification, analysis and verification of service and process oriented system designs, which bridges the global and local views, combines the imperative and declarative specification approaches, and can be employed to increase the trust in the developed systems. This article describes our main motivations, results and future research directions.