09:00 | Georg Gottlob (University of Oxford) Datalog+-: A Family of Logical Query Languages for New Applications. I will report on a recently introduced family of Datalog-based languages, called Datalog+-, which is a new framework for tractable ontology querying and for a variety of other applications. Datalog+- extends plain Datalog with features such as existentially quantified rule heads and, at the same time, restricts the rule bodies so as to achieve decidability and tractability. I will review a number of theoretical results and show how Datalog+- relates to both Database Dependency Theory and the Guarded Fragment of first-order logic. I will show that popular tractable description logics translate into Datalog+- and illustrate how this formalism can be used in the context of web data extraction, data exchange, and other applications. |

10:30 | Vishwambhar Rathi, Erik Aurell, Lars Rasmussen and Mikael Skoglund Bounds on Threshold of Regular Random $k$-SAT We consider the regular model of formula generation in conjunctive normal form (CNF) introduced by Boufkhad et al. in \cite{BDIS05}. We derive an upper bound on the threshold for regular random $k$-SAT for any $k \geq 3$. We show that for large $k$ the threshold is upper bounded by $2^k \ln (2)$, which matches the corresponding upper bound for the uniform model of formula generation. We also derive upper bounds on the threshold for Not-All-Equal (NAE) satisfiability of regular random formulas using the first moment method for $k \geq 3$. We show that for large $k$, the NAE-satisfiability threshold is upper bounded by $2^{k-1} \ln(2)$. This again matches the corresponding bound for the uniform model of formula generation \cite{AcM06}. There are two main methods to compute lower bounds on the threshold. One method is to analyze the performance of an algorithm designed for finding satisfying assignments. In \cite{BDIS05}, a lower bound on the threshold of regular random $3$-SAT was derived by analyzing the performance of a greedy algorithm. The second moment method is another approach to obtain a lower bound. For the uniform model, it was shown in \cite{AcP04, AcM06} that a careful application of the second moment method yields a significantly better bound than any rigorously proven algorithmic bound. We derive the second moment of the number of satisfying assignments for regular random $k$-SAT for $k \geq 3$. For large $k$, we note that the obtained lower bound on the satisfiability threshold of a regular random formula converges to the lower bound obtained for the uniform model. Similarly, by computing the second moment of the number of NAE-satisfying assignments we obtain lower bounds on the NAE-satisfiability threshold of regular random formulas. Again we note that the obtained lower bound matches the corresponding lower bound for the uniform model as $k$ increases. Thus, we answer the question posed in \cite{AcM06} regarding the performance of the second moment method for regular random formulas. |
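For the uniform model, the first-moment bound mentioned in the abstract can be reproduced in a few lines: the expected number of satisfying assignments of a random $k$-CNF formula with $n$ variables and $\alpha n$ clauses is $2^n (1-2^{-k})^{\alpha n}$, which vanishes once $\alpha$ exceeds $\ln 2 / (-\ln(1-2^{-k})) \approx 2^k \ln 2$. A minimal sketch of this computation (uniform model only; the regular model studied in the paper requires a more delicate moment calculation):

```python
import math

def first_moment_upper_bound(k):
    """Upper bound on the satisfiability threshold alpha = m/n for
    uniform random k-SAT from the first moment method:
    E[#solutions] = 2^n * (1 - 2^-k)^(alpha*n), which tends to 0
    exactly when alpha > ln(2) / -ln(1 - 2^-k)."""
    return math.log(2) / -math.log(1 - 2.0 ** -k)

# The bound approaches 2^k * ln(2) as k grows.
for k in (3, 5, 10):
    print(k, first_moment_upper_bound(k), 2 ** k * math.log(2))
```

For $k = 3$ this recovers the classical first-moment bound of about $5.19$, and the ratio to $2^k \ln 2$ tends to $1$ as $k$ grows.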

11:00 | Thomas Hugel and Yacine Boufkhad Non Uniform Selection of Solutions for Upper Bounding the 3-SAT Threshold We give a new insight into the upper bounding of the 3-SAT threshold by the first moment method. The best criteria developed so far to select the solutions to be counted discriminate among neighboring solutions on the basis of uniform information about each individual free variable. By uniform information we mean information that does not depend on the solution, e.g., the number of positive/negative occurrences of the considered variable. What is new in our approach is that we use non-uniform information about variables. Thus we are able to make a more precise tuning, resulting in a slight improvement in the upper bound on the 3-SAT threshold for various models of formulas defined by their distributions. |

11:30 | Mladen Nikolic Statistical Methodology for Comparison of SAT Solvers Evaluating improvements to modern SAT solvers and comparing two arbitrary solvers are challenging and important tasks. The relative performance of two solvers is usually assessed by running them on a set of SAT instances and comparing the number of solved instances and their running times in a straightforward manner. In this paper we point to shortcomings of this approach and advocate more reliable, statistically founded methodologies that could discriminate better between good and bad ideas. We present one such methodology and illustrate its application. |
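As one illustration of a statistically founded comparison (a generic example, not necessarily the methodology of the paper), paired runtimes can be compared with an exact sign test rather than by raw totals; the runtimes below are hypothetical:

```python
import math

def sign_test(times_a, times_b):
    """Two-sided exact sign test on paired solver runtimes.
    Returns the p-value for the null hypothesis that neither solver
    is faster more often than the other (ties are dropped)."""
    wins_a = sum(a < b for a, b in zip(times_a, times_b))
    wins_b = sum(b < a for a, b in zip(times_a, times_b))
    n = wins_a + wins_b
    k = min(wins_a, wins_b)
    # Two-sided tail probability under Binomial(n, 1/2).
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)

# Hypothetical runtimes (seconds) of two solvers on ten instances.
a = [1.2, 0.8, 3.1, 2.0, 0.5, 4.4, 1.1, 2.2, 0.9, 3.0]
b = [1.5, 1.0, 3.0, 2.6, 0.7, 5.1, 1.4, 2.5, 1.2, 3.9]
print(sign_test(a, b))
```

Unlike comparing summed runtimes, such a test is robust to a single outlier instance dominating the totals.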

11:50 | Adrian Balint and Andreas Fröhlich Improving stochastic local search for SAT with instance specific information and a new probability distribution This paper introduces a new SLS-solver for the satisfiability problem. It is based on the solver gNovelty+, which uses an additive clause-weighting scheme. In contrast to gNovelty+, when our solver reaches a local minimum, it computes a probability distribution on all variables from unsatisfied clauses. It then flips a variable picked according to this distribution. Compared with other state-of-the-art SLS-solvers, this distribution needs neither noise nor a random walk to escape efficiently from cycles. Additionally, we use instance specific information for setting the smoothing parameter to achieve a stable performance on a wide range of instances. To obtain this information we run a survey propagation algorithm until the system converges. Based on the resulting messages we compute a complexity measure and define our smoothing parameter as a function of this complexity. We compared this algorithm, which we call \textit{Sparrow}, to the winners of the SAT2009 competition on a broad range of instances. Our results show that \textit{Sparrow} significantly outperforms all of its competitors on the random 3-SAT problem. |
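The flip-selection scheme described above, picking a variable from the unsatisfied clauses according to a probability distribution, can be sketched as follows; the scoring function `weight` is a placeholder, not Sparrow's actual distribution:

```python
import random

def flip_by_distribution(clauses, assignment, weight, rng):
    """One SLS step in the spirit described above: gather the variables
    occurring in unsatisfied clauses, score each one, and flip a variable
    sampled proportionally to its score.  Clauses are lists of non-zero
    integer literals; `assignment` maps variable -> bool.  `weight(var)`
    stands in for a solver-specific scoring function."""
    def satisfied(clause):
        return any(assignment[abs(l)] == (l > 0) for l in clause)

    candidates = {abs(l) for c in clauses if not satisfied(c) for l in c}
    if not candidates:
        return None  # all clauses satisfied: a model was found
    vars_ = sorted(candidates)
    v = rng.choices(vars_, weights=[weight(x) for x in vars_])[0]
    assignment[v] = not assignment[v]
    return v
```

With a constant `weight` this degenerates to a uniform random walk over the variables of unsatisfied clauses; the point of the paper is that a well-chosen distribution removes the need for separate noise and random-walk mechanisms.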

12:10 | Anton Belov and Zbigniew Stachniak Improved Local Search for Circuit Satisfiability In this paper we describe a number of improvements to the justification-based local search algorithm for circuit satisfiability recently proposed by Järvisalo et al. [1]. By carefully combining local search with Boolean Constraint Propagation we achieve a reduction in run-time of multiple orders of magnitude on industrial and structured benchmark circuits. Additional performance improvements are gained by employing structure-based search heuristics. ------------------------ [1] Matti Järvisalo, Tommi Junttila, and Ilkka Niemelä. Justification-Based Non-Clausal Local Search for SAT. In Proc. of the 18th European Conference on Artificial Intelligence (ECAI 2008), 2008. |
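Boolean Constraint Propagation, the ingredient combined with local search above, can be sketched on clausal input for concreteness (the paper applies it to circuits); a minimal unit-propagation loop:

```python
def unit_propagate(clauses, assignment):
    """Boolean Constraint Propagation on a CNF formula.  Clauses are
    lists of non-zero integer literals; `assignment` maps variable ->
    bool.  Repeatedly assigns the sole unassigned literal of any clause
    whose other literals are all false.  Returns the extended
    assignment, or None if a clause is falsified (conflict)."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None            # conflict: clause falsified
            if len(unassigned) == 1:   # unit clause: forced assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment
```

For example, on `[[1], [-1, 2], [-2, 3]]` propagation alone forces all three variables true; in a local-search setting such forced consequences prune the space the flip heuristic has to explore.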

14:00 | J Strother Moore (University of Texas) Theorem Proving for Verification: The Early Days. Since Turing, computer scientists have understood that the question "does this program satisfy its specifications?" could be reduced to the question "are these formulas theorems?" But the theorem proving technology of the 50s and 60s was inadequate for the task. In 1971, here in Edinburgh, Boyer and I started building the first general-purpose theorem prover designed for a computational logic. This project continues today, with Matt Kaufmann as a partner; the current version of the theorem prover is ACL2 (A Computational Logic for Applicative Common Lisp). In this talk I'll give a highly personal view of the four-decade-long "Boyer-Moore Project", including our mechanization of inductive proof, support for recursive definitions, rewriting with previously proved lemmas, integration of decision procedures, efficient representation of logical constants, fast execution, and other proof techniques. Along the way we'll see several interesting side roads: the founding of the Edinburgh school of logic programming, a structure-shared text editor that played a role in the creation of Word, and perhaps most surprisingly, the use of our "Lisp theorem prover" to formalize and prove theorems about commercial microprocessors and virtual machines via deep embeddings, including parts of processors by AMD, Centaur, IBM, Motorola, Rockwell-Collins, Sun, and others. The entire project helps shed light on the dichotomy between general-purpose theorem provers and special-purpose analysis tools. |

15:30 | Robert Brummayer, Florian Lonsing and Armin Biere Automated Testing and Debugging of SAT and QBF Solvers Robustness and correctness are essential criteria for SAT and QBF solvers. We develop automated state-of-the-art testing and debugging techniques designed and optimized for SAT and QBF solver development. Our experiments show that our fuzz testing techniques are able to find critical solver defects that lead to crashes, invalid satisfying assignments and incorrect satisfiability results. Moreover, our experimental analysis shows that our sequential and concurrent delta debugging techniques are highly effective in minimizing failure-inducing inputs. |
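Delta debugging, used above to minimize failure-inducing inputs, can be sketched with a Zeller-style ddmin loop over, e.g., the clauses of a CNF file that triggers a solver defect; this is a generic sketch, not the authors' tool:

```python
def ddmin(failing_input, fails):
    """Shrink a failure-inducing input (a list of items, e.g. the
    clauses of a CNF file) while the predicate `fails` keeps returning
    True on the reduced input.  Splits the input into n chunks and
    greedily keeps any complement that still fails, refining the
    granularity when no reduction is possible."""
    n = 2
    while len(failing_input) >= 2:
        chunk = max(1, len(failing_input) // n)
        subsets = [failing_input[i:i + chunk]
                   for i in range(0, len(failing_input), chunk)]
        reduced = False
        for i in range(len(subsets)):
            complement = [x for j, s in enumerate(subsets) if j != i
                          for x in s]
            if fails(complement):          # still fails without chunk i
                failing_input = complement
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(failing_input):    # finest granularity reached
                break
            n = min(len(failing_input), 2 * n)
    return failing_input
```

In practice `fails` would run the solver under test on the candidate input and check for the crash or incorrect result; the loop then returns a locally minimal input that still exhibits the defect.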

16:00 | William Klieber, Samir Sapra, Sicun Gao and Edmund Clarke A Non-Prenex, Non-Clausal QBF Solver with Game-State Learning We present a DPLL-based solver for the problem of quantified Boolean formulas (QBF) in non-prenex, non-CNF form. We make two contributions. First, we reformulate clause/cube learning, extending it to non-prenex instances. We call the resulting technique "game-state learning". Second, we introduce a propagation technique using *phantom literals* that exploits the structure of a non-CNF instance in a manner that is symmetric between the universal and existential variables. Experimental results on the QBFEVAL benchmarks indicate that our approach outperforms other state-of-the-art solvers on certain families of benchmarks, including the tipfixpoint and tipdiam families of model checking problems. |

16:30 | Florian Lonsing and Armin Biere Integrating Dependency Schemes in Search-Based QBF Solvers Many search-based QBF solvers implementing the DPLL algorithm for QBF (QDPLL) process formulae in prenex conjunctive normal form (PCNF). The quantifier prefix of PCNFs often results in strong variable dependencies, which can influence solver performance negatively. A common approach to overcome this problem is to reconstruct quantifier structure, e.g., by quantifier trees. Dependency schemes are a generalization of quantifier trees in the sense that more general dependency graphs can be obtained. So far, dependency graphs have not been applied in QBF solving. In this work we consider the problem of efficiently integrating dependency graphs into QDPLL, thereby generalizing related work on integrating quantifier trees. By analyzing the core parts of QDPLL, we report on the modifications necessary to profit from general dependency graphs. In comprehensive experiments we show that QDPLL using a particular dependency graph, despite its increased overhead, outperforms classical QDPLL relying on quantifier prefixes of PCNFs. |