HWVW on Thursday, July 15th

09:00‑10:00 Session 1
Chair: TBD
Location: AT 2.11
09:00 Sharad Malik (Princeton University, USA)
Invited talk: Managing State Explosion through Runtime Verification

The major challenge in hardware verification is managing the state explosion problem in pre-silicon verification. This is reflected in the high cost and low coverage of functional simulation, and in the capacity limitations of formal verification. Runtime verification offers an attack on this problem by checking properties of the current trace on the fly and providing a recovery mechanism to deal with failures. Several interesting runtime verification schemes have been proposed in recent years in the computer architecture community, largely motivated by the resiliency needs of future technology generations in the face of dynamic errors due to device failures. I will first highlight the key ideas in hardware runtime verification through specific examples from the uni-processor and multi-processor contexts. Next, I will discuss the challenges in implementing some of these solutions. Finally, I will discuss how the strengths of runtime verification and model checking can be used in a complementary fashion.
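
As a concrete illustration of the on-the-fly checking the talk refers to, the following minimal sketch (our own toy, not from the talk) monitors a bounded-response property over a hardware trace and invokes a recovery hook on violation; the property, signal names and bound are all hypothetical.

```python
# Toy runtime monitor for "every req is followed by a gnt within n cycles",
# with a recovery hook invoked on violation instead of halting the system.

def run_monitor(trace, n=4, recover=lambda t: None):
    """trace: iterable of dicts like {'req': 0 or 1, 'gnt': 0 or 1}."""
    pending = None                       # cycle of the oldest open request
    for t, signals in enumerate(trace):
        if signals['req'] and pending is None:
            pending = t                  # a new obligation starts
        if signals['gnt']:
            pending = None               # obligation discharged
        if pending is not None and t - pending > n:
            recover(t)                   # on-the-fly failure: trigger recovery
            pending = None

trace = [{'req': 1, 'gnt': 0}] + [{'req': 0, 'gnt': 0}] * 6
run_monitor(trace, recover=lambda t: print(f"violation at cycle {t}"))
```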

10:30‑12:30 Session 2
Chair: TBD
Location: AT 2.11
10:30 Matthias Raffelsieper and Mohammadreza Mousavi
Efficient Verification of Verilog Cell Libraries

Standard cell libraries often serve as building blocks of electronic circuits. Using standard cells allows the designer to focus the design effort for larger circuits on functional aspects on the one hand, and facilitates a seamless implementation of the circuit in silicon on the other. To enable such a seamless implementation, one wants certain guarantees about the functional description of the standard cells, such as equivalence with the transistor netlist implementation.

To realize this, we have defined a formal (operational) semantics for the subset of Verilog used in the description of cell libraries. The semantics is then implemented as a translation from Verilog to the input language of the SMV model checker. Using the automated translation, we were able to experiment with the developed semantics and to formally verify open-source and proprietary cell libraries against their netlist implementations.
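
To give a flavour of such a translation, here is a minimal sketch (our own toy, not the paper's translator, which covers far more of Verilog) that emits an SMV equivalence harness for a purely combinational cell; the function cell_to_smv and the AND2 example are our own illustration.

```python
# Emit an SMV model that checks a cell's behavioural view against its
# netlist view; the equivalence is stated as a CTL invariant, so the model
# checker explores all input valuations.

def cell_to_smv(name, inputs, func_expr, netlist_expr):
    decls = "\n".join(f"  {i} : boolean;" for i in inputs)
    return (f"MODULE main  -- equivalence harness for cell {name}\n"
            f"VAR\n{decls}\n"
            f"DEFINE\n"
            f"  y_func := {func_expr};  -- behavioural (library) description\n"
            f"  y_net  := {netlist_expr};  -- from the transistor netlist\n"
            f"SPEC AG (y_func = y_net)\n")

# AND2 cell versus a NAND-plus-inverter netlist realisation:
print(cell_to_smv("AND2", ["a", "b"], "a & b", "!(!(a & b))"))
```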

The semantics also serves as a vehicle for further formal analysis techniques. We have analyzed the sources of non-determinism in the descriptions of cells and checked whether timing checks (bounds on the relative timing of certain signals) rule out such sources. If the formal analysis finds a counter-example witnessing that a source of non-determinism is not prohibited by the existing timing checks, a timing check is generated, where possible, to disallow the simultaneous events leading to potential non-determinism. Using this technique, we have found omissions in the descriptions of existing open-source standard cells.
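
The following minimal sketch (a hypothetical toy, far simpler than the paper's symbolic analysis) illustrates the underlying idea: evaluate both orders of two "simultaneous" input events and flag order-dependence, which is exactly what a generated timing check would have to forbid.

```python
def order_dependent(step, state, ev1, ev2):
    """step(state, signal, value) -> new state; ev = (signal, value)."""
    s12 = step(step(state, *ev1), *ev2)   # ev1 before ev2
    s21 = step(step(state, *ev2), *ev1)   # ev2 before ev1
    return s12 != s21

# Toy level-sensitive latch: transparent while en = 1.
def latch_step(state, sig, val):
    q, d, en = state
    if sig == 'd':
        d = val
    if sig == 'en':
        en = val
    if en:
        q = d                             # transparent: output follows data
    return (q, d, en)

# d and en fall "simultaneously": the latched value depends on the order,
# so a setup/hold-style timing check on (d, en) should be generated.
print(order_dependent(latch_step, (1, 1, 1), ('d', 0), ('en', 0)))  # True
```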

Additionally, we developed symbolic techniques to analyze the feasibility of module path specifications in cell libraries. A module path specifies the propagation delay for an event from an input to an output. Specifying such paths manually is an error-prone task: a forgotten path is interpreted as a zero delay, which can cause further flaws in the subsequent design steps. Moreover, one can specify superfluous module paths, i.e., module paths that can never occur in any practical run of the model and hence impose excessive restrictions on subsequent design decisions. Complementing this check, we also developed a method to derive module paths from a functional description of a cell.
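
The path-derivation idea can be illustrated with the classical Boolean difference: input x has a feasible path to output f iff f restricted to x=0 XORed with f restricted to x=1 is satisfiable. A brute-force sketch (our own illustration, not the paper's symbolic technique):

```python
from itertools import product

def feasible_paths(f, inputs):
    """Return the inputs with a feasible path to the output of f."""
    paths = []
    for x in inputs:
        others = [i for i in inputs if i != x]
        for vals in product([0, 1], repeat=len(others)):
            env = dict(zip(others, vals))
            if f(**env, **{x: 0}) != f(**env, **{x: 1}):
                paths.append(x)          # witness found: toggling x matters
                break
    return paths

mux = lambda a, b, s: (a if s else b)
print(feasible_paths(mux, ['a', 'b', 's']))   # ['a', 'b', 's']
redundant = lambda a, b: (a & b) | b          # simplifies to b
print(feasible_paths(redundant, ['a', 'b']))  # ['b']: a path a->y would be
                                              # superfluous in the cell spec
```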

The material presented here is the result of cooperation with Jan-Willem Roorda, Chris Strolenberg and Hans Zantema. This research was carried out in the context of the ValiChip project, a cooperation between TU Eindhoven and Fenix Design Automation. The results of this project have so far been published at ACSD 2009, FMICS 2010 and DATE 2010.

11:00 Yongjian Li, William Hung and Xiaoyu Song
Automatically Exploring Structural Symmetry in Symbolic Trajectory Evaluation

In this paper, we present a formal theory to identify symmetry in netlists and symmetry in properties. The close correspondence between the two kinds of symmetry is formalized as a theorem, which guarantees the soundness of our symmetry reduction method. We introduce a practical way to integrate the symmetry reduction approach into a hybrid verification environment that combines theorem proving and symbolic trajectory evaluation. Finally, the power of the symmetry reduction method is demonstrated by case studies on content-addressable memories.
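
A minimal sketch of the core observation (our own brute-force toy; the paper works symbolically on netlists and trajectory formulas): if the circuit function is invariant under a permutation of its nodes, then one verified property instance transfers to all its permuted variants, so only one representative per symmetry class needs an STE run.

```python
from itertools import product

def invariant_under(f, inputs, pi):
    """Brute-force check that f(x) == f(pi(x)) for all input valuations."""
    for vals in product([0, 1], repeat=len(inputs)):
        env = dict(zip(inputs, vals))
        permuted = {pi.get(k, k): v for k, v in env.items()}
        if f(**env) != f(**permuted):
            return False
    return True

# A 2-bit CAM-style match line is symmetric under swapping bit 0 and bit 1:
match = lambda k0, k1, e0, e1: int(k0 == e0 and k1 == e1)
pi = {'k0': 'k1', 'k1': 'k0', 'e0': 'e1', 'e1': 'e0'}
print(invariant_under(match, ['k0', 'k1', 'e0', 'e1'], pi))  # True
```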

11:30 Divjyot Sethi, Yogesh Mahajan and Sharad Malik
Specification and Encoding of Transaction Interaction Properties

Transaction-level modeling is used in hardware design to describe designs at a higher level of abstraction than the register-transfer level (RTL) (e.g. [1–4]). Each transaction represents a unit of work, which is also a useful unit for design verification. In such models, many properties of interest involve interactions between multiple transactions; examples are ordering relationships in sequential processing and hazard checking in pipelined circuits. Writing such properties on the RTL design requires significant expertise in understanding the higher-level computation performed by a given RTL design, and possibly instrumentation of the RTL to express the property of interest. This is a barrier to the easy use of such properties in RTL designs. In this paper, we consider the specification of interaction properties at the transaction level and the subsequent encoding of the property and the transaction-level model into a finite-state system for model checking. We discuss automation by showing how the encoded finite-state system can be generated automatically from the specification of the property and the transaction-level model, and illustrate this through simple examples.
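
As a toy illustration of such an encoding (ours, not the paper's), the following monitor turns the interaction property "transactions complete in issue order" into a finite-state observer; composed with the transaction-level model, the pair forms the finite-state system handed to a model checker, with `failed` as the bad state.

```python
from collections import deque

class InOrderMonitor:
    """Finite-state observer for the property 'complete in issue order'."""

    def __init__(self):
        self.issued = deque()            # bounded in a real encoding
        self.failed = False

    def on_issue(self, tag):
        self.issued.append(tag)

    def on_complete(self, tag):
        expected = self.issued.popleft() if self.issued else None
        if tag != expected:
            self.failed = True           # property-violation state

m = InOrderMonitor()
for event, tag in [('issue', 1), ('issue', 2), ('complete', 2)]:
    getattr(m, 'on_' + event)(tag)
print(m.failed)   # True: transaction 2 overtook transaction 1
```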

12:00 Malay Ganai and Weihong Li
Bang for the Buck: Improvising and Scheduling Verification Engines for Effective Resource Utilization

In practice, verification engines have to discharge many checkers in a very tight time budget, especially when the embedded system to be analyzed is large, with many coverage criteria. To cope with this, we propose improved, lightweight verification techniques built on top of state-of-the-art engines such as bounded model checking (BMC), induction, and guided simulation (directed testing). Specifically, we propose using control state reachability (CSR) information, obtained from a given software system, to strengthen our induction-based proof engine. We also propose identifying and using lighthouses (or guide-posts), i.e., intermediate control states, to simplify and reduce BMC instances and to guide a simulation engine. We schedule these engines suitably to maximize resource utilization. We implemented our techniques in a tool, ACE, and integrated it into an industrial-strength software verification platform, F-Soft, to provide a robust and precise analysis framework. We show the effectiveness of ACE on several industrial and public benchmarks in a comparative study.
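
A minimal sketch of the CSR idea as we read it (not NEC's implementation): a breadth-first pass over the control-flow graph yields, per unrolling depth, the set of reachable control locations; the induction step may then assume the program counter lies in that set, pruning spurious states.

```python
def csr(cfg, entry, max_depth):
    """cfg: dict mapping a control location to its successor locations.
    Returns, for each depth k, the locations reachable in exactly k steps."""
    levels, frontier = [{entry}], {entry}
    for _ in range(max_depth):
        frontier = {s for loc in frontier for s in cfg.get(loc, [])}
        levels.append(frontier)
    return levels

cfg = {'L0': ['L1'], 'L1': ['L2', 'L4'], 'L2': ['L3'], 'L3': ['L1'], 'L4': []}
for k, locs in enumerate(csr(cfg, 'L0', 4)):
    print(f"depth {k}: {sorted(locs)}")
# e.g. depth 2 = ['L2', 'L4']: an induction step at depth 2 may constrain the
# pc to that set, ruling out control states unreachable at that depth.
```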

14:00‑15:00 Session 3
Chair: TBD
Location: AT 2.11
14:00 Cindy Eisner (IBM Research Lab, Israel)
Invited talk: Why Work on Hardware Verification?

In recent years there has been a steady dwindling in the number of publications that specifically target formal verification of hardware. Yet hardware verification is a solved problem in theory only. In this talk I will present a number of real-life verification-related problems that may lend themselves to a formal solution.

15:30‑17:30 Session 4
Chair: TBD
Location: AT 2.11
15:30 Stefan Kupferschmid, Matthew Lewis, Bernd Becker and Tobias Schubert
Incremental Preprocessing Methods for Use in BMC

Traditional incremental SAT solvers have achieved great success in the domain of Bounded Model Checking (BMC). However, modern solvers depend on advanced preprocessing procedures to obtain high levels of performance. Unfortunately, many preprocessing techniques, such as variable elimination and blocked clause elimination, cannot be used directly in an incremental manner. This work focuses on extending these techniques and Craig interpolation so that they can be used together effectively in incremental SAT solving, in the context of BMC. The techniques introduced here doubled the performance of our BMC solver on both SAT and UNSAT problems. For UNSAT problems, preprocessing had the added advantage that Craig interpolation was able to find the fixed point sooner, reducing the number of incremental SAT iterations. Furthermore, our ideas seem to perform better as the benchmarks become larger and/or deeper, which is exactly when they are needed. Lastly, our methods can be extended to other SAT-based BMC tools to achieve similar speedups.
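
For context, here is a minimal sketch of the standard incremental BMC loop the paper builds on, written against the PySAT package (an assumption of ours, not the authors' solver): each frame's "bad state" constraint is guarded by an activation literal, asserted as an assumption, and retired with a unit clause before the next unrolling; incremental preprocessing must then freeze variables that later frames will reuse.

```python
from pysat.solvers import Minisat22

def bmc(init_clauses, trans, bad, n_vars, max_k):
    """trans(k)/bad(k) return clauses for frame k; literals are signed ints,
    with frame-k variables drawn from k*n_vars+1 .. (k+1)*n_vars."""
    s = Minisat22()
    for c in init_clauses:
        s.add_clause(c)
    act_base = n_vars * (max_k + 2)      # fresh ids for activation literals
    for k in range(max_k + 1):
        for c in trans(k):
            s.add_clause(c)              # unroll one more transition frame
        a = act_base + k + 1
        for c in bad(k):
            s.add_clause(c + [-a])       # clause active only while a assumed
        if s.solve(assumptions=[a]):
            return k                     # counterexample of length k
        s.add_clause([-a])               # retire this frame's bad constraint
    return None                          # no counterexample up to max_k
```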

16:00 Souheib Baarir, Cecile Braunstein, Emmanuelle Encrenaz, Jean-Michel Ilie, Tun Li, Isabelle Mounier, Denis Poitrenaud and Sana Younes
Quantifying Robustness by Symbolic Model Checking

Evaluating the robustness of modern embedded systems with respect to soft errors has become an important part of the design flow for many safety-critical applications.

In this context, we propose and investigate a robustness evaluation procedure for sequential circuits subject to particle strikes that induce bit-flips in memory elements. We define a general fault model, a parametric repair model, and quantitative measures reflecting the robustness of the circuit with respect to these fault and repair models. We provide algorithms to compute these metrics and show how they can be interpreted, in order to better understand robustness, on a simple circuit from the VIS distribution.
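
A brute-force toy (ours; the paper computes such measures symbolically) showing one plausible robustness metric: the fraction of single bit-flips from which the circuit re-converges to its nominal state within k steps.

```python
def robustness(step, states, n_bits, k):
    """step: next-state function on integer-encoded latch vectors."""
    ok = total = 0
    for s in states:
        nominal = s
        for _ in range(k):
            nominal = step(nominal)      # fault-free reference run
        for b in range(n_bits):
            total += 1
            faulty = s ^ (1 << b)        # particle strike: flip one latch
            for _ in range(k):
                faulty = step(faulty)
            ok += (faulty == nominal)    # re-converged: flip was absorbed
    return ok / total

# 2-bit toy circuit: bit 1 is recomputed from bit 0 every cycle, so flips in
# bit 1 are absorbed while flips in bit 0 corrupt the trajectory.
step = lambda s: ((s & 1) << 1) | ((s ^ 1) & 1)
print(robustness(step, states=[0, 1, 2, 3], n_bits=2, k=2))  # 0.5
```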

16:30 Gianpiero Cabodi, Sergio Nocco and Stefano Quer
Benchmarking a Model Checker for Algorithmic Improvements and Tuning for Performance

This paper describes an experience in which the development of model checking algorithms and experimental benchmarking proceeded together, following an intertwined scheme: model checking algorithms are evaluated through experiments, and experiments inspire new algorithmic features and methodologies. The approach is not new per se, as similar approaches are common in several industrial as well as academic environments. Nevertheless, the contribution of this work lies in the description of how we: 1) classified and characterized benchmarks in a dynamic way, through experimental runs, 2) related model checking problems to algorithms and engines, 3) experimented with coordinated efforts among different reduction/abstraction algorithms and verification procedures/engines, and 4) recorded the results of different approaches and sorted out heuristics to target different classes of problems. We also describe our results, obtained on selected benchmarks derived from the HWMCC’08 suite.
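
A minimal sketch (our own illustration, not the authors' infrastructure) of the record-and-derive-heuristics step: log each run as (benchmark feature class, engine, outcome), then pick the historically best engine for a given class.

```python
from collections import defaultdict

log = defaultdict(list)                  # feature class -> run records

def record(features, engine, solved, secs):
    log[features].append((engine, solved, secs))

def best_engine(features):
    score = defaultdict(lambda: [0, 0.0])
    for engine, solved, secs in log[features]:
        score[engine][0] += solved       # count of solved instances
        score[engine][1] += secs         # total time spent
    # heuristic: prefer the most solves, break ties by least total time
    return max(score, key=lambda e: (score[e][0], -score[e][1]))

record(('large', 'safety'), 'bdd', solved=0, secs=900)
record(('large', 'safety'), 'itp', solved=1, secs=120)
print(best_engine(('large', 'safety')))   # 'itp'
```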

17:00 Armin Biere
Discussion and Presentation of the Results of the 3rd Hardware Model Checking Competition