CSF on Sunday, July 18th

09:00‑10:00 Authorization
Chair: Andrew Gordon
Location: AT LT2
09:00 Adam Lee and Ting Yu
On the quantitative analysis of proofs of authorization: applications, framework, and techniques

Although policy compliance testing is generally treated as a binary decision problem, the evidence gathered during the trust management process can actually be used to examine these outcomes within a more continuous space. In this paper, we develop a formal model that allows us to quantitatively reason about the outcomes of the policy enforcement process in both absolute (i.e., user to ideal case) and relative (i.e., user to user) terms. Within this framework, it becomes possible to quantify, e.g., the robustness of a user's proof of authorization to possible perturbations in the system, how close an unauthorized user is to satisfying a particular policy, and relative "top-k" style rankings of the best users to carry out a particular task. To this end, we explore several interesting classes of scoring functions for assessing the robustness of authorization decisions, and develop criteria under which these types of functions can be composed with one another. We further show that these types of functions can be extended to quantify how close unauthorized users are to satisfying policies, which can be a useful risk metric for decision making under unexpected circumstances.
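
As a rough illustration of the kind of scoring the abstract describes, the sketch below treats a policy as a disjunction of credential sets and computes a closeness score for unauthorized users together with a robustness score counting how many revocations a satisfying proof can survive. The encoding and the scoring rules are assumptions made for illustration, not the paper's definitions.

    # Illustrative sketch only; the policy encoding and scoring rules are
    # assumptions, not the authors' framework.
    from itertools import combinations

    def closeness(policy, creds):
        """Best fraction of any alternative's required credentials that are held.
        Returns 1.0 exactly when the policy is satisfied."""
        return max(len(alt & creds) / len(alt) for alt in policy)

    def robustness(policy, creds):
        """Smallest number of credential revocations that would invalidate every
        satisfied alternative (0 if the policy is not satisfied at all)."""
        satisfied = [alt for alt in policy if alt <= creds]
        if not satisfied:
            return 0
        for k in range(1, len(creds) + 1):
            for revoked in combinations(creds, k):
                if all(alt & set(revoked) for alt in satisfied):
                    return k    # k revocations break every satisfied alternative
        return len(creds)

    policy = [{"employee", "mfa"}, {"manager"}]   # (employee AND mfa) OR manager
    alice = {"employee", "mfa", "manager"}        # two independent ways to satisfy
    bob = {"employee"}                            # unauthorized, but halfway there

    print(closeness(policy, alice), robustness(policy, alice))   # 1.0 2
    print(closeness(policy, bob), robustness(policy, bob))       # 0.5 0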

09:30 Lujo Bauer, Limin Jia and Divya Sharma
Towards precise specification of logic-based access-control policies

Authorization logics allow concise specification of flexible access-control policies, and are the basis for logic-based access-control systems. In such systems, resource owners issue credentials to specify policies, and the consequences of these policies are derived using logical inference rules. Proofs in authorization logics can serve as capabilities for gaining access to resources.

Because proofs are derived from a set of credentials possibly issued by different parties, the issuer of a specific credential may not be aware of all the proofs that her credential may make possible. From this credential issuer's standpoint, the policy expressed in her credential may thus have unexpected consequences. To solve this general problem, we propose a system in which credentials can specify constraints on how they are to be used. We show how to modularly extend well-studied authorization logics to support the specification and enforcement of such constraints. A novelty of our design is that we allow the constraints to be arbitrary well-behaved functions over authorization proofs. Since all the information about an access is contained in the proofs, this makes it possible to express many interesting constraints. We study the formal properties of such a system, and give examples of constraints.
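
The toy sketch below conveys the flavour of such constraints under an assumed encoding: each credential carries a predicate over the entire authorization proof, and a proof is accepted as a capability only if every credential it uses agrees. The Proof and Credential classes are illustrative inventions, not the paper's logic.

    # Illustrative sketch only; the data structures are assumptions.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Proof:
        resource: str
        requester: str
        credentials: List["Credential"] = field(default_factory=list)

    @dataclass
    class Credential:
        issuer: str
        statement: str
        constraint: Callable[[Proof], bool] = lambda proof: True   # default: unconstrained

    def accepted(proof: Proof) -> bool:
        """A proof serves as a capability only if every issuer's constraint holds."""
        return all(c.constraint(proof) for c in proof.credentials)

    # The HR issuer restricts her delegation: it must never appear in proofs
    # that grant access to the payroll database.
    hr_cred = Credential("HR", "Alice is an employee",
                         constraint=lambda p: p.resource != "payroll-db")
    it_cred = Credential("IT", "employees may read the wiki")

    print(accepted(Proof("wiki", "Alice", [hr_cred, it_cred])))        # True
    print(accepted(Proof("payroll-db", "Alice", [hr_cred, it_cred])))  # False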

10:30‑12:30 Information Flow
Chair: David Sands
Location: AT LT2
10:30 Moritz Y. Becker
Information flow in credential systems

This paper proposes a systematic study of information flow in credential-based declarative authorization policies. It argues that a treatment in terms of information flow is needed to adequately describe, analyze and mitigate a class of probing attacks which allow an adversary to infer any confidential fact within a policy. Two information flow properties that have been studied in the context of state transition systems, non-interference and opacity, are reformulated in the current context of policy languages. A comparison between these properties reveals that opacity is the more useful and more general of the two; indeed, it is shown that non-interference can be stated in terms of opacity. The paper then presents an inference system for non-opacity, or detectability, in Datalog-based policies. Finally, a pragmatic method is presented, based on a mild modification of the mechanics of delegation, for preventing a particularly dangerous kind of probing attack that abuses delegation of authority.
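
A minimal sketch of the kind of probing attack at issue, assuming a toy Datalog-style evaluator and invented facts: the attacker submits a credential whose body mentions a confidential fact, and the visible grant/deny decision reveals whether that fact holds.

    # Illustrative sketch only; the policy, facts and rule encoding are assumptions.
    def derivable(goal, facts, rules):
        """Naive forward chaining over ground rules of the form (head, [body atoms])."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for head, body in rules:
                if head not in known and all(b in known for b in body):
                    known.add(head)
                    changed = True
        return goal in known

    server_facts = {"treats(oncology, alice)"}    # confidential fact in the policy
    server_rules = [("grant(probe, mallory)", ["says(mallory, ok)"])]

    # Mallory's probe: "mallory says ok if treats(oncology, alice)".
    probe_credential = ("says(mallory, ok)", ["treats(oncology, alice)"])

    granted = derivable("grant(probe, mallory)",
                        server_facts, server_rules + [probe_credential])
    print(granted)   # True: the observable access decision leaks the confidential fact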

11:00 Alejandro Russo and Andrei Sabelfeld
Dynamic vs. static flow-sensitive security analysis

This paper seeks to answer fundamental questions about trade-offs between static and dynamic security analysis. It has been previously shown that flow-sensitive static information-flow analysis is a natural generalization of flow-insensitive static analysis, which allows accepting more secure programs. It has also been shown that sound purely dynamic information-flow enforcement is more permissive than static analysis in the flow-insensitive case. We argue that the step from flow-insensitive to flow-sensitive is fundamentally limited for purely dynamic information-flow controls. We prove the impossibility of a sound purely dynamic information-flow monitor that accepts programs certified by a classical flow-sensitive static analysis. A side implication is the impossibility of a permissive dynamic instrumented security semantics for information flow, which guides us to uncover an unsound semantics from the literature. We present a general framework for hybrid mechanisms that is parameterized in the static part and in the reaction method of the enforcement (stop, suppress, or rewrite), and give security guarantees with respect to termination-insensitive noninterference for a simple language with output.
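
The sketch below replays the well-known two-branch counterexample behind this impossibility against a deliberately naive, purely dynamic flow-sensitive monitor (a strawman for illustration, not the paper's formal model): the output value depends on the secret, yet both runs label it low.

    # Strawman monitor: labels are updated only along the executed path.
    HIGH, LOW = "high", "low"
    def lub(a, b): return HIGH if HIGH in (a, b) else LOW

    def run(secret):
        val = {"secret": secret, "temp": 0, "public": 1}
        lab = {"secret": HIGH, "temp": LOW, "public": LOW}

        pc = lab["secret"]            # if secret: temp = 1
        if val["secret"]:
            val["temp"] = 1
            lab["temp"] = lub(pc, LOW)    # the untaken branch is never seen

        pc = lab["temp"]              # if temp != 1: public = 0
        if val["temp"] != 1:
            val["public"] = 0
            lab["public"] = lub(pc, LOW)

        return val["public"], lab["public"]

    print(run(secret=True))    # (1, 'low')
    print(run(secret=False))   # (0, 'low'): the value reveals the secret,
                               # yet both runs label the output 'low'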

11:30 Andrey Chudnov and David Naumann
Information flow monitor inlining

In recent years it has been shown that dynamic monitoring can be used to soundly enforce information flow policies. For programs distributed in source or bytecode form, the use of JIT compilation makes it difficult to implement monitoring by modifying the language runtime system. An inliner avoids this problem and enables flow tracking on multiple runtimes. We show how to inline an information flow monitor, specifically a flow-sensitive monitor due to Russo and Sabelfeld (who show that it enforces termination-insensitive noninterference). We prove that the inlined version is observationally equivalent to the original.
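
A minimal sketch of the inlining idea under assumed names and a two-point lattice: the transformed program carries a shadow label for each variable and performs its own checks, so no modified runtime is needed. This is an illustration of the general approach, not the authors' transformation.

    # Original program:  y = x + 1; print(y)   (print is a public channel)
    # What an inliner might produce, with shadow label variables added:
    HIGH, LOW = "high", "low"
    def lub(a, b): return HIGH if HIGH in (a, b) else LOW

    def inlined(x, x_lab):
        pc_lab = LOW                    # label of the current control context
        y = x + 1
        y_lab = lub(pc_lab, x_lab)      # shadow assignment inserted by the inliner
        if lub(pc_lab, y_lab) == HIGH:  # inserted check guarding the public output
            raise RuntimeError("information-flow violation on a public channel")
        print(y)

    inlined(41, LOW)                    # prints 42
    try:
        inlined(41, HIGH)               # a secret-derived value heads for the public channel
    except RuntimeError as e:
        print("blocked:", e)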

12:00 Stephen Chong
Required information release

Many computer systems have a functional requirement to release information. Such requirements are an important part of a system's information security requirements. Current information-flow control techniques are able to reason about permitted information flows, but not required information flows.

In this paper, we introduce and explore the specification and enforcement of required information release in a language-based setting. We define semantic security conditions that express both what information a program is required to release and how an observer is able to learn this information. We also consider the relationship between permitted and required information release, and define bounded release, which provides upper and lower bounds on the information a program releases. We show that both required information release and bounded release can be enforced using a security-type system.
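
As a rough illustration of the intuition (not the paper's semantic conditions), the sketch below brute-force checks that a toy auction program releases exactly whether a bid meets the reserve price: the lower bound requires that inputs differing in that fact be distinguishable from the output, and the upper bound requires that inputs agreeing on it be indistinguishable.

    # Illustrative sketch only; the program and the encoding are assumptions.
    from itertools import product

    RESERVE = 100

    def program(bid):                 # the only output an observer sees
        return "met" if bid >= RESERVE else "not met"

    def required(bid):                # the information the program must release
        return bid >= RESERVE

    bids = range(90, 111)

    # Lower bound (required release): the observer must be able to learn the fact.
    assert all(program(a) != program(b)
               for a, b in product(bids, repeat=2) if required(a) != required(b))

    # Upper bound (permitted release): nothing beyond the fact may be learned.
    assert all(program(a) == program(b)
               for a, b in product(bids, repeat=2) if required(a) == required(b))

    print("the program releases exactly whether the reserve was met")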

14:00‑15:00 FLoC Keynote Talk: Deepak Kapur
Location: George Square Lecture Theatre
14:00 Deepak Kapur (University of New Mexico)
Induction, Invariants, and Abstraction

Invariants play a key role in verifying properties of imperative programs. Inductive reasoning is essential to verifying properties of recursive programs. The relationship between the derivation of loop invariants and the speculation of lemmas in inductive reasoning is explored. Abstraction is an effective heuristic for approximating program behavior in order to derive invariants. The interaction among developing abstractions, inductive reasoning, and generating invariants is investigated.
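
As a small worked example of the loop invariants the talk refers to (illustrative only), the assertion inside the loop below plays the role of an induction step, and combining it with the exit condition yields the final property.

    def sum_upto(n):
        total, i = 0, 1
        while i <= n:
            assert total == i * (i - 1) // 2    # loop invariant
            total += i
            i += 1
        assert total == n * (n + 1) // 2        # invariant plus exit condition
        return total

    print(sum_upto(10))   # 55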

15:30‑17:00 Security Protocol Verification II
Chair: Mark Ryan
Location: AT LT2
15:30 Simon Meier, Cas Cremers and David Basin
Strong invariants for the efficient construction of machine-checked protocol security proofs

We embed an operational semantics for security protocols in the interactive theorem prover Isabelle/HOL and derive two strong protocol-independent invariants. These invariants allow us to reason about the possible origin of messages and justify a local typing assumption for the otherwise untyped protocol variables. The two resulting rules form the core of a theory that is well-suited for interactively constructing natural, human-readable correctness proofs. Moreover, we develop an algorithm that automatically generates proof scripts based on these invariants. Both interactive and automatic proof construction are faster than competing approaches. Furthermore, we have strong correctness guarantees, since all proofs, including those deriving the underlying theory from the semantics, are machine checked.

16:00 Santiago Zanella Béguelin, Gilles Barthe, Sylvain Heraud, Benjamin Grégoire and Daniel Hedin
A machine-checked formalization of sigma-protocols

Zero-knowledge proofs have vast applicability in cryptography, stemming from the fact that they can be used to force potentially malicious parties to abide by the rules of a protocol without forcing them to reveal their secrets. Sigma-protocols are a class of zero-knowledge proofs that can be implemented efficiently and that suffice for a great variety of practical applications. This paper presents a first machine-checked formalization of a comprehensive theory of Sigma-protocols. The development includes basic definitions, relations between different security properties that appear in the literature, and general composability theorems. We show its usefulness by formalizing concrete instances of several well-known protocols and proving their security. The formalization builds on CertiCrypt, a framework that provides support to reason about cryptographic systems in the Coq proof assistant, and that has previously been used to formalize security proofs of encryption and signature schemes.
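
For concreteness, the sketch below runs the classic Schnorr identification protocol, a standard example of a Sigma-protocol, over a toy group (p = 23, prime-order subgroup of size q = 11 generated by g = 2). The parameters are illustrative only, and the code is independent of the CertiCrypt development itself.

    import random

    p, q, g = 23, 11, 2          # g generates the order-q subgroup of Z_p*

    x = random.randrange(1, q)   # prover's secret
    h = pow(g, x, p)             # public key: h = g^x mod p

    # The three moves of a Sigma-protocol: commitment, challenge, response.
    r = random.randrange(1, q)
    a = pow(g, r, p)             # prover -> verifier: commitment
    c = random.randrange(0, q)   # verifier -> prover: challenge
    z = (r + c * x) % q          # prover -> verifier: response

    # Verifier checks g^z == a * h^c (mod p): completeness of the protocol.
    assert pow(g, z, p) == (a * pow(h, c, p)) % p
    print("transcript accepted")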

16:30 Benedikt Schmidt, Patrick Schaller and David Basin
Impossibility results for secret establishment

Security protocol design is a creative discipline where the solution space depends on the problem to be solved and the cryptographic operators available. In this paper, we examine the general question of when two agents can create a shared secret. Namely, given an equational theory describing the cryptographic operators available, is there a protocol that allows the agents to establish a shared secret?

We examine this question in several settings. First, we provide necessary and sufficient conditions for secret establishment using subterm convergent theories. This directly yields a decision procedure for this problem. As a consequence, we obtain impossibility results for symmetric encryption and signature schemes. Second, we use algebraic methods to prove impossibility results for two important theories that are not subterm convergent: XOR and abelian groups. Finally, we develop a general combination result that enables modular impossibility proofs. For example, the results for symmetric encryption and XOR can be combined to obtain impossibility for the joint theory.
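
For a concrete taste of why XOR alone cannot do the job (an illustration, not the paper's algebraic proof), the three-pass trick for sharing a secret without a pre-shared key collapses under XOR: a passive eavesdropper simply XORs the three observed messages together.

    import secrets

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    s  = b"top secret paylo"        # the secret A wants to share with B
    ka = secrets.token_bytes(16)    # A's private key
    kb = secrets.token_bytes(16)    # B's private key

    m1 = xor(s, ka)                 # A -> B
    m2 = xor(m1, kb)                # B -> A   (= s ^ ka ^ kb)
    m3 = xor(m2, ka)                # A -> B   (= s ^ kb); B removes kb to recover s

    assert xor(m3, kb) == s                   # the honest exchange "works" ...
    assert xor(xor(m1, m2), m3) == s          # ... but so does the eavesdropper
    print("eavesdropper recovered:", xor(xor(m1, m2), m3))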