PROOFS: Security Proofs for Embedded Systems

Hotel Paradise, HaeUnDae: PROOFS 2014 venue

Program

8h30 – 9h00 Registration
9h00 – 9h15 Opening — Welcome, presentation of PROOFS 2014 [slides]
9h15 – 10h15 Session 1: Formally Checked Protections Against Side-Channel Attacks   Chair: Takeshi Sugawara
"Implementation and Evaluation of a Leakage-Resilient ElGamal Key Encapsulation Mechanism", Abstract: Leakage-resilient cryptography aims to extend the rigorous guarantees achieved through the provable security paradigm to physical implementations. The constructions and mechanisms designed on basis of this new approach inevitably suffer from an Achilles heel: a bounded leakage assumption is needed. Currently, a huge gap exists between the theory of such designs and their implementation to confirm the leakage resilience in practice. The present work tries to narrow this gap for the leakage-resilient bilinear ElGamal key encapsulation mechanism (BEG-KEM) proposed by Kiltz and Pietrzak in 2010. Our first contribution is a variant of the bounded-leakage and the only-computation-leaks model that is closer to practice. We weaken the restriction on the image size of the leakage functions in these models and only insist that the inputs to the leakage functions have sufficient min-entropy left, in spite of the leakage, with no limitation on the quantity of this leakage. We provide a novel security reduction for BEG-KEM in this relaxed leakage model using the generic bilinear group axiom. Secondly, we show that a naive implementation of the exponentiation in BEG-KEM makes it impossible to meet the leakage bound. Instead of trying to find an exponentiation algorithm that meets the leakage axiom (which is a non-trivial problem in practice), we propose an advanced scheme, BEG-KEM+, that avoids exponentiation by a secret value, but rather uses an encoding into the base group due to Fouque and Tibouchi. Thirdly, we present a software implementation of BEG-KEM+ based on the Miracl library and provide detailed experimental results. We also assess its (theoretical) resistance against power analysis attacks from a practical perspective, taking into account the state-of-the-art in side-channel cryptanalysis.
Keywords: Secure implementation, Side-channel cryptanalysis, Leakage-resilient cryptography, Security proof, Public-key encryption, Pairings.
by David Galindo, Johann Großschädl, Zhe Liu, Praveen Kumar Vadnala and Srinivas Vivek. [slides]
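As a rough illustration of the relaxed leakage assumption mentioned above (an informal sketch, not the paper's exact definitions): instead of bounding the output length of a leakage function $f$, one only requires that the secret input $X$ keep enough min-entropy given the leakage,
$\widetilde{H}_\infty(X \mid f(X)) \geq k$,
for some threshold $k$, with no restriction on how many bits $f(X)$ may contain.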
"Formally Proved Security of Assembly Code Against Power Analysis: A Case Study on Balanced Logic", Abstract: In his keynote speech at CHES 2004, Kocher advocated that side-channel attacks were an illustration that formal cryptography was not as secure as it was believed because some assumptions (e.g., no auxiliary information is available during the computation) were not modeled. This failure is caused by formal methods’ focus on models rather than implementations. In this paper we present formal methods and tools for designing protected code and proving its security against power analysis. These formal methods avoid the discrepancy between the model and the implementation by working on the latter rather than on a high-level model. Indeed, our methods allow us (a) to automatically insert a power balancing countermeasure directly at the assembly level, and to prove the correctness of the induced code transformation; and (b) to prove that the obtained code is balanced with regard to a reasonable leakage model, and we show how to characterize the hardware to use the resources which maximize the relevancy of the model. The tools implementing our methods are then demonstrated in a case study in which we generate a provably protected present implementation for an 8-bit AVR smartcard.
Keywords: CPA, DPA, Dual-rail, Hamming distance, Hamming weight, NICV, OCaml, PRESENT, block cipher, formal proof, implementation, smartcard, static analysis, symbolic execution.
by Pablo Rauzy, Sylvain Guilley and Zakaria Najm. [slides]
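To make the "balanced" property concrete, here is a minimal Python sketch under a Hamming-weight leakage model (an illustration only, not the authors' OCaml tool chain, which works directly on assembly code): encoding every bit b as the dual-rail pair (b, not b) makes the Hamming weight of the stored state independent of the secret value.

# Hypothetical illustration of dual-rail balancing under a Hamming-weight leakage model.

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def dual_rail_encode(x: int, width: int = 8) -> int:
    """Store each bit b of x as the complementary pair (b, not b)."""
    encoded = 0
    for i in range(width):
        b = (x >> i) & 1
        encoded |= (b << (2 * i)) | ((1 - b) << (2 * i + 1))
    return encoded

# Every 8-bit value leaks the same Hamming weight (8) once encoded.
assert {hamming_weight(dual_rail_encode(x)) for x in range(256)} == {8}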
10h15 – 10h30 Coffee break
10h30 – 11h20 Session 2: Physically Unclonable Functions   Chair: Naofumi Homma
"PAC Learning of Arbiter PUFs", Abstract: The general concept of Physically Unclonable Functions (PUFs) has been nowadays widely accepted and adopted to meet the requirements of secure identification and key generation/storage for cryptographic ciphers. However, shattered by different kinds of attacks, it has been proved that the promised security features of arbiter PUFs, including unclonability and unpredictability do not hold unconditionally. It has been stated in the literature that the security of arbiter PUFs can be challenged by launching modelling attacks, i.e., applying machine learning methods. In this case, a large set of Challenge-Response Pairs (CRP) is collected by an attacker, aiming at mathematically modelling the Challenge-Response behavior for a given arbiter PUF. However, the success of all existing modeling attacks rests so far on pure trial and error estimates. It means that neither the probability of obtaining a useful model (confidence), nor the maximum number of required CRPs, nor the correct prediction of an unknown challenge (accuracy) is guaranteed at all. To address these issues, this work will present a Probably Approximately Correct (PAC) learning algorithm. This proves that learning of any arbiter PUF for prescribed levels of accuracy and confidence can be done in polynomial time. Based on some crucial discretization process, we are able to define a Deterministic Finite Automaton (of polynomial size), which exactly accepts that regular language that corresponds to the challenges mapped by the given PUF to one responses. We also prove that the maximum required number of CRPs is polynomial in the number of arbiter stages. A further analysis reveals that this maximum number of CRPs is also polynomial in the maximum deviation of the arbiter delay as well as the pre-defined levels of accuracy and confidence. To the best of our knowledge this is the first algorithm which can provably learn an arbitrary arbiter PUF. Moreover, our proof of the PAC learnability of arbiter PUFs gives many new insights into the correct construction of secure arbiter PUFs in general.
Keywords: Arbiter PUF, Machine Learning, PAC learning, Regular Language, Deterministic Finite Automata.
by Fatemeh Ganji, Shahin Tajik and Jean-Pierre Seifert. [slides]
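For context, the classical PAC bound for a finite hypothesis class $H$ (a generic textbook bound, not the concrete bound derived in the paper) already conveys why the number $m$ of CRPs can remain polynomial:
$m \geq \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)$,
so as long as $\ln|H|$ grows only polynomially in the number of arbiter stages, polynomially many CRPs suffice for error at most $\varepsilon$ and confidence $1 - \delta$.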
"Methods to Enhance the Reliability of Key Generation from Physically Unclonable Functions", Abstract: The Physically Unclonable Functions (PUF) represent an elegant solution to generate a key without any human intervention, and to avoid the secret storage in attackable Non-Volatile memory. However, even if the PUF core is of low complexity, this is not the case of the external processing to make it reliable. Indeed the error correcting codes (ECC) used for this purpose needs rather complex decoder which can be the source of physical attacks. We propose in this paper low-cost and proven methods to make a PUF-based key generator more reliable. To demonstrate them, we use the Loop PUF, which intrinsically provides interesting features, as a huge set of challenges and responses in integer format. The methods have been validated both formally and by experiments carried out on PUF designed in ASICs 65 nm CMOS technology.
Keywords: PUF, Key Generation, Reliability, Low-cost, ECC.
by Jean-Luc Danger, Florent Lozac'h and Zouha Cherif. (short talk) [slides]
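To see why reliability post-processing matters at all, here is a simple Python sketch of majority voting over repeated noisy readings of a PUF bit (a naive repetition code shown for illustration only; the paper's proven low-cost methods rely on the Loop PUF, not on this):

from math import comb

def majority_error(p: float, n: int) -> float:
    """Probability that a majority vote over n independent readings
    (bit error rate p, n odd) still yields the wrong key bit."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 7):
    print(n, majority_error(0.05, n))  # a 5% raw error rate drops to about 2e-4 at n = 7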
11h20 – 12h50 Invited keynote talk #1   Chair: Sylvain Guilley
"Verified cryptographic implementations: how far can we go?", Abstract: EasyCrypt is a computer-assisted framework for proving the security of cryptographic constructions. However, there is a significant gap between security proofs done in the usual provable security style and cryptographic implementations used in practice; as a consequence, real-world cryptography is sometimes considered as "one of the many ongoing disaster areas in security". We have recently extended EasyCrypt with support for reasoning about C implementations, and exploited the CompCert verified compiler to carry the security proof to executable code. Moreover, we have developed verified type-based information flow analyses on assembly code to ensure that executable code is protected against cache-based side-channel attacks. In the second part of the talk, I will present a new method (based on program synthesis) for discovering fault attacks in cryptographic implementations. by Gilles Barthe. [slides]
12h50 – 14h15 Lunch
14h15 – 15h15 Session 3: Attacks on Embedded Systems   Chair: Morgan Düring
"Physical functions : the common factor of side-channel and fault attacks ?", Abstract: Security is a key component for information technologies and communication. Among the security threats, a very important one is certainly due to vulnerabilities of the integrated circuits that implement cryptographic algorithms. These electronic devices (such as smartcards) that could fall into the hands of malicious people, can be subject to ``physical attacks''. These attacks are generally classified into two categories : fault and side-channel attacks. One of the main challenges to secure circuits against such attacks is to propose methods and tools to estimate as soundly as possible, the efficiency of protections. Numerous works attend to provide tools based on sound statistical techniques but, to our knowledge, only address side-channel attacks. In this article, a formal link between fault and side-channel attacks is presented. The common factor between them is what we called the 'physical' function which is an extension of the concept of 'leakage function' widely used in side-channel community. We think that our work could make possible the re-use (certainly modulo some adjustments) for fault attacks of the strong theoretical background developed for side-channel attacks. This work could also make easier the combination of side-channel and fault attacks and thus, certainly could facilitate the discovery of new attack paths. But more importantly, the notion of physical functions opens from now new challenges about estimating the protection of circuits.
Keywords: Side-Channel attacks, Fault attacks, Methods and tools for leakage estimation, Cryptography, Security of integrated circuits.
by Bruno Robisson and Hélène Le Bouder. [slides]
"Method Taking into Account Process Dispersions to Detect Hardware Trojan Horse by Side-Channel", Abstract: Hardware trojan horses inserted in integrated circuits have received special attention of researchers. Most of the recent researches focus on detecting the presence of hardware trojans through various techniques like reverse engineering, test/verification methods and side-channel analysis (SCA). Previous works using SCA for trojan detection are based on power measurements or even simulations. When using real silicon, the results are strongly biased by the process variations, the exact size of the trojan and its location. In this paper, we propose a metrics to measure the impact of these parameters. For the first time, we give the detection probability of a trojan as a function of its activity, even if untriggered. Moreover we use electromagnetic (EM) as side-channel as it provides a better spatial and temporal resolution than power measurements. We conduct a proof of concept study using an AES-128 cryptographic core running on a set of 10 Virtex-5 FPGA. Our results show that, using this metric, there is a probability superior than 99% with a false negative rate of 0.23% to detect a HT bigger than 1% of the original circuit.
Keywords: Hardware Trojan, Trojan detection, False negative probability, Electromagnetic Measurements, Side-Channel Analysis, Process Variation, FPGA.
by Xuan Thuy Ngo, Zakaria Najm, Shivam Bhasin, Sylvain Guilley and Jean-Luc Danger. [slides]
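As a rough sketch of how an EM side-channel can reveal extra logic (a hypothetical Welch t-test distinguisher between a trojan-free reference and a suspect device; this is not the metric proposed in the paper):

import numpy as np

def welch_t(golden: np.ndarray, suspect: np.ndarray) -> np.ndarray:
    """Per-sample Welch t-statistic for two trace sets of shape (traces, samples)."""
    m1, m2 = golden.mean(axis=0), suspect.mean(axis=0)
    v1, v2 = golden.var(axis=0, ddof=1), suspect.var(axis=0, ddof=1)
    return (m1 - m2) / np.sqrt(v1 / len(golden) + v2 / len(suspect))

def looks_infected(golden: np.ndarray, suspect: np.ndarray, threshold: float = 4.5) -> bool:
    # Common heuristic: |t| > 4.5 at any sample point indicates a significant
    # difference between the two populations, here hinting at inserted logic.
    return bool(np.any(np.abs(welch_t(golden, suspect)) > threshold))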
15h15 – 16h15 Invited keynote talk #2   Chair: Pablo Rauzy
"Error-Correcting Codes for Cryptography", Abstract: Error-correcting codes have been used in the study of various McEliece-type cryptosystems, Boolean functions, and secret sharing schemes. Recently, Maghrebi, Guilley, Carlet, and Danger introduced leakage squeezing countermeasure against high-order attacks and gave optimal first-order masking with linear and non-linear codes. Motivated by this work, Carlet, Gaborit, Kim, and Solé introduced another application of codes in the study of a countermeasure of side channel attacks. These are called the complementary information set (CIS) codes. A binary linear code of length 2n and dimension n is called a CIS code if it has two disjoint information sets. This class of codes contains self-dual codes as a subclass. Such codes permit to improve the cost of masking cryptographic algorithms against side channel attacks. In this talk, we describe CIS codes and t-CIS codes as their generalization. If time permits, we briefly introduce multiply constant weight codes for the study of PUFs which are useful in RFIDs and smart cards. by Jon-Lark Kim. [slides]
16h15 – 16h30 Wrap-up [slides] and group picture by the sea


Accepted papers

The full versions of the selected revised papers are available below as PDF files, behind access control (credentials were given at the workshop).

Regular papers:

Short papers:

Keynote speakers:


Sponsors
Institut Mines-Telecom
TELECOM-ParisTech
Secure-IC S.A.S.