Tuesday, 16 July 2019

Cryptology

Generic Attacks on Hash Combiners

Abstract

Hash combiners are a practical way to make cryptographic hash functions more tolerant to future attacks and compatible with existing infrastructure. A combiner combines two or more hash functions in a way that is hopefully more secure than each of the underlying hash functions, or at least remains secure as long as one of them is secure. Two classical hash combiners are the exclusive-or (XOR) combiner \(\mathcal{H}_1(M) \oplus \mathcal{H}_2(M)\) and the concatenation combiner \(\mathcal{H}_1(M) \Vert \mathcal{H}_2(M)\). Both of them process the same message using the two underlying hash functions in parallel. Apart from parallel combiners, there are also cascade constructions sequentially calling the underlying hash functions to process the message repeatedly, such as Hash-Twice \(\mathcal{H}_2(\mathcal{H}_1(IV, M), M)\) and the Zipper hash \(\mathcal{H}_2(\mathcal{H}_1(IV, M), \overleftarrow{M})\), where \(\overleftarrow{M}\) is the reverse of the message M. In this work, we study the security of these hash combiners by devising the best-known generic attacks. The results show that the security of most of the combiners is not as high as commonly believed. We summarize our attacks and their computational complexities (ignoring the polynomial factors) as follows:
  1. Several generic preimage attacks on the XOR combiner:
     • A first attack with a best-case complexity of \( 2^{5n/6} \), obtained for messages of length \( 2^{n/3} \). It relies on a novel technical tool named the interchange structure. It is applicable to combiners whose underlying hash functions follow the Merkle–Damgård construction or the HAIFA framework.
     • A second attack with a best-case complexity of \( 2^{2n/3} \), obtained for messages of length \( 2^{n/2} \). It exploits properties of functional graphs of random mappings. It achieves a significant improvement over the first attack but is only applicable when the underlying hash functions use the Merkle–Damgård construction.
     • An improvement upon the second attack with a best-case complexity of \( 2^{5n/8} \), obtained for messages of length \( 2^{5n/8} \). It further exploits properties of functional graphs of random mappings and uses longer messages.
     These attacks show a rather surprising result: regarding preimage resistance, the sum of two n-bit narrow-pipe hash functions following the considered constructions can never provide n-bit security.
  2. A generic second-preimage attack on the concatenation combiner of two Merkle–Damgård hash functions. This attack finds second preimages faster than \( 2^n \) for challenges longer than \( 2^{2n/7} \) and has a best-case complexity of \( 2^{3n/4} \), obtained for challenges of length \( 2^{3n/4} \). It also exploits properties of functional graphs of random mappings.
  3. The first generic second-preimage attack on the Zipper hash with underlying hash functions following the Merkle–Damgård construction. The best-case complexity is \( 2^{3n/5} \), obtained for challenge messages of length \( 2^{2n/5} \).
  4. An improved generic second-preimage attack on Hash-Twice with underlying hash functions following the Merkle–Damgård construction. The best-case complexity is \( 2^{13n/22} \), obtained for challenge messages of length \( 2^{13n/22} \).
     The last three attacks show that regarding second-preimage resistance, the concatenation and cascade of two n-bit narrow-pipe Merkle–Damgård hash functions do not provide much more security than can be provided by a single n-bit hash function.
Our main technical contributions include the following:
  1. The interchange structure, which enables simultaneously controlling the behaviours of two hash computations sharing the same input.
  2. The simultaneous expandable message, which is a set of messages whose lengths cover a whole appropriate range and which form a multi-collision for both of the underlying hash functions.
  3. New ways to exploit the properties of functional graphs of random mappings generated by fixing the message block input to the underlying compression functions.
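
To make the combiner constructions defined in the abstract above concrete, here is a minimal, purely illustrative Python sketch. It builds a toy Merkle–Damgård iteration (without proper length padding) on top of standard hash functions, with SHA-256 and BLAKE2s standing in for \(\mathcal{H}_1\) and \(\mathcal{H}_2\); the block size, the IVs, and the function names are assumptions chosen for illustration and are unrelated to the attacks in the paper. The only point of the sketch is to show where the shared message, the chaining values, and the reversed block order enter each construction.

```python
import hashlib

BLOCK = 32  # toy block size in bytes (an assumption for illustration)

def compress(name, chaining, block):
    """Toy compression function: hash of (chaining value || message block)."""
    return hashlib.new(name, chaining + block).digest()

def md_hash(name, iv, msg):
    """Toy Merkle-Damgard iteration (no length padding, illustration only)."""
    msg = msg + b"\x00" * (-len(msg) % BLOCK)   # zero-pad to a block multiple
    h = iv
    for i in range(0, len(msg), BLOCK):
        h = compress(name, h, msg[i:i + BLOCK])
    return h

IV1 = b"\x00" * 32   # illustrative IVs for the two "hash functions"
IV2 = b"\x01" * 32

def xor_combiner(m):      # H1(M) xor H2(M)
    a = md_hash("sha256", IV1, m)
    b = md_hash("blake2s", IV2, m)
    return bytes(x ^ y for x, y in zip(a, b))

def concat_combiner(m):   # H1(M) || H2(M)
    return md_hash("sha256", IV1, m) + md_hash("blake2s", IV2, m)

def hash_twice(m):        # H2(H1(IV, M), M): second pass starts from H1's digest
    return md_hash("blake2s", md_hash("sha256", IV1, m), m)

def zipper(m):            # H2(H1(IV, M), reverse of M's blocks)
    padded = m + b"\x00" * (-len(m) % BLOCK)
    blocks = [padded[i:i + BLOCK] for i in range(0, len(padded), BLOCK)]
    return md_hash("blake2s", md_hash("sha256", IV1, m), b"".join(reversed(blocks)))

msg = b"a message long enough to span several toy blocks " * 3
print(xor_combiner(msg).hex())
print(concat_combiner(msg).hex())
```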

Feasibility and Infeasibility of Secure Computation with Malicious PUFs

Abstract

A recent line of work has explored the use of physically unclonable functions (PUFs) for secure computation, with the goals of (1) achieving universal composability without additional setup and/or (2) obtaining unconditional security (i.e., avoiding complexity-theoretic assumptions). Initial work assumed that all PUFs, even those created by an attacker, are honestly generated. Subsequently, researchers have investigated models in which an adversary can create malicious PUFs with arbitrary behavior. Researchers have considered both malicious PUFs that might be stateful and malicious PUFs that can have arbitrary behavior but are guaranteed to be stateless. We settle the main open questions regarding secure computation in the malicious-PUF model:
  • We prove that unconditionally secure oblivious transfer is impossible, even in the stand-alone setting, if the adversary can construct (malicious) stateful PUFs.
  • We show that if the attacker is limited to creating (malicious) stateless PUFs, then universally composable two-party computation is possible, unconditionally.

Efficient Fully Structure-Preserving Signatures and Shrinking Commitments

Abstract

In structure-preserving signatures, public keys, messages, and signatures are all collections of source group elements of some bilinear groups. In this paper, we introduce fully structure-preserving signature schemes, with the additional requirement that even secret keys are group elements. This strong property allows efficient non-interactive proofs of knowledge of the secret key, which is useful in designing cryptographic protocols under simulation-based security where online extraction of the secret key is needed. We present efficient constructions under simple standard assumptions and pursue even more efficient constructions with the extra property of randomizability based on the generic bilinear group model. An essential building block for our efficient standard model construction is a shrinking structure-preserving trapdoor commitment scheme, which is by itself an important primitive and of independent interest as it appears to contradict a known impossibility result that structure-preserving commitments cannot be shrinking. We argue that a relaxed binding property lets us circumvent the impossibility while still retaining the usefulness of the primitive in important applications as mentioned above.

The Magic of ELFs

Abstract

We introduce the notion of an Extremely Lossy Function (ELF). An ELF is a family of functions with an image size that is tunable anywhere from injective to having a polynomial-sized image. Moreover, for any efficient adversary, for a sufficiently large polynomial r (necessarily chosen to be larger than the running time of the adversary), the adversary cannot distinguish the injective case from the case of image size r. We develop a handful of techniques for using ELFs, and show that such extreme lossiness is useful for instantiating random oracles in several settings. In particular, we show how to use ELFs to build secure point function obfuscation with auxiliary input, as well as polynomially many hardcore bits for any one-way function. Such applications were previously known from strong knowledge assumptions—for example, polynomially many hardcore bits were only known from differing inputs obfuscation, a notion whose plausibility has been seriously challenged. We also use ELFs to build a simple hash function with output intractability, a new notion we define that may be useful for generating common reference strings. Next, we give a construction of ELFs relying on the exponential hardness of the decisional Diffie–Hellman problem, which is plausible in elliptic curve groups. Combining with the applications above, our work gives several practical constructions relying on qualitatively different—and arguably better—assumptions than prior works.

Probabilistic Termination and Composability of Cryptographic Protocols

Abstract

When analyzing the round complexity of multi-party protocols, one often overlooks the fact that underlying resources, such as a broadcast channel, can by themselves be expensive to implement. For example, it is well known that it is impossible to implement a broadcast channel by a (deterministic) protocol in a sublinear (in the number of corrupted parties) number of rounds. The seminal works of Rabin and Ben-Or from the early 1980s demonstrated that limitations such as the above can be overcome by using randomization and allowing parties to terminate at different rounds, igniting the study of protocols over point-to-point channels with probabilistic termination and expected constant-round complexity. However, absent a rigorous simulation-based definition, the suggested protocols are proven secure in a property-based manner or via ad hoc simulation-based frameworks, therefore guaranteeing limited, if any, composability. In this work, we put forth the first simulation-based treatment of multi-party cryptographic protocols with probabilistic termination. We define secure multi-party computation (MPC) with probabilistic termination in the UC framework and prove a universal composition theorem for probabilistic-termination protocols. Our theorem allows us to compile a protocol that uses deterministic-termination hybrids into a protocol that uses expected-constant-round protocols for emulating these hybrids, preserving the expected round complexity of the calling protocol. We showcase our definitions and compiler by providing the first composable protocols (with simulation-based security proofs) for the following primitives, relying on point-to-point channels: (1) expected-constant-round perfect Byzantine agreement, (2) expected-constant-round perfect parallel broadcast, and (3) perfectly secure MPC with round complexity independent of the number of parties.

Beyond Conventional Security in Sponge-Based Authenticated Encryption Modes

Abstract

The Sponge function is known to achieve \(2^{c/2}\) security, where c is its capacity. This bound was carried over to its keyed variants, such as SpongeWrap, to achieve a \(\min \{2^{c/2},2^\kappa \}\) security bound, with \(\kappa \) the key length. Similarly, many CAESAR competition submissions were designed to comply with the classical \(2^{c/2}\) security bound. We show that Sponge-based constructions for authenticated encryption can achieve the significantly higher bound of \(\min \{2^{b/2},2^c,2^\kappa \}\) , with \(b>c\) the permutation size, by proving that the CAESAR submission NORX achieves this bound. The proof relies on rigorous computation of multi-collision probabilities, which may be of independent interest. We additionally derive a generic attack based on multi-collisions that matches the bound. We show how to apply the proof to five other Sponge-based CAESAR submissions: Ascon, CBEAM/STRIBOB, ICEPOLE, Keyak, and two out of the three PRIMATEs. A direct application of the result shows that the parameter choices of some of these submissions are overly conservative. Simple tweaks render the schemes considerably more efficient without sacrificing security. We finally consider the remaining one of the three PRIMATEs, APE, and derive a blockwise adaptive attack in the nonce-respecting setting with complexity \(2^{c/2}\) , therewith demonstrating that the techniques cannot be applied to APE.
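
To get a feel for how much the improved bound can matter, the following back-of-the-envelope Python snippet compares the classical \(\min \{2^{c/2},2^\kappa \}\) bound with the improved \(\min \{2^{b/2},2^c,2^\kappa \}\) bound. The parameter values b = 400, c = 160, \(\kappa\) = 128 are hypothetical and chosen only for illustration; they are not the parameters of any particular CAESAR submission.

```python
def classical_bound(c, kappa):
    """Classical keyed-sponge security min{2^(c/2), 2^kappa}, returned as a log2 value."""
    return min(c / 2, kappa)

def improved_bound(b, c, kappa):
    """Improved bound min{2^(b/2), 2^c, 2^kappa}, returned as a log2 value."""
    return min(b / 2, c, kappa)

# Hypothetical sponge parameters, in bits (illustrative only):
b, c, kappa = 400, 160, 128          # permutation width, capacity, key length
print("classical:", classical_bound(c, kappa))    # 80.0  -> security about 2^80
print("improved :", improved_bound(b, c, kappa))  # 128.0 -> security about 2^128
```

In this hypothetical setting the improved bound lifts the security level from about \(2^{80}\) to \(2^{128}\), which is exactly the kind of slack that lets a designer shrink the capacity (and enlarge the rate) without dropping below a given security target.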

On Black-Box Complexity of Universally Composable Security in the CRS Model

Abstract

In this work, we study the intrinsic complexity of black-box Universally Composable (UC) secure computation based on general assumptions. We present a thorough study of various corruption models while focusing on achieving security in the common reference string (CRS) model. Our results include the following:
  • Static UC secure computation. Designing the first static UC oblivious transfer protocol based on public-key encryption and stand-alone semi-honest oblivious transfer. As a corollary, we obtain the first black-box constructions of UC secure computation assuming only two-round semi-honest oblivious transfer.
  • One-sided UC secure computation. Designing adaptive UC two-party computation with single corruptions assuming public-key encryption with oblivious ciphertext generation.
  • Adaptive UC secure computation. Designing an adaptively secure UC commitment scheme assuming only public-key encryption with oblivious ciphertext generation. As a corollary, we obtain the first black-box constructions of adaptive UC secure computation assuming only (trapdoor) simulatable public-key encryption (as well as a variety of concrete assumptions).
    We remark that such a result was not known even under non-black-box constructions.

Oblivious Network RAM and Leveraging Parallelism to Achieve Obliviousness

Abstract

Oblivious RAM (ORAM) is a cryptographic primitive that allows a trusted CPU to securely access untrusted memory, such that the access patterns reveal nothing about sensitive data. ORAM is known to have broad applications in secure processor design and secure multiparty computation for big data. Unfortunately, due to a logarithmic lower bound by Goldreich and Ostrovsky (J ACM 43(3):431–473, 1996), ORAM is bound to incur a moderate cost in practice. In particular, with the latest developments in ORAM constructions, we are quickly approaching this limit, and the room for performance improvement is small. In this paper, we consider new models of computation in which the cost of obliviousness can be fundamentally reduced in comparison with the standard ORAM model. We propose the oblivious network RAM model of computation, where a CPU communicates with multiple memory banks, such that the adversary observes only which bank the CPU is communicating with, but not the address offset within each memory bank. In other words, obliviousness within each bank comes for free—either because the architecture prevents a malicious party from observing the address accessed within a bank, or because another solution is used to obfuscate memory accesses within each bank—and hence we only need to obfuscate communication patterns between the CPU and the memory banks. We present new constructions for obliviously simulating general or parallel programs in the network RAM model. We describe applications of our new model to distributed storage settings with a network adversary.
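
As a toy illustration of what the network adversary is assumed to observe in this model, the Python sketch below routes logical addresses to banks and records only the bank index of each access. The number of banks, the bank size, and the address layout are hypothetical choices made for illustration; this is not one of the paper's oblivious simulations, which additionally make the observable inter-bank pattern independent of the data.

```python
NUM_BANKS = 4
BANK_SIZE = 1024                    # words per bank (illustrative)

memory = [[0] * BANK_SIZE for _ in range(NUM_BANKS)]
adversary_view = []                 # what the network adversary sees: bank indices only

def access(addr, value=None):
    """Read or write a logical address; only the bank identity is revealed."""
    bank, offset = divmod(addr, BANK_SIZE)
    adversary_view.append(bank)     # the offset within the bank stays hidden
    if value is None:
        return memory[bank][offset]
    memory[bank][offset] = value

for addr in (3, 700, 42):           # three accesses at very different offsets in bank 0
    access(addr, addr * 2)

print(adversary_view)               # [0, 0, 0]: the adversary learns only "bank 0, three times"
```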

Leakage Resilience from Program Obfuscation

Abstract

The literature on leakage-resilient cryptography contains various leakage models that provide different levels of security. In the bounded leakage model (Akavia et al.—TCC 2009), it is assumed that there is a fixed upper bound L on the number of bits the attacker may leak on the secret key in the entire lifetime of the scheme. Alternatively, in the continual leakage model (Brakerski et al.—FOCS 2010, Dodis et al.—FOCS 2010), the lifetime of a cryptographic scheme is divided into “time periods” between which the scheme’s secret key is updated. Furthermore, in its attack the adversary is allowed to obtain some bounded amount of leakage on the current secret key during each time period. In the continual leakage model, a challenging problem has been to provide security against leakage on key updates, that is, leakage that is a function of not only the current secret key but also the randomness used to update it. We propose a modular approach to overcome this problem based on program obfuscation. Namely, we present a compiler that transforms any public key encryption or signature scheme that achieves a slight strengthening of continual leakage resilience, which we call consecutive continual leakage resilience, to one that is continual leakage resilient with leakage on key updates, assuming indistinguishability obfuscation (Barak et al.—CRYPTO 2001, Garg et al.—FOCS 2013). Under stronger forms of obfuscation, the leakage rate tolerated by our compiled scheme is essentially as good as that of the starting scheme. Our compiler is derived by making a connection between the problems of leakage on key updates and so-called sender-deniable encryption (Canetti et al.—CRYPTO 1997), which was recently constructed based on indistinguishability obfuscation by Sahai and Waters (STOC 2014). In the bounded leakage model, we give an approach to constructing leakage-resilient public key encryption from program obfuscation based on the public key encryption scheme of Sahai and Waters (STOC 2014). In particular, we achieve leakage-resilient public key encryption tolerating L bits of leakage for any L from \({\mathsf {iO}} \) and one-way functions. We build on this to achieve leakage-resilient public key encryption with optimal leakage rate of \(1-o(1)\) based on stronger forms of obfuscation and collision-resistant hash functions. Such a leakage rate is not known to be achievable in a generic way based on public key encryption alone. We then develop additional techniques to construct public key encryption that is (consecutive) continual leakage resilient under appropriate assumptions, which we believe is of independent interest.

Key Establishment à la Merkle in a Quantum World

Abstract

In 1974, Ralph Merkle proposed the first unclassified protocol for secure communications over insecure channels. When legitimate communicating parties are willing to spend an amount of computational effort proportional to some parameter N, an eavesdropper cannot break into their communication without spending a time proportional to  \(N^2\) , which is quadratically more than the legitimate effort. In a quantum world, however, Merkle’s protocol is immediately broken by Grover’s algorithm, but it is easily repaired if we are satisfied with a quantum protocol against which a quantum adversary needs to spend a time proportional to \(N^{3/2}\) in order to break it. Can we do better? We give two new key establishment protocols in the spirit of Merkle’s. The first one, which requires the legitimate parties to have access to a quantum computer, resists any quantum adversary who is not willing to make an effort at least proportional to  \(N^{5/3}\) , except with vanishing probability. Our second protocol is purely classical, yet it requires any quantum adversary to work asymptotically harder than the legitimate parties, again except with vanishing probability. In either case, security is proved for a typical run of the protocols: the probabilities are taken over the random (or quantum) choices made by the legitimate participants in order to establish their key as well as over the random (or quantum) choices made by the adversary who is trying to be privy to it.
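
For readers who want a concrete picture of the quadratic gap being referenced, here is a toy, purely classical Python sketch in the spirit of Merkle's idea, in a hash-inversion formulation. The domain size, the use of SHA-256 as a stand-in for a random function, and the message flow are illustrative assumptions, not either of the paper's new protocols. The honest parties each spend roughly N units of work, while a classical eavesdropper who sees only the published hashes and the announced hash must search a domain of size roughly N² to recover the key.

```python
import hashlib
import random

N = 2 ** 8            # legitimate-effort parameter (toy size)
DOMAIN = N * N        # secrets are drawn from a domain of size about N^2

def H(x: int) -> bytes:
    """Public 'random-looking' function standing in for a random oracle."""
    return hashlib.sha256(x.to_bytes(8, "big")).digest()

# Alice: publish the hashes of N random secrets (about N units of work).
alice_secrets = random.sample(range(DOMAIN), N)
published = {H(x): x for x in alice_secrets}

# Bob: hash random domain elements until one lands in Alice's list
# (expected about DOMAIN / N = N tries), then announce only the hash.
while True:
    guess = random.randrange(DOMAIN)
    announced = H(guess)
    if announced in published:
        break

key_bob, key_alice = guess, published[announced]
assert key_bob == key_alice

# An eavesdropper sees only `published` and `announced`; recovering the key
# amounts to inverting H on `announced`, i.e. searching a domain of size ~N^2.
```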
