09:30–09:50 | Breakfast

09:50–10:00 | Welcome

10:00–10:30 | Henry Corrigan-Gibbs | The Function Inversion Problem: Barriers and Opportunities

Abstract:
In the function-inversion problem, an algorithm gets black-box access to a function $f: [N] \to [N]$ and takes as input a point $y \in [N]$, along with $S$ bits of auxiliary information about $f$. After running for time $T$, the algorithm must output an $x \in [N]$ such that $f(x) = y$, if one exists. This problem, first studied by Hellman (1980), has manifold applications to cryptanalysis.
Hellman’s algorithm for this problem achieves the time-space tradeoff $S = T = \tilde{O}(N^{2/3})$ when $f$ is a random function, while the best known lower bound, due to Yao (1990), shows that $ST = \tilde{\Omega}(N)$, which leaves open the possibility of an $S = T = \tilde{O}(N^{1/2})$ algorithm. There remains a long-standing and vexing gap between these upper and lower bounds.
This talk will discuss new connections between function inversion and other areas of theoretical computer science. These connections suggest that making progress on either the lower-bound or upper-bound side of this problem might be challenging. Moreover, we will see how these connections—in concert with Hellman-style algorithms—improve the best upper bounds for well-studied problems in communication complexity and data structures.
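The chain-based idea behind Hellman's tradeoff can be sketched in a few lines. This is a toy single-table version with illustrative parameters (m chains of length t), not the full multi-table construction, so inversion succeeds only for points covered by the precomputed chains:

```python
import random

def hellman_tables(f, N, m, t):
    # Precompute m chains of length t; store only (endpoint -> start point).
    # This is the S bits of auxiliary information about f.
    table = {}
    for _ in range(m):
        x0 = random.randrange(N)
        x = x0
        for _ in range(t):
            x = f(x)
        table[x] = x0
    return table

def invert(f, table, y, t):
    # Online phase: walk y forward; on hitting a stored endpoint,
    # replay that chain from its start to find a predecessor of y.
    z = y
    for i in range(t):
        if z in table:                  # z == f^i(y); endpoint hit?
            x = table[z]                # chain start
            for _ in range(t - 1 - i):  # walk to the candidate preimage
                x = f(x)
            if f(x) == y:               # guard against false alarms
                return x
        z = f(z)
    return None  # y not covered by this table

f = lambda x: (x * x + 1) % 101
table = hellman_tables(f, 101, 10, 10)
```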

10:30–11:00 | Mohammad Hajiabadi | New Techniques for Trapdoor Functions and Applications

Abstract:
Trapdoor functions (TDFs) are a fundamental primitive in cryptography. Yet, until recently, existing techniques allowed TDFs to be obtained only from a very narrow class of assumptions, largely limited to decisional assumptions.
In this talk, I will present new techniques for constructing TDFs. In particular, I will present the first construction of TDFs (and deterministic encryption) from the Computational Diffie-Hellman (CDH) assumption, as well as CDH-based constructions of deterministic encryption with linear ciphertext size, beating the quadratic ciphertext size of previous constructions which were based on the stronger Decisional Diffie-Hellman assumption.
Our efficiency-enhancing techniques involve a novel application of error-correcting codes to a setting which we call encode-with-hardcore bits. This avoids the expensive previous techniques of encoding input bits via group elements.
Based on joint work with Sanjam Garg and another work with Sanjam Garg and Romain Gay.

11:00–11:30 | Break

11:30–12:00 | Nick Spooner | Aurora: Transparent Succinct Arguments for R1CS

Abstract:
We design, implement, and evaluate a zero-knowledge succinct non-interactive argument (SNARG) for Rank-1 Constraint Satisfaction (R1CS), a widely deployed NP language undergoing standardization. Our SNARG has a transparent setup, is plausibly post-quantum secure, and uses lightweight cryptography. A proof attesting to the satisfiability of $n$ constraints has size $O(\log^2 n)$; it can be produced with $O(n \log n)$ field operations and verified with $O(n)$ field operations. At $128$ bits of security, proofs are less than 250 kB even for several million constraints, more than 10x shorter than prior SNARGs with similar features.
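For readers unfamiliar with R1CS: an instance is a list of constraint triples $(a_i, b_i, c_i)$ over a finite field, and a witness vector $z$ satisfies it when $\langle a_i, z \rangle \cdot \langle b_i, z \rangle = \langle c_i, z \rangle$ for every $i$. A minimal checker, with a toy instance (the variable layout and the equation $x^3 + x + 5 = 35$ are illustrative examples, not from the talk):

```python
def r1cs_satisfied(A, B, C, z, p):
    """Check <a_i, z> * <b_i, z> == <c_i, z> (mod p) for every constraint i."""
    dot = lambda row, v: sum(r * x for r, x in zip(row, v)) % p
    return all(dot(a, z) * dot(b, z) % p == dot(c, z)
               for a, b, c in zip(A, B, C))

# Toy instance encoding x^3 + x + 5 = 35 with z = (1, x, out, x^2, x^3):
p = 97
A = [[0, 1, 0, 0, 0],   # x * x         = x^2
     [0, 0, 0, 1, 0],   # x^2 * x       = x^3
     [5, 1, 0, 0, 1]]   # (x^3 + x + 5) * 1 = out
B = [[0, 1, 0, 0, 0],
     [0, 1, 0, 0, 0],
     [1, 0, 0, 0, 0]]
C = [[0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 1, 0, 0]]
z = [1, 3, 35, 9, 27]   # witness x = 3
```

A SNARG for R1CS lets the prover convince the verifier that such a $z$ exists without revealing it and with proof size far below the constraint count.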

12:00–12:30 | Sam Kim | Watermarking PRFs from Lattices: Stronger Security via Extractable PRFs

Abstract:
In this work, we construct new lattice-based secret-key watermarking schemes for PRFs that both provide unremovability against adversaries that have access to the mark-extraction oracle and offer a strong and meaningful notion of pseudorandomness even against the watermarking authority (i.e., the outputs of unmarked keys are pseudorandom almost everywhere). Moreover, security of several of our schemes can be based on the hardness of computing quasi-polynomial approximations to worst-case lattice problems. This is a qualitatively weaker assumption than that needed for existing lattice-based constructions of watermarking (that support message-embedding), all of which require sub-exponential approximation factors. Our constructions rely on a new cryptographic primitive called an extractable PRF, which is of independent interest.

12:30–14:00 | Lunch

14:00–15:00 | Daniele Micciancio | Lattice cryptography: from linear functions to fully homomorphic encryption (Keynote)

Abstract:
Lattice cryptography has many attractive features, from conjectured resistance against quantum attacks, to high parallelizability. But nothing brought as much attention to lattice cryptography as its ability to deliver complex and powerful applications, like the design of fully homomorphic encryption schemes: cryptosystems that allow the execution of arbitrary programs on encrypted data. Still, even the most complex applications of lattice cryptography are based on some very simple and intuitive ideas.
In this talk I will present a personal perspective of the evolution of lattice cryptography, starting from the design of simple one-way functions based on the subset-sum problem, and then showing how to obtain increasingly more powerful applications (all the way to fully homomorphic encryption) via a sequence of simple, elementary transformations.
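The subset-sum one-way function candidate mentioned above is simple enough to state in a few lines (toy parameters for illustration only; real instantiations require careful choices of n and q):

```python
import random

def keygen(n, q, seed=None):
    """Public key: n random weights modulo q."""
    rng = random.Random(seed)
    return [rng.randrange(q) for _ in range(n)]

def f(a, q, x):
    """Candidate one-way function: subset sum of the weights selected by bits x.
    Easy to evaluate; inverting it means solving a subset-sum instance."""
    return sum(ai * xi for ai, xi in zip(a, x)) % q

a = keygen(8, 1 << 16, seed=0)
y = f(a, 1 << 16, [1, 0, 1, 1, 0, 0, 1, 0])
```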

15:00–15:30 | Break

15:30–16:30 | Daniel Wichs | Laconic Function Evaluation and Applications

Abstract:
We introduce a new cryptographic primitive called laconic function evaluation (LFE). Using LFE, Alice can compress a large circuit f into a small digest. Bob can encrypt some data x under this digest in a way that enables Alice to recover f(x) without learning anything else about Bob's data. For the scheme to be laconic, we require that the size of the digest, the run-time of the encryption algorithm and the size of the ciphertext should all be small, much smaller than the circuit-size of f. We construct an LFE scheme for general circuits under the learning with errors (LWE) assumption, where the above parameters only grow polynomially with the depth but not the size of the circuit. We then use LFE to construct secure 2-party and multi-party computation (2PC, MPC) protocols with novel properties:
- We construct a 2-round 2PC protocol between Alice and Bob with respective inputs $x_A, x_B$ in which Alice learns the output $f(x_A, x_B)$ in the second round. This is the first such protocol which is "Bob-optimized", meaning that Alice does all the work while Bob's computation and the total communication of the protocol are smaller than the size of the circuit $f$ or even Alice's input $x_A$. In contrast, prior solutions based on fully homomorphic encryption are "Alice-optimized".
- We construct an MPC protocol which allows $N$ parties to securely evaluate a function $f(x_1, \ldots, x_N)$ over their respective inputs, where the total amount of computation performed by the parties during the protocol execution is smaller than that of evaluating the function itself! Each party has to individually preprocess the circuit $f$ before the protocol starts and post-process the protocol transcript to recover the output after the protocol ends, and the cost of these steps is larger than the circuit size. However, this gives the first MPC where the computation performed by each party during the actual protocol execution, from the time the first protocol message is sent until the last protocol message is received, is smaller than the circuit size.
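To make the LFE syntax concrete, here is a deliberately insecure mock of the interface. Nothing is hidden and the digest does not actually compress the circuit; it only illustrates the compress/encrypt/decrypt flow and the correctness condition that decryption yields f(x):

```python
import hashlib
import pickle

def compress(f_desc: bytes) -> bytes:
    """Alice's digest of a description of f (a real LFE digest binds to the
    circuit while staying much smaller than it)."""
    return hashlib.sha256(f_desc).digest()

def encrypt(digest: bytes, x: bytes) -> bytes:
    """Mock 'ciphertext': x is sent in the clear here -- no hiding at all."""
    return pickle.dumps((digest, x))

def decrypt(f, f_desc: bytes, ct: bytes) -> bytes:
    """Alice recovers f(x) after checking the ciphertext matches her digest."""
    digest, x = pickle.loads(ct)
    assert digest == compress(f_desc)
    return f(x)
```

In a real LFE scheme the digest, the encryption time, and the ciphertext all grow only with the depth (not the size) of the circuit, and the ciphertext reveals nothing about x beyond f(x).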
Joint work with Willy Quach and Hoeteck Wee.

16:30–17:00 | Shashank Agrawal | PASTA: PASsword-based Threshold Authentication

Abstract:
Token-based authentication is commonly used to enable a single-sign-on experience on the web, in mobile applications and on enterprise networks using a wide range of open standards and network authentication protocols: clients sign on to an identity provider using their username/password to obtain a cryptographic token generated with a master secret key, and store the token for future accesses to various services and applications. The authentication server(s) are single points of failure that, if breached, enable attackers to forge arbitrary tokens or mount offline dictionary attacks to recover client credentials.
In this talk I will introduce and formalize the notion of password-based threshold token authentication, which distributes the role of an identity provider among n servers. Any t servers can collectively verify passwords and generate tokens, while no t-1 servers can forge a valid token or mount offline dictionary attacks. I will then introduce PASTA, a general framework that can be instantiated using any threshold token generation scheme, wherein clients can "sign on" using a two-round (optimal) protocol that meets our strong notions of unforgeability and password-safety.
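PASTA itself builds on threshold MACs and signatures rather than plain secret sharing, but the t-of-n access structure it relies on can be illustrated with a minimal Shamir secret sharing sketch (an illustration of the threshold pattern, not the PASTA protocol; parameters are toy values):

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in GF(P)

def share(secret, t, n, rng):
    """Split secret into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    eval_at = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, eval_at(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    s = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        s = (s + yj * num * pow(den, P - 2, P)) % P
    return s
```

In a threshold token scheme the servers never reconstruct the shared key in one place; instead each contributes a partial MAC or signature that the client combines into a token.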
We instantiate and implement our framework in C++ using two threshold message authentication codes (MACs) and two threshold digital signatures with different trade-offs. Our experiments show that the overhead of protecting secrets and credentials against breaches in PASTA, i.e., compared to a naive single-server solution, is extremely low (1-5%) in the most likely setting where clients and servers communicate over the internet. The overhead is higher in the case of MAC-based tokens over a LAN (though still only a few milliseconds) due to public-key operations in PASTA. I will argue briefly why this cost is inherent.
Joint work with Payman Mohassel (Visa Research), Peihan Miao (UC Berkeley), and Pratyay Mukherjee (Visa Research).