Recent Research


2023: Zombie: Middleboxes that Don’t Snoop

New York University

        Zero-knowledge middleboxes (ZKMBs) are a recent paradigm in which clients get privacy while middleboxes enforce policy: clients prove in zero knowledge that the plaintext underlying their encrypted traffic complies with network policies, such as DNS filtering. However, prior work had impractically poor performance and was limited in functionality.

        This work presents Zombie, the first system built using the ZKMB paradigm. Zombie introduces techniques that push ZKMBs to the verge of practicality: preprocessing (to move the bulk of proof generation to idle times between requests), asynchrony (to remove proving and verifying costs from the critical path), and batching (to amortize some of the verification work). Zombie’s choices, together with these techniques, provide a 3.5x speedup in the total computation done by client and middlebox, lowering the critical-path overhead for a DNS filtering application to less than 300ms (on commodity hardware), or to zero in the asynchronous configuration.

        As an additional contribution that is likely of independent interest, Zombie introduces a portfolio of techniques to efficiently encode regular expressions in probabilistic (and zero knowledge) proofs; these techniques offer significant asymptotic and constant factor improvements in performance over a standard baseline. Zombie builds on this portfolio to support policies based on regular expressions, such as data loss prevention.
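The standard baseline that such regex encodings improve on compiles the regular expression to a DFA and checks one state transition per input character; each loop step below is what a constraint system would arithmetize as a lookup. A minimal Python sketch (the regex, alphabet, and state numbering are illustrative assumptions, not Zombie's actual encoding):

```python
# Sketch of the standard DFA baseline for regex matching in proofs:
# one transition-table lookup per input character. Illustrative only;
# not Zombie's actual constraint encoding.

def compile_dfa():
    # Hypothetical tiny DFA for the regex "ab*c" over the alphabet {a, b, c}.
    # States: 0 = start, 1 = saw 'a' (plus any 'b's), 2 = accept, 3 = reject.
    table = {
        (0, 'a'): 1, (0, 'b'): 3, (0, 'c'): 3,
        (1, 'a'): 3, (1, 'b'): 1, (1, 'c'): 2,
        (2, 'a'): 3, (2, 'b'): 3, (2, 'c'): 3,
        (3, 'a'): 3, (3, 'b'): 3, (3, 'c'): 3,
    }
    return table, 0, {2}

def matches(s):
    table, state, accepting = compile_dfa()
    for ch in s:                    # in a proof, each iteration becomes
        state = table[(state, ch)]  # a transition/lookup constraint
    return state in accepting
```

In the naive encoding, proof cost scales with input length times the size of the transition table, which is the kind of overhead a portfolio of asymptotic and constant-factor improvements can attack.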

This paper was accepted for publication. The most recent preprint for it can be downloaded from here.


2022: Less is more: refinement proofs for probabilistic proofs

New York University

        There has been intense interest over the last decade in implementations of _probabilistic proofs_ (IPs, SNARKs, PCPs, and so on): protocols in which an untrusted party proves to a verifier that a given computation was executed properly, possibly in zero knowledge. Nevertheless, implementations still do not scale beyond small computations. A central source of overhead is the _front-end_: translating from the abstract computation to a set of equivalent arithmetic constraints. This paper introduces a general-purpose framework, called Distiller, in which a user translates to constraints not the original computation but an abstracted _specification_ of it. Distiller is the first in this area to perform such transformations in a way that is provably safe. Furthermore, by taking the idea of "encode a check in the constraints" to its literal logical extreme, Distiller exposes many new opportunities for constraint reduction, resulting in cost reductions for benchmark computations of 1.3–50x, and in some cases, better asymptotics.
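A classic instance of "encode a check in the constraints" (the general idea in this area, not Distiller's specific API) is integer square root: rather than arithmetizing the loop that computes the answer, the prover supplies the answer as nondeterministic advice and the constraints merely verify it. A hedged Python sketch, with hypothetical function names:

```python
def isqrt_computation(x):
    # The naive computation a front-end would otherwise translate
    # into constraints iteration by iteration.
    y = 0
    while (y + 1) * (y + 1) <= x:
        y += 1
    return y

def isqrt_check(x, y):
    # The check a front-end can encode instead: the prover supplies y
    # as advice, and the constraints only verify y*y <= x < (y+1)^2,
    # a constant number of multiplications and comparisons.
    return y * y <= x < (y + 1) * (y + 1)

x = 1000
y = isqrt_computation(x)  # prover computes the advice outside the circuit
assert isqrt_check(x, y)  # the constraints only encode this check
```

The check costs a constant number of constraints regardless of how long the computation ran, which is the source of the asymptotic improvements such transformations can deliver.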

This paper was published in the 2023 IEEE Symposium on Security and Privacy (SP) and can be downloaded from here.


2022: zkQMC: Zero-Knowledge Proofs For (Some) Probabilistic Computations Using Quasi-Randomness

Los Alamos National Laboratory

        We initiate research into efficiently embedding probabilistic computations in probabilistic proofs by introducing techniques for capturing Monte Carlo methods and Las Vegas algorithms in zero knowledge and exploring several potential applications of these techniques. We design and demonstrate a technique for proving the integrity of certain randomized computations, such as uncertainty quantification methods, in non-interactive zero knowledge (NIZK) by replacing conventional randomness with low-discrepancy sequences. This technique, known as the Quasi-Monte Carlo (QMC) method, functions as a form of weak algorithmic derandomization to efficiently produce adversarial-resistant worst-case uncertainty bounds for the results of Monte Carlo simulations. The adversarial resistance provided by this approach allows the integrity of results to be verifiable both in interactive and non-interactive zero knowledge without the need for additional statistical or cryptographic assumptions.
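As a concrete (if toy) illustration of replacing conventional randomness with a low-discrepancy sequence, the sketch below estimates pi by Quasi-Monte Carlo integration using a 2-D Halton sequence instead of a PRNG; the choice of sequence and target integral here are assumptions for illustration, not necessarily those used in zkQMC:

```python
# Toy Quasi-Monte Carlo: estimate pi by integrating the indicator of the
# unit quarter-circle over [0,1)^2 with deterministic Halton points
# rather than pseudo-random samples. Illustrative only.

def halton(i, base):
    # i-th element of the van der Corput sequence in the given base
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_pi(n):
    inside = sum(
        1 for i in range(1, n + 1)
        if halton(i, 2) ** 2 + halton(i, 3) ** 2 <= 1.0
    )
    return 4.0 * inside / n

print(qmc_pi(10000))
```

Because the point set is fully deterministic, a verifier can recompute (or constrain) exactly the same points, which is what makes the simulation's result checkable in zero knowledge without trusting the prover's randomness.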

The paper can be downloaded from here.


2021: Multicolor Ramsey numbers for Berge cycles

Villanova University

        In this paper, for small uniformities, we determine the order of magnitude of the multicolor Ramsey numbers for Berge cycles of length 4, 5, 6, 7, 10, or 11. Our result follows from a more general setup which can be applied to other hypergraph Ramsey problems. Using this, we additionally determine the order of magnitude of the multicolor Ramsey number for Berge-K_{a,b} for certain a, b, and uniformities.

This paper was published in the Electronic Journal of Combinatorics and can be downloaded from here.


2020: Privacy Preserving, Distributed, and Verifiable Machine Learning for COVID-19 Identification using ZKPs

Los Alamos National Laboratory

        We developed an efficient and succinct zero-knowledge proof application (with differential privacy) using zkSNARKs for distributed and verifiable machine learning, with an additional focus on COVID-19 data. The goal of the project was to develop the capability to remotely train neural networks with a large array of untrusted edge nodes, each with their own secret data, and to produce a succinct proof that attests to the correctness of the entire chain of computation and the output trained parameters. Using recent techniques for recursive proof composition, proof carrying data, differential privacy, distributed neural network training, and zkSNARKs, we wrote an application in C++ (using the libsnark library) that demonstrates the capacity for researchers to now perform privacy preserving, distributed, and verifiable ML.

This work was presented at the 2020 Chesapeake Large-Scale Analytics Conference and the slides can be viewed here.


2019: Design and Implementation of a Secure Neural Network Verification System Using zkSNARKs

Los Alamos National Laboratory

        We present an efficient and succinct zero-knowledge proof application using zkSNARKs for remotely verifying the forward-pass execution of an arbitrarily sized neural network with hidden inputs and model parameters. Our zero-knowledge guarantee allows the prover to hide information about the input and model parameters from the verifier while being able to attest to the integrity of these parameters and the model's execution.
        Our approach is transformative for various applications such as nuclear treaty verification without the need to disclose sensitive data, security camera auditing without the need to leak footage, and secure patient diagnosis without the need to disclose individually identifiable health information. We demonstrate an end-to-end implementation of this proof system using custom gadgets in libsnark on a neural network for the classification of MNIST handwritten digits.
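Proof systems of this kind work over finite-field arithmetic, so a forward pass is typically expressed in fixed point. The sketch below shows one quantized dense layer with ReLU, the kind of integer-only arithmetic a circuit can encode directly; the scaling factor and layer shapes are illustrative assumptions, not the libsnark gadgets used in this work:

```python
# Sketch: one fixed-point dense layer with ReLU, using only integer
# arithmetic of the kind a zkSNARK circuit encodes. Illustrative only.

SCALE = 1 << 8  # fixed-point scaling factor (assumed)

def quantize(xs):
    return [round(x * SCALE) for x in xs]

def dense_relu(w, b, x):
    # w: rows of quantized weights; b: quantized biases; x: quantized input.
    out = []
    for row, bias in zip(w, b):
        # multiply-accumulate, then rescale back down by SCALE
        acc = sum(wi * xi for wi, xi in zip(row, x)) // SCALE + bias
        out.append(max(acc, 0))  # ReLU becomes a comparison constraint
    return out

x = quantize([0.5, 1.0])
w = [quantize([1.0, 1.0]), quantize([-2.0, 0.5])]
b = quantize([0.0, 0.25])
print(dense_relu(w, b, x))
```

Keeping every operation in scaled integers means the same computation can be replayed inside the constraint system with no floating-point nondeterminism between prover and verifier.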

Download the full research poster here.


2018: N-Player Hackenbush: Theory, Approximation, and Implementation

Villanova University

        This project generalizes game sums (the main tool for analyzing positions in combinatorial games) from 2-Player Hackenbush and proposes a theory for analyzing N-Player combinatorial games, including N-Player Hackenbush. It introduces a new way to quantify each player's advantage in any game position, provides a fast method for approximating this quantity, and demonstrates an implementation of the algorithm on arbitrary N-Player Hackenbush positions.
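In the 2-player (Blue-Red) theory this project generalizes, the value of a single Hackenbush stalk can be read off its edge colors from the ground up via the classic sign-expansion rule, and the value of a position is the sum of its stalks' values. A sketch of that 2-player baseline (the '+'/'-' labeling convention is an assumption; the N-player generalization is beyond this sketch):

```python
from fractions import Fraction

def stalk_value(signs):
    """Value of a Blue-Red Hackenbush stalk, edges read from the ground up.

    '+' is a Blue edge, '-' is a Red edge. Each edge in the initial run of
    one color counts a full +/-1; from the first color change onward, each
    further edge counts half the previous one (the sign-expansion rule).
    """
    value = Fraction(0)
    step = Fraction(1)
    halving = False
    prev = None
    for s in signs:
        if prev is not None and s != prev:
            halving = True
        if halving:
            step /= 2
        value += step if s == '+' else -step
        prev = s
    return value

# Game sum of two stalks: positive favors Blue, negative favors Red,
# zero means the second player wins.
total = stalk_value("+-") + stalk_value("-")
```

For example, stalk "+-" is worth 1 - 1/2 = 1/2 (a half-move advantage for Blue), so adding a single Red edge "-" leaves a sum of -1/2, a Red win; quantifying such per-player advantages is exactly what the N-player theory must generalize.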

Download the resulting research poster here.


zachdestefano (at) gmail (dot) com