12 Search Results for "Vaikuntanathan, Vinod"


Document
Quantum Proofs of Deletion for Learning with Errors

Authors: Alexander Poremba

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Quantum information has the property that measurement is an inherently destructive process. This feature is most apparent in the principle of complementarity, which states that mutually incompatible observables cannot be measured at the same time. Recent work by Broadbent and Islam (TCC 2020) builds on this aspect of quantum mechanics to realize a cryptographic notion called certified deletion. While this remarkable notion enables a classical verifier to be convinced that a (private-key) quantum ciphertext has been deleted by an untrusted party, it offers no additional layer of functionality. In this work, we augment the proof-of-deletion paradigm with fully homomorphic encryption (FHE). We construct the first fully homomorphic encryption scheme with certified deletion - an interactive protocol which enables an untrusted quantum server to compute on encrypted data and, if requested, to simultaneously prove data deletion to a client. Our scheme has the desirable property that verification of a deletion certificate is public, meaning that anyone can verify that deletion has taken place. Our main technical ingredient is an interactive protocol by which a quantum prover can convince a classical verifier that a sample from the Learning with Errors (LWE) distribution in the form of a quantum state was deleted. As an application of our protocol, we construct a Dual-Regev public-key encryption scheme with certified deletion, which we then extend towards a (leveled) FHE scheme of the same type. We introduce the notion of Gaussian-collapsing hash functions - a special case of collapsing hash functions defined by Unruh (Eurocrypt 2016) - and we prove the security of our schemes under the assumption that the Ajtai hash function satisfies a certain strong Gaussian-collapsing property in the presence of leakage.
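
For concreteness, the LWE distribution referenced above can be sketched in a few lines of Python with NumPy. This is our own toy illustration (parameters are illustrative and far smaller than anything cryptographic), not the paper's protocol:

import numpy as np

# Toy parameters; real LWE instances use much larger n and q.
n, m, q, sigma = 8, 16, 97, 1.0

rng = np.random.default_rng(0)

def lwe_sample():
    """Return an LWE sample (A, b = A s + e mod q) together with the secret s."""
    A = rng.integers(0, q, size=(m, n))                     # uniform public matrix
    s = rng.integers(0, q, size=n)                          # secret vector
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)   # small Gaussian error
    b = (A @ s + e) % q
    return A, b, s

A, b, s = lwe_sample()
print(b[:4])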

Cite as

Alexander Poremba. Quantum Proofs of Deletion for Learning with Errors. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 90:1-90:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{poremba:LIPIcs.ITCS.2023.90,
  author =	{Poremba, Alexander},
  title =	{{Quantum Proofs of Deletion for Learning with Errors}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{90:1--90:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.90},
  URN =		{urn:nbn:de:0030-drops-175934},
  doi =		{10.4230/LIPIcs.ITCS.2023.90},
  annote =	{Keywords: Learning with errors, certified deletion, fully homomorphic encryption}
}
Document
On One-Way Functions from NP-Complete Problems

Authors: Yanyi Liu and Rafael Pass

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
We present the first natural NP-complete problem whose average-case hardness w.r.t. the uniform distribution over instances is equivalent to the existence of one-way functions (OWFs). The problem, which originated in the 1960s, is the Conditional Time-Bounded Kolmogorov Complexity Problem: let K^t(x∣z) be the length of the shortest "program" that, given the "auxiliary input" z, outputs the string x within time t(|x|), and let McK^tP[ζ] be the set of strings (x,z,k) where |z| = ζ(|x|), |k| = log |x| and K^t(x∣z) < k, where, for our purposes, a "program" is defined as a RAM machine. Our main result shows that for every polynomial t(n) ≥ n², there exists some polynomial ζ such that McK^tP[ζ] is NP-complete. We additionally extend the result of Liu-Pass (FOCS'20) to show that for every polynomial t(n) ≥ 1.1n, and every polynomial ζ(⋅), mild average-case hardness of McK^tP[ζ] is equivalent to the existence of OWFs. Taken together, these results provide the following crisp characterization of what is required to base OWFs on NP ⊈ BPP: there exist concrete polynomials t, ζ such that "Basing OWFs on NP ⊈ BPP" is equivalent to providing a "worst-case to (mild) average-case reduction for McK^tP[ζ]". In other words, the "holy grail" of cryptography (i.e., basing OWFs on NP ⊈ BPP) is equivalent to a basic question in algorithmic information theory. As an independent contribution, we show that our NP-completeness result can be used to shed new light on the feasibility of the polynomial-time bounded symmetry of information assertion (Kolmogorov '68).
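
In display form, the two objects defined above are (a restatement of the abstract's definitions; here U denotes a universal RAM machine, Π ranges over programs, and the log|x|-bit string k is read as an integer):

\[
  K^{t}(x \mid z) \;=\; \min \bigl\{\, |\Pi| \;:\; U(\Pi, z) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \,\bigr\},
\]
\[
  \mathrm{McK}^{t}\mathrm{P}[\zeta] \;=\; \bigl\{\, (x, z, k) \;:\; |z| = \zeta(|x|),\ |k| = \log |x|,\ K^{t}(x \mid z) < k \,\bigr\}.
\]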

Cite as

Yanyi Liu and Rafael Pass. On One-Way Functions from NP-Complete Problems. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 36:1-36:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{liu_et_al:LIPIcs.CCC.2022.36,
  author =	{Liu, Yanyi and Pass, Rafael},
  title =	{{On One-Way Functions from NP-Complete Problems}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{36:1--36:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.36},
  URN =		{urn:nbn:de:0030-drops-165981},
  doi =		{10.4230/LIPIcs.CCC.2022.36},
  annote =	{Keywords: One-way Functions, NP-Completeness, Kolmogorov Complexity}
}
Document
Lattice-Inspired Broadcast Encryption and Succinct Ciphertext-Policy ABE

Authors: Zvika Brakerski and Vinod Vaikuntanathan

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
Broadcast encryption remains one of the few central cryptographic primitives that are not yet known to be achievable under a standard cryptographic assumption (excluding obfuscation-based constructions, see below). Furthermore, prior to this work, there were no known direct candidates for post-quantum-secure broadcast encryption. We propose a candidate ciphertext-policy attribute-based encryption (CP-ABE) scheme for circuits, where the ciphertext size depends only on the depth of the policy circuit (and not its size). This, in particular, gives us a Broadcast Encryption (BE) scheme where the sizes of the keys and ciphertexts have a poly-logarithmic dependence on the number of users. This goal was previously only known to be achievable assuming ideal multilinear maps (Boneh, Waters and Zhandry, Crypto 2014) or indistinguishability obfuscation (Boneh and Zhandry, Crypto 2014) and, in a concurrent work, from generic bilinear groups and the learning with errors (LWE) assumption (Agrawal and Yamada, Eurocrypt 2020). Our construction relies on techniques from lattice-based (and in particular LWE-based) cryptography. We analyze some attempts at cryptanalysis, but we are unable to provide a security proof.
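
For intuition about why ciphertext size depending only on circuit depth yields succinct broadcast encryption: the authorized set S ⊆ [N] can be checked by a membership circuit over a log N-bit user identity, and even a naive such circuit is shallow. A back-of-the-envelope Python sketch of the depth count (our own illustration, not the paper's construction):

import math

def membership_circuit_depth(N, S):
    """Depth of a naive Boolean circuit checking `user_id in S` on log2(N)-bit inputs:
    an equality test against each element of S (an XNOR layer plus an AND tree over
    the identity bits), followed by an OR tree over the |S| <= N comparisons.
    Both stages have depth O(log N), so the total is poly-logarithmic in N."""
    bits = math.ceil(math.log2(N))
    eq_depth = 1 + math.ceil(math.log2(bits))       # XNOR layer + AND tree
    or_depth = math.ceil(math.log2(max(len(S), 2))) # OR tree over comparisons
    return eq_depth + or_depth

# Depth ~16 for a million-user system: tiny compared to N = 2^20.
print(membership_circuit_depth(N=2**20, S=set(range(1000))))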

Cite as

Zvika Brakerski and Vinod Vaikuntanathan. Lattice-Inspired Broadcast Encryption and Succinct Ciphertext-Policy ABE. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{brakerski_et_al:LIPIcs.ITCS.2022.28,
  author =	{Brakerski, Zvika and Vaikuntanathan, Vinod},
  title =	{{Lattice-Inspired Broadcast Encryption and Succinct Ciphertext-Policy ABE}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{28:1--28:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.28},
  URN =		{urn:nbn:de:0030-drops-156243},
  doi =		{10.4230/LIPIcs.ITCS.2022.28},
  annote =	{Keywords: Theoretical Cryptography, Broadcast Encryption, Attribute-Based Encryption, Lattice-Based Cryptography}
}
Document
Correlation-Intractable Hash Functions via Shift-Hiding

Authors: Alex Lombardi and Vinod Vaikuntanathan

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
A hash function family ℋ is correlation intractable (CI) for a t-input relation ℛ if, given a random function h chosen from ℋ, it is hard to find x_1,…,x_t such that ℛ(x_1,…,x_t,h(x_1),…,h(x_t)) is true. Among other applications, such hash functions are a crucial tool for instantiating the Fiat-Shamir heuristic in the plain model, including the only known NIZK for NP based on the learning with errors (LWE) problem (Peikert and Shiehian, CRYPTO 2019). We give a conceptually simple and generic construction of single-input CI hash functions from shift-hiding shiftable functions (Peikert and Shiehian, PKC 2018) satisfying an additional one-wayness property. This results in a clean abstract framework for instantiating CI, and also shows that a previously existing function family (PKC 2018) was already CI under the LWE assumption. In addition, our framework transparently generalizes to other settings, yielding new results: - We show how to instantiate certain forms of multi-input CI under the LWE assumption. Prior constructions either relied on a very strong "brute-force-is-best" type of hardness assumption (Holmgren and Lombardi, FOCS 2018) or were restricted to "output-only" relations (Zhandry, CRYPTO 2016). - We construct single-input CI hash functions from indistinguishability obfuscation (iO) and one-way permutations. Prior constructions relied essentially on variants of fully homomorphic encryption that are impossible to construct from such primitives. This result also generalizes to more expressive variants of multi-input CI under iO and additional standard assumptions.
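
As a toy illustration of the single-input security game (our own sketch: keyed SHA-256 merely stands in for a hash drawn from a CI family, and the relation is an arbitrary sparse example, not one from the paper):

import hashlib, os

def sample_h():
    """Sample a function h from the family; the key models the random choice."""
    key = os.urandom(16)
    return lambda x: hashlib.sha256(key + x).digest()

def R(x, y):
    # Example sparse relation: the hash output starts with 16 zero bits.
    return y[:2] == b"\x00\x00"

h = sample_h()
# The adversary wins by finding x with R(x, h(x)). For this sparse relation,
# each guess against a random-looking h succeeds with probability 2^-16,
# so 1000 attempts almost always yield zero wins.
wins = sum(1 for i in range(1000)
           if R(i.to_bytes(4, "big"), h(i.to_bytes(4, "big"))))
print(wins)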

Cite as

Alex Lombardi and Vinod Vaikuntanathan. Correlation-Intractable Hash Functions via Shift-Hiding. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 102:1-102:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{lombardi_et_al:LIPIcs.ITCS.2022.102,
  author =	{Lombardi, Alex and Vaikuntanathan, Vinod},
  title =	{{Correlation-Intractable Hash Functions via Shift-Hiding}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{102:1--102:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.102},
  URN =		{urn:nbn:de:0030-drops-156981},
  doi =		{10.4230/LIPIcs.ITCS.2022.102},
  annote =	{Keywords: Cryptographic hash functions, correlation intractability}
}
Document
RANDOM
On the Hardness of Average-Case k-SUM

Authors: Zvika Brakerski, Noah Stephens-Davidowitz, and Vinod Vaikuntanathan

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
In this work, we show the first worst-case to average-case reduction for the classical k-SUM problem. A k-SUM instance is a collection of m integers, and the goal of the k-SUM problem is to find a subset of k integers that sums to 0. In the average-case version, the m elements are chosen uniformly at random from some interval [-u,u]. We consider the total setting where m is sufficiently large (with respect to u and k), so that we are guaranteed (with high probability) that solutions must exist. In particular, m = u^{Ω(1/k)} suffices for totality. Much of the appeal of k-SUM, in particular connections to problems in computational geometry, extends to the total setting. The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum-Kalai-Wasserman), and achieves a running time of u^{Θ(1/log k)} when m = u^{Θ(1/log k)}. This beats the known (conditional) lower bounds for worst-case k-SUM, raising the natural question of whether it can be improved even further. We show, however, a matching average-case lower bound by exhibiting a reduction from worst-case lattice problems, thus introducing a new family of techniques into the field of fine-grained complexity. In particular, we show that any algorithm solving average-case k-SUM on m elements in time u^{o(1/log k)} will give a super-polynomial improvement in the complexity of algorithms for lattice problems.
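
The average-case total regime described above is easy to instantiate. Here is a brute-force baseline in Python (toy sizes of our own choosing so that a solution exists with high probability; the algorithms the paper rules out are far faster than this O(m^k) search):

import random
from itertools import combinations

# Average-case k-SUM in the total regime: m integers uniform in [-u, u],
# with m large enough (here m >> u^{1/k}) that a zero-sum k-subset exists w.h.p.
k, u, m = 3, 10**4, 200
xs = [random.randint(-u, u) for _ in range(m)]

def ksum_brute(xs, k):
    """Exhaustive search over all k-subsets; returns a zero-sum tuple or None."""
    for combo in combinations(xs, k):
        if sum(combo) == 0:
            return combo
    return None

print(ksum_brute(xs, k))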

Cite as

Zvika Brakerski, Noah Stephens-Davidowitz, and Vinod Vaikuntanathan. On the Hardness of Average-Case k-SUM. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 29:1-29:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{brakerski_et_al:LIPIcs.APPROX/RANDOM.2021.29,
  author =	{Brakerski, Zvika and Stephens-Davidowitz, Noah and Vaikuntanathan, Vinod},
  title =	{{On the Hardness of Average-Case k-SUM}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{29:1--29:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.29},
  URN =		{urn:nbn:de:0030-drops-147223},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.29},
  annote =	{Keywords: k-SUM, fine-grained complexity, average-case hardness}
}
Document
Cryptography from Information Loss

Authors: Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan, and Prashant Nalini Vasudevan

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is "lossy" reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into "useful" hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X;C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions: 1. We say that a language L has an f-reduction to a language L' for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x_1,…,x_m), with each x_i ∈ {0,1}^n, and outputs a string z such that with high probability, L'(z) = f(L(x_1),L(x_2),…,L(x_m)). Suppose a language L has an f-reduction C to L' that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds: - f is the OR function, t ≤ m/100, and L' is the same as L - f is the Majority function, and t ≤ m/100 - f is the OR function, t ≤ O(m log n), and the reduction has no error This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which yield only auxiliary-input one-way functions. 2. Our second result is about the stronger notion of t-compressing f-reductions - reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t=m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses). Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, that may be of independent interest.
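
To make the definition concrete: for a deterministic reduction C, the data-processing identity gives I(X; C(X)) = H(C(X)), so the loss can be computed exactly for simple maps. A minimal example of our own (not from the paper), with C = OR over uniform inputs:

import math

def mutual_information_or(n):
    """I(X; C(X)) for X uniform over {0,1}^n and C = OR of the bits.
    Since C is deterministic, I(X; C(X)) = H(C(X)), the binary entropy of
    Pr[OR(X) = 1] = 1 - 2^-n. OR is thus extremely lossy: at most 1 bit."""
    p1 = 1 - 2**(-n)
    h = lambda p: 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)
    return h(p1)

for n in (1, 2, 8, 32):
    print(n, mutual_information_or(n))   # tends to 0 bits as n grows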

Cite as

Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan, and Prashant Nalini Vasudevan. Cryptography from Information Loss. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 81:1-81:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{ball_et_al:LIPIcs.ITCS.2020.81,
  author =	{Ball, Marshall and Boyle, Elette and Degwekar, Akshay and Deshpande, Apoorvaa and Rosen, Alon and Vaikuntanathan, Vinod and Vasudevan, Prashant Nalini},
  title =	{{Cryptography from Information Loss}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{81:1--81:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.81},
  URN =		{urn:nbn:de:0030-drops-117667},
  doi =		{10.4230/LIPIcs.ITCS.2020.81},
  annote =	{Keywords: Compression, Information Loss, One-Way Functions, Reductions, Generic Constructions}
}
Document
On the Complexity of Decomposable Randomized Encodings, Or: How Friendly Can a Garbling-Friendly PRF Be?

Authors: Marshall Ball, Justin Holmgren, Yuval Ishai, Tianren Liu, and Tal Malkin

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Garbling schemes, also known as decomposable randomized encodings (DRE), have found many applications in cryptography. However, despite a large body of work on constructing such schemes, very little is known about their limitations. We initiate a systematic study of the DRE complexity of Boolean functions, obtaining the following main results: - Near-quadratic lower bounds. We use a classical lower bound technique of Nečiporuk [Dokl. Akad. Nauk SSSR '66] to show an Ω(n²/log n) lower bound on the size of any DRE for many explicit Boolean functions. For some natural functions, we obtain a corresponding upper bound, thus settling their DRE complexity up to polylogarithmic factors. Prior to our work, no superlinear lower bounds were known, even for non-explicit functions. - Garbling-friendly PRFs. We show that any exponentially secure PRF has Ω(n²/log n) DRE size, and present a plausible candidate for a "garbling-optimal" PRF that nearly meets this bound. This candidate establishes a barrier for super-quadratic DRE lower bounds via natural proof techniques. In contrast, we show a candidate for a weak PRF with near-exponential security and linear DRE size. Our results establish several qualitative separations, including near-quadratic separations between computational and information-theoretic DRE size of Boolean functions, and between DRE size of weak vs. strong PRFs.
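
As a reference point for the linear-size upper bounds mentioned above, here is the textbook DRE for XOR in Python (a standard example, not specific to this paper): each output component depends on a single input bit plus shared randomness, the components jointly reveal only XOR(x), and the encoding has linear size.

import secrets

def dre_xor_encode(x):
    """Decomposable randomized encoding of XOR(x_1, ..., x_n): mask with a chain
    of random bits r_1, ..., r_{n-1}; each output touches exactly one input bit."""
    n = len(x)
    r = [secrets.randbelow(2) for _ in range(n - 1)]
    out = [x[0] ^ r[0]]
    out += [x[i] ^ r[i - 1] ^ r[i] for i in range(1, n - 1)]
    out.append(x[n - 1] ^ r[n - 2])
    return out

def dre_xor_decode(out):
    """Every r_i appears in exactly two components, so XOR-ing all of them
    cancels the randomness and recovers XOR(x)."""
    v = 0
    for b in out:
        v ^= b
    return v

x = [1, 0, 1, 1]
enc = dre_xor_encode(x)
print(enc, dre_xor_decode(enc))   # decoder outputs XOR(x) = 1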

Cite as

Marshall Ball, Justin Holmgren, Yuval Ishai, Tianren Liu, and Tal Malkin. On the Complexity of Decomposable Randomized Encodings, Or: How Friendly Can a Garbling-Friendly PRF Be?. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 86:1-86:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{ball_et_al:LIPIcs.ITCS.2020.86,
  author =	{Ball, Marshall and Holmgren, Justin and Ishai, Yuval and Liu, Tianren and Malkin, Tal},
  title =	{{On the Complexity of Decomposable Randomized Encodings, Or: How Friendly Can a Garbling-Friendly PRF Be?}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{86:1--86:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.86},
  URN =		{urn:nbn:de:0030-drops-117714},
  doi =		{10.4230/LIPIcs.ITCS.2020.86},
  annote =	{Keywords: Randomized Encoding, Private Simultaneous Messages}
}
Document
Adversarially Robust Property-Preserving Hash Functions

Authors: Elette Boyle, Rio LaVigne, and Vinod Vaikuntanathan

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
Property-preserving hashing is a method of compressing a large input x into a short hash h(x) in such a way that given h(x) and h(y), one can compute a property P(x, y) of the original inputs. The idea of property-preserving hash functions underlies sketching, compressed sensing and locality-sensitive hashing. Property-preserving hash functions are usually probabilistic: they use the random choice of a hash function from a family to achieve compression, and as a consequence, err on some inputs. Traditionally, the notion of correctness for these hash functions requires that for every two inputs x and y, the probability that h(x) and h(y) mislead us into a wrong prediction of P(x, y) is negligible. As observed in many recent works (including Mironov, Naor and Segev, STOC 2008; Hardt and Woodruff, STOC 2013; Naor and Yogev, CRYPTO 2015), such a correctness guarantee assumes that the adversary (who produces the offending inputs) has no information about the hash function, and is too weak in many scenarios. We initiate the study of adversarial robustness for property-preserving hash functions, provide definitions, derive broad lower bounds due to a simple connection with communication complexity, and show the necessity of computational assumptions to construct such functions. Our main positive results are two candidate constructions of property-preserving hash functions (achieving different parameters) for the (promise) gap-Hamming property, which checks whether x and y are "too far" or "too close". Our first construction relies on generic collision-resistant hash functions, and our second on a variant of the syndrome decoding assumption on low-density parity check codes.
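
To illustrate what the adversarial setting changes, here is the folklore non-robust sketch for gap-Hamming (our own toy code, not one of the paper's constructions): with a secret shared seed it is correct on any fixed pair with high probability, but an adversary who learns the seed can plant disagreements exactly where the sketch does not look.

import random

def sketch(x, seed, k=256):
    """Sample k coordinates of x using a shared seed; the 'hash' is that subsample."""
    rng = random.Random(seed)
    idx = [rng.randrange(len(x)) for _ in range(k)]
    return [x[i] for i in idx]

def predict_far(hx, hy, threshold):
    """Estimate the Hamming distance fraction from the subsamples."""
    disagreements = sum(a != b for a, b in zip(hx, hy))
    return disagreements / len(hx) > threshold

n, seed = 4096, 42
x = [random.randint(0, 1) for _ in range(n)]
y = [b ^ (random.random() < 0.25) for b in x]   # flip ~25% of the bits
print(predict_far(sketch(x, seed), sketch(y, seed), threshold=0.125))  # True w.h.p.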

Cite as

Elette Boyle, Rio LaVigne, and Vinod Vaikuntanathan. Adversarially Robust Property-Preserving Hash Functions. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 16:1-16:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{boyle_et_al:LIPIcs.ITCS.2019.16,
  author =	{Boyle, Elette and LaVigne, Rio and Vaikuntanathan, Vinod},
  title =	{{Adversarially Robust Property-Preserving Hash Functions}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{16:1--16:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.16},
  URN =		{urn:nbn:de:0030-drops-101097},
  doi =		{10.4230/LIPIcs.ITCS.2019.16},
  annote =	{Keywords: Hash function, compression, property-preserving, one-way communication}
}
Document
How to Subvert Backdoored Encryption: Security Against Adversaries that Decrypt All Ciphertexts

Authors: Thibaut Horel, Sunoo Park, Silas Richelson, and Vinod Vaikuntanathan

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
In this work, we examine the feasibility of secure and undetectable point-to-point communication when an adversary (e.g., a government) can read all encrypted communications of surveillance targets. We consider a model where the only permitted method of communication is via a government-mandated encryption scheme, instantiated with government-mandated keys. Parties cannot simply encrypt ciphertexts of some other encryption scheme, because citizens caught trying to communicate outside the government's knowledge (e.g., by encrypting strings which do not appear to be natural language plaintexts) will be arrested. The one guarantee we suppose is that the government mandates an encryption scheme which is semantically secure against outsiders: a perhaps reasonable supposition when a government might consider it advantageous to secure its people's communication against foreign entities. But then, what good is semantic security against an adversary that holds all the keys and has the power to decrypt? We show that even in the pessimistic scenario described, citizens can communicate securely and undetectably. In our terminology, this translates to a positive statement: all semantically secure encryption schemes support subliminal communication. Informally, this means that there is a two-party protocol between Alice and Bob where the parties exchange ciphertexts of what appears to be a normal conversation even to someone who knows the secret keys and thus can read the corresponding plaintexts. And yet, at the end of the protocol, Alice will have transmitted her secret message to Bob. Our security definition requires that the adversary not be able to tell whether Alice and Bob are just having a normal conversation using the mandated encryption scheme, or whether they are using it for subliminal communication. Our topics may be thought to fall broadly within the realm of steganography. However, we deal with the non-standard setting of an adversarially chosen distribution of cover objects (i.e., a stronger-than-usual adversary), and we take advantage of the fact that our cover objects are ciphertexts of a semantically secure encryption scheme to bypass impossibility results which we show for broader classes of steganographic schemes. We give several constructions of subliminal communication schemes under the assumption that key exchange protocols with pseudorandom messages exist (such as Diffie-Hellman, which in fact has truly random messages).
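
The final assumption is concrete: Diffie-Hellman transcripts consist of group elements that look (close to) uniform, which is what lets them be hidden inside innocuous-looking traffic. A toy exchange in Python (parameters of our own choosing, far too small for real security; illustration only):

import secrets

# Toy Diffie-Hellman modulo the Mersenne prime 2^89 - 1.
p = 2**89 - 1
g = 3

a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
msg_a, msg_b = pow(g, a, p), pow(g, b, p)   # the exchanged "pseudorandom" messages

shared_a = pow(msg_b, a, p)                 # Alice's view of the shared key
shared_b = pow(msg_a, b, p)                 # Bob's view of the shared key
assert shared_a == shared_b
print(hex(shared_a)[:12])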

Cite as

Thibaut Horel, Sunoo Park, Silas Richelson, and Vinod Vaikuntanathan. How to Subvert Backdoored Encryption: Security Against Adversaries that Decrypt All Ciphertexts. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 42:1-42:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{horel_et_al:LIPIcs.ITCS.2019.42,
  author =	{Horel, Thibaut and Park, Sunoo and Richelson, Silas and Vaikuntanathan, Vinod},
  title =	{{How to Subvert Backdoored Encryption: Security Against Adversaries that Decrypt All Ciphertexts}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{42:1--42:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.42},
  URN =		{urn:nbn:de:0030-drops-101355},
  doi =		{10.4230/LIPIcs.ITCS.2019.42},
  annote =	{Keywords: Backdoored Encryption, Steganography}
}
Document
Some Open Problems in Information-Theoretic Cryptography

Authors: Vinod Vaikuntanathan

Published in: LIPIcs, Volume 93, 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)


Abstract
Information-theoretic cryptography is full of open problems with a communication-complexity flavor. We will describe several such problems that arise in the study of private information retrieval, secure multi-party computation, secret sharing, private simultaneous messages (PSM) and conditional disclosure of secrets (CDS). In all these cases, there is a huge (exponential) gap between the best known upper and lower bounds. We will also describe the connections between these problems, some old and some new.

Cite as

Vinod Vaikuntanathan. Some Open Problems in Information-Theoretic Cryptography. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 5:1-5:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{vaikuntanathan:LIPIcs.FSTTCS.2017.5,
  author =	{Vaikuntanathan, Vinod},
  title =	{{Some Open Problems in Information-Theoretic Cryptography}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{5:1--5:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.5},
  URN =		{urn:nbn:de:0030-drops-84188},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.5},
  annote =	{Keywords: Cryptography, Information-Theoretic Security, Private Information Retrieval, Secret Sharing, Multiparty Computation}
}
Document
Zero-Knowledge Proofs of Proximity

Authors: Itay Berman, Ron D. Rothblum, and Vinod Vaikuntanathan

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
Interactive proofs of proximity (IPPs) are interactive proofs in which the verifier runs in time sub-linear in the input length. Since the verifier cannot even read the entire input, following the property testing literature, we only require that the verifier reject inputs that are far from the language (and, as usual, accept inputs that are in the language). In this work, we initiate the study of zero-knowledge proofs of proximity (ZKPP). A ZKPP convinces a sub-linear time verifier that the input is close to the language (similarly to an IPP) while simultaneously guaranteeing a natural zero-knowledge property. Specifically, the verifier learns nothing beyond (1) the fact that the input is in the language, and (2) what it could additionally infer by reading a few bits of the input. Our main focus is the setting of statistical zero-knowledge where we show that the following hold unconditionally (where N denotes the input length): - Statistical ZKPPs can be sub-exponentially more efficient than property testers (or even non-interactive IPPs): We show a natural property which has a statistical ZKPP with a polylog(N) time verifier, but requires Omega(sqrt(N)) queries (and hence also runtime) for every property tester. - Statistical ZKPPs can be sub-exponentially less efficient than IPPs: We show a property which has an IPP with a polylog(N) time verifier, but cannot have a statistical ZKPP with even an N^(o(1)) time verifier. - Statistical ZKPPs with polylog(N) time verifiers exist for some graph-based properties, such as promise versions of expansion and bipartiteness, in the bounded degree graph model. Lastly, we also consider the computational setting where we show that: - Assuming the existence of one-way functions, every language computable either in (logspace uniform) NC or in SC has a computational ZKPP with a (roughly) sqrt(N) time verifier. - Assuming the existence of collision-resistant hash functions, every language in NP has a statistical zero-knowledge argument of proximity with a polylog(N) time verifier.
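
For intuition about sub-linear verification with query access (standard property testing, not a construction from this paper): a verifier reading O(1/eps) random positions can reject inputs that are eps-far from a trivial language such as {0^n}.

import random

def test_all_zeros(query, n, eps=0.1):
    """Accept if x appears to be all-zeros; reject w.h.p. if x is eps-far from
    all-zeros, using O(1/eps) queries -- far fewer than n."""
    reps = int(3 / eps)
    return all(query(random.randrange(n)) == 0 for _ in range(reps))

x = [0] * 1000
x[::7] = [1] * len(x[::7])                       # make x ~14%-far from all-zeros
print(test_all_zeros(lambda i: x[i], len(x)))    # False with high probability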

Cite as

Itay Berman, Ron D. Rothblum, and Vinod Vaikuntanathan. Zero-Knowledge Proofs of Proximity. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 19:1-19:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{berman_et_al:LIPIcs.ITCS.2018.19,
  author =	{Berman, Itay and Rothblum, Ron D. and Vaikuntanathan, Vinod},
  title =	{{Zero-Knowledge Proofs of Proximity}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{19:1--19:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.19},
  URN =		{urn:nbn:de:0030-drops-83575},
  doi =		{10.4230/LIPIcs.ITCS.2018.19},
  annote =	{Keywords: Property Testing, Interactive Proofs, Zero-Knowledge}
}
Document
Low-Complexity Cryptographic Hash Functions

Authors: Benny Applebaum, Naama Haramaty-Krasne, Yuval Ishai, Eyal Kushilevitz, and Vinod Vaikuntanathan

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
Cryptographic hash functions are efficiently computable functions that shrink a long input into a shorter output while achieving some of the useful security properties of a random function. The most common type of such hash functions is collision resistant hash functions (CRH), which prevent an efficient attacker from finding a pair of inputs on which the function has the same output.
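
The collision game defined above can be made tangible by shrinking the output until brute force succeeds (our own illustration; the keyed, truncated SHA-256 is not a construction from the paper, and real CRH candidates make this search infeasible):

import hashlib, os, itertools

def sample_crh(out_bytes=2):
    """Sample a shrinking hash from a keyed family; 16-bit outputs for the demo."""
    key = os.urandom(8)
    return lambda x: hashlib.sha256(key + x).digest()[:out_bytes]

h = sample_crh()
seen = {}
for i in itertools.count():
    x = i.to_bytes(8, "big")
    y = h(x)
    if y in seen:                      # birthday collision after ~2^8 tries
        print(seen[y], x)              # a colliding pair: h(x) == h(x')
        break
    seen[y] = x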

Cite as

Benny Applebaum, Naama Haramaty-Krasne, Yuval Ishai, Eyal Kushilevitz, and Vinod Vaikuntanathan. Low-Complexity Cryptographic Hash Functions. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 7:1-7:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{applebaum_et_al:LIPIcs.ITCS.2017.7,
  author =	{Applebaum, Benny and Haramaty-Krasne, Naama and Ishai, Yuval and Kushilevitz, Eyal and Vaikuntanathan, Vinod},
  title =	{{Low-Complexity Cryptographic Hash Functions}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{7:1--7:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.7},
  URN =		{urn:nbn:de:0030-drops-81901},
  doi =		{10.4230/LIPIcs.ITCS.2017.7},
  annote =	{Keywords: Cryptography, hash functions, complexity theory, coding theory}
}
  • Refine by Author
  • 9 Vaikuntanathan, Vinod
  • 2 Ball, Marshall
  • 2 Boyle, Elette
  • 2 Brakerski, Zvika
  • 2 Ishai, Yuval

  • Refine by Classification
  • 5 Theory of computation → Computational complexity and cryptography
  • 3 Theory of computation → Cryptographic primitives
  • 1 Security and privacy → Public key encryption
  • 1 Theory of computation → Cryptographic protocols
  • 1 Theory of computation → Problems, reductions and completeness

  • Refine by Keyword
  • 2 Cryptography
  • 1 Attribute-Based Encryption
  • 1 Backdoored Encryption
  • 1 Broadcast Encryption
  • 1 Compression

  • Refine by Type
  • 12 document

  • Refine by Publication Year
  • 3 2022
  • 2 2018
  • 2 2019
  • 2 2020
  • 1 2017
