8 Search Results for "Zhandry, Mark"


Document
Total NP Search Problems with Abundant Solutions

Authors: Jiawei Li

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We define a new complexity class TFAP to capture TFNP problems that possess abundant solutions for each input. We identify several problems across diverse fields that belong to TFAP, including WeakPigeon (finding a collision in a mapping from [2n] pigeons to [n] holes), Yamakawa-Zhandry’s problem [Takashi Yamakawa and Mark Zhandry, 2022], and all problems in TFZPP. Conversely, we introduce the notion of "semi-gluability" to characterize TFNP problems that may have a unique solution, or only a very limited number of solutions, on certain inputs. We prove that there is no black-box reduction from any "semi-gluable" problem to any TFAP problem. Furthermore, this result extends to rule out randomized black-box reductions in most cases. We show that the majority of common TFNP subclasses, including PPA, PPAD, PPADS, PPP, PLS, CLS, SOPL, and UEOPL, are "semi-gluable". This leads to a broad array of oracle separation results within the TFNP regime. As a corollary, UEOPL^O ⊈ PWPP^O relative to an oracle O.
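
For intuition about abundance of solutions, the sketch below is a brute-force collision finder for a toy WeakPigeon instance given as an explicit table (the paper works with circuits/oracles instead, so this is illustrative only):

import random

def find_collision(f, two_n):
    """Find i != j with f(i) == f(j), for f mapping [2n] pigeons into [n] holes.

    Because the range has only n values, scanning the 2n inputs must revisit
    some hole; by the pigeonhole principle at least n colliding pairs exist.
    """
    seen = {}
    for i in range(two_n):
        y = f(i)
        if y in seen:
            return seen[y], i
        seen[y] = i
    raise ValueError("f does not map into a range of size n")

# Toy instance: a random table from 16 pigeons to 8 holes.
n = 8
table = [random.randrange(n) for _ in range(2 * n)]
i, j = find_collision(lambda k: table[k], 2 * n)
assert i != j and table[i] == table[j]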

Cite as

Jiawei Li. Total NP Search Problems with Abundant Solutions. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 75:1-75:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{li:LIPIcs.ITCS.2024.75,
  author =	{Li, Jiawei},
  title =	{{Total NP Search Problems with Abundant Solutions}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{75:1--75:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.75},
  URN =		{urn:nbn:de:0030-drops-196031},
  doi =		{10.4230/LIPIcs.ITCS.2024.75},
  annote =	{Keywords: TFNP, Pigeonhole Principle}
}
Document
A Computational Separation Between Quantum No-Cloning and No-Telegraphing

Authors: Barak Nehoran and Mark Zhandry

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Two of the fundamental no-go theorems of quantum information are the no-cloning theorem (that it is impossible to make copies of general quantum states) and the no-teleportation theorem (the prohibition on telegraphing, or sending quantum states over classical channels without pre-shared entanglement). They are known to be equivalent, in the sense that a collection of quantum states is telegraphable if and only if it is clonable. Our main result suggests that this is not the case when computational efficiency is considered. We give a collection of quantum states and quantum oracles relative to which these states are efficiently clonable but not efficiently telegraphable. Given that the opposite scenario is impossible (states that can be telegraphed can always trivially be cloned), this gives the most complete quantum oracle separation possible between these two important no-go properties. We additionally study the complexity class clonableQMA, a subset of QMA whose witnesses are efficiently clonable. As a consequence of our main result, we give a quantum oracle separation between clonableQMA and the class QCMA, whose witnesses are restricted to classical strings. We also propose a candidate oracle-free promise problem separating these classes. We finally demonstrate an application of clonable-but-not-telegraphable states to cryptography, by showing how such states can be used to protect against key exfiltration.
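
As a quick numeric illustration of the no-cloning side of this equivalence (a minimal sketch, unrelated to the paper's computational separation): unitaries preserve inner products, while cloning would square them, so no single unitary can clone both |0⟩ and |+⟩.

import numpy as np

zero = np.array([1.0, 0.0])                   # |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)      # |+>

inner_before = zero @ plus                                 # <0|+>  = 1/sqrt(2)
inner_after = np.kron(zero, zero) @ np.kron(plus, plus)    # <00|++> = 1/2

# 0.707... vs 0.5: no unitary U can satisfy U|psi>|0> = |psi>|psi> for both states.
print(inner_before, inner_after)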

Cite as

Barak Nehoran and Mark Zhandry. A Computational Separation Between Quantum No-Cloning and No-Telegraphing. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 82:1-82:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{nehoran_et_al:LIPIcs.ITCS.2024.82,
  author =	{Nehoran, Barak and Zhandry, Mark},
  title =	{{A Computational Separation Between Quantum No-Cloning and No-Telegraphing}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{82:1--82:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.82},
  URN =		{urn:nbn:de:0030-drops-196104},
  doi =		{10.4230/LIPIcs.ITCS.2024.82},
  annote =	{Keywords: Cloning, telegraphing, no-cloning theorem, oracle separations}
}
Document
Quantum Money from Abelian Group Actions

Authors: Mark Zhandry

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We give a construction of public key quantum money, and even a strengthened version called quantum lightning, from abelian group actions, which can in turn be constructed from suitable isogenies over elliptic curves. We prove security in the generic group model for group actions under a plausible computational assumption, and develop a general toolkit for proving quantum security in this model. Along the way, we explore knowledge assumptions and algebraic group actions in the quantum setting, finding significant limitations of these assumptions/models compared to generic group actions.

Cite as

Mark Zhandry. Quantum Money from Abelian Group Actions. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 101:1-101:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{zhandry:LIPIcs.ITCS.2024.101,
  author =	{Zhandry, Mark},
  title =	{{Quantum Money from Abelian Group Actions}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{101:1--101:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.101},
  URN =		{urn:nbn:de:0030-drops-196294},
  doi =		{10.4230/LIPIcs.ITCS.2024.101},
  annote =	{Keywords: Quantum Money, Cryptographic Group Actions, Isogenies}
}
Document
The Space-Time Cost of Purifying Quantum Computations

Authors: Mark Zhandry

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
General quantum computation consists of unitary operations and also measurements. It is well known that intermediate quantum measurements can be deferred to the end of the computation, resulting in an equivalent purely unitary computation. While time efficient, this transformation blows up the space to linear in the running time, which could be super-polynomial for low-space algorithms. Fefferman and Remscrim (STOC'21) and Girish, Raz and Zhan (ICALP'21) show different transformations which are space efficient, but blow up the running time by a factor that is exponential in the space. This leaves the case of algorithms with small-but-super-logarithmic space as incurring a large blowup in either time or space complexity. We show that such a blowup is likely inherent, demonstrating that any "black-box" transformation which removes intermediate measurements must significantly blow up either space or time.
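
The deferred-measurement transformation mentioned above can be seen on a two-qubit toy circuit; the numpy sketch below (illustrative only, and unrelated to the paper's lower bound) replaces a mid-circuit measurement plus a classically controlled X with a CNOT and checks that the output distribution is unchanged.

import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])  # control = qubit 0

def sample(psi):
    """Measure both qubits of the 2-qubit state psi; return the outcome as 'q0q1'."""
    probs = np.abs(psi) ** 2
    out = rng.choice(4, p=probs / probs.sum())
    return f"{out >> 1}{out & 1}"

def with_intermediate_measurement():
    psi = np.kron(H, I2) @ np.array([1.0, 0, 0, 0])      # H on qubit 0, starting from |00>
    p1 = np.sum(np.abs(psi[2:]) ** 2)                    # Pr[qubit 0 measures to 1]
    if rng.random() < p1:                                # collapse, then classically
        psi = np.array([0, 0, *psi[2:]]) / np.sqrt(p1)   # controlled X on qubit 1
        psi = np.kron(I2, X) @ psi
    else:
        psi = np.array([*psi[:2], 0, 0]) / np.sqrt(1 - p1)
    return sample(psi)

def with_deferred_measurement():
    psi = np.kron(H, I2) @ np.array([1.0, 0, 0, 0])      # same circuit, but the conditional
    psi = CNOT @ psi                                     # X becomes a unitary CNOT
    return sample(psi)

# Both variants output "00" and "11", each with probability 1/2.
print([with_intermediate_measurement() for _ in range(5)])
print([with_deferred_measurement() for _ in range(5)])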

Cite as

Mark Zhandry. The Space-Time Cost of Purifying Quantum Computations. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 102:1-102:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{zhandry:LIPIcs.ITCS.2024.102,
  author =	{Zhandry, Mark},
  title =	{{The Space-Time Cost of Purifying Quantum Computations}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{102:1--102:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.102},
  URN =		{urn:nbn:de:0030-drops-196304},
  doi =		{10.4230/LIPIcs.ITCS.2024.102},
  annote =	{Keywords: Quantum computation, intermediate measurements, time-space trade-offs}
}
Document
Lattice-Inspired Broadcast Encryption and Succinct Ciphertext-Policy ABE

Authors: Zvika Brakerski and Vinod Vaikuntanathan

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
Broadcast encryption is one of the few remaining central cryptographic primitives that are not yet known to be achievable under a standard cryptographic assumption (excluding obfuscation-based constructions, see below). Furthermore, prior to this work, there were no known direct candidates for post-quantum-secure broadcast encryption. We propose a candidate ciphertext-policy attribute-based encryption (CP-ABE) scheme for circuits, where the ciphertext size depends only on the depth of the policy circuit (and not its size). This, in particular, gives us a Broadcast Encryption (BE) scheme where the sizes of the keys and ciphertexts have a poly-logarithmic dependence on the number of users. This goal was previously known to be achievable only assuming ideal multilinear maps (Boneh, Waters and Zhandry, Crypto 2014) or indistinguishability obfuscation (Boneh and Zhandry, Crypto 2014), and, in a concurrent work, from generic bilinear groups and the learning with errors (LWE) assumption (Agrawal and Yamada, Eurocrypt 2020). Our construction relies on techniques from lattice-based (and in particular LWE-based) cryptography. We analyze some attempts at cryptanalysis, but we are unable to provide a security proof.

Cite as

Zvika Brakerski and Vinod Vaikuntanathan. Lattice-Inspired Broadcast Encryption and Succinct Ciphertext-Policy ABE. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{brakerski_et_al:LIPIcs.ITCS.2022.28,
  author =	{Brakerski, Zvika and Vaikuntanathan, Vinod},
  title =	{{Lattice-Inspired Broadcast Encryption and Succinct Ciphertext-Policy ABE}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{28:1--28:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.28},
  URN =		{urn:nbn:de:0030-drops-156243},
  doi =		{10.4230/LIPIcs.ITCS.2022.28},
  annote =	{Keywords: Theoretical Cryptography, Broadcast Encryption, Attribute-Based Encryption, Lattice-Based Cryptography}
}
Document
Correlation-Intractable Hash Functions via Shift-Hiding

Authors: Alex Lombardi and Vinod Vaikuntanathan

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
A hash function family ℋ is correlation intractable for a t-input relation ℛ if, given a random function h chosen from ℋ, it is hard to find x_1,…,x_t such that ℛ(x_1,…,x_t,h(x_1),…,h(x_t)) is true. Among other applications, such hash functions are a crucial tool for instantiating the Fiat-Shamir heuristic in the plain model, including the only known NIZK for NP based on the learning with errors (LWE) problem (Peikert and Shiehian, CRYPTO 2019). We give a conceptually simple and generic construction of single-input CI hash functions from shift-hiding shiftable functions (Peikert and Shiehian, PKC 2018) satisfying an additional one-wayness property. This results in a clean abstract framework for instantiating CI, and also shows that a previously existing function family (PKC 2018) was already CI under the LWE assumption. In addition, our framework transparently generalizes to other settings, yielding new results:
- We show how to instantiate certain forms of multi-input CI under the LWE assumption. Prior constructions either relied on a very strong "brute-force-is-best" type of hardness assumption (Holmgren and Lombardi, FOCS 2018) or were restricted to "output-only" relations (Zhandry, CRYPTO 2016).
- We construct single-input CI hash functions from indistinguishability obfuscation (iO) and one-way permutations. Prior constructions relied essentially on variants of fully homomorphic encryption that are impossible to construct from such primitives. This result also generalizes to more expressive variants of multi-input CI under iO and additional standard assumptions.
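
For concreteness, here is a toy version of the single-input correlation-intractability game for an output-only relation (the hash family, the relation, and the 16-bit output length are illustrative choices, not the paper's construction):

import hashlib, os

OUT_BITS = 16   # deliberately tiny so the brute-force adversary below terminates

def sample_h():
    """Sample h from a toy keyed family: h_k(x) = top OUT_BITS bits of SHA-256(k || x)."""
    k = os.urandom(16)
    return lambda x: int.from_bytes(hashlib.sha256(k + x).digest(), "big") >> (256 - OUT_BITS)

def R(x: bytes, y: int) -> bool:
    return y == 0   # a sparse, output-only relation

# CI game: the challenger samples h; the adversary wins by finding x with R(x, h(x)).
# With only 16 output bits, brute force wins quickly; a CI family at cryptographic
# output lengths is precisely one for which no efficient adversary can do this.
h = sample_h()
x = next(i.to_bytes(4, "big") for i in range(1 << 20)
         if R(i.to_bytes(4, "big"), h(i.to_bytes(4, "big"))))
print("adversary wins with x =", x.hex(), "and h(x) =", h(x))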

Cite as

Alex Lombardi and Vinod Vaikuntanathan. Correlation-Intractable Hash Functions via Shift-Hiding. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 102:1-102:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{lombardi_et_al:LIPIcs.ITCS.2022.102,
  author =	{Lombardi, Alex and Vaikuntanathan, Vinod},
  title =	{{Correlation-Intractable Hash Functions via Shift-Hiding}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{102:1--102:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.102},
  URN =		{urn:nbn:de:0030-drops-156981},
  doi =		{10.4230/LIPIcs.ITCS.2022.102},
  annote =	{Keywords: Cryptographic hash functions, correlation intractability}
}
Document
Code Offset in the Exponent

Authors: Luke Demarest, Benjamin Fuller, and Alexander Russell

Published in: LIPIcs, Volume 199, 2nd Conference on Information-Theoretic Cryptography (ITC 2021)


Abstract
Fuzzy extractors derive stable keys from noisy sources. They are a fundamental tool for key derivation from biometric sources. This work introduces a new construction, code offset in the exponent. This construction is the first reusable fuzzy extractor that simultaneously supports structured, low entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications - key derivation from biometrics and physical unclonable functions - which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999). A random codeword of a linear error-correcting code is used as a one-time pad for a sampled value from the noisy source. Rather than encoding this directly, code offset in the exponent encodes by exponentiation of a generator in a cryptographically strong group. We introduce and characterize a condition on noisy sources that directly translates to security of our construction in the generic group model. Our condition requires the inner product between the source distribution and all vectors in the null space of the code to be unpredictable.
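
A minimal toy sketch of the encoding and recovery idea, assuming a tiny prime-order group (the order-11 subgroup of Z_23^*) and a repetition code in place of the cryptographically strong groups and general linear codes used in the paper:

import random
from collections import Counter

P, q, g = 23, 11, 4     # g generates the order-11 subgroup of Z_23^* (toy group)
n = 7                   # length-7 repetition code over Z_11: codewords (m, m, ..., m)

def enroll(w):
    """Code offset in the exponent: pad the reading w with a random codeword and
    publish the padded value componentwise in the exponent."""
    m = random.randrange(q)
    pub = [pow(g, (w[j] + m) % q, P) for j in range(n)]
    return pub, pow(g, m, P)        # (public helper string, value a key is derived from)

def recover(pub, w2):
    """Strip a close second reading w2 from the pad: positions where w2 agrees with w
    yield g^m, so the repetition code decodes 'in the exponent' by majority vote."""
    shifted = [pub[j] * pow(g, (q - w2[j]) % q, P) % P for j in range(n)]
    return Counter(shifted).most_common(1)[0][0]

w = [random.randrange(q) for _ in range(n)]     # enrollment reading of the noisy source
w2 = list(w); w2[0] = (w[0] + 1) % q            # later reading, differing in one symbol
pub, key = enroll(w)
assert recover(pub, w2) == key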

Cite as

Luke Demarest, Benjamin Fuller, and Alexander Russell. Code Offset in the Exponent. In 2nd Conference on Information-Theoretic Cryptography (ITC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 199, pp. 15:1-15:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{demarest_et_al:LIPIcs.ITC.2021.15,
  author =	{Demarest, Luke and Fuller, Benjamin and Russell, Alexander},
  title =	{{Code Offset in the Exponent}},
  booktitle =	{2nd Conference on Information-Theoretic Cryptography (ITC 2021)},
  pages =	{15:1--15:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-197-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{199},
  editor =	{Tessaro, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2021.15},
  URN =		{urn:nbn:de:0030-drops-143348},
  doi =		{10.4230/LIPIcs.ITC.2021.15},
  annote =	{Keywords: fuzzy extractors, code offset, learning with errors, error-correction, generic group model}
}
Document
Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption

Authors: James Bartusek, Yuval Ishai, Aayush Jain, Fermi Ma, Amit Sahai, and Mark Zhandry

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
An affine determinant program ADP: {0,1}^n → {0,1} is specified by a tuple (A,B_1,…,B_n) of square matrices over 𝔽_q and a function Eval: 𝔽_q → {0,1}, and evaluated on x ∈ {0,1}^n by computing Eval(det(A + ∑_{i∈[n]} x_i B_i)). In this work, we suggest ADPs as a new framework for building general-purpose obfuscation and witness encryption. We provide evidence to suggest that constructions following our ADP-based framework may one day yield secure, practically feasible obfuscation. As a proof-of-concept, we give a candidate ADP-based construction of indistinguishability obfuscation (iO) for all circuits along with a simple witness encryption candidate. We provide cryptanalysis demonstrating that our schemes resist several potential attacks, and leave further cryptanalysis to future work. Lastly, we explore practically feasible applications of our witness encryption candidate, such as public-key encryption with near-optimal key generation.
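
A minimal sketch of ADP evaluation over a toy prime field, with Eval chosen to be the nonzero-determinant predicate and the matrices sampled at random (the paper treats Eval as a parameter and its candidates build the matrices in a structured way; everything below is illustrative):

import random

q, n, d = 101, 3, 4     # prime modulus, input length, matrix dimension (toy choices)
random.seed(1)
A = [[random.randrange(q) for _ in range(d)] for _ in range(d)]
B = [[[random.randrange(q) for _ in range(d)] for _ in range(d)] for _ in range(n)]

def det_mod_q(M):
    """Determinant over F_q by Gaussian elimination (q prime, so inverses via Fermat)."""
    M = [row[:] for row in M]
    det = 1
    for col in range(d):
        pivot = next((r for r in range(col, d) if M[r][col] != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det = det * M[col][col] % q
        inv = pow(M[col][col], q - 2, q)
        for r in range(col + 1, d):
            factor = M[r][col] * inv % q
            M[r] = [(M[r][c] - factor * M[col][c]) % q for c in range(d)]
    return det % q

def eval_adp(x):
    """Compute Eval(det(A + sum_i x_i B_i)) on x in {0,1}^n, with Eval(z) = [z != 0]."""
    M = [[(A[r][c] + sum(x[i] * B[i][r][c] for i in range(n))) % q for c in range(d)]
         for r in range(d)]
    return int(det_mod_q(M) != 0)

print([eval_adp([(t >> i) & 1 for i in range(n)]) for t in range(2 ** n)])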

Cite as

James Bartusek, Yuval Ishai, Aayush Jain, Fermi Ma, Amit Sahai, and Mark Zhandry. Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 82:1-82:39, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bartusek_et_al:LIPIcs.ITCS.2020.82,
  author =	{Bartusek, James and Ishai, Yuval and Jain, Aayush and Ma, Fermi and Sahai, Amit and Zhandry, Mark},
  title =	{{Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{82:1--82:39},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.82},
  URN =		{urn:nbn:de:0030-drops-117679},
  doi =		{10.4230/LIPIcs.ITCS.2020.82},
  annote =	{Keywords: Obfuscation, Witness Encryption}
}