13 Search Results for "Liu, Yanyi"


Document
Towards Free Lunch Derandomization from Necessary Assumptions (And OWFs)

Authors: Marshall Ball, Lijie Chen, and Roei Tell

Published in: LIPIcs, Volume 339, 40th Computational Complexity Conference (CCC 2025)


Abstract
The question of optimal derandomization, introduced by Doron et al. (JACM 2022), has garnered significant recent attention. Recent works have shown conditional superfast derandomization algorithms, as well as conditional impossibility results and barriers to obtaining superfast derandomization using certain black-box techniques. Of particular interest is the extreme high end, which focuses on "free lunch" derandomization, as suggested by Chen and Tell (FOCS 2021). This is derandomization that incurs essentially no time overhead and errs only on inputs that are infeasible to find. Constructing such algorithms is challenging, and so far no results have followed the one in their initial work. In that result, the algorithm is essentially the classical Nisan-Wigderson generator, and it relied on an ad-hoc assumption asserting the existence of a function that is non-batch-computable over all polynomial-time samplable distributions. In this work we deduce free lunch derandomization from a variety of natural hardness assumptions. In particular, we do not resort to non-batch-computability, and the common denominator of all our assumptions is hardness over all polynomial-time samplable distributions, which is necessary for the conclusion. The main technical components in our proofs are constructions of new and superfast targeted generators, which completely eliminate the time overheads inherent to all previously known constructions. In particular, we present an alternative construction of the targeted generator by Chen and Tell (FOCS 2021), which is faster than the original construction and also more natural and technically intuitive. These contributions significantly strengthen the evidence for the possibility of free lunch derandomization, distill the assumptions required for such a result, and provide the first set of dedicated technical tools that are useful for studying the question.
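
For orientation, the free-lunch requirement described above can be phrased schematically as follows; this is our paraphrase, and the exact overhead and error quantifiers vary across the works cited. For every probabilistic algorithm A running in time T(n) that computes some function with bounded error, there should be a deterministic algorithm D running in time T(n)^{1+o(1)} such that

  \Pr_{x \leftarrow S(1^n)} \big[ D(x) \neq A(x) \big] \le \mathrm{negl}(n) \quad \text{for every probabilistic polynomial-time sampler } S,

where A(x) denotes the high-probability output of A on x.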

Cite as

Marshall Ball, Lijie Chen, and Roei Tell. Towards Free Lunch Derandomization from Necessary Assumptions (And OWFs). In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 31:1-31:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{ball_et_al:LIPIcs.CCC.2025.31,
  author =	{Ball, Marshall and Chen, Lijie and Tell, Roei},
  title =	{{Towards Free Lunch Derandomization from Necessary Assumptions (And OWFs)}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{31:1--31:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.31},
  URN =		{urn:nbn:de:0030-drops-237259},
  doi =		{10.4230/LIPIcs.CCC.2025.31},
  annote =	{Keywords: Pseudorandomness, Derandomization}
}
Document
Witness Encryption and NP-Hardness of Learning

Authors: Halley Goldberg and Valentine Kabanets

Published in: LIPIcs, Volume 339, 40th Computational Complexity Conference (CCC 2025)


Abstract
We study connections between two fundamental questions in theoretical computer science. (1) Is witness encryption possible for NP [Sanjam Garg et al., 2013]? That is, given an instance x of an NP-complete language L, can one encrypt a secret message with security contingent on the ability to provide a witness for x ∈ L? (2) Is computational learning (in the sense of [Leslie G. Valiant, 1984; Michael J. Kearns et al., 1994]) hard for NP? That is, is there a polynomial-time reduction from instances of L to instances of learning? Our main contribution is that certain formulations of NP-hardness of learning characterize the existence of witness encryption for NP. More specifically, we show:
- Witness encryption for a language L ∈ NP is equivalent to a half-Levin reduction from L to the Computational Gap Learning problem (denoted CGL [Benny Applebaum et al., 2008]), where a half-Levin reduction is the same as a Levin reduction but is only required to preserve witnesses in one direction, and CGL formalizes agnostic learning as a decision problem.
We show versions of the statement above for witness encryption secure against non-uniform and uniform adversaries. We also show that witness encryption for NP with ciphertexts of logarithmic length, together with a circuit lower bound for E, is equivalent to NP-hardness of a generalized promise version of MCSP. We complement the above with a number of unconditional NP-hardness results for agnostic PAC learning. Extending a result of [Shuichi Hirahara, 2022] to the standard setting of Boolean circuits, we show NP-hardness of "semi-proper" learning. Namely:
- For some polynomial s, it is NP-hard to agnostically learn circuits of size s(n) by circuits of size s(n)⋅n^{1/(log log n)^O(1)}.
Looking beyond the computational model of standard Boolean circuits enables us to prove NP-hardness of improper learning (i.e., without a restriction on the size of the hypothesis returned by the learner). We obtain such results for:
- learning circuits with oracle access to a given randomly sampled string, and
- learning RAM programs.
In particular, we show that a variant of MINLT [Ker-I Ko, 1991] for RAM programs is NP-hard with parameters corresponding to the setting of improper learning. We view these results as partial progress toward the ultimate goal of showing NP-hardness of learning Boolean circuits in an improper setting. Lastly, we give some consequences of NP-hardness of learning for private- and public-key cryptography. Improving a main result of [Benny Applebaum et al., 2008], we show that if improper agnostic PAC learning is NP-hard under a randomized non-adaptive reduction (with some restrictions), then NP ⊈ BPP implies the existence of i.o. one-way functions. In contrast, if CGL is NP-hard under a half-Levin reduction, then NP ⊈ BPP implies the existence of i.o. public-key encryption.
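
For reference, the witness-encryption notion in question (1), in the standard formulation of Garg et al. (2013), asks for a pair of algorithms (Enc, Dec) such that (our summary, not the paper's exact definition):

  \text{Correctness:}\quad R_L(x, w) = 1 \implies \mathsf{Dec}(x, w, \mathsf{Enc}(x, m)) = m
  \text{Security:}\quad x \notin L \implies \mathsf{Enc}(x, m_0) \approx_c \mathsf{Enc}(x, m_1) \ \text{for all equal-length } m_0, m_1.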

Cite as

Halley Goldberg and Valentine Kabanets. Witness Encryption and NP-Hardness of Learning. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 34:1-34:43, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{goldberg_et_al:LIPIcs.CCC.2025.34,
  author =	{Goldberg, Halley and Kabanets, Valentine},
  title =	{{Witness Encryption and NP-Hardness of Learning}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{34:1--34:43},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.34},
  URN =		{urn:nbn:de:0030-drops-237281},
  doi =		{10.4230/LIPIcs.CCC.2025.34},
  annote =	{Keywords: agnostic PAC learning, witness encryption, NP-hardness}
}
Document
Data Reconstruction: When You See It and When You Don't

Authors: Edith Cohen, Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer, and Eliad Tsfadia

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
We revisit the fundamental question of formally defining what constitutes a reconstruction attack. While often clear from context, our exploration reveals that a precise definition is much more nuanced than it appears, to the extent that a single all-encompassing definition may not exist. Thus, we employ a different strategy and aim to "sandwich" the concept of reconstruction attacks by addressing two complementary questions: (i) What conditions guarantee that a given system is protected against such attacks? (ii) Under what circumstances does a given attack clearly indicate that a system is not protected? More specifically:
- We introduce a new definitional paradigm - Narcissus Resiliency - to formulate a security definition for protection against reconstruction attacks. This paradigm has a self-referential nature that enables it to circumvent shortcomings of previously studied notions of security. Furthermore, as a side effect, we demonstrate that Narcissus resiliency captures as special cases multiple well-studied concepts, including differential privacy and other security notions of one-way functions and encryption schemes.
- We formulate a link between reconstruction attacks and Kolmogorov complexity. This allows us to put forward a criterion for evaluating when such attacks are convincingly successful.

Cite as

Edith Cohen, Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer, and Eliad Tsfadia. Data Reconstruction: When You See It and When You Don't. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 39:1-39:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{cohen_et_al:LIPIcs.ITCS.2025.39,
  author =	{Cohen, Edith and Kaplan, Haim and Mansour, Yishay and Moran, Shay and Nissim, Kobbi and Stemmer, Uri and Tsfadia, Eliad},
  title =	{{Data Reconstruction: When You See It and When You Don't}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{39:1--39:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.39},
  URN =		{urn:nbn:de:0030-drops-226674},
  doi =		{10.4230/LIPIcs.ITCS.2025.39},
  annote =	{Keywords: differential privacy, reconstruction}
}
Document
On White-Box Learning and Public-Key Encryption

Authors: Yanyi Liu, Noam Mazor, and Rafael Pass

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
We consider a generalization of the Learning with Errors problem, referred to as the white-box learning problem: you are given the code of a sampler that with high probability produces samples of the form (y, f(y) + ε), where ε is small and f is computable by a polynomial-size circuit, and the computational task consists of outputting a polynomial-size circuit C that, with probability, say, 1/3 over a new sample y' drawn from the same distribution, approximates f(y') (i.e., |C(y')-f(y')| is small). This problem can be thought of as generalizing the Learning with Errors problem (LWE) from linear functions f to polynomial-size computable functions. We demonstrate that worst-case hardness of the white-box learning problem, conditioned on the instances satisfying a notion of computational shallowness (a concept from the study of Kolmogorov complexity), not only suffices to get public-key encryption but is also necessary; as such, this yields the first problem whose worst-case hardness characterizes the existence of public-key encryption. Additionally, our results highlight to what extent LWE "overshoots" the task of public-key encryption. We complement these results by noting that worst-case hardness of the same problem, but restricting the learner to only black-box access to the sampler, characterizes one-way functions.
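
In display form, the white-box learning task above reads (restating the abstract, with δ standing in for the unspecified "small" approximation error): given the code of a sampler producing pairs (y, f(y) + ε), output a polynomial-size circuit C such that

  \Pr_{y'} \big[ \, |C(y') - f(y')| \le \delta \, \big] \ \ge \ 1/3,

where y' is a fresh sample drawn from the same distribution as y.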

Cite as

Yanyi Liu, Noam Mazor, and Rafael Pass. On White-Box Learning and Public-Key Encryption. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 73:1-73:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{liu_et_al:LIPIcs.ITCS.2025.73,
  author =	{Liu, Yanyi and Mazor, Noam and Pass, Rafael},
  title =	{{On White-Box Learning and Public-Key Encryption}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{73:1--73:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.73},
  URN =		{urn:nbn:de:0030-drops-227012},
  doi =		{10.4230/LIPIcs.ITCS.2025.73},
  annote =	{Keywords: Public-Key Encryption, White-Box Learning}
}
Document
Leakage-Resilient Hardness Equivalence to Logspace Derandomization

Authors: Yakov Shalunov

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
Efficient derandomization has long been a goal in complexity theory, and a major recent result by Yanyi Liu and Rafael Pass identifies a new class of hardness assumptions under which it is possible to perform time-bounded derandomization efficiently: that of "leakage-resilient hardness." They identify a specific form of this assumption which is equivalent to prP = prBPP. In this paper, we pursue an equivalence to derandomization of prBP⋅L (logspace promise problems with two-way randomness) through techniques analogous to those of Liu and Pass. We obtain an equivalence between a similar "leakage-resilient hardness" assumption and a statement slightly stronger than derandomization of prBP⋅L, namely that of finding "non-no" instances of "promise search problems."

Cite as

Yakov Shalunov. Leakage-Resilient Hardness Equivalence to Logspace Derandomization. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 83:1-83:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{shalunov:LIPIcs.MFCS.2024.83,
  author =	{Shalunov, Yakov},
  title =	{{Leakage-Resilient Hardness Equivalence to Logspace Derandomization}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{83:1--83:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.83},
  URN =		{urn:nbn:de:0030-drops-206395},
  doi =		{10.4230/LIPIcs.MFCS.2024.83},
  annote =	{Keywords: Derandomization, logspace computation, leakage-resilient hardness, pseudorandom generators}
}
Document
Quantum Automating TC⁰-Frege Is LWE-Hard

Authors: Noel Arteche, Gaia Carenini, and Matthew Gray

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We prove the first hardness results against efficient proof search by quantum algorithms. We show that under Learning with Errors (LWE), the standard lattice-based cryptographic assumption, no quantum algorithm can weakly automate TC⁰-Frege. This extends the line of results of Krajíček and Pudlák (Information and Computation, 1998), Bonet, Pitassi, and Raz (FOCS, 1997), and Bonet, Domingo, Gavaldà, Maciel, and Pitassi (Computational Complexity, 2004), who showed that Extended Frege, TC⁰-Frege, and AC⁰-Frege, respectively, cannot be weakly automated by classical algorithms if either the RSA cryptosystem or the Diffie-Hellman key exchange protocol is secure. To the best of our knowledge, this is the first interaction between quantum computation and propositional proof search.
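
For context, the weak automatability notion referenced here is, roughly (following Bonet, Pitassi, and Raz; our paraphrase), the following: a proof system P is weakly automatable if there exist a proof system Q and an algorithm that, given a tautology φ, outputs a Q-proof of φ in time

  \mathrm{poly}\big( |\varphi| + s_P(\varphi) \big),

where s_P(φ) denotes the size of the smallest P-proof of φ.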

Cite as

Noel Arteche, Gaia Carenini, and Matthew Gray. Quantum Automating TC⁰-Frege Is LWE-Hard. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 15:1-15:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{arteche_et_al:LIPIcs.CCC.2024.15,
  author =	{Arteche, Noel and Carenini, Gaia and Gray, Matthew},
  title =	{{Quantum Automating TC⁰-Frege Is LWE-Hard}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{15:1--15:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.15},
  URN =		{urn:nbn:de:0030-drops-204117},
  doi =		{10.4230/LIPIcs.CCC.2024.15},
  annote =	{Keywords: automatability, post-quantum cryptography, feasible interpolation}
}
Document
Track A: Algorithms, Complexity and Games
Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity

Authors: Zhenjian Lu and Rahul Santhanam

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We develop new characterizations of Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland by the intractability of conditional Kolmogorov complexity 𝖪 and conditional probabilistic time-bounded Kolmogorov complexity pK^t. In our first set of results, we show that:
- NP ⊆ BPP iff pK^t(x ∣ y) can be computed efficiently in the worst case when t is sublinear in |x| + |y|;
- DistNP ⊆ HeurBPP iff pK^t(x ∣ y) can be computed efficiently over all polynomial-time samplable distributions when t is sublinear in |x| + |y|; and
- infinitely-often one-way functions fail to exist iff pK^t(x ∣ y) can be computed efficiently over all polynomial-time samplable distributions for t a sufficiently large polynomial in |x| + |y|.
These results characterize Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland purely in terms of the tractability of conditional pK^t. Notably, the results imply that Pessiland fails to exist iff the average-case intractability of conditional pK^t is insensitive to the difference between sublinear and polynomially bounded t. As a corollary, while we prove conditional pK^t to be NP-hard for sublinear t, showing NP-hardness for large enough polynomially bounded t would eliminate Pessiland as a possible world of average-case complexity. In our second set of results, we characterize Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland by the distributional tractability of a natural problem that is provably intractable in the worst case, namely approximating conditional Kolmogorov complexity. We show that:
- NP ⊆ BPP iff conditional Kolmogorov complexity can be approximated in the semi-worst case; and
- DistNP ⊆ HeurBPP iff conditional Kolmogorov complexity can be approximated on average over all independent polynomial-time samplable distributions.
It follows from a result by Ilango, Ren, and Santhanam (STOC 2022) that infinitely-often one-way functions fail to exist iff conditional Kolmogorov complexity can be approximated on average over all polynomial-time samplable distributions. Together, these results yield the claimed characterizations. Our techniques, combined with previous work, also yield a characterization of auxiliary-input one-way functions and equivalences between different average-case tractability assumptions for conditional Kolmogorov complexity and its variants. Our results suggest that novel average-case tractability assumptions, such as tractability in the semi-worst case and over independent polynomial-time samplable distributions, might be worthy of further study.
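
For reference, the probabilistic time-bounded measure pK^t used throughout is commonly defined as follows (our paraphrase of the standard definition, with r a random string of length polynomial in t and U a fixed universal machine):

  \mathrm{pK}^t(x \mid y) = \min \Big\{ k : \Pr_{r} \big[ \exists \Pi, \ |\Pi| \le k, \ U(\Pi, y, r) \text{ outputs } x \text{ within } t \text{ steps} \big] \ge 2/3 \Big\}.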

Cite as

Zhenjian Lu and Rahul Santhanam. Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 110:1-110:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{lu_et_al:LIPIcs.ICALP.2024.110,
  author =	{Lu, Zhenjian and Santhanam, Rahul},
  title =	{{Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{110:1--110:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.110},
  URN =		{urn:nbn:de:0030-drops-202538},
  doi =		{10.4230/LIPIcs.ICALP.2024.110},
  annote =	{Keywords: meta-complexity, Kolmogorov complexity, one-way functions, average-case complexity}
}
Document
Leakage-Resilient Hardness vs Randomness

Authors: Yanyi Liu and Rafael Pass

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
A central open problem in complexity theory concerns the question of whether all efficient randomized algorithms can be simulated by efficient deterministic algorithms. The celebrated "hardness vs. randomness" paradigm pioneered by Blum-Micali (SIAM JoC’84), Yao (FOCS’84) and Nisan-Wigderson (JCSS’94) presents hardness assumptions under which, e.g., prBPP = prP (so-called "high-end" derandomization) or prBPP ⊆ prSUBEXP (so-called "low-end" derandomization), and more generally, under which prBPP ⊆ prDTIME(𝒞) where 𝒞 is a "nice" class (closed under composition with a polynomial); but these hardness assumptions are not known to also be necessary for such derandomization. In this work, following the recent work by Chen and Tell (FOCS’21) that considers "almost-all-input" hardness of a function f (i.e., hardness of computing f on all but finitely many inputs), we consider "almost-all-input" leakage-resilient hardness of a function f - that is, hardness of computing f(x) even given, say, √|x| bits of leakage of f(x). We show that leakage-resilient hardness characterizes derandomization of prBPP (i.e., gives a both necessary and sufficient condition for derandomization), both in the high-end and in the low-end setting. In more detail, we show that there exists a constant c such that for every function T, the following are equivalent:
- prBPP ⊆ prDTIME(poly(T(poly(n))));
- there exists a poly(T(poly(n)))-time computable function f : {0,1}ⁿ → {0,1}ⁿ that is almost-all-input leakage-resilient hard with respect to n^c-time probabilistic algorithms.
As far as we know, this is the first assumption that characterizes derandomization in both the low-end and the high-end regime. Additionally, our characterization naturally extends to derandomization of prMA and to average-case derandomization, by appropriately weakening the requirements on the function f. In particular, for the case of average-case (a.k.a. "effective") derandomization, we no longer require the function to be almost-all-input hard, but simply to satisfy the more standard notion of average-case leakage-resilient hardness (w.r.t. every samplable distribution), whereas for derandomization of prMA, we instead consider leakage-resilience for relations.
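
Spelled out in symbols, the hardness notion in the second item reads (restating the abstract's description):

  \forall \text{ probabilistic } n^c\text{-time } (\mathsf{leak}, A) \text{ with } |\mathsf{leak}(x, f(x))| = \sqrt{|x|}: \quad A\big(x, \mathsf{leak}(x, f(x))\big) \neq f(x) \ \text{for all but finitely many } x.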

Cite as

Yanyi Liu and Rafael Pass. Leakage-Resilient Hardness vs Randomness. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{liu_et_al:LIPIcs.CCC.2023.32,
  author =	{Liu, Yanyi and Pass, Rafael},
  title =	{{Leakage-Resilient Hardness vs Randomness}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.32},
  URN =		{urn:nbn:de:0030-drops-183022},
  doi =		{10.4230/LIPIcs.CCC.2023.32},
  annote =	{Keywords: Derandomization, Leakage-Resilient Hardness}
}
Document
Characterizing Derandomization Through Hardness of Levin-Kolmogorov Complexity

Authors: Yanyi Liu and Rafael Pass

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
A central open problem in complexity theory concerns the question of whether all efficient randomized algorithms can be simulated by efficient deterministic algorithms. We consider this problem in the context of promise problems (i.e., the prBPP vs. prP problem) and show that for all sufficiently large constants c, the following are equivalent:
- prBPP = prP.
- For every BPTIME(n^c) algorithm M and every sufficiently long z ∈ {0,1}ⁿ, there exists some x ∈ {0,1}ⁿ such that M fails to decide whether Kt(x∣z) is "very large" (≥ n-1) or "very small" (≤ O(log n)).
Here Kt(x∣z) denotes the Levin-Kolmogorov complexity of x conditioned on z. As far as we are aware, this yields the first full characterization of when prBPP = prP through the hardness of some class of problems. Previous hardness assumptions used for derandomization only provide a one-sided implication.
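
Explicitly, for a fixed universal machine U, the standard definition is

  Kt(x \mid z) = \min_{\Pi, \, t} \big\{ |\Pi| + \lceil \log t \rceil \ : \ U(\Pi, z) \text{ outputs } x \text{ within } t \text{ steps} \big\}.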

Cite as

Yanyi Liu and Rafael Pass. Characterizing Derandomization Through Hardness of Levin-Kolmogorov Complexity. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 35:1-35:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{liu_et_al:LIPIcs.CCC.2022.35,
  author =	{Liu, Yanyi and Pass, Rafael},
  title =	{{Characterizing Derandomization Through Hardness of Levin-Kolmogorov Complexity}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{35:1--35:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.35},
  URN =		{urn:nbn:de:0030-drops-165975},
  doi =		{10.4230/LIPIcs.CCC.2022.35},
  annote =	{Keywords: Derandomization, Kolmogorov Complexity, Hitting Set Generators}
}
Document
On One-Way Functions from NP-Complete Problems

Authors: Yanyi Liu and Rafael Pass

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
We present the first natural NP-complete problem whose average-case hardness w.r.t. the uniform distribution over instances is equivalent to the existence of one-way functions (OWFs). The problem, which originated in the 1960s, is the Conditional Time-Bounded Kolmogorov Complexity Problem: let K^t(x∣z) be the length of the shortest "program" that, given the "auxiliary input" z, outputs the string x within time t(|x|), and let McK^tP[ζ] be the set of strings (x,z,k) where |z| = ζ(|x|), |k| = log |x| and K^t(x∣z) < k, where, for our purposes, a "program" is defined as a RAM machine. Our main result shows that for every polynomial t(n) ≥ n², there exists some polynomial ζ such that McK^tP[ζ] is NP-complete. We additionally extend the result of Liu-Pass (FOCS'20) to show that for every polynomial t(n) ≥ 1.1n and every polynomial ζ(⋅), mild average-case hardness of McK^tP[ζ] is equivalent to the existence of OWFs. Taken together, these results provide the following crisp characterization of what is required to base OWFs on NP ⊈ BPP: there exist concrete polynomials t, ζ such that "basing OWFs on NP ⊈ BPP" is equivalent to providing a "worst-case to (mild) average-case reduction for McK^tP[ζ]". In other words, the "holy grail" of cryptography (i.e., basing OWFs on NP ⊈ BPP) is equivalent to a basic question in algorithmic information theory. As an independent contribution, we show that our NP-completeness result can be used to shed new light on the feasibility of the polynomial-time bounded symmetry of information assertion (Kolmogorov'68).
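
In display form, the objects defined above are (directly restating the abstract, with Π ranging over RAM programs):

  K^t(x \mid z) = \min \big\{ |\Pi| : \Pi(z) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \big\}
  \mathrm{McK}^t\mathrm{P}[\zeta] = \big\{ (x, z, k) : |z| = \zeta(|x|), \ |k| = \log |x|, \ K^t(x \mid z) < k \big\}.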

Cite as

Yanyi Liu and Rafael Pass. On One-Way Functions from NP-Complete Problems. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 36:1-36:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{liu_et_al:LIPIcs.CCC.2022.36,
  author =	{Liu, Yanyi and Pass, Rafael},
  title =	{{On One-Way Functions from NP-Complete Problems}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{36:1--36:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.36},
  URN =		{urn:nbn:de:0030-drops-165981},
  doi =		{10.4230/LIPIcs.CCC.2022.36},
  annote =	{Keywords: One-way Functions, NP-Completeness, Kolmogorov Complexity}
}
Document
Quantum Meets the Minimum Circuit Size Problem

Authors: Nai-Hui Chia, Chi-Ning Chou, Jiayu Zhang, and Ruizhe Zhang

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
In this work, we initiate the study of the Minimum Circuit Size Problem (MCSP) in the quantum setting. MCSP is the problem of computing the circuit complexity of Boolean functions. It is a fascinating problem in complexity theory - its hardness is mysterious, and a better understanding of its hardness can have surprising implications for many fields in computer science. We first define and investigate the basic complexity-theoretic properties of minimum quantum circuit size problems for three natural objects: Boolean functions, unitaries, and quantum states. We show that these problems are not trivially in NP but in QCMA (or have QCMA protocols). Next, we explore the relations between the three quantum MCSPs and their variants. We discover that some reductions that are not known for classical MCSP exist for the quantum MCSPs for unitaries and states, e.g., search-to-decision reductions and self-reductions. Finally, we systematically generalize results known for classical MCSP to the quantum setting (including quantum cryptography, quantum learning theory, quantum circuit lower bounds, and quantum fine-grained complexity) and also find new connections to tomography and quantum gravity. Due to the fundamental differences between classical and quantum circuits, most of our results require extra care and reveal properties and phenomena unique to the quantum setting. Our findings could be of interest for future studies, and we pose several open problems for further exploration along this direction.
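
For reference, the classical decision problem underlying all of the quantum variants studied here is

  \mathrm{MCSP} = \big\{ (T, s) : T \in \{0,1\}^{2^n} \text{ is the truth table of an } n\text{-variable Boolean function computable by a circuit of size at most } s \big\}.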

Cite as

Nai-Hui Chia, Chi-Ning Chou, Jiayu Zhang, and Ruizhe Zhang. Quantum Meets the Minimum Circuit Size Problem. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 47:1-47:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{chia_et_al:LIPIcs.ITCS.2022.47,
  author =	{Chia, Nai-Hui and Chou, Chi-Ning and Zhang, Jiayu and Zhang, Ruizhe},
  title =	{{Quantum Meets the Minimum Circuit Size Problem}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{47:1--47:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.47},
  URN =		{urn:nbn:de:0030-drops-156433},
  doi =		{10.4230/LIPIcs.ITCS.2022.47},
  annote =	{Keywords: Quantum Computation, Quantum Complexity, Minimum Circuit Size Problem}
}
Document
One-Way Functions and a Conditional Variant of MKTP

Authors: Eric Allender, Mahdi Cheraghchi, Dimitrios Myrisiotis, Harsha Tirumala, and Ilya Volkovich

Published in: LIPIcs, Volume 213, 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)


Abstract
One-way functions (OWFs) are central objects of study in cryptography and computational complexity theory. In a seminal work, Liu and Pass (FOCS 2020) proved that the average-case hardness of computing time-bounded Kolmogorov complexity is equivalent to the existence of OWFs. It remained an open problem to establish such an equivalence for the average-case hardness of some natural NP-complete problem. In this paper, we make progress on this question by studying a conditional variant of the Minimum KT-complexity Problem (MKTP), which we call McKTP, as follows.
1) First, we prove that if McKTP is average-case hard on a polynomial fraction of its instances, then there exist OWFs.
2) Then, we observe that McKTP is NP-complete under polynomial-time randomized reductions.
3) Finally, we prove that the existence of OWFs implies the nontrivial average-case hardness of McKTP.
Thus the existence of OWFs is inextricably linked to the average-case hardness of this NP-complete problem. In fact, building on recently announced results of Ren and Santhanam [Rahul Ilango et al., 2021], we show that McKTP is hard-on-average if and only if there are logspace-computable OWFs.
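
For reference, the KT measure behind MKTP is defined roughly as follows (following Allender et al.; our paraphrase), with U a fixed universal machine given oracle access to the program Π:

  \mathrm{KT}(x) = \min \big\{ |\Pi| + t \ : \ \forall i \le |x| + 1, \ U^{\Pi}(i) \text{ outputs } x_i \text{ within } t \text{ steps} \big\},

where x_i is the i-th bit of x and querying position |x| + 1 signals the end of the string. McKTP concerns the conditional variant, in which U additionally gets oracle access to an auxiliary string z.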

Cite as

Eric Allender, Mahdi Cheraghchi, Dimitrios Myrisiotis, Harsha Tirumala, and Ilya Volkovich. One-Way Functions and a Conditional Variant of MKTP. In 41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 213, pp. 7:1-7:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{allender_et_al:LIPIcs.FSTTCS.2021.7,
  author =	{Allender, Eric and Cheraghchi, Mahdi and Myrisiotis, Dimitrios and Tirumala, Harsha and Volkovich, Ilya},
  title =	{{One-Way Functions and a Conditional Variant of MKTP}},
  booktitle =	{41st IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2021)},
  pages =	{7:1--7:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-215-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{213},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Chekuri, Chandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2021.7},
  URN =		{urn:nbn:de:0030-drops-155181},
  doi =		{10.4230/LIPIcs.FSTTCS.2021.7},
  annote =	{Keywords: Kolmogorov complexity, KT Complexity, Minimum KT-complexity Problem, MKTP, Conditional KT Complexity, Minimum Conditional KT-complexity Problem, McKTP, one-way functions, OWFs, average-case hardness, pseudorandom generators, PRGs, pseudorandom functions, PRFs, distinguishers, learning algorithms, NP-completeness, reductions}
}
Document
Hardness of KT Characterizes Parallel Cryptography

Authors: Hanlin Ren and Rahul Santhanam

Published in: LIPIcs, Volume 200, 36th Computational Complexity Conference (CCC 2021)


Abstract
A recent breakthrough of Liu and Pass (FOCS'20) shows that one-way functions exist if and only if the (polynomial-)time-bounded Kolmogorov complexity, K^t, is bounded-error hard on average to compute. In this paper, we strengthen this result and extend it to other complexity measures:
- We show, perhaps surprisingly, that the KT complexity is bounded-error average-case hard if and only if there exist one-way functions in constant parallel time (i.e., NC⁰). This result crucially relies on the idea of randomized encodings. Previously, a seminal work of Applebaum, Ishai, and Kushilevitz (FOCS'04; SICOMP'06) used the same idea to show that NC⁰-computable one-way functions exist if and only if logspace-computable one-way functions exist.
- Inspired by the above result, we present randomized average-case reductions among the NC¹-versions and logspace-versions of K^t complexity, and the KT complexity. Our reductions preserve both bounded-error average-case hardness and zero-error average-case hardness. To the best of our knowledge, this is the first reduction between the KT complexity and a variant of K^t complexity.
- We prove tight connections between the hardness of K^t complexity and the hardness of (the hardest) one-way functions. In analogy with the Exponential-Time Hypothesis and its variants, we define and motivate the Perebor Hypotheses for complexity measures such as K^t and KT. We show that a Strong Perebor Hypothesis for K^t implies the existence of (weak) one-way functions of near-optimal hardness 2^{n-o(n)}. To the best of our knowledge, this is the first construction of one-way functions of near-optimal hardness based on a natural complexity assumption about a search problem.
- We show that a Weak Perebor Hypothesis for MCSP implies the existence of one-way functions, and establish a partial converse. This is the first unconditional construction of one-way functions from the hardness of MCSP over a natural distribution.
- Finally, we study the average-case hardness of MKtP. We show that it characterizes cryptographic pseudorandomness in one natural regime of parameters, and complexity-theoretic pseudorandomness in another natural regime.

Cite as

Hanlin Ren and Rahul Santhanam. Hardness of KT Characterizes Parallel Cryptography. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 35:1-35:58, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ren_et_al:LIPIcs.CCC.2021.35,
  author =	{Ren, Hanlin and Santhanam, Rahul},
  title =	{{Hardness of KT Characterizes Parallel Cryptography}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{35:1--35:58},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.35},
  URN =		{urn:nbn:de:0030-drops-143091},
  doi =		{10.4230/LIPIcs.CCC.2021.35},
  annote =	{Keywords: one-way function, meta-complexity, KT complexity, parallel cryptography, randomized encodings}
}
  • Refine by Type
  • 13 Document/PDF
  • 4 Document/HTML

  • Refine by Publication Year
  • 4 2025
  • 3 2024
  • 1 2023
  • 3 2022
  • 2 2021

  • Refine by Author
  • 4 Liu, Yanyi
  • 4 Pass, Rafael
  • 2 Santhanam, Rahul
  • 1 Allender, Eric
  • 1 Arteche, Noel
  • Show More...

  • Refine by Series/Journal
  • 13 LIPIcs

  • Refine by Classification
  • 5 Theory of computation → Computational complexity and cryptography
  • 4 Theory of computation → Pseudorandomness and derandomization
  • 2 Theory of computation → Circuit complexity
  • 2 Theory of computation → Cryptographic primitives
  • 2 Theory of computation → Problems, reductions and completeness
  • Show More...

  • Refine by Keyword
  • 4 Derandomization
  • 2 Kolmogorov Complexity
  • 2 Kolmogorov complexity
  • 2 meta-complexity
  • 2 one-way functions
  • Show More...
