28 Search Results for "Pass, Rafael"


Volume

OASIcs, Volume 97

3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)

Tokenomics 2021, November 18-19, 2021, New York University, USA (Virtual Conference)

Editors: Vincent Gramoli, Hanna Halaburda, and Rafael Pass

Document
Streaming Zero-Knowledge Proofs

Authors: Graham Cormode, Marcel Dall'Agnol, Tom Gur, and Chris Hickey

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Streaming interactive proofs (SIPs) enable a space-bounded algorithm with one-pass access to a massive stream of data to verify a computation that requires large space, by communicating with a powerful but untrusted prover. This work initiates the study of zero-knowledge proofs for data streams. We define the notion of zero-knowledge in the streaming setting and construct zero-knowledge SIPs for the two main algorithmic building blocks in the streaming interactive proofs literature: the sumcheck and polynomial evaluation protocols. To the best of our knowledge, all known streaming interactive proofs are based on one of these two tools; indeed, this allows us to obtain zero-knowledge SIPs for central streaming problems such as index, point and range queries, median, frequency moments, and inner product. Our protocols are efficient in terms of time and space, as well as communication: the verifier algorithm’s space complexity is polylog(n) and, after a non-interactive setup that uses a random string of near-linear length, the remaining parameters are n^o(1). En route, we develop an algorithmic toolkit for designing zero-knowledge data stream protocols, consisting of an algebraic streaming commitment protocol and a temporal commitment protocol. Our analyses rely on delicate algebraic and information-theoretic arguments and reductions from average-case communication complexity.
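
For orientation, the sketch below shows the kind of streaming polynomial-evaluation (fingerprinting) primitive on which streaming interactive proofs are typically built: the verifier folds the stream into a single field element, the evaluation of the stream's polynomial at a random point. It is a minimal generic illustration, not the paper's zero-knowledge construction; the field size, stream contents, and function names are illustrative choices.

# Generic streaming fingerprint; illustrative only, not the paper's protocol.
import random

P = 2**61 - 1  # a large prime field; 2^61 - 1 is a Mersenne prime (illustrative choice)

def stream_fingerprint(stream, r, p=P):
    """Fold a_0, a_1, ... into sum_i a_i * r^i (mod p) using O(1) space."""
    acc, r_pow = 0, 1
    for a in stream:
        acc = (acc + a * r_pow) % p
        r_pow = (r_pow * r) % p
    return acc

# Two distinct length-n streams agree on the fingerprint with probability at most
# (n-1)/p over the random choice of r, so the verifier can later check a prover's
# claims about the stream against this single retained field element.
r = random.randrange(1, P)
print(stream_fingerprint([3, 1, 4, 1, 5, 9, 2, 6], r))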

Cite as

Graham Cormode, Marcel Dall'Agnol, Tom Gur, and Chris Hickey. Streaming Zero-Knowledge Proofs. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 2:1-2:66, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cormode_et_al:LIPIcs.CCC.2024.2,
  author =	{Cormode, Graham and Dall'Agnol, Marcel and Gur, Tom and Hickey, Chris},
  title =	{{Streaming Zero-Knowledge Proofs}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{2:1--2:66},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.2},
  URN =		{urn:nbn:de:0030-drops-203988},
  doi =		{10.4230/LIPIcs.CCC.2024.2},
  annote =	{Keywords: Zero-knowledge proofs, streaming algorithms, computational complexity}
}
Document
Linear-Size Boolean Circuits for Multiselection

Authors: Justin Holmgren and Ron Rothblum

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We study the circuit complexity of the multiselection problem: given an input string x ∈ {0,1}ⁿ along with indices i_1, …, i_q ∈ [n], output (x_{i_1}, …, x_{i_q}). A trivial lower bound for the circuit size is the input length n + q⋅log(n), but the straightforward construction has size Θ(q⋅n). Our main result is an O(n+q⋅log³(n))-size and O(log(n+q))-depth circuit for multiselection. In particular, for any q ≤ n/log³(n), the circuit has linear size and logarithmic depth. Prior to our work, no linear-size circuit for multiselection was known for any q = ω(1), regardless of depth.
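
As a point of reference, the snippet below implements only the multiselection functionality as a program, not as a Boolean circuit; realizing it gate by gate with one n-way multiplexer per queried index is what gives the trivial Θ(q⋅n) circuit size mentioned above. The function name and example values are illustrative.

def multiselect(x: str, indices: list[int]) -> str:
    """Return x restricted to the queried positions (0-based here; the paper's i_1, …, i_q are 1-based)."""
    return "".join(x[i] for i in indices)

print(multiselect("10110010", [0, 2, 5]))  # -> "110"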

Cite as

Justin Holmgren and Ron Rothblum. Linear-Size Boolean Circuits for Multiselection. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{holmgren_et_al:LIPIcs.CCC.2024.11,
  author =	{Holmgren, Justin and Rothblum, Ron},
  title =	{{Linear-Size Boolean Circuits for Multiselection}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{11:1--11:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.11},
  URN =		{urn:nbn:de:0030-drops-204070},
  doi =		{10.4230/LIPIcs.CCC.2024.11},
  annote =	{Keywords: Private Information Retrieval, Batch Selection, Boolean Circuits}
}
Document
Exact Search-To-Decision Reductions for Time-Bounded Kolmogorov Complexity

Authors: Shuichi Hirahara, Valentine Kabanets, Zhenjian Lu, and Igor C. Oliveira

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
A search-to-decision reduction is a procedure that allows one to find a solution to a problem from the mere ability to decide when a solution exists. The existence of a search-to-decision reduction for time-bounded Kolmogorov complexity, i.e., the problem of checking if a string x can be generated by a t-time bounded program of description length s, is a long-standing open problem that dates back to the 1960s. In this work, we obtain new average-case and worst-case search-to-decision reductions for the complexity measure 𝖪^t and its randomized analogue rK^t:
1) (Conditional Errorless and Error-Prone Reductions for 𝖪^t) Under the assumption that 𝖤 requires exponential size circuits, we design polynomial-time average-case search-to-decision reductions for 𝖪^t in both errorless and error-prone settings. In fact, under the easiness of deciding 𝖪^t under the uniform distribution, we obtain a search algorithm for any given polynomial-time samplable distribution. In the error-prone reduction, the search algorithm works in the more general setting of conditional 𝖪^t complexity, i.e., it finds a minimum length t-time bound program for generating x given a string y.
2) (Unconditional Errorless Reduction for rK^t) We obtain an unconditional polynomial-time average-case search-to-decision reduction for rK^t in the errorless setting. Similarly to the results described above, we obtain a search algorithm for each polynomial-time samplable distribution, assuming the existence of a decision algorithm under the uniform distribution. To our knowledge, this is the first unconditional sub-exponential time search-to-decision reduction among the measures 𝖪^t and rK^t that works with respect to any given polynomial-time samplable distribution.
3) (Worst-Case to Average-Case Reductions) Under the errorless average-case easiness of deciding rK^t, we design a worst-case search algorithm running in time 2^O(n/log n) that produces a minimum length randomized t-time program for every input string x ∈ {0,1}ⁿ, with the caveat that it only succeeds on some explicitly computed sub-exponential time bound t ≤ 2^{n^ε} that depends on x. A similar result holds for 𝖪^t, under the assumption that 𝖤 requires exponential size circuits.
In these results, the corresponding search problem is solved exactly, i.e., a successful run of the search algorithm outputs a t-time bounded program for x of minimum length, as opposed to an approximately optimal program of slightly larger description length or running time.
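
For readers new to the notation, here is a brief recap of the standard definitions behind these measures (constants and machine-model conventions elided; this follows common usage and is not quoted from the paper):

\[
\mathsf{K}^{t}(x) = \min\{\, |\Pi| : U(\Pi) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \,\}, \qquad
\mathsf{K}^{t}(x \mid y) = \min\{\, |\Pi| : U(\Pi, y) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \,\}.
\]

The decision problem asks whether K^t(x) ≤ s for a given threshold s; the search problem asks for a minimum-length witness program. rK^t is the analogue for randomized programs, which are typically required to output x with probability at least 2/3.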

Cite as

Shuichi Hirahara, Valentine Kabanets, Zhenjian Lu, and Igor C. Oliveira. Exact Search-To-Decision Reductions for Time-Bounded Kolmogorov Complexity. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 29:1-29:56, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.CCC.2024.29,
  author =	{Hirahara, Shuichi and Kabanets, Valentine and Lu, Zhenjian and Oliveira, Igor C.},
  title =	{{Exact Search-To-Decision Reductions for Time-Bounded Kolmogorov Complexity}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{29:1--29:56},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.29},
  URN =		{urn:nbn:de:0030-drops-204256},
  doi =		{10.4230/LIPIcs.CCC.2024.29},
  annote =	{Keywords: average-case complexity, Kolmogorov complexity, search-to-decision reductions}
}
Document
Search-To-Decision Reductions for Kolmogorov Complexity

Authors: Noam Mazor and Rafael Pass

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
A long-standing open problem dating back to the 1960s is whether there exists a search-to-decision reduction for the time-bounded Kolmogorov complexity problem - that is, the problem of determining whether the length of the shortest time-t program generating a given string x is at most s. In this work, we consider the more "robust" version of the time-bounded Kolmogorov complexity problem, referred to as the GapMINKT problem, where given a size bound s and a running time bound t, the goal is to determine whether there exists a poly(t,|x|)-time program of length s+O(log |x|) that generates x. We present the first non-trivial search-to-decision reduction R for the GapMINKT problem; R has a running-time bound of 2^{ε n} for any ε > 0 and additionally only queries its oracle on "thresholds" s of size s+O(log |x|). As such, we get that any algorithm with running-time (resp. circuit size) 2^{α s} poly(|x|,t,s) for solving GapMINKT (given an instance (x,t,s)) yields an algorithm for finding a witness with running-time (resp. circuit size) 2^{(α+ε) s} poly(|x|,t,s). Our second result is a polynomial-time search-to-decision reduction for the time-bounded Kolmogorov complexity problem in the average-case regime. Such a reduction was recently shown by Liu and Pass (FOCS'20), heavily relying on cryptographic techniques. Our reduction is more direct and additionally has the advantage of being length-preserving, and as such also applies in the exponential time/size regime. A central component in both of these results is the use of Kolmogorov and Levin’s Symmetry of Information Theorem.
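
Schematically, with the hidden constants elided, the promise problem described above can be read as follows; this is a paraphrase of the abstract, not a verbatim definition from the paper:

\[
\textsc{GapMINKT}:\ \text{on input } (x, t, s),\quad
\begin{cases}
\text{YES} & \text{if some $t$-time program of length at most $s$ generates $x$,}\\
\text{NO} & \text{if no $\mathrm{poly}(t,|x|)$-time program of length $s + O(\log |x|)$ generates $x$.}
\end{cases}
\]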

Cite as

Noam Mazor and Rafael Pass. Search-To-Decision Reductions for Kolmogorov Complexity. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 34:1-34:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mazor_et_al:LIPIcs.CCC.2024.34,
  author =	{Mazor, Noam and Pass, Rafael},
  title =	{{Search-To-Decision Reductions for Kolmogorov Complexity}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{34:1--34:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.34},
  URN =		{urn:nbn:de:0030-drops-204308},
  doi =		{10.4230/LIPIcs.CCC.2024.34},
  annote =	{Keywords: Kolmogorov complexity, search to decision}
}
Document
Gap MCSP Is Not (Levin) NP-Complete in Obfustopia

Authors: Noam Mazor and Rafael Pass

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We demonstrate that under believable cryptographic hardness assumptions, Gap versions of standard meta-complexity problems, such as the Minimum Circuit Size Problem (MCSP) and the Minimum Time-Bounded Kolmogorov Complexity problem (MKTP), are not NP-complete w.r.t. Levin (i.e., witness-preserving many-to-one) reductions. In more detail:
- Assuming the existence of indistinguishability obfuscation and subexponentially-secure one-way functions, an appropriate Gap version of MCSP is not NP-complete under randomized Levin-reductions.
- Assuming the existence of subexponentially-secure indistinguishability obfuscation, subexponentially-secure one-way functions, and injective PRGs, an appropriate Gap version of MKTP is not NP-complete under randomized Levin-reductions.
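
For reference, a Levin (witness-preserving) reduction between NP relations R_L and R_{L'} is commonly formalized as a triple of polynomial-time computable maps (f, g, h); exact conventions vary slightly across papers, so this recap is only indicative:

\[
x \in L \iff f(x) \in L', \qquad
(x, w) \in R_L \Rightarrow (f(x), g(x, w)) \in R_{L'}, \qquad
(f(x), w') \in R_{L'} \Rightarrow (x, h(x, w')) \in R_L.
\]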

Cite as

Noam Mazor and Rafael Pass. Gap MCSP Is Not (Levin) NP-Complete in Obfustopia. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 36:1-36:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mazor_et_al:LIPIcs.CCC.2024.36,
  author =	{Mazor, Noam and Pass, Rafael},
  title =	{{Gap MCSP Is Not (Levin) NP-Complete in Obfustopia}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{36:1--36:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.36},
  URN =		{urn:nbn:de:0030-drops-204322},
  doi =		{10.4230/LIPIcs.CCC.2024.36},
  annote =	{Keywords: Kolmogorov complexity, MCSP, Levin Reduction}
}
Document
Track A: Algorithms, Complexity and Games
Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity

Authors: Zhenjian Lu and Rahul Santhanam

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We develop new characterizations of Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland by the intractability of conditional Kolmogorov complexity 𝖪 and conditional probabilistic time-bounded Kolmogorov complexity pK^t. In our first set of results, we show that NP ⊆ BPP iff pK^t(x ∣ y) can be computed efficiently in the worst case when t is sublinear in |x| + |y|; DistNP ⊆ HeurBPP iff pK^t(x ∣ y) can be computed efficiently over all polynomial-time samplable distributions when t is sublinear in |x| + |y|; and infinitely-often one-way functions fail to exist iff pK^t(x ∣ y) can be computed efficiently over all polynomial-time samplable distributions for t a sufficiently large polynomial in |x| + |y|. These results characterize Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland purely in terms of the tractability of conditional pK^t. Notably, the results imply that Pessiland fails to exist iff the average-case intractability of conditional pK^t is insensitive to the difference between sublinear and polynomially bounded t. As a corollary, while we prove conditional pK^t to be NP-hard for sublinear t, showing NP-hardness for large enough polynomially bounded t would eliminate Pessiland as a possible world of average-case complexity. In our second set of results, we characterize Impagliazzo’s worlds Algorithmica, Heuristica and Pessiland by the distributional tractability of a natural problem, i.e., approximating the conditional Kolmogorov complexity, that is provably intractable in the worst case. We show that NP ⊆ BPP iff conditional Kolmogorov complexity can be approximated in the semi-worst case; and DistNP ⊆ HeurBPP iff conditional Kolmogorov complexity can be approximated on average over all independent polynomial-time samplable distributions. It follows from a result by Ilango, Ren, and Santhanam (STOC 2022) that infinitely-often one-way functions fail to exist iff conditional Kolmogorov complexity can be approximated on average over all polynomial-time samplable distributions. Together, these results yield the claimed characterizations. Our techniques, combined with previous work, also yield a characterization of auxiliary-input one-way functions and equivalences between different average-case tractability assumptions for conditional Kolmogorov complexity and its variants. Our results suggest that novel average-case tractability assumptions such as tractability in the semi-worst case and over independent polynomial-time samplable distributions might be worthy of further study.
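
Roughly, and eliding the exact parameter conventions, the probabilistic time-bounded measure used above has the following shape; this recap follows the general literature rather than the paper's precise definition:

\[
\mathrm{pK}^{t}(x \mid y) = \min\Bigl\{ k \;:\; \Pr_{r}\bigl[\exists\, \Pi,\ |\Pi| \le k,\ U(\Pi, y, r) \text{ outputs } x \text{ within } t \text{ steps}\bigr] \ge 2/3 \Bigr\},
\]

where r is a uniformly random string of length poly(t), and the unconditional measure pK^t(x) is the special case of an empty y.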

Cite as

Zhenjian Lu and Rahul Santhanam. Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 110:1-110:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{lu_et_al:LIPIcs.ICALP.2024.110,
  author =	{Lu, Zhenjian and Santhanam, Rahul},
  title =	{{Impagliazzo’s Worlds Through the Lens of Conditional Kolmogorov Complexity}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{110:1--110:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.110},
  URN =		{urn:nbn:de:0030-drops-202538},
  doi =		{10.4230/LIPIcs.ICALP.2024.110},
  annote =	{Keywords: meta-complexity, Kolmogorov complexity, one-way functions, average-case complexity}
}
Document
The Non-Uniform Perebor Conjecture for Time-Bounded Kolmogorov Complexity Is False

Authors: Noam Mazor and Rafael Pass

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
The Perebor (Russian for "brute-force search") conjectures, which date back to the 1950s and 1960s, are some of the oldest conjectures in complexity theory. The conjectures are a stronger form of the NP ≠ P conjecture (which they predate) and state that for "meta-complexity" problems, such as the Time-bounded Kolmogorov complexity Problem and the Minimum Circuit Size Problem, there are no better algorithms than brute-force search. In this paper, we disprove the non-uniform version of the Perebor conjecture for the Time-Bounded Kolmogorov complexity problem. We demonstrate that for every polynomial t(⋅), there exists a circuit of size 2^{4n/5+o(n)} that solves the t(⋅)-bounded Kolmogorov complexity problem on every instance. Our algorithm is black-box in the description of the Universal Turing Machine U employed in the definition of Kolmogorov Complexity and leverages the characterization of one-way functions through the hardness of the time-bounded Kolmogorov complexity problem of Liu and Pass (FOCS'20), and the time-space trade-off for one-way functions of Fiat and Naor (STOC'91). We additionally demonstrate that no such black-box algorithm can have circuit size smaller than 2^{n/2-o(n)}. Along the way (and of independent interest), we extend the result of Fiat and Naor and demonstrate that any efficiently computable function can be inverted (with probability 1) by a circuit of size 2^{4n/5+o(n)}; as far as we know, this yields the first formal proof that a non-trivial circuit can invert any efficient function.
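
For contrast with the 2^{4n/5+o(n)}-size circuits above, the brute-force ("perebor") baseline is sketched below. The helper run_program is an assumed stand-in for simulating the universal machine U on a candidate description for t steps; it is not part of the paper, and the whole sketch is illustrative only.

from itertools import product

def brute_force_Kt(x: str, t: int, run_program, max_len: int):
    """Smallest description length <= max_len whose t-step run prints x, else None.

    Cost: roughly 2^(max_len + 1) calls to run_program, i.e., the exhaustive search
    that the Perebor conjecture asserts cannot be beaten, and that the paper's
    non-uniform circuits do beat.
    """
    for length in range(max_len + 1):
        for bits in product("01", repeat=length):
            if run_program("".join(bits), t) == x:
                return length
    return None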

Cite as

Noam Mazor and Rafael Pass. The Non-Uniform Perebor Conjecture for Time-Bounded Kolmogorov Complexity Is False. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 80:1-80:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mazor_et_al:LIPIcs.ITCS.2024.80,
  author =	{Mazor, Noam and Pass, Rafael},
  title =	{{The Non-Uniform Perebor Conjecture for Time-Bounded Kolmogorov Complexity Is False}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{80:1--80:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.80},
  URN =		{urn:nbn:de:0030-drops-196088},
  doi =		{10.4230/LIPIcs.ITCS.2024.80},
  annote =	{Keywords: Kolmogorov complexity, perebor conjecture, function inversion}
}
Document
Leakage-Resilient Hardness vs Randomness

Authors: Yanyi Liu and Rafael Pass

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
A central open problem in complexity theory concerns the question of whether all efficient randomized algorithms can be simulated by efficient deterministic algorithms. The celebrated "hardness vs. randomness" paradigm pioneered by Blum-Micali (SIAM JoC’84), Yao (FOCS’84) and Nisan-Wigderson (JCSS’94) presents hardness assumptions under which, e.g., prBPP = prP (so-called "high-end" derandomization) or prBPP ⊆ prSUBEXP (so-called "low-end" derandomization), and more generally, under which prBPP ⊆ prDTIME(𝒞) where 𝒞 is a "nice" class (closed under composition with a polynomial); but these hardness assumptions are not known to also be necessary for such derandomization. In this work, following the recent work by Chen and Tell (FOCS’21) that considers "almost-all-input" hardness of a function f (i.e., hardness of computing f on more than a finite number of inputs), we consider "almost-all-input" leakage-resilient hardness of a function f - that is, hardness of computing f(x) even given, say, √|x| bits of leakage of f(x). We show that leakage-resilient hardness characterizes derandomization of prBPP (i.e., gives a necessary and sufficient condition for derandomization), both in the high-end and in the low-end setting. In more detail, we show that there exists a constant c such that for every function T, the following are equivalent:
- prBPP ⊆ prDTIME(poly(T(poly(n))));
- Existence of a poly(T(poly(n)))-time computable function f : {0,1}ⁿ → {0,1}ⁿ that is almost-all-input leakage-resilient hard with respect to n^c-time probabilistic algorithms.
As far as we know, this is the first assumption that characterizes derandomization in both the low-end and the high-end regime. Additionally, our characterization naturally extends also to derandomization of prMA, and also to average-case derandomization, by appropriately weakening the requirements on the function f. In particular, for the case of average-case (a.k.a. "effective") derandomization, we no longer require the function to be almost-all-input hard, but only that it satisfy the more standard notion of average-case leakage-resilient hardness (w.r.t. every samplable distribution), whereas for derandomization of prMA, we instead consider leakage-resilience for relations.
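
Schematically, with quantifiers and parameters simplified, the hardness notion in question has the following shape; this is a loose paraphrase of the abstract, not the paper's formal definition:

\[
\text{for every } n^{c}\text{-time probabilistic pair } (\mathsf{leak}, A) \text{ with } |\mathsf{leak}(x, f(x))| \le \sqrt{|x|}:\quad
A\bigl(x, \mathsf{leak}(x, f(x))\bigr) \neq f(x) \ \text{ for all but finitely many } x,
\]

with the failure of A holding with high probability over the algorithms' randomness.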

Cite as

Yanyi Liu and Rafael Pass. Leakage-Resilient Hardness vs Randomness. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 32:1-32:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{liu_et_al:LIPIcs.CCC.2023.32,
  author =	{Liu, Yanyi and Pass, Rafael},
  title =	{{Leakage-Resilient Hardness vs Randomness}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{32:1--32:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.32},
  URN =		{urn:nbn:de:0030-drops-183022},
  doi =		{10.4230/LIPIcs.CCC.2023.32},
  annote =	{Keywords: Derandomization, Leakage-Resilient Hardness}
}
Document
Characterizing Derandomization Through Hardness of Levin-Kolmogorov Complexity

Authors: Yanyi Liu and Rafael Pass

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
A central open problem in complexity theory concerns the question of whether all efficient randomized algorithms can be simulated by efficient deterministic algorithms. We consider this problem in the context of promise problems (i.e., the prBPP vs. prP problem) and show that for all sufficiently large constants c, the following are equivalent:
- prBPP = prP.
- For every BPTIME(n^c) algorithm M and every sufficiently long z ∈ {0,1}ⁿ, there exists some x ∈ {0,1}ⁿ such that M fails to decide whether Kt(x∣z) is "very large" (≥ n-1) or "very small" (≤ O(log n)),
where Kt(x∣z) denotes the Levin-Kolmogorov complexity of x conditioned on z. As far as we are aware, this yields the first full characterization of when prBPP = prP through the hardness of some class of problems. Previous hardness assumptions used for derandomization only provide a one-sided implication.
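
For reference, Levin's time-charged measure Kt used above is standardly defined, up to additive constants, as follows; the conditional version simply gives the universal machine z as auxiliary input:

\[
\mathsf{Kt}(x \mid z) = \min\bigl\{\, |\Pi| + \lceil \log t \rceil \;:\; U(\Pi, z) \text{ outputs } x \text{ within } t \text{ steps} \,\bigr\}.
\]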

Cite as

Yanyi Liu and Rafael Pass. Characterizing Derandomization Through Hardness of Levin-Kolmogorov Complexity. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 35:1-35:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{liu_et_al:LIPIcs.CCC.2022.35,
  author =	{Liu, Yanyi and Pass, Rafael},
  title =	{{Characterizing Derandomization Through Hardness of Levin-Kolmogorov Complexity}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{35:1--35:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.35},
  URN =		{urn:nbn:de:0030-drops-165975},
  doi =		{10.4230/LIPIcs.CCC.2022.35},
  annote =	{Keywords: Derandomization, Kolmogorov Complexity, Hitting Set Generators}
}
Document
On One-Way Functions from NP-Complete Problems

Authors: Yanyi Liu and Rafael Pass

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
We present the first natural NP-complete problem whose average-case hardness w.r.t. the uniform distribution over instances is equivalent to the existence of one-way functions (OWFs). The problem, which originated in the 1960s, is the Conditional Time-Bounded Kolmogorov Complexity Problem: let K^t(x∣z) be the length of the shortest "program" that, given the "auxiliary input" z, outputs the string x within time t(|x|), and let McK^tP[ζ] be the set of strings (x,z,k) where |z| = ζ(|x|), |k| = log |x|, and K^t(x∣z) < k, where, for our purposes, a "program" is defined as a RAM machine. Our main result shows that for every polynomial t(n) ≥ n², there exists some polynomial ζ such that McK^tP[ζ] is NP-complete. We additionally extend the result of Liu-Pass (FOCS'20) to show that for every polynomial t(n) ≥ 1.1n, and every polynomial ζ(⋅), mild average-case hardness of McK^tP[ζ] is equivalent to the existence of OWFs. Taken together, these results provide the following crisp characterization of what is required to base OWFs on NP ⊈ BPP: There exist concrete polynomials t,ζ such that "Basing OWFs on NP ⊈ BPP" is equivalent to providing a "worst-case to (mild) average-case reduction for McK^tP[ζ]". In other words, the "holy-grail" of Cryptography (i.e., basing OWFs on NP ⊈ BPP) is equivalent to a basic question in algorithmic information theory. As an independent contribution, we show that our NP-completeness result can be used to shed new light on the feasibility of the polynomial-time bounded symmetry of information assertion (Kolmogorov'68).
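
In set notation, the language defined in the abstract is simply

\[
\mathsf{McK}^{t}\mathsf{P}[\zeta] \;=\; \bigl\{\, (x, z, k) \;:\; |z| = \zeta(|x|),\ |k| = \log |x|,\ \mathsf{K}^{t}(x \mid z) < k \,\bigr\},
\]

with the string k read as an integer in the final comparison; this is a restatement of the definition above, not an addition from the paper.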

Cite as

Yanyi Liu and Rafael Pass. On One-Way Functions from NP-Complete Problems. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 36:1-36:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{liu_et_al:LIPIcs.CCC.2022.36,
  author =	{Liu, Yanyi and Pass, Rafael},
  title =	{{On One-Way Functions from NP-Complete Problems}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{36:1--36:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.36},
  URN =		{urn:nbn:de:0030-drops-165981},
  doi =		{10.4230/LIPIcs.CCC.2022.36},
  annote =	{Keywords: One-way Functions, NP-Completeness, Kolmogorov Complexity}
}
Document
Complete Volume
OASIcs, Volume 97, Tokenomics 2021, Complete Volume

Authors: Vincent Gramoli, Hanna Halaburda, and Rafael Pass

Published in: OASIcs, Volume 97, 3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)


Abstract
OASIcs, Volume 97, Tokenomics 2021, Complete Volume

Cite as

3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021). Open Access Series in Informatics (OASIcs), Volume 97, pp. 1-124, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Proceedings{gramoli_et_al:OASIcs.Tokenomics.2021,
  title =	{{OASIcs, Volume 97, Tokenomics 2021, Complete Volume}},
  booktitle =	{3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)},
  pages =	{1--124},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-220-4},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{97},
  editor =	{Gramoli, Vincent and Halaburda, Hanna and Pass, Rafael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Tokenomics.2021},
  URN =		{urn:nbn:de:0030-drops-158965},
  doi =		{10.4230/OASIcs.Tokenomics.2021},
  annote =	{Keywords: OASIcs, Volume 97, Tokenomics 2021, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Vincent Gramoli, Hanna Halaburda, and Rafael Pass

Published in: OASIcs, Volume 97, 3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021). Open Access Series in Informatics (OASIcs), Volume 97, pp. 0:i-0:x, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{gramoli_et_al:OASIcs.Tokenomics.2021.0,
  author =	{Gramoli, Vincent and Halaburda, Hanna and Pass, Rafael},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)},
  pages =	{0:i--0:x},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-220-4},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{97},
  editor =	{Gramoli, Vincent and Halaburda, Hanna and Pass, Rafael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Tokenomics.2021.0},
  URN =		{urn:nbn:de:0030-drops-158975},
  doi =		{10.4230/OASIcs.Tokenomics.2021.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Distributed Computing Meets Game Theory: Fault Tolerance and Implementation with Cheap Talk (Invited Talk)

Authors: Joseph Y. Halpern

Published in: OASIcs, Volume 97, 3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)


Abstract
Traditionally, work in distributed computing has divided the agents into "good guys" and "bad guys". The good guys follow the protocol; the bad guys do everything in their power to make sure it does not work. By way of contrast, game theory has focused on "rational" agents, who try to maximize their utilities. Here I try to combine these viewpoints. Specifically, following the work of Abraham et al. [I. Abraham et al., 2006], I consider (k,t)-robust protocols/strategies, which tolerate coalitions of rational players of size up to k and up to t malicious players. I focus in particular on the problem that economists have called implementing a mediator. That is, can the players in the system, just talking among themselves (using what economists call "cheap talk"), simulate the effects of the mediator (see, e.g., [I. Barany, 1992; E. Ben-Porath, 2003; Forges, 1990; D. Gerardi, 2004; Y. Heller, 2005; A. Urbano and J. E. Vila, 2002; A. Urbano and J. E. Vila, 2004])? In computer science, this essentially amounts to multiparty computation [O. Goldreich et al., 1987; A. Shamir et al., 1981; A. Yao, 1982]. Ideas from cryptography and distributed computing allow us to prove results on how many agents are required to implement a (k,t)-robust mediator just using cheap talk. These results subsume (and, in some cases, correct) results from the game theory literature. The results of Abraham et al. [I. Abraham et al., 2006] were proved for what are called synchronous systems in the distributed computing community; this is also the case for all the results in the economics literature cited above. In synchronous systems, communication proceeds in atomic rounds, and all messages sent during round r are received by round r + 1. But many systems in the real world are asynchronous. In an asynchronous setting, there are no rounds; messages sent by the players may take arbitrarily long to get to their recipients. Markets and the internet are best viewed as asynchronous. Blockchain implementations assume partial synchrony, where there is an upper bound on how long messages take to arrive. The partially synchronous setting already shows some of the difficulty of moving away from synchrony: An agent i can wait to take its action until it receives a message from j (on which its action can depend). This cannot happen in a synchronous setting. Abraham, Dolev, Geffner, and Halpern [I. Abraham et al., 2019] extend the results on implementing mediators to the asynchronous setting.

Cite as

Joseph Y. Halpern. Distributed Computing Meets Game Theory: Fault Tolerance and Implementation with Cheap Talk (Invited Talk). In 3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021). Open Access Series in Informatics (OASIcs), Volume 97, pp. 1:1-1:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{halpern:OASIcs.Tokenomics.2021.1,
  author =	{Halpern, Joseph Y.},
  title =	{{Distributed Computing Meets Game Theory: Fault Tolerance and Implementation with Cheap Talk}},
  booktitle =	{3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)},
  pages =	{1:1--1:2},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-220-4},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{97},
  editor =	{Gramoli, Vincent and Halaburda, Hanna and Pass, Rafael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Tokenomics.2021.1},
  URN =		{urn:nbn:de:0030-drops-158981},
  doi =		{10.4230/OASIcs.Tokenomics.2021.1},
  annote =	{Keywords: robust equilibrium, implementing mediators, asynchronous systems}
}
Document
General Congestion Attack on HTLC-Based Payment Channel Networks

Authors: Zhichun Lu, Runchao Han, and Jiangshan Yu

Published in: OASIcs, Volume 97, 3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)


Abstract
Payment Channel Networks (PCNs) have been a promising approach to scale blockchains. However, PCNs have limited liquidity: large-amount or multi-hop payments may fail. The major threat to PCN liquidity is payment griefing, where the adversary, acting as the payee, keeps withholding the payment, so that coins involved in the payment cannot be used for routing other payments before the payment expires. Payment griefing gives adversaries a chance to launch the congestion attack, where the adversary griefs a large number of payments and paralyses the entire PCN. Understanding congestion attacks, including their strategies and impact, is crucial for designing PCNs with better liquidity guarantees. However, existing research has focused only on specific attack strategies and specific aspects of their impact on PCNs. We fill this gap by studying the general congestion attack. Compared to existing attack strategies, in our framework each step serves an orthogonal purpose and is customisable, allowing the adversary to focus on different aspects of the liquidity. To evaluate the attack’s impact, we propose a generic method of quantifying PCNs' liquidity and the effectiveness of congestion attacks. We evaluate our general congestion attacks on Bitcoin’s Lightning Network, and show that with direct channels to the 1.5% richest nodes and a cost of ∼0.0096 BTC, the adversary can launch a congestion attack that locks 47% (∼280 BTC) of the coins in the network; reduces the success rate of payments by 16.0%∼60.0%; increases the fee of payments by 4.5%∼16.0%; increases the average number of payment attempts by 42.0%∼115.3%; and increases the number of bankruptcy nodes (i.e., nodes with insufficient balance for making normal-size payments) by 26.6%∼109.4%, where the amounts of payments range from 0.001 to 0.019 BTC.
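
As a toy illustration of the locking mechanism that griefing exploits (not the paper's attack framework or evaluation methodology), the sketch below shows how a single withheld multi-hop payment ties up its amount in every channel along the route until the HTLCs expire. All class names, capacities, and amounts are made-up example values, and the model omits timeouts and rollback of failed locks.

# Toy model of HTLC-style locking along a payment route; illustrative only.
class Channel:
    def __init__(self, capacity):
        self.capacity = capacity   # coins available for new payments
        self.locked = 0.0          # coins tied up in pending (unresolved) HTLCs

    def lock(self, amount):
        if amount > self.capacity:
            return False           # insufficient liquidity: the payment fails here
        self.capacity -= amount
        self.locked += amount
        return True

def grief_payment(route, amount):
    """A payee who withholds the preimage keeps `amount` locked in every hop."""
    return all(ch.lock(amount) for ch in route)  # coins stay locked until timeout

route = [Channel(1.0), Channel(0.8), Channel(0.5)]   # three hops, capacities in BTC
grief_payment(route, 0.4)
print([(ch.capacity, ch.locked) for ch in route])    # roughly 0.4 BTC unusable per hop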

Cite as

Zhichun Lu, Runchao Han, and Jiangshan Yu. General Congestion Attack on HTLC-Based Payment Channel Networks. In 3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021). Open Access Series in Informatics (OASIcs), Volume 97, pp. 2:1-2:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{lu_et_al:OASIcs.Tokenomics.2021.2,
  author =	{Lu, Zhichun and Han, Runchao and Yu, Jiangshan},
  title =	{{General Congestion Attack on HTLC-Based Payment Channel Networks}},
  booktitle =	{3rd International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2021)},
  pages =	{2:1--2:15},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-220-4},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{97},
  editor =	{Gramoli, Vincent and Halaburda, Hanna and Pass, Rafael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Tokenomics.2021.2},
  URN =		{urn:nbn:de:0030-drops-158990},
  doi =		{10.4230/OASIcs.Tokenomics.2021.2},
  annote =	{Keywords: Blockchain, PCN, Congestion}
}
  • Refine by Author
  • 9 Pass, Rafael
  • 3 Liu, Yanyi
  • 3 Mazor, Noam
  • 2 Eyal, Ittay
  • 2 Gramoli, Vincent

  • Refine by Classification
  • 6 Theory of computation → Computational complexity and cryptography
  • 4 Security and privacy → Cryptography
  • 2 Mathematics of computing → Mathematical analysis
  • 2 Security and privacy → Distributed systems security
  • 2 Theory of computation → Distributed algorithms

  • Refine by Keyword
  • 6 Kolmogorov complexity
  • 5 Blockchain
  • 4 Cryptocurrency
  • 2 Bitcoin
  • 2 Derandomization

  • Refine by Type
  • 27 document
  • 1 volume

  • Refine by Publication Year
  • 18 2022
  • 7 2024
  • 1 2017
  • 1 2021
  • 1 2023