18 Search Results for "Komargodski, Ilan"


Document
From Donkeys to Kings in Tournaments

Authors: Amir Abboud, Tomer Grossman, Moni Naor, and Tomer Solomon

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
A tournament is an orientation of a complete graph. A vertex that can reach every other vertex within two steps is called a king. We study the complexity of finding k kings in a tournament graph. We show that the randomized query complexity of finding k ≤ 3 kings is O(n), and that in the deterministic case it takes the same number of queries (up to a constant) as finding a single king (the best known deterministic algorithm makes O(n^{3/2}) queries). On the other hand, we show that finding k ≥ 4 kings requires Ω(n²) queries, even in the randomized case. For k ≥ 4 we turn to the RAM model. We give an algorithm that finds k kings in time O(kn²), which is optimal for constant values of k. Alternatively, one can find k ≥ 4 kings in time n^{ω} (the time for matrix multiplication). We provide evidence that this is optimal for large k by suggesting a fine-grained reduction from a variant of the triangle detection problem.
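
As a minimal reference point (not the paper's query-efficient algorithms), the sketch below checks the king property from a full adjacency matrix and finds one king using the classical fact that a vertex of maximum out-degree is a king; the function names and the toy tournament are illustrative.

```python
# Minimal sketch, assuming the full adjacency matrix is available (RAM model),
# not the query-bounded algorithms studied in the paper.

def is_king(adj, v):
    """adj[u][w] is True iff the edge is oriented u -> w. v is a king if it
    reaches every other vertex in at most two steps."""
    n = len(adj)
    reached = {v} | {w for w in range(n) if adj[v][w]}
    for u in list(reached):
        reached |= {w for w in range(n) if adj[u][w]}
    return len(reached) == n

def find_one_king(adj):
    """A vertex of maximum out-degree is always a king (a classical fact)."""
    n = len(adj)
    return max(range(n), key=lambda v: sum(adj[v][w] for w in range(n)))

# Toy 4-vertex tournament: 0 -> 1 -> 2 -> 0, and vertex 3 loses to everyone.
adj = [[False] * 4 for _ in range(4)]
for u, w in [(0, 1), (1, 2), (2, 0), (0, 3), (1, 3), (2, 3)]:
    adj[u][w] = True
k = find_one_king(adj)
assert is_king(adj, k)
```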

Cite as

Amir Abboud, Tomer Grossman, Moni Naor, and Tomer Solomon. From Donkeys to Kings in Tournaments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ESA.2024.3,
  author =	{Abboud, Amir and Grossman, Tomer and Naor, Moni and Solomon, Tomer},
  title =	{{From Donkeys to Kings in Tournaments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.3},
  URN =		{urn:nbn:de:0030-drops-210740},
  doi =		{10.4230/LIPIcs.ESA.2024.3},
  annote =	{Keywords: Tournament Graphs, Kings, Query Complexity, Fine Grained Complexity}
}
Document
Cornucopia: Distributed Randomness at Scale

Authors: Miranda Christ, Kevin Choi, and Joseph Bonneau

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
We propose Cornucopia, a protocol framework for distributed randomness beacons combining accumulators and verifiable delay functions. Cornucopia generalizes the Unicorn protocol, using an accumulator to enable efficient verification by each participant that their contribution has been included. The output is unpredictable as long as at least one participant is honest, yielding a scalable distributed randomness beacon with strong security properties. Proving this approach secure requires developing a novel property of accumulators, insertion security, which we show is both necessary and sufficient for Cornucopia-style protocols. We show that not all accumulators are insertion-secure, then prove that common constructions (Merkle trees, RSA accumulators, and bilinear accumulators) are either naturally insertion-secure or can be made so with trivial modifications.
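
A minimal toy sketch of the Cornucopia-style flow, assuming a Merkle-tree accumulator and using iterated hashing as a stand-in for the verifiable delay function (it is not one: iterated hashing has no succinct public verification). The function names are illustrative, not from the paper or any library.

```python
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def merkle_commit(leaves):
    """Toy Merkle-tree accumulator: returns the root and all tree levels."""
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        cur = list(levels[-1])
        if len(cur) % 2:
            cur.append(cur[-1])             # duplicate the last node on odd levels
        levels.append([H(cur[i], cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels[-1][0], levels

def merkle_open(levels, idx):
    """Inclusion proof for leaf idx: sibling hashes from leaf to root."""
    path = []
    for level in levels[:-1]:
        padded = level + [level[-1]] if len(level) % 2 else level
        sib = idx ^ 1
        path.append((padded[sib], sib < idx))   # (sibling hash, sibling-is-left)
        idx //= 2
    return path

def merkle_verify(root, leaf, path):
    node = H(leaf)
    for sibling, sibling_is_left in path:
        node = H(sibling, node) if sibling_is_left else H(node, sibling)
    return node == root

def slow_delay(seed, t=100_000):
    """Stand-in for the VDF: iterated hashing takes time but, unlike a real
    verifiable delay function, has no succinct public verification."""
    out = seed
    for _ in range(t):
        out = H(out)
    return out

# Cornucopia-style flow: collect contributions, accumulate them, feed the
# digest to the delay function, and let each participant check inclusion.
contributions = [f"party-{i}-local-randomness".encode() for i in range(5)]
root, levels = merkle_commit(contributions)
beacon_output = slow_delay(root)
proof = merkle_open(levels, 3)
assert merkle_verify(root, contributions[3], proof)
```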

Cite as

Miranda Christ, Kevin Choi, and Joseph Bonneau. Cornucopia: Distributed Randomness at Scale. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 17:1-17:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{christ_et_al:LIPIcs.AFT.2024.17,
  author =	{Christ, Miranda and Choi, Kevin and Bonneau, Joseph},
  title =	{{Cornucopia: Distributed Randomness at Scale}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{17:1--17:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.17},
  URN =		{urn:nbn:de:0030-drops-209533},
  doi =		{10.4230/LIPIcs.AFT.2024.17},
  annote =	{Keywords: Randomness beacons, accumulators}
}
Document
Are Your Keys Protected? Time Will Tell

Authors: Yoav Ben Dov, Liron David, Moni Naor, and Elad Tzalik

Published in: LIPIcs, Volume 304, 5th Conference on Information-Theoretic Cryptography (ITC 2024)


Abstract
Side-channel attacks, and in particular timing attacks, are a fundamental obstacle to obtaining secure implementations of algorithms and cryptographic protocols, and have been widely researched for decades. Yet while cryptographic definitions of security have been well established for just as long, none of the accepted definitions take into account the running-time information leaked by executing the system. In this work, we lay the foundation for new cryptographic definitions that take this leaked running time into account, focusing mainly on keyed functions such as signature and encryption schemes. Specifically:
1) We define several cryptographic properties expressing the claim that the timing information does not help an adversary extract sensitive information, e.g. the key or the queries made. We highlight the definition of key-obliviousness, which means that an adversary cannot tell whether it received the timing of the queries with the actual key or the timing of the same queries with a random key.
2) We present a construction of key-oblivious pseudorandom permutations on a small or medium-sized domain. This construction is not "fixed-time," yet it is secure against any number of queries even when the adversary knows the running time exactly. Our construction, which we call Janus Sometimes Recurse, is a variant of the "Sometimes Recurse" shuffle by Morris and Rogaway.
3) We suggest a new security notion for keyed functions, called noticeable security, and prove that cryptographic schemes with noticeable security remain secure even when the exact timings are leaked, provided the implementation is key-oblivious. We show that our notion applies to cryptographic signatures, private-key encryption and PRPs.
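
A minimal sketch of the distinguishing experiment behind key-obliviousness as stated above. The paper's formal game is richer; the interface here (handing the adversary the actual key and only the measured timings) and the names timed_scheme, keygen, adversary are illustrative choices.

```python
import random
import time

def key_obliviousness_experiment(timed_scheme, keygen, adversary, queries):
    """One run of a toy distinguishing game: flip a bit b, time the same
    queries under the actual key (b = 0) or an independent random key (b = 1),
    give the adversary the actual key and the timings, and check its guess."""
    actual_key = keygen()
    b = random.getrandbits(1)
    key = actual_key if b == 0 else keygen()
    timings = []
    for query in queries:
        start = time.perf_counter()
        timed_scheme(key, query)
        timings.append(time.perf_counter() - start)
    return adversary(actual_key, queries, timings) == b

def advantage(timed_scheme, keygen, adversary, queries, trials=200):
    """Estimate how far the adversary's success rate is from random guessing."""
    wins = sum(key_obliviousness_experiment(timed_scheme, keygen, adversary, queries)
               for _ in range(trials))
    return abs(wins / trials - 0.5)

# With a constant-time scheme the timings carry no key information, so any
# adversary's estimated advantage should hover near zero (the 0.0005s
# threshold below is an arbitrary noise-based guess).
constant_time = lambda key, query: sum(range(1000))
keygen = lambda: random.getrandbits(64)
noise_guess = lambda key, queries, timings: int(sum(timings) > 0.0005)
print(advantage(constant_time, keygen, noise_guess, queries=[None] * 4))
```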

Cite as

Yoav Ben Dov, Liron David, Moni Naor, and Elad Tzalik. Are Your Keys Protected? Time Will Tell. In 5th Conference on Information-Theoretic Cryptography (ITC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 304, pp. 3:1-3:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bendov_et_al:LIPIcs.ITC.2024.3,
  author =	{Ben Dov, Yoav and David, Liron and Naor, Moni and Tzalik, Elad},
  title =	{{Are Your Keys Protected? Time Will Tell}},
  booktitle =	{5th Conference on Information-Theoretic Cryptography (ITC 2024)},
  pages =	{3:1--3:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-333-1},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{304},
  editor =	{Aggarwal, Divesh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2024.3},
  URN =		{urn:nbn:de:0030-drops-205119},
  doi =		{10.4230/LIPIcs.ITC.2024.3},
  annote =	{Keywords: Side channel attacks, Timing attacks, Keyed functions, Key oblivious, Noticeable security}
}
Document
Time-Space Tradeoffs for Finding Multi-Collisions in Merkle-Damgård Hash Functions

Authors: Akshima

Published in: LIPIcs, Volume 304, 5th Conference on Information-Theoretic Cryptography (ITC 2024)


Abstract
We analyze the multi-collision resistance of the Merkle-Damgård hash function construction in the auxiliary-input random oracle model. Finding a multi-collision, or m-way collision for some parameter m, in a hash function means finding m distinct inputs that have the same output under the hash function. This is a natural generalization of the collision finding problem in hash functions, which is simply finding 2-way collisions. Hardness of finding collisions, or collision resistance, is an important security assumption in cryptography. While the time-space trade-offs for collision resistance of hash functions have received considerable attention, this is the first work that studies time-space trade-offs for the multi-collision resistance property of hash functions based on the popular and widely used Merkle-Damgård (MD) construction. In this work, we study how the advantage of finding m-way collisions depends on the parameter m. We believe understanding whether multi-collision resistance is a strictly easier property than collision resistance is a fundamental problem, and our work facilitates this for adversaries with auxiliary information against MD-based hash functions. Furthermore, we study how the advantage varies with the bound on the length of the m colliding inputs. Prior works [Akshima et al., 2020; Ashrujit Ghoshal and Ilan Komargodski, 2022; Akshima et al., 2022] have shown that finding "longer" collisions with auxiliary input in MD-based hash functions becomes easier. More precisely, the advantage of finding collisions depends linearly on the bound on the length of the colliding inputs. In this work, we show a similar dependence for m-way collision finding, for any m ≥ 2. We show a simple attack for finding 1-block m-way collisions which achieves an advantage of Ω̃(S/mN). For 2 ≤ B < log m, we give the best known attack for finding B-block m-way collisions, which achieves an advantage of Ω̃(ST/m^{1/(B-1)}N) when m^{1/(B-1)}-way collisions exist on every salt. For B > log m, our attack achieves an advantage of Ω̃(STB/N), which is optimal when SB ≥ T and ST² ≤ N. The main result of this work is showing that our attacks are optimal for B = 1 and B = 2. This implies that in the auxiliary-input random oracle model, the advantage decreases by a multiplicative factor of m for finding 1-block and 2-block m-way collisions (compared to collision finding) in Merkle-Damgård based hash functions.
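
To make the objects concrete, here is a toy Merkle-Damgård construction over a deliberately truncated compression function, together with a brute-force m-way collision search; the paper's time-space attacks with auxiliary input and bounded block length are far more involved. The 16-bit truncation is an illustrative choice so that collisions are findable in milliseconds.

```python
import hashlib
from collections import defaultdict
from itertools import count

def compress(chaining, block, out_bits=16):
    """Toy compression function: truncated SHA-256, so collisions are easy to
    find by brute force (purely for illustration)."""
    return hashlib.sha256(chaining + block).digest()[: out_bits // 8]

def md_hash(iv, blocks):
    """Merkle-Damgard: iterate the compression function over the message blocks."""
    h = iv
    for block in blocks:
        h = compress(h, block)
    return h

def find_m_way_collision(iv, m):
    """Brute-force m-way collision on 1-block messages: hash distinct blocks
    until some output value has been hit m times."""
    buckets = defaultdict(list)
    for i in count():
        block = i.to_bytes(8, "big")
        out = md_hash(iv, [block])
        buckets[out].append(block)
        if len(buckets[out]) == m:
            return out, buckets[out]

iv = b"\x00\x00"
out, colliding_blocks = find_m_way_collision(iv, m=3)
assert len(set(colliding_blocks)) == 3
assert all(md_hash(iv, [b]) == out for b in colliding_blocks)
```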

Cite as

Akshima. Time-Space Tradeoffs for Finding Multi-Collisions in Merkle-Damgård Hash Functions. In 5th Conference on Information-Theoretic Cryptography (ITC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 304, pp. 9:1-9:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{akshima:LIPIcs.ITC.2024.9,
  author =	{Akshima},
  title =	{{Time-Space Tradeoffs for Finding Multi-Collisions in Merkle-Damg\r{a}rd Hash Functions}},
  booktitle =	{5th Conference on Information-Theoretic Cryptography (ITC 2024)},
  pages =	{9:1--9:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-333-1},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{304},
  editor =	{Aggarwal, Divesh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2024.9},
  URN =		{urn:nbn:de:0030-drops-205171},
  doi =		{10.4230/LIPIcs.ITC.2024.9},
  annote =	{Keywords: Collision, hash functions, multi-collisions, Merkle-Damg\r{a}rd, pre-computation, auxiliary input}
}
Document
Secure Multiparty Computation of Symmetric Functions with Polylogarithmic Bottleneck Complexity and Correlated Randomness

Authors: Reo Eriguchi

Published in: LIPIcs, Volume 304, 5th Conference on Information-Theoretic Cryptography (ITC 2024)


Abstract
Bottleneck complexity is an efficiency measure for secure multiparty computation (MPC) protocols, introduced to achieve load-balancing in large-scale networks; it is defined as the maximum communication complexity required of any one player during the protocol execution. Towards the goal of achieving low bottleneck complexity, prior works proposed MPC protocols for computing symmetric functions in the correlated randomness model, where players are given input-independent correlated randomness in advance. However, the previous protocols with bottleneck complexity polylogarithmic in the number n of players require an amount of correlated randomness that is linear in n, which limits per-party efficiency since receiving and storing the correlated randomness is itself a bottleneck. In this work, we present the first MPC protocols for symmetric functions in which both the bottleneck complexity and the amount of correlated randomness are polylogarithmic in n, assuming semi-honest adversaries colluding with at most n-o(n) players. Furthermore, one of our protocols is even computationally efficient: each player performs only polylog(n) arithmetic operations, while the computational complexity of the previous protocols is O(n). Technically, our efficiency improvements come from novel protocols based on ramp secret sharing that realize basic functionalities with low bottleneck complexity, which we believe may be of interest beyond their application to secure computation of symmetric functions.
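
A folklore toy (not the paper's protocols) illustrating the two quantities involved: masked chain aggregation of the players' inputs, where dealer-provided additive masks summing to zero are the correlated randomness and each party forwards a single field element, so its communication (the bottleneck measure) is one element. The modulus, the chain topology, and the final symmetric function are illustrative choices, and the privacy argument here is only informal.

```python
import random

P = 2**61 - 1  # a prime modulus for the toy arithmetic

def deal_correlated_randomness(n):
    """Offline dealer phase: additive masks r_1..r_n with sum 0 mod P."""
    masks = [random.randrange(P) for _ in range(n - 1)]
    masks.append((-sum(masks)) % P)
    return masks

def chain_sum_protocol(inputs, masks):
    """Online phase: party i adds x_i + r_i to a running value and forwards it.
    Each party sends exactly one element (O(log P) bits), and the masks hide
    the partial sums from the next party along the chain."""
    acc = 0
    transcript = []
    for x, r in zip(inputs, masks):
        acc = (acc + x + r) % P
        transcript.append(acc)      # what the next party observes
    return acc, transcript

n = 8
inputs = [random.randrange(2) for _ in range(n)]   # one input bit per party
masks = deal_correlated_randomness(n)
total, _ = chain_sum_protocol(inputs, masks)
assert total == sum(inputs) % P
# A symmetric function of the inputs, e.g. majority, depends only on this sum.
majority = total > n // 2
```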

Cite as

Reo Eriguchi. Secure Multiparty Computation of Symmetric Functions with Polylogarithmic Bottleneck Complexity and Correlated Randomness. In 5th Conference on Information-Theoretic Cryptography (ITC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 304, pp. 10:1-10:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{eriguchi:LIPIcs.ITC.2024.10,
  author =	{Eriguchi, Reo},
  title =	{{Secure Multiparty Computation of Symmetric Functions with Polylogarithmic Bottleneck Complexity and Correlated Randomness}},
  booktitle =	{5th Conference on Information-Theoretic Cryptography (ITC 2024)},
  pages =	{10:1--10:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-333-1},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{304},
  editor =	{Aggarwal, Divesh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2024.10},
  URN =		{urn:nbn:de:0030-drops-205182},
  doi =		{10.4230/LIPIcs.ITC.2024.10},
  annote =	{Keywords: Secure multiparty computation, Bottleneck complexity, Secret sharing}
}
Document
Linear-Size Boolean Circuits for Multiselection

Authors: Justin Holmgren and Ron Rothblum

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We study the circuit complexity of the multiselection problem: given an input string x ∈ {0,1}ⁿ along with indices i_1,… ,i_q ∈ [n], output (x_{i_1},… ,x_{i_q}). A trivial lower bound for the circuit size is the input length n + q⋅log(n), but the straightforward construction has size Θ(q⋅n). Our main result is an O(n+q⋅log³(n))-size and O(log(n+q))-depth circuit for multiselection. In particular, for any q ≤ n/log³(n) the circuit has linear size and logarithmic depth. Prior to our work no linear-size circuit for multiselection was known for any q = ω(1) and regardless of depth.
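
As a baseline only, here is the trivial Θ(q⋅n)-size approach expressed as q independent multiplexers, with boolean operations on Python booleans standing in for gates; the paper's O(n + q⋅log³(n)) construction is substantially more intricate. The helper names are illustrative.

```python
def mux(bits, index_bits):
    """One n-way multiplexer built from basic boolean operations: selects
    bits[i], where i is encoded by index_bits (most significant bit first)."""
    selected = bits
    for b in index_bits:
        half = len(selected) // 2
        low, high = selected[:half], selected[half:]
        selected = [(x and not b) or (y and b) for x, y in zip(low, high)]
    return selected[0]

def multiselect_naive(x, indices, log_n):
    """The trivial construction: one independent multiplexer per query, so the
    circuit size grows as Theta(q * n)."""
    def index_bits(i):
        return [bool((i >> (log_n - 1 - k)) & 1) for k in range(log_n)]
    return [mux(x, index_bits(i)) for i in indices]

x = [bool(int(c)) for c in "10110100"]          # n = 8 input bits
queries = [0, 3, 5, 5]                          # q = 4 requested indices
assert multiselect_naive(x, queries, 3) == [x[0], x[3], x[5], x[5]]
```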

Cite as

Justin Holmgren and Ron Rothblum. Linear-Size Boolean Circuits for Multiselection. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 11:1-11:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{holmgren_et_al:LIPIcs.CCC.2024.11,
  author =	{Holmgren, Justin and Rothblum, Ron},
  title =	{{Linear-Size Boolean Circuits for Multiselection}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{11:1--11:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.11},
  URN =		{urn:nbn:de:0030-drops-204070},
  doi =		{10.4230/LIPIcs.CCC.2024.11},
  annote =	{Keywords: Private Information Retrieval, Batch Selection, Boolean Circuits}
}
Document
Gap MCSP Is Not (Levin) NP-Complete in Obfustopia

Authors: Noam Mazor and Rafael Pass

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We demonstrate that under believable cryptographic hardness assumptions, Gap versions of standard meta-complexity problems, such as the Minimum Circuit Size Problem (MCSP) and the Minimum Time-Bounded Kolmogorov Complexity problem (MKTP), are not NP-complete w.r.t. Levin (i.e., witness-preserving many-to-one) reductions. In more detail:
- Assuming the existence of indistinguishability obfuscation and subexponentially-secure one-way functions, an appropriate Gap version of MCSP is not NP-complete under randomized Levin-reductions.
- Assuming the existence of subexponentially-secure indistinguishability obfuscation, subexponentially-secure one-way functions and injective PRGs, an appropriate Gap version of MKTP is not NP-complete under randomized Levin-reductions.

Cite as

Noam Mazor and Rafael Pass. Gap MCSP Is Not (Levin) NP-Complete in Obfustopia. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 36:1-36:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mazor_et_al:LIPIcs.CCC.2024.36,
  author =	{Mazor, Noam and Pass, Rafael},
  title =	{{Gap MCSP Is Not (Levin) NP-Complete in Obfustopia}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{36:1--36:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.36},
  URN =		{urn:nbn:de:0030-drops-204322},
  doi =		{10.4230/LIPIcs.CCC.2024.36},
  annote =	{Keywords: Kolmogorov complexity, MCSP, Levin Reduction}
}
Document
Track A: Algorithms, Complexity and Games
Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error

Authors: Guy Goldberg

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Relaxed locally decodable codes (RLDCs) are error-correcting codes in which individual bits of the message can be recovered by querying only a few bits of a noisy codeword. For uncorrupted codewords, and for every bit, the decoder must decode the bit correctly with high probability. For a noisy codeword, however, a relaxed local decoder is allowed to output a "rejection" symbol, indicating that the decoding failed. We study the power of adaptivity and two-sided error for RLDCs. Our main result is that if the underlying code is linear, adaptivity and two-sided error give no power to relaxed local decoding. We construct a reduction from adaptive, two-sided error relaxed local decoders to non-adaptive, one-sided error ones. That is, the reduction produces a relaxed local decoder that never errs or rejects if its input is a valid codeword, and that makes queries based only on its internal randomness (and the requested index to decode), independently of the input. The reduction essentially maintains the query complexity, requiring at most one additional query, and for any input the decoder's error probability increases at most two-fold. Furthermore, assuming the underlying code is in systematic form, where the original message is embedded as the first bits of its encoding, the reduction also preserves the code itself as well as its rate and distance. We base the reduction on our new notion of additive promise problems. A promise problem is additive if the sum of any two YES-instances is a YES-instance and the sum of any NO-instance and a YES-instance is a NO-instance. This novel framework captures both linear RLDCs and property testing (of linear properties), despite their significant differences. We prove that, in general, algorithms for any additive promise problem do not gain power from adaptivity or two-sided error, and we obtain the result for RLDCs as a special case. The result also holds for relaxed locally correctable codes (RLCCs), where a codeword bit should be recovered. As an application, we improve the best known lower bound for linear adaptive RLDCs. Specifically, we prove that such codes require block length n ≥ k^{1+Ω(1/q²)}, where k denotes the message length and q denotes the number of queries.
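
A concrete toy instance of an additive promise problem, using the standard [7,4] Hamming code: YES-instances are codewords and NO-instances are non-codewords (distance parameter d = 1, chosen for simplicity). Because syndromes add over GF(2), YES + YES is a YES-instance and NO + YES remains a NO-instance, which is exactly the additivity property described above.

```python
H = [  # parity-check matrix of the [7,4] Hamming code
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    return tuple(sum(h * x for h, x in zip(row, word)) % 2 for row in H)

def is_yes(word):               # YES-instance: a codeword (zero syndrome)
    return syndrome(word) == (0, 0, 0)

def add(u, v):                  # coordinate-wise sum over GF(2)
    return tuple((a + b) % 2 for a, b in zip(u, v))

yes1, yes2 = (0,) * 7, (1, 1, 1, 0, 0, 0, 0)
no = (1, 0, 0, 0, 0, 0, 0)
assert is_yes(yes1) and is_yes(yes2) and not is_yes(no)
assert is_yes(add(yes1, yes2))        # YES + YES is a YES-instance
assert not is_yes(add(no, yes2))      # NO + YES stays a NO-instance
```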

Cite as

Guy Goldberg. Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 74:1-74:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{goldberg:LIPIcs.ICALP.2024.74,
  author =	{Goldberg, Guy},
  title =	{{Linear Relaxed Locally Decodable and Correctable Codes Do Not Need Adaptivity and Two-Sided Error}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{74:1--74:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.74},
  URN =		{urn:nbn:de:0030-drops-202174},
  doi =		{10.4230/LIPIcs.ICALP.2024.74},
  annote =	{Keywords: Locally decodable codes, Relaxed locally correctable codes, Relaxed locally decodable codes}
}
Document
Scalable Distributed Agreement from LWE: Byzantine Agreement, Broadcast, and Leader Election

Authors: Rex Fernando, Yuval Gelles, and Ilan Komargodski

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Distributed agreement is a general name for the task of ensuring consensus among non-faulty nodes in the presence of faulty or malicious behavior. Well-known instances of agreement tasks are Byzantine Agreement, Broadcast, and Committee or Leader Election. Since agreement tasks lie at the heart of many modern distributed applications, there has been increased interest in designing scalable protocols for these tasks. Specifically, we want protocols where the per-party communication complexity scales sublinearly with the number of parties. With unconditional security, the state-of-the-art protocols have Õ(√n) per-party communication and Õ(1) rounds, where n stands for the number of parties, tolerating a 1/3-ε fraction of corruptions for any ε > 0. There are matching lower bounds showing that these protocols are essentially optimal among a large class of protocols. Recently, Boyle-Cohen-Goel (PODC 2021) relaxed the attacker to be computationally bounded and, using strong cryptographic assumptions, showed a protocol with Õ(1) per-party communication and rounds (similarly tolerating a 1/3-ε fraction of corruptions). The security of their protocol relies on SNARKs for NP with linear-time extraction, a somewhat strong and non-standard assumption. Their protocol further relies on a public-key infrastructure (PKI) and a common reference string (CRS). In this work, we present a new protocol with Õ(1) per-party communication and rounds, relying only on the standard Learning With Errors (LWE) assumption. Our protocol also relies on a PKI and a CRS, and tolerates a 1/3-ε fraction of corruptions, similarly to Boyle et al. Technically, we leverage (multi-hop) BARGs for NP directly and in a generic manner which significantly deviates from the framework of Boyle et al.

Cite as

Rex Fernando, Yuval Gelles, and Ilan Komargodski. Scalable Distributed Agreement from LWE: Byzantine Agreement, Broadcast, and Leader Election. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 46:1-46:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{fernando_et_al:LIPIcs.ITCS.2024.46,
  author =	{Fernando, Rex and Gelles, Yuval and Komargodski, Ilan},
  title =	{{Scalable Distributed Agreement from LWE: Byzantine Agreement, Broadcast, and Leader Election}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{46:1--46:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.46},
  URN =		{urn:nbn:de:0030-drops-195744},
  doi =		{10.4230/LIPIcs.ITCS.2024.46},
  annote =	{Keywords: Byzantine agreement, scalable, learning with errors}
}
Document
Brief Announcement
Brief Announcement: Scalable Agreement Protocols with Optimal Optimistic Efficiency

Authors: Yuval Gelles and Ilan Komargodski

Published in: LIPIcs, Volume 281, 37th International Symposium on Distributed Computing (DISC 2023)


Abstract
Designing efficient distributed protocols for various agreement tasks such as Byzantine Agreement, Broadcast, and Committee Election is a fundamental problem. We are interested in scalable protocols for these tasks, where each (honest) party communicates a number of bits which is sublinear in n, the number of parties. The first major step towards this goal is due to King et al. (SODA 2006), who showed a protocol where each party sends only Õ(1) bits throughout Õ(1) rounds, but guarantees only that a 1-o(1) fraction of honest parties end up agreeing on a consistent output, assuming a constant < 1/3 fraction of static corruptions. A few years later, King et al. (ICDCN 2011) obtained a full agreement protocol in the same model, but where each party sends Õ(√n) bits throughout Õ(1) rounds. Getting a full agreement protocol with o(√n) communication per party has been a major challenge ever since. In light of this barrier, we propose a new framework for designing efficient agreement protocols. Specifically, we design Õ(1)-round protocols for all of the above tasks (assuming a constant < 1/3 fraction of static corruptions) with optimistic and pessimistic guarantees:
- Optimistic complexity: In an honest execution, all parties send only Õ(1) bits.
- Pessimistic complexity: In any other case, (honest) parties send Õ(√n) bits.
Thus, all an adversary can gain from deviating from the honest execution is that honest parties will need to work harder (i.e., transmit more bits) to reach agreement and terminate. Besides the above agreement tasks, we also use our new framework to obtain a scalable secure multiparty computation (MPC) protocol with optimistic and pessimistic complexities. Technically, we identify a relaxation of Byzantine Agreement (of independent interest) that allows all parties to fall back to a pessimistic execution in a coordinated way. We implement this relaxation with Õ(1) communication bits per party and within Õ(1) rounds.
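
A heavily simplified sketch of the optimistic/pessimistic control flow only, not of the agreement protocols themselves: run a cheap path, perform a coordinated consistency check, and fall back to the expensive path when the check fails. The majority-bit instantiation, the sample size 16, and the cost accounting are illustrative choices (the check's own reads are omitted from the count for brevity).

```python
import random

def with_optimistic_fallback(optimistic, looks_consistent, pessimistic):
    """Control-flow skeleton: try the cheap path; if the coordinated check
    fails, run the heavy path, so adversarial deviation only costs extra
    communication rather than correctness."""
    def run(inputs):
        guess, cost = optimistic(inputs)
        if looks_consistent(guess, inputs):
            return guess, cost
        answer, extra = pessimistic(inputs)
        return answer, cost + extra
    return run

# Toy instantiation: agree on the majority bit. The optimistic path reads a
# small sample, the check reads a second sample, the pessimistic path reads
# everything. The returned cost counts how many inputs were touched.
def sample_majority(inputs, k=16):
    sample = random.sample(inputs, k)
    return int(2 * sum(sample) >= k), k

def recheck(guess, inputs, k=16):
    return sample_majority(inputs, k)[0] == guess

def full_majority(inputs):
    return int(2 * sum(inputs) >= len(inputs)), len(inputs)

agree = with_optimistic_fallback(sample_majority, recheck, full_majority)
value, cost = agree([1] * 900 + [0] * 100)
# Typically value == 1 at cost 16; at worst, the fallback pays the full 1000.
```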

Cite as

Yuval Gelles and Ilan Komargodski. Brief Announcement: Scalable Agreement Protocols with Optimal Optimistic Efficiency. In 37th International Symposium on Distributed Computing (DISC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 281, pp. 42:1-42:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gelles_et_al:LIPIcs.DISC.2023.42,
  author =	{Gelles, Yuval and Komargodski, Ilan},
  title =	{{Brief Announcement: Scalable Agreement Protocols with Optimal Optimistic Efficiency}},
  booktitle =	{37th International Symposium on Distributed Computing (DISC 2023)},
  pages =	{42:1--42:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-301-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{281},
  editor =	{Oshman, Rotem},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2023.42},
  URN =		{urn:nbn:de:0030-drops-191684},
  doi =		{10.4230/LIPIcs.DISC.2023.42},
  annote =	{Keywords: Byzantine Agreement, Consensus, Optimistic-Pessimistic, Secure Multi-Party Computation}
}
Document
A Lower Bound on the Share Size in Evolving Secret Sharing

Authors: Noam Mazor

Published in: LIPIcs, Volume 267, 4th Conference on Information-Theoretic Cryptography (ITC 2023)


Abstract
Secret sharing schemes allow sharing a secret among a set of parties in a way that ensures that only authorized subsets of the parties learn the secret. Evolving secret sharing schemes (Komargodski, Naor, and Yogev [TCC '16]) achieve the same goal in a scenario where the parties arrive in an online fashion and there is no a priori bound on the number of parties. An important complexity measure of a secret sharing scheme is the share size, which is the maximum number of bits that a party may receive as a share. While there has been significant progress in recent years, the best constructions for both secret sharing and evolving secret sharing schemes have a share size that is exponential in the number of parties. On the other hand, the best lower bound, by Csirmaz [Eurocrypt '95], is sub-linear. In this work, we give a tight lower bound on the share size of evolving secret sharing schemes. Specifically, we show that the sub-linear lower bound of Csirmaz implies an exponential lower bound on evolving secret sharing.

Cite as

Noam Mazor. A Lower Bound on the Share Size in Evolving Secret Sharing. In 4th Conference on Information-Theoretic Cryptography (ITC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 267, pp. 2:1-2:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{mazor:LIPIcs.ITC.2023.2,
  author =	{Mazor, Noam},
  title =	{{A Lower Bound on the Share Size in Evolving Secret Sharing}},
  booktitle =	{4th Conference on Information-Theoretic Cryptography (ITC 2023)},
  pages =	{2:1--2:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-271-6},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{267},
  editor =	{Chung, Kai-Min},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2023.2},
  URN =		{urn:nbn:de:0030-drops-183300},
  doi =		{10.4230/LIPIcs.ITC.2023.2},
  annote =	{Keywords: Secret sharing, Evolving secret sharing}
}
Document
The Cost of Statistical Security in Proofs for Repeated Squaring

Authors: Cody Freitag and Ilan Komargodski

Published in: LIPIcs, Volume 267, 4th Conference on Information-Theoretic Cryptography (ITC 2023)


Abstract
In recent years, the number of applications of the repeated squaring assumption has been growing rapidly. The assumption states that, given a group element x, an integer T, and an RSA modulus N, it is hard to compute x^{2^T} mod N - or even to decide whether y = x^{2^T} mod N - in parallel time less than the trivial approach of simply computing T sequential squarings. This rise has been driven by efficient proof systems for repeated squaring, opening the door to more efficient constructions of verifiable delay functions, various secure computation primitives, and proof systems for more general languages. In this work, we study the complexity of statistically sound proofs for the repeated squaring relation. Technically, we consider proofs where the prover sends at most k ≥ 0 elements and the (probabilistic) verifier performs generic group operations over the group ℤ_N^⋆. As our main contribution, we show that for any (one-round) proof with a randomized verifier (i.e., an MA proof) the verifier either runs in parallel time Ω(T/(k+1)) with high probability, or is able to factor N given the proof provided by the prover. This shows that either the prover essentially sends p,q such that N = p⋅q (which is infeasible or undesirable in most applications), or a variant of Pietrzak's proof of repeated squaring (ITCS 2019) has optimal verifier complexity O(T/(k+1)). In particular, it is impossible to obtain a statistically sound one-round proof of repeated squaring with efficiency on par with the computationally sound protocol of Wesolowski (EUROCRYPT 2019), with a generic-group verifier. We further extend our one-round lower bound to a natural class of recursive interactive proofs for repeated squaring. For r-round recursive proofs where the prover is allowed to send k group elements per round, we show that the verifier either runs in parallel time Ω(T/(k+1)^r) with high probability, or is able to factor N given the proof transcript.
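
A minimal sketch of the objects in the assumption: the sequential computation of x^{2^T} mod N, and the shortcut available to anyone who learns the factorization p, q (reducing the exponent modulo φ(N)), which is the degenerate "proof" the lower bound refers to. The toy primes below are far too small for any real use.

```python
from math import gcd

def repeated_squaring(x, T, N):
    """The inherently sequential computation: T modular squarings."""
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

def fast_with_factorization(x, T, N, p, q):
    """Knowing p, q with N = p*q collapses the delay: reduce the exponent 2^T
    modulo phi(N) (valid whenever gcd(x, N) == 1)."""
    assert p * q == N and gcd(x, N) == 1
    phi = (p - 1) * (q - 1)
    return pow(x, pow(2, T, phi), N)

# Toy primes just above and below 10^6; real deployments use an RSA-sized N
# whose factorization nobody knows.
p, q = 1_000_003, 999_983
N, x, T = p * q, 5, 10_000
assert repeated_squaring(x, T, N) == fast_with_factorization(x, T, N, p, q)
```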

Cite as

Cody Freitag and Ilan Komargodski. The Cost of Statistical Security in Proofs for Repeated Squaring. In 4th Conference on Information-Theoretic Cryptography (ITC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 267, pp. 4:1-4:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{freitag_et_al:LIPIcs.ITC.2023.4,
  author =	{Freitag, Cody and Komargodski, Ilan},
  title =	{{The Cost of Statistical Security in Proofs for Repeated Squaring}},
  booktitle =	{4th Conference on Information-Theoretic Cryptography (ITC 2023)},
  pages =	{4:1--4:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-271-6},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{267},
  editor =	{Chung, Kai-Min},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2023.4},
  URN =		{urn:nbn:de:0030-drops-183326},
  doi =		{10.4230/LIPIcs.ITC.2023.4},
  annote =	{Keywords: Cryptographic Proofs, Repeated Squaring, Lower Bounds}
}
Document
Differentially Oblivious Turing Machines

Authors: Ilan Komargodski and Elaine Shi

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
Oblivious RAM (ORAM) is machinery that protects a RAM program from leaking information about its secret input to an observer of its memory access pattern. It is known that every ORAM must incur a logarithmic overhead compared to the non-oblivious RAM. In fact, even the seemingly weaker notion of differential obliviousness, which intuitively "protects" a single access by guaranteeing that the observed access patterns of every two "neighboring" logical access sequences satisfy (ε,δ)-differential privacy, is subject to a logarithmic lower bound. In this work, we show that any Turing machine computation can be generically compiled into a differentially oblivious one with only doubly logarithmic overhead. More precisely, given a Turing machine that makes N transitions, the compiled Turing machine makes O(N ⋅ log log N) transitions in total and the sequence of physical head movements satisfies (ε,δ)-differential privacy (for a constant ε and a negligible δ). We additionally show that Ω(log log N) overhead is necessary in a natural range of parameters (and in the balls-and-bins model). As a corollary, we show that there exist natural data structures, such as stacks and queues (supporting online operations), on N elements for which there is a differentially oblivious implementation on a Turing machine incurring amortized O(log log N) overhead per operation, while it is known that any oblivious implementation must consume Ω(log N) operations unconditionally, even on a RAM. Therefore, we obtain the first unconditional separation between obliviousness and differential obliviousness in the most natural setting of parameters where ε is a constant and δ is negligible. Before this work, such a separation was only known in the balls-and-bins model. Note that the lower bound applies in the RAM model while our upper bound is in the Turing machine model, making our separation stronger.
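
A tiny illustration of what leaking "only the access pattern" means, not of the paper's compiler or of differential obliviousness specifically: one lookup whose probe sequence depends on the secret query, and one whose probe sequence is fixed and hence oblivious (at a linear cost).

```python
def lookup_leaky(table, secret_key):
    """Binary search: the sequence of probed indices depends on secret_key, so
    an observer of the access pattern learns where the key lies."""
    probes, lo, hi = [], 0, len(table) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        probes.append(mid)
        if table[mid][0] == secret_key:
            return table[mid][1], probes
        if table[mid][0] < secret_key:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, probes

def lookup_oblivious(table, secret_key):
    """Linear scan touching every slot: the probe sequence 0..n-1 is the same
    for every query, so the access pattern reveals nothing about secret_key."""
    probes, result = [], None
    for i, (k, v) in enumerate(table):
        probes.append(i)
        if k == secret_key:
            result = v
    return result, probes

table = sorted((k, f"value-{k}") for k in range(16))
_, leaky_probes = lookup_leaky(table, 11)          # depends on the query
_, oblivious_probes = lookup_oblivious(table, 11)
assert oblivious_probes == list(range(16))         # independent of the query
```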

Cite as

Ilan Komargodski and Elaine Shi. Differentially Oblivious Turing Machines. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 68:1-68:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{komargodski_et_al:LIPIcs.ITCS.2021.68,
  author =	{Komargodski, Ilan and Shi, Elaine},
  title =	{{Differentially Oblivious Turing Machines}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{68:1--68:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.68},
  URN =		{urn:nbn:de:0030-drops-136071},
  doi =		{10.4230/LIPIcs.ITCS.2021.68},
  annote =	{Keywords: Differential privacy, Turing machines, obliviousness}
}
Document
Oblivious Parallel Tight Compaction

Authors: Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Enoch Peserico, and Elaine Shi

Published in: LIPIcs, Volume 163, 1st Conference on Information-Theoretic Cryptography (ITC 2020)


Abstract
In tight compaction one is given an array of balls, some of which are marked 0 and the rest marked 1. The output of the procedure is an array that contains all of the original balls, except that now the 0-balls appear before the 1-balls. In other words, tight compaction is equivalent to sorting the array according to 1-bit keys (not necessarily maintaining order within same-key balls). Tight compaction is not only an important algorithmic task in itself; its oblivious version has also played a key role in recent constructions of oblivious RAM compilers. We present a deterministic oblivious algorithm for tight compaction that, for input arrays of n balls, requires O(n) total work and O(log n) depth. Our algorithm is in the Exclusive-Read-Exclusive-Write Parallel-RAM model (i.e., EREW PRAM, the most restrictive PRAM model), and importantly it achieves asymptotic optimality in both total work and depth. To the best of our knowledge, no earlier work, even allowing randomization, achieves optimality in both total work and depth.
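
To pin down the functionality, here is a non-oblivious one-pass tight compaction, together with an oblivious alternative via a fixed odd-even mergesort network over the 1-bit keys; the network's compare-and-swap schedule is data-independent, but its O(n log² n) work is far from the paper's linear-work, logarithmic-depth result. The power-of-two input length is an assumption of this toy.

```python
def tight_compaction_nonoblivious(balls):
    """The functionality: all 0-marked balls before all 1-marked balls.
    Linear work, but the memory accesses depend on the data."""
    zeros = [(k, v) for k, v in balls if k == 0]
    ones = [(k, v) for k, v in balls if k == 1]
    return zeros + ones

def tight_compaction_oblivious_sortnet(balls):
    """Oblivious alternative: sort by the 1-bit key with Batcher's odd-even
    mergesort network. The compare-and-swap schedule depends only on n (a
    power of two here), but the work is O(n log^2 n), not O(n)."""
    a = list(balls)
    n = len(a)
    def cas(i, j):                      # compare-and-swap on the key bit only
        if a[i][0] > a[j][0]:
            a[i], a[j] = a[j], a[i]
    def oddeven_merge_sort(lo, length):
        if length > 1:
            m = length // 2
            oddeven_merge_sort(lo, m)
            oddeven_merge_sort(lo + m, m)
            oddeven_merge(lo, length, 1)
    def oddeven_merge(lo, length, r):
        m = r * 2
        if m < length:
            oddeven_merge(lo, length, m)
            oddeven_merge(lo + r, length, m)
            for i in range(lo + r, lo + length - r, m):
                cas(i, i + r)
        else:
            cas(lo, lo + r)
    oddeven_merge_sort(0, n)
    return a

balls = [(1, "a"), (0, "b"), (1, "c"), (0, "d"), (0, "e"), (1, "f"), (0, "g"), (1, "h")]
keys = [k for k, _ in tight_compaction_oblivious_sortnet(balls)]
assert keys == sorted(keys)             # all 0-balls precede all 1-balls
```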

Cite as

Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Enoch Peserico, and Elaine Shi. Oblivious Parallel Tight Compaction. In 1st Conference on Information-Theoretic Cryptography (ITC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 163, pp. 11:1-11:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{asharov_et_al:LIPIcs.ITC.2020.11,
  author =	{Asharov, Gilad and Komargodski, Ilan and Lin, Wei-Kai and Peserico, Enoch and Shi, Elaine},
  title =	{{Oblivious Parallel Tight Compaction}},
  booktitle =	{1st Conference on Information-Theoretic Cryptography (ITC 2020)},
  pages =	{11:1--11:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-151-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{163},
  editor =	{Tauman Kalai, Yael and Smith, Adam D. and Wichs, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2020.11},
  URN =		{urn:nbn:de:0030-drops-121164},
  doi =		{10.4230/LIPIcs.ITC.2020.11},
  annote =	{Keywords: Oblivious tight compaction, parallel oblivious RAM, EREW PRAM}
}
Document
Instance Complexity and Unlabeled Certificates in the Decision Tree Model

Authors: Tomer Grossman, Ilan Komargodski, and Moni Naor

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Instance complexity is a measure of the quality of an algorithm in which the performance of one algorithm is compared to that of others on each input individually. This is in sharp contrast to worst-case and average-case complexity measures, where the performance is compared either on the worst input or on an average one, respectively. We initiate the systematic study of instance complexity and optimality in the query model (a.k.a. the decision tree model). In this model, instance optimality of an algorithm for computing a function is the requirement that the complexity of the algorithm on any input is at most a constant factor larger than the complexity of the best correct algorithm. That is, we compare the decision tree to one that receives a certificate; its complexity is measured only when the certificate is correct, but correctness must hold on every input. We study both deterministic and randomized decision trees and provide various characterizations and barriers to more general results. We introduce a new measure of complexity called unlabeled-certificate complexity, appropriate for graph properties and other functions with symmetries, where only information about the structure of the graph is known to the competing algorithm. More precisely, the certificate is some permutation of the input (rather than the input itself) and correctness must be maintained even if the certificate is wrong. First we show that such an unlabeled certificate is sometimes very helpful in the worst case. We then study instance optimality with respect to this measure of complexity, where an algorithm is said to be instance optimal if for every input it performs roughly as well as the best algorithm that is given an unlabeled certificate (but is correct on every input). We show that instance optimality depends on the group of permutations under consideration. Our proofs rely on techniques from hypothesis testing and analysis of random graphs.
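
A toy example of the certificate-aided decision trees described above, assuming ordinary (labeled) certificates rather than the paper's unlabeled ones: computing the OR of n bits, where an honest certificate pointing at a 1-bit yields a one-query algorithm, while correctness on every input is preserved by a fallback that ignores a bad certificate.

```python
def or_with_certificate(query, n, certificate):
    """Compute OR(x_0..x_{n-1}) with query access to the bits. `certificate`
    is a claimed index of a 1-bit. If it is honest, one query suffices; if it
    is wrong (or the input is all zeros), we still answer correctly by falling
    back to querying every bit. Queries are counted to measure complexity."""
    queries = 0
    def q(i):
        nonlocal queries
        queries += 1
        return query(i)
    if certificate is not None and 0 <= certificate < n and q(certificate):
        return 1, queries                               # certificate checked out
    return int(any(q(i) for i in range(n))), queries    # fallback: up to n more

x = [0, 0, 0, 1, 0, 0]
val, cost = or_with_certificate(lambda i: x[i], len(x), certificate=3)
assert (val, cost) == (1, 1)                 # honest certificate: one query
val, cost = or_with_certificate(lambda i: x[i], len(x), certificate=0)
assert val == 1 and cost > 1                 # wrong certificate: still correct
```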

Cite as

Tomer Grossman, Ilan Komargodski, and Moni Naor. Instance Complexity and Unlabeled Certificates in the Decision Tree Model. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 56:1-56:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{grossman_et_al:LIPIcs.ITCS.2020.56,
  author =	{Grossman, Tomer and Komargodski, Ilan and Naor, Moni},
  title =	{{Instance Complexity and Unlabeled Certificates in the Decision Tree Model}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{56:1--56:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.56},
  URN =		{urn:nbn:de:0030-drops-117418},
  doi =		{10.4230/LIPIcs.ITCS.2020.56},
  annote =	{Keywords: decision tree complexity, instance complexity, instance optimality, query complexity, unlabeled certificates}
}
  • Refine by Author
  • 9 Komargodski, Ilan
  • 3 Naor, Moni
  • 2 Gelles, Yuval
  • 2 Grossman, Tomer
  • 2 Mazor, Noam
  • Show More...

  • Refine by Classification
  • 3 Security and privacy → Information-theoretic techniques
  • 2 Security and privacy → Cryptography
  • 2 Theory of computation → Complexity theory and logic
  • 2 Theory of computation → Cryptographic protocols
  • 2 Theory of computation → Design and analysis of algorithms
  • Show More...

  • Refine by Keyword
  • 2 Secret sharing
  • 1 Batch Selection
  • 1 Boolean Circuits
  • 1 Bottleneck complexity
  • 1 Byzantine Agreement
  • Show More...

  • Refine by Type
  • 18 document

  • Refine by Publication Year
  • 9 2024
  • 3 2023
  • 2 2015
  • 2 2020
  • 1 2018
  • Show More...
