14 Search Results for "Dziembowski, Stefan"


Document
Invertible Bloom Lookup Tables with Less Memory and Randomness

Authors: Nils Fleischhacker, Kasper Green Larsen, Maciej Obremski, and Mark Simkin

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In this work we study Invertible Bloom Lookup Tables (IBLTs) with small failure probabilities. IBLTs are highly versatile data structures that have found applications in set reconciliation protocols, error-correcting codes, and even the design of advanced cryptographic primitives. For storing n elements and ensuring correctness with probability at least 1 - δ, existing IBLT constructions require Ω(n(log(1/δ)/log(n) + 1)) space and they crucially rely on fully random hash functions. We present new constructions of IBLTs that are simultaneously more space efficient and require less randomness. For storing n elements with a failure probability of at most δ, our data structure only requires O(n + log(1/δ) log log(1/δ)) space and O(log(log(n)/δ))-wise independent hash functions. As a key technical ingredient we show that hashing n keys with any k-wise independent hash function h : U → [Cn] for some sufficiently large constant C guarantees with probability 1 - 2^{-Ω(k)} that at least n/2 keys will have a unique hash value. Proving this is non-trivial as k approaches n. We believe that the techniques used to prove this statement may be of independent interest. We apply our new IBLTs to the encrypted compression problem, recently studied by Fleischhacker, Larsen, and Simkin (Eurocrypt 2023). We extend their approach to work for a more general class of encryption schemes, and using our new IBLT we achieve an asymptotically better compression rate.
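
For readers unfamiliar with the data structure, the following is a minimal Python sketch of the standard IBLT insert/delete/peel mechanics the abstract builds on. The cell layout, the SHA-256-based hashing, and the parameters m and r are illustrative stand-ins, not the space- and randomness-efficient construction of the paper.

```python
import hashlib

def cell_index(key: int, seed: int, m: int) -> int:
    # Stand-in for the limited-independence hash functions analyzed in the
    # paper; here we simply reuse SHA-256 with different seeds.
    d = hashlib.sha256(f"{seed}:{key}".encode()).digest()
    return int.from_bytes(d, "big") % m

def fingerprint(key: int) -> int:
    return int.from_bytes(hashlib.sha256(f"fp:{key}".encode()).digest()[:8], "big")

class IBLT:
    """Each cell stores a signed count, an XOR of keys, and an XOR of key
    fingerprints; decoding repeatedly peels 'pure' cells."""

    def __init__(self, m: int, r: int = 3):
        self.m, self.r = m, r
        self.count = [0] * m
        self.key_xor = [0] * m
        self.fp_xor = [0] * m

    def _update(self, key: int, sign: int) -> None:
        for seed in range(self.r):
            i = cell_index(key, seed, self.m)
            self.count[i] += sign
            self.key_xor[i] ^= key
            self.fp_xor[i] ^= fingerprint(key)

    def insert(self, key: int) -> None:
        self._update(key, +1)

    def delete(self, key: int) -> None:
        self._update(key, -1)

    def list_entries(self):
        """Peel cells holding exactly one key (count = +/-1 and a matching
        fingerprint) until no further progress is possible."""
        recovered, progress = [], True
        while progress:
            progress = False
            for i in range(self.m):
                if abs(self.count[i]) == 1 and self.fp_xor[i] == fingerprint(self.key_xor[i]):
                    key, sign = self.key_xor[i], self.count[i]
                    recovered.append((key, sign))
                    self._update(key, -sign)
                    progress = True
        return recovered

# Demo: store 100 keys and recover them all by peeling.
t = IBLT(m=500)
keys = list(range(1000, 1100))
for k in keys:
    t.insert(k)
assert sorted(k for k, _ in t.list_entries()) == keys
```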

Cite as

Nils Fleischhacker, Kasper Green Larsen, Maciej Obremski, and Mark Simkin. Invertible Bloom Lookup Tables with Less Memory and Randomness. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 54:1-54:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{fleischhacker_et_al:LIPIcs.ESA.2024.54,
  author =	{Fleischhacker, Nils and Larsen, Kasper Green and Obremski, Maciej and Simkin, Mark},
  title =	{{Invertible Bloom Lookup Tables with Less Memory and Randomness}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{54:1--54:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.54},
  URN =		{urn:nbn:de:0030-drops-211252},
  doi =		{10.4230/LIPIcs.ESA.2024.54},
  annote =	{Keywords: Invertible Bloom Lookup Tables}
}
Document
Cornucopia: Distributed Randomness at Scale

Authors: Miranda Christ, Kevin Choi, and Joseph Bonneau

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
We propose Cornucopia, a protocol framework for distributed randomness beacons combining accumulators and verifiable delay functions. Cornucopia generalizes the Unicorn protocol, using an accumulator to enable efficient verification by each participant that their contribution has been included. The output is unpredictable as long as at least one participant is honest, yielding a scalable distributed randomness beacon with strong security properties. Proving this approach secure requires developing a novel property of accumulators, insertion security, which we show is both necessary and sufficient for Cornucopia-style protocols. We show that not all accumulators are insertion-secure, then prove that common constructions (Merkle trees, RSA accumulators, and bilinear accumulators) are either naturally insertion-secure or can be made so with trivial modifications.
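
A toy sketch of the Cornucopia-style flow described above, under several simplifying assumptions: the accumulator is a Merkle tree (one of the constructions the paper analyzes), the participant contributions and the delay parameter are made up for the example, and the verifiable delay function is replaced by plain iterated hashing, which is sequential but, unlike a real VDF, not efficiently verifiable.

```python
import hashlib
from typing import List, Tuple

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: List[bytes]) -> bytes:
    """Merkle tree playing the role of the accumulator."""
    level = [H(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: List[bytes], idx: int) -> List[Tuple[bytes, int]]:
    level, proof = [H(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[idx ^ 1], idx % 2))   # (sibling, am-I-the-right-child)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def merkle_verify(leaf: bytes, proof, root: bytes) -> bool:
    node = H(leaf)
    for sib, is_right in proof:
        node = H(sib + node) if is_right else H(node + sib)
    return node == root

def slow_function(seed: bytes, t: int = 100_000) -> bytes:
    """Placeholder for the VDF: iterated hashing is sequential but not
    efficiently verifiable, so this is only a stand-in."""
    out = seed
    for _ in range(t):
        out = H(out)
    return out

# Illustrative flow: everyone contributes, contributions are accumulated,
# and the beacon output is the delay function applied to the digest.
contributions = [f"participant-{i}-randomness".encode() for i in range(8)]
digest = merkle_root(contributions)
beacon_output = slow_function(digest)

# Participant 3 checks that its contribution was included before the delay
# computation started, which is what makes the output unpredictable to others.
proof = merkle_proof(contributions, 3)
assert merkle_verify(contributions[3], proof, digest)
```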

Cite as

Miranda Christ, Kevin Choi, and Joseph Bonneau. Cornucopia: Distributed Randomness at Scale. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 17:1-17:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{christ_et_al:LIPIcs.AFT.2024.17,
  author =	{Christ, Miranda and Choi, Kevin and Bonneau, Joseph},
  title =	{{Cornucopia: Distributed Randomness at Scale}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{17:1--17:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.17},
  URN =		{urn:nbn:de:0030-drops-209533},
  doi =		{10.4230/LIPIcs.AFT.2024.17},
  annote =	{Keywords: Randomness beacons, accumulators}
}
Document
Competitive Policies for Online Collateral Maintenance

Authors: Ghada Almashaqbeh, Sixia Chen, and Alexander Russell

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Layer-two blockchain protocols emerged to address scalability issues related to fees, storage cost, and confirmation delay of on-chain transactions. They aggregate off-chain transactions into fewer on-chain ones, thus offering immediate settlement and reduced transaction fees. To preserve security of the underlying ledger, layer-two protocols often work in a collateralized model; resources are committed on-chain to back up off-chain activities. A fundamental challenge that arises in this setup is determining a policy for establishing, committing, and replenishing the collateral in a way that maximizes the value of settled transactions. In this paper, we study this problem under two settings that model collateralized layer-two protocols. The first is a general model in which a party has an on-chain collateral C with a policy to decide whether to settle or discard each incoming transaction. The policy also specifies when to replenish C based on the remaining collateral value. The second model considers a discrete setup in which C is divided among k wallets, each of which is of size C/k, such that when a wallet is full, and so cannot settle any incoming transactions, it will be replenished. We devise several online policies for these models, and show how competitive they are compared to optimal (offline) policies that have full knowledge of the incoming transaction stream. To the best of our knowledge, we are the first to study and formulate online competitive policies for collateral and wallet management in the blockchain setting.
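
To make the discrete model concrete, here is a toy simulation of k wallets of size C/k processing an online transaction stream under a naive greedy rule. The replenishment delay, the best-fit assignment rule, and the synthetic stream are assumptions made for this sketch only; they are not the policies or the competitive analysis from the paper.

```python
import random

def simulate_greedy(stream, C=100.0, k=4, replenish_delay=5):
    # Collateral C split across k wallets of size C/k. A wallet that cannot
    # fit the incoming transaction is treated as full; it is then sent for
    # replenishment and becomes usable again after `replenish_delay` steps.
    cap = C / k
    used = [0.0] * k
    busy_until = [0] * k          # wallet unavailable while being replenished
    settled = 0.0
    for t, amount in enumerate(stream):
        available = [w for w in range(k) if t >= busy_until[w]]
        fitting = [w for w in available if used[w] + amount <= cap]
        if fitting:
            w = max(fitting, key=lambda w: used[w])   # best-fit greedy choice
            used[w] += amount
            settled += amount
        elif available:
            # No available wallet can settle this transaction: discard it and
            # replenish the fullest available wallet.
            w = max(available, key=lambda w: used[w])
            used[w] = 0.0
            busy_until[w] = t + replenish_delay
    return settled

random.seed(0)
stream = [round(random.uniform(1.0, 20.0), 2) for _ in range(200)]
print("value settled by the greedy policy:", round(simulate_greedy(stream), 2))
```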

Cite as

Ghada Almashaqbeh, Sixia Chen, and Alexander Russell. Competitive Policies for Online Collateral Maintenance. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 26:1-26:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{almashaqbeh_et_al:LIPIcs.AFT.2024.26,
  author =	{Almashaqbeh, Ghada and Chen, Sixia and Russell, Alexander},
  title =	{{Competitive Policies for Online Collateral Maintenance}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{26:1--26:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.26},
  URN =		{urn:nbn:de:0030-drops-209620},
  doi =		{10.4230/LIPIcs.AFT.2024.26},
  annote =	{Keywords: Blockchain layer-two solutions, Wallets, Collateral management, Online algorithms, Competitive analysis}
}
Document
Bribe & Fork: Cheap PCN Bribing Attacks via Forking Threat

Authors: Zeta Avarikioti, Paweł Kędzior, Tomasz Lizurej, and Tomasz Michalak

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
In this work, we reexamine the vulnerability of Payment Channel Networks (PCNs) to bribing attacks, where an adversary incentivizes blockchain miners to deliberately ignore a specific transaction to undermine the punishment mechanism of PCNs. While previous studies have posited a prohibitive cost for such attacks, we show that this cost can be dramatically reduced (to approximately $125), thereby increasing the likelihood of these attacks. To this end, we introduce Bribe & Fork, a modified bribing attack that leverages the threat of a so-called feather fork which we analyze with a novel formal model for the mining game with forking. We empirically analyze historical data of some real-world blockchain implementations to evaluate the scale of this cost reduction. Our findings shed more light on the potential vulnerability of PCNs and highlight the need for robust solutions.

Cite as

Zeta Avarikioti, Paweł Kędzior, Tomasz Lizurej, and Tomasz Michalak. Bribe & Fork: Cheap PCN Bribing Attacks via Forking Threat. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 11:1-11:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{avarikioti_et_al:LIPIcs.AFT.2024.11,
  author =	{Avarikioti, Zeta and K\k{e}dzior, Pawe{\l} and Lizurej, Tomasz and Michalak, Tomasz},
  title =	{{Bribe \& Fork: Cheap PCN Bribing Attacks via Forking Threat}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{11:1--11:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.11},
  URN =		{urn:nbn:de:0030-drops-209473},
  doi =		{10.4230/LIPIcs.AFT.2024.11},
  annote =	{Keywords: Blockchain, Payment Channels Networks, Timelock Bribing, Feather Forking}
}
Document
The Power of Counting Steps in Quantitative Games

Authors: Sougata Bose, Rasmus Ibsen-Jensen, David Purser, Patrick Totzke, and Pierre Vandenhove

Published in: LIPIcs, Volume 311, 35th International Conference on Concurrency Theory (CONCUR 2024)


Abstract
We study deterministic games of infinite duration played on graphs and focus on the strategy complexity of quantitative objectives. Such games are known to admit optimal memoryless strategies over finite graphs, but require infinite-memory strategies in general over infinite graphs. We provide new lower and upper bounds for the strategy complexity of mean-payoff and total-payoff objectives over infinite graphs, focusing on whether step-counter strategies (sometimes called Markov strategies) suffice to implement winning strategies. In particular, we show that over finitely branching arenas, three variants of limsup mean-payoff and total-payoff objectives admit winning strategies that are based either on a step counter or on a step counter and an additional bit of memory. Conversely, we show that for certain liminf total-payoff objectives, strategies resorting to a step counter and finite memory are not sufficient. For step-counter strategies, this settles the case of all classical quantitative objectives up to the second level of the Borel hierarchy.

Cite as

Sougata Bose, Rasmus Ibsen-Jensen, David Purser, Patrick Totzke, and Pierre Vandenhove. The Power of Counting Steps in Quantitative Games. In 35th International Conference on Concurrency Theory (CONCUR 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 311, pp. 13:1-13:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bose_et_al:LIPIcs.CONCUR.2024.13,
  author =	{Bose, Sougata and Ibsen-Jensen, Rasmus and Purser, David and Totzke, Patrick and Vandenhove, Pierre},
  title =	{{The Power of Counting Steps in Quantitative Games}},
  booktitle =	{35th International Conference on Concurrency Theory (CONCUR 2024)},
  pages =	{13:1--13:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-339-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{311},
  editor =	{Majumdar, Rupak and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2024.13},
  URN =		{urn:nbn:de:0030-drops-207852},
  doi =		{10.4230/LIPIcs.CONCUR.2024.13},
  annote =	{Keywords: Games on graphs, Markov strategies, quantitative objectives, infinite-state systems}
}
Document
Faster and Smaller Solutions of Obliging Games

Authors: Daniel Hausmann and Nir Piterman

Published in: LIPIcs, Volume 311, 35th International Conference on Concurrency Theory (CONCUR 2024)


Abstract
Obliging games have been introduced in the context of the game perspective on reactive synthesis in order to enforce a degree of cooperation between the to-be-synthesized system and the environment. Previous approaches to the analysis of obliging games have been small-step in the sense that they have been based on a reduction to standard (non-obliging) games in which single moves correspond to single moves in the original (obliging) game. Here, we propose a novel, large-step view on obliging games, reducing them to standard games in which single moves encode long-term behaviors in the original game. This not only allows us to give a meaningful definition of the environment winning in obliging games, but also leads to significantly improved bounds on both strategy sizes and the solution runtime for obliging games.

Cite as

Daniel Hausmann and Nir Piterman. Faster and Smaller Solutions of Obliging Games. In 35th International Conference on Concurrency Theory (CONCUR 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 311, pp. 28:1-28:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hausmann_et_al:LIPIcs.CONCUR.2024.28,
  author =	{Hausmann, Daniel and Piterman, Nir},
  title =	{{Faster and Smaller Solutions of Obliging Games}},
  booktitle =	{35th International Conference on Concurrency Theory (CONCUR 2024)},
  pages =	{28:1--28:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-339-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{311},
  editor =	{Majumdar, Rupak and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2024.28},
  URN =		{urn:nbn:de:0030-drops-208008},
  doi =		{10.4230/LIPIcs.CONCUR.2024.28},
  annote =	{Keywords: Two-player games, reactive synthesis, Emerson-Lei games, parity games}
}
Document
The Quantum Decoding Problem

Authors: André Chailloux and Jean-Pierre Tillich

Published in: LIPIcs, Volume 310, 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)


Abstract
One of the founding results of lattice-based cryptography is a quantum reduction from the Short Integer Solution (SIS) problem to the Learning with Errors (LWE) problem introduced by Regev. It has recently been pointed out by Chen, Liu and Zhandry [Chen et al., 2022] that this reduction can be made more powerful by replacing the LWE problem with a quantum equivalent, where the errors are given in quantum superposition. In parallel, Regev’s reduction has recently been adapted in the context of code-based cryptography by Debris, Remaud and Tillich [Debris-Alazard et al., 2023], who showed a reduction between the Short Codeword Problem and the Decoding Problem (the DRT reduction). This motivates the study of the Quantum Decoding Problem (QDP), which is the Decoding Problem but with errors in quantum superposition, and of how it behaves in the DRT reduction. The purpose of this paper is to introduce and to lay a firm foundation for QDP. We first show QDP is likely to be easier than classical decoding, by proving that it can be solved in quantum polynomial time in a large regime of noise, whereas no non-exponential quantum algorithm is known for the classical decoding problem. Then, we show that QDP can even be solved (albeit not necessarily efficiently) beyond the information-theoretic Shannon limit for classical decoding. We give precisely the largest noise level at which we can solve QDP, giving in a sense the information-theoretic limit for this new problem. Finally, we study how QDP can be used in the DRT reduction. First, we show that our algorithms can be properly used in the DRT reduction, showing that our quantum algorithms for QDP beyond Shannon capacity can be used to find minimal-weight codewords in a random code. On the negative side, we show that the DRT reduction cannot be, in all generality, a reduction between finding small codewords and QDP, by exhibiting quantum algorithms for QDP where this reduction entirely fails. Our proof techniques include the use of specific quantum measurements, such as q-ary unambiguous state discrimination and pretty good measurements, as well as strong concentration bounds on the weight distribution of random shifted dual codes, which we relate using quantum Fourier analysis.

Cite as

André Chailloux and Jean-Pierre Tillich. The Quantum Decoding Problem. In 19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 310, pp. 6:1-6:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chailloux_et_al:LIPIcs.TQC.2024.6,
  author =	{Chailloux, Andr\'{e} and Tillich, Jean-Pierre},
  title =	{{The Quantum Decoding Problem}},
  booktitle =	{19th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2024)},
  pages =	{6:1--6:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-328-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{310},
  editor =	{Magniez, Fr\'{e}d\'{e}ric and Grilo, Alex Bredariol},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2024.6},
  URN =		{urn:nbn:de:0030-drops-206767},
  doi =		{10.4230/LIPIcs.TQC.2024.6},
  annote =	{Keywords: quantum information theory, code-based cryptography, quantum algorithms}
}
Document
The Complexity of Simplifying ω-Automata Through the Alternating Cycle Decomposition

Authors: Antonio Casares and Corto Mascle

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
In 2021, Casares, Colcombet and Fijalkow introduced the Alternating Cycle Decomposition (ACD), a structure used to define optimal transformations of Muller into parity automata and to obtain theoretical results about the possibility of relabelling automata with different acceptance conditions. In this work, we study the complexity of computing the ACD and its DAG-version, proving that this can be done in polynomial time for suitable representations of the acceptance condition of the Muller automaton. As corollaries, we obtain that we can decide typeness of Muller automata in polynomial time, as well as the parity index of the languages they recognise. Furthermore, we show that we can minimise in polynomial time the number of colours (resp. Rabin pairs) defining a Muller (resp. Rabin) acceptance condition, but that these problems become NP-complete when taking into account the structure of an automaton using such a condition.

Cite as

Antonio Casares and Corto Mascle. The Complexity of Simplifying ω-Automata Through the Alternating Cycle Decomposition. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 35:1-35:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{casares_et_al:LIPIcs.MFCS.2024.35,
  author =	{Casares, Antonio and Mascle, Corto},
  title =	{{The Complexity of Simplifying \omega-Automata Through the Alternating Cycle Decomposition}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{35:1--35:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.35},
  URN =		{urn:nbn:de:0030-drops-205916},
  doi =		{10.4230/LIPIcs.MFCS.2024.35},
  annote =	{Keywords: Omega-regular languages, Muller automata, Zielonka tree}
}
Document
Streaming Zero-Knowledge Proofs

Authors: Graham Cormode, Marcel Dall'Agnol, Tom Gur, and Chris Hickey

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Streaming interactive proofs (SIPs) enable a space-bounded algorithm with one-pass access to a massive stream of data to verify a computation that requires large space, by communicating with a powerful but untrusted prover. This work initiates the study of zero-knowledge proofs for data streams. We define the notion of zero-knowledge in the streaming setting and construct zero-knowledge SIPs for the two main algorithmic building blocks in the streaming interactive proofs literature: the sumcheck and polynomial evaluation protocols. To the best of our knowledge, all known streaming interactive proofs are based on one of these two tools, and indeed, this allows us to obtain zero-knowledge SIPs for central streaming problems such as index, point and range queries, median, frequency moments, and inner product. Our protocols are efficient in terms of time and space, as well as communication: the verifier algorithm’s space complexity is polylog(n) and, after a non-interactive setup that uses a random string of near-linear length, the remaining parameters are n^{o(1)}. En route, we develop an algorithmic toolkit for designing zero-knowledge data stream protocols, consisting of an algebraic streaming commitment protocol and a temporal commitment protocol. Our analyses rely on delicate algebraic and information-theoretic arguments and reductions from average-case communication complexity.
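
The sumcheck protocol mentioned above is one of the two building blocks. Below is a minimal plain sumcheck sketch (not zero-knowledge and not streaming) for a multilinear polynomial over a toy prime field, included only to fix the mechanics the abstract refers to; the polynomial, the field, and the variable count are arbitrary examples.

```python
import random

P = 2**61 - 1  # prime modulus of the toy field
V = 3          # number of variables

def g(x):
    """A multilinear example polynomial in 3 variables over F_P."""
    x1, x2, x3 = x
    return (5 * x1 * x2 + 3 * x2 * x3 + 7 * x1 + 11) % P

def brute_force_sum():
    return sum(g([b1, b2, b3]) for b1 in (0, 1) for b2 in (0, 1) for b3 in (0, 1)) % P

def partial_sum(prefix, xi):
    """Prover's message in one round: g with the first variables fixed to the
    verifier's challenges, the current variable fixed to xi, and the remaining
    variables summed over {0,1}."""
    rest = V - len(prefix) - 1
    total = 0
    for mask in range(2**rest):
        tail = [(mask >> j) & 1 for j in range(rest)]
        total += g(prefix + [xi] + tail)
    return total % P

def sumcheck(claimed):
    prefix, claim = [], claimed
    for _ in range(V):
        s0, s1 = partial_sum(prefix, 0), partial_sum(prefix, 1)   # prover
        if (s0 + s1) % P != claim:                                # verifier check
            return False
        r = random.randrange(P)
        claim = (s0 + r * ((s1 - s0) % P)) % P   # g is linear in this variable
        prefix.append(r)
    return claim == g(prefix) % P                # final evaluation query

assert sumcheck(brute_force_sum())                   # honest claim is accepted
assert not sumcheck((brute_force_sum() + 1) % P)     # incorrect claim is rejected
```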

Cite as

Graham Cormode, Marcel Dall'Agnol, Tom Gur, and Chris Hickey. Streaming Zero-Knowledge Proofs. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 2:1-2:66, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cormode_et_al:LIPIcs.CCC.2024.2,
  author =	{Cormode, Graham and Dall'Agnol, Marcel and Gur, Tom and Hickey, Chris},
  title =	{{Streaming Zero-Knowledge Proofs}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{2:1--2:66},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.2},
  URN =		{urn:nbn:de:0030-drops-203988},
  doi =		{10.4230/LIPIcs.CCC.2024.2},
  annote =	{Keywords: Zero-knowledge proofs, streaming algorithms, computational complexity}
}
Document
Track A: Algorithms, Complexity and Games
Better Space-Time-Robustness Trade-Offs for Set Reconciliation

Authors: Djamal Belazzougui, Gregory Kucherov, and Stefan Walzer

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We consider the problem of reconstructing the symmetric difference between similar sets from their representations (sketches) of size linear in the number of differences. Exact solutions to this problem are based on error-correcting coding techniques and suffer from a large decoding time. Existing probabilistic solutions based on Invertible Bloom Lookup Tables (IBLTs) are time-efficient but offer insufficient success guarantees for many applications. Here we propose a tunable trade-off between the two approaches, combining the efficiency of IBLTs with an exponentially decreasing failure probability. The proof relies on a refined analysis of IBLTs proposed by Bæk Tejs Houen et al. (SOSA 2023), which is of independent interest. We also propose a modification of our algorithm that enables telling apart the elements of each set in the symmetric difference.
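
The IBLT-based approach works by encoding each set into a sketch and then peeling the cell-wise difference of the two sketches. The following self-contained toy version (fixed table size, SHA-256 hashing, small example sets) illustrates that idea only; it is not the tunable construction from the paper.

```python
import hashlib

M, R = 64, 3  # table size and number of hash positions (illustrative)

def cells(key):
    return [int.from_bytes(hashlib.sha256(f"{s}:{key}".encode()).digest(), "big") % M
            for s in range(R)]

def fp(key):
    return int.from_bytes(hashlib.sha256(f"fp:{key}".encode()).digest()[:8], "big")

def encode(s):
    """Sketch of a set: per cell, a signed count, an XOR of keys, an XOR of fingerprints."""
    t = [[0, 0, 0] for _ in range(M)]
    for key in s:
        for i in cells(key):
            t[i][0] += 1
            t[i][1] ^= key
            t[i][2] ^= fp(key)
    return t

def subtract(ta, tb):
    """Cell-wise difference of two sketches; shared elements cancel out."""
    return [[a0 - b0, a1 ^ b1, a2 ^ b2] for (a0, a1, a2), (b0, b1, b2) in zip(ta, tb)]

def peel(t):
    """Decode the symmetric difference: count +1 cells hold elements of A∖B,
    count -1 cells hold elements of B∖A."""
    only_a, only_b, progress = set(), set(), True
    while progress:
        progress = False
        for c in t:
            if abs(c[0]) == 1 and fp(c[1]) == c[2]:
                key, sign = c[1], c[0]
                (only_a if sign == 1 else only_b).add(key)
                for i in cells(key):
                    t[i][0] -= sign
                    t[i][1] ^= key
                    t[i][2] ^= fp(key)
                progress = True
    return only_a, only_b

A = set(range(1000, 1100))
B = (A - {1001, 1002, 1003}) | {2001, 2002}
only_a, only_b = peel(subtract(encode(A), encode(B)))
assert only_a == {1001, 1002, 1003} and only_b == {2001, 2002}
```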

Cite as

Djamal Belazzougui, Gregory Kucherov, and Stefan Walzer. Better Space-Time-Robustness Trade-Offs for Set Reconciliation. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 20:1-20:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{belazzougui_et_al:LIPIcs.ICALP.2024.20,
  author =	{Belazzougui, Djamal and Kucherov, Gregory and Walzer, Stefan},
  title =	{{Better Space-Time-Robustness Trade-Offs for Set Reconciliation}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{20:1--20:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.20},
  URN =		{urn:nbn:de:0030-drops-201639},
  doi =		{10.4230/LIPIcs.ICALP.2024.20},
  annote =	{Keywords: data structures, hashing, set reconciliation, invertible Bloom lookup tables, random hypergraphs, BCH codes}
}
Document
Track A: Algorithms, Complexity and Games
Two-Source and Affine Non-Malleable Extractors for Small Entropy

Authors: Xin Li and Yan Zhong

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Non-malleable extractors are generalizations and strengthening of standard randomness extractors, that are resilient to adversarial tampering. Such extractors have wide applications in cryptography and have become important cornerstones in recent breakthroughs of explicit constructions of two-source extractors and affine extractors for small entropy. However, explicit constructions of non-malleable extractors appear to be much harder than standard extractors. Indeed, in the well-studied models of two-source and affine non-malleable extractors, the previous best constructions only work for entropy rate > 2/3 and 1-γ for some small constant γ > 0 respectively by Li (FOCS' 23). In this paper, we present explicit constructions of two-source and affine non-malleable extractors that match the state-of-the-art constructions of standard ones for small entropy. Our main results include: - Two-source and affine non-malleable extractors (over 𝖥₂) for sources on n bits with min-entropy k ≥ log^C n and polynomially small error, matching the parameters of standard extractors by Chattopadhyay and Zuckerman (STOC' 16, Annals of Mathematics' 19) and Li (FOCS' 16). - Two-source and affine non-malleable extractors (over 𝖥₂) for sources on n bits with min-entropy k = O(log n) and constant error, matching the parameters of standard extractors by Li (FOCS' 23). Our constructions significantly improve previous results, and the parameters (entropy requirement and error) are the best possible without first improving the constructions of standard extractors. In addition, our improved affine non-malleable extractors give strong lower bounds for a certain kind of read-once linear branching programs, recently introduced by Gryaznov, Pudlák, and Talebanfard (CCC' 22) as a generalization of several well studied computational models. These bounds match the previously best-known average-case hardness results given by Chattopadhyay and Liao (CCC' 23) and Li (FOCS' 23), where the branching program size lower bounds are close to optimal, but the explicit functions we use here are different. Our results also suggest a possible deeper connection between non-malleable extractors and standard ones.

Cite as

Xin Li and Yan Zhong. Two-Source and Affine Non-Malleable Extractors for Small Entropy. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 108:1-108:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{li_et_al:LIPIcs.ICALP.2024.108,
  author =	{Li, Xin and Zhong, Yan},
  title =	{{Two-Source and Affine Non-Malleable Extractors for Small Entropy}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{108:1--108:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.108},
  URN =		{urn:nbn:de:0030-drops-202512},
  doi =		{10.4230/LIPIcs.ICALP.2024.108},
  annote =	{Keywords: Randomness Extractors, Non-malleable, Two-source, Affine}
}
Document
Non-Atomic Payment Splitting in Channel Networks

Authors: Stefan Dziembowski and Paweł Kędzior

Published in: LIPIcs, Volume 282, 5th Conference on Advances in Financial Technologies (AFT 2023)


Abstract
Off-chain channel networks are one of the most promising technologies for dealing with blockchain scalability and delayed finality issues. Parties connected within such networks can send coins to each other without interacting with the blockchain. Moreover, these payments can be "routed" over the network. Thanks to this, even parties that do not have a channel in common can perform payments between each other with the help of intermediaries. In this paper, we introduce a new notion that we call Non-Atomic Payment Splitting (NAPS) protocols, which allow the intermediaries in the network to split a payment recursively into several subpayments in such a way that the payment can succeed "partially" (i.e., not all of the requested amount may be transferred). This contrasts with existing splitting techniques, which are "atomic" in the sense that they do not allow such partial payments (we compare the "atomic" and "non-atomic" approaches in the paper). We define NAPS formally and then present a protocol, called "EthNA", that satisfies this definition. EthNA is based on very simple and efficient cryptographic tools; in particular, it does not use expensive cryptographic primitives. We implement a simple variant of EthNA in Solidity and provide some benchmarks. We also report on some experiments with routing using EthNA.
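
The following toy sketch illustrates only the non-atomic splitting idea: a requested amount is split across a party's channels and may be delivered only partially. The topology, the greedy routing rule, and the absence of any cryptography or on-chain dispute logic are simplifications for this example and do not reflect the EthNA protocol.

```python
# Channel capacities between parties (illustrative topology, not from the paper).
capacity = {
    ("S", "A"): 7, ("S", "B"): 4,
    ("A", "R"): 5, ("A", "B"): 3,
    ("B", "R"): 6,
}

def pay(u, receiver, amount, visited):
    """Try to push `amount` from u towards the receiver, splitting it across
    u's outgoing channels; returns how much was actually delivered, which may
    be less than requested (the 'non-atomic' part)."""
    if u == receiver:
        return amount
    delivered = 0
    for (a, b) in list(capacity):
        if a != u or b in visited:
            continue
        sub = min(amount - delivered, capacity[(a, b)])
        if sub == 0:
            continue
        got = pay(b, receiver, sub, visited | {b})
        capacity[(a, b)] -= got      # only the amount that really went through
        delivered += got
        if delivered == amount:
            break
    return delivered

sent = pay("S", "R", 12, {"S"})
print(f"requested 12, delivered {sent}")   # partial success is acceptable
```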

Cite as

Stefan Dziembowski and Paweł Kędzior. Non-Atomic Payment Splitting in Channel Networks. In 5th Conference on Advances in Financial Technologies (AFT 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 282, pp. 17:1-17:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{dziembowski_et_al:LIPIcs.AFT.2023.17,
  author =	{Dziembowski, Stefan and K\k{e}dzior, Pawe{\l}},
  title =	{{Non-Atomic Payment Splitting in Channel Networks}},
  booktitle =	{5th Conference on Advances in Financial Technologies (AFT 2023)},
  pages =	{17:1--17:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-303-4},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{282},
  editor =	{Bonneau, Joseph and Weinberg, S. Matthew},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2023.17},
  URN =		{urn:nbn:de:0030-drops-192068},
  doi =		{10.4230/LIPIcs.AFT.2023.17},
  annote =	{Keywords: Blockchain, Payment Channels Networks}
}
Document
Efficiently Testable Circuits

Authors: Mirza Ahad Baig, Suvradip Chakraborty, Stefan Dziembowski, Małgorzata Gałązka, Tomasz Lizurej, and Krzysztof Pietrzak

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
In this work, we put forward the notion of "efficiently testable circuits" and provide circuit compilers that transform any circuit into an efficiently testable one. Informally, a circuit is testable if one can detect tampering with the circuit by evaluating it on a small number of inputs from some test set. Our technical contribution is a compiler that transforms any circuit C into a testable circuit (Ĉ,𝕋̂) for which we can detect arbitrary tampering with all wires in Ĉ. The notion of a testable circuit is weaker than, or incomparable to, existing notions of tamper-resilience, which aim to detect or even correct errors introduced by tampering during every query, but our new notion is interesting in several settings, and we achieve security against much more general tampering classes (such as tampering with all wires) with very modest overhead. Concretely, starting from a circuit C of size n and depth d, for any L (think of L as a small constant, say L = 4), we get a testable circuit (Ĉ,𝕋̂) where Ĉ is of size ≈ 12n and depth d + log(n) + L⋅n^{1/L}. The test set 𝕋̂ is of size 4⋅2^L. The number of extra input and output wires (i.e., pins) we need to add for the testing is 3+L and 2^L, respectively.
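
The testing interface itself is simple to illustrate: evaluate the (possibly tampered) circuit on every input of a small test set and compare against the expected outputs. The sketch below shows that interface on a tiny hand-written circuit with an exhaustive test set; constructing a compiled circuit for which a small, fixed test set catches arbitrary wire tampering is the paper's actual contribution and is not reproduced here.

```python
from itertools import product

# A circuit is a list of gates (op, in1, in2) appended after the input wires;
# the last wire is the output. This is only the testing interface, not the
# paper's compiler.
def evaluate(circuit, inputs):
    wires = list(inputs)
    for op, a, b in circuit:
        x, y = wires[a], wires[b]
        wires.append({"AND": x & y, "OR": x | y, "XOR": x ^ y}[op])
    return wires[-1]

def passes_tests(circuit, test_set, expected):
    """A circuit is accepted iff it agrees with the expected outputs on every
    input of the (small) test set."""
    return all(evaluate(circuit, t) == e for t, e in zip(test_set, expected))

# Example circuit: out = (x0 AND x1) XOR x2, with inputs on wires 0, 1, 2.
circuit = [("AND", 0, 1), ("XOR", 3, 2)]

# Here the test set is exhaustive for readability; the point of the paper is
# that a compiled circuit needs only about 4 * 2^L test inputs, independent of n.
test_set = list(product((0, 1), repeat=3))
expected = [evaluate(circuit, t) for t in test_set]

tampered = [("OR", 0, 1), ("XOR", 3, 2)]   # adversary replaces the AND gate
assert passes_tests(circuit, test_set, expected)
assert not passes_tests(tampered, test_set, expected)
```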

Cite as

Mirza Ahad Baig, Suvradip Chakraborty, Stefan Dziembowski, Małgorzata Gałązka, Tomasz Lizurej, and Krzysztof Pietrzak. Efficiently Testable Circuits. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 10:1-10:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{baig_et_al:LIPIcs.ITCS.2023.10,
  author =	{Baig, Mirza Ahad and Chakraborty, Suvradip and Dziembowski, Stefan and Ga{\l}\k{a}zka, Ma{\l}gorzata and Lizurej, Tomasz and Pietrzak, Krzysztof},
  title =	{{Efficiently Testable Circuits}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{10:1--10:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.10},
  URN =		{urn:nbn:de:0030-drops-175130},
  doi =		{10.4230/LIPIcs.ITCS.2023.10},
  annote =	{Keywords: circuit compilers, circuit integrity, circuit testing}
}
Document
Lower Bounds for Off-Chain Protocols: Exploring the Limits of Plasma

Authors: Stefan Dziembowski, Grzegorz Fabiański, Sebastian Faust, and Siavash Riahi

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
Blockchain is a disruptive new technology introduced around a decade ago. It can be viewed as a method for recording timestamped transactions in a public database. Most blockchain protocols do not scale well, i.e., they cannot quickly process large amounts of transactions. A natural idea to deal with this problem is to use the blockchain only as a timestamping service, i.e., to hash several transactions tx_1,…,tx_m into one short string, and just put this string on the blockchain, while at the same time posting the hashed transactions tx_1,…,tx_m to some public place on the Internet ("off-chain"). In this way the transactions tx_i remain timestamped, but the amount of data put on the blockchain is greatly reduced. This idea was introduced in 2017 under the name Plasma by Poon and Buterin. Shortly after this proposal, several variants of Plasma were proposed. They are typically built on top of the Ethereum blockchain, as they strongly rely on so-called smart contracts (in order to resolve disputes between the users if some of them start cheating). Plasmas are an example of so-called off-chain protocols. In this work we initiate the study of the inherent limitations of Plasma protocols. More concretely, we show that in every Plasma system the adversary can either (a) force the honest parties to communicate a lot with the blockchain, even though they did not intend to (this is traditionally called a mass exit), or (b) force an honest party that wants to leave the system to quickly communicate large amounts of data to the blockchain. What makes these attacks particularly hard to handle in real life is that they do not have so-called uniquely attributable faults, i.e., the smart contract cannot determine which party is malicious, and hence cannot force it to pay the fees for the blockchain interaction. An important implication of our result is that the benefits of two of the most prominent Plasma types, called Plasma Cash and Fungible Plasma, cannot be achieved simultaneously. Besides the direct implications for real-life cryptocurrency research, we believe that this work may open up a new line of theoretical research, as, to the best of our knowledge, this is the first work that provides an impossibility result in the area of off-chain protocols.
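
The timestamping idea described above is easy to picture: only a short digest of a transaction batch goes on-chain, while the batch itself is published off-chain. The sketch below uses a flat hash of the batch for brevity, with made-up transaction strings; deployed Plasma constructions instead use Merkle commitments so that inclusion of individual transactions can be proven.

```python
import hashlib

def commit(txs):
    """Hash a batch of transactions into one short string; only this digest
    goes on-chain, while the transaction list itself is published off-chain."""
    h = hashlib.sha256()
    for tx in txs:
        h.update(len(tx).to_bytes(4, "big") + tx)   # length-prefix each item
    return h.hexdigest()

# Operator batches off-chain transactions and posts a single 32-byte digest.
offchain_txs = [f"tx-{i}: Alice pays Bob {i} wei".encode() for i in range(1000)]
onchain_digest = commit(offchain_txs)

# Anyone holding the off-chain data can later check it against the on-chain
# digest, so the transactions remain timestamped by the blockchain.
assert commit(offchain_txs) == onchain_digest
print("on-chain data per 1000 transactions:", len(onchain_digest) // 2, "bytes")
```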

Cite as

Stefan Dziembowski, Grzegorz Fabiański, Sebastian Faust, and Siavash Riahi. Lower Bounds for Off-Chain Protocols: Exploring the Limits of Plasma. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 72:1-72:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{dziembowski_et_al:LIPIcs.ITCS.2021.72,
  author =	{Dziembowski, Stefan and Fabia\'{n}ski, Grzegorz and Faust, Sebastian and Riahi, Siavash},
  title =	{{Lower Bounds for Off-Chain Protocols: Exploring the Limits of Plasma}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{72:1--72:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.72},
  URN =		{urn:nbn:de:0030-drops-136113},
  doi =		{10.4230/LIPIcs.ITCS.2021.72},
  annote =	{Keywords: blockchain, lower bounds, off-chain protocol, commit chain, plasma}
}
  • Refine by Author
  • 3 Dziembowski, Stefan
  • 2 Kędzior, Paweł
  • 2 Lizurej, Tomasz
  • 1 Almashaqbeh, Ghada
  • 1 Avarikioti, Zeta
  • Show More...

  • Refine by Classification
  • 2 Security and privacy → Cryptography
  • 2 Security and privacy → Systems security
  • 2 Theory of computation → Automata over infinite objects
  • 2 Theory of computation → Error-correcting codes
  • 1 Applied computing → Digital cash
  • Show More...

  • Refine by Keyword
  • 2 Blockchain
  • 2 Payment Channels Networks
  • 1 Affine
  • 1 BCH codes
  • 1 Blockchain layer-two solutions
  • Show More...

  • Refine by Type
  • 14 document

  • Refine by Publication Year
  • 11 2024
  • 2 2023
  • 1 2021
