16 Search Results for "Yogev, Eylon"


Document
On Parallel Repetition of PCPs

Authors: Alessandro Chiesa, Ziyi Guan, and Burcu Yıldız

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Parallel repetition refers to a set of valuable techniques used to reduce soundness error of probabilistic proofs while saving on certain efficiency measures. Parallel repetition has been studied for interactive proofs (IPs) and multi-prover interactive proofs (MIPs). In this paper we initiate the study of parallel repetition for probabilistically checkable proofs (PCPs). We show that, perhaps surprisingly, parallel repetition of a PCP can increase soundness error, in fact bringing the soundness error to one as the number of repetitions tends to infinity. This "failure" of parallel repetition is common: we find that it occurs for a wide class of natural PCPs for NP-complete languages. We explain this unexpected phenomenon by providing a characterization result: the parallel repetition of a PCP brings the soundness error to zero if and only if a certain "MIP projection" of the PCP has soundness error strictly less than one. We show that our characterization is tight via a suitable example. Moreover, for those cases where parallel repetition of a PCP does bring the soundness error to zero, the aforementioned connection to MIPs offers preliminary results on the rate of decay of the soundness error. Finally, we propose a simple variant of parallel repetition, called consistent parallel repetition (CPR), which has the same randomness complexity and query complexity as the plain variant of parallel repetition. We show that CPR brings the soundness error to zero for every PCP (with non-trivial soundness error). In fact, we show that CPR decreases the soundness error at an exponential rate in the repetition parameter.
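Editor's note: as a worked illustration of the object under study (a sketch paraphrasing the abstract, not the paper's formal definitions), the t-fold parallel repetition V^t of a PCP verifier V samples t independent random strings and requires all t checks to pass against a single repeated proof. The naive bound below, suggested by analogy with independent repetition, is exactly what the abstract shows can fail for PCPs.

% t-fold parallel repetition of a PCP verifier V (informal sketch):
% sample t independent random strings; accept iff every copy accepts.
\[
  V^{t}(x;\rho_1,\dots,\rho_t)\ \text{accepts} \iff
  \bigwedge_{j=1}^{t} \big[\, V(x;\rho_j)\ \text{accepts}\,\big].
\]
% Naive hope for the soundness error:
\[
  \epsilon\big(V^{t}\big) \;\stackrel{?}{\le}\; \epsilon(V)^{t},
\]
% which is false in general for PCPs: per the abstract, \epsilon(V^t)
% can even tend to 1 as t grows.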

Cite as

Alessandro Chiesa, Ziyi Guan, and Burcu Yıldız. On Parallel Repetition of PCPs. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 34:1-34:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chiesa_et_al:LIPIcs.ITCS.2024.34,
  author =	{Chiesa, Alessandro and Guan, Ziyi and Y{\i}ld{\i}z, Burcu},
  title =	{{On Parallel Repetition of PCPs}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{34:1--34:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.34},
  URN =		{urn:nbn:de:0030-drops-195629},
  doi =		{10.4230/LIPIcs.ITCS.2024.34},
  annote =	{Keywords: probabilistically checkable proofs, parallel repetition}
}
Document
Downward Self-Reducibility in TFNP

Authors: Prahladh Harsha, Daniel Mitropolsky, and Alon Rosen

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
A problem is downward self-reducible if it can be solved efficiently given an oracle that returns solutions for strictly smaller instances. In the decisional landscape, downward self-reducibility is well studied and it is known that all downward self-reducible problems are in PSPACE. In this paper, we initiate the study of downward self-reducible search problems which are guaranteed to have a solution - that is, the downward self-reducible problems in TFNP. We show that most natural PLS-complete problems are downward self-reducible, and that any downward self-reducible problem in TFNP is contained in PLS. Furthermore, if the downward self-reducible problem is in TFUP (i.e. it has a unique solution), then it is actually contained in UEOPL, a subclass of CLS. This implies that if integer factoring is downward self-reducible then it is in fact in UEOPL, suggesting that there is no efficient algorithm for factoring an integer given the factorizations of all smaller numbers.
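Editor's note: for readers new to the notion, here is a minimal Python sketch of the classical decisional example, SAT, solved via downward self-reduction. The oracle smaller_sat_oracle is a hypothetical stand-in for a decision procedure on strictly smaller formulas, and the encoding (clauses as sets of (variable, polarity) pairs) is ours, not the paper's.

def simplify(formula, var, value):
    """Substitute var := value; drop satisfied clauses, shrink the rest."""
    out = []
    for clause in formula:  # clause: set of (variable, polarity) literals
        if (var, value) in clause:
            continue  # a literal is satisfied, so the clause disappears
        out.append({lit for lit in clause if lit[0] != var})
    return out

def sat_search(formula, variables, smaller_sat_oracle):
    """Find a satisfying assignment using only satisfiability answers
    on strictly smaller formulas (downward self-reducibility of SAT)."""
    assignment = {}
    for var in variables:
        for value in (True, False):
            candidate = simplify(formula, var, value)  # strictly smaller
            if smaller_sat_oracle(candidate):
                assignment[var], formula = value, candidate
                break
        else:
            raise ValueError("formula is unsatisfiable")
    return assignment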

Cite as

Prahladh Harsha, Daniel Mitropolsky, and Alon Rosen. Downward Self-Reducibility in TFNP. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 67:1-67:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{harsha_et_al:LIPIcs.ITCS.2023.67,
  author =	{Harsha, Prahladh and Mitropolsky, Daniel and Rosen, Alon},
  title =	{{Downward Self-Reducibility in TFNP}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{67:1--67:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.67},
  URN =		{urn:nbn:de:0030-drops-175700},
  doi =		{10.4230/LIPIcs.ITCS.2023.67},
  annote =	{Keywords: downward self-reducibility, TFNP, TFUP, factoring, PLS, CLS}
}
Document
Secure Distributed Network Optimization Against Eavesdroppers

Authors: Yael Hitron, Merav Parter, and Eylon Yogev

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We present a new algorithmic framework for distributed network optimization in the presence of eavesdropper adversaries, also known as passive wiretappers. In this setting, the adversary listens to the traffic exchanged over a fixed set of edges in the graph, trying to extract information about the private inputs and outputs of the vertices. A distributed algorithm is called f-secure if it guarantees that the adversary learns nothing about the inputs and outputs of the vertices, provided that it controls at most f graph edges. Recent work has presented general simulation results for f-secure algorithms, with a round overhead of D^Θ(f), where D is the diameter of the graph. In this paper, we present a completely different white-box, and yet quite general, approach for obtaining f-secure algorithms for fundamental network optimization tasks. Specifically, for n-vertex D-diameter graphs with (unweighted) edge-connectivity Ω(f), there are f-secure congest algorithms for computing MST, partwise aggregation, and (1+ε) (weighted) minimum cut approximation, within Õ(D+f √n) congest rounds, hence nearly tight for f = Õ(1). Our algorithms are based on designing a secure algorithmic toolkit that leverages the special structure of congest algorithms for global optimization graph problems. One of these tools is a general secure compiler that simulates light-weight distributed algorithms in a congestion-sensitive manner. We believe that these tools lay the groundwork for designing additional secure solutions in the congest model and beyond.

Cite as

Yael Hitron, Merav Parter, and Eylon Yogev. Secure Distributed Network Optimization Against Eavesdroppers. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 71:1-71:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hitron_et_al:LIPIcs.ITCS.2023.71,
  author =	{Hitron, Yael and Parter, Merav and Yogev, Eylon},
  title =	{{Secure Distributed Network Optimization Against Eavesdroppers}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{71:1--71:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.71},
  URN =		{urn:nbn:de:0030-drops-175746},
  doi =		{10.4230/LIPIcs.ITCS.2023.71},
  annote =	{Keywords: congest, secure computation, network optimization}
}
Document
Broadcast CONGEST Algorithms Against Eavesdroppers

Authors: Yael Hitron, Merav Parter, and Eylon Yogev

Published in: LIPIcs, Volume 246, 36th International Symposium on Distributed Computing (DISC 2022)


Abstract
An eavesdropper is a passive adversary that aims to extract private information about the input and output values of the network’s participants by listening to the traffic exchanged over a subset of edges in the graph. We consider secure congest algorithms for the basic broadcast task in the presence of eavesdropper (edge) adversaries. For D-diameter n-vertex graphs with edge connectivity Θ(f), we present f-secure broadcast algorithms that run in Õ(D+√(fn)) rounds. These algorithms transmit some broadcast message m^* to all the vertices in the graph, in a way that is information-theoretically secure against an eavesdropper controlling any subset of at most f edges in the graph. While our algorithms are heavily based on network coding (secret sharing), we also show that this is essential. For the basic problem of secure unicast we demonstrate a network coding gap of Ω(n) rounds. In the presence of vertex adversaries, known as semi-honest, we introduce the Forbidden-Set Broadcast problem: In this problem, the vertices of the graph are partitioned into two sets, trusted and untrusted, denoted as R, F ⊆ V, respectively, such that G[R] is connected. It is then desired to exchange a secret message m^* between all the trusted vertices while leaking no information to the untrusted set F. Our algorithm works in Õ(D+√|R|) rounds and its security guarantees hold even when all the untrusted vertices F are controlled by a (centralized) adversary.
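Editor's note: the information-theoretic core of such algorithms is secret sharing. To hide m^* from an adversary seeing at most f edges, split it into f+1 shares, any f of which are jointly uniform, and route the shares along f+1 edge-disjoint paths. A minimal XOR-sharing sketch in Python follows; the routing over edge-disjoint paths and the paper's network-coding machinery are assumed away.

import secrets

def share(message: bytes, f: int) -> list[bytes]:
    """Split message into f+1 XOR shares; any f of them look uniform."""
    shares = [secrets.token_bytes(len(message)) for _ in range(f)]
    last = message
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR of all f+1 shares recovers the message."""
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

# Send shares[i] along the i-th of f+1 edge-disjoint paths; an eavesdropper
# on any f edges sees at most f shares, hence learns nothing about m^*.
m = b"broadcast message m*"
assert reconstruct(share(m, f=3)) == m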

Cite as

Yael Hitron, Merav Parter, and Eylon Yogev. Broadcast CONGEST Algorithms Against Eavesdroppers. In 36th International Symposium on Distributed Computing (DISC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 246, pp. 27:1-27:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{hitron_et_al:LIPIcs.DISC.2022.27,
  author =	{Hitron, Yael and Parter, Merav and Yogev, Eylon},
  title =	{{Broadcast CONGEST Algorithms Against Eavesdroppers}},
  booktitle =	{36th International Symposium on Distributed Computing (DISC 2022)},
  pages =	{27:1--27:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-255-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{246},
  editor =	{Scheideler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2022.27},
  URN =		{urn:nbn:de:0030-drops-172186},
  doi =		{10.4230/LIPIcs.DISC.2022.27},
  annote =	{Keywords: congest, edge-connectivity, secret sharing}
}
Document
Hardness of Approximation for Stochastic Problems via Interactive Oracle Proofs

Authors: Gal Arnon, Alessandro Chiesa, and Eylon Yogev

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
Hardness of approximation aims to establish lower bounds on the approximability of optimization problems in NP and beyond. We continue the study of hardness of approximation for problems beyond NP, specifically for stochastic constraint satisfaction problems (SCSPs). An SCSP with 𝗄 alternations is a list of constraints over variables grouped into 2𝗄 blocks, where each constraint has constant arity. An assignment to the SCSP is defined by two players who alternate in setting values to a designated block of variables, with one player choosing their assignments uniformly at random and the other player trying to maximize the number of satisfied constraints. In this paper, we establish hardness of approximation for SCSPs based on interactive proofs. For 𝗄 ≤ O(log n), we prove that it is AM[𝗄]-hard to approximate, to within a constant, the value of SCSPs with 𝗄 alternations and constant arity. Previously, this was known only for 𝗄 = O(1). Furthermore, we introduce a natural class of 𝗄-round interactive proofs, denoted IR[𝗄] (for interactive reducibility), and show that several protocols (e.g., the sumcheck protocol) are in IR[𝗄]. Using this notion, we extend our inapproximability results to all values of 𝗄: we show that for every 𝗄, approximating an SCSP instance with O(𝗄) alternations and constant arity is IR[𝗄]-hard. While hardness of approximation for CSPs is achieved by constructing suitable PCPs, our results for SCSPs are achieved by constructing suitable IOPs (interactive oracle proofs). We show that every language in AM[𝗄 ≤ O(log n)] or in IR[𝗄] has an O(𝗄)-round IOP whose verifier has constant query complexity (regardless of the number of rounds 𝗄). In particular, we derive a "sumcheck protocol" whose verifier reads O(1) bits from the entire interaction transcript.
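Editor's note: as context for the sumcheck protocol mentioned above, here is a minimal Python sketch of the classical interactive sumcheck (the paper's contribution is an IOP variant whose verifier reads O(1) bits of the transcript; nothing below is specific to that). The prover convinces the verifier that the sum of g over {0,1}^n equals a claimed value over a prime field, where g has degree at most d in each variable; the field size and the brute-force honest prover are toy choices.

import random

P = 2**61 - 1  # a Mersenne prime; toy field choice

def prover_message(g, fixed, n, d):
    """Honest prover: the univariate restriction s_i(X), summed over all
    boolean suffixes, given by its evaluations at X = 0..d."""
    i = len(fixed)
    evals = []
    for x in range(d + 1):
        total = 0
        for suffix in range(2 ** (n - i - 1)):
            point = fixed + [x] + [(suffix >> j) & 1 for j in range(n - i - 1)]
            total = (total + g(point)) % P
        evals.append(total)
    return evals

def interpolate(evals, r):
    """Lagrange-evaluate the degree-<=d polynomial with values evals[0..d] at r."""
    d, total = len(evals) - 1, 0
    for j, yj in enumerate(evals):
        num, den = 1, 1
        for k in range(d + 1):
            if k != j:
                num = num * (r - k) % P
                den = den * (j - k) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

def sumcheck(g, n, d, claimed_sum):
    """Verifier side; the honest prover is run in-line for the demo."""
    claim, fixed = claimed_sum % P, []
    for _ in range(n):
        evals = prover_message(g, fixed, n, d)   # prover's round message
        if (evals[0] + evals[1]) % P != claim:   # consistency check
            return False
        r = random.randrange(P)                  # verifier's challenge
        claim = interpolate(evals, r)
        fixed.append(r)
    return g(fixed) % P == claim                 # final oracle query to g

# Example: a multilinear g over three variables (d = 1 per variable).
g = lambda x: (x[0] * x[1] + x[2]) % P
H = sum(g([a, b, c]) for a in (0, 1) for b in (0, 1) for c in (0, 1))
assert sumcheck(g, n=3, d=1, claimed_sum=H)
assert not sumcheck(g, n=3, d=1, claimed_sum=H + 1)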

Cite as

Gal Arnon, Alessandro Chiesa, and Eylon Yogev. Hardness of Approximation for Stochastic Problems via Interactive Oracle Proofs. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{arnon_et_al:LIPIcs.CCC.2022.24,
  author =	{Arnon, Gal and Chiesa, Alessandro and Yogev, Eylon},
  title =	{{Hardness of Approximation for Stochastic Problems via Interactive Oracle Proofs}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.24},
  URN =		{urn:nbn:de:0030-drops-165867},
  doi =		{10.4230/LIPIcs.CCC.2022.24},
  annote =	{Keywords: hardness of approximation, interactive oracle proofs, stochastic satisfaction problems}
}
Document
Keep That Card in Mind: Card Guessing with Limited Memory

Authors: Boaz Menuhin and Moni Naor

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
A card guessing game is played between two players, Guesser and Dealer. At the beginning of the game, the Dealer holds a deck of n cards (labeled 1, ..., n). For n turns, the Dealer draws a card from the deck, the Guesser guesses which card was drawn, and then the card is discarded from the deck. The Guesser receives a point for each correctly guessed card. With perfect memory, a Guesser can keep track of all cards that were played so far and pick at random a card that has not appeared so far, yielding in expectation ln n correct guesses, regardless of how the Dealer arranges the deck. With no memory, the best a Guesser can do is a single correct guess in expectation. We consider the case of a memory-bounded Guesser that has m < n memory bits. We show that the performance of such a memory-bounded Guesser depends heavily on the behavior of the Dealer. In more detail, we show that there is a gap between the static case, where the Dealer draws cards from a properly shuffled deck or a prearranged one, and the adaptive case, where the Dealer draws cards thoughtfully, in an adversarial manner. Specifically: 1) We show a Guesser with O(log² n) memory bits that scores a near optimal result against any static Dealer. 2) We show that no Guesser with m bits of memory can score better than O(√m) correct guesses against a random Dealer, thus, no Guesser can score better than min{√m, ln n}, i.e., the above Guesser is optimal. 3) We show an efficient adaptive Dealer against which no Guesser with m memory bits can make more than ln m + 2 ln log n + O(1) correct guesses in expectation. These results are (almost) tight, and we prove them using compression arguments that harness the guessing strategy for encoding.
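Editor's note: a small simulation makes the two baselines in the abstract concrete (a toy sketch; the deck modeling and players below are our own minimal rendering, and only the static shuffled Dealer is simulated).

import math, random

def play(n, make_guesser):
    """One game against a static Dealer holding a shuffled deck."""
    deck = random.sample(range(1, n + 1), n)
    guesser, score = make_guesser(n), 0
    for card in deck:
        if guesser.guess() == card:
            score += 1
        guesser.observe(card)        # the drawn card is then discarded
    return score

class FullMemory:
    """Tracks all cards seen; guesses a random unseen card (~ln n points)."""
    def __init__(self, n):
        self.remaining = set(range(1, n + 1))
    def guess(self):
        return random.choice(sorted(self.remaining))
    def observe(self, card):
        self.remaining.discard(card)

class Memoryless:
    """Ignores the history; a fixed guess scores exactly one point per game."""
    def __init__(self, n): pass
    def guess(self): return 1
    def observe(self, card): pass

n, trials = 512, 200
avg = sum(play(n, FullMemory) for _ in range(trials)) / trials
print(f"full memory: {avg:.2f}  vs  ln n = {math.log(n):.2f}")
print("memoryless :", play(n, Memoryless))   # always exactly 1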

Cite as

Boaz Menuhin and Moni Naor. Keep That Card in Mind: Card Guessing with Limited Memory. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 107:1-107:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{menuhin_et_al:LIPIcs.ITCS.2022.107,
  author =	{Menuhin, Boaz and Naor, Moni},
  title =	{{Keep That Card in Mind: Card Guessing with Limited Memory}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{107:1--107:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.107},
  URN =		{urn:nbn:de:0030-drops-157039},
  doi =		{10.4230/LIPIcs.ITCS.2022.107},
  annote =	{Keywords: Adaptivity vs Non-adaptivity, Adversarial Robustness, Card Guessing, Compression Argument, Information Theory, Streaming Algorithms, Two Player Game}
}
Document
Broadcast CONGEST Algorithms against Adversarial Edges

Authors: Yael Hitron and Merav Parter

Published in: LIPIcs, Volume 209, 35th International Symposium on Distributed Computing (DISC 2021)


Abstract
We consider the cornerstone broadcast task with an adaptive adversary that controls a fixed number t of edges in the input communication graph. In this model, the adversary sees the entire communication in the network and the random coins of the nodes, while maliciously manipulating the messages sent through a set of t edges (unknown to the nodes). Since the influential work of [Pease, Shostak and Lamport, JACM'80], broadcast algorithms against a wide range of adversarial models have been studied in both theory and practice for more than four decades. Despite this extensive research, there is no round-efficient broadcast algorithm for general graphs in the CONGEST model of distributed computing. Even for a single adversarial edge (i.e., t = 1), the state-of-the-art round complexity is polynomial in the number of nodes. We provide the first round-efficient broadcast algorithms against adaptive edge adversaries. Our two key results for n-node graphs of diameter D are as follows: - For t = 1, there is a deterministic algorithm that solves the problem within Õ(D²) rounds, provided that the graph is 3 edge-connected. This round complexity beats the natural barrier of O(D³) rounds, the existential lower bound on the maximal length of 3 edge-disjoint paths between a given pair of nodes in G. This algorithm can be extended to an Õ((tD)^{O(t)})-round algorithm against t adversarial edges in (2t+1) edge-connected graphs. - For expander graphs with edge connectivity of Ω(t² log n), there is a considerably improved broadcast algorithm with O(t log² n) rounds against t adversarial edges. This algorithm exploits the connectivity and conductance properties of subgraphs of G obtained by employing Karger's edge-sampling technique. Our algorithms mark a new connection between the areas of fault-tolerant network design and reliable distributed communication.
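Editor's note: the role of (2t+1) edge-connectivity can be seen in the textbook principle underlying such algorithms (a toy sketch, not the paper's round-efficient construction): route the message along 2t+1 edge-disjoint paths, so at most t copies are corrupted and the strict majority is always correct.

from collections import Counter

def receive_over_disjoint_paths(copies):
    """copies: message copies arriving along 2t+1 edge-disjoint paths.
    At most t paths contain an adversarial edge, so the true message
    is always the strict majority value."""
    value, count = Counter(copies).most_common(1)[0]
    assert count > len(copies) // 2, "need 2t+1 edge-disjoint paths for t faults"
    return value

# t = 2 adversarial edges, 2t+1 = 5 edge-disjoint paths:
print(receive_over_disjoint_paths(["m", "forged", "m", "m", "garbled"]))  # "m"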

Cite as

Yael Hitron and Merav Parter. Broadcast CONGEST Algorithms against Adversarial Edges. In 35th International Symposium on Distributed Computing (DISC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 209, pp. 23:1-23:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{hitron_et_al:LIPIcs.DISC.2021.23,
  author =	{Hitron, Yael and Parter, Merav},
  title =	{{Broadcast CONGEST Algorithms against Adversarial Edges}},
  booktitle =	{35th International Symposium on Distributed Computing (DISC 2021)},
  pages =	{23:1--23:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-210-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{209},
  editor =	{Gilbert, Seth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2021.23},
  URN =		{urn:nbn:de:0030-drops-148256},
  doi =		{10.4230/LIPIcs.DISC.2021.23},
  annote =	{Keywords: CONGEST, Fault-Tolerant Network Design, Edge Connectivity}
}
Document
General CONGEST Compilers against Adversarial Edges

Authors: Yael Hitron and Merav Parter

Published in: LIPIcs, Volume 209, 35th International Symposium on Distributed Computing (DISC 2021)


Abstract
We consider the adversarial CONGEST model of distributed computing in which a fixed number of edges (or nodes) in the graph are controlled by a computationally unbounded adversary that corrupts the computation by sending malicious messages over these (a priori unknown) controlled edges. As in the standard CONGEST model, communication is synchronous, where in each round each processor can send O(log n) bits to each of its neighbors. This paper is concerned with distributed algorithms that are both time-efficient (in terms of the number of rounds) and robust against a fixed number of adversarial edges. Unfortunately, the existing algorithms in this setting usually assume that the communication graph is complete (n-clique), and very little is known for graphs with arbitrary topologies. We fill this gap by extending the methodology of [Parter and Yogev, SODA 2019] and provide a compiler that transforms any CONGEST algorithm 𝒜 (designed for the reliable setting) into an equivalent algorithm 𝒜' for the adversarial CONGEST model. Specifically, we show the following for every (2f+1) edge-connected graph of diameter D: - For f = 1, there is a general compiler against a single adversarial edge with a compilation overhead of Ô(D³) rounds. This improves upon the Ô(D⁵)-round overhead of [Parter and Yogev, SODA 2019] and omits their assumption regarding a fault-free preprocessing phase. - For any constant f, there is a general compiler against f adversarial edges with a compilation overhead of Ô(D^{O(f)}) rounds. The prior compilers of [Parter and Yogev, SODA 2019] were limited to a single adversarial edge. Our compilers are based on a new notion of fault-tolerant cycle covers. The computation of these cycles in the adversarial CONGEST model constitutes the key technical contribution of the paper.

Cite as

Yael Hitron and Merav Parter. General CONGEST Compilers against Adversarial Edges. In 35th International Symposium on Distributed Computing (DISC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 209, pp. 24:1-24:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{hitron_et_al:LIPIcs.DISC.2021.24,
  author =	{Hitron, Yael and Parter, Merav},
  title =	{{General CONGEST Compilers against Adversarial Edges}},
  booktitle =	{35th International Symposium on Distributed Computing (DISC 2021)},
  pages =	{24:1--24:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-210-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{209},
  editor =	{Gilbert, Seth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2021.24},
  URN =		{urn:nbn:de:0030-drops-148266},
  doi =		{10.4230/LIPIcs.DISC.2021.24},
  annote =	{Keywords: CONGEST, Cycle Covers, Byzantine Adversaries}
}
Document
Total Functions in the Polynomial Hierarchy

Authors: Robert Kleinberg, Oliver Korten, Daniel Mitropolsky, and Christos Papadimitriou

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We identify several genres of search problems beyond NP for which existence of solutions is guaranteed. One class that seems especially rich in such problems is PEPP (for "polynomial empty pigeonhole principle"), which includes problems related to existence theorems proved through the union bound, such as finding a bit string that is far from all codewords, finding an explicit rigid matrix, as well as a problem we call Complexity, capturing Complexity Theory’s quest. When the union bound is generous, in that solutions constitute at least a polynomial fraction of the domain, we have a family of seemingly weaker classes α-PEPP, which are inside FP^NP|poly. Higher in the hierarchy, we identify the constructive version of the Sauer-Shelah lemma and the appropriate generalization of PPP that contains it, as well as the problem of finding a king in a tournament (a vertex k such that all other vertices are defeated by k, or by somebody k defeated).
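Editor's note: the flavor of these classes is easy to see in code. When the union bound is very generous, the guaranteed object can even be found constructively; the sketch below solves the toy regime of the empty-pigeonhole problem, finding a string absent from a short list by Cantor diagonalization (our illustration; PEPP concerns the much harder regime where the list has size close to 2^n and one merely knows a solution exists).

def missing_string(strings, n):
    """Given at most n binary strings of length n, return a length-n string
    not on the list: flip the i-th bit of the i-th string (diagonalization)."""
    out = []
    for i in range(n):
        bit = strings[i][i] if i < len(strings) else "0"
        out.append("1" if bit == "0" else "0")
    return "".join(out)

print(missing_string(["000", "011", "110"], 3))  # "101": differs from string i at bit i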

Cite as

Robert Kleinberg, Oliver Korten, Daniel Mitropolsky, and Christos Papadimitriou. Total Functions in the Polynomial Hierarchy. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 44:1-44:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{kleinberg_et_al:LIPIcs.ITCS.2021.44,
  author =	{Kleinberg, Robert and Korten, Oliver and Mitropolsky, Daniel and Papadimitriou, Christos},
  title =	{{Total Functions in the Polynomial Hierarchy}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{44:1--44:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.44},
  URN =		{urn:nbn:de:0030-drops-135835},
  doi =		{10.4230/LIPIcs.ITCS.2021.44},
  annote =	{Keywords: total complexity, polynomial hierarchy, pigeonhole principle}
}
Document
Spiking Neural Networks Through the Lens of Streaming Algorithms

Authors: Yael Hitron, Cameron Musco, and Merav Parter

Published in: LIPIcs, Volume 179, 34th International Symposium on Distributed Computing (DISC 2020)


Abstract
We initiate the study of biologically-inspired spiking neural networks from the perspective of streaming algorithms. Like computers, human brains face memory limitations, which pose a significant obstacle when processing large-scale and dynamically changing data. In computer science, these challenges are captured by the well-known streaming model, which can be traced back to Munro and Paterson '78 and has had significant impact in theory and beyond. In the classical streaming setting, one must compute a function f of a stream of updates 𝒮 = {u₁,…,u_m}, given restricted single-pass access to the stream. The primary complexity measure is the space used by the algorithm. In contrast to the large body of work on streaming algorithms, relatively little is known about the computational aspects of data processing in spiking neural networks. In this work, we seek to connect these two models, leveraging techniques developed for streaming algorithms to better understand neural computation. Our primary goal is to design networks for various computational tasks using as few auxiliary (non-input or output) neurons as possible. The number of auxiliary neurons can be thought of as the "space" required by the network. Previous algorithmic work in spiking neural networks has many similarities with streaming algorithms. However, the connection between these two space-limited models has not been formally addressed. We take the first steps towards understanding this connection. On the upper bound side, we design neural algorithms based on known streaming algorithms for fundamental tasks, including distinct elements, approximate median, and heavy hitters. The number of neurons in our solutions almost matches the space bounds of the corresponding streaming algorithms. As a general algorithmic primitive, we show how to implement the important streaming technique of linear sketching efficiently in spiking neural networks. On the lower bound side, we give a generic reduction, showing that any space-efficient spiking neural network can be simulated by a space-efficient streaming algorithm. This reduction lets us translate streaming-space lower bounds into nearly matching neural-space lower bounds, establishing a close connection between the two models.
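Editor's note: for concreteness, here is one of the classical streaming primitives mentioned above, distinct elements, as a standard k-minimum-values sketch in Python (the streaming side only; the neural implementation is the paper's subject, and the estimator and parameters here are textbook choices, not the paper's).

import hashlib, heapq, random

def h(x):
    """Pseudo-uniform hash into [0, 1)."""
    digest = hashlib.sha256(str(x).encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

class KMV:
    """k-minimum-values sketch: keep the k smallest hash values seen.
    With D distinct elements the k-th smallest hash is around k/D,
    giving the estimate (k-1)/h_(k) with ~1/sqrt(k) relative error."""
    def __init__(self, k):
        self.k = k
        self.heap = []          # max-heap via negation: the k smallest hashes
        self.kept = set()
    def update(self, x):
        v = h(x)
        if v in self.kept:
            return              # repeat of an element already accounted for
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, -v)
            self.kept.add(v)
        elif v < -self.heap[0]:
            evicted = -heapq.heappushpop(self.heap, -v)
            self.kept.discard(evicted)
            self.kept.add(v)
    def estimate(self):
        if len(self.heap) < self.k:
            return len(self.heap)        # fewer than k distinct items: exact
        return (self.k - 1) / (-self.heap[0])

stream = [random.randrange(10_000) for _ in range(100_000)]
sketch = KMV(k=256)
for item in stream:
    sketch.update(item)
print(round(sketch.estimate()), "vs exact", len(set(stream)))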

Cite as

Yael Hitron, Cameron Musco, and Merav Parter. Spiking Neural Networks Through the Lens of Streaming Algorithms. In 34th International Symposium on Distributed Computing (DISC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 179, pp. 10:1-10:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{hitron_et_al:LIPIcs.DISC.2020.10,
  author =	{Hitron, Yael and Musco, Cameron and Parter, Merav},
  title =	{{Spiking Neural Networks Through the Lens of Streaming Algorithms}},
  booktitle =	{34th International Symposium on Distributed Computing (DISC 2020)},
  pages =	{10:1--10:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-168-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{179},
  editor =	{Attiya, Hagit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2020.10},
  URN =		{urn:nbn:de:0030-drops-130882},
  doi =		{10.4230/LIPIcs.DISC.2020.10},
  annote =	{Keywords: Biological distributed algorithms, Spiking neural networks, Streaming algorithms}
}
Document
Hardness Amplification of Optimization Problems

Authors: Elazar Goldenberg and Karthik C. S.

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming certain relationships between α(n) and t(n), there is a distribution D' over instances of Π of size O(n⋅α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification of problems in various classes such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium.
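Editor's note: a concrete instance of "direct product feasible" (our toy example, using Max-SAT with DIMACS-style signed-integer clauses): aggregate k instances by taking their disjoint union over renamed variables. Since clauses never cross instances, the satisfied-clause count decomposes as a sum, so any optimal assignment to the big instance restricts to an optimal assignment of each piece.

def aggregate(instances):
    """Disjoint union of Max-SAT instances; clause = list of signed ints."""
    big, offsets, shift = [], [], 0
    for clauses in instances:
        offsets.append(shift)
        width = max(abs(lit) for c in clauses for lit in c)
        for c in clauses:
            big.append([lit + shift if lit > 0 else lit - shift for lit in c])
        shift += width
    return big, offsets

def split_solution(assignment, offsets, instances):
    """Recover a solution to each piece from one big assignment
    (assignment maps variable index -> bool)."""
    pieces = []
    for off, clauses in zip(offsets, instances):
        width = max(abs(lit) for c in clauses for lit in c)
        pieces.append({v: assignment[v + off] for v in range(1, width + 1)})
    return pieces

# Two 2-variable instances become one 4-variable instance:
big, offs = aggregate([[[1, -2]], [[-1], [2]]])
print(big)   # [[1, -2], [-3], [4]]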

Cite as

Elazar Goldenberg and Karthik C. S.. Hardness Amplification of Optimization Problems. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 1:1-1:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{goldenberg_et_al:LIPIcs.ITCS.2020.1,
  author =	{Goldenberg, Elazar and Karthik C. S.},
  title =	{{Hardness Amplification of Optimization Problems}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{1:1--1:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.1},
  URN =		{urn:nbn:de:0030-drops-116863},
  doi =		{10.4230/LIPIcs.ITCS.2020.1},
  annote =	{Keywords: hardness amplification, average case complexity, direct product, optimization problems, fine-grained complexity, TFNP}
}
Document
Smooth and Strong PCPs

Authors: Orr Paradise

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Probabilistically checkable proofs (PCPs) can be verified based only on a constant amount of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs: - A PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim. - A PCP is smooth if each location in a proof is queried with equal probability. We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard and Reed-Muller based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of Proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth. Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.
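Editor's note: the core check inside Hadamard-based PCPs is the classical BLR linearity test, sketched below in Python (a textbook illustration, not the paper's construction or its formal smoothness notion). Each of the three queries x, y, x⊕y is individually uniform, so in isolation every proof location is equally likely to be probed, which is the smoothness property in spirit; the rejection probability grows with the function's distance from linear, which is the strongness property in spirit.

import random

def blr_linearity_test(f, n, trials=200):
    """Test f: {0,1}^n -> {0,1} for linearity over GF(2)."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False  # caught a violation of f(x)+f(y)=f(x+y)
    return True

mask = 0b1011
f_lin = lambda z: bin(z & mask).count("1") % 2   # a linear (parity) function
print(blr_linearity_test(f_lin, n=4))            # True: linear functions pass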

Cite as

Orr Paradise. Smooth and Strong PCPs. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 2:1-2:41, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{paradise:LIPIcs.ITCS.2020.2,
  author =	{Paradise, Orr},
  title =	{{Smooth and Strong PCPs}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{2:1--2:41},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.2},
  URN =		{urn:nbn:de:0030-drops-116875},
  doi =		{10.4230/LIPIcs.ITCS.2020.2},
  annote =	{Keywords: Interactive and probabilistic proof systems, Probabilistically checkable proofs, Hardness of approximation}
}
Document
Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons

Authors: Yael Hitron and Merav Parter

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of spiking neural networks, the network evolves in discrete rounds, where in each round, neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, which can have either an excitatory or inhibitory effect. Discovering the underlying mechanisms by which the brain perceives the duration of time is one of the largest open enigmas in computational neuroscience. To gain a better algorithmic understanding of these processes, we introduce the neural timer problem. In this problem, one is given a time parameter t, an input neuron x, and an output neuron y. It is then required to design a minimum-sized neural network (measured by the number of auxiliary neurons) in which every spike from x in a given round i makes the output y fire for the subsequent t consecutive rounds. We first consider a deterministic implementation of a neural timer and show that Θ(log t) (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons where the neuron y is required to fire for t consecutive rounds with probability at least 1-δ, and should stop firing after at most 2t rounds with probability 1-δ for some input parameter δ ∈ (0,1). Our key result is a construction of a neural timer with O(log log 1/δ) spiking neurons. Interestingly, this construction uses only one spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of Ω(min{log log 1/δ, log t}) neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks. Finally, we demonstrate the usefulness of compressed counting networks for synchronizing neural networks. In the spirit of distributed synchronizers [Awerbuch-Peleg, FOCS'90], we provide a general transformation (or simulation) that can take any synchronized network solution and simulate it in an asynchronous setting (where edges have arbitrary response latencies) while incurring a small overhead w.r.t. the number of neurons and computation time.
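Editor's note: a toy state machine makes the neural-timer specification and the O(log t) state bound concrete (our illustration via a binary countdown counter, not the paper's threshold-gate construction, which realizes the same idea with Θ(log t) deterministic gates).

import math

class NeuralTimer:
    """Meets the neural-timer spec with O(log t) bits of state."""
    def __init__(self, t):
        self.t = t
        self.state_bits = math.ceil(math.log2(t + 1))  # counter fits in O(log t) bits
        self.counter = 0
    def round(self, x_spiked):
        if x_spiked:
            self.counter = self.t      # a spike on x (re)arms the timer
        fire = self.counter > 0        # y fires for t consecutive rounds
        if fire:
            self.counter -= 1
        return fire

timer = NeuralTimer(t=10)
fired = [timer.round(i == 0) for i in range(15)]
print(sum(fired))   # 10: y fired for exactly t consecutive rounds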

Cite as

Yael Hitron and Merav Parter. Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 57:1-57:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{hitron_et_al:LIPIcs.ESA.2019.57,
  author =	{Hitron, Yael and Parter, Merav},
  title =	{{Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{57:1--57:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.57},
  URN =		{urn:nbn:de:0030-drops-111782},
  doi =		{10.4230/LIPIcs.ESA.2019.57},
  annote =	{Keywords: stochastic neural networks, approximate counting, synchronizer}
}
Document
Track A: Algorithms, Complexity and Games
Optimal Short Cycle Decomposition in Almost Linear Time

Authors: Merav Parter and Eylon Yogev

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
Short cycle decomposition is an edge partitioning of an unweighted graph into edge-disjoint short cycles, plus a small number of extra edges not in any cycle. This notion was introduced by Chu et al. [FOCS'18] as a fundamental tool for graph sparsification and sketching. Clearly, it is most desirable to have a fast algorithm for partitioning the edges into cycles that are as short as possible, while omitting few edges. The most naïve procedure for such decomposition runs in time O(m * n) and partitions the edges into O(log n)-length edge-disjoint cycles plus at most 2n edges. Chu et al. improved the running time considerably to m^{1+o(1)}, while increasing both the length of the cycles and the number of omitted edges by a factor of n^{o(1)}. Even more recently, Liu-Sachdeva-Yu [SODA'19] showed that for every constant δ ∈ (0,1] there is an O(m * n^δ)-time algorithm that provides, w.h.p., cycles of length O(log n)^{1/δ} and O(n) extra edges. In this paper, we significantly improve upon these bounds. We first show an m^{1+o(1)}-time deterministic algorithm for computing a nearly optimal cycle decomposition, i.e., with cycle length O(log² n) and an extra subset of O(n log n) edges not in any cycle. This algorithm is based on a reduction to low-congestion cycle covers, introduced by the authors in [SODA'19]. We also provide a simple deterministic algorithm that computes edge-disjoint cycles of length 2^{1/ε} with n^{1+ε} * 2^{1/ε} extra edges, for every ε ∈ (0,1]. Combining this with Liu-Sachdeva-Yu [SODA'19] gives a linear-time randomized algorithm for computing cycles of length poly(log n) and O(n) extra edges, for every n-vertex graph with n^{1+1/δ} edges for some constant δ. These decomposition algorithms lead to improvements in all the algorithmic applications of Chu et al. as well as to new distributed constructions.
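Editor's note: the "most naïve procedure" referenced above is simple enough to state in full; here is a Python sketch for simple undirected graphs (our rendering of the folklore procedure, with no attempt at the paper's m^{1+o(1)} bounds). Peel vertices of degree at most 2, charging at most two "extra" edges per vertex (hence at most 2n in total); once the minimum degree is at least 3 the girth is O(log n), so BFS finds a short cycle, which is removed, and the process repeats.

from collections import defaultdict

def naive_short_cycle_decomposition(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    extra, cycles = [], []

    def peel():  # remove degree-<=2 vertices, charging <= 2 extra edges each
        stack = [v for v in list(adj) if len(adj[v]) <= 2]
        while stack:
            v = stack.pop()
            if v not in adj:
                continue
            for u in list(adj[v]):
                adj[u].discard(v)
                extra.append((v, u))
                if len(adj[u]) <= 2:
                    stack.append(u)
            del adj[v]

    def path_to_root(w, parent):
        p = [w]
        while parent[p[-1]] is not None:
            p.append(parent[p[-1]])
        return p

    def bfs_cycle():  # min degree >= 3 here, so a short cycle exists
        root = next(iter(adj))
        parent, frontier = {root: None}, [root]
        while frontier:
            nxt = []
            for v in frontier:
                for u in adj[v]:
                    if u == parent[v]:
                        continue
                    if u in parent:  # non-tree edge closes a short cycle
                        pv, pu = path_to_root(v, parent), path_to_root(u, parent)
                        join = next(x for x in pv if x in set(pu))
                        return pv[:pv.index(join) + 1] + pu[:pu.index(join)][::-1]
                    parent[u] = v
                    nxt.append(u)
            frontier = nxt

    peel()
    while adj:
        cyc = bfs_cycle()
        cycles.append(cyc)
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):  # delete the cycle's edges
            adj[a].discard(b); adj[b].discard(a)
        peel()
    return cycles, extra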

Cite as

Merav Parter and Eylon Yogev. Optimal Short Cycle Decomposition in Almost Linear Time. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 89:1-89:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{parter_et_al:LIPIcs.ICALP.2019.89,
  author =	{Parter, Merav and Yogev, Eylon},
  title =	{{Optimal Short Cycle Decomposition in Almost Linear Time}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{89:1--89:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.89},
  URN =		{urn:nbn:de:0030-drops-106653},
  doi =		{10.4230/LIPIcs.ICALP.2019.89},
  annote =	{Keywords: Cycle decomposition, low-congestion cycle cover, graph sparsification}
}
Document
Congested Clique Algorithms for Graph Spanners

Authors: Merav Parter and Eylon Yogev

Published in: LIPIcs, Volume 121, 32nd International Symposium on Distributed Computing (DISC 2018)


Abstract
Graph spanners are sparse subgraphs that faithfully preserve the distances in the original graph up to small stretch. Spanners have been studied extensively, as they have a wide range of applications ranging from distance oracles, labeling schemes and routing to solving linear systems and spectral sparsification. A k-spanner maintains pairwise distances up to a multiplicative factor of k. It is folklore that for every n-vertex graph G, one can construct a (2k-1)-spanner with O(n^{1+1/k}) edges. In a distributed setting, such spanners can be constructed in the standard CONGEST model using O(k²) rounds, when randomization is allowed. In this work, we consider spanner constructions in the congested clique model, and show: - a randomized construction of a (2k-1)-spanner with Õ(n^{1+1/k}) edges in O(log k) rounds. The previous best algorithm runs in O(k) rounds; - a deterministic construction of a (2k-1)-spanner with Õ(n^{1+1/k}) edges in O(log k +(log log n)^3) rounds. The previous best algorithm runs in O(k log n) rounds. This improvement is achieved by a new derandomization theorem for hitting sets which might be of independent interest; - a deterministic construction of a O(k)-spanner with O(k * n^{1+1/k}) edges in O(log k) rounds.
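Editor's note: the folklore construction mentioned above has a short greedy description. Below is a sequential Python sketch of the classical greedy (2k-1)-spanner for unweighted graphs (attributed to Althöfer et al.; it is unrelated to the congested-clique algorithms of the paper): keep an edge only if its endpoints are currently more than 2k-1 hops apart in the spanner built so far, so the result has girth > 2k, hence O(n^{1+1/k}) edges, while every omitted edge is stretched by at most 2k-1.

from collections import defaultdict, deque

def bfs_dist(adj, s, t, limit):
    """Hop distance from s to t, truncated at limit+1."""
    if s == t:
        return 0
    dist, queue = {s: 0}, deque([s])
    while queue:
        v = queue.popleft()
        if dist[v] >= limit:
            continue
        for u in adj[v]:
            if u not in dist:
                if u == t:
                    return dist[v] + 1
                dist[u] = dist[v] + 1
                queue.append(u)
    return limit + 1

def greedy_spanner(edges, k):
    spanner, adj = [], defaultdict(set)
    for u, v in edges:
        if bfs_dist(adj, u, v, limit=2 * k - 1) > 2 * k - 1:
            spanner.append((u, v))
            adj[u].add(v); adj[v].add(u)
    return spanner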

Cite as

Merav Parter and Eylon Yogev. Congested Clique Algorithms for Graph Spanners. In 32nd International Symposium on Distributed Computing (DISC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 121, pp. 40:1-40:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{parter_et_al:LIPIcs.DISC.2018.40,
  author =	{Parter, Merav and Yogev, Eylon},
  title =	{{Congested Clique Algorithms for Graph Spanners}},
  booktitle =	{32nd International Symposium on Distributed Computing (DISC 2018)},
  pages =	{40:1--40:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-092-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{121},
  editor =	{Schmid, Ulrich and Widder, Josef},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2018.40},
  URN =		{urn:nbn:de:0030-drops-98298},
  doi =		{10.4230/LIPIcs.DISC.2018.40},
  annote =	{Keywords: Distributed Graph Algorithms, Spanner, Congested Clique}
}
  • Refine by Author
  • 8 Parter, Merav
  • 6 Hitron, Yael
  • 6 Yogev, Eylon
  • 2 Chiesa, Alessandro
  • 2 Mitropolsky, Daniel
  • Show More...

  • Refine by Classification
  • 5 Networks → Network algorithms
  • 3 Theory of computation → Computational complexity and cryptography
  • 3 Theory of computation → Distributed algorithms
  • 2 Theory of computation → Complexity classes
  • 2 Theory of computation → Design and analysis of algorithms
  • Show More...

  • Refine by Keyword
  • 3 TFNP
  • 2 CONGEST
  • 2 congest
  • 1 Adaptivity vs Non-adaptivity
  • 1 Adversarial Robustness
  • Show More...

  • Refine by Type
  • 16 document

  • Refine by Publication Year
  • 3 2020
  • 3 2021
  • 3 2022
  • 2 2019
  • 2 2023
  • Show More...
