12 Search Results for "Grossman, Ofer"


Document
Matroid Intersection: A Pseudo-Deterministic Parallel Reduction from Search to Weighted-Decision

Authors: Sumanta Ghosh and Rohit Gurjar

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We study the matroid intersection problem from the parallel complexity perspective. Given two matroids over the same ground set, the decision version asks whether they have a common base, and the search version asks to find a common base, if one exists. Another widely studied variant is the weighted decision version, where, along with the two matroids, we are given small weights on the ground set elements and a target weight W, and the question is to decide whether there is a common base of weight at least W. From the perspective of parallel complexity, the relation between the search and the decision versions is not well understood. We make significant progress on this question by giving a pseudo-deterministic parallel (NC) algorithm for the search version that uses oracle access to the weighted decision version. The notion of pseudo-deterministic NC, a relaxation of NC, was recently introduced by Goldwasser and Grossman [Shafi Goldwasser and Ofer Grossman, 2017]. A pseudo-deterministic NC algorithm for a search problem is a randomized NC algorithm that, for a given input, outputs a fixed solution with high probability. In case the given matroids are linearly representable, our result implies a pseudo-deterministic NC algorithm (without the weighted decision oracle). This resolves an open question posed by Anari and Vazirani [Nima Anari and Vijay V. Vazirani, 2020].
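
To make the flavor of such a reduction concrete, here is a minimal sequential sketch of the classic weight-perturbation trick that a weighted-decision oracle enables, assuming integer weights that isolate a unique minimum-weight common base. The oracle function and the isolating weights are illustrative assumptions; the paper's actual NC reduction is substantially more involved.

# Sketch: recovering the unique minimum-weight common base from a
# weighted-decision oracle. oracle(weights, W) is a hypothetical black box
# answering "is there a common base of weight at most W?".

def min_base_weight(weights, oracle):
    # Binary search for the minimum weight of a common base.
    lo, hi = 0, sum(weights.values())
    if not oracle(weights, hi):
        return None  # the two matroids have no common base
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(weights, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

def recover_base(weights, oracle):
    # If the minimum-weight common base B* is unique, then e is in B* iff
    # doubling all weights and adding 1 to w(e) raises the minimum by
    # exactly 1: B* still beats every other base, paying the +1 for e.
    base = min_base_weight(weights, oracle)
    if base is None:
        return None
    solution = set()
    for e in weights:  # the membership tests are independent, hence parallelizable
        perturbed = {f: 2 * w for f, w in weights.items()}
        perturbed[e] += 1
        if min_base_weight(perturbed, oracle) == 2 * base + 1:
            solution.add(e)
    return solution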

Cite as

Sumanta Ghosh and Rohit Gurjar. Matroid Intersection: A Pseudo-Deterministic Parallel Reduction from Search to Weighted-Decision. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 41:1-41:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{ghosh_et_al:LIPIcs.APPROX/RANDOM.2021.41,
  author =	{Ghosh, Sumanta and Gurjar, Rohit},
  title =	{{Matroid Intersection: A Pseudo-Deterministic Parallel Reduction from Search to Weighted-Decision}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{41:1--41:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.41},
  URN =		{urn:nbn:de:0030-drops-147342},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.41},
  annote =	{Keywords: Linear Matroid, Matroid Intersection, Parallel Complexity, Pseudo-deterministic NC}
}
Document
On the Pseudo-Deterministic Query Complexity of NP Search Problems

Authors: Shafi Goldwasser, Russell Impagliazzo, Toniann Pitassi, and Rahul Santhanam

Published in: LIPIcs, Volume 200, 36th Computational Complexity Conference (CCC 2021)


Abstract
We study pseudo-deterministic query complexity: randomized query algorithms that are required to output the same answer with high probability on every input. We prove Ω(√n) lower bounds on the pseudo-deterministic complexity of a large family of search problems based on unsatisfiable random CNF instances, and also for the promise problem (FIND1) of finding a 1 in a vector in which at least half the entries are ones. This gives an exponential separation between randomized query complexity and pseudo-deterministic complexity, which is tight in the quantum setting. As applications, we partially solve a related combinatorial coloring problem, and we separate random tree-like Resolution from its pseudo-deterministic version. In contrast to our lower bound, we show, surprisingly, that in the zero-error, average-case setting, the three notions (deterministic, randomized, pseudo-deterministic) collapse.
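
The FIND1 separation is easy to appreciate at a small scale: a plain randomized query algorithm can probe random positions and succeed after O(1) expected queries, but the index it returns depends on the randomness, while a pseudo-deterministic algorithm must settle on one canonical index per input. A toy illustration of the two behaviors (not code from the paper):

import random

def find1_randomized(x, rng):
    # With at least half the entries equal to 1, a random probe succeeds
    # with probability >= 1/2, so O(1) queries suffice in expectation --
    # but the answer varies from run to run.
    while True:
        i = rng.randrange(len(x))
        if x[i] == 1:
            return i

def find1_canonical(x):
    # A pseudo-deterministic algorithm must fix a canonical answer, e.g.
    # the leftmost 1; the paper shows any such algorithm needs
    # Omega(sqrt(n)) queries.
    return x.index(1)

x = [0] * 8 + [1] * 8
print({find1_randomized(x, random.Random(s)) for s in range(5)})  # several valid answers
print(find1_canonical(x))                                         # always 8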

Cite as

Shafi Goldwasser, Russell Impagliazzo, Toniann Pitassi, and Rahul Santhanam. On the Pseudo-Deterministic Query Complexity of NP Search Problems. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 36:1-36:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{goldwasser_et_al:LIPIcs.CCC.2021.36,
  author =	{Goldwasser, Shafi and Impagliazzo, Russell and Pitassi, Toniann and Santhanam, Rahul},
  title =	{{On the Pseudo-Deterministic Query Complexity of NP Search Problems}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{36:1--36:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.36},
  URN =		{urn:nbn:de:0030-drops-143104},
  doi =		{10.4230/LIPIcs.CCC.2021.36},
  annote =	{Keywords: Pseudo-determinism, Query complexity, Proof complexity}
}
Document
Error Correcting Codes for Uncompressed Messages

Authors: Ofer Grossman and Justin Holmgren

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
Most types of messages we transmit (e.g., video, audio, images, text) are not fully compressed, since they do not have known efficient and information-theoretically optimal compression algorithms. When transmitting such messages, standard error correcting codes fail to take advantage of the fact that messages are not fully compressed. We show that in this setting, it is sub-optimal to use standard error correction. We consider a model where there is a set of "valid messages" that the sender may send; this set may not be efficiently compressible, but the receiver is able to recognize valid messages. In this model, we construct a (probabilistic) encoding procedure that achieves better tradeoffs between data rate and error-resilience than simply applying a standard error correcting code. Additionally, our techniques yield improved efficiently decodable (probabilistic) codes for fully compressed messages (the standard setting where the set of valid messages is all binary strings) in the high-rate regime.
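
One way to build intuition for why a recognizable set of valid messages helps (a toy sketch under simplified assumptions, not the paper's construction): push the rate beyond the unique-decoding radius, list-decode to a few candidates, and let the receiver's validity predicate prune the list.

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def list_decode(received, codebook, radius):
    # All codewords within the given Hamming radius; may contain several.
    return [m for m, c in codebook.items() if hamming(received, c) <= radius]

def decode_with_validity(received, codebook, radius, is_valid):
    # Prune the list with the receiver's validity predicate; decoding
    # succeeds whenever exactly one candidate is a valid message.
    valid = [m for m in list_decode(received, codebook, radius) if is_valid(m)]
    return valid[0] if len(valid) == 1 else None

codebook = {"ok": (0, 0, 0, 0), "no": (1, 1, 1, 1), "xx": (0, 0, 1, 1)}
is_valid = lambda m: m != "xx"  # hypothetical recognizer of valid messages
# (0,0,1,0) is within distance 1 of both "ok" and "xx"; validity breaks the tie.
print(decode_with_validity((0, 0, 1, 0), codebook, 1, is_valid))  # "ok"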

Cite as

Ofer Grossman and Justin Holmgren. Error Correcting Codes for Uncompressed Messages. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 43:1-43:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{grossman_et_al:LIPIcs.ITCS.2021.43,
  author =	{Grossman, Ofer and Holmgren, Justin},
  title =	{{Error Correcting Codes for Uncompressed Messages}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{43:1--43:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.43},
  URN =		{urn:nbn:de:0030-drops-135828},
  doi =		{10.4230/LIPIcs.ITCS.2021.43},
  annote =	{Keywords: Coding Theory, List Decoding}
}
Document
Total Functions in the Polynomial Hierarchy

Authors: Robert Kleinberg, Oliver Korten, Daniel Mitropolsky, and Christos Papadimitriou

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We identify several genres of search problems beyond NP for which existence of solutions is guaranteed. One class that seems especially rich in such problems is PEPP (for "polynomial empty pigeonhole principle"), which includes problems related to existence theorems proved through the union bound, such as finding a bit string that is far from all codewords, finding an explicit rigid matrix, as well as a problem we call Complexity, capturing Complexity Theory's quest. When the union bound is generous, in that solutions constitute at least a polynomial fraction of the domain, we have a family of seemingly weaker classes α-PEPP, which lie inside FP^NP|poly. Higher in the hierarchy, we identify the constructive version of the Sauer-Shelah lemma and the appropriate generalization of PPP that contains it, as well as the problem of finding a king in a tournament (a vertex k such that every other vertex is defeated by k, or by some vertex that k defeated).
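
The tournament problem mentioned last is a clean example of a total search problem: every tournament has a king, and in fact any vertex of maximum out-degree is one. A brute-force sketch of that classical argument:

def find_king(beats):
    # beats[u][v] is True iff u defeats v, for a tournament on n vertices.
    # A vertex k of maximum out-degree is a king: if some v beat k and
    # also beat everyone k beats, v's out-degree would exceed k's.
    n = len(beats)
    k = max(range(n), key=lambda u: sum(beats[u]))
    for v in range(n):  # sanity check of the king property
        if v != k and not beats[k][v]:
            assert any(beats[k][w] and beats[w][v] for w in range(n))
    return k

# 3-cycle tournament: 0 beats 1, 1 beats 2, 2 beats 0; every vertex is a king.
beats = [[False, True, False],
         [False, False, True],
         [True, False, False]]
print(find_king(beats))  # 0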

Cite as

Robert Kleinberg, Oliver Korten, Daniel Mitropolsky, and Christos Papadimitriou. Total Functions in the Polynomial Hierarchy. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 44:1-44:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{kleinberg_et_al:LIPIcs.ITCS.2021.44,
  author =	{Kleinberg, Robert and Korten, Oliver and Mitropolsky, Daniel and Papadimitriou, Christos},
  title =	{{Total Functions in the Polynomial Hierarchy}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{44:1--44:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.44},
  URN =		{urn:nbn:de:0030-drops-135835},
  doi =		{10.4230/LIPIcs.ITCS.2021.44},
  annote =	{Keywords: total complexity, polynomial hierarchy, pigeonhole principle}
}
Document
Improved Hardness of Approximation of Diameter in the CONGEST Model

Authors: Ofer Grossman, Seri Khoury, and Ami Paz

Published in: LIPIcs, Volume 179, 34th International Symposium on Distributed Computing (DISC 2020)


Abstract
We study the problem of approximating the diameter D of an unweighted and undirected n-node graph in the CONGEST model. Through a connection to extremal combinatorics, we show that a (6/11 + ε)-approximation requires Ω(n^{1/6}/log n) rounds, a (4/7 + ε)-approximation requires Ω(n^{1/4}/log n) rounds, and a (3/5 + ε)-approximation requires Ω(n^{1/3}/log n) rounds. These lower bounds are robust in the sense that they hold even against algorithms that are allowed to return an additional small additive error. Prior to our work, only lower bounds for (2/3 + ε)-approximation were known [Frischknecht et al. SODA 2012, Abboud et al. DISC 2016]. Furthermore, we prove that distinguishing graphs of diameter 3 from graphs of diameter 5 requires Ω(n/log n) rounds. This stands in sharp contrast to previous work: while there is an algorithm that returns an estimate ⌊2D/3⌋ ≤ D̃ ≤ D in Õ(√n + D) rounds [Holzer et al. DISC 2014], our lower bound implies that any algorithm for returning an estimate 2D/3 ≤ D̃ ≤ D requires Ω̃(n) rounds.

Cite as

Ofer Grossman, Seri Khoury, and Ami Paz. Improved Hardness of Approximation of Diameter in the CONGEST Model. In 34th International Symposium on Distributed Computing (DISC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 179, pp. 19:1-19:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{grossman_et_al:LIPIcs.DISC.2020.19,
  author =	{Grossman, Ofer and Khoury, Seri and Paz, Ami},
  title =	{{Improved Hardness of Approximation of Diameter in the CONGEST Model}},
  booktitle =	{34th International Symposium on Distributed Computing (DISC 2020)},
  pages =	{19:1--19:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-168-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{179},
  editor =	{Attiya, Hagit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2020.19},
  URN =		{urn:nbn:de:0030-drops-130972},
  doi =		{10.4230/LIPIcs.DISC.2020.19},
  annote =	{Keywords: Distributed graph algorithms, Approximation algorithms, Lower bounds}
}
Document
Strategy-Stealing Is Non-Constructive

Authors: Greg Bodwin and Ofer Grossman

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
In many combinatorial games, one can prove that the first player wins under best play using a simple but non-constructive argument called strategy-stealing. This work is about the complexity behind these proofs: how hard is it to actually find a winning move in a game, when you know by strategy-stealing that one exists? We prove that this problem is PSPACE-complete already for Minimum Poset Games and Symmetric Maker-Maker Games, which are simple classes of games that capture two of the main types of strategy-stealing arguments in the current literature.
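
For context, the argument being studied runs as follows: in a symmetric Maker-Maker game, if the second player had a winning strategy, the first player could make an arbitrary first move and then follow that very strategy, a contradiction; so the first player never loses, yet the proof exhibits no move. The guarantee can be checked by brute force on a tiny instance such as tic-tac-toe (a sketch; the games in the paper are far beyond exhaustive search):

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

@lru_cache(maxsize=None)
def value(mover, other):
    # Outcome under optimal play from the mover's perspective:
    # +1 win, 0 draw, -1 loss.  mover/other are sorted tuples of claimed cells.
    free = [c for c in range(9) if c not in mover and c not in other]
    if not free:
        return 0
    best = -1
    for c in free:
        new = tuple(sorted(mover + (c,)))
        if any(all(x in new for x in line) for line in LINES):
            return 1  # claiming c completes a winning line for the mover
        best = max(best, -value(other, new))
    return best

# Strategy-stealing predicts the first player never loses a symmetric
# Maker-Maker game; for tic-tac-toe optimal play is a draw.
print(value((), ()))  # 0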

Cite as

Greg Bodwin and Ofer Grossman. Strategy-Stealing Is Non-Constructive. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 21:1-21:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{bodwin_et_al:LIPIcs.ITCS.2020.21,
  author =	{Bodwin, Greg and Grossman, Ofer},
  title =	{{Strategy-Stealing Is Non-Constructive}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{21:1--21:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.21},
  URN =		{urn:nbn:de:0030-drops-117069},
  doi =		{10.4230/LIPIcs.ITCS.2020.21},
  annote =	{Keywords: PSPACE-hard, Hex, Combinatorial Game Theory}
}
Document
Instance Complexity and Unlabeled Certificates in the Decision Tree Model

Authors: Tomer Grossman, Ilan Komargodski, and Moni Naor

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Instance complexity is a measure of goodness of an algorithm in which the performance of one algorithm is compared to others per input. This is in sharp contrast to worst-case and average-case complexity measures, where the performance is compared either on the worst input or on an average one, respectively.

We initiate the systematic study of instance complexity and optimality in the query model (a.k.a. the decision tree model). In this model, instance optimality of an algorithm for computing a function is the requirement that the complexity of an algorithm on any input is at most a constant factor larger than the complexity of the best correct algorithm. That is, we compare the decision tree to one that receives a certificate and whose complexity is measured only if the certificate is correct (but correctness should hold on any input). We study both deterministic and randomized decision trees and provide various characterizations and barriers for more general results.

We introduce a new measure of complexity called unlabeled-certificate complexity, appropriate for graph properties and other functions with symmetries, where only information about the structure of the graph is known to the competing algorithm. More precisely, the certificate is some permutation of the input (rather than the input itself) and correctness should be maintained even if the certificate is wrong. First we show that such an unlabeled certificate is sometimes very helpful in the worst case. We then study instance optimality with respect to this measure of complexity, where an algorithm is said to be instance optimal if for every input it performs roughly as well as the best algorithm that is given an unlabeled certificate (but is correct on every input). We show that instance optimality depends on the group of permutations under consideration. Our proofs rely on techniques from hypothesis testing and analysis of random graphs.

Cite as

Tomer Grossman, Ilan Komargodski, and Moni Naor. Instance Complexity and Unlabeled Certificates in the Decision Tree Model. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 56:1-56:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{grossman_et_al:LIPIcs.ITCS.2020.56,
  author =	{Grossman, Tomer and Komargodski, Ilan and Naor, Moni},
  title =	{{Instance Complexity and Unlabeled Certificates in the Decision Tree Model}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{56:1--56:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.56},
  URN =		{urn:nbn:de:0030-drops-117418},
  doi =		{10.4230/LIPIcs.ITCS.2020.56},
  annote =	{Keywords: decision tree complexity, instance complexity, instance optimality, query complexity, unlabeled certificates}
}
Document
Pseudo-Deterministic Streaming

Authors: Shafi Goldwasser, Ofer Grossman, Sidhanth Mohanty, and David P. Woodruff

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
A pseudo-deterministic algorithm is a (randomized) algorithm which, when run multiple times on the same input, with high probability outputs the same result on all executions. Classic streaming algorithms, such as those for finding heavy hitters, approximate counting, ℓ_2 approximation, and finding a nonzero entry in a vector (for turnstile algorithms), are not pseudo-deterministic. For example, in the instance of finding a nonzero entry in a vector, for any known low-space algorithm A, there exists a stream x so that running A twice on x (using different randomness) would with high probability result in two different entries as the output. In this work, we study whether it is inherent that these algorithms output different values on different executions. That is, we ask whether these problems have low-memory pseudo-deterministic algorithms. For instance, we show that there is no low-memory pseudo-deterministic algorithm for finding a nonzero entry in a vector (given in a turnstile fashion), and also that there is no low-dimensional pseudo-deterministic sketching algorithm for ℓ_2 norm estimation. We also exhibit problems which do have low-memory pseudo-deterministic algorithms but no low-memory deterministic algorithm, such as outputting a nonzero row of a matrix, or outputting a basis for the row-span of a matrix. We also investigate multi-pseudo-deterministic algorithms: algorithms which with high probability output one of a few options. We show the first lower bounds for such algorithms. This implies that there are streaming problems such that every low-space algorithm for the problem must have inputs where there are many valid outputs, all with a significant probability of being output.
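
The opening phenomenon is easy to reproduce: the standard low-space way to report a nonzero entry is sampling (e.g., ℓ_0-sampling), and the reported index then depends on the coins. A toy stand-in for such a routine (uniform sampling from the support, rather than an actual turnstile sketch):

import random

def report_nonzero(support, seed):
    # Stand-in for a randomized "find a nonzero entry" routine: returns a
    # uniformly random element of the support, as an l0-sampler would.
    return random.Random(seed).choice(sorted(support))

support = {3, 17, 42, 99}
print(report_nonzero(support, seed=1), report_nonzero(support, seed=2))
# Different runs typically report different entries, so the routine is not
# pseudo-deterministic; a pseudo-deterministic one must fix a canonical entry.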

Cite as

Shafi Goldwasser, Ofer Grossman, Sidhanth Mohanty, and David P. Woodruff. Pseudo-Deterministic Streaming. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 79:1-79:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{goldwasser_et_al:LIPIcs.ITCS.2020.79,
  author =	{Goldwasser, Shafi and Grossman, Ofer and Mohanty, Sidhanth and Woodruff, David P.},
  title =	{{Pseudo-Deterministic Streaming}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{79:1--79:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.79},
  URN =		{urn:nbn:de:0030-drops-117644},
  doi =		{10.4230/LIPIcs.ITCS.2020.79},
  annote =	{Keywords: streaming, pseudo-deterministic}
}
Document
Algorithms for Noisy Broadcast with Erasures

Authors: Ofer Grossman, Bernhard Haeupler, and Sidhanth Mohanty

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
The noisy broadcast model was first studied by [Gallager, 1988], where an n-character input is distributed among n processors, so that each processor receives one input bit. Computation proceeds in rounds, where in each round each processor broadcasts a single character, and each reception is corrupted independently at random with some probability p. [Gallager, 1988] gave an algorithm for all processors to learn the input in O(log log n) rounds with high probability. Later, a matching lower bound of Ω(log log n) was given by [Goyal et al., 2008]. We study a relaxed version of this model where each reception is erased and replaced with a '?' independently with probability p, so the processors have knowledge of whether a bit has been corrupted. In this relaxed model, we break past the lower bound of [Goyal et al., 2008] and obtain an O(log^* n)-round algorithm for all processors to learn the input with high probability. We also show an O(1)-round algorithm for the same problem when the alphabet size is Ω(poly(n)).
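
A minimal simulation of the relaxed model, just to fix notation (the O(log^* n) protocol itself is far more involved): in each round every processor broadcasts one character, and each reception is independently replaced by '?' with probability p, so receivers know exactly which symbols were lost.

import random

def broadcast_round(messages, p, rng):
    # messages[i] is processor i's broadcast. Returns heard[j][i], the copy
    # of message i received by processor j, each erased independently w.p. p.
    n = len(messages)
    return [[m if rng.random() >= p else '?' for m in messages]
            for _ in range(n)]

rng = random.Random(0)
heard = broadcast_round(['1', '0', '1', '1'], p=0.3, rng=rng)
print(heard[0])  # processor 0's view of the round, erasures marked '?'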

Cite as

Ofer Grossman, Bernhard Haeupler, and Sidhanth Mohanty. Algorithms for Noisy Broadcast with Erasures. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 153:1-153:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{grossman_et_al:LIPIcs.ICALP.2018.153,
  author =	{Grossman, Ofer and Haeupler, Bernhard and Mohanty, Sidhanth},
  title =	{{Algorithms for Noisy Broadcast with Erasures}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{153:1--153:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.153},
  URN =		{urn:nbn:de:0030-drops-91576},
  doi =		{10.4230/LIPIcs.ICALP.2018.153},
  annote =	{Keywords: noisy broadcast, error correction, erasures, distributed computing with noise}
}
Document
Pseudo-Deterministic Proofs

Authors: Shafi Goldwasser, Ofer Grossman, and Dhiraj Holden

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
We introduce pseudo-deterministic interactive proofs (psdIP): interactive proof systems for search problems where the verifier is guaranteed with high probability to output the same output on different executions. As in the case with classical interactive proofs, the verifier is a probabilistic polynomial time algorithm interacting with an untrusted powerful prover. We view pseudo-deterministic interactive proofs as an extension of the study of pseudo-deterministic randomized polynomial time algorithms: the goal of the latter is to find canonical solutions to search problems, whereas the goal of the former is to prove that a solution to a search problem is canonical to a probabilistic polynomial time verifier. Alternatively, one may think of the powerful prover as aiding the probabilistic polynomial time verifier to find canonical solutions to search problems, with high probability over the randomness of the verifier. The challenge is that pseudo-determinism should hold not only with respect to the randomness, but also with respect to the prover: a malicious prover should not be able to cause the verifier to output a solution other than the unique canonical one. The IP = PSPACE characterization implies that psdIP = IP. The challenge is to find constant-round pseudo-deterministic interactive proofs for hard search problems. We show a constant-round pseudo-deterministic interactive proof for the graph isomorphism problem: on any input pair of isomorphic graphs (G_0, G_1), there exists a unique isomorphism φ from G_0 to G_1 (although many isomorphisms may exist) which will be output by the verifier with high probability, regardless of any dishonest prover strategy. In contrast, we show that it is unlikely that psdIP proofs with constant rounds exist for NP-complete problems, by showing that if any NP-complete problem has a constant-round psdIP protocol, then the polynomial hierarchy collapses.
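
On small graphs the canonicity requirement can be made fully explicit: among all isomorphisms from G_0 to G_1, fix, say, the lexicographically least one; the verifier must output that same φ no matter what the prover does. A brute-force sketch of such a canonical choice (the constant-round protocol is the paper's contribution, not this):

from itertools import permutations

def canonical_isomorphism(g0, g1, n):
    # g0, g1: edge sets over vertices 0..n-1. Returns the lexicographically
    # least permutation phi mapping g0 onto g1, or None if none exists.
    target = {frozenset(e) for e in g1}
    for phi in permutations(range(n)):  # enumerated in lexicographic order
        if {frozenset((phi[u], phi[v])) for u, v in g0} == target:
            return phi
    return None

g0 = {(0, 1), (1, 2)}  # path with middle vertex 1
g1 = {(1, 0), (0, 2)}  # path with middle vertex 0
print(canonical_isomorphism(g0, g1, 3))  # (1, 0, 2): the least of two isomorphisms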

Cite as

Shafi Goldwasser, Ofer Grossman, and Dhiraj Holden. Pseudo-Deterministic Proofs. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 17:1-17:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{goldwasser_et_al:LIPIcs.ITCS.2018.17,
  author =	{Goldwasser, Shafi and Grossman, Ofer and Holden, Dhiraj},
  title =	{{Pseudo-Deterministic Proofs}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{17:1--17:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.17},
  URN =		{urn:nbn:de:0030-drops-83669},
  doi =		{10.4230/LIPIcs.ITCS.2018.17},
  annote =	{Keywords: Pseudo-Deterministic, Interactive Proofs}
}
Document
Improved Deterministic Distributed Construction of Spanners

Authors: Ofer Grossman and Merav Parter

Published in: LIPIcs, Volume 91, 31st International Symposium on Distributed Computing (DISC 2017)


Abstract
Graph spanners are fundamental graph structures with a wide range of applications in distributed networks. We consider a standard synchronous message passing model where in each round O(log n) bits can be transmitted over every edge (the CONGEST model). The state of the art of deterministic distributed spanner constructions suffers from large messages. The only exception is the work of Derbel et al., which computes an optimal-sized (2k-1)-spanner but uses O(n^(1-1/k)) rounds. In this paper, we significantly improve this bound. We present a deterministic distributed algorithm that, given an unweighted n-vertex graph G = (V,E) and a parameter k > 2, constructs a (2k-1)-spanner with O(k n^(1+1/k)) edges within O(2^k n^(1/2 - 1/k)) rounds for every even k. For odd k, the number of rounds is O(2^k n^(1/2 - 1/(2k))). For the weighted case, we provide the first deterministic construction of a 3-spanner with O(n^(3/2)) edges that uses O(log n)-size messages and Õ(1) rounds. If the vertices have IDs in [1, Θ(n)], the spanner is computed in only 2 rounds!
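
For contrast with these distributed constructions, the classical sequential greedy algorithm achieves the same size/stretch tradeoff: scan the edges and keep one only if the spanner built so far has no path of length at most 2k-1 between its endpoints. A baseline sketch (not the paper's algorithm):

from collections import deque

def greedy_spanner(n, edges, k):
    # Unweighted greedy (2k-1)-spanner with O(n^(1+1/k)) edges.
    adj = [[] for _ in range(n)]
    spanner = []
    for u, v in edges:
        if bounded_dist(adj, u, v, 2 * k - 1) > 2 * k - 1:
            spanner.append((u, v))
            adj[u].append(v)
            adj[v].append(u)
    return spanner

def bounded_dist(adj, s, t, limit):
    # BFS distance from s to t in the current spanner, cut off after `limit` hops.
    dist = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        if x == t:
            return dist[x]
        if dist[x] < limit:
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
    return limit + 1

# Triangle with k=2 (stretch 3): edge (0,2) is skipped, since the two-hop
# path 0-1-2 already lies within stretch 3.
print(greedy_spanner(3, [(0, 1), (1, 2), (0, 2)], k=2))  # [(0, 1), (1, 2)]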

Cite as

Ofer Grossman and Merav Parter. Improved Deterministic Distributed Construction of Spanners. In 31st International Symposium on Distributed Computing (DISC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 91, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{grossman_et_al:LIPIcs.DISC.2017.24,
  author =	{Grossman, Ofer and Parter, Merav},
  title =	{{Improved Deterministic Distributed Construction of Spanners}},
  booktitle =	{31st International Symposium on Distributed Computing (DISC 2017)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-053-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{91},
  editor =	{Richa, Andr\'{e}a},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2017.24},
  URN =		{urn:nbn:de:0030-drops-80085},
  doi =		{10.4230/LIPIcs.DISC.2017.24},
  annote =	{Keywords: spanners, clustering, deterministic algorithms, congest model}
}
Document
Bipartite Perfect Matching in Pseudo-Deterministic NC

Authors: Shafi Goldwasser and Ofer Grossman

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
We present a pseudo-deterministic NC algorithm for finding perfect matchings in bipartite graphs. Specifically, our algorithm is a randomized parallel algorithm which uses poly(n) processors, poly(log n) depth, poly(log n) random bits, and outputs for each bipartite input graph a unique perfect matching with high probability. That is, on the same graph it returns the same matching for almost all choices of randomness. As an immediate consequence we also find a pseudo-deterministic NC algorithm for constructing a depth first search (DFS) tree. We introduce a method for computing the union of all min-weight perfect matchings of a weighted graph in RNC, and a novel set of weight assignments which in combination enable isolating a unique matching in a graph. We then show a way to use pseudo-deterministic algorithms to reduce the number of random bits used by general randomized algorithms. The main idea is that random bits can be reused by successive invocations of pseudo-deterministic randomized algorithms. We use this technique to show an RNC algorithm for constructing a depth first search (DFS) tree using only O(log^2 n) random bits, whereas the previous best randomized algorithm used O(log^7 n), and a new sequential randomized algorithm for the set-maxima problem which uses fewer random bits than the previous state of the art. Furthermore, we prove that resolving the decision question NC = RNC would imply an NC algorithm for finding a bipartite perfect matching and finding a DFS tree in NC. This is not implied by previous randomized NC search algorithms for finding a bipartite perfect matching, but is implied by the existence of a pseudo-deterministic NC search algorithm.
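
The isolation step at the core of such algorithms can be checked by brute force on small graphs: give each edge an independent uniform weight in {1, ..., 2m}; by the Isolation Lemma the minimum-weight perfect matching is then unique with probability at least 1/2. (The paper's novelty includes weight assignments that make the isolated matching canonical rather than dependent on fresh randomness.) A sketch:

import random
from itertools import permutations

def min_weight_matchings(n, edges, w):
    # All minimum-weight perfect matchings of a bipartite graph with parts
    # {0..n-1} x {0..n-1}, by brute force over permutations.
    best, arg = float('inf'), []
    for perm in permutations(range(n)):
        m = [(i, perm[i]) for i in range(n)]
        if all(e in edges for e in m):
            wt = sum(w[e] for e in m)
            if wt < best:
                best, arg = wt, [m]
            elif wt == best:
                arg.append(m)
    return arg

n = 3
edges = {(i, j) for i in range(n) for j in range(n)}  # K_{3,3}: 6 perfect matchings
rng = random.Random(7)
w = {e: rng.randint(1, 2 * len(edges)) for e in edges}
print(len(min_weight_matchings(n, edges, w)))  # usually 1: a unique minimum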

Cite as

Shafi Goldwasser and Ofer Grossman. Bipartite Perfect Matching in Pseudo-Deterministic NC. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 87:1-87:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{goldwasser_et_al:LIPIcs.ICALP.2017.87,
  author =	{Goldwasser, Shafi and Grossman, Ofer},
  title =	{{Bipartite Perfect Matching in Pseudo-Deterministic NC}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{87:1--87:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.87},
  URN =		{urn:nbn:de:0030-drops-74824},
  doi =		{10.4230/LIPIcs.ICALP.2017.87},
  annote =	{Keywords: Parallel Algorithms, Pseudo-determinism, RNC, Perfect Matching}
}