Search Results

Documents authored by Raz, Ran


Document
Information Dissemination via Broadcasts in the Presence of Adversarial Noise

Authors: Klim Efremenko, Gillat Kol, Dmitry Paramonov, Ran Raz, and Raghuvansh R. Saxena

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
We initiate the study of error-correcting codes over the multi-party adversarial broadcast channel. Specifically, we consider the classic information dissemination problem where n parties, each holding an input bit, wish to know each other’s input. For this, they communicate in rounds, where, in each round, one designated party sends a bit to all other parties over a channel governed by an adversary that may corrupt a constant fraction of the received communication. The dissemination problem has been studied in the stochastic noise model since the 1980s. While stochastic noise in multi-party channels has received quite a bit of attention, the case of adversarial noise has largely been avoided, as such channels cannot handle more than a 1/n-fraction of errors. Indeed, this many errors allows an adversary to completely corrupt the incoming or outgoing communication for one of the parties and fail the protocol. Curiously, we show that by eliminating these "trivial" attacks, one can get a simple protocol resilient to a constant fraction of errors. Thus, a model that rules out such attacks is both necessary and sufficient to get a resilient protocol. The main shortcoming of our dissemination protocol is its length: it requires Θ(n²) communication rounds, whereas n rounds suffice in the absence of noise. Our main result is a matching lower bound of Ω(n²) on the length of any dissemination protocol in our model. Our proof first "gets rid" of the channel noise by converting it to a form of "input noise", showing that a noisy dissemination protocol implies a (noiseless) protocol for a version of the direct sum gap-majority problem. We conclude the proof with a tight lower bound for the latter problem, which may be of independent interest.
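
As a concrete illustration of the Θ(n²)-round regime, the sketch below (Python) simulates a naive repetition-based dissemination protocol: each party broadcasts its bit Θ(n) times and every receiver takes a per-party majority. This is only a plausible baseline consistent with the abstract, not necessarily the protocol analyzed in the paper, and the noise is modeled here as random bit flips rather than a true adversary.

import random

def disseminate_with_repetition(bits, reps=None, corruption_rate=0.1, seed=0):
    """Naive repetition protocol: party i broadcasts bits[i] `reps` times;
    each receiver decodes party i's bit by majority over the copies it heard.
    Noise model (simplified): every received copy is flipped independently
    with probability corruption_rate, so no single party's traffic is wiped out."""
    rng = random.Random(seed)
    n = len(bits)
    reps = reps or 3 * n  # Theta(n) repetitions per party -> Theta(n^2) rounds overall
    decoded = []
    for receiver in range(n):
        estimate = []
        for sender in range(n):
            heard = [bits[sender] ^ (rng.random() < corruption_rate) for _ in range(reps)]
            estimate.append(int(sum(heard) > reps / 2))
        decoded.append(estimate)
    return decoded

rng0 = random.Random(1)
bits = [rng0.randint(0, 1) for _ in range(20)]
decoded = disseminate_with_repetition(bits)
print(all(row == bits for row in decoded))  # True with high probability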

Cite as

Klim Efremenko, Gillat Kol, Dmitry Paramonov, Ran Raz, and Raghuvansh R. Saxena. Information Dissemination via Broadcasts in the Presence of Adversarial Noise. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 19:1-19:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{efremenko_et_al:LIPIcs.CCC.2024.19,
  author =	{Efremenko, Klim and Kol, Gillat and Paramonov, Dmitry and Raz, Ran and Saxena, Raghuvansh R.},
  title =	{{Information Dissemination via Broadcasts in the Presence of Adversarial Noise}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{19:1--19:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.19},
  URN =		{urn:nbn:de:0030-drops-204159},
  doi =		{10.4230/LIPIcs.CCC.2024.19},
  annote =	{Keywords: Radio Networks, Interactive Coding, Error Correcting Codes}
}
Document
Is Untrusted Randomness Helpful?

Authors: Uma Girish, Ran Raz, and Wei Zhan

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Randomized algorithms and protocols assume the availability of a perfect source of randomness. In real life, however, perfect randomness is rare and is almost never guaranteed. The gap between these two facts motivated much of the work on randomness and derandomization in theoretical computer science. In this work, we define a new type of randomized algorithm (and protocol), which we call robustly-randomized algorithms (protocols). Such algorithms have access to two separate (read-once) random strings. The first string is trusted to be perfectly random, but its length is bounded by some parameter k = k(n) (where n is the length of the input). We think of k as relatively small, say sub-linear or poly-logarithmic in n. The second string is of unbounded length and is assumed to be random, but its randomness is not trusted. The output of the algorithm is either an output in the set of possible outputs of the problem, or a special symbol, interpreted as "do not know" and denoted by ⊥. On every input for the algorithm, the output of the algorithm must satisfy the following two requirements: 1) If the second random string is perfectly random, then the algorithm must output the correct answer with high probability. 2) If the second random string is an arbitrary string, even adversarially chosen after seeing the input, the algorithm must output with high probability either the correct answer or the special symbol ⊥. We discuss relations of this new definition to several previously studied notions in randomness and derandomization. For example, when considering polynomial-time algorithms, if k is logarithmic we get the complexity class ZPP, while if k is unbounded we get the complexity class BPP, and for a general k, the algorithm can be viewed as an interactive proof with a probabilistic polynomial-time prover and a probabilistic polynomial-time verifier, where the prover is allowed an unlimited number of random bits and the verifier is limited to at most k random bits. Every previously-studied class of randomized algorithms or protocols, and more generally, every previous use of randomness in theoretical computer science, can be revisited and redefined in light of our new definition, by replacing each random string with a pair of random strings: the first is trusted to be perfectly random but is relatively short, and the second is of unlimited length but its randomness is not trusted. The main question that we ask is: In which settings and for which problems is the untrusted random string helpful? Our main technical observation is that every problem in the class BPL (of problems solvable by bounded-error randomized logspace algorithms) can be solved by a robustly-randomized logspace algorithm with k = O(log n), that is, with just a logarithmic number of trusted random bits. We also give query complexity separations that show cases where the untrusted random string is provably helpful. Specifically, we show that there are promise problems that can be solved by robustly-randomized protocols with only one query and just a logarithmic number of trusted random bits, whereas any randomized protocol requires either a linear number of random bits or an exponential number of queries, and any zero-error randomized protocol requires a polynomial number of queries.
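
To make the two-string interface concrete, here is a small hypothetical example in the spirit of the definition (it is not taken from the paper): finding a quadratic non-residue modulo a prime p using no trusted random bits at all (k = 0). Candidates are read off the untrusted string and each one is verified deterministically via Euler's criterion, so a truly random second string yields a correct, verified answer with high probability, while an adversarial string can at worst force the output ⊥.

import random

def find_qnr(p, untrusted_bits, trials=64):
    """Robustly-randomized-style search for a quadratic non-residue mod prime p.
    untrusted_bits: iterator of 0/1 bits of unbounded length (not trusted).
    Returns a verified non-residue, or None (playing the role of the symbol ⊥)."""
    bits = iter(untrusted_bits)
    nbits = p.bit_length()
    for _ in range(trials):
        # draw a candidate from the untrusted random string
        a = 0
        for _ in range(nbits):
            a = (a << 1) | next(bits, 0)
        a %= p
        if a == 0:
            continue
        # Euler's criterion: a is a non-residue iff a^((p-1)/2) ≡ -1 (mod p);
        # this check is deterministic, so it needs no trusted randomness.
        if pow(a, (p - 1) // 2, p) == p - 1:
            return a
    return None  # ⊥: never output a wrong answer

p = 1_000_003  # a prime
print(find_qnr(p, (random.getrandbits(1) for _ in range(10**6))))  # non-residue w.h.p.
print(find_qnr(p, iter([1] * 10**6)))  # adversarial all-ones string: a verified answer or ⊥, never a wrong answer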

Cite as

Uma Girish, Ran Raz, and Wei Zhan. Is Untrusted Randomness Helpful?. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 56:1-56:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{girish_et_al:LIPIcs.ITCS.2023.56,
  author =	{Girish, Uma and Raz, Ran and Zhan, Wei},
  title =	{{Is Untrusted Randomness Helpful?}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{56:1--56:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.56},
  URN =		{urn:nbn:de:0030-drops-175593},
  doi =		{10.4230/LIPIcs.ITCS.2023.56},
  annote =	{Keywords: Untrusted, Randomness, Verifiable, ZPL, BPL, ZPP, BPP}
}
Document
RANDOM
Polynomial Bounds on Parallel Repetition for All 3-Player Games with Binary Inputs

Authors: Uma Girish, Kunal Mittal, Ran Raz, and Wei Zhan

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
We prove that for every 3-player (3-prover) game G with value less than one, whose query distribution has the support S = {(1,0,0), (0,1,0), (0,0,1)} of Hamming weight one vectors, the value of the n-fold parallel repetition G^{⊗n} decays polynomially fast to zero; that is, there is a constant c = c(G) > 0 such that the value of the game G^{⊗n} is at most n^{-c}. Following the recent work of Girish, Holmgren, Mittal, Raz and Zhan (STOC 2022), our result is the missing piece that implies a similar bound for a much more general class of multiplayer games: For every 3-player game G over binary questions and arbitrary answer lengths, with value less than 1, there is a constant c = c(G) > 0 such that the value of the game G^{⊗n} is at most n^{-c}. Our proof technique is new and requires many new ideas. For example, we make use of the Level-k inequalities from Boolean Fourier Analysis, which, to the best of our knowledge, have not been explored in this context prior to our work.

Cite as

Uma Girish, Kunal Mittal, Ran Raz, and Wei Zhan. Polynomial Bounds on Parallel Repetition for All 3-Player Games with Binary Inputs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 6:1-6:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{girish_et_al:LIPIcs.APPROX/RANDOM.2022.6,
  author =	{Girish, Uma and Mittal, Kunal and Raz, Ran and Zhan, Wei},
  title =	{{Polynomial Bounds on Parallel Repetition for All 3-Player Games with Binary Inputs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{6:1--6:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.6},
  URN =		{urn:nbn:de:0030-drops-171286},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.6},
  annote =	{Keywords: Parallel repetition, Multi-prover games, Fourier analysis}
}
Document
Eliminating Intermediate Measurements Using Pseudorandom Generators

Authors: Uma Girish and Ran Raz

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We show that quantum algorithms of time T and space S ≥ log T with unitary operations and intermediate measurements can be simulated by quantum algorithms of time T ⋅ poly(S) and space O(S ⋅ log T) with unitary operations and without intermediate measurements. The best results prior to this work required either Ω(T) space (by the deferred measurement principle) or poly(2^S) time [Bill Fefferman and Zachary Remscrim, 2021; Uma Girish et al., 2021]. Our result is thus a time-efficient and space-efficient simulation of algorithms with unitary operations and intermediate measurements by algorithms with unitary operations and without intermediate measurements. To prove our result, we study pseudorandom generators for quantum space-bounded algorithms. We show that (an instance of) the INW pseudorandom generator for classical space-bounded algorithms [Russell Impagliazzo et al., 1994] also fools quantum space-bounded algorithms. More precisely, we show that for quantum space-bounded algorithms that have access to a read-once tape consisting of random bits, the final state of the algorithm when the random bits are drawn from the uniform distribution is nearly identical to the final state when the random bits are drawn using the INW pseudorandom generator. This result applies to general quantum algorithms which can apply unitary operations, perform intermediate measurements and reset qubits.

Cite as

Uma Girish and Ran Raz. Eliminating Intermediate Measurements Using Pseudorandom Generators. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 76:1-76:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{girish_et_al:LIPIcs.ITCS.2022.76,
  author =	{Girish, Uma and Raz, Ran},
  title =	{{Eliminating Intermediate Measurements Using Pseudorandom Generators}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{76:1--76:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.76},
  URN =		{urn:nbn:de:0030-drops-156726},
  doi =		{10.4230/LIPIcs.ITCS.2022.76},
  annote =	{Keywords: quantum algorithms, intermediate measurements, deferred measurement, pseudorandom generator, INW generator}
}
Document
RANDOM
Lower Bounds for XOR of Forrelations

Authors: Uma Girish, Ran Raz, and Wei Zhan

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
The Forrelation problem, first introduced by Aaronson [Scott Aaronson, 2010] and Aaronson and Ambainis [Scott Aaronson and Andris Ambainis, 2015], is a well studied computational problem in the context of separating quantum and classical computational models. Variants of this problem were used to give tight separations between quantum and classical query complexity [Scott Aaronson and Andris Ambainis, 2015]; the first separation between poly-logarithmic quantum query complexity and bounded-depth circuits of super-polynomial size, a result that also implied an oracle separation of the classes BQP and PH [Ran Raz and Avishay Tal, 2019]; and improved separations between quantum and classical communication complexity [Uma Girish et al., 2021]. In all these separations, the lower bound for the classical model only holds when the advantage of the protocol (over a random guess) is more than ≈ 1/√N, that is, the success probability is larger than ≈ 1/2 + 1/√N. This is unavoidable as ≈ 1/√N is the correlation between two coordinates of an input that is sampled from the Forrelation distribution, and hence there are simple classical protocols that achieve advantage ≈ 1/√N, in all these models. To achieve separations when the classical protocol has smaller advantage, we study in this work the xor of k independent copies of (a variant of) the Forrelation function (where k≪ N). We prove a very general result that shows that any family of Boolean functions that is closed under restrictions, whose Fourier mass at level 2k is bounded by α^k (that is, the sum of the absolute values of all Fourier coefficients at level 2k is bounded by α^k), cannot compute the xor of k independent copies of the Forrelation function with advantage better than O((α^k)/(N^{k/2})). This is a strengthening of a result of [Eshan Chattopadhyay et al., 2019], that gave a similar statement for k = 1, using the technique of [Ran Raz and Avishay Tal, 2019]. We give several applications of our result. In particular, we obtain the following separations: Quantum versus Classical Communication Complexity. We give the first example of a partial Boolean function that can be computed by a simultaneous-message quantum protocol with communication complexity polylog(N) (where Alice and Bob also share polylog(N) EPR pairs), and such that, any classical randomized protocol of communication complexity at most õ(N^{1/4}), with any number of rounds, has quasipolynomially small advantage over a random guess. Previously, only separations where the classical protocol has polynomially small advantage were known between these models [Dmitry Gavinsky, 2016; Uma Girish et al., 2021]. Quantum Query Complexity versus Bounded Depth Circuits. We give the first example of a partial Boolean function that has a quantum query algorithm with query complexity polylog(N), and such that, any constant-depth circuit of quasipolynomial size has quasipolynomially small advantage over a random guess. Previously, only separations where the constant-depth circuit has polynomially small advantage were known [Ran Raz and Avishay Tal, 2019].
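
For reference, a short sketch of the Forrelation quantity itself, assuming the standard Aaronson-Ambainis definition (the abstract does not spell it out and works with a variant): for x, y ∈ {±1}^N with N a power of two, forr(x, y) is the normalized correlation between y and the Hadamard transform of x.

import numpy as np

def hadamard_transform(x):
    """Unnormalized Walsh-Hadamard transform of a ±1 vector of length 2^k."""
    x = np.array(x, dtype=float)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

def forrelation(x, y):
    """forr(x, y) = (1/N^{3/2}) * sum_{i,j} x_i (-1)^{<i,j>} y_j = <y, Hx> / N^{3/2}."""
    N = len(x)
    return float(np.dot(y, hadamard_transform(x))) / N ** 1.5

N = 2 ** 10
rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=N)
print(abs(forrelation(x, x)))                           # random pair: on the order of 1/sqrt(N)
print(forrelation(x, np.sign(hadamard_transform(x))))   # "forrelated" pair: a constant, far above the 1/sqrt(N) scale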

Cite as

Uma Girish, Ran Raz, and Wei Zhan. Lower Bounds for XOR of Forrelations. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 52:1-52:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{girish_et_al:LIPIcs.APPROX/RANDOM.2021.52,
  author =	{Girish, Uma and Raz, Ran and Zhan, Wei},
  title =	{{Lower Bounds for XOR of Forrelations}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{52:1--52:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.52},
  URN =		{urn:nbn:de:0030-drops-147453},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.52},
  annote =	{Keywords: Forrelation, Quasipolynomial, Separation, Quantum versus Classical, Xor}
}
Document
RANDOM
Memory-Sample Lower Bounds for Learning Parity with Noise

Authors: Sumegha Garg, Pravesh K. Kothari, Pengda Liu, and Ran Raz

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
In this work, we show, for the well-studied problem of learning parity under noise, where a learner tries to learn x = (x₁,…,x_n) ∈ {0,1}ⁿ from a stream of random linear equations over 𝔽₂ that are correct with probability 1/2+ε and flipped with probability 1/2-ε (0 < ε < 1/2), that any learning algorithm requires either a memory of size Ω(n²/ε) or an exponential number of samples. In fact, we study memory-sample lower bounds for a large class of learning problems, as characterized by [Garg et al., 2018], when the samples are noisy. A matrix M: A × X → {-1,1} corresponds to the following learning problem with error parameter ε: an unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a₁, b₁), (a₂, b₂) …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i,x) with probability 1/2+ε and b_i = -M(a_i,x) with probability 1/2-ε (0 < ε < 1/2). Assume that k,𝓁, r are such that any submatrix of M of at least 2^{-k} ⋅ |A| rows and at least 2^{-𝓁} ⋅ |X| columns, has a bias of at most 2^{-r}. We show that any learning algorithm for the learning problem corresponding to M, with error parameter ε, requires either a memory of size at least Ω((k⋅𝓁)/ε), or at least 2^{Ω(r)} samples. The result holds even if the learner has an exponentially small success probability (of 2^{-Ω(r)}). In particular, this shows that for a large class of learning problems, same as those in [Garg et al., 2018], any learning algorithm requires either a memory of size at least Ω(((log|X|)⋅(log|A|))/ε) or an exponential number of noisy samples. Our proof is based on adapting the arguments in [Ran Raz, 2017; Garg et al., 2018] to the noisy case.
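
The streaming sample model described above is easy to state in code. The sketch below only generates the noisy-parity stream, i.e., random linear equations over 𝔽₂ whose labels are correct with probability 1/2 + ε; it illustrates the object of the lower bound, not the branching-program learner.

import random

def noisy_parity_stream(x, eps, rng):
    """Yield samples (a, b) where a is uniform in {0,1}^n and
    b = <a, x> mod 2 with probability 1/2 + eps, flipped otherwise."""
    n = len(x)
    while True:
        a = [rng.randint(0, 1) for _ in range(n)]
        b = sum(ai & xi for ai, xi in zip(a, x)) % 2
        if rng.random() >= 0.5 + eps:   # flip with probability 1/2 - eps
            b ^= 1
        yield a, b

rng = random.Random(0)
n, eps = 32, 0.1
x = [rng.randint(0, 1) for _ in range(n)]
stream = noisy_parity_stream(x, eps, rng)
samples = [next(stream) for _ in range(10000)]
agree = sum(b == sum(ai & xi for ai, xi in zip(a, x)) % 2 for a, b in samples)
print(agree / len(samples))  # ≈ 0.5 + eps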

Cite as

Sumegha Garg, Pravesh K. Kothari, Pengda Liu, and Ran Raz. Memory-Sample Lower Bounds for Learning Parity with Noise. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 60:1-60:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{garg_et_al:LIPIcs.APPROX/RANDOM.2021.60,
  author =	{Garg, Sumegha and Kothari, Pravesh K. and Liu, Pengda and Raz, Ran},
  title =	{{Memory-Sample Lower Bounds for Learning Parity with Noise}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{60:1--60:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.60},
  URN =		{urn:nbn:de:0030-drops-147534},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.60},
  annote =	{Keywords: memory-sample tradeoffs, learning parity under noise, space lower bound, branching program}
}
Document
RANDOM
Parallel Repetition for the GHZ Game: A Simpler Proof

Authors: Uma Girish, Justin Holmgren, Kunal Mittal, Ran Raz, and Wei Zhan

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We give a new proof of the fact that the parallel repetition of the (3-player) GHZ game reduces the value of the game to zero polynomially quickly. That is, we show that the value of the n-fold GHZ game is at most n^{-Ω(1)}. This was first established by Holmgren and Raz [Holmgren and Raz, 2020]. We present a new proof of this theorem that we believe to be simpler and more direct. Unlike most previous works on parallel repetition, our proof makes no use of information theory, and relies on the use of Fourier analysis. The GHZ game [Greenberger et al., 1989] has played a foundational role in the understanding of quantum information theory, due in part to the fact that quantum strategies can win the GHZ game with probability 1. It is possible that improved parallel repetition bounds may find applications in this setting. Recently, Dinur, Harsha, Venkat, and Yuen [Dinur et al., 2017] highlighted the GHZ game as a simple three-player game, which is in some sense maximally far from the class of multi-player games whose behavior under parallel repetition is well understood. Dinur et al. conjectured that parallel repetition decreases the value of the GHZ game exponentially quickly, and speculated that progress on proving this would shed light on parallel repetition for general multi-player (multi-prover) games.
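
For context, the GHZ game is small enough to analyze exhaustively. Assuming the standard formulation (questions (x,y,z) uniform over triples with x⊕y⊕z = 0, answers (a,b,c) win iff a⊕b⊕c = x∨y∨z), the sketch below brute-forces all deterministic classical strategies and recovers the classical value 3/4; quantum strategies win with probability 1.

from itertools import product

# Questions: uniform over {(x, y, z) in {0,1}^3 : x xor y xor z = 0}
QUESTIONS = [(x, y, z) for x, y, z in product([0, 1], repeat=3) if (x ^ y ^ z) == 0]

def ghz_classical_value():
    """Maximize the winning probability over all deterministic strategies;
    each player's strategy maps its question bit to an answer bit."""
    best = 0.0
    strategies = list(product([0, 1], repeat=2))  # (answer on question 0, answer on question 1)
    for sa, sb, sc in product(strategies, repeat=3):
        wins = sum((sa[x] ^ sb[y] ^ sc[z]) == (x | y | z) for x, y, z in QUESTIONS)
        best = max(best, wins / len(QUESTIONS))
    return best

print(ghz_classical_value())  # 0.75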

Cite as

Uma Girish, Justin Holmgren, Kunal Mittal, Ran Raz, and Wei Zhan. Parallel Repetition for the GHZ Game: A Simpler Proof. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 62:1-62:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{girish_et_al:LIPIcs.APPROX/RANDOM.2021.62,
  author =	{Girish, Uma and Holmgren, Justin and Mittal, Kunal and Raz, Ran and Zhan, Wei},
  title =	{{Parallel Repetition for the GHZ Game: A Simpler Proof}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{62:1--62:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.62},
  URN =		{urn:nbn:de:0030-drops-147551},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.62},
  annote =	{Keywords: Parallel Repetition, GHZ, Polynomial, Multi-player}
}
Document
Track A: Algorithms, Complexity and Games
Quantum Logspace Algorithm for Powering Matrices with Bounded Norm

Authors: Uma Girish, Ran Raz, and Wei Zhan

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
We give a quantum logspace algorithm for powering contraction matrices, that is, matrices with spectral norm at most 1. The algorithm gets as an input an arbitrary n× n contraction matrix A, and a parameter T ≤ poly(n) and outputs the entries of A^T, up to (arbitrary) polynomially small additive error. The algorithm applies only unitary operators, without intermediate measurements. We show various implications and applications of this result: First, we use this algorithm to show that the class of quantum logspace algorithms with only quantum memory and with intermediate measurements is equivalent to the class of quantum logspace algorithms with only quantum memory without intermediate measurements. This shows that the deferred-measurement principle, a fundamental principle of quantum computing, applies also for quantum logspace algorithms (without classical memory). More generally, we give a quantum algorithm with space O(S + log T) that takes as an input the description of a quantum algorithm with quantum space S and time T, with intermediate measurements (without classical memory), and simulates it unitarily with polynomially small error, without intermediate measurements. Since unitary transformations are reversible (while measurements are irreversible) an interesting aspect of this result is that it shows that any quantum logspace algorithm (without classical memory) can be simulated by a reversible quantum logspace algorithm. This proves a quantum analogue of the result of Lange, McKenzie and Tapp that deterministic logspace is equal to reversible logspace [Lange et al., 2000]. Finally, we use our results to show non-trivial classical simulations of quantum logspace learning algorithms.
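
For reference, the computational task itself, ignoring space, is plain iterated matrix multiplication; the naive sketch below computes A^T by repeated squaring and makes no attempt at the logarithmic space bound that is the point of the result.

import numpy as np

def power_contraction(A, T):
    """Compute A^T by repeated squaring; A is assumed to have spectral norm <= 1.
    Classically this uses Theta(n^2) space just to hold the matrix, whereas the
    quantum logspace algorithm outputs individual entries of A^T using O(log n) qubits."""
    assert np.linalg.norm(A, 2) <= 1 + 1e-12, "A must be a contraction"
    result = np.eye(A.shape[0])
    base = A.copy()
    while T:
        if T & 1:
            result = result @ base
        base = base @ base
        T >>= 1
    return result

# Example: a random orthogonal matrix (hence a contraction) raised to a polynomially large power.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
print(np.round(power_contraction(Q, 1000)[0, 0], 6))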

Cite as

Uma Girish, Ran Raz, and Wei Zhan. Quantum Logspace Algorithm for Powering Matrices with Bounded Norm. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 73:1-73:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{girish_et_al:LIPIcs.ICALP.2021.73,
  author =	{Girish, Uma and Raz, Ran and Zhan, Wei},
  title =	{{Quantum Logspace Algorithm for Powering Matrices with Bounded Norm}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{73:1--73:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.73},
  URN =		{urn:nbn:de:0030-drops-141426},
  doi =		{10.4230/LIPIcs.ICALP.2021.73},
  annote =	{Keywords: BQL, Matrix Powering, Quantum Circuit, Reversible Computation}
}
Document
Quantum Versus Randomized Communication Complexity, with Efficient Players

Authors: Uma Girish, Ran Raz, and Avishay Tal

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We study a new type of separations between quantum and classical communication complexity, separations that are obtained using quantum protocols where all parties are efficient, in the sense that they can be implemented by small quantum circuits, with oracle access to their inputs. Our main result qualitatively matches the strongest known separation between quantum and classical communication complexity [Dmitry Gavinsky, 2016] and is obtained using a quantum protocol where all parties are efficient. More precisely, we give an explicit partial Boolean function f over inputs of length N, such that: (1) f can be computed by a simultaneous-message quantum protocol with communication complexity polylog(N) (where at the beginning of the protocol Alice and Bob also have polylog(N) entangled EPR pairs). (2) Any classical randomized protocol for f, with any number of rounds, has communication complexity at least Ω̃(N^{1/4}). (3) All parties in the quantum protocol of Item (1) (Alice, Bob and the referee) can be implemented by quantum circuits of size polylog(N) (where Alice and Bob have oracle access to their inputs). Items (1), (2) qualitatively match the strongest known separation between quantum and classical communication complexity, proved by Gavinsky [Dmitry Gavinsky, 2016]. Item (3) is new. (Our result is incomparable to the one of Gavinsky. While he obtained a quantitatively better lower bound of Ω(N^{1/2}) in the classical case, the referee in his quantum protocol is inefficient). Exponential separations of quantum and classical communication complexity have been studied in numerous previous works, but to the best of our knowledge the efficiency of the parties in the quantum protocol has not been addressed, and in most previous separations the quantum parties seem to be inefficient. The only separations that we know of that have efficient quantum parties are the recent separations that are based on lifting [Arkadev Chattopadhyay et al., 2019; Arkadev Chattopadhyay et al., 2019]. However, these separations seem to require quantum protocols with at least two rounds of communication, so they imply a separation of two-way quantum and classical communication complexity but they do not give the stronger separations of simultaneous-message quantum communication complexity vs. two-way classical communication complexity (or even one-way quantum communication complexity vs. two-way classical communication complexity). Our proof technique is completely new, in the context of communication complexity, and is based on techniques from [Ran Raz and Avishay Tal, 2019]. Our function f is based on a lift of the forrelation problem, using xor as a gadget.

Cite as

Uma Girish, Ran Raz, and Avishay Tal. Quantum Versus Randomized Communication Complexity, with Efficient Players. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 54:1-54:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{girish_et_al:LIPIcs.ITCS.2021.54,
  author =	{Girish, Uma and Raz, Ran and Tal, Avishay},
  title =	{{Quantum Versus Randomized Communication Complexity, with Efficient Players}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{54:1--54:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.54},
  URN =		{urn:nbn:de:0030-drops-135932},
  doi =		{10.4230/LIPIcs.ITCS.2021.54},
  annote =	{Keywords: Exponential Separation, Quantum, Randomized, Communication, Complexity, Forrelation}
}
Document
Block Rigidity: Strong Multiplayer Parallel Repetition Implies Super-Linear Lower Bounds for Turing Machines

Authors: Kunal Mittal and Ran Raz

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We prove that a sufficiently strong parallel repetition theorem for a special case of multiplayer (multiprover) games implies super-linear lower bounds for multi-tape Turing machines with advice. To the best of our knowledge, this is the first connection between parallel repetition and lower bounds for time complexity and the first major potential implication of a parallel repetition theorem with more than two players. Along the way to proving this result, we define and initiate a study of block rigidity, a weakening of Valiant’s notion of rigidity [Valiant, 1977]. While rigidity was originally defined for matrices, or, equivalently, for (multi-output) linear functions, we extend and study both rigidity and block rigidity for general (multi-output) functions. Using techniques of Paul, Pippenger, Szemerédi and Trotter [Paul et al., 1983], we show that a block-rigid function cannot be computed by multi-tape Turing machines that run in linear (or slightly super-linear) time, even in the non-uniform setting, where the machine gets an arbitrary advice tape. We then describe a class of multiplayer games, such that, a sufficiently strong parallel repetition theorem for that class of games implies an explicit block-rigid function. The games in that class have the following property that may be of independent interest: for every random string for the verifier (which, in particular, determines the vector of queries to the players), there is a unique correct answer for each of the players, and the verifier accepts if and only if all answers are correct. We refer to such games as independent games. The theorem that we need is that parallel repetition reduces the value of games in this class from v to v^Ω(n), where n is the number of repetitions. As another application of block rigidity, we show conditional size-depth tradeoffs for boolean circuits, where the gates compute arbitrary functions over large sets.

Cite as

Kunal Mittal and Ran Raz. Block Rigidity: Strong Multiplayer Parallel Repetition Implies Super-Linear Lower Bounds for Turing Machines. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 71:1-71:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{mittal_et_al:LIPIcs.ITCS.2021.71,
  author =	{Mittal, Kunal and Raz, Ran},
  title =	{{Block Rigidity: Strong Multiplayer Parallel Repetition Implies Super-Linear Lower Bounds for Turing Machines}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{71:1--71:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.71},
  URN =		{urn:nbn:de:0030-drops-136101},
  doi =		{10.4230/LIPIcs.ITCS.2021.71},
  annote =	{Keywords: Block-rigidity, Matrix Rigidity, Parallel Repetition, Size-depth tradeoffs, Turing Machines}
}
Document
RANDOM
Time-Space Tradeoffs for Distinguishing Distributions and Applications to Security of Goldreich’s PRG

Authors: Sumegha Garg, Pravesh K. Kothari, and Ran Raz

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
In this work, we establish lower bounds against memory-bounded algorithms for distinguishing between natural pairs of related distributions from samples that arrive in a streaming setting. Our first result applies to the problem of distinguishing the uniform distribution on {0,1}ⁿ from the uniform distribution on some unknown linear subspace of {0,1}ⁿ. As a specific corollary, we show that any algorithm that distinguishes between the uniform distribution on {0,1}ⁿ and the uniform distribution on an n/2-dimensional linear subspace of {0,1}ⁿ with non-negligible advantage needs 2^Ω(n) samples or Ω(n²) memory (tight up to constants in the exponent). Our second result applies to distinguishing outputs of Goldreich’s local pseudorandom generator from the uniform distribution on the output domain. Specifically, Goldreich’s pseudorandom generator G fixes a predicate P:{0,1}^k → {0,1} and a collection of subsets S₁, S₂, …, S_m ⊆ [n] of size k. For any seed x ∈ {0,1}ⁿ, it outputs P(x_S₁), P(x_S₂), …, P(x_{S_m}) where x_{S_i} is the projection of x to the coordinates in S_i. We prove that whenever P is t-resilient (all non-zero Fourier coefficients of (-1)^P are of degree t or higher), then no algorithm, with < n^ε memory, can distinguish the output of G from the uniform distribution on {0,1}^m with a large inverse polynomial advantage, for stretch m ≤ (n/t)^{(1-ε)/36 ⋅ t} (barring some restrictions on k). The lower bound holds in the streaming model where at each time step i, S_i ⊆ [n] is a randomly chosen (ordered) subset of size k and the distinguisher sees either P(x_{S_i}) or a uniformly random bit along with S_i. An important implication of our second result is the security of Goldreich’s generator with super-linear stretch (in the streaming model), against memory-bounded adversaries, whenever the predicate P satisfies the necessary condition of t-resiliency identified in various prior works. Our proof builds on the recently developed machinery for proving time-space trade-offs (Raz 2016 and follow-ups). Our key technical contribution is to adapt this machinery to work for distinguishing problems, in contrast to prior works on similar results for search/learning problems.
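
Goldreich's generator, as defined in the abstract, is straightforward to write down. The sketch below instantiates it with a concrete 5-bit predicate (x₁⊕x₂⊕x₃⊕(x₄∧x₅), a standard example in this line of work; the choice of predicate is ours, not the paper's) and with random ordered k-subsets, as in the streaming model.

import random

def tri_sum_and(bits):
    """Example predicate P(z1,...,z5) = z1 xor z2 xor z3 xor (z4 and z5)."""
    z1, z2, z3, z4, z5 = bits
    return z1 ^ z2 ^ z3 ^ (z4 & z5)

def goldreich_prg(seed, subsets, predicate=tri_sum_and):
    """Goldreich's local PRG: output P(x_{S_1}), ..., P(x_{S_m}) for seed x."""
    return [predicate([seed[i] for i in S]) for S in subsets]

rng = random.Random(0)
n, m, k = 64, 256, 5
seed = [rng.randint(0, 1) for _ in range(n)]
subsets = [rng.sample(range(n), k) for _ in range(m)]  # random ordered k-subsets of [n]
output = goldreich_prg(seed, subsets)
print(len(output), sum(output) / m)  # m output bits; empirical bias near 1/2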

Cite as

Sumegha Garg, Pravesh K. Kothari, and Ran Raz. Time-Space Tradeoffs for Distinguishing Distributions and Applications to Security of Goldreich’s PRG. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 21:1-21:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{garg_et_al:LIPIcs.APPROX/RANDOM.2020.21,
  author =	{Garg, Sumegha and Kothari, Pravesh K. and Raz, Ran},
  title =	{{Time-Space Tradeoffs for Distinguishing Distributions and Applications to Security of Goldreich’s PRG}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{21:1--21:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.21},
  URN =		{urn:nbn:de:0030-drops-126248},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.21},
  annote =	{Keywords: memory-sample tradeoffs, bounded storage cryptography, Goldreich’s local PRG, distinguishing problems, refuting CSPs}
}
Document
The Random-Query Model and the Memory-Bounded Coupon Collector

Authors: Ran Raz and Wei Zhan

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
We study a new model of space-bounded computation, the random-query model. The model is based on a branching-program over input variables x_1,…,x_n. In each time step, the branching program gets as an input a random index i ∈ {1,…,n}, together with the input variable x_i (rather than querying an input variable of its choice, as in the case of a standard (oblivious) branching program). We motivate the new model in various ways and study time-space tradeoff lower bounds in this model. Our main technical result is a quadratic time-space lower bound for zero-error computations in the random-query model, for XOR, Majority and many other functions. More precisely, a zero-error computation is a computation that stops with high probability and such that conditioning on the event that the computation stopped, the output is correct with probability 1. We prove that for any Boolean function f: {0,1}^n → {0,1}, with sensitivity k, any zero-error computation with time T and space S, satisfies T ⋅ (S+log n) ≥ Ω(n⋅k). We note that the best time-space lower bounds for standard oblivious branching programs are only slightly super linear and improving these bounds is an important long-standing open problem. To prove our results, we study a memory-bounded variant of the coupon-collector problem that seems to us of independent interest and to the best of our knowledge has not been studied before. We consider a zero-error version of the coupon-collector problem. In this problem, the coupon-collector could explicitly choose to stop when he/she is sure with zero-error that all coupons have already been collected. We prove that any zero-error coupon-collector that stops with high probability in time T, and uses space S, satisfies T⋅(S+log n) ≥ Ω(n^2), where n is the number of different coupons.
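
The natural zero-error baseline for the coupon-collector problem is the full-memory strategy: remember which coupons have been seen and stop once all of them have. The simulation below uses S ≈ n bits and T ≈ n ln n steps, so T⋅(S + log n) ≈ n² log n, within a logarithmic factor of the Ω(n²) lower bound stated above.

import random

def zero_error_coupon_collector(n, rng):
    """Full-memory collector: stops exactly when every coupon has been seen,
    so conditioned on stopping the answer is correct with probability 1."""
    seen = [False] * n          # S = n bits of memory
    steps = 0
    while not all(seen):
        seen[rng.randrange(n)] = True
        steps += 1
    return steps

rng = random.Random(0)
n = 1000
avg = sum(zero_error_coupon_collector(n, rng) for _ in range(20)) / 20
print(avg)   # ≈ n * ln(n) ≈ 6900 for n = 1000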

Cite as

Ran Raz and Wei Zhan. The Random-Query Model and the Memory-Bounded Coupon Collector. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 20:1-20:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{raz_et_al:LIPIcs.ITCS.2020.20,
  author =	{Raz, Ran and Zhan, Wei},
  title =	{{The Random-Query Model and the Memory-Bounded Coupon Collector}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{20:1--20:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.20},
  URN =		{urn:nbn:de:0030-drops-117053},
  doi =		{10.4230/LIPIcs.ITCS.2020.20},
  annote =	{Keywords: random-query model, time-space trade-offs, branching programs}
}
Document
Time-Space Lower Bounds for Two-Pass Learning

Authors: Sumegha Garg, Ran Raz, and Avishay Tal

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
A line of recent works showed that for a large class of learning problems, any learning algorithm requires either super-linear memory size or a super-polynomial number of samples [Raz, 2016; Kol et al., 2017; Raz, 2017; Moshkovitz and Moshkovitz, 2018; Beame et al., 2018; Garg et al., 2018]. For example, any algorithm for learning parities of size n requires either a memory of size Ω(n²) or an exponential number of samples [Raz, 2016]. All these works modeled the learner as a one-pass branching program, allowing only one pass over the stream of samples. In this work, we prove the first memory-sample lower bounds (with a super-linear lower bound on the memory size and super-polynomial lower bound on the number of samples) when the learner is allowed two passes over the stream of samples. For example, we prove that any two-pass algorithm for learning parities of size n requires either a memory of size Ω(n^{1.5}) or at least 2^{Ω(√n)} samples. More generally, a matrix M: A × X → {-1,1} corresponds to the following learning problem: An unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a₁, b₁), (a₂, b₂), …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x). Assume that k, 𝓁, r are such that any submatrix of M of at least 2^{-k} ⋅ |A| rows and at least 2^{-𝓁} ⋅ |X| columns, has a bias of at most 2^{-r}. We show that any two-pass learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k ⋅ min{k, √𝓁}), or at least 2^{Ω(min{k, √𝓁, r})} samples.

Cite as

Sumegha Garg, Ran Raz, and Avishay Tal. Time-Space Lower Bounds for Two-Pass Learning. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 22:1-22:39, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{garg_et_al:LIPIcs.CCC.2019.22,
  author =	{Garg, Sumegha and Raz, Ran and Tal, Avishay},
  title =	{{Time-Space Lower Bounds for Two-Pass Learning}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{22:1--22:39},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.22},
  URN =		{urn:nbn:de:0030-drops-108446},
  doi =		{10.4230/LIPIcs.CCC.2019.22},
  annote =	{Keywords: branching program, time-space tradeoffs, two-pass streaming, PAC learning, lower bounds}
}
Document
A Lower Bound for Adaptively-Secure Collective Coin-Flipping Protocols

Authors: Yael Tauman Kalai, Ilan Komargodski, and Ran Raz

Published in: LIPIcs, Volume 121, 32nd International Symposium on Distributed Computing (DISC 2018)


Abstract
In 1985, Ben-Or and Linial (Advances in Computing Research '89) introduced the collective coin-flipping problem, where n parties communicate via a single broadcast channel and wish to generate a common random bit in the presence of adaptive Byzantine corruptions. In this model, the adversary can decide to corrupt a party in the course of the protocol as a function of the messages seen so far. They showed that the majority protocol, in which each player sends a random bit and the output is the majority value, tolerates O(sqrt n) adaptive corruptions. They conjectured that this is optimal for such adversaries. We prove that the majority protocol is optimal (up to a poly-logarithmic factor) among all protocols in which each party sends a single, possibly long, message. Previously, such a lower bound was known for protocols in which parties are allowed to send only a single bit (Lichtenstein, Linial, and Saks, Combinatorica '89), or for symmetric protocols (Goldwasser, Kalai, and Park, ICALP '15).
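
To see why √n is the relevant scale for the majority protocol, consider the simple adaptive attack simulated below: the adversary watches the broadcast and, while it has budget left, corrupts any party about to send 0 so that it sends 1 instead. Since the honest bits deviate from n/2 by only O(√n), a budget of a few √n corruptions already fixes the outcome with high probability. This is a toy illustration of the model, not an argument from the paper.

import random
from math import sqrt

def majority_with_adaptive_adversary(n, budget, rng):
    """Each party broadcasts one random bit in turn; the output is the majority.
    An adaptive adversary corrupts up to `budget` parties, turning their 0s into 1s."""
    ones, corrupted = 0, 0
    for _ in range(n):
        bit = rng.randint(0, 1)
        if bit == 0 and corrupted < budget:
            bit, corrupted = 1, corrupted + 1   # corrupt this party as it speaks
        ones += bit
    return int(ones > n / 2)

rng = random.Random(0)
n, trials = 10001, 200
for c in (0.5, 1, 2, 4):
    budget = int(c * sqrt(n))
    p_one = sum(majority_with_adaptive_adversary(n, budget, rng) for _ in range(trials)) / trials
    print(budget, p_one)   # the output drifts toward 1 as the budget grows past ~sqrt(n)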

Cite as

Yael Tauman Kalai, Ilan Komargodski, and Ran Raz. A Lower Bound for Adaptively-Secure Collective Coin-Flipping Protocols. In 32nd International Symposium on Distributed Computing (DISC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 121, pp. 34:1-34:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{taumankalai_et_al:LIPIcs.DISC.2018.34,
  author =	{Tauman Kalai, Yael and Komargodski, Ilan and Raz, Ran},
  title =	{{A Lower Bound for Adaptively-Secure Collective Coin-Flipping Protocols}},
  booktitle =	{32nd International Symposium on Distributed Computing (DISC 2018)},
  pages =	{34:1--34:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-092-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{121},
  editor =	{Schmid, Ulrich and Widder, Josef},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2018.34},
  URN =		{urn:nbn:de:0030-drops-98230},
  doi =		{10.4230/LIPIcs.DISC.2018.34},
  annote =	{Keywords: Coin flipping, adaptive corruptions, byzantine faults, lower bound}
}
Document
A Candidate for a Strong Separation of Information and Communication

Authors: Mark Braverman, Anat Ganor, Gillat Kol, and Ran Raz

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
The weak interactive compression conjecture asserts that any two-party communication protocol with communication complexity C and information complexity I can be compressed to a protocol with communication complexity poly(I)polylog(C). We describe a communication problem that is a candidate for refuting that conjecture. Specifically, while we show that the problem can be solved by a protocol with communication complexity C and information complexity I=polylog(C), the problem seems to be hard for protocols with communication complexity poly(I)polylog(C)=polylog(C).

Cite as

Mark Braverman, Anat Ganor, Gillat Kol, and Ran Raz. A Candidate for a Strong Separation of Information and Communication. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 11:1-11:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{braverman_et_al:LIPIcs.ITCS.2018.11,
  author =	{Braverman, Mark and Ganor, Anat and Kol, Gillat and Raz, Ran},
  title =	{{A Candidate for a Strong Separation of Information and Communication}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{11:1--11:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.11},
  URN =		{urn:nbn:de:0030-drops-83322},
  doi =		{10.4230/LIPIcs.ITCS.2018.11},
  annote =	{Keywords: communication complexity, amortized communication complexity, communication compression, direct sum, information complexity}
}
Document
Complete Volume
LIPIcs, Volume 50, CCC'16, Complete Volume

Authors: Ran Raz

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
LIPIcs, Volume 50, CCC'16, Complete Volume

Cite as

31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@Proceedings{raz:LIPIcs.CCC.2016,
  title =	{{LIPIcs, Volume 50, CCC'16, Complete Volume}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016},
  URN =		{urn:nbn:de:0030-drops-58590},
  doi =		{10.4230/LIPIcs.CCC.2016},
  annote =	{Keywords: Theory of Computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Awards, Conference Organization, External Reviewers

Authors: Ran Raz

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
Front Matter, Table of Contents, Preface, Awards, Conference Organization, External Reviewers

Cite as

31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{raz:LIPIcs.CCC.2016.0,
  author =	{Raz, Ran},
  title =	{{Front Matter, Table of Contents, Preface, Awards, Conference Organization, External Reviewers}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.0},
  URN =		{urn:nbn:de:0030-drops-58227},
  doi =		{10.4230/LIPIcs.CCC.2016.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Awards, Conference Organization, External Reviewers}
}
Document
Two Sides of the Coin Problem

Authors: Gil Cohen, Anat Ganor, and Ran Raz

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
In the coin problem, one is given n independent flips of a coin that has bias b > 0 towards either Head or Tail. The goal is to decide which side the coin is biased towards, with high confidence. An optimal strategy for solving the coin problem is to apply the majority function on the n samples. This simple strategy works as long as b > c(1/sqrt n) for some constant c. However, computing majority is an impossible task for several natural computational models, such as bounded width read once branching programs and AC^0 circuits. Brody and Verbin proved that a length n, width w read once branching program cannot solve the coin problem for b < O(1/(log n)^w). This result was tightened by Steinberger to O(1/(log n)^(w-2)). The coin problem in the model of AC^0 circuits was first studied by Shaltiel and Viola, and later by Aaronson who proved that a depth d size s Boolean circuit cannot solve the coin problem for b < O(1/(log s)^(d+2)). This work has two contributions: 1. We strengthen Steinberger's result and show that any Santha-Vazirani source with bias b < O(1/(log n)^(w-2)) fools length n, width w read once branching programs. In other words, the strong independence assumption in the coin problem is completely redundant in the model of read once branching programs, assuming the bias remains small. That is, the exact same result holds for a much more general class of sources. 2. We tighten Aaronson's result and show that a depth d, size s Boolean circuit cannot solve the coin problem for b < O(1/(log s)^(d-1)). Moreover, our proof technique is different and we believe that it is simpler and more natural.
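
The majority strategy mentioned at the start of the abstract is easy to check empirically. The sketch below is a plain Monte Carlo estimate of its success probability for bias b around the 1/√n threshold; it is an illustration of the folklore fact, not code from the paper.

import random
from math import sqrt

def majority_solves_coin(n, b, trials, rng):
    """Estimate Pr[majority of n flips identifies the direction of bias b]."""
    correct = 0
    for _ in range(trials):
        toward_heads = rng.random() < 0.5
        p = 0.5 + b if toward_heads else 0.5 - b
        heads = sum(rng.random() < p for _ in range(n))
        guess_heads = heads > n / 2
        correct += guess_heads == toward_heads
    return correct / trials

rng = random.Random(0)
n = 10001
for c in (0.2, 1.0, 5.0):
    b = c / sqrt(n)
    print(c, majority_solves_coin(n, b, 2000, rng))  # success climbs well above 1/2 once b >> 1/sqrt(n)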

Cite as

Gil Cohen, Anat Ganor, and Ran Raz. Two Sides of the Coin Problem. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 618-629, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{cohen_et_al:LIPIcs.APPROX-RANDOM.2014.618,
  author =	{Cohen, Gil and Ganor, Anat and Raz, Ran},
  title =	{{Two Sides of the Coin Problem}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{618--629},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.618},
  URN =		{urn:nbn:de:0030-drops-47265},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.618},
  annote =	{Keywords: bounded depth circuits, read once branching programs, Santha-Vazirani sources, the coin problem}
}
Document
Space Pseudorandom Generators by Communication Complexity Lower Bounds

Authors: Anat Ganor and Ran Raz

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
In 1989, Babai, Nisan and Szegedy gave a construction of a pseudorandom generator for logspace, based on lower bounds for multiparty communication complexity. The seed length of their pseudorandom generator was relatively large, because the best lower bounds for multiparty communication complexity are relatively weak. Subsequently, pseudorandom generators for logspace with seed length O(log^2 n) were given by Nisan, and Impagliazzo, Nisan and Wigderson. In this paper, we show how to use the pseudorandom generator construction of Babai, Nisan and Szegedy to obtain a third construction of a pseudorandom generator with seed length O(log^2 n), achieving the same parameters as Nisan, and Impagliazzo, Nisan and Wigderson. We achieve this by concentrating on protocols in a restricted model of multiparty communication complexity that we call the conservative one-way unicast model and is based on the conservative one-way model of Damm, Jukna and Sgall. We observe that bounds in the conservative one-way unicast model (rather than the standard Number On the Forehead model) are sufficient for the pseudorandom generator construction of Babai, Nisan and Szegedy to work. Roughly speaking, in a conservative one-way unicast communication protocol, the players speak in turns, one after the other in a fixed order, and every message is visible only to the next player. Moreover, before the beginning of the protocol, each player only knows the inputs of the players that speak after she does and a certain function of the inputs of the players that speak before she does. We prove a lower bound for the communication complexity of conservative one-way unicast communication protocols that compute a family of functions obtained by compositions of strong extractors. Our final pseudorandom generator construction is related to, but different from the constructions of Nisan, and Impagliazzo, Nisan and Wigderson.

Cite as

Anat Ganor and Ran Raz. Space Pseudorandom Generators by Communication Complexity Lower Bounds. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 692-703, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{ganor_et_al:LIPIcs.APPROX-RANDOM.2014.692,
  author =	{Ganor, Anat and Raz, Ran},
  title =	{{Space Pseudorandom Generators by Communication Complexity Lower Bounds}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{692--703},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.692},
  URN =		{urn:nbn:de:0030-drops-47324},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.692},
  annote =	{Keywords: Communication complexity, Logspace, Pseudorandom generator}
}