8 Search Results for "Garg, Sumegha"


Document
Hitting Sets for Regular Branching Programs

Authors: Andrej Bogdanov, William M. Hoza, Gautam Prakriya, and Edward Pyne

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
We construct improved hitting set generators (HSGs) for ordered (read-once) regular branching programs in two parameter regimes. First, we construct an explicit ε-HSG for unbounded-width regular branching programs with a single accept state with seed length Õ(log n ⋅ log(1/ε)), where n is the length of the program. Second, we construct an explicit ε-HSG for width-w length-n regular branching programs with seed length Õ(log n ⋅ (√(log(1/ε)) + log w) + log(1/ε)). For context, the "baseline" in this area is the pseudorandom generator (PRG) by Nisan (Combinatorica 1992), which fools ordered (possibly non-regular) branching programs with seed length O(log(wn/ε) ⋅ log n). For regular programs, the state-of-the-art PRG, by Braverman, Rao, Raz, and Yehudayoff (FOCS 2010, SICOMP 2014), has seed length Õ(log(w/ε) ⋅ log n), which beats Nisan’s seed length when log(w/ε) = o(log n). Taken together, our two new constructions beat Nisan’s seed length in all parameter regimes except when log w and log(1/ε) are both Ω(log n) (for the construction of HSGs for regular branching programs with a single accept vertex). Extending work by Reingold, Trevisan, and Vadhan (STOC 2006), we furthermore show that an explicit HSG for regular branching programs with a single accept vertex with seed length o(log² n) in the regime log w = Θ(log(1/ε)) = Θ(log n) would imply improved HSGs for general ordered branching programs, which would be a major breakthrough in derandomization. Pyne and Vadhan (CCC 2021) recently obtained such parameters for the special case of permutation branching programs.
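For quick reference, the seed lengths quoted in the abstract can be set side by side; the LaTeX summary below restates only bounds that appear above and introduces no new ones.

\[
\begin{aligned}
\text{Nisan (PRG, general ordered BPs):} \quad & O(\log(wn/\varepsilon) \cdot \log n) \\
\text{Braverman–Rao–Raz–Yehudayoff (PRG, regular BPs):} \quad & \widetilde{O}(\log(w/\varepsilon) \cdot \log n) \\
\text{This work (HSG, unbounded width, one accept state):} \quad & \widetilde{O}(\log n \cdot \log(1/\varepsilon)) \\
\text{This work (HSG, width } w\text{):} \quad & \widetilde{O}\bigl(\log n \cdot (\sqrt{\log(1/\varepsilon)} + \log w) + \log(1/\varepsilon)\bigr)
\end{aligned}
\]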

Cite as

Andrej Bogdanov, William M. Hoza, Gautam Prakriya, and Edward Pyne. Hitting Sets for Regular Branching Programs. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 3:1-3:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bogdanov_et_al:LIPIcs.CCC.2022.3,
  author =	{Bogdanov, Andrej and Hoza, William M. and Prakriya, Gautam and Pyne, Edward},
  title =	{{Hitting Sets for Regular Branching Programs}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{3:1--3:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.3},
  URN =		{urn:nbn:de:0030-drops-165658},
  doi =		{10.4230/LIPIcs.CCC.2022.3},
  annote =	{Keywords: Pseudorandomness, hitting set generators, space-bounded computation}
}
Document
RANDOM
Memory-Sample Lower Bounds for Learning Parity with Noise

Authors: Sumegha Garg, Pravesh K. Kothari, Pengda Liu, and Ran Raz

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
In this work, we show, for the well-studied problem of learning parity under noise, where a learner tries to learn x = (x₁,…,x_n) ∈ {0,1}ⁿ from a stream of random linear equations over 𝔽₂ that are correct with probability 1/2+ε and flipped with probability 1/2-ε (0 < ε < 1/2), that any learning algorithm requires either a memory of size Ω(n²/ε) or an exponential number of samples. In fact, we study memory-sample lower bounds for a large class of learning problems, as characterized by [Garg et al., 2018], when the samples are noisy. A matrix M: A × X → {-1,1} corresponds to the following learning problem with error parameter ε: an unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a₁, b₁), (a₂, b₂) …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i,x) with probability 1/2+ε and b_i = -M(a_i,x) with probability 1/2-ε (0 < ε < 1/2). Assume that k, 𝓁, r are such that any submatrix of M of at least 2^{-k} ⋅ |A| rows and at least 2^{-𝓁} ⋅ |X| columns has a bias of at most 2^{-r}. We show that any learning algorithm for the learning problem corresponding to M, with error parameter ε, requires either a memory of size at least Ω((k⋅𝓁)/ε), or at least 2^{Ω(r)} samples. The result holds even if the learner has an exponentially small success probability (of 2^{-Ω(r)}). In particular, this shows that for a large class of learning problems, the same as those in [Garg et al., 2018], any learning algorithm requires either a memory of size at least Ω(((log|X|)⋅(log|A|))/ε) or an exponential number of noisy samples. Our proof is based on adapting the arguments in [Ran Raz, 2017; Garg et al., 2018] to the noisy case.
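To make the sample model concrete, here is a minimal Python sketch of the noisy stream the learner sees; the function name and the small demo at the end are illustrative, not taken from the paper.

import random

def noisy_parity_stream(x, eps, num_samples):
    # x is the hidden vector in {0,1}^n. Each sample pairs a uniformly random
    # a in {0,1}^n with b = <a, x> mod 2, flipped with probability 1/2 - eps.
    n = len(x)
    for _ in range(num_samples):
        a = [random.randrange(2) for _ in range(n)]
        b = sum(ai * xi for ai, xi in zip(a, x)) % 2
        if random.random() < 0.5 - eps:  # the equation is wrong w.p. 1/2 - eps
            b ^= 1
        yield a, b

# Demo: five noisy equations about a hidden 8-bit secret.
secret = [random.randrange(2) for _ in range(8)]
for a, b in noisy_parity_stream(secret, eps=0.1, num_samples=5):
    print(a, b)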

Cite as

Sumegha Garg, Pravesh K. Kothari, Pengda Liu, and Ran Raz. Memory-Sample Lower Bounds for Learning Parity with Noise. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 60:1-60:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{garg_et_al:LIPIcs.APPROX/RANDOM.2021.60,
  author =	{Garg, Sumegha and Kothari, Pravesh K. and Liu, Pengda and Raz, Ran},
  title =	{{Memory-Sample Lower Bounds for Learning Parity with Noise}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{60:1--60:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.60},
  URN =		{urn:nbn:de:0030-drops-147534},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.60},
  annote =	{Keywords: memory-sample tradeoffs, learning parity under noise, space lower bound, branching program}
}
Document
Pseudodistributions That Beat All Pseudorandom Generators (Extended Abstract)

Authors: Edward Pyne and Salil Vadhan

Published in: LIPIcs, Volume 200, 36th Computational Complexity Conference (CCC 2021)


Abstract
A recent paper of Braverman, Cohen, and Garg (STOC 2018) introduced the concept of a weighted pseudorandom generator (WPRG), which amounts to a pseudorandom generator (PRG) whose outputs are accompanied with real coefficients that scale the acceptance probabilities of any potential distinguisher. They gave an explicit construction of WPRGs for ordered branching programs whose seed length has a better dependence on the error parameter ε than the classic PRG construction of Nisan (STOC 1990 and Combinatorica 1992). In this work, we give an explicit construction of WPRGs that achieve parameters that are impossible to achieve by a PRG. In particular, we construct a WPRG for ordered permutation branching programs of unbounded width with a single accept state that has seed length Õ(log^{3/2} n) for error parameter ε = 1/poly(n), where n is the input length. In contrast, recent work of Hoza et al. (ITCS 2021) shows that any PRG for this model requires seed length Ω(log² n) to achieve error ε = 1/poly(n). As a corollary, we obtain explicit WPRGs with seed length Õ(log^{3/2} n) and error ε = 1/poly(n) for ordered permutation branching programs of width w = poly(n) with an arbitrary number of accept states. Previously, seed length o(log² n) was only known when both the width and the reciprocal of the error are subpolynomial, i.e., w = n^{o(1)} and ε = 1/n^{o(1)} (Braverman, Rao, Raz, Yehudayoff, FOCS 2010 and SICOMP 2014). The starting point for our results is the recent space-efficient algorithms for estimating random-walk probabilities in directed graphs by Ahmadinejad, Kelner, Murtagh, Peebles, Sidford, and Vadhan (FOCS 2020), which are based on spectral graph theory and space-efficient Laplacian solvers. We interpret these algorithms as giving WPRGs with large seed length, which we then derandomize to obtain our results. We also note that this approach gives a simpler proof of the original result of Braverman, Cohen, and Garg, as independently discovered by Cohen, Doron, Renard, Sberlo, and Ta-Shma (these proceedings).
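For readers new to the object: the standard way to formalize a WPRG as described above pairs a generator G: {0,1}^d → {0,1}ⁿ with a weight function ρ: {0,1}^d → ℝ. The LaTeX sketch below uses our own notation rather than the paper's; taking ρ ≡ 1 recovers an ordinary PRG.

\[
\Bigl|\; \mathbb{E}_{s \sim \{0,1\}^d}\bigl[\rho(s)\, f(G(s))\bigr] \;-\; \mathbb{E}_{u \sim \{0,1\}^n}\bigl[f(u)\bigr] \;\Bigr| \;\le\; \varepsilon
\qquad \text{for every distinguisher } f \text{ in the class.}
\]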

Cite as

Edward Pyne and Salil Vadhan. Pseudodistributions That Beat All Pseudorandom Generators (Extended Abstract). In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 33:1-33:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{pyne_et_al:LIPIcs.CCC.2021.33,
  author =	{Pyne, Edward and Vadhan, Salil},
  title =	{{Pseudodistributions That Beat All Pseudorandom Generators (Extended Abstract)}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{33:1--33:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.33},
  URN =		{urn:nbn:de:0030-drops-143070},
  doi =		{10.4230/LIPIcs.CCC.2021.33},
  annote =	{Keywords: pseudorandomness, space-bounded computation, spectral graph theory}
}
Document
RANDOM
Time-Space Tradeoffs for Distinguishing Distributions and Applications to Security of Goldreich’s PRG

Authors: Sumegha Garg, Pravesh K. Kothari, and Ran Raz

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
In this work, we establish lower bounds against memory-bounded algorithms for distinguishing between natural pairs of related distributions from samples that arrive in a streaming setting. Our first result applies to the problem of distinguishing the uniform distribution on {0,1}ⁿ from the uniform distribution on some unknown linear subspace of {0,1}ⁿ. As a specific corollary, we show that any algorithm that distinguishes between the uniform distribution on {0,1}ⁿ and the uniform distribution on an n/2-dimensional linear subspace of {0,1}ⁿ with non-negligible advantage needs 2^Ω(n) samples or Ω(n²) memory (tight up to constants in the exponent). Our second result applies to distinguishing outputs of Goldreich’s local pseudorandom generator from the uniform distribution on the output domain. Specifically, Goldreich’s pseudorandom generator G fixes a predicate P:{0,1}^k → {0,1} and a collection of subsets S₁, S₂, …, S_m ⊆ [n] of size k. For any seed x ∈ {0,1}ⁿ, it outputs P(x_{S₁}), P(x_{S₂}), …, P(x_{S_m}), where x_{S_i} is the projection of x to the coordinates in S_i. We prove that whenever P is t-resilient (all non-zero Fourier coefficients of (-1)^P are of degree t or higher), then no algorithm with < n^ε memory can distinguish the output of G from the uniform distribution on {0,1}^m with a large inverse polynomial advantage, for stretch m ≤ (n/t)^{(1-ε)/36 ⋅ t} (barring some restrictions on k). The lower bound holds in the streaming model, where at each time step i, S_i ⊆ [n] is a randomly chosen (ordered) subset of size k and the distinguisher sees either P(x_{S_i}) or a uniformly random bit along with S_i. An important implication of our second result is the security of Goldreich’s generator with super-linear stretch (in the streaming model) against memory-bounded adversaries, whenever the predicate P satisfies the necessary condition of t-resiliency identified in various prior works. Our proof builds on the recently developed machinery for proving time-space trade-offs (Raz 2016 and follow-ups). Our key technical contribution is to adapt this machinery to work for distinguishing problems, in contrast to prior works on similar results for search/learning problems.
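The generator itself is easy to write down; the following Python sketch implements G exactly as described above. The XOR predicate at the end is a placeholder chosen for brevity, not a predicate the paper endorses.

import random

def goldreich_prg(x, subsets, predicate):
    # Output P(x_{S_1}), ..., P(x_{S_m}) for the fixed public subsets S_i.
    return [predicate([x[j] for j in S]) for S in subsets]

def xor_predicate(bits):
    # Placeholder predicate on k bits, purely illustrative.
    return sum(bits) % 2

n, k, m = 16, 5, 24
x = [random.randrange(2) for _ in range(n)]               # secret seed
subsets = [random.sample(range(n), k) for _ in range(m)]  # public S_1, ..., S_m
print(goldreich_prg(x, subsets, xor_predicate))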

Cite as

Sumegha Garg, Pravesh K. Kothari, and Ran Raz. Time-Space Tradeoffs for Distinguishing Distributions and Applications to Security of Goldreich’s PRG. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 21:1-21:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{garg_et_al:LIPIcs.APPROX/RANDOM.2020.21,
  author =	{Garg, Sumegha and Kothari, Pravesh K. and Raz, Ran},
  title =	{{Time-Space Tradeoffs for Distinguishing Distributions and Applications to Security of Goldreich’s PRG}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{21:1--21:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.21},
  URN =		{urn:nbn:de:0030-drops-126248},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.21},
  annote =	{Keywords: memory-sample tradeoffs, bounded storage cryptography, Goldreich’s local PRG, distinguishing problems, refuting CSPs}
}
Document
The Role of Randomness and Noise in Strategic Classification

Authors: Mark Braverman and Sumegha Garg

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
We investigate the problem of designing optimal classifiers in the "strategic classification" setting, where the classification is part of a game in which players can modify their features to attain a favorable classification outcome (while incurring some cost). Previously, the problem has been considered from a learning-theoretic perspective and from the algorithmic fairness perspective. Our main contributions include:
  • Showing that if the objective is to maximize the efficiency of the classification process (defined as the accuracy of the outcome minus the sunk cost of the qualified players manipulating their features to gain a better outcome), then using randomized classifiers (that is, ones where the probability of a given feature vector being accepted by the classifier is strictly between 0 and 1) is necessary.
  • Showing that in many natural cases, the imposed optimal solution (in terms of efficiency) has the structure where players never change their feature vectors: the randomized classifier is structured so that the gain in the probability of being classified as a "1" does not justify the expense of changing one’s features.
  • Observing that randomized classification is not a stable best response from the classifier’s viewpoint, and that the classifier does not benefit from randomized classifiers without creating instability in the system.
  • Showing that in some cases, a noisier signal leads to better equilibrium outcomes, improving both accuracy and fairness when more than one subpopulation with different feature-adjustment costs is involved. This is particularly interesting from a policy perspective, since it is hard to force institutions to stick to a particular randomized classification strategy (especially in the context of a market with multiple classifiers), but it is possible to alter the information environment to make the feature signals inherently noisier.
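The first two bullets above can be illustrated with a toy calculation: under a randomized classifier, a player changes features only if the gain in acceptance probability outweighs the manipulation cost. The Python sketch below is a made-up numerical example with hypothetical values, acceptance curve, and cost function, not the paper's model.

def best_response(p, value, cost, feature, candidates):
    # A player at `feature` moves to the candidate feature that maximizes
    # value * acceptance probability minus the cost of moving there.
    def payoff(f):
        return value * p(f) - cost(feature, f)
    return max(candidates, key=payoff)

# The acceptance probability rises gently with the feature score, so the
# marginal gain from manipulation never covers its cost: players stay put.
p = lambda f: min(1.0, 0.1 + 0.05 * f)
cost = lambda f0, f1: 2.0 * max(0, f1 - f0)
for start in [2, 5, 8]:
    print(start, "->", best_response(p, value=10.0, cost=cost,
                                     feature=start, candidates=range(11)))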

Cite as

Mark Braverman and Sumegha Garg. The Role of Randomness and Noise in Strategic Classification. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 9:1-9:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.FORC.2020.9,
  author =	{Braverman, Mark and Garg, Sumegha},
  title =	{{The Role of Randomness and Noise in Strategic Classification}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{9:1--9:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.9},
  URN =		{urn:nbn:de:0030-drops-120255},
  doi =		{10.4230/LIPIcs.FORC.2020.9},
  annote =	{Keywords: Strategic classification, noisy features, randomized classification, fairness}
}
Document
Time-Space Lower Bounds for Two-Pass Learning

Authors: Sumegha Garg, Ran Raz, and Avishay Tal

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
A line of recent works showed that for a large class of learning problems, any learning algorithm requires either super-linear memory size or a super-polynomial number of samples [Raz, 2016; Kol et al., 2017; Raz, 2017; Moshkovitz and Moshkovitz, 2018; Beame et al., 2018; Garg et al., 2018]. For example, any algorithm for learning parities of size n requires either a memory of size Ω(n²) or an exponential number of samples [Raz, 2016]. All these works modeled the learner as a one-pass branching program, allowing only one pass over the stream of samples. In this work, we prove the first memory-sample lower bounds (with a super-linear lower bound on the memory size and super-polynomial lower bound on the number of samples) when the learner is allowed two passes over the stream of samples. For example, we prove that any two-pass algorithm for learning parities of size n requires either a memory of size Ω(n^{1.5}) or at least 2^{Ω(√n)} samples. More generally, a matrix M: A × X → {-1,1} corresponds to the following learning problem: An unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a₁, b₁), (a₂, b₂) …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i,x). Assume that k, 𝓁, r are such that any submatrix of M of at least 2^{-k} ⋅ |A| rows and at least 2^{-𝓁} ⋅ |X| columns has a bias of at most 2^{-r}. We show that any two-pass learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k ⋅ min{k, √𝓁}), or at least 2^{Ω(min{k, √𝓁, r})} samples.
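To see how the general theorem recovers the two-pass parity bound, one can instantiate M as the ±1 inner-product matrix; the bias estimate below uses Lindsey's lemma, which is our choice of tool for this sketch rather than something taken from the paper.

\[
M(a,x) = (-1)^{\langle a,x \rangle} \text{ on } A = X = \{0,1\}^n:\quad
\text{any } 2^{n-k} \times 2^{n-\ell} \text{ submatrix has bias at most } 2^{(k+\ell-n)/2},
\]
so taking \(k = \ell = r = n/4\) satisfies the hypothesis, and the theorem gives memory \(\Omega(k \cdot \min\{k, \sqrt{\ell}\}) = \Omega(n^{1.5})\) or \(2^{\Omega(\min\{k, \sqrt{\ell}, r\})} = 2^{\Omega(\sqrt{n})}\) samples, matching the parity example stated above.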

Cite as

Sumegha Garg, Ran Raz, and Avishay Tal. Time-Space Lower Bounds for Two-Pass Learning. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 22:1-22:39, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{garg_et_al:LIPIcs.CCC.2019.22,
  author =	{Garg, Sumegha and Raz, Ran and Tal, Avishay},
  title =	{{Time-Space Lower Bounds for Two-Pass Learning}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{22:1--22:39},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.22},
  URN =		{urn:nbn:de:0030-drops-108446},
  doi =		{10.4230/LIPIcs.CCC.2019.22},
  annote =	{Keywords: branching program, time-space tradeoffs, two-pass streaming, PAC learning, lower bounds}
}
Document
The Space Complexity of Mirror Games

Authors: Sumegha Garg and Jon Schneider

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We consider the following game between two players, Alice and Bob, which we call the mirror game. Alice and Bob take turns saying numbers belonging to the set {1, 2, …, N}. A player loses if they repeat a number that has already been said. Otherwise, after N turns, when all the numbers have been spoken, both players win. When N is even, Bob, who goes second, has a very simple (and memoryless) strategy to avoid losing: whenever Alice says x, respond with N+1-x. The question is: does Alice have a similarly simple strategy to win that avoids remembering all the numbers said by Bob? The answer is no. We prove a linear lower bound on the space complexity of any deterministic winning strategy of Alice. Interestingly, this follows as a consequence of the Eventown-Oddtown theorem from extremal combinatorics. We additionally demonstrate a randomized strategy for Alice that wins with high probability and requires only Õ(√N) space (provided that Alice has access to a random matching on K_N). We also investigate lower bounds for a generalized mirror game where Alice and Bob alternate saying 1 number and b numbers per turn, respectively. When 1+b is a prime, our linear lower bounds continue to hold, but when 1+b is composite, we show that the existence of an o(N)-space strategy for Bob (when N ≢ 0 mod (1+b)) implies the existence of exponential-sized matching vector families over ℤ^N_{1+b}.
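Bob's memoryless strategy is simple enough to simulate directly; in the minimal Python sketch below, Alice's moves are supplied as a list purely for illustration.

def play_mirror_game(N, alice_moves):
    # Bob's memoryless strategy for even N: answer Alice's x with N + 1 - x.
    said = set()
    for x in alice_moves:
        if x in said:
            return "Alice loses"
        said.add(x)
        y = N + 1 - x
        if y in said:
            return "Bob loses"
        said.add(y)
    return "both players win"

# With N = 8, Alice saying 1..4 forces Bob to answer 8, 7, 6, 5: no repeats.
print(play_mirror_game(8, [1, 2, 3, 4]))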

Cite as

Sumegha Garg and Jon Schneider. The Space Complexity of Mirror Games. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{garg_et_al:LIPIcs.ITCS.2019.36,
  author =	{Garg, Sumegha and Schneider, Jon},
  title =	{{The Space Complexity of Mirror Games}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.36},
  URN =		{urn:nbn:de:0030-drops-101295},
  doi =		{10.4230/LIPIcs.ITCS.2019.36},
  annote =	{Keywords: Mirror Games, Space Complexity, Eventown-Oddtown}
}
Document
Coding in Undirected Graphs Is Either Very Helpful or Not Helpful at All

Authors: Mark Braverman, Sumegha Garg, and Ariel Schvartzman

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
While it is known that using network coding can significantly improve the throughput of directed networks, it is a notorious open problem whether coding yields any advantage over the multicommodity flow (MCF) rate in undirected networks. It was conjectured that the answer is no. In this paper we show that even a small advantage over MCF can be amplified to yield a near-maximum possible gap. We prove that any undirected network with k source-sink pairs that exhibits a (1+ε) gap between its MCF rate and its network coding rate can be used to construct a family of graphs G' whose gap is log(|G'|)^c for some constant c < 1. The resulting gap is close to the best currently known upper bound, log(|G'|), which follows from the connection between MCF and sparsest cuts. Our construction relies on a gap-amplifying graph tensor product that, given two graphs G₁, G₂ with small gaps, creates another graph G with a gap that is equal to the product of the previous two, at the cost of increasing the size of the graph. We iterate this process to obtain a gap of log(|G'|)^c from any initial gap.
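The amplification step can be summarized in one line; the iteration count below is our own back-of-the-envelope reading of the abstract, with the size blow-up deliberately left abstract.

\[
\mathrm{gap}(G_1 \otimes G_2) \;=\; \mathrm{gap}(G_1) \cdot \mathrm{gap}(G_2),
\qquad \text{so } t \text{ rounds of tensoring a graph with itself give gap } (1+\varepsilon)^{2^t}.
\]
Since each round also enlarges the graph, the stated bound of log(|G'|)^c for some c < 1 comes from balancing the accumulated gap against the final size |G'|.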

Cite as

Mark Braverman, Sumegha Garg, and Ariel Schvartzman. Coding in Undirected Graphs Is Either Very Helpful or Not Helpful at All. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.ITCS.2017.18,
  author =	{Braverman, Mark and Garg, Sumegha and Schvartzman, Ariel},
  title =	{{Coding in Undirected Graphs Is Either Very Helpful or Not Helpful at All}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.18},
  URN =		{urn:nbn:de:0030-drops-81599},
  doi =		{10.4230/LIPIcs.ITCS.2017.18},
  annote =	{Keywords: Network coding, Gap Amplification, Multicommodity flows}
}
  • Refine by Author
  • 6 Garg, Sumegha
  • 3 Raz, Ran
  • 2 Braverman, Mark
  • 2 Kothari, Pravesh K.
  • 2 Pyne, Edward

  • Refine by Classification
  • 3 Theory of computation → Machine learning theory
  • 3 Theory of computation → Pseudorandomness and derandomization
  • 1 Theory of computation → Algorithmic game theory and mechanism design
  • 1 Theory of computation → Circuit complexity
  • 1 Theory of computation → Interactive computation

  • Refine by Keyword
  • 2 branching program
  • 2 memory-sample tradeoffs
  • 2 space-bounded computation
  • 1 Eventown-Oddtown
  • 1 Gap Amplification

  • Refine by Type
  • 8 document

  • Refine by Publication Year
  • 2 2019
  • 2 2020
  • 2 2021
  • 1 2017
  • 1 2022
