16 Search Results for "Ghazi, Badih"


Document
Differentially Private Aggregation via Imperfect Shuffling

Authors: Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Jelani Nelson, and Samson Zhou

Published in: LIPIcs, Volume 267, 4th Conference on Information-Theoretic Cryptography (ITC 2023)


Abstract
In this paper, we introduce the imperfect shuffle differential privacy model, where messages sent from users are shuffled in an almost uniform manner before being observed by a curator for private aggregation. We then consider the private summation problem. We show that the standard split-and-mix protocol by Ishai et al. [FOCS 2006] can be adapted to achieve near-optimal utility bounds in the imperfect shuffle model. Specifically, we show that, surprisingly, no additional error overhead is necessary in the imperfect shuffle model.
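For readers unfamiliar with split-and-mix, the following is a minimal, non-private toy sketch in Python (the function name is illustrative, and an in-memory uniform shuffle stands in for the shuffler): each user splits its input into additive shares modulo q, the shuffler mixes all shares together, and the analyzer sums what it receives. The paper's contribution concerns what happens when this shuffle is only almost uniform.

import random

def split_and_mix_round(inputs, q=2**16, shares_per_user=3):
    # Each user splits its value into additive shares mod q; only the
    # shuffled multiset of shares reaches the analyzer.
    messages = []
    for x in inputs:
        shares = [random.randrange(q) for _ in range(shares_per_user - 1)]
        shares.append((x - sum(shares)) % q)  # shares sum to x mod q
        messages.extend(shares)
    random.shuffle(messages)  # stand-in for a (perfect) shuffler
    return sum(messages) % q  # analyzer recovers the sum of all inputs mod q

print(split_and_mix_round([3, 5, 7]))  # 15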

Cite as

Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Jelani Nelson, and Samson Zhou. Differentially Private Aggregation via Imperfect Shuffling. In 4th Conference on Information-Theoretic Cryptography (ITC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 267, pp. 17:1-17:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{ghazi_et_al:LIPIcs.ITC.2023.17,
  author =	{Ghazi, Badih and Kumar, Ravi and Manurangsi, Pasin and Nelson, Jelani and Zhou, Samson},
  title =	{{Differentially Private Aggregation via Imperfect Shuffling}},
  booktitle =	{4th Conference on Information-Theoretic Cryptography (ITC 2023)},
  pages =	{17:1--17:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-271-6},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{267},
  editor =	{Chung, Kai-Min},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2023.17},
  URN =		{urn:nbn:de:0030-drops-183453},
  doi =		{10.4230/LIPIcs.ITC.2023.17},
  annote =	{Keywords: Differential privacy, private summation, shuffle model}
}
Document
Track A: Algorithms, Complexity and Games
On Differentially Private Counting on Trees

Authors: Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, and Kewen Wu

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
We study the problem of performing counting queries at different levels in hierarchical structures while preserving individuals' privacy. Motivated by applications, we propose a new error measure for this problem by considering a combination of multiplicative and additive approximation to the query results. We examine known mechanisms in differential privacy (DP) and prove their optimality, under this measure, in the pure-DP setting. In the approximate-DP setting, we design new algorithms achieving significant improvements over known ones.
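As a point of reference, the simplest known pure-DP mechanism in this setting adds independent Laplace noise to the count at every node of the hierarchy; below is a minimal sketch of that baseline (names are ours), assuming add/remove neighbors and that each individual's item lies on a single leaf-to-root path of length depth. The paper's error measure and its improved approximate-DP algorithms go beyond this baseline.

import numpy as np

def noisy_tree_counts(node_counts, depth, eps):
    # Baseline: one Laplace draw per node. One individual's item affects at
    # most `depth` counts (its leaf-to-root path), so noise of scale
    # depth / eps gives eps-DP by basic composition.
    scale = depth / eps
    return {node: count + np.random.laplace(scale=scale)
            for node, count in node_counts.items()}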

Cite as

Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, and Kewen Wu. On Differentially Private Counting on Trees. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 66:1-66:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{ghazi_et_al:LIPIcs.ICALP.2023.66,
  author =	{Ghazi, Badih and Kamath, Pritish and Kumar, Ravi and Manurangsi, Pasin and Wu, Kewen},
  title =	{{On Differentially Private Counting on Trees}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{66:1--66:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.66},
  URN =		{urn:nbn:de:0030-drops-181186},
  doi =		{10.4230/LIPIcs.ICALP.2023.66},
  annote =	{Keywords: Differential Privacy, Algorithms, Trees, Hierarchies}
}
Document
Track A: Algorithms, Complexity and Games
Decidability of Fully Quantum Nonlocal Games with Noisy Maximally Entangled States

Authors: Minglong Qin and Penghui Yao

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
This paper considers the decidability of fully quantum nonlocal games with noisy maximally entangled states. Fully quantum nonlocal games are a generalization of nonlocal games, where both questions and answers are quantum and the referee performs a binary POVM measurement to decide whether they win the game after receiving the quantum answers from the players. The quantum value of a fully quantum nonlocal game is the supremum of the probability that they win the game, where the supremum is taken over all the possible entangled states shared between the players and all the valid quantum operations performed by the players. The seminal work MIP^* = RE [Zhengfeng Ji et al., 2020; Zhengfeng Ji et al., 2020] implies that it is undecidable to approximate the quantum value of a fully nonlocal game. This still holds even if the players are only allowed to share (arbitrarily many copies of) maximally entangled states. This paper investigates the case that the shared maximally entangled states are noisy. We prove that there is a computable upper bound on the copies of noisy maximally entangled states for the players to win a fully quantum nonlocal game with a probability arbitrarily close to the quantum value. This implies that it is decidable to approximate the quantum values of these games. Hence, the hardness of approximating the quantum value of a fully quantum nonlocal game is not robust against the noise in the shared states. This paper is built on the framework for the decidability of non-interactive simulations of joint distributions [Badih Ghazi et al., 2016; De et al., 2018; Ghazi et al., 2018] and generalizes the analogous result for nonlocal games in [Qin and Yao, 2021]. We extend the theory of Fourier analysis to the space of super-operators and prove several key results including an invariance principle and a dimension reduction for super-operators. These results are interesting in their own right and are believed to have further applications.

Cite as

Minglong Qin and Penghui Yao. Decidability of Fully Quantum Nonlocal Games with Noisy Maximally Entangled States. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 97:1-97:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{qin_et_al:LIPIcs.ICALP.2023.97,
  author =	{Qin, Minglong and Yao, Penghui},
  title =	{{Decidability of Fully Quantum Nonlocal Games with Noisy Maximally Entangled States}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{97:1--97:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.97},
  URN =		{urn:nbn:de:0030-drops-181499},
  doi =		{10.4230/LIPIcs.ICALP.2023.97},
  annote =	{Keywords: Fully quantum nonlocal games, Fourier analysis, Dimension reduction}
}
Document
Algorithms with More Granular Differential Privacy Guarantees

Authors: Badih Ghazi, Ravi Kumar, Pasin Manurangsi, and Thomas Steinke

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Differential privacy is often applied with a privacy parameter that is larger than the theory suggests is ideal; various informal justifications for tolerating large privacy parameters have been proposed. In this work, we consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis. We study several basic data analysis and learning tasks in this framework, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person (i.e., all the attributes).
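A toy illustration of the accounting gap that motivates per-attribute guarantees (the binary-attribute setup and names are ours, not the paper's): releasing one noisy count per attribute is eps-DP with respect to changing any single attribute, while the guarantee for an entire d-attribute record degrades to d * eps under basic composition.

import numpy as np

def per_attribute_counts(records, eps_attr):
    # records: (n, d) array with entries in {0, 1}.
    # Changing one attribute of one record moves exactly one count by 1, so
    # each noisy count is eps_attr-DP per attribute; for a full record of
    # d attributes, basic composition only certifies d * eps_attr.
    records = np.asarray(records)
    noise = np.random.laplace(scale=1.0 / eps_attr, size=records.shape[1])
    return records.sum(axis=0) + noise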

Cite as

Badih Ghazi, Ravi Kumar, Pasin Manurangsi, and Thomas Steinke. Algorithms with More Granular Differential Privacy Guarantees. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 54:1-54:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{ghazi_et_al:LIPIcs.ITCS.2023.54,
  author =	{Ghazi, Badih and Kumar, Ravi and Manurangsi, Pasin and Steinke, Thomas},
  title =	{{Algorithms with More Granular Differential Privacy Guarantees}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{54:1--54:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.54},
  URN =		{urn:nbn:de:0030-drops-175574},
  doi =		{10.4230/LIPIcs.ITCS.2023.54},
  annote =	{Keywords: Differential Privacy, Algorithms, Per-Attribute Privacy}
}
Document
Private Counting of Distinct and k-Occurring Items in Time Windows

Authors: Badih Ghazi, Ravi Kumar, Jelani Nelson, and Pasin Manurangsi

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
In this work, we study the task of estimating the numbers of distinct and k-occurring items in a time window under the constraint of differential privacy (DP). We consider several variants depending on whether the queries are on general time windows (between times t₁ and t₂), or are restricted to being cumulative (between times 1 and t₂), and depending on whether the DP neighboring relation is event-level or the more stringent item-level. We obtain nearly tight upper and lower bounds on the errors of DP algorithms for these problems. En route, we obtain an event-level DP algorithm for estimating, at each time step, the number of distinct items seen over the last W updates with error polylogarithmic in W; this answers an open question of Bolot et al. (ICDT 2013).

Cite as

Badih Ghazi, Ravi Kumar, Jelani Nelson, and Pasin Manurangsi. Private Counting of Distinct and k-Occurring Items in Time Windows. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 55:1-55:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{ghazi_et_al:LIPIcs.ITCS.2023.55,
  author =	{Ghazi, Badih and Kumar, Ravi and Nelson, Jelani and Manurangsi, Pasin},
  title =	{{Private Counting of Distinct and k-Occurring Items in Time Windows}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{55:1--55:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.55},
  URN =		{urn:nbn:de:0030-drops-175580},
  doi =		{10.4230/LIPIcs.ITCS.2023.55},
  annote =	{Keywords: Differential Privacy, Algorithms, Distinct Elements, Time Windows}
}
Document
Track A: Algorithms, Complexity and Games
Improved Approximation Algorithms and Lower Bounds for Search-Diversification Problems

Authors: Amir Abboud, Vincent Cohen-Addad, Euiwoong Lee, and Pasin Manurangsi

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We study several questions related to diversifying search results. We give improved approximation algorithms in each of the following problems, together with some lower bounds. 1) We give a polynomial-time approximation scheme (PTAS) for a diversified search ranking problem [Nikhil Bansal et al., 2010] whose objective is to minimize the discounted cumulative gain. Our PTAS runs in time n^{2^O(log(1/ε)/ε)} ⋅ m^O(1) where n denotes the number of elements in the database and m denotes the number of constraints. Complementing this result, we show that no PTAS can run in time f(ε) ⋅ (nm)^{2^o(1/ε)} assuming Gap-ETH and therefore our running time is nearly tight. Both our upper and lower bounds answer open questions from [Nikhil Bansal et al., 2010]. 2) We next consider the Max-Sum Dispersion problem, whose objective is to select k out of n elements from a database that maximize the dispersion, which is defined as the sum of the pairwise distances under a given metric. We give a quasipolynomial-time approximation scheme (QPTAS) for the problem which runs in time n^{O_ε(log n)}. This improves upon previously known polynomial-time algorithms with approximation ratio 0.5 [Refael Hassin et al., 1997; Allan Borodin et al., 2017]. Furthermore, we observe that reductions from previous work rule out approximation schemes that run in n^õ_ε(log n) time assuming ETH. 3) Finally, we consider a generalization of Max-Sum Dispersion called Max-Sum Diversification. In addition to the sum of pairwise distances, the objective also includes another function f. For a monotone submodular function f, we give a quasipolynomial-time algorithm with approximation ratio arbitrarily close to (1-1/e). This improves upon the best polynomial-time algorithm which has approximation ratio 0.5 [Allan Borodin et al., 2017]. Furthermore, the (1-1/e) factor is also tight as achieving better-than-(1-1/e) approximation is NP-hard [Uriel Feige, 1998].
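For concreteness, the polynomial-time 0.5-approximation for Max-Sum Dispersion that the QPTAS improves on can be sketched as the classical greedy that repeatedly adds the farthest remaining pair. The helper below is a simplified reading of that greedy (shown for even k, with an arbitrary metric dist passed in), not the paper's scheme.

from itertools import combinations

def greedy_dispersion(points, k, dist):
    # Classical greedy 0.5-approximation (k even): repeatedly add the two
    # remaining points that are farthest apart under the metric `dist`.
    remaining, chosen = set(range(len(points))), []
    while len(chosen) < k:
        u, v = max(combinations(remaining, 2),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
        chosen += [u, v]
        remaining -= {u, v}
    return chosen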

Cite as

Amir Abboud, Vincent Cohen-Addad, Euiwoong Lee, and Pasin Manurangsi. Improved Approximation Algorithms and Lower Bounds for Search-Diversification Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{abboud_et_al:LIPIcs.ICALP.2022.7,
  author =	{Abboud, Amir and Cohen-Addad, Vincent and Lee, Euiwoong and Manurangsi, Pasin},
  title =	{{Improved Approximation Algorithms and Lower Bounds for Search-Diversification Problems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.7},
  URN =		{urn:nbn:de:0030-drops-163481},
  doi =		{10.4230/LIPIcs.ICALP.2022.7},
  annote =	{Keywords: Approximation Algorithms, Complexity, Data Mining, Diversification}
}
Document
Track A: Algorithms, Complexity and Games
High-Probability List-Recovery, and Applications to Heavy Hitters

Authors: Dean Doron and Mary Wootters

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
An error correcting code 𝒞 : Σ^k → Σⁿ is efficiently list-recoverable from input list size 𝓁 if for any sets ℒ₁, …, ℒ_n ⊆ Σ of size at most 𝓁, one can efficiently recover the list ℒ = {x ∈ Σ^k : ∀ j ∈ [n], 𝒞(x)_j ∈ ℒ_j}. While list-recovery has been well-studied in error correcting codes, all known constructions with "efficient" algorithms are not efficient in the parameter 𝓁. In this work, motivated by applications in algorithm design and pseudorandomness, we study list-recovery with the goal of obtaining a good dependence on 𝓁. We make a step towards this goal by obtaining it in the weaker case where we allow a randomized encoding map and a small failure probability, and where the input lists are derived from unions of codewords. As an application of our construction, we give a data structure for the heavy hitters problem in the strict turnstile model that, for some parameter regimes, obtains stronger guarantees than known constructions.
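The definition above translates directly into a brute-force procedure, sketched below for intuition (the names are ours). It is exponential in k; efficient list-recovery is precisely about avoiding this enumeration.

from itertools import product

def brute_force_list_recover(encode, k, alphabet, input_lists):
    # Returns L = {x in Sigma^k : C(x)_j is in L_j for every coordinate j}.
    return [x for x in product(alphabet, repeat=k)
            if all(symbol in L for symbol, L in zip(encode(x), input_lists))]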

Cite as

Dean Doron and Mary Wootters. High-Probability List-Recovery, and Applications to Heavy Hitters. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 55:1-55:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{doron_et_al:LIPIcs.ICALP.2022.55,
  author =	{Doron, Dean and Wootters, Mary},
  title =	{{High-Probability List-Recovery, and Applications to Heavy Hitters}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{55:1--55:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.55},
  URN =		{urn:nbn:de:0030-drops-163961},
  doi =		{10.4230/LIPIcs.ICALP.2022.55},
  annote =	{Keywords: List recoverable codes, Heavy Hitters, high-dimensional expanders}
}
Document
Privately Answering Counting Queries with Generalized Gaussian Mechanisms

Authors: Arun Ganesh and Jiazheng Zhao

Published in: LIPIcs, Volume 192, 2nd Symposium on Foundations of Responsible Computing (FORC 2021)


Abstract
We give the first closed-form privacy guarantees for the Generalized Gaussian mechanism (the mechanism that adds noise x to a vector with probability proportional to exp(-(||x||_p/σ)^p) for some σ, p), in the setting of answering k counting (i.e. sensitivity-1) queries about a database with (ε, δ)-differential privacy (in particular, with low 𝓁_∞-error). Just using Generalized Gaussian noise, we obtain a mechanism such that if the true answers to the queries are the vector d, the mechanism outputs answers d̃ with the 𝓁_∞-error guarantee: 𝔼[||d̃ - d||_∞] = O(√{k log log k log(1/δ)}/ε). This matches the error bound of [Steinke and Ullman, 2017], but using a much simpler mechanism. By composing this mechanism with the sparse vector mechanism (generalizing a technique of [Steinke and Ullman, 2017]), we obtain a mechanism improving the √{k log log k} dependence on k to √{k log log log k}. Our main technical contribution is showing that certain powers of Generalized Gaussians, which follow a Generalized Gamma distribution, are sub-gamma. In subsequent work, the optimal 𝓁_∞-error bound of O(√{k log (1/δ)}/ε) has been achieved by [Yuval Dagan and Gil Kur, 2020] and [Badih Ghazi et al., 2020] independently. However, the Generalized Gaussian mechanism has some qualitative advantages over the mechanisms used in these papers which may make it of interest to both practitioners and theoreticians, both in the setting of answering counting queries and more generally.
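Because the density exp(-(||x||_p/σ)^p) factorizes across coordinates, sampling the mechanism's noise is straightforward; a sketch (ours, using the fact that (|x|/σ)^p is Gamma(1/p)-distributed per coordinate) follows. Setting p = 1 recovers Laplace noise and p = 2 a scaled Gaussian.

import numpy as np

def generalized_gaussian_noise(k, sigma, p, rng=None):
    # Sample a k-dimensional vector with density proportional to
    # exp(-(||x||_p / sigma)^p): per coordinate, draw (|x|/sigma)^p from
    # Gamma(1/p, 1), invert, and attach a random sign.
    rng = rng or np.random.default_rng()
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=k)
    signs = rng.choice([-1.0, 1.0], size=k)
    return signs * sigma * g ** (1.0 / p)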

Cite as

Arun Ganesh and Jiazheng Zhao. Privately Answering Counting Queries with Generalized Gaussian Mechanisms. In 2nd Symposium on Foundations of Responsible Computing (FORC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 192, pp. 1:1-1:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{ganesh_et_al:LIPIcs.FORC.2021.1,
  author =	{Ganesh, Arun and Zhao, Jiazheng},
  title =	{{Privately Answering Counting Queries with Generalized Gaussian Mechanisms}},
  booktitle =	{2nd Symposium on Foundations of Responsible Computing (FORC 2021)},
  pages =	{1:1--1:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-187-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{192},
  editor =	{Ligett, Katrina and Gupta, Swati},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2021.1},
  URN =		{urn:nbn:de:0030-drops-138698},
  doi =		{10.4230/LIPIcs.FORC.2021.1},
  annote =	{Keywords: Differential privacy, counting queries, Generalized Gaussians}
}
Document
On Distributed Differential Privacy and Counting Distinct Elements

Authors: Lijie Chen, Badih Ghazi, Ravi Kumar, and Pasin Manurangsi

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We study the setup where each of n users holds an element from a discrete set, and the goal is to count the number of distinct elements across all users, under the constraint of (ε,δ)-differential privacy: - In the non-interactive local setting, we prove that the additive error of any protocol is Ω(n) for any constant ε and for any δ inverse polynomial in n. - In the single-message shuffle setting, we prove a lower bound of Ω̃(n) on the error for any constant ε and for some δ inverse quasi-polynomial in n. We do so by building on the moment-matching method from the literature on distribution estimation. - In the multi-message shuffle setting, we give a protocol with at most one message per user in expectation and with an error of Õ(√n) for any constant ε and for any δ inverse polynomial in n. Our protocol is also robustly shuffle private, and our error of √n matches a known lower bound for such protocols. Our proof technique relies on a new notion, that we call dominated protocols, and which can also be used to obtain the first non-trivial lower bounds against multi-message shuffle protocols for the well-studied problems of selection and learning parity. Our first lower bound for estimating the number of distinct elements provides the first ω(√n) separation between global sensitivity and error in local differential privacy, thus answering an open question of Vadhan (2017). We also provide a simple construction that gives an Ω̃(n) separation between global sensitivity and error in two-party differential privacy, thereby answering an open question of McGregor et al. (2011).
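For contrast with these distributed lower bounds, the central-model baseline behind the sensitivity/error separations is simple: the number of distinct elements has global sensitivity 1, so a trusted curator only needs Laplace noise. A minimal sketch (ours), assuming each user contributes one element:

import numpy as np

def central_dp_distinct_count(user_elements, eps):
    # Adding or removing one user's element changes the distinct count by at
    # most 1 (global sensitivity 1), so Laplace(1/eps) noise gives eps-DP with
    # O(1/eps) expected error -- versus the Omega(n) local-model bound above.
    return len(set(user_elements)) + np.random.laplace(scale=1.0 / eps)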

Cite as

Lijie Chen, Badih Ghazi, Ravi Kumar, and Pasin Manurangsi. On Distributed Differential Privacy and Counting Distinct Elements. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 56:1-56:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.ITCS.2021.56,
  author =	{Chen, Lijie and Ghazi, Badih and Kumar, Ravi and Manurangsi, Pasin},
  title =	{{On Distributed Differential Privacy and Counting Distinct Elements}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{56:1--56:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.56},
  URN =		{urn:nbn:de:0030-drops-135953},
  doi =		{10.4230/LIPIcs.ITCS.2021.56},
  annote =	{Keywords: Differential Privacy, Shuffle Model}
}
Document
RANDOM
When Is Amplification Necessary for Composition in Randomized Query Complexity?

Authors: Shalev Ben-David, Mika Göös, Robin Kothari, and Thomas Watson

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
Suppose we have randomized decision trees for an outer function f and an inner function g. The natural approach for obtaining a randomized decision tree for the composed function (f∘ gⁿ)(x¹,…,xⁿ) = f(g(x¹),…,g(xⁿ)) involves amplifying the success probability of the decision tree for g, so that a union bound can be used to bound the error probability over all the coordinates. The amplification introduces a logarithmic factor cost overhead. We study the question: When is this log factor necessary? We show that when the outer function is parity or majority, the log factor can be necessary, even for models that are more powerful than plain randomized decision trees. Our results are related to, but qualitatively strengthen in various ways, known results about decision trees with noisy inputs.
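The amplification step in question is just independent repetition plus a per-coordinate majority vote; the small sketch below (function names are ours) makes the log-factor overhead explicit.

from statistics import mode

def amplified_eval(noisy_g, z, repetitions):
    # Run the constant-error randomized evaluator for g `repetitions` times on
    # the same input and take a majority vote; about log n repetitions drive
    # the per-coordinate error low enough for a union bound over n coordinates.
    return mode([noisy_g(z) for _ in range(repetitions)])

def composed_eval(f, noisy_g, xs, repetitions):
    # Evaluate (f o g^n)(x^1, ..., x^n); the repetitions are exactly the
    # log-factor overhead whose necessity the paper studies.
    return f([amplified_eval(noisy_g, x, repetitions) for x in xs])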

Cite as

Shalev Ben-David, Mika Göös, Robin Kothari, and Thomas Watson. When Is Amplification Necessary for Composition in Randomized Query Complexity?. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 28:1-28:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{bendavid_et_al:LIPIcs.APPROX/RANDOM.2020.28,
  author =	{Ben-David, Shalev and G\"{o}\"{o}s, Mika and Kothari, Robin and Watson, Thomas},
  title =	{{When Is Amplification Necessary for Composition in Randomized Query Complexity?}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{28:1--28:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.28},
  URN =		{urn:nbn:de:0030-drops-126316},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.28},
  annote =	{Keywords: Amplification, composition, query complexity}
}
Document
Pure Differentially Private Summation from Anonymous Messages

Authors: Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, Rasmus Pagh, and Ameya Velingker

Published in: LIPIcs, Volume 163, 1st Conference on Information-Theoretic Cryptography (ITC 2020)


Abstract
The shuffled (aka anonymous) model has recently generated significant interest as a candidate distributed privacy framework with trust assumptions better than the central model but with achievable error rates smaller than the local model. In this paper, we study pure differentially private protocols in the shuffled model for summation, a very basic and widely used primitive. Specifically: - For the binary summation problem where each of n users holds a bit as an input, we give a pure ε-differentially private protocol for estimating the number of ones held by the users up to an absolute error of O_{ε}(1), and where each user sends O_{ε}(log n) one-bit messages. This is the first pure protocol in the shuffled model with error o(√n) for constant values of ε. Using our binary summation protocol as a building block, we give a pure ε-differentially private protocol that performs summation of real numbers in [0, 1] up to an absolute error of O_{ε}(1), and where each user sends O_{ε}(log³ n) messages each consisting of O(log log n) bits. - In contrast, we show that for any pure ε-differentially private protocol for binary summation in the shuffled model having absolute error n^{0.5-Ω(1)}, the per user communication has to be at least Ω_{ε}(√{log n}) bits. This implies (i) the first separation between the (bounded-communication) multi-message shuffled model and the central model, and (ii) the first separation between pure and approximate differentially private protocols in the shuffled model. Interestingly, over the course of proving our lower bound, we have to consider (a generalization of) the following question that might be of independent interest: given γ ∈ (0, 1), what is the smallest positive integer m for which there exist two random variables X⁰ and X^1 supported on {0, … , m} such that (i) the total variation distance between X⁰ and X^1 is at least 1 - γ, and (ii) the moment generating functions of X⁰ and X^1 are within a constant factor of each other everywhere? We show that the answer to this question is m = Θ(√{log(1/γ)}).

Cite as

Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, Rasmus Pagh, and Ameya Velingker. Pure Differentially Private Summation from Anonymous Messages. In 1st Conference on Information-Theoretic Cryptography (ITC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 163, pp. 15:1-15:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{ghazi_et_al:LIPIcs.ITC.2020.15,
  author =	{Ghazi, Badih and Golowich, Noah and Kumar, Ravi and Manurangsi, Pasin and Pagh, Rasmus and Velingker, Ameya},
  title =	{{Pure Differentially Private Summation from Anonymous Messages}},
  booktitle =	{1st Conference on Information-Theoretic Cryptography (ITC 2020)},
  pages =	{15:1--15:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-151-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{163},
  editor =	{Tauman Kalai, Yael and Smith, Adam D. and Wichs, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2020.15},
  URN =		{urn:nbn:de:0030-drops-121208},
  doi =		{10.4230/LIPIcs.ITC.2020.15},
  annote =	{Keywords: Pure differential privacy, Shuffled model, Anonymous messages, Summation, Communication bounds}
}
Document
Dimension Reduction for Polynomials over Gaussian Space and Applications

Authors: Badih Ghazi, Pritish Kamath, and Prasad Raghavendra

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
We introduce a new technique for reducing the dimension of the ambient space of low-degree polynomials in the Gaussian space while preserving their relative correlation structure. As an application, we obtain an explicit upper bound on the dimension of an epsilon-optimal noise-stable Gaussian partition. In fact, we address the more general problem of upper bounding the number of samples needed to epsilon-approximate any joint distribution that can be non-interactively simulated from a correlated Gaussian source. Our results significantly improve (from Ackermann-like to "merely" exponential) the upper bounds recently proved on the above problems by De, Mossel & Neeman [CCC 2017, SODA 2018 resp.] and imply decidability of the larger alphabet case of the gap non-interactive simulation problem posed by Ghazi, Kamath & Sudan [FOCS 2016]. Our technique of dimension reduction for low-degree polynomials is simple and can be seen as a generalization of the Johnson-Lindenstrauss lemma and could be of independent interest.
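The classical Johnson-Lindenstrauss transform that this technique generalizes is worth recalling; below is a standard sketch (ours) of that random Gaussian projection, not the paper's polynomial-preserving reduction.

import numpy as np

def jl_project(points, target_dim, rng=None):
    # Classical JL: a random Gaussian matrix scaled by 1/sqrt(target_dim)
    # preserves all pairwise distances up to a (1 +/- eps) factor with high
    # probability once target_dim = O(log n / eps^2).
    rng = rng or np.random.default_rng()
    pts = np.asarray(points, dtype=float)            # shape (n, d)
    projection = rng.normal(size=(pts.shape[1], target_dim)) / np.sqrt(target_dim)
    return pts @ projection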

Cite as

Badih Ghazi, Pritish Kamath, and Prasad Raghavendra. Dimension Reduction for Polynomials over Gaussian Space and Applications. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 28:1-28:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{ghazi_et_al:LIPIcs.CCC.2018.28,
  author =	{Ghazi, Badih and Kamath, Pritish and Raghavendra, Prasad},
  title =	{{Dimension Reduction for Polynomials over Gaussian Space and Applications}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{28:1--28:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.28},
  URN =		{urn:nbn:de:0030-drops-88616},
  doi =		{10.4230/LIPIcs.CCC.2018.28},
  annote =	{Keywords: Dimension reduction, Low-degree Polynomials, Noise Stability, Non-Interactive Simulation}
}
Document
Compression in a Distributed Setting

Authors: Badih Ghazi, Elad Haramaty, Pritish Kamath, and Madhu Sudan

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
Motivated by an attempt to understand the formation and development of (human) language, we introduce a "distributed compression" problem. In our problem, a sequence of pairs of players from a set of K players is chosen and tasked to communicate messages drawn from an unknown distribution Q. Arguably languages are created and evolve to compress frequently occurring messages, and we focus on this aspect. The only knowledge that players have about the distribution Q is from previously drawn samples, but these samples differ from player to player. The only common knowledge between the players is restricted to a common prior distribution P and some constant number of bits of information (such as a learning algorithm). Letting T_epsilon denote the number of iterations it would take for a typical player to obtain an epsilon-approximation to Q in total variation distance, we ask whether T_epsilon iterations suffice to compress the messages down roughly to their entropy and give a partial positive answer. We show that a natural uniform algorithm can compress the communication down to an average cost per message of O(H(Q) + log(D(P || Q))) in tilde{O}(T_epsilon) iterations while allowing for O(epsilon)-error, where D(. || .) denotes the KL-divergence between distributions. For large divergences this compares favorably with the static algorithm that ignores all samples and compresses down to H(Q) + D(P || Q) bits, while not requiring the T_epsilon * K iterations that it would take players to develop optimal but separate compressions for each pair of players. Along the way we introduce a "data-structural" view of the task of communicating with a natural language and show that our natural algorithm can also be implemented by an efficient data structure, whose storage is comparable to the storage requirements of Q and whose query complexity is comparable to the lengths of the messages to be compressed. Our results give a plausible mathematical analogy to the mechanisms by which human languages get created and evolve, and in particular highlight the possibility of coordination towards a joint task (agreeing on a language) while engaging in distributed learning.

Cite as

Badih Ghazi, Elad Haramaty, Pritish Kamath, and Madhu Sudan. Compression in a Distributed Setting. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 19:1-19:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{ghazi_et_al:LIPIcs.ITCS.2017.19,
  author =	{Ghazi, Badih and Haramaty, Elad and Kamath, Pritish and Sudan, Madhu},
  title =	{{Compression in a Distributed Setting}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{19:1--19:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.19},
  URN =		{urn:nbn:de:0030-drops-81763},
  doi =		{10.4230/LIPIcs.ITCS.2017.19},
  annote =	{Keywords: Distributed Compression, Communication, Language Evolution, Isolating Hash Families}
}
Document
On the Power of Learning from k-Wise Queries

Authors: Vitaly Feldman and Badih Ghazi

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
Several well-studied models of access to data samples, including statistical queries, local differential privacy and low-communication algorithms, rely on queries that provide information about a function of a single sample. (For example, a statistical query (SQ) gives an estimate of E_{x ~ D}[q(x)] for any choice of the query function q mapping X to the reals, where D is an unknown data distribution over X.) Yet some data analysis algorithms rely on properties of functions that depend on multiple samples. Such algorithms would be naturally implemented using k-wise queries, each of which is specified by a function q mapping X^k to the reals. Hence it is natural to ask whether algorithms using k-wise queries can solve learning problems more efficiently and by how much. Blum, Kalai and Wasserman (2003) showed that for any weak PAC learning problem over a fixed distribution, the complexity of learning with k-wise SQs is smaller than the (unary) SQ complexity by a factor of at most 2^k. We show that for more general problems over distributions the picture is substantially richer. For every k, the complexity of distribution-independent PAC learning with k-wise queries can be exponentially larger than learning with (k+1)-wise queries. We then give two approaches for simulating a k-wise query using unary queries. The first approach exploits the structure of the problem that needs to be solved. It generalizes and strengthens (exponentially) the results of Blum et al. It allows us to derive strong lower bounds for learning DNF formulas and stochastic constraint satisfaction problems that hold against algorithms using k-wise queries. The second approach exploits the k-party communication complexity of the k-wise query function.
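To make the two query models concrete, here is a toy simulation (ours) of a unary SQ oracle next to a k-wise one; the empirical-mean-plus-noise simulation and the names are illustrative, since a real SQ oracle may answer adversarially within the tolerance tau.

import numpy as np

def unary_sq(samples, q, tau):
    # Estimate E_{x ~ D}[q(x)] to within tolerance tau.
    return float(np.mean([q(x) for x in samples])) + np.random.uniform(-tau, tau)

def k_wise_sq(samples, q, k, tau):
    # The query function now sees k independent samples at once, so it can
    # probe joint properties (e.g., pairwise correlations when k = 2) that no
    # single-sample query measures directly.
    groups = [samples[i:i + k] for i in range(0, len(samples) - k + 1, k)]
    return float(np.mean([q(*g) for g in groups])) + np.random.uniform(-tau, tau)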

Cite as

Vitaly Feldman and Badih Ghazi. On the Power of Learning from k-Wise Queries. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 41:1-41:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{feldman_et_al:LIPIcs.ITCS.2017.41,
  author =	{Feldman, Vitaly and Ghazi, Badih},
  title =	{{On the Power of Learning from k-Wise Queries}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{41:1--41:32},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.41},
  URN =		{urn:nbn:de:0030-drops-81801},
  doi =		{10.4230/LIPIcs.ITCS.2017.41},
  annote =	{Keywords: Statistical Queries, PAC Learning, Differential Privacy, Lower bounds, Communication Complexity}
}
Document
The Power of Shared Randomness in Uncertain Communication

Authors: Badih Ghazi and Madhu Sudan

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
In a recent work (Ghazi et al., SODA 2016), the authors with Komargodski and Kothari initiated the study of communication with contextual uncertainty, a setup aiming to understand how efficient communication is possible when the communicating parties imperfectly share a huge context. In this setting, Alice is given a function f and an input string x, and Bob is given a function g and an input string y. The pair (x,y) comes from a known distribution mu and f and g are guaranteed to be close under this distribution. Alice and Bob wish to compute g(x,y) with high probability. The lack of agreement between Alice and Bob on the function that is being computed captures the uncertainty in the context. The previous work showed that any problem with one-way communication complexity k in the standard model (i.e., without uncertainty, in other words, under the promise that f=g) has public-coin communication at most O(k(1+I)) bits in the uncertain case, where I is the mutual information between x and y. Moreover, a lower bound of Omega(sqrt{I}) bits on the public-coin uncertain communication was also shown. However, an important question that was left open is related to the power that public randomness brings to uncertain communication. Can Alice and Bob achieve efficient communication amid uncertainty without using public randomness? And how powerful are public-coin protocols in overcoming uncertainty? Motivated by these two questions: - We prove the first separation between private-coin uncertain communication and public-coin uncertain communication. Namely, we exhibit a function class for which the communication in the standard model and the public-coin uncertain communication are O(1) while the private-coin uncertain communication is a growing function of n (the length of the inputs). This lower bound (proved with respect to the uniform distribution) is in sharp contrast with the case of public-coin uncertain communication which was shown by the previous work to be within a constant factor from the certain communication. This lower bound also implies the first separation between public-coin uncertain communication and deterministic uncertain communication. Interestingly, we also show that if Alice and Bob imperfectly share a sequence of random bits (a setup weaker than public randomness), then achieving a constant blow-up in communication is still possible. - We improve the lower-bound of the previous work on public-coin uncertain communication. Namely, we exhibit a function class and a distribution (with mutual information I approx n) for which the one-way certain communication is k bits but the one-way public-coin uncertain communication is at least Omega(sqrt{k}*sqrt{I}) bits. Our proofs introduce new problems in the standard communication complexity model and prove lower bounds for these problems. Both the problems and the lower bound techniques may be of general interest.

Cite as

Badih Ghazi and Madhu Sudan. The Power of Shared Randomness in Uncertain Communication. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 49:1-49:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{ghazi_et_al:LIPIcs.ICALP.2017.49,
  author =	{Ghazi, Badih and Sudan, Madhu},
  title =	{{The Power of Shared Randomness in Uncertain Communication}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{49:1--49:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.49},
  URN =		{urn:nbn:de:0030-drops-74871},
  doi =		{10.4230/LIPIcs.ICALP.2017.49},
  annote =	{Keywords: randomness, uncertainty, communication, imperfectly shared randomness, lower bounds}
}
  • Refine by Author
  • 11 Ghazi, Badih
  • 7 Manurangsi, Pasin
  • 6 Kumar, Ravi
  • 3 Kamath, Pritish
  • 2 Nelson, Jelani
  • Show More...

  • Refine by Classification
  • 3 Security and privacy → Privacy-preserving protocols
  • 3 Theory of computation → Theory of database privacy and security
  • 1 Mathematics of computing → Probabilistic algorithms
  • 1 Security and privacy → Human and societal aspects of security and privacy
  • 1 Theory of computation → Approximation algorithms analysis
  • Show More...

  • Refine by Keyword
  • 5 Differential Privacy
  • 3 Algorithms
  • 2 Differential privacy
  • 2 Dimension reduction
  • 1 Amplification
  • Show More...

  • Refine by Type
  • 16 document

  • Refine by Publication Year
  • 5 2023
  • 3 2017
  • 2 2020
  • 2 2021
  • 2 2022
  • Show More...
