29 Search Results for "Mossel, Elchanan"


Document
Influence Maximization in Ising Models

Authors: Zongchen Chen and Elchanan Mossel

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Given a complex high-dimensional distribution over {±1}ⁿ, what is the best way to increase the expected number of +1’s by controlling the values of only a small number of variables? Such a problem is known as influence maximization and has been widely studied in social networks, biology, and computer science. In this paper, we consider influence maximization on the Ising model, which is a prototypical example of undirected graphical models and has wide applications in many real-world problems. We establish a sharp computational phase transition for influence maximization on sparse Ising models under a bounded budget: in the high-temperature regime, we give a linear-time algorithm for finding a small subset of variables and their values which achieve nearly optimal influence; in the low-temperature regime, we show that the influence maximization problem cannot be solved in polynomial time under commonly believed complexity assumptions. The critical temperature coincides with the tree uniqueness/non-uniqueness threshold for Ising models, which is also a critical point for other computational problems including approximate sampling and counting.
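As a purely illustrative companion to this abstract (not the paper's algorithm), the Python sketch below estimates the expected number of +1 spins of a small Ising model by Glauber dynamics and greedily pins a budget of variables to +1; all model and parameter names are made up for the example.

import math, random

def glauber_mean_sum(J, h, pinned, n, sweeps=2000, seed=0):
    # Estimate E[sum_i sigma_i] for an Ising model with couplings J[(i,j)] (i<j),
    # external fields h[i], and a set of variables pinned to +1, via Glauber dynamics.
    rng = random.Random(seed)
    sigma = [1 if i in pinned else rng.choice([-1, 1]) for i in range(n)]
    total, samples = 0.0, 0
    for t in range(sweeps * n):
        i = rng.randrange(n)
        if i not in pinned:
            field = h.get(i, 0.0) + sum(J.get((min(i, j), max(i, j)), 0.0) * sigma[j]
                                        for j in range(n) if j != i)
            sigma[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-2.0 * field)) else -1
        if t > sweeps * n // 2:          # crude burn-in, then average
            total += sum(sigma)
            samples += 1
    return total / samples

def greedy_pinning(J, h, n, budget):
    # Greedily pin one variable at a time to +1 by largest estimated gain.
    pinned = set()
    for _ in range(budget):
        best = max((i for i in range(n) if i not in pinned),
                   key=lambda i: glauber_mean_sum(J, h, pinned | {i}, n))
        pinned.add(best)
    return pinned

# Toy instance: a ferromagnetic 6-cycle with coupling strength 0.3 and no fields.
n, beta = 6, 0.3
J = {(min(i, (i + 1) % n), max(i, (i + 1) % n)): beta for i in range(n)}
print(greedy_pinning(J, {}, n, budget=2))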

Cite as

Zongchen Chen and Elchanan Mossel. Influence Maximization in Ising Models. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 30:1-30:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{chen_et_al:LIPIcs.ITCS.2024.30,
  author =	{Chen, Zongchen and Mossel, Elchanan},
  title =	{{Influence Maximization in Ising Models}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{30:1--30:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.30},
  URN =		{urn:nbn:de:0030-drops-195588},
  doi =		{10.4230/LIPIcs.ITCS.2024.30},
  annote =	{Keywords: Influence maximization, Ising model, phase transition, correlation decay}
}
Document
Is This Correct? Let’s Check!

Authors: Omri Ben-Eliezer, Dan Mikulincer, Elchanan Mossel, and Madhu Sudan

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
Societal accumulation of knowledge is a complex process. The correctness of new units of knowledge depends not only on the correctness of new reasoning, but also on the correctness of old units that the new one builds on. The errors in such accumulation processes are often remedied by error correction and detection heuristics. Motivating examples include the scientific process based on scientific publications, and software development based on libraries of code. Natural processes that aim to keep errors under control, such as peer review in scientific publications, and testing and debugging in software development, would typically check existing pieces of knowledge, both for the reasoning that generated them and for the previous facts they rely on. In this work, we present a simple process that models such accumulation of knowledge and study the persistence (or lack thereof) of errors. We consider a simple probabilistic model for the generation of new units of knowledge based on the preferential attachment growth model, which additionally allows for errors. Furthermore, the process includes checks aimed at catching these errors. We investigate when the effects of errors persist forever in the system (with positive probability) and when they get rooted out completely by the checking process. The two basic parameters associated with the checking process are the probability of conducting a check and the depth of the check. We show that errors are rooted out if checks are sufficiently frequent and sufficiently deep. In contrast, shallow or infrequent checks are insufficient to root out errors.
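A minimal toy simulation of this kind of process (parameter names and model details are assumptions for illustration, not the exact model of the paper): new units attach by preferential attachment, carry their own error with probability eps, and a depth-limited check fires with probability p_check and discards a unit if it finds an error.

import random

def simulate(T, eps, p_check, depth, seed=0):
    # Toy model: units attach preferentially; a unit is false if it introduces an
    # error itself or builds on a false ancestor; checks inspect bounded depth.
    rng = random.Random(seed)
    parents = [None]        # node 0 is a correct "axiom"
    own_error = [False]     # did this unit introduce an error itself?
    is_false = [False]      # false because of its own error or a false ancestor
    attach = [0]            # each node appears once per unit of attachment weight
    for _ in range(T):
        parent = rng.choice(attach)
        err = rng.random() < eps
        if rng.random() < p_check:
            node, caught = parent, err
            for _ in range(depth):      # inspect the new unit and `depth` ancestors
                if node is None:
                    break
                caught = caught or own_error[node]
                node = parents[node]
            if caught:
                continue                # the check found an error: discard the unit
        new = len(parents)
        parents.append(parent)
        own_error.append(err)
        is_false.append(err or is_false[parent])
        attach.extend([parent, new])
    return sum(is_false) / len(is_false)

# Fraction of (surviving) false units under frequent, moderately deep checks.
print(simulate(T=50000, eps=0.01, p_check=0.5, depth=3))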

Cite as

Omri Ben-Eliezer, Dan Mikulincer, Elchanan Mossel, and Madhu Sudan. Is This Correct? Let’s Check!. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 15:1-15:11, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)



@InProceedings{beneliezer_et_al:LIPIcs.ITCS.2023.15,
  author =	{Ben-Eliezer, Omri and Mikulincer, Dan and Mossel, Elchanan and Sudan, Madhu},
  title =	{{Is This Correct? Let’s Check!}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{15:1--15:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.15},
  URN =		{urn:nbn:de:0030-drops-175180},
  doi =		{10.4230/LIPIcs.ITCS.2023.15},
  annote =	{Keywords: Error Propagation, Preferential Attachment}
}
Document
On the Round Complexity of Randomized Byzantine Agreement

Authors: Ran Cohen, Iftach Haitner, Nikolaos Makriyannis, Matan Orland, and Alex Samorodnitsky

Published in: LIPIcs, Volume 146, 33rd International Symposium on Distributed Computing (DISC 2019)


Abstract
We prove lower bounds on the round complexity of randomized Byzantine agreement (BA) protocols, bounding the halting probability of such protocols after one and two rounds. In particular, we prove that: 1) BA protocols resilient against n/3 [resp., n/4] corruptions terminate (under attack) at the end of the first round with probability at most o(1) [resp., 1/2+ o(1)]. 2) BA protocols resilient against n/4 corruptions terminate at the end of the second round with probability at most 1-Theta(1). 3) For a large class of protocols (including all BA protocols used in practice) and under a plausible combinatorial conjecture, BA protocols resilient against n/3 [resp., n/4] corruptions terminate at the end of the second round with probability at most o(1) [resp., 1/2 + o(1)]. The above bounds hold even when the parties use a trusted setup phase, e.g., a public-key infrastructure (PKI). The third bound essentially matches the recent protocol of Micali (ITCS'17) that tolerates up to n/3 corruptions and terminates at the end of the third round with constant probability.

Cite as

Ran Cohen, Iftach Haitner, Nikolaos Makriyannis, Matan Orland, and Alex Samorodnitsky. On the Round Complexity of Randomized Byzantine Agreement. In 33rd International Symposium on Distributed Computing (DISC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 146, pp. 12:1-12:17, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)



@InProceedings{cohen_et_al:LIPIcs.DISC.2019.12,
  author =	{Cohen, Ran and Haitner, Iftach and Makriyannis, Nikolaos and Orland, Matan and Samorodnitsky, Alex},
  title =	{{On the Round Complexity of Randomized Byzantine Agreement}},
  booktitle =	{33rd International Symposium on Distributed Computing (DISC 2019)},
  pages =	{12:1--12:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-126-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{146},
  editor =	{Suomela, Jukka},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2019.12},
  URN =		{urn:nbn:de:0030-drops-113199},
  doi =		{10.4230/LIPIcs.DISC.2019.12},
  annote =	{Keywords: Byzantine agreement, lower bound, round complexity}
}
Document
Approximating the Orthogonality Dimension of Graphs and Hypergraphs

Authors: Ishay Haviv

Published in: LIPIcs, Volume 138, 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019)


Abstract
A t-dimensional orthogonal representation of a hypergraph is an assignment of nonzero vectors in R^t to its vertices, such that every hyperedge contains two vertices whose vectors are orthogonal. The orthogonality dimension of a hypergraph H, denoted by overline{xi}(H), is the smallest integer t for which there exists a t-dimensional orthogonal representation of H. In this paper we study computational aspects of the orthogonality dimension of graphs and hypergraphs. We prove that for every k >= 4, it is NP-hard (resp. quasi-NP-hard) to distinguish n-vertex k-uniform hypergraphs H with overline{xi}(H) <= 2 from those satisfying overline{xi}(H) >= Omega(log^delta n) for some constant delta>0 (resp. overline{xi}(H) >= Omega(log^{1-o(1)} n)). For graphs, we relate the NP-hardness of approximating the orthogonality dimension to a variant of a long-standing conjecture of Stahl. We also consider the algorithmic problem in which given a graph G with overline{xi}(G) <= 3 the goal is to find an orthogonal representation of G of as low dimension as possible, and provide a polynomial time approximation algorithm based on semidefinite programming.
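For concreteness, the definition can be checked mechanically: given candidate vectors, the short verifier below (illustrative only) tests whether every hyperedge contains two vertices whose vectors are orthogonal.

from itertools import combinations
import numpy as np

def is_orthogonal_representation(vectors, hyperedges, tol=1e-9):
    # vectors: dict vertex -> nonzero vector in R^t; hyperedges: iterable of vertex sets.
    # Returns True iff every hyperedge contains two vertices with orthogonal vectors.
    for e in hyperedges:
        if not any(abs(np.dot(vectors[u], vectors[v])) <= tol
                   for u, v in combinations(e, 2)):
            return False
    return True

# Toy example: a single 3-uniform hyperedge; vertices 0 and 1 get orthogonal vectors,
# so t = 2 dimensions suffice here.
vecs = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]), 2: np.array([1.0, 1.0])}
print(is_orthogonal_representation(vecs, [{0, 1, 2}]))   # True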

Cite as

Ishay Haviv. Approximating the Orthogonality Dimension of Graphs and Hypergraphs. In 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 138, pp. 39:1-39:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{haviv:LIPIcs.MFCS.2019.39,
  author =	{Haviv, Ishay},
  title =	{{Approximating the Orthogonality Dimension of Graphs and Hypergraphs}},
  booktitle =	{44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019)},
  pages =	{39:1--39:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-117-7},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{138},
  editor =	{Rossmanith, Peter and Heggernes, Pinar and Katoen, Joost-Pieter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2019.39},
  URN =		{urn:nbn:de:0030-drops-109836},
  doi =		{10.4230/LIPIcs.MFCS.2019.39},
  annote =	{Keywords: orthogonal representations of hypergraphs, orthogonality dimension, hardness of approximation, Kneser and Schrijver graphs, semidefinite programming}
}
Document
Almost Optimal Distribution-Free Junta Testing

Authors: Nader H. Bshouty

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
We consider the problem of testing whether an unknown n-variable Boolean function is a k-junta in the distribution-free property testing model, where the distance between functions is measured with respect to an arbitrary and unknown probability distribution over {0,1}^n. Chen, Liu, Servedio, Sheng and Xie [Zhengyang Liu et al., 2018] showed that the distribution-free k-junta testing can be performed, with one-sided error, by an adaptive algorithm that makes O~(k^2)/epsilon queries. In this paper, we give a simple two-sided error adaptive algorithm that makes O~(k/epsilon) queries.
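For readers unfamiliar with the object being tested, the brute-force check below decides exactly whether a function given as a full truth table is a k-junta, i.e., depends on at most k of its n variables; the point of the property-testing results above is precisely to avoid this exponential-size access and use only few queries. (Illustrative code, not from the paper.)

def depends_on(f, n, i):
    # Does f: {0,1}^n -> {0,1}, given as a truth table of length 2^n, depend on bit i?
    return any(f[x] != f[x ^ (1 << i)] for x in range(1 << n))

def is_k_junta(f, n, k):
    relevant = [i for i in range(n) if depends_on(f, n, i)]
    return len(relevant) <= k

# Example: f = x0 XOR x2 on 4 variables is a 2-junta but not a 1-junta.
n = 4
f = [(x & 1) ^ ((x >> 2) & 1) for x in range(1 << n)]
print(is_k_junta(f, n, 2), is_k_junta(f, n, 1))   # True False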

Cite as

Nader H. Bshouty. Almost Optimal Distribution-Free Junta Testing. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 2:1-2:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bshouty:LIPIcs.CCC.2019.2,
  author =	{Bshouty, Nader H.},
  title =	{{Almost Optimal Distribution-Free Junta Testing}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{2:1--2:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.2},
  URN =		{urn:nbn:de:0030-drops-108249},
  doi =		{10.4230/LIPIcs.CCC.2019.2},
  annote =	{Keywords: Distribution-free property testing, k-Junta}
}
Document
UG-Hardness to NP-Hardness by Losing Half

Authors: Amey Bhangale and Subhash Khot

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
The 2-to-2 Games Theorem of [Subhash Khot et al., 2017; Dinur et al., 2018; Dinur et al., 2018; Dinur et al., 2018] implies that it is NP-hard to distinguish between Unique Games instances with an assignment satisfying at least (1/2-epsilon) fraction of the constraints vs. no assignment satisfying more than epsilon fraction of the constraints, for every constant epsilon>0. We show that the reduction can be transformed in a non-trivial way to give a stronger guarantee in the completeness case: For at least (1/2-epsilon) fraction of the vertices on one side, all the constraints associated with them in the Unique Games instance can be satisfied. We use this guarantee to convert the known UG-hardness results to NP-hardness. We show: 1) Tight inapproximability of independent sets in degree d graphs within a factor of Omega(d/(log^2 d)), where d is a constant. 2) NP-hardness of approximating the Maximum Acyclic Subgraph problem within a factor of 2/3+epsilon, improving the previous ratio of 14/15+epsilon by Austrin et al. [Austrin et al., 2015]. 3) For any predicate P^{-1}(1) subseteq [q]^k supporting a balanced pairwise independent distribution, given a P-CSP instance with value at least 1/2-epsilon, it is NP-hard to satisfy more than (|P^{-1}(1)|/(q^k))+epsilon fraction of constraints.

Cite as

Amey Bhangale and Subhash Khot. UG-Hardness to NP-Hardness by Losing Half. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bhangale_et_al:LIPIcs.CCC.2019.3,
  author =	{Bhangale, Amey and Khot, Subhash},
  title =	{{UG-Hardness to NP-Hardness by Losing Half}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{3:1--3:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.3},
  URN =		{urn:nbn:de:0030-drops-108258},
  doi =		{10.4230/LIPIcs.CCC.2019.3},
  annote =	{Keywords: NP-hardness, Inapproximability, Unique Games Conjecture}
}
Document
Optimality of Linear Sketching Under Modular Updates

Authors: Kaave Hosseini, Shachar Lovett, and Grigory Yaroslavtsev

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
We study the relation between streaming algorithms and linear sketching algorithms, in the context of binary updates. We show that for inputs in n dimensions, the existence of efficient streaming algorithms which can process Omega(n^2) updates implies efficient linear sketching algorithms with comparable cost. This improves upon the previous work of Li, Nguyen and Woodruff [Yi Li et al., 2014] and Ai, Hu, Li and Woodruff [Yuqing Ai et al., 2016] which required a triple-exponential number of updates to achieve a similar result for updates over integers. We extend our results to updates modulo p for integers p >= 2, and to approximation instead of exact computation.
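A linear sketch in this setting stores only M x (mod p) for a fixed random d x n matrix M, so each single-coordinate update touches one column of M; the toy class below (names invented for the example) shows just this mechanic.

import numpy as np

class ModularLinearSketch:
    # Maintain M @ x (mod p) for a random d x n matrix M under single-coordinate updates.
    def __init__(self, n, d, p, seed=0):
        rng = np.random.default_rng(seed)
        self.p = p
        self.M = rng.integers(0, p, size=(d, n))
        self.state = np.zeros(d, dtype=np.int64)

    def update(self, i, delta):
        # x_i <- x_i + delta (mod p); only column i of M is needed
        self.state = (self.state + delta * self.M[:, i]) % self.p

p = 5
sk = ModularLinearSketch(n=1000, d=10, p=p)
x = np.zeros(1000, dtype=np.int64)
for i, delta in [(3, 2), (17, 4), (3, 3)]:        # a tiny stream of updates
    sk.update(i, delta)
    x[i] = (x[i] + delta) % p
assert np.array_equal(sk.state, (sk.M @ x) % p)    # the sketch equals M x mod p
print(sk.state)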

Cite as

Kaave Hosseini, Shachar Lovett, and Grigory Yaroslavtsev. Optimality of Linear Sketching Under Modular Updates. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 13:1-13:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{hosseini_et_al:LIPIcs.CCC.2019.13,
  author =	{Hosseini, Kaave and Lovett, Shachar and Yaroslavtsev, Grigory},
  title =	{{Optimality of Linear Sketching Under Modular Updates}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{13:1--13:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.13},
  URN =		{urn:nbn:de:0030-drops-108355},
  doi =		{10.4230/LIPIcs.CCC.2019.13},
  annote =	{Keywords: communication complexity, linear sketching, streaming algorithm}
}
Document
Being Corrupt Requires Being Clever, But Detecting Corruption Doesn't

Authors: Yan Jin, Elchanan Mossel, and Govind Ramnarayan

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We consider a variation of the problem of corruption detection on networks posed by Alon, Mossel, and Pemantle '15. In this model, each vertex of a graph can be either truthful or corrupt. Each vertex reports about the types (truthful or corrupt) of all its neighbors to a central agency, where truthful nodes report the true types they see and corrupt nodes report adversarially. The central agency aggregates these reports and attempts to find a single truthful node. Inspired by real auditing networks, we pose our problem for arbitrary graphs and consider corruption through a computational lens. We identify a key combinatorial parameter of the graph m(G), which is the minimal number of corrupted agents needed to prevent the central agency from identifying a single corrupt node. We give an efficient (in fact, linear time) algorithm for the central agency to identify a truthful node that is successful whenever the number of corrupt nodes is less than m(G)/2. On the other hand, we prove that for any constant alpha > 1, it is NP-hard to find a subset of nodes S in G such that corrupting S prevents the central agency from finding one truthful node and |S| <= alpha m(G), assuming the Small Set Expansion Hypothesis (Raghavendra and Steurer, STOC '10). We conclude that being corrupt requires being clever, while detecting corruption does not. Our main technical insight is a relation between the minimum number of corrupt nodes required to hide all truthful nodes and a certain notion of vertex separability for the underlying graph. Additionally, this insight lets us design an efficient algorithm for a corrupt party to decide which graphs require the fewest corrupted nodes, up to a multiplicative factor of O(log n).
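To fix ideas about the reporting model (the detection algorithm itself is not spelled out in the abstract, so it is not reproduced here), the snippet below generates the reports the central agency would receive when truthful nodes report honestly and corrupt nodes simply invert every label, which is one possible adversarial strategy chosen only for illustration.

def generate_reports(adj, corrupt):
    # adj: dict node -> set of neighbours; corrupt: set of corrupt nodes.
    # Returns reports[u][v] in {'truthful', 'corrupt'}: what u tells the agency about v.
    # Truthful nodes report honestly; corrupt nodes here lie about everyone.
    reports = {}
    for u, nbrs in adj.items():
        reports[u] = {}
        for v in nbrs:
            truth = 'corrupt' if v in corrupt else 'truthful'
            if u in corrupt:
                reports[u][v] = 'truthful' if truth == 'corrupt' else 'corrupt'
            else:
                reports[u][v] = truth
    return reports

# 4-cycle with one corrupt node.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(generate_reports(adj, corrupt={2}))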

Cite as

Yan Jin, Elchanan Mossel, and Govind Ramnarayan. Being Corrupt Requires Being Clever, But Detecting Corruption Doesn't. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 45:1-45:14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)



@InProceedings{jin_et_al:LIPIcs.ITCS.2019.45,
  author =	{Jin, Yan and Mossel, Elchanan and Ramnarayan, Govind},
  title =	{{Being Corrupt Requires Being Clever, But Detecting Corruption Doesn't}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{45:1--45:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.45},
  URN =		{urn:nbn:de:0030-drops-101388},
  doi =		{10.4230/LIPIcs.ITCS.2019.45},
  annote =	{Keywords: Corruption detection, PMC Model, Small Set Expansion, Hardness of Approximation}
}
Document
Linear Sketching over F_2

Authors: Sampath Kannan, Elchanan Mossel, Swagato Sanyal, and Grigory Yaroslavtsev

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
We initiate a systematic study of linear sketching over F_2. For a given Boolean function treated as f : F_2^n -> F_2, a randomized F_2-sketch is a distribution M over d x n matrices with elements over F_2 such that Mx suffices for computing f(x) with high probability. Such sketches for d << n can be used to design small-space distributed and streaming algorithms. Motivated by these applications we study a connection between F_2-sketching and a two-player one-way communication game for the corresponding XOR-function. We conjecture that F_2-sketching is optimal for this communication game. Our results confirm this conjecture for multiple important classes of functions: 1) low-degree F_2-polynomials, 2) functions with sparse Fourier spectrum, 3) most symmetric functions, 4) recursive majority function. These results rely on a new structural theorem that shows that F_2-sketching is optimal (up to constant factors) for uniformly distributed inputs. Furthermore, we show that (non-uniform) streaming algorithms that have to process random updates over F_2 can be constructed as F_2-sketches for the uniform distribution. In contrast with the previous work of Li, Nguyen and Woodruff (STOC'14), who show an analogous result for linear sketches over integers in the adversarial setting, our result does not require the stream length to be triply exponential in n and holds for streams of length O(n) constructed through uniformly random updates.
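A tiny worked example of an F_2-sketch (an illustrative construction, not one from the paper): if f is the XOR of the bits indexed by a set S, then a sketch matrix whose first row is the indicator of S lets f(x) be read off from Mx over F_2 alone.

import numpy as np

rng = np.random.default_rng(1)
n, d = 64, 8
S = [3, 17, 42]                        # f(x) = XOR of the bits of x indexed by S

# An F_2-sketch: a d x n matrix over F_2 whose first row is the indicator of S,
# padded with random rows (illustrative construction only).
M = rng.integers(0, 2, size=(d, n))
M[0] = 0
M[0, S] = 1

x = rng.integers(0, 2, size=n)
sketch = (M @ x) % 2                   # all the algorithm gets to see is Mx over F_2

f_x = int(np.bitwise_xor.reduce(x[S]))
assert sketch[0] == f_x                # f(x) is recoverable from the sketch alone
print(sketch, f_x)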

Cite as

Sampath Kannan, Elchanan Mossel, Swagato Sanyal, and Grigory Yaroslavtsev. Linear Sketching over F_2. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 8:1-8:37, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2018)



@InProceedings{kannan_et_al:LIPIcs.CCC.2018.8,
  author =	{Kannan, Sampath and Mossel, Elchanan and Sanyal, Swagato and Yaroslavtsev, Grigory},
  title =	{{Linear Sketching over F\underline2}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{8:1--8:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.8},
  URN =		{urn:nbn:de:0030-drops-88819},
  doi =		{10.4230/LIPIcs.CCC.2018.8},
  annote =	{Keywords: Linear sketch, Streaming algorithms, XOR-functions, Communication complexity}
}
Document
Noise Stability Is Computable and Approximately Low-Dimensional

Authors: Anindya De, Elchanan Mossel, and Joe Neeman

Published in: LIPIcs, Volume 79, 32nd Computational Complexity Conference (CCC 2017)


Abstract
Questions of noise stability play an important role in hardness of approximation in computer science as well as in the theory of voting. In many applications, the goal is to find an optimizer of noise stability among all possible partitions of R^n, for n >= 1, into k parts with given Gaussian measures mu_1, ..., mu_k. We call a partition epsilon-optimal if its noise stability is optimal up to an additive epsilon. In this paper, we give an explicit, computable function n(epsilon) such that an epsilon-optimal partition exists in R^{n(epsilon)}. This result has implications for the computability of certain problems in non-interactive simulation, which are addressed in a subsequent work.
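As a quick numerical illustration of the quantity being optimized, the Monte Carlo below estimates the Gaussian noise stability of a two-part halfspace partition of R^n and compares it with Sheppard's classical formula 1 - arccos(rho)/pi; the sample sizes and the choice of halfspace are arbitrary.

import numpy as np

def noise_stability_halfspace(n, rho, samples=200_000, seed=0):
    # Estimate Pr[sign(<a, X>) == sign(<a, Y>)] for rho-correlated standard Gaussian
    # vectors X, Y in R^n and the halfspace through the origin with unit normal a.
    rng = np.random.default_rng(seed)
    a = np.ones(n) / np.sqrt(n)
    X = rng.standard_normal((samples, n))
    Z = rng.standard_normal((samples, n))
    Y = rho * X + np.sqrt(1 - rho**2) * Z        # Y is rho-correlated with X
    return np.mean(np.sign(X @ a) == np.sign(Y @ a))

rho = 0.5
print(noise_stability_halfspace(10, rho), 1 - np.arccos(rho) / np.pi)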

Cite as

Anindya De, Elchanan Mossel, and Joe Neeman. Noise Stability Is Computable and Approximately Low-Dimensional. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 10:1-10:11, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2017)



@InProceedings{de_et_al:LIPIcs.CCC.2017.10,
  author =	{De, Anindya and Mossel, Elchanan and Neeman, Joe},
  title =	{{Noise Stability Is Computable and Approximately Low-Dimensional}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{10:1--10:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.10},
  URN =		{urn:nbn:de:0030-drops-75390},
  doi =		{10.4230/LIPIcs.CCC.2017.10},
  annote =	{Keywords: Gaussian noise stability; Plurality is stablest; Ornstein Uhlenbeck operator}
}
Document
Lower Bounds on Same-Set Inner Product in Correlated Spaces

Authors: Jan Hazla, Thomas Holenstein, and Elchanan Mossel

Published in: LIPIcs, Volume 60, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)


Abstract
Let P be a probability distribution over a finite alphabet Omega^L with all L marginals equal. Let X^(1), ..., X^(L), where X^(j) = (X_1^(j), ..., X_n^(j)), be random vectors such that for every coordinate i in [n] the tuples (X_i^(1), ..., X_i^(L)) are i.i.d. according to P. The question we address is: does there exist a function c_P independent of n such that for every f: Omega^n -> [0, 1] with E[f(X^(1))] = m > 0 we have E[f(X^(1)) * ... * f(X^(L))] > c_P(m) > 0? We settle the question for L=2, and for L>2 when P has bounded correlation smaller than 1.

Cite as

Jan Hazla, Thomas Holenstein, and Elchanan Mossel. Lower Bounds on Same-Set Inner Product in Correlated Spaces. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 60, pp. 34:1-34:11, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{hazla_et_al:LIPIcs.APPROX-RANDOM.2016.34,
  author =	{Hazla, Jan and Holenstein, Thomas and Mossel, Elchanan},
  title =	{{Lower Bounds on Same-Set Inner Product in Correlated Spaces}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2016)},
  pages =	{34:1--34:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-018-7},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{60},
  editor =	{Jansen, Klaus and Mathieu, Claire and Rolim, Jos\'{e} D. P. and Umans, Chris},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2016.34},
  URN =		{urn:nbn:de:0030-drops-66571},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2016.34},
  annote =	{Keywords: same set hitting, product spaces, correlation, lower bounds}
}
Document
Invariance Principle on the Slice

Authors: Yuval Filmus, Guy Kindler, Elchanan Mossel, and Karl Wimmer

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
We prove a non-linear invariance principle for the slice. As applications, we prove versions of Majority is Stablest, Bourgain's tail theorem, and the Kindler-Safra theorem for the slice. From the latter we deduce a stability version of the t-intersecting Erdos-Ko-Rado theorem.

Cite as

Yuval Filmus, Guy Kindler, Elchanan Mossel, and Karl Wimmer. Invariance Principle on the Slice. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 15:1-15:10, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{filmus_et_al:LIPIcs.CCC.2016.15,
  author =	{Filmus, Yuval and Kindler, Guy and Mossel, Elchanan and Wimmer, Karl},
  title =	{{Invariance Principle on the Slice}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{15:1--15:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.15},
  URN =		{urn:nbn:de:0030-drops-58236},
  doi =		{10.4230/LIPIcs.CCC.2016.15},
  annote =	{Keywords: analysis of boolean functions, invariance principle, Johnson association scheme, the slice}
}
Document
Harmonicity and Invariance on Slices of the Boolean Cube

Authors: Yuval Filmus and Elchanan Mossel

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
In a recent work with Kindler and Wimmer we proved an invariance principle for the slice for low-influence, low-degree functions. Here we provide an alternative proof for general low-degree functions, with no constraints on the influences. We show that any real-valued function on the slice, whose degree when written as a harmonic multi-linear polynomial is o(sqrt(n)), has approximately the same distribution under the slice and cube measures. Our proof is based on a novel decomposition of random increasing paths in the cube in terms of martingales and reverse martingales. While such decompositions have been used in the past for stationary reversible Markov chains, our decomposition is applied in a non-reversible, non-stationary setup. We also provide simple proofs for some known and some new properties of harmonic functions which are crucial for the proof. Finally, we provide independent simple proofs for the known facts that 1) one cannot distinguish between the slice and the cube based on functions of o(n) coordinates and 2) Boolean symmetric functions on the cube cannot be approximated under the uniform measure by functions whose sum of influences is o(sqrt(n)).
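The invariance statement can be sanity-checked numerically: a degree-1 statistic such as the centred sum of a few coordinates has nearly the same distribution whether the input is uniform on the cube {0,1}^n or uniform on the middle slice of Hamming weight n/2. The quick comparison below is only such a sanity check, with arbitrarily chosen sizes.

import numpy as np

n, m, samples = 200, 5, 20_000
rng = np.random.default_rng(0)

# f is a degree-1 statistic: the centred sum of the first m coordinates.
cube = rng.integers(0, 2, size=(samples, n))                  # uniform on {0,1}^n
slice_rows = np.zeros((samples, n), dtype=int)                # uniform on the weight-n/2 slice
for r in range(samples):
    slice_rows[r, rng.choice(n, n // 2, replace=False)] = 1

f_cube = cube[:, :m].sum(axis=1) - m / 2
f_slice = slice_rows[:, :m].sum(axis=1) - m / 2
# The two empirical distributions nearly coincide since the degree (1) is << sqrt(n).
print(np.mean(f_cube), np.std(f_cube), np.mean(f_slice), np.std(f_slice))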

Cite as

Yuval Filmus and Elchanan Mossel. Harmonicity and Invariance on Slices of the Boolean Cube. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 16:1-16:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{filmus_et_al:LIPIcs.CCC.2016.16,
  author =	{Filmus, Yuval and Mossel, Elchanan},
  title =	{{Harmonicity and Invariance on Slices of the Boolean Cube}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{16:1--16:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.16},
  URN =		{urn:nbn:de:0030-drops-58240},
  doi =		{10.4230/LIPIcs.CCC.2016.16},
  annote =	{Keywords: analysis of boolean functions, invariance principle, Johnson association scheme, the slice}
}
Document
Harnessing the Bethe Free Energy

Authors: Victor Bapst and Amin Coja-Oghlan

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
Gibbs measures induced by random factor graphs play a prominent role in computer science, combinatorics and physics. A key problem is to calculate the typical value of the partition function. According to the "replica symmetric cavity method", a heuristic that rests on non-rigorous considerations from statistical mechanics, in many cases this problem can be tackled by way of maximising a functional called the "Bethe free energy". In this paper we prove that the Bethe free energy upper-bounds the partition function in a broad class of models. Additionally, we provide a sufficient condition for this upper bound to be tight.
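For reference, one standard way to write the Bethe free energy of a factor graph with factor weight functions psi_a and candidate marginals (beliefs) b_a, b_i is the following; conventions and signs vary across the literature, so this textbook form is not necessarily the exact normalisation used in the paper:

\mathcal{F}_{\mathrm{Bethe}}(b) \;=\; \sum_{a}\sum_{x_a} b_a(x_a)\,\ln\frac{b_a(x_a)}{\psi_a(x_a)} \;+\; \sum_{i} (1-d_i) \sum_{x_i} b_i(x_i)\,\ln b_i(x_i),

where d_i is the number of factors incident to variable i; at a Belief Propagation fixed point, -\mathcal{F}_{\mathrm{Bethe}} is the Bethe approximation to the log partition function \ln Z (exact on trees).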

Cite as

Victor Bapst and Amin Coja-Oghlan. Harnessing the Bethe Free Energy. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 467-480, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{bapst_et_al:LIPIcs.APPROX-RANDOM.2015.467,
  author =	{Bapst, Victor and Coja-Oghlan, Amin},
  title =	{{Harnessing the Bethe Free Energy}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{467--480},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.467},
  URN =		{urn:nbn:de:0030-drops-53180},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.467},
  annote =	{Keywords: Belief Propagation, free energy, Gibbs measure, partition function}
}
Document
Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness

Authors: Mark Bun and Thomas Steinke

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
Low-degree polynomial approximations to the sign function underlie pseudorandom generators for halfspaces, as well as algorithms for agnostically learning halfspaces. We study the limits of these constructions by proving inapproximability results for the sign function. First, we investigate the derandomization of Chernoff-type concentration inequalities. Schmidt et al. (SIAM J. Discrete Math. 1995) showed that a tail bound of delta can be established for sums of Bernoulli random variables with only O(log(1/delta))-wise independence. We show that their results are tight up to constant factors. Second, the “polynomial regression” algorithm of Kalai et al. (SIAM J. Comput. 2008) shows that halfspaces can be efficiently learned with respect to log-concave distributions on R^n in the challenging agnostic learning model. The power of this algorithm relies on the fact that under log-concave distributions, halfspaces can be approximated arbitrarily well by low-degree polynomials. In contrast, we exhibit a large class of non-log-concave distributions under which polynomials of any degree cannot approximate the sign function to within arbitrarily low error.
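As a small illustration of weighted (distribution-specific) polynomial approximation of the sign function under a log-concave weight, the experiment below fits least-squares polynomials of growing degree to sign(x) on Gaussian samples and reports the empirical L1 error; the paper's hard non-log-concave distributions are constructed explicitly and are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)          # a log-concave weight (standard Gaussian)
y = np.sign(x)

# Least-squares polynomial approximations of sign(x) of increasing degree; under
# log-concave weights the error keeps shrinking as the degree grows, which is what
# the polynomial-regression approach to agnostically learning halfspaces exploits.
for degree in (1, 3, 5, 9):
    coeffs = np.polyfit(x, y, degree)
    err = np.mean(np.abs(np.polyval(coeffs, x) - y))
    print(degree, round(float(err), 4))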

Cite as

Mark Bun and Thomas Steinke. Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 625-644, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{bun_et_al:LIPIcs.APPROX-RANDOM.2015.625,
  author =	{Bun, Mark and Steinke, Thomas},
  title =	{{Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{625--644},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.625},
  URN =		{urn:nbn:de:0030-drops-53274},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.625},
  annote =	{Keywords: Polynomial Approximations, Pseudorandomness, Concentration, Learning Theory, Halfspaces}
}
  • Refine by Author
  • 10 Mossel, Elchanan
  • 2 Bhangale, Amey
  • 2 Filmus, Yuval
  • 2 Guruswami, Venkatesan
  • 2 Haviv, Ishay
  • Show More...

  • Refine by Classification
  • 2 Theory of computation → Approximation algorithms analysis
  • 2 Theory of computation → Communication complexity
  • 2 Theory of computation → Problems, reductions and completeness
  • 1 Mathematics of computing
  • 1 Mathematics of computing → Discrete mathematics
  • Show More...

  • Refine by Keyword
  • 3 Hardness of Approximation
  • 2 Johnson association scheme
  • 2 analysis of boolean functions
  • 2 hardness of approximation
  • 2 invariance principle
  • Show More...

  • Refine by Type
  • 29 document

  • Refine by Publication Year
  • 10 2015
  • 6 2014
  • 6 2019
  • 3 2016
  • 1 2017
  • Show More...
