LIPIcs, Volume 28

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)



Event

APPROX/RANDOM 2014, September 4-6, 2014, Barcelona, Spain

Editors

Klaus Jansen
José Rolim
Nikhil R. Devanur
Cristopher Moore

Publication Details

  • Published: 2014-09-04
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-939897-74-3
  • DBLP: db/conf/approx/approx2014

Documents

Document
Complete Volume
LIPIcs, Volume 28, APPROX/RANDOM'14, Complete Volume

Authors: Klaus Jansen, José D. P. Rolim, Nikhil R. Devanur, and Cristopher Moore


Abstract
LIPIcs, Volume 28, APPROX/RANDOM'14, Complete Volume

Cite as

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@Proceedings{jansen_et_al:LIPIcs.APPROX-RANDOM.2014,
  title =	{{LIPIcs, Volume 28, APPROX/RANDOM'14, Complete Volume}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014},
  URN =		{urn:nbn:de:0030-drops-47603},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014},
  annote =	{Keywords: Network Architecture and Design, Computer-communication, Coding and Information Theory, Theory of Computation, Computation by Abstract Devices, Models of Computation – relations between models, Modes of Computation, Complexity Measures and Classes, Analysis of Algorithms and Problem Complexity}
}
Document
Front Matter
Frontmatter, Table of Contents, Preface, Conference Organization

Authors: Klaus Jansen, José Rolim, Nikhil R. Devanur, and Cristopher Moore


Abstract
Frontmatter, Table of Contents, Preface, Conference Organization

Cite as

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. i-xviii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{jansen_et_al:LIPIcs.APPROX-RANDOM.2014.i,
  author =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  title =	{{Frontmatter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{i--xviii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.i},
  URN =		{urn:nbn:de:0030-drops-46846},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.i},
  annote =	{Keywords: Frontmatter, Table of Contents, Preface, Conference Organization}
}
Document
Fully Dynamic All-Pairs Shortest Paths: Breaking the O(n) Barrier

Authors: Ittai Abraham, Shiri Chechik, and Kunal Talwar


Abstract
A fully dynamic approximate distance oracle is a distance-reporting data structure that supports dynamic edge insertions and deletions. In this paper we break a longstanding barrier in the design of fully dynamic all-pairs approximate distance oracles: all previous results for this model incurred an amortized cost of at least Omega(n) per operation. We present the first construction that provides constant stretch and o(m) amortized update time. For graphs that are not too dense (where |E| = O(|V|^{2-delta}) for some delta > 0), we break the O(n) barrier and provide the first construction with constant stretch and o(n) amortized cost.
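
As a point of reference for the model, the following Python sketch illustrates the interface of a fully dynamic distance oracle via a naive baseline that answers each query with a fresh BFS. It is not the authors' construction (its query cost is far from the amortized bounds discussed above), and the class and method names are illustrative assumptions.

from collections import defaultdict, deque

class NaiveDynamicDistanceOracle:
    """Naive fully dynamic distance oracle for an unweighted graph.

    Edge updates take O(1); each distance query runs a fresh BFS, so this
    only illustrates the insert/delete/query interface, not the sublinear
    amortized update bounds targeted in the paper.
    """

    def __init__(self):
        self.adj = defaultdict(set)

    def insert_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def delete_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)

    def distance(self, s, t):
        if s == t:
            return 0
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in self.adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    if w == t:
                        return dist[w]
                    queue.append(w)
        return float("inf")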

Cite as

Ittai Abraham, Shiri Chechik, and Kunal Talwar. Fully Dynamic All-Pairs Shortest Paths: Breaking the O(n) Barrier. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{abraham_et_al:LIPIcs.APPROX-RANDOM.2014.1,
  author =	{Abraham, Ittai and Chechik, Shiri and Talwar, Kunal},
  title =	{{Fully Dynamic All-Pairs Shortest Paths: Breaking the O(n) Barrier}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{1--16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.1},
  URN =		{urn:nbn:de:0030-drops-46857},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.1},
  annote =	{Keywords: Shortest Paths, Dynamic Algorithms}
}
Document
Approximation Algorithms for Minimum-Load k-Facility Location

Authors: Sara Ahmadian, Babak Behsaz, Zachary Friggstad, Amin Jorati, Mohammad R. Salavatipour, and Chaitanya Swamy


Abstract
We consider a facility-location problem that abstracts settings where the cost of serving the clients assigned to a facility is incurred by the facility. Formally, we consider the minimum-load k-facility location (MLkFL) problem, which is defined as follows. We have a set F of facilities, a set C of clients, and an integer k > 0. Assigning client j to a facility f incurs a connection cost d(f, j). The goal is to open a set F' of k facilities and assign each client j to a facility f(j) in F' so as to minimize the maximum, over all facilities f in F', of the sum of the connection costs d(f, j) over the clients j assigned to f. We call this sum the load of facility f. This problem was studied under the name of min-max star cover in [6, 2], who (among other results) gave bicriteria approximation algorithms for MLkFL for the case F = C. MLkFL is rather poorly understood, and only an O(k)-approximation is currently known, even for line metrics. Our main result is the first polynomial-time approximation scheme (PTAS) for MLkFL on line metrics (note that no non-trivial true approximation of any kind was previously known for this metric). Complementing this, we prove that MLkFL is strongly NP-hard on line metrics. We also devise a quasi-PTAS for MLkFL on tree metrics. MLkFL turns out to be surprisingly challenging even on line metrics, and resilient to attack by the variety of techniques that have been successfully applied to facility-location problems. For instance, we show that: (a) even a configuration-style LP-relaxation has a bad integrality gap; and (b) a multi-swap k-median style local-search heuristic has a bad locality gap. Thus, we need to devise various novel techniques to attack MLkFL. Our PTAS for line metrics consists of two main ingredients. First, we prove that there always exists a near-optimal solution possessing some nice structural properties. A novel aspect of this proof is that we first move to a mixed-integer LP (MILP) encoding of the problem, argue that a MILP solution minimizing a certain potential function possesses the desired structure, and then use a rounding algorithm for the generalized-assignment problem to "transfer" this structure to the rounded integer solution. Second, we show that these structural properties enable one to find such a structured solution via dynamic programming.
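
As a concrete reading of the objective, the following Python sketch (with an invented toy instance) evaluates the load of each open facility and the resulting MLkFL cost for a given assignment; the function name and the dictionary-based input format are illustrative assumptions, not part of the paper.

def mlkfl_cost(open_facilities, assignment, d):
    """Maximum facility load of a candidate MLkFL solution.

    open_facilities: the opened set F'.
    assignment: dict mapping each client j to its facility f(j) in F'.
    d: dict mapping (facility, client) pairs to connection costs d(f, j).
    Returns the maximum, over opened facilities f, of the load of f,
    i.e. the sum of d(f, j) over the clients j assigned to f.
    """
    load = {f: 0.0 for f in open_facilities}
    for client, facility in assignment.items():
        if facility not in load:
            raise ValueError(f"client {client} assigned to an unopened facility")
        load[facility] += d[(facility, client)]
    return max(load.values())

# Toy instance (made up): two open facilities, three clients.
d = {("f1", "a"): 1.0, ("f1", "b"): 2.0, ("f1", "c"): 5.0,
     ("f2", "a"): 4.0, ("f2", "b"): 1.0, ("f2", "c"): 1.0}
print(mlkfl_cost(["f1", "f2"], {"a": "f1", "b": "f2", "c": "f2"}, d))  # 2.0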

Cite as

Sara Ahmadian, Babak Behsaz, Zachary Friggstad, Amin Jorati, Mohammad R. Salavatipour, and Chaitanya Swamy. Approximation Algorithms for Minimum-Load k-Facility Location. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 17-33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{ahmadian_et_al:LIPIcs.APPROX-RANDOM.2014.17,
  author =	{Ahmadian, Sara and Behsaz, Babak and Friggstad, Zachary and Jorati, Amin and Salavatipour, Mohammad R. and Swamy, Chaitanya},
  title =	{{Approximation Algorithms for Minimum-Load k-Facility Location}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{17--33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.17},
  URN =		{urn:nbn:de:0030-drops-47154},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.17},
  annote =	{Keywords: approximation algorithms, min-max star cover, facility location, line metrics}
}
Document
The Cover Number of a Matrix and its Algorithmic Applications

Authors: Noga Alon, Troy Lee, and Adi Shraibman


Abstract
Given a matrix A, we study how many epsilon-cubes are required to cover the convex hull of the columns of A. We show bounds on this cover number in terms of VC dimension and the gamma_2 norm and give algorithms for enumerating elements of a cover. This leads to algorithms for computing approximate Nash equilibria that unify and extend several previous results in the literature. Moreover, our approximation algorithms can be applied quite generally to a family of quadratic optimization problems that also includes finding the k-by-k combinatorial rectangle of a matrix. In particular, for this problem we give the first quasi-polynomial time additive approximation algorithm that works for any matrix A in [0,1]^{m x n}.

Cite as

Noga Alon, Troy Lee, and Adi Shraibman. The Cover Number of a Matrix and its Algorithmic Applications. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 34-47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{alon_et_al:LIPIcs.APPROX-RANDOM.2014.34,
  author =	{Alon, Noga and Lee, Troy and Shraibman, Adi},
  title =	{{The Cover Number of a Matrix and its Algorithmic Applications}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{34--47},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.34},
  URN =		{urn:nbn:de:0030-drops-46865},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.34},
  annote =	{Keywords: Approximation algorithms, Approximate Nash equilibria, Cover number, VC dimension}
}
Document
Network Design with Coverage Costs

Authors: Siddharth Barman, Shuchi Chawla, and Seeun Umboh


Abstract
We study network design with a cost structure motivated by redundancy in data traffic. We are given a graph, g groups of terminals, and a universe of data packets. Each group of terminals desires a subset of the packets from its respective source. The cost of routing traffic on any edge in the network is proportional to the total size of the distinct packets that the edge carries. Our goal is to find a minimum-cost routing. We focus on two settings. In the first, the collection of packet sets desired by source-sink pairs is laminar. For this setting, we present a primal-dual based 2-approximation, improving upon a logarithmic approximation due to Barman and Chawla (2012). In the second setting, packet sets can have non-trivial intersection. We focus on the case where each packet is desired by either a single terminal group or by all of the groups. This setting does not admit an O(log^{1/4 - gamma} g)-approximation for any constant gamma under a standard assumption; we present an O(log g)-approximation when the graph is unweighted. Our approximation for the second setting is based on a novel spanner-type construction in unweighted graphs that, given a collection of g vertex subsets, finds a subgraph of cost only a constant factor more than the minimum spanning tree of the graph, such that every subset in the collection has a Steiner tree in the subgraph of cost at most O(log g) times that of its minimum Steiner tree in the original graph. We call such a subgraph a group spanner.
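
To make the cost structure concrete, here is a hedged Python sketch that evaluates the coverage cost of a fixed routing: each edge pays in proportion to the total size of the distinct packets it carries. The input encoding (edge -> set of packets, plus per-edge and per-packet dictionaries) is an assumption chosen for illustration.

def coverage_cost(edge_packets, packet_size, edge_cost_per_unit):
    """Cost of a routing under the coverage cost model described above.

    edge_packets: dict mapping each edge to the set of distinct packets
        routed over it.
    packet_size: dict mapping each packet to its size.
    edge_cost_per_unit: dict mapping each edge to its proportionality
        constant (cost per unit of distinct traffic).
    """
    return sum(edge_cost_per_unit[e] * sum(packet_size[p] for p in packets)
               for e, packets in edge_packets.items())

# Example (made up): packet p1 is requested by two groups whose paths
# share edge e1, but e1 pays for p1 only once.
print(coverage_cost(
    edge_packets={"e1": {"p1", "p2"}, "e2": {"p1"}},
    packet_size={"p1": 3, "p2": 1},
    edge_cost_per_unit={"e1": 1.0, "e2": 2.0},
))  # 1.0 * (3 + 1) + 2.0 * 3 = 10.0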

Cite as

Siddharth Barman, Shuchi Chawla, and Seeun Umboh. Network Design with Coverage Costs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 48-63, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{barman_et_al:LIPIcs.APPROX-RANDOM.2014.48,
  author =	{Barman, Siddharth and Chawla, Shuchi and Umboh, Seeun},
  title =	{{Network Design with Coverage Costs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{48--63},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.48},
  URN =		{urn:nbn:de:0030-drops-46876},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.48},
  annote =	{Keywords: Network Design, Spanner, Primal Dual Method, Traffic Redundancy}
}
Document
Online Set Cover with Set Requests

Authors: Kshipra Bhawalkar, Sreenivas Gollapudi, and Debmalya Panigrahi


Abstract
We consider a generic online allocation problem that generalizes the classical online set cover framework by considering requests comprising a set of elements rather than a single element. This problem has multiple applications in cloud computing, crowdsourcing, facility planning, etc. Formally, it is an online covering problem where each online step comprises an offline covering problem. In addition, the covering sets are capacitated, leading to packing constraints. We give a randomized algorithm for this problem that has a nearly tight competitive ratio in both objectives: overall cost and maximum capacity violation. Our main technical tool is an online algorithm for packing/covering LPs with nested constraints, which may be of interest in other applications as well.

Cite as

Kshipra Bhawalkar, Sreenivas Gollapudi, and Debmalya Panigrahi. Online Set Cover with Set Requests. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 64-79, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{bhawalkar_et_al:LIPIcs.APPROX-RANDOM.2014.64,
  author =	{Bhawalkar, Kshipra and Gollapudi, Sreenivas and Panigrahi, Debmalya},
  title =	{{Online Set Cover with Set Requests}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{64--79},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.64},
  URN =		{urn:nbn:de:0030-drops-46883},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.64},
  annote =	{Keywords: Online Algorithms, Set Cover}
}
Document
Lowest Degree k-Spanner: Approximation and Hardness

Authors: Eden Chlamtác and Michael Dinitz


Abstract
A k-spanner is a subgraph in which distances are approximately preserved, up to some given stretch factor k. We focus on the following problem: Given a graph and a value k, can we find a k-spanner that minimizes the maximum degree? While reasonably strong bounds are known for some spanner problems, they almost all involve minimizing the total number of edges. Switching the objective to the degree introduces significant new challenges, and currently the only known approximation bound is an O~(Delta^(3-2*sqrt(2)))-approximation for the special case when k = 2 [Chlamtac, Dinitz, Krauthgamer FOCS 2012] (where Delta is the maximum degree in the input graph). In this paper we give the first non-trivial algorithm and polynomial-factor hardness of approximation for the case of general k. Specifically, we give an LP-based O~(Delta^((1-1/k)^2) )-approximation and prove that it is hard to approximate the optimum to within Delta^Omega(1/k) when the graph is undirected, and to within Delta^Omega(1) when it is directed.
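
As a self-contained illustration of the objective, the Python sketch below checks whether a candidate subgraph H is a k-spanner of an unweighted graph G and reports its maximum degree. It is a brute-force verifier of the definition, not the LP-based algorithm of the paper, and the helper names are assumptions.

from collections import deque

def bfs_distances(adj, source):
    """Single-source distances in an unweighted graph given as adjacency sets."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_k_spanner(adj_g, adj_h, k):
    """True if H preserves every distance of G up to stretch factor k."""
    for s in adj_g:
        dist_g = bfs_distances(adj_g, s)
        dist_h = bfs_distances(adj_h, s)
        if any(dist_h.get(t, float("inf")) > k * d for t, d in dist_g.items()):
            return False
    return True

def max_degree(adj):
    """Maximum degree of a graph given as adjacency sets."""
    return max((len(neighbors) for neighbors in adj.values()), default=0)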

Cite as

Eden Chlamtác and Michael Dinitz. Lowest Degree k-Spanner: Approximation and Hardness. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 80-95, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{chlamtac_et_al:LIPIcs.APPROX-RANDOM.2014.80,
  author =	{Chlamt\'{a}c, Eden and Dinitz, Michael},
  title =	{{Lowest Degree k-Spanner: Approximation and Hardness}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{80--95},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.80},
  URN =		{urn:nbn:de:0030-drops-46894},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.80},
  annote =	{Keywords: Graph spanners, approximation algorithms, hardness of approximation}
}
Document
Improved Streaming Algorithms for Weighted Matching, via Unweighted Matching

Authors: Michael Crouch and Daniel M. Stubbs


Abstract
We present a (4 + epsilon)-approximation algorithm for weighted graph matching which applies in the semistreaming, sliding window, and MapReduce models; this single algorithm improves upon the previous best algorithm in each model. The algorithm operates by reducing the maximum-weight matching problem to a polylogarithmic number of copies of the maximum-cardinality matching problem. The algorithm also extends to provide approximation guarantees for the more general problem of finding weighted independent sets in p-systems (which include intersections of p matroids and p-bounded hypergraph matching).

Cite as

Michael Crouch and Daniel M. Stubbs. Improved Streaming Algorithms for Weighted Matching, via Unweighted Matching. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 96-104, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{crouch_et_al:LIPIcs.APPROX-RANDOM.2014.96,
  author =	{Crouch, Michael and Stubbs, Daniel M.},
  title =	{{Improved Streaming Algorithms for Weighted Matching, via Unweighted Matching}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{96--104},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.96},
  URN =		{urn:nbn:de:0030-drops-46907},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.96},
  annote =	{Keywords: Streaming Algorithms, Graph Matching, Weighted Graph Matching, MapReduce, Independence Systems}
}
Document
Guruswami-Sinop Rounding without Higher Level Lasserre

Authors: Amit Deshpande and Rakesh Venkat


Abstract
Guruswami and Sinop give an O(1/delta) approximation guarantee for the non-uniform Sparsest Cut problem by solving O(r)-level Lasserre semidefinite constraints, provided that the generalized eigenvalues of the Laplacians of the cost and demand graphs satisfy a certain spectral condition, namely, that the (r+1)-th generalized eigenvalue is at least OPT/(1-delta). Their key idea is a rounding technique that first maps a vector-valued solution to [0,1] using appropriately scaled projections onto Lasserre vectors. In this paper, we show that similar projections and analysis can be obtained using only l_2^2 triangle inequality constraints. This results in an O(r/delta^2) approximation guarantee for the non-uniform Sparsest Cut problem by adding only l_2^2 triangle inequality constraints to the usual semidefinite program, provided that the same spectral condition holds, namely, that the (r+1)-th generalized eigenvalue is at least OPT/(1-delta).

Cite as

Amit Deshpande and Rakesh Venkat. Guruswami-Sinop Rounding without Higher Level Lasserre. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 105-114, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{deshpande_et_al:LIPIcs.APPROX-RANDOM.2014.105,
  author =	{Deshpande, Amit and Venkat, Rakesh},
  title =	{{Guruswami-Sinop Rounding without Higher Level Lasserre}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{105--114},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.105},
  URN =		{urn:nbn:de:0030-drops-46911},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.105},
  annote =	{Keywords: Sparsest Cut, Lasserre Hierarchy, Metric embeddings}
}
Document
Improved Approximation Algorithm for Steiner k-Forest with Nearly Uniform Weights

Authors: Michael Dinitz, Guy Kortsarz, and Zeev Nutov


Abstract
In the Steiner k-Forest problem we are given an edge-weighted graph, a collection D of node pairs, and an integer k <= |D|. The goal is to find a minimum-cost subgraph that connects at least k pairs. The best known ratio for this problem is min{O(sqrt{n}), O(sqrt{k})} [Gupta et al., 2008]. In [Gupta et al., 2008] it is also shown that ratio rho for Steiner k-Forest implies ratio O(rho log^2 n) for the Dial-a-Ride problem: given an edge-weighted graph and a set of objects, each with a source and a destination, find a minimum-length tour that moves each object from its source to its destination while carrying at most k objects at a time. The only other algorithm known for Dial-a-Ride, besides the one resulting from [Gupta et al., 2008], has ratio O(sqrt{n}) [Charikar and Raghavachari, 1998]. We obtain ratio n^{0.448} for Steiner k-Forest and Dial-a-Ride with unit weights, breaking the O(sqrt{n}) ratio barrier for this natural special case. We also show that if the maximum weight of an edge is O(n^{epsilon}), then one can achieve ratio O(n^{(1+epsilon) 0.448}), which is less than sqrt{n} if epsilon is small enough. To prove our main result we consider the following generalization of the Minimum k-Edge Subgraph (Mk-ES) problem, which we call Min-Cost l-Edge-Profit Subgraph (MCl-EPS): Given a graph G=(V,E) with edge-profits p={p_e: e in E} and node-costs c={c_v: v in V}, and a lower profit bound l, find a minimum node-cost subgraph of G with edge profit at least l. The Mk-ES problem is a special case of MCl-EPS with unit node costs and unit edge profits. The currently best known ratio for Mk-ES is n^{3-2*sqrt{2} + epsilon} (note that 3-2*sqrt{2} < 0.1716). We extend this ratio to MCl-EPS for arbitrary node weights and edge profits that are polynomial in n, which may be of independent interest.
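
For concreteness, here is a hedged Python sketch of a solution checker for Steiner k-Forest: given a candidate edge set, it sums the edge weights and counts how many demand pairs become connected. It only evaluates candidate solutions (via a small union-find); it is not the approximation algorithm of the paper.

def evaluate_k_forest(chosen_edges, weight, pairs, k):
    """Cost and feasibility of a candidate Steiner k-Forest solution.

    chosen_edges: iterable of selected edges (u, v).
    weight: dict mapping each edge (u, v) to its cost.
    pairs: the collection D of demand pairs (s, t).
    k: the required number of connected pairs, k <= |D|.
    Returns (total cost, number of connected pairs, feasible?).
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:          # path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in chosen_edges:
        parent[find(u)] = find(v)      # union the two components

    connected = sum(1 for s, t in pairs if find(s) == find(t))
    cost = sum(weight[e] for e in chosen_edges)
    return cost, connected, connected >= k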

Cite as

Michael Dinitz, Guy Kortsarz, and Zeev Nutov. Improved Approximation Algorithm for Steiner k-Forest with Nearly Uniform Weights. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 115-127, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{dinitz_et_al:LIPIcs.APPROX-RANDOM.2014.115,
  author =	{Dinitz, Michael and Kortsarz, Guy and Nutov, Zeev},
  title =	{{Improved Approximation Algorithm for Steiner k-Forest with Nearly Uniform Weights}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{115--127},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.115},
  URN =		{urn:nbn:de:0030-drops-46925},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.115},
  annote =	{Keywords: k-Steiner Forest; Uniform weights; Densest k-Subgraph; Approximation algorithms}
}
Document
Computing Opaque Interior Barriers à la Shermer

Authors: Adrian Dumitrescu, Minghui Jiang, and Csaba D. Tóth


Abstract
The problem of finding a collection of curves of minimum total length that meet all the lines intersecting a given polygon was initiated by Mazurkiewicz in 1916. Such a collection forms an opaque barrier for the polygon. In 1991 Shermer proposed an exponential-time algorithm that computes an interior-restricted barrier made of segments for any given convex n-gon. He conjectured that the barrier found by his algorithm is optimal; however, this was recently refuted by Provan et al. Here we give a Shermer-like algorithm that computes an interior polygonal barrier whose length is at most 1.7168 times the optimum and that runs in O(n) time. As a byproduct, we also deduce upper and lower bounds on the approximation ratio of Shermer's algorithm.

Cite as

Adrian Dumitrescu, Minghui Jiang, and Csaba D. Tóth. Computing Opaque Interior Barriers à la Shermer. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 128-143, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{dumitrescu_et_al:LIPIcs.APPROX-RANDOM.2014.128,
  author =	{Dumitrescu, Adrian and Jiang, Minghui and T\'{o}th, Csaba D.},
  title =	{{Computing Opaque Interior Barriers \`{a} la Shermer}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{128--143},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.128},
  URN =		{urn:nbn:de:0030-drops-46938},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.128},
  annote =	{Keywords: Opaque barrier, approximation algorithm, isoperimetric inequality}
}
Document
Hardness of Submodular Cost Allocation: Lattice Matching and a Simplex Coloring Conjecture

Authors: Alina Ene and Jan Vondrák


Abstract
We consider the Minimum Submodular Cost Allocation (MSCA) problem. In this problem, we are given k submodular cost functions f_1, ... , f_k: 2^V -> R_+ and the goal is to partition V into k sets A_1, ..., A_k so as to minimize the total cost sum_{i = 1}^k f_i(A_i). We show that MSCA is inapproximable within any multiplicative factor even in very restricted settings; prior to our work, only Set Cover hardness was known. In light of this negative result, we turn our attention to special cases of the problem. We consider the setting in which each function f_i satisfies f_i = g_i + h, where each g_i is monotone submodular and h is (possibly non-monotone) submodular. We give an O(k log |V|) approximation for this problem. We provide some evidence that a factor of k may be necessary, even in the special case of HyperLabel. In particular, we formulate a simplex-coloring conjecture that implies a Unique-Games-hardness of (k - 1 - epsilon) for k-uniform HyperLabel and label set [k]. We provide a proof of the simplex-coloring conjecture for k=3.
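
To fix notation, the short Python sketch below evaluates the MSCA objective sum_i f_i(A_i) for a candidate partition, with the cost functions passed as ordinary set functions. The coverage-style example functions are invented for illustration (coverage functions are monotone submodular) and are not taken from the paper.

def msca_objective(parts, cost_fns):
    """Total cost sum_i f_i(A_i) of a partition A_1, ..., A_k of V.

    parts: list of k sets A_1, ..., A_k (assumed disjoint and covering V).
    cost_fns: list of k set functions f_1, ..., f_k.
    """
    assert len(parts) == len(cost_fns)
    return sum(f(a) for f, a in zip(cost_fns, parts))

def make_coverage_fn(covers, weights):
    """Weighted coverage function f(S) = total weight of items covered by S."""
    def f(s):
        covered = set()
        for element in s:
            covered |= covers[element]
        return sum(weights[item] for item in covered)
    return f

# Toy example (made up): V = {a, b}, k = 2.
f1 = make_coverage_fn({"a": {1, 2}, "b": {2}}, {1: 1.0, 2: 2.0})
f2 = make_coverage_fn({"a": {1}, "b": {3}}, {1: 1.0, 3: 5.0})
print(msca_objective([{"a"}, {"b"}], [f1, f2]))  # 3.0 + 5.0 = 8.0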

Cite as

Alina Ene and Jan Vondrák. Hardness of Submodular Cost Allocation: Lattice Matching and a Simplex Coloring Conjecture. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 144-159, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{ene_et_al:LIPIcs.APPROX-RANDOM.2014.144,
  author =	{Ene, Alina and Vondr\'{a}k, Jan},
  title =	{{Hardness of Submodular Cost Allocation: Lattice Matching and a Simplex Coloring Conjecture}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{144--159},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.144},
  URN =		{urn:nbn:de:0030-drops-46943},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.144},
  annote =	{Keywords: Minimum Cost Submodular Allocation, Submodular Optimization, Hypergraph Labeling}
}
Document
Constrained Monotone Function Maximization and the Supermodular Degree

Authors: Moran Feldman and Rani Izsak


Abstract
The problem of maximizing a constrained monotone set function has many practical applications and generalizes many combinatorial problems such as k-Coverage, Max-SAT, Set Packing, Maximum Independent Set and Welfare Maximization. Unfortunately, it is generally not possible to maximize a monotone set function up to an acceptable approximation ratio, even subject to simple constraints. One highly studied approach to cope with this hardness is to restrict the set function, for example, by requiring it to be submodular. An outstanding disadvantage of imposing such a restriction on the set function is that no result is implied for set functions deviating from the restriction, even slightly. A more flexible approach, studied by Feige and Izsak [ITCS 2013], is to design an approximation algorithm whose approximation ratio depends on the complexity of the instance, as measured by some complexity measure. Specifically, they introduced a complexity measure called supermodular degree, measuring deviation from submodularity, and designed an algorithm for the welfare maximization problem with an approximation ratio that depends on this measure. In this work, we give the first (to the best of our knowledge) algorithm for maximizing an arbitrary monotone set function subject to a k-extendible system. This class of constraints captures, for example, the intersection of k matroids (note that a single matroid constraint is sufficient to capture the welfare maximization problem). Our approximation ratio deteriorates gracefully with the complexity of the set function and k. Our work can be seen as generalizing both the classic result of Fisher, Nemhauser and Wolsey [Mathematical Programming Study 1978] for maximizing a submodular set function subject to a k-extendible system, and the result of Feige and Izsak for the welfare maximization problem. Moreover, when our algorithm is applied to each of these simpler cases, it obtains the same approximation ratio as the respective original work. That is, the generalization does not incur any penalty. Finally, we also consider the less general problem of maximizing a monotone set function subject to a uniform matroid constraint, and give a somewhat better approximation ratio for it.

Cite as

Moran Feldman and Rani Izsak. Constrained Monotone Function Maximization and the Supermodular Degree. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 160-175, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{feldman_et_al:LIPIcs.APPROX-RANDOM.2014.160,
  author =	{Feldman, Moran and Izsak, Rani},
  title =	{{Constrained Monotone Function Maximization and the Supermodular Degree}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{160--175},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.160},
  URN =		{urn:nbn:de:0030-drops-46950},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.160},
  annote =	{Keywords: supermodular degree, set function, submodular, matroid, extendible system}
}
Document
On the Equivalence of the Bidirected and Hypergraphic Relaxations for Steiner Tree

Authors: Andreas Emil Feldmann, Jochen Könemann, Neil Olver, and Laura Sanità


Abstract
The bottleneck of the currently best (ln(4) + epsilon)-approximation algorithm for the NP-hard Steiner tree problem is the solution of its large, so-called hypergraphic, linear programming relaxation (HYP). Hypergraphic LPs are NP-hard to solve exactly, and it is a formidable computational task to even approximate them sufficiently well. We focus on another well-studied but poorly understood LP relaxation of the problem: the bidirected cut relaxation (BCR). This LP is compact, and can therefore be solved efficiently. Its integrality gap is known to be greater than 1.16, and while this is widely conjectured to be close to the real answer, only a (trivial) upper bound of 2 is known. In this paper, we give an efficient constructive proof that BCR and HYP are polyhedrally equivalent in instances that do not have an (edge-induced) claw on Steiner vertices, i.e., instances that do not contain a Steiner vertex with 3 Steiner neighbors. This implies faster ln(4)-approximations for these graphs, and is a significant step forward from the previously known equivalence for (so-called quasi-bipartite) instances in which Steiner vertices form an independent set. We complement our results by showing that even restricted to instances where the Steiner vertices induce a single star, determining whether the two relaxations are equivalent is NP-hard.

Cite as

Andreas Emil Feldmann, Jochen Könemann, Neil Olver, and Laura Sanità. On the Equivalence of the Bidirected and Hypergraphic Relaxations for Steiner Tree. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 176-191, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{feldmann_et_al:LIPIcs.APPROX-RANDOM.2014.176,
  author =	{Feldmann, Andreas Emil and K\"{o}nemann, Jochen and Olver, Neil and Sanit\`{a}, Laura},
  title =	{{On the Equivalence of the Bidirected and Hypergraphic Relaxations for Steiner Tree}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{176--191},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.176},
  URN =		{urn:nbn:de:0030-drops-46962},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.176},
  annote =	{Keywords: Steiner tree, bidirected cut relaxation, hypergraphic relaxation, polyhedral equivalence, approximation algorithms}
}
Document
Reaching Consensus via Non-Bayesian Asynchronous Learning in Social Networks

Authors: Michal Feldman, Nicole Immorlica, Brendan Lucier, and S. Matthew Weinberg


Abstract
We study the outcomes of information aggregation in online social networks. Our main result is that networks with certain realistic structural properties avoid information cascades and enable a population to effectively aggregate information. In our model, each individual in a network holds a private, independent opinion about a product or idea, biased toward a ground truth. Individuals declare their opinions asynchronously, can observe the stated opinions of their neighbors, and are free to update their declarations over time. Supposing that individuals conform with the majority report of their neighbors, we ask whether the population will eventually arrive at consensus on the ground truth. We show that the answer depends on the network structure: there exist networks for which consensus is unlikely, or for which declarations converge on the incorrect opinion with positive probability. On the other hand, we prove that for networks that are sparse and expansive, the population will converge to the correct opinion with high probability.
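
The dynamics described above are straightforward to simulate. The Python sketch below implements one natural reading of the model: private signals correct with probability p > 1/2, a uniformly random asynchronous update order, and each updating node adopting the majority declaration of its neighbors (keeping its current declaration on ties). The tie-breaking rule and the update schedule are modeling assumptions made here for illustration, not details taken from the paper.

import random

def simulate_majority_dynamics(adj, p=0.7, steps=10_000, seed=0):
    """Asynchronous majority dynamics on a graph; the ground truth is 1.

    adj: dict mapping each node to a list of its neighbors.
    Each node initially declares an independent private signal that equals
    the ground truth with probability p. In each step a uniformly random
    node re-declares the majority of its neighbors' current declarations
    (keeping its declaration on a tie or if it has no neighbors).
    Returns the final fraction of nodes declaring the ground truth.
    """
    rng = random.Random(seed)
    declared = {v: (1 if rng.random() < p else 0) for v in adj}
    nodes = list(adj)
    for _ in range(steps):
        v = rng.choice(nodes)
        neighbors = adj[v]
        if not neighbors:
            continue
        ones = sum(declared[u] for u in neighbors)
        if 2 * ones > len(neighbors):
            declared[v] = 1
        elif 2 * ones < len(neighbors):
            declared[v] = 0
    return sum(declared.values()) / len(declared)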

Cite as

Michal Feldman, Nicole Immorlica, Brendan Lucier, and S. Matthew Weinberg. Reaching Consensus via Non-Bayesian Asynchronous Learning in Social Networks. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 192-208, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{feldman_et_al:LIPIcs.APPROX-RANDOM.2014.192,
  author =	{Feldman, Michal and Immorlica, Nicole and Lucier, Brendan and Weinberg, S. Matthew},
  title =	{{Reaching Consensus via Non-Bayesian Asynchronous Learning in Social Networks}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{192--208},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.192},
  URN =		{urn:nbn:de:0030-drops-46976},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.192},
  annote =	{Keywords: Information Cascades, Social Networks, non-Bayesian Asynchronous Learning, Expander Graphs, Stochastic Processes}
}
Document
Deliver or hold: Approximation Algorithms for the Periodic Inventory Routing Problem

Authors: Takuro Fukunaga, Afshin Nikzad, and R. Ravi


Abstract
The inventory routing problem involves trading off inventory holding costs at client locations with vehicle routing costs to deliver frequently from a single central depot to meet deterministic client demands over a finite planning horizon. In this paper, we consider periodic solutions that visit clients with one of several specified frequencies, and focus on the case when the frequencies of visiting nodes are nested. We give the first constant-factor approximation algorithms for designing optimum nested periodic schedules for the problem with no limit on vehicle capacities by simple reductions to prize-collecting network design problems. For instance, we present a 2.55-approximation algorithm for the minimum-cost nested periodic schedule where the vehicle routes are modeled as minimum Steiner trees. We also show a general reduction from the capacitated problem, where all vehicles have the same capacity, to the uncapacitated version with a slight loss in performance. This reduction gives a 4.55-approximation for the capacitated problem. In addition, we prove several structural results relating the values of optimal policies of various types.

Cite as

Takuro Fukunaga, Afshin Nikzad, and R. Ravi. Deliver or hold: Approximation Algorithms for the Periodic Inventory Routing Problem. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 209-225, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{fukunaga_et_al:LIPIcs.APPROX-RANDOM.2014.209,
  author =	{Fukunaga, Takuro and Nikzad, Afshin and Ravi, R.},
  title =	{{Deliver or hold: Approximation Algorithms for the Periodic Inventory Routing Problem}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{209--225},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.209},
  URN =		{urn:nbn:de:0030-drops-46985},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.209},
  annote =	{Keywords: Inventory Routing Problem, Approximation algorithm, Prize-collecting Steiner Tree}
}
Document
Complexity and Approximation of the Continuous Network Design Problem

Authors: Martin Gairing, Tobias Harks, and Max Klimm


Abstract
We revisit a classical problem in transportation, known as the continuous (bilevel) network design problem, CNDP for short. Given a graph for which the latency of each edge depends on the ratio of the edge flow and the capacity installed, the goal is to find an optimal investment in edge capacities so as to minimize the sum of the routing cost of the induced Wardrop equilibrium and the investment cost for installing the capacity. While this problem is considered challenging in the literature, its complexity status was still unknown. We close this gap by showing that CNDP is strongly NP-complete and APX-hard, both on directed and undirected networks and even for instances with affine latencies. As for the approximation of the problem, we first provide a detailed analysis of a heuristic studied by Marcotte for the special case of monomial latency functions (Math. Program., Vol. 34, 1986). We derive a closed-form expression of its approximation guarantee for arbitrary sets of latency functions. We then propose a different approximation algorithm and show that it has the same approximation guarantee. However, we show that using the better of the two approximation algorithms results in a strictly improved approximation guarantee, for which we derive a closed-form expression. For affine latencies, for example, this algorithm achieves a 49/41-approximation, which improves on the 5/4 shown earlier by Marcotte. We finally discuss the case of hard budget constraints on the capacity investment.

Cite as

Martin Gairing, Tobias Harks, and Max Klimm. Complexity and Approximation of the Continuous Network Design Problem. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 226-241, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{gairing_et_al:LIPIcs.APPROX-RANDOM.2014.226,
  author =	{Gairing, Martin and Harks, Tobias and Klimm, Max},
  title =	{{Complexity and Approximation of the Continuous Network Design Problem}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{226--241},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.226},
  URN =		{urn:nbn:de:0030-drops-46998},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.226},
  annote =	{Keywords: Bilevel optimization, Optimization under equilibrium constraints, Network design, Wardrop equilibrium, Computational complexity, Approximation algorithms}
}
Document
Approximate Pure Nash Equilibria in Weighted Congestion Games

Authors: Christoph Hansknecht, Max Klimm, and Alexander Skopalik


Abstract
We study the existence of approximate pure Nash equilibria in weighted congestion games and develop techniques to obtain approximate potential functions that prove the existence of alpha-approximate pure Nash equilibria and the convergence of alpha-improvement steps. Specifically, we show how to obtain upper bounds on the approximation factor alpha for a given class of cost functions. For example, for concave cost functions the factor is at most 3/2, for quadratic cost functions it is at most 4/3, and for polynomial cost functions of maximal degree d it is at most d + 1. For games with two players we obtain tight bounds, which are as small as 1.054 in the case of quadratic cost functions.
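
The notion being bounded here can be made concrete with a small checker. The Python sketch below tests whether a strategy profile of a weighted congestion game is an alpha-approximate pure Nash equilibrium in the usual multiplicative sense (no player can decrease its cost by more than a factor of alpha through a unilateral deviation). The input encoding, and the convention that a player pays each used resource's cost evaluated at the total weight on it, are assumptions stated for illustration rather than details taken verbatim from the paper.

def player_cost(profile, weights, resource_costs, player, strategy):
    """Cost of `player` if it plays `strategy` while the others follow `profile`.

    profile: dict player -> chosen strategy (a set of resources).
    weights: dict player -> weight.
    resource_costs: dict resource -> cost function of the total load on it.
    """
    total = 0.0
    for r in strategy:
        load = weights[player] + sum(weights[q] for q, s in profile.items()
                                     if q != player and r in s)
        total += resource_costs[r](load)
    return total

def is_alpha_approximate_pne(profile, weights, resource_costs, strategy_sets, alpha):
    """True if no player can improve its cost by more than a factor of alpha."""
    for i, current in profile.items():
        current_cost = player_cost(profile, weights, resource_costs, i, current)
        for alternative in strategy_sets[i]:
            alt_cost = player_cost(profile, weights, resource_costs, i, alternative)
            if alpha * alt_cost < current_cost:
                return False
    return True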

Cite as

Christoph Hansknecht, Max Klimm, and Alexander Skopalik. Approximate Pure Nash Equilibria in Weighted Congestion Games. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 242-257, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{hansknecht_et_al:LIPIcs.APPROX-RANDOM.2014.242,
  author =	{Hansknecht, Christoph and Klimm, Max and Skopalik, Alexander},
  title =	{{Approximate Pure Nash Equilibria in Weighted Congestion Games}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{242--257},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.242},
  URN =		{urn:nbn:de:0030-drops-47005},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.242},
  annote =	{Keywords: Congestion game, Pure Nash equilibrium, Approximate equilibrium, Existence, Potential function}
}
Document
Discrepancy Without Partial Colorings

Authors: Nicholas J. A. Harvey, Roy Schwartz, and Mohit Singh


Abstract
Spencer's theorem asserts that, for any family of n subsets of a ground set of size n, the elements of the ground set can be "colored" by the values +1 or -1 such that the sum of the colors in every set is O(sqrt(n)) in absolute value. All existing proofs of this result recursively construct "partial colorings", which assign +1 or -1 values to half of the ground set. We devise the first algorithm for Spencer's theorem that directly computes a coloring, without recursively computing partial colorings.
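
The quantity in Spencer's theorem is easy to evaluate for a given coloring. The Python sketch below computes the discrepancy of a set system under a +1/-1 coloring and, purely as a baseline, reports the best of a few random colorings (a random coloring achieves O(sqrt(n log n)) with high probability, while Spencer's theorem guarantees O(sqrt(n))). It does not implement the algorithm of the paper.

import random

def discrepancy(sets, coloring):
    """Maximum over the sets of |sum of the +1/-1 colors of its elements|."""
    return max(abs(sum(coloring[x] for x in s)) for s in sets)

def best_random_coloring(sets, ground_set, trials=1000, seed=0):
    """Baseline: smallest discrepancy among `trials` uniformly random colorings."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(trials):
        coloring = {x: rng.choice((-1, 1)) for x in ground_set}
        best = min(best, discrepancy(sets, coloring))
    return best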

Cite as

Nicholas J. A. Harvey, Roy Schwartz, and Mohit Singh. Discrepancy Without Partial Colorings. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 258-273, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{harvey_et_al:LIPIcs.APPROX-RANDOM.2014.258,
  author =	{Harvey, Nicholas J. A. and Schwartz, Roy and Singh, Mohit},
  title =	{{Discrepancy Without Partial Colorings}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{258--273},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.258},
  URN =		{urn:nbn:de:0030-drops-47014},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.258},
  annote =	{Keywords: Combinatorial Discrepancy, Brownian Motion, Semi-Definite Programming, Randomized Algorithm}
}
Document
Universal Factor Graphs for Every NP-Hard Boolean CSP

Authors: Shlomo Jozeph


Abstract
An instance of a Boolean constraint satisfaction problem can be divided into two parts. One part, which we refer to as the factor graph of the instance, specifies for each clause the set of variables associated with the clause. The other part specifies, for each of the given clauses, which constraint is evaluated on the respective variables. Depending on the allowed choices of constraints, it is known that Boolean constraint satisfaction problems fall into one of two classes, being either NP-hard or in P. This paper shows that every NP-hard Boolean constraint satisfaction problem (except for an easy-to-characterize set of natural exceptions) has a universal factor graph. That is, for every NP-hard Boolean constraint satisfaction problem, there is a family with at most one factor graph of each size, such that the problem, restricted to instances that have a factor graph from this family, cannot be solved in polynomial time unless NP is contained in P/poly. Moreover, we extend this classification to one that establishes hardness of approximation.
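
The split of an instance into "factor graph plus constraints" can be written down directly. The small Python sketch below uses that split to evaluate an assignment; the data-structure choices and the example clauses are assumptions made for illustration only.

from typing import Callable, Dict, List, Tuple

# Factor graph: for each clause, the tuple of variables it touches.
FactorGraph = List[Tuple[str, ...]]
# The other part of the instance: for each clause, the Boolean constraint
# evaluated on those variables.
Constraints = List[Callable[..., bool]]

def satisfies(factor_graph: FactorGraph,
              constraints: Constraints,
              assignment: Dict[str, bool]) -> bool:
    """Check whether an assignment satisfies every clause of the instance."""
    return all(constraint(*(assignment[v] for v in scope))
               for scope, constraint in zip(factor_graph, constraints))

# The same factor graph paired with two different constraint choices.
fg = [("x", "y"), ("y", "z")]
or_clauses = [lambda a, b: a or b, lambda a, b: a or b]
xor_clauses = [lambda a, b: a != b, lambda a, b: a != b]
assignment = {"x": True, "y": True, "z": False}
print(satisfies(fg, or_clauses, assignment))   # True
print(satisfies(fg, xor_clauses, assignment))  # False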

Cite as

Shlomo Jozeph. Universal Factor Graphs for Every NP-Hard Boolean CSP. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 274-283, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{jozeph:LIPIcs.APPROX-RANDOM.2014.274,
  author =	{Jozeph, Shlomo},
  title =	{{Universal Factor Graphs for Every NP-Hard Boolean CSP}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{274--283},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.274},
  URN =		{urn:nbn:de:0030-drops-47021},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.274},
  annote =	{Keywords: Hardness of Approximation, Hardness with Preprocessing}
}
Document
A 9/7-Approximation Algorithm for Graphic TSP in Cubic Bipartite Graphs

Authors: Jeremy A. Karp and R. Ravi


Abstract
We prove new results for approximating Graphic TSP. Specifically, we provide a polynomial-time 9/7-approximation algorithm for cubic bipartite graphs and a (9/7+1/(21(k-2)))-approximation algorithm for k-regular bipartite graphs, both of which are improved approximation factors compared to previous results. Our approach involves finding a cycle cover with relatively few cycles, which we are able to do by leveraging the fact that all cycles in bipartite graphs are of even length along with our knowledge of the structure of cubic graphs.

Cite as

Jeremy A. Karp and R. Ravi. A 9/7-Approximation Algorithm for Graphic TSP in Cubic Bipartite Graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 284-296, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{karp_et_al:LIPIcs.APPROX-RANDOM.2014.284,
  author =	{Karp, Jeremy A. and Ravi, R.},
  title =	{{A 9/7-Approximation Algorithm for Graphic TSP in Cubic Bipartite Graphs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{284--296},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.284},
  URN =		{urn:nbn:de:0030-drops-47034},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.284},
  annote =	{Keywords: Approximation algorithms, traveling salesman problem, Barnette’s conjecture, combinatorial optimization}
}
Document
Sherali-Adams Gaps, Flow-cover Inequalities and Generalized Configurations for Capacity-constrained Facility Location

Authors: Stavros G. Kolliopoulos and Yannis Moysoglou


Abstract
Metric facility location is a well-studied problem for which linear programming methods have been used with great success in deriving approximation algorithms. The capacity-constrained generalizations, such as capacitated facility location (CFL) and lower-bounded facility location (LBFL), have proved notorious as far as LP-based approximation is concerned: while there are local-search-based constant-factor approximations, there is no known linear relaxation with constant integrality gap. According to Williamson and Shmoys devising a relaxation-based approximation for CFL is among the top 10 open problems in approximation algorithms. This paper advances significantly the state-of-the-art on the effectiveness of linear programming for capacity-constrained facility location through a host of impossibility results for both CFL and LBFL. We show that the relaxations obtained from the natural LP at Omega(n) levels of the Sherali-Adams hierarchy have an unbounded gap, partially answering an open question from the literature. Here, n denotes the number of facilities in the instance. Building on the ideas for this result, we prove that the standard CFL relaxation enriched with the generalized flow-cover valid inequalities has also an unbounded gap. This disproves a long-standing conjecture of Levi et al. We finally introduce the family of proper relaxations which generalizes to its logical extreme the classic star relaxation and captures general configuration-style LPs. We characterize the behavior of proper relaxations for CFL and LBFL through a sharp threshold phenomenon.

Cite as

Stavros G. Kolliopoulos and Yannis Moysoglou. Sherali-Adams Gaps, Flow-cover Inequalities and Generalized Configurations for Capacity-constrained Facility Location. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 297-312, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{kolliopoulos_et_al:LIPIcs.APPROX-RANDOM.2014.297,
  author =	{Kolliopoulos, Stavros G. and Moysoglou, Yannis},
  title =	{{Sherali-Adams Gaps, Flow-cover Inequalities and Generalized Configurations for Capacity-constrained Facility Location}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{297--312},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.297},
  URN =		{urn:nbn:de:0030-drops-47046},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.297},
  annote =	{Keywords: Approximation Algorithms, Linear Programming, Facility Location}
}
Document
Lower Bounds on Expansions of Graph Powers

Authors: Tsz Chiu Kwok and Lap Chi Lau


Abstract
Given a lazy regular graph G, we prove that the expansion of G^t is at least sqrt(t) times the expansion of G. This bound is tight and can be generalized to small set expansion. We show some applications of this result.

Cite as

Tsz Chiu Kwok and Lap Chi Lau. Lower Bounds on Expansions of Graph Powers. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 313-324, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{kwok_et_al:LIPIcs.APPROX-RANDOM.2014.313,
  author =	{Kwok, Tsz Chiu and Lau, Lap Chi},
  title =	{{Lower Bounds on Expansions of Graph Powers}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{313--324},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.313},
  URN =		{urn:nbn:de:0030-drops-47057},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.313},
  annote =	{Keywords: Conductance, Expansion, Graph power, Random walk}
}
Document
An Improved Approximation Algorithm for the Hard Uniform Capacitated k-median Problem

Authors: Shanfei Li


Abstract
In the k-median problem, given a set of locations, the goal is to select a subset of at most k centers so as to minimize the total cost of connecting each location to its nearest center. We study the uniform hard capacitated version of the k-median problem, in which each selected center can only serve a limited number of locations. Inspired by the algorithm of Charikar, Guha, Tardos and Shmoys, we give an improved approximation algorithm for this problem with increasing the capacities by a constant factor, which improves the previous best approximation algorithm proposed by Byrka, Fleszar, Rybicki and Spoerhase.

Cite as

Shanfei Li. An Improved Approximation Algorithm for the Hard Uniform Capacitated k-median Problem. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 325-338, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{li:LIPIcs.APPROX-RANDOM.2014.325,
  author =	{Li, Shanfei},
  title =	{{An Improved Approximation Algorithm for the Hard Uniform Capacitated k-median Problem}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{325--338},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.325},
  URN =		{urn:nbn:de:0030-drops-47062},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.325},
  annote =	{Keywords: Approximation algorithm; k-median problem; LP-rounding; Hard capacities}
}
Document
Approximation Algorithms for Hypergraph Small Set Expansion and Small Set Vertex Expansion

Authors: Anand Louis and Yury Makarychev


Abstract
The expansion of a hypergraph, a natural extension of the notion of expansion in graphs, is defined as the minimum over all cuts in the hypergraph of the ratio of the number of hyperedges cut to the size of the smaller side of the cut. We study the Hypergraph Small Set Expansion problem, which, for a parameter s with 0 < s < 1/2, asks to compute the cut of least expansion that has at most an s fraction of the vertices on the smaller side of the cut. We present two algorithms. Our first algorithm gives a multiplicative polylogarithmic approximation. Our second algorithm gives a bound that is a function of the expansion of the hypergraph but is independent of the size of the hypergraph. Using these results, we also obtain similar guarantees for the Small Set Vertex Expansion problem.
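
The expansion definition above can be evaluated directly on small hypergraphs. The brute-force Python sketch below is illustrative only (exponential time; the toy hypergraph is hypothetical), with the optional size cap playing the role of the parameter s:

from itertools import combinations

def hypergraph_expansion(n, hyperedges, max_size=None):
    """Brute-force expansion: minimum over cuts (S, V\\S) of
    (# hyperedges with vertices on both sides) / (size of the smaller side)."""
    best = float("inf")
    limit = max_size if max_size is not None else n // 2
    for size in range(1, limit + 1):
        for S in combinations(range(n), size):
            S = set(S)
            cut = sum(1 for e in hyperedges
                      if any(v in S for v in e) and any(v not in S for v in e))
            best = min(best, cut / min(len(S), n - len(S)))
    return best

# Toy 4-vertex hypergraph with three hyperedges.
print(hypergraph_expansion(4, [{0, 1, 2}, {1, 2, 3}, {0, 3}]))  # 1.0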

Cite as

Anand Louis and Yury Makarychev. Approximation Algorithms for Hypergraph Small Set Expansion and Small Set Vertex Expansion. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 339-355, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{louis_et_al:LIPIcs.APPROX-RANDOM.2014.339,
  author =	{Louis, Anand and Makarychev, Yury},
  title =	{{Approximation Algorithms for Hypergraph Small Set Expansion and Small Set Vertex Expansion}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{339--355},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.339},
  URN =		{urn:nbn:de:0030-drops-47079},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.339},
  annote =	{Keywords: Approximation Algorithms, Graph Expansion, Hypergraph Expansion, Vertex Expansion}
}
Document
Robust Appointment Scheduling

Authors: Shashi Mittal, Andreas S. Schulz, and Sebastian Stiller


Abstract
Health care providers are under tremendous pressure to reduce costs and increase quality of their services. It has long been recognized that well-designed appointment systems have the potential to improve utilization of expensive personnel and medical equipment and to reduce waiting times for patients. In a widely influential survey on outpatient scheduling, Cayirli and Veral (2003) concluded that the "biggest challenge for future research will be to develop easy-to-use heuristics." We analyze the appointment scheduling problem from a robust-optimization perspective, and we establish the existence of a closed-form optimal solution--arguably the simplest and best `heuristic' possible. In case the order of patients is changeable, the robust optimization approach yields a novel formulation of the appointment scheduling problem as that of minimizing a concave function over a supermodular polyhedron. We devise the first constant-factor approximation algorithm for this case.

Cite as

Shashi Mittal, Andreas S. Schulz, and Sebastian Stiller. Robust Appointment Scheduling. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 356-370, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{mittal_et_al:LIPIcs.APPROX-RANDOM.2014.356,
  author =	{Mittal, Shashi and Schulz, Andreas S. and Stiller, Sebastian},
  title =	{{Robust Appointment Scheduling}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{356--370},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.356},
  URN =		{urn:nbn:de:0030-drops-47089},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.356},
  annote =	{Keywords: Robust Optimization, Health Care Scheduling, Approximation Algorithms}
}
Document
Computational Complexity of Certifying Restricted Isometry Property

Authors: Abhiram Natarajan and Yi Wu


Abstract
Given a matrix A with n rows, a number k < n, and 0 < delta < 1, A is (k,delta)-RIP (Restricted Isometry Property) if, for any vector x in R^n with at most k non-zero co-ordinates, (1-delta)|x|^2 <= |Ax|^2 <= (1+delta)|x|^2. In many applications, such as compressed sensing and sparse recovery, it is desirable to construct RIP matrices with a large k and a small delta. Given the efficacy of random constructions in generating useful RIP matrices, the problem of certifying the RIP parameters of a matrix has become important. In this paper, we prove that it is hard to approximate the RIP parameters of a matrix assuming the Small-Set-Expansion-Hypothesis. Specifically, we prove that for any arbitrarily large constant C>0 and any arbitrarily small constant 0<delta<1, there exists some k such that given a matrix M, it is SSE-Hard to distinguish the following two cases: i) (Highly RIP) M is (k,delta)-RIP; ii) (Far away from RIP) M is not (k/C,1-delta)-RIP. Most of the previous results on the topic of hardness of RIP certification only hold for certification when delta=o(1). In practice, it is of interest to understand the complexity of certifying a matrix with delta being close to sqrt(2) - 1, as it suffices for many real applications to have matrices with delta=sqrt(2)-1. Our hardness result holds for any constant delta. Specifically, our result proves that even if delta is indeed very small, i.e. the matrix is in fact strongly RIP, certifying that the matrix exhibits weak RIP itself is SSE-Hard. In order to prove the hardness result, we prove a variant of Cheeger's inequality for sparse vectors.
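
The (k,delta)-RIP condition can be checked by brute force on small matrices, since it is equivalent to requiring that, for every support set S of k column indices, the eigenvalues of A_S^T A_S lie in [1-delta, 1+delta]. The sketch below is illustrative only (exponential in k) and uses a random sign matrix as in the random constructions mentioned above:

import numpy as np
from itertools import combinations

def is_rip(A, k, delta):
    """Brute-force (k, delta)-RIP check: for every set S of k columns, all
    eigenvalues of A_S^T A_S must lie in [1-delta, 1+delta], which is equivalent
    to (1-delta)|x|^2 <= |Ax|^2 <= (1+delta)|x|^2 for all x supported on S."""
    n = A.shape[1]
    for S in combinations(range(n), k):
        sub = A[:, list(S)]
        eig = np.linalg.eigvalsh(sub.T @ sub)
        if eig.min() < 1 - delta or eig.max() > 1 + delta:
            return False
    return True

# Random +-1/sqrt(m) matrix (columns have unit norm), small illustrative sizes.
rng = np.random.default_rng(0)
m, n, k = 40, 12, 2
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
print(is_rip(A, k, delta=0.6))   # typically True for this random construction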

Cite as

Abhiram Natarajan and Yi Wu. Computational Complexity of Certifying Restricted Isometry Property. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 371-380, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{natarajan_et_al:LIPIcs.APPROX-RANDOM.2014.371,
  author =	{Natarajan, Abhiram and Wu, Yi},
  title =	{{Computational Complexity of Certifying Restricted Isometry Property}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{371--380},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.371},
  URN =		{urn:nbn:de:0030-drops-47092},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.371},
  annote =	{Keywords: Restricted Isometry Property, RIP, RIP Certification, Sparse Cheeger Inequality, SSE Hard}
}
Document
Gap Amplification for Small-Set Expansion via Random Walks

Authors: Prasad Raghavendra and Tselil Schramm


Abstract
In this work, we achieve gap amplification for the Small-Set Expansion problem. Specifically, we show that an instance of the Small-Set Expansion Problem with completeness epsilon and soundness 1/2 is at least as difficult as Small-Set Expansion with completeness epsilon and soundness f(epsilon), for any function f(epsilon) which grows faster than (epsilon)^(1/2). We achieve this amplification via random walks--the output graph corresponds to taking random walks on the original graph. An interesting feature of our reduction is that unlike gap amplification via parallel repetition, the size of the instances (number of vertices) produced by the reduction remains the same.

Cite as

Prasad Raghavendra and Tselil Schramm. Gap Amplification for Small-Set Expansion via Random Walks. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 381-391, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{raghavendra_et_al:LIPIcs.APPROX-RANDOM.2014.381,
  author =	{Raghavendra, Prasad and Schramm, Tselil},
  title =	{{Gap Amplification for Small-Set Expansion via Random Walks}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{381--391},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.381},
  URN =		{urn:nbn:de:0030-drops-47108},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.381},
  annote =	{Keywords: Gap amplification, Small-Set Expansion, random walks, graph products, Unique Games}
}
Document
Power of Preemption on Uniform Parallel Machines

Authors: Alan J. Soper and Vitaly A. Strusevich


Abstract
For a scheduling problem on parallel machines, the power of preemption is defined as the ratio of the makespan of an optimal non-preemptive schedule over the makespan of an optimal preemptive schedule. For m uniform parallel machines, we give the necessary and sufficient conditions under which the global bound of 2-1/m is tight. If the makespan of the optimal preemptive schedule is determined by the ratio of the total processing time of the r < m longest jobs to the total speed of the r fastest machines, we show that the tight bound on the power of preemption is 2-1/min{r,m-r}.
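
For concreteness, the quantities above can be computed with the classical lower-bound formula for the optimal preemptive makespan on uniform machines; the formula is standard and not specific to this paper, and the job and speed data below are hypothetical:

def preemptive_makespan(jobs, speeds):
    """Optimal preemptive makespan on uniform machines (classical formula):
    the maximum, over r < m, of (sum of r longest jobs)/(sum of r fastest speeds),
    and of the total processing time over the total speed."""
    p = sorted(jobs, reverse=True)
    s = sorted(speeds, reverse=True)
    m = len(s)
    bounds = [sum(p[:r]) / sum(s[:r]) for r in range(1, m)]
    bounds.append(sum(p) / sum(s))
    return max(bounds)

def power_of_preemption_bound(r, m):
    """Tight bound from the abstract when the r-th partial ratio is binding."""
    return 2 - 1 / min(r, m - r)

jobs, speeds = [10.0, 9.0, 2.0, 1.0], [4.0, 2.0, 1.0]
print(preemptive_makespan(jobs, speeds))    # lower bound on any schedule's makespan
print(power_of_preemption_bound(r=2, m=3))  # 2 - 1/min{2,1} = 1.0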

Cite as

Alan J. Soper and Vitaly A. Strusevich. Power of Preemption on Uniform Parallel Machines. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 392-402, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{soper_et_al:LIPIcs.APPROX-RANDOM.2014.392,
  author =	{Soper, Alan J. and Strusevich, Vitaly A.},
  title =	{{Power of Preemption on Uniform Parallel Machines}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{392--402},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.392},
  URN =		{urn:nbn:de:0030-drops-47116},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.392},
  annote =	{Keywords: Machine Scheduling, Uniform Parallel Machines, Power of Preemption}
}
Document
Improved Approximation Algorithms for Matroid and Knapsack Median Problems and Applications

Authors: Chaitanya Swamy


Abstract
We consider the matroid median problem, wherein we are given a set of facilities with opening costs and a matroid on the facility-set, and clients with demands and connection costs, and we seek to open an independent set of facilities and assign clients to open facilities so as to minimize the sum of the facility-opening and client-connection costs. We give a simple 8-approximation algorithm for this problem based on LP-rounding, which improves upon the 16-approximation by Krishnaswamy et al. We illustrate the power and versatility of our techniques by deriving: (a) an 8-approximation for the two-matroid median problem, a generalization of matroid median that we introduce involving two matroids; and (b) a 24-approximation algorithm for matroid median with penalties, which is a vast improvement over the 360-approximation obtained by Krishnaswamy et al. We show that a variety of seemingly disparate facility-location problems considered in the literature -- data placement problem, mobile facility location, k-median forest, metric uniform minimum-latency UFL -- in fact reduce to the matroid median or two-matroid median problems, and thus obtain improved approximation guarantees for all these problems. Our techniques also yield an improvement for the knapsack median problem.

Cite as

Chaitanya Swamy. Improved Approximation Algorithms for Matroid and Knapsack Median Problems and Applications. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 403-418, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{swamy:LIPIcs.APPROX-RANDOM.2014.403,
  author =	{Swamy, Chaitanya},
  title =	{{Improved Approximation Algorithms for Matroid and Knapsack Median Problems and Applications}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{403--418},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.403},
  URN =		{urn:nbn:de:0030-drops-47125},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.403},
  annote =	{Keywords: Approximation algorithms, LP rounding, facility location, matroid and submodular polyhedra, knapsack constraints}
}
Document
Robust Approximation of Temporal CSP

Authors: Suguru Tamaki and Yuichi Yoshida


Abstract
A temporal constraint language G is a set of relations with first-order definitions in (Q; <). Let CSP(G) denote the set of constraint satisfaction problem instances with relations from G. CSP(G) admits robust approximation if, for any e >= 0, given a (1-e)-satisfiable instance of CSP(G), we can compute an assignment that satisfies at least a (1-f(e))-fraction of constraints in polynomial time. Here, f(e) is some function satisfying f(0)=0 and f(e) tends to 0 as e tends to 0. Firstly, we give a qualitative characterization of robust approximability: Assuming the Unique Games Conjecture, we give a necessary and sufficient condition on G under which CSP(G) admits robust approximation. Secondly, we give a quantitative characterization of robust approximability: Assuming the Unique Games Conjecture, we precisely characterize how f(e) depends on e for each G. We show that our robust approximation algorithms can be run in almost linear time.

Cite as

Suguru Tamaki and Yuichi Yoshida. Robust Approximation of Temporal CSP. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 419-432, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{tamaki_et_al:LIPIcs.APPROX-RANDOM.2014.419,
  author =	{Tamaki, Suguru and Yoshida, Yuichi},
  title =	{{Robust Approximation of Temporal CSP}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{419--432},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.419},
  URN =		{urn:nbn:de:0030-drops-47135},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.419},
  annote =	{Keywords: constraint satisfaction, maximum satisfiability, approximation algorithm, hardness of approximation, infinite domain}
}
Document
Parity is Positively Useless

Authors: Cenny Wenner


Abstract
We give the first examples of non-trivially positively-useless predicates subject only to P != NP. In particular, for every constraint function Q : {-1,1}^4 -> R, we construct Constraint Satisfaction Problem (CSP) instances without negations which have value at least 1-eps when evaluated for the arity-four odd-parity predicate, yet it is NP-hard to find a solution with value significantly better than a random biased assignment when evaluated for Q. More generally, we show that all parities except one are positively useless. Although we are not able to exhibit a single protocol producing hard instances when evaluated for every Q, we show that two protocols do the trick. The first protocol is the classical one used by Håstad with a twist. We extend the protocol to multilayered Label Cover and employ a particular distribution over layers in order to limit moments of table biases. The second protocol is a modification of Chan's multi-question protocol where queried tuples of Label Cover vertices are randomized in such a way that the tables can be seen as being independently sampled from a common distribution and in effect having identical expected biases. We believe that our techniques may prove useful in further analyzing the approximability of CSPs without negations.

Cite as

Cenny Wenner. Parity is Positively Useless. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 433-448, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{wenner:LIPIcs.APPROX-RANDOM.2014.433,
  author =	{Wenner, Cenny},
  title =	{{Parity is Positively Useless}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{433--448},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.433},
  URN =		{urn:nbn:de:0030-drops-47144},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.433},
  annote =	{Keywords: Approximation hardness, approximation resistance, parity, usefulness, negations, monotone, constraint satisfaction problems, smoothness, multilayer, L}
}
Document
The Condensation Phase Transition in Random Graph Coloring

Authors: Victor Bapst, Amin Coja-Oghlan, Samuel Hetterich, Felicia Raßmann, and Dan Vilenchik


Abstract
Based on a non-rigorous formalism called the "cavity method", physicists have put forward intriguing predictions on phase transitions in discrete structures. One of the most remarkable ones is that in problems such as random k-SAT or random graph k-coloring, very shortly before the threshold for the existence of solutions there occurs another phase transition called condensation [Krzakala et al., PNAS 2007]. The existence of this phase transition appears to be intimately related to the difficulty of proving precise results on, e.g., the k-colorability threshold as well as to the performance of message passing algorithms. In random graph k-coloring, there is a precise conjecture as to the location of the condensation phase transition in terms of a distributional fixed point problem. In this paper we prove this conjecture for k exceeding a certain constant k0.

Cite as

Victor Bapst, Amin Coja-Oghlan, Samuel Hetterich, Felicia Raßmann, and Dan Vilenchik. The Condensation Phase Transition in Random Graph Coloring. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 449-464, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{bapst_et_al:LIPIcs.APPROX-RANDOM.2014.449,
  author =	{Bapst, Victor and Coja-Oghlan, Amin and Hetterich, Samuel and Ra{\ss}mann, Felicia and Vilenchik, Dan},
  title =	{{The Condensation Phase Transition in Random Graph Coloring}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{449--464},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.449},
  URN =		{urn:nbn:de:0030-drops-47168},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.449},
  annote =	{Keywords: random graphs, graph coloring, phase transitions, message-passing algorithm}
}
Document
The Information Complexity of Hamming Distance

Authors: Eric Blais, Joshua Brody, and Badih Ghazi


Abstract
The Hamming distance function Ham_{n,d} returns 1 on all pairs of inputs x and y that differ in at most d coordinates and returns 0 otherwise. We initiate the study of the information complexity of the Hamming distance function. We give a new optimal lower bound for the information complexity of the Ham_{n,d} function in the small-error regime where the protocol is required to err with probability at most epsilon < d/n. We also give a new conditional lower bound for the information complexity of Ham_{n,d} that is optimal in all regimes. These results imply the first new lower bounds on the communication complexity of the Hamming distance function for the shared randomness two-way communication model since Pang and El-Gamal (1986). These results also imply new lower bounds in the areas of property testing and parity decision tree complexity.
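
The function under study is simple to state in code; the sketch below is only a restatement of the definition on bit strings:

def ham(x, y, d):
    """Ham_{n,d}(x, y): 1 if the n-bit strings x and y differ in at most d
    coordinates, 0 otherwise."""
    assert len(x) == len(y)
    return int(sum(a != b for a, b in zip(x, y)) <= d)

print(ham("10110", "10011", d=2))  # the strings differ in 2 positions -> 1
print(ham("10110", "10011", d=1))  # -> 0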

Cite as

Eric Blais, Joshua Brody, and Badih Ghazi. The Information Complexity of Hamming Distance. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 465-489, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{blais_et_al:LIPIcs.APPROX-RANDOM.2014.465,
  author =	{Blais, Eric and Brody, Joshua and Ghazi, Badih},
  title =	{{The Information Complexity of Hamming Distance}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{465--489},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.465},
  URN =		{urn:nbn:de:0030-drops-47174},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.465},
  annote =	{Keywords: Hamming distance, communication complexity, information complexity}
}
Document
An Approximate Version of the Tree Packing Conjecture via Random Embeddings

Authors: Julia Böttcher, Jan Hladký, Diana Piguet, and Anusch Taraz


Abstract
We prove that for any pair of constants a>0 and D and for n sufficiently large, every family of trees of orders at most n, maximum degrees at most D, and with at most n(n-1)/2 edges in total packs into the complete graph of order (1+a)n. This implies asymptotic versions of the Tree Packing Conjecture of Gyarfas from 1976 and a tree packing conjecture of Ringel from 1963 for trees with bounded maximum degree. A novel random tree embedding process combined with the nibble method forms the core of the proof.

Cite as

Julia Böttcher, Jan Hladký, Diana Piguet, and Anusch Taraz. An Approximate Version of the Tree Packing Conjecture via Random Embeddings. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 490-499, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{bottcher_et_al:LIPIcs.APPROX-RANDOM.2014.490,
  author =	{B\"{o}ttcher, Julia and Hladk\'{y}, Jan and Piguet, Diana and Taraz, Anusch},
  title =	{{An Approximate Version of the Tree Packing Conjecture via Random Embeddings}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{490--499},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.490},
  URN =		{urn:nbn:de:0030-drops-47184},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.490},
  annote =	{Keywords: tree packing conjecture, Ringel’s conjecture, random walks, quasirandom graphs}
}
Document
On Sharp Thresholds in Random Geometric Graphs

Authors: Milan Bradonjic and Will Perkins


Abstract
We give a characterization of vertex-monotone properties with sharp thresholds in a Poisson random geometric graph or hypergraph. As an application we show that a geometric model of random k-SAT exhibits a sharp threshold for satisfiability.

Cite as

Milan Bradonjic and Will Perkins. On Sharp Thresholds in Random Geometric Graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 500-514, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{bradonjic_et_al:LIPIcs.APPROX-RANDOM.2014.500,
  author =	{Bradonjic, Milan and Perkins, Will},
  title =	{{On Sharp Thresholds in Random Geometric Graphs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{500--514},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.500},
  URN =		{urn:nbn:de:0030-drops-47195},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.500},
  annote =	{Keywords: Sharp thresholds, random geometric graphs, k-SAT}
}
Document
Average Case Polyhedral Complexity of the Maximum Stable Set Problem

Authors: Gábor Braun, Samuel Fiorini, and Sebastian Pokutta


Abstract
We study the minimum number of constraints needed to formulate random instances of the maximum stable set problem via LPs (more precisely, linear extended formulations), in two distinct models. In the uniform model, the constraints of the LP are not allowed to depend on the input graph, which should be encoded solely in the objective function. There we prove a super-polynomial lower bound with overwhelming probability for every LP that is exact for a randomly selected set of instances with a natural distribution. In the non-uniform model, the constraints of the LP may depend on the input graph, but we allow weights on the vertices. The input graph is sampled according to the Erdős-Rényi model. There we obtain upper and lower bounds holding with high probability for various ranges of p. We obtain a super-polynomial lower bound all the way from essentially p = polylog(n) / n to p = 1 / log n. Our upper bound is close as there is only an essentially quadratic gap in the exponent, which also exists in the worst case model. Finally, we state a conjecture to close the gap both in the average-case and worst-case models.

Cite as

Gábor Braun, Samuel Fiorini, and Sebastian Pokutta. Average Case Polyhedral Complexity of the Maximum Stable Set Problem. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 515-530, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{braun_et_al:LIPIcs.APPROX-RANDOM.2014.515,
  author =	{Braun, G\'{a}bor and Fiorini, Samuel and Pokutta, Sebastian},
  title =	{{Average Case Polyhedral Complexity of the Maximum Stable Set Problem}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{515--530},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.515},
  URN =		{urn:nbn:de:0030-drops-47201},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.515},
  annote =	{Keywords: polyhedral approximation, extended formulation, stable sets}
}
Document
An Optimal Algorithm for Large Frequency Moments Using O(n^(1-2/k)) Bits

Authors: Vladimir Braverman, Jonathan Katzman, Charles Seidell, and Gregory Vorsanger


Abstract
In this paper, we provide the first optimal algorithm for the remaining open question from the seminal paper of Alon, Matias, and Szegedy: approximating large frequency moments. We give an upper bound on the space required to find a k-th frequency moment of O(n^(1-2/k)) bits that matches, up to a constant factor, the lower bound of Woodruff et al. for constant epsilon and constant k. Our algorithm makes a single pass over the stream and works for any constant k > 3. It is based upon two major technical accomplishments: first, we provide an optimal algorithm for finding the heavy elements in a stream; and second, we provide a technique using Martingale Sketches which gives an optimal reduction of the large frequency moment problem to the all heavy elements problem. We also provide a polylogarithmic improvement for frequency moments, frequency based functions, spatial data streams, and measuring independence of data sets.
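
For reference, the k-th frequency moment being approximated is F_k = sum_i f_i^k, where f_i counts the occurrences of item i in the stream. The exact computation below is illustrative only (it is not the paper's algorithm) and uses space proportional to the number of distinct items, which is exactly what the O(n^(1-2/k))-bit streaming algorithm avoids:

from collections import Counter

def frequency_moment(stream, k):
    """Exact k-th frequency moment F_k = sum over items i of f_i^k."""
    counts = Counter(stream)
    return sum(f ** k for f in counts.values())

print(frequency_moment([1, 2, 2, 3, 3, 3], k=4))  # 1^4 + 2^4 + 3^4 = 98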

Cite as

Vladimir Braverman, Jonathan Katzman, Charles Seidell, and Gregory Vorsanger. An Optimal Algorithm for Large Frequency Moments Using O(n^(1-2/k)) Bits. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 531-544, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{braverman_et_al:LIPIcs.APPROX-RANDOM.2014.531,
  author =	{Braverman, Vladimir and Katzman, Jonathan and Seidell, Charles and Vorsanger, Gregory},
  title =	{{An Optimal Algorithm for Large Frequency Moments Using O(n^(1-2/k)) Bits}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{531--544},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.531},
  URN =		{urn:nbn:de:0030-drops-47217},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.531},
  annote =	{Keywords: Streaming Algorithms, Randomized Algorithms, Frequency Moments, Heavy Hitters}
}
Document
Certifying Equality With Limited Interaction

Authors: Joshua Brody, Amit Chakrabarti, Ranganath Kondapally, David P. Woodruff, and Grigory Yaroslavtsev


Abstract
The EQUALITY problem is usually one’s first encounter with communication complexity and is one of the most fundamental problems in the field. Although its deterministic and randomized communication complexity were settled decades ago, we find several new things to say about the problem by focusing on three subtle aspects. The first is to consider the expected communication cost (at a worst-case input) for a protocol that uses limited interaction—i.e., a bounded number of rounds of communication—and whose error probability is zero or close to it. The second is to treat the false negative error rate separately from the false positive error rate. The third is to consider the information cost of such protocols. We obtain asymptotically optimal rounds-versus-cost tradeoffs for EQUALITY: both expected communication cost and information cost scale as Theta(log log ... log n), with r-1 logs, where r is the number of rounds. These bounds hold even when the false negative rate approaches 1. For the case of zero-error communication cost, we obtain essentially matching bounds, up to a tiny additive constant. We also provide some applications.

Cite as

Joshua Brody, Amit Chakrabarti, Ranganath Kondapally, David P. Woodruff, and Grigory Yaroslavtsev. Certifying Equality With Limited Interaction. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 545-581, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{brody_et_al:LIPIcs.APPROX-RANDOM.2014.545,
  author =	{Brody, Joshua and Chakrabarti, Amit and Kondapally, Ranganath and Woodruff, David P. and Yaroslavtsev, Grigory},
  title =	{{Certifying Equality With Limited Interaction}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{545--581},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.545},
  URN =		{urn:nbn:de:0030-drops-47229},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.545},
  annote =	{Keywords: equality, communication complexity, information complexity}
}
Document
#BIS-Hardness for 2-Spin Systems on Bipartite Bounded Degree Graphs in the Tree Non-uniqueness Region

Authors: Jin-Yi Cai, Andreas Galanis, Leslie Ann Goldberg, Heng Guo, Mark Jerrum, Daniel Stefankovic, and Eric Vigoda


Abstract
Counting independent sets on bipartite graphs (#BIS) is considered a canonical counting problem of intermediate approximation complexity. It is conjectured that #BIS neither has an FPRAS nor is as hard as #SAT to approximate. We study #BIS in the general framework of two-state spin systems in bipartite graphs. Such a system is parameterized by three numbers (beta,gamma,lambda), where beta (respectively gamma) represents the weight of an edge (or "interaction strength") whose endpoints are of the same 0 (respectively 1) spin, and lambda is the weight of a 1 vertex, also known as an "external field". By convention, the edge weight with unequal 0/1 end points and the vertex weight with spin 0 are both normalized to 1. The partition function of the special case beta=1, gamma=0, and lambda=1 counts the number of independent sets. We define two notions, nearly-independent phase-correlated spins and symmetry breaking. We prove that it is #BIS-hard to approximate the partition function of any two-spin system on bipartite graphs supporting these two notions. As a consequence, we show that #BIS on graphs of degree at most 6 is as hard to approximate as #BIS without degree bound. The degree bound 6 is the best possible as Weitz presented an FPTAS to count independent sets on graphs of maximum degree 5. This result extends to the hard-core model and to other anti-ferromagnetic two-spin models. In particular, for all antiferromagnetic two-spin systems, namely those satisfying beta*gamma<1, we prove that when the infinite (Delta-1)-ary tree lies in the non-uniqueness region then it is #BIS-hard to approximate the partition function on bipartite graphs of maximum degree Delta, except for the case beta=gamma and lambda=1. The exceptional case is precisely the antiferromagnetic Ising model without an external field, and we show that it has an FPRAS on bipartite graphs. Our inapproximability results match the approximability results of Li et al., who presented an FPTAS for general graphs of maximum degree Delta when the parameters lie in the uniqueness region.
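
The partition function described above can be written out by brute force on a tiny graph (illustrative sketch only; exponential time). With beta=1, gamma=0, lambda=1 it reduces to counting independent sets, as the abstract notes:

from itertools import product

def partition_function(vertices, edges, beta, gamma, lam):
    """Brute-force partition function of a two-state spin system: sum over all
    0/1 spin assignments of
      lam^(# spin-1 vertices) * beta^(# edges with both endpoints 0)
                              * gamma^(# edges with both endpoints 1);
    edges with unequal endpoints and spin-0 vertices contribute weight 1."""
    Z = 0.0
    for sigma in product([0, 1], repeat=len(vertices)):
        spin = dict(zip(vertices, sigma))
        m00 = sum(1 for u, v in edges if spin[u] == 0 and spin[v] == 0)
        m11 = sum(1 for u, v in edges if spin[u] == 1 and spin[v] == 1)
        Z += lam ** sum(sigma) * beta ** m00 * gamma ** m11
    return Z

# 4-cycle: with beta=1, gamma=0, lambda=1 this counts its 7 independent sets.
C4 = (["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")])
print(partition_function(*C4, beta=1, gamma=0, lam=1))  # 7.0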

Cite as

Jin-Yi Cai, Andreas Galanis, Leslie Ann Goldberg, Heng Guo, Mark Jerrum, Daniel Stefankovic, and Eric Vigoda. #BIS-Hardness for 2-Spin Systems on Bipartite Bounded Degree Graphs in the Tree Non-uniqueness Region. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 582-595, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{cai_et_al:LIPIcs.APPROX-RANDOM.2014.582,
  author =	{Cai, Jin-Yi and Galanis, Andreas and Goldberg, Leslie Ann and Guo, Heng and Jerrum, Mark and Stefankovic, Daniel and Vigoda, Eric},
  title =	{{#BIS-Hardness for 2-Spin Systems on Bipartite Bounded Degree Graphs in the Tree Non-uniqueness Region}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{582--595},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.582},
  URN =		{urn:nbn:de:0030-drops-47235},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.582},
  annote =	{Keywords: Spin systems, approximate counting, complexity, #BIS-hardness, phase transition}
}
Document
The Power of Super-logarithmic Number of Players

Authors: Arkadev Chattopadhyay and Michael E. Saks


Abstract
In the `Number-on-Forehead' (NOF) model of multiparty communication, the input is a k times m boolean matrix A (where k is the number of players) and Player i sees all bits except those in the i-th row, and the players communicate by broadcast in order to evaluate a specified function f at A. We discover new computational power when k exceeds log m. We give a protocol with communication cost poly-logarithmic in m, for block composed functions with limited block width. These are functions of the form f o g where f is a symmetric b-variate function, and g is a (kr)-variate function, and (f o g)(A) is defined, for a k times (br) matrix A, to be f(g(A_1),...,g(A_b)), where A_i is the i-th (k times r) block of A. Our protocol works provided that k > 1 + ln(b) + 2^r. Ada et al. (ICALP'2012) previously obtained simultaneous and deterministic efficient protocols for composed functions of block-width one. The new protocol is the first to work for block composed functions with block-width greater than one. Moreover, it is simultaneous, with vanishingly small error probability, if public coin randomness is allowed. The deterministic and zero-error version barely uses interaction.

Cite as

Arkadev Chattopadhyay and Michael E. Saks. The Power of Super-logarithmic Number of Players. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 596-603, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{chattopadhyay_et_al:LIPIcs.APPROX-RANDOM.2014.596,
  author =	{Chattopadhyay, Arkadev and Saks, Michael E.},
  title =	{{The Power of Super-logarithmic Number of Players}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{596--603},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.596},
  URN =		{urn:nbn:de:0030-drops-47243},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.596},
  annote =	{Keywords: Communication complexity, Number-On-Forehead model, composed functions}
}
Document
On Reconstructing a Hidden Permutation

Authors: Flavio Chierichetti, Anirban Dasgupta, Ravi Kumar, and Silvio Lattanzi


Abstract
The Mallows model is a classical model for generating noisy perturbations of a hidden permutation, where the magnitude of the perturbations is determined by a single parameter. In this work we consider the following reconstruction problem: given several perturbations of a hidden permutation that are generated according to the Mallows model, each with its own parameter, how to recover the hidden permutation? When the parameters are approximately known and satisfy certain conditions, we obtain a simple algorithm for reconstructing the hidden permutation; we also show that these conditions are nearly inevitable for reconstruction. We then provide an algorithm to estimate the parameters themselves. En route we obtain a precise characterization of the swapping probability in the Mallows model.
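
For orientation, one common parameterization of the Mallows model assigns each ranking a probability proportional to q^(Kendall tau distance to the hidden permutation), for a noise parameter 0 < q < 1. The brute-force sketch below (illustrative only; it is not the reconstruction algorithm of the paper) makes that concrete for tiny permutations:

from itertools import permutations

def kendall_tau(pi, sigma):
    """Number of pairs of items ordered differently by the two permutations."""
    pos_pi = {v: i for i, v in enumerate(pi)}
    pos_sig = {v: i for i, v in enumerate(sigma)}
    items = list(pi)
    return sum(1 for i in range(len(items)) for j in range(i + 1, len(items))
               if (pos_pi[items[i]] - pos_pi[items[j]]) *
                  (pos_sig[items[i]] - pos_sig[items[j]]) < 0)

def mallows_pmf(hidden, q):
    """Mallows model: P(pi) proportional to q^(Kendall distance to the hidden
    permutation); smaller q concentrates the mass near the hidden ranking."""
    weights = {pi: q ** kendall_tau(pi, hidden) for pi in permutations(hidden)}
    Z = sum(weights.values())
    return {pi: w / Z for pi, w in weights.items()}

pmf = mallows_pmf(hidden=(1, 2, 3), q=0.3)
print(max(pmf, key=pmf.get))   # the hidden permutation (1, 2, 3) is the mode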

Cite as

Flavio Chierichetti, Anirban Dasgupta, Ravi Kumar, and Silvio Lattanzi. On Reconstructing a Hidden Permutation. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 604-617, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{chierichetti_et_al:LIPIcs.APPROX-RANDOM.2014.604,
  author =	{Chierichetti, Flavio and Dasgupta, Anirban and Kumar, Ravi and Lattanzi, Silvio},
  title =	{{On Reconstructing a Hidden Permutation}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{604--617},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.604},
  URN =		{urn:nbn:de:0030-drops-47251},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.604},
  annote =	{Keywords: Mallows model; Rank aggregation; Reconstruction}
}
Document
Two Sides of the Coin Problem

Authors: Gil Cohen, Anat Ganor, and Ran Raz


Abstract
In the coin problem, one is given n independent flips of a coin that has bias b > 0 towards either Head or Tail. The goal is to decide which side the coin is biased towards, with high confidence. An optimal strategy for solving the coin problem is to apply the majority function on the n samples. This simple strategy works as long as b > c/sqrt(n) for some constant c. However, computing majority is an impossible task for several natural computational models, such as bounded width read once branching programs and AC^0 circuits. Brody and Verbin proved that a length n, width w read once branching program cannot solve the coin problem for b < O(1/(log n)^w). This result was tightened by Steinberger to O(1/(log n)^(w-2)). The coin problem in the model of AC^0 circuits was first studied by Shaltiel and Viola, and later by Aaronson who proved that a depth d size s Boolean circuit cannot solve the coin problem for b < O(1/(log s)^(d+2)). This work has two contributions: 1. We strengthen Steinberger's result and show that any Santha-Vazirani source with bias b < O(1/(log n)^(w-2)) fools length n, width w read once branching programs. In other words, the strong independence assumption in the coin problem is completely redundant in the model of read once branching programs, assuming the bias remains small. That is, the exact same result holds for a much more general class of sources. 2. We tighten Aaronson's result and show that a depth d, size s Boolean circuit cannot solve the coin problem for b < O(1/(log s)^(d-1)). Moreover, our proof technique is different and we believe that it is simpler and more natural.
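
The majority strategy discussed above is easy to simulate; the sketch below (illustrative only; the sample sizes are arbitrary) estimates its success probability for biases of order 1/sqrt(n):

import random

def majority_decides(n, b, trials=2000, seed=1):
    """Empirical success rate of the majority strategy for the coin problem:
    given n flips of a coin with bias b towards Heads (heads probability 1/2 + b),
    guess 'Heads' iff heads are in the majority."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        heads = sum(rng.random() < 0.5 + b for _ in range(n))
        correct += heads > n / 2
    return correct / trials

n = 10_000
print(majority_decides(n, b=3 / n ** 0.5))    # b = 3/sqrt(n): success rate near 1
print(majority_decides(n, b=0.1 / n ** 0.5))  # bias too small: barely above 1/2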

Cite as

Gil Cohen, Anat Ganor, and Ran Raz. Two Sides of the Coin Problem. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 618-629, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{cohen_et_al:LIPIcs.APPROX-RANDOM.2014.618,
  author =	{Cohen, Gil and Ganor, Anat and Raz, Ran},
  title =	{{Two Sides of the Coin Problem}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{618--629},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.618},
  URN =		{urn:nbn:de:0030-drops-47265},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.618},
  annote =	{Keywords: bounded depth circuits, read once branching programs, Santha-Vazirani sources, the coin problem}
}
Document
Absorption Time of the Moran Process

Authors: Josep Díaz, Leslie Ann Goldberg, David Richerby, and Maria Serna


Abstract
The Moran process models the spread of mutations in populations on graphs. We investigate the absorption time of the process, which is the time taken for a mutation introduced at a randomly chosen vertex to either spread to the whole population, or to become extinct. It is known that the expected absorption time for an advantageous mutation is polynomial on an n-vertex undirected graph, which allows the behaviour of the process on undirected graphs to be analysed using the Markov chain Monte Carlo method. We show that this does not extend to directed graphs by exhibiting an infinite family of directed graphs for which the expected absorption time is exponential in the number of vertices. However, for regular directed graphs, we give an n log n lower bound and an explicit quadratic upper bound on the expected absorption time. We exhibit families of graphs matching these bounds and give improved bounds for other families of graphs, based on the isoperimetric number. Our results are obtained via stochastic domination, which we demonstrate by establishing a coupling in a related continuous-time model. The coupling also implies several natural domination results regarding the fixation probability of the original (discrete-time) process, resolving a conjecture of Shakarian, Roos and Johnson.
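
The discrete-time Moran process itself is straightforward to simulate; the following sketch (illustrative only, not the paper's coupling argument) runs the process with mutant fitness r on a small undirected graph and reports the absorption time of a single run:

import random

def moran_absorption_time(adj, r=2.0, seed=0):
    """Simulate one run of the Moran process: mutants have fitness r, residents
    fitness 1.  In each step a vertex reproduces with probability proportional to
    its fitness and its offspring replaces a uniformly random neighbour.
    Returns (steps until absorption, True if the mutation fixated)."""
    rng = random.Random(seed)
    n = len(adj)
    mutant = [False] * n
    mutant[rng.randrange(n)] = True            # mutation at a random vertex
    steps = 0
    while 0 < sum(mutant) < n:
        weights = [r if mutant[v] else 1.0 for v in range(n)]
        u = rng.choices(range(n), weights=weights)[0]   # reproducing vertex
        v = rng.choice(adj[u])                          # replaced neighbour
        mutant[v] = mutant[u]
        steps += 1
    return steps, all(mutant)

# Undirected 6-cycle given as adjacency lists.
cycle6 = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]
print(moran_absorption_time(cycle6, r=2.0))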

Cite as

Josep Díaz, Leslie Ann Goldberg, David Richerby, and Maria Serna. Absorption Time of the Moran Process. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 630-642, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{diaz_et_al:LIPIcs.APPROX-RANDOM.2014.630,
  author =	{D{\'\i}az, Josep and Goldberg, Leslie Ann and Richerby, David and Serna, Maria},
  title =	{{Absorption Time of the Moran Process}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{630--642},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.630},
  URN =		{urn:nbn:de:0030-drops-47279},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.630},
  annote =	{Keywords: Moran Process}
}
Document
Sampling a Uniform Solution of a Quadratic Equation Modulo a Prime Power

Authors: Chandan Dubey and Thomas Holenstein


Abstract
Let p be a prime and k, t be positive integers. Given a quadratic equation Q(x1,x2,...,xn)=t mod p^k in n variables, we present a polynomial-time Las Vegas algorithm that samples a uniformly random solution of the equation.
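
As a toy illustration only (exponential in n, in contrast to the paper's polynomial-time Las Vegas algorithm), one can enumerate all solutions for tiny parameters and pick one uniformly; representing Q by a coefficient matrix A with Q(x) = x^T A x mod p^k is an assumption made for the example.

import itertools
import random

def uniform_solution_bruteforce(A, t, p, k):
    """Enumerate all x in (Z/p^k)^n with x^T A x = t mod p^k and return one uniformly."""
    m = p ** k
    n = len(A)
    def Q(x):
        return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) % m
    solutions = [x for x in itertools.product(range(m), repeat=n) if Q(x) == t % m]
    return random.choice(solutions) if solutions else None

if __name__ == "__main__":
    A = [[1, 0, 0],
         [0, 1, 0],
         [0, 0, 1]]          # Q(x1,x2,x3) = x1^2 + x2^2 + x3^2
    print(uniform_solution_bruteforce(A, t=2, p=3, k=2))   # one solution mod 9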

Cite as

Chandan Dubey and Thomas Holenstein. Sampling a Uniform Solution of a Quadratic Equation Modulo a Prime Power. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 643-653, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{dubey_et_al:LIPIcs.APPROX-RANDOM.2014.643,
  author =	{Dubey, Chandan and Holenstein, Thomas},
  title =	{{Sampling a Uniform Solution of a Quadratic Equation Modulo a Prime Power}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{643--653},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.643},
  URN =		{urn:nbn:de:0030-drops-47289},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.643},
  annote =	{Keywords: Quadratic Forms, Lattices, Modular, p-adic}
}
Document
Unidirectional Input/Output Streaming Complexity of Reversal and Sorting

Authors: Nathanaël François, Rahul Jain, and Frédéric Magniez


Abstract
We consider unidirectional data streams with restricted access, such as read-only and write-only streams. For read-write streams, we also introduce a new complexity measure called expansion, the ratio between the space used on the stream and the input size. We give tight bounds for the complexity of reversing a stream of length n in several of the possible models. In the read-only and write-only model, we show that p-pass algorithms need memory space Theta(n/p). But if either the output stream or the input stream is read-write, then the complexity falls to Theta(n/p^2). It becomes polylog(n) if p = O(log n) and both streams are read-write. We also study the complexity of sorting a stream and give two algorithms with small expansion. Our main sorting algorithm is randomized and has O(1) expansion, O(log n) passes and O(log n) memory.
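
A minimal sketch of the natural blocking strategy that achieves O(n/p) memory in the read-only input / write-only output setting (streams are modelled as Python lists; this illustrates the upper-bound side of the Theta(n/p) statement, not necessarily the paper's algorithm).

import math

def reverse_stream(input_stream, p):
    """Reverse a read-only stream onto a write-only stream in p passes,
    buffering only ceil(n/p) items at a time."""
    n = len(input_stream)
    block = math.ceil(n / p)
    output_stream = []
    for j in range(p):
        lo = max(n - (j + 1) * block, 0)
        hi = n - j * block
        buffer = []
        for i, item in enumerate(input_stream):      # one sequential pass over the input
            if lo <= i < hi:
                buffer.append(item)
        output_stream.extend(reversed(buffer))       # write-only append
        if lo == 0:
            break
    return output_stream

if __name__ == "__main__":
    data = list(range(10))
    assert reverse_stream(data, p=3) == data[::-1]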

Cite as

Nathanaël François, Rahul Jain, and Frédéric Magniez. Unidirectional Input/Output Streaming Complexity of Reversal and Sorting. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 654-668, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{francois_et_al:LIPIcs.APPROX-RANDOM.2014.654,
  author =	{Fran\c{c}ois, Nathana\"{e}l and Jain, Rahul and Magniez, Fr\'{e}d\'{e}ric},
  title =	{{Unidirectional Input/Output Streaming Complexity of Reversal and Sorting}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{654--668},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.654},
  URN =		{urn:nbn:de:0030-drops-47298},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.654},
  annote =	{Keywords: Streaming Algorithms, Multiple Streams, Reversal, Sorting}
}
Document
Improved Lower Bounds for Testing Triangle-freeness in Boolean Functions via Fast Matrix Multiplication

Authors: Hu Fu and Robert Kleinberg


Abstract
Understanding the query complexity for testing linear-invariant properties has been a central open problem in the study of algebraic property testing. Triangle-freeness in Boolean functions is a simple property whose testing complexity is unknown. Three Boolean functions f1, f2 and f3, mapping {0,1}^k to {0,1}, are said to be triangle free if there is no x, y in {0,1}^k such that f1(x) = f2(y) = f3(x + y) = 1. This property is known to be strongly testable (Green 2005), but the number of queries needed is upper-bounded only by a tower of twos whose height is polynomial in 1 / epsilon, where epsilon is the distance between the tested function triple and triangle-freeness, i.e., the minimum fraction of function values that need to be modified to make the triple triangle free. A lower bound of (1 / epsilon)^2.423 for any one-sided tester was given by Bhattacharyya and Xie (2010). In this work we improve this bound to (1 / epsilon)^6.619. Interestingly, we prove this by way of a combinatorial construction called uniquely solvable puzzles that was at the heart of Coppersmith and Winograd's renowned matrix multiplication algorithm.
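
For illustration, the natural one-sided test suggested by the definition samples random pairs x, y and rejects only on finding a witness; the number of repetitions below is an arbitrary choice, not the query complexity analysed in the paper.

import random

def triangle_free_tester(f1, f2, f3, k, repetitions=1000):
    """One-sided test: reject only if a triangle f1(x) = f2(y) = f3(x XOR y) = 1 is found."""
    for _ in range(repetitions):
        x = random.getrandbits(k)
        y = random.getrandbits(k)
        if f1(x) == 1 and f2(y) == 1 and f3(x ^ y) == 1:
            return False          # a triangle was found: definitely not triangle free
    return True                   # no violation observed

if __name__ == "__main__":
    k = 8
    point = (1 << k) - 1
    f = lambda x: 1 if x == point else 0   # indicator of the all-ones point
    # f1 = f2 = f3 = f is triangle free, since point XOR point = 0 != point.
    print(triangle_free_tester(f, f, f, k))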

Cite as

Hu Fu and Robert Kleinberg. Improved Lower Bounds for Testing Triangle-freeness in Boolean Functions via Fast Matrix Multiplication. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 669-676, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{fu_et_al:LIPIcs.APPROX-RANDOM.2014.669,
  author =	{Fu, Hu and Kleinberg, Robert},
  title =	{{Improved Lower Bounds for Testing Triangle-freeness in Boolean Functions via Fast Matrix Multiplication}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{669--676},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.669},
  URN =		{urn:nbn:de:0030-drops-47304},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.669},
  annote =	{Keywords: Property testing, linear invariance, fast matrix multiplication, uniquely solvable puzzles}
}
Document
Ferromagnetic Potts Model: Refined #BIS-hardness and Related Results

Authors: Andreas Galanis, Daniel Stefankovic, Eric Vigoda, and Linji Yang


Abstract
Recent results establish for the hard-core model (and more generally for 2-spin antiferromagnetic systems) that the computational complexity of approximating the partition function on graphs of maximum degree D undergoes a phase transition that coincides with the uniqueness/non-uniqueness phase transition on the infinite D-regular tree. For the ferromagnetic Potts model we investigate whether analogous hardness results hold. Goldberg and Jerrum showed that approximating the partition function of the ferromagnetic Potts model is at least as hard as approximating the number of independent sets in bipartite graphs, so-called #BIS-hardness. We improve this hardness result by establishing it for bipartite graphs of maximum degree D. To this end, we first present a detailed picture for the phase diagram for the infinite D-regular tree, giving a refined picture of its first-order phase transition and establishing the critical temperature for the coexistence of the disordered and ordered phases. We then prove for all temperatures below this critical temperature (corresponding to the region where the ordered phase "dominates") that it is #BIS-hard to approximate the partition function on bipartite graphs of maximum degree D. The #BIS-hardness result uses random bipartite regular graphs as a gadget in the reduction. The analysis of these random graphs relies on recent results establishing connections between the maxima of the expectation of their partition function, attractive fixpoints of the associated tree recursions, and induced matrix norms. In this paper we extend these connections to random regular graphs for all ferromagnetic models. Using these connections, we establish the Bethe prediction for every ferromagnetic spin system on random regular graphs, which says roughly that the expectation of the log of the partition function Z is the same as the log of the expectation of Z. As a further consequence of our results, we prove for the ferromagnetic Potts model that the Swendsen-Wang algorithm is torpidly mixing (i.e., exponentially slow convergence to its stationary distribution) on random D-regular graphs at the critical temperature for sufficiently large q.
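
As a reference point (illustration only), the partition function in question can be computed by brute force on very small graphs: Z is the sum over q-colourings of B^(number of monochromatic edges), with edge weight B > 1 in the ferromagnetic regime. The graph, q and B below are arbitrary choices.

import itertools

def potts_partition_function(vertices, edges, q, B):
    """Brute-force Z of the q-state ferromagnetic Potts model (exponential in |V|)."""
    Z = 0.0
    for colouring in itertools.product(range(q), repeat=len(vertices)):
        mono = sum(1 for (u, v) in edges if colouring[u] == colouring[v])
        Z += B ** mono
    return Z

if __name__ == "__main__":
    vertices = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle
    print(potts_partition_function(vertices, edges, q=3, B=2.0))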

Cite as

Andreas Galanis, Daniel Stefankovic, Eric Vigoda, and Linji Yang. Ferromagnetic Potts Model: Refined #BIS-hardness and Related Results. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 677-691, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{galanis_et_al:LIPIcs.APPROX-RANDOM.2014.677,
  author =	{Galanis, Andreas and Stefankovic, Daniel and Vigoda, Eric and Yang, Linji},
  title =	{{Ferromagnetic Potts Model: Refined #BIS-hardness and Related Results}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{677--691},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.677},
  URN =		{urn:nbn:de:0030-drops-47319},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.677},
  annote =	{Keywords: Ferromagnetic Potts model, approximate counting, spin systems, phase transition, random regular graphs}
}
Document
Space Pseudorandom Generators by Communication Complexity Lower Bounds

Authors: Anat Ganor and Ran Raz


Abstract
In 1989, Babai, Nisan and Szegedy gave a construction of a pseudorandom generator for logspace, based on lower bounds for multiparty communication complexity. The seed length of their pseudorandom generator was relatively large, because the best lower bounds for multiparty communication complexity are relatively weak. Subsequently, pseudorandom generators for logspace with seed length O(log^2 n) were given by Nisan, and Impagliazzo, Nisan and Wigderson. In this paper, we show how to use the pseudorandom generator construction of Babai, Nisan and Szegedy to obtain a third construction of a pseudorandom generator with seed length O(log^2 n), achieving the same parameters as Nisan, and Impagliazzo, Nisan and Wigderson. We achieve this by concentrating on protocols in a restricted model of multiparty communication complexity that we call the conservative one-way unicast model, which is based on the conservative one-way model of Damm, Jukna and Sgall. We observe that bounds in the conservative one-way unicast model (rather than the standard Number On the Forehead model) are sufficient for the pseudorandom generator construction of Babai, Nisan and Szegedy to work. Roughly speaking, in a conservative one-way unicast communication protocol, the players speak in turns, one after the other in a fixed order, and every message is visible only to the next player. Moreover, before the beginning of the protocol, each player only knows the inputs of the players that speak after she does and a certain function of the inputs of the players that speak before she does. We prove a lower bound for the communication complexity of conservative one-way unicast communication protocols that compute a family of functions obtained by compositions of strong extractors. Our final pseudorandom generator construction is related to, but different from, the constructions of Nisan, and Impagliazzo, Nisan and Wigderson.

Cite as

Anat Ganor and Ran Raz. Space Pseudorandom Generators by Communication Complexity Lower Bounds. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 692-703, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{ganor_et_al:LIPIcs.APPROX-RANDOM.2014.692,
  author =	{Ganor, Anat and Raz, Ran},
  title =	{{Space Pseudorandom Generators by Communication Complexity Lower Bounds}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{692--703},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.692},
  URN =		{urn:nbn:de:0030-drops-47324},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.692},
  annote =	{Keywords: Communication complexity, Logspace, Pseudorandom generator}
}
Document
On Multiple Input Problems in Property Testing

Authors: Oded Goldreich


Abstract
We consider three types of multiple input problems in the context of property testing. Specifically, for a property Pi (of n-bit long strings), a proximity parameter epsilon, and an integer m, we consider the following problems: (1) Direct m-Sum Problem for Pi and epsilon: Given a sequence of m inputs, output a sequence of m bits such that for each i in [m] the i-th bit satisfies the requirements from an epsilon-tester for Pi regarding the i-th input; that is, for each i, the i-th output bit should be 1 (w.p. at least 2/3) if the i-th input is in Pi, and should be 0 (w.p. at least 2/3) if the i-th input is epsilon-far from Pi. (2) Direct m-Product Problem for Pi and epsilon: Given a sequence of m inputs, output 1 (w.p. at least 2/3) if all inputs are in Pi, and output 0 (w.p. at least 2/3) if at least one of the inputs is epsilon-far from Pi. (3) The m-Concatenation Problem for Pi and epsilon: Here one is required to epsilon-test the m-product of Pi; that is, the property that consists of the m-wise Cartesian product of Pi. We show that the query complexity of the first two problems is Theta(m) times the query complexity of epsilon-testing Pi, whereas (except in pathological cases) the query complexity of the third problem is almost of the same order of magnitude as the query complexity of the problem of epsilon-testing Pi. All upper bounds are shown via efficient reductions. We also consider the nonadaptive and one-sided error versions of these problems. The only significant deviation from the picture in the general (adaptive and two-sided error) model is that the one-sided error query complexity of the Direct Product Problem equals Theta(m) times the (two-sided error) query complexity of epsilon-testing Pi plus Theta(1) times the one-sided error query complexity of epsilon-testing Pi.

Cite as

Oded Goldreich. On Multiple Input Problems in Property Testing. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 704-720, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{goldreich:LIPIcs.APPROX-RANDOM.2014.704,
  author =	{Goldreich, Oded},
  title =	{{On Multiple Input Problems in Property Testing}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{704--720},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.704},
  URN =		{urn:nbn:de:0030-drops-47336},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.704},
  annote =	{Keywords: Property Testing, Direct Sum Theorems, Direct Product Theorems, Adaptive vs Nonadaptive queries, One-Sided Error vs Two-Sided Error}
}
Document
Communication Complexity of Set-Disjointness for All Probabilities

Authors: Mika Göös and Thomas Watson


Abstract
We study set-disjointness in a generalized model of randomized two-party communication where the probability of acceptance must be at least alpha(n) on yes-inputs and at most beta(n) on no-inputs, for some functions alpha(n)>beta(n). Our main result is a complete characterization of the private-coin communication complexity of set-disjointness for all functions alpha and beta, and a near-complete characterization for public-coin protocols. In particular, we obtain a simple proof of a theorem of Braverman and Moitra (STOC 2013), who studied the case where alpha=1/2+epsilon(n) and beta=1/2-epsilon(n). The following contributions play a crucial role in our characterization and are interesting in their own right. (1) We introduce two communication analogues of the classical complexity class that captures small bounded-error computations: we define a "restricted" class SBP (which lies between MA and AM) and an "unrestricted" class USBP. The distinction between them is analogous to the distinction between the well-known communication classes PP and UPP. (2) We show that the SBP communication complexity is precisely captured by the classical corruption lower bound method. This sharpens a theorem of Klauck (CCC 2003). (3) We use information complexity arguments to prove a linear lower bound on the USBP complexity of set-disjointness.

Cite as

Mika Göös and Thomas Watson. Communication Complexity of Set-Disjointness for All Probabilities. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 721-736, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{goos_et_al:LIPIcs.APPROX-RANDOM.2014.721,
  author =	{G\"{o}\"{o}s, Mika and Watson, Thomas},
  title =	{{Communication Complexity of Set-Disjointness for All Probabilities}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{721--736},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.721},
  URN =		{urn:nbn:de:0030-drops-47342},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.721},
  annote =	{Keywords: Communication Complexity, Set-Disjointness, All Probabilities}
}
Document
List Decoding Group Homomorphisms Between Supersolvable Groups

Authors: Alan Guo and Madhu Sudan


Abstract
We show that the set of homomorphisms between two supersolvable groups can be locally list decoded up to the minimum distance of the code, extending the results of Dinur et al. (Proc. STOC 2008) who studied the case where the groups are abelian. Moreover, when specialized to the abelian case, our proof is more streamlined and gives a better constant in the exponent of the list size. The constant is improved from about 3.5 million to 105.

Cite as

Alan Guo and Madhu Sudan. List Decoding Group Homomorphisms Between Supersolvable Groups. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 737-747, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{guo_et_al:LIPIcs.APPROX-RANDOM.2014.737,
  author =	{Guo, Alan and Sudan, Madhu},
  title =	{{List Decoding Group Homomorphisms Between Supersolvable Groups}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{737--747},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.737},
  URN =		{urn:nbn:de:0030-drops-47359},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.737},
  annote =	{Keywords: Group theory, error-correcting codes, locally decodable codes}
}
Document
Evading Subspaces Over Large Fields and Explicit List-decodable Rank-metric Codes

Authors: Venkatesan Guruswami and Carol Wang


Abstract
We construct an explicit family of linear rank-metric codes over any field F that enables efficient list decoding up to a fraction rho of errors in the rank metric with a rate of 1-rho-eps, for any desired rho in (0,1) and eps > 0. Previously, a Monte Carlo construction of such codes was known, but this is in fact the first explicit construction of positive rate rank-metric codes for list decoding beyond the unique decoding radius. Our codes are explicit subcodes of the well-known Gabidulin codes, which encode linearized polynomials of low degree via their values at a collection of linearly independent points. The subcode is picked by restricting the message polynomials to an F-subspace that evades certain structured subspaces over an extension field of F. These structured spaces arise from the linear-algebraic list decoder for Gabidulin codes due to Guruswami and Xing (STOC'13). Our construction is obtained by combining subspace designs constructed by Guruswami and Kopparty (FOCS'13) with subspace-evasive varieties due to Dvir and Lovett (STOC'12). We establish a similar result for subspace codes, which are a collection of subspaces, every pair of which have low-dimensional intersection, and which have received much attention recently in the context of network coding. We also give explicit subcodes of folded Reed-Solomon (RS) codes with small folding order that are list-decodable (in the Hamming metric) with optimal redundancy, motivated by the fact that list decoding RS codes reduces to list decoding such folded RS codes. However, as we only list decode a subcode of these codes, the Johnson radius continues to be the best known error fraction for list decoding RS codes.
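
For readers unfamiliar with the rank metric, the following sketch (illustration only, over F_2) computes the rank distance between two codewords viewed as matrices, i.e. the rank of their difference; rows are encoded as integer bitmasks and the example matrices are arbitrary.

def rank_gf2(rows):
    """Rank over F_2 of a matrix given as a list of integer row bitmasks."""
    rank = 0
    rows = list(rows)
    width = max(rows).bit_length() if rows else 0
    for col in range(width):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]        # eliminate the pivot column
        rank += 1
    return rank

def rank_distance(A, B):
    """Rank-metric distance: rank of the (entrywise XOR) difference of A and B."""
    return rank_gf2([a ^ b for a, b in zip(A, B)])

if __name__ == "__main__":
    A = [0b101, 0b011, 0b110]   # rows of a 3x3 matrix over F_2
    B = [0b100, 0b011, 0b111]
    print(rank_distance(A, B))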

Cite as

Venkatesan Guruswami and Carol Wang. Evading Subspaces Over Large Fields and Explicit List-decodable Rank-metric Codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 748-761, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{guruswami_et_al:LIPIcs.APPROX-RANDOM.2014.748,
  author =	{Guruswami, Venkatesan and Wang, Carol},
  title =	{{Evading Subspaces Over Large Fields and Explicit List-decodable Rank-metric Codes}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{748--761},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.748},
  URN =		{urn:nbn:de:0030-drops-47361},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.748},
  annote =	{Keywords: list-decoding, pseudorandomness, algebraic coding, explicit constructions}
}
Document
Exchangeability and Realizability: De Finetti Theorems on Graphs

Authors: T. S. Jayram and Jan Vondrák


Abstract
A classic result in probability theory known as de Finetti's theorem states that exchangeable random variables are equivalent to a mixture of distributions where each distribution is determined by an i.i.d. sequence of random variables (an "i.i.d. mix"). Motivated by a recent application and more generally by the relationship of local vs. global correlation in randomized rounding, we study weaker notions of exchangeability that still imply the conclusion of de Finetti's theorem. We say that a bivariate distribution rho is G-realizable for a graph G if there exists a joint distribution of random variables on the vertices such that the marginal distribution on each edge equals rho. We first characterize completely the G-realizable distributions for all symmetric/arc-transitive graphs G. Our main results are forms of de Finetti's theorem for general graphs, based on spectral properties. Let lambda_1(G) >= ... >= lambda_n(G) denote the eigenvalues of the adjacency matrix of G. 1. We prove that if rho is G_n-realizable for a sequence of graphs such that lambda_n(G_n) / lambda_1(G_n) tends to 0, then rho is described by a probability matrix that is positive-semidefinite. For random variables on domains of size |D| <= 4, this implies that rho must be an i.i.d. mix. 2. If rho is G_n-realizable for a sequence of (n,d,lambda)-graphs G_n (d-regular with all eigenvalues except for one bounded by lambda in absolute value) such that lambda(G_n) / d(G_n) tends to 0, then rho is an i.i.d. mix. 3. If rho is G_n-realizable for a sequence of directed graphs such that each of them is an arbitrary orientation of an (n,d,lambda)-graph G_n, and lambda(G_n) / d(G_n) tends to 0, then rho is an i.i.d. mix.

Cite as

T. S. Jayram and Jan Vondrák. Exchangeability and Realizability: De Finetti Theorems on Graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 762-778, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{jayram_et_al:LIPIcs.APPROX-RANDOM.2014.762,
  author =	{Jayram, T. S. and Vondr\'{a}k, Jan},
  title =	{{Exchangeability and Realizability: De Finetti Theorems on Graphs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{762--778},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.762},
  URN =		{urn:nbn:de:0030-drops-47375},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.762},
  annote =	{Keywords: exchangeability, de Finetti’s Theorem, spectral graph theory, regularity lemma}
}
Document
Global and Local Information in Clustering Labeled Block Models

Authors: Varun Kanade, Elchanan Mossel, and Tselil Schramm


Abstract
The stochastic block model is a classical cluster-exhibiting random graph model that has been widely studied in statistics, physics and computer science. In its simplest form, the model is a random graph with two equal-sized clusters, with intra-cluster edge probability p, and inter-cluster edge probability q. We focus on the sparse case, i.e. p, q = O(1/n), which is practically more relevant and also mathematically more challenging. A conjecture of Decelle, Krzakala, Moore and Zdeborova, based on ideas from statistical physics, predicted a specific threshold for clustering. The negative direction of the conjecture was proved by Mossel, Neeman and Sly (2012), and more recently the positive direction was proven independently by Massoulie and Mossel, Neeman, and Sly. In many real network clustering problems, nodes contain information as well. We study the interplay between node and network information in clustering by studying a labeled block model, where in addition to the edge information, the true cluster labels of a small fraction of the nodes are revealed. In the case of two clusters, we show that below the threshold, a small amount of node information does not affect recovery. On the other hand, we show that for any small amount of information efficient local clustering is achievable as long as the number of clusters is sufficiently large (as a function of the amount of revealed information).
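
A minimal sketch (illustration only) of the labelled block model described above: a sparse two-cluster stochastic block model with intra-cluster edge probability a/n and inter-cluster probability b/n, together with the true labels of a delta fraction of the nodes revealed; all parameter values are arbitrary.

import random

def labelled_block_model(n, a, b, delta):
    """Generate a two-cluster SBM with revealed labels for a delta fraction of nodes."""
    labels = [i % 2 for i in range(n)]          # two equal-sized clusters (n even)
    random.shuffle(labels)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            p = a / n if labels[u] == labels[v] else b / n
            if random.random() < p:
                edges.append((u, v))
    revealed = {v: labels[v] for v in random.sample(range(n), int(delta * n))}
    return edges, revealed, labels

if __name__ == "__main__":
    edges, revealed, _ = labelled_block_model(n=1000, a=5.0, b=1.0, delta=0.01)
    print(len(edges), len(revealed))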

Cite as

Varun Kanade, Elchanan Mossel, and Tselil Schramm. Global and Local Information in Clustering Labeled Block Models. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 779-792, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{kanade_et_al:LIPIcs.APPROX-RANDOM.2014.779,
  author =	{Kanade, Varun and Mossel, Elchanan and Schramm, Tselil},
  title =	{{Global and Local Information in Clustering Labeled Block Models}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{779--792},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.779},
  URN =		{urn:nbn:de:0030-drops-47384},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.779},
  annote =	{Keywords: stochastic block models, information flow on trees}
}
Document
Embedding Hard Learning Problems Into Gaussian Space

Authors: Adam Klivans and Pravesh Kothari


Abstract
We give the first representation-independent hardness result for agnostically learning halfspaces with respect to the Gaussian distribution. We reduce from the problem of learning sparse parities with noise with respect to the uniform distribution on the hypercube (sparse LPN), a notoriously hard problem in theoretical computer science, and show that any algorithm for agnostically learning halfspaces requires n^Omega(log(1/epsilon)) time under the assumption that k-sparse LPN requires n^Omega(k) time, ruling out a polynomial time algorithm for the problem. As far as we are aware, this is the first representation-independent hardness result for supervised learning when the underlying distribution is restricted to be a Gaussian. We also show that the problem of agnostically learning sparse polynomials with respect to the Gaussian distribution in polynomial time is as hard as PAC learning DNFs on the uniform distribution in polynomial time. This complements the surprising result of Andoni et al. (2013), who show that sparse polynomials are learnable under random Gaussian noise in polynomial time. Taken together, these results show the inherent difficulty of designing supervised learning algorithms in Euclidean space even in the presence of strong distributional assumptions. Our results use a novel embedding of random labeled examples from the uniform distribution on the Boolean hypercube into random labeled examples from the Gaussian distribution that allows us to relate the hardness of learning problems on two different domains and distributions.
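
To make the source problem of the reduction concrete, the sketch below generates k-sparse LPN examples: a uniform x in {0,1}^n labelled by the parity of x restricted to a hidden k-subset, flipped with noise rate eta (illustration only; the parameters are arbitrary).

import random

def sparse_lpn_samples(n, k, eta, num_samples):
    """Generate labelled examples of a hidden k-sparse parity with noise rate eta."""
    secret = set(random.sample(range(n), k))      # hidden k-sparse parity
    samples = []
    for _ in range(num_samples):
        x = [random.randint(0, 1) for _ in range(n)]
        label = sum(x[i] for i in secret) % 2
        if random.random() < eta:
            label ^= 1                            # noisy label
        samples.append((x, label))
    return secret, samples

if __name__ == "__main__":
    secret, samples = sparse_lpn_samples(n=50, k=3, eta=0.1, num_samples=5)
    print(sorted(secret), samples[0][1])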

Cite as

Adam Klivans and Pravesh Kothari. Embedding Hard Learning Problems Into Gaussian Space. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 793-809, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{klivans_et_al:LIPIcs.APPROX-RANDOM.2014.793,
  author =	{Klivans, Adam and Kothari, Pravesh},
  title =	{{Embedding Hard Learning Problems Into Gaussian Space}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{793--809},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.793},
  URN =		{urn:nbn:de:0030-drops-47391},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.793},
  annote =	{Keywords: distribution-specific hardness of learning, gaussian space, halfspace-learning, agnostic learning}
}
Document
Smoothed Analysis on Connected Graphs

Authors: Michael Krivelevich, Daniel Reichman, and Wojciech Samotij


Abstract
The main paradigm of smoothed analysis on graphs suggests that for any large graph G in a certain class of graphs, slightly perturbing the edges of G at random (usually adding few random edges to G) typically results in a graph having much "nicer" properties. In this work we study smoothed analysis on trees or, equivalently, on connected graphs. Given an n-vertex connected graph G, form a random supergraph G* of G by turning every pair of vertices of G into an edge with probability epsilon/n, where epsilon is a small positive constant. This perturbation model has been studied previously in several contexts, including smoothed analysis, small world networks, and combinatorics. Connected graphs can be bad expanders, can have very large diameter, and possibly contain no long paths. In contrast, we show that if G is an n-vertex connected graph then typically G* has edge expansion Omega(1/(log n)), diameter O(log n), vertex expansion Omega(1/(log n)), and contains a path of length Omega(n), where for the last two properties we additionally assume that G has bounded maximum degree. Moreover, we show that if G has bounded degeneracy, then typically the mixing time of the lazy random walk on G* is O(log^2(n)). All these results are asymptotically tight.
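
A minimal sketch (illustration only) of the perturbation model: every pair of vertices of a connected graph is turned into an edge with probability eps/n, and a statistic of the perturbed graph, here the diameter via BFS, is measured; the path graph and eps are arbitrary choices.

import random
from collections import deque

def perturb(adj, eps):
    """Add each non-edge (and keep each edge) independently with probability eps/n."""
    n = len(adj)
    new_adj = {v: set(nb) for v, nb in adj.items()}
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < eps / n:
                new_adj[u].add(v)
                new_adj[v].add(u)
    return new_adj

def diameter(adj):
    """Exact diameter of a connected graph via BFS from every vertex."""
    def bfs(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return max(dist.values())
    return max(bfs(v) for v in adj)

if __name__ == "__main__":
    n = 500
    path = {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
    print(diameter(path), diameter(perturb(path, eps=1.0)))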

Cite as

Michael Krivelevich, Daniel Reichman, and Wojciech Samotij. Smoothed Analysis on Connected Graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 810-825, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{krivelevich_et_al:LIPIcs.APPROX-RANDOM.2014.810,
  author =	{Krivelevich, Michael and Reichman, Daniel and Samotij, Wojciech},
  title =	{{Smoothed Analysis on Connected Graphs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{810--825},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.810},
  URN =		{urn:nbn:de:0030-drops-47407},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.810},
  annote =	{Keywords: Random walks and Markov chains, Random network models}
}
Document
Local Algorithms for Sparse Spanning Graphs

Authors: Reut Levi, Dana Ron, and Ronitt Rubinfeld


Abstract
We initiate the study of the problem of designing sublinear-time (local) algorithms that, given an edge (u,v) in a connected graph G=(V,E), decide whether (u,v) belongs to a sparse spanning graph G' = (V,E') of G. Namely, G' should be connected and |E'| should be upper bounded by (1+epsilon)|V| for a given parameter epsilon > 0. To this end the algorithms may query the incidence relation of the graph G, and we seek algorithms whose query complexity and running time (per given edge (u,v)) is as small as possible. Such an algorithm may be randomized but (for a fixed choice of its random coins) its decision on different edges in the graph should be consistent with the same spanning graph G' and independent of the order of queries. We first show that for general (bounded-degree) graphs, the query complexity of any such algorithm must be Omega(sqrt{|V|}). This lower bound holds for graphs that have high expansion. We then turn to design and analyze algorithms both for graphs with high expansion (obtaining a result that roughly matches the lower bound) and for graphs that are (strongly) non-expanding (obtaining results in which the complexity does not depend on |V|). The complexity of the problem for graphs that do not fall into these two categories is left as an open question.

Cite as

Reut Levi, Dana Ron, and Ronitt Rubinfeld. Local Algorithms for Sparse Spanning Graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 826-842, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{levi_et_al:LIPIcs.APPROX-RANDOM.2014.826,
  author =	{Levi, Reut and Ron, Dana and Rubinfeld, Ronitt},
  title =	{{Local Algorithms for Sparse Spanning Graphs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{826--842},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.826},
  URN =		{urn:nbn:de:0030-drops-47410},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.826},
  annote =	{Keywords: local, spanning graph, sparse}
}
Document
The Complexity of Ferromagnetic Two-spin Systems with External Fields

Authors: Jingcheng Liu, Pinyan Lu, and Chihao Zhang


Abstract
We study the approximability of computing the partition function for ferromagnetic two-state spin systems. The remarkable algorithm by Jerrum and Sinclair showed that there is a fully polynomial-time randomized approximation scheme (FPRAS) for the special ferromagnetic Ising model with any given uniform external field. Later, Goldberg and Jerrum proved that it is #BIS-hard for the Ising model if we allow inconsistent external fields on different nodes. In contrast to these two results, we prove that for any ferromagnetic two-state spin system except the Ising model, there exists a threshold for external fields beyond which the problem is #BIS-hard, even if the external field is uniform.

Cite as

Jingcheng Liu, Pinyan Lu, and Chihao Zhang. The Complexity of Ferromagnetic Two-spin Systems with External Fields. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 843-856, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{liu_et_al:LIPIcs.APPROX-RANDOM.2014.843,
  author =	{Liu, Jingcheng and Lu, Pinyan and Zhang, Chihao},
  title =	{{The Complexity of Ferromagnetic Two-spin Systems with External Fields}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{843--856},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.843},
  URN =		{urn:nbn:de:0030-drops-47428},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.843},
  annote =	{Keywords: Spin System, #BIS-hard, FPRAS}
}
Document
It’s a Small World for Random Surfers

Authors: Abbas Mehrabian and Nick Wormald


Abstract
We prove logarithmic upper bounds for the diameters of the random-surfer Webgraph model and the PageRank-based selection Webgraph model, confirming the small-world phenomenon holds for them. In the special case when the generated graph is a tree, we get close lower and upper bounds for the diameters of both models.

Cite as

Abbas Mehrabian and Nick Wormald. It’s a Small World for Random Surfers. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 857-871, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{mehrabian_et_al:LIPIcs.APPROX-RANDOM.2014.857,
  author =	{Mehrabian, Abbas and Wormald, Nick},
  title =	{{It’s a Small World for Random Surfers}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{857--871},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.857},
  URN =		{urn:nbn:de:0030-drops-47437},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.857},
  annote =	{Keywords: random-surfer webgraph model, PageRank-based selection model, smallworld phenomenon, height of random trees, probabilistic analysis, large deviations}
}
Document
Deterministic Coupon Collection and Better Strong Dispersers

Authors: Raghu Meka, Omer Reingold, and Yuan Zhou


Abstract
Hashing is one of the main techniques in data processing and algorithm design for very large data sets. While random hash functions satisfy most desirable properties, it is often too expensive to store a fully random hash function. Motivated by this, much attention has been given to designing small families of hash functions suitable for various applications. In this work, we study the question of designing space-efficient hash families H = {h:[U] -> [N]} with the natural property of 'covering': H is said to be covering if any set of Omega(N log N) distinct items from the universe (the "coupon-collector limit") are hashed to cover all N bins by most hash functions in H. We give an explicit covering family H of size poly(N) (which is optimal), so that hash functions in H can be specified efficiently by O(log N) bits. We build covering hash functions by drawing a connection to "dispersers", which are quite well-studied and have a variety of applications themselves. We in fact need strong dispersers and we give new constructions of strong dispersers which may be of independent interest. Specifically, we construct strong dispersers with optimal entropy loss in the high min-entropy, but very small error (poly(n)/2^n for n bit sources) regimes. We also provide a strong disperser construction with constant error but for any min-entropy. Our constructions achieve these by using part of the source to replace seed from previous non-strong constructions in surprising ways. In doing so, we take two of the few constructions of dispersers with parameters better than known extractors and make them strong.
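
An empirical illustration of the covering property (with a fully random hash function, which is exactly what the paper's explicit poly(N)-size family avoids storing): hash roughly c * N * log N items into N bins and check that every bin is hit; the constant c = 3 is an arbitrary choice.

import math
import random

def covers(num_items, N):
    """Hash num_items distinct items into N bins with a fully random hash
    (each item gets an independent uniform bin) and report full coverage."""
    hit = [False] * N
    for _ in range(num_items):
        hit[random.randrange(N)] = True
    return all(hit)

if __name__ == "__main__":
    N = 1000
    num_items = int(3 * N * math.log(N))
    trials = 100
    print(sum(covers(num_items, N) for _ in range(trials)) / trials)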

Cite as

Raghu Meka, Omer Reingold, and Yuan Zhou. Deterministic Coupon Collection and Better Strong Dispersers. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 872-884, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{meka_et_al:LIPIcs.APPROX-RANDOM.2014.872,
  author =	{Meka, Raghu and Reingold, Omer and Zhou, Yuan},
  title =	{{Deterministic Coupon Collection and Better Strong Dispersers}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{872--884},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.872},
  URN =		{urn:nbn:de:0030-drops-47440},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.872},
  annote =	{Keywords: Coupon collection; dispersers, strong dispersers, hashing, pseudorandomness}
}
Document
Pseudorandomness and Fourier Growth Bounds for Width-3 Branching Programs

Authors: Thomas Steinke, Salil Vadhan, and Andrew Wan


Abstract
We present an explicit pseudorandom generator for oblivious, read-once, width-3 branching programs, which can read their input bits in any order. The generator has seed length O~(log^3 n). The previously best known seed length for this model is n^{1/2+o(1)} due to Impagliazzo, Meka, and Zuckerman (FOCS'12). Our work generalizes a recent result of Reingold, Steinke, and Vadhan (RANDOM'13) for permutation branching programs. The main technical novelty underlying our generator is a new bound on the Fourier growth of width-3, oblivious, read-once branching programs. Specifically, we show that for any f : {0,1}^n -> {0,1} computed by such a branching program, and k in [n], sum_{|s|=k} |hat{f}(s)| < n^2 * (O(log n))^k, where f(x) = sum_s hat{f}(s) (-1)^<s,x> is the standard Fourier transform over Z_2^n. The base O(log n) of the Fourier growth is tight up to a factor of log log n.
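
The quantity bounded in the Fourier-growth result can be computed by brute force for very small n, as in the sketch below: hat f(s) = E_x[f(x) * (-1)^<s,x>] and the level-k mass is the sum of |hat f(s)| over |s| = k (illustration only; the example function, majority on 3 bits, is an arbitrary choice).

from itertools import product

def fourier_coefficient(f, n, s):
    """hat f(s) = average over x in {0,1}^n of f(x) * (-1)^<s,x>."""
    total = 0.0
    for x in product((0, 1), repeat=n):
        dot = sum(si * xi for si, xi in zip(s, x))
        total += f(x) * (-1) ** dot
    return total / 2 ** n

def level_k_mass(f, n, k):
    """Sum of |hat f(s)| over all s of Hamming weight k."""
    return sum(abs(fourier_coefficient(f, n, s))
               for s in product((0, 1), repeat=n) if sum(s) == k)

if __name__ == "__main__":
    n = 3
    maj = lambda x: 1 if sum(x) >= 2 else 0
    for k in range(n + 1):
        print(k, level_k_mass(maj, n, k))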

Cite as

Thomas Steinke, Salil Vadhan, and Andrew Wan. Pseudorandomness and Fourier Growth Bounds for Width-3 Branching Programs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 885-899, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{steinke_et_al:LIPIcs.APPROX-RANDOM.2014.885,
  author =	{Steinke, Thomas and Vadhan, Salil and Wan, Andrew},
  title =	{{Pseudorandomness and Fourier Growth Bounds for Width-3 Branching Programs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{885--899},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.885},
  URN =		{urn:nbn:de:0030-drops-47456},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.885},
  annote =	{Keywords: Pseudorandomness, Branching Programs, Discrete Fourier Analysis}
}
