{"@context":"https:\/\/schema.org\/","@type":"PublicationVolume","@id":"#volume6243","volumeNumber":40,"name":"Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX\/RANDOM 2015)","dateCreated":"2015-08-13","datePublished":"2015-08-13","editor":[{"@type":"Person","name":"Garg, Naveen","givenName":"Naveen","familyName":"Garg"},{"@type":"Person","name":"Jansen, Klaus","givenName":"Klaus","familyName":"Jansen"},{"@type":"Person","name":"Rao, Anup","givenName":"Anup","familyName":"Rao"},{"@type":"Person","name":"Rolim, Jos\u00e9 D. P.","givenName":"Jos\u00e9 D. P.","familyName":"Rolim"}],"isAccessibleForFree":true,"publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":{"@type":"Periodical","@id":"#series116","name":"Leibniz International Proceedings in Informatics","issn":"1868-8969","isAccessibleForFree":true,"publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","hasPart":"#volume6243"},"hasPart":[{"@type":"ScholarlyArticle","@id":"#article8038","name":"LIPIcs, Volume 40, APPROX\/RANDOM'15, Complete Volume","abstract":"LIPIcs, Volume 40, APPROX\/RANDOM'15, Complete Volume","keywords":"Data Structures, Coding and Information Theory, Theory of Computation, Computation by Abstract Devices, Modes of Computation, Complexity Measures and Problem Complexity, Numerical Algorithms and Problems, Nonnumerical Algorithms and Problems, Approximation, Numerical Linear Algorithms and Problems","author":[{"@type":"Person","name":"Garg, Naveen","givenName":"Naveen","familyName":"Garg"},{"@type":"Person","name":"Jansen, Klaus","givenName":"Klaus","familyName":"Jansen"},{"@type":"Person","name":"Rao, Anup","givenName":"Anup","familyName":"Rao"},{"@type":"Person","name":"Rolim, Jos\u00e9 D. P.","givenName":"Jos\u00e9 D. 
P.","familyName":"Rolim"}],"position":-1,"pageStart":0,"pageEnd":0,"dateCreated":"2015-08-27","datePublished":"2015-08-27","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Garg, Naveen","givenName":"Naveen","familyName":"Garg"},{"@type":"Person","name":"Jansen, Klaus","givenName":"Klaus","familyName":"Jansen"},{"@type":"Person","name":"Rao, Anup","givenName":"Anup","familyName":"Rao"},{"@type":"Person","name":"Rolim, Jos\u00e9 D. P.","givenName":"Jos\u00e9 D. P.","familyName":"Rolim"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8039","name":"Frontmatter, Table of Contents, Preface, Program Committees, External Reviewers, List of Authors","abstract":"Frontmatter, Table of Contents, Preface, Program Committees, External Reviewers, List of Authors","keywords":["Frontmatter","Table of Contents","Preface","Program Committees","External Reviewers","List of Authors"],"author":[{"@type":"Person","name":"Garg, Naveen","givenName":"Naveen","familyName":"Garg"},{"@type":"Person","name":"Jansen, Klaus","givenName":"Klaus","familyName":"Jansen"},{"@type":"Person","name":"Rao, Anup","givenName":"Anup","familyName":"Rao"},{"@type":"Person","name":"Rolim, Jos\u00e9 D. P.","givenName":"Jos\u00e9 D. 
P.","familyName":"Rolim"}],"position":0,"pageStart":"i","pageEnd":"xviii","dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Garg, Naveen","givenName":"Naveen","familyName":"Garg"},{"@type":"Person","name":"Jansen, Klaus","givenName":"Klaus","familyName":"Jansen"},{"@type":"Person","name":"Rao, Anup","givenName":"Anup","familyName":"Rao"},{"@type":"Person","name":"Rolim, Jos\u00e9 D. P.","givenName":"Jos\u00e9 D. P.","familyName":"Rolim"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.i","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8040","name":"On Guillotine Cutting Sequences","abstract":"Imagine a wooden plate with a set of non-overlapping geometric objects painted on it. How many of them can a carpenter cut out using a panel saw making guillotine cuts, i.e., only moving forward through the material along a straight line until it is split into two pieces? Already fifteen years ago, Pach and Tardos investigated whether one can always cut out a constant fraction if all objects are axis-parallel rectangles. However, even for the case of axis-parallel squares this question is still open. In this paper, we answer the latter affirmatively. Our result is constructive and holds even in a more general setting where the squares have weights and the goal is to save as much weight as possible. We further show that solving the more general question for rectangles affirmatively with only axis-parallel cuts would yield a combinatorial O(1)-approximation algorithm for the Maximum Independent Set of Rectangles problem, and would thus solve a long-standing open problem. 
In practical applications, like the mentioned carpentry and many other settings, we can usually place the items freely that we want to cut out, which gives rise to the two-dimensional guillotine knapsack problem: Given a collection of axis-parallel rectangles without presumed coordinates, our goal is to place as many of them as possible in a square-shaped knapsack respecting the constraint that the placed objects can be separated by a sequence of guillotine cuts. Our main result for this problem is a quasi-PTAS, assuming the input data to be quasi-polynomially bounded integers. This factor matches the best known (quasi-polynomial time) result for (non-guillotine) two-dimensional knapsack.","keywords":["Guillotine cuts","Rectangles","Squares","Independent Sets","Packing"],"author":[{"@type":"Person","name":"Abed, Fidaa","givenName":"Fidaa","familyName":"Abed"},{"@type":"Person","name":"Chalermsook, Parinya","givenName":"Parinya","familyName":"Chalermsook"},{"@type":"Person","name":"Correa, Jos\u00e9","givenName":"Jos\u00e9","familyName":"Correa"},{"@type":"Person","name":"Karrenbauer, Andreas","givenName":"Andreas","familyName":"Karrenbauer"},{"@type":"Person","name":"P\u00e9rez-Lantero, Pablo","givenName":"Pablo","familyName":"P\u00e9rez-Lantero"},{"@type":"Person","name":"Soto, Jos\u00e9 A.","givenName":"Jos\u00e9 A.","familyName":"Soto"},{"@type":"Person","name":"Wiese, Andreas","givenName":"Andreas","familyName":"Wiese"}],"position":1,"pageStart":1,"pageEnd":19,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Abed, Fidaa","givenName":"Fidaa","familyName":"Abed"},{"@type":"Person","name":"Chalermsook, Parinya","givenName":"Parinya","familyName":"Chalermsook"},{"@type":"Person","name":"Correa, Jos\u00e9","givenName":"Jos\u00e9","familyName":"Correa"},{"@type":"Person","name":"Karrenbauer, 
Andreas","givenName":"Andreas","familyName":"Karrenbauer"},{"@type":"Person","name":"P\u00e9rez-Lantero, Pablo","givenName":"Pablo","familyName":"P\u00e9rez-Lantero"},{"@type":"Person","name":"Soto, Jos\u00e9 A.","givenName":"Jos\u00e9 A.","familyName":"Soto"},{"@type":"Person","name":"Wiese, Andreas","givenName":"Andreas","familyName":"Wiese"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.1","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8041","name":"Approximate Nearest Neighbor Search in Metrics of Planar Graphs","abstract":"We investigate the problem of approximate Nearest-Neighbor Search (NNS) in graphical metrics: The task is to preprocess an edge-weighted graph G=(V,E) on m vertices and a small \"dataset\" D \\subset V of size n << m, so that given a query point q \\in V, one can quickly approximate dist(q,D) (the distance from q to its closest vertex in D) and find a vertex a \\in D within this approximated distance. We assume the query algorithm has access to a distance oracle, that quickly evaluates the exact distance between any pair of vertices.\r\n\r\nFor planar graphs G with maximum degree Delta, we show how to efficiently construct a compact data structure -- of size ~O(n(Delta+1\/epsilon)) -- that answers (1+epsilon)-NNS queries in time ~O(Delta+1\/epsilon). Thus, as far as NNS applications are concerned, metrics derived from bounded-degree planar graphs behave as low-dimensional metrics, even though planar metrics do not necessarily have a low doubling dimension, nor can they be embedded with low distortion into l_2. 
We complement our algorithmic result by lower bounds showing that the access to an exact distance oracle (rather than an approximate one) and the dependency on Delta (in query time) are both essential.","keywords":["Data Structures","Nearest Neighbor Search","Planar Graphs","Planar Metrics","Planar Separator"],"author":[{"@type":"Person","name":"Abraham, Ittai","givenName":"Ittai","familyName":"Abraham"},{"@type":"Person","name":"Chechik, Shiri","givenName":"Shiri","familyName":"Chechik"},{"@type":"Person","name":"Krauthgamer, Robert","givenName":"Robert","familyName":"Krauthgamer"},{"@type":"Person","name":"Wieder, Udi","givenName":"Udi","familyName":"Wieder"}],"position":2,"pageStart":20,"pageEnd":42,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Abraham, Ittai","givenName":"Ittai","familyName":"Abraham"},{"@type":"Person","name":"Chechik, Shiri","givenName":"Shiri","familyName":"Chechik"},{"@type":"Person","name":"Krauthgamer, Robert","givenName":"Robert","familyName":"Krauthgamer"},{"@type":"Person","name":"Wieder, Udi","givenName":"Udi","familyName":"Wieder"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.20","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8042","name":"How to Tame Rectangles: Solving Independent Set and Coloring of Rectangles via Shrinking","abstract":"In the Maximum Weight Independent Set of Rectangles (MWISR) problem, we are given a collection of weighted axis-parallel rectangles in the plane. Our goal is to compute a maximum weight subset of pairwise non-overlapping rectangles. 
Due to its various applications, as well as connections to many other problems in computer science, MWISR has received a lot of attention from the computational geometry and the approximation algorithms community. However, despite being extensively studied, MWISR remains not very well understood in terms of polynomial time approximation algorithms, as there is a large gap between the upper and lower bounds, i.e., O(log n \/ loglog n) vs. NP-hardness. Another important, poorly understood question is whether one can color rectangles with at most O(omega(R)) colors where omega(R) is the size of a maximum clique in the intersection graph of a set of input rectangles R. Asplund and Gr\u00fcnbaum obtained an upper bound of O(omega(R)^2) about 50 years ago, and the result has remained asymptotically the best known. This question is strongly related to the integrality gap of the canonical LP for MWISR. \r\n\r\nIn this paper, we settle the above three open problems in a relaxed model where we are allowed to shrink the rectangles by a tiny bit (rescaling them by a factor of 1-delta for an arbitrarily small constant delta > 0). Namely, in this model, we show (i) a PTAS for MWISR and (ii) a coloring with O(omega(R)) colors which implies a constant upper bound on the integrality gap of the canonical LP. \r\n\r\nFor some applications of MWISR the possibility to shrink the rectangles has a natural, well-motivated meaning. 
Our results can be seen as an evidence that the shrinking model is a promising way to relax a geometric problem for the purpose of better algorithmic results.","keywords":["Approximation algorithms","independent set","resource augmentation","rectangle intersection graphs","PTAS"],"author":[{"@type":"Person","name":"Adamaszek, Anna","givenName":"Anna","familyName":"Adamaszek"},{"@type":"Person","name":"Chalermsook, Parinya","givenName":"Parinya","familyName":"Chalermsook"},{"@type":"Person","name":"Wiese, Andreas","givenName":"Andreas","familyName":"Wiese"}],"position":3,"pageStart":43,"pageEnd":60,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Adamaszek, Anna","givenName":"Anna","familyName":"Adamaszek"},{"@type":"Person","name":"Chalermsook, Parinya","givenName":"Parinya","familyName":"Chalermsook"},{"@type":"Person","name":"Wiese, Andreas","givenName":"Andreas","familyName":"Wiese"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.43","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8043","name":"Non-Uniform Robust Network Design in Planar Graphs","abstract":"Robust optimization is concerned with constructing solutions that remain feasible also when a limited number of resources is removed from the solution. Most studies of robust combinatorial optimization to date made the assumption that every resource is equally vulnerable, and that the set of scenarios is implicitly given by a single budget constraint. This paper studies a robustness model of a different kind. 
We focus on Bulk-Robustness, a model recently introduced (Adjiashvili, Stiller, Zenklusen 2015) for addressing the need to model non-uniform failure patterns in systems.\r\n\r\nWe significantly extend the techniques used by Adjiashvili et al. to design approximation algorithms for bulk-robust network design problems in planar graphs. Our techniques use an augmentation framework, combined with linear programming (LP) rounding that depends on a planar embedding of the input graph. A connection to cut covering problems and the dominating set problem in circle graphs is established. Our methods use few of the specifics of bulk-robust optimization, hence it is conceivable that they can be adapted to solve other robust network design problems.","keywords":["Robust optimization","Network design","Planar graph","Approximation algorithm","LP rounding"],"author":{"@type":"Person","name":"Adjiashvili, David","givenName":"David","familyName":"Adjiashvili"},"position":4,"pageStart":61,"pageEnd":77,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":{"@type":"Person","name":"Adjiashvili, David","givenName":"David","familyName":"Adjiashvili"},"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.61","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8044","name":"Large Supports are Required for Well-Supported Nash Equilibria","abstract":"We prove that for any constant k and any epsilon < 1, there exist bimatrix win-lose games for which every epsilon-WSNE requires supports of cardinality greater than k. 
To do this, we provide a graph-theoretic characterization of win-lose games that possess epsilon-WSNE with constant cardinality supports. We then apply a result in additive number theory of Haight to construct win-lose games that do not satisfy the requirements of the characterization. These constructions disprove graph theoretic conjectures of Daskalakis, Mehta and Papadimitriou and Myers.","keywords":["bimatrix games","well-supported Nash equilibria"],"author":[{"@type":"Person","name":"Anbalagan, Yogesh","givenName":"Yogesh","familyName":"Anbalagan"},{"@type":"Person","name":"Huang, Hao","givenName":"Hao","familyName":"Huang"},{"@type":"Person","name":"Lovett, Shachar","givenName":"Shachar","familyName":"Lovett"},{"@type":"Person","name":"Norin, Sergey","givenName":"Sergey","familyName":"Norin"},{"@type":"Person","name":"Vetta, Adrian","givenName":"Adrian","familyName":"Vetta"},{"@type":"Person","name":"Wu, Hehui","givenName":"Hehui","familyName":"Wu"}],"position":5,"pageStart":78,"pageEnd":84,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Anbalagan, Yogesh","givenName":"Yogesh","familyName":"Anbalagan"},{"@type":"Person","name":"Huang, Hao","givenName":"Hao","familyName":"Huang"},{"@type":"Person","name":"Lovett, Shachar","givenName":"Shachar","familyName":"Lovett"},{"@type":"Person","name":"Norin, Sergey","givenName":"Sergey","familyName":"Norin"},{"@type":"Person","name":"Vetta, Adrian","givenName":"Adrian","familyName":"Vetta"},{"@type":"Person","name":"Wu, Hehui","givenName":"Hehui","familyName":"Wu"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.78","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr 
Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8045","name":"Minimizing Maximum Flow-time on Related Machines","abstract":"We consider the online problem of minimizing the maximum flow-time on related machines. This is a natural generalization of the extensively studied makespan minimization problem to the setting where jobs arrive over time. Interestingly, natural algorithms such as Greedy or Slow-fit that work for the simpler identical machines case or for makespan minimization on related machines, are not O(1)-competitive. Our main result is a new O(1)-competitive algorithm for the problem. Previously, O(1)-competitive algorithms were known only with resource augmentation, and in fact no O(1) approximation was known even in the offline case.","keywords":["Related machines scheduling","Maximum flow-time minimization","On-line algorithm","Approximation algorithm"],"author":[{"@type":"Person","name":"Bansal, Nikhil","givenName":"Nikhil","familyName":"Bansal"},{"@type":"Person","name":"Cloostermans, Bouke","givenName":"Bouke","familyName":"Cloostermans"}],"position":6,"pageStart":85,"pageEnd":95,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Bansal, Nikhil","givenName":"Nikhil","familyName":"Bansal"},{"@type":"Person","name":"Cloostermans, Bouke","givenName":"Bouke","familyName":"Cloostermans"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.85","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8046","name":"A 2-Competitive Algorithm For Online Convex Optimization With Switching Costs","abstract":"We consider a natural online optimization 
problem set on the real line. The state of the online algorithm at each integer time is a location on the real line. At each integer time, a convex function arrives online. In response, the online algorithm picks a new location. The cost paid by the online algorithm for this response is the distance moved plus the value of the function at the final destination. The objective is then to minimize the aggregate cost over all time. The motivating application is rightsizing power-proportional data centers. We give a 2-competitive algorithm for this problem. We also give a 3-competitive memoryless algorithm, and show that this is the best competitive ratio achievable by a deterministic memoryless algorithm. Finally we show that this online problem is strictly harder than the standard ski rental problem.","keywords":["Stochastic","Scheduling"],"author":[{"@type":"Person","name":"Bansal, Nikhil","givenName":"Nikhil","familyName":"Bansal"},{"@type":"Person","name":"Gupta, Anupam","givenName":"Anupam","familyName":"Gupta"},{"@type":"Person","name":"Krishnaswamy, Ravishankar","givenName":"Ravishankar","familyName":"Krishnaswamy"},{"@type":"Person","name":"Pruhs, Kirk","givenName":"Kirk","familyName":"Pruhs"},{"@type":"Person","name":"Schewior, Kevin","givenName":"Kevin","familyName":"Schewior"},{"@type":"Person","name":"Stein, Cliff","givenName":"Cliff","familyName":"Stein"}],"position":7,"pageStart":96,"pageEnd":109,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Bansal, Nikhil","givenName":"Nikhil","familyName":"Bansal"},{"@type":"Person","name":"Gupta, Anupam","givenName":"Anupam","familyName":"Gupta"},{"@type":"Person","name":"Krishnaswamy, Ravishankar","givenName":"Ravishankar","familyName":"Krishnaswamy"},{"@type":"Person","name":"Pruhs, Kirk","givenName":"Kirk","familyName":"Pruhs"},{"@type":"Person","name":"Schewior, 
Kevin","givenName":"Kevin","familyName":"Schewior"},{"@type":"Person","name":"Stein, Cliff","givenName":"Cliff","familyName":"Stein"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.96","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8047","name":"Beating the Random Assignment on Constraint Satisfaction Problems of Bounded Degree","abstract":"We show that for any odd k and any instance I of the max-kXOR constraint satisfaction problem, there is an efficient algorithm that finds an assignment satisfying at least a 1\/2 + Omega(1\/sqrt(D)) fraction of I's constraints, where D is a bound on the number of constraints that each variable occurs in.\r\nThis improves both qualitatively and quantitatively on the recent work of Farhi, Goldstone, and Gutmann (2014), which gave a quantum algorithm to find an assignment satisfying a 1\/2 + Omega(D^{-3\/4}) fraction of the equations.\r\n\r\nFor arbitrary constraint satisfaction problems, we give a similar result for \"triangle-free\" instances; i.e., an efficient algorithm that finds an assignment satisfying at least a mu + Omega(1\/sqrt(degree)) fraction of constraints, where mu is the fraction that would be satisfied by a uniformly random assignment.","keywords":["constraint satisfaction problems","bounded degree","advantage over random"],"author":[{"@type":"Person","name":"Barak, Boaz","givenName":"Boaz","familyName":"Barak"},{"@type":"Person","name":"Moitra, Ankur","givenName":"Ankur","familyName":"Moitra"},{"@type":"Person","name":"O\u2019Donnell, Ryan","givenName":"Ryan","familyName":"O\u2019Donnell"},{"@type":"Person","name":"Raghavendra, Prasad","givenName":"Prasad","familyName":"Raghavendra"},{"@type":"Person","name":"Regev, 
Oded","givenName":"Oded","familyName":"Regev"},{"@type":"Person","name":"Steurer, David","givenName":"David","familyName":"Steurer"},{"@type":"Person","name":"Trevisan, Luca","givenName":"Luca","familyName":"Trevisan"},{"@type":"Person","name":"Vijayaraghavan, Aravindan","givenName":"Aravindan","familyName":"Vijayaraghavan"},{"@type":"Person","name":"Witmer, David","givenName":"David","familyName":"Witmer"},{"@type":"Person","name":"Wright, John","givenName":"John","familyName":"Wright"}],"position":8,"pageStart":110,"pageEnd":123,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Barak, Boaz","givenName":"Boaz","familyName":"Barak"},{"@type":"Person","name":"Moitra, Ankur","givenName":"Ankur","familyName":"Moitra"},{"@type":"Person","name":"O\u2019Donnell, Ryan","givenName":"Ryan","familyName":"O\u2019Donnell"},{"@type":"Person","name":"Raghavendra, Prasad","givenName":"Prasad","familyName":"Raghavendra"},{"@type":"Person","name":"Regev, Oded","givenName":"Oded","familyName":"Regev"},{"@type":"Person","name":"Steurer, David","givenName":"David","familyName":"Steurer"},{"@type":"Person","name":"Trevisan, Luca","givenName":"Luca","familyName":"Trevisan"},{"@type":"Person","name":"Vijayaraghavan, Aravindan","givenName":"Aravindan","familyName":"Vijayaraghavan"},{"@type":"Person","name":"Witmer, David","givenName":"David","familyName":"Witmer"},{"@type":"Person","name":"Wright, John","givenName":"John","familyName":"Wright"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.110","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8048","name":"Improved Bounds in Stochastic Matching and 
Optimization","abstract":"We consider two fundamental problems in stochastic optimization: approximation algorithms for stochastic matching, and sampling bounds in the black-box model. For the former, we improve the current-best bound of 3.709 due to Adamczyk et al. (2015), to 3.224; we also present improvements on Bansal et al. (2012) for hypergraph matching and for relaxed versions of the problem. In the context of stochastic optimization, we improve upon the sampling bounds of Charikar et al. (2005).","keywords":["stochastic matching","approximation algorithms","sampling complexity"],"author":[{"@type":"Person","name":"Baveja, Alok","givenName":"Alok","familyName":"Baveja"},{"@type":"Person","name":"Chavan, Amit","givenName":"Amit","familyName":"Chavan"},{"@type":"Person","name":"Nikiforov, Andrei","givenName":"Andrei","familyName":"Nikiforov"},{"@type":"Person","name":"Srinivasan, Aravind","givenName":"Aravind","familyName":"Srinivasan"},{"@type":"Person","name":"Xu, Pan","givenName":"Pan","familyName":"Xu"}],"position":9,"pageStart":124,"pageEnd":134,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Baveja, Alok","givenName":"Alok","familyName":"Baveja"},{"@type":"Person","name":"Chavan, Amit","givenName":"Amit","familyName":"Chavan"},{"@type":"Person","name":"Nikiforov, Andrei","givenName":"Andrei","familyName":"Nikiforov"},{"@type":"Person","name":"Srinivasan, Aravind","givenName":"Aravind","familyName":"Srinivasan"},{"@type":"Person","name":"Xu, Pan","givenName":"Pan","familyName":"Xu"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.124","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr 
Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8049","name":"Fully Dynamic Bin Packing Revisited","abstract":"We consider the fully dynamic bin packing problem, where items arrive and depart in an online fashion and repacking of previously packed items is allowed. The goal is, of course, to minimize both the number of bins used as well as the amount of repacking. A recently introduced way of measuring the repacking costs at each timestep is the migration factor, defined as the total size of repacked items divided by the size of an arriving or departing item. Concerning the trade-off between number of bins and migration factor, if we wish to achieve an asymptotic competitive ratio of 1 + epsilon for the number of bins, a relatively simple argument proves a lower bound of Omega(1\/epsilon) of the migration factor. We establish a fairly close upper bound of O(1\/epsilon^4 log(1\/epsilon)) using a new dynamic rounding technique and new ideas to handle small items in a dynamic setting such that no amortization is needed. The running time of our algorithm is polynomial in the number of items n and in 1\/epsilon. 
The previous best trade-off was for an asymptotic competitive ratio of 5\/4 for the bins (rather than 1+epsilon) and needed an amortized number of O(log n) repackings (while in our scheme the number of repackings is independent of n and non-amortized).","keywords":["online","bin packing","migration factor","robust","AFPTAS"],"author":[{"@type":"Person","name":"Berndt, Sebastian","givenName":"Sebastian","familyName":"Berndt"},{"@type":"Person","name":"Jansen, Klaus","givenName":"Klaus","familyName":"Jansen"},{"@type":"Person","name":"Klein, Kim-Manuel","givenName":"Kim-Manuel","familyName":"Klein"}],"position":10,"pageStart":135,"pageEnd":151,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Berndt, Sebastian","givenName":"Sebastian","familyName":"Berndt"},{"@type":"Person","name":"Jansen, Klaus","givenName":"Klaus","familyName":"Jansen"},{"@type":"Person","name":"Klein, Kim-Manuel","givenName":"Kim-Manuel","familyName":"Klein"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.135","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8050","name":"Approximate Hypergraph Coloring under Low-discrepancy and Related Promises","abstract":"A hypergraph is said to be X-colorable if its vertices can be colored with X colors so that no hyperedge is monochromatic. 2-colorability is a fundamental property (called Property B) of hypergraphs and is extensively studied in combinatorics. 
Algorithmically, however, given a 2-colorable k-uniform hypergraph, it is NP-hard to find a 2-coloring miscoloring fewer than a fraction 2^(-k+1) of hyperedges (which is trivially achieved by a random 2-coloring), and the best algorithms to color the hypergraph properly require about n^(1-1\/k) colors, approaching the trivial bound of n as k increases.\r\n\r\nIn this work, we study the complexity of approximate hypergraph coloring, for both the maximization (finding a 2-coloring with fewest miscolored edges) and minimization (finding a proper coloring using fewest number of colors) versions, when the input hypergraph is promised to have the following stronger properties than 2-colorability:\r\n\r\n(A) Low-discrepancy: If the hypergraph has a 2-coloring of discrepancy l << sqrt(k), we give an algorithm to color the hypergraph with about n^(O(l^2\/k)) colors. However, for the maximization version, we prove NP-hardness of finding a 2-coloring miscoloring a smaller than 2^(-O(k)) (resp. k^(-O(k))) fraction of the hyperedges when l = O(log k) (resp. l=2). 
Assuming the Unique Games conjecture, we improve the latter hardness factor to 2^(-O(k)) for almost discrepancy-1 hypergraphs.\r\n\r\n(B) Rainbow colorability: If the hypergraph has a (k-l)-coloring such that each hyperedge is polychromatic with all these colors (this is stronger than an (l+1)-discrepancy 2-coloring), we give a 2-coloring algorithm that miscolors at most a k^(-Omega(k)) fraction of the hyperedges when l << sqrt(k), and complement this with a matching Unique Games hardness result showing that when l = sqrt(k), it is hard to even beat the 2^(-k+1) bound achieved by a random coloring.\r\n\r\n(C) Strong Colorability: We obtain similar (stronger) Min- and Max-2-Coloring algorithmic results in the case of (k+l)-strong colorability.","keywords":["Hypergraph Coloring","Discrepancy","Rainbow Coloring","Strong Coloring","Algorithms","Semidefinite Programming","Hardness of Approximation"],"author":[{"@type":"Person","name":"Bhattiprolu, Vijay V. S. P.","givenName":"Vijay V. S. P.","familyName":"Bhattiprolu"},{"@type":"Person","name":"Guruswami, Venkatesan","givenName":"Venkatesan","familyName":"Guruswami"},{"@type":"Person","name":"Lee, Euiwoong","givenName":"Euiwoong","familyName":"Lee"}],"position":11,"pageStart":152,"pageEnd":174,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Bhattiprolu, Vijay V. S. P.","givenName":"Vijay V. S. 
P.","familyName":"Bhattiprolu"},{"@type":"Person","name":"Guruswami, Venkatesan","givenName":"Venkatesan","familyName":"Guruswami"},{"@type":"Person","name":"Lee, Euiwoong","givenName":"Euiwoong","familyName":"Lee"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.152","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8051","name":"Stochastic and Robust Scheduling in the Cloud","abstract":"Users of cloud computing services are offered rapid access to computing resources via the Internet. Cloud providers use different pricing options such as (i) time slot reservation in advance at a fixed price and (ii) on-demand service on an (hourly) pay-as-used basis. Choosing the best combination of pricing options is a challenging task for users, in particular when the instantiation of computing jobs is subject to uncertainty.\r\n\r\nWe propose a natural model for two-stage scheduling under uncertainty that captures such a resource provisioning and scheduling problem in the cloud. Reserving a time unit for processing jobs incurs some cost, which depends on when the reservation is made: a priori decisions, based only on distributional information, are much cheaper than on-demand decisions when the actual scenario is known. We consider both stochastic and robust versions of scheduling unrelated machines with objectives of minimizing the sum of weighted completion times and the makespan. Our main contribution is an (8+eps)-approximation algorithm for the min-sum objective for the stochastic polynomial-scenario model. The same technique gives a (7.11+eps)-approximation for minimizing the makespan. 
The key ingredient is an LP-based separation of jobs and time slots to be considered in either the first or the second stage only, followed by approximately solving the separated problems. At the expense of another epsilon, our results hold for an arbitrary scenario distribution given by means of a black box. Our techniques also yield approximation algorithms for robust two-stage scheduling.","keywords":["Approximation Algorithms","Robust Optimization","Stochastic Optimization","Unrelated Machine Scheduling","Cloud Computing"],"author":[{"@type":"Person","name":"Chen, Lin","givenName":"Lin","familyName":"Chen"},{"@type":"Person","name":"Megow, Nicole","givenName":"Nicole","familyName":"Megow"},{"@type":"Person","name":"Rischke, Roman","givenName":"Roman","familyName":"Rischke"},{"@type":"Person","name":"Stougie, Leen","givenName":"Leen","familyName":"Stougie"}],"position":12,"pageStart":175,"pageEnd":186,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Chen, Lin","givenName":"Lin","familyName":"Chen"},{"@type":"Person","name":"Megow, Nicole","givenName":"Nicole","familyName":"Megow"},{"@type":"Person","name":"Rischke, Roman","givenName":"Roman","familyName":"Rischke"},{"@type":"Person","name":"Stougie, Leen","givenName":"Leen","familyName":"Stougie"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.175","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8052","name":"On Approximating Node-Disjoint Paths in Grids","abstract":"In the Node-Disjoint Paths (NDP) problem, the input is an undirected n-vertex graph G, and a collection {(s_1,t_1),...,(s_k,t_k)} of pairs of vertices called 
demand pairs. The goal is to route the largest possible number of the demand pairs (s_i,t_i), by selecting a path connecting each such pair, so that the resulting paths are node-disjoint. NDP is one of the most basic and extensively studied routing problems. Unfortunately, its approximability is far from being well-understood: the best current upper bound of O(sqrt(n)) is achieved via a simple greedy algorithm, while the best current lower bound on its approximability is Omega(log^{1\/2-delta}(n)) for any constant delta. Even for seemingly simpler special cases, such as planar graphs, and even grid graphs, no better approximation algorithms are currently known. A major reason for this impasse is that the standard technique for designing approximation algorithms for routing problems is LP-rounding of the standard multicommodity flow relaxation of the problem, whose integrality gap for NDP is Omega(sqrt(n)) even on grid graphs.\r\n\r\nOur main result is an O(n^{1\/4} * log(n))-approximation algorithm for NDP on grids. We distinguish between demand pairs with both vertices close to the grid boundary, and pairs where at least one of the two vertices is far from the grid boundary. Our algorithm shows that when all demand pairs are of the latter type, the integrality gap of the multicommodity flow LP-relaxation is at most O(n^{1\/4} * log(n)), and we deal with demand pairs of the former type by other methods. We complement our upper bounds by proving that NDP is APX-hard on grid graphs.","keywords":["Node-disjoint paths","approximation algorithms","routing and layout"],"author":[{"@type":"Person","name":"Chuzhoy, Julia","givenName":"Julia","familyName":"Chuzhoy"},{"@type":"Person","name":"Kim, David H. K.","givenName":"David H. 
K.","familyName":"Kim"}],"position":13,"pageStart":187,"pageEnd":211,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Chuzhoy, Julia","givenName":"Julia","familyName":"Chuzhoy"},{"@type":"Person","name":"Kim, David H. K.","givenName":"David H. K.","familyName":"Kim"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.187","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8053","name":"Approximating Upper Degree-Constrained Partial Orientations","abstract":"In the Upper Degree-Constrained Partial Orientation (UDPO) problem we are given an undirected graph G=(V,E), together with two degree constraint functions d^-,d^+:V -> N. The goal is to orient as many edges as possible, in such a way that for each vertex v in V the number of arcs entering v is at most d^-(v), whereas the number of arcs leaving v is at most d^+(v). This problem was introduced by Gabow [SODA'06], who proved it to be MAXSNP-hard (and thus APX-hard). In the same paper Gabow presented an LP-based iterative rounding 4\/3-approximation algorithm.\r\n\r\nAs already observed by Gabow, the problem in question is a special case of the classic 3-Dimensional Matching, which in turn is a special case of the k-Set Packing problem. 
Back in 2006, the best known polynomial-time approximation algorithm for 3-Dimensional Matching was a simple local search by Hurkens and Schrijver [SIDMA'89], with an approximation ratio of (3+epsilon)\/2; hence the algorithm of Gabow was an improvement over the approach inherited from the more general problems.\r\n\r\nIn this paper we show that the UDPO problem when cast as 3-Dimensional Matching admits a special structure, which is obliviously exploited by the known approximation algorithms for k-Set Packing. In fact, we show that already the local-search routine of Hurkens and Schrijver gives a (4+epsilon)\/3-approximation when used for the instances coming from UDPO. Moreover, the recent approximation algorithm for 3-Set Packing [Cygan, FOCS'13] turns out to be a (5+epsilon)\/4-approximation for UDPO. This improves on 4\/3, the best ratio known to date for UDPO.","keywords":["graph orientations","degree-constrained orientations","approximation algorithm","local search"],"author":[{"@type":"Person","name":"Cygan, Marek","givenName":"Marek","familyName":"Cygan"},{"@type":"Person","name":"Kociumaka, Tomasz","givenName":"Tomasz","familyName":"Kociumaka"}],"position":14,"pageStart":212,"pageEnd":224,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Cygan, Marek","givenName":"Marek","familyName":"Cygan"},{"@type":"Person","name":"Kociumaka, Tomasz","givenName":"Tomasz","familyName":"Kociumaka"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.212","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8054","name":"Approximating Hit Rate Curves using Streaming 
Algorithms","abstract":"A hit rate curve is a function that maps cache size to the proportion of requests that can be served from the cache. (The caching policy and sequence of requests are assumed to be fixed.) Hit rate curves have been studied for decades in the operating system, database and computer architecture communities. They are useful tools for designing appropriate cache sizes, dynamically allocating memory between competing caches, and for summarizing locality properties of the request sequence. In this paper we focus on the widely-used LRU caching policy.\r\n\r\nComputing hit rate curves is very efficient from a runtime standpoint, but existing algorithms are not efficient in their space usage. For a stream of m requests for n cacheable objects, all existing algorithms that provably compute the hit rate curve use space linear in n. In the context of modern storage systems, n can easily be in the billions or trillions, so the space usage of these algorithms makes them impractical.\r\n\r\nWe present the first algorithm for provably approximating hit rate curves for the LRU policy with sublinear space. Our algorithm uses O( p^2 * log(n) * log^2(m) \/ epsilon^2 ) bits of space and approximates the hit rate curve at p uniformly-spaced points to within additive error epsilon. This is not far from optimal. Any single-pass algorithm with the same guarantees must use Omega(p^2 + epsilon^{-2} + log(n)) bits of space. Furthermore, our use of additive error is necessary. Any single-pass algorithm achieving multiplicative error requires Omega(n) bits of space.","keywords":["Cache analysis","hit rate curves","miss rate curves","streaming algorithms"],"author":[{"@type":"Person","name":"Drudi, Zachary","givenName":"Zachary","familyName":"Drudi"},{"@type":"Person","name":"Harvey, Nicholas J. A.","givenName":"Nicholas J. 
A.","familyName":"Harvey"},{"@type":"Person","name":"Ingram, Stephen","givenName":"Stephen","familyName":"Ingram"},{"@type":"Person","name":"Warfield, Andrew","givenName":"Andrew","familyName":"Warfield"},{"@type":"Person","name":"Wires, Jake","givenName":"Jake","familyName":"Wires"}],"position":15,"pageStart":225,"pageEnd":241,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Drudi, Zachary","givenName":"Zachary","familyName":"Drudi"},{"@type":"Person","name":"Harvey, Nicholas J. A.","givenName":"Nicholas J. A.","familyName":"Harvey"},{"@type":"Person","name":"Ingram, Stephen","givenName":"Stephen","familyName":"Ingram"},{"@type":"Person","name":"Warfield, Andrew","givenName":"Andrew","familyName":"Warfield"},{"@type":"Person","name":"Wires, Jake","givenName":"Jake","familyName":"Wires"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.225","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8055","name":"Terminal Embeddings","abstract":"In this paper we study terminal embeddings, in which one is given a finite metric (X,d_X) (or a graph G=(V,E)) and a subset K of X of its points are designated as terminals. The objective is to embed the metric into a normed space, while approximately preserving all distances among pairs that contain a terminal. 
We devise such embeddings in various settings, and conclude that even though we have to preserve approximately |K| * |X| pairs, the distortion depends only on |K|, rather than on |X|.\r\n\r\nWe also strengthen this notion, and consider embeddings that approximately preserve the distances between all pairs, but provide improved distortion for pairs containing a terminal. Surprisingly, we show that such embeddings exist in many settings, and have optimal distortion bounds both with respect to X * X and with respect to K * X.\r\n\r\nMoreover, our embeddings have implications for the areas of Approximation and Online Algorithms. In particular, Arora et al. devised an ~O(sqrt(log r))-approximation algorithm for sparsest-cut instances with r demands. Building on their framework, we provide an ~O(sqrt(log |K|))-approximation for sparsest-cut instances in which each demand is incident on one of the vertices of K (i.e., terminals). Since |K| <= r, our bound generalizes that of Arora et al.","keywords":["embedding","distortion","terminals"],"author":[{"@type":"Person","name":"Elkin, Michael","givenName":"Michael","familyName":"Elkin"},{"@type":"Person","name":"Filtser, Arnold","givenName":"Arnold","familyName":"Filtser"},{"@type":"Person","name":"Neiman, Ofer","givenName":"Ofer","familyName":"Neiman"}],"position":16,"pageStart":242,"pageEnd":264,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Elkin, Michael","givenName":"Michael","familyName":"Elkin"},{"@type":"Person","name":"Filtser, Arnold","givenName":"Arnold","familyName":"Filtser"},{"@type":"Person","name":"Neiman, 
Ofer","givenName":"Ofer","familyName":"Neiman"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.242","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8056","name":"On Linear Programming Relaxations for Unsplittable Flow in Trees","abstract":"We study some linear programming relaxations for the Unsplittable Flow problem on trees (UFP-Tree). Inspired by results obtained by Chekuri, Ene, and Korula for Unsplittable Flow on paths (UFP-Path), we present a relaxation with polynomially many constraints that has an integrality gap bound of O(log n * min(log m, log n)) where n denotes the number of tasks and m denotes the number of edges in the tree. This matches the approximation guarantee of their combinatorial algorithm and is the first demonstration of an efficiently-solvable relaxation for UFP-Tree with a sub-linear integrality gap.\r\n\r\nThe new constraints in our LP relaxation are just a few of the (exponentially many) rank constraints that can be added to strengthen the natural relaxation. A side effect of how we prove our upper bound is an efficient O(1)-approximation for solving the rank LP. We also show that our techniques can be used to prove integrality gap bounds for similar LP relaxations for packing demand-weighted subtrees of an edge-capacitated tree.\r\n\r\nOn the other hand, we show that the inclusion of all rank constraints does not reduce the integrality gap for UFP-Tree to a constant. Specifically, we show the integrality gap is Omega(sqrt(log n)) even in cases where all tasks share a common endpoint. 
In contrast, intersecting instances of UFP-Path are known to have an integrality gap of O(1) even if just a few of the rank 1 constraints are included.\r\n\r\nWe also observe that applying two rounds of the Lov\u00e1sz-Schrijver SDP procedure to the natural LP for UFP-Tree derives an SDP whose integrality gap is also O(log n * min(log m, log n)).","keywords":["Unsplittable flow","Linear programming relaxation","Approximation algorithm"],"author":[{"@type":"Person","name":"Friggstad, Zachary","givenName":"Zachary","familyName":"Friggstad"},{"@type":"Person","name":"Gao, Zhihan","givenName":"Zhihan","familyName":"Gao"}],"position":17,"pageStart":265,"pageEnd":283,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Friggstad, Zachary","givenName":"Zachary","familyName":"Friggstad"},{"@type":"Person","name":"Gao, Zhihan","givenName":"Zhihan","familyName":"Gao"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.265","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8057","name":"Inapproximability of H-Transversal\/Packing","abstract":"Given an undirected graph G=(V,E) and a fixed pattern graph H with k vertices, we consider the H-Transversal and H-Packing problems. 
The former asks to find the smallest subset S of vertices such that the subgraph induced by V - S does not have H as a subgraph, and the latter asks to find the maximum number of pairwise disjoint k-subsets S1, ..., Sm such that the subgraph induced by each Si has H as a subgraph.\r\n\r\nWe prove that if H is 2-connected, H-Transversal and H-Packing are almost as hard to approximate as general k-Hypergraph Vertex Cover and k-Set Packing, so it is NP-hard to approximate them within a factor of Omega(k) and Omega(k \/ polylog(k)) respectively. We also show that there is a 1-connected H where H-Transversal admits an O(log k)-approximation algorithm, so that the connectivity requirement cannot be relaxed from 2 to 1. For a special case of H-Transversal where H is a (family of) cycles, we mention the implication of our result to the related Feedback Vertex Set problem, and give a different hardness proof for directed graphs.","keywords":["Constraint Satisfaction Problems","Approximation resistance"],"author":[{"@type":"Person","name":"Guruswami, Venkatesan","givenName":"Venkatesan","familyName":"Guruswami"},{"@type":"Person","name":"Lee, Euiwoong","givenName":"Euiwoong","familyName":"Lee"}],"position":18,"pageStart":284,"pageEnd":304,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Guruswami, Venkatesan","givenName":"Venkatesan","familyName":"Guruswami"},{"@type":"Person","name":"Lee, Euiwoong","givenName":"Euiwoong","familyName":"Lee"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.284","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8058","name":"Towards a Characterization of 
Approximation Resistance for Symmetric CSPs","abstract":"A Boolean constraint satisfaction problem (CSP) is called approximation resistant if independently setting variables to 1 with some probability achieves the best possible approximation ratio for the fraction of constraints satisfied. We study approximation resistance of a natural subclass of CSPs that we call Symmetric Constraint Satisfaction Problems (SCSPs), where satisfaction of each constraint only depends on the number of true literals in its scope. Thus an SCSP of arity k can be described by a subset S of allowed numbers of true literals.\r\n\r\nFor SCSPs without negation, we conjecture that a simple sufficient condition for approximation resistance due to Austrin and H\u00e5stad is indeed necessary. We show that this condition has a compact analytic representation in the case of symmetric CSPs (depending only on the gap between the largest and smallest numbers in S), and provide the rationale behind our conjecture. We prove two interesting special cases of the conjecture, (i) when S is an interval and (ii) when S is even. 
For SCSPs with negation, we prove that the analogous sufficient condition by Austrin and Mossel is necessary for the same two cases, though we do not pose an analogous conjecture in general.","keywords":["Constraint Satisfaction Problems","Approximation resistance"],"author":[{"@type":"Person","name":"Guruswami, Venkatesan","givenName":"Venkatesan","familyName":"Guruswami"},{"@type":"Person","name":"Lee, Euiwoong","givenName":"Euiwoong","familyName":"Lee"}],"position":19,"pageStart":305,"pageEnd":322,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Guruswami, Venkatesan","givenName":"Venkatesan","familyName":"Guruswami"},{"@type":"Person","name":"Lee, Euiwoong","givenName":"Euiwoong","familyName":"Lee"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.305","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8059","name":"Sequential Importance Sampling Algorithms for Estimating the All-Terminal Reliability Polynomial of Sparse Graphs","abstract":"The all-terminal reliability polynomial of a graph counts its connected subgraphs of various sizes. Algorithms based on sequential importance sampling (SIS) have been proposed to estimate a graph's reliability polynomial. We show upper bounds on the relative error of three sequential importance sampling algorithms. We use these to create a hybrid algorithm, which selects the best SIS algorithm for a particular graph G and particular coefficient of the polynomial.\r\n\r\nThis hybrid algorithm is particularly effective when G has low degree. 
For graphs of average degree < 11, it is the fastest known algorithm; for graphs of average degree <= 45 it is the fastest known polynomial-space algorithm. For example, when a graph has average degree 3, this algorithm estimates to error epsilon in time O(1.26^n * epsilon^{-2}).\r\n\r\nAlthough the algorithm may take exponential time, in practice it can have good performance even on medium-scale graphs. We provide experimental results that show quite practical performance on graphs with hundreds of vertices and thousands of edges. By contrast, alternative algorithms are either not rigorous or are completely impractical for such large graphs.","keywords":["All-terminal reliability","sequential importance sampling"],"author":[{"@type":"Person","name":"Harris, David G.","givenName":"David G.","familyName":"Harris"},{"@type":"Person","name":"Sullivan, Francis","givenName":"Francis","familyName":"Sullivan"}],"position":20,"pageStart":323,"pageEnd":340,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Harris, David G.","givenName":"David G.","familyName":"Harris"},{"@type":"Person","name":"Sullivan, Francis","givenName":"Francis","familyName":"Sullivan"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.323","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8060","name":"Improved NP-Inapproximability for 2-Variable Linear Equations","abstract":"An instance of the 2-Lin(2) problem is a system of equations of the form \"x_i + x_j = b (mod 2)\". 
Given such a system in which it's possible to satisfy all but an epsilon fraction of the equations, we show it is NP-hard to satisfy all but a C*epsilon fraction of the equations, for any C < 11\/8 = 1.375 (and any 0 < epsilon <= 1\/8). The previous best result, standing for over 15 years, had 5\/4 in place of 11\/8. Our result provides the best known NP-hardness even for the Unique Games problem, and it also holds for the special case of Max-Cut. The precise factor 11\/8 is unlikely to be best possible; we also give a conjecture concerning analysis of Boolean functions which, if true, would yield a larger hardness factor of 3\/2.\r\n\r\nOur proof is by a modified gadget reduction from a pairwise-independent predicate. We also show an inherent limitation to this type of gadget reduction. In particular, any such reduction can never establish a hardness factor C greater than 2.54. Previously, no such limitation on gadget reductions was known.","keywords":["approximability","unique games","linear equation","gadget","linear programming"],"author":[{"@type":"Person","name":"H\u00e5stad, Johan","givenName":"Johan","familyName":"H\u00e5stad"},{"@type":"Person","name":"Huang, Sangxia","givenName":"Sangxia","familyName":"Huang"},{"@type":"Person","name":"Manokaran, Rajsekar","givenName":"Rajsekar","familyName":"Manokaran"},{"@type":"Person","name":"O\u2019Donnell, Ryan","givenName":"Ryan","familyName":"O\u2019Donnell"},{"@type":"Person","name":"Wright, John","givenName":"John","familyName":"Wright"}],"position":21,"pageStart":341,"pageEnd":360,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"H\u00e5stad, Johan","givenName":"Johan","familyName":"H\u00e5stad"},{"@type":"Person","name":"Huang, Sangxia","givenName":"Sangxia","familyName":"Huang"},{"@type":"Person","name":"Manokaran, 
Rajsekar","givenName":"Rajsekar","familyName":"Manokaran"},{"@type":"Person","name":"O\u2019Donnell, Ryan","givenName":"Ryan","familyName":"O\u2019Donnell"},{"@type":"Person","name":"Wright, John","givenName":"John","familyName":"Wright"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.341","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8061","name":"A Tight Approximation Bound for the Stable Marriage Problem with Restricted Ties","abstract":"The problem of finding a maximum cardinality stable matching in the presence of ties and unacceptable partners, called MAX SMTI, is a well-studied NP-hard problem. MAX SMTI remains NP-hard even for highly restricted instances where (i) ties appear only in women's preference lists and (ii) each tie appears at the end of each woman's preference list. The current best lower bounds on the approximation ratio for this variant are 1.1052 unless P=NP and 1.25 under the unique games conjecture, while the current best upper bound is 1.4616. In this paper, we improve the upper bound to 1.25, which matches the lower bound under the unique games conjecture. Note that this is the first special case of MAX SMTI for which a tight approximation bound has been obtained. The improved ratio is achieved via a new analysis technique, which avoids the complicated case-by-case analysis used in earlier studies. As a by-product of our analysis, we show that the integrality gap of natural IP and LP formulations for this variant is 1.25. 
We also show that the unrestricted MAX SMTI cannot be approximated within a factor of less than 1.5 unless the approximation ratio of a certain special case of the minimum maximal matching problem can be improved.","keywords":["stable marriage with ties and incomplete lists","approximation algorithm","integer program","linear program relaxation","integrality gap"],"author":[{"@type":"Person","name":"Huang, Chien-Chung","givenName":"Chien-Chung","familyName":"Huang"},{"@type":"Person","name":"Iwama, Kazuo","givenName":"Kazuo","familyName":"Iwama"},{"@type":"Person","name":"Miyazaki, Shuichi","givenName":"Shuichi","familyName":"Miyazaki"},{"@type":"Person","name":"Yanagisawa, Hiroki","givenName":"Hiroki","familyName":"Yanagisawa"}],"position":22,"pageStart":361,"pageEnd":380,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Huang, Chien-Chung","givenName":"Chien-Chung","familyName":"Huang"},{"@type":"Person","name":"Iwama, Kazuo","givenName":"Kazuo","familyName":"Iwama"},{"@type":"Person","name":"Miyazaki, Shuichi","givenName":"Shuichi","familyName":"Miyazaki"},{"@type":"Person","name":"Yanagisawa, Hiroki","givenName":"Hiroki","familyName":"Yanagisawa"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.361","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8062","name":"Designing Overlapping Networks for Publish-Subscribe Systems","abstract":"From the publish-subscribe systems of the early days of the Internet to the recent emergence of Web 3.0 and IoT (Internet of Things), new problems arise in the design of networks centered at producers and consumers of constantly evolving information. 
In a typical problem, each terminal is a source or sink of information and builds a physical network in the form of a tree or an overlay network in the form of a star rooted at itself. Every pair of pub-sub terminals that need to be coordinated (e.g. the source and sink of an important piece of control information) define an edge in a bipartite demand graph; the solution must ensure that the corresponding networks rooted at the endpoints of each demand edge overlap at some node. This simple overlap constraint, and the requirement that each network is a tree or a star, leads to a variety of new questions on the design of overlapping networks.\r\n\r\nIn this paper, for the general demand case of the problem, we show that a natural LP formulation has a non-constant integrality gap; on the positive side, we present a logarithmic approximation for the general demand case. When the demand graph is complete, however, we design approximation algorithms with small constant performance ratios, irrespective of whether the pub networks and sub networks are required to be trees or stars.","keywords":["Approximation Algorithms","Steiner Trees","Publish-Subscribe Systems","Integrality Gap","VPN."],"author":[{"@type":"Person","name":"Iglesias, Jennifer","givenName":"Jennifer","familyName":"Iglesias"},{"@type":"Person","name":"Rajaraman, Rajmohan","givenName":"Rajmohan","familyName":"Rajaraman"},{"@type":"Person","name":"Ravi, R.","givenName":"R.","familyName":"Ravi"},{"@type":"Person","name":"Sundaram, Ravi","givenName":"Ravi","familyName":"Sundaram"}],"position":23,"pageStart":381,"pageEnd":395,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Iglesias, Jennifer","givenName":"Jennifer","familyName":"Iglesias"},{"@type":"Person","name":"Rajaraman, 
Rajmohan","givenName":"Rajmohan","familyName":"Rajaraman"},{"@type":"Person","name":"Ravi, R.","givenName":"R.","familyName":"Ravi"},{"@type":"Person","name":"Sundaram, Ravi","givenName":"Ravi","familyName":"Sundaram"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.381","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8063","name":"Approximating Dense Max 2-CSPs","abstract":"In this paper, we present a polynomial-time algorithm that approximates sufficiently high-value Max 2-CSPs on sufficiently dense graphs to within O(N^epsilon) approximation ratio for any constant epsilon > 0. Using this algorithm, we also achieve similar results for free games, projection games on sufficiently dense random graphs, and the Densest k-Subgraph problem with sufficiently dense optimal solution. Note, however, that algorithms with similar guarantees to the last algorithm were in fact discovered prior to our work by Feige et al. 
and Suzuki and Tokuyama.\r\n\r\nIn addition, our idea for the above algorithms yields the following by-product: a quasi-polynomial time approximation scheme (QPTAS) for satisfiable dense Max 2-CSPs with better running time than the known algorithms.","keywords":["Max 2-CSP","Dense Graphs","Densest k-Subgraph","QPTAS","Free Games","Projection Games"],"author":[{"@type":"Person","name":"Manurangsi, Pasin","givenName":"Pasin","familyName":"Manurangsi"},{"@type":"Person","name":"Moshkovitz, Dana","givenName":"Dana","familyName":"Moshkovitz"}],"position":24,"pageStart":396,"pageEnd":415,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Manurangsi, Pasin","givenName":"Pasin","familyName":"Manurangsi"},{"@type":"Person","name":"Moshkovitz, Dana","givenName":"Dana","familyName":"Moshkovitz"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.396","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8064","name":"The Container Selection Problem","abstract":"We introduce and study a network resource management problem that is a special case of non-metric k-median, naturally arising in cross platform scheduling and cloud computing. In the continuous d-dimensional container selection problem, we are given a set C of input points in d-dimensional Euclidean space, for some d >= 2, and a budget k. An input point p can be assigned to a \"container point\" c only if c dominates p in every dimension. The assignment cost is then equal to the L1-norm of the container point. 
The goal is to find k container points in the d-dimensional space, such that the total assignment cost for all input points is minimized. The discrete variant of the problem has one key distinction, namely, the container points must be chosen from a given set F of points.\r\n\r\nFor the continuous version, we obtain a polynomial time approximation scheme for any fixed dimension d>= 2. On the negative side, we show that the problem is NP-hard for any d>=3. We further show that the discrete version is significantly harder, as it is NP-hard to approximate without violating the budget k in any dimension d>=3. Thus, we focus on obtaining bi-approximation algorithms. For d=2, the bi-approximation guarantee is (1+epsilon,3), i.e., for any epsilon>0, our scheme outputs a solution of size 3k and cost at most (1+epsilon) times the optimum. For fixed d>2, we present a (1+epsilon,O((1\/epsilon)log k)) bi-approximation algorithm.","keywords":["non-metric k-median","geometric hitting set","approximation algorithms","cloud computing","cross platform scheduling."],"author":[{"@type":"Person","name":"Nagarajan, Viswanath","givenName":"Viswanath","familyName":"Nagarajan"},{"@type":"Person","name":"Sarpatwar, Kanthi K.","givenName":"Kanthi K.","familyName":"Sarpatwar"},{"@type":"Person","name":"Schieber, Baruch","givenName":"Baruch","familyName":"Schieber"},{"@type":"Person","name":"Shachnai, Hadas","givenName":"Hadas","familyName":"Shachnai"},{"@type":"Person","name":"Wolf, Joel L.","givenName":"Joel L.","familyName":"Wolf"}],"position":25,"pageStart":416,"pageEnd":434,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Nagarajan, Viswanath","givenName":"Viswanath","familyName":"Nagarajan"},{"@type":"Person","name":"Sarpatwar, Kanthi K.","givenName":"Kanthi K.","familyName":"Sarpatwar"},{"@type":"Person","name":"Schieber, 
Baruch","givenName":"Baruch","familyName":"Schieber"},{"@type":"Person","name":"Shachnai, Hadas","givenName":"Hadas","familyName":"Shachnai"},{"@type":"Person","name":"Wolf, Joel L.","givenName":"Joel L.","familyName":"Wolf"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.416","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","citation":["http:\/\/aws.amazon.com\/ec2\/","http:\/\/wikipedia.org\/wiki\/Cloud_computing#Private_cloud"],"isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8065","name":"Tight Bounds for Graph Problems in Insertion Streams","abstract":"Despite the large amount of work on solving graph problems in the data stream model, there do not exist tight space bounds for almost any of them, even in a stream with only edge insertions. For example, for testing connectivity, the upper bound is O(n * log(n)) bits, while the lower bound is only Omega(n) bits. We remedy this situation by providing the first tight Omega(n * log(n)) space lower bounds for randomized algorithms which succeed with constant probability in a stream of edge insertions for a number of graph problems. Our lower bounds apply to testing bipartiteness, connectivity, cycle-freeness, whether a graph is Eulerian, planarity, H-minor freeness, finding a minimum spanning tree of a connected graph, and testing if the diameter of a sparse graph is constant. We also give the first Omega(n * k * log(n)) space lower bounds for deterministic algorithms for k-edge connectivity and k-vertex connectivity; these are optimal in light of known deterministic upper bounds (for k-vertex connectivity we also need to allow edge duplications, which known upper bounds allow). 
Finally, we give an Omega(n * log^2(n)) lower bound for randomized algorithms approximating the minimum cut up to a constant factor with constant probability in a graph with integer weights between 1 and n, presented as a stream of insertions and deletions to its edges. This lower bound also holds for cut sparsifiers, and gives the first separation of maintaining a sparsifier in the data stream model versus the offline model.","keywords":["communication complexity","data streams","graphs","space complexity"],"author":[{"@type":"Person","name":"Sun, Xiaoming","givenName":"Xiaoming","familyName":"Sun"},{"@type":"Person","name":"Woodruff, David P.","givenName":"David P.","familyName":"Woodruff"}],"position":26,"pageStart":435,"pageEnd":448,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Sun, Xiaoming","givenName":"Xiaoming","familyName":"Sun"},{"@type":"Person","name":"Woodruff, David P.","givenName":"David P.","familyName":"Woodruff"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.435","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8066","name":"A Chasm Between Identity and Equivalence Testing with Conditional Queries","abstract":"A recent model for property testing of probability distributions enables tremendous savings in the sample complexity of testing algorithms, by allowing them to condition the sampling on subsets of the domain.\r\n\r\nIn particular, Canonne, Ron, and Servedio showed that, in this setting, testing identity of an unknown distribution D (i.e., whether D = D* for an explicitly known D*) can be done with a constant number of samples, independent of the 
support size n - in contrast to the required sqrt(n) in the standard sampling model. However, it was unclear whether the same held for the case of testing equivalence, where both distributions are unknown. Indeed, while Canonne, Ron, and Servedio established a polylog(n)-query upper bound for equivalence testing, very recently brought down to ~O(log(log(n))) by Falahatgar et al., whether a dependence on the domain size n is necessary was still open, and explicitly posed by Fischer at the Bertinoro Workshop on Sublinear Algorithms. In this work, we answer the question in the positive, showing that any testing algorithm for equivalence must make Omega(sqrt(log(log(n)))) queries in the conditional sampling model. Interestingly, this demonstrates an intrinsic qualitative gap between identity and equivalence testing, absent in the standard sampling model (where both problems have sampling complexity n^(Theta(1))).\r\n\r\nTurning to another question, we investigate the complexity of support size estimation. We provide a doubly-logarithmic upper bound for the adaptive version of this problem, generalizing work of Ron and Tsur to our weaker model. We also establish a logarithmic lower bound for the non-adaptive version of this problem. 
This latter result carries on to the related problem of non-adaptive uniformity testing, an exponential improvement over previous results that resolves an open question of Chakraborty, Fischer, Goldhirsh, and Matsliah.","keywords":["property testing","probability distributions","conditional samples"],"author":[{"@type":"Person","name":"Acharya, Jayadev","givenName":"Jayadev","familyName":"Acharya"},{"@type":"Person","name":"Canonne, Cl\u00e9ment L.","givenName":"Cl\u00e9ment L.","familyName":"Canonne"},{"@type":"Person","name":"Kamath, Gautam","givenName":"Gautam","familyName":"Kamath"}],"position":27,"pageStart":449,"pageEnd":466,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Acharya, Jayadev","givenName":"Jayadev","familyName":"Acharya"},{"@type":"Person","name":"Canonne, Cl\u00e9ment L.","givenName":"Cl\u00e9ment L.","familyName":"Canonne"},{"@type":"Person","name":"Kamath, Gautam","givenName":"Gautam","familyName":"Kamath"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.449","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","citation":"http:\/\/sublinear.info\/66","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8067","name":"Harnessing the Bethe Free Energy","abstract":"Gibbs measures induced by random factor graphs play a prominent role in computer science, combinatorics and physics. A key problem is to calculate the typical value of the partition function. According to the \"replica symmetric cavity method\", a heuristic that rests on non-rigorous considerations from statistical mechanics, in many cases this problem can be tackled by way of maximising a functional called the \"Bethe free energy\". 
In this paper we prove that the Bethe free energy upper-bounds the partition function in a broad class of models. Additionally, we provide a sufficient condition for this upper bound to be tight.","keywords":["Belief Propagation","free energy","Gibbs measure","partition function"],"author":[{"@type":"Person","name":"Bapst, Victor","givenName":"Victor","familyName":"Bapst"},{"@type":"Person","name":"Coja-Oghlan, Amin","givenName":"Amin","familyName":"Coja-Oghlan"}],"position":28,"pageStart":467,"pageEnd":480,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Bapst, Victor","givenName":"Victor","familyName":"Bapst"},{"@type":"Person","name":"Coja-Oghlan, Amin","givenName":"Amin","familyName":"Coja-Oghlan"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.467","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8068","name":"Internal Compression of Protocols to Entropy","abstract":"We study internal compression of communication protocols to their internal entropy, which is the entropy of the transcript from the players' perspective. We provide two internal compression schemes with error. One of a protocol of Feige et al. for finding the first difference between two strings. 
The second and main one is an internal compression with error epsilon > 0 of a protocol with internal entropy H^{int} and communication complexity C to a protocol with communication at most order (H^{int}\/epsilon)^2 * log(log(C)).\r\n\r\nThis immediately implies a similar compression to the internal information of public-coin protocols, which provides an exponential improvement over previously known public-coin compressions in the dependence on C. It further shows that in a recent protocol of Ganor, Kol and Raz, it is impossible to move the private randomness to be public without an exponential cost. To the best of our knowledge, no such example was previously known.","keywords":["Communication complexity","Information complexity","Compression","Simulation","Entropy"],"author":[{"@type":"Person","name":"Bauer, Balthazar","givenName":"Balthazar","familyName":"Bauer"},{"@type":"Person","name":"Moran, Shay","givenName":"Shay","familyName":"Moran"},{"@type":"Person","name":"Yehudayoff, Amir","givenName":"Amir","familyName":"Yehudayoff"}],"position":29,"pageStart":481,"pageEnd":496,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Bauer, Balthazar","givenName":"Balthazar","familyName":"Bauer"},{"@type":"Person","name":"Moran, Shay","givenName":"Shay","familyName":"Moran"},{"@type":"Person","name":"Yehudayoff, Amir","givenName":"Amir","familyName":"Yehudayoff"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.481","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8069","name":"On Fortification of Projection Games","abstract":"A recent result of Moshkovitz [Moshkovitz14] presented an
ingenious method to provide a completely elementary proof of the Parallel Repetition Theorem for certain projection games via a construction called fortification. However, the construction used in [Moshkovitz14] to fortify arbitrary label cover instances using an arbitrary extractor is insufficient to prove parallel repetition. In this paper, we provide a fix by using stronger graphs that we call fortifiers. Fortifiers are graphs that have both l_1 and l_2 guarantees on induced distributions from large subsets.\r\n\r\nWe then show that an expander with sufficient spectral gap, or a bi-regular extractor with stronger parameters (the latter is also the construction used in an independent update [Moshkovitz15] of [Moshkovitz14] with an alternate argument), is a good fortifier. We also show that using a fortifier (in particular l_2 guarantees) is necessary for obtaining the robustness required for fortification.","keywords":["Parallel Repetition","Fortification"],"author":[{"@type":"Person","name":"Bhangale, Amey","givenName":"Amey","familyName":"Bhangale"},{"@type":"Person","name":"Saptharishi, Ramprasad","givenName":"Ramprasad","familyName":"Saptharishi"},{"@type":"Person","name":"Varma, Girish","givenName":"Girish","familyName":"Varma"},{"@type":"Person","name":"Venkat, Rakesh","givenName":"Rakesh","familyName":"Venkat"}],"position":30,"pageStart":497,"pageEnd":511,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Bhangale, Amey","givenName":"Amey","familyName":"Bhangale"},{"@type":"Person","name":"Saptharishi, Ramprasad","givenName":"Ramprasad","familyName":"Saptharishi"},{"@type":"Person","name":"Varma, Girish","givenName":"Girish","familyName":"Varma"},{"@type":"Person","name":"Venkat,
Rakesh","givenName":"Rakesh","familyName":"Venkat"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.497","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","citation":"http:\/\/people.csail.mit.edu\/dmoshkov\/papers\/par-rep\/final3.pdf","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8070","name":"Learning Circuits with few Negations","abstract":"Monotone Boolean functions, and the monotone Boolean circuits that compute them, have been intensively studied in complexity theory. In this paper we study the structure of Boolean functions in terms of the minimum number of negations in any circuit computing them, a complexity measure that interpolates between monotone functions and the class of all functions. We study this generalization of monotonicity from the vantage point of learning theory, establishing nearly matching upper and lower bounds on the uniform-distribution learnability of circuits in terms of the number of negations they contain. Our upper bounds are based on a new structural characterization of negation-limited circuits that extends a classical result of A.A. Markov. Our lower bounds, which employ Fourier-analytic tools from hardness amplification, give new results even for circuits with no negations (i.e. 
monotone functions).","keywords":["Boolean functions","monotonicity","negations","PAC learning"],"author":[{"@type":"Person","name":"Blais, Eric","givenName":"Eric","familyName":"Blais"},{"@type":"Person","name":"Canonne, Cl\u00e9ment L.","givenName":"Cl\u00e9ment L.","familyName":"Canonne"},{"@type":"Person","name":"Oliveira, Igor C.","givenName":"Igor C.","familyName":"Oliveira"},{"@type":"Person","name":"Servedio, Rocco A.","givenName":"Rocco A.","familyName":"Servedio"},{"@type":"Person","name":"Tan, Li-Yang","givenName":"Li-Yang","familyName":"Tan"}],"position":31,"pageStart":512,"pageEnd":527,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Blais, Eric","givenName":"Eric","familyName":"Blais"},{"@type":"Person","name":"Canonne, Cl\u00e9ment L.","givenName":"Cl\u00e9ment L.","familyName":"Canonne"},{"@type":"Person","name":"Oliveira, Igor C.","givenName":"Igor C.","familyName":"Oliveira"},{"@type":"Person","name":"Servedio, Rocco A.","givenName":"Rocco A.","familyName":"Servedio"},{"@type":"Person","name":"Tan, Li-Yang","givenName":"Li-Yang","familyName":"Tan"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.512","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8071","name":"Dynamics for the Mean-field Random-cluster Model","abstract":"The random-cluster model has been widely studied as a unifying framework for random graphs, spin systems and random spanning trees, but its dynamics have so far largely resisted analysis. 
In this paper we study a natural non-local Markov chain known as the Chayes-Machta dynamics for the mean-field case of the random-cluster model, and identify a critical regime (lambda_s,lambda_S) of the model parameter lambda in which the dynamics undergoes an exponential slowdown. Namely, we prove that the mixing time is Theta(log n) if lambda is not in [lambda_s,lambda_S], and e^Omega(sqrt{n}) when lambda is in (lambda_s,lambda_S). These results hold for all values of the second model parameter q > 1. In addition, we prove that the local heat-bath dynamics undergoes a similar exponential slowdown in (lambda_s,lambda_S).","keywords":["random-cluster model","random graphs","Markov chains","statistical physics","dynamics"],"author":[{"@type":"Person","name":"Blanca, Antonio","givenName":"Antonio","familyName":"Blanca"},{"@type":"Person","name":"Sinclair, Alistair","givenName":"Alistair","familyName":"Sinclair"}],"position":32,"pageStart":528,"pageEnd":543,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Blanca, Antonio","givenName":"Antonio","familyName":"Blanca"},{"@type":"Person","name":"Sinclair, Alistair","givenName":"Alistair","familyName":"Sinclair"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.528","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8072","name":"Correlation in Hard Distributions in Communication Complexity","abstract":"We study the effect that the amount of correlation in a bipartite distribution has on the communication complexity of a problem under that distribution. 
We introduce a new family of complexity measures that interpolates between the two previously studied extreme cases: the (standard) randomised communication complexity and the case of distributional complexity under product distributions.\r\n\r\n- We give a tight characterisation of the randomised complexity of Disjointness under distributions with mutual information k, showing that it is Theta(sqrt(n(k+1))) for all 0 <= k <= n. This smoothly interpolates between the lower bounds of Babai, Frankl and Simon for the product distribution case (k=0), and the bound of Razborov for the randomised case. The upper bounds improve and generalise what was known for product distributions, and imply that any tight bound for Disjointness needs Omega(n) bits of mutual information in the corresponding distribution.\r\n\r\n- We study the same question in the distributional quantum setting, and show a lower bound of Omega((n(k+1))^{1\/4}), and an upper bound (via constructing communication protocols), matching up to a logarithmic factor.\r\n\r\n- We show that there are total Boolean functions f_d that have distributional communication complexity O(log(n)) under all distributions of information up to o(n), while the (interactive) distributional complexity maximised over all distributions is Theta(log(d)) for n <= d <= 2^{n\/100}. This shows, in particular, that the correlation needed to show that a problem is hard can be much larger than the communication complexity of the problem.\r\n\r\n- We show that in the setting of one-way communication under product distributions, the dependence of communication cost on the allowed error epsilon is multiplicative in log(1\/epsilon) - the previous upper bounds had the dependence of more than 1\/epsilon. 
This result, for the first time, explains how one-way communication complexity under product distributions is stronger than PAC-learning: both tasks are characterised by the VC-dimension, but have very different error dependence (learning from examples, it costs more to reduce the error).","keywords":["communication complexity","information theory"],"author":[{"@type":"Person","name":"Bottesch, Ralph Christian","givenName":"Ralph Christian","familyName":"Bottesch"},{"@type":"Person","name":"Gavinsky, Dmitry","givenName":"Dmitry","familyName":"Gavinsky"},{"@type":"Person","name":"Klauck, Hartmut","givenName":"Hartmut","familyName":"Klauck"}],"position":33,"pageStart":544,"pageEnd":572,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Bottesch, Ralph Christian","givenName":"Ralph Christian","familyName":"Bottesch"},{"@type":"Person","name":"Gavinsky, Dmitry","givenName":"Dmitry","familyName":"Gavinsky"},{"@type":"Person","name":"Klauck, Hartmut","givenName":"Hartmut","familyName":"Klauck"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.544","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8073","name":"Zero-One Laws for Sliding Windows and Universal Sketches","abstract":"Given a stream of data, a typical approach in streaming algorithms is to design a sophisticated algorithm with small memory that computes a specific statistic over the streaming data. Usually, if one wants to compute a different statistic after the stream is gone, it is impossible. But what if we want to compute a different statistic after the fact?
In this paper, we consider the following fascinating possibility: can we collect some small amount of specific data during the stream that is \"universal,\" i.e., where we do not know anything about the statistics we will want to later compute, other than the guarantee that had we known the statistic ahead of time, it would have been possible to do so with small memory? This is indeed what we introduce (and show) in this paper with matching upper and lower bounds: we show that it is possible to collect universal statistics of polylogarithmic size, and prove that these universal statistics allow us after the fact to compute all other statistics that are computable with similar amounts of memory. We show that this is indeed possible, both for the standard unbounded streaming model and the sliding window streaming model.","keywords":["Streaming Algorithms","Universality","Sliding Windows"],"author":[{"@type":"Person","name":"Braverman, Vladimir","givenName":"Vladimir","familyName":"Braverman"},{"@type":"Person","name":"Ostrovsky, Rafail","givenName":"Rafail","familyName":"Ostrovsky"},{"@type":"Person","name":"Roytman, Alan","givenName":"Alan","familyName":"Roytman"}],"position":34,"pageStart":573,"pageEnd":590,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Braverman, Vladimir","givenName":"Vladimir","familyName":"Braverman"},{"@type":"Person","name":"Ostrovsky, Rafail","givenName":"Rafail","familyName":"Ostrovsky"},{"@type":"Person","name":"Roytman, Alan","givenName":"Alan","familyName":"Roytman"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.573","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr 
Informatik","citation":["http:\/\/sublinear.info\/20","http:\/\/sublinear.info\/30"],"isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8074","name":"Universal Sketches for the Frequency Negative Moments and Other Decreasing Streaming Sums","abstract":"Given a stream with frequency vector f in n dimensions, we characterize the space necessary for approximating the frequency negative moments Fp, where p<0, in terms of n, the accuracy, and the L_1 length of the vector f. To accomplish this, we actually prove a much more general result. Given any nonnegative and nonincreasing function g, we characterize the space necessary for any streaming algorithm that outputs a (1 +\/- eps)-approximation to the sum of the coordinates of the vector f transformed by g. The storage required is expressed in the form of the solution to a relatively simple nonlinear optimization problem, and the algorithm is universal for (1 +\/- eps)-approximations to any such sum where the applied function is nonnegative, nonincreasing, and has the same or smaller space complexity as g. 
This partially answers an open question of Nelson (IITK Workshop Kanpur, 2009).","keywords":["data streams","frequency moments","negative moments"],"author":[{"@type":"Person","name":"Braverman, Vladimir","givenName":"Vladimir","familyName":"Braverman"},{"@type":"Person","name":"Chestnut, Stephen R.","givenName":"Stephen R.","familyName":"Chestnut"}],"position":35,"pageStart":591,"pageEnd":605,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Braverman, Vladimir","givenName":"Vladimir","familyName":"Braverman"},{"@type":"Person","name":"Chestnut, Stephen R.","givenName":"Stephen R.","familyName":"Chestnut"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.591","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","citation":"http:\/\/sublinear.info\/30","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8075","name":"Dependent Random Graphs and Multi-Party Pointer Jumping","abstract":"We initiate a study of a relaxed version of the standard Erdos-Renyi random graph model, where each edge may depend on a few other edges. We call such graphs \"dependent random graphs\". Our main result in this direction is a thorough understanding of the clique number of dependent random graphs. We also obtain bounds for the chromatic number. Surprisingly, many of the standard properties of random graphs also hold in this relaxed setting. We show that with high probability, a dependent random graph will contain a clique of size ((1-o(1))log(n))\/log(1\/p), and the chromatic number will be at most (nlog(1\/(1-p)))\/log(n). We expect these results to be of independent interest. 
As an application and second main result, we give a new communication protocol for the k-player Multi-Party Pointer Jumping problem (MPJk) in the number-on-the-forehead (NOF) model. Multi-Party Pointer Jumping is one of the canonical NOF communication problems, yet even for three players, its communication complexity is not well understood. Our protocol for MPJ3 costs O((n * log(log(n)))\/log(n)) communication, improving on a bound from [BrodyChakrabarti08]. We extend our protocol to the non-Boolean pointer jumping problem, achieving an upper bound which is o(n) for any k >= 4 players. This is the first o(n) protocol and improves on a bound of Damm, Jukna, and Sgall, which has stood for almost twenty years.","keywords":["random graphs","communication complexity","number-on-the-forehead model","pointer jumping"],"author":[{"@type":"Person","name":"Brody, Joshua","givenName":"Joshua","familyName":"Brody"},{"@type":"Person","name":"Sanchez, Mario","givenName":"Mario","familyName":"Sanchez"}],"position":36,"pageStart":606,"pageEnd":624,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Brody, Joshua","givenName":"Joshua","familyName":"Brody"},{"@type":"Person","name":"Sanchez, Mario","givenName":"Mario","familyName":"Sanchez"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.606","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8076","name":"Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness","abstract":"Low-degree polynomial approximations to the sign function underlie pseudorandom generators for halfspaces, as well as algorithms for agnostically learning 
halfspaces. We study the limits of these constructions by proving inapproximability results for the sign function. First, we investigate the derandomization of Chernoff-type concentration inequalities. Schmidt et al. (SIAM J. Discrete Math. 1995) showed that a tail bound of delta can be established for sums of Bernoulli random variables with only O(log(1\/delta))-wise independence. We show that their results are tight up to constant factors. Secondly, the \u201cpolynomial regression\u201d algorithm of Kalai et al. (SIAM J. Comput. 2008) shows that halfspaces can be efficiently learned with respect to log-concave distributions on R^n in the challenging agnostic learning model. The power of this algorithm relies on the fact that under log-concave distributions, halfspaces can be approximated arbitrarily well by low-degree polynomials. In contrast, we exhibit a large class of non-log-concave distributions under which polynomials of any degree cannot approximate the sign function to within arbitrarily low error.","keywords":["Polynomial Approximations","Pseudorandomness","Concentration","Learning Theory","Halfspaces"],"author":[{"@type":"Person","name":"Bun, Mark","givenName":"Mark","familyName":"Bun"},{"@type":"Person","name":"Steinke, Thomas","givenName":"Thomas","familyName":"Steinke"}],"position":37,"pageStart":625,"pageEnd":644,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Bun, Mark","givenName":"Mark","familyName":"Bun"},{"@type":"Person","name":"Steinke, Thomas","givenName":"Thomas","familyName":"Steinke"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.625","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr 
Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8077","name":"Tighter Connections between Derandomization and Circuit Lower Bounds","abstract":"We tighten the connections between circuit lower bounds and derandomization for each of the following three types of derandomization:\r\n- general derandomization of promiseBPP (connected to Boolean circuits),\r\n- derandomization of Polynomial Identity Testing (PIT) over fixed finite fields (connected to arithmetic circuit lower bounds over the same field), and\r\n- derandomization of PIT over the integers (connected to arithmetic circuit lower bounds over the integers).\r\n\r\nWe show how to make these connections uniform equivalences, although at the expense of using somewhat less common versions of complexity classes and for a less studied notion of inclusion.\r\n\r\nOur main results are as follows:\r\n1. We give the first proof that a non-trivial (nondeterministic subexponential-time) algorithm for PIT over a fixed finite field yields arithmetic circuit lower bounds.\r\n2. We get a similar result for the case of PIT over the integers, strengthening a result of Jansen and Santhanam [JS12] (by removing the need for advice).\r\n3. We derive a Boolean circuit lower bound for NEXP intersect coNEXP from the assumption of sufficiently strong non-deterministic derandomization of promiseBPP (without advice), as well as from the assumed existence of an NP-computable non-empty property of Boolean functions useful for proving superpolynomial circuit lower bounds (in the sense of natural proofs of [RR97]); this strengthens the related results of [IKW02].\r\n4. 
Finally, we turn all of these implications into equivalences for appropriately defined promise classes and for a notion of robust inclusion\/separation (inspired by [FS11]) that lies between the classical \"almost everywhere\" and \"infinitely often\" notions.","keywords":["derandomization","circuit lower bounds","polynomial identity testing","promise BPP","hardness vs. randomness"],"author":[{"@type":"Person","name":"Carmosino, Marco L.","givenName":"Marco L.","familyName":"Carmosino"},{"@type":"Person","name":"Impagliazzo, Russell","givenName":"Russell","familyName":"Impagliazzo"},{"@type":"Person","name":"Kabanets, Valentine","givenName":"Valentine","familyName":"Kabanets"},{"@type":"Person","name":"Kolokolova, Antonina","givenName":"Antonina","familyName":"Kolokolova"}],"position":38,"pageStart":645,"pageEnd":658,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Carmosino, Marco L.","givenName":"Marco L.","familyName":"Carmosino"},{"@type":"Person","name":"Impagliazzo, Russell","givenName":"Russell","familyName":"Impagliazzo"},{"@type":"Person","name":"Kabanets, Valentine","givenName":"Valentine","familyName":"Kabanets"},{"@type":"Person","name":"Kolokolova, Antonina","givenName":"Antonina","familyName":"Kolokolova"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.645","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8078","name":"Average Distance Queries through Weighted Samples in Graphs and Metric Spaces: High Scalability with Tight Statistical Guarantees","abstract":"The average distance from a node to all other nodes in a graph, or from a query point in a metric space 
to a set of points, is a fundamental quantity in data analysis. The inverse of the average distance, known as the (classic) closeness centrality of a node, is a popular importance measure in the study of social networks. We develop novel structural insights on the sparsifiability of the distance relation via weighted sampling. Based on that, we present highly practical algorithms with strong statistical guarantees for fundamental problems. We show that the average distance (and hence the centrality) for all nodes in a graph can be estimated using O(epsilon^{-2}) single-source distance computations. For a set V of n points in a metric space, we show that after preprocessing which uses O(n) distance computations we can compute a weighted sample S subset of V of size O(epsilon^{-2}) such that the average distance from any query point v to V can be estimated from the distances from v to S. Finally, we show that for a set of points V in a metric space, we can estimate the average pairwise distance using O(n+epsilon^{-2}) distance computations. The estimate is based on a weighted sample of O(epsilon^{-2}) pairs of points, which is computed using O(n) distance computations. Our estimates are unbiased with normalized mean square error (NRMSE) of at most epsilon. 
Increasing the sample size by an O(log(n)) factor ensures that the probability that the relative error exceeds epsilon is polynomially small.","keywords":["Closeness Centrality","Average Distance","Metric Space","Weighted Sampling"],"author":[{"@type":"Person","name":"Chechik, Shiri","givenName":"Shiri","familyName":"Chechik"},{"@type":"Person","name":"Cohen, Edith","givenName":"Edith","familyName":"Cohen"},{"@type":"Person","name":"Kaplan, Haim","givenName":"Haim","familyName":"Kaplan"}],"position":39,"pageStart":659,"pageEnd":679,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Chechik, Shiri","givenName":"Shiri","familyName":"Chechik"},{"@type":"Person","name":"Cohen, Edith","givenName":"Edith","familyName":"Cohen"},{"@type":"Person","name":"Kaplan, Haim","givenName":"Haim","familyName":"Kaplan"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.659","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8079","name":"Two Structural Results for Low Degree Polynomials and Applications","abstract":"In this paper, two structural results concerning low degree polynomials over finite fields are given. The first states that over any finite field F, for any polynomial f on n variables with degree d > log(n)\/10, there exists a subspace of F^n with dimension at least d n^(1\/(d-1)) on which f is constant. This result is shown to be tight. Stated differently, a degree d polynomial cannot compute an affine disperser for dimension smaller than the stated dimension.
Using a recursive argument, we obtain our second structural result, showing that any degree d polynomial f induces a partition of F^n into affine subspaces of dimension n^(1\/(d-1)!), such that f is constant on each part.\r\n\r\nWe extend both structural results to more than one polynomial. We further prove an analog of the first structural result for sparse polynomials (with no restriction on the degree) and for functions that are close to low degree polynomials. We also consider the algorithmic aspect of the two structural results.\r\n\r\nOur structural results have various applications, two of which are:\r\n* Dvir [CC 2012] introduced the notion of extractors for varieties, and gave explicit constructions of such extractors over large fields. We show that over any finite field any affine extractor is also an extractor for varieties with related parameters. Our reduction also holds for dispersers, and we conclude that Shaltiel's affine disperser [FOCS 2011] is a disperser for varieties over the binary field.\r\n\r\n* Ben-Sasson and Kopparty [SIAM J. Comput. 2012] proved that any degree 3 affine disperser over a prime field is also an affine extractor with related parameters.
Using our structural results, and based on the work of Kaufman and Lovett [FOCS 2008] and Haramaty and Shpilka [STOC 2010], we generalize this result to any constant degree.","keywords":["low degree polynomials","affine extractors","affine dispersers","extractors for varieties","dispersers for varieties"],"author":[{"@type":"Person","name":"Cohen, Gil","givenName":"Gil","familyName":"Cohen"},{"@type":"Person","name":"Tal, Avishay","givenName":"Avishay","familyName":"Tal"}],"position":40,"pageStart":680,"pageEnd":709,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Cohen, Gil","givenName":"Gil","familyName":"Cohen"},{"@type":"Person","name":"Tal, Avishay","givenName":"Avishay","familyName":"Tal"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.680","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8080","name":"The Minimum Bisection in the Planted Bisection Model","abstract":"In the planted bisection model a random graph G(n,p_+,p_-) with n vertices is created by partitioning the vertices randomly into two classes of equal size (up to plus or minus 1). Any two vertices that belong to the same class are linked by an edge with probability p_+ and any two that belong to different classes with probability (p_-) <(p_+) independently. The planted bisection model has been used extensively to benchmark graph partitioning algorithms. If (p_+)=2(d_+)\/n and (p_-)=2(d_-)\/n for numbers 0 <= (d_-) <(d_+) that remain fixed as n tends to infinity, then with high probability the \"planted\" bisection (the one used to construct the graph) will not be a minimum bisection. 
In this paper we derive an asymptotic formula for the minimum bisection width under the assumption that (d_+)-(d_-) > c * sqrt((d_+)ln(d_+)) for a certain constant c>0.","keywords":["Random graphs","minimum bisection","planted bisection","belief propagation."],"author":[{"@type":"Person","name":"Coja-Oghlan, Amin","givenName":"Amin","familyName":"Coja-Oghlan"},{"@type":"Person","name":"Cooley, Oliver","givenName":"Oliver","familyName":"Cooley"},{"@type":"Person","name":"Kang, Mihyun","givenName":"Mihyun","familyName":"Kang"},{"@type":"Person","name":"Skubch, Kathrin","givenName":"Kathrin","familyName":"Skubch"}],"position":41,"pageStart":710,"pageEnd":725,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Coja-Oghlan, Amin","givenName":"Amin","familyName":"Coja-Oghlan"},{"@type":"Person","name":"Cooley, Oliver","givenName":"Oliver","familyName":"Cooley"},{"@type":"Person","name":"Kang, Mihyun","givenName":"Mihyun","familyName":"Kang"},{"@type":"Person","name":"Skubch, Kathrin","givenName":"Kathrin","familyName":"Skubch"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.710","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8081","name":"Local Convergence of Random Graph Colorings","abstract":"Let G=G(n,m) be a random graph whose average degree d=2m\/n is below the k-colorability threshold. If we sample a k-coloring Sigma of G uniformly at random, what can we say about the correlations between the colors assigned to vertices that are far apart? 
According to a prediction from statistical physics, for average degrees below the so-called condensation threshold d_c, the colors assigned to far away vertices are asymptotically independent [Krzakala et al: PNAS 2007]. We prove this conjecture for k exceeding a certain constant k_0. More generally, we determine the joint distribution of the k-colorings that Sigma induces locally on the bounded-depth neighborhoods of a fixed number of vertices.","keywords":["Random graph","Galton-Watson tree","phase transitions","graph coloring","Gibbs distribution","convergence"],"author":[{"@type":"Person","name":"Coja-Oghlan, Amin","givenName":"Amin","familyName":"Coja-Oghlan"},{"@type":"Person","name":"Efthymiou, Charilaos","givenName":"Charilaos","familyName":"Efthymiou"},{"@type":"Person","name":"Jaafari, Nor","givenName":"Nor","familyName":"Jaafari"}],"position":42,"pageStart":726,"pageEnd":737,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Coja-Oghlan, Amin","givenName":"Amin","familyName":"Coja-Oghlan"},{"@type":"Person","name":"Efthymiou, Charilaos","givenName":"Charilaos","familyName":"Efthymiou"},{"@type":"Person","name":"Jaafari, Nor","givenName":"Nor","familyName":"Jaafari"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.726","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8082","name":"Towards Resistance Sparsifiers","abstract":"We study resistance sparsification of graphs, in which the goal is to find a sparse subgraph (with reweighted edges) that approximately preserves the effective resistances between every pair of nodes. 
We show that every dense regular expander admits a (1+epsilon)-resistance sparsifier of size ~O(n\/epsilon), and conjecture this bound holds for all graphs on n nodes. In comparison, spectral sparsification is a strictly stronger notion and requires Omega(n\/epsilon^2) edges even on the complete graph.\r\n\r\nOur approach leads to the following structural question on graphs: Does every dense regular expander contain a sparse regular expander as a subgraph? Our main technical contribution, which may be of independent interest, is a positive answer to this question in a certain setting of parameters. Combining this with a recent result of von Luxburg, Radl, and Hein (JMLR, 2014) leads to the aforementioned resistance sparsifiers.","keywords":["edge sparsification","spectral sparsifier","graph expansion","effective resistance","commute time"],"author":[{"@type":"Person","name":"Dinitz, Michael","givenName":"Michael","familyName":"Dinitz"},{"@type":"Person","name":"Krauthgamer, Robert","givenName":"Robert","familyName":"Krauthgamer"},{"@type":"Person","name":"Wagner, Tal","givenName":"Tal","familyName":"Wagner"}],"position":43,"pageStart":738,"pageEnd":755,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Dinitz, Michael","givenName":"Michael","familyName":"Dinitz"},{"@type":"Person","name":"Krauthgamer, Robert","givenName":"Robert","familyName":"Krauthgamer"},{"@type":"Person","name":"Wagner, Tal","givenName":"Tal","familyName":"Wagner"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.738","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr
Informatik","citation":["http:\/\/arxiv.org\/abs\/1403.7058","http:\/\/dx.doi.org\/10.1137\/090772873","http:\/\/dx.doi.org\/10.1145\/237814.237827","http:\/\/arxiv.org\/abs\/1401.4159","http:\/\/dx.doi.org\/10.1145\/2090236.2090267","http:\/\/www.eecs.berkeley.edu\/Pubs\/TechRpts\/2007\/EECS-2007-177.html","http:\/\/dx.doi.org\/10.1145\/1538902.1538903","http:\/\/dx.doi.org\/10.1016\/j.jcta.2014.04.010","http:\/\/dx.doi.org\/10.1002\/jgt.3190130114","http:\/\/dx.doi.org\/10.1145\/1007352.1007372","http:\/\/dx.doi.org\/10.1137\/080734029","http:\/\/dx.doi.org\/10.1137\/08074489X","http:\/\/jmlr.org\/papers\/v15\/vonluxburg14a.html"],"isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8083","name":"Reconstruction\/Non-reconstruction Thresholds for Colourings of General Galton-Watson Trees","abstract":"The broadcasting models on trees arise in many contexts such as discrete mathematics, biology, information theory, statistical physics and computer science. In this work, we consider the k-colouring model. A basic question here is whether the assignment at the root affects the distribution of the colourings at the vertices at distance h from the root. This is the so-called reconstruction problem. For the case where the underlying tree is d-ary it is well known that d\/ln(d) is the reconstruction threshold. That is, for k=(1+epsilon)*d\/ln(d) we have non-reconstruction while for k=(1-epsilon)*d\/ln(d) we have reconstruction.\r\n\r\nHere, we consider the largely unstudied case where the underlying tree is chosen according to a predefined distribution. In particular, we consider the well-known Galton-Watson trees. The corresponding model arises naturally in many contexts such as the theory of spin-glasses and its applications to random Constraint Satisfaction Problems (rCSP). The study of rCSP focuses on Galton-Watson trees with offspring distribution B(n,d\/n), i.e. the binomial with parameters n and d\/n, where d is fixed.
Here we consider a broader version of the problem, as we assume a general offspring distribution, which includes B(n,d\/n) as a special case.\r\n\r\nOur approach relates the corresponding bounds for (non)reconstruction to certain concentration properties of the offspring distribution. This allows us to derive reconstruction thresholds for a very wide family of offspring distributions, which includes B(n,d\/n). A very interesting corollary is that for distributions with expected offspring d, we get the reconstruction threshold d\/ln(d) under weaker concentration conditions than what we have in B(n,d\/n).\r\n\r\nFurthermore, our reconstruction threshold for the random colourings of Galton-Watson trees with offspring distribution B(n,d\/n) implies the reconstruction threshold for the random colourings of G(n,d\/n).","keywords":["Random Colouring","Reconstruction Problem","Galton-Watson Tree"],"author":{"@type":"Person","name":"Efthymiou, Charilaos","givenName":"Charilaos","familyName":"Efthymiou"},"position":44,"pageStart":756,"pageEnd":774,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":{"@type":"Person","name":"Efthymiou, Charilaos","givenName":"Charilaos","familyName":"Efthymiou"},"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.756","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8084","name":"A Randomized Online Quantile Summary in O(1\/epsilon * log(1\/epsilon)) Words","abstract":"A quantile summary is a data structure that approximates to epsilon-relative error the order statistics of a much larger underlying dataset.\r\n\r\nIn this paper we develop a randomized online quantile summary for the cash register data input model and
comparison data domain model that uses O((1\/epsilon) log(1\/epsilon)) words of memory. This improves upon the previous best upper bound of O((1\/epsilon) (log(1\/epsilon))^(3\/2)) by Agarwal et al. (PODS 2012). Further, by a lower bound of Hung and Ting (FAW 2010) no deterministic summary for the comparison model can outperform our randomized summary in terms of space complexity. Lastly, our summary has the nice property that O((1\/epsilon) log(1\/epsilon)) words suffice to ensure that the success probability is 1 - exp(-poly(1\/epsilon)).","keywords":["order statistics","data stream","streaming algorithm"],"author":[{"@type":"Person","name":"Felber, David","givenName":"David","familyName":"Felber"},{"@type":"Person","name":"Ostrovsky, Rafail","givenName":"Rafail","familyName":"Ostrovsky"}],"position":45,"pageStart":775,"pageEnd":785,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Felber, David","givenName":"David","familyName":"Felber"},{"@type":"Person","name":"Ostrovsky, Rafail","givenName":"Rafail","familyName":"Ostrovsky"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.775","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","citation":"http:\/\/sublinear.info\/2","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8085","name":"On Constant-Size Graphs That Preserve the Local Structure of High-Girth Graphs","abstract":"Let G=(V,E) be an undirected graph with maximum degree d. The k-disc of a vertex v is defined as the rooted subgraph that is induced by all vertices whose distance to v is at most k. The k-disc frequency vector of G, freq(G), is a vector indexed by all isomorphism types of k-discs. 
For each such isomorphism type Gamma, the k-disc frequency vector counts the fraction of vertices that have a k-disc isomorphic to Gamma. Thus, the frequency vector freq(G) of G captures the local structure of G. A natural question is whether one can construct a much smaller graph H such that H has a similar local structure. N. Alon proved that for any epsilon>0 there always exists a graph H whose size is independent of |V| and whose frequency vector satisfies ||freq(G) - freq(H)||_1 <= epsilon. However, his proof is only existential and neither gives an explicit bound on the size of H nor an efficient algorithm. He posed the open problem of finding such explicit bounds. In this paper, we solve this problem for the special case of high-girth graphs. We show how to efficiently compute a graph H with the above properties when G has girth at least 2k+2 and we give explicit bounds on the size of H.","keywords":["local graph structure","k-disc frequency vector","graph property testing"],"author":[{"@type":"Person","name":"Fichtenberger, Hendrik","givenName":"Hendrik","familyName":"Fichtenberger"},{"@type":"Person","name":"Peng, Pan","givenName":"Pan","familyName":"Peng"},{"@type":"Person","name":"Sohler, Christian","givenName":"Christian","familyName":"Sohler"}],"position":46,"pageStart":786,"pageEnd":799,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Fichtenberger, Hendrik","givenName":"Hendrik","familyName":"Fichtenberger"},{"@type":"Person","name":"Peng, Pan","givenName":"Pan","familyName":"Peng"},{"@type":"Person","name":"Sohler, Christian","givenName":"Christian","familyName":"Sohler"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.786","publisher":"Schloss Dagstuhl \u2013
Leibniz-Zentrum f\u00fcr Informatik","citation":"http:\/\/sublinear.info\/42","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8086","name":"Dimension Expanders via Rank Condensers","abstract":"An emerging theory of \"linear algebraic pseudorandomness\" aims to understand the linear algebraic analogs of fundamental Boolean pseudorandom objects where the rank of subspaces plays the role of the size of subsets. In this work, we study and highlight the interrelationships between several such algebraic objects such as subspace designs, dimension expanders, seeded rank condensers, two-source rank condensers, and rank-metric codes. In particular, with the recent construction of near-optimal subspace designs by Guruswami and Kopparty as a starting point, we construct good (seeded) rank condensers (both lossless and lossy versions), which are a small collection of linear maps F^n to F^t for t<0 (Cooper et al., 2000). In contrast, for q>=3 there are two critical temperatures 0 < beta_u < beta_rc. These results complement refined results of Cuff et al. (2012) on the mixing time of the Glauber dynamics for the ferromagnetic Potts model.
The most interesting aspect of our analysis is at the critical temperature beta=beta_u, which requires a delicate choice of a potential function to balance the conflating factors for the slow drift away from a fixed point (which is repulsive but not Jacobian repulsive): close to the fixed point the variance from the percolation step dominates and sufficiently far from the fixed point the dynamics of the size of the dominant color class takes over.","keywords":["Ferromagnetic Potts model","Swendsen-Wang dynamics","mixing time","mean-field analysis","phase transition."],"author":[{"@type":"Person","name":"Galanis, Andreas","givenName":"Andreas","familyName":"Galanis"},{"@type":"Person","name":"\u0160tefankovic, Daniel","givenName":"Daniel","familyName":"\u0160tefankovic"},{"@type":"Person","name":"Vigoda, Eric","givenName":"Eric","familyName":"Vigoda"}],"position":48,"pageStart":815,"pageEnd":828,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Galanis, Andreas","givenName":"Andreas","familyName":"Galanis"},{"@type":"Person","name":"\u0160tefankovic, Daniel","givenName":"Daniel","familyName":"\u0160tefankovic"},{"@type":"Person","name":"Vigoda, Eric","givenName":"Eric","familyName":"Vigoda"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.815","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8088","name":"Decomposing Overcomplete 3rd Order Tensors using Sum-of-Squares Algorithms","abstract":"Tensor rank and low-rank tensor decompositions have many applications in learning and complexity theory. 
Most known algorithms use unfoldings of tensors and can only handle rank up to n^{\\lfloor p\/2 \\rfloor} for a p-th order tensor. Previously, no efficient algorithm could decompose 3rd order tensors when the rank is super-linear in the dimension. Using ideas from the sum-of-squares hierarchy, we give the first quasi-polynomial time algorithm that can decompose a random 3rd order tensor when the rank is as large as n^{3\/2}\/poly log n.\r\n\r\nWe also give a polynomial time algorithm for certifying the injective norm of random low rank tensors. Our tensor decomposition algorithm exploits the relationship between injective norm and the tensor components. The proof relies on interesting tools for decoupling random variables to prove better matrix concentration bounds.","keywords":["sum of squares","overcomplete tensor decomposition"],"author":[{"@type":"Person","name":"Ge, Rong","givenName":"Rong","familyName":"Ge"},{"@type":"Person","name":"Ma, Tengyu","givenName":"Tengyu","familyName":"Ma"}],"position":49,"pageStart":829,"pageEnd":849,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Ge, Rong","givenName":"Rong","familyName":"Ge"},{"@type":"Person","name":"Ma, Tengyu","givenName":"Tengyu","familyName":"Ma"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.829","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","citation":"http:\/\/arxiv.org\/abs\/1501.06521","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8089","name":"Negation-Limited Formulas","abstract":"We give an efficient structural decomposition theorem for formulas that depends on their negation complexity and demonstrate its power with the following
applications.\r\n\r\nWe prove that every formula that contains t negation gates can be shrunk using a random restriction to a formula of size O(t) with the shrinkage exponent of monotone formulas. As a result, the shrinkage exponent of formulas that contain a constant number of negation gates is equal to the shrinkage exponent of monotone formulas.\r\n\r\nWe give an efficient transformation of formulas with t negation gates to circuits with log(t) negation gates. This transformation provides a generic way to cast results for negation-limited circuits to the setting of negation-limited formulas. For example, using a result of Rossman (CCC'15), we obtain an average-case lower bound for polynomial-size formulas on n variables with n^{1\/2-epsilon} negations.\r\n\r\nIn addition, we prove a lower bound on the number of negations required to compute one-way permutations by polynomial-size formulas.","keywords":["Negation complexity","De Morgan formulas","Shrinkage"],"author":[{"@type":"Person","name":"Guo, Siyao","givenName":"Siyao","familyName":"Guo"},{"@type":"Person","name":"Komargodski, Ilan","givenName":"Ilan","familyName":"Komargodski"}],"position":50,"pageStart":850,"pageEnd":866,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Guo, Siyao","givenName":"Siyao","familyName":"Guo"},{"@type":"Person","name":"Komargodski, Ilan","givenName":"Ilan","familyName":"Komargodski"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.850","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","citation":"http:\/\/arxiv.org\/abs\/1410.8420","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8090","name":"Deletion Codes in the High-noise and
High-rate Regimes","abstract":"The noise model of deletions poses significant challenges in coding theory, with basic questions like the capacity of the binary deletion channel still being open. In this paper, we study the harder model of worst-case deletions, with a focus on constructing efficiently encodable and decodable codes for the two extreme regimes of high-noise and high-rate. Specifically, we construct polynomial-time decodable codes with the following trade-offs (for any epsilon > 0):\r\n\r\n(1) Codes that can correct a fraction 1-epsilon of deletions with rate poly(epsilon) over an alphabet of size poly(1\/epsilon);\r\n(2) Binary codes of rate 1-O~(sqrt(epsilon)) that can correct a fraction epsilon of deletions; and\r\n(3) Binary codes that can be list decoded from a fraction (1\/2-epsilon) of deletions with rate poly(epsilon).\r\n\r\nOur work is the first to achieve the qualitative goals of correcting a deletion fraction approaching 1 over bounded alphabets, and correcting a constant fraction of bit deletions with rate approaching 1. 
The above results bring our understanding of deletion code constructions in these regimes to a level comparable to that for worst-case errors.","keywords":["algorithmic coding theory","deletion codes","list decoding","probabilistic method","explicit constructions"],"author":[{"@type":"Person","name":"Guruswami, Venkatesan","givenName":"Venkatesan","familyName":"Guruswami"},{"@type":"Person","name":"Wang, Carol","givenName":"Carol","familyName":"Wang"}],"position":51,"pageStart":867,"pageEnd":880,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Guruswami, Venkatesan","givenName":"Venkatesan","familyName":"Guruswami"},{"@type":"Person","name":"Wang, Carol","givenName":"Carol","familyName":"Wang"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.867","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8091","name":"Communication with Partial Noiseless Feedback","abstract":"We introduce the notion of one-way communication schemes with partial noiseless feedback. In this setting, Alice wishes to communicate a message to Bob by using a communication scheme that involves sending a sequence of bits over a channel while receiving feedback bits from Bob for a delta fraction of the transmissions. An adversary is allowed to corrupt up to a constant fraction of Alice's transmissions, while the feedback is always uncorrupted. Motivated by questions related to coding for interactive communication, we seek to determine the maximum error rate, as a function of 0 <= delta <= 1, such that Alice can send a message to Bob via some protocol with a delta fraction of noiseless feedback. 
The case delta = 1 corresponds to full feedback, in which the result of Berlekamp ['64] implies that the maximum tolerable error rate is 1\/3, while the case delta = 0 corresponds to no feedback, in which the maximum tolerable error rate is 1\/4, achievable by use of a binary error-correcting code.\r\n\r\nIn this work, we show that for any delta in (0,1] and gamma in [0, 1\/3), there exists a randomized communication scheme with noiseless delta-feedback, such that the probability of miscommunication is low, as long as no more than a gamma fraction of the rounds are corrupted. Moreover, we show that for any delta in (0, 1] and gamma < f(delta), there exists a deterministic communication scheme with noiseless delta-feedback that always decodes correctly as long as no more than a gamma fraction of rounds are corrupted. Here f is a monotonically increasing, piecewise linear, continuous function with f(0) = 1\/4 and f(1) = 1\/3. Also, the rate of communication in both cases is constant (dependent on delta and gamma but independent of the input length).","keywords":["Communication with feedback","Interactive communication","Coding theory Digital"],"author":[{"@type":"Person","name":"Haeupler, Bernhard","givenName":"Bernhard","familyName":"Haeupler"},{"@type":"Person","name":"Kamath, Pritish","givenName":"Pritish","familyName":"Kamath"},{"@type":"Person","name":"Velingker, Ameya","givenName":"Ameya","familyName":"Velingker"}],"position":52,"pageStart":881,"pageEnd":897,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Haeupler, Bernhard","givenName":"Bernhard","familyName":"Haeupler"},{"@type":"Person","name":"Kamath, Pritish","givenName":"Pritish","familyName":"Kamath"},{"@type":"Person","name":"Velingker, 
Ameya","givenName":"Ameya","familyName":"Velingker"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.881","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8092","name":"Spectral Norm of Random Kernel Matrices with Applications to Privacy","abstract":"Kernel methods are an extremely popular set of techniques used for many important machine learning and data analysis applications. In addition to having good practical performance, these methods are supported by a well-developed theory. Kernel methods use an implicit mapping of the input data into a high dimensional feature space defined by a kernel function, i.e., a function returning the inner product between the images of two data points in the feature space. Central to any kernel method is the kernel matrix, which is built by evaluating the kernel function on a given sample dataset.\r\n\r\nIn this paper, we initiate the study of non-asymptotic spectral properties of random kernel matrices. These are n x n random matrices whose (i,j)th entry is obtained by evaluating the kernel function on x_i and x_j, where x_1,..,x_n are a set of n independent random high-dimensional vectors. Our main contribution is to obtain tight upper bounds on the spectral norm (largest eigenvalue) of random kernel matrices constructed by using common kernel functions such as polynomials and Gaussian radial basis.\r\n\r\nAs an application of these results, we provide lower bounds on the distortion needed for releasing the coefficients of kernel ridge regression under attribute privacy, a general privacy notion which captures a large class of privacy definitions. 
Kernel ridge regression is a standard method for performing non-parametric regression that regularly outperforms traditional regression approaches in various domains. Our privacy distortion lower bounds are the first for any kernel technique, and our analysis assumes realistic scenarios for the input, unlike all previous lower bounds for other release problems, which only hold under very restrictive input settings.","keywords":["Random Kernel Matrices","Spectral Norm","Subgaussian Distribution","Data Privacy","Reconstruction Attacks"],"author":[{"@type":"Person","name":"Kasiviswanathan, Shiva Prasad","givenName":"Shiva Prasad","familyName":"Kasiviswanathan"},{"@type":"Person","name":"Rudelson, Mark","givenName":"Mark","familyName":"Rudelson"}],"position":53,"pageStart":898,"pageEnd":914,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Kasiviswanathan, Shiva Prasad","givenName":"Shiva Prasad","familyName":"Kasiviswanathan"},{"@type":"Person","name":"Rudelson, Mark","givenName":"Mark","familyName":"Rudelson"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.898","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8093","name":"Separating Decision Tree Complexity from Subcube Partition Complexity","abstract":"The subcube partition model of computation is at least as powerful as decision trees, but no separation between these models was known. We show that there exists a function whose deterministic subcube partition complexity is asymptotically smaller than its randomized decision tree complexity, resolving an open problem of Friedgut, Kahn, and Wigderson (2002). 
Our lower bound is based on the information-theoretic techniques first introduced to lower bound the randomized decision tree complexity of the recursive majority function.\r\n\r\nWe also show that the public-coin partition bound, the best known lower bound method for randomized decision tree complexity subsuming other general techniques such as block sensitivity, approximate degree, randomized certificate complexity, and the classical adversary bound, also lower bounds randomized subcube partition complexity. This shows that all these lower bound techniques cannot prove optimal lower bounds for randomized decision tree complexity, which answers an open question of Jain and Klauck (2010) and Jain, Lee, and Vishnoi (2014).","keywords":["Decision tree complexity","query complexity","randomized algorithms","subcube partition complexity"],"author":[{"@type":"Person","name":"Kothari, Robin","givenName":"Robin","familyName":"Kothari"},{"@type":"Person","name":"Racicot-Desloges, David","givenName":"David","familyName":"Racicot-Desloges"},{"@type":"Person","name":"Santha, Miklos","givenName":"Miklos","familyName":"Santha"}],"position":54,"pageStart":915,"pageEnd":930,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Kothari, Robin","givenName":"Robin","familyName":"Kothari"},{"@type":"Person","name":"Racicot-Desloges, David","givenName":"David","familyName":"Racicot-Desloges"},{"@type":"Person","name":"Santha, Miklos","givenName":"Miklos","familyName":"Santha"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.915","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr 
Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8094","name":"Distance-based Species Tree Estimation: Information-Theoretic Trade-off between Number of Loci and Sequence Length under the Coalescent","abstract":"We consider the reconstruction of a phylogeny from multiple genes under the multispecies coalescent. We establish a connection with the sparse signal detection problem, where one seeks to distinguish between a distribution and a mixture of the distribution and a sparse signal. Using this connection, we derive an information-theoretic trade-off between the number of genes needed for an accurate reconstruction and the sequence length of the genes.","keywords":["phylogenetic reconstruction","multispecies coalescent","sequence length requirement."],"author":[{"@type":"Person","name":"Mossel, Elchanan","givenName":"Elchanan","familyName":"Mossel"},{"@type":"Person","name":"Roch, Sebastien","givenName":"Sebastien","familyName":"Roch"}],"position":55,"pageStart":931,"pageEnd":942,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":[{"@type":"Person","name":"Mossel, Elchanan","givenName":"Elchanan","familyName":"Mossel"},{"@type":"Person","name":"Roch, Sebastien","givenName":"Sebastien","familyName":"Roch"}],"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.931","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"},{"@type":"ScholarlyArticle","@id":"#article8095","name":"Deterministically Factoring Sparse Polynomials into Multilinear Factors and Sums of Univariate Polynomials","abstract":"We present the first efficient deterministic algorithm for factoring sparse polynomials that split into multilinear factors\r\nand sums of 
univariate polynomials. Our result makes partial progress towards the resolution of the classical question posed by von zur Gathen and Kaltofen in [von zur Gathen\/Kaltofen, J. Comp. Sys. Sci., 1985] to devise an efficient deterministic algorithm for factoring (general) sparse polynomials. We achieve our goal by introducing essential factorization schemes which can be thought of as a relaxation of the regular factorization notion.","keywords":["Derandomization","Multivariate Polynomial Factorization","Sparse polynomials"],"author":{"@type":"Person","name":"Volkovich, Ilya","givenName":"Ilya","familyName":"Volkovich"},"position":56,"pageStart":943,"pageEnd":958,"dateCreated":"2015-08-13","datePublished":"2015-08-13","isAccessibleForFree":true,"license":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/legalcode","copyrightHolder":{"@type":"Person","name":"Volkovich, Ilya","givenName":"Ilya","familyName":"Volkovich"},"copyrightYear":"2015","accessMode":"textual","accessModeSufficient":"textual","creativeWorkStatus":"Published","inLanguage":"en-US","sameAs":"https:\/\/doi.org\/10.4230\/LIPIcs.APPROX-RANDOM.2015.943","publisher":"Schloss Dagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik","isPartOf":"#volume6243"}]}