eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
0
0
10.4230/LIPIcs.APPROX-RANDOM.2018
article
LIPIcs, Volume 116, APPROX/RANDOM'18, Complete Volume
Blais, Eric
Jansen, Klaus
Rolim, José D. P.
Steurer, David
LIPIcs, Volume 116, APPROX/RANDOM'18, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018/LIPIcs.APPROX-RANDOM.2018.pdf
Mathematics of computing, Theory of computation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
0:i
0:xvi
10.4230/LIPIcs.APPROX-RANDOM.2018.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Blais, Eric
Jansen, Klaus
Rolim, José D. P.
Steurer, David
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.0/LIPIcs.APPROX-RANDOM.2018.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
1:1
1:15
10.4230/LIPIcs.APPROX-RANDOM.2018.1
article
Polylogarithmic Approximation Algorithms for Weighted-F-Deletion Problems
Agrawal, Akanksha
1
Lokshtanov, Daniel
2
Misra, Pranabendu
2
Saurabh, Saket
3
Zehavi, Meirav
4
Institute for Computer Science and Control, Hungarian Academy of Sciences (MTA SZTAKI), Budapest, Hungary
University of Bergen, Norway
Institute of Mathematical Sciences, HBNI, Chennai, India, University of Bergen, Norway, and UMI ReLax
Ben-Gurion University, Beersheba, Israel
Let F be a family of graphs. A canonical vertex deletion problem corresponding to F is defined as follows: given an n-vertex undirected graph G and a weight function w: V(G) -> R^+, find a minimum weight subset S subseteq V(G) such that G-S belongs to F. This is known as the Weighted F Vertex Deletion problem. In this paper we devise a recursive scheme to obtain O(log^{O(1)} n)-approximation algorithms for such problems, building upon the classical technique of finding balanced separators in a graph. Roughly speaking, our scheme applies to those problems where an optimum solution S, together with a well-structured set X, forms a balanced separator of the input graph. We obtain the first O(log^{O(1)} n)-approximation algorithms for the following vertex deletion problems.
- Let F be a finite set of graphs containing a planar graph, and let G(F) be the family of graphs such that every graph H in G(F) excludes all graphs in F as minors. The vertex deletion problem corresponding to G(F) is the Weighted Planar F-Minor-Free Deletion (WPF-MFD) problem. We give randomized and deterministic approximation algorithms for WPF-MFD with ratios O(log^{1.5} n) and O(log^2 n), respectively. Previously, only a randomized constant factor approximation algorithm for the unweighted version of the problem was known [FOCS 2012].
- We give an O(log^2 n)-factor approximation algorithm for Weighted Chordal Vertex Deletion (WCVD), the vertex deletion problem to the family of chordal graphs. On the way to this algorithm, we also obtain a constant factor approximation algorithm for Multicut on chordal graphs.
- We give an O(log^3 n)-factor approximation algorithm for Weighted Distance Hereditary Vertex Deletion (WDHVD), also known as Weighted Rankwidth-1 Vertex Deletion (WR-1VD). This is the vertex deletion problem to the family of distance hereditary graphs, or equivalently, the family of graphs of rankwidth one.
We believe that our recursive scheme can be applied to obtain O(log^{O(1)} n)-approximation algorithms for many other problems as well.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.1/LIPIcs.APPROX-RANDOM.2018.1.pdf
Approximation Algorithms
Planar F-Deletion
Separator
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
2:1
2:19
10.4230/LIPIcs.APPROX-RANDOM.2018.2
article
Improved Approximation Bounds for the Minimum Constraint Removal Problem
Bandyapadhyay, Sayan
1
Kumar, Neeraj
2
Suri, Subhash
2
Varadarajan, Kasturi
1
Department of Computer Science, University of Iowa, Iowa City, USA
Department of Computer Science, University of California, Santa Barbara, USA
In the minimum constraint removal problem, we are given a set of geometric objects as obstacles in the plane, and we want to find the minimum number of obstacles that must be removed to reach a target point t from a source point s by an obstacle-free path. The problem is known to be intractable, and (perhaps surprisingly) no sub-linear approximations are known even for simple obstacles such as rectangles and disks. The main result of our paper is a new approximation technique that gives O(sqrt{n})-approximation for rectangles, disks, as well as rectilinear polygons. The technique also gives O(sqrt{n})-approximation for the minimum color path problem in graphs. We also present some inapproximability results for the geometric constraint removal problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.2/LIPIcs.APPROX-RANDOM.2018.2.pdf
Minimum Constraint Removal
Minimum Color Path
Barrier Resilience
Obstacle Removal
Obstacle Free Path
Approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
3:1
3:15
10.4230/LIPIcs.APPROX-RANDOM.2018.3
article
A Tight 4/3 Approximation for Capacitated Vehicle Routing in Trees
Becker, Amariah
1
Brown University Department of Computer Science, Providence, RI, USA
Given a set of clients with demands, the Capacitated Vehicle Routing problem is to find a set of tours that collectively cover all client demand, such that the capacity of each vehicle is not exceeded and the sum of the tour lengths is minimized. In this paper, we provide a 4/3-approximation algorithm for Capacitated Vehicle Routing on trees, improving over the previous best-known approximation ratio of (sqrt{41}-1)/4 by Asano et al. [Asano et al., 2001], while using the same lower bound. Asano et al. show that there exist instances whose optimal cost is 4/3 times this lower bound. Our 4/3 approximation ratio is therefore tight with respect to this lower bound, achieving the best possible performance.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.3/LIPIcs.APPROX-RANDOM.2018.3.pdf
Approximation algorithms
Graph algorithms
Capacitated vehicle routing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
4:1
4:16
10.4230/LIPIcs.APPROX-RANDOM.2018.4
article
Low Rank Approximation in the Presence of Outliers
Bhaskara, Aditya
1
Kumar, Srivatsan
2
School of Computing, University of Utah, Salt Lake City, UT, USA, http://www.cs.utah.edu/~bhaskara/
School of Computing, University of Utah, Salt Lake City, UT, USA
We consider the problem of principal component analysis (PCA) in the presence of outliers. Given a matrix A (d x n) and parameters k, m, the goal is to remove a set of at most m columns of A (outliers), so as to minimize the rank-k approximation error of the remaining matrix (inliers). While much of the work on this problem has focused on recovery of the rank-k subspace under assumptions on the inliers and outliers, we focus on the approximation problem. Our main result shows that sampling-based methods developed in the outlier-free case give non-trivial guarantees even in the presence of outliers. Using this insight, we develop a simple algorithm that has bi-criteria guarantees. Further, unlike similar formulations for clustering, we show that bi-criteria guarantees are unavoidable for the problem, under appropriate complexity assumptions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.4/LIPIcs.APPROX-RANDOM.2018.4.pdf
Low rank approximation
PCA
Robustness to outliers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
5:1
5:15
10.4230/LIPIcs.APPROX-RANDOM.2018.5
article
Greedy Bipartite Matching in Random Type Poisson Arrival Model
Borodin, Allan
1
Karavasilis, Christodoulos
1
Pankratov, Denis
1
University of Toronto, 10 Kings College Road, Toronto, Canada
We introduce a new random input model for bipartite matching which we call the Random Type Poisson Arrival Model. Just like in the known i.i.d. model (introduced by Feldman et al. [Feldman et al., 2009]), online nodes have types in our model. In contrast to the adversarial types studied in the known i.i.d. model, following the random graphs studied in Mastin and Jaillet [A. Mastin, 2013], in our model each type graph is generated randomly by including each offline node in the neighborhood of an online node with probability c/n independently. In our model, nodes of the same type appear consecutively in the input, and the number of times each type appears is distributed according to the Poisson distribution with parameter 1. We analyze the performance of the simple greedy algorithm under this input model. The performance is controlled by the parameter c, and we are able to exactly characterize the competitive ratio for the regimes c = o(1) and c = omega(1). We also provide a precise bound on the expected size of the matching in the remaining regime of constant c. We compare our results to the previous work of Mastin and Jaillet, who analyzed the simple greedy algorithm in the G_{n,n,p} model where each online node type occurs exactly once. We essentially show that the approach of Mastin and Jaillet can be extended to work for the Random Type Poisson Arrival Model, although several nontrivial technical challenges need to be overcome. Intuitively, one can view the Random Type Poisson Arrival Model as the G_{n,n,p} model with less randomness; that is, instead of each online node having a new type, each online node has a chance of repeating the previous type.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.5/LIPIcs.APPROX-RANDOM.2018.5.pdf
bipartite matching
stochastic input models
online algorithms
greedy algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
6:1
6:17
10.4230/LIPIcs.APPROX-RANDOM.2018.6
article
Semi-Direct Sum Theorem and Nearest Neighbor under l_infty
Braverman, Mark
1
Ko, Young Kun
1
Department of Computer Science, Princeton University, 35 Olden St. Princeton NJ 08540, USA
We introduce the semi-direct sum theorem as a framework for proving asymmetric communication lower bounds for functions of the form V_{i=1}^n f(x,y_i). Utilizing tools developed in proving the direct sum theorem for information complexity, we show that if the function is of the form V_{i=1}^n f(x,y_i), where Alice is given x and Bob is given the y_i's, it suffices to prove a lower bound for a single f(x,y_i). This opens a new avenue of attack, other than the conventional combinatorial technique (i.e., the "richness lemma" from [Miltersen et al., 1995]), for proving randomized lower bounds on asymmetric communication for functions of this form.
As the main technical result and an application of the semi-direct sum framework, we prove an information lower bound on c-approximate Nearest Neighbor (ANN) under l_infty, which implies that the algorithm of [Indyk, 2001] for c-approximate Nearest Neighbor under l_infty is optimal even under randomization, for both the decision tree and the cell probe data structure model (under certain parameter assumptions for the latter). In particular, this shows that randomization cannot improve [Indyk, 2001] in the decision tree model. Previously, only a deterministic lower bound was known, due to [Andoni et al., 2008], and a randomized lower bound for the cell probe model, due to [Kapralov and Panigrahy, 2012]. We suspect further applications of our framework in exhibiting randomized asymmetric communication lower bounds for big data applications.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.6/LIPIcs.APPROX-RANDOM.2018.6.pdf
Asymmetric Communication Lower Bound
Data Structure Lower Bound
Nearest Neighbor Search
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
7:1
7:22
10.4230/LIPIcs.APPROX-RANDOM.2018.7
article
Nearly Optimal Distinct Elements and Heavy Hitters on Sliding Windows
Braverman, Vladimir
1
Grigorescu, Elena
2
Lang, Harry
3
Woodruff, David P.
4
Zhou, Samson
2
Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
Department of Computer Science, Purdue University, West Lafayette, IN, USA
Department of Mathematics, Johns Hopkins University, Baltimore, MD, USA
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
We study the distinct elements and l_p-heavy hitters problems in the sliding window model, where only the most recent n elements in the data stream form the underlying set. We first introduce the composable histogram, a simple twist on the exponential (Datar et al., SODA 2002) and smooth histograms (Braverman and Ostrovsky, FOCS 2007) that may be of independent interest. We then show that the composable histogram, along with a careful combination of existing techniques to track either the identity or frequency of a few specific items, suffices to obtain algorithms for both distinct elements and l_p-heavy hitters that are nearly optimal in both n and epsilon.
Applying our new composable histogram framework, we provide an algorithm that outputs a (1+epsilon)-approximation to the number of distinct elements in the sliding window model and uses O((1/epsilon^2) log n log (1/epsilon) log log n + (1/epsilon) log^2 n) bits of space. For l_p-heavy hitters, we provide an algorithm using space O((1/epsilon^p) log^2 n (log^2 log n + log (1/epsilon))) for 0 < p <= 2, improving upon the best-known algorithm for l_2-heavy hitters (Braverman et al., COCOON 2014), which has space complexity O((1/epsilon^4) log^3 n). We also show complementary nearly optimal lower bounds of Omega((1/epsilon) log^2 n + (1/epsilon^2) log n) for distinct elements and Omega((1/epsilon^p) log^2 n) for l_p-heavy hitters, both tight up to O(log log n) and O(log (1/epsilon)) factors.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.7/LIPIcs.APPROX-RANDOM.2018.7.pdf
Streaming algorithms
sliding windows
heavy hitters
distinct elements
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
8:1
8:19
10.4230/LIPIcs.APPROX-RANDOM.2018.8
article
Survivable Network Design for Group Connectivity in Low-Treewidth Graphs
Chalermsook, Parinya
1
Das, Syamantak
2
https://orcid.org/0000-0002-4393-8678
Even, Guy
3
Laekhanukit, Bundit
4
https://orcid.org/0000-0002-4476-8914
Vaz, Daniel
5
https://orcid.org/0000-0003-2224-2185
Department of Computer Science, Aalto University, Espoo, Finland
Indraprastha Institute of Information Technology Delhi, Delhi, India
Tel-Aviv University, Tel-Aviv, Israel
Max-Planck-Institut für Informatik, Saarbrücken, Germany, and Institute for Theoretical Computer Science, Shanghai University of Finance and Economics, Shanghai, China
Max-Planck-Institut für Informatik, Germany & Graduate School of Computer Science, Saarland University, Saarbrücken, Germany
In the Group Steiner Tree problem (GST), we are given an (edge- or vertex-)weighted graph G=(V,E) on n vertices, together with a root vertex r and a collection of groups {S_i}_{i in [h]}: S_i subseteq V(G). The goal is to find a minimum-cost subgraph H that connects the root to every group. We consider a fault-tolerant variant of GST, which we call Restricted (Rooted) Group SNDP. In this setting, each group S_i has a demand k_i in [k], k in N, and we wish to find a minimum-cost subgraph H subseteq G such that, for each group S_i, there is a vertex in the group that is connected to the root via k_i (vertex or edge) disjoint paths.
While GST admits an O(log^2 n log h)-approximation, its higher connectivity variants are known to be Label-Cover hard, and for the vertex-weighted version, the hardness holds even when k=2 (it is widely believed that there is no subpolynomial approximation for the Label-Cover problem [Bellare et al., STOC 1993]). More precisely, the problem admits no 2^{log^{1-epsilon} n}-approximation unless NP subseteq DTIME(n^{polylog(n)}). Previously, positive results were known only for the edge-weighted version when k=2 [Gupta et al., SODA 2010; Khandekar et al., Theor. Comput. Sci., 2012] and for a relaxed variant where the k_i disjoint paths from r may end at different vertices in a group [Chalermsook et al., SODA 2015], for which the authors gave a bicriteria approximation. For k >= 3, no non-trivial approximation algorithm is known for edge-weighted Restricted Group SNDP, except for the special case of the relaxed variant on trees (folklore).
Our main result is an O(log n log h) approximation algorithm for Restricted Group SNDP that runs in time n^{f(k, w)}, where w is the treewidth of the input graph. Our algorithm works for both edge and vertex weighted variants, and the approximation ratio nearly matches the lower bound when k and w are constants. The key to achieving this result is a non-trivial extension of a framework introduced in [Chalermsook et al., SODA 2017]. This framework first embeds all feasible solutions to the problem into a dynamic program (DP) table. However, finding the optimal solution in the DP table remains intractable. We formulate a linear program relaxation for the DP and obtain an approximate solution via randomized rounding. This framework also allows us to systematically construct DP tables for high-connectivity problems. As a result, we present new exact algorithms for several variants of survivable network design problems in low-treewidth graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.8/LIPIcs.APPROX-RANDOM.2018.8.pdf
Approximation Algorithms
Hardness of Approximation
Survivable Network Design
Group Steiner Tree
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
9:1
9:16
10.4230/LIPIcs.APPROX-RANDOM.2018.9
article
Perturbation Resilient Clustering for k-Center and Related Problems via LP Relaxations
Chekuri, Chandra
1
Gupta, Shalmoli
1
Department of Computer Science, University of Illinois, Urbana-Champaign, IL 61801, USA
We consider clustering in the perturbation resilience model that has been studied since the work of Bilu and Linial [Yonatan Bilu and Nathan Linial, 2010] and Awasthi, Blum and Sheffet [Awasthi et al., 2012]. A clustering instance I is said to be alpha-perturbation resilient if the optimal solution does not change when the pairwise distances are modified by a factor of alpha and the perturbed distances satisfy the metric property - this is the metric perturbation resilience property introduced in [Angelidakis et al., 2017] and a weaker requirement than prior models. We make two high-level contributions.
- We show that the natural LP relaxation of k-center and asymmetric k-center is integral for 2-perturbation resilient instances. We believe that demonstrating the goodness of standard LP relaxations complements existing results [Maria-Florina Balcan et al., 2016; Angelidakis et al., 2017] that are based on new algorithms designed for the perturbation model.
- We define a simple new model of perturbation resilience for clustering with outliers. Using this model we show that the unified MST and dynamic programming based algorithm proposed in [Angelidakis et al., 2017] exactly solves the clustering with outliers problem for several common center based objectives (such as k-center, k-means, and k-median) when the instance is 2-perturbation resilient. We further show that a natural LP relaxation is integral for 2-perturbation resilient instances of k-center with outliers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.9/LIPIcs.APPROX-RANDOM.2018.9.pdf
Clustering
Perturbation Resilience
LP Integrality
Outliers
Beyond Worst Case Analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
10:1
10:19
10.4230/LIPIcs.APPROX-RANDOM.2018.10
article
Sherali-Adams Integrality Gaps Matching the Log-Density Threshold
Chlamtác, Eden
1
Manurangsi, Pasin
2
Ben-Gurion University, Beer Sheva, Israel
University of California, Berkeley, USA
The log-density method is a powerful algorithmic framework which in recent years has given rise to the best-known approximations for a variety of problems, including Densest-k-Subgraph and Small Set Bipartite Vertex Expansion. These approximations have been conjectured to be optimal based on various instantiations of a general conjecture: that it is hard to distinguish a fully random combinatorial structure from one which contains a similar planted sub-structure with the same "log-density".
We bolster this conjecture by showing that in a random hypergraph with edge probability n^{-alpha}, Omega(log n) rounds of Sherali-Adams cannot rule out the existence of a k-subhypergraph with edge density k^{-alpha-o(1)}, for any k and alpha. This holds even when the bound on the objective function is lifted. This gives strong integrality gaps which exactly match the gap in the above distinguishing problems, as well as the best-known approximations, for Densest k-Subgraph, Smallest p-Edge Subgraph, their hypergraph extensions, and Small Set Bipartite Vertex Expansion (or equivalently, Minimum p-Union). Previously, such integrality gaps were known only for Densest k-Subgraph for one specific parameter setting.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.10/LIPIcs.APPROX-RANDOM.2018.10.pdf
Approximation algorithms
integrality gaps
lift-and-project
log-density
Densest k-Subgraph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
11:1
11:18
10.4230/LIPIcs.APPROX-RANDOM.2018.11
article
Lower Bounds for Approximating Graph Parameters via Communication Complexity
Eden, Talya
1
Rosenbaum, Will
1
https://orcid.org/0000-0002-7723-9090
School of Electrical Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
In a celebrated work, Blais, Brody, and Matulef [Blais et al., 2012] developed a technique for proving property testing lower bounds via reductions from communication complexity. Their work focused on testing properties of functions, and yielded new lower bounds as well as simplified analyses of known lower bounds. Here, we take a further step in generalizing the methodology of [Blais et al., 2012] to analyze the query complexity of graph parameter estimation problems. In particular, our technique decouples the lower bound arguments from the representation of the graph, allowing it to work with any query type.
We illustrate our technique by providing new simpler proofs of previously known tight lower bounds for the query complexity of several graph problems: estimating the number of edges in a graph, sampling edges from an almost-uniform distribution, estimating the number of triangles (and more generally, r-cliques) in a graph, and estimating the moments of the degree distribution of a graph. We also prove new lower bounds for estimating the edge connectivity of a graph and estimating the number of instances of any fixed subgraph in a graph. We show that the lower bounds for estimating the number of triangles and edge connectivity also hold in a strictly stronger computational model that allows access to uniformly random edge samples.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.11/LIPIcs.APPROX-RANDOM.2018.11.pdf
sublinear graph parameter estimation
lower bounds
communication complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
12:1
12:16
10.4230/LIPIcs.APPROX-RANDOM.2018.12
article
Communication Complexity of Correlated Equilibrium with Small Support
Ganor, Anat
1
C. S., Karthik
2
Tel Aviv University, Tel Aviv, Israel
Weizmann Institute of Science, Rehovot, Israel
We define a two-player N x N game called the 2-cycle game, that has a unique pure Nash equilibrium which is also the only correlated equilibrium of the game. In this game, every 1/poly(N)-approximate correlated equilibrium is concentrated on the pure Nash equilibrium. We show that the randomized communication complexity of finding any 1/poly(N)-approximate correlated equilibrium of the game is Omega(N). For small approximation values, our lower bound answers an open question of Babichenko and Rubinstein (STOC 2017).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.12/LIPIcs.APPROX-RANDOM.2018.12.pdf
Correlated equilibrium
Nash equilibrium
Communication complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
13:1
13:15
10.4230/LIPIcs.APPROX-RANDOM.2018.13
article
On Minrank and the Lovász Theta Function
Haviv, Ishay
1
School of Computer Science, The Academic College of Tel Aviv-Yaffo, Tel Aviv 61083, Israel
Two classical upper bounds on the Shannon capacity of graphs are the theta-function due to Lovász and the minrank parameter due to Haemers. We provide several explicit constructions of n-vertex graphs with a constant theta-function and minrank at least n^delta for a constant delta>0 (over various prime order fields). This implies a limitation on the theta-function-based algorithmic approach to approximating the minrank parameter of graphs. The proofs involve linear spaces of multivariate polynomials and the method of higher incidence matrices.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.13/LIPIcs.APPROX-RANDOM.2018.13.pdf
Minrank
Theta Function
Shannon capacity
Multivariate polynomials
Higher incidence matrices
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
14:1
14:19
10.4230/LIPIcs.APPROX-RANDOM.2018.14
article
Online Makespan Minimization: The Power of Restart
Huang, Zhiyi
1
Kang, Ning
1
Tang, Zhihao Gavin
1
Wu, Xiaowei
2
Zhang, Yuhao
1
Department of Computer Science, The University of Hong Kong, Hong Kong
Department of Computing, The Hong Kong Polytechnic University, Hong Kong
We consider the online makespan minimization problem on identical machines. Chen and Vestjens (ORL 1997) show that the largest processing time first (LPT) algorithm is 1.5-competitive. For the special case of two machines, Noga and Seiden (TCS 2001) introduce the SLEEPY algorithm that achieves a competitive ratio of (5 - sqrt{5})/2 ~~ 1.382, matching the lower bound by Chen and Vestjens (ORL 1997). Furthermore, Noga and Seiden note that in many applications one can kill a job and restart it later, and they leave an open problem whether algorithms with restart can obtain better competitive ratios.
We resolve this long-standing open problem on the positive end. Our algorithm has a natural rule for killing a processing job: a newly-arrived job replaces the smallest processing job if 1) the new job is larger than other pending jobs, 2) the new job is much larger than the processing one, and 3) the processed portion is small relative to the size of the new job. With appropriate choice of parameters, we show that our algorithm improves the 1.5 competitive ratio for the general case, and the 1.382 competitive ratio for the two-machine case.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.14/LIPIcs.APPROX-RANDOM.2018.14.pdf
Online Scheduling
Makespan Minimization
Identical Machines
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
15:1
15:20
10.4230/LIPIcs.APPROX-RANDOM.2018.15
article
On Sketching the q to p Norms
Krishnan, Aditya
1
Mohanty, Sidhanth
1
Woodruff, David P.
1
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA
We initiate the study of data dimensionality reduction, or sketching, for the q -> p norms. Given an n x d matrix A, the q -> p norm, denoted |A|_{q -> p} = sup_{x in R^d \ 0} |Ax|_p / |x|_q, is a natural generalization of several matrix and vector norms studied in the data stream and sketching models, with applications to data mining, hardness of approximation, and oblivious routing. We say a distribution S on random linear maps L: R^{n x d} -> R^k is a (k,alpha)-sketching family if from L(A), one can approximate |A|_{q -> p} up to a factor alpha with constant probability. We provide upper and lower bounds on the sketching dimension k for every p, q in [1, infty], and in a number of cases our bounds are tight. While we mostly focus on constant alpha, we also consider large approximation factors alpha, as well as other variants of the problem such as when A has low rank.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.15/LIPIcs.APPROX-RANDOM.2018.15.pdf
Dimensionality Reduction
Norms
Sketching
Streaming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
16:1
16:21
10.4230/LIPIcs.APPROX-RANDOM.2018.16
article
Flow-time Optimization for Concurrent Open-Shop and Precedence Constrained Scheduling Models
Kulkarni, Janardhan
1
Li, Shi
2
Microsoft Research, Redmond, WA, USA
Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY, USA
Scheduling a set of jobs over a collection of machines is a fundamental problem that needs to be solved millions of times a day in various computing platforms: in operating systems, in large data clusters, and in data centers. Along with makespan, flow-time, which measures the length of time a job spends in a system before it completes, is arguably the most important metric for measuring the performance of a scheduling algorithm. In recent years, there has been remarkable progress in understanding flow-time based objective functions in diverse settings such as unrelated machines scheduling, broadcast scheduling, and multi-dimensional scheduling, to name a few.
Yet, our understanding of the flow-time objective is limited mostly to scenarios where jobs have no dependencies. On the other hand, in almost all real-world applications (think of MapReduce settings, for example), jobs have dependencies that need to be respected while making scheduling decisions. In this paper, we take the first steps towards understanding this complex problem. In particular, we consider two classical scheduling problems that capture dependencies across jobs: 1) concurrent open-shop scheduling (COSSP) and 2) precedence constrained scheduling (PCSP). Our main motivation to study these problems specifically comes from their relevance to two scheduling problems that have gained importance in the context of data centers: co-flow scheduling and DAG scheduling. We design almost optimal approximation algorithms for COSSP and PCSP, and show hardness results.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.16/LIPIcs.APPROX-RANDOM.2018.16.pdf
Approximation
Weighted Flow Time
Concurrent Open Shop
Precedence Constraints
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
17:1
17:19
10.4230/LIPIcs.APPROX-RANDOM.2018.17
article
Sublinear-Time Quadratic Minimization via Spectral Decomposition of Matrices
Levi, Amit
1
https://orcid.org/0000-0002-8530-5182
Yoshida, Yuichi
2
https://orcid.org/0000-0001-8919-8479
University of Waterloo, Canada
National Institute of Informatics, Tokyo, Japan
We design a sublinear-time approximation algorithm for quadratic function minimization problems with a better error bound than the previous algorithm by Hayashi and Yoshida (NIPS'16). Our approximation algorithm can be modified to handle the case where the minimization is done over a sphere. The analysis of our algorithms combines results from graph limit theory with a novel spectral decomposition of matrices. Specifically, we prove that a matrix A can be decomposed into a structured part and a pseudorandom part, where the structured part is a block matrix with a polylogarithmic number of blocks, such that in each block all the entries are the same, and the pseudorandom part has a small spectral norm, achieving a better error bound than the existing decomposition theorem of Frieze and Kannan (FOCS'96). As an additional application of the decomposition theorem, we give a sublinear-time approximation algorithm for computing the top singular values of a matrix.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.17/LIPIcs.APPROX-RANDOM.2018.17.pdf
Quadratic function minimization
Approximation Algorithms
Matrix spectral decomposition
Graph limits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
18:1
18:18
10.4230/LIPIcs.APPROX-RANDOM.2018.18
article
Deterministic Heavy Hitters with Sublinear Query Time
Li, Yi
1
Nakos, Vasileios
2
Nanyang Technological University, Singapore
Harvard University, USA
We study the classic problem of finding l_1 heavy hitters in the streaming model. In the general turnstile model, we give the first deterministic sublinear-time sketching algorithm which takes a linear sketch of length O(epsilon^{-2} log n * log^*(epsilon^{-1})), which is only a factor of log^*(epsilon^{-1}) more than the best existing polynomial-time sketching algorithm (Nelson et al., RANDOM '12). Our approach is based on an iterative procedure, where most unrecovered heavy hitters are identified in each iteration. Although this technique has been extensively employed in the related problem of sparse recovery, this is the first time, to the best of our knowledge, that it has been used in the context of heavy hitters. Along the way we also obtain a sublinear time algorithm for the closely related problem of l_1/l_1 compressed sensing, matching the space usage of previous (super-)linear time algorithms. In the strict turnstile model, we show that the runtime can be improved and the sketching matrix can be made strongly explicit with O(epsilon^{-2}log^3 n/log^3(1/epsilon)) rows.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.18/LIPIcs.APPROX-RANDOM.2018.18.pdf
heavy hitters
turnstile model
sketching algorithm
strongly explicit
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
19:1
19:13
10.4230/LIPIcs.APPROX-RANDOM.2018.19
article
On Low-Risk Heavy Hitters and Sparse Recovery Schemes
Li, Yi
1
Nakos, Vasileios
2
Woodruff, David P.
3
Nanyang Technological University, Singapore
Harvard University, USA
Carnegie Mellon University, USA
We study the heavy hitters and related sparse recovery problems in the low failure probability regime. This regime is not well-understood, and the main previous work on this is by Gilbert et al. (ICALP'13). We recognize an error in their analysis, improve their results, and contribute new sparse recovery algorithms, as well as provide upper and lower bounds for the heavy hitters problem with low failure probability. Our results are summarized as follows:
1) (Heavy Hitters) We study three natural variants for finding heavy hitters in the strict turnstile model, where the variant depends on the quality of the desired output. For the weakest variant, we give a randomized algorithm improving the failure probability analysis of the ubiquitous Count-Min data structure. We also give a new lower bound for deterministic schemes, resolving Question 4 posed at the IITK Workshop on Algorithms for Data Streams (2006) for this variant. Under the strongest and well-studied l_{infty}/l_2 variant, we show that the classical Count-Sketch data structure is optimal for very low failure probabilities, which was previously unknown.
2) (Sparse Recovery Algorithms) For non-adaptive sparse-recovery, we give sublinear-time algorithms with low-failure probability, which improve upon Gilbert et al. (ICALP'13). In the adaptive case, we improve the failure probability from a constant by Indyk et al. (FOCS '11) to e^{-k^{0.99}}, where k is the sparsity parameter.
3) (Optimal Average-Case Sparse Recovery Bounds) We give matching upper and lower bounds in all parameters, including the failure probability, for the measurement complexity of the l_2/l_2 sparse recovery problem in the spiked-covariance model, completely settling its complexity in this model.
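The Count-Min structure whose failure probability analysis item 1) improves is simple enough to sketch in a few lines. The following is an illustrative Python version (not the paper's algorithm or analysis); the width/depth defaults and the blake2b-based per-row hashing are arbitrary choices made for this sketch:

```python
import hashlib

class CountMin:
    """Minimal Count-Min sketch for non-negative frequency updates.
    A point query returns an overestimate of the true count: each of the
    d rows only adds non-negative collision mass, and we take the row minimum."""

    def __init__(self, width=256, depth=4):
        self.w, self.d = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _bucket(self, item, row):
        # One independent-looking hash per row, derived from a keyed digest.
        digest = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % self.w

    def update(self, item, delta=1):
        for r in range(self.d):
            self.table[r][self._bucket(item, r)] += delta

    def query(self, item):
        return min(self.table[r][self._bucket(item, r)] for r in range(self.d))
```

In the strict turnstile model all intermediate counts stay non-negative, which is what makes the row minimum an upper bound on the true frequency.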
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.19/LIPIcs.APPROX-RANDOM.2018.19.pdf
heavy hitters
sparse recovery
turnstile model
spiked covariance model
lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
20:1
20:17
10.4230/LIPIcs.APPROX-RANDOM.2018.20
article
Mildly Exponential Time Approximation Algorithms for Vertex Cover, Balanced Separator and Uniform Sparsest Cut
Manurangsi, Pasin
1
Trevisan, Luca
1
University of California, Berkeley, USA
In this work, we study the trade-off between the running time of approximation algorithms and their approximation guarantees. By leveraging a structure of the "hard" instances of the Arora-Rao-Vazirani lemma [Sanjeev Arora et al., 2009; James R. Lee, 2005], we show that the Sum-of-Squares hierarchy can be adapted to provide "fast", but still exponential time, approximation algorithms for several problems in the regime where they are believed to be NP-hard. Specifically, our framework yields the following algorithms; here n denotes the number of vertices of the graph and r can be any positive real number greater than 1 (possibly depending on n).
- A (2 - 1/(O(r)))-approximation algorithm for Vertex Cover that runs in exp(n/(2^{r^2})) n^{O(1)} time.
- An O(r)-approximation algorithm for Uniform Sparsest Cut and Balanced Separator that runs in exp(n/(2^{r^2})) n^{O(1)} time.
Our algorithm for Vertex Cover improves upon Bansal et al.'s algorithm [Nikhil Bansal et al., 2017] which achieves (2 - 1/(O(r)))-approximation in time exp (n/(r^r))n^{O(1)}. For Uniform Sparsest Cut and Balanced Separator, our algorithms improve upon O(r)-approximation exp (n/(2^r))n^{O(1)}-time algorithms that follow from a work of Charikar et al. [Moses Charikar et al., 2010].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.20/LIPIcs.APPROX-RANDOM.2018.20.pdf
Approximation algorithms
Exponential-time algorithms
Vertex Cover
Sparsest Cut
Balanced Separator
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
21:1
21:19
10.4230/LIPIcs.APPROX-RANDOM.2018.21
article
Deterministic O(1)-Approximation Algorithms to 1-Center Clustering with Outliers
Narayanan, Shyam
1
Harvard University, Cambridge, Massachusetts, USA
The 1-center clustering with outliers problem asks about identifying a prototypical robust statistic that approximates the location of a cluster of points. Given some constant 0 < alpha < 1 and n points such that alpha n of them are in some (unknown) ball of radius r, the goal is to compute a ball of radius O(r) that also contains alpha n points. This problem can be formulated with the points in a normed vector space such as R^d or in a general metric space.
The problem has a simple randomized solution: a randomly selected point is a correct solution with constant probability, and its correctness can be verified in linear time. However, the deterministic complexity of this problem was not known. In this paper, for any L^p vector space, we show an O(nd)-time solution with a ball of radius O(r) for a fixed alpha > 1/2, and for any normed vector space, we show an O(nd)-time solution with a ball of radius O(r) when alpha > 1/2 as well as an O(nd log^{(k)}(n))-time solution with a ball of radius O(r) for all alpha > 0, k in N, where log^{(k)}(n) represents the kth iterated logarithm, assuming distance computation and vector space operations take O(d) time. For an arbitrary metric space, we show for any C in N an O(n^{1+1/C})-time solution that finds a ball of radius 2Cr, assuming distance computation between any pair of points takes O(1)-time, and show that for any alpha, C, an O(n^{1+1/C})-time solution that finds a ball of radius ((2C-3)(1-alpha)-1)r cannot exist.
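The randomized baseline described above (a random point, verified in linear time) can be sketched as follows. This is an illustrative version only: it assumes the target radius r is given, and the function name, dist callback, and max_tries bound are invented for the sketch:

```python
import random

def one_center_with_outliers(points, alpha, r, dist, max_tries=100):
    """Randomized baseline: a uniformly random point lies in the unknown
    ball with probability >= alpha; when it does, the ball of radius 2r
    around it covers all alpha*n inliers by the triangle inequality.
    Verification is a linear scan counting covered points."""
    n = len(points)
    for _ in range(max_tries):
        c = random.choice(points)
        if sum(1 for p in points if dist(c, p) <= 2 * r) >= alpha * n:
            return c  # center of a radius-2r ball containing >= alpha*n points
    return None  # fails with probability at most (1 - alpha)^max_tries
```

The paper's contribution is removing this residual randomness: its deterministic algorithms achieve an O(r) radius guarantee with no failure probability at all.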
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.21/LIPIcs.APPROX-RANDOM.2018.21.pdf
Deterministic
Approximation Algorithm
Cluster
Statistic
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
22:1
22:17
10.4230/LIPIcs.APPROX-RANDOM.2018.22
article
Robust Online Speed Scaling With Deadline Uncertainty
Reddy, Goonwanth
1
Vaze, Rahul
2
Department of Electrical Engineering, Indian Institute of Technology, Madras, Chennai, India
School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India
A speed scaling problem is considered, where time is divided into slots, and jobs with payoff v arrive at the beginning of the slot with associated deadlines d. Each job takes one slot to be processed, and multiple jobs can be processed by the server in each slot with energy cost g(k) for processing k jobs in one slot. The payoff is accrued by the algorithm only if the job is processed by its deadline. We consider a robust version of this speed scaling problem, where a job on its arrival reveals its payoff v; the deadline, however, is hidden from the online algorithm, could be chosen adversarially, and is known to the optimal offline algorithm. The objective is to derive a robust (to deadlines) and optimal online algorithm that achieves the best competitive ratio. We propose an algorithm (called min-LCR) and show that it is an optimal online algorithm for any convex energy cost function g(.). We do so without actually evaluating the optimal competitive ratio, and give a general proof that works for any convex g, which is rather novel. For the popular choice of energy cost function g(k) = k^alpha, alpha >= 2, we give concrete bounds on the competitive ratio of the algorithm, which ranges between 2.618 and 3 depending on the value of alpha. The best known online algorithm for the same problem, but where deadlines are revealed to the online algorithm, has a competitive ratio of 2 and a lower bound of sqrt{2}. Thus, importantly, lack of deadline knowledge does not make the problem degenerate, and the effect of deadline information on the optimal competitive ratio is limited.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.22/LIPIcs.APPROX-RANDOM.2018.22.pdf
Online Algorithms
Speed Scaling
Greedy Algorithms
Scheduling
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
23:1
23:20
10.4230/LIPIcs.APPROX-RANDOM.2018.23
article
Multi-Agent Submodular Optimization
Santiago, Richard
1
Shepherd, F. Bruce
2
McGill University, Montreal, Canada
University of British Columbia, Vancouver, Canada
Recent years have seen many algorithmic advances in the area of submodular optimization: (SO) min/max f(S): S in F, where F is a given family of feasible sets over a ground set V and f:2^V -> R is submodular. This progress has been coupled with a wealth of new applications for these models. Our focus is on a more general class of multi-agent submodular optimization (MASO) min/max Sum_{i=1}^{k} f_i(S_i): S_1 u+ S_2 u+ ... u+ S_k in F. Here we use u+ to denote disjoint union, and hence this model is attractive when resources are being allocated across k agents, each with its own submodular cost function f_i(). This was introduced in the minimization setting by Goel et al. In this paper we explore the extent to which the approximability of the multi-agent problems is linked to that of their single-agent versions, referred to informally as the multi-agent gap.
We present different reductions that transform a multi-agent problem into a single-agent one. For minimization, we show that (MASO) has an O(alpha * min{k, log^2 (n)})-approximation whenever (SO) admits an alpha-approximation over the convex formulation. In addition, we discuss the class of "bounded blocker" families where there is a provably tight O(log n) multi-agent gap between (MASO) and (SO). For maximization, we show that monotone (resp. nonmonotone) (MASO) admits an alpha (1-1/e) (resp. alpha * 0.385) approximation whenever monotone (resp. nonmonotone) (SO) admits an alpha-approximation over the multilinear formulation; and the 1-1/e multi-agent gap for monotone objectives is tight. We also discuss several families (such as spanning trees, matroids, and p-systems) that have an (optimal) multi-agent gap of 1. These results substantially expand the family of tractable models for submodular maximization.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.23/LIPIcs.APPROX-RANDOM.2018.23.pdf
submodular optimization
multi-agent
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
24:1
24:18
10.4230/LIPIcs.APPROX-RANDOM.2018.24
article
Generalized Assignment of Time-Sensitive Item Groups
Sarpatwar, Kanthi
1
Schieber, Baruch
1
Shachnai, Hadas
2
IBM Research, Yorktown Heights, NY, USA
Computer Science Department, Technion, Haifa, Israel
We study the generalized assignment problem with time-sensitive item groups (chi-AGAP). It has central applications in advertisement placement on the Internet, and in virtual network embedding in Cloud data centers. We are given a set of items, partitioned into n groups, and a set of T identical bins (or, time-slots). Each group 1 <= j <= n has a time-window chi_j = [r_j, d_j] subseteq [T] in which it can be packed. Each item i in group j has a size s_i > 0 and a non-negative utility u_{it} when packed into bin t in chi_j. A bin can accommodate at most one item from each group and the total size of the items in a bin cannot exceed its capacity. The goal is to find a feasible packing of a subset of the items in the bins such that the total utility from groups that are completely packed is maximized. Our main result is an Omega(1)-approximation algorithm for chi-AGAP. Our approximation technique relies on a non-trivial rounding of a configuration LP, which can be adapted to other common scenarios of resource allocation in Cloud data centers.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.24/LIPIcs.APPROX-RANDOM.2018.24.pdf
Approximation Algorithms
Packing and Covering problems
Generalized Assignment problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
25:1
25:15
10.4230/LIPIcs.APPROX-RANDOM.2018.25
article
On Geodesically Convex Formulations for the Brascamp-Lieb Constant
Sra, Suvrit
1
Vishnoi, Nisheeth K.
2
Yildiz, Ozan
2
Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
We consider two non-convex formulations for computing the optimal constant in the Brascamp-Lieb inequality corresponding to a given datum and show that they are geodesically log-concave on the manifold of positive definite matrices endowed with the Riemannian metric corresponding to the Hessian of the log-determinant function. The first formulation is present in the work of Lieb [Lieb, 1990] and the second is new and inspired by the work of Bennett et al. [Bennett et al., 2008]. Recent work of Garg et al. [Ankit Garg et al., 2017] also implies a geodesically log-concave formulation of the Brascamp-Lieb constant through a reduction to the operator scaling problem. However, the dimension of the arising optimization problem in their reduction depends exponentially on the number of bits needed to describe the Brascamp-Lieb datum. The formulations presented here have dimensions that are polynomial in the bit complexity of the input datum.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.25/LIPIcs.APPROX-RANDOM.2018.25.pdf
Geodesic convexity
positive definite cone
geodesics
Brascamp-Lieb constant
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
26:1
26:9
10.4230/LIPIcs.APPROX-RANDOM.2018.26
article
Tensor Rank is Hard to Approximate
Swernofsky, Joseph
1
Kungliga Tekniska Högskolan, Lindstedtsvägen 3, Stockholm SE-100 44, Sweden
We prove that approximating the rank of a 3-tensor to within a factor of 1 + 1/1852 - delta, for any delta > 0, is NP-hard over any field. We do this via reduction from bounded occurrence 2-SAT.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.26/LIPIcs.APPROX-RANDOM.2018.26.pdf
tensor rank
high rank tensor
slice elimination
approximation algorithm
hardness of approximation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
27:1
27:14
10.4230/LIPIcs.APPROX-RANDOM.2018.27
article
An O(1)-Approximation Algorithm for Dynamic Weighted Vertex Cover with Soft Capacity
Wei, Hao-Ting
1
Hon, Wing-Kai
2
Horn, Paul
3
Liao, Chung-Shou
1
Sadakane, Kunihiko
4
Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 30013, Taiwan
Department of Computer Science, National Tsing Hua University, Hsinchu 30013, Taiwan
Department of Mathematics, University of Denver, Denver, USA
Department of Mathematical Informatics, The University of Tokyo, Tokyo, Japan
This study considers the soft capacitated vertex cover problem in a dynamic setting. This problem generalizes the dynamic model of the vertex cover problem, which has been intensively studied in recent years. Given a dynamically changing vertex-weighted graph G=(V,E), which allows edge insertions and edge deletions, the goal is to design a data structure that maintains an approximate minimum vertex cover while satisfying the capacity constraint of each vertex. That is, when picking a copy of a vertex v in the cover, the number of v's incident edges covered by the copy is at most a given capacity of v. We extend Bhattacharya et al.'s work [SODA'15 and ICALP'15] to obtain a deterministic primal-dual algorithm for maintaining a constant-factor approximate minimum capacitated vertex cover with O(log n / epsilon) amortized update time, where n is the number of vertices in the graph. The algorithm can be extended to (1) a more general model in which each edge is associated with a non-uniform and unsplittable demand, and (2) the more general capacitated set cover problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.27/LIPIcs.APPROX-RANDOM.2018.27.pdf
approximation algorithm
dynamic algorithm
primal-dual
vertex cover
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
28:1
28:19
10.4230/LIPIcs.APPROX-RANDOM.2018.28
article
Fixed-Parameter Approximation Schemes for Weighted Flowtime
Wiese, Andreas
1
Department of Industrial Engineering and Center for Mathematical Modeling, Universidad de Chile, Chile
Given a set of n jobs with integral release dates, processing times and weights, it is a natural and important scheduling problem to compute a schedule that minimizes the sum of the weighted flow times of the jobs. There are strong lower bounds for the possible approximation ratios. In the non-preemptive case, even on a single machine the best known result is an O(sqrt{n})-approximation which is best possible. In the preemptive case on m identical machines there is a O(log min{n/m,P})-approximation (where P denotes the maximum job size) which is also best possible.
We study the problem in the parametrized setting where our parameter k is an upper bound on the maximum (integral) processing time and weight of a job, a standard parameter for scheduling problems. We present a (1+epsilon)-approximation algorithm for the preemptive and the non-preemptive case of minimizing weighted flow time on m machines with a running time of f(k,epsilon,m)* n^{O(1)}, i.e., our combined parameters are k,epsilon, and m. Key to our results is to distinguish time intervals according to whether in the optimal solution the pending jobs have large or small total weight. Depending on this we employ dynamic programming, linear programming, greedy routines, or combinations of the latter to compute the schedule for each respective interval.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.28/LIPIcs.APPROX-RANDOM.2018.28.pdf
Scheduling
fixed-parameter algorithms
approximation algorithms
approximation schemes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
29:1
29:18
10.4230/LIPIcs.APPROX-RANDOM.2018.29
article
List-Decoding Homomorphism Codes with Arbitrary Codomains
Babai, László
1
https://orcid.org/0000-0002-2058-685X
Black, Timothy J. F.
1
https://orcid.org/0000-0003-2469-9867
Wuu, Angela
1
University of Chicago, Chicago IL, USA
The codewords of the homomorphism code aHom(G,H) are the affine homomorphisms between two finite groups, G and H, generalizing Hadamard codes. Following the work of Goldreich-Levin (1989), Grigorescu et al. (2006), Dinur et al. (2008), and Guo and Sudan (2014), we further expand the range of groups for which local list-decoding is possible up to mindist, the minimum distance of the code. In particular, for the first time, we do not require either G or H to be solvable. Specifically, we demonstrate a poly(1/epsilon) bound on the list size, i.e., on the number of codewords within distance (mindist-epsilon) from any received word, when G is either abelian or an alternating group, and H is an arbitrary (finite or infinite) group. We conjecture that a similar bound holds for all finite simple groups as domains; the alternating groups serve as the first test case.
The abelian vs. arbitrary result permits us to adapt previous techniques to obtain efficient local list-decoding for this case. We also obtain efficient local list-decoding for the permutation representations of alternating groups (the codomain is a symmetric group) under the restriction that the domain G=A_n is paired with codomain H=S_m satisfying m < 2^{n-1}/sqrt{n}.
The limitations on the codomain in the latter case arise from severe technical difficulties stemming from the need to solve the homomorphism extension (HomExt) problem in certain cases; these are addressed in a separate paper (Wuu 2018).
We introduce an intermediate "semi-algorithmic" model we call Certificate List-Decoding that bypasses the HomExt bottleneck and works in the alternating vs. arbitrary setting. A certificate list-decoder produces partial homomorphisms that uniquely extend to the homomorphisms in the list. A homomorphism extender applied to a list of certificates yields the desired list.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.29/LIPIcs.APPROX-RANDOM.2018.29.pdf
Error-correcting codes
Local algorithms
Local list-decoding
Finite groups
Homomorphism codes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
30:1
30:15
10.4230/LIPIcs.APPROX-RANDOM.2018.30
article
Optimal Deterministic Extractors for Generalized Santha-Vazirani Sources
Beigi, Salman
1
Bogdanov, Andrej
2
Etesami, Omid
1
Guo, Siyao
3
Institute for Research in Fundamental Sciences, Tehran, Iran
Chinese University of Hong Kong
Northeastern University, Boston, USA
Let F be a finite alphabet and D be a finite set of distributions over F. A Generalized Santha-Vazirani (GSV) source of type (F, D), introduced by Beigi, Etesami and Gohari (ICALP 2015, SICOMP 2017), is a random sequence (F_1, ..., F_n) in F^n, where F_i is a sample from some distribution d in D whose choice may depend on F_1, ..., F_{i-1}.
We show that all GSV source types (F, D) fall into one of three categories: (1) non-extractable; (2) extractable with error n^{-Theta(1)}; (3) extractable with error 2^{-Omega(n)}.
We provide essentially randomness-optimal extraction algorithms for extractable sources. Our algorithm for category (2) sources extracts one bit with error epsilon from n = poly(1/epsilon) samples in time linear in n. Our algorithm for category (3) sources extracts m bits with error epsilon from n = O(m + log 1/epsilon) samples in time min{O(m2^m * n),n^{O(|F|)}}.
We also give algorithms for classifying a GSV source type (F, D): Membership in category (1) can be decided in NP, while membership in category (3) is polynomial-time decidable.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.30/LIPIcs.APPROX-RANDOM.2018.30.pdf
feasibility of randomness extraction
extractor lower bounds
martingales
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
31:1
31:10
10.4230/LIPIcs.APPROX-RANDOM.2018.31
article
Adaptive Lower Bound for Testing Monotonicity on the Line
Belovs, Aleksandrs
1
Faculty of Computing, University of Latvia, Raina bulvaris 19, Riga, Latvia.
In the property testing model, the task is to distinguish objects possessing some property from objects that are far from it. One such property is monotonicity, where the objects are functions from one poset to another. This is an active area of research. In this paper we study the query complexity of epsilon-testing monotonicity of a function f : [n] -> [r]. All our lower bounds are for adaptive two-sided testers.
- We prove a nearly tight lower bound for this problem in terms of r. The bound is Omega((log r)/(log log r)) when epsilon = 1/2. No previous satisfactory lower bound in terms of r was known.
- We completely characterise the query complexity of this problem in terms of n for smaller values of epsilon. The complexity is Theta(epsilon^{-1} log (epsilon n)). Apart from giving the lower bound, this improves on the best known upper bound.
Finally, we give an alternative proof of the Omega(epsilon^{-1}d log n - epsilon^{-1}log epsilon^{-1}) lower bound for testing monotonicity on the hypergrid [n]^d due to Chakrabarty and Seshadhri (RANDOM'13).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.31/LIPIcs.APPROX-RANDOM.2018.31.pdf
property testing
monotonicity on the line
monotonicity on the hypergrid
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
32:1
32:18
10.4230/LIPIcs.APPROX-RANDOM.2018.32
article
Swendsen-Wang Dynamics for General Graphs in the Tree Uniqueness Region
Blanca, Antonio
1
Chen, Zongchen
2
Vigoda, Eric
1
School of Computer Science, Georgia Institute of Technology, Atlanta GA 30332, USA
School of Mathematics, Georgia Institute of Technology, Atlanta GA 30332, USA
The Swendsen-Wang dynamics is a popular algorithm for sampling from the Gibbs distribution for the ferromagnetic Ising model on a graph G=(V,E). The dynamics is a "global" Markov chain which is conjectured to converge to equilibrium in O(|V|^{1/4}) steps for any graph G at any (inverse) temperature beta. It was recently proved by Guo and Jerrum (2017) that the Swendsen-Wang dynamics has polynomial mixing time on any graph at all temperatures, yet there are few results providing o(|V|) upper bounds on its convergence time.
We prove fast convergence of the Swendsen-Wang dynamics on general graphs in the tree uniqueness region of the ferromagnetic Ising model. In particular, when beta < beta_c(d) where beta_c(d) denotes the uniqueness/non-uniqueness threshold on infinite d-regular trees, we prove that the relaxation time (i.e., the inverse spectral gap) of the Swendsen-Wang dynamics is Theta(1) on any graph of maximum degree d >= 3. Our proof utilizes a version of the Swendsen-Wang dynamics which only updates isolated vertices. We establish that this variant of the Swendsen-Wang dynamics has mixing time O(log{|V|}) and relaxation time Theta(1) on any graph of maximum degree d for all beta < beta_c(d). We believe that this Markov chain may be of independent interest, as it is a monotone Swendsen-Wang type chain. As part of our proofs, we provide modest extensions of the technology of Mossel and Sly (2013) for analyzing mixing times and of the censoring result of Peres and Winkler (2013). Both of these results are for the Glauber dynamics, and we extend them here to general monotone Markov chains. This class of dynamics includes for example the heat-bath block dynamics, for which we obtain new tight mixing time bounds.
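For context, one step of the standard global Swendsen-Wang dynamics for the ferromagnetic Ising model can be sketched as follows. This is the textbook update, not the isolated-vertex variant introduced in the paper, and the union-find representation is an implementation choice made for the sketch:

```python
import math
import random

def swendsen_wang_step(n, edges, spins, beta):
    """One Swendsen-Wang update: keep each monochromatic edge independently
    with probability p = 1 - exp(-2*beta), then resample a uniform spin for
    every connected component of the kept edges (a global move)."""
    p = 1.0 - math.exp(-2.0 * beta)

    # Union-find over vertices joined by kept monochromatic edges.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if spins[u] == spins[v] and random.random() < p:
            parent[find(u)] = find(v)

    # Assign each percolation component a fresh uniform spin in {-1, +1}.
    component_spin = {}
    for x in range(n):
        root = find(x)
        if root not in component_spin:
            component_spin[root] = random.choice((-1, 1))
        spins[x] = component_spin[root]
    return spins
```

The "global" character the abstract refers to is visible here: a single step can flip linear-sized components at once, which is what makes its mixing time hard to analyze.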
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.32/LIPIcs.APPROX-RANDOM.2018.32.pdf
Swendsen-Wang dynamics
mixing time
relaxation time
spatial mixing
censoring
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
33:1
33:15
10.4230/LIPIcs.APPROX-RANDOM.2018.33
article
Sampling in Uniqueness from the Potts and Random-Cluster Models on Random Regular Graphs
Blanca, Antonio
1
Galanis, Andreas
2
Goldberg, Leslie Ann
2
Stefankovic, Daniel
3
Vigoda, Eric
1
Yang, Kuan
2
School of Computer Science, Georgia Institute of Technology, Atlanta GA 30332, USA
Department of Computer Science, University of Oxford, Parks Road, Oxford, OX1 3QD, UK
Department of Computer Science, University of Rochester, Rochester, NY 14627, USA
We consider the problem of sampling from the Potts model on random regular graphs. It is conjectured that sampling is possible when the temperature of the model is in the so-called uniqueness regime of the regular tree, but positive algorithmic results have been for the most part elusive. In this paper, for all integers q >= 3 and Delta >= 3, we develop algorithms that produce samples within error o(1) from the q-state Potts model on random Delta-regular graphs, whenever the temperature is in uniqueness, for both the ferromagnetic and antiferromagnetic cases.
The algorithm for the antiferromagnetic Potts model is based on iteratively adding the edges of the graph and resampling a bichromatic class that contains the endpoints of the newly added edge. Key to the algorithm is how to perform the resampling step efficiently since bichromatic classes can potentially induce linear-sized components. To this end, we exploit the tree uniqueness to show that the average growth of bichromatic components is typically small, which allows us to use correlation decay algorithms for the resampling step. While the precise uniqueness threshold on the tree is not known for general values of q and Delta in the antiferromagnetic case, our algorithm works throughout uniqueness regardless of its value.
In the case of the ferromagnetic Potts model, we are able to simplify the algorithm significantly by utilising the random-cluster representation of the model. In particular, we demonstrate that a percolation-type algorithm succeeds in sampling from the random-cluster model with parameters p,q on random Delta-regular graphs for all values of q >= 1 and p<p_c(q,Delta), where p_c(q,Delta) corresponds to a uniqueness threshold for the model on the Delta-regular tree. When restricted to integer values of q, this yields a simplified algorithm for the ferromagnetic Potts model on random Delta-regular graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.33/LIPIcs.APPROX-RANDOM.2018.33.pdf
sampling
Potts model
random regular graphs
phase transitions
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
34:1
34:17
10.4230/LIPIcs.APPROX-RANDOM.2018.34
article
Polar Codes with Exponentially Small Error at Finite Block Length
Blasiok, Jaroslaw
1
Guruswami, Venkatesan
2
Sudan, Madhu
3
Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, 33 Oxford Street, Cambridge, MA 02138, USA
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, 33 Oxford Street, Cambridge, MA 02138, USA.
We show that the entire class of polar codes (up to a natural necessary condition) converges to capacity at block lengths polynomial in the gap to capacity, while simultaneously achieving failure probabilities that are exponentially small in the block length (i.e., decoding fails with probability exp(-N^{Omega(1)}) for codes of length N). Previously this combination was known only for one specific family within the class of polar codes, whereas we establish it whenever the polar code exhibits a condition necessary for any polarization.
Our results adapt and strengthen a local analysis of polar codes due to the authors with Nakkiran and Rudra [Proc. STOC 2018]. Their analysis related the time-local behavior of a martingale to its global convergence, and this allowed them to prove that the broad class of polar codes converge to capacity at polynomial block lengths. Their analysis easily adapts to show exponentially small failure probabilities, provided the associated martingale, the "Arikan martingale", exhibits a corresponding strong local effect. The main contribution of this work is a much stronger local analysis of the Arikan martingale. This leads to the general result claimed above.
In addition to our general result, we also show, for the first time, polar codes that achieve failure probability exp(-N^{beta}) for any beta < 1 while converging to capacity at block length polynomial in the gap to capacity. Finally we also show that the "local" approach can be combined with any analysis of failure probability of an arbitrary polar code to get essentially the same failure probability while achieving block length polynomial in the gap to capacity.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.34/LIPIcs.APPROX-RANDOM.2018.34.pdf
Polar codes
error exponent
rate of polarization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
35:1
35:18
10.4230/LIPIcs.APPROX-RANDOM.2018.35
article
Approximate Degree and the Complexity of Depth Three Circuits
Bun, Mark
1
Thaler, Justin
2
Princeton University, Princeton, NJ, USA
Georgetown University, Washington, DC, USA
Threshold weight, margin complexity, and Majority-of-Threshold circuit size are basic complexity measures of Boolean functions that arise in learning theory, communication complexity, and circuit complexity. Each of these measures might exhibit a chasm at depth three: namely, all polynomial size Boolean circuits of depth two have polynomial complexity under the measure, but there may exist Boolean circuits of depth three that have essentially maximal complexity exp(Theta(n)). However, existing techniques are far from showing this: for all three measures, the best lower bound for depth three circuits is exp(Omega(n^{2/5})). Moreover, prior methods exclusively study block-composed functions. Such methods appear intrinsically unable to prove lower bounds better than exp(Omega(sqrt{n})) even for depth four circuits, and have yet to prove lower bounds better than exp(Omega(sqrt{n})) for circuits of any constant depth.
We take a step toward showing that all of these complexity measures indeed exhibit a chasm at depth three. Specifically, for any arbitrarily small constant delta > 0, we exhibit a depth three circuit of polynomial size (in fact, an O(log n)-decision list) of complexity exp(Omega(n^{1/2-delta})) under each of these measures.
Our methods go beyond the block-composed functions studied in prior work, and hence may not be subject to the same barriers. Accordingly, we suggest natural candidate functions that may exhibit stronger bounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.35/LIPIcs.APPROX-RANDOM.2018.35.pdf
approximate degree
communication complexity
learning theory
polynomial approximation
threshold circuits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
36:1
36:18
10.4230/LIPIcs.APPROX-RANDOM.2018.36
article
Speeding up Switch Markov Chains for Sampling Bipartite Graphs with Given Degree Sequence
Carstens, Corrie Jacobien
1
Kleer, Pieter
2
Korteweg-de Vries Institute for Mathematics, Amsterdam, The Netherlands
Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands
We consider the well-studied problem of uniformly sampling (bipartite) graphs with a given degree sequence, or equivalently, the uniform sampling of binary matrices with fixed row and column sums. In particular, we focus on Markov Chain Monte Carlo (MCMC) approaches, which proceed by making small changes that preserve the degree sequence to a given graph. Such Markov chains converge to the uniform distribution, but the challenge is to show that they do so quickly, i.e., that they are rapidly mixing.
The standard example of this Markov chain approach for sampling bipartite graphs is the switch algorithm, which proceeds by locally switching two edges while preserving the degree sequence. The Curveball algorithm is a variation on this approach in which, essentially, multiple switches (trades) are performed simultaneously, with the goal of speeding up switch-based algorithms. Even though the Curveball algorithm is expected to mix faster than switch-based algorithms for many degree sequences, nothing is currently known about its mixing time. On the other hand, the switch algorithm has been proven to be rapidly mixing for several classes of degree sequences.
In this work we present the first results regarding the mixing time of the Curveball algorithm. We give a theoretical comparison between the switch and Curveball algorithms in terms of their underlying Markov chains. As our main result, we show that the Curveball chain is rapidly mixing whenever a switch-based chain is rapidly mixing. We do this using a novel state space graph decomposition of the switch chain into Johnson graphs. This decomposition is of independent interest.
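To make the switch move concrete, here is a minimal Python sketch (ours, not code from the paper; the function name and the toy 2x2 instance are illustrative assumptions): one step picks two edges of a bipartite graph uniformly at random and swaps their right endpoints when the result stays a simple graph, which preserves the degree sequence either way.

```python
import random

def switch_step(edges, rng):
    """One switch-chain step (sketch): pick two edges (a, b) and (c, d)
    of a bipartite graph uniformly at random and replace them with
    (a, d) and (c, b) if the result is still simple; otherwise stay put.
    Degrees are preserved in both cases."""
    (a, b), (c, d) = rng.sample(sorted(edges), 2)
    if a == c or b == d or (a, d) in edges or (c, b) in edges:
        return edges  # switch would create a duplicate edge; hold
    new_edges = set(edges)
    new_edges -= {(a, b), (c, d)}
    new_edges |= {(a, d), (c, b)}
    return new_edges

# Tiny demo: a 2x2 bipartite graph; edges are (left, right) pairs.
rng = random.Random(0)
E = {(0, 0), (1, 1)}
for _ in range(10):
    E = switch_step(E, rng)
# The degree sequence is invariant under switches.
left_deg = {u: sum(1 for (x, _) in E if x == u) for u in (0, 1)}
```

The Curveball variant described above would instead trade several edges between the neighborhoods of two left vertices in a single step.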
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.36/LIPIcs.APPROX-RANDOM.2018.36.pdf
Binary matrix
graph sampling
Curveball
switch
Markov chain decomposition
Johnson graph
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
37:1
37:20
10.4230/LIPIcs.APPROX-RANDOM.2018.37
article
Randomness Extraction in AC0 and with Small Locality
Cheng, Kuan
1
Li, Xin
1
Department of Computer Science, Johns Hopkins University.
Randomness extractors, which extract high quality (almost-uniform) random bits from biased random sources, are important objects both in theory and in practice. While there has been significant progress in obtaining near-optimal constructions of randomness extractors in various settings, the computational complexity of randomness extractors is still much less studied. In particular, it is not clear whether randomness extractors with good parameters can be computed in several interesting complexity classes that are much weaker than P.
In this paper we study randomness extractors in the following two models of computation: (1) constant-depth circuits (AC^0), and (2) the local computation model. Previous work in these models, such as [Viola, 2005], [Goldreich et al., 2015] and [Bogdanov and Guo, 2013], only achieves constructions with weak parameters. In this work we give explicit constructions of randomness extractors with much better parameters. Our results on AC^0 extractors refute a conjecture in [Goldreich et al., 2015] and answer several open problems there. We also provide a lower bound on the error of extractors in AC^0, which together with the entropy lower bound in [Viola, 2005; Goldreich et al., 2015] almost completely characterizes extractors in this class. Our results on local extractors also significantly improve the seed length in [Bogdanov and Guo, 2013]. As an application, we use our AC^0 extractors to study pseudorandom generators in AC^0, and show that we can construct both cryptographic pseudorandom generators (under reasonable computational assumptions) and unconditional pseudorandom generators for space bounded computation with very good parameters.
Our constructions combine several previous techniques in randomness extractors, as well as introduce new techniques to reduce or preserve the complexity of extractors, which may be of independent interest. These include (1) a general way to reduce the error of strong seeded extractors while preserving the AC^0 property and small locality, and (2) a seeded randomness condenser with small locality.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.37/LIPIcs.APPROX-RANDOM.2018.37.pdf
Randomness Extraction
AC0
Locality
Pseudorandom Generator
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
38:1
38:20
10.4230/LIPIcs.APPROX-RANDOM.2018.38
article
Boolean Function Analysis on High-Dimensional Expanders
Dikstein, Yotam
1
Dinur, Irit
1
Filmus, Yuval
2
Harsha, Prahladh
3
Weizmann Institute of Science, ISRAEL
Technion - Israel Institute of Technology, ISRAEL
Tata Institute of Fundamental Research, INDIA
We initiate the study of Boolean function analysis on high-dimensional expanders. We describe an analog of the Fourier expansion and of the Fourier levels on simplicial complexes, and generalize the FKN theorem to high-dimensional expanders.
Our results demonstrate that a high-dimensional expanding complex X can sometimes serve as a sparse model for the Boolean slice or hypercube, and quite possibly additional results from Boolean function analysis can be carried over to this sparse model. Therefore, this model can be viewed as a derandomization of the Boolean slice, containing |X(k)|=O(n) points in comparison to binom{n}{k+1} points in the (k+1)-slice (which consists of all n-bit strings with exactly k+1 ones).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.38/LIPIcs.APPROX-RANDOM.2018.38.pdf
high dimensional expanders
Boolean function analysis
sparse model
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
39:1
39:17
10.4230/LIPIcs.APPROX-RANDOM.2018.39
article
Percolation of Lipschitz Surface and Tight Bounds on the Spread of Information Among Mobile Agents
Gracar, Peter
1
Stauffer, Alexandre
2
Mathematical Institute, University of Cologne, Weyertal 86-90, 50931 Köln, Germany
Department of Mathematical Sciences, University of Bath, Claverton Down, Bath, BA2 7AY, United Kingdom
We consider the problem of spread of information among mobile agents on the torus. The agents are initially distributed as a Poisson point process on the torus, and move as independent simple random walks. Two agents can share information whenever they are at the same vertex of the torus. We study the so-called flooding time: the amount of time it takes for information to be known by all agents. We establish a tight upper bound on the flooding time, and introduce a technique which we believe can be applicable to analyze other processes involving mobile agents.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.39/LIPIcs.APPROX-RANDOM.2018.39.pdf
Lipschitz surface
spread of information
flooding time
moving agents
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
40:1
40:17
10.4230/LIPIcs.APPROX-RANDOM.2018.40
article
Flipping out with Many Flips: Hardness of Testing k-Monotonicity
Grigorescu, Elena
1
Kumar, Akash
2
Wimmer, Karl
3
Purdue University, West Lafayette, IN, USA, https://www.cs.purdue.edu/homes/egrigore/
Purdue University, West Lafayette, IN, USA, https://www.cs.purdue.edu/homes/akumar
Duquesne University, Pittsburgh, PA, USA, http://www.mathcs.duq.edu/~wimmer/
A function f:{0,1}^n -> {0,1} is said to be k-monotone if it flips between 0 and 1 at most k times on every ascending chain. Such functions represent a natural generalization of (1-)monotone functions, and have recently been studied in circuit complexity, PAC learning, and cryptography. Our work is part of a renewed focus on understanding the testability of properties characterized by freeness of arbitrary order patterns, as a generalization of monotonicity. Recently, Canonne et al. (ITCS 2017) initiated the study of k-monotone functions in the area of property testing, and Newman et al. (SODA 2017) studied testability of families characterized by freeness from order patterns on real-valued functions over the line [n] domain.
We study k-monotone functions in the more relaxed parametrized property testing model, introduced by Parnas et al. (JCSS, 72(6), 2006). In this process we resolve a problem left open in previous work. Specifically, our results include the following.
1) Testing 2-monotonicity on the hypercube non-adaptively with one-sided error requires a number of queries exponential in sqrt{n}. This behavior shows a stark contrast with testing (1-)monotonicity, which needs only O~(sqrt{n}) queries (Khot et al. (FOCS 2015)). Furthermore, even the apparently easier task of distinguishing 2-monotone functions from functions that are far from being n^{.01}-monotone also requires an exponential number of queries.
2) On the hypergrid [n]^d domain, there exists a testing algorithm that makes a constant number of queries and distinguishes functions that are k-monotone from functions that are far from being O(kd^2)-monotone. Such a dependency is likely necessary, given the lower bound above for the hypercube.
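To illustrate the k-monotonicity definition above, here is a brute-force sketch of ours (feasible only for tiny n; `is_k_monotone` is a hypothetical helper, not an algorithm from the paper). Since a subchain never has more alternations than a chain containing it, it suffices to check the maximal chains, one per permutation of the coordinates.

```python
from itertools import permutations

def flips(values):
    """Number of times a 0/1 sequence changes value."""
    return sum(1 for a, b in zip(values, values[1:]) if a != b)

def is_k_monotone(f, n, k):
    """Brute-force check: f is k-monotone iff it flips at most k times
    on every ascending chain; checking maximal chains suffices because
    a subchain has at most as many flips as its superchain."""
    for perm in permutations(range(n)):
        x = [0] * n
        chain = [f(tuple(x))]
        for i in perm:
            x[i] = 1
            chain.append(f(tuple(x)))
        if flips(chain) > k:
            return False
    return True

# Parity flips on every step of a chain: on 2 bits it is 2-monotone
# but not 1-monotone (i.e., not monotone).
parity = lambda x: sum(x) % 2
```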
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.40/LIPIcs.APPROX-RANDOM.2018.40.pdf
Property Testing
Boolean Functions
k-Monotonicity
Lower Bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
41:1
41:11
10.4230/LIPIcs.APPROX-RANDOM.2018.41
article
How Long Can Optimal Locally Repairable Codes Be?
Guruswami, Venkatesan
1
https://orcid.org/0000-0001-7926-3396
Xing, Chaoping
2
https://orcid.org/0000-0002-1257-1033
Yuan, Chen
3
https://orcid.org/0000-0002-3730-8397
Computer Science Department, Carnegie Mellon University, Pittsburgh, USA.
School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore.
Centrum Wiskunde & Informatica, Amsterdam, Netherlands.
A locally repairable code (LRC) with locality r allows for the recovery of any erased codeword symbol using only r other codeword symbols. A Singleton-type bound dictates the best possible trade-off between the dimension and distance of LRCs - an LRC attaining this trade-off is deemed optimal. Such optimal LRCs have been constructed over alphabets growing linearly in the block length. Unlike the classical Singleton bound, however, it was not known if such a linear growth in the alphabet size is necessary, or for that matter even if the alphabet needs to grow at all with the block length. Indeed, for the small code distances 3 and 4, arbitrarily long optimal LRCs were known over fixed alphabets.
Here, we prove that for distances d >= 5, the code length n of an optimal LRC over an alphabet of size q must be at most roughly O(d q^3). For the case d=5, our upper bound is O(q^2). We complement these bounds by showing the existence of optimal LRCs of length Omega_{d,r}(q^{1+1/floor[(d-3)/2]}) when d <= r+2. Our bounds match when d=5, pinning down n=Theta(q^2) as the asymptotically largest length of an optimal LRC for this case.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.41/LIPIcs.APPROX-RANDOM.2018.41.pdf
Locally Repairable Code
Singleton Bound
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
42:1
42:14
10.4230/LIPIcs.APPROX-RANDOM.2018.42
article
On Minrank and Forbidden Subgraphs
Haviv, Ishay
1
School of Computer Science, The Academic College of Tel Aviv-Yaffo, Tel Aviv 61083, Israel
The minrank over a field F of a graph G on the vertex set {1,2,...,n} is the minimum possible rank of a matrix M in F^{n x n} such that M_{i,i} != 0 for every i, and M_{i,j}=0 for all distinct non-adjacent vertices i and j in G. For an integer n, a graph H, and a field F, let g(n,H,F) denote the maximum possible minrank over F of an n-vertex graph whose complement contains no copy of H. In this paper we study this quantity for various graphs H and fields F. For finite fields, we prove by a probabilistic argument a general lower bound on g(n,H,F), which yields a nearly tight bound of Omega(sqrt{n}/log n) for the triangle H=K_3. For the real field, we prove by an explicit construction that for every non-bipartite graph H, g(n,H,R) >= n^delta for some delta = delta(H)>0. As a by-product of this construction, we disprove a conjecture of Codenotti, Pudlák, and Resta. The results are motivated by questions in information theory, circuit complexity, and geometry.
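The minrank definition can be made concrete with a small sketch of ours (not from the paper; `fits_graph` is an illustrative helper): verify that a candidate matrix "fits" a graph, i.e., has a nonzero diagonal and zeros on non-edges, and compute its rank over R; minimizing that rank over all fitting matrices would give the minrank.

```python
import numpy as np

def fits_graph(M, adj):
    """Check that M fits the graph with 0/1 adjacency matrix adj:
    nonzero diagonal, and M[i, j] == 0 whenever i != j and {i, j}
    is a non-edge.  The minrank over R is the minimum rank of such
    an M; this sketch only verifies one candidate."""
    n = len(adj)
    for i in range(n):
        if M[i][i] == 0:
            return False
        for j in range(n):
            if i != j and adj[i][j] == 0 and M[i][j] != 0:
                return False
    return True

# Complete graph K_3: the all-ones matrix fits and has rank 1,
# so minrank_R(K_3) = 1 (the minrank is always at least 1).
adj_K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
M = np.ones((3, 3))
rank = np.linalg.matrix_rank(M)
```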
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.42/LIPIcs.APPROX-RANDOM.2018.42.pdf
Minrank
Forbidden subgraphs
Shannon capacity
Circuit Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
43:1
43:19
10.4230/LIPIcs.APPROX-RANDOM.2018.43
article
Preserving Randomness for Adaptive Algorithms
Hoza, William M.
1
https://orcid.org/0000-0001-5162-9181
Klivans, Adam R.
1
Department of Computer Science, University of Texas at Austin, Austin, TX, USA
Suppose Est is a randomized estimation algorithm that uses n random bits and outputs values in R^d. We show how to execute Est on k adaptively chosen inputs using only n + O(k log(d + 1)) random bits instead of the trivial nk (at the cost of mild increases in the error and failure probability). Our algorithm combines a variant of the INW pseudorandom generator [Impagliazzo et al., 1994] with a new scheme for shifting and rounding the outputs of Est. We prove that modifying the outputs of Est is necessary in this setting, and furthermore, our algorithm's randomness complexity is near-optimal in the case d <= O(1). As an application, we give a randomness-efficient version of the Goldreich-Levin algorithm; our algorithm finds all Fourier coefficients with absolute value at least theta of a function F: {0, 1}^n -> {-1, 1} using O(n log n) * poly(1/theta) queries to F and O(n) random bits (independent of theta), improving previous work by Bshouty et al. [Bshouty et al., 2004].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.43/LIPIcs.APPROX-RANDOM.2018.43.pdf
pseudorandomness
adaptivity
estimation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
44:1
44:20
10.4230/LIPIcs.APPROX-RANDOM.2018.44
article
Commutative Algorithms Approximate the LLL-distribution
Iliopoulos, Fotis
1
https://orcid.org/0000-0002-1825-0097
University of California Berkeley, USA
Following the groundbreaking Moser-Tardos algorithm for the Lovász Local Lemma (LLL), a series of works have exploited a key ingredient of the original analysis, the witness tree lemma, in order to derive deterministic, parallel and distributed algorithms for the LLL, estimate the entropy of the output distribution, partially avoid bad events, deal with super-polynomially many bad events, and even devise new algorithmic frameworks. Meanwhile, a parallel line of work has established tools for analyzing stochastic local search algorithms motivated by the LLL that do not fall within the Moser-Tardos framework. Unfortunately, the aforementioned results do not transfer to these more general settings, mainly because the witness tree lemma provably no longer holds. Here we prove that for commutative algorithms, a class recently introduced by Kolmogorov that captures the vast majority of LLL applications, the witness tree lemma does hold. Armed with this fact, we extend the main result of Haeupler, Saha, and Srinivasan to commutative algorithms, establishing that the output of such algorithms well-approximates the LLL-distribution, i.e., the distribution obtained by conditioning on all bad events being avoided, and give several new applications. For example, we show that the recent algorithm of Molloy for list coloring sparse, triangle-free graphs can output exponentially many list colorings of the input graph.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.44/LIPIcs.APPROX-RANDOM.2018.44.pdf
Lovasz Local Lemma
Local Search
Commutativity
LLL-distribution
Coloring Triangle-free Graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
45:1
45:14
10.4230/LIPIcs.APPROX-RANDOM.2018.45
article
The Cover Time of a Biased Random Walk on a Random Regular Graph of Odd Degree
Johansson, Tony
1
https://orcid.org/0000-0002-9264-3462
Department of Mathematics, Uppsala University, Uppsala, Sweden
We consider a random walk process, introduced by Orenshtein and Shinkar [Tal Orenshtein and Igor Shinkar, 2014], which prefers to visit previously unvisited edges, on the random r-regular graph G_r for any odd r >= 3. We show that this random walk process has asymptotic vertex and edge cover times 1/(r-2)n log n and r/(2(r-2))n log n, respectively, generalizing the result from [Cooper et al., to appear] from r = 3 to any larger odd r. This completes the study of the vertex cover time for fixed r >= 3, with [Petra Berenbrink et al., 2015] having previously shown that G_r has vertex cover time asymptotic to rn/2 when r >= 4 is even.
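To illustrate the process being analyzed, here is a toy simulation of ours (a sketch, not the paper's analysis) of the unvisited-edge-biased walk on the Petersen graph, a small 3-regular (odd-degree) example; the function names and instance are our own choices.

```python
import random

def petersen():
    """The Petersen graph: 3-regular on 10 vertices
    (outer 5-cycle, inner pentagram, spokes)."""
    adj = {v: set() for v in range(10)}
    def add(u, v):
        adj[u].add(v); adj[v].add(u)
    for i in range(5):
        add(i, (i + 1) % 5)          # outer cycle
        add(5 + i, 5 + (i + 2) % 5)  # inner pentagram
        add(i, 5 + i)                # spokes
    return adj

def vertex_cover_time(adj, start, rng):
    """Simulate the edge-biased walk: at each step, move along a
    uniformly random *unvisited* incident edge if one exists, else
    along a uniformly random incident edge.  Returns the number of
    steps until every vertex has been visited."""
    visited_edges = set()
    visited = {start}
    v, steps = start, 0
    while len(visited) < len(adj):
        fresh = [u for u in adj[v] if frozenset((v, u)) not in visited_edges]
        u = rng.choice(fresh if fresh else sorted(adj[v]))
        visited_edges.add(frozenset((v, u)))
        visited.add(u)
        v, steps = u, steps + 1
    return steps

rng = random.Random(1)
t = vertex_cover_time(petersen(), 0, rng)
```

The asymptotics in the abstract concern n -> infinity on random r-regular graphs; this fixed small instance only shows the walk's mechanics.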
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.45/LIPIcs.APPROX-RANDOM.2018.45.pdf
Random walk
random regular graph
cover time
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
46:1
46:19
10.4230/LIPIcs.APPROX-RANDOM.2018.46
article
Satisfiability and Derandomization for Small Polynomial Threshold Circuits
Kabanets, Valentine
1
Lu, Zhenjian
1
School of Computing Science, Simon Fraser University, Burnaby, BC, Canada
A polynomial threshold function (PTF) is defined as the sign of a polynomial p : {0,1}^n -> R. A PTF circuit is a Boolean circuit whose gates are PTFs. We study the problems of exact and (promise) approximate counting for PTF circuits of constant depth.
- Satisfiability (#SAT). We give the first zero-error randomized algorithm faster than exhaustive search that counts the number of satisfying assignments of a given constant-depth circuit with a super-linear number of wires whose gates are s-sparse PTFs, for s almost quadratic in the input size of the circuit; here a PTF is called s-sparse if its underlying polynomial has at most s monomials. More specifically, we show that, for any large enough constant c, given a depth-d circuit with (n^{2-1/c})-sparse PTF gates that has at most n^{1+epsilon_d} wires, where epsilon_d depends only on c and d, the number of satisfying assignments of the circuit can be computed in randomized time 2^{n-n^{epsilon_d}} with zero error. This generalizes the result by Chen, Santhanam and Srinivasan (CCC, 2016) who gave a SAT algorithm for constant-depth circuits of super-linear wire complexity with linear threshold function (LTF) gates only.
- Quantified derandomization. The quantified derandomization problem, introduced by Goldreich and Wigderson (STOC, 2014), asks to compute the majority value of a given Boolean circuit, under the promise that the minority-value inputs to the circuit are very few. We give a quantified derandomization algorithm for constant-depth PTF circuits with a super-linear number of wires that runs in quasi-polynomial time. More specifically, we show that for any sufficiently large constant c, there is an algorithm that, given a degree-Delta PTF circuit C of depth d with n^{1+1/c^d} wires such that C has at most 2^{n^{1-1/c}} minority-value inputs, runs in quasi-polynomial time exp ((log n)^{O (Delta^2)}) and determines the majority value of C. (We obtain a similar quantified derandomization result for PTF circuits with n^{Delta}-sparse PTF gates.) This extends the recent result of Tell (STOC, 2018) for constant-depth LTF circuits of super-linear wire complexity.
- Pseudorandom generators. We show how the classical Nisan-Wigderson (NW) generator (JCSS, 1994) yields a nontrivial pseudorandom generator for PTF circuits (of unrestricted depth) with sub-linearly many gates. As a corollary, we get a PRG for degree-Delta PTFs with seed length exp(sqrt{Delta * log n}) * log^2(1/epsilon).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.46/LIPIcs.APPROX-RANDOM.2018.46.pdf
constant-depth circuits
polynomial threshold functions
circuit analysis algorithms
SAT
derandomization
quantified derandomization
pseudorandom generators
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
47:1
47:17
10.4230/LIPIcs.APPROX-RANDOM.2018.47
article
High Order Random Walks: Beyond Spectral Gap
Kaufman, Tali
1
Oppenheim, Izhar
2
Department of Computer Science, Bar-Ilan University, Ramat Gan, Israel
Department of Mathematics, Ben-Gurion University of the Negev, P.O. Box 653, Be'er-Sheva, Israel
We study high order random walks on high dimensional expanders given by simplicial complexes (i.e., hypergraphs). These walks move from a k-face (i.e., a k-hyperedge) to another k-face if both are contained in a (k+1)-face (i.e., a (k+1)-hyperedge). This naturally generalizes random walks on graphs, which move from a vertex (0-face) to a vertex if both are contained in an edge (1-face).
Recent works have studied the spectrum of high order walk operators and deduced fast mixing. However, the spectral gap of high order walk operators is inherently small, due to natural obstructions (called coboundaries) that do not arise for walks on expander graphs.
In this work we go beyond spectral gap, and relate the expansion of a function on k-faces (called k-cochain, for k=0, this is a function on vertices) to its structure.
We show a Decomposition Theorem: for every k-cochain defined on a high dimensional expander, there exists a decomposition of the cochain into i-cochains such that the square norm of the k-cochain is the sum of the square norms of the i-cochains, and such that the more weight the k-cochain has on higher levels of the decomposition, the better its expansion, or equivalently, the better its shrinkage under the high order random walk operator.
The following corollaries are implied by the Decomposition Theorem:
- We characterize highly expanding k-cochains as those whose mass is concentrated on the highest levels of the decomposition that we construct. For example, a function on edges (i.e. a 1-cochain) which is locally thin (i.e. it contains few edges through every vertex) is highly expanding, while a function on edges that contains all edges through a single vertex is not highly expanding.
- We get optimal mixing for high order random walks on Ramanujan complexes. Ramanujan complexes are recently discovered bounded degree high dimensional expanders. The optimal mixing that we prove here enables us to obtain from them more efficient Two-Layer-Samplers than those presented in the previous work of Dinur and Kaufman.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.47/LIPIcs.APPROX-RANDOM.2018.47.pdf
High Dimensional Expanders
Simplicial Complexes
Random Walk
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
48:1
48:18
10.4230/LIPIcs.APPROX-RANDOM.2018.48
article
Improved Composition Theorems for Functions and Relations
Koroth, Sajin
1
https://orcid.org/0000-0002-7989-1963
Meir, Or
2
https://orcid.org/0000-0001-5031-0750
Department of Computer Science, University of Haifa, Haifa 3498838, Israel
Department of Computer Science, University of Haifa, Haifa 3498838,Israel
One of the central problems in complexity theory is to prove super-logarithmic depth bounds for circuits computing a problem in P, i.e., to prove that P is not contained in NC^1. As an approach to this question, Karchmer, Raz and Wigderson [Mauricio Karchmer et al., 1995] proposed a conjecture called the KRW conjecture, which, if true, would imply that P is not contained in NC^{1}.
Since proving this conjecture is currently considered an extremely difficult problem, previous works by Edmonds, Impagliazzo, Rudich and Sgall [Edmonds et al., 2001], Håstad and Wigderson [Johan Håstad and Avi Wigderson, 1990] and Gavinsky, Meir, Weinstein and Wigderson [Dmitry Gavinsky et al., 2014] considered weaker variants of the conjecture. In this work we significantly improve the parameters in these variants, achieving almost tight lower bounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.48/LIPIcs.APPROX-RANDOM.2018.48.pdf
circuit complexity
communication complexity
KRW conjecture
composition
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
49:1
49:16
10.4230/LIPIcs.APPROX-RANDOM.2018.49
article
Round Complexity Versus Randomness Complexity in Interactive Proofs
Leshkowitz, Maya
1
Weizmann Institute of Science, Rehovot, Israel
Consider an interactive proof system for some set S that has randomness complexity r(n) for instances of length n, and arbitrary round complexity. We show a public-coin interactive proof system for S of round complexity O(r(n)/log n). Furthermore, the randomness complexity is preserved up to a constant factor, and the resulting interactive proof system has perfect completeness.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.49/LIPIcs.APPROX-RANDOM.2018.49.pdf
Interactive Proofs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
50:1
50:19
10.4230/LIPIcs.APPROX-RANDOM.2018.50
article
Improved List-Decodability of Random Linear Binary Codes
Li, Ray
1
Wootters, Mary
2
Department of Computer Science, Stanford University, USA
Departments of Computer Science and Electrical Engineering, Stanford University, USA
There has been a great deal of work establishing that random linear codes are as list-decodable as uniformly random codes, in the sense that a random linear binary code of rate 1 - H(p) - epsilon is (p,O(1/epsilon))-list-decodable with high probability. In this work, we show that such codes are (p, H(p)/epsilon + 2)-list-decodable with high probability, for any p in (0, 1/2) and epsilon > 0. In addition to improving the constant in known list-size bounds, our argument - which is quite simple - works simultaneously for all values of p, while previous works obtaining L = O(1/epsilon) patched together different arguments to cover different parameter regimes.
Our approach is to strengthen an existential argument of (Guruswami, Håstad, Sudan and Zuckerman, IEEE Trans. IT, 2002) to hold with high probability. To complement our upper bound for random linear binary codes, we also improve an argument of (Guruswami, Narayanan, IEEE Trans. IT, 2014) to obtain a tight lower bound of 1/epsilon on the list size of uniformly random binary codes; this implies that random linear binary codes are in fact more list-decodable than uniformly random binary codes, in the sense that the list sizes are strictly smaller.
To demonstrate the applicability of these techniques, we use them to (a) obtain more information about the distribution of list sizes of random linear binary codes and (b) prove a similar result for random linear rank-metric codes.
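For concreteness, the rate and list-size expressions stated in this abstract can be evaluated numerically. The following sketch is purely illustrative (it is not code from the paper): it computes the binary entropy H(p), the rate 1 - H(p) - epsilon of the codes considered, and the claimed list-size bound H(p)/epsilon + 2.

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p log2(p) - (1-p) log2(1-p), the binary entropy function."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate(p: float, eps: float) -> float:
    """Rate 1 - H(p) - eps of the random linear binary codes considered."""
    return 1 - binary_entropy(p) - eps

def list_size_bound(p: float, eps: float) -> float:
    """The paper's list-size bound H(p)/eps + 2 for (p, L)-list-decodability."""
    return binary_entropy(p) / eps + 2
```

For example, at p = 0.25 and epsilon = 0.05 the bound H(p)/epsilon + 2 evaluates to roughly 18.2, to be compared with the generic O(1/epsilon) = O(20) list sizes of prior work.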
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.50/LIPIcs.APPROX-RANDOM.2018.50.pdf
List-decoding
Random linear codes
Rank-metric codes
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
51:1
51:13
10.4230/LIPIcs.APPROX-RANDOM.2018.51
article
Sunflowers and Quasi-Sunflowers from Randomness Extractors
Li, Xin
1
Lovett, Shachar
2
Zhang, Jiapeng
2
Johns Hopkins University, Baltimore, USA
University of California San Diego, La Jolla, USA
The Erdős-Rado sunflower theorem (J. London Math. Soc., 1960) is a fundamental result in combinatorics, and the corresponding sunflower conjecture is a central open problem. Motivated by applications in complexity theory, Rossman (FOCS 2010) extended the result to quasi-sunflowers, where similar conjectures emerge about the optimal parameters for which it holds.
In this work, we exhibit a surprising connection between the existence of sunflowers and quasi-sunflowers in large enough set systems, and the problem of constructing (or proving the existence of) certain randomness extractors. This allows us to re-derive the known results in a systematic manner, and to reduce the relevant conjectures to the problem of obtaining improved constructions of such randomness extractors.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.51/LIPIcs.APPROX-RANDOM.2018.51.pdf
Sunflower conjecture
Quasi-sunflowers
Randomness Extractors
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
52:1
52:15
10.4230/LIPIcs.APPROX-RANDOM.2018.52
article
Torpid Mixing of Markov Chains for the Six-vertex Model on Z^2
Liu, Tianyu
1
University of Wisconsin-Madison, Madison, WI, USA
In this paper, we study the mixing time of two widely used Markov chain algorithms for the six-vertex model, Glauber dynamics and the directed-loop algorithm, on the square lattice Z^2. We prove, for the first time, that on finite regions of the square lattice these Markov chains are torpidly mixing under parameter settings in the ferroelectric phase and the anti-ferroelectric phase.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.52/LIPIcs.APPROX-RANDOM.2018.52.pdf
the six-vertex model
Eulerian orientations
square lattice
torpid mixing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
53:1
53:13
10.4230/LIPIcs.APPROX-RANDOM.2018.53
article
On the Testability of Graph Partition Properties
Nakar, Yonatan
1
Ron, Dana
1
Tel Aviv University, Tel Aviv, Israel
In this work we study the testability of a family of graph partition properties that generalizes a family previously studied by Goldreich, Goldwasser, and Ron (Journal of the ACM, 1998). While the family studied by Goldreich, Goldwasser, and Ron includes a variety of natural properties, such as k-colorability and containing a large cut, it does not include other properties of interest, such as being a split graph, and more generally being (p,q)-colorable. The generalization we consider allows us to impose constraints on the edge-densities within and between parts (relative to the sizes of the parts). We denote the family studied in this work by GPP.
We first show that all properties in GPP have a testing algorithm whose query complexity is polynomial in 1/epsilon, where epsilon is the given proximity parameter (and there is no dependence on the size of the graph). As the testing algorithm has two-sided error, we next address the question of which properties in GPP can be tested with one-sided error and query complexity polynomial in 1/epsilon. We answer this question by establishing a characterization result. Namely, we define a subfamily GPP_{0,1} of GPP and show that a property P in GPP is testable by a one-sided error algorithm that has query complexity poly(1/epsilon) if and only if P in GPP_{0,1}.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.53/LIPIcs.APPROX-RANDOM.2018.53.pdf
Graph Partition Properties
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
54:1
54:19
10.4230/LIPIcs.APPROX-RANDOM.2018.54
article
On Closeness to k-Wise Uniformity
O'Donnell, Ryan
1
Zhao, Yu
1
Carnegie Mellon University, Pittsburgh, PA, USA
A probability distribution over {-1, 1}^n is (epsilon, k)-wise uniform if, roughly, it is epsilon-close to the uniform distribution when restricted to any k coordinates. We consider the problem of how far an (epsilon, k)-wise uniform distribution can be from any globally k-wise uniform distribution. We show that every (epsilon, k)-wise uniform distribution is O(n^{k/2}epsilon)-close to a k-wise uniform distribution in total variation distance. In addition, we show that this bound is optimal for all even k: we find an (epsilon, k)-wise uniform distribution that is Omega(n^{k/2}epsilon)-far from any k-wise uniform distribution in total variation distance. For k=1, we get a better upper bound of O(epsilon), which is also optimal.
One application of our closeness result is to the sample complexity of testing whether a distribution is k-wise uniform or delta-far from k-wise uniform. We give an upper bound of O(n^{k}/delta^2) (or O(log n/delta^2) when k = 1) on the required samples. We show an improved upper bound of O~(n^{k/2}/delta^2) for the special case of testing fully uniform vs. delta-far from k-wise uniform. Finally, we complement this with a matching lower bound of Omega(n/delta^2) when k = 2.
Our results improve upon the best known bounds from [Alon et al., 2007], and have simpler proofs.
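As an illustration of the testing question addressed here (this is a brute-force check, not the paper's tester), one can estimate from samples the bias E[prod_{i in S} x_i] over every nonempty coordinate subset S of size at most k; a distribution over {-1,1}^n is exactly k-wise uniform precisely when all of these biases vanish.

```python
from itertools import combinations

def max_bias(samples, k):
    """Largest empirical |E[prod_{i in S} x_i]| over nonempty subsets S
    with |S| <= k, computed from a list of samples in {-1,+1}^n.
    All biases are 0 iff the empirical distribution is k-wise uniform."""
    n = len(samples[0])
    m = len(samples)
    worst = 0.0
    for size in range(1, k + 1):
        for S in combinations(range(n), size):
            total = 0
            for x in samples:
                prod = 1
                for i in S:
                    prod *= x[i]
                total += prod
            worst = max(worst, abs(total) / m)
    return worst
```

For example, the four points of {-1,1}^2, each appearing once, have all biases equal to 0 (the uniform distribution is 2-wise uniform), while a point mass on (1,1) has bias 1 already on single coordinates. Note the running time grows like n^k, in line with the n^k-type sample bounds discussed in the abstract.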
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.54/LIPIcs.APPROX-RANDOM.2018.54.pdf
k-wise independence
property testing
Fourier analysis
Boolean function
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
55:1
55:19
10.4230/LIPIcs.APPROX-RANDOM.2018.55
article
Pseudo-Derandomizing Learning and Approximation
Carboni Oliveira, Igor
1
Santhanam, Rahul
1
Department of Computer Science, University of Oxford, United Kingdom.
We continue the study of pseudo-deterministic algorithms initiated by Gat and Goldwasser [Eran Gat and Shafi Goldwasser, 2011]. A pseudo-deterministic algorithm is a probabilistic algorithm which produces a fixed output with high probability. We explore pseudo-determinism in the settings of learning and approximation. Our goal is to simulate known randomized algorithms in these settings by pseudo-deterministic algorithms in a generic fashion - a goal we succinctly term pseudo-derandomization. Learning. In the setting of learning with membership queries, we first show that randomized learning algorithms can be derandomized (resp. pseudo-derandomized) under the standard hardness assumption that E (resp. BPE) requires large Boolean circuits. Thus, despite the fact that learning is an algorithmic task that requires interaction with an oracle, standard hardness assumptions suffice to (pseudo-)derandomize it. We also unconditionally pseudo-derandomize any quasi-polynomial time learning algorithm for polynomial size circuits on infinitely many input lengths in sub-exponential time.
Next, we establish a generic connection between learning and derandomization in the reverse direction, by showing that deterministic (resp. pseudo-deterministic) learning algorithms for a concept class C imply hitting sets against C that are computable deterministically (resp. pseudo-deterministically). In particular, this suggests a new approach to constructing hitting set generators against AC^0[p] circuits by giving a deterministic learning algorithm for AC^0[p]. Approximation. Turning to approximation, we unconditionally pseudo-derandomize any poly-time randomized approximation scheme for integer-valued functions infinitely often in subexponential time over any samplable distribution on inputs. As a corollary, we get that the (0,1)-Permanent has a fully pseudo-deterministic approximation scheme running in sub-exponential time infinitely often over any samplable distribution on inputs.
Finally, we investigate the notion of approximate canonization of Boolean circuits. We use a connection between pseudo-deterministic learning and approximate canonization to show that if BPE does not have sub-exponential size circuits infinitely often, then there is a pseudo-deterministic approximate canonizer for AC^0[p] computable in quasi-polynomial time.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.55/LIPIcs.APPROX-RANDOM.2018.55.pdf
derandomization
learning
approximation
boolean circuits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
56:1
56:20
10.4230/LIPIcs.APPROX-RANDOM.2018.56
article
Luby-Velickovic-Wigderson Revisited: Improved Correlation Bounds and Pseudorandom Generators for Depth-Two Circuits
Servedio, Rocco A.
1
Tan, Li-Yang
2
Department of Computer Science, Columbia University, New York, NY, USA
Department of Computer Science, Stanford University, Stanford, California, USA
We study correlation bounds and pseudorandom generators for depth-two circuits that consist of a SYM-gate (computing an arbitrary symmetric function) or THR-gate (computing an arbitrary linear threshold function) that is fed by S AND gates. Such circuits were considered in early influential work on unconditional derandomization by Luby, Veličković, and Wigderson [Michael Luby et al., 1993], who gave the first non-trivial PRG with seed length 2^{O(sqrt{log(S/epsilon)})} that epsilon-fools these circuits.
In this work we obtain the first strict improvement of [Michael Luby et al., 1993]'s seed length: we construct a PRG that epsilon-fools size-S {SYM,THR} o AND circuits over {0,1}^n with seed length 2^{O(sqrt{log S})} + polylog(1/epsilon), an exponential (and near-optimal) improvement of the epsilon-dependence of [Michael Luby et al., 1993]. The above PRG is actually a special case of a more general PRG which we establish for constant-depth circuits containing multiple SYM or THR gates, including as a special case {SYM,THR} o AC^0 circuits. These more general results strengthen previous results of Viola [Viola, 2006] and essentially strengthen more recent results of Lovett and Srinivasan [Lovett and Srinivasan, 2011].
Our improved PRGs follow from improved correlation bounds, which are transformed into PRGs via the Nisan-Wigderson "hardness versus randomness" paradigm [Nisan and Wigderson, 1994]. The key to our improved correlation bounds is the use of a recent powerful multi-switching lemma due to Håstad [Johan Håstad, 2014].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.56/LIPIcs.APPROX-RANDOM.2018.56.pdf
Pseudorandom generators
correlation bounds
constant-depth circuits
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
57:1
57:19
10.4230/LIPIcs.APPROX-RANDOM.2018.57
article
Randomly Coloring Graphs of Logarithmically Bounded Pathwidth
Vardi, Shai
1
Krannert School of Management, Purdue University, West Lafayette, IN, 47907, USA
We consider the problem of sampling a proper k-coloring of a graph of maximum degree Delta uniformly at random. We describe a new Markov chain for sampling colorings, and show that it mixes rapidly on graphs of logarithmically bounded pathwidth if k >= (1+epsilon)Delta, for any epsilon > 0, using a hybrid paths argument.
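For context, the baseline chain in this area (the single-site Glauber dynamics named in the keywords, not the new chain introduced in the paper) can be sketched in a few lines: pick a random vertex and recolor it uniformly among the colors not used by its neighbors. When k > Delta there is always at least one such color, so every step preserves properness.

```python
import random

def glauber_step(adj, coloring, k, rng=random):
    """One step of single-site Glauber dynamics for proper k-colorings.
    adj[v] lists the neighbors of vertex v; coloring is mutated in place.
    Picks a uniformly random vertex and recolors it with a color chosen
    uniformly among those not appearing on its neighbors (nonempty for
    k > max degree)."""
    v = rng.randrange(len(adj))
    forbidden = {coloring[u] for u in adj[v]}
    allowed = [c for c in range(k) if c not in forbidden]
    coloring[v] = rng.choice(allowed)
    return coloring
```

Running many such steps from any proper coloring yields (in the limit) a uniform proper k-coloring whenever the chain mixes; the paper's contribution is a different chain with a rapid-mixing guarantee at k >= (1+epsilon)Delta on graphs of logarithmically bounded pathwidth.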
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.57/LIPIcs.APPROX-RANDOM.2018.57.pdf
Random coloring
Glauber dynamics
Markov-chain Monte Carlo
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2018-08-13
116
58:1
58:14
10.4230/LIPIcs.APPROX-RANDOM.2018.58
article
Explicit Strong LTCs with Inverse Poly-Log Rate and Constant Soundness
Viderman, Michael
1
Yahoo Research, Haifa, Israel
An error-correcting code C subseteq F^n is called a (q,epsilon)-strong locally testable code (LTC) if there exists a tester that makes at most q queries to the input word, accepts all codewords with probability 1, and rejects every non-codeword x not in C with probability at least epsilon * delta(x,C), where delta(x,C) denotes the relative Hamming distance between the word x and the code C. The parameter q is called the query complexity and epsilon is called the soundness.
Goldreich and Sudan (J. ACM 2006) asked about the existence of strong LTCs with constant query complexity, constant relative distance, constant soundness and inverse polylogarithmic rate. They also asked for explicit constructions of such codes.
Strong LTCs with the required range of parameters were obtained recently in the works of Viderman (CCC 2013, FOCS 2013), based on the papers of Meir (SICOMP 2009) and Dinur (J. ACM 2007). However, the construction of these codes was probabilistic.
In this work we show that the codes presented in the works of Dinur (J. ACM 2007) and Ben-Sasson and Sudan (SICOMP 2005) provide an explicit construction of strong LTCs with the above range of parameters. Previously, these codes were only proven to be weak LTCs. Using the results of Viderman (CCC 2013, FOCS 2013), we prove that they are, in fact, strong LTCs.
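As a textbook illustration of the tester interface defined above (using the Hadamard code and the classic 3-query BLR linearity test, not the codes studied in the paper), one round of such a tester queries the word at three correlated points and accepts exactly when they are consistent with a linear function; codewords pass with probability 1, matching the perfect-completeness half of the LTC definition.

```python
import random

def blr_test(f, n, rng=random):
    """One round of the 3-query BLR linearity test on a word given as a
    function f from n-bit tuples to {0,1}.  Accepts iff
    f(x) + f(y) + f(x ^ y) = 0 (mod 2).  Linear functions (Hadamard
    codewords) always pass; words far from linear are rejected with
    probability proportional to their distance from the code."""
    x = tuple(rng.randrange(2) for _ in range(n))
    y = tuple(rng.randrange(2) for _ in range(n))
    xy = tuple(a ^ b for a, b in zip(x, y))
    return (f(x) + f(y) + f(xy)) % 2 == 0
```

For example, f(x) = x[0] XOR x[2] passes every round, while the constant-1 word (which is far from every linear function) is rejected in every round, since 1 + 1 + 1 is odd.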
https://drops.dagstuhl.de/storage/00lipics/lipics-vol116-approx-random2018/LIPIcs.APPROX-RANDOM.2018.58/LIPIcs.APPROX-RANDOM.2018.58.pdf
Error-Correcting Codes
Tensor Products
Locally Testable Codes