15 Search Results for "Austrin, Per"


Document
Sum-Of-Squares Lower Bounds for the Minimum Circuit Size Problem

Authors: Per Austrin and Kilian Risse

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
We prove lower bounds for the Minimum Circuit Size Problem (MCSP) in the Sum-of-Squares (SoS) proof system. Our main result is that for every Boolean function f: {0,1}ⁿ → {0,1}, SoS requires degree Ω(s^{1-ε}) to prove that f does not have circuits of size s (for any s > poly(n)). As a corollary we obtain that there are no low degree SoS proofs of the statement NP ⊈ P/poly. We also show that for any 0 < α < 1 there are Boolean functions with circuit complexity larger than 2^{n^α} but SoS requires size 2^{2^Ω(n^α)} to prove this. In addition we prove analogous results on the minimum monotone circuit size for monotone Boolean slice functions. Our approach is quite general. Namely, we show that if a proof system Q has strong enough constraint satisfaction problem lower bounds that only depend on good expansion of the constraint-variable incidence graph and, furthermore, Q is expressive enough that variables can be substituted by local Boolean functions, then the MCSP problem is hard for Q.

Cite as

Per Austrin and Kilian Risse. Sum-Of-Squares Lower Bounds for the Minimum Circuit Size Problem. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 31:1-31:21, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{austrin_et_al:LIPIcs.CCC.2023.31,
  author =	{Austrin, Per and Risse, Kilian},
  title =	{{Sum-Of-Squares Lower Bounds for the Minimum Circuit Size Problem}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{31:1--31:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.31},
  URN =		{urn:nbn:de:0030-drops-183011},
  doi =		{10.4230/LIPIcs.CCC.2023.31},
  annote =	{Keywords: Proof Complexity, Sum of Squares, Minimum Circuit Size Problem}
}
Document
APPROX
The Biased Homogeneous r-Lin Problem

Authors: Suprovat Ghoshal

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
The p-biased Homogeneous r-Lin problem (Hom-r-Lin_p) is the following: given a homogeneous system of r-variable equations over 𝔽₂, the goal is to find an assignment of relative weight p that satisfies the maximum number of equations. In a celebrated work, Håstad (JACM 2001) showed that the unconstrained variant of this problem, i.e., Max-3-Lin, is hard to approximate beyond a factor of 1/2. This is also tight due to the naive random guessing algorithm which sets every variable uniformly from {0,1}. Subsequently, Holmerin and Khot (STOC 2004) showed that the same holds for the balanced Hom-r-Lin problem as well. In this work, we explore the approximability of the Hom-r-Lin_p problem beyond the balanced setting (i.e., p ≠ 1/2), and investigate whether the (p-biased) random guessing algorithm is optimal for every p. Our results include the following: - The Hom-r-Lin_p problem has no efficient (1/2 + 1/2 (1 - 2p)^{r-2} + ε)-approximation algorithm for every p if r is even, and for p ∈ (0,1/2] if r is odd, unless NP ⊂ ∪_{ε>0} DTIME(2^{n^ε}). - For any r and any p, there exists an efficient (1/2)(1 - e^{-2})-approximation algorithm for Hom-r-Lin_p. We show that this is also tight for odd values of r (up to o_r(1)-additive factors) assuming the Unique Games Conjecture. Our results imply that when r is even, for large values of r, random guessing is near optimal for every p. On the other hand, when r is odd, our results illustrate an interesting contrast between the regimes p ∈ (0,1/2) (where random guessing is near optimal) and p → 1 (where random guessing is far from optimal). A key technical contribution of our work is a generalization of Håstad’s 3-query dictatorship test to the p-biased setting.
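
As a quick sanity check on the random-guessing baseline discussed in the abstract, the following Python sketch (not from the paper; the toy instance and parameters are made up for illustration) estimates the fraction of homogeneous r-variable equations over 𝔽₂ satisfied by a p-biased random assignment and compares it against the standard parity-bias formula, by which an r-variable XOR with right-hand side 0 (resp. 1) is satisfied with probability (1 + (1-2p)^r)/2 (resp. (1 - (1-2p)^r)/2).

import random

def satisfied_fraction(equations, assignment):
    # Fraction of equations (vars, rhs) whose XOR of variables equals rhs.
    ok = sum(1 for vs, rhs in equations if sum(assignment[v] for v in vs) % 2 == rhs)
    return ok / len(equations)

def p_biased_guessing(equations, n, p, trials=20000, seed=0):
    # Monte Carlo estimate of the expected fraction satisfied by a p-biased assignment.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        a = [1 if rng.random() < p else 0 for _ in range(n)]
        total += satisfied_fraction(equations, a)
    return total / trials

if __name__ == "__main__":
    # Hypothetical homogeneous 3-Lin instance over F_2, for illustration only.
    r, n, p = 3, 8, 0.25
    rng = random.Random(1)
    equations = [(tuple(rng.sample(range(n), r)), rng.randint(0, 1)) for _ in range(200)]
    estimate = p_biased_guessing(equations, n, p)
    # Closed form: an r-variable XOR with rhs 0 (resp. 1) is satisfied with
    # probability (1 + (1-2p)^r)/2 (resp. (1 - (1-2p)^r)/2) under p-biased bits.
    bias = (1 - 2 * p) ** r
    frac0 = sum(1 for _, rhs in equations if rhs == 0) / len(equations)
    closed = frac0 * (1 + bias) / 2 + (1 - frac0) * (1 - bias) / 2
    print(f"Monte Carlo: {estimate:.4f}   closed form: {closed:.4f}")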

Cite as

Suprovat Ghoshal. The Biased Homogeneous r-Lin Problem. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 47:1-47:14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{ghoshal:LIPIcs.APPROX/RANDOM.2022.47,
  author =	{Ghoshal, Suprovat},
  title =	{{The Biased Homogeneous r-Lin Problem}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{47:1--47:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.47},
  URN =		{urn:nbn:de:0030-drops-171695},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.47},
  annote =	{Keywords: Biased Approximation Resistance, Constraint Satisfaction Problems}
}
Document
Track A: Algorithms, Complexity and Games
Conditional Dichotomy of Boolean Ordered Promise CSPs

Authors: Joshua Brakensiek, Venkatesan Guruswami, and Sai Sandeep

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
Promise Constraint Satisfaction Problems (PCSPs) are a generalization of Constraint Satisfaction Problems (CSPs) where each predicate has a strong and a weak form and, given a CSP instance, the objective is to distinguish between the case that the strong form can be satisfied and the case that even the weak form cannot be satisfied. Since their formal introduction by Austrin, Guruswami, and Håstad [Per Austrin et al., 2017], there has been a flurry of works on PCSPs, including recent breakthroughs in approximate graph coloring [Barto et al., 2018; Andrei A. Krokhin and Jakub Opršal, 2019; Marcin Wrochna and Stanislav Živný, 2020]. The key tool in studying PCSPs is the algebraic framework developed in the context of CSPs, where the closure properties of the satisfying solutions, known as polymorphisms, are analyzed. The polymorphisms of PCSPs are significantly richer than those of CSPs - even in the Boolean case, we still do not know whether there exists a dichotomy result for PCSPs analogous to Schaefer’s dichotomy result [Thomas J. Schaefer, 1978] for CSPs. In this paper, we study a special case of Boolean PCSPs, namely Boolean Ordered PCSPs, where the Boolean PCSPs have the predicate x ≤ y. In the algebraic framework, this is the special case of Boolean PCSPs where the polymorphisms are monotone functions. We prove that Boolean Ordered PCSPs exhibit a computational dichotomy assuming the Rich 2-to-1 Conjecture [Mark Braverman et al., 2021], which is a perfect completeness surrogate of the Unique Games Conjecture. In particular, assuming the Rich 2-to-1 Conjecture, we prove that a Boolean Ordered PCSP can be solved in polynomial time if for every ε > 0 it has polymorphisms where each coordinate has Shapley value at most ε, and is NP-hard otherwise. The algorithmic part of our dichotomy result is based on a structural lemma showing that Boolean monotone functions in which each coordinate has low Shapley value have arbitrarily large threshold functions as minors. The hardness part proceeds by showing that the Shapley value is consistent under a uniformly random 2-to-1 minor. As a structural result of independent interest, we construct an example to show that the Shapley value can be inconsistent under an adversarial 2-to-1 minor.
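
For readers unfamiliar with the notion, the Shapley value of a coordinate measures its influence on a monotone Boolean function; the sketch below (an illustration only, using the standard Shapley-Shubik definition and a hypothetical example function, not code or notation from the paper) computes it by brute force over all orderings of the coordinates.

from itertools import permutations

def shapley_values(f, n):
    # Shapley-Shubik value of each coordinate of a Boolean function f on n bits:
    # the probability, over a uniformly random ordering of the coordinates, that
    # switching coordinate i from 0 to 1 (earlier coordinates already 1, later
    # ones still 0) changes the value of f.
    counts = [0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        x = [0] * n
        prev = f(tuple(x))
        for i in order:
            x[i] = 1
            cur = f(tuple(x))
            if cur != prev:
                counts[i] += 1
            prev = cur
    return [c / len(perms) for c in counts]

if __name__ == "__main__":
    # Hypothetical example: majority of three bits; by symmetry every
    # coordinate gets Shapley value 1/3.
    maj3 = lambda x: int(sum(x) >= 2)
    print(shapley_values(maj3, 3))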

Cite as

Joshua Brakensiek, Venkatesan Guruswami, and Sai Sandeep. Conditional Dichotomy of Boolean Ordered Promise CSPs. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 37:1-37:15, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{brakensiek_et_al:LIPIcs.ICALP.2021.37,
  author =	{Brakensiek, Joshua and Guruswami, Venkatesan and Sandeep, Sai},
  title =	{{Conditional Dichotomy of Boolean Ordered Promise CSPs}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{37:1--37:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.37},
  URN =		{urn:nbn:de:0030-drops-141060},
  doi =		{10.4230/LIPIcs.ICALP.2021.37},
  annote =	{Keywords: promise constraint satisfaction, Boolean ordered PCSP, Shapley value, rich 2-to-1 conjecture, random minor}
}
Document
APPROX
Global Cardinality Constraints Make Approximating Some Max-2-CSPs Harder

Authors: Per Austrin and Aleksa Stanković

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
Assuming the Unique Games Conjecture, we show that existing approximation algorithms for some Boolean Max-2-CSPs with cardinality constraints are optimal. In particular, we prove that Max-Cut with cardinality constraints is UG-hard to approximate within ≈ 0.858, and that Max-2-Sat with cardinality constraints is UG-hard to approximate within ≈ 0.929. In both cases, the previous best hardness results were the same as the hardness of the corresponding unconstrained Max-2-CSP (≈ 0.878 for Max-Cut, and ≈ 0.940 for Max-2-Sat). The hardness for Max-2-Sat applies to monotone Max-2-Sat instances, meaning that we also obtain tight inapproximability for the Max-k-Vertex-Cover problem.
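
To make the cardinality-constrained problem concrete, here is a small brute-force sketch (purely illustrative and unrelated to the paper's reduction; the toy graph is hypothetical) that finds the best cut among those in which one side has exactly a prescribed number of vertices.

from itertools import combinations

def max_cut_with_cardinality(n, edges, k):
    # Best cut value over all vertex subsets S with |S| = k exactly;
    # an edge is cut iff it has exactly one endpoint in S.
    best, best_side = -1, None
    for side in combinations(range(n), k):
        s = set(side)
        value = sum(1 for u, v in edges if (u in s) != (v in s))
        if value > best:
            best, best_side = value, s
    return best, best_side

if __name__ == "__main__":
    # Toy instance: a 5-cycle with the constraint that one side has exactly 2 vertices.
    n = 5
    edges = [(i, (i + 1) % n) for i in range(n)]
    print(max_cut_with_cardinality(n, edges, 2))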

Cite as

Per Austrin and Aleksa Stanković. Global Cardinality Constraints Make Approximating Some Max-2-CSPs Harder. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 24:1-24:17, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{austrin_et_al:LIPIcs.APPROX-RANDOM.2019.24,
  author =	{Austrin, Per and Stankovi\'{c}, Aleksa},
  title =	{{Global Cardinality Constraints Make Approximating Some Max-2-CSPs Harder}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{24:1--24:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.24},
  URN =		{urn:nbn:de:0030-drops-112394},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.24},
  annote =	{Keywords: Constraint satisfaction problems, global cardinality constraints, semidefinite programming, inapproximability, Unique Games Conjecture, Max-Cut, Max-2-Sat}
}
Document
UG-Hardness to NP-Hardness by Losing Half

Authors: Amey Bhangale and Subhash Khot

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
The 2-to-2 Games Theorem of [Subhash Khot et al., 2017; Dinur et al., 2018; Dinur et al., 2018; Dinur et al., 2018] implies that it is NP-hard to distinguish between Unique Games instances with an assignment satisfying at least a (1/2-epsilon) fraction of the constraints vs. no assignment satisfying more than an epsilon fraction of the constraints, for every constant epsilon>0. We show that the reduction can be transformed in a non-trivial way to give a stronger guarantee in the completeness case: For at least a (1/2-epsilon) fraction of the vertices on one side, all the constraints associated with them in the Unique Games instance can be satisfied. We use this guarantee to convert the known UG-hardness results to NP-hardness. We show: 1) Tight inapproximability of independent sets in degree-d graphs within a factor of Omega(d/(log^2 d)), where d is a constant. 2) NP-hardness of approximating the Maximum Acyclic Subgraph problem within a factor of 2/3+epsilon, improving the previous ratio of 14/15+epsilon by Austrin et al. [Austrin et al., 2015]. 3) For any predicate P with P^{-1}(1) ⊆ [q]^k supporting a balanced pairwise independent distribution, given a P-CSP instance with value at least 1/2-epsilon, it is NP-hard to satisfy more than a (|P^{-1}(1)|/q^k)+epsilon fraction of the constraints.

Cite as

Amey Bhangale and Subhash Khot. UG-Hardness to NP-Hardness by Losing Half. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{bhangale_et_al:LIPIcs.CCC.2019.3,
  author =	{Bhangale, Amey and Khot, Subhash},
  title =	{{UG-Hardness to NP-Hardness by Losing Half}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{3:1--3:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.3},
  URN =		{urn:nbn:de:0030-drops-108258},
  doi =		{10.4230/LIPIcs.CCC.2019.3},
  annote =	{Keywords: NP-hardness, Inapproximability, Unique Games Conjecture}
}
Document
Tensor Network Complexity of Multilinear Maps

Authors: Per Austrin, Petteri Kaski, and Kaie Kubjas

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We study tensor networks as a model of arithmetic computation for evaluating multilinear maps. These capture any algorithm based on low border rank tensor decompositions, such as O(n^{omega+epsilon}) time matrix multiplication, and in addition many other algorithms such as O(n log n) time discrete Fourier transform and O^*(2^n) time for computing the permanent of a matrix. However, tensor networks sometimes yield faster algorithms than those that follow from low-rank decompositions. For instance, the fastest known O(n^{(omega+epsilon)t}) time algorithms for counting 3t-cliques can be implemented with tensor networks, even though the underlying tensor has border rank n^{3t} for all t >= 2. For counting homomorphisms of a general pattern graph P into a host graph on n vertices we obtain an upper bound of O(n^{(omega+epsilon)bw(P)/2}), where bw(P) is the branchwidth of P. This essentially matches the bound for counting cliques, and yields small improvements over previous algorithms for many choices of P. While powerful, the model still has limitations, and we are able to show a number of unconditional lower bounds for various multilinear maps, including: b) an Omega(n^{bw(P)}) time lower bound for counting homomorphisms from P to an n-vertex graph, matching the upper bound if omega = 2. In particular, for P a v-clique this yields an Omega(n^{⌈2v/3⌉}) time lower bound for counting v-cliques, and for P a k-uniform v-hyperclique we obtain an Omega(n^v) time lower bound for k >= 3, ruling out tensor networks as an approach to obtaining non-trivial algorithms for hyperclique counting and the Max-3-CSP problem. c) an Omega(2^{0.918n}) time lower bound for the permanent of an n x n matrix.
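
One concrete instance of the clique-counting algorithms referred to above is the textbook reduction of triangle (3-clique) counting to matrix multiplication; the sketch below (a standard fact, not code from the paper) counts triangles as trace(A^3)/6, which is exactly the kind of computation a tensor network can express.

import numpy as np

def count_triangles(adjacency):
    # trace(A^3) counts closed walks of length 3; each triangle is counted
    # 6 times (3 starting vertices x 2 directions).
    A = np.asarray(adjacency, dtype=np.int64)
    return int(np.trace(A @ A @ A)) // 6

if __name__ == "__main__":
    # K4 has C(4,3) = 4 triangles.
    K4 = np.ones((4, 4), dtype=np.int64) - np.eye(4, dtype=np.int64)
    print(count_triangles(K4))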

Cite as

Per Austrin, Petteri Kaski, and Kaie Kubjas. Tensor Network Complexity of Multilinear Maps. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 7:1-7:21, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{austrin_et_al:LIPIcs.ITCS.2019.7,
  author =	{Austrin, Per and Kaski, Petteri and Kubjas, Kaie},
  title =	{{Tensor Network Complexity of Multilinear Maps}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{7:1--7:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.7},
  URN =		{urn:nbn:de:0030-drops-101006},
  doi =		{10.4230/LIPIcs.ITCS.2019.7},
  annote =	{Keywords: arithmetic complexity, lower bound, multilinear map, tensor network}
}
Document
Dense Subset Sum May Be the Hardest

Authors: Per Austrin, Petteri Kaski, Mikko Koivisto, and Jesper Nederlof

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
The SUBSET SUM problem asks whether a given set of n positive integers contains a subset of elements that sum up to a given target t. It is an outstanding open question whether the O^*(2^{n/2})-time algorithm for SUBSET SUM by Horowitz and Sahni [J. ACM 1974] can be beaten in the worst-case setting by a "truly faster", O^*(2^{(0.5-delta)*n})-time algorithm, with some constant delta > 0. Continuing an earlier work [STACS 2015], we study SUBSET SUM parameterized by the maximum bin size beta, defined as the largest number of subsets of the n input integers that yield the same sum. For every epsilon > 0 we give a truly faster algorithm for instances with beta <= 2^{(0.5-epsilon)*n}, as well as instances with beta >= 2^{0.661n}. Consequently, we also obtain a characterization in terms of the popular density parameter n/log_2(t): if all instances of density at least 1.003 admit a truly faster algorithm, then so does every instance. This goes against the current intuition that instances of density 1 are the hardest, and therefore is a step toward answering the open question in the affirmative. Our results stem from a novel combinatorial analysis of mixings of earlier algorithms for SUBSET SUM and a study of an extremal question in additive combinatorics connected to the problem of Uniquely Decodable Code Pairs in information theory.
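
For context, the O^*(2^{n/2}) benchmark of Horowitz and Sahni mentioned above is the classical meet-in-the-middle algorithm; the following simplified sketch (an illustration of the textbook idea, not the paper's algorithm, on a hypothetical toy instance) decides SUBSET SUM by enumerating all subset sums of each half of the input and binary-searching one sorted half for complements.

from bisect import bisect_left

def subset_sums(items):
    # All 2^len(items) subset sums of a list.
    sums = [0]
    for x in items:
        sums += [s + x for s in sums]
    return sums

def subset_sum_mitm(a, t):
    # Meet in the middle: split the input, enumerate subset sums of each half,
    # sort one side and binary-search it for complements of the other side.
    left, right = a[: len(a) // 2], a[len(a) // 2:]
    right_sums = sorted(subset_sums(right))
    for s in subset_sums(left):
        i = bisect_left(right_sums, t - s)
        if i < len(right_sums) and right_sums[i] == t - s:
            return True
    return False

if __name__ == "__main__":
    # Hypothetical toy instance: 4 + 5 = 9, so the answer is True.
    print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))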

Cite as

Per Austrin, Petteri Kaski, Mikko Koivisto, and Jesper Nederlof. Dense Subset Sum May Be the Hardest. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 13:1-13:14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{austrin_et_al:LIPIcs.STACS.2016.13,
  author =	{Austrin, Per and Kaski, Petteri and Koivisto, Mikko and Nederlof, Jesper},
  title =	{{Dense Subset Sum May Be the Hardest}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.13},
  URN =		{urn:nbn:de:0030-drops-57143},
  doi =		{10.4230/LIPIcs.STACS.2016.13},
  annote =	{Keywords: subset sum, additive combinatorics, exponential-time algorithm, homomorphic hashing, Littlewood--Offord problem}
}
Document
Complexity and Approximability of Parameterized MAX-CSPs

Authors: Holger Dell, Eun Jung Kim, Michael Lampis, Valia Mitsou, and Tobias Mömke

Published in: LIPIcs, Volume 43, 10th International Symposium on Parameterized and Exact Computation (IPEC 2015)


Abstract
We study the optimization version of constraint satisfaction problems (Max-CSPs) in the framework of parameterized complexity; the goal is to compute the maximum fraction of constraints that can be satisfied simultaneously. In standard CSPs, we want to decide whether this fraction equals one. The parameters we investigate are structural measures, such as the treewidth or the clique-width of the variable–constraint incidence graph of the CSP instance. We consider Max-CSPs with the constraint types AND, OR, PARITY, and MAJORITY, and with various parameters k. We attempt to fully classify them into the following three cases: 1. The exact optimum can be computed in FPT-time. 2. It is W[1]-hard to compute the exact optimum, but there is a randomized FPT approximation scheme (FPT-AS), which computes a (1-epsilon)-approximation in time f(k,epsilon) * poly(n). 3. There is no FPT-AS unless FPT=W[1]. For the corresponding standard CSPs, we establish FPT vs. W[1]-hardness results.

Cite as

Holger Dell, Eun Jung Kim, Michael Lampis, Valia Mitsou, and Tobias Mömke. Complexity and Approximability of Parameterized MAX-CSPs. In 10th International Symposium on Parameterized and Exact Computation (IPEC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 43, pp. 294-306, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{dell_et_al:LIPIcs.IPEC.2015.294,
  author =	{Dell, Holger and Kim, Eun Jung and Lampis, Michael and Mitsou, Valia and M\"{o}mke, Tobias},
  title =	{{Complexity and Approximability of Parameterized MAX-CSPs}},
  booktitle =	{10th International Symposium on Parameterized and Exact Computation (IPEC 2015)},
  pages =	{294--306},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-92-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{43},
  editor =	{Husfeldt, Thore and Kanj, Iyad},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2015.294},
  URN =		{urn:nbn:de:0030-drops-55910},
  doi =		{10.4230/LIPIcs.IPEC.2015.294},
  annote =	{Keywords: Approximation, Structural Parameters, Constraint Satisfaction}
}
Document
Approximate Hypergraph Coloring under Low-discrepancy and Related Promises

Authors: Vijay V. S. P. Bhattiprolu, Venkatesan Guruswami, and Euiwoong Lee

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
A hypergraph is said to be X-colorable if its vertices can be colored with X colors so that no hyperedge is monochromatic. 2-colorability is a fundamental property (called Property B) of hypergraphs and is extensively studied in combinatorics. Algorithmically, however, given a 2-colorable k-uniform hypergraph, it is NP-hard to find a 2-coloring miscoloring fewer than a 2^(-k+1) fraction of hyperedges (which is trivially achieved by a random 2-coloring), and the best algorithms to color the hypergraph properly require about n^(1-1/k) colors, approaching the trivial bound of n as k increases. In this work, we study the complexity of approximate hypergraph coloring, for both the maximization (finding a 2-coloring with the fewest miscolored edges) and minimization (finding a proper coloring using the fewest colors) versions, when the input hypergraph is promised to have the following stronger properties than 2-colorability: (A) Low-discrepancy: If the hypergraph has a 2-coloring of discrepancy l << sqrt(k), we give an algorithm to color the hypergraph with about n^(O(l^2/k)) colors. However, for the maximization version, we prove NP-hardness of finding a 2-coloring miscoloring a fraction smaller than 2^(-O(k)) (resp. k^(-O(k))) of the hyperedges when l = O(log k) (resp. l = 2). Assuming the Unique Games conjecture, we improve the latter hardness factor to 2^(-O(k)) for almost discrepancy-1 hypergraphs. (B) Rainbow colorability: If the hypergraph has a (k-l)-coloring such that each hyperedge is polychromatic with all these colors (this is stronger than a (l+1)-discrepancy 2-coloring), we give a 2-coloring algorithm that miscolors at most a k^(-Omega(k)) fraction of the hyperedges when l << sqrt(k), and complement this with a matching Unique Games hardness result showing that when l = sqrt(k), it is hard to even beat the 2^(-k+1) bound achieved by a random coloring. (C) Strong colorability: We obtain similar (stronger) Min- and Max-2-Coloring algorithmic results in the case of (k+l)-strong colorability.
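
The trivial baseline referred to above - a uniformly random 2-coloring leaves each k-uniform hyperedge monochromatic with probability 2^(1-k) - is easy to verify numerically; the sketch below (illustrative only, on a hypothetical random hypergraph) compares a Monte Carlo estimate against that bound.

import random

def monochromatic_fraction(hyperedges, coloring):
    # Fraction of hyperedges whose vertices all receive the same color.
    mono = sum(1 for e in hyperedges if len({coloring[v] for v in e}) == 1)
    return mono / len(hyperedges)

if __name__ == "__main__":
    # Hypothetical random k-uniform hypergraph, for illustration only.
    rng = random.Random(0)
    n, k, m, trials = 30, 5, 400, 2000
    hyperedges = [tuple(rng.sample(range(n), k)) for _ in range(m)]
    average = sum(
        monochromatic_fraction(hyperedges, [rng.randint(0, 1) for _ in range(n)])
        for _ in range(trials)
    ) / trials
    print(f"random 2-coloring: {average:.4f} monochromatic; 2^(1-k) = {2 ** (1 - k):.4f}")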

Cite as

Vijay V. S. P. Bhattiprolu, Venkatesan Guruswami, and Euiwoong Lee. Approximate Hypergraph Coloring under Low-discrepancy and Related Promises. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 152-174, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{bhattiprolu_et_al:LIPIcs.APPROX-RANDOM.2015.152,
  author =	{Bhattiprolu, Vijay V. S. P. and Guruswami, Venkatesan and Lee, Euiwoong},
  title =	{{Approximate Hypergraph Coloring under Low-discrepancy and Related Promises}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{152--174},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.152},
  URN =		{urn:nbn:de:0030-drops-53011},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.152},
  annote =	{Keywords: Hypergraph Coloring, Discrepancy, Rainbow Coloring, Strong Coloring, Algorithms, Semidefinite Programming, Hardness of Approximation}
}
Document
Towards a Characterization of Approximation Resistance for Symmetric CSPs

Authors: Venkatesan Guruswami and Euiwoong Lee

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
A Boolean constraint satisfaction problem (CSP) is called approximation resistant if independently setting variables to 1 with some probability achieves the best possible approximation ratio for the fraction of constraints satisfied. We study approximation resistance of a natural subclass of CSPs that we call Symmetric Constraint Satisfaction Problems (SCSPs), where satisfaction of each constraint only depends on the number of true literals in its scope. Thus an SCSP of arity k can be described by a subset S of allowed numbers of true literals. For SCSPs without negation, we conjecture that a simple sufficient condition for approximation resistance due to Austrin and Håstad is indeed necessary. We show that this condition has a compact analytic representation in the case of symmetric CSPs (depending only on the gap between the largest and smallest numbers in S), and provide the rationale behind our conjecture. We prove two interesting special cases of the conjecture: (i) when S is an interval and (ii) when S is even. For SCSPs with negation, we prove that the analogous sufficient condition by Austrin and Mossel is necessary for the same two cases, though we do not pose an analogous conjecture in general.

Cite as

Venkatesan Guruswami and Euiwoong Lee. Towards a Characterization of Approximation Resistance for Symmetric CSPs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 305-322, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{guruswami_et_al:LIPIcs.APPROX-RANDOM.2015.305,
  author =	{Guruswami, Venkatesan and Lee, Euiwoong},
  title =	{{Towards a Characterization of Approximation Resistance for Symmetric CSPs}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{305--322},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.305},
  URN =		{urn:nbn:de:0030-drops-53095},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.305},
  annote =	{Keywords: Constraint Satisfaction Problems, Approximation resistance}
}
Document
Improved NP-Inapproximability for 2-Variable Linear Equations

Authors: Johan Håstad, Sangxia Huang, Rajsekar Manokaran, Ryan O’Donnell, and John Wright

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
An instance of the 2-Lin(2) problem is a system of equations of the form "x_i + x_j = b (mod 2)". Given such a system in which it's possible to satisfy all but an epsilon fraction of the equations, we show it is NP-hard to satisfy all but a C*epsilon fraction of the equations, for any C < 11/8 = 1.375 (and any 0 < epsilon <= 1/8). The previous best result, standing for over 15 years, had 5/4 in place of 11/8. Our result provides the best known NP-hardness even for the Unique Games problem, and it also holds for the special case of Max-Cut. The precise factor 11/8 is unlikely to be best possible; we also give a conjecture concerning analysis of Boolean functions which, if true, would yield a larger hardness factor of 3/2. Our proof is by a modified gadget reduction from a pairwise-independent predicate. We also show an inherent limitation to this type of gadget reduction. In particular, any such reduction can never establish a hardness factor C greater than 2.54. Previously, no such limitation on gadget reductions was known.

Cite as

Johan Håstad, Sangxia Huang, Rajsekar Manokaran, Ryan O’Donnell, and John Wright. Improved NP-Inapproximability for 2-Variable Linear Equations. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 341-360, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{hastad_et_al:LIPIcs.APPROX-RANDOM.2015.341,
  author =	{H\r{a}stad, Johan and Huang, Sangxia and Manokaran, Rajsekar and O’Donnell, Ryan and Wright, John},
  title =	{{Improved NP-Inapproximability for 2-Variable Linear Equations}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{341--360},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.341},
  URN =		{urn:nbn:de:0030-drops-53112},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.341},
  annote =	{Keywords: approximability, unique games, linear equation, gadget, linear programming}
}
Document
A Characterization of Hard-to-cover CSPs

Authors: Amey Bhangale, Prahladh Harsha, and Girish Varma

Published in: LIPIcs, Volume 33, 30th Conference on Computational Complexity (CCC 2015)


Abstract
We continue the study of covering complexity of constraint satisfaction problems (CSPs) initiated by Guruswami, Håstad and Sudan [SIAM J. Computing, 31(6):1663--1686, 2002] and Dinur and Kol [in Proc. 28th IEEE Conference on Computational Complexity, 2013]. The covering number of a CSP instance Phi, denoted by nu(Phi), is the smallest number of assignments to the variables of Phi such that each constraint of Phi is satisfied by at least one of the assignments. We show the following results regarding how well efficient algorithms can approximate the covering number of a given CSP instance. 1. Assuming a covering unique games conjecture, introduced by Dinur and Kol, we show that for every non-odd predicate P over any constant-sized alphabet and every integer K, it is NP-hard to distinguish between P-CSP instances (i.e., CSP instances where all the constraints are of type P) which are coverable by a constant number of assignments and those whose covering number is at least K. Previously, Dinur and Kol, using the same covering unique games conjecture, had shown a similar hardness result for every non-odd predicate over the Boolean alphabet that supports a pairwise independent distribution. Our generalization yields a complete characterization of CSPs over a constant-sized alphabet Sigma that are hard to cover, since CSPs over odd predicates are trivially coverable with |Sigma| assignments. 2. For a large class of predicates that are contained in the 2k-LIN predicate, we show that it is quasi-NP-hard to distinguish between instances which have covering number at most two and those with covering number at least Omega(log(log(n))). This generalizes the 4-LIN result of Dinur and Kol, which states that it is quasi-NP-hard to distinguish between 4-LIN-CSP instances which have covering number at most two and those with covering number at least Omega(log(log(log(n)))).
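
The covering number nu(Phi) defined above is straightforward to compute by brute force on toy instances; the sketch below (illustrative only; the two-constraint instance is hypothetical) searches for the smallest set of assignments such that every constraint is satisfied by at least one of them.

from itertools import combinations, product

def covering_number(n, constraints):
    # Smallest number of assignments such that every constraint (a predicate on
    # the full assignment) is satisfied by at least one of them.
    assignments = list(product((0, 1), repeat=n))
    satisfying = [{i for i, a in enumerate(assignments) if c(a)} for c in constraints]
    if any(not s for s in satisfying):
        return None  # some constraint is unsatisfiable, so no cover exists
    for size in range(1, len(assignments) + 1):
        for chosen in combinations(range(len(assignments)), size):
            picked = set(chosen)
            if all(s & picked for s in satisfying):
                return size

if __name__ == "__main__":
    # Hypothetical instance on 3 variables with two contradictory parity
    # constraints; no single assignment satisfies both, so the answer is 2.
    c1 = lambda a: (a[0] ^ a[1]) == 1
    c2 = lambda a: (a[0] ^ a[1]) == 0
    print(covering_number(3, [c1, c2]))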

Cite as

Amey Bhangale, Prahladh Harsha, and Girish Varma. A Characterization of Hard-to-cover CSPs. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 280-303, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{bhangale_et_al:LIPIcs.CCC.2015.280,
  author =	{Bhangale, Amey and Harsha, Prahladh and Varma, Girish},
  title =	{{A Characterization of Hard-to-cover CSPs}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{280--303},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.280},
  URN =		{urn:nbn:de:0030-drops-50574},
  doi =		{10.4230/LIPIcs.CCC.2015.280},
  annote =	{Keywords: CSPs, Covering Problem, Hardness of Approximation, Unique Games, Invariance Principle}
}
Document
Subset Sum in the Absence of Concentration

Authors: Per Austrin, Petteri Kaski, Mikko Koivisto, and Jesper Nederlof

Published in: LIPIcs, Volume 30, 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)


Abstract
We study the exact time complexity of the Subset Sum problem. Our focus is on instances that lack additive structure in the sense that the sums one can form from the subsets of the given integers are not strongly concentrated on any particular integer value. We present a randomized algorithm that runs in O(2^{0.3399n} B^4) time on instances with the property that no value can arise as a sum of more than B different subsets of the n given integers.
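
The concentration parameter B described above - the largest number of subsets of the input that share the same sum - can be computed directly for small inputs; the sketch below (illustrative only, with hypothetical inputs) does so by dynamic programming over achievable sums.

from collections import Counter

def max_bin_size(a):
    # Largest number of subsets of a sharing the same sum (the parameter B),
    # computed by dynamic programming over achievable sums.
    counts = Counter({0: 1})  # the empty subset
    for x in a:
        updated = Counter()
        for s, c in counts.items():
            updated[s] += c      # subsets that skip x
            updated[s + x] += c  # subsets that include x
        counts = updated
    return max(counts.values())

if __name__ == "__main__":
    # Distinct powers of two: all subset sums differ, so B = 1.
    print(max_bin_size([1, 2, 4, 8]))
    # Six equal elements: C(6,3) = 20 subsets sum to 3, so B = 20.
    print(max_bin_size([1, 1, 1, 1, 1, 1]))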

Cite as

Per Austrin, Petteri Kaski, Mikko Koivisto, and Jesper Nederlof. Subset Sum in the Absence of Concentration. In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 30, pp. 48-61, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{austrin_et_al:LIPIcs.STACS.2015.48,
  author =	{Austrin, Per and Kaski, Petteri and Koivisto, Mikko and Nederlof, Jesper},
  title =	{{Subset Sum in the Absence of Concentration}},
  booktitle =	{32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)},
  pages =	{48--61},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-78-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{30},
  editor =	{Mayr, Ernst W. and Ollinger, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2015.48},
  URN =		{urn:nbn:de:0030-drops-49034},
  doi =		{10.4230/LIPIcs.STACS.2015.48},
  annote =	{Keywords: subset sum, additive combinatorics, exponential-time algorithm, homomorphic hashing, Littlewood--Offord problem}
}
Document
Parity is Positively Useless

Authors: Cenny Wenner

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
We give the first examples of non-trivially positively-useless predicates subject only to P != NP. In particular, for every constraint function Q : {-1,1}^4 -> R, we construct Constraint-Satisfaction-Problem (CSP) instances without negations which have value at least 1-eps when evaluated for the arity-four odd-parity predicate, yet it is NP-hard to find a solution with value significantly better than a random biased assignment when evaluated for Q. More generally, we show that all parities except one are positively useless. Although we are not able to exhibit a single protocol producing hard instances when evaluated for every Q, we show that two protocols do the trick. The first protocol is the classical one used by Håstad with a twist. We extend the protocol to multilayered Label Cover and employ a particular distribution over layers in order to limit moments of table biases. The second protocol is a modification of Chan's multi-question protocol where queried tuples of Label Cover vertices are randomized in such a way that the tables can be seen as being independently sampled from a common distribution, and in effect having identical expected biases. We believe that our techniques may prove useful in further analyzing the approximability of CSPs without negations.

Cite as

Cenny Wenner. Parity is Positively Useless. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 433-448, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{wenner:LIPIcs.APPROX-RANDOM.2014.433,
  author =	{Wenner, Cenny},
  title =	{{Parity is Positively Useless}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{433--448},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.433},
  URN =		{urn:nbn:de:0030-drops-47144},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.433},
  annote =	{Keywords: Approximation hardness, approximation resistance, parity, usefulness, negations, monotone, constraint satisfaction problems, smoothness, multilayer, L}
}
Document
Universal Factor Graphs for Every NP-Hard Boolean CSP

Authors: Shlomo Jozeph

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
An instance of a Boolean constraint satisfaction problem can be divided into two parts. One part, which we refer to as the factor graph of the instance, specifies for each clause the set of variables that are associated with the clause. The other part specifies, for each of the given clauses, the constraint that is evaluated on the respective variables. Depending on the allowed choices of constraints, it is known that Boolean constraint satisfaction problems fall into one of two classes, being either NP-hard or in P. This paper shows that every NP-hard Boolean constraint satisfaction problem (except for an easy-to-characterize set of natural exceptions) has a universal factor graph. That is, for every NP-hard Boolean constraint satisfaction problem, there is a family containing at most one factor graph of each size, such that the problem, restricted to instances that have a factor graph from this family, cannot be solved in polynomial time unless NP is contained in P/poly. Moreover, we extend this classification to one that establishes hardness of approximation.

Cite as

Shlomo Jozeph. Universal Factor Graphs for Every NP-Hard Boolean CSP. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 274-283, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


Copy BibTex To Clipboard

@InProceedings{jozeph:LIPIcs.APPROX-RANDOM.2014.274,
  author =	{Jozeph, Shlomo},
  title =	{{Universal Factor Graphs for Every NP-Hard Boolean CSP}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{274--283},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.274},
  URN =		{urn:nbn:de:0030-drops-47021},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.274},
  annote =	{Keywords: Hardness of Approximation, Hardness with Preprocessing}
}
  • Refine by Author
  • 5 Austrin, Per
  • 3 Guruswami, Venkatesan
  • 3 Kaski, Petteri
  • 2 Bhangale, Amey
  • 2 Koivisto, Mikko
  • Show More...

  • Refine by Classification
  • 3 Theory of computation → Approximation algorithms analysis
  • 3 Theory of computation → Problems, reductions and completeness
  • 1 Applied computing → Economics
  • 1 Mathematics of computing → Approximation algorithms
  • 1 Theory of computation → Computational complexity and cryptography
  • Show More...

  • Refine by Keyword
  • 3 Hardness of Approximation
  • 2 Constraint Satisfaction Problems
  • 2 Unique Games Conjecture
  • 2 additive combinatorics
  • 2 exponential-time algorithm
  • Show More...

  • Refine by Type
  • 15 document

  • Refine by Publication Year
  • 6 2015
  • 3 2019
  • 2 2014
  • 1 2016
  • 1 2021
  • Show More...
