16 Search Results for "Karthik C. S."


Document
Parameterized Approximation: Algorithms and Hardness (Dagstuhl Seminar 23291)

Authors: Karthik C. S., Parinya Chalermsook, Joachim Spoerhase, Meirav Zehavi, and Martin Herold

Published in: Dagstuhl Reports, Volume 13, Issue 7 (2024)


Abstract
Parameterization and approximation are two established approaches to coping with intractability in combinatorial optimization. In this Dagstuhl Seminar, we studied parameterized approximation as a relatively new algorithmic paradigm that combines these two popular research areas. In particular, we analyzed the solution quality (approximation ratio) as well as the running time of an algorithm in terms of a parameter that captures the "complexity" of a problem instance. While the field has grown and yielded some promising results, our understanding of the area is rather ad hoc compared to our knowledge of approximation or parameterized algorithms alone. In this seminar, we brought together researchers from both communities in order to bridge this gap by facilitating the exchange and unification of scientific knowledge.

Cite as

Karthik C. S., Parinya Chalermsook, Joachim Spoerhase, Meirav Zehavi, and Martin Herold. Parameterized Approximation: Algorithms and Hardness (Dagstuhl Seminar 23291). In Dagstuhl Reports, Volume 13, Issue 7, pp. 96-107, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{c.s._et_al:DagRep.13.7.96,
  author =	{C. S., Karthik and Chalermsook, Parinya and Spoerhase, Joachim and Zehavi, Meirav and Herold, Martin},
  title =	{{Parameterized Approximation: Algorithms and Hardness (Dagstuhl Seminar 23291)}},
  pages =	{96--107},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2024},
  volume =	{13},
  number =	{7},
  editor =	{C. S., Karthik and Chalermsook, Parinya and Spoerhase, Joachim and Zehavi, Meirav and Herold, Martin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.13.7.96},
  URN =		{urn:nbn:de:0030-drops-197764},
  doi =		{10.4230/DagRep.13.7.96},
  annote =	{Keywords: approximation algorithms, Hardness of approximation, Parameterized algorithms}
}
Document
APPROX
On Complexity of 1-Center in Various Metrics

Authors: Amir Abboud, MohammadHossein Bateni, Vincent Cohen-Addad, Karthik C. S., and Saeed Seddighin

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
We consider the classic 1-center problem: Given a set P of n points in a metric space, find the point in P that minimizes the maximum distance to the other points of P. We study the complexity of this problem in d-dimensional 𝓁_p-metrics and in edit and Ulam metrics over strings of length d. Our results for the 1-center problem may be classified based on d as follows.
- Small d. Assuming the hitting set conjecture (HSC), we show that when d = ω(log n), no subquadratic algorithm can solve the 1-center problem in any of the 𝓁_p-metrics, or in the edit or Ulam metrics.
- Large d. When d = Ω(n), we extend our conditional lower bound to rule out subquartic algorithms for the 1-center problem in the edit metric (assuming Quantified SETH). On the other hand, we give a (1+ε)-approximation for 1-center in the Ulam metric with running time Õ_ε(nd + n²√d).
We also strengthen some of the above lower bounds by allowing approximation algorithms or by reducing the dimension d, but only against a weaker class of algorithms which list all requisite solutions. Moreover, we extend one of our hardness results to rule out subquartic algorithms for the well-studied 1-median problem in the edit metric, where given a set of n strings each of length n, the goal is to find a string in the set that minimizes the sum of the edit distances to the rest of the strings in the set.
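
To make the problem statement concrete, the following minimal Python sketch (ours, not from the paper; all names are illustrative) computes an exact 1-center by brute force in an 𝓁_p metric. Its O(n²d) running time is exactly the quadratic barrier that the conditional lower bounds above address.

def lp_distance(x, y, p):
    """l_p distance between two d-dimensional points (p >= 1)."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def one_center_brute_force(points, p=2):
    """Return the point of `points` minimizing the maximum l_p distance
    to the other points, together with that radius; O(n^2 d) time."""
    best, best_radius = None, float("inf")
    for candidate in points:
        radius = max(lp_distance(candidate, other, p) for other in points)
        if radius < best_radius:
            best, best_radius = candidate, radius
    return best, best_radius

# Three points on a line: the middle point is the 1-center, with radius 2.
print(one_center_brute_force([(0.0,), (1.0,), (3.0,)]))  # ((1.0,), 2.0)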

Cite as

Amir Abboud, MohammadHossein Bateni, Vincent Cohen-Addad, Karthik C. S., and Saeed Seddighin. On Complexity of 1-Center in Various Metrics. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 1:1-1:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.APPROX/RANDOM.2023.1,
  author =	{Abboud, Amir and Bateni, MohammadHossein and Cohen-Addad, Vincent and Karthik C. S. and Seddighin, Saeed},
  title =	{{On Complexity of 1-Center in Various Metrics}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{1:1--1:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.1},
  URN =		{urn:nbn:de:0030-drops-188260},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.1},
  annote =	{Keywords: Center, Clustering, Edit metric, Ulam metric, Hamming metric, Fine-grained Complexity, Approximation}
}
Document
Can You Solve Closest String Faster Than Exhaustive Search?

Authors: Amir Abboud, Nick Fischer, Elazar Goldenberg, Karthik C. S., and Ron Safier

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We study the fundamental problem of finding the best string to represent a given set, in the form of the Closest String problem: Given a set X ⊆ Σ^d of n strings, find the string x^* minimizing the radius of the smallest Hamming ball around x^* that encloses all the strings in X. In this paper, we investigate whether the Closest String problem admits algorithms that are faster than the trivial exhaustive search algorithm. We obtain the following results for the two natural versions of the problem:
- In the continuous Closest String problem, the goal is to find the solution string x^* anywhere in Σ^d. For binary strings, the exhaustive search algorithm runs in time O(2^d poly(nd)) and we prove that it cannot be improved to time O(2^{(1-ε) d} poly(nd)), for any ε > 0, unless the Strong Exponential Time Hypothesis fails.
- In the discrete Closest String problem, x^* is required to be in the input set X. While this problem is clearly in polynomial time, its fine-grained complexity has been pinpointed to be quadratic time n^{2 ± o(1)} whenever the dimension is ω(log n) < d < n^o(1). We complement this known hardness result with new algorithms, proving essentially that whenever d falls out of this hard range, the discrete Closest String problem can be solved faster than exhaustive search. In the small-d regime, our algorithm is based on a novel application of the inclusion-exclusion principle.
Interestingly, all of our results apply (and some are even stronger) to the natural dual of the Closest String problem, called the Remotest String problem, where the task is to find a string maximizing the Hamming distance to all the strings in X.
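
As a point of reference for the results above, here is a minimal Python sketch (illustrative only, not the authors' algorithms) of the two trivial baselines: exhaustive search over all binary centers for the continuous version, and a quadratic scan for the discrete version.

from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def continuous_closest_string(strings):
    """Exhaustive search over all 2^d binary strings of length d; returns a
    center and its radius (maximum Hamming distance to the input).  This is
    the O(2^d poly(nd)) baseline that, by the first result above, cannot be
    improved to O(2^{(1-eps)d} poly(nd)) under SETH."""
    d = len(strings[0])
    best, best_radius = None, d + 1
    for candidate in product("01", repeat=d):
        radius = max(hamming(candidate, s) for s in strings)
        if radius < best_radius:
            best, best_radius = "".join(candidate), radius
    return best, best_radius

def discrete_closest_string(strings):
    """The center must come from the input set; trivial O(n^2 d) scan."""
    best, best_radius = None, len(strings[0]) + 1
    for candidate in strings:
        radius = max(hamming(candidate, s) for s in strings)
        if radius < best_radius:
            best, best_radius = candidate, radius
    return best, best_radius

print(continuous_closest_string(["000", "011", "110"]))  # ('010', 1)
print(discrete_closest_string(["000", "011", "110"]))    # ('000', 2)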

Cite as

Amir Abboud, Nick Fischer, Elazar Goldenberg, Karthik C. S., and Ron Safier. Can You Solve Closest String Faster Than Exhaustive Search?. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 3:1-3:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ESA.2023.3,
  author =	{Abboud, Amir and Fischer, Nick and Goldenberg, Elazar and Karthik C. S. and Safier, Ron},
  title =	{{Can You Solve Closest String Faster Than Exhaustive Search?}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{3:1--3:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.3},
  URN =		{urn:nbn:de:0030-drops-186566},
  doi =		{10.4230/LIPIcs.ESA.2023.3},
  annote =	{Keywords: Closest string, fine-grained complexity, SETH, inclusion-exclusion}
}
Document
Almost Polynomial Factor Inapproximability for Parameterized k-Clique

Authors: Karthik C. S. and Subhash Khot

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
The k-Clique problem is a canonical hard problem in parameterized complexity. In this paper, we study the parameterized complexity of approximating the k-Clique problem where an integer k and a graph G on n vertices are given as input, and the goal is to find a clique of size at least k/F(k) whenever the graph G has a clique of size k. When such an algorithm runs in time T(k) ⋅ poly(n) (i.e., FPT-time) for some computable function T, it is said to be an F(k)-FPT-approximation algorithm for the k-Clique problem. Although the non-existence of an F(k)-FPT-approximation algorithm for any computable sublinear function F is known under Gap-ETH [Chalermsook et al., FOCS 2017], it has remained a long-standing open problem to prove the same inapproximability result under the more standard and weaker assumption, W[1] ≠ FPT. In a recent breakthrough, Lin [STOC 2021] ruled out constant factor (i.e., F(k) = O(1)) FPT-approximation algorithms under W[1] ≠ FPT. In this paper, we improve this inapproximability result (under the same assumption) to rule out every F(k) = k^{1/H(k)} factor FPT-approximation algorithm for any increasing computable function H (for example, H(k) = log^∗ k). Our main technical contribution is introducing list decoding of Hadamard codes over large prime fields into the proof framework of Lin.
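
For orientation, the sketch below (illustrative, not from the paper) shows the trivial exhaustive check for an exact k-clique, which takes roughly n^k time; the FPT-approximation question above asks whether a clique of size k/F(k) can instead be found in time T(k)·poly(n).

from itertools import combinations

def has_k_clique(adj, k):
    """Exhaustive check for a clique of size k in a graph given as an
    adjacency-set dictionary; O(n^k * k^2) time."""
    vertices = list(adj)
    for subset in combinations(vertices, k):
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return subset
    return None

# A 4-cycle has 2-cliques (edges) but no triangle.
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(has_k_clique(cycle4, 2))  # (0, 1)
print(has_k_clique(cycle4, 3))  # None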

Cite as

Karthik C. S. and Subhash Khot. Almost Polynomial Factor Inapproximability for Parameterized k-Clique. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 6:1-6:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{karthikc.s._et_al:LIPIcs.CCC.2022.6,
  author =	{Karthik C. S. and Khot, Subhash},
  title =	{{Almost Polynomial Factor Inapproximability for Parameterized k-Clique}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{6:1--6:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.6},
  URN =		{urn:nbn:de:0030-drops-165680},
  doi =		{10.4230/LIPIcs.CCC.2022.6},
  annote =	{Keywords: Parameterized Complexity, k-clique, Hardness of Approximation}
}
Document
APPROX
Hardness of Approximation for Euclidean k-Median

Authors: Anup Bhattacharya, Dishant Goyal, and Ragesh Jaiswal

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
The Euclidean k-median problem is defined in the following manner: given a set 𝒳 of n points in d-dimensional Euclidean space ℝ^d, and an integer k, find a set C ⊂ ℝ^d of k points (called centers) such that the cost function Φ(C,𝒳) ≡ ∑_{x ∈ 𝒳} min_{c ∈ C} ‖x-c‖₂ is minimized. The Euclidean k-means problem is defined similarly by replacing the distance with squared Euclidean distance in the cost function. Various hardness of approximation results are known for the Euclidean k-means problem [Pranjal Awasthi et al., 2015; Euiwoong Lee et al., 2017; Vincent Cohen-Addad and Karthik C. S., 2019]. However, no hardness of approximation result was known for the Euclidean k-median problem. In this work, assuming the unique games conjecture (UGC), we provide a hardness of approximation result for the Euclidean k-median problem in O(log k)-dimensional space. This solves an open question posed explicitly in the work of Awasthi et al. [Pranjal Awasthi et al., 2015]. Furthermore, we study the hardness of approximation for the Euclidean k-means/k-median problems in the bi-criteria setting where an algorithm is allowed to choose more than k centers. That is, bi-criteria approximation algorithms are allowed to output β k centers (for constant β > 1) and the approximation ratio is computed with respect to the optimal k-means/k-median cost. We show a hardness of bi-criteria approximation result for the Euclidean k-median problem for any β < 1.015, assuming UGC. We also show a similar hardness of bi-criteria approximation result for the Euclidean k-means problem with a stronger bound of β < 1.28, again assuming UGC.
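
To make the two objectives explicit, the following short Python sketch (ours, for illustration only) evaluates the k-median cost Φ(C,𝒳) and its k-means counterpart for a hand-picked set of centers; the only difference is whether distances are squared.

import math

def k_median_cost(centers, points):
    """Phi(C, X): sum over x of the Euclidean distance to its nearest center."""
    return sum(min(math.dist(x, c) for c in centers) for x in points)

def k_means_cost(centers, points):
    """Same objective, but with squared Euclidean distances."""
    return sum(min(math.dist(x, c) ** 2 for c in centers) for x in points)

X = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (11.0, 0.0)]
C = [(0.5, 0.0), (10.5, 0.0)]  # k = 2 centers chosen by hand
print(k_median_cost(C, X))  # 2.0
print(k_means_cost(C, X))   # 1.0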

Cite as

Anup Bhattacharya, Dishant Goyal, and Ragesh Jaiswal. Hardness of Approximation for Euclidean k-Median. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 4:1-4:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bhattacharya_et_al:LIPIcs.APPROX/RANDOM.2021.4,
  author =	{Bhattacharya, Anup and Goyal, Dishant and Jaiswal, Ragesh},
  title =	{{Hardness of Approximation for Euclidean k-Median}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{4:1--4:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.4},
  URN =		{urn:nbn:de:0030-drops-146979},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.4},
  annote =	{Keywords: Hardness of approximation, bicriteria approximation, approximation algorithms, k-median, k-means}
}
Document
Hardness Amplification of Optimization Problems

Authors: Elazar Goldenberg and Karthik C. S.

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationship between α(n) and t(n), there is a distribution D' over instances of Π of size O(n⋅α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification of problems in various classes such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium.

Cite as

Elazar Goldenberg and Karthik C. S.. Hardness Amplification of Optimization Problems. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 1:1-1:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{goldenberg_et_al:LIPIcs.ITCS.2020.1,
  author =	{Goldenberg, Elazar and Karthik C. S.},
  title =	{{Hardness Amplification of Optimization Problems}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{1:1--1:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.1},
  URN =		{urn:nbn:de:0030-drops-116863},
  doi =		{10.4230/LIPIcs.ITCS.2020.1},
  annote =	{Keywords: hardness amplification, average case complexity, direct product, optimization problems, fine-grained complexity, TFNP}
}
Document
Small Cuts and Connectivity Certificates: A Fault Tolerant Approach

Authors: Merav Parter

Published in: LIPIcs, Volume 146, 33rd International Symposium on Distributed Computing (DISC 2019)


Abstract
We revisit classical connectivity problems in the CONGEST model of distributed computing. By using techniques from fault tolerant network design, we show improved constructions, some of which are even "local" (i.e., with O~(1) rounds) for problems that are closely related to hard global problems (i.e., with a lower bound of Omega(Diam+sqrt{n}) rounds).
Distributed Minimum Cut: Nanongkai and Su presented a randomized algorithm for computing a (1+epsilon)-approximation of the minimum cut using O~(D + sqrt{n}) rounds where D is the diameter of the graph. For a sufficiently large minimum cut lambda = Omega(sqrt{n}), this is tight due to Das Sarma et al. [FOCS '11], Ghaffari and Kuhn [DISC '13].
- Small Cuts: A special setting that remains open is where the graph connectivity lambda is small (i.e., constant). The only lower bound for this case is Omega(D), with a matching bound known only for lambda <= 2 due to Pritchard and Thurimella [TALG '11]. Recently, Daga, Henzinger, Nanongkai and Saranurak [STOC '19] raised the open problem of computing the minimum cut in poly(D) rounds for any lambda = O(1). In this paper, we resolve this problem by presenting a surprisingly simple algorithm that takes a completely different approach than the existing algorithms. Our algorithm also has the benefit that it computes all minimum cuts in the graph, and naturally extends to vertex cuts as well. At the heart of the algorithm is a graph sampling approach usually used in the context of fault tolerant (FT) design.
- Deterministic Algorithms: While the existing distributed minimum cut algorithms are randomized, our algorithm can be made deterministic within the same round complexity. To obtain this, we introduce a novel definition of universal sets along with their efficient computation. This allows us to derandomize the FT graph sampling technique, which might be of independent interest.
- Computation of all Edge Connectivities: We also consider the more general task of computing the edge connectivity of all the edges in the graph. In the output format, it is required that the endpoints u,v of every edge (u,v) learn the cardinality of the u-v cut in the graph. We provide the first sublinear algorithm for this problem for the case of constant connectivity values. Specifically, by using the recent notion of low-congestion cycle cover, combined with the sampling technique, we compute all edge connectivities in poly(D) * 2^{O(sqrt{log n log log n})} rounds.
Sparse Certificates: For an n-vertex graph G and an integer lambda, a lambda-sparse certificate H is a subgraph H subseteq G with O(lambda n) edges which is lambda-connected iff G is lambda-connected. For D-diameter graphs, constructions of sparse certificates for lambda in {2,3} have been provided by Thurimella [J. Alg. '97] and Dori [PODC '18] respectively using O~(D) number of rounds. The problem of devising such certificates with o(D+sqrt{n}) rounds was left open by Dori [PODC '18] for any lambda >= 4. Using connections to fault tolerant spanners, we considerably improve the round complexity for any lambda in [1,n] and epsilon in (0,1), by showing a construction of (1-epsilon)lambda-sparse certificates with O(lambda n) edges using only O(1/epsilon^2 * log^{2+o(1)} n) rounds.

Cite as

Merav Parter. Small Cuts and Connectivity Certificates: A Fault Tolerant Approach. In 33rd International Symposium on Distributed Computing (DISC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 146, pp. 30:1-30:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{parter:LIPIcs.DISC.2019.30,
  author =	{Parter, Merav},
  title =	{{Small Cuts and Connectivity Certificates: A Fault Tolerant Approach}},
  booktitle =	{33rd International Symposium on Distributed Computing (DISC 2019)},
  pages =	{30:1--30:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-126-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{146},
  editor =	{Suomela, Jukka},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2019.30},
  URN =		{urn:nbn:de:0030-drops-113371},
  doi =		{10.4230/LIPIcs.DISC.2019.30},
  annote =	{Keywords: Connectivity, Minimum Cut, Spanners}
}
Document
On Closest Pair in Euclidean Metric: Monochromatic is as Hard as Bichromatic

Authors: Karthik C. S. and Pasin Manurangsi

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
Given a set of n points in R^d, the (monochromatic) Closest Pair problem asks to find a pair of distinct points in the set that are closest in the l_p-metric. Closest Pair is a fundamental problem in Computational Geometry and understanding its fine-grained complexity in the Euclidean metric when d = omega(log n) was raised as an open question in recent works (Abboud-Rubinstein-Williams [FOCS'17], Williams [SODA'18], David-Karthik-Laekhanukit [SoCG'18]). In this paper, we show that for every p in R_{>= 1} cup {0}, under the Strong Exponential Time Hypothesis (SETH), for every epsilon > 0, the following holds:
- No algorithm running in time O(n^{2-epsilon}) can solve the Closest Pair problem in d = (log n)^{Omega_{epsilon}(1)} dimensions in the l_p-metric.
- There exists delta = delta(epsilon) > 0 and c = c(epsilon) >= 1 such that no algorithm running in time O(n^{1.5-epsilon}) can approximate the Closest Pair problem to a factor of (1+delta) in d >= c log n dimensions in the l_p-metric.
In particular, our first result is shown by establishing the computational equivalence of the bichromatic Closest Pair problem and the (monochromatic) Closest Pair problem (up to an n^{epsilon} factor in the running time) for d = (log n)^{Omega_epsilon(1)} dimensions. Additionally, under SETH, we rule out nearly-polynomial factor approximation algorithms running in subquadratic time for the (monochromatic) Maximum Inner Product problem, where we are given a set of n points in n^{o(1)}-dimensional Euclidean space and are required to find a pair of distinct points in the set that maximize the inner product. At the heart of all our proofs is the construction of a dense bipartite graph with low contact dimension, i.e., we construct a balanced bipartite graph on n vertices with n^{2-epsilon} edges whose vertices can be realized as points in a (log n)^{Omega_epsilon(1)}-dimensional Euclidean space such that every pair of vertices which have an edge in the graph are at distance exactly 1 and every other pair of vertices are at distance greater than 1. This graph construction is inspired by the construction of locally dense codes introduced by Dumer-Micciancio-Sudan [IEEE Trans. Inf. Theory'03].
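
For concreteness, here is the quadratic-time brute force for both the monochromatic and the bichromatic variants (an illustrative sketch, not the reductions from the paper); the results above say that, under SETH, this n² behaviour cannot be beaten once the dimension reaches (log n)^{Omega(1)}.

from itertools import combinations

def lp_dist(x, y, p):
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def closest_pair_brute_force(points, p=2):
    """Monochromatic Closest Pair by exhaustive comparison: O(n^2 d) time."""
    return min(combinations(points, 2), key=lambda pair: lp_dist(*pair, p))

def bichromatic_closest_pair(reds, blues, p=2):
    """Bichromatic variant: closest red-blue pair, also O(n^2 d) naively."""
    return min(((r, b) for r in reds for b in blues),
               key=lambda pair: lp_dist(*pair, p))

pts = [(0.0, 0.0), (5.0, 5.0), (0.9, 0.1)]
print(closest_pair_brute_force(pts))              # ((0.0, 0.0), (0.9, 0.1))
print(bichromatic_closest_pair(pts[:1], pts[1:]))  # ((0.0, 0.0), (0.9, 0.1))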

Cite as

Karthik C. S. and Pasin Manurangsi. On Closest Pair in Euclidean Metric: Monochromatic is as Hard as Bichromatic. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 17:1-17:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{c.s._et_al:LIPIcs.ITCS.2019.17,
  author =	{C. S., Karthik and Manurangsi, Pasin},
  title =	{{On Closest Pair in Euclidean Metric: Monochromatic is as Hard as Bichromatic}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{17:1--17:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.17},
  URN =		{urn:nbn:de:0030-drops-101100},
  doi =		{10.4230/LIPIcs.ITCS.2019.17},
  annote =	{Keywords: Closest Pair, Bichromatic Closest Pair, Contact Dimension, Fine-Grained Complexity}
}
Document
Towards a General Direct Product Testing Theorem

Authors: Elazar Goldenberg and Karthik C. S.

Published in: LIPIcs, Volume 122, 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)


Abstract
The Direct Product encoding of a string a in {0,1}^n on an underlying domain V subseteq ([n] choose k) is a function DP_V(a) which gets as input a set S in V and outputs a restricted to S. In the Direct Product Testing Problem, we are given a function F: V -> {0,1}^k, and our goal is to test whether F is close to a direct product encoding, i.e., whether there exists some a in {0,1}^n such that on most sets S, we have F(S) = DP_V(a)(S). A natural test is as follows: select a pair (S,S') in V according to some underlying distribution over V x V, query F on this pair, and check for consistency on their intersection. Note that the above distribution may be viewed as a weighted graph over the vertex set V and is referred to as a test graph. The testability of direct products was studied over various domains and test graphs: Dinur and Steurer (CCC '14) analyzed it when V equals the k-th slice of the Boolean hypercube and the test graph is a member of the Johnson graph family. Dinur and Kaufman (FOCS '17) analyzed it for the case where V is the set of faces of a Ramanujan complex, where in this case |V| = O_k(n). In this paper, we study the testability of direct products in a general setting, addressing the question: what properties of the domain and the test graph allow one to prove a direct product testing theorem? Towards this goal we introduce the notion of coordinate expansion of a test graph. Roughly speaking, a test graph is a coordinate expander if it has global and local expansion, and has certain nice intersection properties on sampling. We show that whenever the test graph has coordinate expansion then it admits a direct product testing theorem. Additionally, for every k and n we provide a direct product domain V subseteq ([n] choose k) of size n, called the Sliding Window domain, for which we prove direct product testability.
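
The following toy Python sketch (ours, for illustration; the uniform choice of pairs merely stands in for a test-graph distribution) spells out the direct product encoding and the natural two-query consistency test described above, on a small domain of consecutive windows in the spirit of the Sliding Window domain.

import random

def dp_encode(a, V):
    """Direct Product encoding: for each set S in V, record a restricted to S."""
    return {S: tuple(a[i] for i in S) for S in V}

def consistency_test(F, V, trials=100, rng=random.Random(0)):
    """Two-query test: sample a pair (S, S') (here uniformly, as a stand-in
    for the test-graph distribution), query F on both, and accept only if
    the answers agree on S ∩ S'."""
    V = list(V)
    for _ in range(trials):
        S, S2 = rng.choice(V), rng.choice(V)
        valS, valS2 = dict(zip(S, F[S])), dict(zip(S2, F[S2]))
        if any(valS[i] != valS2[i] for i in set(S) & set(S2)):
            return False
    return True

# Domain of 3-element windows over n = 6 coordinates.
V = [tuple(range(i, i + 3)) for i in range(4)]
a = (1, 0, 1, 1, 0, 0)
F = dp_encode(a, V)            # a genuine encoding always passes
print(consistency_test(F, V))  # True
F[(1, 2, 3)] = (9, 9, 9)       # corrupt one entry
print(consistency_test(F, V))  # almost surely False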

Cite as

Elazar Goldenberg and Karthik C. S.. Towards a General Direct Product Testing Theorem. In 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 122, pp. 11:1-11:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{goldenberg_et_al:LIPIcs.FSTTCS.2018.11,
  author =	{Goldenberg, Elazar and C. S., Karthik},
  title =	{{Towards a General Direct Product Testing Theorem}},
  booktitle =	{38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)},
  pages =	{11:1--11:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-093-4},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{122},
  editor =	{Ganguly, Sumit and Pandya, Paritosh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2018.11},
  URN =		{urn:nbn:de:0030-drops-99105},
  doi =		{10.4230/LIPIcs.FSTTCS.2018.11},
  annote =	{Keywords: Property Testing, Direct Product, PCP, Johnson graph, Ramanujan Complex, Derandomization}
}
Document
Communication Complexity of Correlated Equilibrium with Small Support

Authors: Anat Ganor and Karthik C. S.

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We define a two-player N x N game, called the 2-cycle game, which has a unique pure Nash equilibrium that is also the only correlated equilibrium of the game. In this game, every 1/poly(N)-approximate correlated equilibrium is concentrated on the pure Nash equilibrium. We show that the randomized communication complexity of finding any 1/poly(N)-approximate correlated equilibrium of the game is Omega(N). For small approximation values, our lower bound answers an open question of Babichenko and Rubinstein (STOC 2017).

Cite as

Anat Ganor and Karthik C. S.. Communication Complexity of Correlated Equilibrium with Small Support. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{ganor_et_al:LIPIcs.APPROX-RANDOM.2018.12,
  author =	{Ganor, Anat and C. S., Karthik},
  title =	{{Communication Complexity of Correlated Equilibrium with Small Support}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{12:1--12:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.12},
  URN =		{urn:nbn:de:0030-drops-94163},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.12},
  annote =	{Keywords: Correlated equilibrium, Nash equilibrium, Communication complexity}
}
Document
Parameterized Intractability of Even Set and Shortest Vector Problem from Gap-ETH

Authors: Arnab Bhattacharyya, Suprovat Ghoshal, Karthik C. S., and Pasin Manurangsi

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
The k-Even Set problem is a parameterized variant of the Minimum Distance Problem of linear codes over F_2, which can be stated as follows: given a generator matrix A and an integer k, determine whether the code generated by A has distance at most k. Here, k is the parameter of the problem. The question of whether k-Even Set is fixed-parameter tractable (FPT) has been repeatedly raised in the literature and has earned its place in Downey and Fellows' book (2013) as one of the "most infamous" open problems in the field of Parameterized Complexity. In this work, we show that k-Even Set does not admit FPT algorithms under the (randomized) Gap Exponential Time Hypothesis (Gap-ETH) [Dinur'16, Manurangsi-Raghavendra'16]. In fact, our result rules out not only exact FPT algorithms, but also any constant factor FPT approximation algorithms for the problem. Furthermore, our result holds even under the following weaker assumption, which is also known as the Parameterized Inapproximability Hypothesis (PIH) [Lokshtanov et al.'17]: no (randomized) FPT algorithm can distinguish a satisfiable 2CSP instance from one which is only 0.99-satisfiable (where the parameter is the number of variables). We also consider the parameterized k-Shortest Vector Problem (SVP), in which we are given a lattice whose basis vectors are integral and an integer k, and the goal is to determine whether the norm of the shortest vector (in the l_p norm for some fixed p) is at most k. Similar to k-Even Set, this problem is also a long-standing open problem in the field of Parameterized Complexity. We show that, for any p > 1, k-SVP is hard to approximate (in FPT time) to some constant factor, assuming PIH. Furthermore, for the case of p = 2, the inapproximability factor can be amplified to any constant.
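
To unpack the problem definition, the sketch below (illustrative only, unrelated to the hardness proof) computes the minimum distance of a binary linear code from its generator matrix by exhaustive search over message vectors; k-Even Set asks whether this quantity is at most the parameter k.

from itertools import product

def minimum_distance(generator_rows):
    """Minimum Hamming weight of a nonzero codeword of the binary code
    generated by the given rows (each row a tuple of 0/1 entries).
    Exhaustive over all 2^k - 1 nonzero messages."""
    k, n = len(generator_rows), len(generator_rows[0])
    best = n + 1
    for message in product((0, 1), repeat=k):
        if not any(message):
            continue
        codeword = [sum(m * row[j] for m, row in zip(message, generator_rows)) % 2
                    for j in range(n)]
        best = min(best, sum(codeword))
    return best

# Generator of the [4,2] code {0000, 1100, 0011, 1111}: distance 2.
print(minimum_distance([(1, 1, 0, 0), (0, 0, 1, 1)]))  # 2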

Cite as

Arnab Bhattacharyya, Suprovat Ghoshal, Karthik C. S., and Pasin Manurangsi. Parameterized Intractability of Even Set and Shortest Vector Problem from Gap-ETH. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 17:1-17:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bhattacharyya_et_al:LIPIcs.ICALP.2018.17,
  author =	{Bhattacharyya, Arnab and Ghoshal, Suprovat and C. S., Karthik and Manurangsi, Pasin},
  title =	{{Parameterized Intractability of Even Set and Shortest Vector Problem from Gap-ETH}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{17:1--17:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.17},
  URN =		{urn:nbn:de:0030-drops-90214},
  doi =		{10.4230/LIPIcs.ICALP.2018.17},
  annote =	{Keywords: Parameterized Complexity, Inapproximability, Even Set, Minimum Distance Problem, Shortest Vector Problem, Gap-ETH}
}
Document
ARRIVAL: Next Stop in CLS

Authors: Bernd Gärtner, Thomas Dueholm Hansen, Pavel Hubácek, Karel Král, Hagar Mosaad, and Veronika Slívová

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We study the computational complexity of Arrival, a zero-player game on n-vertex switch graphs introduced by Dohrau, Gärtner, Kohler, Matousek, and Welzl. They showed that the problem of deciding termination of this game is contained in NP ∩ coNP. Karthik C. S. recently introduced a search variant of Arrival and showed that it is in the complexity class PLS. In this work, we significantly improve the known upper bounds for both the decision and the search variants of Arrival. First, we resolve a question suggested by Dohrau et al. and show that the decision variant of Arrival is in UP ∩ coUP. Second, we prove that the search variant of Arrival is contained in CLS. Third, we give a randomized O(1.4143^n)-time algorithm to solve both variants. Our main technical contributions are (a) an efficiently verifiable characterization of the unique witness for termination of the Arrival game, and (b) an efficient way of sampling from the state space of the game. We show that the problem of finding the unique witness is contained in CLS, whereas it was previously conjectured to be FPSPACE-complete. The efficient sampling procedure yields the first algorithm for the problem that has expected runtime O(c^n) with c < 2.
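
For readers unfamiliar with Arrival, the following sketch (our illustration, not the paper's algorithm) simply simulates the run procedure on a switch graph given by two successor maps; the simulation can take exponentially many steps, which is why the membership results and the O(1.4143^n)-time algorithm above are nontrivial. The step budget and interface are our own choices.

def arrival_reaches_destination(even_succ, odd_succ, origin, dest, max_steps=10**6):
    """Simulate the Arrival run: the train leaves each vertex along that
    vertex's current switch (initially the 'even' edge), and the switch
    then flips.  Returns whether `dest` was reached within `max_steps`,
    together with the number of steps taken."""
    switch = {v: 0 for v in even_succ}   # 0 = take even edge next, 1 = odd
    v, steps = origin, 0
    while v != dest and steps < max_steps:
        nxt = even_succ[v] if switch[v] == 0 else odd_succ[v]
        switch[v] ^= 1
        v, steps = nxt, steps + 1
    return v == dest, steps

# Tiny switch graph on vertices 0..2 with destination 2.
even = {0: 1, 1: 0, 2: 2}
odd  = {0: 0, 1: 2, 2: 2}
print(arrival_reaches_destination(even, odd, origin=0, dest=2))  # (True, 5)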

Cite as

Bernd Gärtner, Thomas Dueholm Hansen, Pavel Hubácek, Karel Král, Hagar Mosaad, and Veronika Slívová. ARRIVAL: Next Stop in CLS. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 60:1-60:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{gartner_et_al:LIPIcs.ICALP.2018.60,
  author =	{G\"{a}rtner, Bernd and Hansen, Thomas Dueholm and Hub\'{a}cek, Pavel and Kr\'{a}l, Karel and Mosaad, Hagar and Sl{\'\i}vov\'{a}, Veronika},
  title =	{{ARRIVAL: Next Stop in CLS}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{60:1--60:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.60},
  URN =		{urn:nbn:de:0030-drops-90641},
  doi =		{10.4230/LIPIcs.ICALP.2018.60},
  annote =	{Keywords: CLS, switch graphs, zero-player game, UP ∩ coUP}
}
Document
On the Complexity of Closest Pair via Polar-Pair of Point-Sets

Authors: Roee David, Karthik C. S., and Bundit Laekhanukit

Published in: LIPIcs, Volume 99, 34th International Symposium on Computational Geometry (SoCG 2018)


Abstract
Every graph G can be represented by a collection of equi-radii spheres in a d-dimensional metric Delta such that there is an edge uv in G if and only if the spheres corresponding to u and v intersect. The smallest integer d such that G can be represented by a collection of spheres (all of the same radius) in Delta is called the sphericity of G, and if the collection of spheres are non-overlapping, then the value d is called the contact-dimension of G. In this paper, we study the sphericity and contact dimension of the complete bipartite graph K_{n,n} in various L^p-metrics and consequently connect the complexity of the monochromatic closest pair and bichromatic closest pair problems.
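
As an illustration of the definitions (not taken from the paper), the sketch below checks whether a proposed placement of equal-radius spheres realizes a given graph, i.e., whether two spheres intersect exactly when the corresponding vertices are adjacent. The example graph and radii are our own toy choices.

from itertools import combinations

def is_sphere_representation(edges, centers, radius, p=2):
    """Check the defining property of an equal-radius sphere representation:
    vertices u, v are adjacent iff the spheres of the given radius centred
    at centers[u] and centers[v] intersect, i.e. their centre distance is
    at most 2 * radius in the chosen L^p metric."""
    def dist(x, y):
        return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)
    edge_set = {frozenset(e) for e in edges}
    for u, v in combinations(centers, 2):
        intersects = dist(centers[u], centers[v]) <= 2 * radius
        if intersects != (frozenset((u, v)) in edge_set):
            return False
    return True

# Path a-b-c realised on a line: works with radius 0.5, fails with radius 1.0.
centers = {"a": (0.0,), "b": (1.0,), "c": (2.0,)}
edges = [("a", "b"), ("b", "c")]
print(is_sphere_representation(edges, centers, radius=0.5))  # True
print(is_sphere_representation(edges, centers, radius=1.0))  # False (a, c spheres intersect)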

Cite as

Roee David, Karthik C. S., and Bundit Laekhanukit. On the Complexity of Closest Pair via Polar-Pair of Point-Sets. In 34th International Symposium on Computational Geometry (SoCG 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 99, pp. 28:1-28:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{david_et_al:LIPIcs.SoCG.2018.28,
  author =	{David, Roee and C. S., Karthik and Laekhanukit, Bundit},
  title =	{{On the Complexity of Closest Pair via Polar-Pair of Point-Sets}},
  booktitle =	{34th International Symposium on Computational Geometry (SoCG 2018)},
  pages =	{28:1--28:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-066-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{99},
  editor =	{Speckmann, Bettina and T\'{o}th, Csaba D.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2018.28},
  URN =		{urn:nbn:de:0030-drops-87412},
  doi =		{10.4230/LIPIcs.SoCG.2018.28},
  annote =	{Keywords: Contact dimension, Sphericity, Closest Pair, Fine-Grained Complexity}
}
Document
Ham Sandwich is Equivalent to Borsuk-Ulam

Authors: Karthik C. S. and Arpan Saha

Published in: LIPIcs, Volume 77, 33rd International Symposium on Computational Geometry (SoCG 2017)


Abstract
The Borsuk-Ulam theorem is a fundamental result in algebraic topology, with applications to various areas of Mathematics. A classical application of the Borsuk-Ulam theorem is the Ham Sandwich theorem: The volumes of any n compact sets in R^n can always be simultaneously bisected by an (n-1)-dimensional hyperplane. In this paper, we demonstrate the equivalence between the Borsuk-Ulam theorem and the Ham Sandwich theorem. The main technical result we show towards establishing the equivalence is the following: For every odd polynomial restricted to the hypersphere f:S^n->R, there exists a compact set A in R^{n+1}, such that for every x in S^n we have f(x)=vol(A cap H^+) - vol(A cap H^-), where H is the oriented hyperplane containing the origin with x as the normal. A noteworthy aspect of the proof of the above result is the use of hyperspherical harmonics. Finally, using the above result we prove that there exist constants n_0, epsilon_0>0 such that for every n>= n_0 and epsilon <= epsilon_0/sqrt{48n}, any query algorithm to find an epsilon-bisecting (n-1)-dimensional hyperplane of n compact sets in [-n^4.51,n^4.51]^n, even with success probability 2^-Omega(n), requires 2^Omega(n) queries.

Cite as

Karthik C. S. and Arpan Saha. Ham Sandwich is Equivalent to Borsuk-Ulam. In 33rd International Symposium on Computational Geometry (SoCG 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 77, pp. 24:1-24:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{c.s._et_al:LIPIcs.SoCG.2017.24,
  author =	{C. S., Karthik and Saha, Arpan},
  title =	{{Ham Sandwich is Equivalent to Borsuk-Ulam}},
  booktitle =	{33rd International Symposium on Computational Geometry (SoCG 2017)},
  pages =	{24:1--24:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-038-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{77},
  editor =	{Aronov, Boris and Katz, Matthew J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2017.24},
  URN =		{urn:nbn:de:0030-drops-72325},
  doi =		{10.4230/LIPIcs.SoCG.2017.24},
  annote =	{Keywords: Ham Sandwich theorem, Borsuk-Ulam theorem, Query Complexity, Hyperspherical Harmonics}
}
Document
On the Sensitivity Conjecture for Disjunctive Normal Forms

Authors: Karthik C. S. and Sébastien Tavenas

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
The sensitivity conjecture of Nisan and Szegedy [CC'94] asks whether for any Boolean function f, the maximum sensitivity s(f) is polynomially related to its block sensitivity bs(f), and hence to other major complexity measures. Despite major advances in the analysis of Boolean functions over the last decade, the problem remains widely open. In this paper, we consider a restriction on the class of Boolean functions through a model of computation (DNF), and refer to the functions adhering to this restriction as admitting the Normalized Block property. We prove that for any function f admitting the Normalized Block property, bs(f) <= 4 * s(f)^2. We note that (almost) all the functions mentioned in the literature that achieve a quadratic separation between sensitivity and block sensitivity admit the Normalized Block property. Recently, Gopalan et al. [ITCS'16] showed that every Boolean function f is uniquely specified by its values on a Hamming ball of radius at most 2 * s(f). We extend this result and also construct examples of Boolean functions which provide the matching lower bounds.
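
The two measures in the conjecture can be computed by brute force for tiny functions. The following sketch (illustrative only; interface and names are ours) does so for a function given as a Python callable, and reproduces s = bs = 3 for the 3-bit OR, a function expressible as a simple DNF.

from itertools import combinations, product

def flip(x, block):
    """Flip the coordinates of x (a 0/1 tuple) indexed by `block`."""
    return tuple(1 - b if i in block else b for i, b in enumerate(x))

def sensitivity(f, n):
    """s(f): max over x of the number of single coordinates whose flip changes f."""
    return max(sum(f(x) != f(flip(x, {i})) for i in range(n))
               for x in product((0, 1), repeat=n))

def block_sensitivity(f, n):
    """bs(f): max over x of the largest number of pairwise disjoint blocks
    whose flip changes f.  Exhaustive search, intended only for very small n."""
    def best_packing(available, blocks):
        best = 0
        for i, B in enumerate(blocks):
            if B <= available:
                best = max(best, 1 + best_packing(available - B, blocks[i + 1:]))
        return best
    result = 0
    for x in product((0, 1), repeat=n):
        sensitive = [frozenset(B) for r in range(1, n + 1)
                     for B in combinations(range(n), r) if f(x) != f(flip(x, B))]
        result = max(result, best_packing(frozenset(range(n)), sensitive))
    return result

f_or = lambda x: int(any(x))  # OR on 3 bits
print(sensitivity(f_or, 3), block_sensitivity(f_or, 3))  # 3 3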

Cite as

Karthik C. S. and Sébastien Tavenas. On the Sensitivity Conjecture for Disjunctive Normal Forms. In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 15:1-15:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{s._et_al:LIPIcs.FSTTCS.2016.15,
  author =	{S., Karthik C. and Tavenas, S\'{e}bastien},
  title =	{{On the Sensitivity Conjecture for Disjunctive Normal Forms}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{15:1--15:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.15},
  URN =		{urn:nbn:de:0030-drops-68504},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.15},
  annote =	{Keywords: Boolean function, Sensitivity, Block sensitivity, DNF}
}
