63 Search Results for "C. S., Karthik"


The Planted Orthogonal Vectors Problem

Authors: David Kühnemann, Adam Polak, and Alon Rosen

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
In the k-Orthogonal Vectors (k-OV) problem we are given k sets, each containing n binary vectors of dimension d = n^o(1), and our goal is to pick one vector from each set so that at each coordinate at least one vector has a zero. It is a central problem in fine-grained complexity, conjectured to require n^{k-o(1)} time in the worst case. We propose a way to plant a solution among vectors with i.i.d. p-biased entries, for appropriately chosen p, so that the planted solution is the unique one. Our conjecture is that the resulting k-OV instances still require time n^{k-o(1)} to solve, on average. Our planted distribution has the property that any subset of strictly less than k vectors has the same marginal distribution as in the model distribution, consisting of i.i.d. p-biased random vectors. We use this property to give average-case search-to-decision reductions for k-OV.
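To make the problem statement concrete, here is a minimal brute-force sketch of k-OV (the function name and toy instance are ours, not the paper's); it realizes the n^k baseline that the conjecture says cannot be beaten by more than subpolynomial factors:

```python
from itertools import product

def k_ov(sets):
    """Brute-force k-OV: given k sets of binary vectors, return one vector
    per set such that at every coordinate at least one chosen vector is 0,
    or None if no such tuple exists.  O(n^k * d) time for n vectors of
    dimension d per set."""
    d = len(sets[0][0])
    for choice in product(*sets):
        if all(any(v[i] == 0 for v in choice) for i in range(d)):
            return choice
    return None
```

In the planted model the inputs would be i.i.d. p-biased vectors with a single solution planted; the checker above is agnostic to how the instance was drawn.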

Cite as

David Kühnemann, Adam Polak, and Alon Rosen. The Planted Orthogonal Vectors Problem. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 95:1-95:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{kuhnemann_et_al:LIPIcs.ESA.2025.95,
  author =	{K\"{u}hnemann, David and Polak, Adam and Rosen, Alon},
  title =	{{The Planted Orthogonal Vectors Problem}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{95:1--95:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.95},
  URN =		{urn:nbn:de:0030-drops-245640},
  doi =		{10.4230/LIPIcs.ESA.2025.95},
  annote =	{Keywords: Average-case complexity, fine-grained complexity, orthogonal vectors}
}
Fine-Grained Classification of Detecting Dominating Patterns

Authors: Jonathan Dransfeld, Marvin Künnemann, and Mirza Redzic

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
We consider the following generalization of dominating sets: Let G be a host graph and let P be a pattern graph. A dominating P-pattern in G is a subset S of vertices in G that (1) forms a dominating set in G and (2) induces a subgraph isomorphic to P. The graph theory literature studies the properties of dominating P-patterns for various patterns P, including cliques, matchings, independent sets, cycles and paths. Previous work (Künnemann, Redzic 2024) obtains algorithms and conditional lower bounds for detecting dominating P-patterns, particularly for P being a k-clique, a k-independent set and a k-matching. Their results give conditionally tight lower bounds if k is sufficiently large (where the bound depends on the matrix multiplication exponent ω). We ask: Can we obtain a classification of the fine-grained complexity for all patterns P? Indeed, we define a graph parameter ρ(P) such that if ω = 2, then (n^ρ(P) m^{(|V(P)|-ρ(P))/2})^{1±o(1)} is the optimal running time assuming the Orthogonal Vectors Hypothesis, for all patterns P except the triangle K₃. Here, the host graph G has n vertices and m = Θ(n^α) edges, where 1 ≤ α ≤ 2. The parameter ρ(P) is closely related to (but sometimes differs from) a parameter δ(P) = max_{S ⊆ V(P)} (|S|-|N(S)|) studied in (Alon 1981) to tightly quantify the maximum number of occurrences of induced subgraphs isomorphic to P. Our results stand in contrast to the lack of a full fine-grained classification of detecting an arbitrary (not necessarily dominating) induced P-pattern.
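The two conditions defining a dominating P-pattern can be checked directly by brute force; the following sketch (names and toy graphs are ours) tests a candidate vertex subset against both conditions:

```python
from itertools import permutations

def is_dominating_pattern(adj, S, P_adj):
    """Check that S (a tuple of host-graph vertices; `adj` maps each
    vertex to its neighbour set) is a dominating P-pattern:
    (1) every vertex is in S or has a neighbour in S, and
    (2) the subgraph induced by S is isomorphic to the pattern `P_adj`
        (brute force over all vertex bijections)."""
    n = len(adj)
    Sset = set(S)
    if not all(v in Sset or adj[v] & Sset for v in range(n)):
        return False                      # not a dominating set
    k = len(S)
    if len(P_adj) != k:
        return False
    for perm in permutations(range(k)):   # try every bijection S -> V(P)
        if all((S[j] in adj[S[i]]) == (perm[j] in P_adj[perm[i]])
               for i in range(k) for j in range(i + 1, k)):
            return True
    return False
```

For example, on the path 0-1-2-3 the middle edge {1,2} is a dominating K₂-pattern, while the dominating set {0,3} is not (it induces no edge).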

Cite as

Jonathan Dransfeld, Marvin Künnemann, and Mirza Redzic. Fine-Grained Classification of Detecting Dominating Patterns. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 98:1-98:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{dransfeld_et_al:LIPIcs.ESA.2025.98,
  author =	{Dransfeld, Jonathan and K\"{u}nnemann, Marvin and Redzic, Mirza},
  title =	{{Fine-Grained Classification of Detecting Dominating Patterns}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{98:1--98:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.98},
  URN =		{urn:nbn:de:0030-drops-245679},
  doi =		{10.4230/LIPIcs.ESA.2025.98},
  annote =	{Keywords: fine-grained complexity theory, domination in graphs, subgraph isomorphism, classification theorem, parameterized algorithms}
}
Core-Sparse Monge Matrix Multiplication: Improved Algorithm and Applications

Authors: Paweł Gawrychowski, Egor Gorbachev, and Tomasz Kociumaka

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
Min-plus matrix multiplication is a fundamental tool for designing algorithms that operate on distances in graphs and for various problems solvable by dynamic programming. We know that, assuming the APSP hypothesis, no subcubic-time algorithm exists for the case of general matrices. However, in many applications the matrices admit certain structural properties that can be used to design faster algorithms. For example, when considering a planar graph, one often works with a Monge matrix A, meaning that the density matrix A^◻ has non-negative entries, that is, A^◻_{i,j} := A_{i+1,j} + A_{i,j+1} - A_{i,j} - A_{i+1,j+1} ≥ 0. The min-plus product of two n×n Monge matrices can be computed in 𝒪(n²) time using the famous SMAWK algorithm. In applications such as longest common subsequence, edit distance, and longest increasing subsequence, the matrices are even more structured, as observed by Tiskin [J. Discrete Algorithms, 2008]: they are (or can be converted to) simple unit-Monge matrices, meaning that the density matrix is a permutation matrix and, furthermore, the first column and the last row of the matrix consist of only zeroes. Such matrices admit an implicit representation of size 𝒪(n) and, as shown by Tiskin [SODA 2010 & Algorithmica, 2015], their min-plus product can be computed in 𝒪(nlog n) time. Russo [SPIRE 2010 & Theor. Comput. Sci., 2012] identified a general structural property of matrices that admit such efficient representation and min-plus multiplication algorithms: the core size δ, defined as the number of non-zero entries in the density matrices of the input and output matrices. He provided an adaptive implementation of the SMAWK algorithm that runs in 𝒪((n+δ)log³ n) or 𝒪((n+δ)log² n) time (depending on the representation of the input matrices). In this work, we further investigate the core size as the parameter that enables efficient min-plus matrix multiplication. 
On the combinatorial side, we provide a (linear) bound on the core size of the product matrix in terms of the core sizes of the input matrices. On the algorithmic side, we generalize Tiskin’s algorithm (but, arguably, with a more elementary analysis) to solve the core-sparse Monge matrix multiplication problem in 𝒪(n+δlog δ) ⊆ 𝒪(n + δ log n) time, matching the complexity for simple unit-Monge matrices. As witnessed by the recent work of Gorbachev and Kociumaka [STOC'25] for edit distance with integer weights, our generalization opens up the possibility of speed-ups for weighted sequence alignment problems. Furthermore, our multiplication algorithm is also capable of producing an efficient data structure for recovering the witness for any given entry of the output matrix. This allows us, for example, to preprocess an integer array of size n in Õ(n) time so that the longest increasing subsequence of any sub-array can be reconstructed in Õ(𝓁) time, where 𝓁 is the length of the reported subsequence. In comparison, Karthik C. S. and Rahul [arXiv, 2024] recently achieved 𝒪(𝓁+n^{1/2}polylog n)-time reporting after 𝒪(n^{3/2}polylog n)-time preprocessing.
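The Monge condition and the min-plus product can be stated directly in code; this naive 𝒪(n³) baseline (function names are ours) is the point of comparison for SMAWK's 𝒪(n²) and the paper's core-sparse bound:

```python
def is_monge(A):
    """A is Monge iff every density entry
    A[i+1][j] + A[i][j+1] - A[i][j] - A[i+1][j+1] is non-negative."""
    n, m = len(A), len(A[0])
    return all(A[i+1][j] + A[i][j+1] - A[i][j] - A[i+1][j+1] >= 0
               for i in range(n - 1) for j in range(m - 1))

def min_plus(A, B):
    """Naive O(n^3) min-plus product: (A @ B)[i][j] = min_k A[i][k] + B[k][j]."""
    n, r, m = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(r)) for j in range(m)]
            for i in range(n)]
```

A classic Monge example is A[i][j] = (i-j)²; the min-plus product of two Monge matrices is again Monge, which the test below checks on that instance.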

Cite as

Paweł Gawrychowski, Egor Gorbachev, and Tomasz Kociumaka. Core-Sparse Monge Matrix Multiplication: Improved Algorithm and Applications. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 74:1-74:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{gawrychowski_et_al:LIPIcs.ESA.2025.74,
  author =	{Gawrychowski, Pawe{\l} and Gorbachev, Egor and Kociumaka, Tomasz},
  title =	{{Core-Sparse Monge Matrix Multiplication: Improved Algorithm and Applications}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{74:1--74:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.74},
  URN =		{urn:nbn:de:0030-drops-245427},
  doi =		{10.4230/LIPIcs.ESA.2025.74},
  annote =	{Keywords: Min-plus matrix multiplication, Monge matrix, longest increasing subsequence}
}
Faster Exponential Algorithms for Cut Problems via Geometric Data Structures

Authors: László Kozma and Junqi Tan

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
For many hard computational problems, simple algorithms that run in time 2ⁿ ⋅ n^O(1) arise, say, from enumerating all subsets of a size-n set. Finding (exponentially) faster algorithms is a natural goal that has driven much of the field of exact exponential algorithms (e.g., see Fomin and Kratsch, 2010). In this paper we obtain algorithms with running time O(1.9999977ⁿ) on input graphs with n vertices, for the following well-studied problems: - d-Cut: find a proper cut in which no vertex has more than d neighbors on the other side of the cut; - Internal Partition: find a proper cut in which every vertex has at least as many neighbors on its side of the cut as on the other side; and - (α,β)-Domination: given intervals α,β ⊆ [0,n], find a subset S of the vertices, so that for every vertex v ∈ S the number of neighbors of v in S is from α and for every vertex v ∉ S, the number of neighbors of v in S is from β. Our algorithms are exceedingly simple, combining the split and list technique (Horowitz and Sahni, 1974; Williams, 2005) with a tool from computational geometry: orthogonal range searching in the moderate dimensional regime (Chan, 2017). Our technique is applicable to the decision, optimization and counting versions of these problems and easily extends to various generalizations with more fine-grained, vertex-specific constraints, as well as to directed, balanced, and other variants. Algorithms with running times of the form cⁿ, for c < 2, were known for the first problem only for constant d, and for the third problem for certain special cases of α and β; for the second problem we are not aware of such results.
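As a concrete statement of the first problem, here is the 2ⁿ · n^O(1) baseline for d-Cut that the paper's split-and-list + range-searching technique improves upon (names and the toy instances are ours):

```python
from itertools import combinations

def has_d_cut(adj, d):
    """Brute force over all proper cuts (S, V \\ S): return True iff some
    cut leaves every vertex with at most d neighbours on the other side.
    This is the 2^n baseline; the paper achieves O(1.9999977^n)."""
    n = len(adj)
    for r in range(1, n):                       # proper: both sides non-empty
        for S in map(set, combinations(range(n), r)):
            across = lambda v: adj[v] - S if v in S else adj[v] & S
            if all(len(across(v)) <= d for v in range(n)):
                return True
    return False
```

On the 4-cycle, splitting into two opposite edges gives every vertex exactly one neighbour across the cut, so a 1-cut exists; a single edge admits no 0-cut.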

Cite as

László Kozma and Junqi Tan. Faster Exponential Algorithms for Cut Problems via Geometric Data Structures. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 110:1-110:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{kozma_et_al:LIPIcs.ESA.2025.110,
  author =	{Kozma, L\'{a}szl\'{o} and Tan, Junqi},
  title =	{{Faster Exponential Algorithms for Cut Problems via Geometric Data Structures}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{110:1--110:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.110},
  URN =		{urn:nbn:de:0030-drops-245796},
  doi =		{10.4230/LIPIcs.ESA.2025.110},
  annote =	{Keywords: graph algorithms, cuts, exponential time, data structures}
}
Hardness of Median and Center in the Ulam Metric

Authors: Nick Fischer, Elazar Goldenberg, Mursalin Habib, and Karthik C. S.

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
The classical rank aggregation problem seeks to combine a set X of n permutations into a single representative "consensus" permutation. In this paper, we investigate two fundamental rank aggregation tasks under the well-studied Ulam metric: computing a median permutation (which minimizes the sum of Ulam distances to X) and computing a center permutation (which minimizes the maximum Ulam distance to X) in two settings. - Continuous Setting: In the continuous setting, the median/center is allowed to be any permutation. It is known that computing a center in the Ulam metric is NP-hard and we add to this by showing that computing a median is NP-hard as well via a simple reduction from the Max-Cut problem. While this result may not be unexpected, it had remained elusive until now and confirms a speculation by Chakraborty, Das, and Krauthgamer [SODA '21]. - Discrete Setting: In the discrete setting, the median/center must be a permutation from the input set. We fully resolve the fine-grained complexity of the discrete median and discrete center problems under the Ulam metric, proving that the naive Õ(n² L)-time algorithm (where L is the length of the permutation) is conditionally optimal. This resolves an open problem raised by Abboud, Bateni, Cohen-Addad, Karthik C. S., and Seddighin [APPROX '23]. Our reductions are inspired by the known fine-grained lower bounds for similarity measures, but we face and overcome several new highly technical challenges.
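The Ulam distance between two permutations of length L equals L minus the length of their longest common subsequence, which for permutations reduces to a longest increasing subsequence computation; the discrete median baseline below is the Õ(n² L)-time algorithm the paper proves conditionally optimal (function names are ours):

```python
from bisect import bisect_left

def ulam(x, y):
    """Ulam distance between permutations x, y of the same length:
    L minus the LIS of y rewritten in x's coordinate order."""
    pos = {v: i for i, v in enumerate(x)}
    tails = []                      # patience-sorting LIS in O(L log L)
    for s in (pos[v] for v in y):
        i = bisect_left(tails, s)
        if i == len(tails):
            tails.append(s)
        else:
            tails[i] = s
    return len(x) - len(tails)

def discrete_median(perms):
    """Discrete median: the input permutation minimizing the sum of Ulam
    distances to all inputs, via the naive all-pairs computation."""
    return min(perms, key=lambda p: sum(ulam(p, q) for q in perms))
```

Replacing `sum` with `max` in `discrete_median` gives the discrete center under the same naive bound.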

Cite as

Nick Fischer, Elazar Goldenberg, Mursalin Habib, and Karthik C. S. Hardness of Median and Center in the Ulam Metric. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 111:1-111:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{fischer_et_al:LIPIcs.ESA.2025.111,
  author =	{Fischer, Nick and Goldenberg, Elazar and Habib, Mursalin and Karthik C. S.},
  title =	{{Hardness of Median and Center in the Ulam Metric}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{111:1--111:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.111},
  URN =		{urn:nbn:de:0030-drops-245809},
  doi =		{10.4230/LIPIcs.ESA.2025.111},
  annote =	{Keywords: Ulam distance, median, center, rank aggregation, fine-grained complexity}
}
(Multivariate) k-SUM as Barrier to Succinct Computation

Authors: Geri Gokaj, Marvin Künnemann, Sabine Storandt, and Carina Truschel

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
How does the time complexity of a problem change when the input is given succinctly rather than explicitly? We study this question for several geometric problems defined on a set X of N points in ℤ^d. As succinct representation, we choose a sumset (or Minkowski sum) representation: Instead of receiving X explicitly, we are given sets A,B of n points that define X as A+B = {a+b ∣ a ∈ A, b ∈ B}. We investigate the fine-grained complexity of this succinct version for several Õ(N)-time computable geometric primitives. Remarkably, we can tie their complexity tightly to the complexity of corresponding k-SUM problems. Specifically, we introduce as All-ints 3-SUM(n,n,k) the following multivariate, multi-output variant of 3-SUM: given sets A,B of size n and set C of size k, determine for all c ∈ C whether there are a ∈ A and b ∈ B with a+b = c. We obtain the following results: 1) Succinct closest L_∞-pair requires time N^{1-o(1)} under the 3-SUM hypothesis, while succinct furthest L_∞-pair can be solved in time Õ(n). 2) Succinct bichromatic closest L_∞-pair requires time N^{1-o(1)} iff the 4-SUM hypothesis holds. 3) The following problems are fine-grained equivalent to All-ints 3-SUM(n,n,k): succinct skyline computation in 2D with output size k and succinct batched orthogonal range search with k given ranges. This establishes conditionally tight Õ(min{nk, N})-time algorithms for these problems. We obtain further connections with All-ints 3-SUM(n,n,k) for succinctly computing independent sets in unit interval graphs. Thus, (Multivariate) k-SUM problems precisely capture the barrier for enabling sumset-succinct computation for various geometric primitives.
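The All-ints 3-SUM(n,n,k) variant is simple to state in code; hashing the sumset gives the 𝒪(n² + k) baseline against which the paper's equivalences are tight (function name is ours):

```python
def all_ints_3sum(A, B, C):
    """All-ints 3-SUM(n,n,k): for every c in C, decide whether a + b = c
    for some a in A, b in B.  Materializing the sumset A+B in a hash set
    gives O(n^2 + k) time -- note the sumset may have up to n^2 = N
    elements, which is exactly the succinctness gap the paper studies."""
    sums = {a + b for a in A for b in B}
    return [c in sums for c in C]
```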

Cite as

Geri Gokaj, Marvin Künnemann, Sabine Storandt, and Carina Truschel. (Multivariate) k-SUM as Barrier to Succinct Computation. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 42:1-42:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{gokaj_et_al:LIPIcs.ESA.2025.42,
  author =	{Gokaj, Geri and K\"{u}nnemann, Marvin and Storandt, Sabine and Truschel, Carina},
  title =	{{(Multivariate) k-SUM as Barrier to Succinct Computation}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{42:1--42:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.42},
  URN =		{urn:nbn:de:0030-drops-245101},
  doi =		{10.4230/LIPIcs.ESA.2025.42},
  annote =	{Keywords: Fine-grained complexity theory, sumsets, additive combinatorics, succinct inputs, computational geometry}
}
Parameterized Approximability for Modular Linear Equations

Authors: Konrad K. Dabrowski, Peter Jonsson, Sebastian Ordyniak, George Osipov, and Magnus Wahlström

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
We consider the Min-r-Lin(ℤ_m) problem: given a system S of length-r linear equations modulo m, find Z ⊆ S of minimum cardinality such that S-Z is satisfiable. The problem is NP-hard and UGC-hard to approximate in polynomial time within any constant factor even when r = m = 2. We focus on parameterized approximation with solution size as the parameter. Dabrowski, Jonsson, Ordyniak, Osipov and Wahlström [SODA-2023] showed that Min-r-Lin(ℤ_m) is in FPT if m is prime (i.e. ℤ_m is a field), and it is W[1]-hard if m is not a prime power. We show that Min-r-Lin(ℤ_{pⁿ}) is FPT-approximable within a factor of 2 for every prime p and integer n ≥ 2. This implies that Min-2-Lin(ℤ_m), m ∈ ℤ^+, is FPT-approximable within a factor of 2ω(m) where ω(m) counts the number of distinct prime divisors of m. The high-level idea behind the algorithm is to solve tighter and tighter relaxations of the problem, decreasing the set of possible values for the variables at each step. When working over ℤ_{pⁿ} and viewing the values in base-p, one can roughly think of a relaxation as fixing the number of trailing zeros and the least significant nonzero digits of the values assigned to the variables. To solve the relaxed problem, we construct a certain graph where solutions can be identified with a particular collection of cuts. The relaxation may hide obstructions that will only become visible in the next iteration of the algorithm, which makes it difficult to find optimal solutions. To deal with this, we use a strategy based on shadow removal [Marx & Razgon, STOC-2011] to compute solutions that (1) cost at most twice as much as the optimum and (2) allow us to reduce the set of values for all variables simultaneously. We complement the algorithmic result with two lower bounds, ruling out constant-factor FPT-approximation for Min-3-Lin(R) over any nontrivial ring R and for Min-2-Lin(R) over some finite commutative rings R.
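For reference, the Min-r-Lin(ℤ_m) objective can be stated as a (highly inefficient) exact baseline: try deletion sets in order of size and test satisfiability by exhausting assignments. This sketch (names and encoding are ours) is exponential in both the parameter and the number of variables, in contrast to the paper's FPT-approximation:

```python
from itertools import combinations, product

def min_r_lin(eqs, m, nvars):
    """Exact Min-r-Lin(Z_m) by brute force.  Each equation is
    (coeffs, rhs) with coeffs a length-nvars tuple over Z_m; return the
    minimum number of equations to delete so the rest is satisfiable."""
    def satisfiable(kept):
        return any(all(sum(c * xi for c, xi in zip(coeffs, x)) % m == rhs
                       for coeffs, rhs in kept)
                   for x in product(range(m), repeat=nvars))
    n = len(eqs)
    for z in range(n + 1):                 # solution size = parameter
        if any(satisfiable([eqs[i] for i in range(n) if i not in removed])
               for removed in combinations(range(n), z)):
            return z
    return n
```

For example, over ℤ₂ the system {x = 0, x = 1, x = 1} needs exactly one deletion.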

Cite as

Konrad K. Dabrowski, Peter Jonsson, Sebastian Ordyniak, George Osipov, and Magnus Wahlström. Parameterized Approximability for Modular Linear Equations. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 88:1-88:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{dabrowski_et_al:LIPIcs.ESA.2025.88,
  author =	{Dabrowski, Konrad K. and Jonsson, Peter and Ordyniak, Sebastian and Osipov, George and Wahlstr\"{o}m, Magnus},
  title =	{{Parameterized Approximability for Modular Linear Equations}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{88:1--88:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.88},
  URN =		{urn:nbn:de:0030-drops-245562},
  doi =		{10.4230/LIPIcs.ESA.2025.88},
  annote =	{Keywords: parameterized complexity, approximation algorithms, linear equations}
}
RANDOM
Avoiding Range via Turan-Type Bounds

Authors: Neha Kuntewar and Jayalal Sarma

Published in: LIPIcs, Volume 353, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)


Abstract
Given a circuit C : {0,1}^n → {0,1}^m from a circuit class 𝒞, with m > n, finding a y ∈ {0,1}^m such that ∀ x ∈ {0,1}ⁿ, C(x) ≠ y, is the range avoidance problem (denoted by C-Avoid). Deterministic polynomial-time algorithms (even with access to NP oracles) solving this problem are known to imply explicit constructions of various pseudorandom objects like hard Boolean functions, linear codes, PRGs etc. Deterministic polynomial-time algorithms are known for NC⁰₂-Avoid when m > n, and for NC⁰₃-Avoid when m ≥ n²/log n, where NC⁰_k is the class of bounded fan-in, constant-depth circuits in which each output depends on at most k of the input bits. On the other hand, it is also known that NC⁰₃-Avoid when m = n+O(n^{2/3}) is at least as hard as explicit construction of rigid matrices. In fact, algorithms for solving range avoidance for even NC⁰₄ circuits imply new circuit lower bounds. In this paper, we propose a new approach to solving the range avoidance problem via hypergraphs. We formulate the problem in terms of Turán-type problems in hypergraphs of the following kind: for a fixed k-uniform hypergraph H, what is the maximum number of edges that can exist in a hypergraph that does not have a sub-hypergraph isomorphic to H? We show the following: - We first demonstrate the applicability of this approach by showing alternate proofs of some of the known results for the range avoidance problem using this framework. - We then use our approach to show (using several different hypergraph structures for which Turán-type bounds are known in the literature) that there is a constant c such that Monotone-NC⁰₃-Avoid can be solved in deterministic polynomial time when m > cn². - To improve the stretch constraint to linear, more precisely to m > n, we show a new Turán-type theorem for a hypergraph structure (which we call loose X_{2ℓ}-cycles). More specifically, we prove that any connected 3-uniform linear hypergraph with m > n edges must contain a loose X_{2ℓ}-cycle. This may be of independent interest. - Using this, we show that Monotone-NC⁰₃-Avoid can be solved in deterministic polynomial time when m > n, thus improving the known bounds of NC⁰₃-Avoid for the case of monotone circuits. In contrast, we note that efficient algorithms for solving Monotone-NC⁰₆-Avoid already imply explicit constructions for rigid matrices. - We also generalise our argument to solve the special case of range avoidance for NC⁰_k where each output function computed by the circuit is the majority function on its inputs, where m > n².
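The range avoidance task itself has a trivial exponential-time solution, since m > n guarantees 2^m > 2^n and hence a string outside the range; the point of the problem is doing this in deterministic polynomial time. A brute-force sketch (names are ours, with a toy circuit as the example):

```python
from itertools import product

def avoid(C, n, m):
    """Brute-force range avoidance: given C : {0,1}^n -> {0,1}^m with
    m > n (as a Python function on bit tuples), enumerate the range
    (at most 2^n strings) and return a length-m string outside it.
    Such a string always exists because 2^m > 2^n."""
    image = {C(x) for x in product((0, 1), repeat=n)}
    for y in product((0, 1), repeat=m):
        if y not in image:
            return y
```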

Cite as

Neha Kuntewar and Jayalal Sarma. Avoiding Range via Turan-Type Bounds. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 353, pp. 62:1-62:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{kuntewar_et_al:LIPIcs.APPROX/RANDOM.2025.62,
  author =	{Kuntewar, Neha and Sarma, Jayalal},
  title =	{{Avoiding Range via Turan-Type Bounds}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)},
  pages =	{62:1--62:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-397-3},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{353},
  editor =	{Ene, Alina and Chattopadhyay, Eshan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2025.62},
  URN =		{urn:nbn:de:0030-drops-244281},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2025.62},
  annote =	{Keywords: circuit lower bounds, explicit constructions, range avoidance, linear hypergraphs, Tur\'{a}n number of hypergraphs}
}
On the Complexity of Finding 1-Center Spanning Trees

Authors: Pin-Hsian Lee, Meng-Tsung Tsai, and Hung-Lung Wang

Published in: LIPIcs, Volume 349, 19th International Symposium on Algorithms and Data Structures (WADS 2025)


Abstract
We consider the problem of finding a spanning tree T of a given undirected graph G such that any other spanning tree can be obtained from T by removing k edges and subsequently adding k edges, where k is minimized over all spanning trees of G. We refer to this minimum k as the treeradius of G. Treeradius is an interesting graph parameter with natural interpretations: (1) It is the smallest radius of a Hamming ball centered at an extreme point of the spanning tree polytope that covers the entire polytope. (2) Any graph with bounded treeradius also has bounded treewidth. Consequently, if a problem admits a fixed-parameter algorithm parameterized by treewidth, it also admits a fixed-parameter algorithm parameterized by treeradius. In this paper, we show that computing the exact treeradius for n-vertex graphs requires 2^Ω(n) time under the Exponential Time Hypothesis (ETH) and does not admit a PTAS, with an inapproximability bound of 1153/1152, unless P = NP. This hardness result is surprising, as treeradius has significantly higher ETH complexity compared to analogous problems on shortest path polytopes and star subgraph polytopes.
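The definition of treeradius (min over spanning trees T of the max, over all spanning trees T', of the number of edges to remove from T to reach T') can be evaluated exactly on tiny graphs; this sketch (names are ours) enumerates spanning trees by brute force, which is of course far from the 2^Ω(n) lower bound regime:

```python
from itertools import combinations

def spanning_trees(n, edges):
    """All spanning trees of the graph (n vertices, edge list): every
    connected (n-1)-edge subset is a tree."""
    def connected(T):
        adj = {v: set() for v in range(n)}
        for u, v in T:
            adj[u].add(v); adj[v].add(u)
        seen, stack = {0}, [0]
        while stack:
            for w in adj[stack.pop()] - seen:
                seen.add(w); stack.append(w)
        return len(seen) == n
    return [frozenset(T) for T in combinations(edges, n - 1) if connected(T)]

def treeradius(n, edges):
    """min over T of max over T' of |T \\ T'| (edge swaps from T to T')."""
    trees = spanning_trees(n, edges)
    return min(max(len(T - T2) for T2 in trees) for T in trees)
```

On the 4-cycle, every spanning tree omits exactly one edge, so any two trees differ in one edge and the treeradius is 1.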

Cite as

Pin-Hsian Lee, Meng-Tsung Tsai, and Hung-Lung Wang. On the Complexity of Finding 1-Center Spanning Trees. In 19th International Symposium on Algorithms and Data Structures (WADS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 349, pp. 43:1-43:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{lee_et_al:LIPIcs.WADS.2025.43,
  author =	{Lee, Pin-Hsian and Tsai, Meng-Tsung and Wang, Hung-Lung},
  title =	{{On the Complexity of Finding 1-Center Spanning Trees}},
  booktitle =	{19th International Symposium on Algorithms and Data Structures (WADS 2025)},
  pages =	{43:1--43:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-398-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{349},
  editor =	{Morin, Pat and Oh, Eunjin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WADS.2025.43},
  URN =		{urn:nbn:de:0030-drops-242743},
  doi =		{10.4230/LIPIcs.WADS.2025.43},
  annote =	{Keywords: Treeradius, Spanning tree polytope, Shortest s,t-path polytope}
}
Clustering Point Sets Revisited

Authors: Md. Billal Hossain and Benjamin Raichel

Published in: LIPIcs, Volume 349, 19th International Symposium on Algorithms and Data Structures (WADS 2025)


Abstract
In the sets clustering problem one is given a collection of point sets 𝒫 = {P_1, …, P_m} in ℝ^d, where for any set of k centers in ℝ^d, each P_i is assigned to its nearest center as determined by some local cost function. The goal is then to select a set of k centers to minimize some global cost function of the corresponding local assignment costs. Specifically, we consider either summing or taking the maximum cost over all P_i, where for each P_i the cost of assigning it to a center c is either max_{p ∈ P_i} ‖c-p‖, ∑_{p ∈ P_i} ‖c-p‖, or ∑_{p ∈ P_i} ‖c-p‖². Different combinations of the global and local cost functions naturally generalize the k-center, k-median, and k-means clustering problems. In this paper, we improve the prior results for the natural generalization of k-center, give the first result for the natural generalization of k-means, and give results for generalizations of k-median and k-center which differ from those previously studied.
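The local/global cost combinations are easy to state in code. This sketch (names are ours) restricts centers to a finite candidate set so that a brute-force evaluation is possible; the actual problem allows arbitrary centers in ℝ^d:

```python
from math import dist, inf
from itertools import combinations

# The three local costs for assigning a point set P to a center c:
def cost_max(P, c): return max(dist(c, p) for p in P)
def cost_sum(P, c): return sum(dist(c, p) for p in P)
def cost_sq(P, c):  return sum(dist(c, p) ** 2 for p in P)

def sets_cluster(sets, candidates, k, local=cost_max, glob=max):
    """Brute-force sets clustering over a finite candidate center set:
    choose k candidates minimizing the global cost (max or sum) of
    assigning each P_i to its cheapest chosen center."""
    best = inf
    for centers in combinations(candidates, k):
        g = glob(min(local(P, c) for c in centers) for P in sets)
        best = min(best, g)
    return best
```

Taking `local=cost_max, glob=max` recovers the k-center-style variant; `cost_sq` with `glob=sum` is the k-means-style variant.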

Cite as

Md. Billal Hossain and Benjamin Raichel. Clustering Point Sets Revisited. In 19th International Symposium on Algorithms and Data Structures (WADS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 349, pp. 38:1-38:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{hossain_et_al:LIPIcs.WADS.2025.38,
  author =	{Hossain, Md. Billal and Raichel, Benjamin},
  title =	{{Clustering Point Sets Revisited}},
  booktitle =	{19th International Symposium on Algorithms and Data Structures (WADS 2025)},
  pages =	{38:1--38:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-398-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{349},
  editor =	{Morin, Pat and Oh, Eunjin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WADS.2025.38},
  URN =		{urn:nbn:de:0030-drops-242693},
  doi =		{10.4230/LIPIcs.WADS.2025.38},
  annote =	{Keywords: Clustering, k-center, k-median, k-means}
}
Monotone Bounded-Depth Complexity of Homomorphism Polynomials

Authors: C.S. Bhargav, Shiteng Chen, Radu Curticapean, and Prateek Dwivedi

Published in: LIPIcs, Volume 345, 50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025)


Abstract
For every fixed graph H, it is known that homomorphism counts from H and colorful H-subgraph counts can be determined in O(n^{t+1}) time on n-vertex input graphs G, where t is the treewidth of H. On the other hand, a running time of n^{o(t / log t)} would refute the exponential-time hypothesis. Komarath, Pandey, and Rahul (Algorithmica, 2023) studied algebraic variants of these counting problems, i.e., homomorphism and subgraph polynomials for fixed graphs H. These polynomials are weighted sums over the objects counted above, where each object is weighted by the product of variables corresponding to edges contained in the object. As shown by Komarath et al., the monotone circuit complexity of the homomorphism polynomial for H is Θ(n^{tw(H)+1}). In this paper, we characterize the power of monotone bounded-depth circuits for homomorphism and colorful subgraph polynomials. This leads us to discover a natural hierarchy of graph parameters tw_Δ(H), for fixed Δ ∈ ℕ, which capture the width of tree-decompositions for H when the underlying tree is required to have depth at most Δ. We prove that monotone circuits of product-depth Δ computing the homomorphism polynomial for H require size Θ(n^{tw_Δ(H^{†})+1}), where H^{†} is the graph obtained from H by removing all degree-1 vertices. This allows us to derive an optimal depth hierarchy theorem for monotone bounded-depth circuits through graph-theoretic arguments.
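To pin down the object being computed, the homomorphism polynomial can be evaluated by brute force in n^{|V(H)|} time (far from the Θ(n^{tw(H)+1}) monotone circuit bound, this sketch is ours and purely for illustration); `x[i][j]` plays the role of the variable attached to edge {i,j} of the host graph:

```python
from itertools import product

def hom_poly(H_vertices, H_edges, n, x):
    # Sum over all maps phi: V(H) -> [n] of the product of edge
    # variables x[phi(u)][phi(v)] over the edges {u,v} of H.  Setting
    # x to a 0/1 adjacency matrix recovers the homomorphism count.
    total = 0
    for phi in product(range(n), repeat=len(H_vertices)):
        assign = dict(zip(H_vertices, phi))
        term = 1
        for (u, v) in H_edges:
            term *= x[assign[u]][assign[v]]
        total += term
    return total
```

For example, with H = K₃ and x the adjacency matrix of K₃ this returns 6, the number of homomorphisms from a triangle to itself.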

Cite as

C.S. Bhargav, Shiteng Chen, Radu Curticapean, and Prateek Dwivedi. Monotone Bounded-Depth Complexity of Homomorphism Polynomials. In 50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 345, pp. 19:1-19:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{bhargav_et_al:LIPIcs.MFCS.2025.19,
  author =	{Bhargav, C.S. and Chen, Shiteng and Curticapean, Radu and Dwivedi, Prateek},
  title =	{{Monotone Bounded-Depth Complexity of Homomorphism Polynomials}},
  booktitle =	{50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025)},
  pages =	{19:1--19:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-388-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{345},
  editor =	{Gawrychowski, Pawe{\l} and Mazowiecki, Filip and Skrzypczak, Micha{\l}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2025.19},
  URN =		{urn:nbn:de:0030-drops-241269},
  doi =		{10.4230/LIPIcs.MFCS.2025.19},
  annote =	{Keywords: algebraic complexity, homomorphisms, monotone circuit complexity, bounded-depth circuits, treewidth, pathwidth}
}
Document
A Coupled Reconfiguration Mechanism That Enables Powerful, Pseudoknot-Robust DNA Strand Displacement Devices with 2-Stranded Inputs

Authors: Hope Amber Johnson and Anne Condon

Published in: LIPIcs, Volume 347, 31st International Conference on DNA Computing and Molecular Programming (DNA 31) (2025)


Abstract
DNA strand displacement, a collective name for certain behaviors of short strands of DNA, has been used to build many interesting molecular devices over the past few decades. Among those devices are general implementation schemes for Chemical Reaction Networks, suggesting a place in an abstraction hierarchy for complex molecular programming. However, the possibilities of DNA strand displacement are far from fully explored. On a theoretical level, most DNA strand displacement systems are built out of a few simple motifs, with the space of possible motifs otherwise unexplored. On a practical level, the desire for general, large-scale DNA strand displacement systems is not fulfilled. Those systems that are scalable are not general, and those that are general don't scale up well. We have recently been exploring the space of possibilities for DNA strand displacement systems where all input complexes are made out of at most two strands of DNA. As a test case, we've had an open question of whether such systems can implement general Chemical Reaction Networks, in a way that has a certain set of other desirable properties - reversible, systematic, O(1) toeholds, bimolecular reactions, and correct according to CRN bisimulation - that the state-of-the-art implementations with more than 2-stranded inputs have. Until now we've had a few results that have all but one of those desirable properties, including one based on a novel mechanism we called coupled reconfiguration, but that depended on the physically questionable assumption that pseudoknots cannot occur. We wondered whether the same type of mechanism could be done in a pseudoknot-robust way. In this work we show that in fact, coupled reconfiguration can be done in a pseudoknot-robust way, and this mechanism can implement general Chemical Reaction Networks with all inputs being single strands of DNA. Going further, the same motifs used in this mechanism can implement stacks and surface-based bimolecular reactions. 
Those have been previously studied as part of polymer extensions of the Chemical Reaction Network model, and on an abstract model level, the resulting extensions are Turing-complete in ways the base Chemical Reaction Network model is not. Our mechanisms are significantly different from previously tested DNA strand displacement systems, which raises questions about their ability to be implemented experimentally, but we have some reasons to believe the challenges are solvable. So we present the pseudoknot-robust coupled reconfiguration mechanism and its use for general Chemical Reaction Network implementations; we present the extensions of the mechanism to stack and surface reactions; and we discuss the possible obstacles and solutions to experimental implementation, as well as the theoretical implications of this mechanism.

Cite as

Hope Amber Johnson and Anne Condon. A Coupled Reconfiguration Mechanism That Enables Powerful, Pseudoknot-Robust DNA Strand Displacement Devices with 2-Stranded Inputs. In 31st International Conference on DNA Computing and Molecular Programming (DNA 31). Leibniz International Proceedings in Informatics (LIPIcs), Volume 347, pp. 2:1-2:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{johnson_et_al:LIPIcs.DNA.31.2,
  author =	{Johnson, Hope Amber and Condon, Anne},
  title =	{{A Coupled Reconfiguration Mechanism That Enables Powerful, Pseudoknot-Robust DNA Strand Displacement Devices with 2-Stranded Inputs}},
  booktitle =	{31st International Conference on DNA Computing and Molecular Programming (DNA 31)},
  pages =	{2:1--2:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-399-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{347},
  editor =	{Schaeffer, Josie and Zhang, Fei},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DNA.31.2},
  URN =		{urn:nbn:de:0030-drops-238514},
  doi =		{10.4230/LIPIcs.DNA.31.2},
  annote =	{Keywords: Molecular programming, DNA strand displacement, Chemical Reaction Networks}
}
Document
U-Prithvi: Integrating a Foundation Model and U-Net for Enhanced Flood Inundation Mapping

Authors: Vit Kostejn, Yamil Essus, Jenna Abrahamson, and Ranga Raju Vatsavai

Published in: LIPIcs, Volume 346, 13th International Conference on Geographic Information Science (GIScience 2025)


Abstract
In recent years, large pre-trained models, commonly referred to as foundation models, have become increasingly popular for various tasks leveraging transfer learning. This trend has expanded to remote sensing, where transformer-based foundation models such as Prithvi, msGFM, and SatSwinMAE have been utilized for a range of applications. While these transformer-based models, particularly the Prithvi model, exhibit strong generalization capabilities, they have limitations in capturing fine-grained details compared to convolutional neural network architectures like U-Net in segmentation tasks. In this paper, we propose a novel architecture, U-Prithvi, which combines the strengths of the Prithvi transformer with those of U-Net. We introduce a RandomHalfMaskLayer to ensure balanced learning from both models during training. Our approach is evaluated on the Sen1Floods11 dataset for flood inundation mapping, and experimental results demonstrate that U-Prithvi outperforms both individual models, achieving improved performance on out-of-sample data. While this principle is illustrated using the Prithvi model, it is easily adaptable to other foundation models.
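The abstract does not spell out the RandomHalfMaskLayer; one plausible reading, sketched below purely as a hypothetical illustration (the function name, per-sample coin flip, and fusion by concatenation are our assumptions, not the paper's code), is a layer that zeroes one branch's features per training sample so the fused head cannot rely on either model alone:

```python
import numpy as np

def random_half_mask(feat_prithvi, feat_unet, rng, training=True):
    # Hypothetical sketch: per sample, keep one branch's features and
    # zero the other's, so roughly half the batch trains on each branch.
    # At inference, both branches pass through unmasked.
    if not training:
        return np.concatenate([feat_prithvi, feat_unet], axis=-1)
    keep_prithvi = rng.random(feat_prithvi.shape[0]) < 0.5
    a = np.where(keep_prithvi[:, None], feat_prithvi, 0.0)
    b = np.where(keep_prithvi[:, None], 0.0, feat_unet)
    return np.concatenate([a, b], axis=-1)
```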

Cite as

Vit Kostejn, Yamil Essus, Jenna Abrahamson, and Ranga Raju Vatsavai. U-Prithvi: Integrating a Foundation Model and U-Net for Enhanced Flood Inundation Mapping. In 13th International Conference on Geographic Information Science (GIScience 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 346, pp. 18:1-18:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{kostejn_et_al:LIPIcs.GIScience.2025.18,
  author =	{Kostejn, Vit and Essus, Yamil and Abrahamson, Jenna and Vatsavai, Ranga Raju},
  title =	{{U-Prithvi: Integrating a Foundation Model and U-Net for Enhanced Flood Inundation Mapping}},
  booktitle =	{13th International Conference on Geographic Information Science (GIScience 2025)},
  pages =	{18:1--18:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-378-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{346},
  editor =	{Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.18},
  URN =		{urn:nbn:de:0030-drops-238479},
  doi =		{10.4230/LIPIcs.GIScience.2025.18},
  annote =	{Keywords: GeoAI, flood mapping, foundation model, U-Net, Prithvi}
}
Document
Bridging Language Models and Symbolic Solvers via the Model Context Protocol

Authors: Stefan Szeider

Published in: LIPIcs, Volume 341, 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)


Abstract
This paper presents the MCP Solver, a system that bridges large language models with symbolic solvers through the Model Context Protocol (MCP). The system includes a server and a client component. The server provides an interface to constraint programming (via MiniZinc Python), propositional satisfiability and maximum satisfiability (both via PySAT), and SAT Modulo Theories (via Python Z3). The client contains an agent that connects to the server via MCP and uses a language model to autonomously translate problem statements (given in English) into encodings through an incremental editing process and runs the solver. Our experiments demonstrate that this neurosymbolic integration effectively combines the natural language understanding of language models with robust solving capabilities across multiple solving paradigms.
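As a toy stand-in for the propositional-satisfiability backend (the real server calls PySAT; we substitute a dependency-free brute-force check, so nothing here reflects the MCP Solver's actual interface), clauses use the usual DIMACS-style signed-integer encoding:

```python
from itertools import product

def solve_cnf(clauses):
    # Brute-force SAT: literals are nonzero ints, positive = variable
    # true, negative = negated.  Returns a satisfying assignment as a
    # list of literals, or None if the formula is unsatisfiable.
    nvars = max(abs(lit) for clause in clauses for lit in clause)
    for bits in product([False, True], repeat=nvars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return [i + 1 if b else -(i + 1) for i, b in enumerate(bits)]
    return None
```

In the described architecture, the client's agent would produce `clauses` from an English problem statement and hand them to the server, where a real PySAT solver replaces the exhaustive loop.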

Cite as

Stefan Szeider. Bridging Language Models and Symbolic Solvers via the Model Context Protocol. In 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 341, pp. 30:1-30:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{szeider:LIPIcs.SAT.2025.30,
  author =	{Szeider, Stefan},
  title =	{{Bridging Language Models and Symbolic Solvers via the Model Context Protocol}},
  booktitle =	{28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)},
  pages =	{30:1--30:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-381-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{341},
  editor =	{Berg, Jeremias and Nordstr\"{o}m, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2025.30},
  URN =		{urn:nbn:de:0030-drops-237649},
  doi =		{10.4230/LIPIcs.SAT.2025.30},
  annote =	{Keywords: Large Language Models, Agents, Constraint Programming, Satisfiability Solvers, Maximum Satisfiability, SAT Modulo Theories, Model Context Protocol}
}
Document
Hardness Amplification for Real-Valued Functions

Authors: Yunqi Li and Prashant Nalini Vasudevan

Published in: LIPIcs, Volume 339, 40th Computational Complexity Conference (CCC 2025)


Abstract
Given an integer-valued function f:{0,1}ⁿ → {0,1,…,m-1} that is mildly hard to compute on instances drawn from some distribution D over {0,1}ⁿ, we show that the function g(x_1, …, x_t) = f(x_1) + ⋯ + f(x_t) is strongly hard to compute on instances (x_1, …, x_t) drawn from the product distribution D^t. We also show the same for the task of approximately computing real-valued functions f:{0,1}ⁿ → [0,m). Our theorems immediately imply hardness self-amplification for several natural problems, including Max-Clique, Max-SAT, Approximate #SAT, and Entropy Estimation.
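The direct-sum intuition behind g can be seen in a small simulation (ours, and illustrative only: the theorem handles adversarial, correlated errors, which this independent-noise model does not). A heuristic that errs on f with probability δ, with errors that never cancel in the sum, computes g = f(x_1)+⋯+f(x_t) correctly with probability about (1-δ)^t:

```python
import random

def success_rate_on_g(t, delta, trials=20000, seed=0):
    # f is any integer-valued function (identity here); the noisy
    # oracle is off by +1 with probability delta, so per-coordinate
    # errors can never cancel in the sum g = f(x_1) + ... + f(x_t).
    rng = random.Random(seed)
    f = lambda x: x
    ok = 0
    for _ in range(trials):
        xs = [rng.randrange(100) for _ in range(t)]
        guess = sum(f(x) + (rng.random() < delta) for x in xs)
        ok += guess == sum(f(x) for x in xs)
    return ok / trials
```

For δ = 0.1 and t = 5 the empirical rate concentrates near 0.9⁵ ≈ 0.59, i.e. mild per-instance hardness compounds into strong hardness for the sum.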

Cite as

Yunqi Li and Prashant Nalini Vasudevan. Hardness Amplification for Real-Valued Functions. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 2:1-2:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{li_et_al:LIPIcs.CCC.2025.2,
  author =	{Li, Yunqi and Vasudevan, Prashant Nalini},
  title =	{{Hardness Amplification for Real-Valued Functions}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{2:1--2:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.2},
  URN =		{urn:nbn:de:0030-drops-236967},
  doi =		{10.4230/LIPIcs.CCC.2025.2},
  annote =	{Keywords: Average-case complexity, hardness amplification}
}
