91 Search Results for "Årzén, Karl-Erik"


Document
Weighted HOM-Problem for Nonnegative Integers

Authors: Andreas Maletti, Andreea-Teodora Nász, and Erik Paul

Published in: LIPIcs, Volume 289, 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024)


Abstract
The HOM-problem asks whether the image of a regular tree language under a given tree homomorphism is again regular. It was recently shown to be decidable by Godoy, Giménez, Ramos, and Àlvarez. In this paper, the ℕ-weighted version of this problem is considered and its decidability is proved. More precisely, it is decidable in polynomial time whether the image of a regular ℕ-weighted tree language under a nondeleting, nonerasing tree homomorphism is regular.
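To make the objects concrete: a tree homomorphism replaces each symbol of an input tree by a fixed pattern tree whose variables refer to the images of the subtrees; it is nondeleting and nonerasing when every variable occurs at least once and no pattern is a bare variable. A minimal Python sketch (the tuple representation and the example homomorphism are illustrative assumptions, not from the paper):

# Trees are tuples ('symbol', subtree, ...); nullary symbols are ('a',).
# A homomorphism maps each k-ary symbol to a pattern tree whose integer
# leaves 1..k refer to the images of the k subtrees.
def apply_hom(h, tree):
    sym, children = tree[0], tree[1:]
    images = [apply_hom(h, c) for c in children]   # images of the subtrees
    def subst(pattern):
        if isinstance(pattern, int):               # variable i
            return images[pattern - 1]
        return (pattern[0],) + tuple(subst(p) for p in pattern[1:])
    return subst(h[sym])

# h duplicates the argument of f; such copying is what creates the
# subtree equality constraints that make the HOM-problem nontrivial.
h = {'f': ('g', 1, 1), 'a': ('b',)}
print(apply_hom(h, ('f', ('a',))))                 # ('g', ('b',), ('b',))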

Cite as

Andreas Maletti, Andreea-Teodora Nász, and Erik Paul. Weighted HOM-Problem for Nonnegative Integers. In 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 289, pp. 51:1-51:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{maletti_et_al:LIPIcs.STACS.2024.51,
  author =	{Maletti, Andreas and N\'{a}sz, Andreea-Teodora and Paul, Erik},
  title =	{{Weighted HOM-Problem for Nonnegative Integers}},
  booktitle =	{41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024)},
  pages =	{51:1--51:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-311-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{289},
  editor =	{Beyersdorff, Olaf and Kant\'{e}, Mamadou Moustapha and Kupferman, Orna and Lokshtanov, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2024.51},
  URN =		{urn:nbn:de:0030-drops-197614},
  doi =		{10.4230/LIPIcs.STACS.2024.51},
  annote =	{Keywords: Weighted Tree Automaton, Decision Problem, Subtree Equality Constraint, Tree Homomorphism, HOM-Problem, Weighted Tree Grammar, Weighted HOM-Problem}
}
Document
The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds

Authors: Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We pose the fine-grained hardness hypothesis that the textbook algorithm for the NFA Acceptance problem is optimal up to subpolynomial factors, even for dense NFAs and fixed alphabets. We show that this barrier appears in many variations throughout the algorithmic literature by introducing a framework of Colored Walk problems. These yield fine-grained equivalent formulations of the NFA Acceptance problem as problems concerning detection of an s-t-walk with a prescribed color sequence in a given edge- or node-colored graph. For NFA Acceptance on sparse NFAs (or equivalently, Colored Walk in sparse graphs), a tight lower bound under the Strong Exponential Time Hypothesis has been rediscovered several times in recent years. We show that our hardness hypothesis, which concerns dense NFAs, has several interesting implications:
- It gives a tight lower bound for Context-Free Language Reachability. This proves conditional optimality for the class of 2NPDA-complete problems, explaining the cubic bottleneck of interprocedural program analysis.
- It gives a tight (n+nm^{1/3})^{1-o(1)} lower bound for the Word Break problem on strings of length n and dictionaries of total size m.
- It implies the popular OMv hypothesis. Since the NFA acceptance problem is a static (i.e., non-dynamic) problem, this provides a static reason for the hardness of many dynamic problems.
Thus, a proof of the NFA Acceptance hypothesis would resolve several interesting barriers. Conversely, a refutation of the NFA Acceptance hypothesis may lead the way to attacking the current barriers observed for Context-Free Language Reachability, the Word Break problem and the growing list of dynamic problems proven hard under the OMv hypothesis.
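The "textbook algorithm" referred to above is subset simulation: maintain the set of states reachable on the prefix read so far and advance it by one symbol per step, for O(|w| ⋅ |Δ|) time on a word w with transition set Δ. A minimal sketch (the data representation is an assumption for illustration):

def nfa_accepts(delta, start, accepting, word):
    """Textbook NFA acceptance by subset simulation: 'current' is the set
    of states reachable on the prefix read so far. Total time is
    O(|word| * |delta|), the bound the hypothesis asserts is optimal."""
    current = {start}
    for symbol in word:
        current = {q for p in current for q in delta.get((p, symbol), ())}
        if not current:          # no run can recover from the empty set
            return False
    return bool(current & accepting)

# Accepts words over {a, b} that contain "ab" as a substring.
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2},
         (2, 'a'): {2}, (2, 'b'): {2}}
print(nfa_accepts(delta, 0, {2}, "aab"))   # True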

Cite as

Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen. The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 22:1-22:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ITCS.2024.22,
  author =	{Bringmann, Karl and Gr{\o}nlund, Allan and K\"{u}nnemann, Marvin and Larsen, Kasper Green},
  title =	{{The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{22:1--22:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.22},
  URN =		{urn:nbn:de:0030-drops-195500},
  doi =		{10.4230/LIPIcs.ITCS.2024.22},
  annote =	{Keywords: Fine-grained complexity theory, non-deterministic finite automata}
}
Document
Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth

Authors: Benjamin Bergougnoux, Vera Chekan, Robert Ganian, Mamadou Moustapha Kanté, Matthias Mnich, Sang-il Oum, Michał Pilipczuk, and Erik Jan van Leeuwen

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
Dynamic programming on various graph decompositions is one of the most fundamental techniques used in parameterized complexity. Unfortunately, even if we consider concepts as simple as path or tree decompositions, such dynamic programming uses space that is exponential in the decomposition’s width, and there are good reasons to believe that this is necessary. However, it has been shown that in graphs of low treedepth it is possible to design algorithms which achieve polynomial space complexity without requiring worse time complexity than their counterparts working on tree decompositions of bounded width. Here, treedepth is a graph parameter that, intuitively speaking, takes into account both the depth and the width of a tree decomposition of the graph, rather than the width alone. Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth. Here, shrubdepth is a bounded-depth analogue of cliquewidth, in the same way as treedepth is a bounded-depth analogue of treewidth. We show that also in this setting, bounding the depth of the decomposition is a deciding factor for improving the space complexity. More precisely, we prove that on n-vertex graphs equipped with a tree-model (a decomposition notion underlying shrubdepth) of depth d and using k labels,
- Independent Set can be solved in time 2^𝒪(dk) ⋅ n^𝒪(1) using 𝒪(dk² log n) space;
- Max Cut can be solved in time n^𝒪(dk) using 𝒪(dk log n) space; and
- Dominating Set can be solved in time 2^𝒪(dk) ⋅ n^𝒪(1) using n^𝒪(1) space via a randomized algorithm.
We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set the exponent of the parametric factor in the time complexity has to grow with d if one wishes to keep the space complexity polynomial.

Cite as

Benjamin Bergougnoux, Vera Chekan, Robert Ganian, Mamadou Moustapha Kanté, Matthias Mnich, Sang-il Oum, Michał Pilipczuk, and Erik Jan van Leeuwen. Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bergougnoux_et_al:LIPIcs.ESA.2023.18,
  author =	{Bergougnoux, Benjamin and Chekan, Vera and Ganian, Robert and Kant\'{e}, Mamadou Moustapha and Mnich, Matthias and Oum, Sang-il and Pilipczuk, Micha{\l} and van Leeuwen, Erik Jan},
  title =	{{Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.18},
  URN =		{urn:nbn:de:0030-drops-186710},
  doi =		{10.4230/LIPIcs.ESA.2023.18},
  annote =	{Keywords: Parameterized complexity, shrubdepth, space complexity, algebraic methods}
}
Document
Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution

Authors: Karl Bringmann and Alejandro Cassis

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We revisit the classic 0-1-Knapsack problem, in which we are given n items with their weights and profits as well as a weight budget W, and the goal is to find a subset of items of total weight at most W that maximizes the total profit. We study pseudopolynomial-time algorithms parameterized by the largest profit of any item, p_max, and the largest weight of any item, w_max. Our main results are algorithms for 0-1-Knapsack running in time Õ(n w_max p_max^{2/3}) and Õ(n p_max w_max^{2/3}), improving upon an algorithm in time O(n p_max w_max) by Pisinger [J. Algorithms '99]. In the regime p_max ≈ w_max ≈ n (and W ≈ OPT ≈ n²) our algorithms are the first to break the cubic barrier n³. To obtain our result, we give an efficient algorithm to compute the min-plus convolution of near-convex functions. More precisely, we say that a function f : [n] ↦ ℤ is Δ-near convex with Δ ≥ 1, if there is a convex function f̆ such that f̆(i) ≤ f(i) ≤ f̆(i) + Δ for every i. We design an algorithm computing the min-plus convolution of two Δ-near convex functions in time Õ(nΔ). This tool can replace the usage of the prediction technique of Bateni, Hajiaghayi, Seddighin and Stein [STOC '18] in all applications we are aware of, and we believe it has wider applicability.
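For context, min-plus convolution is the operation (f ⊛ g)(k) = min_{i+j=k} f(i) + g(j). The quadratic brute force below is only a correctness reference, not the paper's Õ(nΔ) algorithm for Δ-near convex inputs:

def min_plus(f, g):
    """Naive O(n·m) min-plus convolution; the paper's algorithm achieves
    Õ(nΔ) when both sequences are Δ-near convex (within Δ above some
    convex minorant), but that algorithm is not reproduced here."""
    h = [float('inf')] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = min(h[i + j], fi + gj)
    return h

# Two near-convex sequences: small perturbations of the convex 0, 1, 4, 9.
print(min_plus([0, 1, 4, 9], [0, 2, 3, 9]))   # [0, 1, 3, 4, 7, 12, 18]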

Cite as

Karl Bringmann and Alejandro Cassis. Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ESA.2023.24,
  author =	{Bringmann, Karl and Cassis, Alejandro},
  title =	{{Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.24},
  URN =		{urn:nbn:de:0030-drops-186776},
  doi =		{10.4230/LIPIcs.ESA.2023.24},
  annote =	{Keywords: Knapsack, Fine-Grained Complexity, Min-Plus Convolution}
}
Document
Bel-Games: A Formal Theory of Games of Incomplete Information Based on Belief Functions in the Coq Proof Assistant

Authors: Pierre Pomeret-Coquot, Hélène Fargier, and Érik Martin-Dorel

Published in: LIPIcs, Volume 268, 14th International Conference on Interactive Theorem Proving (ITP 2023)


Abstract
Decision theory and game theory are both interdisciplinary domains that focus on modelling and analyzing decision-making processes. On the one hand, decision theory aims to account for the possible behaviors of an agent with respect to an uncertain situation. It thus provides several frameworks to describe the decision-making processes in this context, including that of belief functions. On the other hand, game theory focuses on multi-agent decisions, typically with probabilistic uncertainty (if any), hence the so-called class of Bayesian games. In this paper, we use the Coq/SSReflect proof assistant to formally prove the results we obtained in [Pomeret-Coquot et al., 2022]. First, we formalize a general theory of belief functions with finite support, together with structures and solution concepts from game theory. On top of that, we extend Bayesian games to the theory of belief functions, obtaining a more expressive class of games we refer to as Bel games; this makes it possible to better capture human behaviors with respect to lack of information. Next, we provide three different proofs of an extended version of the so-called Howson-Rosenthal theorem, showing that Bel games can be cast into games of complete information, i.e., without any uncertainty. We thus embed this class of games into classical game theory, enabling the use of existing algorithms.

Cite as

Pierre Pomeret-Coquot, Hélène Fargier, and Érik Martin-Dorel. Bel-Games: A Formal Theory of Games of Incomplete Information Based on Belief Functions in the Coq Proof Assistant. In 14th International Conference on Interactive Theorem Proving (ITP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 268, pp. 25:1-25:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{pomeretcoquot_et_al:LIPIcs.ITP.2023.25,
  author =	{Pomeret-Coquot, Pierre and Fargier, H\'{e}l\`{e}ne and Martin-Dorel, \'{E}rik},
  title =	{{Bel-Games: A Formal Theory of Games of Incomplete Information Based on Belief Functions in the Coq Proof Assistant}},
  booktitle =	{14th International Conference on Interactive Theorem Proving (ITP 2023)},
  pages =	{25:1--25:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-284-6},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{268},
  editor =	{Naumowicz, Adam and Thiemann, Ren\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2023.25},
  URN =		{urn:nbn:de:0030-drops-184001},
  doi =		{10.4230/LIPIcs.ITP.2023.25},
  annote =	{Keywords: Game of Incomplete Information, Belief Function Theory, Coq Proof Assistant, SSReflect Proof Language, MathComp Library}
}
Document
Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4

Authors: Egor Gorbachev and Marvin Künnemann

Published in: LIPIcs, Volume 258, 39th International Symposium on Computational Geometry (SoCG 2023)


Abstract
Klee’s measure problem (computing the volume of the union of n axis-parallel boxes in ℝ^d) is well known to have n^{d/2±o(1)}-time algorithms (Overmars, Yap, SICOMP'91; Chan FOCS'13). Only recently, a conditional lower bound (without any restriction to "combinatorial" algorithms) could be shown for d = 3 (Künnemann, FOCS'22). Can this result be extended to a tight lower bound for dimensions d ≥ 4? In this paper, we formalize the technique of the tight lower bound for d = 3 using a combinatorial object we call prefix covering design. We show that these designs, which are related in spirit to combinatorial designs, directly translate to conditional lower bounds for Klee’s measure problem and various related problems. By devising good prefix covering designs, we give the following lower bounds for Klee’s measure problem in ℝ^d, the depth problem for axis-parallel boxes in ℝ^d, the largest-volume/max-perimeter empty (anchored) box problem in ℝ^{2d}, and related problems:
- Ω(n^{1.90476}) for d = 4,
- Ω(n^{2.22222}) for d = 5, and
- Ω(n^{d/3 + 2√d/9 - o(√d)}) for general d, assuming the 3-uniform hyperclique hypothesis.
For Klee’s measure problem and the depth problem, these bounds improve previous lower bounds of Ω(n^{1.777...}), Ω(n^{2.0833...}) and Ω(n^{d/3 + 1/3 + Θ(1/d)}), respectively. Our improved prefix covering designs were obtained by (1) exploiting a computer-aided search using problem-specific insights as well as SAT solvers, and (2) showing how to transform combinatorial covering designs known in the literature to strong prefix covering designs. In contrast, we show that our lower bounds are close to best possible using this proof technique.
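For readers new to the problem, a naive baseline (far from the n^{d/2}-time algorithms of Overmars-Yap and Chan cited above) computes the union volume by coordinate compression; a sketch, assuming boxes are given as per-dimension (lo, hi) pairs:

import itertools

def union_volume(boxes):
    """Klee's measure, brute force: compress coordinates in each dimension,
    then add up every elementary cell covered by some box. With O(n) grid
    lines per dimension this costs O(n^{d+1}) time, versus n^{d/2±o(1)}
    for the algorithms cited in the abstract."""
    d = len(boxes[0])
    grids = [sorted({b[i][0] for b in boxes} | {b[i][1] for b in boxes})
             for i in range(d)]
    vol = 0.0
    for cell in itertools.product(*(range(len(g) - 1) for g in grids)):
        lo = [grids[i][cell[i]] for i in range(d)]
        hi = [grids[i][cell[i] + 1] for i in range(d)]
        if any(all(b[i][0] <= lo[i] and hi[i] <= b[i][1] for i in range(d))
               for b in boxes):
            piece = 1.0
            for i in range(d):
                piece *= hi[i] - lo[i]
            vol += piece
    return vol

# Two overlapping unit squares: area 1 + 1 - 0.25 = 1.75.
print(union_volume([((0, 1), (0, 1)), ((0.5, 1.5), (0.5, 1.5))]))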

Cite as

Egor Gorbachev and Marvin Künnemann. Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4. In 39th International Symposium on Computational Geometry (SoCG 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 258, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gorbachev_et_al:LIPIcs.SoCG.2023.36,
  author =	{Gorbachev, Egor and K\"{u}nnemann, Marvin},
  title =	{{Combinatorial Designs Meet Hypercliques: Higher Lower Bounds for Klee’s Measure Problem and Related Problems in Dimensions d ≥ 4}},
  booktitle =	{39th International Symposium on Computational Geometry (SoCG 2023)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-273-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{258},
  editor =	{Chambers, Erin W. and Gudmundsson, Joachim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2023.36},
  URN =		{urn:nbn:de:0030-drops-178861},
  doi =		{10.4230/LIPIcs.SoCG.2023.36},
  annote =	{Keywords: Fine-grained complexity theory, non-combinatorial lower bounds, computational geometry, clique detection}
}
Document
New Approximation Algorithms for Touring Regions

Authors: Benjamin Qi and Richard Qi

Published in: LIPIcs, Volume 258, 39th International Symposium on Computational Geometry (SoCG 2023)


Abstract
We analyze the touring regions problem: find a (1+ε)-approximate Euclidean shortest path in d-dimensional space that starts at a given starting point, ends at a given ending point, and visits given regions R₁, R₂, R₃, … , R_n in that order. Our main result is an O(n/√ε ⋅ log(1/ε) + 1/ε)-time algorithm for touring disjoint disks. We also give an O(min(n/ε, n²/√ε))-time algorithm for touring disjoint two-dimensional convex fat bodies. Both of these results naturally generalize to larger dimensions; we obtain O(n/ε^{d-1} ⋅ log²(1/ε) + 1/ε^{2d-2})-time and O(n/ε^{2d-2})-time algorithms for touring disjoint d-dimensional balls and convex fat bodies, respectively.

Cite as

Benjamin Qi and Richard Qi. New Approximation Algorithms for Touring Regions. In 39th International Symposium on Computational Geometry (SoCG 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 258, pp. 54:1-54:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{qi_et_al:LIPIcs.SoCG.2023.54,
  author =	{Qi, Benjamin and Qi, Richard},
  title =	{{New Approximation Algorithms for Touring Regions}},
  booktitle =	{39th International Symposium on Computational Geometry (SoCG 2023)},
  pages =	{54:1--54:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-273-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{258},
  editor =	{Chambers, Erin W. and Gudmundsson, Joachim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2023.54},
  URN =		{urn:nbn:de:0030-drops-179047},
  doi =		{10.4230/LIPIcs.SoCG.2023.54},
  annote =	{Keywords: shortest paths, convex bodies, fat objects, disks}
}
Document
Lower Bounds on Retroactive Data Structures

Authors: Lily Chung, Erik D. Demaine, Dylan Hendrickson, and Jayson Lynch

Published in: LIPIcs, Volume 248, 33rd International Symposium on Algorithms and Computation (ISAAC 2022)


Abstract
We prove essentially optimal fine-grained lower bounds on the gap between a data structure and a partially retroactive version of the same data structure. Precisely, assuming any one of three standard conjectures, we describe a problem that has a data structure where operations run in O(T(n,m)) time per operation, but any partially retroactive version of that data structure requires T(n,m)⋅m^{1-o(1)} worst-case time per operation, where n is the size of the data structure at any time and m is the number of operations. Any data structure with operations running in O(T(n,m)) time per operation can be converted (via the "rollback method") into a partially retroactive data structure running in O(T(n,m)⋅m) time per operation, so our lower bound is tight up to an m^o(1) factor common in fine-grained complexity.
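The "rollback method" mentioned above is simple to state generically: keep a time-ordered log of all operations and, on each query, rebuild the structure by replaying the log, which costs O(T(n,m) ⋅ m) per operation. A minimal sketch with illustrative interface names (not from the paper):

class PartiallyRetroactive:
    """Wraps any data structure given as (initializer, apply_op). Updates
    may be inserted or deleted at arbitrary past times; every query rolls
    back and replays all m logged operations, the O(T·m) overhead that
    the paper proves necessary in the worst case up to m^{o(1)}."""
    def __init__(self, init, apply_op):
        self.init, self.apply_op, self.log = init, apply_op, []

    def insert(self, t, op):      # retroactively perform op at time t
        self.log.insert(t, op)

    def delete(self, t):          # retroactively erase the operation at time t
        del self.log[t]

    def query(self, q):
        state = self.init()                   # roll back ...
        for op in self.log:                   # ... and replay the full log
            state = self.apply_op(state, op)
        return q(state)

# Example: a counter with additive updates.
pr = PartiallyRetroactive(lambda: 0, lambda s, x: s + x)
pr.insert(0, 5)
pr.insert(1, 3)
pr.insert(0, -2)                  # takes effect before the +5
print(pr.query(lambda s: s))      # 6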

Cite as

Lily Chung, Erik D. Demaine, Dylan Hendrickson, and Jayson Lynch. Lower Bounds on Retroactive Data Structures. In 33rd International Symposium on Algorithms and Computation (ISAAC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 248, pp. 32:1-32:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{chung_et_al:LIPIcs.ISAAC.2022.32,
  author =	{Chung, Lily and Demaine, Erik D. and Hendrickson, Dylan and Lynch, Jayson},
  title =	{{Lower Bounds on Retroactive Data Structures}},
  booktitle =	{33rd International Symposium on Algorithms and Computation (ISAAC 2022)},
  pages =	{32:1--32:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-258-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{248},
  editor =	{Bae, Sang Won and Park, Heejin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2022.32},
  URN =		{urn:nbn:de:0030-drops-173171},
  doi =		{10.4230/LIPIcs.ISAAC.2022.32},
  annote =	{Keywords: Retroactivity, time travel, rollback, fine-grained complexity}
}
Document
Computing Generalized Convolutions Faster Than Brute Force

Authors: Barış Can Esmer, Ariel Kulik, Dániel Marx, Philipp Schepper, and Karol Węgrzycki

Published in: LIPIcs, Volume 249, 17th International Symposium on Parameterized and Exact Computation (IPEC 2022)


Abstract
In this paper, we consider a general notion of convolution. Let D be a finite domain and let Dⁿ be the set of n-length vectors (tuples) over D. Let f : D × D → D be a function and let ⊕_f be the coordinate-wise application of f. The f-Convolution of two functions g,h : Dⁿ → {-M,…,M} is (g ⊛_f h)(𝐯) := ∑_{𝐯_g,𝐯_h ∈ Dⁿ s.t. 𝐯 = 𝐯_g ⊕_f 𝐯_h} g(𝐯_g) ⋅ h(𝐯_h) for every 𝐯 ∈ Dⁿ. This problem generalizes many fundamental convolutions, such as Subset Convolution, XOR Product, Covering Product and Packing Product. For an arbitrary function f and domain D we can compute the f-Convolution via brute-force enumeration in 𝒪̃(|D|^{2n} ⋅ polylog(M)) time. Our main result is an improvement over this naive algorithm. We show that the f-Convolution can be computed exactly in 𝒪̃((c ⋅ |D|²)ⁿ ⋅ polylog(M)) time for the constant c := 5/6 when D has even cardinality. Our main observation is that a cyclic partition of a function f : D × D → D can be used to speed up the computation of the f-Convolution, and we show that an appropriate cyclic partition exists for every f. Furthermore, we demonstrate that a single entry of the f-Convolution can be computed more efficiently. In this variant, we are given two functions g,h : Dⁿ → {-M,…,M} along with a vector 𝐯 ∈ Dⁿ, and the task of the f-Query problem is to compute the integer (g ⊛_f h)(𝐯). This is a generalization of the well-known Orthogonal Vectors problem. We show that f-Query can be computed in 𝒪̃(|D|^{(ω/2)n} ⋅ polylog(M)) time, where ω ∈ [2,2.373) is the exponent of the currently fastest matrix multiplication algorithm.
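To fix ideas, the brute-force 𝒪̃(|D|^{2n})-time algorithm simply enumerates all pairs of vectors; the sketch below (representation assumed for illustration) instantiates it for the XOR product, i.e., f(a,b) = a + b mod 2:

from itertools import product

def f_convolution(f, g, h, D, n):
    """Brute-force f-Convolution: for all pairs (v_g, v_h), apply f
    coordinate-wise and accumulate g(v_g)·h(v_h) at the resulting vector.
    Time O(|D|^{2n}); the paper lowers the base to (5/6)·|D|² for even |D|."""
    out = {v: 0 for v in product(D, repeat=n)}
    for vg in product(D, repeat=n):
        for vh in product(D, repeat=n):
            v = tuple(f(a, b) for a, b in zip(vg, vh))
            out[v] += g[vg] * h[vh]
    return out

D, n = (0, 1), 2
ones = {v: 1 for v in product(D, repeat=n)}        # g = h = all-ones
conv = f_convolution(lambda a, b: (a + b) % 2, ones, ones, D, n)
print(conv[(0, 0)])   # 4: each v_g pairs with the unique v_h = v_g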

Cite as

Barış Can Esmer, Ariel Kulik, Dániel Marx, Philipp Schepper, and Karol Węgrzycki. Computing Generalized Convolutions Faster Than Brute Force. In 17th International Symposium on Parameterized and Exact Computation (IPEC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 249, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{esmer_et_al:LIPIcs.IPEC.2022.12,
  author =	{Esmer, Bar{\i}\c{s} Can and Kulik, Ariel and Marx, D\'{a}niel and Schepper, Philipp and W\k{e}grzycki, Karol},
  title =	{{Computing Generalized Convolutions Faster Than Brute Force}},
  booktitle =	{17th International Symposium on Parameterized and Exact Computation (IPEC 2022)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-260-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{249},
  editor =	{Dell, Holger and Nederlof, Jesper},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2022.12},
  URN =		{urn:nbn:de:0030-drops-173685},
  doi =		{10.4230/LIPIcs.IPEC.2022.12},
  annote =	{Keywords: Generalized Convolution, Fast Fourier Transform, Fast Subset Convolution}
}
Document
Parameterized Complexity of Streaming Diameter and Connectivity Problems

Authors: Jelle J. Oostveen and Erik Jan van Leeuwen

Published in: LIPIcs, Volume 249, 17th International Symposium on Parameterized and Exact Computation (IPEC 2022)


Abstract
We initiate the investigation of the parameterized complexity of Diameter and Connectivity in the streaming paradigm. On the positive end, we show that knowing a vertex cover of size k allows for algorithms in the Adjacency List (AL) streaming model whose number of passes is constant and memory is 𝒪(log n) for any fixed k. Underlying these algorithms is a method to execute a breadth-first search in 𝒪(k) passes and 𝒪(k log n) bits of memory. On the negative end, we show that many other parameters lead to lower bounds in the AL model, where Ω(n/p) bits of memory are needed for any p-pass algorithm even for constant parameter values. In particular, this holds for graphs with a known modulator (deletion set) of constant size to a graph that has no induced subgraph isomorphic to a fixed graph H, for most H. For some cases, we can also show lower bounds of Ω(n log n) bits of memory for one-pass algorithms. We also prove a much stronger Ω(n²/p) lower bound for Diameter on bipartite graphs. Finally, using the insights we developed into streaming parameterized graph exploration algorithms, we show a new streaming kernelization algorithm for computing a vertex cover of size k. This yields a kernel of 2k vertices (with 𝒪(k²) edges) produced as a stream in poly(k) passes and only 𝒪(k log n) bits of memory.

Cite as

Jelle J. Oostveen and Erik Jan van Leeuwen. Parameterized Complexity of Streaming Diameter and Connectivity Problems. In 17th International Symposium on Parameterized and Exact Computation (IPEC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 249, pp. 24:1-24:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{oostveen_et_al:LIPIcs.IPEC.2022.24,
  author =	{Oostveen, Jelle J. and van Leeuwen, Erik Jan},
  title =	{{Parameterized Complexity of Streaming Diameter and Connectivity Problems}},
  booktitle =	{17th International Symposium on Parameterized and Exact Computation (IPEC 2022)},
  pages =	{24:1--24:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-260-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{249},
  editor =	{Dell, Holger and Nederlof, Jesper},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2022.24},
  URN =		{urn:nbn:de:0030-drops-173808},
  doi =		{10.4230/LIPIcs.IPEC.2022.24},
  annote =	{Keywords: Stream, Streaming, Graphs, Parameter, Complexity, Diameter, Connectivity, Vertex Cover, Disjointness, Permutation}
}
Document
Online Metric Allocation and Time-Varying Regularization

Authors: Nikhil Bansal and Christian Coester

Published in: LIPIcs, Volume 244, 30th Annual European Symposium on Algorithms (ESA 2022)


Abstract
We introduce a general online allocation problem that connects several of the most fundamental problems in online optimization. Let M be an n-point metric space. Consider a resource that can be allocated in arbitrary fractions to the points of M. At each time t, a convex monotone cost function c_t: [0,1] → ℝ_+ appears at some point r_t ∈ M. In response, an algorithm may change the allocation of the resource, paying movement cost as determined by the metric and service cost c_t(x_{r_t}), where x_{r_t} is the fraction of the resource at r_t at the end of time t. For example, when the cost functions are c_t(x) = α x, this is equivalent to randomized MTS, and when the cost functions are c_t(x) = ∞⋅1_{x < 1/k}, this is equivalent to fractional k-server. Because of an inherent scale-freeness property of the problem, existing techniques for MTS and k-server fail to achieve similar guarantees for metric allocation. To handle this, we consider a generalization of the online multiplicative update method where we decouple the rate at which a variable is updated from its value, resulting in interesting new dynamics. We use this to give an O(log n)-competitive algorithm for weighted star metrics. We then show how this corresponds to an extension of the online mirror descent framework to a setting where the regularizer is time-varying. Using this perspective, we further refine the guarantees of our algorithm. We also consider the case of non-convex cost functions. Using a simple 𝓁₂²-regularizer, we give tight bounds of Θ(n) on tree metrics, which imply deterministic and randomized competitive ratios of O(n²) and O(nlog n) respectively on arbitrary metrics.

Cite as

Nikhil Bansal and Christian Coester. Online Metric Allocation and Time-Varying Regularization. In 30th Annual European Symposium on Algorithms (ESA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 244, pp. 13:1-13:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bansal_et_al:LIPIcs.ESA.2022.13,
  author =	{Bansal, Nikhil and Coester, Christian},
  title =	{{Online Metric Allocation and Time-Varying Regularization}},
  booktitle =	{30th Annual European Symposium on Algorithms (ESA 2022)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-247-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{244},
  editor =	{Chechik, Shiri and Navarro, Gonzalo and Rotenberg, Eva and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2022.13},
  URN =		{urn:nbn:de:0030-drops-169515},
  doi =		{10.4230/LIPIcs.ESA.2022.13},
  annote =	{Keywords: Online algorithms, competitive analysis, k-server, metrical task systems, mirror descent, regularization}
}
Document
The Composition Complexity of Majority

Authors: Victor Lecomte, Prasanna Ramakrishnan, and Li-Yang Tan

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
We study the complexity of computing majority as a composition of local functions: Maj_n = h(g_1,…,g_m), where each g_j: {0,1}ⁿ → {0,1} is an arbitrary function that queries only k ≪ n variables and h: {0,1}^m → {0,1} is an arbitrary combining function. We prove an optimal lower bound of m ≥ Ω((n/k) log k) on the number of functions needed, which is a factor Ω(log k) larger than the ideal m = n/k. We call this factor the composition overhead; previously, no superconstant lower bounds on it were known for majority. Our lower bound recovers, as a corollary and via an entirely different proof, the best known lower bound for bounded-width branching programs for majority (Alon and Maass '86, Babai et al. '90). It is also the first step in a plan that we propose for breaking a longstanding barrier in lower bounds for small-depth boolean circuits. Novel aspects of our proof include sharp bounds on the information lost as computation flows through the inner functions g_j, and the bootstrapping of lower bounds for a multi-output function (Hamming weight) into lower bounds for a single-output one (majority).
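To see where the log k factor comes from, note that each inner function outputs a single bit, so m = n/k functions cannot even communicate the blocks' Hamming weights; encoding each k-block's weight in binary gives a matching upper bound of m = (n/k)⋅⌈log₂(k+1)⌉. A sketch of this standard construction (an illustration, not the paper's lower-bound argument; majority taken as 1 on ties):

import math

def compose_majority(x, k):
    """Maj_n = h(g_1, ..., g_m): the inner g's, each reading one k-bit
    block, output the binary encoding of that block's Hamming weight;
    the combiner h decodes the weights and compares the total to n/2.
    Uses m = (n/k) * ceil(log2(k+1)) inner one-bit functions."""
    n, bits = len(x), math.ceil(math.log2(k + 1))
    assert n % k == 0
    gs = []                                          # outputs of the inner layer
    for block in range(n // k):
        weight = sum(x[block * k:(block + 1) * k])   # depends on k bits only
        gs.extend((weight >> b) & 1 for b in range(bits))
    total = sum(sum(gs[i * bits + b] << b for b in range(bits))
                for i in range(n // k))              # h: decode and sum
    return int(2 * total >= n)

x = [1, 0, 1, 1, 0, 1, 1, 0]
print(compose_majority(x, k=4))   # 1 (five of eight bits are set)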

Cite as

Victor Lecomte, Prasanna Ramakrishnan, and Li-Yang Tan. The Composition Complexity of Majority. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 19:1-19:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{lecomte_et_al:LIPIcs.CCC.2022.19,
  author =	{Lecomte, Victor and Ramakrishnan, Prasanna and Tan, Li-Yang},
  title =	{{The Composition Complexity of Majority}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{19:1--19:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.19},
  URN =		{urn:nbn:de:0030-drops-165818},
  doi =		{10.4230/LIPIcs.CCC.2022.19},
  annote =	{Keywords: computational complexity, circuit lower bounds}
}
Document
Track A: Algorithms, Complexity and Games
Finding Monotone Patterns in Sublinear Time, Adaptively

Authors: Omri Ben-Eliezer, Shoham Letzter, and Erik Waingarten

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We investigate adaptive sublinear algorithms for finding monotone patterns in sequential data. Given fixed k ∈ ℕ with k ≥ 2 and ε > 0, consider the problem of finding a length-k increasing subsequence in a sequence f : [n] → ℝ, provided that f is ε-far from free of such subsequences. It was shown by Ben-Eliezer et al. [FOCS 2019] that the non-adaptive query complexity of the above task is Θ((log n)^⌊log₂ k⌋). In this work, we break the non-adaptive lower bound, presenting an adaptive algorithm for this problem which makes O(log n) queries. This is optimal, matching the classical Ω(log n) adaptive lower bound by Fischer [Inf. Comp. 2004] for monotonicity testing (which corresponds to the case k = 2). Equivalently, our result implies that testing whether a sequence decomposes into k monotone subsequences can be done with O(log n) queries.
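For contrast with the sublinear algorithms discussed above, the trivial baseline reads all of f and detects a length-k increasing subsequence greedily, patience-sorting style, in one linear pass. A sketch (a full-read baseline, not the paper's O(log n)-query algorithm):

import bisect

def has_increasing_pattern(f, k):
    """tails[j-1] holds the smallest possible last value of a strictly
    increasing subsequence of length j seen so far; the pattern exists
    iff tails ever reaches length k. Reads all n values of f."""
    tails = []
    for v in f:
        pos = bisect.bisect_left(tails, v)
        if pos == len(tails):
            tails.append(v)
        else:
            tails[pos] = v
        if len(tails) >= k:
            return True
    return False

print(has_increasing_pattern([5, 1, 4, 2, 3], 3))   # True: 1 < 2 < 3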

Cite as

Omri Ben-Eliezer, Shoham Letzter, and Erik Waingarten. Finding Monotone Patterns in Sublinear Time, Adaptively. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 17:1-17:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{beneliezer_et_al:LIPIcs.ICALP.2022.17,
  author =	{Ben-Eliezer, Omri and Letzter, Shoham and Waingarten, Erik},
  title =	{{Finding Monotone Patterns in Sublinear Time, Adaptively}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{17:1--17:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.17},
  URN =		{urn:nbn:de:0030-drops-163586},
  doi =		{10.4230/LIPIcs.ICALP.2022.17},
  annote =	{Keywords: property testing, monotone patterns, monotone decomposition, adaptivity}
}
Document
Track A: Algorithms, Complexity and Games
A Structural Investigation of the Approximability of Polynomial-Time Problems

Authors: Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
An extensive research effort targets optimal (in)approximability results for various NP-hard optimization problems. Notably, the works of (Creignou'95) as well as (Khanna, Sudan, Trevisan, Williamson'00) establish a tight characterization of a large subclass of MaxSNP, namely Boolean MaxCSPs and further variants, in terms of their polynomial-time approximability. Can we obtain similarly encompassing characterizations for classes of polynomial-time optimization problems? To this end, we initiate the systematic study of a recently introduced polynomial-time analogue of MaxSNP, which includes a large number of well-studied problems (including Nearest and Furthest Neighbor in the Hamming metric, Maximum Inner Product, optimization variants of k-XOR and Maximum k-Cover). Specifically, for each k, MaxSP_k denotes the class of O(m^k)-time problems of the form max_{x_1,…,x_k} #{y : ϕ(x_1,…,x_k,y)} where ϕ is a quantifier-free first-order property and m denotes the size of the relational structure. Assuming central hypotheses about clique detection in hypergraphs and exact Max-3-SAT, we show that for any MaxSP_k problem definable by a quantifier-free m-edge graph formula ϕ, the best possible approximation guarantee in faster-than-exhaustive-search time O(m^{k-δ}) falls into one of four categories:
- optimizable to exactness in time O(m^{k-δ}),
- an (inefficient) approximation scheme, i.e., a (1+ε)-approximation in time O(m^{k-f(ε)}),
- a (fixed) constant-factor approximation in time O(m^{k-δ}), or
- an nm^ε-approximation in time O(m^{k-f(ε)}).
We obtain an almost complete characterization of these regimes, for MaxSP_k as well as for an analogously defined minimization class MinSP_k. As our main technical contribution, we show how to rule out the existence of approximation schemes for a large class of problems admitting constant-factor approximations, under a hypothesis for exact Sparse Max-3-SAT algorithms posed by (Alman, Vassilevska Williams'20). As general trends for the problems we consider, we observe: (1) exact optimizability has a simple algebraic characterization, (2) only a few maximization problems do not admit a constant-factor approximation, and these do not even have a subpolynomial-factor approximation, and (3) constant-factor approximation of minimization problems is equivalent to deciding whether the optimum is equal to 0.

Cite as

Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. A Structural Investigation of the Approximability of Polynomial-Time Problems. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 30:1-30:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.30,
  author =	{Bringmann, Karl and Cassis, Alejandro and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{A Structural Investigation of the Approximability of Polynomial-Time Problems}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{30:1--30:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.30},
  URN =		{urn:nbn:de:0030-drops-163713},
  doi =		{10.4230/LIPIcs.ICALP.2022.30},
  annote =	{Keywords: Classification Theorems, Hardness of Approximation in P, Fine-grained Complexity Theory}
}
Document
Track A: Algorithms, Complexity and Games
Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution

Authors: Karl Bringmann and Alejandro Cassis

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We present new exact and approximation algorithms for 0-1-Knapsack and Unbounded Knapsack:
- Exact Algorithm for 0-1-Knapsack: 0-1-Knapsack has known algorithms running in time Õ(n + min{n ⋅ OPT, n ⋅ W, OPT², W²}) [Bellman '57], where n is the number of items, W is the weight budget, and OPT is the optimal profit. We present an algorithm running in time Õ(n + (W + OPT)^{1.5}). This improves the running time in case n, W, OPT are roughly equal.
- Exact Algorithm for Unbounded Knapsack: Unbounded Knapsack has known algorithms running in time Õ(n + min{n ⋅ p_max, n ⋅ w_max, p_max², w_max²}) [Axiotis, Tzamos '19; Jansen, Rohwedder '19; Chan, He '22], where n is the number of items, w_max is the largest weight of any item, and p_max is the largest profit of any item. We present an algorithm running in time Õ(n + (p_max + w_max)^{1.5}), giving a similar improvement as for 0-1-Knapsack.
- Approximating Unbounded Knapsack with Resource Augmentation: Unbounded Knapsack has a known FPTAS with running time Õ(min{n/ε, n + 1/ε²}) [Jansen, Kraft '18]. We study weak approximation algorithms, which approximate the optimal profit but are allowed to overshoot the weight constraint (i.e., resource augmentation). We present the first approximation scheme for Unbounded Knapsack in this setting, achieving running time Õ(n + 1/ε^{1.5}). Along the way, we also give a simpler FPTAS with a lower-order improvement in the standard setting.
For all of these problem settings the previously known results had matching conditional lower bounds. We avoid these lower bounds in the approximation setting by allowing resource augmentation, and in the exact setting by analyzing the time complexity in terms of weight and profit parameters (instead of only weight or only profit parameters). Our algorithms can be seen as reductions to Min-Plus-Convolution on monotone sequences with bounded entries. These structured instances of Min-Plus-Convolution can be solved in time O(n^{1.5}) [Chi, Duan, Xie, Zhang '22] (in contrast to the conjectured n^{2-o(1)} lower bound for the general case). We complement our results by showing reductions in the opposite direction: achieving our results with the constant 1.5 replaced by any constant < 2 implies subquadratic algorithms for Min-Plus-Convolution on monotone sequences with bounded entries.
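For reference, the Õ(n ⋅ W) term in the Bellman '57 bound cited in the first item comes from the classic dynamic program over weight budgets; a minimal sketch:

def knapsack_bellman(items, W):
    """Bellman's O(n·W) dynamic program for 0-1-Knapsack: dp[w] is the
    best profit achievable with weight budget w. Iterating w downwards
    ensures each item is used at most once."""
    dp = [0] * (W + 1)
    for weight, profit in items:
        for w in range(W, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + profit)
    return dp[W]

# Items (weight, profit) with budget 5: picking (2,3) and (3,6) yields 9.
print(knapsack_bellman([(2, 3), (3, 4), (3, 6)], 5))   # 9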

Cite as

Karl Bringmann and Alejandro Cassis. Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 31:1-31:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.ICALP.2022.31,
  author =	{Bringmann, Karl and Cassis, Alejandro},
  title =	{{Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{31:1--31:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.31},
  URN =		{urn:nbn:de:0030-drops-163727},
  doi =		{10.4230/LIPIcs.ICALP.2022.31},
  annote =	{Keywords: Knapsack, Approximation Schemes, Fine-Grained Complexity, Min-Plus Convolution}
}
  • Refine by Author
  • 30 Bringmann, Karl
  • 17 Demaine, Erik D.
  • 7 Künnemann, Marvin
  • 7 van Leeuwen, Erik Jan
  • 5 Lynch, Jayson

  • Refine by Classification
  • 14 Theory of computation → Computational geometry
  • 12 Theory of computation → Problems, reductions and completeness
  • 11 Theory of computation → Design and analysis of algorithms
  • 4 Theory of computation → Approximation algorithms analysis
  • 4 Theory of computation → Parameterized complexity and exact algorithms

  • Refine by Keyword
  • 5 Fine-Grained Complexity
  • 4 Hardness in P
  • 4 NP-hardness
  • 4 Property Testing
  • 3 Fréchet distance

  • Refine by Type
  • 91 document

  • Refine by Publication Year
  • 15 2022
  • 12 2020
  • 11 2019
  • 10 2018
  • 9 2016
