35 Search Results for "Sülflow, Andre"


Document
SAT Encodings and Beyond (Dagstuhl Seminar 23261)

Authors: Marijn J. H. Heule, Inês Lynce, Stefan Szeider, and Andre Schidler

Published in: Dagstuhl Reports, Volume 13, Issue 6 (2024)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 23261 "SAT Encodings and Beyond." The seminar facilitated an intense examination and discussion of current results and challenges related to encodings for SAT and related solving paradigms. The seminar featured presentations and group work that provided theoretical, practical, and industrial viewpoints. The goal was to foster more profound insights and advancements in encoding techniques, which are pivotal in enhancing solvers' efficiency.
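
To make "encoding" concrete (an illustration added here, not part of the report): even a constraint as simple as "at most one of x1..xn is true" admits several CNF encodings with different size and propagation trade-offs, which is exactly the design space the seminar examined. A minimal Python sketch of two textbook encodings, with clauses as lists of DIMACS-style signed integers:

# Two standard CNF encodings of the at-most-one constraint (illustrative
# sketch; assumes n >= 2 variables, numbered 1..n).

def amo_pairwise(xs):
    # O(n^2) clauses: for every pair, at least one variable is false.
    return [[-a, -b] for i, a in enumerate(xs) for b in xs[i + 1:]]

def amo_sequential(xs, aux_start):
    # Sinz's sequential counter: O(n) clauses and n-1 auxiliary variables
    # s_i meaning "some x_j with j <= i is true".
    n = len(xs)
    s = list(range(aux_start, aux_start + n - 1))
    clauses = [[-xs[0], s[0]]]
    for i in range(1, n - 1):
        clauses += [[-xs[i], s[i]], [-s[i - 1], s[i]], [-xs[i], -s[i - 1]]]
    clauses.append([-xs[n - 1], -s[n - 2]])
    return clauses

print(amo_pairwise([1, 2, 3]))       # [[-1, -2], [-1, -3], [-2, -3]]
print(amo_sequential([1, 2, 3], 4))  # linear-size alternative, aux vars 4, 5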

Cite as

Marijn J. H. Heule, Inês Lynce, Stefan Szeider, and Andre Schidler. SAT Encodings and Beyond (Dagstuhl Seminar 23261). In Dagstuhl Reports, Volume 13, Issue 6, pp. 106-122, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{heule_et_al:DagRep.13.6.106,
  author =	{Heule, Marijn J. H. and Lynce, In\^{e}s and Szeider, Stefan and Schidler, Andre},
  title =	{{SAT Encodings and Beyond (Dagstuhl Seminar 23261)}},
  pages =	{106--122},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2024},
  volume =	{13},
  number =	{6},
  editor =	{Heule, Marijn J. H. and Lynce, In\^{e}s and Szeider, Stefan and Schidler, Andre},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.13.6.106},
  URN =		{urn:nbn:de:0030-drops-196409},
  doi =		{10.4230/DagRep.13.6.106},
  annote =	{Keywords: constraint propagation, lower and upper bounds, problem formulation, propositional satisfiability, symmetry breaking}
}
Document
Correlating Theory and Practice in Finding Clubs and Plexes

Authors: Aleksander Figiel, Tomohiro Koana, André Nichterlein, and Niklas Wünsche

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
For solving NP-hard problems there is often a huge gap between theoretical guarantees and observed running times on real-world instances. As a first step towards tackling this issue, we propose an approach to quantify the correlation between theoretical and observed running times. We use two NP-hard problems related to finding large "cliquish" subgraphs in a given graph as a demonstration of this measure. More precisely, we focus on finding maximum s-clubs and s-plexes, i.e., graphs of diameter s and graphs where each vertex is adjacent to all but s vertices. Preprocessing based on Turing kernelization is a standard tool to tackle these problems, especially on sparse graphs. We provide a parameterized analysis of the Turing kernelizations and demonstrate their usefulness in practice. Moreover, we demonstrate that our measure indeed captures the correlation between these new theoretical running times and the observed ones.
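
For reference (a sketch under standard definitions matching the abstract, not the paper's implementation), the two target structures are easy to verify with networkx; the hard part, which the paper addresses, is finding maximum ones:

import networkx as nx

def is_s_club(G, S, s):
    # An s-club induces a connected subgraph of diameter at most s.
    H = G.subgraph(S)
    return nx.is_connected(H) and nx.diameter(H) <= s

def is_s_plex(G, S, s):
    # Standard s-plex condition: every vertex of S has degree at least
    # |S| - s within S, i.e. it is adjacent to all but at most s vertices.
    H = G.subgraph(S)
    return all(H.degree(v) >= len(S) - s for v in S)

G = nx.path_graph(4)               # 0 - 1 - 2 - 3
print(is_s_club(G, [0, 1, 2], 2))  # True: induced diameter is 2
print(is_s_plex(G, [0, 1, 2], 2))  # True: every vertex misses at most 2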

Cite as

Aleksander Figiel, Tomohiro Koana, André Nichterlein, and Niklas Wünsche. Correlating Theory and Practice in Finding Clubs and Plexes. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 47:1-47:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{figiel_et_al:LIPIcs.ESA.2023.47,
  author =	{Figiel, Aleksander and Koana, Tomohiro and Nichterlein, Andr\'{e} and W\"{u}nsche, Niklas},
  title =	{{Correlating Theory and Practice in Finding Clubs and Plexes}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{47:1--47:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.47},
  URN =		{urn:nbn:de:0030-drops-187000},
  doi =		{10.4230/LIPIcs.ESA.2023.47},
  annote =	{Keywords: Preprocessing, Turing kernelization, Pearson correlation coefficient}
}
Document
Invited Talk
Shifting programming education assessment from exercise outputs toward deeper comprehension (Invited Talk)

Authors: André L. Santos

Published in: OASIcs, Volume 112, 4th International Computer Programming Education Conference (ICPEC 2023)


Abstract
Practice and assessment in introductory programming courses are typically centered on problems that require students to write code to produce specific outputs. While these exercises are necessary and useful for providing practice and mastering syntax, their solutions may not effectively measure the learners' real understanding of programming concepts. Misconceptions and knowledge gaps may be hidden under an exercise solution with correct outputs. Furthermore, obtaining answers has never been easier than in the present era of chatbots, so why should we care (much) about the solutions? Learning a skill is a process that requires iteration and failure, in which feedback is of utmost importance. A programming exercise is a means to build up reasoning capabilities and strategic knowledge, not an end in itself. It is the process that matters most, not the exercise solution. Assessing whether the learning process was effective requires much more than checking outputs. I advocate that introductory programming learning could benefit from placing more emphasis on assessing learner comprehension than on checking outputs. Does this mean that we should not check whether the results are correct? Certainly not, but a significant part of the learning process would focus on assessing and providing feedback on the comprehension of the written code and its underlying concepts. Automated assessment systems would reflect this shift by comprising evaluation items for this purpose, with adequate feedback. Achieving this involves numerous challenges and innovative technical approaches. In this talk, I present an overview of past and future work on tools that integrate code comprehension aspects into the process of solving programming exercises.

Cite as

André L. Santos. Shifting programming education assessment from exercise outputs toward deeper comprehension (Invited Talk). In 4th International Computer Programming Education Conference (ICPEC 2023). Open Access Series in Informatics (OASIcs), Volume 112, pp. 1:1-1:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{santos:OASIcs.ICPEC.2023.1,
  author =	{Santos, Andr\'{e} L.},
  title =	{{Shifting programming education assessment from exercise outputs toward deeper comprehension}},
  booktitle =	{4th International Computer Programming Education Conference (ICPEC 2023)},
  pages =	{1:1--1:5},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-290-7},
  ISSN =	{2190-6807},
  year =	{2023},
  volume =	{112},
  editor =	{Peixoto de Queir\'{o}s, Ricardo Alexandre and Teixeira Pinto, M\'{a}rio Paulo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2023.1},
  URN =		{urn:nbn:de:0030-drops-184970},
  doi =		{10.4230/OASIcs.ICPEC.2023.1},
  annote =	{Keywords: Introductory programming, assessment, comprehension}
}
Document
Convolution Products on Double Categories and Categorification of Rule Algebras

Authors: Nicolas Behr, Paul-André Melliès, and Noam Zeilberger

Published in: LIPIcs, Volume 260, 8th International Conference on Formal Structures for Computation and Deduction (FSCD 2023)


Abstract
Motivated by compositional categorical rewriting theory, we introduce a convolution product over presheaves of double categories which generalizes the usual Day tensor product of presheaves of monoidal categories. One interesting aspect of the construction is that this convolution product is in general only oplax associative. For that reason, we identify several classes of double categories for which the convolution product is not just oplax associative, but fully associative. This includes in particular framed bicategories on the one hand, and double categories of compositional rewriting theories on the other. For the latter, we establish a formula which justifies the view that the convolution product categorifies the rule algebra product.
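
As standard background (a classical formula, not taken from the paper): in the presheaf convention, the Day convolution of presheaves F, G on a monoidal category (𝒞, ⊗), which the paper's construction generalizes to double categories, is given by the coend

\[
  (F \ast G)(c) \;=\; \int^{a,\,b \in \mathcal{C}} \mathcal{C}(c,\; a \otimes b) \times F(a) \times G(b).
\]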

Cite as

Nicolas Behr, Paul-André Melliès, and Noam Zeilberger. Convolution Products on Double Categories and Categorification of Rule Algebras. In 8th International Conference on Formal Structures for Computation and Deduction (FSCD 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 260, pp. 17:1-17:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{behr_et_al:LIPIcs.FSCD.2023.17,
  author =	{Behr, Nicolas and Melli\`{e}s, Paul-Andr\'{e} and Zeilberger, Noam},
  title =	{{Convolution Products on Double Categories and Categorification of Rule Algebras}},
  booktitle =	{8th International Conference on Formal Structures for Computation and Deduction (FSCD 2023)},
  pages =	{17:1--17:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-277-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{260},
  editor =	{Gaboardi, Marco and van Raamsdonk, Femke},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2023.17},
  URN =		{urn:nbn:de:0030-drops-180017},
  doi =		{10.4230/LIPIcs.FSCD.2023.17},
  annote =	{Keywords: Categorical rewriting, double pushout, sesqui-pushout, double categories, convolution product, presheaf categories, framed bicategories, opfibrations, rule algebra}
}
Document
Parameterized Lower Bounds for Problems in P via Fine-Grained Cross-Compositions

Authors: Klaus Heeger, André Nichterlein, and Rolf Niedermeier

Published in: LIPIcs, Volume 254, 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)


Abstract
We provide a general framework to exclude parameterized running times of the form O(l^β + n^γ) for problems that have polynomial running-time lower bounds under hypotheses from fine-grained complexity. Our framework is based on cross-compositions from parameterized complexity. We (conditionally) exclude running times of the form O(l^{γ/(γ-1) - ε} + n^γ) for any 1 < γ < 2 and ε > 0 for the following problems:
- Longest Common (Increasing) Subsequence: Given two length-n strings over an alphabet Σ (over ℕ) and l ∈ ℕ, is there a common (increasing) subsequence of length l in both strings?
- Discrete Fréchet Distance: Given two lists of n points each and k ∈ ℕ, is the Fréchet distance of the lists at most k? Here l is the maximum number of points by which one list is ahead of the other list in an optimum traversal.
- Planar Motion Planning: Given a set of n non-intersecting axis-parallel line-segment obstacles in the plane and a line-segment robot (called a rod), can the rod be moved to a specified target without touching any obstacles? Here l is the maximum number of segments any segment has in its vicinity.
Moreover, we exclude running times O(l^{2γ/(γ-1) - ε} + n^γ) for any 1 < γ < 3 and ε > 0 for:
- Negative Triangle: Given an edge-weighted graph with n vertices, is there a triangle whose sum of edge weights is negative? Here l is the order of a maximum connected component.
- Triangle Collection: Given a vertex-colored graph with n vertices, is there for each triple of colors a triangle whose vertices have these three colors? Here l is the order of a maximum connected component.
- 2nd Shortest Path: Given an n-vertex edge-weighted digraph, vertices s and t, and k ∈ ℕ, is the length of the second shortest s-t-path at most k? Here l is the directed feedback vertex set number.
Except for 2nd Shortest Path, all these running-time bounds are tight; that is, algorithms with running time O(l^{γ/(γ-1)} + n^γ) for any 1 < γ < 2 and O(l^{2γ/(γ-1)} + n^γ) for any 1 < γ < 3, respectively, are known. Our running-time lower bounds also imply lower bounds on kernelization algorithms for these problems.
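
To fix intuitions (an illustration, not the paper's algorithm): Negative Triangle, one of the problems above, has an obvious O(n³) brute-force solver; the paper's results concern how much the parameter l can or cannot help below that. A minimal sketch:

import itertools

def has_negative_triangle(n, w):
    # w maps frozenset({u, v}) to the edge weight; absent key means no edge.
    for a, b, c in itertools.combinations(range(n), 3):
        e1, e2, e3 = frozenset({a, b}), frozenset({b, c}), frozenset({a, c})
        if e1 in w and e2 in w and e3 in w and w[e1] + w[e2] + w[e3] < 0:
            return True
    return False

w = {frozenset({0, 1}): 1, frozenset({1, 2}): -3, frozenset({0, 2}): 1}
print(has_negative_triangle(3, w))  # True: 1 - 3 + 1 = -1 < 0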

Cite as

Klaus Heeger, André Nichterlein, and Rolf Niedermeier. Parameterized Lower Bounds for Problems in P via Fine-Grained Cross-Compositions. In 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 254, pp. 35:1-35:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{heeger_et_al:LIPIcs.STACS.2023.35,
  author =	{Heeger, Klaus and Nichterlein, Andr\'{e} and Niedermeier, Rolf},
  title =	{{Parameterized Lower Bounds for Problems in P via Fine-Grained Cross-Compositions}},
  booktitle =	{40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)},
  pages =	{35:1--35:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-266-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{254},
  editor =	{Berenbrink, Petra and Bouyer, Patricia and Dawar, Anuj and Kant\'{e}, Mamadou Moustapha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2023.35},
  URN =		{urn:nbn:de:0030-drops-176876},
  doi =		{10.4230/LIPIcs.STACS.2023.35},
  annote =	{Keywords: FPT in P, Kernelization, Decomposition}
}
Document
Applying a Cut-Based Data Reduction Rule for Weighted Cluster Editing in Polynomial Time

Authors: Hjalmar Schulz, André Nichterlein, Rolf Niedermeier, and Christopher Weyand

Published in: LIPIcs, Volume 249, 17th International Symposium on Parameterized and Exact Computation (IPEC 2022)


Abstract
Given an undirected graph, the task in Cluster Editing is to insert and delete a minimum number of edges to obtain a cluster graph, that is, a disjoint union of cliques. In the weighted variant each vertex pair comes with a weight and the edge modifications have to be of minimum overall weight. In this work, we provide the first polynomial-time algorithm to apply the following data reduction rule of Böcker et al. [Algorithmica, 2011] for Weighted Cluster Editing: For a graph G = (V,E), merge a vertex set S ⊆ V into a single vertex if the minimum cut of G[S] is at least the combined cost of inserting all missing edges within G[S] plus the cost of cutting all edges from S to the rest of the graph. Complementing our theoretical findings, we experimentally demonstrate the effectiveness of the data reduction rule, shrinking real-world test instances from the PACE Challenge 2021 by around 24% while previous heuristic implementations of the data reduction rule only achieve 8%.
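
The rule's condition is easy to state operationally. The following sketch (assuming unit insertion/deletion costs; the paper's contribution is applying the rule in polynomial time, not this per-set check) tests the condition for one given candidate set S using networkx:

import itertools
import networkx as nx

def merge_condition_holds(G, S):
    # Böcker et al. condition for merging S: min-cut of G[S] at least the
    # cost of completing G[S] into a clique plus the cost of cutting S off.
    H = G.subgraph(S)
    if len(S) < 2 or not nx.is_connected(H):
        return False
    mincut, _ = nx.stoer_wagner(H)  # global minimum cut of G[S]
    missing = sum(1 for u, v in itertools.combinations(S, 2)
                  if not H.has_edge(u, v))  # edges to insert within S
    boundary = sum(1 for u, v in G.edges()
                   if (u in S) != (v in S))  # edges from S to the rest
    return mincut >= missing + boundary

G = nx.complete_graph(4)                       # an isolated clique
print(merge_condition_holds(G, [0, 1, 2, 3]))  # True: min-cut 3, costs 0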

Cite as

Hjalmar Schulz, André Nichterlein, Rolf Niedermeier, and Christopher Weyand. Applying a Cut-Based Data Reduction Rule for Weighted Cluster Editing in Polynomial Time. In 17th International Symposium on Parameterized and Exact Computation (IPEC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 249, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{schulz_et_al:LIPIcs.IPEC.2022.25,
  author =	{Schulz, Hjalmar and Nichterlein, Andr\'{e} and Niedermeier, Rolf and Weyand, Christopher},
  title =	{{Applying a Cut-Based Data Reduction Rule for Weighted Cluster Editing in Polynomial Time}},
  booktitle =	{17th International Symposium on Parameterized and Exact Computation (IPEC 2022)},
  pages =	{25:1--25:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-260-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{249},
  editor =	{Dell, Holger and Nederlof, Jesper},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2022.25},
  URN =		{urn:nbn:de:0030-drops-173816},
  doi =		{10.4230/LIPIcs.IPEC.2022.25},
  annote =	{Keywords: Correlation Clustering, Minimum Cut, Maximum s-t-Flow}
}
Document
Privacy Protection of Automated and Self-Driving Vehicles (Dagstuhl Seminar 22042)

Authors: Frank Kargl, Ioannis Krontiris, André Weimerskirch, Ian Williams, and Nataša Trkulja

Published in: Dagstuhl Reports, Volume 12, Issue 1 (2022)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 22042 "Privacy Protection of Automated and Self-Driving Vehicles". The seminar reviewed existing privacy-enhancing technologies, standards, tools, and frameworks for protecting personal information in the context of automated and self-driving vehicles (AVs). We specifically focused on where such existing techniques clash with the requirements of an AV and its data processing, and identified the major roadblocks on the way to deploying privacy protection in AVs from a legal, technical, business, and ethical perspective. The seminar therefore took an interdisciplinary approach, involving experts in autonomous and connected driving, privacy protection, and legal data protection. This report summarizes the discussions and findings during the seminar and includes the abstracts of talks and a report from the working groups.

Cite as

Frank Kargl, Ioannis Krontiris, André Weimerskirch, Ian Williams, and Nataša Trkulja. Privacy Protection of Automated and Self-Driving Vehicles (Dagstuhl Seminar 22042). In Dagstuhl Reports, Volume 12, Issue 1, pp. 83-100, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{kargl_et_al:DagRep.12.1.83,
  author =	{Kargl, Frank and Krontiris, Ioannis and Weimerskirch, Andr\'{e} and Williams, Ian and Trkulja, Nata\v{s}a},
  title =	{{Privacy Protection of Automated and Self-Driving Vehicles (Dagstuhl Seminar 22042)}},
  pages =	{83--100},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2022},
  volume =	{12},
  number =	{1},
  editor =	{Kargl, Frank and Krontiris, Ioannis and Weimerskirch, Andr\'{e} and Williams, Ian and Trkulja, Nata\v{s}a},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.12.1.83},
  URN =		{urn:nbn:de:0030-drops-169220},
  doi =		{10.4230/DagRep.12.1.83},
  annote =	{Keywords: automotive security and privacy, privacy and data protection}
}
Document
Weighted Model Counting with Twin-Width

Authors: Robert Ganian, Filip Pokrývka, André Schidler, Kirill Simonov, and Stefan Szeider

Published in: LIPIcs, Volume 236, 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022)


Abstract
Bonnet et al. (FOCS 2020) introduced the graph invariant twin-width and showed that many NP-hard problems are tractable for graphs of bounded twin-width, generalizing similar results for other width measures, including treewidth and clique-width. In this paper, we investigate the use of twin-width for solving the propositional satisfiability problem (SAT) and propositional model counting. We particularly focus on Bounded-ones Weighted Model Counting (BWMC), which takes as input a CNF formula F along with a bound k and asks for the weighted sum of all models with at most k positive literals. BWMC generalizes not only SAT but also (weighted) model counting. We develop the notion of "signed" twin-width of CNF formulas and establish that BWMC is fixed-parameter tractable when parameterized by the certified signed twin-width of F plus k. We show that this result is tight: it is neither possible to drop the bound k nor use the vanilla twin-width instead if one wishes to retain fixed-parameter tractability, even for the easier problem SAT. Our theoretical results are complemented with an empirical evaluation and comparison of signed twin-width on various classes of CNF formulas.
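
As a reference semantics (a brute-force sketch that is exponential in n and feasible only for toy formulas; the paper's point is fixed-parameter tractability via signed twin-width), BWMC can be stated directly in Python:

from itertools import product

def bwmc(n, clauses, weights, k):
    # Variables are 1..n; clauses are lists of signed ints;
    # weights[v] = (weight if v is false, weight if v is true).
    total = 0.0
    for assignment in product([False, True], repeat=n):
        if sum(assignment) > k:  # more than k positive literals
            continue
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in c)
               for c in clauses):
            w = 1.0
            for v in range(n):
                w *= weights[v + 1][assignment[v]]
            total += w
    return total

# (x1 ∨ x2) ∧ (¬x1 ∨ x2), uniform weights, at most k = 1 positive literal:
print(bwmc(2, [[1, 2], [-1, 2]], {1: (1.0, 1.0), 2: (1.0, 1.0)}, 1))  # 1.0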

Cite as

Robert Ganian, Filip Pokrývka, André Schidler, Kirill Simonov, and Stefan Szeider. Weighted Model Counting with Twin-Width. In 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 236, pp. 15:1-15:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{ganian_et_al:LIPIcs.SAT.2022.15,
  author =	{Ganian, Robert and Pokr\'{y}vka, Filip and Schidler, Andr\'{e} and Simonov, Kirill and Szeider, Stefan},
  title =	{{Weighted Model Counting with Twin-Width}},
  booktitle =	{25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022)},
  pages =	{15:1--15:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-242-6},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{236},
  editor =	{Meel, Kuldeep S. and Strichman, Ofer},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2022.15},
  URN =		{urn:nbn:de:0030-drops-166896},
  doi =		{10.4230/LIPIcs.SAT.2022.15},
  annote =	{Keywords: Weighted model counting, twin-width, parameterized complexity, SAT}
}
Document
Invited Talk
The Manifold Joys of Sampling (Invited Talk)

Authors: Yin Tat Lee and Santosh S. Vempala

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We survey recent progress and many open questions in the field of sampling high-dimensional distributions, with specific focus on sampling with non-Euclidean metrics.

Cite as

Yin Tat Lee and Santosh S. Vempala. The Manifold Joys of Sampling (Invited Talk). In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{lee_et_al:LIPIcs.ICALP.2022.4,
  author =	{Lee, Yin Tat and Vempala, Santosh S.},
  title =	{{The Manifold Joys of Sampling}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.4},
  URN =		{urn:nbn:de:0030-drops-163459},
  doi =		{10.4230/LIPIcs.ICALP.2022.4},
  annote =	{Keywords: Sampling, Diffusion, Optimization, High Dimension}
}
Document
Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves

Authors: Karl Bringmann, Sándor Kisfaludi‑Bak, Marvin Künnemann, Dániel Marx, and André Nusser

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
The Dynamic Time Warping (DTW) distance is a popular measure of similarity for a variety of sequence data. For comparing polygonal curves π, σ in ℝ^d, it provides a robust, outlier-insensitive alternative to the Fréchet distance. However, like the Fréchet distance, the DTW distance is not invariant under translations. Can we efficiently optimize the DTW distance of π and σ under arbitrary translations, to compare the curves' shape irrespective of their absolute location? There are surprisingly few works in this direction, which may be due to its computational intricacy: For the Euclidean norm, this problem contains as a special case the geometric median problem, which provably admits no exact algebraic algorithm (that is, no algorithm using only addition, multiplication, and k-th roots). We thus investigate exact algorithms for non-Euclidean norms as well as approximation algorithms for the Euclidean norm. For the L₁ norm in ℝ^d, we provide an 𝒪(n^{2(d+1)})-time algorithm, i.e., an exact polynomial-time algorithm for constant d. Here and below, n bounds the curves' complexities. For the Euclidean norm in ℝ², we show that a simple problem-specific insight leads to a (1+ε)-approximation in time 𝒪(n³/ε²). We then show how to obtain a subcubic 𝒪̃(n^{2.5}/ε²) time algorithm with significant new ideas; this time comes close to the well-known quadratic time barrier for computing DTW for fixed translations. Technically, the algorithm is obtained by speeding up repeated DTW distance estimations using a dynamic data structure for maintaining shortest paths in weighted planar digraphs. Crucially, we show how to traverse a candidate set of translations using space-filling curves in a way that incurs only few updates to the data structure. We hope that our results will facilitate the use of DTW under translation both in theory and practice, and inspire similar algorithmic approaches for related geometric optimization problems.
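
For orientation (textbook material, not the paper's algorithm): at a fixed translation, the DTW distance is computed by the classic O(nm) dynamic program below; the paper's problem is minimizing this quantity over all translations of one curve.

import math

def dtw(P, Q, dist=lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])):
    # D[i][j] = cheapest monotone traversal cost of P[:i] against Q[:j].
    n, m = len(P), len(Q)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(P[i - 1], Q[j - 1]) + min(D[i - 1][j],
                                                     D[i][j - 1],
                                                     D[i - 1][j - 1])
    return D[n][m]

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (2, 1)]
print(dtw(P, Q))  # 2 + sqrt(2): point distances along the cheapest traversal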

Cite as

Karl Bringmann, Sándor Kisfaludi‑Bak, Marvin Künnemann, Dániel Marx, and André Nusser. Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 20:1-20:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.20,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Marx, D\'{a}niel and Nusser, Andr\'{e}},
  title =	{{Dynamic Time Warping Under Translation: Approximation Guided by Space-Filling Curves}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{20:1--20:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.20},
  URN =		{urn:nbn:de:0030-drops-160287},
  doi =		{10.4230/LIPIcs.SoCG.2022.20},
  annote =	{Keywords: Dynamic Time Warping, Sequence Similarity Measures}
}
Document
Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs

Authors: Karl Bringmann, Sándor Kisfaludi‑Bak, Marvin Künnemann, André Nusser, and Zahra Parsaeian

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
We initiate the study of diameter computation in geometric intersection graphs from the fine-grained complexity perspective. A geometric intersection graph is a graph whose vertices correspond to some shapes in d-dimensional Euclidean space, such as balls, segments, or hypercubes, and whose edges correspond to pairs of intersecting shapes. The diameter of a graph is the largest distance realized by a pair of vertices in the graph. Computing the diameter in near-quadratic time is possible in several classes of intersection graphs [Chan and Skrepetos 2019], but it is not at all clear if these algorithms are optimal, especially since in the related class of planar graphs the diameter can be computed in 𝒪̃(n^{5/3}) time [Cabello 2019, Gawrychowski et al. 2021]. In this work we (conditionally) rule out sub-quadratic algorithms in several classes of intersection graphs, i.e., algorithms of running time 𝒪(n^{2-δ}) for some δ > 0. In particular, there are no sub-quadratic algorithms already for fat objects in small dimensions: unit balls in ℝ³ or congruent equilateral triangles in ℝ². For unit segments and congruent equilateral triangles, we can even rule out strong sub-quadratic approximations already in ℝ². It seems that the hardness of approximation may also depend on dimensionality: for axis-parallel unit hypercubes in ℝ^{12}, distinguishing between diameter 2 and 3 needs quadratic time (ruling out (3/2-ε)-approximations), whereas for axis-parallel unit squares, we give an algorithm that distinguishes between diameter 2 and 3 in near-linear time. Note that many of our lower bounds match the best known algorithms up to sub-polynomial factors. Ultimately, this fine-grained perspective may enable us to determine for which shapes we can have efficient algorithms and approximation schemes for diameter computation.
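
As a baseline illustration (a sketch, not from the paper): the quadratic approach the lower bounds are measured against simply builds the intersection graph and runs BFS from every vertex, e.g. for unit disks:

import itertools
import math
import networkx as nx

def unit_disk_graph(points, radius=1.0):
    # Vertices are disk centers; two disks intersect iff their centers are
    # at distance at most 2 * radius.
    G = nx.Graph()
    G.add_nodes_from(range(len(points)))
    for i, j in itertools.combinations(range(len(points)), 2):
        if math.dist(points[i], points[j]) <= 2 * radius:
            G.add_edge(i, j)
    return G

pts = [(0.0, 0.0), (1.5, 0.0), (3.0, 0.0)]
G = unit_disk_graph(pts)
print(nx.diameter(G))  # 2: the outer disks meet only via the middle one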

Cite as

Karl Bringmann, Sándor Kisfaludi‑Bak, Marvin Künnemann, André Nusser, and Zahra Parsaeian. Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 21:1-21:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bringmann_et_al:LIPIcs.SoCG.2022.21,
  author =	{Bringmann, Karl and Kisfaludi-Bak, S\'{a}ndor and K\"{u}nnemann, Marvin and Nusser, Andr\'{e} and Parsaeian, Zahra},
  title =	{{Towards Sub-Quadratic Diameter Computation in Geometric Intersection Graphs}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{21:1--21:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.21},
  URN =		{urn:nbn:de:0030-drops-160294},
  doi =		{10.4230/LIPIcs.SoCG.2022.21},
  annote =	{Keywords: Hardness in P, Geometric Intersection Graph, Graph Diameter, Orthogonal Vectors, Hyperclique Detection}
}
Document
Covering Many (Or Few) Edges with k Vertices in Sparse Graphs

Authors: Tomohiro Koana, Christian Komusiewicz, André Nichterlein, and Frank Sommer

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
We study the following two fixed-cardinality optimization problems (a maximization and a minimization variant). For a fixed α between zero and one we are given a graph and two numbers k ∈ ℕ and t ∈ ℚ. The task is to find a vertex subset S of exactly k vertices that has value at least (resp. at most for minimization) t. Here, the value of a vertex set computes as α times the number of edges with exactly one endpoint in S plus 1-α times the number of edges with both endpoints in S. These two problems generalize many prominent graph problems, such as Densest k-Subgraph, Sparsest k-Subgraph, Partial Vertex Cover, and Max (k,n-k)-Cut. In this work, we complete the picture of their parameterized complexity on several types of sparse graphs that are described by structural parameters. In particular, we provide kernelization algorithms and kernel lower bounds for these problems. A somewhat surprising consequence of our kernelizations is that Partial Vertex Cover and Max (k,n-k)-Cut not only behave in the same way but that the kernels for both problems can be obtained by the same algorithms.
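
For small instances the objective can be evaluated by brute force (an illustrative sketch, not the paper's kernelization); the α = 0 and α = 1 extremes recover Densest k-Subgraph and Max (k,n-k)-Cut, respectively:

import itertools
import networkx as nx

def best_k_set(G, k, alpha):
    # Maximize val(S) = alpha * (#edges leaving S) + (1 - alpha) *
    # (#edges inside S) over all k-vertex sets S.
    best = None
    for S in itertools.combinations(G.nodes(), k):
        S = set(S)
        inside = G.subgraph(S).number_of_edges()
        leaving = sum(1 for u, v in G.edges() if (u in S) != (v in S))
        val = alpha * leaving + (1 - alpha) * inside
        if best is None or val > best[0]:
            best = (val, S)
    return best

G = nx.cycle_graph(5)
print(best_k_set(G, 2, 0.0))  # alpha = 0: densest pair, i.e. an edge
print(best_k_set(G, 2, 1.0))  # alpha = 1: a non-adjacent pair cuts 4 edges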

Cite as

Tomohiro Koana, Christian Komusiewicz, André Nichterlein, and Frank Sommer. Covering Many (Or Few) Edges with k Vertices in Sparse Graphs. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 42:1-42:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{koana_et_al:LIPIcs.STACS.2022.42,
  author =	{Koana, Tomohiro and Komusiewicz, Christian and Nichterlein, Andr\'{e} and Sommer, Frank},
  title =	{{Covering Many (Or Few) Edges with k Vertices in Sparse Graphs}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{42:1--42:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.42},
  URN =		{urn:nbn:de:0030-drops-158525},
  doi =		{10.4230/LIPIcs.STACS.2022.42},
  annote =	{Keywords: Parameterized Complexity, Kernelization, Partial Vertex Cover, Densest k-Subgraph, Max (k,n-k)-Cut, Degeneracy}
}
Document
Derzis: A Path Aware Linked Data Crawler

Authors: André Fernandes dos Santos and José Paulo Leal

Published in: OASIcs, Volume 94, 10th Symposium on Languages, Applications and Technologies (SLATE 2021)


Abstract
Consuming Semantic Web data presents several challenges, from the number of datasets it is composed of, to the (very) large size of some of those datasets and the uncertain availability of querying endpoints. According to its core principles, linked data can be accessed simply by dereferencing the IRIs of RDF resources. Compared to dataset dumps or SPARQL endpoints, this is a lightweight alternative for both clients and servers. The linked data interface does not support complex querying, but using it recursively may suffice to gather information about RDF resources, or to extract a relevant sub-graph which can then be processed and queried using other methods. We present Derzis, an open-source Semantic Web crawler capable of traversing the linked data cloud starting from a set of seed resources. Derzis maintains information about the paths followed while crawling, which makes it possible to impose property-path-based restrictions on the crawling frontier.
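
A single crawling step of this kind can be sketched with rdflib (an illustration, not Derzis itself; the seed IRI and property are example inputs): dereference an IRI and keep only the neighbours reached through an allowed property, i.e. a one-step property-path restriction on the frontier.

import rdflib

def crawl_step(iri, allowed_properties):
    # HTTP GET with RDF content negotiation, then collect outgoing IRIs
    # reachable through the allowed properties.
    g = rdflib.Graph()
    g.parse(iri)
    return {str(o) for s, p, o in g
            if str(s) == iri and str(p) in allowed_properties
            and isinstance(o, rdflib.URIRef)}

frontier = crawl_step("http://dbpedia.org/resource/Porto",
                      {"http://dbpedia.org/ontology/country"})
print(frontier)  # seeds for the next round of dereferencing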

Cite as

André Fernandes dos Santos and José Paulo Leal. Derzis: A Path Aware Linked Data Crawler. In 10th Symposium on Languages, Applications and Technologies (SLATE 2021). Open Access Series in Informatics (OASIcs), Volume 94, pp. 2:1-2:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{santos_et_al:OASIcs.SLATE.2021.2,
  author =	{Santos, Andr\'{e} Fernandes dos and Leal, Jos\'{e} Paulo},
  title =	{{Derzis: A Path Aware Linked Data Crawler}},
  booktitle =	{10th Symposium on Languages, Applications and Technologies (SLATE 2021)},
  pages =	{2:1--2:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-202-0},
  ISSN =	{2190-6807},
  year =	{2021},
  volume =	{94},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Sim\~{o}es, Alberto and Portela, Filipe and Pereira, Maria Jo\~{a}o},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2021.2},
  URN =		{urn:nbn:de:0030-drops-144198},
  doi =		{10.4230/OASIcs.SLATE.2021.2},
  annote =	{Keywords: Semantic web, linked open data, RDF, crawler}
}
Document
DataGen: JSON/XML Dataset Generator

Authors: Filipa Alves dos Santos, Hugo André Coelho Cardoso, João da Cunha e Costa, Válter Ferreira Picas Carvalho, and José Carlos Ramalho

Published in: OASIcs, Volume 94, 10th Symposium on Languages, Applications and Technologies (SLATE 2021)


Abstract
In this document we describe the steps towards the implementation of DataGen, a versatile and powerful tool for quick prototyping and testing of software applications. Few current solutions offer both the complexity and the scalability necessary to generate adequate datasets to feed a data API or a more complex application, enabling tests with appropriate data volume and complexity. DataGen's core is a Domain-Specific Language (DSL) created to specify datasets. This language has undergone several updates: repeating fields (with no limit), fuzzy fields (statistically generated), lists, higher-order functions over lists, and custom-made transformation functions. The final result is a complex algebra that allows the generation of very complex datasets coping with very complex requirements. Throughout the paper we give several examples of the possibilities. After generating a dataset, DataGen lets the user generate a RESTful data API serving that dataset, creating a running prototype. This solution has already been used in real-life cases, described in more detail throughout the paper, in which it successfully created the intended datasets. These allowed the applications' performance to be tested and the right adjustments to be made. The tool is currently being deployed for general use.
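
For flavour only (a toy in plain Python with invented field names; DataGen's actual DSL is described in the paper), here are the two features the abstract highlights, repeating fields and statistically generated fuzzy fields:

import json
import random

def generate_record():
    return {
        "name": random.choice(["Ana", "Bruno", "Carla"]),  # fuzzy: sampled
        "scores": [random.randint(0, 20)                   # repeating field
                   for _ in range(random.randint(1, 5))],
        "height_cm": round(random.gauss(170, 10), 1),      # fuzzy: Gaussian
    }

dataset = [generate_record() for _ in range(3)]
print(json.dumps(dataset, indent=2))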

Cite as

Filipa Alves dos Santos, Hugo André Coelho Cardoso, João da Cunha e Costa, Válter Ferreira Picas Carvalho, and José Carlos Ramalho. DataGen: JSON/XML Dataset Generator. In 10th Symposium on Languages, Applications and Technologies (SLATE 2021). Open Access Series in Informatics (OASIcs), Volume 94, pp. 6:1-6:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{santos_et_al:OASIcs.SLATE.2021.6,
  author =	{Santos, Filipa Alves dos and Cardoso, Hugo Andr\'{e} Coelho and da Cunha e Costa, Jo\~{a}o and Carvalho, V\'{a}lter Ferreira Picas and Ramalho, Jos\'{e} Carlos},
  title =	{{DataGen: JSON/XML Dataset Generator}},
  booktitle =	{10th Symposium on Languages, Applications and Technologies (SLATE 2021)},
  pages =	{6:1--6:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-202-0},
  ISSN =	{2190-6807},
  year =	{2021},
  volume =	{94},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Sim\~{o}es, Alberto and Portela, Filipe and Pereira, Maria Jo\~{a}o},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2021.6},
  URN =		{urn:nbn:de:0030-drops-144239},
  doi =		{10.4230/OASIcs.SLATE.2021.6},
  annote =	{Keywords: JSON, XML, Data Generation, Open Source, REST API, Strapi, JavaScript, Node.js, Vue.js, Scalability, Fault Tolerance, Dataset, DSL, PEG.js, MongoDB}
}
Document
A Verified Decision Procedure for Univariate Real Arithmetic with the BKR Algorithm

Authors: Katherine Cordwell, Yong Kiam Tan, and André Platzer

Published in: LIPIcs, Volume 193, 12th International Conference on Interactive Theorem Proving (ITP 2021)


Abstract
We formalize the univariate fragment of Ben-Or, Kozen, and Reif’s (BKR) decision procedure for first-order real arithmetic in Isabelle/HOL. BKR’s algorithm has good potential for parallelism and was designed to be used in practice. Its key insight is a clever recursive procedure that computes the set of all consistent sign assignments for an input set of univariate polynomials while carefully managing intermediate steps to avoid exponential blowup from naively enumerating all possible sign assignments (this insight is fundamental for both the univariate case and the general case). Our proof combines ideas from BKR and a follow-up work by Renegar that are well-suited for formalization. The resulting proof outline allows us to build substantially on Isabelle/HOL’s libraries for algebra, analysis, and matrices. Our main extensions to existing libraries are also detailed.
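
The object BKR computes can be described concretely (a naive sampling sketch with sympy, not the formalized recursive procedure): the consistent sign assignments of univariate polynomials p1, ..., pm are exactly the sign vectors realized at their real roots and at one sample point per interval between consecutive roots.

import sympy

def consistent_sign_assignments(polys, x):
    # Sample at every root, between consecutive roots, and beyond the
    # extreme roots; collect the sign vector realized at each point.
    roots = sorted(set().union(*(set(sympy.real_roots(p, x)) for p in polys)))
    samples = list(roots)
    if roots:
        samples += [roots[0] - 1, roots[-1] + 1]
        samples += [(a + b) / 2 for a, b in zip(roots, roots[1:])]
    else:
        samples.append(sympy.Integer(0))
    return {tuple(sympy.sign(p.subs(x, pt)) for p in polys) for pt in samples}

x = sympy.Symbol('x')
print(consistent_sign_assignments([x, x**2 - 1], x))
# 7 sign vectors, e.g. (-1, 1) for x < -1, (0, -1) at x = 0, (1, 1) for x > 1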

Cite as

Katherine Cordwell, Yong Kiam Tan, and André Platzer. A Verified Decision Procedure for Univariate Real Arithmetic with the BKR Algorithm. In 12th International Conference on Interactive Theorem Proving (ITP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 193, pp. 14:1-14:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{cordwell_et_al:LIPIcs.ITP.2021.14,
  author =	{Cordwell, Katherine and Tan, Yong Kiam and Platzer, Andr\'{e}},
  title =	{{A Verified Decision Procedure for Univariate Real Arithmetic with the BKR Algorithm}},
  booktitle =	{12th International Conference on Interactive Theorem Proving (ITP 2021)},
  pages =	{14:1--14:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-188-7},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{193},
  editor =	{Cohen, Liron and Kaliszyk, Cezary},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2021.14},
  URN =		{urn:nbn:de:0030-drops-139099},
  doi =		{10.4230/LIPIcs.ITP.2021.14},
  annote =	{Keywords: quantifier elimination, matrix, theorem proving, real arithmetic}
}
  • Refine by Author
  • 5 Nichterlein, André
  • 5 van Renssen, André
  • 4 Melliès, Paul-André
  • 3 Koana, Tomohiro
  • 3 Korman, Matias

  • Refine by Classification
  • 6 Theory of computation → Parameterized complexity and exact algorithms
  • 4 Theory of computation → Graph algorithms analysis
  • 3 Theory of computation → Computational geometry
  • 2 Theory of computation → Logic and verification
  • 1 Applied computing → Computer-assisted instruction

  • Refine by Keyword
  • 3 Game semantics
  • 2 Kernelization
  • 2 Model checking
  • 2 RDF
  • 2 Semantics of programming languages

  • Refine by Type
  • 35 document

  • Refine by Publication Year
  • 7 2022
  • 6 2020
  • 4 2017
  • 4 2023
  • 3 2010
