Search Results

Documents authored by Schmid, Markus L.


Found 2 Possible Name Variants:

Schmid, Markus L.

Document
Subsequences with Generalised Gap Constraints: Upper and Lower Complexity Bounds

Authors: Florin Manea, Jonas Richardsen, and Markus L. Schmid

Published in: LIPIcs, Volume 296, 35th Annual Symposium on Combinatorial Pattern Matching (CPM 2024)


Abstract
For two strings u, v over some alphabet A, we investigate the problem of embedding u into v as a subsequence in the presence of generalised gap constraints. A generalised gap constraint is a triple (i, j, C_{i, j}), where 1 ≤ i < j ≤ |u| and C_{i, j} ⊆ A^*. Embedding u as a subsequence into v such that (i, j, C_{i, j}) is satisfied means that if u[i] and u[j] are mapped to v[k] and v[𝓁], respectively, then the induced gap v[k + 1..𝓁 - 1] must be a string from C_{i, j}. This generalises the setting recently investigated in [Day et al., ISAAC 2022], where only gap constraints of the form C_{i, i + 1} are considered, as well as the setting from [Kosche et al., RP 2022], where only gap constraints of the form C_{1, |u|} are considered. We show that subsequence matching under generalised gap constraints is NP-hard, and we complement this general lower bound with a thorough (parameterised) complexity analysis. Moreover, we identify several efficiently solvable subclasses that result from restricting the interval structure induced by the generalised gap constraints.
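To make the definition concrete, here is a minimal, hypothetical Python sketch (not taken from the paper) that checks by brute force whether u embeds into v under a set of generalised gap constraints, with each constraint set C_{i,j} supplied as a predicate over the induced gap:

from itertools import combinations

def embeds_with_gap_constraints(u, v, constraints):
    # Brute-force check: is there an embedding of u into v as a subsequence
    # such that every generalised gap constraint (i, j, C_{i,j}) is satisfied?
    # `constraints` maps a pair (i, j) of 1-based positions in u (i < j) to a
    # predicate over the induced gap v[k+1..l-1]; exponential, illustration only.
    for positions in combinations(range(len(v)), len(u)):
        if any(u[i] != v[p] for i, p in enumerate(positions)):
            continue  # not an embedding of u at all
        if all(allowed(v[positions[i - 1] + 1 : positions[j - 1]])
               for (i, j), allowed in constraints.items()):
            return True
    return False

# C_{1,2}: the gap between the images of u[1] and u[2] must contain no 'y'
no_y = {(1, 2): lambda gap: "y" not in gap}
print(embeds_with_gap_constraints("ab", "axxb", no_y))   # True  (gap "xx")
print(embeds_with_gap_constraints("ab", "axxyb", no_y))  # False (gap "xxy")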

Cite as

Florin Manea, Jonas Richardsen, and Markus L. Schmid. Subsequences with Generalised Gap Constraints: Upper and Lower Complexity Bounds. In 35th Annual Symposium on Combinatorial Pattern Matching (CPM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 296, pp. 22:1-22:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{manea_et_al:LIPIcs.CPM.2024.22,
  author =	{Manea, Florin and Richardsen, Jonas and Schmid, Markus L.},
  title =	{{Subsequences with Generalised Gap Constraints: Upper and Lower Complexity Bounds}},
  booktitle =	{35th Annual Symposium on Combinatorial Pattern Matching (CPM 2024)},
  pages =	{22:1--22:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-326-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{296},
  editor =	{Inenaga, Shunsuke and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2024.22},
  URN =		{urn:nbn:de:0030-drops-201329},
  doi =		{10.4230/LIPIcs.CPM.2024.22},
  annote =	{Keywords: String algorithms, subsequences with gap constraints, pattern matching, fine-grained complexity, conditional lower bounds, parameterised complexity}
}
Document
Subsequences with Gap Constraints: Complexity Bounds for Matching and Analysis Problems

Authors: Joel D. Day, Maria Kosche, Florin Manea, and Markus L. Schmid

Published in: LIPIcs, Volume 248, 33rd International Symposium on Algorithms and Computation (ISAAC 2022)


Abstract
We consider subsequences with gap constraints, i. e., length-k subsequences p that can be embedded into a string w such that the induced gaps (i. e., the factors of w between the positions to which p is mapped to) satisfy given gap constraints gc = (C_1, C_2, …, C_{k-1}); we call p a gc-subsequence of w. In the case where the gap constraints gc are defined by lower and upper length bounds C_i = (L^-_i, L^+_i) ∈ ℕ² and/or regular languages C_i ∈ REG, we prove tight (conditional on the orthogonal vectors (OV) hypothesis) complexity bounds for checking whether a given p is a gc-subsequence of a string w. We also consider the whole set of all gc-subsequences of a string, and investigate the complexity of the universality, equivalence and containment problems for these sets of gc-subsequences.
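For the special case of lower/upper length bounds C_i = (L^-_i, L^+_i), the matching question can be illustrated with a small dynamic programme over reachable positions; the following Python sketch is purely didactic and is not one of the (conditionally optimal) algorithms analysed in the paper:

def is_gc_subsequence(p, w, gaps):
    # Decide whether p is a gc-subsequence of w, where gaps[i] = (lo, hi)
    # bounds the length of the gap between the images of p[i] and p[i+1].
    # Simple dynamic programme over the sets of reachable positions in w;
    # a didactic O(|p| * |w|^2) sketch.
    if not p:
        return True
    reachable = {k for k, c in enumerate(w) if c == p[0]}
    for i in range(1, len(p)):
        lo, hi = gaps[i - 1]
        reachable = {l for l, c in enumerate(w)
                     if c == p[i] and any(lo <= l - k - 1 <= hi for k in reachable)}
        if not reachable:
            return False
    return True

# the single gap must have length between 1 and 2
print(is_gc_subsequence("ab", "axxb", [(1, 2)]))  # True  (gap "xx")
print(is_gc_subsequence("ab", "ab",   [(1, 2)]))  # False (gap is empty)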

Cite as

Joel D. Day, Maria Kosche, Florin Manea, and Markus L. Schmid. Subsequences with Gap Constraints: Complexity Bounds for Matching and Analysis Problems. In 33rd International Symposium on Algorithms and Computation (ISAAC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 248, pp. 64:1-64:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{day_et_al:LIPIcs.ISAAC.2022.64,
  author =	{Day, Joel D. and Kosche, Maria and Manea, Florin and Schmid, Markus L.},
  title =	{{Subsequences with Gap Constraints: Complexity Bounds for Matching and Analysis Problems}},
  booktitle =	{33rd International Symposium on Algorithms and Computation (ISAAC 2022)},
  pages =	{64:1--64:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-258-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{248},
  editor =	{Bae, Sang Won and Park, Heejin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2022.64},
  URN =		{urn:nbn:de:0030-drops-173493},
  doi =		{10.4230/LIPIcs.ISAAC.2022.64},
  annote =	{Keywords: String algorithms, subsequences with gap constraints, pattern matching, fine-grained complexity, conditional lower bounds, parameterised complexity}
}
Document
Discovering Event Queries from Traces: Laying Foundations for Subsequence-Queries with Wildcards and Gap-Size Constraints

Authors: Sarah Kleest-Meißner, Rebecca Sattler, Markus L. Schmid, Nicole Schweikardt, and Matthias Weidlich

Published in: LIPIcs, Volume 220, 25th International Conference on Database Theory (ICDT 2022)


Abstract
We introduce subsequence-queries with wildcards and gap-size constraints (swg-queries, for short) as a tool for querying event traces. An swg-query q is given by a string s over an alphabet of variables and types, a global window size w, and a tuple c = ((c^-_1, c^+_1), (c^-_2, c^+_2), …, (c^-_{|s|-1}, c^+_{|s|-1})) of local gap-size constraints over ℕ × (ℕ ∪ {∞}). The query q matches in a trace t (i.e., a sequence of types) if the variables can uniformly be substituted by types such that the resulting string occurs in t as a subsequence that spans an area of length at most w, and the i^{th} gap of the subsequence (i.e., the distance between the i^{th} and (i+1)^{th} position of the subsequence) has length at least c^-_i and at most c^+_i. We formalise and investigate the task of discovering an swg-query that best describes the traces from a given sample S of traces, and we present an algorithm solving this task. As a central component, our algorithm repeatedly solves the matching problem (i.e., deciding whether a given query q matches in a given trace t), which is an NP-complete problem (in combined complexity). Hence, the matching problem is of special interest in the context of query discovery, and we therefore subject it to a detailed (parameterised) complexity analysis to identify tractable subclasses, which lead to tractable subclasses of the discovery problem as well. We complement this by a reduction proving that any query discovery algorithm also yields an algorithm for the matching problem. Hence, lower bounds on the complexity of the matching problem directly translate into corresponding lower bounds for the query discovery problem. As a proof of concept, we also implemented a prototype of our algorithm and tested it on real-world data.
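As a purely illustrative companion to this definition, the following hypothetical Python sketch decides by brute force whether an swg-query matches a trace; it interprets the i-th gap as the number of trace positions strictly between the i-th and (i+1)-th matched position, and is exponential, unlike the tractable subclasses identified in the paper:

from itertools import combinations

def swg_matches(s, is_var, w, gaps, trace):
    # Brute-force matcher for an swg-query: s is the query string, is_var tells
    # which symbols of s are variables, w is the global window size, gaps[i] is
    # the pair (c^-_i, c^+_i).  Checks whether some uniform substitution of
    # variables by types makes s occur in `trace` as a constrained subsequence.
    for pos in combinations(range(len(trace)), len(s)):
        if pos[-1] - pos[0] + 1 > w:
            continue  # violates the global window size
        if any(not (lo <= pos[i + 1] - pos[i] - 1 <= hi)
               for i, (lo, hi) in enumerate(gaps)):
            continue  # violates a local gap-size constraint
        subst, ok = {}, True
        for sym, p in zip(s, pos):
            if is_var(sym):
                if subst.setdefault(sym, trace[p]) != trace[p]:
                    ok = False  # variable already bound to a different type
                    break
            elif sym != trace[p]:
                ok = False      # type symbol must occur literally
                break
        if ok:
            return True
    return False

inf = float("inf")
# query "xax": variable x, type a; both occurrences of x must receive the same type
print(swg_matches("xax", lambda c: c == "x", 5, [(0, inf), (0, inf)], "bab"))  # True
print(swg_matches("xax", lambda c: c == "x", 5, [(0, inf), (0, inf)], "bac"))  # False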

Cite as

Sarah Kleest-Meißner, Rebecca Sattler, Markus L. Schmid, Nicole Schweikardt, and Matthias Weidlich. Discovering Event Queries from Traces: Laying Foundations for Subsequence-Queries with Wildcards and Gap-Size Constraints. In 25th International Conference on Database Theory (ICDT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 220, pp. 18:1-18:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{kleestmeiner_et_al:LIPIcs.ICDT.2022.18,
  author =	{Kleest-Mei{\ss}ner, Sarah and Sattler, Rebecca and Schmid, Markus L. and Schweikardt, Nicole and Weidlich, Matthias},
  title =	{{Discovering Event Queries from Traces: Laying Foundations for Subsequence-Queries with Wildcards and Gap-Size Constraints}},
  booktitle =	{25th International Conference on Database Theory (ICDT 2022)},
  pages =	{18:1--18:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-223-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{220},
  editor =	{Olteanu, Dan and Vortmeier, Nils},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2022.18},
  URN =		{urn:nbn:de:0030-drops-158922},
  doi =		{10.4230/LIPIcs.ICDT.2022.18},
  annote =	{Keywords: event queries on traces, pattern queries on strings, learning descriptive queries, complexity of query evaluation and query learning}
}
Document
A Purely Regular Approach to Non-Regular Core Spanners

Authors: Markus L. Schmid and Nicole Schweikardt

Published in: LIPIcs, Volume 186, 24th International Conference on Database Theory (ICDT 2021)


Abstract
The regular spanners (characterised by vset-automata) are closed under the algebraic operations of union, join and projection, and have desirable algorithmic properties. The core spanners (introduced by Fagin, Kimelfeld, Reiss, and Vansummeren (PODS 2013, JACM 2015) as a formalisation of the core functionality of the query language AQL used in IBM’s SystemT) additionally need string-equality selections, and it has been shown by Freydenberger and Holldack (ICDT 2016, Theory of Computing Systems 2018) that this leads to high complexity and even undecidability of the typical problems in static analysis and query evaluation. We propose an alternative approach to core spanners: by incorporating the string-equality selections directly into the regular language that represents the underlying regular spanner (instead of treating them as an algebraic operation on the table extracted by the regular spanner), we obtain a fragment of core spanners that, while having slightly weaker expressive power than the full class of core spanners, arguably still covers the intuitive applications of string-equality selections for information extraction and has much better upper complexity bounds for the typical problems in static analysis and query evaluation.

Cite as

Markus L. Schmid and Nicole Schweikardt. A Purely Regular Approach to Non-Regular Core Spanners. In 24th International Conference on Database Theory (ICDT 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 186, pp. 4:1-4:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{schmid_et_al:LIPIcs.ICDT.2021.4,
  author =	{Schmid, Markus L. and Schweikardt, Nicole},
  title =	{{A Purely Regular Approach to Non-Regular Core Spanners}},
  booktitle =	{24th International Conference on Database Theory (ICDT 2021)},
  pages =	{4:1--4:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-179-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{186},
  editor =	{Yi, Ke and Wei, Zhewei},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2021.4},
  URN =		{urn:nbn:de:0030-drops-137124},
  doi =		{10.4230/LIPIcs.ICDT.2021.4},
  annote =	{Keywords: Document spanners, regular expressions with backreferences}
}
Document
Fine-Grained Complexity of Regular Path Queries

Authors: Katrin Casel and Markus L. Schmid

Published in: LIPIcs, Volume 186, 24th International Conference on Database Theory (ICDT 2021)


Abstract
A regular path query (RPQ) is a regular expression q that returns all node pairs (u, v) from a graph database that are connected by an arbitrary path labelled with a word from L(q). The obvious algorithmic approach to RPQ evaluation (called PG-approach), i. e., constructing the product graph between an NFA for q and the graph database, is appealing due to its simplicity and also leads to efficient algorithms. However, it is unclear whether the PG-approach is optimal. We address this question by thoroughly investigating which upper complexity bounds can be achieved by the PG-approach, and we complement these with conditional lower bounds (in the sense of the fine-grained complexity framework). A special focus is put on enumeration and delay bounds, as well as the data complexity perspective. A main insight is that we can achieve optimal (or near optimal) algorithms with the PG-approach, but the delay for enumeration is rather high (linear in the database). We explore three successful approaches towards enumeration with sub-linear delay: super-linear preprocessing, approximations of the solution sets, and restricted classes of RPQs.
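The PG-approach itself is easy to state; the following Python sketch (a toy under stated assumptions, not one of the paper's fine-grained algorithms) builds the product of a labelled graph and an NFA on the fly and collects, via BFS, all node pairs connected by a path whose label the NFA accepts:

from collections import deque

def rpq_pairs(edges, nfa_delta, starts, finals):
    # Product-graph evaluation of an RPQ: BFS over pairs (graph node, NFA state).
    # `edges` are labelled graph edges (u, a, v); `nfa_delta` are NFA transitions
    # (q, a, q').  Returns all node pairs (u, v) connected by an accepted path.
    nodes = {u for u, _, _ in edges} | {v for _, _, v in edges}
    out = {}
    for (u, a, v) in edges:
        for (q, b, q2) in nfa_delta:
            if a == b:
                out.setdefault((u, q), []).append((v, q2))
    result = set()
    for src in nodes:
        seen = {(src, q) for q in starts}
        queue = deque(seen)
        while queue:
            node, state = queue.popleft()
            if state in finals:
                result.add((src, node))
            for nxt in out.get((node, state), []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return result

# graph 1 -a-> 2 -b-> 3 and an NFA for the language a b*
edges = {(1, "a", 2), (2, "b", 3)}
nfa = {("s", "a", "f"), ("f", "b", "f")}
print(sorted(rpq_pairs(edges, nfa, starts={"s"}, finals={"f"})))  # [(1, 2), (1, 3)]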

Cite as

Katrin Casel and Markus L. Schmid. Fine-Grained Complexity of Regular Path Queries. In 24th International Conference on Database Theory (ICDT 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 186, pp. 19:1-19:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{casel_et_al:LIPIcs.ICDT.2021.19,
  author =	{Casel, Katrin and Schmid, Markus L.},
  title =	{{Fine-Grained Complexity of Regular Path Queries}},
  booktitle =	{24th International Conference on Database Theory (ICDT 2021)},
  pages =	{19:1--19:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-179-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{186},
  editor =	{Yi, Ke and Wei, Zhewei},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2021.19},
  URN =		{urn:nbn:de:0030-drops-137277},
  doi =		{10.4230/LIPIcs.ICDT.2021.19},
  annote =	{Keywords: Graph Databases, Regular Path Queries, Enumeration, Fine-Grained Complexity}
}
Document
Graph and String Parameters: Connections Between Pathwidth, Cutwidth and the Locality Number (Track B: Automata, Logic, Semantics, and Theory of Programming)

Authors: Katrin Casel, Joel D. Day, Pamela Fleischmann, Tomasz Kociumaka, Florin Manea, and Markus L. Schmid

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
We investigate the locality number, a recently introduced structural parameter for strings (with applications in pattern matching with variables), and its connection to two important graph-parameters, cutwidth and pathwidth. These connections allow us to show that computing the locality number is NP-hard but fixed-parameter tractable (when the locality number or the alphabet size is treated as a parameter), and can be approximated with ratio O(√(log opt) · log n). As a by-product, we also relate cutwidth via the locality number to pathwidth, which is of independent interest, since it improves the best currently known approximation algorithm for cutwidth. In addition to these main results, we also consider the possibility of greedy-based approximation algorithms for the locality number.

Cite as

Katrin Casel, Joel D. Day, Pamela Fleischmann, Tomasz Kociumaka, Florin Manea, and Markus L. Schmid. Graph and String Parameters: Connections Between Pathwidth, Cutwidth and the Locality Number (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 109:1-109:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{casel_et_al:LIPIcs.ICALP.2019.109,
  author =	{Casel, Katrin and Day, Joel D. and Fleischmann, Pamela and Kociumaka, Tomasz and Manea, Florin and Schmid, Markus L.},
  title =	{{Graph and String Parameters: Connections Between Pathwidth, Cutwidth and the Locality Number}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{109:1--109:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.109},
  URN =		{urn:nbn:de:0030-drops-106858},
  doi =		{10.4230/LIPIcs.ICALP.2019.109},
  annote =	{Keywords: Graph and String Parameters, NP-Completeness, Approximation Algorithms}
}
Document
Consensus Strings with Small Maximum Distance and Small Distance Sum

Authors: Laurent Bulteau and Markus L. Schmid

Published in: LIPIcs, Volume 117, 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)


Abstract
The parameterised complexity of consensus string problems (Closest String, Closest Substring, Closest String with Outliers) is investigated in a more general setting, i.e., with a bound on the maximum Hamming distance and a bound on the sum of Hamming distances between solution and input strings. We completely settle the parameterised complexity of these generalised variants of Closest String and Closest Substring, and partially settle it for Closest String with Outliers; in addition, we answer some open questions from the literature regarding the classical problem variants with only one distance bound. Finally, we investigate the question of polynomial kernels and respective lower bounds.
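The generalised setting can be made concrete with a small verifier that, given a candidate consensus string, checks both the maximum-distance bound and the distance-sum bound. The Python sketch below is a hypothetical helper for illustration, not an algorithm from the paper:

def hamming(s, t):
    # Hamming distance between two equal-length strings
    return sum(a != b for a, b in zip(s, t))

def is_consensus(candidate, inputs, max_dist, sum_dist):
    # Generalised Closest String condition: the candidate must be within Hamming
    # distance max_dist of every input string AND the sum of all distances must
    # not exceed sum_dist.
    dists = [hamming(candidate, s) for s in inputs]
    return max(dists) <= max_dist and sum(dists) <= sum_dist

inputs = ["abba", "abca", "acba"]
print(is_consensus("abba", inputs, max_dist=1, sum_dist=2))  # True  (distances 0, 1, 1)
print(is_consensus("abba", inputs, max_dist=1, sum_dist=1))  # False (sum bound violated)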

Cite as

Laurent Bulteau and Markus L. Schmid. Consensus Strings with Small Maximum Distance and Small Distance Sum. In 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 117, pp. 1:1-1:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{bulteau_et_al:LIPIcs.MFCS.2018.1,
  author =	{Bulteau, Laurent and Schmid, Markus L.},
  title =	{{Consensus Strings with Small Maximum Distance and Small Distance Sum}},
  booktitle =	{43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)},
  pages =	{1:1--1:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-086-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{117},
  editor =	{Potapov, Igor and Spirakis, Paul and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2018.1},
  URN =		{urn:nbn:de:0030-drops-95834},
  doi =		{10.4230/LIPIcs.MFCS.2018.1},
  annote =	{Keywords: Consensus String Problems, Closest String, Closest Substring, Parameterised Complexity, Kernelisation}
}
Document
Combinatorial Properties and Recognition of Unit Square Visibility Graphs

Authors: Katrin Casel, Henning Fernau, Alexander Grigoriev, Markus L. Schmid, and Sue Whitesides

Published in: LIPIcs, Volume 83, 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)


Abstract
Unit square (grid) visibility graphs (USV and USGV, resp.) are described by axis-parallel visibility between unit squares placed (on integer grid coordinates) in the plane. We investigate combinatorial properties of these graph classes and the hardness of variants of the recognition problem, i.e., the problem of representing USGV with fixed visibilities within small area and, for USV, the general recognition problem.

Cite as

Katrin Casel, Henning Fernau, Alexander Grigoriev, Markus L. Schmid, and Sue Whitesides. Combinatorial Properties and Recognition of Unit Square Visibility Graphs. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 30:1-30:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{casel_et_al:LIPIcs.MFCS.2017.30,
  author =	{Casel, Katrin and Fernau, Henning and Grigoriev, Alexander and Schmid, Markus L. and Whitesides, Sue},
  title =	{{Combinatorial Properties and Recognition of Unit Square Visibility Graphs}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{30:1--30:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.30},
  URN =		{urn:nbn:de:0030-drops-80770},
  doi =		{10.4230/LIPIcs.MFCS.2017.30},
  annote =	{Keywords: Visibility graphs, visibility layout, NP-completeness, exact algorithms}
}
Document
Deterministic Regular Expressions with Back-References

Authors: Dominik D. Freydenberger and Markus L. Schmid

Published in: LIPIcs, Volume 66, 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)


Abstract
Most modern libraries for regular expression matching allow back-references (i.e. repetition operators) that substantially increase expressive power, but also lead to intractability. In order to find a better balance between expressiveness and tractability, we combine these with the notion of determinism for regular expressions used in XML DTDs and XML Schema. This includes the definition of a suitable automaton model, and a generalization of the Glushkov construction.
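Back-references themselves are easy to demonstrate with any modern regex library; the Python snippet below shows how a back-reference lets an expression require two occurrences of the same (unbounded) factor, a language no classical regular expression defines. It only illustrates back-references in general, not the deterministic fragment or the automaton model introduced in the paper:

import re

# The back-reference \1 repeats whatever the first capture group matched, so the
# expression below accepts exactly the strings of the form w#w with w over {a, b};
# this "copy language" is not expressible by classical regular expressions.
regex = re.compile(r"^([ab]+)#\1$")

print(bool(regex.match("abba#abba")))  # True:  both sides coincide
print(bool(regex.match("abba#abab")))  # False: the sides differ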

Cite as

Dominik D. Freydenberger and Markus L. Schmid. Deterministic Regular Expressions with Back-References. In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 66, pp. 33:1-33:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{freydenberger_et_al:LIPIcs.STACS.2017.33,
  author =	{Freydenberger, Dominik D. and Schmid, Markus L.},
  title =	{{Deterministic Regular Expressions with Back-References}},
  booktitle =	{34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)},
  pages =	{33:1--33:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-028-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{66},
  editor =	{Vollmer, Heribert and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2017.33},
  URN =		{urn:nbn:de:0030-drops-70004},
  doi =		{10.4230/LIPIcs.STACS.2017.33},
  annote =	{Keywords: Deterministic Regular Expression, Regex, Glushkov Automaton}
}
Document
On the Complexity of Grammar-Based Compression over Fixed Alphabets

Authors: Katrin Casel, Henning Fernau, Serge Gaspers, Benjamin Gras, and Markus L. Schmid

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
It is shown that the shortest-grammar problem remains NP-complete if the alphabet is fixed and has a size of at least 24 (which settles an open question). On the other hand, this problem can be solved in polynomial time if the number of nonterminals is bounded, which is shown by encoding the problem as a problem on graphs with interval structure. Furthermore, we present an O(3^n) exact exponential-time algorithm, based on dynamic programming. Similar results are also given for 1-level grammars, i.e., grammars for which only the start rule contains nonterminals on the right side (thus, investigating the impact of the "hierarchical depth" on the complexity of the shortest-grammar problem).
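For readers unfamiliar with grammar-based compression, the following toy Python example (not from the paper) shows a straight-line program for a string together with the usual size measure (total length of all right-hand sides); since only the start rule contains nonterminals, it is also a 1-level grammar in the sense above:

# A straight-line program: a context-free grammar that derives exactly one string.
grammar = {
    "S": ["A", "A", "c"],   # S -> A A c
    "A": ["a", "b", "a"],   # A -> a b a
}

def expand(symbol, rules):
    # expand a symbol into the (unique) string it derives
    if symbol not in rules:  # terminal
        return symbol
    return "".join(expand(s, rules) for s in rules[symbol])

def grammar_size(rules):
    # the usual size measure: total length of all right-hand sides
    return sum(len(rhs) for rhs in rules.values())

print(expand("S", grammar))   # abaabac (the derived string, 7 symbols)
print(grammar_size(grammar))  # 6, i.e., one symbol shorter than the string itself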

Cite as

Katrin Casel, Henning Fernau, Serge Gaspers, Benjamin Gras, and Markus L. Schmid. On the Complexity of Grammar-Based Compression over Fixed Alphabets. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 122:1-122:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{casel_et_al:LIPIcs.ICALP.2016.122,
  author =	{Casel, Katrin and Fernau, Henning and Gaspers, Serge and Gras, Benjamin and Schmid, Markus L.},
  title =	{{On the Complexity of Grammar-Based Compression over Fixed Alphabets}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{122:1--122:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.122},
  URN =		{urn:nbn:de:0030-drops-62570},
  doi =		{10.4230/LIPIcs.ICALP.2016.122},
  annote =	{Keywords: Grammar-Based Compression, Straight-Line Programs, NP-Completeness, Exact Exponential Time Algorithms}
}
Document
Pattern Matching with Variables: Fast Algorithms and New Hardness Results

Authors: Henning Fernau, Florin Manea, Robert Mercas, and Markus L. Schmid

Published in: LIPIcs, Volume 30, 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)


Abstract
A pattern (i.e., a string of variables and terminals) maps to a word, if this is obtained by uniformly replacing the variables by terminal words; deciding this is NP-complete. We present efficient algorithms that solve this problem for restricted classes of patterns. (The computational model we use is the standard unit-cost RAM with logarithmic word size, and all logarithms appearing in our time complexity evaluations are in base 2.) Furthermore, we show that it is NP-complete to decide, for a given number k and a word w, whether w can be factorised into k distinct factors; this shows that the injective version (i.e., different variables are replaced by different words) of the above matching problem is NP-complete even for very restricted cases.
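The factorisation problem mentioned at the end is easy to state in code; the following brute-force Python sketch (illustrative only, exponential time) searches for a factorisation of w into k pairwise distinct factors:

def distinct_factorisation(w, k, used=()):
    # Brute-force search: can w be split into exactly k pairwise distinct,
    # non-empty factors?  Returns one such factorisation or None.
    if k == 0:
        return list(used) if not w else None
    # leave at least k - 1 symbols for the remaining factors
    for cut in range(1, len(w) - k + 2):
        factor = w[:cut]
        if factor in used:
            continue
        rest = distinct_factorisation(w[cut:], k - 1, used + (factor,))
        if rest is not None:
            return rest
    return None

print(distinct_factorisation("aaab", 3))  # ['a', 'aa', 'b']
print(distinct_factorisation("aaaa", 3))  # None: three distinct factors are impossible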

Cite as

Henning Fernau, Florin Manea, Robert Mercas, and Markus L. Schmid. Pattern Matching with Variables: Fast Algorithms and New Hardness Results. In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 30, pp. 302-315, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{fernau_et_al:LIPIcs.STACS.2015.302,
  author =	{Fernau, Henning and Manea, Florin and Mercas, Robert and Schmid, Markus L.},
  title =	{{Pattern Matching with Variables: Fast Algorithms and New Hardness Results}},
  booktitle =	{32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)},
  pages =	{302--315},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-78-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{30},
  editor =	{Mayr, Ernst W. and Ollinger, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2015.302},
  URN =		{urn:nbn:de:0030-drops-49220},
  doi =		{10.4230/LIPIcs.STACS.2015.302},
  annote =	{Keywords: combinatorial pattern matching, combinatorics on words, patterns with variables, NP-complete string problems}
}
Document
On the Parameterised Complexity of String Morphism Problems

Authors: Henning Fernau, Markus L. Schmid, and Yngve Villanger

Published in: LIPIcs, Volume 24, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013)


Abstract
Given a source string u and a target string w, to decide whether w can be obtained by applying a string morphism on u (i. e., uniformly replacing the symbols in u by strings) constitutes an NP-complete problem. For example, the target string w := baaba can be obtained from the source string u := aba, by replacing a and b in u by the strings ba and a, respectively. In this paper, we contribute to the recently started investigation of the computational complexity of the string morphism problem by studying it in the framework of parameterised complexity.
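A brute-force search makes the string morphism problem concrete; the Python sketch below (illustrative only, exponential in general, and restricted to non-erasing morphisms) reproduces the example from the abstract:

def morphism_exists(u, w):
    # Brute-force search for a (non-erasing) string morphism mapping u to w,
    # i.e., an assignment of a non-empty image to every symbol of u such that
    # uniform replacement yields w.  Exponential in general (the problem is
    # NP-complete); a didactic sketch only.
    def search(i, pos, images):
        if i == len(u):
            return images if pos == len(w) else None
        sym = u[i]
        if sym in images:  # symbol already has an image: it must recur here
            img = images[sym]
            if w.startswith(img, pos):
                return search(i + 1, pos + len(img), images)
            return None
        for end in range(pos + 1, len(w) + 1):  # try every non-empty image
            found = search(i + 1, end, {**images, sym: w[pos:end]})
            if found is not None:
                return found
        return None
    return search(0, 0, {})

print(morphism_exists("aba", "baaba"))  # {'a': 'ba', 'b': 'a'}, the example above
print(morphism_exists("aa", "aba"))     # None: no uniform replacement works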

Cite as

Henning Fernau, Markus L. Schmid, and Yngve Villanger. On the Parameterised Complexity of String Morphism Problems. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 24, pp. 55-66, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{fernau_et_al:LIPIcs.FSTTCS.2013.55,
  author =	{Fernau, Henning and Schmid, Markus L. and Villanger, Yngve},
  title =	{{On the Parameterised Complexity of String Morphism Problems}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013)},
  pages =	{55--66},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-64-4},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{24},
  editor =	{Seth, Anil and Vishnoi, Nisheeth K.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2013.55},
  URN =		{urn:nbn:de:0030-drops-43619},
  doi =		{10.4230/LIPIcs.FSTTCS.2013.55},
  annote =	{Keywords: String Problems, String Morphisms, Parameterised Complexity, Exponential Time Hypothesis, Pattern Languages}
}

Schmid, Markus

Document
Self-Healing and Recovery Methods and their Classification

Authors: Onn Shehory, Josu Martinez, Artur Andrzejak, Cinzia Cappiello, Wlodzimierz Funika, Derrick Kondo, Leonardo Mariani, Benjamin Satzger, and Markus Schmid

Published in: Dagstuhl Seminar Proceedings, Volume 9201, Self-Healing and Self-Adaptive Systems (2009)


Abstract
This document summarizes the results of the Working Group 1, "Self-Healing and Recovery", within the Dagstuhl Seminar 09201 "Self-Healing and Self-Adaptive Systems" (organized by A. Andrzejak, K. Geihs, O. Shehory and J. Wilkes). The seminar was held from May 10th 2009 to May 15th 2009 in Schloss Dagstuhl – Leibniz Center for Informatics.

Cite as

Onn Shehory, Josu Martinez, Artur Andrzejak, Cinzia Cappiello, Wlodzimierz Funika, Derrick Kondo, Leonardo Mariani, Benjamin Satzger, and Markus Schmid. Self-Healing and Recovery Methods and their Classification. In Self-Healing and Self-Adaptive Systems. Dagstuhl Seminar Proceedings, Volume 9201, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{shehory_et_al:DagSemProc.09201.3,
  author =	{Shehory, Onn and Martinez, Josu and Andrzejak, Artur and Cappiello, Cinzia and Funika, Wlodzimierz and Kondo, Derrick and Mariani, Leonardo and Satzger, Benjamin and Schmid, Markus},
  title =	{{Self-Healing and Recovery Methods and their Classification}},
  booktitle =	{Self-Healing and Self-Adaptive Systems},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2009},
  volume =	{9201},
  editor =	{Artur Andrzejak and Kurt Geihs and Onn Shehory and John Wilkes},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.09201.3},
  URN =		{urn:nbn:de:0030-drops-21082},
  doi =		{10.4230/DagSemProc.09201.3},
  annote =	{Keywords: Self-healing, self-recovery, redundancy techniques, architecture models, micro-rebooting, SOA-based process reorganization}
}