Search Results

Documents authored by Larsen, Kim G.



Larsen, Kim G.

Document
Invited Paper
Synthesis of Safe, Optimal and Compact Strategies for Stochastic Hybrid Games (Invited Paper)

Authors: Kim G. Larsen

Published in: LIPIcs, Volume 140, 30th International Conference on Concurrency Theory (CONCUR 2019)


Abstract
UPPAAL-Stratego is a recent branch of the verification tool UPPAAL allowing for the synthesis of safe and optimal strategies for stochastic timed (hybrid) games. We describe newly developed learning methods that allow for the synthesis of significantly better strategies with much improved convergence behaviour. We also describe a novel use of decision trees for learning orders-of-magnitude more compact strategy representations. In both cases, the search for optimality does not compromise safety.
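
As a rough, hypothetical illustration of the "optimize only within the safe strategy" idea named in the abstract (this is not UPPAAL-Stratego's algorithm; the toy MDP, reward shaping, and safe-action filter are all invented), a Q-learning loop can be restricted to a precomputed set of safe actions:

import random

# Toy MDP: states 0..4 on a line; actions move -1/+1; state 4 is unsafe.
# A (hypothetical) most-permissive safe strategy forbids entering state 4,
# mimicking the safety-then-optimality pipeline sketched in the abstract.
STATES, ACTIONS, GOAL, UNSAFE = range(5), (-1, +1), 3, 4

def safe_actions(s):
    return [a for a in ACTIONS if 0 <= s + a < len(STATES) and s + a != UNSAFE]

def step(s, a):
    s2 = s + a
    return s2, (1.0 if s2 == GOAL else -0.1)   # invented reward shaping

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(2000):                          # learning episodes
    s = 0
    while s != GOAL:
        acts = safe_actions(s)                 # optimize only within the safe set
        a = random.choice(acts) if random.random() < eps else \
            max(acts, key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        best2 = 0.0 if s2 == GOAL else max(Q[(s2, b)] for b in safe_actions(s2))
        Q[(s, a)] += alpha * (r + gamma * best2 - Q[(s, a)])
        s = s2

# learned (safe) strategy: move right toward the goal from every state
print({s: max(safe_actions(s), key=lambda b: Q[(s, b)])
       for s in STATES if s not in (GOAL, UNSAFE)})

Because exploration and exploitation both draw from safe_actions only, no amount of learning can make the strategy unsafe; optimization happens inside the safety envelope.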

Cite as

Kim G. Larsen. Synthesis of Safe, Optimal and Compact Strategies for Stochastic Hybrid Games (Invited Paper). In 30th International Conference on Concurrency Theory (CONCUR 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 140, pp. 2:1-2:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{larsen:LIPIcs.CONCUR.2019.2,
  author =	{Larsen, Kim G.},
  title =	{{Synthesis of Safe, Optimal and Compact Strategies for Stochastic Hybrid Games}},
  booktitle =	{30th International Conference on Concurrency Theory (CONCUR 2019)},
  pages =	{2:1--2:5},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-121-4},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{140},
  editor =	{Fokkink, Wan and van Glabbeek, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2019.2},
  URN =		{urn:nbn:de:0030-drops-109048},
  doi =		{10.4230/LIPIcs.CONCUR.2019.2},
  annote =	{Keywords: Timed automata, Stochastic hybrid game, Symbolic synthesis, Reinforcement learning, Q-learning, M-learning}
}
Document
Computing Probabilistic Bisimilarity Distances for Probabilistic Automata

Authors: Giorgio Bacci, Giovanni Bacci, Kim G. Larsen, Radu Mardare, Qiyi Tang, and Franck van Breugel

Published in: LIPIcs, Volume 140, 30th International Conference on Concurrency Theory (CONCUR 2019)


Abstract
The probabilistic bisimilarity distance of Deng et al. has been proposed as a robust quantitative generalization of Segala and Lynch's probabilistic bisimilarity for probabilistic automata. In this paper, we present a novel characterization of the bisimilarity distance as the solution of a simple stochastic game. The characterization gives us an algorithm to compute the distances by applying Condon's simple policy iteration on these games. The correctness of Condon's approach, however, relies on the assumption that the games are stopping. Our games may be non-stopping in general, yet we are able to prove termination for this extended class of games. Other algorithms have already been proposed in the literature to compute these distances, with complexity in UP ∩ coUP and PPAD. Despite their theoretical relevance, these algorithms are inefficient in practice. To the best of our knowledge, our algorithm is the first practical solution. In the proofs of all the above-mentioned results, an alternative presentation of the Hausdorff distance due to Mémoli plays a central rôle.
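
The abstract characterizes the distance as the value of a simple stochastic game and computes it with Condon's simple policy iteration. As a much smaller stand-in (value iteration rather than policy iteration, on an invented five-vertex game), the game model itself can be illustrated as follows:

# Value iteration on a toy simple stochastic game.
# Vertex kinds: 'max'/'min' choose a successor, 'avg' moves to each
# successor with probability 1/2, 'sink' carries a fixed value in [0,1].
game = {
    'a': ('max', ['b', 'c']),
    'b': ('min', ['d', 'e']),
    'c': ('avg', ['d', 'e']),
    'd': ('sink', 0.0),
    'e': ('sink', 1.0),
}

V = {v: 0.0 for v in game}
for _ in range(100):                     # iterate the one-step operator
    for v, (kind, data) in game.items():
        if kind == 'sink':
            V[v] = data
        elif kind == 'max':
            V[v] = max(V[u] for u in data)
        elif kind == 'min':
            V[v] = min(V[u] for u in data)
        else:                            # 'avg'
            V[v] = sum(V[u] for u in data) / len(data)

print(V)   # value of 'a' is max(min(0, 1), avg(0, 1)) = 0.5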

Cite as

Giorgio Bacci, Giovanni Bacci, Kim G. Larsen, Radu Mardare, Qiyi Tang, and Franck van Breugel. Computing Probabilistic Bisimilarity Distances for Probabilistic Automata. In 30th International Conference on Concurrency Theory (CONCUR 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 140, pp. 9:1-9:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bacci_et_al:LIPIcs.CONCUR.2019.9,
  author =	{Bacci, Giorgio and Bacci, Giovanni and Larsen, Kim G. and Mardare, Radu and Tang, Qiyi and van Breugel, Franck},
  title =	{{Computing Probabilistic Bisimilarity Distances for Probabilistic Automata}},
  booktitle =	{30th International Conference on Concurrency Theory (CONCUR 2019)},
  pages =	{9:1--9:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-121-4},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{140},
  editor =	{Fokkink, Wan and van Glabbeek, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2019.9},
  URN =		{urn:nbn:de:0030-drops-109119},
  doi =		{10.4230/LIPIcs.CONCUR.2019.9},
  annote =	{Keywords: Probabilistic automata, Behavioural metrics, Simple stochastic games, Simple policy iteration algorithm}
}
Document
Partial Order Reduction for Reachability Games

Authors: Frederik Meyer Bønneland, Peter Gjøl Jensen, Kim G. Larsen, Marco Muñiz, and Jiří Srba

Published in: LIPIcs, Volume 140, 30th International Conference on Concurrency Theory (CONCUR 2019)


Abstract
Partial order reductions have been successfully applied to model checking of concurrent systems and practical applications of the technique show nontrivial reduction in the size of the explored state space. We present a theory of partial order reduction based on stubborn sets in the game-theoretical setting of 2-player games with reachability/safety objectives. Our stubborn reduction allows us to prune the interleaving behaviour of both players in the game, and we formally prove its correctness on the class of games played on general labelled transition systems. We then instantiate the framework to the class of weighted Petri net games with inhibitor arcs and provide its efficient implementation in the model checker TAPAAL. Finally, we evaluate our stubborn reduction on several case studies and demonstrate its efficiency.
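
For intuition about what a stubborn-set-style reduction buys (a toy only: the two actions below are fully independent, so firing a single enabled action per state is trivially sound here; the paper's stubborn-set construction for games is far more involved), compare full interleaving against the reduced search:

MAX = 2   # two independent counters, each incremented up to MAX

def enabled(state):
    x, y = state
    return [a for a, ok in (('x', x < MAX), ('y', y < MAX)) if ok]

def fire(state, a):
    x, y = state
    return (x + 1, y) if a == 'x' else (x, y + 1)

def explore(reduce):
    seen, stack = set(), [(0, 0)]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        acts = enabled(s)
        if reduce and acts:
            acts = acts[:1]          # fire only one (independent) action
        stack.extend(fire(s, a) for a in acts)
    return seen

full, reduced = explore(False), explore(True)
print(len(full), len(reduced), (MAX, MAX) in reduced)   # 9 5 True

The goal state (MAX, MAX) stays reachable while the explored state space shrinks from 9 to 5 states; on real Petri net games, such savings are what the paper's case studies measure.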

Cite as

Frederik Meyer Bønneland, Peter Gjøl Jensen, Kim G. Larsen, Marco Muñiz, and Jiří Srba. Partial Order Reduction for Reachability Games. In 30th International Conference on Concurrency Theory (CONCUR 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 140, pp. 23:1-23:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)



@InProceedings{bnneland_et_al:LIPIcs.CONCUR.2019.23,
  author =	{B{\o}nneland, Frederik Meyer and Jensen, Peter Gj{\o}l and Larsen, Kim G. and Mu\~{n}iz, Marco and Srba, Ji\v{r}{\'\i}},
  title =	{{Partial Order Reduction for Reachability Games}},
  booktitle =	{30th International Conference on Concurrency Theory (CONCUR 2019)},
  pages =	{23:1--23:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-121-4},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{140},
  editor =	{Fokkink, Wan and van Glabbeek, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2019.23},
  URN =		{urn:nbn:de:0030-drops-109251},
  doi =		{10.4230/LIPIcs.CONCUR.2019.23},
  annote =	{Keywords: Petri nets, games, synthesis, partial order reduction, stubborn sets}
}
Document
Complete Volume
LIPIcs, Volume 83, MFCS'17, Complete Volume

Authors: Kim G. Larsen, Hans L. Bodlaender, and Jean-Francois Raskin

Published in: LIPIcs, Volume 83, 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)


Abstract
LIPIcs, Volume 83, MFCS'17, Complete Volume

Cite as

42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@Proceedings{larsen_et_al:LIPIcs.MFCS.2017,
  title =	{{LIPIcs, Volume 83, MFCS'17, Complete Volume}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017},
  URN =		{urn:nbn:de:0030-drops-82073},
  doi =		{10.4230/LIPIcs.MFCS.2017},
  annote =	{Keywords: Theory of Computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Kim G. Larsen, Hans L. Bodlaender, and Jean-Francois Raskin

Published in: LIPIcs, Volume 83, 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 0:i-0:xvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{larsen_et_al:LIPIcs.MFCS.2017.0,
  author =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{0:i--0:xvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.0},
  URN =		{urn:nbn:de:0030-drops-80564},
  doi =		{10.4230/LIPIcs.MFCS.2017.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
On the Metric-Based Approximate Minimization of Markov Chains

Authors: Giovanni Bacci, Giorgio Bacci, Kim G. Larsen, and Radu Mardare

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
We address the behavioral metric-based approximate minimization problem of Markov Chains (MCs), i.e., given a finite MC and a positive integer k, we are interested in finding a k-state MC of minimal distance to the original. By considering as metric the bisimilarity distance of Desharnais et al., we show that optimal approximations always exist; show that the problem can be solved as a bilinear program; and prove that its threshold problem is in PSPACE and NP-hard. Finally, we present an approach inspired by expectation maximization techniques that provides suboptimal solutions. Experiments suggest that our method gives a practical approach that outperforms the bilinear program implementation run on state-of-the-art bilinear solvers.

Cite as

Giovanni Bacci, Giorgio Bacci, Kim G. Larsen, and Radu Mardare. On the Metric-Based Approximate Minimization of Markov Chains. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 104:1-104:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{bacci_et_al:LIPIcs.ICALP.2017.104,
  author =	{Bacci, Giovanni and Bacci, Giorgio and Larsen, Kim G. and Mardare, Radu},
  title =	{{On the Metric-Based Approximate Minimization of Markov Chains}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{104:1--104:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.104},
  URN =		{urn:nbn:de:0030-drops-73675},
  doi =		{10.4230/LIPIcs.ICALP.2017.104},
  annote =	{Keywords: Behavioral distances, Probabilistic Models, Automata Minimization}
}
Document
WNetKAT: A Weighted SDN Programming and Verification Language

Authors: Kim G. Larsen, Stefan Schmid, and Bingtian Xue

Published in: LIPIcs, Volume 70, 20th International Conference on Principles of Distributed Systems (OPODIS 2016)


Abstract
Programmability and verifiability lie at the heart of the software-defined networking paradigm. While OpenFlow and its match-action concept provide primitive operations to manipulate hardware configurations, over the last years, several more expressive network programming languages have been developed. This paper presents WNetKAT, the first network programming language accounting for the fact that networks are inherently weighted and that communications are subject to capacity constraints (e.g., in terms of bandwidth) and costs (e.g., latency or monetary costs). WNetKAT is based on a syntactic and semantic extension of the NetKAT algebra. We demonstrate several relevant applications for WNetKAT, including cost- and capacity-aware reachability, as well as quality-of-service and fairness aspects. These applications do not only apply to classic, splittable and unsplittable (s,t)-flows, but also generalize to more complex (and stateful) network functions and service chains. For example, WNetKAT can model flows that need to traverse certain waypoint functions, which may change the traffic rate. This paper also shows the relationship between the equivalence problem of WNetKAT and the equivalence problem for weighted finite automata, which implies undecidability of the former. However, this paper also shows the decidability of whether an expression equals 0, which is sufficient in many practical scenarios, and we initiate the discussion of decidable subsets of the whole language.
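
Outside of WNetKAT's syntax, the kind of cost- and capacity-aware reachability question mentioned above can be pictured for a single unsplittable flow with a plain graph search: drop links that cannot carry the demanded rate, then find the cheapest remaining path. The topology and numbers are invented:

import heapq

# (src, dst, capacity, cost) per link; all data is hypothetical
edges = [('s', 'a', 10, 1), ('a', 't', 10, 4), ('s', 'b', 5, 1), ('b', 't', 10, 1)]

def cheapest_path(src, dst, demand):
    adj = {}
    for u, v, cap, cost in edges:
        if cap >= demand:                 # capacity-aware filtering
            adj.setdefault(u, []).append((v, cost))
    pq, best = [(0, src)], {}             # cost-aware: Dijkstra on the rest
    while pq:
        d, u = heapq.heappop(pq)
        if u in best:
            continue
        best[u] = d
        for v, c in adj.get(u, []):
            if v not in best:
                heapq.heappush(pq, (d + c, v))
    return best.get(dst)

print(cheapest_path('s', 't', 3))   # 2: the cheap route via b has capacity
print(cheapest_path('s', 't', 8))   # 5: the 5-unit link via b is filtered out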

Cite as

Kim G. Larsen, Stefan Schmid, and Bingtian Xue. WNetKAT: A Weighted SDN Programming and Verification Language. In 20th International Conference on Principles of Distributed Systems (OPODIS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 70, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{larsen_et_al:LIPIcs.OPODIS.2016.18,
  author =	{Larsen, Kim G. and Schmid, Stefan and Xue, Bingtian},
  title =	{{WNetKAT: A Weighted SDN Programming and Verification Language}},
  booktitle =	{20th International Conference on Principles of Distributed Systems (OPODIS 2016)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-031-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{70},
  editor =	{Fatourou, Panagiota and Jim\'{e}nez, Ernesto and Pedone, Fernando},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2016.18},
  URN =		{urn:nbn:de:0030-drops-70870},
  doi =		{10.4230/LIPIcs.OPODIS.2016.18},
  annote =	{Keywords: Software-Defined Networking, Verification, Reachability, Stateful Processing, Service Chains, Weighted Automata, Decidability, NetKAT}
}
Document
Probabilistic Mu-Calculus: Decidability and Complete Axiomatization

Authors: Kim G. Larsen, Radu Mardare, and Bingtian Xue

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
We introduce a version of the probabilistic mu-calculus (PMC) built on top of a probabilistic modal logic that allows encoding n-ary inequational conditions on transition probabilities. PMC extends previously studied calculi and we prove that, despite its expressiveness, it enjoys a series of good meta-properties. Firstly, we prove the decidability of satisfiability checking by establishing the small model property. An algorithm for deciding the satisfiability problem is developed. As a second major result, we provide a complete axiomatization for the alternation-free fragment of PMC. The completeness proof is innovative in many aspects combining various techniques from topology and model theory.

Cite as

Kim G. Larsen, Radu Mardare, and Bingtian Xue. Probabilistic Mu-Calculus: Decidability and Complete Axiomatization. In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 25:1-25:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{larsen_et_al:LIPIcs.FSTTCS.2016.25,
  author =	{Larsen, Kim G. and Mardare, Radu and Xue, Bingtian},
  title =	{{Probabilistic Mu-Calculus: Decidability and Complete Axiomatization}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{25:1--25:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.25},
  URN =		{urn:nbn:de:0030-drops-68607},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.25},
  annote =	{Keywords: Markov process, probabilistic modal mu-calculus, n-ary (in-)equational modalities, satisfiability, axiomatization}
}
Document
Synchronizing Words for Weighted and Timed Automata

Authors: Laurent Doyen, Line Juhl, Kim G. Larsen, Nicolas Markey, and Mahsa Shirmohammadi

Published in: LIPIcs, Volume 29, 34th International Conference on Foundation of Software Technology and Theoretical Computer Science (FSTTCS 2014)


Abstract
The problem of synchronizing automata is concerned with the existence of a word that sends all states of the automaton to one and the same state. This problem has classically been studied for complete deterministic finite automata, with the existence problem being NLOGSPACE-complete. In this paper we consider synchronizing-word problems for weighted and timed automata. We consider the synchronization problem in several variants and combinations of these, including deterministic and non-deterministic timed and weighted automata, synchronization to a unique location with possibly different clock valuations or accumulated weights, as well as synchronization with a safety condition forbidding the automaton to visit states outside a safety-set during synchronization (e.g. energy constraints). For deterministic weighted automata, the synchronization problem is proven PSPACE-complete under energy constraints, and in 3-EXPSPACE under general safety constraints. For timed automata the synchronization problems are shown to be PSPACE-complete in the deterministic case, and undecidable in the non-deterministic case.
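
For the classical setting the abstract starts from (complete deterministic finite automata), a synchronizing word can be computed greedily: repeatedly merge a pair of current states using a shortest merging word found by BFS on the pair automaton. A sketch on Černý's three-state automaton:

from collections import deque

# Černý automaton C_3: 'a' is a cyclic shift; 'b' maps 0 to 1, fixes 1 and 2.
delta = {('0', 'a'): '1', ('1', 'a'): '2', ('2', 'a'): '0',
         ('0', 'b'): '1', ('1', 'b'): '1', ('2', 'b'): '2'}
states, letters = ['0', '1', '2'], ['a', 'b']

def apply_word(s, w):
    for c in w:
        s = delta[(s, c)]
    return s

def merge_word(p, q):
    """Shortest word sending both p and q to the same state (BFS on pairs)."""
    seen, queue = {(p, q)}, deque([((p, q), '')])
    while queue:
        (u, v), w = queue.popleft()
        if u == v:
            return w
        for c in letters:
            nxt = (delta[(u, c)], delta[(v, c)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, w + c))
    return None   # unmergeable pair: the automaton is not synchronizing

def synchronizing_word():
    current, word = set(states), ''
    while len(current) > 1:
        p, q = sorted(current)[:2]
        w = merge_word(p, q)
        if w is None:
            return None
        word += w
        current = {apply_word(s, w) for s in current}
    return word

w = synchronizing_word()
print(w, {apply_word(s, w) for s in states})   # 'baab' collapses all states to '1'

The weighted and timed variants studied in the paper replace plain states with weight levels or clock valuations, which is where the complexity jumps reported in the abstract come from.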

Cite as

Laurent Doyen, Line Juhl, Kim G. Larsen, Nicolas Markey, and Mahsa Shirmohammadi. Synchronizing Words for Weighted and Timed Automata. In 34th International Conference on Foundation of Software Technology and Theoretical Computer Science (FSTTCS 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 29, pp. 121-132, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@InProceedings{doyen_et_al:LIPIcs.FSTTCS.2014.121,
  author =	{Doyen, Laurent and Juhl, Line and Larsen, Kim G. and Markey, Nicolas and Shirmohammadi, Mahsa},
  title =	{{Synchronizing Words for Weighted and Timed Automata}},
  booktitle =	{34th International Conference on Foundation of Software Technology and Theoretical Computer Science (FSTTCS 2014)},
  pages =	{121--132},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-77-4},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{29},
  editor =	{Raman, Venkatesh and Suresh, S. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2014.121},
  URN =		{urn:nbn:de:0030-drops-48370},
  doi =		{10.4230/LIPIcs.FSTTCS.2014.121},
  annote =	{Keywords: Synchronizing words, weighted automata, timed automata}
}
Document
Adaptable Value-Set Analysis for Low-Level Code

Authors: Jörg Brauer, René Rydhof Hansen, Stefan Kowalewski, Kim G. Larsen, and Mads Chr. Olesen

Published in: OASIcs, Volume 24, 6th International Workshop on Systems Software Verification (2012)


Abstract
This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics into Boolean logic and eases the implementation effort, too. We show that leveraging the efficiency of contemporary SAT solvers allows us to prove interesting properties about medium-sized microcontroller programs.
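
The enumeration pattern behind a SAT-based value-set analysis can be sketched in a few lines: ask a solver for a model, record the value of the bits of interest, add a blocking clause, repeat until unsatisfiable. Here a tiny splitting solver stands in for the incremental SAT solver, and the two-bit "register" and its constraint are invented:

def falsified(cl, assign):
    return all(abs(l) in assign and assign[abs(l)] != (l > 0) for l in cl)

def sat(clauses, n, assign=None):
    """Minimal splitting SAT solver: returns a model {var: bool}, or None."""
    assign = {} if assign is None else assign
    if any(falsified(cl, assign) for cl in clauses):
        return None
    if len(assign) == n:
        return dict(assign)
    v = min(set(range(1, n + 1)) - set(assign))
    for val in (True, False):
        assign[v] = val
        m = sat(clauses, n, assign)
        if m:
            return m
        del assign[v]
    return None

# Variables 1 (LSB) and 2 encode a 2-bit register r; the 'program
# semantics' is the single clause not (bit2 and not bit1), i.e. r != 2.
clauses, n, values = [[-2, 1]], 2, set()
while True:
    m = sat(clauses, n)
    if m is None:
        break
    values.add((2 if m[2] else 0) + (1 if m[1] else 0))
    clauses.append([(-v if m[v] else v) for v in (1, 2)])   # block this value

print(sorted(values))   # value set of r: [0, 1, 3]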

Cite as

Jörg Brauer, René Rydhof Hansen, Stefan Kowalewski, Kim G. Larsen, and Mads Chr. Olesen. Adaptable Value-Set Analysis for Low-Level Code. In 6th International Workshop on Systems Software Verification. Open Access Series in Informatics (OASIcs), Volume 24, pp. 32-43, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)



@InProceedings{brauer_et_al:OASIcs.SSV.2011.32,
  author =	{Brauer, J\"{o}rg and Hansen, Ren\'{e} Rydhof and Kowalewski, Stefan and Larsen, Kim G. and Olesen, Mads Chr.},
  title =	{{Adaptable Value-Set Analysis for Low-Level Code}},
  booktitle =	{6th International Workshop on Systems Software Verification},
  pages =	{32--43},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-36-1},
  ISSN =	{2190-6807},
  year =	{2012},
  volume =	{24},
  editor =	{Brauer, J\"{o}rg and Roveri, Marco and Tews, Hendrik},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SSV.2011.32},
  URN =		{urn:nbn:de:0030-drops-35884},
  doi =		{10.4230/OASIcs.SSV.2011.32},
  annote =	{Keywords: Abstract interpretation, SAT solving, embedded systems}
}
Document
Continuous Markovian Logic - From Complete Axiomatization to the Metric Space of Formulas

Authors: Luca Cardelli, Kim G. Larsen, and Radu Mardare

Published in: LIPIcs, Volume 12, Computer Science Logic (CSL'11) - 25th International Workshop/20th Annual Conference of the EACSL (2011)


Abstract
Continuous Markovian Logic (CML) is a multimodal logic that expresses quantitative and qualitative properties of continuous-space and continuous-time labelled Markov processes (CMPs). The modalities of CML approximate the rates of the exponentially distributed random variables that characterize the duration of the labeled transitions. In this paper we present a sound and complete Hilbert-style axiomatization of CML for the CMP-semantics and prove some metaproperties including the small model property. CML characterizes stochastic bisimulation and supports the definition of a quantified extension of satisfiability relation that measures the compatibility of a model and a property. Relying on the small model property, we prove that this measure can be approximated, within a given error, by using a distance between logical formulas.

Cite as

Luca Cardelli, Kim G. Larsen, and Radu Mardare. Continuous Markovian Logic - From Complete Axiomatization to the Metric Space of Formulas. In Computer Science Logic (CSL'11) - 25th International Workshop/20th Annual Conference of the EACSL. Leibniz International Proceedings in Informatics (LIPIcs), Volume 12, pp. 144-158, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)



@InProceedings{cardelli_et_al:LIPIcs.CSL.2011.144,
  author =	{Cardelli, Luca and Larsen, Kim G. and Mardare, Radu},
  title =	{{Continuous Markovian Logic - From Complete Axiomatization to the Metric Space of Formulas}},
  booktitle =	{Computer Science Logic (CSL'11) - 25th International Workshop/20th Annual Conference of the EACSL},
  pages =	{144--158},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-32-3},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{12},
  editor =	{Bezem, Marc},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2011.144},
  URN =		{urn:nbn:de:0030-drops-32281},
  doi =		{10.4230/LIPIcs.CSL.2011.144},
  annote =	{Keywords: probabilistic logic, axiomatization, Markov processes, metric semantics}
}
Document
A Quantitative Characterization of Weighted Kripke Structures in Temporal Logic

Authors: Kim G. Larsen, Uli Fahrenberg, and Claus Thrane

Published in: OASIcs, Volume 13, Annual Doctoral Workshop on Mathematical and Engineering Methods in Computer Science (MEMICS'09) (2009)


Abstract
We extend the usual notion of Kripke Structures with a weighted transition relation, and generalize the usual Boolean satisfaction relation of CTL to a map which assigns to states and temporal formulae a real-valued distance describing the degree of satisfaction. We describe a general approach to obtaining quantitative interpretations for a generic extension of the CTL syntax, and show that, for one such interpretation, the logic is both adequate and expressive with respect to quantitative bisimulation.
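
One invented (and much simplified) instance of such a quantitative interpretation: grade "eventually p" over a weighted Kripke structure by the least accumulated weight needed to reach a p-state, computed as a Bellman-style least fixpoint, so 0 means immediate satisfaction and larger values mean weaker degrees. This is an illustration only, not the paper's semantics:

import math

# Weighted Kripke structure (invented): successors with transition weights.
succ = {'s0': [('s1', 2), ('s2', 5)], 's1': [('s3', 4)],
        's2': [('s3', 1)], 's3': []}
labels = {'s3': {'p'}}

def degree_reach(prop):
    """deg(s) = least total weight of a path from s to a prop-state."""
    deg = {s: (0.0 if prop in labels.get(s, set()) else math.inf) for s in succ}
    changed = True
    while changed:                     # iterate to the least fixpoint
        changed = False
        for s, out in succ.items():
            best = min([deg[s]] + [w + deg[t] for t, w in out])
            if best < deg[s]:
                deg[s], changed = best, True
    return deg

print(degree_reach('p'))   # {'s0': 6.0, 's1': 4.0, 's2': 1.0, 's3': 0.0}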

Cite as

Kim G. Larsen, Uli Fahrenberg, and Claus Thrane. A Quantitative Characterization of Weighted Kripke Structures in Temporal Logic. In Annual Doctoral Workshop on Mathematical and Engineering Methods in Computer Science (MEMICS'09). Open Access Series in Informatics (OASIcs), Volume 13, pp. 10-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)



@InProceedings{larsen_et_al:OASIcs:2009:DROPS.MEMICS.2009.2345,
  author =	{Larsen, Kim G. and Fahrenberg, Uli and Thrane, Claus},
  title =	{{A Quantitative Characterization of Weighted Kripke Structures in Temporal Logic}},
  booktitle =	{Annual Doctoral Workshop on Mathematical and Engineering Methods in Computer Science (MEMICS'09)},
  pages =	{10--17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-15-6},
  ISSN =	{2190-6807},
  year =	{2009},
  volume =	{13},
  editor =	{Hlinen\'{y}, Petr and Maty\'{a}\v{s}, V\'{a}clav and Vojnar, Tom\'{a}\v{s}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DROPS.MEMICS.2009.2345},
  URN =		{urn:nbn:de:0030-drops-23454},
  doi =		{10.4230/DROPS.MEMICS.2009.2345},
  annote =	{Keywords: Quantitative analysis, Kripke structures, characteristic formulae, bisimulation distance, weighted CTL}
}
Document
Priced Timed Automata: Theory and Tools

Authors: Kim G. Larsen

Published in: LIPIcs, Volume 4, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (2009)


Abstract
Priced timed automata are emerging as useful formalisms for modeling and analysing a broad range of resource allocation problems. In this extended abstract, we highlight recent (un)decidability results related to priced timed automata as well as point to a number of open problems.

Cite as

Kim G. Larsen. Priced Timed Automata: Theory and Tools. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 4, pp. 417-425, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)



@InProceedings{larsen:LIPIcs.FSTTCS.2009.2337,
  author =	{Larsen, Kim G.},
  title =	{{Priced Timed Automata: Theory and Tools}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science},
  pages =	{417--425},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-13-2},
  ISSN =	{1868-8969},
  year =	{2009},
  volume =	{4},
  editor =	{Kannan, Ravi and Narayan Kumar, K.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2009.2337},
  URN =		{urn:nbn:de:0030-drops-23374},
  doi =		{10.4230/LIPIcs.FSTTCS.2009.2337},
  annote =	{Keywords: Timed systems, optimal scheduling, priced timed automata, games, model-checking}
}
Document
Online Testing of Real-Time Systems Using UPPAAL: Status and Future Work

Authors: Kim G. Larsen, Marius Mikucionis, and Brian Nielsen

Published in: Dagstuhl Seminar Proceedings, Volume 4371, Perspectives of Model-Based Testing (2005)


Abstract
We present TUPPAAL, a new tool for online black-box testing of real-time embedded systems from non-deterministic timed automata specifications. We describe a sound and complete randomized online testing algorithm, and we describe how to implement it using symbolic state representation and manipulation techniques. We propose the notion of relativized timed input/output conformance as the formal implementation relation. A novelty of this relation and our testing algorithm is that they explicitly take environment assumptions into account; tests are generated, executed, and verified online using the UPPAAL on-the-fly model-checking engine. A medium-sized case study shows promising results in terms of error-detection capability and computation performance.
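
The online-testing loop itself is simple to picture: at each step, choose an input allowed by the environment model, feed it to the implementation, and compare the output against what the specification permits. A heavily simplified, untimed sketch (a Mealy machine instead of timed automata, a deterministic spec, and an invented fault):

import random

# Spec as a Mealy machine (invented): a light toggled by 'press'.
SPEC = {('off', 'press'): ('on', 'light_on'),
        ('on', 'press'): ('off', 'light_off')}

class BuggyImpl:
    """Hypothetical implementation under test: drops one toggle."""
    def __init__(self):
        self.state, self.count = 'off', 0
    def send(self, inp):
        self.count += 1
        if self.count == 5:                     # injected fault
            return 'light_off'
        self.state, out = SPEC[(self.state, inp)]
        return out

def online_test(steps=20, seed=1):
    random.seed(seed)
    impl, spec_state = BuggyImpl(), 'off'
    for i in range(steps):
        inp = random.choice(['press'])          # environment assumption
        out = impl.send(inp)
        spec_state, expected = SPEC[(spec_state, inp)]
        if out != expected:
            return f'fail at step {i}: got {out!r}, expected {expected!r}'
    return 'pass'

print(online_test())   # fails at step 4, where the fault was injected

TUPPAAL's actual loop is richer: as the abstract's mention of symbolic state representation suggests, it must track the set of states the nondeterministic timed specification could be in, with time delays among the observable events.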

Cite as

Kim G. Larsen, Marius Mikucionis, and Brian Nielsen. Online Testing of Real-Time Systems Using UPPAAL: Status and Future Work. In Perspectives of Model-Based Testing. Dagstuhl Seminar Proceedings, Volume 4371, pp. 1-21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2005)



@InProceedings{larsen_et_al:DagSemProc.04371.3,
  author =	{Larsen, Kim G. and Mikucionis, Marius and Nielsen, Brian},
  title =	{{Online Testing of Real-Time Systems Using UPPAAL: Status and Future Work}},
  booktitle =	{Perspectives of Model-Based Testing},
  pages =	{1--21},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2005},
  volume =	{4371},
  editor =	{Ed Brinksma and Wolfgang Grieskamp and Jan Tretmans},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.04371.3},
  URN =		{urn:nbn:de:0030-drops-3269},
  doi =		{10.4230/DagSemProc.04371.3},
  annote =	{Keywords: Online testing, black-box testing, real-time systems, embedded systems, symbolic state representation, relativized timed input/output conformance, mo}
}

Larsen, Kim Gulstrand

Document
Quantitative Models: Expressiveness, Analysis, and New Applications (Dagstuhl Seminar 14041)

Authors: Manfred Droste, Paul Gastin, Kim Gulstrand Larsen, and Axel Legay

Published in: Dagstuhl Reports, Volume 4, Issue 1 (2014)


Abstract
From Jan. 19 to Jan. 24, 2014, "Quantitative Models: Expressiveness, Analysis, and New Applications" was held in Schloss Dagstuhl-Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Manfred Droste, Paul Gastin, Kim Gulstrand Larsen, and Axel Legay. Quantitative Models: Expressiveness, Analysis, and New Applications (Dagstuhl Seminar 14041). In Dagstuhl Reports, Volume 4, Issue 1, pp. 104-124, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)



@Article{droste_et_al:DagRep.4.1.104,
  author =	{Droste, Manfred and Gastin, Paul and Larsen, Kim Gulstrand and Legay, Axel},
  title =	{{Quantitative Models: Expressiveness, Analysis, and New Applications (Dagstuhl Seminar 14041)}},
  pages =	{104--124},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2014},
  volume =	{4},
  number =	{1},
  editor =	{Droste, Manfred and Gastin, Paul and Larsen, Kim Gulstrand and Legay, Axel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.4.1.104},
  URN =		{urn:nbn:de:0030-drops-45374},
  doi =		{10.4230/DagRep.4.1.104},
  annote =	{Keywords: quantitative models, quantitative analysis, timed and hybrid systems, probabilistic systems, weighted automata, systems biology, smart grid}
}

Larsen, Kim Guldstrand

Document
Parametric Verification of Weighted Systems

Authors: Peter Christoffersen, Mikkel Hansen, Anders Mariegaard, Julian Trier Ringsmose, Kim Guldstrand Larsen, and Radu Mardare

Published in: OASIcs, Volume 44, 2nd International Workshop on Synthesis of Complex Parameters (SynCoP'15) (2015)


Abstract
This paper addresses the problem of parametric model checking for weighted transition systems. We consider transition systems labelled with linear equations over a set of parameters, and we use them to provide semantics for a parametric version of weighted CTL where the until and next operators are themselves indexed with linear equations. The parameters turn the model-checking problem into the problem of computing a linear system of inequalities that characterizes the parameter values guaranteeing satisfaction. To address this problem, we use parametric dependency graphs (PDGs) and propose a global update function that yields an assignment to each node in a PDG. We prove that iterative application of the function reaches a fixed-point assignment to the PDG nodes, and that the set of assignments constitutes a well-quasi-ordering, ensuring that the fixed-point assignment is found after finitely many iterations. To demonstrate the utility of our technique, we have implemented a prototype tool that computes the constraints on the parameters for model-checking problems.
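
The fixed-point computation on dependency graphs that the abstract relies on can be shown in miniature for plain Boolean dependency graphs (no parametric constraints; the hyper-edges and names are invented). Each node's value is re-evaluated from its hyper-edges until nothing changes:

# node -> list of hyper-edges; a hyper-edge lists the targets that must
# all hold for the edge to justify its source node.
hyper = {
    'A': [['B', 'C']],      # A holds if both B and C hold
    'B': [[]],              # empty hyper-edge: B holds unconditionally
    'C': [['B'], ['D']],    # C holds if B holds, or if D holds
    'D': [],                # no edges: D is never justified
}

assign = {v: False for v in hyper}
changed = True
while changed:                       # global update to the least fixed point
    changed = False
    for v, edges in hyper.items():
        val = any(all(assign[t] for t in targets) for targets in edges)
        if val and not assign[v]:
            assign[v], changed = True, True

print(assign)   # {'A': True, 'B': True, 'C': True, 'D': False}

In the paper's setting the assignments are linear constraints on the parameters rather than Booleans, and the well-quasi-ordering argument is what bounds the number of global updates.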

Cite as

Peter Christoffersen, Mikkel Hansen, Anders Mariegaard, Julian Trier Ringsmose, Kim Guldstrand Larsen, and Radu Mardare. Parametric Verification of Weighted Systems. In 2nd International Workshop on Synthesis of Complex Parameters (SynCoP'15). Open Access Series in Informatics (OASIcs), Volume 44, pp. 77-90, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{christoffersen_et_al:OASIcs.SynCoP.2015.77,
  author =	{Christoffersen, Peter and Hansen, Mikkel and Mariegaard, Anders and Ringsmose, Julian Trier and Larsen, Kim Guldstrand and Mardare, Radu},
  title =	{{Parametric Verification of Weighted Systems}},
  booktitle =	{2nd International Workshop on Synthesis of Complex Parameters (SynCoP'15)},
  pages =	{77--90},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-82-8},
  ISSN =	{2190-6807},
  year =	{2015},
  volume =	{44},
  editor =	{Andr\'{e}, \'{E}tienne and Frehse, Goran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SynCoP.2015.77},
  URN =		{urn:nbn:de:0030-drops-56115},
  doi =		{10.4230/OASIcs.SynCoP.2015.77},
  annote =	{Keywords: parametric weighted transition systems, parametric weighted CTL, parametric model checking, well-quasi ordering, tool}
}
Document
Polynomial Time Decidability of Weighted Synchronization under Partial Observability

Authors: Jan Kretinsky, Kim Guldstrand Larsen, Simon Laursen, and Jiri Srba

Published in: LIPIcs, Volume 42, 26th International Conference on Concurrency Theory (CONCUR 2015)


Abstract
We consider weighted automata with both positive and negative integer weights on edges and study the problem of synchronization using adaptive strategies that may only observe whether the current weight-level is negative or nonnegative. We show that the synchronization problem is decidable in polynomial time for deterministic weighted automata.

Cite as

Jan Kretinsky, Kim Guldstrand Larsen, Simon Laursen, and Jiri Srba. Polynomial Time Decidability of Weighted Synchronization under Partial Observability. In 26th International Conference on Concurrency Theory (CONCUR 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 42, pp. 142-154, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{kretinsky_et_al:LIPIcs.CONCUR.2015.142,
  author =	{Kretinsky, Jan and Larsen, Kim Guldstrand and Laursen, Simon and Srba, Jiri},
  title =	{{Polynomial Time Decidability of Weighted Synchronization under Partial Observability}},
  booktitle =	{26th International Conference on Concurrency Theory (CONCUR 2015)},
  pages =	{142--154},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-91-0},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{42},
  editor =	{Aceto, Luca and de Frutos Escrig, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2015.142},
  URN =		{urn:nbn:de:0030-drops-53927},
  doi =		{10.4230/LIPIcs.CONCUR.2015.142},
  annote =	{Keywords: weighted automata, partial observability, synchronization, complexity}
}
Document
METAMOC: Modular Execution Time Analysis using Model Checking

Authors: Andreas E. Dalsgaard, Mads Chr. Olesen, Martin Toft, René Rydhof Hansen, and Kim Guldstrand Larsen

Published in: OASIcs, Volume 15, 10th International Workshop on Worst-Case Execution Time Analysis (WCET 2010)


Abstract
Safe and tight worst-case execution times (WCETs) are important when scheduling hard real-time systems. This paper presents METAMOC, a modular method, based on model checking and static analysis, that determines safe and tight WCETs for programs running on platforms featuring caching and pipelining. The method works by constructing a UPPAAL model of the program being analysed and annotating the model with information from an inter-procedural value analysis. The program model is then combined with a model of the hardware platform and model checked for the WCET. Through support for the platforms ARM7, ARM9 and ATMEL AVR 8-bit, the modularity and retargetability of the method are demonstrated, as only the pipeline needs to be remodelled. Hardware modelling is performed in a state-of-the-art graphical modelling environment. Experiments on the Mälardalen WCET benchmark programs show that taking caching into account yields much tighter WCETs than without modelling caches, and that METAMOC is a sufficiently fast and versatile approach for WCET analysis.

Cite as

Andreas E. Dalsgaard, Mads Chr. Olesen, Martin Toft, René Rydhof Hansen, and Kim Guldstrand Larsen. METAMOC: Modular Execution Time Analysis using Model Checking. In 10th International Workshop on Worst-Case Execution Time Analysis (WCET 2010). Open Access Series in Informatics (OASIcs), Volume 15, pp. 113-123, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{dalsgaard_et_al:OASIcs.WCET.2010.113,
  author =	{Dalsgaard, Andreas E. and Olesen, Mads Chr. and Toft, Martin and Hansen, Ren\'{e} Rydhof and Larsen, Kim Guldstrand},
  title =	{{METAMOC: Modular Execution Time Analysis using Model Checking}},
  booktitle =	{10th International Workshop on Worst-Case Execution Time Analysis (WCET 2010)},
  pages =	{113--123},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-21-7},
  ISSN =	{2190-6807},
  year =	{2010},
  volume =	{15},
  editor =	{Lisper, Bj\"{o}rn},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2010.113},
  URN =		{urn:nbn:de:0030-drops-28319},
  doi =		{10.4230/OASIcs.WCET.2010.113},
  annote =	{Keywords: WCET analysis, timed automata, model checking, UPPAAL}
}
Document
10031 Abstracts Collection – Quantitative Models: Expressiveness and Analysis

Authors: Christel Baier, Manfred Droste, Paul Gastin, and Kim Guldstrand Larsen

Published in: Dagstuhl Seminar Proceedings, Volume 10031, Quantitative Models: Expressiveness and Analysis (2010)


Abstract
From Jan 18 to Jan 22, 2010, the Dagstuhl Seminar 10031 "Quantitative Models: Expressiveness and Analysis" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Christel Baier, Manfred Droste, Paul Gastin, and Kim Guldstrand Larsen. 10031 Abstracts Collection – Quantitative Models: Expressiveness and Analysis. In Quantitative Models: Expressiveness and Analysis. Dagstuhl Seminar Proceedings, Volume 10031, pp. 1-15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{baier_et_al:DagSemProc.10031.1,
  author =	{Baier, Christel and Droste, Manfred and Gastin, Paul and Larsen, Kim Guldstrand},
  title =	{{10031 Abstracts Collection – Quantitative Models: Expressiveness and Analysis}},
  booktitle =	{Quantitative Models: Expressiveness and Analysis},
  pages =	{1--15},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10031},
  editor =	{Christel Baier and Manfred Droste and Paul Gastin and Kim Guldstrand Larsen},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10031.1},
  URN =		{urn:nbn:de:0030-drops-26839},
  doi =		{10.4230/DagSemProc.10031.1},
  annote =	{Keywords: Quantitative models, quantitative analysis, timed and hybrid systems, probabilistic systems, weighted automata}
}
Document
10031 Executive Summary – Quantitative Models: Expressiveness and Analysis

Authors: Christel Baier, Manfred Droste, Paul Gastin, and Kim Guldstrand Larsen

Published in: Dagstuhl Seminar Proceedings, Volume 10031, Quantitative Models: Expressiveness and Analysis (2010)


Abstract
Quantitative models and quantitative analysis in Computer Science are currently intensively studied, resulting in a revision of the foundation of Computer Science where classical yes/no answers are replaced by quantitative analyses. The potential application areas are huge, e.g., performance analysis, operational research or embedded systems. The aim of the seminar was to address three fundamental topics which are closely related: quantitative analysis of real-time and hybrid systems; probabilistic analysis and stochastic automata; weighted automata. These three areas of research have mainly evolved independently so far and the relationship between them has emerged only recently. The seminar brought together leading researchers of the three areas, with the goal of future highly productive cross-fertilizations.

Cite as

Christel Baier, Manfred Droste, Paul Gastin, and Kim Guldstrand Larsen. 10031 Executive Summary – Quantitative Models: Expressiveness and Analysis. In Quantitative Models: Expressiveness and Analysis. Dagstuhl Seminar Proceedings, Volume 10031, pp. 1-2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)



@InProceedings{baier_et_al:DagSemProc.10031.2,
  author =	{Baier, Christel and Droste, Manfred and Gastin, Paul and Larsen, Kim Guldstrand},
  title =	{{10031 Executive Summary – Quantitative Models: Expressiveness and Analysis}},
  booktitle =	{Quantitative Models: Expressiveness and Analysis},
  pages =	{1--2},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10031},
  editor =	{Christel Baier and Manfred Droste and Paul Gastin and Kim Guldstrand Larsen},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10031.2},
  URN =		{urn:nbn:de:0030-drops-26824},
  doi =		{10.4230/DagSemProc.10031.2},
  annote =	{Keywords: Quantitative models, quantitative analysis, timed and hybrid systems, probabilistic systems, weighted automata}
}

Kim, Sunghun

Document
Do Bugs Propagate? An Empirical Analysis of Temporal Correlations Among Software Bugs

Authors: Xiaodong Gu, Yo-Sub Han, Sunghun Kim, and Hongyu Zhang

Published in: LIPIcs, Volume 194, 35th European Conference on Object-Oriented Programming (ECOOP 2021)


Abstract
The occurrences of bugs are not isolated events; rather, bugs may interact, affect each other, and trigger other latent bugs. Identifying and understanding bug correlations could help developers localize bug origins, predict potential bugs, and design better architectures of software artifacts that prevent bugs from affecting one another. Many studies in the defect prediction and fault localization literature have implied dependences and interactions between multiple bugs, but few of them explicitly investigate the correlations of bugs across time steps and how bugs affect each other. In this paper, we perform social network analysis of the temporal correlations between bugs across time steps on software artifact ties, i.e., software graphs. Adopting the correlation analysis methodology of social networks, we construct software graphs of three artifact ties, such as function calls and type hierarchy, and then perform longitudinal logistic regressions of time-lag bug correlations on these graphs. Our experiments on four open-source projects suggest that bugs can propagate, as observed on certain artifact-tie graphs. Based on our findings, we propose a hybrid artifact-tie graph, a synthesis of a few well-known software graphs, that exhibits a higher degree of bug propagation. Our findings shed light on research toward better bug prediction and localization models and help developers perform maintenance actions to prevent consequential bugs.
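
The core regression setup, stripped of everything project-specific, looks roughly like this (hand-rolled logistic regression; the call graph, release labels, and single time-lag feature are all invented):

import math

# Hypothetical artifact-tie graph (call graph) and per-release bug labels.
graph = {'f': ['g'], 'g': ['f', 'h'], 'h': ['g']}
buggy = [                      # buggy[t][fn]: was fn buggy in release t?
    {'f': 1, 'g': 0, 'h': 0},
    {'f': 0, 'g': 1, 'h': 0},
    {'f': 1, 'g': 0, 'h': 1},
    {'f': 0, 'g': 1, 'h': 0},
]

# One row per (function, release t > 0): x = [1, any buggy neighbour at t-1],
# y = buggy at t. The time lag is what makes the correlation 'temporal'.
rows = [([1.0, float(any(buggy[t - 1][n] for n in graph[f]))], buggy[t][f])
        for t in range(1, len(buggy)) for f in graph]

w = [0.0, 0.0]
for _ in range(500):           # plain gradient ascent on the log-likelihood
    grad = [0.0, 0.0]
    for x, y in rows:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1])))
        for j in range(2):
            grad[j] += (y - p) * x[j]
    w = [wj + 0.1 * gj for wj, gj in zip(w, grad)]

print(w)   # a clearly positive w[1] is the 'bugs propagate along this tie' signal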

Cite as

Xiaodong Gu, Yo-Sub Han, Sunghun Kim, and Hongyu Zhang. Do Bugs Propagate? An Empirical Analysis of Temporal Correlations Among Software Bugs. In 35th European Conference on Object-Oriented Programming (ECOOP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 194, pp. 11:1-11:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@InProceedings{gu_et_al:LIPIcs.ECOOP.2021.11,
  author =	{Gu, Xiaodong and Han, Yo-Sub and Kim, Sunghun and Zhang, Hongyu},
  title =	{{Do Bugs Propagate? An Empirical Analysis of Temporal Correlations Among Software Bugs}},
  booktitle =	{35th European Conference on Object-Oriented Programming (ECOOP 2021)},
  pages =	{11:1--11:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-190-0},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{194},
  editor =	{M{\o}ller, Anders and Sridharan, Manu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2021.11},
  URN =		{urn:nbn:de:0030-drops-140540},
  doi =		{10.4230/LIPIcs.ECOOP.2021.11},
  annote =	{Keywords: empirical software engineering, bug propagation, software graph, bug correlation}
}
Document
Automated Program Repair (Dagstuhl Seminar 17022)

Authors: Sunghun Kim, Claire Le Goues, Michael Pradel, and Abhik Roychoudhury

Published in: Dagstuhl Reports, Volume 7, Issue 1 (2017)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 17022 "Automated Program Repair". The seminar participants presented and discussed their research through formal and informal presentations. In particular, the seminar covered work related to search-based program repair, semantic program repair, and repair of non-functional properties. As a result of the seminar, several participants plan to launch various follow-up activities, such as a program repair competition, which would help to further establish and guide this young field of research.

Cite as

Sunghun Kim, Claire Le Goues, Michael Pradel, and Abhik Roychoudhury. Automated Program Repair (Dagstuhl Seminar 17022). In Dagstuhl Reports, Volume 7, Issue 1, pp. 19-31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@Article{kim_et_al:DagRep.7.1.19,
  author =	{Kim, Sunghun and Le Goues, Claire and Pradel, Michael and Roychoudhury, Abhik},
  title =	{{Automated Program Repair (Dagstuhl Seminar 17022)}},
  pages =	{19--31},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2017},
  volume =	{7},
  number =	{1},
  editor =	{Kim, Sunghun and Le Goues, Claire and Pradel, Michael and Roychoudhury, Abhik},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.7.1.19},
  URN =		{urn:nbn:de:0030-drops-71767},
  doi =		{10.4230/DagRep.7.1.19},
  annote =	{Keywords: Program repair, program analysis, software engineering}
}

Kim, Miryung

Document
SE4ML - Software Engineering for AI-ML-based Systems (Dagstuhl Seminar 20091)

Authors: Kristian Kersting, Miryung Kim, Guy Van den Broeck, and Thomas Zimmermann

Published in: Dagstuhl Reports, Volume 10, Issue 2 (2020)


Abstract
Multiple research disciplines, from cognitive sciences to biology, finance, physics, and the social sciences, as well as many companies, believe that data-driven and intelligent solutions are necessary. Unfortunately, current artificial intelligence (AI) and machine learning (ML) technologies are not sufficiently democratized: building complex AI and ML systems requires deep expertise in computer science and extensive programming skills to work with various machine reasoning and learning techniques at a rather low level of abstraction. It also requires extensive trial-and-error exploration for model selection, data cleaning, feature selection, and parameter tuning. Moreover, there is a lack of theoretical understanding that could be used to abstract away these subtleties. Conventional programming languages and software engineering paradigms have also not been designed to address the challenges faced by AI and ML practitioners. In 2016, companies invested $26–39 billion in AI, and McKinsey predicts that investments will keep growing over the next few years. Any AI/ML-based system will need to be built, tested, and maintained, yet there is a lack of established engineering practices in industry for such systems because they are fundamentally different from traditional software systems. This Dagstuhl Seminar brought together two rather disjoint communities, software engineering and programming languages (PL/SE) and artificial intelligence and machine learning (AI-ML), to discuss open problems in improving the productivity of data scientists, software engineers, and AI-ML practitioners in industry.

Cite as

Kristian Kersting, Miryung Kim, Guy Van den Broeck, and Thomas Zimmermann. SE4ML - Software Engineering for AI-ML-based Systems (Dagstuhl Seminar 20091). In Dagstuhl Reports, Volume 10, Issue 2, pp. 76-87, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@Article{kersting_et_al:DagRep.10.2.76,
  author =	{Kersting, Kristian and Kim, Miryung and Van den Broeck, Guy and Zimmermann, Thomas},
  title =	{{SE4ML - Software Engineering for AI-ML-based Systems (Dagstuhl Seminar 20091)}},
  pages =	{76--87},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2020},
  volume =	{10},
  number =	{2},
  editor =	{Kersting, Kristian and Kim, Miryung and Van den Broeck, Guy and Zimmermann, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.10.2.76},
  URN =		{urn:nbn:de:0030-drops-130603},
  doi =		{10.4230/DagRep.10.2.76},
  annote =	{Keywords: correctness / explainability / traceability / fairness for ml, data scientist productivity, debugging/ testing / verification for ml systems}
}

Kim, Youngmin

Document
Salient Frame Detection for Molecular Dynamics Simulations

Authors: Youngmin Kim, Robert Patro, Cheuk Yiu Ip, Dianne P. O’Leary, and Andriy Anishkin

Published in: Dagstuhl Follow-Ups, Volume 2, Scientific Visualization: Interactions, Features, Metaphors (2011)


Abstract
Recent advances in sophisticated computational techniques have facilitated simulation of incredibly detailed time-varying trajectories and in the process have generated vast quantities of simulation data. The current tools to analyze and comprehend large-scale time-varying data, however, lag far behind our ability to produce such simulation data. Saliency-based analysis can be applied to time-varying 3D datasets for the purpose of summarization, abstraction, and motion analysis. As the sizes of time-varying datasets continue to grow, it becomes more and more difficult to comprehend vast amounts of data and information in a short period of time. In this paper, we use eigenanalysis to generate orthogonal basis functions over sliding windows to characterize regions of unusual deviations and significant trends. Our results show that motion subspaces provide an effective technique for summarization of large molecular dynamics trajectories.
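
A minimal version of the sliding-window eigenanalysis (synthetic data standing in for a molecular dynamics trajectory; window length, subspace dimension, and the injected event are invented) scores each frame by how poorly the previous window's dominant motion subspace explains it:

import numpy as np

rng = np.random.default_rng(0)
# Synthetic trajectory: 200 frames of 30 coordinates drifting slowly,
# with an abrupt motion injected at frame 120.
frames = np.cumsum(rng.normal(0, 0.01, (200, 30)), axis=0)
frames[120:] += 1.5

W, K = 20, 3                       # window length, subspace dimension
scores = np.zeros(len(frames))
for t in range(W, len(frames)):
    window = frames[t - W:t]
    mean = window.mean(axis=0)
    _, _, vt = np.linalg.svd(window - mean, full_matrices=False)
    basis = vt[:K]                 # dominant motion subspace of the window
    centred = frames[t] - mean
    resid = centred - basis.T @ (basis @ centred)
    scores[t] = np.linalg.norm(resid)   # deviation from recent motion

print(int(scores.argmax()))        # 120: the injected salient frame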

Cite as

Youngmin Kim, Robert Patro, Cheuk Yiu Ip, Dianne P. O’Leary, and Andriy Anishkin. Salient Frame Detection for Molecular Dynamics Simulations. In Scientific Visualization: Interactions, Features, Metaphors. Dagstuhl Follow-Ups, Volume 2, pp. 160-175, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)



@InCollection{kim_et_al:DFU.Vol2.SciViz.2011.160,
  author =	{Kim, Youngmin and Patro, Robert and Yiu Ip, Cheuk and O’Leary, Dianne P. and Anishkin, Andriy},
  title =	{{Salient Frame Detection for Molecular Dynamics Simulations}},
  booktitle =	{Scientific Visualization: Interactions, Features, Metaphors},
  pages =	{160--175},
  series =	{Dagstuhl Follow-Ups},
  ISBN =	{978-3-939897-26-2},
  ISSN =	{1868-8977},
  year =	{2011},
  volume =	{2},
  editor =	{Hagen, Hans},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DFU.Vol2.SciViz.2011.160},
  URN =		{urn:nbn:de:0030-drops-32926},
  doi =		{10.4230/DFU.Vol2.SciViz.2011.160},
  annote =	{Keywords: Saliency based analysis, Molecular Dynamics, Simulation}
}

Kim, Joondong

Document
Scheduling Aircraft to Reduce Controller Workload

Authors: Joondong Kim, Alexander Kroeller, and Joseph Mitchell

Published in: OASIcs, Volume 12, 9th Workshop on Algorithmic Approaches for Transportation Modeling, Optimization, and Systems (ATMOS'09) (2009)


Abstract
We address a problem in air traffic management: scheduling flights in order to minimize the maximum number of aircraft that simultaneously lie within a single air traffic control sector at any time t. Since the problem is a generalization of the NP-hard no-wait job-shop scheduling problem, we resort to heuristics. We report experimental results for real-world flight data.
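
A toy rendition of the optimization target (each flight occupies the sector during a shifted interval; pick small departure delays to lower the peak simultaneous count; the data and the greedy rule are invented):

flights = [(0, 4), (1, 5), (2, 6), (3, 7)]    # (entry, exit) times, invented
DELAYS = range(0, 6)

def peak(intervals):
    events = sorted([(s, 1) for s, e in intervals] +
                    [(e, -1) for s, e in intervals])
    load = best = 0
    for _, d in events:
        load += d
        best = max(best, load)
    return best

delays = [0] * len(flights)
def shifted():
    return [(s + d, e + d) for (s, e), d in zip(flights, delays)]

# Greedy: repeatedly give some flight the first delay that lowers the peak.
improved = True
while improved:
    improved = False
    for i in range(len(flights)):
        base = peak(shifted())
        for d in DELAYS:
            old, delays[i] = delays[i], d
            if peak(shifted()) < base:
                improved = True
                break
            delays[i] = old

print(delays, peak(shifted()))   # [5, 0, 5, 0] with peak 2, down from 4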

Cite as

Joondong Kim, Alexander Kroeller, and Joseph Mitchell. Scheduling Aircraft to Reduce Controller Workload. In 9th Workshop on Algorithmic Approaches for Transportation Modeling, Optimization, and Systems (ATMOS'09). Open Access Series in Informatics (OASIcs), Volume 12, pp. -12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:OASIcs.ATMOS.2009.2144,
  author =	{Kim, Joondong and Kroeller, Alexander and Mitchell, Joseph},
  title =	{{Scheduling Aircraft to Reduce Controller Workload}},
  booktitle =	{9th Workshop on Algorithmic Approaches for Transportation Modeling, Optimization, and Systems (ATMOS'09)},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-11-8},
  ISSN =	{2190-6807},
  year =	{2009},
  volume =	{12},
  editor =	{Clausen, Jens and Di Stefano, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ATMOS.2009.2144},
  URN =		{urn:nbn:de:0030-drops-21443},
  doi =		{10.4230/OASIcs.ATMOS.2009.2144},
  annote =	{Keywords: Air Traffic Management, trajectory scheduling, flight plan scheduling, no-wait job shop}
}

Kim, Jung-Jae

Document
Ontology-based Extraction of Transcription Regulation Events

Authors: Jung-Jae Kim

Published in: Dagstuhl Seminar Proceedings, Volume 8131, Ontologies and Text Mining for Life Sciences : Current Status and Future Perspectives (2008)


Abstract
I present ongoing work on the extraction of transcription regulation events from text by using an ontology which plays a central role in integrating information from different sources. The events of transcription regulation are expressed in the literature with a high degree of compositeness. They have elements such as event types, participants, and attributes. These elements are associated with different keywords, which should be merged into a shared structure. I use the Gene Regulation Ontology (GRO) for this integration. It contains not only biological concepts related to transcription regulation, but also inference rules for deducing specific event types and attributes from the semantics of sentences. It is also used to represent the semantics of the linguistic patterns that identify the semantics of sentences. The ontology provides the formality required for the extraction of specific and well-defined events such as those of transcription regulation.
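
A toy sketch of the pipeline described above (the pattern and the type table are invented placeholders, not actual GRO content): a linguistic pattern produces an event frame, and an ontology-style rule then specializes the event type.

import re

PATTERN = re.compile(r"(?P<agent>\w+) (?P<verb>activates|represses) (?P<target>\w+)")
SPECIALIZE = {"activates": "PositiveRegulationOfTranscription",   # hypothetical labels
              "represses": "NegativeRegulationOfTranscription"}

def extract_events(sentence):
    events = []
    for m in PATTERN.finditer(sentence):
        events.append({"type": SPECIALIZE[m.group("verb")],       # inference step
                       "agent": m.group("agent"),
                       "target": m.group("target")})
    return events

print(extract_events("GerE activates sspG"))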

Cite as

Jung-Jae Kim. Ontology-based Extraction of Transcription Regulation Events. In Ontologies and Text Mining for Life Sciences : Current Status and Future Perspectives. Dagstuhl Seminar Proceedings, Volume 8131, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


Copy BibTex To Clipboard

@InProceedings{kim:DagSemProc.08131.13,
  author =	{Kim, Jung-Jae},
  title =	{{Ontology-based Extraction of Transcription Regulation Events}},
  booktitle =	{Ontologies and Text Mining for Life Sciences : Current Status and Future Perspectives},
  pages =	{1--1},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8131},
  editor =	{Michael Ashburner and Ulf Leser and Dietrich Rebholz-Schuhmann},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08131.13},
  URN =		{urn:nbn:de:0030-drops-15112},
  doi =		{10.4230/DagSemProc.08131.13},
  annote =	{Keywords: Information extraction, ontology, transcription regulation, inference, ontology semantics}
}

Kim, Yunhyong

Document
Data, Information, and Knowledge: "where is the Life we have lost in living?"

Authors: Yunhyong Kim

Published in: Dagstuhl Seminar Proceedings, Volume 10291, Automation in Digital Preservation (2010)


Abstract
This abstract attempts to raise the question of whether current practices in digital preservation properly address the issues of findability of digital objects. It is also intended as a starting point for discussing preservation of digital information in contrast to digital data. The abstract is exploratory and informal.

Cite as

Yunhyong Kim. Data, Information, and Knowledge: "where is the Life we have lost in living?". In Automation in Digital Preservation. Dagstuhl Seminar Proceedings, Volume 10291, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{kim:DagSemProc.10291.9,
  author =	{Kim, Yunhyong},
  title =	{{Data, Information, and Knowledge: "where is the Life we have lost in living?"}},
  booktitle =	{Automation in Digital Preservation},
  pages =	{1--1},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10291},
  editor =	{Jean-Pierre Chanod and Milena Dobreva and Andreas Rauber and Seamus Ross},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10291.9},
  URN =		{urn:nbn:de:0030-drops-27656},
  doi =		{10.4230/DagSemProc.10291.9},
  annote =	{Keywords: Data, information, knowledge, wisdom, preservation, appraisal, selection, findability}
}
Document
On the Notion of Genre in Digital Preservation

Authors: Fiorella Foscarini, Yunhyong Kim, Christopher A. Lee, Alexander Mehler, Gillian Oliver, and Seamus Ross

Published in: Dagstuhl Seminar Proceedings, Volume 10291, Automation in Digital Preservation (2010)


Abstract
In this paper, we discuss the notion of genre as a basis for addressing the problem of context representation in digital preservation. We outline several reference points for the notion of genre. This includes a review of diplomatic principles that can support and enhance the power of genre as a key to capture information about context relations. Further, we discuss the impact of open genre models and open topic models in information retrieval and finally present a list of research questions concerning future research in automation of digital preservation.

Cite as

Fiorella Foscarini, Yunhyong Kim, Christopher A. Lee, Alexander Mehler, Gillian Oliver, and Seamus Ross. On the Notion of Genre in Digital Preservation. In Automation in Digital Preservation. Dagstuhl Seminar Proceedings, Volume 10291, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


Copy BibTex To Clipboard

@InProceedings{foscarini_et_al:DagSemProc.10291.12,
  author =	{Foscarini, Fiorella and Kim, Yunhyong and Lee, Christopher A. and Mehler, Alexander and Oliver, Gillian and Ross, Seamus},
  title =	{{On the Notion of Genre in Digital Preservation}},
  booktitle =	{Automation in Digital Preservation},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10291},
  editor =	{Jean-Pierre Chanod and Milena Dobreva and Andreas Rauber and Seamus Ross},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10291.12},
  URN =		{urn:nbn:de:0030-drops-27638},
  doi =		{10.4230/DagSemProc.10291.12},
  annote =	{Keywords: Digital preservation, genre analysis, context modeling, diplomatics, information retrieval}
}

Larsen, Kasper Green

Document
Invertible Bloom Lookup Tables with Less Memory and Randomness

Authors: Nils Fleischhacker, Kasper Green Larsen, Maciej Obremski, and Mark Simkin

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In this work we study Invertible Bloom Lookup Tables (IBLTs) with small failure probabilities. IBLTs are highly versatile data structures that have found applications in set reconciliation protocols, error-correcting codes, and even the design of advanced cryptographic primitives. For storing n elements and ensuring correctness with probability at least 1 - δ, existing IBLT constructions require Ω(n(log(1/δ)/log(n) + 1)) space and they crucially rely on fully random hash functions. We present new constructions of IBLTs that are simultaneously more space efficient and require less randomness. For storing n elements with a failure probability of at most δ, our data structure only requires O(n + log(1/δ) log log(1/δ)) space and O(log(log(n)/δ))-wise independent hash functions. As a key technical ingredient we show that hashing n keys with any k-wise independent hash function h: U → [Cn] for some sufficiently large constant C guarantees with probability 1 - 2^{-Ω(k)} that at least n/2 keys will have a unique hash value. Proving this is non-trivial as k approaches n. We believe that the techniques used to prove this statement may be of independent interest. We apply our new IBLTs to the encrypted compression problem, recently studied by Fleischhacker, Larsen, Simkin (Eurocrypt 2023). We extend their approach to work for a more general class of encryption schemes and, using our new IBLT, we achieve an asymptotically better compression rate.
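
A minimal IBLT sketch (illustrative only; it substitutes a salted SHA-256 for the limited-independence hash functions analyzed in the paper, and omits the checksum field a robust IBLT carries to detect impure cells):

import hashlib

class IBLT:
    def __init__(self, m, k=3):
        self.m, self.k = m, k
        self.count = [0] * m
        self.key_sum = [0] * m                 # XOR of the integer keys in each cell

    def _cells(self, key):
        return [int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % self.m
                for i in range(self.k)]

    def insert(self, key):
        for c in self._cells(key):
            self.count[c] += 1
            self.key_sum[c] ^= key

    def delete(self, key):
        for c in self._cells(key):
            self.count[c] -= 1
            self.key_sum[c] ^= key

    def list_keys(self):
        """Peel cells holding exactly one key; destroys the table as it goes.
        With too little space the peeling can stall -- the failure event whose
        probability the space bounds above control."""
        out, progress = [], True
        while progress:
            progress = False
            for c in range(self.m):
                if self.count[c] == 1:
                    key = self.key_sum[c]
                    out.append(key)
                    self.delete(key)
                    progress = True
        return out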

Cite as

Nils Fleischhacker, Kasper Green Larsen, Maciej Obremski, and Mark Simkin. Invertible Bloom Lookup Tables with Less Memory and Randomness. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 54:1-54:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{fleischhacker_et_al:LIPIcs.ESA.2024.54,
  author =	{Fleischhacker, Nils and Larsen, Kasper Green and Obremski, Maciej and Simkin, Mark},
  title =	{{Invertible Bloom Lookup Tables with Less Memory and Randomness}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{54:1--54:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.54},
  URN =		{urn:nbn:de:0030-drops-211252},
  doi =		{10.4230/LIPIcs.ESA.2024.54},
  annote =	{Keywords: Invertible Bloom Lookup Tables}
}
Document
Invited Paper
From TCS to Learning Theory (Invited Paper)

Authors: Kasper Green Larsen

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
While machine learning theory and theoretical computer science are both based on a solid mathematical foundation, the two research communities have a smaller overlap than what the proximity of the fields warrant. In this invited abstract, I will argue that traditional theoretical computer scientists have much to offer the learning theory community and vice versa. I will make this argument by telling a personal story of how I broadened my research focus to encompass learning theory, and how my TCS background has been extremely useful in doing so. It is my hope that this personal account may inspire more TCS researchers to tackle the many elegant and important theoretical questions that learning theory has to offer.

Cite as

Kasper Green Larsen. From TCS to Learning Theory (Invited Paper). In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 4:1-4:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{larsen:LIPIcs.MFCS.2024.4,
  author =	{Larsen, Kasper Green},
  title =	{{From TCS to Learning Theory}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{4:1--4:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.4},
  URN =		{urn:nbn:de:0030-drops-205603},
  doi =		{10.4230/LIPIcs.MFCS.2024.4},
  annote =	{Keywords: Theoretical Computer Science, Learning Theory}
}
Document
Sublinear Time Shortest Path in Expander Graphs

Authors: Noga Alon, Allan Grønlund, Søren Fuglede Jørgensen, and Kasper Green Larsen

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
Computing a shortest path between two nodes in an undirected unweighted graph is among the most basic algorithmic tasks. Breadth first search solves this problem in linear time, which is clearly also a lower bound in the worst case. However, several works have shown how to solve this problem in sublinear time in expectation when the input graph is drawn from one of several classes of random graphs. In this work, we extend these results by giving sublinear time shortest path (and short path) algorithms for expander graphs. We thus identify a natural deterministic property of a graph (that is satisfied by typical random regular graphs) which suffices for sublinear time shortest paths. The algorithms are very simple, involving only bidirectional breadth first search and short random walks. We also complement our new algorithms by near-matching lower bounds.
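
A sketch of the bidirectional breadth first search skeleton only (the short random walks and the expander-specific analysis from the paper are omitted): both frontiers grow geometrically in an expander, so they meet after touching far fewer than n vertices in typical instances.

from collections import deque

def bidirectional_bfs(adj, s, t):
    """Length of a shortest s-t path in an undirected graph given as adjacency lists."""
    if s == t:
        return 0
    dist_s, dist_t = {s: 0}, {t: 0}
    q_s, q_t = deque([s]), deque([t])
    while q_s and q_t:
        # Expand the smaller frontier by one full layer.
        q, dist, other = (q_s, dist_s, dist_t) if len(q_s) <= len(q_t) else (q_t, dist_t, dist_s)
        found = None
        for _ in range(len(q)):
            u = q.popleft()
            for v in adj[u]:
                if v in other:                 # the two searches meet
                    cand = dist[u] + 1 + other[v]
                    found = cand if found is None else min(found, cand)
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if found is not None:                  # finish the layer, then report the best meeting
            return found
    return None                                # s and t are disconnected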

Cite as

Noga Alon, Allan Grønlund, Søren Fuglede Jørgensen, and Kasper Green Larsen. Sublinear Time Shortest Path in Expander Graphs. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 8:1-8:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{alon_et_al:LIPIcs.MFCS.2024.8,
  author =	{Alon, Noga and Gr{\o}nlund, Allan and J{\o}rgensen, S{\o}ren Fuglede and Larsen, Kasper Green},
  title =	{{Sublinear Time Shortest Path in Expander Graphs}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{8:1--8:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.8},
  URN =		{urn:nbn:de:0030-drops-205646},
  doi =		{10.4230/LIPIcs.MFCS.2024.8},
  annote =	{Keywords: Shortest Path, Expanders, Breadth First Search, Graph Algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Optimal Non-Adaptive Cell Probe Dictionaries and Hashing

Authors: Kasper Green Larsen, Rasmus Pagh, Giuseppe Persiano, Toniann Pitassi, Kevin Yeo, and Or Zamir

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We present a simple and provably optimal non-adaptive cell probe data structure for the static dictionary problem. Our data structure supports storing a set of n key-value pairs from [u] × [u] using s words of space and answering key lookup queries in t = O(lg(u/n)/lg(s/n)) non-adaptive probes. This generalizes a solution to the membership problem (i.e., where no values are associated with keys) due to Buhrman et al. We also present matching lower bounds for the non-adaptive static membership problem in the deterministic setting. Our lower bound implies that both our dictionary algorithm and the preceding membership algorithm are optimal, and in particular that there is an inherent complexity gap in these problems between no adaptivity and one round of adaptivity (with which hashing-based algorithms solve these problems in constant time). Using the ideas underlying our data structure, we also obtain the first implementation of an n-wise independent family of hash functions with optimal evaluation time in the cell probe model.
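
A toy illustration of the non-adaptive probing model itself (hypothetical memory layout; constructing a layout under which lookups always succeed is the hard part the paper solves): every probe address is a function of the key alone, so all t reads can be issued in one round, unlike adaptive hashing where each probe depends on the previous read.

import hashlib

def probe_addresses(key, s, t):
    """Addresses depend only on the key -- the defining non-adaptive property."""
    return [int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % s
            for i in range(t)]

def lookup(table, key, t):
    addrs = probe_addresses(key, len(table), t)   # fixed before any memory read
    cells = [table[a] for a in addrs]             # a single parallel probe round
    for k, v in filter(None, cells):              # cells hold (key, value) or None
        if k == key:
            return v
    return None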

Cite as

Kasper Green Larsen, Rasmus Pagh, Giuseppe Persiano, Toniann Pitassi, Kevin Yeo, and Or Zamir. Optimal Non-Adaptive Cell Probe Dictionaries and Hashing. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 104:1-104:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{larsen_et_al:LIPIcs.ICALP.2024.104,
  author =	{Larsen, Kasper Green and Pagh, Rasmus and Persiano, Giuseppe and Pitassi, Toniann and Yeo, Kevin and Zamir, Or},
  title =	{{Optimal Non-Adaptive Cell Probe Dictionaries and Hashing}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{104:1--104:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.104},
  URN =		{urn:nbn:de:0030-drops-202471},
  doi =		{10.4230/LIPIcs.ICALP.2024.104},
  annote =	{Keywords: non-adaptive, cell probe, dictionary, hashing}
}
Document
The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds

Authors: Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We pose the fine-grained hardness hypothesis that the textbook algorithm for the NFA Acceptance problem is optimal up to subpolynomial factors, even for dense NFAs and fixed alphabets. We show that this barrier appears in many variations throughout the algorithmic literature by introducing a framework of Colored Walk problems. These yield fine-grained equivalent formulations of the NFA Acceptance problem as problems concerning detection of an s-t-walk with a prescribed color sequence in a given edge- or node-colored graph. For NFA Acceptance on sparse NFAs (or equivalently, Colored Walk in sparse graphs), a tight lower bound under the Strong Exponential Time Hypothesis has been rediscovered several times in recent years. We show that our hardness hypothesis, which concerns dense NFAs, has several interesting implications: - It gives a tight lower bound for Context-Free Language Reachability. This proves conditional optimality for the class of 2NPDA-complete problems, explaining the cubic bottleneck of interprocedural program analysis. - It gives a tight (n+nm^{1/3})^{1-o(1)} lower bound for the Word Break problem on strings of length n and dictionaries of total size m. - It implies the popular OMv hypothesis. Since the NFA acceptance problem is a static (i.e., non-dynamic) problem, this provides a static reason for the hardness of many dynamic problems. Thus, a proof of the NFA Acceptance hypothesis would resolve several interesting barriers. Conversely, a refutation of the NFA Acceptance hypothesis may lead the way to attacking the current barriers observed for Context-Free Language Reachability, the Word Break problem and the growing list of dynamic problems proven hard under the OMv hypothesis.
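
For reference, a sketch of the textbook NFA Acceptance algorithm whose optimality the hypothesis asserts (epsilon transitions omitted for brevity): maintain the set of reachable states and push it through each input symbol, which costs up to quadratic work in the number of states per symbol on dense NFAs.

def nfa_accepts(delta, start, accepting, word):
    """delta: dict mapping (state, symbol) to a set of successor states."""
    current = {start}
    for symbol in word:
        current = {v for u in current for v in delta.get((u, symbol), ())}
        if not current:                 # dead: no run can recover
            return False
    return bool(current & accepting)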

Cite as

Karl Bringmann, Allan Grønlund, Marvin Künnemann, and Kasper Green Larsen. The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 22:1-22:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.ITCS.2024.22,
  author =	{Bringmann, Karl and Gr{\o}nlund, Allan and K\"{u}nnemann, Marvin and Larsen, Kasper Green},
  title =	{{The NFA Acceptance Hypothesis: Non-Combinatorial and Dynamic Lower Bounds}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{22:1--22:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.22},
  URN =		{urn:nbn:de:0030-drops-195500},
  doi =		{10.4230/LIPIcs.ITCS.2024.22},
  annote =	{Keywords: Fine-grained complexity theory, non-deterministic finite automata}
}
Document
Distributed Shuffling in Adversarial Environments

Authors: Kasper Green Larsen, Maciej Obremski, and Mark Simkin

Published in: LIPIcs, Volume 267, 4th Conference on Information-Theoretic Cryptography (ITC 2023)


Abstract
We study mix-nets in the context of cryptocurrencies. Here we have many computationally weak shufflers that speak one after another and want to jointly shuffle a list of ciphertexts (c₁, … , c_n). Each shuffler can only permute k << n ciphertexts at a time. An adversary A can track some of the ciphertexts and adaptively corrupt some of the shufflers. We present a simple protocol for shuffling the list of ciphertexts efficiently. The main technical contribution of this work is to prove that our simple shuffling strategy does indeed provide good anonymity guarantees and at the same time terminates quickly. Our shuffling algorithm provides a strict improvement over the current shuffling strategy in Ethereum’s block proposer elections. Our algorithm is secure against a stronger adversary, provides provable security guarantees, and is comparable in efficiency to the current approach.
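
A sketch of the setting only (parameters and loop structure are illustrative; none of the paper's anonymity analysis is reproduced): each shuffler in turn picks k of the n ciphertext slots and applies a fresh random permutation to just those slots.

import random

def run_shufflers(ciphertexts, k, rounds, rng=random.Random(0)):
    cts = list(ciphertexts)
    n = len(cts)
    for _ in range(rounds):                  # one iteration per shuffler
        slots = rng.sample(range(n), k)      # the k positions this shuffler touches
        values = [cts[i] for i in slots]
        rng.shuffle(values)
        for i, v in zip(slots, values):
            cts[i] = v
    return cts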

Cite as

Kasper Green Larsen, Maciej Obremski, and Mark Simkin. Distributed Shuffling in Adversarial Environments. In 4th Conference on Information-Theoretic Cryptography (ITC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 267, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{larsen_et_al:LIPIcs.ITC.2023.10,
  author =	{Larsen, Kasper Green and Obremski, Maciej and Simkin, Mark},
  title =	{{Distributed Shuffling in Adversarial Environments}},
  booktitle =	{4th Conference on Information-Theoretic Cryptography (ITC 2023)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-271-6},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{267},
  editor =	{Chung, Kai-Min},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2023.10},
  URN =		{urn:nbn:de:0030-drops-183385},
  doi =		{10.4230/LIPIcs.ITC.2023.10},
  annote =	{Keywords: Distributed Computing, Shuffling}
}
Document
Hierarchical Categories in Colored Searching

Authors: Peyman Afshani, Rasmus Killmann, and Kasper Green Larsen

Published in: LIPIcs, Volume 248, 33rd International Symposium on Algorithms and Computation (ISAAC 2022)


Abstract
In colored range counting (CRC), the input is a set of points where each point is assigned a "color" (or a "category") and the goal is to store them in a data structure such that the number of distinct categories inside a given query range can be counted efficiently. CRC has strong motivations as it allows data structures to deal with categorical data. However, colors (i.e., the categories) in the CRC problem do not have any internal structure, whereas this is not the case for many datasets in practice, where hierarchical categories exist or where a single input belongs to multiple categories. Motivated by these, we consider variants of the problem where such structures can be represented. We define two variants of the problem called hierarchical range counting (HCC) and sub-category colored range counting (SCRC) and consider hierarchical structures that can either be a DAG or a tree. We show that the two problems on some special trees are in fact equivalent to other well-known problems in the literature. Based on these, we also give efficient data structures when the underlying hierarchy can be represented as a tree. We show a conditional lower bound for the general case when the underlying hierarchy can be any DAG, through reductions from the orthogonal vectors problem.
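
For concreteness, the query the data structures must answer, written as a naive baseline (no preprocessing; the paper's structures, and the HCC/SCRC variants with hierarchical colors, are what improve on this):

def colored_range_count(points, lo, hi):
    """points: list of (x, y, color); query box [lo[0], hi[0]] x [lo[1], hi[1]]."""
    (lx, ly), (hx, hy) = lo, hi
    return len({c for x, y, c in points if lx <= x <= hx and ly <= y <= hy})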

Cite as

Peyman Afshani, Rasmus Killmann, and Kasper Green Larsen. Hierarchical Categories in Colored Searching. In 33rd International Symposium on Algorithms and Computation (ISAAC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 248, pp. 25:1-25:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{afshani_et_al:LIPIcs.ISAAC.2022.25,
  author =	{Afshani, Peyman and Killmann, Rasmus and Larsen, Kasper Green},
  title =	{{Hierarchical Categories in Colored Searching}},
  booktitle =	{33rd International Symposium on Algorithms and Computation (ISAAC 2022)},
  pages =	{25:1--25:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-258-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{248},
  editor =	{Bae, Sang Won and Park, Heejin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2022.25},
  URN =		{urn:nbn:de:0030-drops-173100},
  doi =		{10.4230/LIPIcs.ISAAC.2022.25},
  annote =	{Keywords: Categorical Data, Computational Geometry}
}
Document
Optimality of the Johnson-Lindenstrauss Dimensionality Reduction for Practical Measures

Authors: Yair Bartal, Ora Nova Fandina, and Kasper Green Larsen

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
It is well known that the Johnson-Lindenstrauss dimensionality reduction method is optimal for worst case distortion. While in practice many other methods and heuristics are used, not much is known in terms of bounds on their performance. The question of whether the JL method is optimal for practical measures of distortion was recently raised in [Yair Bartal et al., 2019] (NeurIPS'19). They provided upper bounds on its quality for a wide range of practical measures and showed that indeed these are best possible in many cases. Yet, some of the most important cases, including the fundamental case of average distortion, were left open. In particular, they show that the JL transform has 1+ε average distortion for embedding into k-dimensional Euclidean space, where k = O(1/ε²), and for more general q-norms of distortion, k = O(max{1/ε²,q/ε}), whereas tight lower bounds were established only for large values of q via reduction to the worst case. In this paper we prove that these bounds are best possible for any dimensionality reduction method, for any 1 ≤ q ≤ O((log (2ε² n))/ε) and ε ≥ 1/(√n), where n is the size of the subset of Euclidean space. Our results also imply that the JL method is optimal for various distortion measures commonly used in practice, such as stress, energy and relative error. We prove that if any of these measures is bounded by ε then k = Ω(1/ε²), for any ε ≥ 1/(√n), matching the upper bounds of [Yair Bartal et al., 2019] and extending their tightness results to the full range of moment analysis. Our results may indicate that the JL dimensionality reduction method should be considered more often in practical applications, and the bounds we provide for its quality should serve as a measure for comparison when evaluating the performance of other methods and heuristics.
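
An empirical sketch of the quantity in question (an illustration with arbitrary constants, not the paper's analysis): embed with a random Gaussian JL map into k dimensions and average the per-pair distortion over the point set.

import numpy as np

def average_distortion(X, k, rng=np.random.default_rng(0)):
    """X: (n, d) array of points; returns the mean pairwise distortion (>= 1)."""
    n, d = X.shape
    G = rng.normal(size=(d, k)) / np.sqrt(k)     # linear JL map x -> x @ G
    Y = X @ G
    vals = []
    for i in range(n):
        for j in range(i + 1, n):
            ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
            vals.append(max(ratio, 1 / ratio))   # expansion or contraction, whichever is worse
    return float(np.mean(vals))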

Cite as

Yair Bartal, Ora Nova Fandina, and Kasper Green Larsen. Optimality of the Johnson-Lindenstrauss Dimensionality Reduction for Practical Measures. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 13:1-13:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bartal_et_al:LIPIcs.SoCG.2022.13,
  author =	{Bartal, Yair and Fandina, Ora Nova and Larsen, Kasper Green},
  title =	{{Optimality of the Johnson-Lindenstrauss Dimensionality Reduction for Practical Measures}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{13:1--13:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.13},
  URN =		{urn:nbn:de:0030-drops-160219},
  doi =		{10.4230/LIPIcs.SoCG.2022.13},
  annote =	{Keywords: average distortion, practical dimensionality reduction, JL transform}
}
Document
Broadcast Secret-Sharing, Bounds and Applications

Authors: Ivan Bjerre Damgård, Kasper Green Larsen, and Sophia Yakoubov

Published in: LIPIcs, Volume 199, 2nd Conference on Information-Theoretic Cryptography (ITC 2021)


Abstract
Consider a sender 𝒮 and a group of n recipients. 𝒮 holds a secret message 𝗆 of length l bits and the goal is to allow 𝒮 to create a secret sharing of 𝗆 with privacy threshold t among the recipients, by broadcasting a single message 𝖼 to the recipients. Our goal is to do this with information theoretic security in a model with a simple form of correlated randomness. Namely, for each subset 𝒜 of recipients of size q, 𝒮 may share a random key with all recipients in 𝒜. (The keys shared with different subsets 𝒜 must be independent.) We call this Broadcast Secret-Sharing (BSS) with parameters l, n, t and q. Our main question is: how large must 𝖼 be, as a function of the parameters? We show that ((n-t)/q)·l is a lower bound, and we show an upper bound of ((n(t+1))/(q+t) - t)·l, matching the lower bound whenever t = 0, or when q = 1 or q = n-t. When q = n-t, the size of 𝖼 is exactly l, which is clearly minimal. The protocol demonstrating the upper bound in this case requires 𝒮 to share a key with every subset of size n-t. We show that this overhead cannot be avoided when 𝖼 has minimal size. We also show that if access is additionally given to an idealized PRG, the lower bound on ciphertext size becomes ((n-t)/q)·λ + l - negl(λ) (where λ is the length of the input to the PRG). The upper bound becomes ((n(t+1))/(q+t) - t)·λ + l. BSS can be applied directly to secret-key threshold encryption. We can also consider a setting where the correlated randomness is generated using computationally secure and non-interactive key exchange, where we assume that each recipient has an (independently generated) public key for this purpose. In this model, any protocol for non-interactive secret sharing becomes an ad hoc threshold encryption (ATE) scheme, which is a threshold encryption scheme with no trusted setup beyond a PKI. Our upper bounds imply new ATE schemes, and our lower bound becomes a lower bound on the ciphertext size in any ATE scheme that uses a key exchange functionality and no other cryptographic primitives.
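
The bounds themselves are easy to evaluate; a quick arithmetic check (plugging hypothetical parameters into the formulas above) confirms that they coincide at t = 0:

def bss_bounds(n, t, q, l):
    lower = (n - t) / q * l                      # lower bound on |c| in bits
    upper = (n * (t + 1) / (q + t) - t) * l      # upper bound on |c| in bits
    return lower, upper

# n = 10 recipients, privacy threshold t = 0, keys shared with subsets of size q = 2:
print(bss_bounds(10, 0, 2, 128))                 # (640.0, 640.0) -- the bounds match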

Cite as

Ivan Bjerre Damgård, Kasper Green Larsen, and Sophia Yakoubov. Broadcast Secret-Sharing, Bounds and Applications. In 2nd Conference on Information-Theoretic Cryptography (ITC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 199, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{damgard_et_al:LIPIcs.ITC.2021.10,
  author =	{Damg\r{a}rd, Ivan Bjerre and Larsen, Kasper Green and Yakoubov, Sophia},
  title =	{{Broadcast Secret-Sharing, Bounds and Applications}},
  booktitle =	{2nd Conference on Information-Theoretic Cryptography (ITC 2021)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-197-9},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{199},
  editor =	{Tessaro, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2021.10},
  URN =		{urn:nbn:de:0030-drops-143299},
  doi =		{10.4230/LIPIcs.ITC.2021.10},
  annote =	{Keywords: Secret-Sharing, Ad-hoc Threshold Encryption}
}
Document
Track A: Algorithms, Complexity and Games
Lower Bounds for Multiplication via Network Coding

Authors: Peyman Afshani, Casper Benjamin Freksen, Lior Kamma, and Kasper Green Larsen

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
Multiplication is one of the most fundamental computational problems, yet its true complexity remains elusive. The best known upper bound, very recently proved by Harvey and van der Hoeven (2019), shows that two n-bit numbers can be multiplied via a boolean circuit of size O(n lg n). In this work, we prove that if a central conjecture in the area of network coding is true, then any constant degree boolean circuit for multiplication must have size Omega(n lg n), thus almost completely settling the complexity of multiplication circuits. We additionally revisit classic conjectures in circuit complexity, due to Valiant, and show that the network coding conjecture also implies one of Valiant’s conjectures.

Cite as

Peyman Afshani, Casper Benjamin Freksen, Lior Kamma, and Kasper Green Larsen. Lower Bounds for Multiplication via Network Coding. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 10:1-10:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{afshani_et_al:LIPIcs.ICALP.2019.10,
  author =	{Afshani, Peyman and Freksen, Casper Benjamin and Kamma, Lior and Larsen, Kasper Green},
  title =	{{Lower Bounds for Multiplication via Network Coding}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{10:1--10:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.10},
  URN =		{urn:nbn:de:0030-drops-105861},
  doi =		{10.4230/LIPIcs.ICALP.2019.10},
  annote =	{Keywords: Circuit Complexity, Circuit Lower Bounds, Multiplication, Network Coding, Fine-Grained Complexity}
}
Document
Constructive Discrepancy Minimization with Hereditary L2 Guarantees

Authors: Kasper Green Larsen

Published in: LIPIcs, Volume 126, 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)


Abstract
In discrepancy minimization problems, we are given a family of sets S = {S_1,...,S_m}, with each S_i in S a subset of some universe U = {u_1,...,u_n} of n elements. The goal is to find a coloring chi : U -> {-1,+1} of the elements of U such that each set S in S is colored as evenly as possible. Two classic measures of discrepancy are l_infty-discrepancy defined as disc_infty(S,chi):=max_{S in S} | sum_{u_i in S} chi(u_i) | and l_2-discrepancy defined as disc_2(S,chi):=sqrt{(1/|S|) sum_{S in S} (sum_{u_i in S} chi(u_i))^2}. Breakthrough work by Bansal [FOCS'10] gave a polynomial time algorithm, based on rounding an SDP, for finding a coloring chi such that disc_infty(S,chi) = O(lg n * herdisc_infty(S)) where herdisc_infty(S) is the hereditary l_infty-discrepancy of S. We complement his work by giving a clean and simple O((m+n)n^2) time algorithm for finding a coloring chi such that disc_2(S,chi) = O(sqrt{lg n} * herdisc_2(S)) where herdisc_2(S) is the hereditary l_2-discrepancy of S. Interestingly, our algorithm avoids solving an SDP and instead relies simply on computing eigendecompositions of matrices. To prove that our algorithm has the claimed guarantees, we also prove new inequalities relating both herdisc_infty and herdisc_2 to the eigenvalues of the incidence matrix corresponding to S. Our inequalities improve over previous work by Chazelle and Lvov [SCG'00] and by Matousek, Nikolov and Talwar [SODA'15+SCG'15]. We believe these inequalities are of independent interest as powerful tools for proving hereditary discrepancy lower bounds. Finally, we also implement our algorithm and show that it far outperforms random sampling of colorings in practice. Moreover, the algorithm finishes in a reasonable amount of time on matrices of sizes up to 10000 x 10000.
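
A sketch of the l_2-discrepancy objective itself (the eigendecomposition-based rounding of the algorithm is beyond a short snippet; the matrix sizes here are arbitrary), together with the random-coloring baseline the experiments compare against:

import numpy as np

def disc_2(A, chi):
    """A: (m, n) 0/1 incidence matrix with A[i][j] = 1 iff u_j in S_i; chi: +-1 vector."""
    imbalance = A @ chi                          # signed sum over each set
    return float(np.sqrt(np.mean(imbalance ** 2)))

rng = np.random.default_rng(0)
A = (rng.random((100, 50)) < 0.3).astype(float)
chi = rng.choice([-1.0, 1.0], size=50)           # random coloring baseline
print(disc_2(A, chi))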

Cite as

Kasper Green Larsen. Constructive Discrepancy Minimization with Hereditary L2 Guarantees. In 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 126, pp. 48:1-48:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{larsen:LIPIcs.STACS.2019.48,
  author =	{Larsen, Kasper Green},
  title =	{{Constructive Discrepancy Minimization with Hereditary L2 Guarantees}},
  booktitle =	{36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)},
  pages =	{48:1--48:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-100-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{126},
  editor =	{Niedermeier, Rolf and Paul, Christophe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2019.48},
  URN =		{urn:nbn:de:0030-drops-102878},
  doi =		{10.4230/LIPIcs.STACS.2019.48},
  annote =	{Keywords: Discrepancy, Hereditary Discrepancy, Combinatorics, Computational Geometry}
}
Document
Upper and Lower Bounds for Dynamic Data Structures on Strings

Authors: Raphaël Clifford, Allan Grønlund, Kasper Green Larsen, and Tatiana Starikovskaya

Published in: LIPIcs, Volume 96, 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)


Abstract
We consider a range of simply stated dynamic data structure problems on strings. An update changes one symbol in the input and a query asks us to compute some function of the pattern of length m and a substring of a longer text. We give both conditional and unconditional lower bounds for variants of exact matching with wildcards, inner product, and Hamming distance computation via a sequence of reductions. As an example, we show that there does not exist an O(m^{1/2-epsilon}) time algorithm for a large range of these problems unless the online Boolean matrix-vector multiplication conjecture is false. We also provide nearly matching upper bounds for most of the problems we consider.
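
For contrast with the O(m^{1/2-epsilon}) conditional lower bound, the trivial baseline (no clever data structure at all) answers each query in O(m) time; anything substantially faster is what the reductions rule out. A minimal sketch for the Hamming distance variant:

class DynamicHamming:
    def __init__(self, text, pattern):
        self.text, self.pattern = list(text), pattern

    def update(self, i, c):
        """An update changes one symbol of the text."""
        self.text[i] = c

    def query(self, i):
        """Hamming distance between the pattern and text[i : i + m], in O(m) time."""
        m = len(self.pattern)
        return sum(a != b for a, b in zip(self.pattern, self.text[i:i + m]))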

Cite as

Raphaël Clifford, Allan Grønlund, Kasper Green Larsen, and Tatiana Starikovskaya. Upper and Lower Bounds for Dynamic Data Structures on Strings. In 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 96, pp. 22:1-22:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{clifford_et_al:LIPIcs.STACS.2018.22,
  author =	{Clifford, Rapha\"{e}l and Gr{\o}nlund, Allan and Larsen, Kasper Green and Starikovskaya, Tatiana},
  title =	{{Upper and Lower Bounds for Dynamic Data Structures on Strings}},
  booktitle =	{35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)},
  pages =	{22:1--22:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-062-0},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{96},
  editor =	{Niedermeier, Rolf and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2018.22},
  URN =		{urn:nbn:de:0030-drops-85088},
  doi =		{10.4230/LIPIcs.STACS.2018.22},
  annote =	{Keywords: exact pattern matching with wildcards, hamming distance, inner product, conditional lower bounds}
}
Document
On Using Toeplitz and Circulant Matrices for Johnson-Lindenstrauss Transforms

Authors: Casper Benjamin Freksen and Kasper Green Larsen

Published in: LIPIcs, Volume 92, 28th International Symposium on Algorithms and Computation (ISAAC 2017)


Abstract
The Johnson-Lindenstrauss lemma is one of the cornerstone results in dimensionality reduction. It says that for any set of N vectors X ⊂ R^n, there exists a mapping f : X → R^m such that f(X) preserves all pairwise distances between vectors in X to within (1 ± ε) if m = O(ε^{-2} lg N). Much effort has gone into developing fast embedding algorithms, with the Fast Johnson-Lindenstrauss transform of Ailon and Chazelle being one of the most well-known techniques. The current fastest algorithm that yields the optimal m = O(ε^{-2} lg N) dimensions has an embedding time of O(n lg n + ε^{-2} lg^3 N). An exciting approach towards improving this, due to Hinrichs and Vybíral, is to use a random m × n Toeplitz matrix for the embedding. Using the Fast Fourier Transform, the embedding of a vector can then be computed in O(n lg m) time. The big question is of course whether m = O(ε^{-2} lg N) dimensions suffice for this technique. If so, this would end a decades-long quest to obtain faster and faster Johnson-Lindenstrauss transforms. The current best analysis of the embedding of Hinrichs and Vybíral shows that m = O(ε^{-2} lg^2 N) dimensions suffice. The main result of this paper is a proof that this analysis unfortunately cannot be tightened any further, i.e., there exists a set of N vectors requiring m = Ω(ε^{-2} lg^2 N) for the Toeplitz approach to work.
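
A sketch of the FFT trick for the circulant variant (illustrative; the paper's result concerns how many rows m such embeddings need, not this code): multiplying by a partial circulant matrix with random sign flips is a circular convolution, so one embedding costs a few FFTs. This plain version runs in O(n lg n); the O(n lg m) bound in the abstract comes from chunking the input, which is omitted here.

import numpy as np

def circulant_jl(x, m, rng=np.random.default_rng(0)):
    n = len(x)
    d = rng.choice([-1.0, 1.0], size=n)      # random sign flips (diagonal matrix D)
    c = rng.choice([-1.0, 1.0], size=n)      # first column of the circulant matrix
    conv = np.fft.ifft(np.fft.fft(c) * np.fft.fft(d * x)).real
    return conv[:m] / np.sqrt(m)             # keep the first m rows and rescale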

Cite as

Casper Benjamin Freksen and Kasper Green Larsen. On Using Toeplitz and Circulant Matrices for Johnson-Lindenstrauss Transforms. In 28th International Symposium on Algorithms and Computation (ISAAC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 92, pp. 32:1-32:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{freksen_et_al:LIPIcs.ISAAC.2017.32,
  author =	{Freksen, Casper Benjamin and Larsen, Kasper Green},
  title =	{{On Using Toeplitz and Circulant Matrices for Johnson-Lindenstrauss Transforms}},
  booktitle =	{28th International Symposium on Algorithms and Computation (ISAAC 2017)},
  pages =	{32:1--32:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-054-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{92},
  editor =	{Okamoto, Yoshio and Tokuyama, Takeshi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2017.32},
  URN =		{urn:nbn:de:0030-drops-82540},
  doi =		{10.4230/LIPIcs.ISAAC.2017.32},
  annote =	{Keywords: dimensionality reduction, Johnson-Lindenstrauss, Toeplitz matrices}
}
Document
The Johnson-Lindenstrauss Lemma Is Optimal for Linear Dimensionality Reduction

Authors: Kasper Green Larsen and Jelani Nelson

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
For any n > 1, 0 < ε < 1/2, and N > n^C for some constant C > 0, we show the existence of an N-point subset X of l_2^n such that any linear map from X to l_2^m with distortion at most 1 + ε must have m = Ω(min{n, ε^{-2} lg N}). This improves a lower bound of Alon [Alon, Discrete Mathematics, 1999], in the linear setting, by a lg(1/ε) factor. Our lower bound matches the upper bounds provided by the identity matrix and the Johnson-Lindenstrauss lemma [Johnson and Lindenstrauss, Contemporary Mathematics, 1984].

Cite as

Kasper Green Larsen and Jelani Nelson. The Johnson-Lindenstrauss Lemma Is Optimal for Linear Dimensionality Reduction. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 82:1-82:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{larsen_et_al:LIPIcs.ICALP.2016.82,
  author =	{Larsen, Kasper Green and Nelson, Jelani},
  title =	{{The Johnson-Lindenstrauss Lemma Is Optimal for Linear Dimensionality Reduction}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{82:1--82:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.82},
  URN =		{urn:nbn:de:0030-drops-62032},
  doi =		{10.4230/LIPIcs.ICALP.2016.82},
  annote =	{Keywords: dimensionality reduction, lower bounds, Johnson-Lindenstrauss}
}
Document
Towards Tight Lower Bounds for Range Reporting on the RAM

Authors: Allan Grønlund and Kasper Green Larsen

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
In the orthogonal range reporting problem, we are to preprocess a set of n points with integer coordinates on a UxU grid. The goal is to support reporting all k points inside an axis-aligned query rectangle. This is one of the most fundamental data structure problems in databases and computational geometry. Despite the importance of the problem its complexity remains unresolved in the word-RAM. On the upper bound side, three best tradeoffs exist, all derived by reducing range reporting to a ball-inheritance problem. Ball-inheritance is a problem that essentially encapsulates all previous attempts at solving range reporting in the word-RAM. In this paper we make progress towards closing the gap between the upper and lower bounds for range reporting by proving cell probe lower bounds for ball-inheritance. Our lower bounds are tight for a large range of parameters, excluding any further progress for range reporting using the ball-inheritance reduction.

Cite as

Allan Grønlund and Kasper Green Larsen. Towards Tight Lower Bounds for Range Reporting on the RAM. In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, pp. 92:1-92:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{grnlund_et_al:LIPIcs.ICALP.2016.92,
  author =	{Gr{\o}nlund, Allan and Larsen, Kasper Green},
  title =	{{Towards Tight Lower Bounds for Range Reporting on the RAM}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{92:1--92:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.92},
  URN =		{urn:nbn:de:0030-drops-61936},
  doi =		{10.4230/LIPIcs.ICALP.2016.92},
  annote =	{Keywords: Data Structures, Lower Bounds, Cell Probe Model, Range Reporting}
}
Document
Linear-Space Data Structures for Range Mode Query in Arrays

Authors: Timothy M. Chan, Stephane Durocher, Kasper Green Larsen, Jason Morrison, and Bryan T. Wilkinson

Published in: LIPIcs, Volume 14, 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012)


Abstract
A mode of a multiset S is an element a in S of maximum multiplicity; that is, a occurs at least as frequently as any other element in S. Given an array A[1:n] of n elements, we consider a basic problem: constructing a static data structure that efficiently answers range mode queries on A. Each query consists of an input pair of indices (i, j) for which a mode of A[i:j] must be returned. The best previous data structure with linear space, by Krizanc, Morin, and Smid (ISAAC 2003), requires O(sqrt(n) loglog n) query time. We improve their result and present an O(n)-space data structure that supports range mode queries in O(sqrt(n / log n)) worst-case time. Furthermore, we present strong evidence that a query time significantly below sqrt(n) cannot be achieved by purely combinatorial techniques; we show that boolean matrix multiplication of two sqrt(n) by sqrt(n) matrices reduces to n range mode queries in an array of size O(n). Additionally, we give linear-space data structures for orthogonal range mode in higher dimensions (queries in near O(n^(1-1/2d)) time) and for halfspace range mode in higher dimensions (queries in O(n^(1-1/d^2)) time).
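
A compact sketch of the classic sqrt-decomposition structure in the spirit of Krizanc, Morin, and Smid (not the paper's refined O(sqrt(n / log n)) solution): precompute the mode of every span of full blocks, then fix up the O(sqrt(n)) boundary elements per query using sorted occurrence lists.

import bisect, math
from collections import defaultdict

class RangeMode:
    def __init__(self, a):
        self.a, self.b = a, max(1, math.isqrt(len(a)))
        self.occ = defaultdict(list)                  # value -> sorted positions
        for i, v in enumerate(a):
            self.occ[v].append(i)
        nb = (len(a) + self.b - 1) // self.b
        self.best = [[(0, None)] * nb for _ in range(nb)]  # blocks s..e -> (freq, mode)
        for s in range(nb):
            cnt, run = defaultdict(int), (0, None)
            for i in range(s * self.b, len(a)):
                cnt[a[i]] += 1
                if cnt[a[i]] > run[0]:
                    run = (cnt[a[i]], a[i])
                if (i + 1) % self.b == 0:
                    self.best[s][i // self.b] = run

    def _freq(self, v, i, j):                         # occurrences of v in a[i..j]
        p = self.occ[v]
        return bisect.bisect_right(p, j) - bisect.bisect_left(p, i)

    def query(self, i, j):
        """A mode of a[i..j], both endpoints inclusive."""
        b = self.b
        bs, be = (i + b - 1) // b, (j + 1) // b - 1   # full blocks inside [i, j]
        freq, mode = self.best[bs][be] if bs <= be else (0, None)
        boundary = list(range(i, min(bs * b, j + 1))) + \
                   list(range(max((be + 1) * b, i), j + 1))
        for p in boundary:                            # the true mode is the interior mode
            f = self._freq(self.a[p], i, j)           # or some boundary element
            if f > freq:
                freq, mode = f, self.a[p]
        return mode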

Cite as

Timothy M. Chan, Stephane Durocher, Kasper Green Larsen, Jason Morrison, and Bryan T. Wilkinson. Linear-Space Data Structures for Range Mode Query in Arrays. In 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 14, pp. 290-301, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{chan_et_al:LIPIcs.STACS.2012.290,
  author =	{Chan, Timothy M. and Durocher, Stephane and Larsen, Kasper Green and Morrison, Jason and Wilkinson, Bryan T.},
  title =	{{Linear-Space Data Structures for Range Mode Query in Arrays}},
  booktitle =	{29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012)},
  pages =	{290--301},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-35-4},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{14},
  editor =	{D\"{u}rr, Christoph and Wilke, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2012.290},
  URN =		{urn:nbn:de:0030-drops-34254},
  doi =		{10.4230/LIPIcs.STACS.2012.290},
  annote =	{Keywords: mode, range query, data structure, linear space, array}
}

Kim, Eun Jung

Document
Bandwidth Parameterized by Cluster Vertex Deletion Number

Authors: Tatsuya Gima, Eun Jung Kim, Noleen Köhler, Nikolaos Melissinos, and Manolis Vasilakis

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
Given a graph G and an integer b, Bandwidth asks whether there exists a bijection π from V(G) to {1, …, |V(G)|} such that max_{{u, v} ∈ E(G)} | π(u) - π(v) | ≤ b. This is a classical NP-complete problem, known to remain NP-complete even on very restricted classes of graphs, such as trees of maximum degree 3 and caterpillars of hair length 3. In the realm of parameterized complexity, these results imply that the problem remains NP-hard on graphs of bounded pathwidth, while it is additionally known to be W[1]-hard when parameterized by the treedepth of the input graph. In contrast, the problem does become FPT when parameterized by the vertex cover number of the input graph. In this paper, we make progress towards the parameterized (in)tractability of Bandwidth. We first show that it is FPT when parameterized by the cluster vertex deletion number cvd plus the clique number ω of the input graph, thus generalizing the previously mentioned result for vertex cover. On the other hand, we show that Bandwidth is W[1]-hard when parameterized only by cvd. Our results generalize some of the previous results and narrow some of the complexity gaps.
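
The definition is direct to state in code; a brute-force sketch (exponential, consistent with the NP-completeness above) that checks whether some layout achieves width at most b:

from itertools import permutations

def bandwidth_at_most(vertices, edges, b):
    for perm in permutations(vertices):               # candidate bijection pi
        pi = {v: i + 1 for i, v in enumerate(perm)}
        if all(abs(pi[u] - pi[v]) <= b for u, v in edges):
            return True
    return False

# The path 1-2-3 has bandwidth 1:
print(bandwidth_at_most([1, 2, 3], [(1, 2), (2, 3)], 1))   # True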

Cite as

Tatsuya Gima, Eun Jung Kim, Noleen Köhler, Nikolaos Melissinos, and Manolis Vasilakis. Bandwidth Parameterized by Cluster Vertex Deletion Number. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 21:1-21:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{gima_et_al:LIPIcs.IPEC.2023.21,
  author =	{Gima, Tatsuya and Kim, Eun Jung and K\"{o}hler, Noleen and Melissinos, Nikolaos and Vasilakis, Manolis},
  title =	{{Bandwidth Parameterized by Cluster Vertex Deletion Number}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{21:1--21:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.21},
  URN =		{urn:nbn:de:0030-drops-194401},
  doi =		{10.4230/LIPIcs.IPEC.2023.21},
  annote =	{Keywords: Bandwidth, Clique number, Cluster vertex deletion number, Parameterized complexity}
}
Document
Twin-Width VIII: Delineation and Win-Wins

Authors: Édouard Bonnet, Dibyayan Chakraborty, Eun Jung Kim, Noleen Köhler, Raul Lopes, and Stéphan Thomassé

Published in: LIPIcs, Volume 249, 17th International Symposium on Parameterized and Exact Computation (IPEC 2022)


Abstract
We introduce the notion of delineation. A graph class C is said to be delineated by twin-width (or simply, delineated) if for every hereditary closure D of a subclass of C, it holds that D has bounded twin-width if and only if D is monadically dependent. An effective strengthening of delineation for a class C implies that tractable FO model checking on C is perfectly understood: On hereditary closures of subclasses D of C, FO model checking on D is fixed-parameter tractable (FPT) exactly when D has bounded twin-width. Ordered graphs [BGOdMSTT, STOC '22] and permutation graphs [BKTW, JACM '22] are effectively delineated, while subcubic graphs are not. On the one hand, we prove that interval graphs, and even rooted directed path graphs, are delineated. On the other hand, we observe or show that segment graphs, directed path graphs (with arbitrarily many roots), and visibility graphs of simple polygons are not delineated. In an effort to draw the delineation frontier between interval graphs (which are delineated) and axis-parallel two-lengthed segment graphs (which are not), we investigate the twin-width of restricted segment intersection classes. It was known that (triangle-free) pure axis-parallel unit segment graphs have unbounded twin-width [BGKTW, SODA '21]. We show that K_{t,t}-free segment graphs and axis-parallel H_t-free unit segment graphs have bounded twin-width, where H_t is the half-graph or ladder of height t. In contrast, axis-parallel H₄-free two-lengthed segment graphs have unbounded twin-width. We leave as an open question whether unit segment graphs are delineated. More broadly, we explore which structures (large bicliques, half-graphs, or independent sets) are responsible for making the twin-width large on the main classes of intersection and visibility graphs. Our new results, combined with the FPT algorithm for first-order model checking on graphs given with O(1)-sequences [BKTW, JACM '22], give rise to a variety of algorithmic win-win arguments. They all fall in the same framework: If p is an FO definable graph parameter that effectively functionally upper-bounds twin-width on a class C, then p(G) ⩾ k can be decided in FPT time f(k) ⋅ |V(G)|^O(1). For instance, we readily derive FPT algorithms for k-Ladder on visibility graphs of 1.5D terrains, and k-Independent Set on visibility graphs of simple polygons. This showcases that the theory of twin-width can serve outside of classes of bounded twin-width.

Cite as

Édouard Bonnet, Dibyayan Chakraborty, Eun Jung Kim, Noleen Köhler, Raul Lopes, and Stéphan Thomassé. Twin-Width VIII: Delineation and Win-Wins. In 17th International Symposium on Parameterized and Exact Computation (IPEC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 249, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{bonnet_et_al:LIPIcs.IPEC.2022.9,
  author =	{Bonnet, \'{E}douard and Chakraborty, Dibyayan and Kim, Eun Jung and K\"{o}hler, Noleen and Lopes, Raul and Thomass\'{e}, St\'{e}phan},
  title =	{{Twin-Width VIII: Delineation and Win-Wins}},
  booktitle =	{17th International Symposium on Parameterized and Exact Computation (IPEC 2022)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-260-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{249},
  editor =	{Dell, Holger and Nederlof, Jesper},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2022.9},
  URN =		{urn:nbn:de:0030-drops-173650},
  doi =		{10.4230/LIPIcs.IPEC.2022.9},
  annote =	{Keywords: Twin-width, intersection graphs, visibility graphs, monadic dependence and stability, first-order model checking}
}
Document
Obstructions for Matroids of Path-Width at most k and Graphs of Linear Rank-Width at most k

Authors: Mamadou Moustapha Kanté, Eun Jung Kim, O-joung Kwon, and Sang-il Oum

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
Every minor-closed class of matroids of bounded branch-width can be characterized by a minimal list of excluded minors, but unlike graphs, this list could be infinite in general. However, for each fixed finite field F, the list contains only finitely many F-representable matroids, due to the well-quasi-ordering of F-representable matroids of bounded branch-width under taking matroid minors [J. F. Geelen, A. M. H. Gerards, and G. Whittle (2002)]. But this proof is non-constructive and does not provide any algorithm for computing these F-representable excluded minors in general. We consider the class of matroids of path-width at most k for fixed k. We prove that for a finite field F, every F-representable excluded minor for the class of matroids of path-width at most k has at most 2^{|F|^{O(k²)}} elements. We can therefore compute, for any integer k and a fixed finite field F, the set of F-representable excluded minors for the class of matroids of path-width at most k, and this gives as a corollary a polynomial-time algorithm for checking whether the path-width of an F-represented matroid is at most k. We also prove that every excluded pivot-minor for the class of graphs having linear rank-width at most k has at most 2^{2^{O(k²)}} vertices, which also results in a similar algorithmic consequence for the linear rank-width of graphs.

Cite as

Mamadou Moustapha Kanté, Eun Jung Kim, O-joung Kwon, and Sang-il Oum. Obstructions for Matroids of Path-Width at most k and Graphs of Linear Rank-Width at most k. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 40:1-40:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{kante_et_al:LIPIcs.STACS.2022.40,
  author =	{Kant\'{e}, Mamadou Moustapha and Kim, Eun Jung and Kwon, O-joung and Oum, Sang-il},
  title =	{{Obstructions for Matroids of Path-Width at most k and Graphs of Linear Rank-Width at most k}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{40:1--40:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.40},
  URN =		{urn:nbn:de:0030-drops-158507},
  doi =		{10.4230/LIPIcs.STACS.2022.40},
  annote =	{Keywords: path-width, matroid, linear rank-width, graph, forbidden minor, vertex-minor, pivot-minor}
}
Document
Twin-Width and Polynomial Kernels

Authors: Édouard Bonnet, Eun Jung Kim, Amadeus Reinald, Stéphan Thomassé, and Rémi Watrigant

Published in: LIPIcs, Volume 214, 16th International Symposium on Parameterized and Exact Computation (IPEC 2021)


Abstract
We study the existence of polynomial kernels for parameterized problems without a polynomial kernel on general graphs, when restricted to graphs of bounded twin-width. It was previously observed in [Bonnet et al., ICALP'21] that the problem k-Independent Set admits no polynomial kernel on graphs of bounded twin-width by a very simple argument, which extends to several other problems such as k-Independent Dominating Set, k-Path, k-Induced Path, and k-Induced Matching. In this work, we examine k-Dominating Set and variants of k-Vertex Cover for the existence of polynomial kernels. As a main result, we show that k-Dominating Set does not admit a polynomial kernel on graphs of twin-width at most 4 under a standard complexity-theoretic assumption. The reduction is intricate, especially due to the effort to bring the twin-width down to 4, and it can be tweaked to work for Connected k-Dominating Set and Total k-Dominating Set with a slightly worse bound on the twin-width. On the positive side, we obtain a simple quadratic vertex kernel for Connected k-Vertex Cover and Capacitated k-Vertex Cover on graphs of bounded twin-width. These kernels rely on the fact that graphs of bounded twin-width have Vapnik-Chervonenkis (VC) density 1, that is, for any vertex set X, the number of distinct neighborhoods in X is at most c⋅|X|, where c is a constant depending only on the twin-width. Interestingly, the kernel applies to any graph class of VC density 1 and does not require a witness sequence. We also present a more intricate O(k^{1.5}) vertex kernel for Connected k-Vertex Cover. Finally, we show that deciding if a graph has twin-width at most 1 can be done in polynomial time, and observe that most graph optimization/decision problems can be solved in polynomial time on graphs of twin-width at most 1.
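As a concrete reading of the VC density 1 property, the following minimal Python sketch counts the distinct neighborhood traces on a vertex set X; the adjacency-dictionary representation is an assumption made for illustration and is not part of the paper.

def distinct_neighborhoods(adj, X):
    # Trace of v on X: the part of v's neighborhood that falls inside X.
    # VC density 1 means the number of distinct traces is at most c * |X|
    # for a constant c depending only on the twin-width.
    X = set(X)
    return len({frozenset(adj[v] & X) for v in adj})

# Example: on the path a-b-c-d with X = {b, c}, only the traces {b} and
# {c} occur, so the count is 2.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(distinct_neighborhoods(adj, {"b", "c"}))  # 2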

Cite as

Édouard Bonnet, Eun Jung Kim, Amadeus Reinald, Stéphan Thomassé, and Rémi Watrigant. Twin-Width and Polynomial Kernels. In 16th International Symposium on Parameterized and Exact Computation (IPEC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 214, pp. 10:1-10:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{bonnet_et_al:LIPIcs.IPEC.2021.10,
  author =	{Bonnet, \'{E}douard and Kim, Eun Jung and Reinald, Amadeus and Thomass\'{e}, St\'{e}phan and Watrigant, R\'{e}mi},
  title =	{{Twin-Width and Polynomial Kernels}},
  booktitle =	{16th International Symposium on Parameterized and Exact Computation (IPEC 2021)},
  pages =	{10:1--10:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-216-7},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{214},
  editor =	{Golovach, Petr A. and Zehavi, Meirav},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2021.10},
  URN =		{urn:nbn:de:0030-drops-153932},
  doi =		{10.4230/LIPIcs.IPEC.2021.10},
  annote =	{Keywords: Twin-width, kernelization, lower bounds, Dominating Set}
}
Document
APPROX
A Constant-Factor Approximation for Weighted Bond Cover

Authors: Eun Jung Kim, Euiwoong Lee, and Dimitrios M. Thilikos

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
The Weighted ℱ-Vertex Deletion problem for a class ℱ of graphs asks, given a weighted graph G, for a minimum-weight vertex set S such that G-S ∈ ℱ. The case when ℱ is minor-closed and excludes some graph as a minor has received particular attention, but a constant-factor approximation has remained elusive for Weighted ℱ-Vertex Deletion. Only three cases of minor-closed ℱ are known to admit constant-factor approximations, namely Vertex Cover, Feedback Vertex Set, and Diamond Hitting Set. We study the problem for the class ℱ of θ_c-minor-free graphs, under the equivalent setting of the Weighted c-Bond Cover problem, and present a constant-factor approximation algorithm using the primal-dual method. For this, we leverage a structure theorem implicit in [Joret et al., SIDMA'14] which states the following: any graph G containing a θ_c-minor-model either contains a large two-terminal protrusion, or a constant-size θ_c-minor-model, or a collection of pairwise disjoint constant-sized connected sets that can be contracted simultaneously to yield a dense graph. In the first case, we tame the graph by replacing the protrusion with a special-purpose weighted gadget. For the second and third cases, we provide a weighting scheme which guarantees a local approximation ratio. Besides taking an important step in the quest to (dis)prove a constant-factor approximation for Weighted ℱ-Vertex Deletion, our result may be useful as a template for algorithms for other minor-closed families.

Cite as

Eun Jung Kim, Euiwoong Lee, and Dimitrios M. Thilikos. A Constant-Factor Approximation for Weighted Bond Cover. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 7:1-7:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.APPROX/RANDOM.2021.7,
  author =	{Kim, Eun Jung and Lee, Euiwoong and Thilikos, Dimitrios M.},
  title =	{{A Constant-Factor Approximation for Weighted Bond Cover}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{7:1--7:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.7},
  URN =		{urn:nbn:de:0030-drops-147002},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.7},
  annote =	{Keywords: Constant-factor approximation algorithms, Primal-dual method, Bonds in graphs, Graph minors, Graph modification problems}
}
Document
Track A: Algorithms, Complexity and Games
Twin-width III: Max Independent Set, Min Dominating Set, and Coloring

Authors: Édouard Bonnet, Colin Geniet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant

Published in: LIPIcs, Volume 198, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)


Abstract
We recently introduced the notion of twin-width, a novel graph invariant, and showed that first-order model checking can be solved in time f(d,k)n for n-vertex graphs given with a witness that the twin-width is at most d, called a d-contraction sequence or d-sequence, and formulas of size k [Bonnet et al., FOCS '20]. The inevitable price to pay for such a general result is that f is a tower of exponentials of height roughly k. In this paper, we show that algorithms based on twin-width need not be impractical. We present 2^{O(k)}n-time algorithms for k-Independent Set, r-Scattered Set, k-Clique, and k-Dominating Set when an O(1)-sequence of the graph is given in input. We further show how to solve the weighted version of k-Independent Set, Subgraph Isomorphism, and Induced Subgraph Isomorphism, in the slightly worse running time 2^{O(k log k)}n. Up to logarithmic factors in the exponent, all these running times are optimal, unless the Exponential Time Hypothesis fails. Like our FO model checking algorithm, these new algorithms are based on a dynamic programming scheme following the sequence of contractions forward. We then show a second algorithmic use of the contraction sequence, by starting at its end and rewinding it. As an example of such a reverse scheme, we present a polynomial-time algorithm that properly colors the vertices of a graph with relatively few colors, thereby establishing that bounded twin-width classes are χ-bounded. This significantly extends the χ-boundedness of bounded rank-width classes, and does so with a very concise proof. It readily yields a constant-factor approximation for Max Independent Set on K_t-free graphs of bounded twin-width, and a 2^{O(OPT)}-approximation for Min Coloring on bounded twin-width graphs. We further observe that a constant-factor approximation for Max Independent Set on bounded twin-width graphs (but arbitrarily large clique number) would actually imply a PTAS. The third algorithmic use of twin-width builds on the second one. Playing the contraction sequence backward, we show that bounded twin-width graphs can be edge-partitioned into a linear number of bicliques, such that both sides of the bicliques are on consecutive vertices, in a fixed vertex ordering. This property is trivially shared with graphs of bounded average degree. Given that biclique edge-partition, we show how to solve the unweighted Single-Source Shortest Paths and hence All-Pairs Shortest Paths in time O(n log n) and time O(n² log n), respectively. In sharp contrast, even Diameter does not admit a truly subquadratic algorithm on bounded twin-width graphs, unless the Strong Exponential Time Hypothesis fails. The fourth algorithmic use of twin-width builds on the so-called versatile tree of contractions [Bonnet et al., SODA '21], a branching and more robust witness of low twin-width. We present constant-factor approximation algorithms for Min Dominating Set and related problems on bounded twin-width graphs, by showing that the integrality gap is constant. This is done by going down the versatile tree and stopping according to a problem-dependent criterion. At the reached node, a greedy approach yields the desired approximation.

Cite as

Édouard Bonnet, Colin Geniet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant. Twin-width III: Max Independent Set, Min Dominating Set, and Coloring. In 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 198, pp. 35:1-35:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{bonnet_et_al:LIPIcs.ICALP.2021.35,
  author =	{Bonnet, \'{E}douard and Geniet, Colin and Kim, Eun Jung and Thomass\'{e}, St\'{e}phan and Watrigant, R\'{e}mi},
  title =	{{Twin-width III: Max Independent Set, Min Dominating Set, and Coloring}},
  booktitle =	{48th International Colloquium on Automata, Languages, and Programming (ICALP 2021)},
  pages =	{35:1--35:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-195-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{198},
  editor =	{Bansal, Nikhil and Merelli, Emanuela and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2021.35},
  URN =		{urn:nbn:de:0030-drops-141044},
  doi =		{10.4230/LIPIcs.ICALP.2021.35},
  annote =	{Keywords: Twin-width, Max Independent Set, Min Dominating Set, Coloring, Parameterized Algorithms, Approximation Algorithms, Exact Algorithms}
}
Document
Towards Constant-Factor Approximation for Chordal / Distance-Hereditary Vertex Deletion

Authors: Jungho Ahn, Eun Jung Kim, and Euiwoong Lee

Published in: LIPIcs, Volume 181, 31st International Symposium on Algorithms and Computation (ISAAC 2020)


Abstract
For a family of graphs ℱ, Weighted ℱ-Deletion is the problem for which the input is a vertex-weighted graph G = (V, E) and the goal is to delete S ⊆ V with minimum weight such that G⧵S ∈ ℱ. Designing a constant-factor approximation algorithm for large subclasses of perfect graphs has been an interesting research direction. Block graphs, 3-leaf power graphs, and interval graphs are known to admit constant-factor approximation algorithms, but the question is open for chordal graphs and distance-hereditary graphs. In this paper, we add one more class to this list by presenting a constant-factor approximation algorithm when ℱ is the intersection of chordal graphs and distance-hereditary graphs. These graphs are known as ptolemaic graphs and form a superset of both block graphs and 3-leaf power graphs. Our proof presents new properties and algorithmic results on inter-clique digraphs, as well as an approximation algorithm for a variant of Feedback Vertex Set (named Feedback Vertex Set with Precedence Constraints) that exploits this relationship, each of which may be of independent interest.

Cite as

Jungho Ahn, Eun Jung Kim, and Euiwoong Lee. Towards Constant-Factor Approximation for Chordal / Distance-Hereditary Vertex Deletion. In 31st International Symposium on Algorithms and Computation (ISAAC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 181, pp. 62:1-62:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{ahn_et_al:LIPIcs.ISAAC.2020.62,
  author =	{Ahn, Jungho and Kim, Eun Jung and Lee, Euiwoong},
  title =	{{Towards Constant-Factor Approximation for Chordal / Distance-Hereditary Vertex Deletion}},
  booktitle =	{31st International Symposium on Algorithms and Computation (ISAAC 2020)},
  pages =	{62:1--62:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-173-3},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{181},
  editor =	{Cao, Yixin and Cheng, Siu-Wing and Li, Minming},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2020.62},
  URN =		{urn:nbn:de:0030-drops-134063},
  doi =		{10.4230/LIPIcs.ISAAC.2020.62},
  annote =	{Keywords: ptolemaic, approximation algorithm, linear programming, feedback vertex set}
}
Document
Grundy Distinguishes Treewidth from Pathwidth

Authors: Rémy Belmonte, Eun Jung Kim, Michael Lampis, Valia Mitsou, and Yota Otachi

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
Structural graph parameters, such as treewidth, pathwidth, and clique-width, are a central topic of study in parameterized complexity. A main aim of research in this area is to understand the "price of generality" of these widths: as we transition from more restrictive to more general notions, which are the problems that see their complexity status deteriorate from fixed-parameter tractable to intractable? This type of question is by now very well-studied, but, somewhat strikingly, the algorithmic frontier between the two (arguably) most central width notions, treewidth and pathwidth, is still not understood: currently, no natural graph problem is known to be W-hard for one but FPT for the other. Indeed, a surprising development of the last few years has been the observation that for many of the most paradigmatic problems, their complexities for the two parameters actually coincide exactly, despite the fact that treewidth is a much more general parameter. It would thus appear that the extra generality of treewidth over pathwidth often comes "for free". Our main contribution in this paper is to uncover the first natural example where this generality comes with a high price. We consider Grundy Coloring, a variation of coloring where one seeks to calculate the worst possible coloring that could be assigned to a graph by a greedy First-Fit algorithm. We show that this well-studied problem is FPT parameterized by pathwidth; however, it becomes significantly harder (W[1]-hard) when parameterized by treewidth. Furthermore, we show that Grundy Coloring makes a second complexity jump for more general widths, as it becomes para-NP-hard for clique-width. Hence, Grundy Coloring nicely captures the complexity trade-offs between the three most well-studied parameters. Completing the picture, we show that Grundy Coloring is FPT parameterized by modular-width.

Cite as

Rémy Belmonte, Eun Jung Kim, Michael Lampis, Valia Mitsou, and Yota Otachi. Grundy Distinguishes Treewidth from Pathwidth. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 14:1-14:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{belmonte_et_al:LIPIcs.ESA.2020.14,
  author =	{Belmonte, R\'{e}my and Kim, Eun Jung and Lampis, Michael and Mitsou, Valia and Otachi, Yota},
  title =	{{Grundy Distinguishes Treewidth from Pathwidth}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{14:1--14:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.14},
  URN =		{urn:nbn:de:0030-drops-128803},
  doi =		{10.4230/LIPIcs.ESA.2020.14},
  annote =	{Keywords: Treewidth, Pathwidth, Clique-width, Grundy Coloring}
}
Document
Grundy Coloring & Friends, Half-Graphs, Bicliques

Authors: Pierre Aboulker, Édouard Bonnet, Eun Jung Kim, and Florian Sikora

Published in: LIPIcs, Volume 154, 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)


Abstract
The first-fit coloring is a heuristic that assigns to each vertex, arriving in a specified order σ, the smallest available color. The problem Grundy Coloring asks how many colors are needed for the most adversarial vertex ordering σ, i.e., the maximum number of colors that the first-fit coloring requires over all possible vertex orderings. Since its inception by Grundy in 1939, Grundy Coloring has been examined for its structural and algorithmic aspects. A brute-force f(k)n^{2^{k-1}}-time algorithm for Grundy Coloring on general graphs is not difficult to obtain, where k is the number of colors required by the most adversarial vertex ordering. It was asked several times whether the dependency on k in the exponent of n can be avoided or reduced, and the answer remained elusive until now. We prove that Grundy Coloring is W[1]-hard and that the brute-force algorithm is essentially optimal under the Exponential Time Hypothesis, thus settling this question in the negative. The key ingredient in our W[1]-hardness proof is to use so-called half-graphs as a building block to transmit a color from one vertex to another. Leveraging the half-graphs, we also prove that b-Chromatic Core is W[1]-hard, whose parameterized complexity was posed as an open question by Panolan et al. [JCSS '17]. A natural follow-up question is how the parameterized complexity changes in the absence of (large) half-graphs. We establish fixed-parameter tractability on K_{t,t}-free graphs for b-Chromatic Core and Partial Grundy Coloring, taking a step toward answering this question. The key combinatorial lemma underlying the tractability result might be of independent interest.
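A minimal Python sketch of the first-fit heuristic and the resulting brute-force definition of the Grundy number (adjacency dictionaries and the factorial-time enumeration are illustrative assumptions; the paper's algorithms are far more refined):

from itertools import permutations

def first_fit(adj, order):
    # Assign each vertex, in the given order, the smallest color not
    # already used by a colored neighbor.
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return max(color.values())

def grundy_number(adj):
    # Worst case of first-fit over all vertex orderings (brute force,
    # feasible only for tiny graphs).
    return max(first_fit(adj, order) for order in permutations(adj))

# Example: the path a-b-c-d has Grundy number 3, witnessed by the
# ordering a, d, b, c.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(grundy_number(adj))  # 3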

Cite as

Pierre Aboulker, Édouard Bonnet, Eun Jung Kim, and Florian Sikora. Grundy Coloring & Friends, Half-Graphs, Bicliques. In 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 154, pp. 58:1-58:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{aboulker_et_al:LIPIcs.STACS.2020.58,
  author =	{Aboulker, Pierre and Bonnet, \'{E}douard and Kim, Eun Jung and Sikora, Florian},
  title =	{{Grundy Coloring \& Friends, Half-Graphs, Bicliques}},
  booktitle =	{37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)},
  pages =	{58:1--58:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-140-5},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{154},
  editor =	{Paul, Christophe and Bl\"{a}ser, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2020.58},
  URN =		{urn:nbn:de:0030-drops-119190},
  doi =		{10.4230/LIPIcs.STACS.2020.58},
  annote =	{Keywords: Grundy coloring, parameterized complexity, ETH lower bounds, K\underline\{t,t\}-free graphs, half-graphs}
}
Document
Token Sliding on Split Graphs

Authors: Rémy Belmonte, Eun Jung Kim, Michael Lampis, Valia Mitsou, Yota Otachi, and Florian Sikora

Published in: LIPIcs, Volume 126, 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)


Abstract
We consider the complexity of the Independent Set Reconfiguration problem under the Token Sliding rule. In this problem, we are given two independent sets of a graph and are asked if we can transform one to the other by repeatedly exchanging a vertex that is currently in the set with one of its neighbors, while maintaining the set independent. Our main result is to show that this problem is PSPACE-complete on split graphs (and hence also on chordal graphs), thus resolving an open problem in this area. We then go on to consider the c-Colorable Reconfiguration problem under the same rule, where the constraint is now to maintain the set c-colorable at all times. As one may expect, a simple modification of our reduction shows that this more general problem is PSPACE-complete for all fixed c ⩾ 1 on chordal graphs. Somewhat surprisingly, we show that the same cannot be said for split graphs: we give a polynomial-time (n^{O(c)}) algorithm for all fixed values of c, except c = 1, for which the problem is PSPACE-complete. We complement our algorithm with a lower bound showing that c-Colorable Reconfiguration is W[2]-hard on split graphs parameterized by c and the length of the solution, as well as a tight ETH-based lower bound for both parameters.
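For intuition, a brute-force breadth-first search over configurations implements the Token Sliding rule directly. This Python sketch is for tiny instances only (the state space is exponential) and reflects none of the paper's reductions; the adjacency-dictionary representation is an assumption for illustration.

from collections import deque

def token_sliding_reachable(adj, start, target):
    # One move: slide a token from v to an adjacent empty vertex w,
    # keeping the whole token set independent.
    def independent(s):
        return all(w not in adj[v] for v in s for w in s if v != w)
    start, target = frozenset(start), frozenset(target)
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        for v in s:
            for w in adj[v]:
                if w in s:
                    continue
                t = (s - {v}) | {w}
                if independent(t) and t not in seen:
                    seen.add(t)
                    queue.append(t)
    return False

# Example: on the path a-b-c-d, the set {a, c} reaches {b, d} by
# sliding c to d and then a to b.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(token_sliding_reachable(adj, {"a", "c"}, {"b", "d"}))  # True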

Cite as

Rémy Belmonte, Eun Jung Kim, Michael Lampis, Valia Mitsou, Yota Otachi, and Florian Sikora. Token Sliding on Split Graphs. In 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 126, pp. 13:1-13:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{belmonte_et_al:LIPIcs.STACS.2019.13,
  author =	{Belmonte, R\'{e}my and Kim, Eun Jung and Lampis, Michael and Mitsou, Valia and Otachi, Yota and Sikora, Florian},
  title =	{{Token Sliding on Split Graphs}},
  booktitle =	{36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)},
  pages =	{13:1--13:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-100-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{126},
  editor =	{Niedermeier, Rolf and Paul, Christophe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2019.13},
  URN =		{urn:nbn:de:0030-drops-102529},
  doi =		{10.4230/LIPIcs.STACS.2019.13},
  annote =	{Keywords: reconfiguration, independent set, split graph}
}
Document
Data-Compression for Parametrized Counting Problems on Sparse Graphs

Authors: Eun Jung Kim, Maria Serna, and Dimitrios M. Thilikos

Published in: LIPIcs, Volume 123, 29th International Symposium on Algorithms and Computation (ISAAC 2018)


Abstract
We study the concept of a compactor, which may be seen as a counting analogue of kernelization in counting parameterized complexity. For a function F: Σ* → ℕ and a parameterization κ: Σ* → ℕ, a compactor (P,M) consists of a polynomial-time computable function P, called the condenser, and a computable function M, called the extractor, such that F = M∘P, and the condensing P(x) of x has length at most s(κ(x)), for any input x ∈ Σ*. If s is a polynomial function, then the compactor is said to be of polynomial size. Although the study of counting analogues of kernelization is not unprecedented, it has received little attention so far. We study a family of vertex-certified counting problems on graphs that are MSOL-expressible; that is, for an MSOL-formula φ with one free set variable to be interpreted as a vertex subset, we want to count all A ⊆ V(G) where |A| = k and (G,A) ⊨ φ. In this paper, we prove that every vertex-certified counting problem on graphs that is MSOL-expressible and treewidth modulable, when parameterized by k, admits a polynomial-size compactor on H-topological-minor-free graphs with condensing time O(k²n²) and decoding time 2^{O(k)}. This implies the existence of an FPT-algorithm of running time O(n²k²) + 2^{O(k)}. All aforementioned complexities are under the Uniform Cost Measure (UCM) model, where numbers can be stored in constant space and arithmetic operations can be done in constant time.
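In symbols, the compactor condition just described reads as follows (a LaTeX restatement of the definition above, nothing beyond the abstract):

\[
F = M \circ P, \qquad |P(x)| \le s(\kappa(x)) \quad \text{for every input } x \in \Sigma^{*},
\]

where P (the condenser) is polynomial-time computable and M (the extractor) is computable; the compactor is of polynomial size when s is a polynomial.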

Cite as

Eun Jung Kim, Maria Serna, and Dimitrios M. Thilikos. Data-Compression for Parametrized Counting Problems on Sparse Graphs. In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 123, pp. 20:1-20:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.ISAAC.2018.20,
  author =	{Kim, Eun Jung and Serna, Maria and Thilikos, Dimitrios M.},
  title =	{{Data-Compression for Parametrized Counting Problems on Sparse Graphs}},
  booktitle =	{29th International Symposium on Algorithms and Computation (ISAAC 2018)},
  pages =	{20:1--20:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-094-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{123},
  editor =	{Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2018.20},
  URN =		{urn:nbn:de:0030-drops-99688},
  doi =		{10.4230/LIPIcs.ISAAC.2018.20},
  annote =	{Keywords: Parameterized counting, compactor, protrusion decomposition}
}
Document
New Results on Directed Edge Dominating Set

Authors: Rémy Belmonte, Tesshu Hanaka, Ioannis Katsikarelis, Eun Jung Kim, and Michael Lampis

Published in: LIPIcs, Volume 117, 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)


Abstract
We study a family of generalizations of Edge Dominating Set on directed graphs called Directed (p,q)-Edge Dominating Set. In this problem, an arc (u,v) is said to dominate itself, as well as all arcs which are at distance at most q from v, or at distance at most p to u. First, we give significantly improved FPT algorithms for the two most important cases of the problem, (0,1)-dEDS and (1,1)-dEDS (which correspond to versions of Dominating Set on line graphs), as well as polynomial kernels. We also improve the best-known approximation for these cases from logarithmic to constant. In addition, we show that (p,q)-dEDS is FPT parameterized by p+q+tw, but W-hard parameterized just by tw, where tw is the treewidth of the underlying graph of the input. We then go on to focus on the complexity of the problem on tournaments. Here, we provide a complete classification for every possible fixed value of p,q, which shows that the problem exhibits a surprising behavior, including cases which are in P; cases which are solvable in quasi-polynomial time but not in P; and a single case (p=q=1) which is NP-hard (under randomized reductions) and cannot be solved in sub-exponential time, under standard assumptions.

Cite as

Rémy Belmonte, Tesshu Hanaka, Ioannis Katsikarelis, Eun Jung Kim, and Michael Lampis. New Results on Directed Edge Dominating Set. In 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 117, pp. 67:1-67:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{belmonte_et_al:LIPIcs.MFCS.2018.67,
  author =	{Belmonte, R\'{e}my and Hanaka, Tesshu and Katsikarelis, Ioannis and Kim, Eun Jung and Lampis, Michael},
  title =	{{New Results on Directed Edge Dominating Set}},
  booktitle =	{43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)},
  pages =	{67:1--67:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-086-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{117},
  editor =	{Potapov, Igor and Spirakis, Paul and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2018.67},
  URN =		{urn:nbn:de:0030-drops-96490},
  doi =		{10.4230/LIPIcs.MFCS.2018.67},
  annote =	{Keywords: Edge Dominating Set, Tournaments, Treewidth}
}
Document
Finding Branch-Decompositions of Matroids, Hypergraphs, and More

Authors: Jisu Jeong, Eun Jung Kim, and Sang-il Oum

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
Given n subspaces of a finite-dimensional vector space over a fixed finite field F, we wish to find a "branch-decomposition" of these subspaces of width at most k, that is, a subcubic tree T with n leaves mapped bijectively to the subspaces such that for every edge e of T, the sum of subspaces associated to the leaves in one component of T-e and the sum of subspaces associated to the leaves in the other component have an intersection of dimension at most k. This problem includes the problems of computing branch-width of F-represented matroids, rank-width of graphs, branch-width of hypergraphs, and carving-width of graphs. We present a fixed-parameter algorithm to construct such a branch-decomposition of width at most k, if it exists, for input subspaces of a finite-dimensional vector space over F. Our algorithm is analogous to the algorithm of Bodlaender and Kloks (1996) on tree-width of graphs. To extend their framework to branch-decompositions of vector spaces, we developed highly generic tools for branch-decompositions on vector spaces. For this problem, a fixed-parameter algorithm was known due to Hliněný and Oum (2008), but their method is highly indirect. Their algorithm uses the non-trivial fact by Geelen et al. (2003) that the number of forbidden minors is finite and uses the algorithm of Hliněný (2006) on checking monadic second-order formulas on F-represented matroids of small branch-width. Our result does not depend on such a fact and is completely self-contained, and yet matches their asymptotic running time for each fixed k.
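Concretely, writing V_1, ..., V_n for the input subspaces mapped to the leaves of T, the removal of an edge e splits the leaves into two parts A_e and B_e, and the width condition above reads (a LaTeX restatement of the definition, with A_e and B_e being notation introduced here):

\[
\dim\Bigl(\sum_{i \in A_e} V_i \;\cap\; \sum_{j \in B_e} V_j\Bigr) \le k \quad \text{for every edge } e \text{ of } T.
\]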

Cite as

Jisu Jeong, Eun Jung Kim, and Sang-il Oum. Finding Branch-Decompositions of Matroids, Hypergraphs, and More. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 80:1-80:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{jeong_et_al:LIPIcs.ICALP.2018.80,
  author =	{Jeong, Jisu and Kim, Eun Jung and Oum, Sang-il},
  title =	{{Finding Branch-Decompositions of Matroids, Hypergraphs, and More}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{80:1--80:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.80},
  URN =		{urn:nbn:de:0030-drops-90849},
  doi =		{10.4230/LIPIcs.ICALP.2018.80},
  annote =	{Keywords: branch-width, rank-width, carving-width, fixed-parameter tractability}
}
Document
QPTAS and Subexponential Algorithm for Maximum Clique on Disk Graphs

Authors: Édouard Bonnet, Panos Giannopoulos, Eun Jung Kim, Pawel Rzazewski, and Florian Sikora

Published in: LIPIcs, Volume 99, 34th International Symposium on Computational Geometry (SoCG 2018)


Abstract
A (unit) disk graph is the intersection graph of closed (unit) disks in the plane. Almost three decades ago, an elegant polynomial-time algorithm was found for Maximum Clique on unit disk graphs [Clark, Colbourn, Johnson; Discrete Mathematics '90]. Since then, it has been an intriguing open question whether or not tractability can be extended to general disk graphs. We show the rather surprising structural result that a disjoint union of cycles is the complement of a disk graph if and only if at most one of those cycles is of odd length. From that, we derive the first QPTAS and subexponential algorithm running in time 2^{Õ(n^{2/3})} for Maximum Clique on disk graphs. In stark contrast, Maximum Clique on intersection graphs of filled ellipses or filled triangles is unlikely to have such algorithms, even when the ellipses are close to unit disks. Indeed, we show that there is a constant ratio of approximation which cannot be attained even in time 2^{n^{1-ε}}, unless the Exponential Time Hypothesis fails.

Cite as

Édouard Bonnet, Panos Giannopoulos, Eun Jung Kim, Pawel Rzazewski, and Florian Sikora. QPTAS and Subexponential Algorithm for Maximum Clique on Disk Graphs. In 34th International Symposium on Computational Geometry (SoCG 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 99, pp. 12:1-12:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{bonnet_et_al:LIPIcs.SoCG.2018.12,
  author =	{Bonnet, \'{E}douard and Giannopoulos, Panos and Kim, Eun Jung and Rzazewski, Pawel and Sikora, Florian},
  title =	{{QPTAS and Subexponential Algorithm for Maximum Clique on Disk Graphs}},
  booktitle =	{34th International Symposium on Computational Geometry (SoCG 2018)},
  pages =	{12:1--12:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-066-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{99},
  editor =	{Speckmann, Bettina and T\'{o}th, Csaba D.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2018.12},
  URN =		{urn:nbn:de:0030-drops-87259},
  doi =		{10.4230/LIPIcs.SoCG.2018.12},
  annote =	{Keywords: disk graph, maximum clique, computational complexity}
}
Document
Parameterized Algorithms for Min-Max Multiway Cut and List Digraph Homomorphism

Authors: Eun Jung Kim, Christophe Paul, Ignasi Sau, and Dimitrios M. Thilikos

Published in: LIPIcs, Volume 43, 10th International Symposium on Parameterized and Exact Computation (IPEC 2015)


Abstract
In this paper we design FPT-algorithms for two parameterized problems. The first is List Digraph Homomorphism: given two digraphs G and H and a list of allowed vertices of H for every vertex of G, the question is whether there exists a homomorphism from G to H respecting the list constraints. The second problem is a variant of Multiway Cut, namely Min-Max Multiway Cut: given a graph G, a non-negative integer l, and a set T of r terminals, the question is whether we can partition the vertices of G into r parts such that (a) each part contains one terminal and (b) there are at most l edges with only one endpoint in this part. We parameterize List Digraph Homomorphism by the number w of edges of G that are mapped to non-loop edges of H, and we give an algorithm of running time 2^{O(l * log(h) + l^{2} * log(l))} * n^{4} * log(n), where h is the order of the host graph H. We also prove that Min-Max Multiway Cut can be solved in time 2^{O((l * r)^2 * log(l * r))} * n^{4} * log(n). Our approach introduces a general problem, called List Allocation, whose expressive power permits the design of parameterized reductions of both aforementioned problems to it. Then our results are based on an FPT-algorithm for the List Allocation problem that is designed using a suitable adaptation of the randomized contractions technique (introduced by [Chitnis, Cygan, Hajiaghayi, Pilipczuk, and Pilipczuk, FOCS 2012]).

Cite as

Eun Jung Kim, Christophe Paul, Ignasi Sau, and Dimitrios M. Thilikos. Parameterized Algorithms for Min-Max Multiway Cut and List Digraph Homomorphism. In 10th International Symposium on Parameterized and Exact Computation (IPEC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 43, pp. 78-89, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.IPEC.2015.78,
  author =	{Kim, Eun Jung and Paul, Christophe and Sau, Ignasi and Thilikos, Dimitrios M.},
  title =	{{Parameterized Algorithms for Min-Max Multiway Cut and List Digraph Homomorphism}},
  booktitle =	{10th International Symposium on Parameterized and Exact Computation (IPEC 2015)},
  pages =	{78--89},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-92-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{43},
  editor =	{Husfeldt, Thore and Kanj, Iyad},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2015.78},
  URN =		{urn:nbn:de:0030-drops-55738},
  doi =		{10.4230/LIPIcs.IPEC.2015.78},
  annote =	{Keywords: Parameterized complexity, Fixed-Parameter Tractable algorithm, Multiway Cut, Digraph homomorphism}
}
Document
An FPT Algorithm and a Polynomial Kernel for Linear Rankwidth-1 Vertex Deletion

Authors: Mamadou Moustapha Kanté, Eun Jung Kim, O-joung Kwon, and Christophe Paul

Published in: LIPIcs, Volume 43, 10th International Symposium on Parameterized and Exact Computation (IPEC 2015)


Abstract
Linear rankwidth is a linearized variant of rankwidth, introduced by Oum and Seymour [Approximating clique-width and branch-width. J. Combin. Theory Ser. B, 96(4):514-528, 2006.], and it is similar to pathwidth, which is the linearized variant of treewidth. Motivated by the results on graph modification problems into graphs of bounded treewidth or pathwidth, we investigate a graph modification problem into the class of graphs having linear rankwidth at most one, called Linear Rankwidth-1 Vertex Deletion (LRW1-Vertex Deletion for short). In this problem, given an n-vertex graph G and a positive integer k, we want to decide whether there is a set of at most k vertices whose removal turns G into a graph of linear rankwidth at most one and, if one exists, find such a vertex set. While the meta-theorem of Courcelle, Makowsky, and Rotics implies that LRW1-Vertex Deletion can be solved in time f(k) * n^3 for some function f, it is not clear whether this problem allows a runtime with a modest exponential function. We establish that LRW1-Vertex Deletion can be solved in time 8^k * n^{O(1)}. The major obstacle to this end is how to handle a long induced cycle as an obstruction. To fix this issue, we define the necklace graphs and investigate their structural properties. We also show that LRW1-Vertex Deletion admits a polynomial kernel.

Cite as

Mamadou Moustapha Kanté, Eun Jung Kim, O-joung Kwon, and Christophe Paul. An FPT Algorithm and a Polynomial Kernel for Linear Rankwidth-1 Vertex Deletion. In 10th International Symposium on Parameterized and Exact Computation (IPEC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 43, pp. 138-150, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{kante_et_al:LIPIcs.IPEC.2015.138,
  author =	{Kant\'{e}, Mamadou Moustapha and Kim, Eun Jung and Kwon, O-joung and Paul, Christophe},
  title =	{{An FPT Algorithm and a Polynomial Kernel for Linear Rankwidth-1 Vertex Deletion}},
  booktitle =	{10th International Symposium on Parameterized and Exact Computation (IPEC 2015)},
  pages =	{138--150},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-92-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{43},
  editor =	{Husfeldt, Thore and Kanj, Iyad},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2015.138},
  URN =		{urn:nbn:de:0030-drops-55788},
  doi =		{10.4230/LIPIcs.IPEC.2015.138},
  annote =	{Keywords: (linear) rankwidth, distance-hereditary graphs, thread graphs, parameterized complexity, kernelization}
}
Document
A Polynomial Kernel for Block Graph Deletion

Authors: Eun Jung Kim and O-joung Kwon

Published in: LIPIcs, Volume 43, 10th International Symposium on Parameterized and Exact Computation (IPEC 2015)


Abstract
In the Block Graph Deletion problem, we are given a graph G on n vertices and a positive integer k, and the objective is to check whether it is possible to delete at most k vertices from G to make it a block graph, i.e., a graph in which each block is a clique. In this paper, we obtain a kernel with O(k^{6}) vertices for the Block Graph Deletion problem. This is a first step toward investigating polynomial kernels for deletion problems into non-trivial classes of graphs of bounded rank-width but unbounded tree-width. Our result also implies that Chordal Vertex Deletion admits a polynomial-size kernel on diamond-free graphs. For the kernelization and its analysis, we introduce the notion of the 'complete degree' of a vertex. We believe that the underlying idea can be potentially applied to other problems. We also prove that the Block Graph Deletion problem can be solved in time 10^{k} * n^{O(1)}.
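The target class itself is easy to test: a graph is a block graph exactly when every block (biconnected component) induces a clique. A minimal Python sketch of this check follows; the use of networkx is an assumption for illustration and plays no role in the paper's kernelization.

import networkx as nx
from itertools import combinations

def is_block_graph(G):
    # A block graph is a graph in which every biconnected component
    # (block) is a clique.
    for block in nx.biconnected_components(G):
        if any(not G.has_edge(u, v) for u, v in combinations(block, 2)):
            return False
    return True

# Example: two triangles sharing a vertex form a block graph, while a
# 4-cycle does not (its single block is not a clique).
G = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5)])
print(is_block_graph(G), is_block_graph(nx.cycle_graph(4)))  # True False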

Cite as

Eun Jung Kim and O-joung Kwon. A Polynomial Kernel for Block Graph Deletion. In 10th International Symposium on Parameterized and Exact Computation (IPEC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 43, pp. 270-281, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.IPEC.2015.270,
  author =	{Kim, Eun Jung and Kwon, O-joung},
  title =	{{A Polynomial Kernel for Block Graph Deletion}},
  booktitle =	{10th International Symposium on Parameterized and Exact Computation (IPEC 2015)},
  pages =	{270--281},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-92-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{43},
  editor =	{Husfeldt, Thore and Kanj, Iyad},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2015.270},
  URN =		{urn:nbn:de:0030-drops-55893},
  doi =		{10.4230/LIPIcs.IPEC.2015.270},
  annote =	{Keywords: block graph, polynomial kernel, single-exponential FPT algorithm}
}
Document
Complexity and Approximability of Parameterized MAX-CSPs

Authors: Holger Dell, Eun Jung Kim, Michael Lampis, Valia Mitsou, and Tobias Mömke

Published in: LIPIcs, Volume 43, 10th International Symposium on Parameterized and Exact Computation (IPEC 2015)


Abstract
We study the optimization version of constraint satisfaction problems (Max-CSPs) in the framework of parameterized complexity; the goal is to compute the maximum fraction of constraints that can be satisfied simultaneously. In standard CSPs, we want to decide whether this fraction equals one. The parameters we investigate are structural measures, such as the treewidth or the clique-width of the variable–constraint incidence graph of the CSP instance. We consider Max-CSPs with the constraint types AND, OR, PARITY, and MAJORITY, and with various parameters k. We attempt to fully classify them into the following three cases: 1. The exact optimum can be computed in FPT-time. 2. It is W[1]-hard to compute the exact optimum, but there is a randomized FPT approximation scheme (FPT-AS), which computes a (1-ε)-approximation in time f(k,ε) * poly(n). 3. There is no FPT-AS unless FPT = W[1]. For the corresponding standard CSPs, we establish FPT vs. W[1]-hardness results.

Cite as

Holger Dell, Eun Jung Kim, Michael Lampis, Valia Mitsou, and Tobias Mömke. Complexity and Approximability of Parameterized MAX-CSPs. In 10th International Symposium on Parameterized and Exact Computation (IPEC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 43, pp. 294-306, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{dell_et_al:LIPIcs.IPEC.2015.294,
  author =	{Dell, Holger and Kim, Eun Jung and Lampis, Michael and Mitsou, Valia and M\"{o}mke, Tobias},
  title =	{{Complexity and Approximability of Parameterized MAX-CSPs}},
  booktitle =	{10th International Symposium on Parameterized and Exact Computation (IPEC 2015)},
  pages =	{294--306},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-92-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{43},
  editor =	{Husfeldt, Thore and Kanj, Iyad},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2015.294},
  URN =		{urn:nbn:de:0030-drops-55910},
  doi =		{10.4230/LIPIcs.IPEC.2015.294},
  annote =	{Keywords: Approximation, Structural Parameters, Constraint Satisfaction}
}

Kim, Jongkyu

Document
Vaquita: Fast and Accurate Identification of Structural Variation Using Combined Evidence

Authors: Jongkyu Kim and Knut Reinert

Published in: LIPIcs, Volume 88, 17th International Workshop on Algorithms in Bioinformatics (WABI 2017)


Abstract
Motivation: Comprehensive identification of structural variations (SVs) is a crucial task for studying genetic diversity and diseases. However, it remains challenging. There is only a marginal consensus between different methods, and our understanding of SVs is substantially limited. In general, integration of multiple types of evidence, including split-read, read-pair, soft-clip, and read-depth, yields the best results in terms of accuracy. However, doing this step by step is usually cumbersome and computationally expensive. Result: We present Vaquita, an accurate and fast tool for the identification of structural variations, which leverages all four types of evidence in a single program. After merging SVs from split-reads and discordant read-pairs, Vaquita realigns the soft-clipped reads to the selected regions using a fast bit-vector algorithm. Furthermore, it also considers the discrepancy of the depth distribution around breakpoints using Kullback-Leibler divergence. Finally, Vaquita provides an additional metric for candidate selection based on voting, as well as robust prioritization based on rank aggregation. We show that Vaquita is robust in terms of sequencing coverage, insert size of the library, and read length, and is comparable to, or even better than, state-of-the-art tools for the identification of deletions, inversions, duplications, and translocations, using both simulated and real datasets. In addition, Vaquita is more than eight times faster than any other tool in the comparison. Availability: Vaquita is implemented in C++ using the SeqAn library. The source code is distributed under the BSD license and can be downloaded at http://github.com/seqan/vaquita
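The Kullback-Leibler step can be illustrated by a generic divergence computation over binned read-depth histograms; this Python sketch, including the eps-smoothing and the example values, is an assumption made for illustration and does not reproduce Vaquita's actual implementation.

import math

def kl_divergence(p, q, eps=1e-9):
    # D(P || Q) = sum_i p_i * log(p_i / q_i), after normalizing both
    # binned histograms to probability distributions; eps-smoothing of
    # q avoids division by zero.
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [(x + eps) / (sq + eps * len(q)) for x in q]
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Example: depth histograms on the two sides of a putative breakpoint;
# a large divergence supports a genuine change in copy number.
left = [30, 32, 29, 31]
right = [30, 31, 15, 14]
print(kl_divergence(left, right))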

Cite as

Jongkyu Kim and Knut Reinert. Vaquita: Fast and Accurate Identification of Structural Variation Using Combined Evidence. In 17th International Workshop on Algorithms in Bioinformatics (WABI 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 88, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.WABI.2017.13,
  author =	{Kim, Jongkyu and Reinert, Knut},
  title =	{{Vaquita: Fast and Accurate Identification of Structural Variation Using Combined Evidence}},
  booktitle =	{17th International Workshop on Algorithms in Bioinformatics (WABI 2017)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-050-7},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{88},
  editor =	{Schwartz, Russell and Reinert, Knut},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2017.13},
  URN =		{urn:nbn:de:0030-drops-76352},
  doi =		{10.4230/LIPIcs.WABI.2017.13},
  annote =	{Keywords: Structural variation}
}

Kim, Seongyong

Document
Short Paper
Application of Style Transfer in the Vectorization Process of Floorplans (Short Paper)

Authors: Seongyong Kim, Seula Park, and Kiyun Yu

Published in: LIPIcs, Volume 114, 10th International Conference on Geographic Information Science (GIScience 2018)


Abstract
As the market for indoor spatial information burgeons, the construction of indoor spatial databases consequently gains attention. Since floorplans are portable records of buildings, they are an indispensable source for the efficient construction of indoor environments. However, as previous research on floorplan information retrieval usually targeted specific formats, a system for constructing spatial information must include heuristic refinement steps. This study aims to convert diverse floorplans into an integrated format using style transfer with deep networks. Our deep networks mimic the robust perception of humans, who recognize the cell structure of floorplans across various formats. The integrated format ensures that only a unified set of post-processing steps is required for the vectorization of floorplans. Through this process, indoor spatial information is constructed in a pragmatic way, using a plethora of architectural floorplans.

Cite as

Seongyong Kim, Seula Park, and Kiyun Yu. Application of Style Transfer in the Vectorization Process of Floorplans (Short Paper). In 10th International Conference on Geographic Information Science (GIScience 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 114, pp. 39:1-39:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.GISCIENCE.2018.39,
  author =	{Kim, Seongyong and Park, Seula and Yu, Kiyun},
  title =	{{Application of Style Transfer in the Vectorization Process of Floorplans}},
  booktitle =	{10th International Conference on Geographic Information Science (GIScience 2018)},
  pages =	{39:1--39:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-083-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{114},
  editor =	{Winter, Stephan and Griffin, Amy and Sester, Monika},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GISCIENCE.2018.39},
  URN =		{urn:nbn:de:0030-drops-93672},
  doi =		{10.4230/LIPIcs.GISCIENCE.2018.39},
  annote =	{Keywords: Floorplan, Vectorising, Style Transfer, Generative Adversarial Networks}
}

Kim, Jiyoung

Document
Short Paper
Geotagging Location Information Extracted from Unstructured Data (Short Paper)

Authors: Kyunghyun Min, Jungseok Lee, Kiyun Yu, and Jiyoung Kim

Published in: LIPIcs, Volume 114, 10th International Conference on Geographic Information Science (GIScience 2018)


Abstract
Location information is an essential element of location-based services and is used in various ways. Unstructured data contain different types of location information, but coordinate values are required to determine the exact location. On Twitter, a typical social network service (SNS) platform of unstructured data, the number of geotagged tweets is low. If we can estimate the location of a text by geotagging a large amount of unstructured data, we can estimate the location of an event in real time. This study is a baseline study on extracting location information using the named entity recognizer provided by the Exobrain API and applying geotagging to unstructured data in Hangul (Korean). We used Chosun news articles, which are grammatically correct and well organized, instead of tweets, to extract three location-related categories, namely "location," "organization," and "artifact". We used the named entity recognizer and geotagged each sentence using combinations of the fields in each category. The results of the study showed that 61% of the 800 test sentences did not have location-related information, thus hindering geotagging. In 11.75% of the test sentences, geotagging was possible with only the given location information extracted using the named entity recognizer. The remaining 27.25% of the sentences contained information on more than two locations from the same subcategories and hence required location estimation from candidate locations. In future research, we plan to apply the results of this study to develop a location estimation algorithm that makes use of the location-related entities extracted from purely unstructured data such as that on SNSs.

Cite as

Kyunghyun Min, Jungseok Lee, Kiyun Yu, and Jiyoung Kim. Geotagging Location Information Extracted from Unstructured Data (Short Paper). In 10th International Conference on Geographic Information Science (GIScience 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 114, pp. 49:1-49:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{min_et_al:LIPIcs.GISCIENCE.2018.49,
  author =	{Min, Kyunghyun and Lee, Jungseok and Yu, Kiyun and Kim, Jiyoung},
  title =	{{Geotagging Location Information Extracted from Unstructured Data}},
  booktitle =	{10th International Conference on Geographic Information Science (GIScience 2018)},
  pages =	{49:1--49:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-083-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{114},
  editor =	{Winter, Stephan and Griffin, Amy and Sester, Monika},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GISCIENCE.2018.49},
  URN =		{urn:nbn:de:0030-drops-93778},
  doi =		{10.4230/LIPIcs.GISCIENCE.2018.49},
  annote =	{Keywords: Location Estimation, Information Extraction, Geo-Tagging, Location Information, Unstructured Data}
}

Kim, Kangsan

Document
Constant-Factor Approximation Algorithms for the Parity-Constrained Facility Location Problem

Authors: Kangsan Kim, Yongho Shin, and Hyung-Chan An

Published in: LIPIcs, Volume 181, 31st International Symposium on Algorithms and Computation (ISAAC 2020)


Abstract
Facility location is a prominent optimization problem that has inspired a large number of both theoretical and practical studies in combinatorial optimization. Although the problem has been investigated under various settings reflecting typical structures within optimization problems of practical interest, little is known about how the problem behaves in conjunction with parity constraints. This shortfall in understanding is rather discouraging when we consider the central role of parity in the field of combinatorics. In this paper, we present the first constant-factor approximation algorithm for the facility location problem with parity constraints. In this problem, we are given as input a metric on a set of facilities and clients, the opening cost of each facility, and the parity requirement - odd, even, or unconstrained - of every facility. The objective is to open a subset of facilities and assign every client to an open facility so as to minimize the sum of the total opening costs and the assignment distances, subject to the condition that the number of clients assigned to each open facility must have the same parity as its requirement. Although the unconstrained facility location problem as a relaxation of this parity-constrained generalization has an unbounded gap, we demonstrate that it yields a structured solution whose parity violation can be corrected at small cost. This correction is prescribed by a T-join on an auxiliary graph constructed by the algorithm. This auxiliary graph does not satisfy the triangle inequality, but we show that a carefully chosen set of shortcutting operations leads to a cheap and sparse T-join. Finally, we bound the correction cost by exhibiting a combinatorial multi-step construction of an upper bound.
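
For concreteness, the objective described above can be written as follows (standard facility-location notation of ours, not taken from the paper): letting y_i indicate whether facility i is opened and σ assign each client to an open facility,

\min \;\; \sum_{i \in F} f_i\, y_i \;+\; \sum_{j \in C} d\bigl(j, \sigma(j)\bigr)
\qquad \text{subject to} \qquad \bigl|\sigma^{-1}(i)\bigr| \equiv r_i \pmod 2 \;\; \text{whenever } y_i = 1,

where f_i is the opening cost of facility i and r_i ∈ {0, 1} is its parity requirement; facilities with an unconstrained requirement impose no congruence.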

Cite as

Kangsan Kim, Yongho Shin, and Hyung-Chan An. Constant-Factor Approximation Algorithms for the Parity-Constrained Facility Location Problem. In 31st International Symposium on Algorithms and Computation (ISAAC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 181, pp. 21:1-21:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.ISAAC.2020.21,
  author =	{Kim, Kangsan and Shin, Yongho and An, Hyung-Chan},
  title =	{{Constant-Factor Approximation Algorithms for the Parity-Constrained Facility Location Problem}},
  booktitle =	{31st International Symposium on Algorithms and Computation (ISAAC 2020)},
  pages =	{21:1--21:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-173-3},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{181},
  editor =	{Cao, Yixin and Cheng, Siu-Wing and Li, Minming},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2020.21},
  URN =		{urn:nbn:de:0030-drops-133652},
  doi =		{10.4230/LIPIcs.ISAAC.2020.21},
  annote =	{Keywords: Facility location problems, approximation algorithms, clustering problems, parity constraints}
}

Kim, Sung-Hwan

Document
Computing the LCP Array of a Labeled Graph

Authors: Jarno N. Alanko, Davide Cenzato, Nicola Cotumaccio, Sung-Hwan Kim, Giovanni Manzini, and Nicola Prezza

Published in: LIPIcs, Volume 296, 35th Annual Symposium on Combinatorial Pattern Matching (CPM 2024)


Abstract
The LCP array is an important tool in stringology, allowing one to speed up pattern matching algorithms and enabling compact representations of the suffix tree. Recently, Conte et al. [DCC 2023] and Cotumaccio et al. [SPIRE 2023] extended the definition of this array to Wheeler DFAs and, ultimately, to arbitrary labeled graphs, proving that it can be used to efficiently solve matching statistics queries on the graph’s paths. In this paper, we provide the first efficient algorithm building the LCP array of a directed labeled graph with n nodes and m edges labeled over an alphabet of size σ. The first step is to transform the input graph G into a deterministic Wheeler pseudoforest G_{is} with O(n) edges encoding the lexicographically smallest and largest strings entering each node of the original graph. Using state-of-the-art algorithms, this step runs in O(min{m log n, m + n²}) time on arbitrary labeled graphs, and in O(m) time on Wheeler DFAs. The LCP array of G stores the longest common prefixes between those strings, i.e., it can easily be derived from the LCP array of G_{is}. After arguing that the natural generalization of a compact-space LCP-construction algorithm by Beller et al. [J. Discrete Algorithms 2013] runs in time Ω(nσ) on pseudoforests, we present a new algorithm based on dynamic range stabbing building the LCP array of G_{is} in O(n log σ) time and O(n log σ) bits of working space. Combined with our reduction, we obtain the first efficient algorithm to build the LCP array of an arbitrary labeled graph. An implementation of our algorithm is publicly available at https://github.com/regindex/Labeled-Graph-LCP.
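
For background on the string case that this work generalizes: the LCP array of a single string can be computed from its suffix array in linear time by Kasai's algorithm. A minimal Python sketch of that classical routine (not the paper's graph algorithm) follows:

def lcp_array(text, sa):
    # Kasai's algorithm: lcp[k] = length of the longest common prefix
    # of the suffixes ranked k-1 and k in suffix-array order; O(n) total.
    n = len(text)
    rank = [0] * n
    for r, s in enumerate(sa):
        rank[s] = r
    lcp = [0] * n
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and text[i + h] == text[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h > 0:
                h -= 1
        else:
            h = 0
    return lcp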

Cite as

Jarno N. Alanko, Davide Cenzato, Nicola Cotumaccio, Sung-Hwan Kim, Giovanni Manzini, and Nicola Prezza. Computing the LCP Array of a Labeled Graph. In 35th Annual Symposium on Combinatorial Pattern Matching (CPM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 296, pp. 1:1-1:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{alanko_et_al:LIPIcs.CPM.2024.1,
  author =	{Alanko, Jarno N. and Cenzato, Davide and Cotumaccio, Nicola and Kim, Sung-Hwan and Manzini, Giovanni and Prezza, Nicola},
  title =	{{Computing the LCP Array of a Labeled Graph}},
  booktitle =	{35th Annual Symposium on Combinatorial Pattern Matching (CPM 2024)},
  pages =	{1:1--1:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-326-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{296},
  editor =	{Inenaga, Shunsuke and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2024.1},
  URN =		{urn:nbn:de:0030-drops-201113},
  doi =		{10.4230/LIPIcs.CPM.2024.1},
  annote =	{Keywords: LCP array, Wheeler automata, prefix sorting, pattern matching, sorting}
}
Document
Random Wheeler Automata

Authors: Ruben Becker, Davide Cenzato, Sung-Hwan Kim, Bojana Kodric, Riccardo Maso, and Nicola Prezza

Published in: LIPIcs, Volume 296, 35th Annual Symposium on Combinatorial Pattern Matching (CPM 2024)


Abstract
Wheeler automata were introduced in 2017 as a tool to generalize existing indexing and compression techniques based on the Burrows-Wheeler transform. Intuitively, an automaton is said to be Wheeler if there exists a total order on its states reflecting the natural co-lexicographic order of the strings labeling the automaton’s paths; this property makes it possible to represent the automaton’s topology in a constant number of bits per transition, as well as to efficiently solve pattern matching queries on its accepted regular language. Since their introduction, Wheeler automata have been the subject of a prolific line of research, both from the algorithmic and the language-theoretic points of view. A recurring issue faced in these studies is the lack of large datasets of Wheeler automata on which the developed algorithms and theories can be tested. One possible way to overcome this issue is to generate random Wheeler automata. Motivated by this practical observation, in this paper we initiate the theoretical study of random Wheeler automata, focusing our attention on the deterministic case (Wheeler DFAs - WDFAs). We start by naturally extending the Erdős-Rényi random graph model to WDFAs, and proceed by providing an algorithm generating uniform WDFAs according to this model. Our algorithm generates a uniform WDFA with n states, m transitions, and alphabet cardinality σ in O(m) expected time (O(m log m) time w.h.p.) and constant working space for all alphabets of size σ ≤ m/ln m. The WDFA is streamed directly to the output. As a by-product, we also give formulas for the number of distinct WDFAs and obtain that nσ + (n - σ) log σ bits are necessary and sufficient to encode a WDFA with n states and an alphabet of size σ, up to an additive Θ(n) term. We present an implementation of our algorithm and show that it is extremely fast in practice, with a throughput of over 8 million transitions per second.
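
The space bound stated above transcribes directly into a helper for sanity-checking generated instances (our helper, not part of the paper's implementation):

import math

def wdfa_encoding_bits(n, sigma):
    # Bits necessary and sufficient to encode a WDFA with n states and
    # alphabet size sigma, up to an additive Theta(n) term (see above).
    return n * sigma + (n - sigma) * math.log2(sigma)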

Cite as

Ruben Becker, Davide Cenzato, Sung-Hwan Kim, Bojana Kodric, Riccardo Maso, and Nicola Prezza. Random Wheeler Automata. In 35th Annual Symposium on Combinatorial Pattern Matching (CPM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 296, pp. 5:1-5:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


Copy BibTex To Clipboard

@InProceedings{becker_et_al:LIPIcs.CPM.2024.5,
  author =	{Becker, Ruben and Cenzato, Davide and Kim, Sung-Hwan and Kodric, Bojana and Maso, Riccardo and Prezza, Nicola},
  title =	{{Random Wheeler Automata}},
  booktitle =	{35th Annual Symposium on Combinatorial Pattern Matching (CPM 2024)},
  pages =	{5:1--5:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-326-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{296},
  editor =	{Inenaga, Shunsuke and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2024.5},
  URN =		{urn:nbn:de:0030-drops-201157},
  doi =		{10.4230/LIPIcs.CPM.2024.5},
  annote =	{Keywords: Wheeler automata, Burrows-Wheeler transform, random graphs}
}
Document
Sorting Finite Automata via Partition Refinement

Authors: Ruben Becker, Manuel Cáceres, Davide Cenzato, Sung-Hwan Kim, Bojana Kodric, Francisco Olivares, and Nicola Prezza

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
Wheeler nondeterministic finite automata (WNFAs) were introduced in (Gagie et al., TCS 2017) as a powerful generalization of prefix sorting from strings to labeled graphs. WNFAs admit optimal solutions to classic hard problems on labeled graphs and languages, such as compression and regular expression matching. The problem of deciding whether a given NFA is Wheeler is known to be NP-complete (Gibney and Thankachan, ESA 2019). Recently, however, Alanko et al. (Information and Computation 2021) showed how to side-step this complexity by switching to preorders: letting Q be the set of states and δ the set of transitions, they provided a O(|δ|⋅|Q|²)-time algorithm computing a totally-ordered partition (i.e., equivalence relation) of the WNFA’s states such that (1) equivalent states recognize the same regular language, and (2) the order of (the classes of) non-equivalent states is consistent with any Wheeler order, when one exists. As a result, the output is a preorder of the states as useful for pattern matching as standard Wheeler orders. Further extensions of this line of work (Cotumaccio et al., SODA 2021 and DCC 2022) generalized these concepts to arbitrary NFAs by introducing co-lex partial preorders: in general, any NFA admits a partial preorder of its states reflecting the co-lexicographic order of their accepted strings; the smaller the width of such a preorder, the faster regular expression matching queries can be performed. To date, the fastest algorithm for computing the smallest-width partial preorder on NFAs runs in O(|δ|² + |Q|^{5/2}) time (Cotumaccio, DCC 2022), while on DFAs the same task can be accomplished in O(min(|Q|² log |Q|, |δ|⋅|Q|)) time (Kim et al., CPM 2023). In this paper, we provide much more efficient solutions to the co-lex order computation problem. Our results are achieved by extending a classic algorithm for the relational coarsest partition refinement problem of Paige and Tarjan to work with ordered partitions. More specifically, we provide a O(|δ| log |Q|)-time algorithm computing a co-lex total preorder when the input is a Wheeler NFA, and an algorithm with the same time complexity computing the smallest-width co-lex partial order of any DFA. In addition, we present implementations of our algorithms and show that they are also very efficient in practice.
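
For background, the unordered analogue of the technique named above is classic partition refinement on DFA states; a quadratic Moore-style sketch (without the ordering machinery that is the paper's contribution, and far from its O(|δ| log |Q|) bound) follows:

def refine(states, accepting, delta, alphabet):
    # Split classes until no class contains two states whose successors
    # fall into different classes; delta[s][a] is the a-successor of s.
    part = {s: int(s in accepting) for s in states}
    while True:
        sig = {s: (part[s],) + tuple(part[delta[s][a]] for a in alphabet)
               for s in states}
        ids = {v: k for k, v in enumerate(sorted(set(sig.values())))}
        new_part = {s: ids[sig[s]] for s in states}
        if len(set(new_part.values())) == len(set(part.values())):
            return new_part  # language-equivalence classes of states
        part = new_part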

Cite as

Ruben Becker, Manuel Cáceres, Davide Cenzato, Sung-Hwan Kim, Bojana Kodric, Francisco Olivares, and Nicola Prezza. Sorting Finite Automata via Partition Refinement. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 15:1-15:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{becker_et_al:LIPIcs.ESA.2023.15,
  author =	{Becker, Ruben and C\'{a}ceres, Manuel and Cenzato, Davide and Kim, Sung-Hwan and Kodric, Bojana and Olivares, Francisco and Prezza, Nicola},
  title =	{{Sorting Finite Automata via Partition Refinement}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{15:1--15:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.15},
  URN =		{urn:nbn:de:0030-drops-186684},
  doi =		{10.4230/LIPIcs.ESA.2023.15},
  annote =	{Keywords: Wheeler automata, prefix sorting, pattern matching, graph compression, sorting, partition refinement}
}
Document
Faster Prefix-Sorting Algorithms for Deterministic Finite Automata

Authors: Sung-Hwan Kim, Francisco Olivares, and Nicola Prezza

Published in: LIPIcs, Volume 259, 34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023)


Abstract
Sorting is a fundamental algorithmic pre-processing technique which often allows one to represent data more compactly and, at the same time, to speed up search queries on it. In this paper, we focus on the well-studied problem of sorting and indexing string sets. Since the introduction of suffix trees in 1973, dozens of suffix sorting algorithms have been described in the literature. In 2017, these techniques were extended to sets of strings described by means of finite automata: the theory of Wheeler graphs [Gagie et al., TCS'17] introduced automata whose states can be totally sorted according to the co-lexicographic (co-lex in the following) order of the prefixes of words accepted by the automaton. More recently, in [Cotumaccio, Prezza, SODA'21] it was shown how to extend these ideas to arbitrary automata by means of partial co-lex orders. That work showed that a co-lex order of minimum width (thus optimizing search query times) on deterministic finite automata (DFAs) can be computed in O(m² + n^{5/2}) time, m being the number of transitions and n the number of states of the input DFA. In this paper, we exhibit new combinatorial properties of the minimum-width co-lex order of DFAs and exploit them to design faster prefix sorting algorithms. In particular, we describe two algorithms sorting arbitrary DFAs in O(mn) and O(n² log n) time, respectively, and an algorithm sorting acyclic DFAs in O(m log n) time. Within these running times, all algorithms also compute a smallest chain partition of the partial order (required to index the DFA). We present experimental results showing that an optimized implementation of the O(n² log n)-time algorithm exhibits nearly-linear behaviour on large deterministic pan-genomic graphs and is thus also of practical interest.
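
The co-lex order lifted to DFA states in this paper is, on plain strings, simply the lexicographic order of their reversals; a short Python illustration (ours, not the paper's algorithm):

def colex_sorted(strings):
    # Co-lexicographic order: compare strings from their last
    # character backwards, i.e., lexicographic order of reversals.
    return sorted(strings, key=lambda s: s[::-1])

# Example: colex_sorted(["ba", "ab", "aa"]) == ["aa", "ba", "ab"]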

Cite as

Sung-Hwan Kim, Francisco Olivares, and Nicola Prezza. Faster Prefix-Sorting Algorithms for Deterministic Finite Automata. In 34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 259, pp. 16:1-16:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.CPM.2023.16,
  author =	{Kim, Sung-Hwan and Olivares, Francisco and Prezza, Nicola},
  title =	{{Faster Prefix-Sorting Algorithms for Deterministic Finite Automata}},
  booktitle =	{34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023)},
  pages =	{16:1--16:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-276-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{259},
  editor =	{Bulteau, Laurent and Lipt\'{a}k, Zsuzsanna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2023.16},
  URN =		{urn:nbn:de:0030-drops-179707},
  doi =		{10.4230/LIPIcs.CPM.2023.16},
  annote =	{Keywords: String Matching, Deterministic Finite Automata, Graph Indexing, Co-lexicographical Sorting}
}
Document
Simple Order-Isomorphic Matching Index with Expected Compact Space

Authors: Sung-Hwan Kim and Hwan-Gue Cho

Published in: LIPIcs, Volume 248, 33rd International Symposium on Algorithms and Computation (ISAAC 2022)


Abstract
In this paper, we present a novel indexing method for the order-isomorphic pattern matching problem (also known as order-preserving pattern matching, or consecutive permutation matching), in which two equal-length strings are defined to match when X[i] < X[j] iff Y[i] < Y[j] for 0 ≤ i, j < |X|. We observe an interesting relation between order-isomorphic matching and the insertion process of a binary search tree, based on which we propose a data structure that not only has a concise structure composed of only two wavelet trees but also provides a surprisingly simple searching algorithm. In the average-case analysis, the proposed method requires 𝒪(R(T)) bits, and it is capable of answering a count query in 𝒪(R(P)) time and reporting an occurrence in 𝒪(lg |T|) time, where T and P are the text and the pattern string, respectively; for a string X, R(X) is the total time taken for the construction of the binary search tree by successively inserting the keys X[|X|-1], ⋯, X[0] at the root, and its expected value is 𝒪(|X| lg σ), where σ is the alphabet size. Furthermore, the proposed method can be viewed as a generalization of several other methods, including heuristics and restricted versions described in previous studies in the literature.
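
The matching definition above transcribes directly into a naive O(|X|²) check; the index proposed in the paper answers such queries far faster, but this sketch fixes the semantics:

def order_isomorphic(X, Y):
    # X and Y match iff X[i] < X[j] exactly when Y[i] < Y[j].
    if len(X) != len(Y):
        return False
    n = len(X)
    return all((X[i] < X[j]) == (Y[i] < Y[j])
               for i in range(n) for j in range(n))

# Example: order_isomorphic([10, 30, 20], [1, 9, 5]) is True.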

Cite as

Sung-Hwan Kim and Hwan-Gue Cho. Simple Order-Isomorphic Matching Index with Expected Compact Space. In 33rd International Symposium on Algorithms and Computation (ISAAC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 248, pp. 61:1-61:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.ISAAC.2022.61,
  author =	{Kim, Sung-Hwan and Cho, Hwan-Gue},
  title =	{{Simple Order-Isomorphic Matching Index with Expected Compact Space}},
  booktitle =	{33rd International Symposium on Algorithms and Computation (ISAAC 2022)},
  pages =	{61:1--61:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-258-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{248},
  editor =	{Bae, Sang Won and Park, Heejin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2022.61},
  URN =		{urn:nbn:de:0030-drops-173466},
  doi =		{10.4230/LIPIcs.ISAAC.2022.61},
  annote =	{Keywords: Compact Data Structure, String Matching, Order-Preserving Matching, Suffix Array, FM-index, Binary Search Tree}
}
Document
A Compact Index for Cartesian Tree Matching

Authors: Sung-Hwan Kim and Hwan-Gue Cho

Published in: LIPIcs, Volume 191, 32nd Annual Symposium on Combinatorial Pattern Matching (CPM 2021)


Abstract
Cartesian tree matching is a recently introduced string matching problem in which two strings match if their corresponding Cartesian trees are the same. It is considered appropriate for finding patterns by their shapes, especially in numerical time-series data. While many related problems have been addressed, developing a compact index has received relatively little attention. In this paper, we present a 3n + o(n)-bit index that can count the number of occurrences of a Cartesian tree pattern in 𝒪(m) time, where n and m are the text and pattern lengths, respectively. To the best of our knowledge, this work is the first 𝒪(n)-bit compact data structure for indexing this problem.
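
Two strings Cartesian-tree match exactly when the recursive tree shapes below coincide; a naive sketch of that definition (taking the leftmost minimum as the root; the compact index, not this check, is the paper's contribution):

def cartesian_shape(s):
    # Shape of the Cartesian tree: root = leftmost minimum, with the
    # left/right subsequences built recursively.
    if not s:
        return None
    i = min(range(len(s)), key=lambda k: s[k])
    return (cartesian_shape(s[:i]), cartesian_shape(s[i + 1:]))

def ct_match(X, Y):
    return cartesian_shape(X) == cartesian_shape(Y)

# Example: ct_match([3, 1, 2], [9, 4, 7]) is True.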

Cite as

Sung-Hwan Kim and Hwan-Gue Cho. A Compact Index for Cartesian Tree Matching. In 32nd Annual Symposium on Combinatorial Pattern Matching (CPM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 191, pp. 18:1-18:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.CPM.2021.18,
  author =	{Kim, Sung-Hwan and Cho, Hwan-Gue},
  title =	{{A Compact Index for Cartesian Tree Matching}},
  booktitle =	{32nd Annual Symposium on Combinatorial Pattern Matching (CPM 2021)},
  pages =	{18:1--18:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-186-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{191},
  editor =	{Gawrychowski, Pawe{\l} and Starikovskaya, Tatiana},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2021.18},
  URN =		{urn:nbn:de:0030-drops-139699},
  doi =		{10.4230/LIPIcs.CPM.2021.18},
  annote =	{Keywords: String Matching, Suffix Array, FM-index, Compact Index, Cartesian Tree Matching}
}
Document
Indexing Isodirectional Pointer Sequences

Authors: Sung-Hwan Kim and Hwan-Gue Cho

Published in: LIPIcs, Volume 181, 31st International Symposium on Algorithms and Computation (ISAAC 2020)


Abstract
Many sequential and temporal data have dependency relationships among their elements, which can be represented as a sequence of pointers. In this paper, we introduce a new string matching problem on a particular type of string, which we call an isodirectional pointer sequence, in which each entry has a pointer to another entry. The proposed problem is not only a formalization of real-world dependency matching problems, but also a generalization of variants of the string matching problem such as parameterized pattern matching and Cartesian tree matching. We present a 2n lg σ + 2n + o(n)-bit index that preprocesses the text T[1:n] so as to count the number of occurrences of a pattern P[1:m] in 𝒪(m lg σ) time, where σ is the number of distinct lengths of pointers in T. Our index is also easy to implement in practice because it consists of wavelet trees and a range maximum query index, which are widely used building blocks in many other compact data structures. By compressing the wavelet trees, the index can also be stored in 2nH^*₀(T) + 2n + o(n) bits, where H^*₀(T) is the 0-th order empirical entropy of the distribution of pointer lengths of T.
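
A small illustration of the quantity σ above, assuming (as a reading of ours, not the paper's formal definition) that a pointer's length is its offset to the target entry:

def pointer_lengths(targets):
    # targets[i] = index that entry i points to; the pointer "length"
    # is taken here to be the signed offset to the target.
    return [t - i for i, t in enumerate(targets)]

# sigma = number of distinct pointer lengths in the text:
# pointer_lengths([2, 2, 0, 1]) == [2, 1, -2, -2]  ->  sigma = 3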

Cite as

Sung-Hwan Kim and Hwan-Gue Cho. Indexing Isodirectional Pointer Sequences. In 31st International Symposium on Algorithms and Computation (ISAAC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 181, pp. 35:1-35:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.ISAAC.2020.35,
  author =	{Kim, Sung-Hwan and Cho, Hwan-Gue},
  title =	{{Indexing Isodirectional Pointer Sequences}},
  booktitle =	{31st International Symposium on Algorithms and Computation (ISAAC 2020)},
  pages =	{35:1--35:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-173-3},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{181},
  editor =	{Cao, Yixin and Cheng, Siu-Wing and Li, Minming},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2020.35},
  URN =		{urn:nbn:de:0030-drops-133797},
  doi =		{10.4230/LIPIcs.ISAAC.2020.35},
  annote =	{Keywords: String Matching, Suffix Array, FM-index, Wavelet Tree, Range Minimum Query, Parameterized String Matching, Cartesian Tree Matching}
}

Kim, Donggyu

Document
Γ-Graphic Delta-Matroids and Their Applications

Authors: Donggyu Kim, Duksang Lee, and Sang-il Oum

Published in: LIPIcs, Volume 212, 32nd International Symposium on Algorithms and Computation (ISAAC 2021)


Abstract
For an abelian group Γ, a Γ-labelled graph is a graph whose vertices are labelled by elements of Γ. We prove that a certain collection of edge sets of a Γ-labelled graph forms a delta-matroid, which we call a Γ-graphic delta-matroid, and provide a polynomial-time algorithm to solve the separation problem, which allows us to apply the symmetric greedy algorithm of Bouchet to find a maximum-weight feasible set in such a delta-matroid. We present two algorithmic applications on graphs: Maximum Weight Packing of Trees of Order Not Divisible by k, and Maximum Weight S-Tree Packing. We also discuss various properties of Γ-graphic delta-matroids.
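
For reference, the classical notion underlying this result (standard, not specific to the paper): a delta-matroid is a pair (V, ℱ) with ℱ a nonempty family of subsets of V satisfying the symmetric exchange axiom

\forall X, Y \in \mathcal{F},\ \forall x \in X \mathbin{\triangle} Y,\ \exists y \in X \mathbin{\triangle} Y:\ X \mathbin{\triangle} \{x, y\} \in \mathcal{F},

where △ denotes symmetric difference and y = x is permitted; Bouchet's symmetric greedy algorithm then finds a maximum-weight feasible set given an oracle for feasibility.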

Cite as

Donggyu Kim, Duksang Lee, and Sang-il Oum. Γ-Graphic Delta-Matroids and Their Applications. In 32nd International Symposium on Algorithms and Computation (ISAAC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 212, pp. 70:1-70:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


Copy BibTex To Clipboard

@InProceedings{kim_et_al:LIPIcs.ISAAC.2021.70,
  author =	{Kim, Donggyu and Lee, Duksang and Oum, Sang-il},
  title =	{{\Gamma-Graphic Delta-Matroids and Their Applications}},
  booktitle =	{32nd International Symposium on Algorithms and Computation (ISAAC 2021)},
  pages =	{70:1--70:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-214-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{212},
  editor =	{Ahn, Hee-Kap and Sadakane, Kunihiko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2021.70},
  URN =		{urn:nbn:de:0030-drops-155038},
  doi =		{10.4230/LIPIcs.ISAAC.2021.70},
  annote =	{Keywords: delta-matroid, group-labelled graph, greedy algorithm, tree packing}
}