81 Search Results for "Høyer, Peter"


Document
Invited Talk
The CP-SAT-LP Solver (Invited Talk)

Authors: Laurent Perron, Frédéric Didier, and Steven Gay

Published in: LIPIcs, Volume 280, 29th International Conference on Principles and Practice of Constraint Programming (CP 2023)


Abstract
The CP-SAT-LP solver is developed by the Operations Research team at Google and is part of the OR-Tools [Laurent Perron and Vincent Furnon, 2023] open-source optimization suite. It is an implementation of a purely integral Constraint Programming solver on top of a SAT solver using Lazy Clause Generation [Stuckey, 2010]. It draws its inspiration from the chuffed solver [Geoffrey Chu et al., 2023] and from the CP 2013 plenary by Peter Stuckey on Lazy Clause Generation [Stuckey, 2013]. The CP-SAT-LP solver improves upon the chuffed solver [Geoffrey Chu et al., 2023] in two main directions. First, it uses a simplex alongside the SAT engine. Second, it implements and relies upon a portfolio of diverse workers for its search part. The use of the simplex brings the obvious advantages of a linear relaxation on the linear part of the full model. It has also started the integration of MIP technology into CP-SAT-LP. This is a huge endeavour, as MIP solvers are mature and complex. It includes presolve (which was already part of CP-SAT), dual reductions, specific branching rules, cuts, reduced-cost fixing, and more advanced techniques. It also allows the research on scheduling from the MIP community [Balas, 1985; Applegate and Cook, 1991; Maurice Queyranne, 1993] to be integrated tightly with the most advanced scheduling algorithms [Vilím, 2011]. This has enabled breakthroughs in solving and proving hard scheduling instances of Job-Shop problems [Ding et al., 2019] and Resource-Constrained Project Scheduling Problems [Rainer Kolisch and Arno Sprecher, 1997; Artigues et al., 2008]. Using a portfolio of different workers makes it easier to try new ideas and to incorporate orthogonal techniques with little complication, apart from controlling the explosion in the number of potential workers. These workers can be categorized along multiple criteria: finding primal solutions (using complete solvers, Local Search [Luteberget and Sartor, 2023], or Large Neighborhood Search [Paul Shaw, 1998]), improving dual bounds, and trying to reduce the problem with the help of continuous probing. This diversity of behaviors has increased the robustness of the solver, while the continuous sharing of information between workers has produced massive speedups when running multiple workers in parallel. All in all, CP-SAT-LP is a state-of-the-art solver, with unsurpassed performance in the Constraint Programming community, breakthrough results on scheduling benchmarks (with the closure of many open problems), and competitive results with the best MIP solvers (on purely integral problems).
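
As a concrete illustration of the solver described above, the following minimal sketch builds a tiny model with the OR-Tools CP-SAT Python API and enables a portfolio of parallel workers. The model itself (variables, bounds, constraints, objective) is a made-up example for illustration only, not taken from the paper.

from ortools.sat.python import cp_model

# A purely integral model mixing a linear constraint (which benefits from the
# simplex relaxation and cuts) with a classic CP global constraint.
model = cp_model.CpModel()
x = model.NewIntVar(0, 10, "x")
y = model.NewIntVar(0, 10, "y")
model.Add(2 * x + 3 * y <= 19)        # linear part of the model
model.AddAllDifferent([x, y])         # CP global constraint
model.Maximize(x + 2 * y)

solver = cp_model.CpSolver()
solver.parameters.num_search_workers = 8   # run a portfolio of diverse workers in parallel
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y), solver.ObjectiveValue())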

Cite as

Laurent Perron, Frédéric Didier, and Steven Gay. The CP-SAT-LP Solver (Invited Talk). In 29th International Conference on Principles and Practice of Constraint Programming (CP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 280, pp. 3:1-3:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{perron_et_al:LIPIcs.CP.2023.3,
  author =	{Perron, Laurent and Didier, Fr\'{e}d\'{e}ric and Gay, Steven},
  title =	{{The CP-SAT-LP Solver}},
  booktitle =	{29th International Conference on Principles and Practice of Constraint Programming (CP 2023)},
  pages =	{3:1--3:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-300-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{280},
  editor =	{Yap, Roland H. C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2023.3},
  URN =		{urn:nbn:de:0030-drops-190405},
  doi =		{10.4230/LIPIcs.CP.2023.3},
  annote =	{Keywords: Constraint Programming, Operations Research, Sat Solver}
}
Document
From Formal Boosted Tree Explanations to Interpretable Rule Sets

Authors: Jinqiang Yu, Alexey Ignatiev, and Peter J. Stuckey

Published in: LIPIcs, Volume 280, 29th International Conference on Principles and Practice of Constraint Programming (CP 2023)


Abstract
The rapid rise of Artificial Intelligence (AI) and Machine Learning (ML) has invoked the need for explainable AI (XAI). One of the most prominent approaches to XAI is to train rule-based ML models, e.g., decision trees, lists, and sets, that are deemed interpretable due to their transparent nature. Recent years have witnessed a large body of work in the area of constraint- and reasoning-based approaches to the inference of interpretable models, in particular decision sets (DSes). Despite being shown to outperform heuristic approaches in terms of accuracy, most of them suffer from scalability issues and often fail to handle large training data, in which case no solution is offered. Motivated by this limitation and the success of gradient boosted trees, we propose a novel anytime approach to producing DSes that are both accurate and interpretable. The approach makes use of the concept of a generalized formal explanation and builds on the recent advances in formal explainability of gradient boosted trees. Experimental results obtained on a wide range of datasets demonstrate that our approach produces DSes that are more accurate than those of the state-of-the-art algorithms and comparable to them in terms of explanation size.

Cite as

Jinqiang Yu, Alexey Ignatiev, and Peter J. Stuckey. From Formal Boosted Tree Explanations to Interpretable Rule Sets. In 29th International Conference on Principles and Practice of Constraint Programming (CP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 280, pp. 38:1-38:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{yu_et_al:LIPIcs.CP.2023.38,
  author =	{Yu, Jinqiang and Ignatiev, Alexey and Stuckey, Peter J.},
  title =	{{From Formal Boosted Tree Explanations to Interpretable Rule Sets}},
  booktitle =	{29th International Conference on Principles and Practice of Constraint Programming (CP 2023)},
  pages =	{38:1--38:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-300-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{280},
  editor =	{Yap, Roland H. C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2023.38},
  URN =		{urn:nbn:de:0030-drops-190758},
  doi =		{10.4230/LIPIcs.CP.2023.38},
  annote =	{Keywords: Decision set, interpretable model, gradient boosted tree, BT compilation}
}
Document
Short Paper
Predict-Then-Optimise Strategies for Water Flow Control (Short Paper)

Authors: Vincent Barbosa Vaz, James Bailey, Christopher Leckie, and Peter J. Stuckey

Published in: LIPIcs, Volume 280, 29th International Conference on Principles and Practice of Constraint Programming (CP 2023)


Abstract
A pressure sewer system is a network of pump stations used to collect and manage sewage from individual properties that cannot be directly connected to the gravity-driven sewer network due to the topography of the terrain. We consider a common scenario for a pressure sewer system, where individual sites collect sewage in a local tank and then pump it into the gravity-fed sewage network. Standard control systems simply wait until the local tank reaches (near) capacity and then begin pumping out. Unfortunately, such simple control usually leads to peaks in sewage flow in the morning and evening, corresponding to peak water usage in the properties. High peak flows require equalization basins or overflow systems, or larger-capacity sewage treatment plants. In this paper we investigate combining prediction and optimisation to better manage peak sewage flows. We use simple prediction methods to generate realistic possible future scenarios, and then develop optimisation models to generate pumping plans that try to smooth out flows into the network. The solutions of these models create a policy for pumping out that is specialized to individual properties and which overall is able to substantially reduce peak flows.
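
As a hedged sketch of what the optimisation half of such a predict-then-optimise pipeline can look like, the toy model below takes one predicted hourly inflow scenario for a single property and chooses pump-out volumes that respect the tank capacity while minimising the peak flow pushed into the network. The formulation, units (litres), and parameter values are illustrative assumptions, not the model used in the paper; it is written against the OR-Tools CP-SAT Python API.

from ortools.sat.python import cp_model

def plan(inflow, capacity=400, start=100, max_rate=300):
    """Toy pump schedule: minimise the peak hourly pump-out volume (litres)."""
    T = len(inflow)
    m = cp_model.CpModel()
    pump = [m.NewIntVar(0, max_rate, f"pump_{t}") for t in range(T)]
    level = [m.NewIntVar(0, capacity, f"level_{t}") for t in range(T + 1)]
    peak = m.NewIntVar(0, max_rate, "peak")
    m.Add(level[0] == start)
    for t in range(T):
        m.Add(level[t + 1] == level[t] + inflow[t] - pump[t])   # tank balance, stays within capacity
        m.Add(pump[t] <= peak)                                  # peak tracks the largest pump-out
    m.Minimize(peak)
    solver = cp_model.CpSolver()
    if solver.Solve(m) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return [solver.Value(p) for p in pump], solver.Value(peak)

# Example: plan([50, 60, 200, 220, 80, 40]) pumps ahead of the predicted morning
# peak, which lowers the peak rate compared to pumping only when the tank is full.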

Cite as

Vincent Barbosa Vaz, James Bailey, Christopher Leckie, and Peter J. Stuckey. Predict-Then-Optimise Strategies for Water Flow Control (Short Paper). In 29th International Conference on Principles and Practice of Constraint Programming (CP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 280, pp. 42:1-42:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{barbosavaz_et_al:LIPIcs.CP.2023.42,
  author =	{Barbosa Vaz, Vincent and Bailey, James and Leckie, Christopher and Stuckey, Peter J.},
  title =	{{Predict-Then-Optimise Strategies for Water Flow Control}},
  booktitle =	{29th International Conference on Principles and Practice of Constraint Programming (CP 2023)},
  pages =	{42:1--42:10},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-300-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{280},
  editor =	{Yap, Roland H. C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2023.42},
  URN =		{urn:nbn:de:0030-drops-190795},
  doi =		{10.4230/LIPIcs.CP.2023.42},
  annote =	{Keywords: Water Flow Control, Optimization, Machine Learning}
}
Document
A Sweep-Plane Algorithm for Calculating the Isolation of Mountains

Authors: Daniel Funke, Nicolai Hüning, and Peter Sanders

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
One established metric to classify the significance of a mountain peak is its isolation. It specifies the distance between a peak and the closest point of higher elevation. Peaks with high isolation dominate their surroundings and provide a nice view from the top. With the availability of worldwide Digital Elevation Models (DEMs), the isolation of all mountain peaks can be computed automatically. Previous algorithms run in worst case time that is quadratic in the input size. We present a novel sweep-plane algorithm that runs in time 𝒪(nlog n+pT_NN) where n is the input size, p the number of considered peaks and T_NN the time for a 2D nearest-neighbor query in an appropriate geometric search tree. We refine this to a two-level approach that has high locality and good parallel scalability. Our implementation reduces the time for calculating the isolation of every peak on Earth from hours to minutes while improving precision.
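
The following simplified Python sketch illustrates only the sweep idea, not the paper's O(n log n + p T_NN) algorithm: cells are processed in order of decreasing elevation, so when a peak is reached every strictly higher cell has already been swept and its isolation is the distance to the nearest of them. The input format (a small elevation grid as a dict), the peak test, the grid-unit distances, and the brute-force nearest-neighbour query are all simplifying assumptions; the paper replaces the query with a 2D geometric search tree and a two-level, parallel scheme.

import math

def isolations(elev):
    """elev: dict mapping (row, col) grid cells to elevation values."""
    cells = sorted(elev, key=lambda c: elev[c], reverse=True)
    higher, result, i = [], {}, 0
    while i < len(cells):
        j = i
        while j < len(cells) and elev[cells[j]] == elev[cells[i]]:
            j += 1                       # group ties so equally high cells do not count as "higher"
        for c in cells[i:j]:
            if is_peak(c, elev):         # brute-force query over all strictly higher cells
                result[c] = min((math.dist(c, h) for h in higher), default=math.inf)
        higher.extend(cells[i:j])        # this elevation band is now "already swept"
        i = j
    return result

def is_peak(c, elev):
    r, q = c
    neighbours = [(r + dr, q + dq) for dr in (-1, 0, 1) for dq in (-1, 0, 1) if (dr, dq) != (0, 0)]
    return all(elev[c] > elev[n] for n in neighbours if n in elev)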

Cite as

Daniel Funke, Nicolai Hüning, and Peter Sanders. A Sweep-Plane Algorithm for Calculating the Isolation of Mountains. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 51:1-51:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{funke_et_al:LIPIcs.ESA.2023.51,
  author =	{Funke, Daniel and H\"{u}ning, Nicolai and Sanders, Peter},
  title =	{{A Sweep-Plane Algorithm for Calculating the Isolation of Mountains}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{51:1--51:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.51},
  URN =		{urn:nbn:de:0030-drops-187040},
  doi =		{10.4230/LIPIcs.ESA.2023.51},
  annote =	{Keywords: computational geometry, Geo-information systems, sweepline algorithms}
}
Document
Recognizing H-Graphs - Beyond Circular-Arc Graphs

Authors: Deniz Ağaoğlu Çağırıcı, Onur Çağırıcı, Jan Derbisz, Tim A. Hartmann, Petr Hliněný, Jan Kratochvíl, Tomasz Krawczyk, and Peter Zeman

Published in: LIPIcs, Volume 272, 48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023)


Abstract
In 1992 Biró, Hujter and Tuza introduced, for every fixed connected graph H, the class of H-graphs, defined as the intersection graphs of connected subgraphs of some subdivision of H. Such classes of graphs are related to many known graph classes: for example, K₂-graphs coincide with interval graphs, K₃-graphs with circular-arc graphs, the union of T-graphs, where T ranges over all trees, coincides with chordal graphs. Recently, quite a lot of research has been devoted to understanding the tractability border for various computational problems, such as recognition or isomorphism testing, in classes of H-graphs for different graphs H. In this work we undertake this research topic, focusing on the recognition problem. Chaplick, Töpfer, Voborník, and Zeman showed an XP-algorithm testing whether a given graph is a T-graph, where the parameter is the size of the tree T. In particular, for every fixed tree T the recognition of T-graphs can be solved in polynomial time. Tucker showed a polynomial time algorithm recognizing K₃-graphs (circular-arc graphs). On the other hand, Chaplick et al. showed also that for every fixed graph H containing two distinct cycles sharing an edge, the recognition of H-graphs is NP-hard. The main two results of this work narrow the gap between the NP-hard and 𝖯 cases of H-graph recognition. First, we show that the recognition of H-graphs is NP-hard when H contains two distinct cycles. On the other hand, we show a polynomial-time algorithm recognizing L-graphs, where L is a graph containing a cycle and an edge attached to it (which we call lollipop graphs). Our work leaves open the recognition problems of M-graphs for every unicyclic graph M different from a cycle and a lollipop.

Cite as

Deniz Ağaoğlu Çağırıcı, Onur Çağırıcı, Jan Derbisz, Tim A. Hartmann, Petr Hliněný, Jan Kratochvíl, Tomasz Krawczyk, and Peter Zeman. Recognizing H-Graphs - Beyond Circular-Arc Graphs. In 48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 272, pp. 8:1-8:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{agaoglucagirici_et_al:LIPIcs.MFCS.2023.8,
  author =	{A\u{g}ao\u{g}lu \c{C}a\u{g}{\i}r{\i}c{\i}, Deniz and \c{C}a\u{g}{\i}r{\i}c{\i}, Onur and Derbisz, Jan and Hartmann, Tim A. and Hlin\v{e}n\'{y}, Petr and Kratochv{\'\i}l, Jan and Krawczyk, Tomasz and Zeman, Peter},
  title =	{{Recognizing H-Graphs - Beyond Circular-Arc Graphs}},
  booktitle =	{48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023)},
  pages =	{8:1--8:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-292-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{272},
  editor =	{Leroux, J\'{e}r\^{o}me and Lombardy, Sylvain and Peleg, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2023.8},
  URN =		{urn:nbn:de:0030-drops-185420},
  doi =		{10.4230/LIPIcs.MFCS.2023.8},
  annote =	{Keywords: H-graphs, Intersection Graphs, Helly Property}
}
Document
Refactoring = Substitution + Rewriting: Towards Generic, Language-Independent Refactorings

Authors: Simon Thompson and Dániel Horpácsi

Published in: OASIcs, Volume 109, Eelco Visser Commemorative Symposium (EVCS 2023)


Abstract
Eelco Visser’s work has always encouraged stepping back from the particular to look at the underlying, conceptual problems. In that spirit we present an approach to describing refactorings that abstracts away from particular refactorings to classes of similar transformations, and presents an implementation of these that works by substitution and subsequent rewriting. Substitution is language-independent under this approach, while the rewrites embody language-specific aspects. Intriguingly, it also goes back to work on API migration by Huiqing Li and the first author, and sets refactoring in that general context.

Cite as

Simon Thompson and Dániel Horpácsi. Refactoring = Substitution + Rewriting: Towards Generic, Language-Independent Refactorings. In Eelco Visser Commemorative Symposium (EVCS 2023). Open Access Series in Informatics (OASIcs), Volume 109, pp. 26:1-26:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{thompson_et_al:OASIcs.EVCS.2023.26,
  author =	{Thompson, Simon and Horp\'{a}csi, D\'{a}niel},
  title =	{{Refactoring = Substitution + Rewriting: Towards Generic, Language-Independent Refactorings}},
  booktitle =	{Eelco Visser Commemorative Symposium (EVCS 2023)},
  pages =	{26:1--26:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-267-9},
  ISSN =	{2190-6807},
  year =	{2023},
  volume =	{109},
  editor =	{L\"{a}mmel, Ralf and Mosses, Peter D. and Steimann, Friedrich},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.EVCS.2023.26},
  URN =		{urn:nbn:de:0030-drops-177961},
  doi =		{10.4230/OASIcs.EVCS.2023.26},
  annote =	{Keywords: refactoring, generic, language independent, rewriting, substitution, API upgrade}
}
Document
APPROX
Bypassing the XOR Trick: Stronger Certificates for Hypergraph Clique Number

Authors: Venkatesan Guruswami, Pravesh K. Kothari, and Peter Manohar

Published in: LIPIcs, Volume 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)


Abstract
Let H(k,n,p) be the distribution on k-uniform hypergraphs where every subset of [n] of size k is included as a hyperedge with probability p independently. In this work, we design and analyze a simple spectral algorithm that certifies a bound on the size of the largest clique, ω(H), in hypergraphs H ∼ H(k,n,p). For example, for any constant p, with high probability over the choice of the hypergraph, our spectral algorithm certifies a bound of Õ(√n) on the clique number in polynomial time. This matches, up to polylog(n) factors, the best known certificate for the clique number in random graphs, which is the special case of k = 2. Prior to our work, the best known refutation algorithms [Amin Coja-Oghlan et al., 2004; Sarah R. Allen et al., 2015] relied on a reduction to the problem of refuting random k-XOR via Feige’s XOR trick [Uriel Feige, 2002], and yield a polynomially worse bound of Õ(n^{3/4}) on the clique number when p = O(1). Our algorithm bypasses the XOR trick and relies instead on a natural generalization of the Lovász theta semidefinite programming relaxation for cliques in hypergraphs.

Cite as

Venkatesan Guruswami, Pravesh K. Kothari, and Peter Manohar. Bypassing the XOR Trick: Stronger Certificates for Hypergraph Clique Number. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 42:1-42:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.APPROX/RANDOM.2022.42,
  author =	{Guruswami, Venkatesan and Kothari, Pravesh K. and Manohar, Peter},
  title =	{{Bypassing the XOR Trick: Stronger Certificates for Hypergraph Clique Number}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022)},
  pages =	{42:1--42:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-249-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{245},
  editor =	{Chakrabarti, Amit and Swamy, Chaitanya},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2022.42},
  URN =		{urn:nbn:de:0030-drops-171642},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2022.42},
  annote =	{Keywords: Planted clique, Average-case complexity, Spectral refutation, Random matrix theory}
}
Document
Short Paper
3D Sketch Maps: Concept, Potential Benefits, and Challenges (Short Paper)

Authors: Kevin Gonyop Kim, Jakub Krukar, Panagiotis Mavros, Jiayan Zhao, Peter Kiefer, Angela Schwering, Christoph Hölscher, and Martin Raubal

Published in: LIPIcs, Volume 240, 15th International Conference on Spatial Information Theory (COSIT 2022)


Abstract
Studying the 3D aspect of spatial information has become increasingly important due to changes in the way we interact with the surrounding environments as well as technological innovations. Current pen-and-paper approaches of sketch mapping have a limitation in investigating 3D spatial knowledge as they are forced to be drawn on 2D interfaces. In this paper, we propose the concept of 3D sketch mapping as a tool to study human spatial knowledge by externalizing the mental models of spatial information with 3D representations. The goal of this paper is to introduce the concept, discuss its potential importance and challenges, and share our vision for future research directions.

Cite as

Kevin Gonyop Kim, Jakub Krukar, Panagiotis Mavros, Jiayan Zhao, Peter Kiefer, Angela Schwering, Christoph Hölscher, and Martin Raubal. 3D Sketch Maps: Concept, Potential Benefits, and Challenges (Short Paper). In 15th International Conference on Spatial Information Theory (COSIT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 240, pp. 14:1-14:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{kim_et_al:LIPIcs.COSIT.2022.14,
  author =	{Kim, Kevin Gonyop and Krukar, Jakub and Mavros, Panagiotis and Zhao, Jiayan and Kiefer, Peter and Schwering, Angela and H\"{o}lscher, Christoph and Raubal, Martin},
  title =	{{3D Sketch Maps: Concept, Potential Benefits, and Challenges}},
  booktitle =	{15th International Conference on Spatial Information Theory (COSIT 2022)},
  pages =	{14:1--14:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-257-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{240},
  editor =	{Ishikawa, Toru and Fabrikant, Sara Irina and Winter, Stephan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2022.14},
  URN =		{urn:nbn:de:0030-drops-168992},
  doi =		{10.4230/LIPIcs.COSIT.2022.14},
  annote =	{Keywords: Sketch maps, mental representations, spatial knowledge}
}
Document
Extending Partial Representations of Circle Graphs in Near-Linear Time

Authors: Guido Brückner, Ignaz Rutter, and Peter Stumpf

Published in: LIPIcs, Volume 241, 47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022)


Abstract
The partial representation extension problem generalizes the recognition problem for geometric intersection graphs. The input consists of a graph G, a subgraph H ⊆ G and a representation ℋ of H. The question is whether G admits a representation 𝒢 whose restriction to H is ℋ. We study this question for circle graphs, which are intersection graphs of chords of a circle. Their representations are called chord diagrams. We show that for a graph with n vertices and m edges the partial representation extension problem can be solved in O((n + m) α(n + m)) time, where α is the inverse Ackermann function. This improves over an O(n³)-time algorithm by Chaplick, Fulek and Klavík [2019]. The main technical contributions are a canonical way of orienting chord diagrams and a novel compact representation of the set of all canonically oriented chord diagrams that represent a given circle graph G, which is of independent interest.

Cite as

Guido Brückner, Ignaz Rutter, and Peter Stumpf. Extending Partial Representations of Circle Graphs in Near-Linear Time. In 47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 241, pp. 25:1-25:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{bruckner_et_al:LIPIcs.MFCS.2022.25,
  author =	{Br\"{u}ckner, Guido and Rutter, Ignaz and Stumpf, Peter},
  title =	{{Extending Partial Representations of Circle Graphs in Near-Linear Time}},
  booktitle =	{47th International Symposium on Mathematical Foundations of Computer Science (MFCS 2022)},
  pages =	{25:1--25:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-256-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{241},
  editor =	{Szeider, Stefan and Ganian, Robert and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2022.25},
  URN =		{urn:nbn:de:0030-drops-168233},
  doi =		{10.4230/LIPIcs.MFCS.2022.25},
  annote =	{Keywords: circle graphs, partial representation extension, split decomposition tree, recognition algorithm}
}
Document
Fast Succinct Retrieval and Approximate Membership Using Ribbon

Authors: Peter C. Dillinger, Lorenz Hübschle-Schneider, Peter Sanders, and Stefan Walzer

Published in: LIPIcs, Volume 233, 20th International Symposium on Experimental Algorithms (SEA 2022)


Abstract
A retrieval data structure for a static function f: S → {0,1}^r supports queries that return f(x) for any x ∈ S. Retrieval data structures can be used to implement a static approximate membership query data structure (AMQ), i.e., a Bloom filter alternative, with false positive rate 2^{-r}. The information-theoretic lower bound for both tasks is r|S| bits. While succinct theoretical constructions using (1+o(1))r|S| bits were known, these could not achieve very small overheads in practice because they have an unfavorable space-time tradeoff hidden in the asymptotic costs or because small overheads would only be reached for physically impossible input sizes. With bumped ribbon retrieval (BuRR), we present the first practical succinct retrieval data structure. In an extensive experimental evaluation BuRR achieves space overheads well below 1% while being faster than most previously used retrieval data structures (typically with space overheads at least an order of magnitude larger) and faster than classical Bloom filters (with space overhead ≥ 44%). This efficiency, including favorable constants, stems from a combination of simplicity, word parallelism, and high locality. We additionally describe homogeneous ribbon filter AMQs, which are even simpler and faster at the price of slightly larger space overhead.
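
To make the underlying mechanism concrete, here is a toy Python sketch of the general "retrieval as a random linear system over GF(2)" idea that ribbon-based structures refine. It is not BuRR itself: there is no ribbon band structure, no bumping, and none of the succinct-overhead or speed guarantees. Each key hashes to a few table slots, construction solves for table entries whose XOR equals f(x), and a query XORs those slots. The hashing scheme and sizes are illustrative assumptions, and since Python's built-in hash() is randomized per process, the table is only valid within a single run.

import random

def positions(key, m, k=3, seed=0):
    rng = random.Random(hash((seed, key)))        # per-process only: hash() is randomized
    return rng.sample(range(m), k)                # k distinct table slots per key

def build(func, m, seed=0):
    """Solve, over GF(2), 'XOR of the key's slots == func[key]' for every key."""
    pivots = {}                                   # pivot slot -> [row mask, right-hand side]
    for key, val in func.items():
        mask = 0
        for p in positions(key, m, seed=seed):
            mask |= 1 << p
        for p in sorted(pivots):                  # forward elimination against known pivots
            if mask >> p & 1:
                mask ^= pivots[p][0]
                val ^= pivots[p][1]
        if mask == 0:
            if val != 0:
                raise ValueError("unlucky hash choice; retry with another seed")
            continue                              # redundant equation
        pivots[(mask & -mask).bit_length() - 1] = [mask, val]
    table = [0] * m                               # free slots stay 0
    for p in sorted(pivots, reverse=True):        # back substitution
        mask, val = pivots[p]
        for q in range(p + 1, m):
            if mask >> q & 1:
                val ^= table[q]
        table[p] = val
    return table

def query(table, key, seed=0):
    out = 0
    for p in positions(key, len(table), seed=seed):
        out ^= table[p]                           # XOR of r-bit table entries
    return out

# Example: t = build({"apple": 3, "pear": 5, "plum": 6}, m=8); query(t, "pear") == 5.
# Keys outside the stored set return arbitrary values, which is what makes the same
# structure usable as an approximate membership filter when values are fingerprints.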

Cite as

Peter C. Dillinger, Lorenz Hübschle-Schneider, Peter Sanders, and Stefan Walzer. Fast Succinct Retrieval and Approximate Membership Using Ribbon. In 20th International Symposium on Experimental Algorithms (SEA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 233, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{dillinger_et_al:LIPIcs.SEA.2022.4,
  author =	{Dillinger, Peter C. and H\"{u}bschle-Schneider, Lorenz and Sanders, Peter and Walzer, Stefan},
  title =	{{Fast Succinct Retrieval and Approximate Membership Using Ribbon}},
  booktitle =	{20th International Symposium on Experimental Algorithms (SEA 2022)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-251-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{233},
  editor =	{Schulz, Christian and U\c{c}ar, Bora},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2022.4},
  URN =		{urn:nbn:de:0030-drops-165385},
  doi =		{10.4230/LIPIcs.SEA.2022.4},
  annote =	{Keywords: AMQ, Bloom filter, dictionary, linear algebra, randomized algorithm, retrieval data structure, static function data structure, succinct data structure, perfect hashing}
}
Document
Symmetry and Quantum Query-To-Communication Simulation

Authors: Sourav Chakraborty, Arkadev Chattopadhyay, Peter Høyer, Nikhil S. Mande, Manaswi Paraashar, and Ronald de Wolf

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
Buhrman, Cleve and Wigderson (STOC'98) showed that for every Boolean function f : {-1,1}ⁿ → {-1,1} and G ∈ {AND₂, XOR₂}, the bounded-error quantum communication complexity of the composed function f∘G equals O(𝖰(f) log n), where 𝖰(f) denotes the bounded-error quantum query complexity of f. This is achieved by Alice running the optimal quantum query algorithm for f, using a round of O(log n) qubits of communication to implement each query. This is in contrast with the classical setting, where it is easy to show that 𝖱^{cc}(f∘G) ≤ 2𝖱(f), where 𝖱^{cc} and 𝖱 denote bounded-error communication and query complexity, respectively. Chakraborty et al. (CCC'20) exhibited a total function for which the log n overhead in the BCW simulation is required. This established the somewhat surprising fact that quantum reductions are in some cases inherently more expensive than classical reductions. We improve upon their result in several ways.
- We show that the log n overhead is not required when f is symmetric (i.e., depends only on the Hamming weight of its input), generalizing a result of Aaronson and Ambainis for the Set-Disjointness function (Theory of Computing'05). Our upper bound assumes a shared entangled state, though for most symmetric functions the assumed number of entangled qubits is less than the communication and hence could be part of the communication.
- In order to prove the above, we design an efficient distributed version of noisy amplitude amplification that allows us to prove the result when f is the OR function. This also provides a different, and arguably simpler, proof of Aaronson and Ambainis’s O(√n) communication upper bound for Set-Disjointness.
- In view of our first result above, one may ask whether the log n overhead in the BCW simulation can be avoided even when f is transitive, which is a weaker notion of symmetry. We give a strong negative answer by showing that the log n overhead is still necessary for some transitive functions even when we allow the quantum communication protocol an error probability that can be arbitrarily close to 1/2 (this corresponds to the unbounded-error model of communication).
- We also give, among other things, a general recipe to construct functions for which the log n overhead is required in the BCW simulation in the bounded-error communication model, even if the parties are allowed to share an arbitrary prior entangled state for free.

Cite as

Sourav Chakraborty, Arkadev Chattopadhyay, Peter Høyer, Nikhil S. Mande, Manaswi Paraashar, and Ronald de Wolf. Symmetry and Quantum Query-To-Communication Simulation. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 20:1-20:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{chakraborty_et_al:LIPIcs.STACS.2022.20,
  author =	{Chakraborty, Sourav and Chattopadhyay, Arkadev and H{\o}yer, Peter and Mande, Nikhil S. and Paraashar, Manaswi and de Wolf, Ronald},
  title =	{{Symmetry and Quantum Query-To-Communication Simulation}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{20:1--20:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.20},
  URN =		{urn:nbn:de:0030-drops-158309},
  doi =		{10.4230/LIPIcs.STACS.2022.20},
  annote =	{Keywords: Classical and quantum communication complexity, query-to-communication-simulation, quantum computing}
}
Document
Mixing in Non-Quasirandom Groups

Authors: W. T. Gowers and Emanuele Viola

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We initiate a systematic study of mixing in non-quasirandom groups. Let A and B be two independent, high-entropy distributions over a group G. We show that the product distribution AB is statistically close to the distribution F(AB) for several choices of G and F, including: 1) G is the affine group of 2x2 matrices, and F sets the top-right matrix entry to a uniform value, 2) G is the lamplighter group, that is the wreath product of ℤ₂ and ℤ_{n}, and F is multiplication by a certain subgroup, 3) G is Hⁿ where H is non-abelian, and F selects a uniform coordinate and takes a uniform conjugate of it. The obtained bounds for (1) and (2) are tight. This work is motivated by and applied to problems in communication complexity. We consider the 3-party communication problem of deciding if the product of three group elements multiplies to the identity. We prove lower bounds for the groups above, which are tight for the affine and the lamplighter groups.

Cite as

W. T. Gowers and Emanuele Viola. Mixing in Non-Quasirandom Groups. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 80:1-80:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{gowers_et_al:LIPIcs.ITCS.2022.80,
  author =	{Gowers, W. T. and Viola, Emanuele},
  title =	{{Mixing in Non-Quasirandom Groups}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{80:1--80:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.80},
  URN =		{urn:nbn:de:0030-drops-156761},
  doi =		{10.4230/LIPIcs.ITCS.2022.80},
  annote =	{Keywords: Groups, representation theory, mixing, communication complexity, quasi-random}
}
Document
Deterministic Dynamic Matching in Worst-Case Update Time

Authors: Peter Kiss

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We present deterministic algorithms for maintaining a (3/2 + ε)- and a (2 + ε)-approximate maximum matching in a fully dynamic graph with worst-case update times Ô(√n) and Õ(1), respectively. The fastest known deterministic worst-case update time algorithms for achieving approximation ratio (2 - δ) (for any δ > 0) and (2 + ε) were both shown by Roghani et al. [arXiv'2021] with update times O(n^{3/4}) and O_ε(√n), respectively. We close the gap between worst-case and amortized algorithms for the two approximation ratios, as the best deterministic amortized update times for the problem are O_ε(√n) and Õ(1), shown in Bernstein and Stein [SODA'2021] and Bhattacharya and Kiss [ICALP'2021], respectively. The algorithm achieving (3/2 + ε)-approximation builds on the EDCS concept introduced by the influential paper of Bernstein and Stein [ICALP'2015]. Say that H is an (α, δ)-approximate matching sparsifier if at all times H satisfies μ(H) ⋅ α + δ ⋅ n ≥ μ(G) (define (α, δ)-approximation similarly for matchings). We show how to maintain a locally damaged version of the EDCS which is a (3/2 + ε, δ)-approximate matching sparsifier. We further show how to reduce the maintenance of an α-approximate maximum matching to the maintenance of an (α, δ)-approximate maximum matching, building on an observation of Assadi et al. [EC'2016]. Our reduction requires an update time blow-up of Ô(1) or Õ(1) and is deterministic or randomized against an adaptive adversary, respectively. To achieve (2 + ε)-approximation we improve on the update time guarantee of an algorithm of Bhattacharya and Kiss [ICALP'2021]. In order to achieve both results we explicitly state a method implicitly used in Nanongkai and Saranurak [STOC'2017] and Bernstein et al. [arXiv'2020] which allows dynamic algorithms capable of processing the input in batches to be transformed into dynamic algorithms with worst-case update time. Independent Work: Independently and concurrently with our work, Grandoni et al. [arXiv'2021] have presented a fully dynamic algorithm for maintaining a (3/2 + ε)-approximate maximum matching with deterministic worst-case update time O_ε(√n).

Cite as

Peter Kiss. Deterministic Dynamic Matching in Worst-Case Update Time. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 94:1-94:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{kiss:LIPIcs.ITCS.2022.94,
  author =	{Kiss, Peter},
  title =	{{Deterministic Dynamic Matching in Worst-Case Update Time}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{94:1--94:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.94},
  URN =		{urn:nbn:de:0030-drops-156909},
  doi =		{10.4230/LIPIcs.ITCS.2022.94},
  annote =	{Keywords: Dynamic Algorithms, Matching, Approximate Matching, EDCS}
}
Document
Reactamole: Functional Reactive Molecular Programming

Authors: Titus H. Klinge, James I. Lathrop, Peter-Michael Osera, and Allison Rogers

Published in: LIPIcs, Volume 205, 27th International Conference on DNA Computing and Molecular Programming (DNA 27) (2021)


Abstract
Chemical reaction networks (CRNs) are an important tool for molecular programming, a field that is rapidly expanding our ability to deploy computer programs into biological systems for a variety of applications. However, CRNs are also difficult to work with due to their massively parallel nature, leading to the need for higher-level languages that allow for easier computation with CRNs. Recently, research has been conducted into a variety of higher-level languages for deterministic CRNs, but modeling CRN parallelism, managing error accumulation, and finding natural CRN representations are ongoing challenges. We introduce Reactamole, a higher-level language for deterministic CRNs that utilizes the functional reactive programming (FRP) paradigm to represent CRNs as a reactive dataflow network. Reactamole equates a CRN with a functional reactive program, implementing the key primitives of the FRP paradigm directly as CRNs. The functional nature of Reactamole makes reasoning about molecular programs easier, and its strong static typing allows us to ensure that a CRN is well-formed by virtue of being well-typed. In this paper, we describe the design of Reactamole and how we use CRNs to represent the common datatypes and operations found in FRP. We also demonstrate the potential of this functional reactive approach to molecular programming by giving an extended example where a CRN is constructed using FRP to modulate and demodulate an amplitude-modulated signal.
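
For readers unfamiliar with the target semantics, the toy sketch below integrates the deterministic (mass-action ODE) behaviour of a two-reaction network, X → X + Y at rate k1 and Y → ∅ at rate k2, with plain Euler steps, so the concentration of Y tracks (k1/k2)·[X] like a small dataflow block. It only illustrates what "deterministic CRN" means here; it is ordinary Python rather than Reactamole code, and the rates, step size, and driving signal are made up.

import math

def simulate(x_of_t, k1=1.0, k2=1.0, y0=0.0, dt=1e-3, t_end=10.0):
    """Euler integration of dY/dt = k1*[X] - k2*[Y] (mass-action kinetics)."""
    y, t, trace = y0, 0.0, []
    while t < t_end:
        x = x_of_t(t)                  # input species concentration, driven externally
        y += dt * (k1 * x - k2 * y)    # contributions of X -> X + Y and Y -> (nothing)
        t += dt
        trace.append((t, x, y))
    return trace

# Drive the input with a slowly varying signal; the output species Y smoothly follows it.
print(simulate(lambda t: 1.0 + math.sin(t))[-1])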

Cite as

Titus H. Klinge, James I. Lathrop, Peter-Michael Osera, and Allison Rogers. Reactamole: Functional Reactive Molecular Programming. In 27th International Conference on DNA Computing and Molecular Programming (DNA 27). Leibniz International Proceedings in Informatics (LIPIcs), Volume 205, pp. 10:1-10:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{klinge_et_al:LIPIcs.DNA.27.10,
  author =	{Klinge, Titus H. and Lathrop, James I. and Osera, Peter-Michael and Rogers, Allison},
  title =	{{Reactamole: Functional Reactive Molecular Programming}},
  booktitle =	{27th International Conference on DNA Computing and Molecular Programming (DNA 27)},
  pages =	{10:1--10:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-205-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{205},
  editor =	{Lakin, Matthew R. and \v{S}ulc, Petr},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.DNA.27.10},
  URN =		{urn:nbn:de:0030-drops-146775},
  doi =		{10.4230/LIPIcs.DNA.27.10},
  annote =	{Keywords: Chemical Reaction Network, Functional Reactive Programming, Domain Specific Language}
}
Document
Enabling Preserving Bisimulation Equivalence

Authors: Rob van Glabbeek, Peter Höfner, and Weiyou Wang

Published in: LIPIcs, Volume 203, 32nd International Conference on Concurrency Theory (CONCUR 2021)


Abstract
Most fairness assumptions used for verifying liveness properties are criticised for being too strong or unrealistic. On the other hand, justness, arguably the minimal fairness assumption required for the verification of liveness properties, is not preserved by classical semantic equivalences, such as strong bisimilarity. To overcome this deficiency, we introduce a finer alternative to strong bisimilarity, called enabling preserving bisimilarity. We prove that this equivalence is justness-preserving and a congruence for all standard operators, including parallel composition.

Cite as

Rob van Glabbeek, Peter Höfner, and Weiyou Wang. Enabling Preserving Bisimulation Equivalence. In 32nd International Conference on Concurrency Theory (CONCUR 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 203, pp. 33:1-33:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{vanglabbeek_et_al:LIPIcs.CONCUR.2021.33,
  author =	{van Glabbeek, Rob and H\"{o}fner, Peter and Wang, Weiyou},
  title =	{{Enabling Preserving Bisimulation Equivalence}},
  booktitle =	{32nd International Conference on Concurrency Theory (CONCUR 2021)},
  pages =	{33:1--33:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-203-7},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{203},
  editor =	{Haddad, Serge and Varacca, Daniele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2021.33},
  URN =		{urn:nbn:de:0030-drops-144107},
  doi =		{10.4230/LIPIcs.CONCUR.2021.33},
  annote =	{Keywords: bisimilarity, liveness properties, fairness assumptions, process algebra}
}
