25 Search Results for "Lange, Martin"


Document
Real-Time Higher-Order Recursion Schemes

Authors: Eric Alsmann and Florian Bruse

Published in: LIPIcs, Volume 318, 31st International Symposium on Temporal Representation and Reasoning (TIME 2024)


Abstract
Higher-Order Recursion Schemes (HORS) have long been studied as a tool to model functional programs. Model-checking the tree generated by a HORS of order k against a parity automaton is known to be k-EXPTIME-complete. This paper introduces timed HORS, a real-time version of HORS in the sense of Alur/Dill'90, to be model-checked against a pair of a parity automaton and a timed automaton. We show that adding dense linear time to the notion of recursion schemes adds one exponential to the cost of model-checking, i.e., model-checking a timed HORS of order k can be done in (k+1)-EXPTIME. This is shown by an adaptation of the region-graph construction known from the model-checking of timed CTL. We also obtain a hardness result for k = 1, and we strongly conjecture that it holds for all k. This result is obtained by encoding runs of 2-EXPTIME Turing machines into the trees generated by timed HORS.

Cite as

Eric Alsmann and Florian Bruse. Real-Time Higher-Order Recursion Schemes. In 31st International Symposium on Temporal Representation and Reasoning (TIME 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 318, pp. 16:1-16:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{alsmann_et_al:LIPIcs.TIME.2024.16,
  author =	{Alsmann, Eric and Bruse, Florian},
  title =	{{Real-Time Higher-Order Recursion Schemes}},
  booktitle =	{31st International Symposium on Temporal Representation and Reasoning (TIME 2024)},
  pages =	{16:1--16:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-349-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{318},
  editor =	{Sala, Pietro and Sioutis, Michael and Wang, Fusheng},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2024.16},
  URN =		{urn:nbn:de:0030-drops-212236},
  doi =		{10.4230/LIPIcs.TIME.2024.16},
  annote =	{Keywords: Timed Automata, Higher-Order Recursion Schemes, Tree Automata}
}
Document
Runtime Instrumentation for Reactive Components

Authors: Luca Aceto, Duncan Paul Attard, Adrian Francalanza, and Anna Ingólfsdóttir

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Reactive software calls for instrumentation methods that uphold the reactive attributes of systems. Runtime verification imposes another demand on the instrumentation, namely that the trace event sequences it reports to monitors are sound - that is, they reflect actual executions of the system under scrutiny. This paper presents RIARC, a novel decentralised instrumentation algorithm for outline monitors meeting these two demands. Asynchrony in reactive software complicates the instrumentation due to potential trace event loss or reordering. RIARC overcomes these challenges using a next-hop IP routing approach to rearrange and report events soundly to monitors. RIARC is validated in two ways. We subject its corresponding implementation to rigorous systematic testing to confirm its correctness. In addition, we assess this implementation via extensive empirical experiments, subjecting it to large realistic workloads to ascertain its reactiveness. Our results show that RIARC optimises its memory and scheduler usage to maintain latency feasible for soft real-time applications. We also compare RIARC to inline and centralised monitoring, revealing that it induces latency comparable to inline monitoring in moderate concurrency settings where software performs long-running, computationally intensive tasks, such as Big Data stream processing.

Cite as

Luca Aceto, Duncan Paul Attard, Adrian Francalanza, and Anna Ingólfsdóttir. Runtime Instrumentation for Reactive Components. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 2:1-2:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{aceto_et_al:LIPIcs.ECOOP.2024.2,
  author =	{Aceto, Luca and Attard, Duncan Paul and Francalanza, Adrian and Ing\'{o}lfsd\'{o}ttir, Anna},
  title =	{{Runtime Instrumentation for Reactive Components}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{2:1--2:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.2},
  URN =		{urn:nbn:de:0030-drops-208511},
  doi =		{10.4230/LIPIcs.ECOOP.2024.2},
  annote =	{Keywords: Runtime instrumentation, decentralised monitoring, reactive systems}
}
Document
Behavioural Up/down Casting For Statically Typed Languages

Authors: Lorenzo Bacchiani, Mario Bravetti, Marco Giunti, João Mota, and António Ravara

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
We provide support for polymorphism in static typestate analysis for object-oriented languages with upcasts and downcasts. Recent work has shown how typestate analysis can be embedded in the development of Java programs to obtain safer behaviour at runtime, e.g., absence of null pointer errors and protocol completion. In that approach, inheritance is supported at the price of limiting casts in source code, thus only allowing those at the beginning of the protocol, i.e., immediately after object creation, or at the end, in turn seriously affecting the applicability of the analysis. In this paper, we provide a solution to this open problem in typestate analysis by introducing a theory based on a richer data structure, named typestate tree, which supports upcast and downcast operations at any point of the protocol by leveraging union and intersection types. The soundness of the typestate tree-based approach has been mechanised in Coq. The theory can be applied to most object-oriented languages statically analysable through typestates, thus opening new scenarios for acceptance of programs exploiting inheritance and casting. To defend this thesis, we show an application of the theory by embedding the typestate tree mechanism in a Java-like object-oriented language, and proving its soundness.
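
The typestate discipline the paper extends can be illustrated outside its formal setting. The following Kotlin sketch is ours and purely hypothetical: it shows only the basic idea that a protocol state is a type and each operation returns the type of the next state, so reading a closed file is rejected at compile time. It does not model the paper's typestate trees, its union/intersection types, or its treatment of up/downcasts.

// Hypothetical sketch of plain typestate-style typing in Kotlin.
// Each protocol state is a separate type; operations return the next state.
class ClosedFile(private val path: String) {
    fun open(): OpenFile = OpenFile(path)        // Closed -> Open
}

class OpenFile internal constructor(private val path: String) {
    fun read(): String = "contents of $path"     // Open -> Open
    fun close(): ClosedFile = ClosedFile(path)   // Open -> Closed
}

fun main() {
    val f = ClosedFile("data.txt").open()
    println(f.read())
    val closed = f.close()
    // closed.read()  // does not compile: the protocol forbids reading a closed file
}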

Cite as

Lorenzo Bacchiani, Mario Bravetti, Marco Giunti, João Mota, and António Ravara. Behavioural Up/down Casting For Statically Typed Languages. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 5:1-5:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bacchiani_et_al:LIPIcs.ECOOP.2024.5,
  author =	{Bacchiani, Lorenzo and Bravetti, Mario and Giunti, Marco and Mota, Jo\~{a}o and Ravara, Ant\'{o}nio},
  title =	{{Behavioural Up/down Casting For Statically Typed Languages}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{5:1--5:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.5},
  URN =		{urn:nbn:de:0030-drops-208543},
  doi =		{10.4230/LIPIcs.ECOOP.2024.5},
  annote =	{Keywords: Behavioural types, object-oriented programming, subtyping, cast, typestates}
}
Document
Understanding Concurrency Bugs in Real-World Programs with Kotlin Coroutines

Authors: Bob Brockbernd, Nikita Koval, Arie van Deursen, and Burcu Kulahcioglu Ozkan

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
The Kotlin language has recently become prominent for developing both Android and server-side applications. These programs are typically designed to be fast and responsive, with asynchrony and concurrency at their core. To enable developers to write asynchronous and concurrent code safely and concisely, Kotlin provides built-in coroutines support. However, developers unfamiliar with the coroutines concept may write programs with subtle concurrency bugs and face unexpected program behaviors. Besides the traditional concurrency bug patterns, such as data races and deadlocks, these bugs may exhibit patterns related to the coroutine semantics. Understanding these coroutine-specific bug patterns in real-world Kotlin applications is essential in avoiding common mistakes and writing correct programs. In this paper, we present the first study of real-world concurrency bugs related to Kotlin coroutines. We examined 55 concurrency bug cases selected from 7 popular open-source repositories that use Kotlin coroutines, including IntelliJ IDEA, Firefox, and Ktor, and analyzed their bug characteristics and root causes. We identified common bug patterns related to asynchrony and Kotlin's coroutine semantics, presenting them with their root causes, misconceptions that led to the bugs, and strategies for their automated detection. Overall, this study provides insight into concurrent programming with Kotlin coroutines and its pitfalls, aiming to shed light on common bug patterns and foster further research and development of concurrency analysis tools for Kotlin programs.
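
As a concrete illustration of the kind of coroutine-related mistake the study classifies (this example is ours, not one of the 55 analysed cases): two coroutines launched on a multi-threaded dispatcher update shared mutable state without synchronisation, a data race that the sequential look of suspending code easily hides.

import kotlinx.coroutines.*

// Illustrative data race (not taken from the paper's bug corpus).
// Two coroutines on Dispatchers.Default increment a shared var without
// synchronisation; the printed total is typically less than 200000.
var counter = 0

fun main() = runBlocking {
    val jobs = List(2) {
        launch(Dispatchers.Default) {
            repeat(100_000) { counter++ }   // unsynchronised read-modify-write
        }
    }
    jobs.joinAll()
    println("counter = $counter")           // expected 200000, usually smaller
}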

Cite as

Bob Brockbernd, Nikita Koval, Arie van Deursen, and Burcu Kulahcioglu Ozkan. Understanding Concurrency Bugs in Real-World Programs with Kotlin Coroutines. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{brockbernd_et_al:LIPIcs.ECOOP.2024.8,
  author =	{Brockbernd, Bob and Koval, Nikita and van Deursen, Arie and Ozkan, Burcu Kulahcioglu},
  title =	{{Understanding Concurrency Bugs in Real-World Programs with Kotlin Coroutines}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.8},
  URN =		{urn:nbn:de:0030-drops-208579},
  doi =		{10.4230/LIPIcs.ECOOP.2024.8},
  annote =	{Keywords: Kotlin, coroutines, concurrency, asynchrony, software bugs}
}
Document
Fearless Asynchronous Communications with Timed Multiparty Session Protocols

Authors: Ping Hou, Nicolas Lagaillardie, and Nobuko Yoshida

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Session types using affinity and exception handling mechanisms have been developed to ensure the communication safety of protocols implemented in concurrent and distributed programming languages. Nevertheless, current affine session types are inadequate for specifying real-world asynchronous protocols, which are usually governed by time constraints and rely on timeout exceptions to prevent indefinite blocking while awaiting valid messages. This paper proposes the first formal integration of affinity, time constraints, timeouts, and time-failure handling based on multiparty session types for supporting reliability in asynchronous distributed systems. With this theory, we statically guarantee that asynchronous timed communication is deadlock-free and communication-safe, while being fearless - never hindered by timeout errors or abrupt terminations. To implement our theory, we introduce MultiCrusty^T, a Rust toolchain designed to facilitate the implementation of safe affine timed protocols. MultiCrusty^T leverages generic types and the time library to handle timed communications, integrated with optional types for affinity. We evaluate MultiCrusty^T by extending diverse examples from the literature to incorporate time and timeouts. We also showcase the correctness by construction of our approach by implementing various real-world use cases, including protocols from the Internet of Remote Things domain and real-time systems.
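
The failure mode the theory guards against - waiting indefinitely for a message that never arrives - can be pictured with an ordinary coroutine timeout. The Kotlin fragment below is only an analogy written for this summary; it does not use MultiCrusty^T, is not Rust, and carries none of the static guarantees the paper proves. It merely shows the timeout-and-recover pattern that the timed session types make safe by construction.

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

// Analogy only: a receive that must complete within a deadline, with an
// explicit recovery branch instead of blocking forever.
fun main() = runBlocking {
    val inbox = Channel<String>()

    launch {
        delay(500L)                   // the peer misses our 100 ms budget
        inbox.send("late reply")
    }

    val msg = withTimeoutOrNull(100L) { inbox.receive() }
    if (msg == null) {
        println("timeout: abandoning this branch of the protocol safely")
    } else {
        println("received: $msg")
    }
    coroutineContext.cancelChildren() // clean up the slow sender
}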

Cite as

Ping Hou, Nicolas Lagaillardie, and Nobuko Yoshida. Fearless Asynchronous Communications with Timed Multiparty Session Protocols. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 19:1-19:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hou_et_al:LIPIcs.ECOOP.2024.19,
  author =	{Hou, Ping and Lagaillardie, Nicolas and Yoshida, Nobuko},
  title =	{{Fearless Asynchronous Communications with Timed Multiparty Session Protocols}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{19:1--19:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.19},
  URN =		{urn:nbn:de:0030-drops-208681},
  doi =		{10.4230/LIPIcs.ECOOP.2024.19},
  annote =	{Keywords: Session Types, Concurrency, Time Failure Handling, Affinity, Timeout, Rust}
}
Document
Ozone: Fully Out-of-Order Choreographies

Authors: Dan Plyukhin, Marco Peressotti, and Fabrizio Montesi

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Choreographic programming is a paradigm for writing distributed applications. It allows programmers to write a single program, called a choreography, that can be compiled to generate correct implementations of each process in the application. Although choreographies provide good static guarantees, they can exhibit high latency when messages or processes are delayed. This is because processes in a choreography typically execute in a fixed, deterministic order and cannot adapt to the order in which messages arrive at runtime. In non-choreographic code, programmers can address this problem by allowing processes to execute out of order - for instance by using futures or reactive programming. However, in choreographic code, out-of-order process execution can lead to serious and subtle bugs, called communication integrity violations (CIVs). In this paper, we develop a model of choreographic programming for out-of-order processes that guarantees absence of CIVs and deadlocks. As an application of our approach, we also introduce an API for safe non-blocking communication via futures in the choreographic programming language Choral. The API allows processes to execute out of order, participate in multiple choreographies concurrently, and handle unordered data messages. We provide an illustrative evaluation of our API, showing that out-of-order execution can reduce latency and increase throughput by overlapping communication with computation.
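
The latency gain claimed for out-of-order execution comes from not fixing the order in which results are consumed. The Kotlin sketch below is ours; it does not use Choral or the Ozone API and provides none of the paper's communication-integrity guarantees. It only shows the underlying futures idea: start both requests, overlap them with local computation, then use whichever result is needed first.

import kotlinx.coroutines.*

// Overlapping communication with computation via futures (Deferred).
suspend fun fetchFrom(service: String, delayMs: Long): String {
    delay(delayMs)                    // stand-in for a network round trip
    return "reply from $service"
}

fun main() = runBlocking {
    val a: Deferred<String> = async { fetchFrom("A", 300) }
    val b: Deferred<String> = async { fetchFrom("B", 100) }

    val local = (1..1_000).sum()      // local work overlaps both requests

    // Results are consumed as needed, not in a fixed textual order.
    println("B said: ${b.await()}  (local = $local)")
    println("A said: ${a.await()}")
}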

Cite as

Dan Plyukhin, Marco Peressotti, and Fabrizio Montesi. Ozone: Fully Out-of-Order Choreographies. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 31:1-31:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{plyukhin_et_al:LIPIcs.ECOOP.2024.31,
  author =	{Plyukhin, Dan and Peressotti, Marco and Montesi, Fabrizio},
  title =	{{Ozone: Fully Out-of-Order Choreographies}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{31:1--31:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.31},
  URN =		{urn:nbn:de:0030-drops-208800},
  doi =		{10.4230/LIPIcs.ECOOP.2024.31},
  annote =	{Keywords: Choreographic programming, Asynchrony, Concurrency}
}
Document
Completeness of Asynchronous Session Tree Subtyping in Coq

Authors: Burak Ekici and Nobuko Yoshida

Published in: LIPIcs, Volume 309, 15th International Conference on Interactive Theorem Proving (ITP 2024)


Abstract
Multiparty session types (MPST) serve as a foundational framework for formally specifying and verifying message passing protocols. Asynchronous subtyping in MPST allows for typing optimised programs preserving type safety and deadlock freedom under asynchronous interactions where the message order is preserved and sending is non-blocking. The optimisation is obtained by message reordering, which allows for sending messages earlier or receiving them later. Sound subtyping algorithms have been extensively studied and implemented as part of various programming languages and tools including C, Rust and C-MPI. However, formalising all such permutations under sequencing, selection, branching and recursion in session types is an intricate task. Additionally, checking asynchronous subtyping has been proven to be undecidable. This paper introduces the first formalisation of asynchronous subtyping in MPST within the Coq proof assistant. We first decompose session types into session trees that do not involve branching and selection, and then establish a coinductive refinement relation over them to govern subtyping. To showcase our formalisation, we prove example subtyping schemas that appear in the literature, which cannot all be verified by any single existing decidable sound algorithm. Additionally, we take the (inductive) negation of the refinement relation from a prior work by Ghilezan et al. [Ghilezan et al., 2023] and re-implement it, significantly reducing the number of rules (from eighteen to eight). We establish the completeness of subtyping with respect to its negation in Coq, addressing the issues concerning the negation rules outlined in the previous work [Ghilezan et al., 2023]. In the formalisation, we use the greatest fixed point of the least fixed point technique, facilitated by the paco library, to define coinductive predicates. We employ parametrised coinduction to prove their properties. The formalisation consists of roughly 10K lines of Coq code, accessible at: https://github.com/ekiciburak/sessionTreeST/tree/itp2024.

Cite as

Burak Ekici and Nobuko Yoshida. Completeness of Asynchronous Session Tree Subtyping in Coq. In 15th International Conference on Interactive Theorem Proving (ITP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 309, pp. 13:1-13:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ekici_et_al:LIPIcs.ITP.2024.13,
  author =	{Ekici, Burak and Yoshida, Nobuko},
  title =	{{Completeness of Asynchronous Session Tree Subtyping in Coq}},
  booktitle =	{15th International Conference on Interactive Theorem Proving (ITP 2024)},
  pages =	{13:1--13:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-337-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{309},
  editor =	{Bertot, Yves and Kutsia, Temur and Norrish, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2024.13},
  URN =		{urn:nbn:de:0030-drops-207418},
  doi =		{10.4230/LIPIcs.ITP.2024.13},
  annote =	{Keywords: asynchronous multiparty session types, session trees, subtyping, Coq}
}
Document
Faster and Smaller Solutions of Obliging Games

Authors: Daniel Hausmann and Nir Piterman

Published in: LIPIcs, Volume 311, 35th International Conference on Concurrency Theory (CONCUR 2024)


Abstract
Obliging games have been introduced in the context of the game perspective on reactive synthesis in order to enforce a degree of cooperation between the to-be-synthesized system and the environment. Previous approaches to the analysis of obliging games have been small-step in the sense that they have been based on a reduction to standard (non-obliging) games in which single moves correspond to single moves in the original (obliging) game. Here, we propose a novel, large-step view on obliging games, reducing them to standard games in which single moves encode long-term behaviors in the original game. This not only allows us to give a meaningful definition of the environment winning in obliging games, but also leads to significantly improved bounds on both strategy sizes and the solution runtime for obliging games.

Cite as

Daniel Hausmann and Nir Piterman. Faster and Smaller Solutions of Obliging Games. In 35th International Conference on Concurrency Theory (CONCUR 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 311, pp. 28:1-28:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hausmann_et_al:LIPIcs.CONCUR.2024.28,
  author =	{Hausmann, Daniel and Piterman, Nir},
  title =	{{Faster and Smaller Solutions of Obliging Games}},
  booktitle =	{35th International Conference on Concurrency Theory (CONCUR 2024)},
  pages =	{28:1--28:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-339-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{311},
  editor =	{Majumdar, Rupak and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2024.28},
  URN =		{urn:nbn:de:0030-drops-208008},
  doi =		{10.4230/LIPIcs.CONCUR.2024.28},
  annote =	{Keywords: Two-player games, reactive synthesis, Emerson-Lei games, parity games}
}
Document
Breaking RSA Generically Is Equivalent to Factoring, with Preprocessing

Authors: Dana Dachman-Soled, Julian Loss, and Adam O'Neill

Published in: LIPIcs, Volume 304, 5th Conference on Information-Theoretic Cryptography (ITC 2024)


Abstract
We investigate the relationship between the classical RSA and factoring problems when preprocessing is considered. In such a model, adversaries can use an unbounded amount of precomputation to produce an "advice" string to then use during the online phase, when a problem instance becomes known. Previous work (e.g., [Bernstein, Lange ASIACRYPT '13]) has shown that preprocessing attacks significantly improve the runtime of the best-known factoring algorithms. Due to these improvements, we ask whether the relationship between factoring and RSA fundamentally changes when preprocessing is allowed. Specifically, we investigate whether there is a superpolynomial gap between the runtime of the best attack on RSA with preprocessing and on factoring with preprocessing. Our main result rules this out with respect to algorithms that perform generic computation on the RSA instance x^e mod N yet arbitrary computation on the modulus N, namely a careful adaptation of the well-known generic ring model of Aggarwal and Maurer (Eurocrypt 2009) to the preprocessing setting. In particular, in this setting we show the existence of a factoring algorithm with polynomially related parameters, for any setting of RSA parameters. Our main technical contribution is a set of new information-theoretic techniques that allow us to handle or eliminate cases in which the Aggarwal and Maurer result does not yield a factoring algorithm in the standard model with parameters that are polynomially related to those of the RSA algorithm. These techniques include two novel compression arguments, and a variant of the Fiat-Naor/Hellman tables construction that is tailored to the factoring setting.
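
For orientation, the classical easy direction of the relationship studied here is that factoring breaks RSA: knowing the factors of N lets one invert e modulo φ(N) and decrypt. The Kotlin snippet below, with toy parameters, only demonstrates that textbook arithmetic; it says nothing about preprocessing, advice strings, or the generic ring model used in the paper.

import java.math.BigInteger

// Toy demonstration of "factoring => breaking RSA": given p and q,
// recover the private exponent d and decrypt a ciphertext x^e mod N.
fun main() {
    val p = BigInteger.valueOf(61)
    val q = BigInteger.valueOf(53)
    val n = p * q                              // public modulus N = 3233
    val e = BigInteger.valueOf(17)             // public exponent
    val phi = (p - BigInteger.ONE) * (q - BigInteger.ONE)
    val d = e.modInverse(phi)                  // private exponent from the factors

    val m = BigInteger.valueOf(42)             // message
    val c = m.modPow(e, n)                     // ciphertext x^e mod N
    println("decrypted = ${c.modPow(d, n)}")   // prints: decrypted = 42
}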

Cite as

Dana Dachman-Soled, Julian Loss, and Adam O'Neill. Breaking RSA Generically Is Equivalent to Factoring, with Preprocessing. In 5th Conference on Information-Theoretic Cryptography (ITC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 304, pp. 8:1-8:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dachmansoled_et_al:LIPIcs.ITC.2024.8,
  author =	{Dachman-Soled, Dana and Loss, Julian and O'Neill, Adam},
  title =	{{Breaking RSA Generically Is Equivalent to Factoring, with Preprocessing}},
  booktitle =	{5th Conference on Information-Theoretic Cryptography (ITC 2024)},
  pages =	{8:1--8:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-333-1},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{304},
  editor =	{Aggarwal, Divesh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2024.8},
  URN =		{urn:nbn:de:0030-drops-205163},
  doi =		{10.4230/LIPIcs.ITC.2024.8},
  annote =	{Keywords: RSA, factoring, generic ring model, preprocessing}
}
Document
Separator Based Data Reduction for the Maximum Cut Problem

Authors: Jonas Charfreitag, Christine Dahn, Michael Kaibel, Philip Mayer, Petra Mutzel, and Lukas Schürmann

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
Preprocessing is an important ingredient for solving the maximum cut problem to optimality on real-world graphs. In our work, we derive a new framework for data reduction rules based on vertex separators. Vertex separators are sets of vertices, whose removal increases the number of connected components of a graph. Certain small separators can be found in linear time, allowing for an efficient combination of our framework with existing data reduction rules. Additionally, we complement known data reduction rules for triangles with a new one. In our computational experiments on established benchmark instances, we clearly show the effectiveness and efficiency of our proposed data reduction techniques. The resulting graphs are significantly smaller than in earlier studies and sometimes no vertex is left, so preprocessing has fully solved the instance to optimality. The introduced techniques are also shown to offer significant speedup potential for an exact state-of-the-art solver and to help a state-of-the-art heuristic to produce solutions of higher quality.
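
To make the separator notion concrete (this sketch is ours and implements none of the paper's reduction rules): a vertex separator is a set whose removal disconnects the graph, which is exactly what lets the remaining parts be handled independently. The Kotlin snippet below simply counts connected components before and after deleting a candidate set.

// Illustration of the vertex-separator notion; not one of the paper's rules.
fun components(n: Int, edges: List<Pair<Int, Int>>, removed: Set<Int>): Int {
    val adj = Array(n) { mutableListOf<Int>() }
    for ((u, v) in edges) { adj[u].add(v); adj[v].add(u) }
    val seen = BooleanArray(n)
    var count = 0
    for (s in 0 until n) {
        if (s in removed || seen[s]) continue
        count++
        val stack = ArrayDeque(listOf(s))
        seen[s] = true
        while (stack.isNotEmpty()) {
            val u = stack.removeLast()
            for (v in adj[u]) if (v !in removed && !seen[v]) { seen[v] = true; stack.addLast(v) }
        }
    }
    return count
}

fun main() {
    // Path 0-1-2-3-4: vertex 2 is a separator of size one.
    val edges = listOf(0 to 1, 1 to 2, 2 to 3, 3 to 4)
    println(components(5, edges, emptySet()))   // 1 component
    println(components(5, edges, setOf(2)))     // 2 components: the graph splits
}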

Cite as

Jonas Charfreitag, Christine Dahn, Michael Kaibel, Philip Mayer, Petra Mutzel, and Lukas Schürmann. Separator Based Data Reduction for the Maximum Cut Problem. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 4:1-4:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{charfreitag_et_al:LIPIcs.SEA.2024.4,
  author =	{Charfreitag, Jonas and Dahn, Christine and Kaibel, Michael and Mayer, Philip and Mutzel, Petra and Sch\"{u}rmann, Lukas},
  title =	{{Separator Based Data Reduction for the Maximum Cut Problem}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{4:1--4:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.4},
  URN =		{urn:nbn:de:0030-drops-203698},
  doi =		{10.4230/LIPIcs.SEA.2024.4},
  annote =	{Keywords: Data Reduction, Maximum Cut, Vertex Separators}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Decidability of Graph Neural Networks via Logical Characterizations

Authors: Michael Benedikt, Chia-Hsuan Lu, Boris Motik, and Tony Tan

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We present results concerning the expressiveness and decidability of a popular graph learning formalism, graph neural networks (GNNs), exploiting connections with logic. We use a family of recently-discovered decidable logics involving "Presburger quantifiers". We show how to use these logics to measure the expressiveness of classes of GNNs, in some cases getting exact correspondences between the expressiveness of logics and GNNs. We also employ the logics, and the techniques used to analyze them, to obtain decision procedures for verification problems over GNNs. We complement this with undecidability results for static analysis problems involving the logics, as well as for GNN verification problems.

Cite as

Michael Benedikt, Chia-Hsuan Lu, Boris Motik, and Tony Tan. Decidability of Graph Neural Networks via Logical Characterizations. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 127:1-127:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{benedikt_et_al:LIPIcs.ICALP.2024.127,
  author =	{Benedikt, Michael and Lu, Chia-Hsuan and Motik, Boris and Tan, Tony},
  title =	{{Decidability of Graph Neural Networks via Logical Characterizations}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{127:1--127:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.127},
  URN =		{urn:nbn:de:0030-drops-202708},
  doi =		{10.4230/LIPIcs.ICALP.2024.127},
  annote =	{Keywords: Logic, Graph Neural Networks}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
Regular Expressions with Backreferences and Lookaheads Capture NLOG

Authors: Yuya Uezato

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Backreferences and lookaheads are vital features that make classical regular expressions (REGEX) practical. Although these features have been widely used, understanding of their unrestricted combination has been limited. In practice, most likely no implementation fully supports them. Theoretically, while some studies have addressed these features separately, few have dared to combine them. Those few studies showed that the amalgamation of these features significantly enhances the expressiveness of REGEX. However, no acceptable expressivity bound for REWBLk - REGEX with backreferences and lookaheads - has been established. We elucidate this by establishing that REWBLk coincides with NLOG, the class of languages accepted by log-space nondeterministic Turing machines (NTMs). In translating REWBLk to log-space NTMs, negative lookaheads are the most challenging part, since they essentially require complementing log-space NTMs in nondeterministic log-space. To address this problem, we revisit the Immerman-Szelepcsényi theorem. In addition, we employ log-space nested-oracle NTMs to naturally handle the nested lookaheads of REWBLk. Utilizing such oracle machines, we also present the new result that the membership problem of REWBLk is PSPACE-complete.
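
A small reminder of why these two features add power (the examples are ours, not taken from the paper): a backreference already lets a single pattern recognise the non-regular language of squares ww, and lookaheads impose additional zero-width constraints at the same position. Both run on ordinary JVM regexes from Kotlin.

// Backreferences and lookaheads in plain JVM regular expressions.
fun main() {
    // Backreference: matches exactly the squares w·w, a non-regular language.
    val square = Regex("""^(.+)\1$""")
    println(square.matches("abcabc"))    // true
    println(square.matches("abcabd"))    // false

    // Positive and negative lookahead: must contain a digit, no whitespace.
    val constrained = Regex("""^(?=.*\d)(?!.*\s).+$""")
    println(constrained.matches("pass1"))    // true
    println(constrained.matches("pass 1"))   // false
}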

Cite as

Yuya Uezato. Regular Expressions with Backreferences and Lookaheads Capture NLOG. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 155:1-155:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{uezato:LIPIcs.ICALP.2024.155,
  author =	{Uezato, Yuya},
  title =	{{Regular Expressions with Backreferences and Lookaheads Capture NLOG}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{155:1--155:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.155},
  URN =		{urn:nbn:de:0030-drops-202984},
  doi =		{10.4230/LIPIcs.ICALP.2024.155},
  annote =	{Keywords: Regular Expression, Automata Theory, Nondeterministic Log-Space}
}
Document
Position
Grounding Stream Reasoning Research

Authors: Pieter Bonte, Jean-Paul Calbimonte, Daniel de Leng, Daniele Dell'Aglio, Emanuele Della Valle, Thomas Eiter, Federico Giannini, Fredrik Heintz, Konstantin Schekotihin, Danh Le-Phuoc, Alessandra Mileo, Patrik Schneider, Riccardo Tommasini, Jacopo Urbani, and Giacomo Ziffer

Published in: TGDK, Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge, Volume 2, Issue 1


Abstract
In the last decade, there has been a growing interest in applying AI technologies to implement complex data analytics over data streams. To this end, researchers in various fields have been organising a yearly event called the "Stream Reasoning Workshop" to share perspectives, challenges, and experiences around this topic. In this paper, the previous organisers of the workshops and other community members provide a summary of the main research results that have been discussed during the first six editions of the event. These results can be categorised into four main research areas: The first is concerned with the technological challenges related to handling large data streams. The second area aims at adapting and extending existing semantic technologies to data streams. The third and fourth areas focus on how to implement reasoning techniques, either considering deductive or inductive techniques, to extract new and valuable knowledge from the data in the stream. This summary is written not only to provide a crystallisation of the field, but also to point out distinctive traits of the stream reasoning community. Moreover, it also provides a foundation for future research by enumerating a list of use cases and open challenges, to stimulate others to join this exciting research area.

Cite as

Pieter Bonte, Jean-Paul Calbimonte, Daniel de Leng, Daniele Dell'Aglio, Emanuele Della Valle, Thomas Eiter, Federico Giannini, Fredrik Heintz, Konstantin Schekotihin, Danh Le-Phuoc, Alessandra Mileo, Patrik Schneider, Riccardo Tommasini, Jacopo Urbani, and Giacomo Ziffer. Grounding Stream Reasoning Research. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 2:1-2:47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{bonte_et_al:TGDK.2.1.2,
  author =	{Bonte, Pieter and Calbimonte, Jean-Paul and de Leng, Daniel and Dell'Aglio, Daniele and Della Valle, Emanuele and Eiter, Thomas and Giannini, Federico and Heintz, Fredrik and Schekotihin, Konstantin and Le-Phuoc, Danh and Mileo, Alessandra and Schneider, Patrik and Tommasini, Riccardo and Urbani, Jacopo and Ziffer, Giacomo},
  title =	{{Grounding Stream Reasoning Research}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:47},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.2},
  URN =		{urn:nbn:de:0030-drops-198597},
  doi =		{10.4230/TGDK.2.1.2},
  annote =	{Keywords: Stream Reasoning, Stream Processing, RDF streams, Streaming Linked Data, Continuous query processing, Temporal Logics, High-performance computing, Databases}
}
Document
Survey
Semantic Web: Past, Present, and Future

Authors: Ansgar Scherp, Gerd Groener, Petr Škoda, Katja Hose, and Maria-Esther Vidal

Published in: TGDK, Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge, Volume 2, Issue 1


Abstract
Ever since the vision was formulated, the Semantic Web has inspired many generations of innovations. Semantic technologies have been used to share vast amounts of information on the Web, enhance them with semantics to give them meaning, and enable inference and reasoning on them. Throughout the years, semantic technologies, and in particular knowledge graphs, have been used in search engines, data integration, enterprise settings, and machine learning. In this paper, we recap the classical concepts and foundations of the Semantic Web as well as modern and recent concepts and applications, building upon these foundations. The classical topics we cover include knowledge representation, creating and validating knowledge on the Web, reasoning and linking, and distributed querying. We enhance this classical view of the so-called "Semantic Web Layer Cake" with an update of recent concepts that include provenance, security and trust, as well as a discussion of practical impacts from industry-led contributions. We conclude with an outlook on the future directions of the Semantic Web. This is a living document. If you would like to contribute, please contact the first author and visit: https://github.com/ascherp/semantic-web-primer

Cite as

Ansgar Scherp, Gerd Groener, Petr Škoda, Katja Hose, and Maria-Esther Vidal. Semantic Web: Past, Present, and Future. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 3:1-3:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{scherp_et_al:TGDK.2.1.3,
  author =	{Scherp, Ansgar and Groener, Gerd and \v{S}koda, Petr and Hose, Katja and Vidal, Maria-Esther},
  title =	{{Semantic Web: Past, Present, and Future}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:37},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.3},
  URN =		{urn:nbn:de:0030-drops-198607},
  doi =		{10.4230/TGDK.2.1.3},
  annote =	{Keywords: Linked Open Data, Semantic Web Graphs, Knowledge Graphs}
}
Document
Survey
Logics for Conceptual Data Modelling: A Review

Authors: Pablo R. Fillottrani and C. Maria Keet

Published in: TGDK, Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge, Volume 2, Issue 1


Abstract
Information modelling for databases and object-oriented information systems avails of conceptual data modelling languages such as EER and UML Class Diagrams. Many attempts exist to add logical rigour to them, for various reasons and with disparate strengths. In this paper we aim to provide a structured overview of the many efforts. We focus on aims, approaches to the formalisation, including key dimensions of choice points, popular logics used, and the main relevant reasoning services. We close with current challenges and research directions.

Cite as

Pablo R. Fillottrani and C. Maria Keet. Logics for Conceptual Data Modelling: A Review. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 4:1-4:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{fillottrani_et_al:TGDK.2.1.4,
  author =	{Fillottrani, Pablo R. and Keet, C. Maria},
  title =	{{Logics for Conceptual Data Modelling: A Review}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:30},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.4},
  URN =		{urn:nbn:de:0030-drops-198616},
  doi =		{10.4230/TGDK.2.1.4},
  annote =	{Keywords: Conceptual Data Modelling, EER, UML, Description Logics, OWL}
}
  • Refine by Author
  • 9 Lange, Martin
  • 7 Bruse, Florian
  • 2 Kernberger, Daniel
  • 2 Yoshida, Nobuko
  • 1 Aceto, Luca
  • Show More...

  • Refine by Classification
  • 6 Theory of computation → Modal and temporal logics
  • 4 Theory of computation → Program specifications
  • 3 Theory of computation → Formal languages and automata theory
  • 3 Theory of computation → Logic and verification
  • 2 Computing methodologies → Concurrent computing methodologies
  • Show More...

  • Refine by Keyword
  • 5 temporal logic
  • 4 formal specification
  • 2 Concurrency
  • 2 expressive power
  • 2 hybrid logics
  • Show More...

  • Refine by Type
  • 25 document

  • Refine by Publication Year
  • 15 2024
  • 3 2021
  • 2 2020
  • 1 2016
  • 1 2017
  • Show More...
