16 Search Results for "Wilhelm, Martin"


Document
Exact Minimum Weight Spanners via Column Generation

Authors: Fritz Bökler, Markus Chimani, Henning Jasper, and Mirko H. Wagner

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Given a weighted graph G, a minimum weight α-spanner is a least-weight subgraph H ⊆ G that preserves minimum distances between all node pairs up to a factor of α. There are many results on heuristics and approximation algorithms, including a recent investigation of their practical performance [Markus Chimani and Finn Stutzenstein, 2022]. Exact approaches, in contrast, have long been denounced as impractical: The first exact ILP (integer linear program) method [Sigurd and Zachariasen, 2004] from 2004 is based on a model with exponentially many path variables, solved via column generation. A second approach [Ahmed et al., 2019], modeling via arc-based multicommodity flow, was presented in 2019. In both cases, only graphs with 40-100 nodes were reported to be solvable. In this paper, we briefly report on a theoretical comparison between these two models from a polyhedral point of view, and then concentrate on improvements and engineering aspects. We evaluate their performance in a large-scale empirical study. We report that our tuned column generation approach, based on multicriteria shortest path computations, is able to solve instances with over 16 000 nodes within 13 min. Furthermore, now knowing optimal solutions for larger graphs, we are able to investigate the quality of the strongest known heuristic on reasonably sized instances for the first time.
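
As a concrete illustration of the object being optimized (an illustrative sketch only, not the paper's column-generation ILP), the following Python fragment checks whether a subgraph H is an α-spanner of a weighted graph G; it assumes the networkx package, and the helper name is_alpha_spanner is made up for the example.

import networkx as nx

def is_alpha_spanner(G, H, alpha):
    # H is an alpha-spanner of G iff dist_H(u, v) <= alpha * dist_G(u, v)
    # for every node pair that is connected in G.
    dG = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    dH = dict(nx.all_pairs_dijkstra_path_length(H, weight="weight"))
    for u in G:
        for v, d in dG[u].items():
            if u != v and dH.get(u, {}).get(v, float("inf")) > alpha * d:
                return False
    return True

G = nx.cycle_graph(6)                                        # unit edge weights
H = G.edge_subgraph([(i, i + 1) for i in range(5)]).copy()   # drop one cycle edge
print(is_alpha_spanner(G, H, 5))   # True
print(is_alpha_spanner(G, H, 2))   # False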

Cite as

Fritz Bökler, Markus Chimani, Henning Jasper, and Mirko H. Wagner. Exact Minimum Weight Spanners via Column Generation. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 30:1-30:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{bokler_et_al:LIPIcs.ESA.2024.30,
  author =	{B\"{o}kler, Fritz and Chimani, Markus and Jasper, Henning and Wagner, Mirko H.},
  title =	{{Exact Minimum Weight Spanners via Column Generation}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{30:1--30:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.30},
  URN =		{urn:nbn:de:0030-drops-211012},
  doi =		{10.4230/LIPIcs.ESA.2024.30},
  annote =	{Keywords: Graph spanners, ILP, algorithm engineering, experimental study}
}
Document
APPROX
Online Time-Windows TSP with Predictions

Authors: Shuchi Chawla and Dimitris Christou

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
In the Time-Windows TSP (TW-TSP) we are given requests at different locations on a network; each request is endowed with a reward and an interval of time; the goal is to find a tour that collects as much reward as possible within the corresponding time windows. For the online version of this problem, where each request is revealed at the start of its time window, no finite competitive ratio can be obtained. We consider a version of the problem where the algorithm is presented with predictions of where and when the online requests will appear, without any knowledge of the quality of this side information. Vehicle routing problems such as the TW-TSP can be very sensitive to errors or changes in the input due to the hard time-window constraints, and it is unclear whether imperfect predictions can be used to obtain a finite competitive ratio. We show that good performance can be achieved by explicitly building slack into the solution. Our main result is an online algorithm that achieves a competitive ratio logarithmic in the diameter of the underlying network, matching the performance of the best offline algorithm to within factors that depend on the quality of the provided predictions. The competitive ratio degrades smoothly as a function of the prediction quality, and we show that this dependence is tight within constant factors.
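
For intuition about the objective (a toy illustration only, not the paper's online algorithm), the Python snippet below evaluates a fixed tour on a small TW-TSP instance: a request contributes its reward only if the tour reaches its location within the request's time window. The instance, the line metric, and the helper name tour_reward are invented for the example.

def tour_reward(tour, requests, dist, start=0, t0=0):
    # requests: location -> (reward, window_start, window_end)
    t, here, total = t0, start, 0
    for loc in tour:
        t += dist(here, loc)         # travel to the next location
        here = loc
        if loc in requests:
            reward, lo, hi = requests[loc]
            if lo <= t <= hi:        # reward counts only inside the window
                total += reward
    return total

requests = {3: (5, 2, 4), 7: (2, 5, 9)}      # two requests on a line metric
dist = lambda a, b: abs(a - b)
print(tour_reward([3, 7], requests, dist))   # 7: both windows are met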

Cite as

Shuchi Chawla and Dimitris Christou. Online Time-Windows TSP with Predictions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 2:1-2:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{chawla_et_al:LIPIcs.APPROX/RANDOM.2024.2,
  author =	{Chawla, Shuchi and Christou, Dimitris},
  title =	{{Online Time-Windows TSP with Predictions}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{2:1--2:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.2},
  URN =		{urn:nbn:de:0030-drops-209954},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.2},
  annote =	{Keywords: Travelling Salesman Problem, Predictions, Learning-Augmented Algorithms, Approximation}
}
Document
A Dynamic Logic for Symbolic Execution for the Smart Contract Programming Language Michelson

Authors: Barnabas Arvay, Thi Thu Ha Doan, and Peter Thiemann

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Verification of smart contracts is an important topic in the context of blockchain technology. We study an approach to verification that is based on symbolic execution. As a formal basis for symbolic execution, we design a dynamic logic for Michelson, the smart contract language of the Tezos blockchain, and prove its soundness in the proof assistant Agda. Towards the soundness proof we formalize the concrete semantics as well as its symbolic counterpart in a unified setting. The logic encompasses single contract runs as well as inter-contract runs chained in a single transaction.

Cite as

Barnabas Arvay, Thi Thu Ha Doan, and Peter Thiemann. A Dynamic Logic for Symbolic Execution for the Smart Contract Programming Language Michelson. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 3:1-3:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{arvay_et_al:LIPIcs.ECOOP.2024.3,
  author =	{Arvay, Barnabas and Doan, Thi Thu Ha and Thiemann, Peter},
  title =	{{A Dynamic Logic for Symbolic Execution for the Smart Contract Programming Language Michelson}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{3:1--3:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.3},
  URN =		{urn:nbn:de:0030-drops-208529},
  doi =		{10.4230/LIPIcs.ECOOP.2024.3},
  annote =	{Keywords: Smart Contract, Blockchain, Formal Verification, Symbolic Execution}
}
Document
InferType: A Compiler Toolkit for Implementing Efficient Constraint-Based Type Inference

Authors: Senxi Li, Tetsuro Yamazaki, and Shigeru Chiba

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Supporting automatic type inference is in demand in modern language development. It is a challenging task, yet appropriate supporting toolkits are scarce. This paper presents InferType, a Java library that helps implement constraint-based type inference. A compiler writer uses InferType’s classes and methods to describe type constraints and typing rules for type inference; InferType then performs constraint solving by translation to the Z3 SMT solver. InferType is also equipped with an optimization technique we developed, which reduces the search space for type variables by pre-computing their structures, mitigating the performance bottleneck of constraint solving with deeply nested types. We use InferType to implement type inference for a subset of Python and conduct experiments to evaluate how the optimization affects the performance of type inference. Our results show that InferType’s optimization can greatly mitigate the performance bottleneck for programs with deeply nested types and can potentially improve the performance for large nested types.
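
To make the general recipe concrete (a minimal sketch of the idea only, not InferType's actual Java API), the Python fragment below uses the z3-solver package to infer the types of two toy variables from generated constraints; the type names IntT and BoolT are invented for the example.

from z3 import Datatype, Const, Solver, sat

# A tiny universe of types, modelled as an algebraic datatype.
Type = Datatype("Type")
Type.declare("IntT")
Type.declare("BoolT")
Type = Type.create()

t_x = Const("t_x", Type)   # type variable for x
t_y = Const("t_y", Type)   # type variable for y

s = Solver()
# Constraints generated from the program  x = 1;  y = (x < 2):
s.add(t_x == Type.IntT)    # the literal 1 forces x to be an integer
s.add(t_y == Type.BoolT)   # a comparison yields a boolean

if s.check() == sat:
    m = s.model()
    print("x :", m[t_x], "  y :", m[t_y])   # x : IntT   y : BoolT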

Cite as

Senxi Li, Tetsuro Yamazaki, and Shigeru Chiba. InferType: A Compiler Toolkit for Implementing Efficient Constraint-Based Type Inference. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 23:1-23:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{li_et_al:LIPIcs.ECOOP.2024.23,
  author =	{Li, Senxi and Yamazaki, Tetsuro and Chiba, Shigeru},
  title =	{{InferType: A Compiler Toolkit for Implementing Efficient Constraint-Based Type Inference}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{23:1--23:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.23},
  URN =		{urn:nbn:de:0030-drops-208728},
  doi =		{10.4230/LIPIcs.ECOOP.2024.23},
  annote =	{Keywords: Domain Specific Languages, Compilation, Static Analysis, Type Inference, Constraint Solving, SMT Solver}
}
Document
A Verified Earley Parser

Authors: Martin Rau and Tobias Nipkow

Published in: LIPIcs, Volume 309, 15th International Conference on Interactive Theorem Proving (ITP 2024)


Abstract
Earley parsing is a top-down parsing technique capable of handling arbitrary context-free grammars. We present a functional implementation of an Earley parser verified using the interactive theorem prover Isabelle/HOL. Our formalization builds upon Cliff Jones' extensive, refinement-based paper proof. We implement and prove soundness and completeness of a functional recognizer modeling Jay Earley’s original imperative implementation, and extend it with the data structures necessary to construct parse trees, following the work of Elizabeth Scott. Building upon this foundation, we develop a functional parser and prove its soundness. We round off the paper by providing an informal argument and empirical data regarding the running time and space complexity of our implementation.
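
For readers unfamiliar with the algorithm being verified, here is a compact, unverified Python recognizer for epsilon-free grammars that follows the classic predict/scan/complete scheme (an illustrative sketch only, not derived from the Isabelle/HOL formalization).

def earley_recognize(grammar, start, tokens):
    # grammar: nonterminal -> list of right-hand sides (tuples of symbols);
    # assumes no epsilon productions. An item is (lhs, rhs, dot, origin).
    chart = [set() for _ in range(len(tokens) + 1)]
    chart[0] = {(start, rhs, 0, 0) for rhs in grammar[start]}
    for k in range(len(tokens) + 1):
        agenda = list(chart[k])
        while agenda:
            lhs, rhs, dot, origin = agenda.pop()
            if dot < len(rhs):
                sym = rhs[dot]
                if sym in grammar:                            # predict
                    for prod in grammar[sym]:
                        item = (sym, prod, 0, k)
                        if item not in chart[k]:
                            chart[k].add(item); agenda.append(item)
                elif k < len(tokens) and tokens[k] == sym:    # scan
                    chart[k + 1].add((lhs, rhs, dot + 1, origin))
            else:                                             # complete
                for l2, r2, d2, o2 in list(chart[origin]):
                    if d2 < len(r2) and r2[d2] == lhs:
                        item = (l2, r2, d2 + 1, o2)
                        if item not in chart[k]:
                            chart[k].add(item); agenda.append(item)
    return any(l == start and d == len(r) and o == 0
               for l, r, d, o in chart[len(tokens)])

grammar = {"S": [("S", "+", "S"), ("n",)]}                    # S -> S + S | n
print(earley_recognize(grammar, "S", ["n", "+", "n"]))        # True
print(earley_recognize(grammar, "S", ["n", "+"]))             # False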

Cite as

Martin Rau and Tobias Nipkow. A Verified Earley Parser. In 15th International Conference on Interactive Theorem Proving (ITP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 309, pp. 31:1-31:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{rau_et_al:LIPIcs.ITP.2024.31,
  author =	{Rau, Martin and Nipkow, Tobias},
  title =	{{A Verified Earley Parser}},
  booktitle =	{15th International Conference on Interactive Theorem Proving (ITP 2024)},
  pages =	{31:1--31:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-337-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{309},
  editor =	{Bertot, Yves and Kutsia, Temur and Norrish, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2024.31},
  URN =		{urn:nbn:de:0030-drops-207596},
  doi =		{10.4230/LIPIcs.ITP.2024.31},
  annote =	{Keywords: Verification, Parsers, Earley, Isabelle}
}
Document
Consistent Ultrafinitist Logic

Authors: Michał J. Gajda

Published in: LIPIcs, Volume 303, 29th International Conference on Types for Proofs and Programs (TYPES 2023)


Abstract
Ultrafinitism postulates that we can only compute on relatively short objects, and that numbers beyond a certain value are not available. This approach would also forbid many forms of infinitary reasoning and allow removing certain paradoxes stemming from enumeration theorems. For a computational application of ultrafinitist logic, we need more than a proof system: we need a logical framework in which proofs, programs, and theorems can all be expressed. We present its inference rules, reduction relation, and self-encoding, which allows the properties of ultrafinitist logic to be proved directly within itself. We also provide a justification of why it can express all bounded Turing programs, and thus serve as a "logic of computability".

Cite as

Michał J. Gajda. Consistent Ultrafinitist Logic. In 29th International Conference on Types for Proofs and Programs (TYPES 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 303, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{gajda:LIPIcs.TYPES.2023.5,
  author =	{Gajda, Micha{\l} J.},
  title =	{{Consistent Ultrafinitist Logic}},
  booktitle =	{29th International Conference on Types for Proofs and Programs (TYPES 2023)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-332-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{303},
  editor =	{Kesner, Delia and Reyes, Eduardo Hermo and van den Berg, Benno},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TYPES.2023.5},
  URN =		{urn:nbn:de:0030-drops-204833},
  doi =		{10.4230/LIPIcs.TYPES.2023.5},
  annote =	{Keywords: ultrafinitism, bounded Turing completeness, logic of computability, decidable logic, explicit complexity, strict finitism}
}
Document
The Platin Multi-Target Worst-Case Analysis Tool

Authors: Emad Jacob Maroun, Eva Dengler, Christian Dietrich, Stefan Hepp, Henriette Herzog, Benedikt Huber, Jens Knoop, Daniel Wiltsche-Prokesch, Peter Puschner, Phillip Raffeck, Martin Schoeberl, Simon Schuster, and Peter Wägemann

Published in: OASIcs, Volume 121, 22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024)


Abstract
With the increasing number of applications that require reliable runtime guarantees, the relevance of static worst-case analysis tools that can provide such guarantees grows. These tools use a model of the underlying hardware to determine resource-consumption bounds of application tasks, so that given resource budgets, such as the deadlines of real-time tasks, can be met at runtime. This paper presents enhancements to the Platin worst-case analysis tool developed since its original release more than ten years ago. These novelties comprise Platin’s support for new architectures (i.e., ARMv6-M, RISC-V, and AVR) in addition to the previous backends for Patmos and ARMv7-M. Further, Platin now features system-wide analysis methods and annotation support to express system-level constraints. Besides an overview of these enhancements, we evaluate Platin’s accuracy for the two supported architecture implementations, Patmos and RISC-V.

Cite as

Emad Jacob Maroun, Eva Dengler, Christian Dietrich, Stefan Hepp, Henriette Herzog, Benedikt Huber, Jens Knoop, Daniel Wiltsche-Prokesch, Peter Puschner, Phillip Raffeck, Martin Schoeberl, Simon Schuster, and Peter Wägemann. The Platin Multi-Target Worst-Case Analysis Tool. In 22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024). Open Access Series in Informatics (OASIcs), Volume 121, pp. 2:1-2:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{maroun_et_al:OASIcs.WCET.2024.2,
  author =	{Maroun, Emad Jacob and Dengler, Eva and Dietrich, Christian and Hepp, Stefan and Herzog, Henriette and Huber, Benedikt and Knoop, Jens and Wiltsche-Prokesch, Daniel and Puschner, Peter and Raffeck, Phillip and Schoeberl, Martin and Schuster, Simon and W\"{a}gemann, Peter},
  title =	{{The Platin Multi-Target Worst-Case Analysis Tool}},
  booktitle =	{22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024)},
  pages =	{2:1--2:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-346-1},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{121},
  editor =	{Carle, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2024.2},
  URN =		{urn:nbn:de:0030-drops-204704},
  doi =		{10.4230/OASIcs.WCET.2024.2},
  annote =	{Keywords: worst-case resource consumption, WCET, static analysis tool}
}
Document
Invited Paper
Invited Paper: Worst-Case Execution Time Analysis of Lingua Franca Applications

Authors: Martin Schoeberl, Ehsan Khodadad, Shaokai Lin, Emad Jacob Maroun, Luca Pezzarossa, and Edward A. Lee

Published in: OASIcs, Volume 121, 22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024)


Abstract
Real-time systems need to prove that all deadlines will be met. To enable this proof, the full stack of the system must be analyzable, and the right tools must be available. This includes the processor (execution platform), the runtime system, the compiler, and the WCET analysis tool. This paper presents a combination of the time-predictable processor Patmos, the coordination language Lingua Franca, and the WCET analysis tool Platin. We show how carefully written Lingua Franca programs enable static WCET analysis to build safety-critical applications.

Cite as

Martin Schoeberl, Ehsan Khodadad, Shaokai Lin, Emad Jacob Maroun, Luca Pezzarossa, and Edward A. Lee. Invited Paper: Worst-Case Execution Time Analysis of Lingua Franca Applications. In 22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024). Open Access Series in Informatics (OASIcs), Volume 121, pp. 4:1-4:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{schoeberl_et_al:OASIcs.WCET.2024.4,
  author =	{Schoeberl, Martin and Khodadad, Ehsan and Lin, Shaokai and Maroun, Emad Jacob and Pezzarossa, Luca and Lee, Edward A.},
  title =	{{Invited Paper: Worst-Case Execution Time Analysis of Lingua Franca Applications}},
  booktitle =	{22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024)},
  pages =	{4:1--4:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-346-1},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{121},
  editor =	{Carle, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2024.4},
  URN =		{urn:nbn:de:0030-drops-204721},
  doi =		{10.4230/OASIcs.WCET.2024.4},
  annote =	{Keywords: worst-case execution time, coordination language, real-time systems, lingua franca}
}
Document
Targeted Branching for the Maximum Independent Set Problem Using Graph Neural Networks

Authors: Kenneth Langedal, Demian Hespe, and Peter Sanders

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
Identifying a maximum independent set is a fundamental NP-hard problem. This problem has several real-world applications and requires finding the largest possible set of vertices not adjacent to each other in an undirected graph. Over the past few years, branch-and-bound and branch-and-reduce algorithms have emerged as some of the most effective methods for solving the problem exactly. Specifically, the branch-and-reduce approach, which combines branch-and-bound principles with reduction rules, has proven particularly successful in tackling previously unmanageable real-world instances. This progress was largely made possible by the development of more effective reduction rules. Nevertheless, other key components that can impact the efficiency of these algorithms have not received the same level of interest. Among these is the branching strategy, which determines which vertex to branch on next. Until recently, the most widely used strategy was to choose the vertex of the highest degree. In this work, we present a graph neural network approach for selecting the next branching vertex. The intricate nature of current branch-and-bound solvers makes supervised and reinforcement learning difficult. Therefore, we use a population-based genetic algorithm to evolve the model’s parameters instead. Our proposed approach results in a speedup on 73% of the benchmark instances with a median speedup of 24%.
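
The branching step the paper targets can be made concrete with a deliberately naive Python recursion that always branches on a highest-degree vertex; it omits the reduction rules and bounds of real branch-and-reduce solvers, and the helper name mis_size is invented for the example.

def mis_size(adj):
    # adj: vertex -> set of neighbours (undirected, no self-loops)
    if not adj:
        return 0
    v = max(adj, key=lambda u: len(adj[u]))   # branch on a highest-degree vertex
    def without(removed):
        return {u: adj[u] - removed for u in adj if u not in removed}
    best = mis_size(without({v}))                           # branch 1: exclude v
    best = max(best, 1 + mis_size(without({v} | adj[v])))   # branch 2: include v
    return best

c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}   # 5-cycle
print(mis_size(c5))   # 2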

Cite as

Kenneth Langedal, Demian Hespe, and Peter Sanders. Targeted Branching for the Maximum Independent Set Problem Using Graph Neural Networks. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 20:1-20:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{langedal_et_al:LIPIcs.SEA.2024.20,
  author =	{Langedal, Kenneth and Hespe, Demian and Sanders, Peter},
  title =	{{Targeted Branching for the Maximum Independent Set Problem Using Graph Neural Networks}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{20:1--20:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.20},
  URN =		{urn:nbn:de:0030-drops-203853},
  doi =		{10.4230/LIPIcs.SEA.2024.20},
  annote =	{Keywords: Graphs, Independent Set, Vertex Cover, Graph Neural Networks, Branch-and-Reduce}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
T-Rex: Termination of Recursive Functions Using Lexicographic Linear Combinations

Authors: Raphael Douglas Giles, Vincent Jackson, and Christine Rizkallah

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We introduce a powerful termination algorithm for structurally recursive functions that improves on the core ideas behind lexicographic termination algorithms for functional programs. The algorithm generates linear-lexicographic combinations of primitive measure functions measuring the recursive structure of terms. We introduce a measure language that enables the simplification and comparison of measures, and we prove meta-theoretic properties of this measure language. Moreover, we demonstrate our algorithm on an untyped first-order functional language and prove its soundness and that it runs in polynomial time. We also provide a Haskell implementation. As part of this work, we show how to solve the maximisation of negative vector components as a linear program.
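
A small worked illustration of the lexicographic idea (not the T-Rex algorithm itself, only the kind of certificate it searches for): for the Ackermann function, the pair of arguments, ordered lexicographically, strictly decreases at every recursive call, which is exactly what certifies termination. The instrumented Python check below, with the made-up helper lex_less, asserts this on a concrete run.

def lex_less(a, b):
    # strict lexicographic order on pairs of naturals
    return a[0] < b[0] or (a[0] == b[0] and a[1] < b[1])

def ack(m, n, caller=None):
    measure = (m, n)
    if caller is not None:
        assert lex_less(measure, caller), "measure must drop on every recursive call"
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1, measure)
    return ack(m - 1, ack(m, n - 1, measure), measure)

print(ack(2, 3))   # 9, with no assertion firing along the way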

Cite as

Raphael Douglas Giles, Vincent Jackson, and Christine Rizkallah. T-Rex: Termination of Recursive Functions Using Lexicographic Linear Combinations. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 139:1-139:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@InProceedings{giles_et_al:LIPIcs.ICALP.2024.139,
  author =	{Giles, Raphael Douglas and Jackson, Vincent and Rizkallah, Christine},
  title =	{{T-Rex: Termination of Recursive Functions Using Lexicographic Linear Combinations}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{139:1--139:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.139},
  URN =		{urn:nbn:de:0030-drops-202827},
  doi =		{10.4230/LIPIcs.ICALP.2024.139},
  annote =	{Keywords: Termination, Recursive functions}
}
Document
Survey
Towards Representing Processes and Reasoning with Process Descriptions on the Web

Authors: Andreas Harth, Tobias Käfer, Anisa Rula, Jean-Paul Calbimonte, Eduard Kamburjan, and Martin Giese

Published in: TGDK, Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge, Volume 2, Issue 1


Abstract
We work towards a vocabulary to represent processes and temporal logic specifications as graph-structured data. Different fields use incompatible terminologies for describing essentially the same process-related concepts. In addition, processes can be represented from different perspectives and levels of abstraction: both state-centric and event-centric perspectives offer distinct insights into the underlying processes. In this work, we strive to unify the representation of processes and related concepts by leveraging the power of knowledge graphs. We survey approaches to representing processes and reasoning with process descriptions from different fields and provide a selection of scenarios to help inform the scope of a unified representation of processes. We focus on processes that can be executed and observed via web interfaces. We propose to provide a representation designed to combine state-centric and event-centric perspectives while incorporating temporal querying and reasoning capabilities on temporal logic specifications. A standardised vocabulary and representation for processes and temporal specifications would contribute towards bridging the gap between the terminologies from different fields and fostering the broader application of methods involving temporal logics, such as formal verification and program synthesis.

Cite as

Andreas Harth, Tobias Käfer, Anisa Rula, Jean-Paul Calbimonte, Eduard Kamburjan, and Martin Giese. Towards Representing Processes and Reasoning with Process Descriptions on the Web. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 1:1-1:32, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@Article{harth_et_al:TGDK.2.1.1,
  author =	{Harth, Andreas and K\"{a}fer, Tobias and Rula, Anisa and Calbimonte, Jean-Paul and Kamburjan, Eduard and Giese, Martin},
  title =	{{Towards Representing Processes and Reasoning with Process Descriptions on the Web}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:32},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.1},
  URN =		{urn:nbn:de:0030-drops-198583},
  doi =		{10.4230/TGDK.2.1.1},
  annote =	{Keywords: Process modelling, Process ontology, Temporal logic, Web services}
}
Document
Axis-Parallel Right Angle Crossing Graphs

Authors: Patrizio Angelini, Michael A. Bekos, Julia Katheder, Michael Kaufmann, Maximilian Pfister, and Torsten Ueckerdt

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
A RAC graph is one admitting a RAC drawing, that is, a polyline drawing in which each crossing occurs at a right angle. Originally motivated by psychological studies on readability of graph layouts, RAC graphs form one of the most prominent graph classes in beyond planarity. In this work, we study a subclass of RAC graphs, called axis-parallel RAC (or apRAC, for short), that restricts the crossings to pairs of axis-parallel edge-segments. apRAC drawings combine the readability of planar drawings with the clarity of (non-planar) orthogonal drawings. We consider these graphs both with and without bends. Our contribution is as follows: (i) We study inclusion relationships between apRAC and traditional RAC graphs. (ii) We establish bounds on the edge density of apRAC graphs. (iii) We show that every graph with maximum degree 8 is 2-bend apRAC and give a linear time drawing algorithm. Some of our results on apRAC graphs also improve the state of the art for general RAC graphs. We conclude our work with a list of open questions and a discussion of a natural generalization of the apRAC model.

Cite as

Patrizio Angelini, Michael A. Bekos, Julia Katheder, Michael Kaufmann, Maximilian Pfister, and Torsten Ueckerdt. Axis-Parallel Right Angle Crossing Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 9:1-9:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@InProceedings{angelini_et_al:LIPIcs.ESA.2023.9,
  author =	{Angelini, Patrizio and Bekos, Michael A. and Katheder, Julia and Kaufmann, Michael and Pfister, Maximilian and Ueckerdt, Torsten},
  title =	{{Axis-Parallel Right Angle Crossing Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{9:1--9:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.9},
  URN =		{urn:nbn:de:0030-drops-186623},
  doi =		{10.4230/LIPIcs.ESA.2023.9},
  annote =	{Keywords: Graph drawing, RAC graphs, Graph drawing algorithms}
}
Document
Team Semantics for the Specification and Verification of Hyperproperties

Authors: Andreas Krebs, Arne Meier, Jonni Virtema, and Martin Zimmermann

Published in: LIPIcs, Volume 117, 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)


Abstract
We develop team semantics for Linear Temporal Logic (LTL) to express hyperproperties, which have recently been identified as a key concept in the verification of information flow properties. Conceptually, we consider an asynchronous and a synchronous variant of team semantics. We study basic properties of this new logic and classify the computational complexity of its satisfiability, path, and model checking problems. Further, we examine how extensions of these basic logics react to the addition of other atomic operators. Finally, we compare its expressivity to that of HyperLTL, another recently introduced logic for hyperproperties. Our results show that LTL under team semantics is a viable alternative to HyperLTL, which complements the expressivity of HyperLTL and has partially better algorithmic properties.

Cite as

Andreas Krebs, Arne Meier, Jonni Virtema, and Martin Zimmermann. Team Semantics for the Specification and Verification of Hyperproperties. In 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 117, pp. 10:1-10:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{krebs_et_al:LIPIcs.MFCS.2018.10,
  author =	{Krebs, Andreas and Meier, Arne and Virtema, Jonni and Zimmermann, Martin},
  title =	{{Team Semantics for the Specification and Verification of Hyperproperties}},
  booktitle =	{43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)},
  pages =	{10:1--10:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-086-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{117},
  editor =	{Potapov, Igor and Spirakis, Paul and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2018.10},
  URN =		{urn:nbn:de:0030-drops-95926},
  doi =		{10.4230/LIPIcs.MFCS.2018.10},
  annote =	{Keywords: LTL, Hyperproperties, Team Semantics, Model Checking, Satisfiability}
}
Document
Restructuring Expression Dags for Efficient Parallelization

Authors: Martin Wilhelm

Published in: LIPIcs, Volume 103, 17th International Symposium on Experimental Algorithms (SEA 2018)


Abstract
In the field of robust geometric computation it is often necessary to make exact decisions based on inexact floating-point arithmetic. One common approach is to store the computation history in an arithmetic expression dag and to re-evaluate the expression with increasing precision until an exact decision can be made. We show that exact-decision number types based on expression dags can be evaluated faster in practice through parallelization on multiple cores. We compare the impact of several restructuring methods for the expression dag on its running time in a parallel environment.
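
To make the setting concrete (a minimal sketch assuming the mpmath package; it shows only the dag structure and the precision-doubling re-evaluation loop, not the rigorous error bounds or the parallel restructuring studied in the paper), consider the shared subexpression x = sqrt(2) inside x*x - 2:

from mpmath import mp, mpf, sqrt

class Node:
    # A node of an arithmetic expression dag; children may be shared.
    def __init__(self, op, children=(), value=None):
        self.op, self.children, self.value = op, children, value
    def eval(self, cache):
        if self in cache:                 # each shared subterm is evaluated once
            return cache[self]
        if self.op == "const":
            r = mpf(self.value)
        else:
            args = [c.eval(cache) for c in self.children]
            r = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
                 "*": lambda a, b: a * b, "sqrt": sqrt}[self.op](*args)
        cache[self] = r
        return r

def evaluate(root, digits):
    mp.dps = digits                       # re-evaluate the whole dag at this precision
    return root.eval({})

x = Node("sqrt", (Node("const", value=2),))                 # shared by both factors
e = Node("-", (Node("*", (x, x)), Node("const", value=2)))  # exact value is 0
for d in (10, 20, 40):
    print(d, evaluate(e, d))   # higher precision drives the result towards 0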

Cite as

Martin Wilhelm. Restructuring Expression Dags for Efficient Parallelization. In 17th International Symposium on Experimental Algorithms (SEA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 103, pp. 20:1-20:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{wilhelm:LIPIcs.SEA.2018.20,
  author =	{Wilhelm, Martin},
  title =	{{Restructuring Expression Dags for Efficient Parallelization}},
  booktitle =	{17th International Symposium on Experimental Algorithms (SEA 2018)},
  pages =	{20:1--20:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-070-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{103},
  editor =	{D'Angelo, Gianlorenzo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2018.20},
  URN =		{urn:nbn:de:0030-drops-89551},
  doi =		{10.4230/LIPIcs.SEA.2018.20},
  annote =	{Keywords: exact computation, expression dag, parallel evaluation, restructuring}
}
Document
Towards a Time-predictable Dual-Issue Microprocessor: The Patmos Approach

Authors: Martin Schoeberl, Pascal Schleuniger, Wolfgang Puffitsch, Florian Brandner, and Christian W. Probst

Published in: OASIcs, Volume 18, Bringing Theory to Practice: Predictability and Performance in Embedded Systems (2011)


Abstract
Current processors are optimized for average-case performance, often leading to a high worst-case execution time (WCET). Many architectural features that increase average-case performance are hard to model for WCET analysis. In this paper we present Patmos, a processor optimized for low WCET bounds rather than high average-case performance. Patmos is a dual-issue, statically scheduled RISC processor. The instruction cache is organized as a method cache and the data cache is organized as a split cache in order to simplify the cache WCET analysis. To fill the dual-issue pipeline with enough useful instructions, Patmos relies on a customized compiler. The compiler also plays a central role in optimizing the application for the WCET instead of average-case performance.

Cite as

Martin Schoeberl, Pascal Schleuniger, Wolfgang Puffitsch, Florian Brandner, and Christian W. Probst. Towards a Time-predictable Dual-Issue Microprocessor: The Patmos Approach. In Bringing Theory to Practice: Predictability and Performance in Embedded Systems. Open Access Series in Informatics (OASIcs), Volume 18, pp. 11-21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)



@InProceedings{schoeberl_et_al:OASIcs.PPES.2011.11,
  author =	{Schoeberl, Martin and Schleuniger, Pascal and Puffitsch, Wolfgang and Brandner, Florian and Probst, Christian W.},
  title =	{{Towards a Time-predictable Dual-Issue Microprocessor: The Patmos Approach}},
  booktitle =	{Bringing Theory to Practice: Predictability and Performance in Embedded Systems},
  pages =	{11--21},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-28-6},
  ISSN =	{2190-6807},
  year =	{2011},
  volume =	{18},
  editor =	{Lucas, Philipp and Wilhelm, Reinhard},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.PPES.2011.11},
  URN =		{urn:nbn:de:0030-drops-30774},
  doi =		{10.4230/OASIcs.PPES.2011.11},
  annote =	{Keywords: Time-predictable architecture, WCET analysis, WCET-aware compilation}
}
  • Refine by Author
  • 3 Schoeberl, Martin
  • 2 Maroun, Emad Jacob
  • 1 Angelini, Patrizio
  • 1 Arvay, Barnabas
  • 1 Bekos, Michael A.
  • Show More...

  • Refine by Classification
  • 3 Software and its engineering → Automated static analysis
  • 2 Computer systems organization → Real-time systems
  • 2 Mathematics of computing → Graph algorithms
  • 2 Theory of computation → Computational geometry
  • 1 Applied computing → Business process modeling
  • Show More...

  • Refine by Keyword
  • 1 Approximation
  • 1 Blockchain
  • 1 Branch-and-Reduce
  • 1 Compilation
  • 1 Constraint Solving
  • Show More...

  • Refine by Type
  • 16 document

  • Refine by Publication Year
  • 11 2024
  • 2 2018
  • 1 2006
  • 1 2011
  • 1 2023
