39 Search Results for "David, Marco"


Document
WORTEX: Worst-Case Execution Time and Energy Estimation in Low-Power Microprocessors Using Explainable ML

Authors: Hugo Reymond, Abderaouf Nassim Amalou, and Isabelle Puaut

Published in: OASIcs, Volume 121, 22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024)


Abstract
Real-time and energy-constrained systems heavily rely on estimates of the worst-case execution time (WCET) and worst-case energy consumption (WCEC) of code snippets to ensure trustworthy operation. Designing architecture-specific analytical models for time and energy is often challenging and time-consuming. In situations where analytical models are unavailable or incomplete, machine learning (ML) techniques emerge as a promising solution to build WCEC/WCET models. This paper introduces WORTEX, a toolkit for WCEC/WCET estimation of basic blocks based on ML techniques. To ensure the real-world applicability of its models, WORTEX extracts large datasets of basic blocks from real programs and precisely measures their energy consumption/execution time on the physical target platform. The dataset is used to train various WCEC/WCET models using different ML techniques. Experimental results on simple and time-predictable hardware show that even the most basic ML techniques provide accurate results that never underestimate actual values. We also discuss the use of explainability techniques to increase trust in the models.
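
For illustration only, a minimal sketch of the general workflow the abstract describes: train a regression model on features of measured basic blocks and predict the time (or energy) of unseen blocks. The feature set, the scikit-learn model, and the upper-quantile loss are assumptions made for this sketch, not the actual WORTEX pipeline.

# Sketch of ML-based WCET estimation for basic blocks (illustrative, not WORTEX).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical per-block features: counts of arithmetic, load, store, branch instructions.
X = np.array([
    [12, 3, 1, 1],
    [4, 8, 4, 0],
    [20, 1, 0, 2],
])
y_cycles = np.array([34.0, 57.0, 41.0])  # worst observed execution times on the target

# A quantile loss biases predictions upward, mirroring the goal of not
# underestimating the true worst case (a heuristic, not a guarantee).
model = GradientBoostingRegressor(loss="quantile", alpha=0.95)
model.fit(X, y_cycles)

new_block = np.array([[10, 2, 2, 1]])
print("estimated WCET (cycles):", model.predict(new_block)[0])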

Cite as

Hugo Reymond, Abderaouf Nassim Amalou, and Isabelle Puaut. WORTEX: Worst-Case Execution Time and Energy Estimation in Low-Power Microprocessors Using Explainable ML. In 22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024). Open Access Series in Informatics (OASIcs), Volume 121, pp. 1:1-1:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{reymond_et_al:OASIcs.WCET.2024.1,
  author =	{Reymond, Hugo and Amalou, Abderaouf Nassim and Puaut, Isabelle},
  title =	{{WORTEX: Worst-Case Execution Time and Energy Estimation in Low-Power Microprocessors Using Explainable ML}},
  booktitle =	{22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024)},
  pages =	{1:1--1:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-346-1},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{121},
  editor =	{Carle, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2024.1},
  URN =		{urn:nbn:de:0030-drops-204691},
  doi =		{10.4230/OASIcs.WCET.2024.1},
  annote =	{Keywords: Worst-Case Execution Time (WCET), Worst-Case Energy Consumption (WCEC), Machine Learning, Explainable ML models}
}
Document
The Platin Multi-Target Worst-Case Analysis Tool

Authors: Emad Jacob Maroun, Eva Dengler, Christian Dietrich, Stefan Hepp, Henriette Herzog, Benedikt Huber, Jens Knoop, Daniel Wiltsche-Prokesch, Peter Puschner, Phillip Raffeck, Martin Schoeberl, Simon Schuster, and Peter Wägemann

Published in: OASIcs, Volume 121, 22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024)


Abstract
With the increasing number of applications that require reliable runtime guarantees, the relevance of static worst-case analysis tools that can provide such guarantees increases. These analysis tools determine resource-consumption bounds of application tasks, with a model of the underlying hardware, to meet given resource budgets during runtime, such as deadlines of real-time tasks. This paper presents enhancements to the Platin worst-case analysis tool developed since its original release more than ten years ago. These novelties comprise Platin’s support for new architectures (i.e., ARMv6-M, RISC-V, and AVR) in addition to the previous backends for Patmos and ARMv7-M. Further, Platin now features system-wide analysis methods and annotation support to express system-level constraints. Besides an overview of these enhancements, we evaluate Platin’s accuracy for the two supported architecture implementations, Patmos and RISC-V.

Cite as

Emad Jacob Maroun, Eva Dengler, Christian Dietrich, Stefan Hepp, Henriette Herzog, Benedikt Huber, Jens Knoop, Daniel Wiltsche-Prokesch, Peter Puschner, Phillip Raffeck, Martin Schoeberl, Simon Schuster, and Peter Wägemann. The Platin Multi-Target Worst-Case Analysis Tool. In 22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024). Open Access Series in Informatics (OASIcs), Volume 121, pp. 2:1-2:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{maroun_et_al:OASIcs.WCET.2024.2,
  author =	{Maroun, Emad Jacob and Dengler, Eva and Dietrich, Christian and Hepp, Stefan and Herzog, Henriette and Huber, Benedikt and Knoop, Jens and Wiltsche-Prokesch, Daniel and Puschner, Peter and Raffeck, Phillip and Schoeberl, Martin and Schuster, Simon and W\"{a}gemann, Peter},
  title =	{{The Platin Multi-Target Worst-Case Analysis Tool}},
  booktitle =	{22nd International Workshop on Worst-Case Execution Time Analysis (WCET 2024)},
  pages =	{2:1--2:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-346-1},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{121},
  editor =	{Carle, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2024.2},
  URN =		{urn:nbn:de:0030-drops-204704},
  doi =		{10.4230/OASIcs.WCET.2024.2},
  annote =	{Keywords: worst-case resource consumption, WCET, static analysis tool}
}
Document
Buffered Streaming Edge Partitioning

Authors: Adil Chhabra, Marcelo Fonseca Faraj, Christian Schulz, and Daniel Seemaier

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
Addressing the challenges of processing massive graphs, which are prevalent in diverse fields such as social, biological, and technical networks, we introduce HeiStreamE and FreightE, two innovative (buffered) streaming algorithms designed for efficient edge partitioning of large-scale graphs. HeiStreamE utilizes an adapted Split-and-Connect graph model and a Fennel-based multilevel partitioning scheme, while FreightE partitions a hypergraph representation of the input graph. Besides ensuring superior solution quality, these approaches also overcome the limitations of existing algorithms by maintaining linear dependency on the graph size in both time and memory complexity, with no dependence on the number of blocks of the partition. Our comprehensive experimental analysis demonstrates that HeiStreamE outperforms current streaming algorithms and the re-streaming algorithm 2PS in partitioning quality (replication factor), and is more memory-efficient for real-world networks where the number of edges is far greater than the number of vertices. Further, FreightE is shown to produce fast and efficient partitions, particularly for higher numbers of partition blocks.
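
As a point of reference for the problem setting, the sketch below shows a bare-bones greedy streaming edge partitioner: edges arrive one at a time and are assigned to a block that already hosts their endpoints, subject to a soft balance constraint, and the replication factor is the quality measure the abstract mentions. The scoring rule and capacity factor are simplified assumptions; HeiStreamE's buffered, multilevel Fennel-based scheme is considerably more elaborate.

# Minimal sketch of greedy streaming edge partitioning (not HeiStreamE itself).
from collections import defaultdict

def stream_partition_edges(edges, num_vertices, k, eps=0.05):
    """Assign each edge to one of k blocks as it streams by."""
    capacity = (1 + eps) * len(edges) / k           # soft per-block edge budget
    load = [0] * k                                  # edges assigned per block
    replicas = defaultdict(set)                     # vertex -> blocks it appears in
    assignment = []
    for u, v in edges:
        best, best_score = None, None
        for b in range(k):
            if load[b] >= capacity:
                continue
            # Prefer blocks that already hold u or v (lowers replication),
            # break ties toward lighter blocks.
            score = (b in replicas[u]) + (b in replicas[v]) - load[b] / capacity
            if best_score is None or score > best_score:
                best, best_score = b, score
        assignment.append(best)
        load[best] += 1
        replicas[u].add(best)
        replicas[v].add(best)
    # Replication factor: average number of blocks each vertex is copied to.
    rf = sum(len(s) for s in replicas.values()) / num_vertices
    return assignment, rf

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
print(stream_partition_edges(edges, num_vertices=5, k=2))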

Cite as

Adil Chhabra, Marcelo Fonseca Faraj, Christian Schulz, and Daniel Seemaier. Buffered Streaming Edge Partitioning. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 5:1-5:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chhabra_et_al:LIPIcs.SEA.2024.5,
  author =	{Chhabra, Adil and Fonseca Faraj, Marcelo and Schulz, Christian and Seemaier, Daniel},
  title =	{{Buffered Streaming Edge Partitioning}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{5:1--5:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.5},
  URN =		{urn:nbn:de:0030-drops-203701},
  doi =		{10.4230/LIPIcs.SEA.2024.5},
  annote =	{Keywords: graph partitioning, edge partitioning, streaming, online, buffered partitioning}
}
Document
Targeted Branching for the Maximum Independent Set Problem Using Graph Neural Networks

Authors: Kenneth Langedal, Demian Hespe, and Peter Sanders

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
Identifying a maximum independent set is a fundamental NP-hard problem. This problem has several real-world applications and requires finding the largest possible set of vertices not adjacent to each other in an undirected graph. Over the past few years, branch-and-bound and branch-and-reduce algorithms have emerged as some of the most effective methods for solving the problem exactly. Specifically, the branch-and-reduce approach, which combines branch-and-bound principles with reduction rules, has proven particularly successful in tackling previously unmanageable real-world instances. This progress was largely made possible by the development of more effective reduction rules. Nevertheless, other key components that can impact the efficiency of these algorithms have not received the same level of interest. Among these is the branching strategy, which determines which vertex to branch on next. Until recently, the most widely used strategy was to choose the vertex of the highest degree. In this work, we present a graph neural network approach for selecting the next branching vertex. The intricate nature of current branch-and-bound solvers makes supervised and reinforcement learning difficult. Therefore, we use a population-based genetic algorithm to evolve the model’s parameters instead. Our proposed approach results in a speedup on 73% of the benchmark instances with a median speedup of 24%.
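
Since the abstract contrasts the classic highest-degree branching rule with a learned scorer, a minimal branch-and-bound sketch with a pluggable branching-vertex scorer may help. The solver below is deliberately simplified (no reduction rules, trivial bound), and in the paper's setting a GNN would simply replace the score function; it is purely illustrative, not the authors' solver.

# Sketch: branch-and-bound for maximum independent set with a pluggable
# branching-vertex scorer. The default reproduces the highest-degree rule.
import networkx as nx

def mis_branch_and_bound(G, score=None):
    adj = {v: set(G[v]) for v in G}                       # adjacency as plain sets
    score = score or (lambda rem, v: len(adj[v] & rem))   # degree in remaining graph
    best = set()

    def solve(remaining, chosen):
        nonlocal best
        if len(chosen) + len(remaining) <= len(best):     # cannot beat incumbent
            return
        if not remaining:
            best = set(chosen)
            return
        v = max(remaining, key=lambda u: score(remaining, u))  # branching decision
        solve(remaining - {v} - adj[v], chosen | {v})     # branch 1: take v
        solve(remaining - {v}, chosen)                    # branch 2: exclude v

    solve(set(G), set())
    return best

print(mis_branch_and_bound(nx.cycle_graph(7)))   # an independent set of size 3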

Cite as

Kenneth Langedal, Demian Hespe, and Peter Sanders. Targeted Branching for the Maximum Independent Set Problem Using Graph Neural Networks. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 20:1-20:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{langedal_et_al:LIPIcs.SEA.2024.20,
  author =	{Langedal, Kenneth and Hespe, Demian and Sanders, Peter},
  title =	{{Targeted Branching for the Maximum Independent Set Problem Using Graph Neural Networks}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{20:1--20:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.20},
  URN =		{urn:nbn:de:0030-drops-203853},
  doi =		{10.4230/LIPIcs.SEA.2024.20},
  annote =	{Keywords: Graphs, Independent Set, Vertex Cover, Graph Neural Networks, Branch-and-Reduce}
}
Document
Invited Talk
Meaningfulness and Genericity in a Subsuming Framework (Invited Talk)

Authors: Delia Kesner, Victor Arrial, and Giulio Guerrieri

Published in: LIPIcs, Volume 299, 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)


Abstract
This paper studies the notion of meaningfulness for a unifying framework called dBang-calculus, which subsumes both call-by-name (dCBN) and call-by-value (dCBV). We first define meaningfulness in dBang and then characterize it by means of typability and inhabitation in an associated non-idempotent intersection type system previously appearing in the literature. We validate the proposed notion of meaningfulness by showing two properties: (1) consistency of the smallest theory, called ℋ, equating all meaningless terms, and (2) genericity, stating that meaningless subterms have no bearing on the significance of meaningful terms. The theory ℋ is also shown to have a unique consistent and maximal extension ℋ*, which coincides with a well-known notion of observational equivalence. Last but not least, we show that the notions of meaningfulness and genericity in the literature for dCBN and dCBV are subsumed by the corresponding ones proposed here for the dBang-calculus.

Cite as

Delia Kesner, Victor Arrial, and Giulio Guerrieri. Meaningfulness and Genericity in a Subsuming Framework (Invited Talk). In 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 299, pp. 1:1-1:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kesner_et_al:LIPIcs.FSCD.2024.1,
  author =	{Kesner, Delia and Arrial, Victor and Guerrieri, Giulio},
  title =	{{Meaningfulness and Genericity in a Subsuming Framework}},
  booktitle =	{9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)},
  pages =	{1:1--1:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-323-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{299},
  editor =	{Rehof, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2024.1},
  URN =		{urn:nbn:de:0030-drops-203305},
  doi =		{10.4230/LIPIcs.FSCD.2024.1},
  annote =	{Keywords: Lambda calculus, Solvability, Meaningfulness, Inhabitation, Genericity}
}
Document
Invited Talk
Abstraction-Based Decision Making for Statistical Properties (Invited Talk)

Authors: Filip Cano, Thomas A. Henzinger, Bettina Könighofer, Konstantin Kueffner, and Kaushik Mallik

Published in: LIPIcs, Volume 299, 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)


Abstract
Sequential decision-making in probabilistic environments is a fundamental problem with many applications in AI and economics. In this paper, we present an algorithm for synthesizing sequential decision-making agents that optimize statistical properties such as maximum and average response times. In the general setting of sequential decision-making, the environment is modeled as a random process that generates inputs. The agent responds to each input, aiming to maximize rewards and minimize costs within a specified time horizon. The corresponding synthesis problem is known to be PSPACE-hard. We consider the special case where the input distribution, reward, and cost depend on input-output statistics specified by counter automata. For such problems, this paper presents the first PTIME synthesis algorithms. We introduce the notion of statistical abstraction, which clusters statistically indistinguishable input-output sequences into equivalence classes. This abstraction allows for a dynamic programming algorithm whose complexity grows polynomially with the considered horizon, making the statistical case exponentially more efficient than the general case. We evaluate our algorithm on three different application scenarios of a client-server protocol, where multiple clients compete via bidding to gain access to the service offered by the server. The synthesized policies optimize profit while guaranteeing that none of the server’s clients is disproportionately starved of the service.

Cite as

Filip Cano, Thomas A. Henzinger, Bettina Könighofer, Konstantin Kueffner, and Kaushik Mallik. Abstraction-Based Decision Making for Statistical Properties (Invited Talk). In 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 299, pp. 2:1-2:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{cano_et_al:LIPIcs.FSCD.2024.2,
  author =	{Cano, Filip and Henzinger, Thomas A. and K\"{o}nighofer, Bettina and Kueffner, Konstantin and Mallik, Kaushik},
  title =	{{Abstraction-Based Decision Making for Statistical Properties}},
  booktitle =	{9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)},
  pages =	{2:1--2:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-323-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{299},
  editor =	{Rehof, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2024.2},
  URN =		{urn:nbn:de:0030-drops-203310},
  doi =		{10.4230/LIPIcs.FSCD.2024.2},
  annote =	{Keywords: Abstract interpretation, Sequential decision making, Counter machines}
}
Document
Univalent Enriched Categories and the Enriched Rezk Completion

Authors: Niels van der Weide

Published in: LIPIcs, Volume 299, 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)


Abstract
Enriched categories are categories whose sets of morphisms are enriched with extra structure. Such categories play a prominent role in the study of higher categories, homotopy theory, and the semantics of programming languages. In this paper, we study univalent enriched categories. We prove that all essentially surjective and fully faithful functors between univalent enriched categories are equivalences, and we show that every enriched category admits a Rezk completion. Finally, we use the Rezk completion for enriched categories to construct univalent enriched Kleisli categories.

Cite as

Niels van der Weide. Univalent Enriched Categories and the Enriched Rezk Completion. In 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 299, pp. 4:1-4:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{vanderweide:LIPIcs.FSCD.2024.4,
  author =	{van der Weide, Niels},
  title =	{{Univalent Enriched Categories and the Enriched Rezk Completion}},
  booktitle =	{9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)},
  pages =	{4:1--4:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-323-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{299},
  editor =	{Rehof, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2024.4},
  URN =		{urn:nbn:de:0030-drops-203337},
  doi =		{10.4230/LIPIcs.FSCD.2024.4},
  annote =	{Keywords: enriched categories, univalent categories, homotopy type theory, univalent foundations, Rezk completion}
}
Document
The Flower Calculus

Authors: Pablo Donato

Published in: LIPIcs, Volume 299, 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)


Abstract
We introduce the flower calculus, a deep inference proof system for intuitionistic first-order logic inspired by Peirce’s existential graphs. It works as a rewriting system over inductive objects called "flowers", which enjoy both a graphical interpretation as topological diagrams and a textual presentation as nested sequents akin to coherent formulas. Importantly, the calculus dispenses completely with the traditional notion of symbolic connective, operating solely on nested flowers containing atomic predicates. We prove both the soundness of the full calculus and the completeness of an analytic fragment with respect to Kripke semantics. This provides, to our knowledge, the first analyticity result for a proof system based on existential graphs, adapting semantic cut-elimination techniques to a deep inference setting. Furthermore, the kernel of rules targeted by completeness is fully invertible, a desirable property for both automated and interactive proof search.

Cite as

Pablo Donato. The Flower Calculus. In 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 299, pp. 5:1-5:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{donato:LIPIcs.FSCD.2024.5,
  author =	{Donato, Pablo},
  title =	{{The Flower Calculus}},
  booktitle =	{9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)},
  pages =	{5:1--5:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-323-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{299},
  editor =	{Rehof, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2024.5},
  URN =		{urn:nbn:de:0030-drops-203343},
  doi =		{10.4230/LIPIcs.FSCD.2024.5},
  annote =	{Keywords: deep inference, graphical calculi, existential graphs, intuitionistic logic, Kripke semantics, cut-elimination}
}
Document
Mechanized Subject Expansion in Uniform Intersection Types for Perpetual Reductions

Authors: Andrej Dudenhefner and Daniele Pautasso

Published in: LIPIcs, Volume 299, 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)


Abstract
We provide a new, purely syntactical proof of strong normalization for the simply typed λ-calculus. The result relies on a novel proof of the equivalence between typability in the simple type system and typability in the uniform intersection type system (a restriction of the non-idempotent intersection type system). For formal verification, the equivalence is mechanized using the Coq proof assistant. In the present work, strong normalization of a given simply typed term M is shown in four steps. First, M is reduced to a normal form N via a suitable reduction strategy with a decreasing measure. Second, a uniform intersection type for the normal form N is inferred. Third, a uniform intersection type for M is constructed iteratively via subject expansion. Fourth, strong normalization of M is shown by induction on the size of the type derivation. A supplementary contribution is a family of perpetual reduction strategies, i.e. strategies which preserve infinite reduction paths. This family allows for subject expansion in the intersection type systems of interest, and contains a reduction strategy with a decreasing measure in the simple type system. A notable member of this family is Barendregt’s F_∞ reduction strategy.

Cite as

Andrej Dudenhefner and Daniele Pautasso. Mechanized Subject Expansion in Uniform Intersection Types for Perpetual Reductions. In 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 299, pp. 8:1-8:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dudenhefner_et_al:LIPIcs.FSCD.2024.8,
  author =	{Dudenhefner, Andrej and Pautasso, Daniele},
  title =	{{Mechanized Subject Expansion in Uniform Intersection Types for Perpetual Reductions}},
  booktitle =	{9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)},
  pages =	{8:1--8:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-323-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{299},
  editor =	{Rehof, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2024.8},
  URN =		{urn:nbn:de:0030-drops-203371},
  doi =		{10.4230/LIPIcs.FSCD.2024.8},
  annote =	{Keywords: lambda-calculus, simple types, intersection types, strong normalization, mechanization, perpetual reductions}
}
Document
A Linear Type System for L^p-Metric Sensitivity Analysis

Authors: Victor Sannier and Patrick Baillot

Published in: LIPIcs, Volume 299, 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)


Abstract
When working in optimisation or privacy protection, one may need to estimate the sensitivity of computer programs, i.e., the maximum multiplicative increase in the distance between two inputs and the corresponding two outputs. In particular, differential privacy is a rigorous and widely used notion of privacy that is closely related to sensitivity. Several type systems for sensitivity and differential privacy based on linear logic have been proposed in the literature, starting with the functional language Fuzz. However, they are either limited to certain metrics (L^1 and L^∞), and thus to the associated privacy mechanisms, or they rely on a complex notion of type contexts that does not interact well with operational semantics. We therefore propose a graded linear type system - inspired by Bunched Fuzz [Wunder et al., 2023] - called Plurimetric Fuzz that handles L^p vector metrics (for 1 ≤ p ≤ +∞), uses standard type contexts, gives reasonable bounds on sensitivity, and has good metatheoretical properties. We also provide a denotational semantics in terms of metric complete partial orders, and translation mappings from and to Fuzz.
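
For reference, the vector metrics involved are the standard L^p distances, and sensitivity is the corresponding Lipschitz-style bound (standard definitions, not notation specific to Plurimetric Fuzz):

\[
d_p(x, y) = \Bigl(\sum_{i=1}^{n} |x_i - y_i|^p\Bigr)^{1/p} \quad (1 \le p < \infty),
\qquad
d_\infty(x, y) = \max_{1 \le i \le n} |x_i - y_i|.
\]

A function f has sensitivity at most c from (A, d_A) to (B, d_B) when d_B(f(x), f(y)) ≤ c · d_A(x, y) for all inputs x, y; differential-privacy mechanisms calibrate their noise to such sensitivity bounds.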

Cite as

Victor Sannier and Patrick Baillot. A Linear Type System for L^p-Metric Sensitivity Analysis. In 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 299, pp. 12:1-12:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{sannier_et_al:LIPIcs.FSCD.2024.12,
  author =	{Sannier, Victor and Baillot, Patrick},
  title =	{{A Linear Type System for L^p-Metric Sensitivity Analysis}},
  booktitle =	{9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)},
  pages =	{12:1--12:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-323-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{299},
  editor =	{Rehof, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2024.12},
  URN =		{urn:nbn:de:0030-drops-203412},
  doi =		{10.4230/LIPIcs.FSCD.2024.12},
  annote =	{Keywords: type system, linear logic, sensitivity, vector metrics, differential privacy, lambda-calculus, functional programming, denotational semantics}
}
Document
Commutation Groups and State-Independent Contextuality

Authors: Samson Abramsky, Şerban-Ion Cercelescu, and Carmen-Maria Constantin

Published in: LIPIcs, Volume 299, 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)


Abstract
We introduce an algebraic structure for studying state-independent contextuality arguments, a key form of quantum non-classicality exemplified by the well-known Peres-Mermin magic square, and used as a source of quantum advantage. We introduce commutation groups presented by generators and relations, and analyse them in terms of a string rewriting system. There is also a linear algebraic construction, a directed version of the Heisenberg group. We introduce contextual words as a general form of contextuality witness. We characterise when contextual words can arise in commutation groups, and explicitly construct non-contextual value assignments in other cases. We give unitary representations of commutation groups as subgroups of generalized Pauli n-groups.
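
For context, the Peres-Mermin magic square mentioned above is the 3×3 array of two-qubit Pauli observables (standard presentation, not the paper's commutation-group formalism):

\[
\begin{array}{ccc}
X \otimes I & I \otimes X & X \otimes X \\
I \otimes Z & Z \otimes I & Z \otimes Z \\
X \otimes Z & Z \otimes X & Y \otimes Y
\end{array}
\]

The three observables in each row and each column pairwise commute, every row multiplies to +I⊗I, and the columns multiply to +I⊗I, +I⊗I, and −I⊗I. No assignment of ±1 values to the nine observables can satisfy all six product constraints, which is the kind of state-independent contextuality argument the paper studies algebraically.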

Cite as

Samson Abramsky, Şerban-Ion Cercelescu, and Carmen-Maria Constantin. Commutation Groups and State-Independent Contextuality. In 9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 299, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abramsky_et_al:LIPIcs.FSCD.2024.28,
  author =	{Abramsky, Samson and Cercelescu, \c{S}erban-Ion and Constantin, Carmen-Maria},
  title =	{{Commutation Groups and State-Independent Contextuality}},
  booktitle =	{9th International Conference on Formal Structures for Computation and Deduction (FSCD 2024)},
  pages =	{28:1--28:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-323-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{299},
  editor =	{Rehof, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2024.28},
  URN =		{urn:nbn:de:0030-drops-203572},
  doi =		{10.4230/LIPIcs.FSCD.2024.28},
  annote =	{Keywords: Contextuality, state-independence, quantum mechanics, Pauli group, group presentations, unitary representations}
}
Document
JuMP2start: Time-Aware Stop-Start Technology for a Software-Defined Vehicle System

Authors: Anam Farrukh and Richard West

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
Software-defined vehicle (SDV) systems replace traditional ECU architectures with software tasks running on centralized multicore processors in automotive-grade PCs. However, PC boot delays to cold-start an integrated vehicle management system (VMS) are problematic for time-critical functions, which must process sensor and actuator data within specific time bounds. To tackle this challenge, we present JuMP2start: a time-aware multicore stop-start approach for SDVs. JuMP2start leverages PC-class suspend-to-RAM techniques to capture a system snapshot when the vehicle is stopped. Upon restart, critical services are resumed from RAM on the order of milliseconds, compared to normal cold-start times. This work showcases how JuMP2start manages global suspension and resumption mechanisms for a state-of-the-art dual-domain vehicle management system comprising real-time OS (RTOS) and Linux SMP guests. JuMP2start models automotive tasks as continuable or restartable to ensure timing- and safety-critical function pipelines are reactively resumed with low latency, while discarding stale task state. Experiments with the VMS show that critical CAN traffic processing resumes within 500 milliseconds of waking the RTOS guest, and reaches steady-state throughput in under 7 ms.
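
The continuable/restartable distinction is the part of the design that maps most directly onto code. The toy sketch below is purely illustrative (plain Python, invented task names, not the RTOS/hypervisor implementation): on resume, continuable tasks keep the state captured in the RAM snapshot, while restartable tasks are reinitialized so stale data is discarded.

# Toy illustration of the continuable/restartable task model (not JuMP2start code).
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    restartable: bool          # True: timing/safety-critical, must not reuse stale state
    state: dict = field(default_factory=dict)

def resume_from_ram(tasks):
    for t in tasks:
        if t.restartable:
            t.state.clear()    # discard pre-suspend state, re-run the init path
            t.state["initialized"] = True
        # continuable tasks simply keep t.state as captured in the RAM snapshot
        print(f"resumed {t.name} ({'restarted' if t.restartable else 'continued'})")

tasks = [
    Task("can_rx_pipeline", restartable=True, state={"last_frame": 0x1A2B}),
    Task("trip_logger", restartable=False, state={"odometer_km": 10423}),
]
resume_from_ram(tasks)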

Cite as

Anam Farrukh and Richard West. JuMP2start: Time-Aware Stop-Start Technology for a Software-Defined Vehicle System. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 1:1-1:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{farrukh_et_al:LIPIcs.ECRTS.2024.1,
  author =	{Farrukh, Anam and West, Richard},
  title =	{{JuMP2start: Time-Aware Stop-Start Technology for a Software-Defined Vehicle System}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{1:1--1:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.1},
  URN =		{urn:nbn:de:0030-drops-203046},
  doi =		{10.4230/LIPIcs.ECRTS.2024.1},
  annote =	{Keywords: Time-aware stop-start, Real-time power management, Suspend-to-RAM, Partitioning hypervisor, Vehicle management system, Vehicle-OS, Software-defined vehicles (SDV)}
}
Document
Tighter Worst-Case Response Time Bounds for Jitter-Based Self-Suspension Analysis

Authors: Mario Günzel, Georg von der Brüggen, and Jian-Jia Chen

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
Tasks are called self-suspending if they can yield their ready state (specifically, releasing the processor while having highest priority) despite being incomplete, for instance, to offload computation to an external device or when waiting on access rights for shared resources or data. This self-suspending behavior requires special treatment when applying analytical results to compute worst-case response time bounds. One typical treatment is modeling self-suspension as release jitter in a so-called jitter-based analysis. The state of the art, when considering task-level fixed-priority scheduling, individually quantifies the jitter term of each higher-priority task by its worst-case response time minus its worst-case execution time. This work tightens the jitter term by taking the execution behavior of the other higher-priority tasks into account. Our improved jitter-based analysis analytically dominates the previous jitter-based analysis. Moreover, an evaluation for synthetically generated sporadic tasks demonstrates that this jitter term results in tighter worst-case response time bounds for self-suspending tasks. We observe an improvement for up to 55.89 % of the tasksets compared to the previous jitter-based analysis.
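
The baseline the paper tightens is the classical jitter-based recurrence, in which each higher-priority task j contributes release jitter J_j = R_j − C_j (its worst-case response time minus its worst-case execution time, as stated above). A small fixed-point sketch of that baseline bound, with the suspension time of the analyzed task added suspension-obliviously, is shown below; it illustrates the form of the analysis, not the paper's improved jitter term.

# Baseline jitter-based WCRT bound for a self-suspending task (illustrative):
#   R_i = C_i + S_i + sum over higher-priority j of ceil((R_i + J_j) / T_j) * C_j
import math

def wcrt_jitter_based(C_i, S_i, hp):
    """hp: list of (C_j, T_j, J_j) tuples for higher-priority tasks."""
    R = C_i + S_i
    while True:
        interference = sum(math.ceil((R + J) / T) * C for C, T, J in hp)
        R_new = C_i + S_i + interference
        if R_new == R:
            return R                      # fixed point reached: WCRT bound
        if R_new > 10_000 * (C_i + S_i):  # crude divergence guard for the sketch
            return math.inf
        R = R_new

# Higher-priority tasks given as (C_j, T_j, J_j), with J_j = R_j - C_j.
print(wcrt_jitter_based(C_i=4, S_i=2, hp=[(1, 10, 2), (2, 15, 5)]))  # -> 10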

Cite as

Mario Günzel, Georg von der Brüggen, and Jian-Jia Chen. Tighter Worst-Case Response Time Bounds for Jitter-Based Self-Suspension Analysis. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 4:1-4:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gunzel_et_al:LIPIcs.ECRTS.2024.4,
  author =	{G\"{u}nzel, Mario and von der Br\"{u}ggen, Georg and Chen, Jian-Jia},
  title =	{{Tighter Worst-Case Response Time Bounds for Jitter-Based Self-Suspension Analysis}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{4:1--4:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.4},
  URN =		{urn:nbn:de:0030-drops-203074},
  doi =		{10.4230/LIPIcs.ECRTS.2024.4},
  annote =	{Keywords: Worst-Case Response Time, WCRT, Jitter, Self-Suspension, Analysis}
}
Document
Shared Resource Contention in MCUs: A Reality Check and the Quest for Timeliness

Authors: Daniel Oliveira, Weifan Chen, Sandro Pinto, and Renato Mancuso

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
Microcontrollers (MCUs) are steadily embracing multi-core technology to meet growing performance demands. This trend marks a shift from their traditionally simple, deterministic designs to more complex and inherently less predictable architectures. While shared resource contention is well-studied in mid to high-end embedded systems, the emergence of multi-core architectures in MCUs introduces unique challenges and characteristics that existing research has not fully explored. In this paper, we conduct an in-depth investigation of both mainstream and next-generation MCU-based platforms, aiming to identify the sources of contention on systems where such problems have traditionally been absent. We empirically demonstrate substantial contention effects across different MCU architectures (i.e., from single- to multi-core configurations), highlighting significant application slowdowns. Notably, we observe that slowdowns can reach several orders of magnitude, with the most extreme cases showing up to a 3800x (times, not percent) increase in execution time. To address these issues, we propose and evaluate muTPArtc, a novel mechanism designed for Timely Progress Assessment (TPA) and TPA-based runtime control specifically tailored to MCUs. muTPArtc is an MCU-specialized TPA-based mechanism that leverages hardware facilities widely available in commercial off-the-shelf MCUs (i.e., hardware breakpoints and cycle counters) to monitor applications' progress and to detect and mitigate timing violations. Our results demonstrate that muTPArtc effectively manages performance degradation due to interference, requiring only minimal modifications to the build pipeline and no changes to the source code of the target application, while incurring minor overheads.
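
Conceptually, timely progress assessment compares an application's observed progress against a precomputed interference-free profile at known milestones and triggers mitigation when the application falls behind. The sketch below is a hypothetical, host-side illustration of that check only; the milestone table, budgets, slack, and mitigation hook are invented for the example, whereas muTPArtc itself relies on MCU hardware breakpoints and cycle counters.

# Hypothetical illustration of a timely-progress check (not muTPArtc itself):
# at each milestone (in reality a hardware breakpoint), compare elapsed cycles
# against the milestone's interference-free budget plus an allowed slack.
def check_progress(milestone, elapsed_cycles, budget_cycles, slack=0.10):
    limit = budget_cycles * (1 + slack)
    if elapsed_cycles > limit:
        mitigate(milestone, elapsed_cycles - budget_cycles)

def mitigate(milestone, overrun):
    # Placeholder for a real action, e.g. throttling interfering cores.
    print(f"milestone {milestone}: overrun of {overrun} cycles, throttling contenders")

# Simulated milestone hits: (milestone id, observed cycles, budgeted cycles).
for m, observed, budget in [(1, 950, 1000), (2, 2600, 2000), (3, 4100, 3000)]:
    check_progress(m, observed, budget)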

Cite as

Daniel Oliveira, Weifan Chen, Sandro Pinto, and Renato Mancuso. Shared Resource Contention in MCUs: A Reality Check and the Quest for Timeliness. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 5:1-5:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{oliveira_et_al:LIPIcs.ECRTS.2024.5,
  author =	{Oliveira, Daniel and Chen, Weifan and Pinto, Sandro and Mancuso, Renato},
  title =	{{Shared Resource Contention in MCUs: A Reality Check and the Quest for Timeliness}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{5:1--5:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.5},
  URN =		{urn:nbn:de:0030-drops-203088},
  doi =		{10.4230/LIPIcs.ECRTS.2024.5},
  annote =	{Keywords: multi-core microcontrollers, shared resources contention, progress-aware regulation}
}
Document
Optimizing Per-Core Priorities to Minimize End-To-End Latencies

Authors: Francesco Paladino, Alessandro Biondi, Enrico Bini, and Paolo Pazzaglia

Published in: LIPIcs, Volume 298, 36th Euromicro Conference on Real-Time Systems (ECRTS 2024)


Abstract
Logical Execution Time (LET) allows decoupling the schedule of real-time periodic tasks from their communication, with the advantage of isolating the communication pattern from the variability of the schedule. However, when such tasks are organized in chains, the usage of LET at the task level does not necessarily transfer the same LET properties to the chain level. In this paper, we extend a LET-like model from tasks to chains spanning over multiple cores. We leverage the designed constant-latency chains to optimize per-core priority assignment. Finally, we provide a set of heuristic algorithms, which are compared in a large-scale experimental evaluation.

Cite as

Francesco Paladino, Alessandro Biondi, Enrico Bini, and Paolo Pazzaglia. Optimizing Per-Core Priorities to Minimize End-To-End Latencies. In 36th Euromicro Conference on Real-Time Systems (ECRTS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 298, pp. 6:1-6:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{paladino_et_al:LIPIcs.ECRTS.2024.6,
  author =	{Paladino, Francesco and Biondi, Alessandro and Bini, Enrico and Pazzaglia, Paolo},
  title =	{{Optimizing Per-Core Priorities to Minimize End-To-End Latencies}},
  booktitle =	{36th Euromicro Conference on Real-Time Systems (ECRTS 2024)},
  pages =	{6:1--6:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-324-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{298},
  editor =	{Pellizzoni, Rodolfo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2024.6},
  URN =		{urn:nbn:de:0030-drops-203094},
  doi =		{10.4230/LIPIcs.ECRTS.2024.6},
  annote =	{Keywords: Cause-Effect Chains, Logical Execution Time, End-to-End Latency, Design Optimization, Task Priorities, Data Age, Reaction Time}
}