14 Search Results for "Schordan, Markus"


Document
From Trusted Annotations to Verified Knowledge

Authors: Adrian Prantl, Jens Knoop, Raimund Kirner, Albrecht Kadlec, and Markus Schordan

Published in: OASIcs, Volume 10, 9th International Workshop on Worst-Case Execution Time Analysis (WCET'09) (2009)


Abstract
WCET analyzers commonly rely on user-provided annotations such as loop bounds, recursion depths, and region and program constants. This reliance on user-provided annotations has an important drawback: it introduces a trusted annotation base into WCET analysis without any guarantee that the user-provided annotations are safe, let alone sharp. Hence, the safety and accuracy of a WCET analysis cannot be formally established. In this paper we propose a uniform approach which reduces the trusted annotation base to a minimum while simultaneously yielding sharper (tighter) time bounds. Fundamental to our approach is the application of model checking in concert with other, less expensive program analysis techniques, and the coordinated application of two algorithms for Binary Tightening and Binary Widening, which control the application of the model checker and hence the computational costs of the approach. Though in this paper we focus on the control of model checking by Binary Tightening and Widening, this is embedded into a more general approach in which we apply an array of analysis methods of increasing power and computational complexity for proving or disproving relevant time bounds of a program. First practical experiences using the sample programs of the Mälardalen benchmark suite demonstrate the usefulness of the overall approach. In fact, for most of these benchmarks we were able to empty the trusted annotation base completely and to tighten the computed WCET considerably.
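
The interplay between the model checker and the two control algorithms can be pictured as a search over candidate bounds: Binary Widening probes for some verifiable bound, and Binary Tightening then shrinks it by binary search. The following is a minimal sketch of that control strategy, assuming a hypothetical predicate bound_holds(k) that runs a model checker such as CBMC on the assertion that the annotated loop executes at most k times; it illustrates the idea, not the authors' implementation.

def binary_widening(bound_holds, start=1):
    # Probe exponentially until the model checker verifies some bound.
    k = start
    while not bound_holds(k):
        k *= 2
    return k

def binary_tightening(bound_holds, lo, hi):
    # 'hi' is an already-verified (possibly loose) bound, e.g. a trusted
    # user annotation or the result of binary_widening.
    while lo < hi:
        mid = (lo + hi) // 2
        if bound_holds(mid):   # one (expensive) model-checker run
            hi = mid           # verified: the bound can be tightened
        else:
            lo = mid + 1       # refuted: the true bound is larger
    return hi                  # sharpest bound the checker can prove

# Stand-in for the model checker: the loop actually iterates 37 times,
# while the user annotated a bound of 1000.
bound_holds = lambda k: k >= 37
print(binary_tightening(bound_holds, 0, 1000))   # -> 37

Each bound_holds call is a full model-checking run, so the logarithmic number of probes is what keeps the computational costs under control.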

Cite as

Adrian Prantl, Jens Knoop, Raimund Kirner, Albrecht Kadlec, and Markus Schordan. From Trusted Annotations to Verified Knowledge. In 9th International Workshop on Worst-Case Execution Time Analysis (WCET'09). Open Access Series in Informatics (OASIcs), Volume 10, pp. 1-11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


BibTeX

@InProceedings{prantl_et_al:OASIcs.WCET.2009.2282,
  author =	{Prantl, Adrian and Knoop, Jens and Kirner, Raimund and Kadlec, Albrecht and Schordan, Markus},
  title =	{{From Trusted Annotations to Verified Knowledge}},
  booktitle =	{9th International Workshop on Worst-Case Execution Time Analysis (WCET'09)},
  pages =	{1--11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-14-9},
  ISSN =	{2190-6807},
  year =	{2009},
  volume =	{10},
  editor =	{Holsti, Niklas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2009.2282},
  URN =		{urn:nbn:de:0030-drops-22828},
  doi =		{10.4230/OASIcs.WCET.2009.2282},
  annote =	{Keywords: WCET analysis, annotations, binary tightening, binary widening, model checking, CBMC}
}
Document
WCET 2008 -- Report from the Tool Challenge 2008 -- 8th Intl. Workshop on Worst-Case Execution Time (WCET) Analysis

Authors: Niklas Holsti, Jan Gustafsson, Guillem Bernat, Clément Ballabriga, Armelle Bonenfant, Roman Bourgade, Hugues Cassé, Daniel Cordes, Albrecht Kadlec, Raimund Kirner, Jens Knoop, Paul Lokuciejewski, Nicholas Merriam, Marianne de Michiel, Adrian Prantl, Bernhard Rieder, Christine Rochange, Pascal Sainrat, and Markus Schordan

Published in: OASIcs, Volume 8, 8th International Workshop on Worst-Case Execution Time Analysis (WCET'08) (2008)


Abstract
Following the successful WCET Tool Challenge in 2006, the second event in this series was organized in 2008, again with support from the ARTIST2 Network of Excellence. The WCET Tool Challenge 2008 (WCC'08) provides benchmark programs and poses a number of "analysis problems" about the dynamic, run-time properties of these programs. The participants are challenged to solve these problems with their program analysis tools. Two kinds of problems are defined: WCET problems, which ask for bounds on the execution time of chosen parts (subprograms) of the benchmarks, under given constraints on input data; and flow-analysis problems, which ask for bounds on the number of times certain parts of the benchmark can be executed, again under some constraints. We describe the organization of WCC'08, the benchmark programs, the participating tools, and the general results, successes, and failures. Most participants found WCC'08 to be a useful test of their tools. Unlike in the 2006 Challenge, the WCC'08 participants included several tools for the same target (ARM7, LPC2138), tools that combine measurements and static analysis, as well as pure static-analysis tools.

Cite as

Niklas Holsti, Jan Gustafsson, Guillem Bernat, Clément Ballabriga, Armelle Bonenfant, Roman Bourgade, Hugues Cassé, Daniel Cordes, Albrecht Kadlec, Raimund Kirner, Jens Knoop, Paul Lokuciejewski, Nicholas Merriam, Marianne de Michiel, Adrian Prantl, Bernhard Rieder, Christine Rochange, Pascal Sainrat, and Markus Schordan. WCET 2008 -- Report from the Tool Challenge 2008 -- 8th Intl. Workshop on Worst-Case Execution Time (WCET) Analysis. In 8th International Workshop on Worst-Case Execution Time Analysis (WCET'08). Open Access Series in Informatics (OASIcs), Volume 8, pp. 1-23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{holsti_et_al:OASIcs.WCET.2008.1663,
  author =	{Holsti, Niklas and Gustafsson, Jan and Bernat, Guillem and Ballabriga, Cl\'{e}ment and Bonenfant, Armelle and Bourgade, Roman and Cass\'{e}, Hugues and Cordes, Daniel and Kadlec, Albrecht and Kirner, Raimund and Knoop, Jens and Lokuciejewski, Paul and Merriam, Nicholas and de Michiel, Marianne and Prantl, Adrian and Rieder, Bernhard and Rochange, Christine and Sainrat, Pascal and Schordan, Markus},
  title =	{{WCET 2008 -- Report from the Tool Challenge 2008 -- 8th Intl. Workshop on Worst-Case Execution Time (WCET) Analysis}},
  booktitle =	{8th International Workshop on Worst-Case Execution Time Analysis (WCET'08)},
  pages =	{1--23},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-10-1},
  ISSN =	{2190-6807},
  year =	{2008},
  volume =	{8},
  editor =	{Kirner, Raimund},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2008.1663},
  URN =		{urn:nbn:de:0030-drops-16637},
  doi =		{10.4230/OASIcs.WCET.2008.1663},
  annote =	{Keywords: WCET analysis, benchmark}
}
Document
Towards a Common WCET Annotation Language: Essential Ingredients

Authors: Raimund Kirner, Albrecht Kadlec, Adrian Prantl, Markus Schordan, and Jens Knoop

Published in: OASIcs, Volume 8, 8th International Workshop on Worst-Case Execution Time Analysis (WCET'08) (2008)


Abstract
In recent years, ambitions towards the definition of common interfaces and the development of open frameworks have increased the efficiency of research on WCET analysis. The Annotation Language Challenge for WCET analysis has been proposed in line with these ambitions in order to push the development of common interfaces also to the level of annotation languages, which are crucial for the power of WCET analysis tools. In this paper we present a list of essential ingredients for a common WCET annotation language. The selected ingredients comprise a number of features available in different WCET analysis tools and add several new concepts we consider important. The annotation concepts are described in an abstract format that can be instantiated at different representation levels.
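
To make "an abstract format that can be instantiated at different representation levels" concrete, the sketch below models two commonly named ingredients, loop bounds and linear flow constraints, as plain data; the field names and the Python encoding are our illustration, not the syntax proposed in the paper.

from dataclasses import dataclass

@dataclass
class LoopBound:
    location: str         # program point the bound refers to
    max_iterations: int

@dataclass
class FlowConstraint:
    # linear relation between execution counts,
    # e.g. count("then-branch") <= 3 * count("loop-header")
    lhs: str
    coefficient: int
    rhs: str

# The same abstract annotation can be instantiated at source level
# (file:line) or at machine level (basic-block address):
source_level = LoopBound(location="main.c:42", max_iterations=100)
binary_level = LoopBound(location="bb@0x8000", max_iterations=100)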

Cite as

Raimund Kirner, Albrecht Kadlec, Adrian Prantl, Markus Schordan, and Jens Knoop. Towards a Common WCET Annotation Language: Essential Ingredients. In 8th International Workshop on Worst-Case Execution Time Analysis (WCET'08). Open Access Series in Informatics (OASIcs), Volume 8, pp. 1-13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{kirner_et_al:OASIcs.WCET.2008.1657,
  author =	{Kirner, Raimund and Kadlec, Albrecht and Prantl, Adrian and Schordan, Markus and Knoop, Jens},
  title =	{{Towards a Common WCET Annotation Language: Essential Ingredients}},
  booktitle =	{8th International Workshop on Worst-Case Execution Time Analysis (WCET'08)},
  pages =	{1--13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-10-1},
  ISSN =	{2190-6807},
  year =	{2008},
  volume =	{8},
  editor =	{Kirner, Raimund},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2008.1657},
  URN =		{urn:nbn:de:0030-drops-16575},
  doi =		{10.4230/OASIcs.WCET.2008.1657},
  annote =	{Keywords: Worst-case execution time (WCET) analysis, annotation languages, WCET annotation language challenge}
}
Document
TuBound - A Conceptually New Tool for Worst-Case Execution Time Analysis

Authors: Adrian Prantl, Markus Schordan, and Jens Knoop

Published in: OASIcs, Volume 8, 8th International Workshop on Worst-Case Execution Time Analysis (WCET'08) (2008)


Abstract
TuBound is a conceptually new tool for the worst-case execution time (WCET) analysis of programs. A distinctive feature of TuBound is the seamless integration of a WCET analysis component and a compiler in a uniform tool. TuBound enables the programmer to provide hints that improve the precision of the WCET computation on the high-level program source code, while preserving the advantages of using an optimizing compiler and the accuracy of a WCET analysis performed on the low-level machine code. This way, TuBound ideally serves the needs of both the programmer and the WCET analysis by providing each the interface at the very abstraction level that is most appropriate and convenient. In this paper we present the system architecture of TuBound, discuss the internal work-flow of the tool, and report on first measurements using benchmarks from Mälardalen University. TuBound also took part in the WCET Tool Challenge 2008.
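
The interplay of source-level hints, optimization, and low-level analysis can be sketched as a three-stage pipeline. The stages below (annotate, unroll_by, analyze_wcet) and the cost model are placeholders of our own, not TuBound's components; the point is that annotations must be transformed consistently with the code.

def annotate(source, hints):
    # Stage 1: flow hints are stated at source level, where they are
    # easiest for the programmer (or a source analysis) to provide.
    return {"code": source, "hints": dict(hints)}

def unroll_by(ir, factor):
    # Stage 2: an optimizing compiler must update annotations along
    # with the code; unrolling a loop divides its iteration bound.
    ir["hints"] = {loc: bound // factor for loc, bound in ir["hints"].items()}
    return ir

def analyze_wcet(ir, cycles_per_iteration):
    # Stage 3: the WCET bound is computed on the optimized low-level
    # code, but still profits from the (transformed) high-level hints.
    return sum(bound * cycles_per_iteration for bound in ir["hints"].values())

ir = annotate("while (i < n) { ... }", {"main.c:42": 100})
print(analyze_wcet(unroll_by(ir, 2), cycles_per_iteration=20))   # 50 * 20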

Cite as

Adrian Prantl, Markus Schordan, and Jens Knoop. TuBound - A Conceptually New Tool for Worst-Case Execution Time Analysis. In 8th International Workshop on Worst-Case Execution Time Analysis (WCET'08). Open Access Series in Informatics (OASIcs), Volume 8, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{prantl_et_al:OASIcs.WCET.2008.1661,
  author =	{Prantl, Adrian and Schordan, Markus and Knoop, Jens},
  title =	{{TuBound - A Conceptually New Tool for Worst-Case Execution Time Analysis}},
  booktitle =	{8th International Workshop on Worst-Case Execution Time Analysis (WCET'08)},
  pages =	{1--8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-10-1},
  ISSN =	{2190-6807},
  year =	{2008},
  volume =	{8},
  editor =	{Kirner, Raimund},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2008.1661},
  URN =		{urn:nbn:de:0030-drops-16611},
  doi =		{10.4230/OASIcs.WCET.2008.1661},
  annote =	{Keywords: Worst-case execution time (WCET) analysis, Tool Chain, Flow Constraints, Source-To-Source}
}
Document
08161 Abstracts Collection – Scalable Program Analysis

Authors: Florian Martin, Hanne Riis Nielson, Claudio Riva, and Markus Schordan

Published in: Dagstuhl Seminar Proceedings, Volume 8161, Scalable Program Analysis (2008)


Abstract
From April 13 to April 18, 2008, the Dagstuhl Seminar 08161 "Scalable Program Analysis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Florian Martin, Hanne Riis Nielson, Claudio Riva, and Markus Schordan. 08161 Abstracts Collection – Scalable Program Analysis. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{martin_et_al:DagSemProc.08161.1,
  author =	{Martin, Florian and Riis Nielson, Hanne and Riva, Claudio and Schordan, Markus},
  title =	{{08161 Abstracts Collection – Scalable Program Analysis}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--17},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Martin, Florian and Riis Nielson, Hanne and Riva, Claudio and Schordan, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.1},
  URN =		{urn:nbn:de:0030-drops-15766},
  doi =		{10.4230/DagSemProc.08161.1},
  annote =	{Keywords: Static analysis, security, pointer analysis, data flow analysis, error detection, concurrency}
}
Document
Average Case Analysis of Some Elimination-Based Data-Flow Analysis Algorithms

Authors: Johann Blieberger

Published in: Dagstuhl Seminar Proceedings, Volume 8161, Scalable Program Analysis (2008)


Abstract
The average case of some elimination-based data-flow analysis algorithms is analyzed in a mathematical way. Besides allowing the timing behavior of the algorithms to be compared, this also provides insight into how well the underlying statistical assumptions match practical settings.

Cite as

Johann Blieberger. Average Case Analysis of Some Elimination-Based Data-Flow Analysis Algorithms. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{blieberger:DagSemProc.08161.2,
  author =	{Blieberger, Johann},
  title =	{{Average Case Analysis of Some Elimination-Based Data-Flow Analysis Algorithms}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--12},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Martin, Florian and Riis Nielson, Hanne and Riva, Claudio and Schordan, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.2},
  URN =		{urn:nbn:de:0030-drops-15722},
  doi =		{10.4230/DagSemProc.08161.2},
  annote =	{Keywords: Average case analysis, elimination-based data-flow analysis algorithms, reducible flow graphs}
}
Document
Data-Flow Analysis for Multi-Core Computing Systems: A Reminder to Reverse Data-Flow Analysis

Authors: Jens Knoop

Published in: Dagstuhl Seminar Proceedings, Volume 8161, Scalable Program Analysis (2008)


Abstract
The increasing demands for highly performant, proven-correct, easily maintainable, and extensible programs, together with the continuous growth of real-world programs, increase the pressure for powerful and scalable program analyses for program development and code generation. Multi-core computing systems offer new chances for enhancing the scalability of program analyses, if the additional computing power offered by these systems can be used effectively. This, however, poses new challenges on the analysis side. In principle, it requires program analyses that can be easily parallelized and mapped to multi-core architectures. In this paper we revisit reverse data-flow analysis, which was introduced and investigated in the context of demand-driven data-flow analysis, as one class of program analyses that is particularly suitable for this.
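
To see why such analyses map well onto multiple cores: a demand-driven reverse analysis starts from a single program point and walks the control flow graph backwards, and distinct queries share no mutable state. The sketch below is a generic backward reachability query over a predecessor map; it is our illustration of the principle, not the paper's formalization.

from concurrent.futures import ThreadPoolExecutor

def backward_query(preds, start, resolved):
    # Walk the CFG backwards from 'start' until every path is resolved.
    worklist, visited = [start], set()
    while worklist:
        node = worklist.pop()
        if node in visited or resolved(node):
            continue
        visited.add(node)
        worklist.extend(preds.get(node, []))
    return visited   # the program points this query had to inspect

# Independent queries can simply be farmed out to separate cores:
preds = {"d": ["b", "c"], "b": ["a"], "c": ["a"]}
with ThreadPoolExecutor() as pool:
    answers = list(pool.map(
        lambda q: backward_query(preds, q, resolved=lambda n: n == "a"),
        ["d", "b", "c"]))
print(answers)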

Cite as

Jens Knoop. Data-Flow Analysis for Multi-Core Computing Systems: A Reminder to Reverse Data-Flow Analysis. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{knoop:DagSemProc.08161.3,
  author =	{Knoop, Jens},
  title =	{{Data-Flow Analysis for Multi-Core Computing Systems: A Reminder to Reverse Data-Flow Analysis}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Martin, Florian and Riis Nielson, Hanne and Riva, Claudio and Schordan, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.3},
  URN =		{urn:nbn:de:0030-drops-15753},
  doi =		{10.4230/DagSemProc.08161.3},
  annote =	{Keywords: Multi-core computing systems, scalable program analysis, reverse data-flow analysis, demand-driven data-flow analysis}
}
Document
Dependence Cluster Causes

Authors: Dave Binkley

Published in: Dagstuhl Seminar Proceedings, Volume 8161, Scalable Program Analysis (2008)


Abstract
A dependence cluster is a maximal set of program components that all depend upon one another. For small programs, programmers as well as static-analysis tools can overcome the negative effects of large dependence clusters. However, this ability diminishes as program size increases. Thus, the existence of large dependence clusters presents a serious challenge to the scalability of modern software. Recent ongoing work into the existence and causes of dependence clusters is presented. A better understanding of clusters and their causes is a precursor to the construction of more informed analysis tools and, ideally, the eventual breaking or proactive avoidance of large dependence clusters.
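
One way to make "a maximal set of program components that all depend upon one another" operational is to read the components as nodes of a dependence graph and the clusters as its strongly connected components. That reading is our simplification (the dependence-cluster literature also works with slice-based definitions); the sketch uses Kosaraju's algorithm.

def dependence_clusters(deps):
    # deps: component -> components it depends on (a directed graph).
    nodes = set(deps) | {w for ws in deps.values() for w in ws}

    def dfs(graph, v, seen, out):
        # Iterative DFS recording nodes in post-order.
        stack = [(v, iter(graph.get(v, ())))]
        seen.add(v)
        while stack:
            node, neighbours = stack[-1]
            for w in neighbours:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(graph.get(w, ()))))
                    break
            else:
                stack.pop()
                out.append(node)

    order, seen = [], set()
    for v in nodes:
        if v not in seen:
            dfs(deps, v, seen, order)

    transpose = {}
    for v, ws in deps.items():
        for w in ws:
            transpose.setdefault(w, []).append(v)

    clusters, seen = [], set()
    for v in reversed(order):   # second pass on the transposed graph
        if v not in seen:
            cluster = []
            dfs(transpose, v, seen, cluster)
            clusters.append(cluster)
    return clusters

# "a" and "b" depend on each other and therefore form one cluster:
print(dependence_clusters({"a": ["b"], "b": ["a", "c"], "c": []}))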

Cite as

Dave Binkley. Dependence Cluster Causes. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{binkley:DagSemProc.08161.4,
  author =	{Binkley, Dave},
  title =	{{Dependence Cluster Causes}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--13},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Martin, Florian and Riis Nielson, Hanne and Riva, Claudio and Schordan, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.4},
  URN =		{urn:nbn:de:0030-drops-15711},
  doi =		{10.4230/DagSemProc.08161.4},
  annote =	{Keywords: Data Dependence, Control Dependence, Slice, Cluster}
}
Document
Parfait - Designing a Scalable Bug Checker

Authors: Cristina Cifuentes and Bernhard Scholz

Published in: Dagstuhl Seminar Proceedings, Volume 8161, Scalable Program Analysis (2008)


Abstract
We present the design of Parfait, a static, layered program analysis framework for bug checking, designed for scalability and precision: it aims to improve false-positive rates and to scale to millions of lines of code. The Parfait framework is inherently parallelizable and makes use of demand-driven analyses. In this paper we provide an example of several layers of analyses for buffer overflow, summarize our initial implementation for C, and provide preliminary results. Results are quantified in terms of correctly-reported, false-positive, and false-negative rates against the NIST SAMATE synthetic benchmarks for C code.
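
The layering idea is to order analyses by increasing cost and to forward only the still-undecided candidate defects to the next, more expensive layer. A minimal sketch; the three-valued verdicts and the pipeline are our illustration, not Parfait's API.

BUG, SAFE, UNKNOWN = "bug", "safe", "unknown"

def layered_check(candidates, layers):
    # 'layers' are analyses ordered by increasing cost and precision.
    confirmed = []
    for analyze in layers:
        undecided = []
        for candidate in candidates:
            verdict = analyze(candidate)
            if verdict == BUG:
                confirmed.append(candidate)   # report; stop escalating
            elif verdict == UNKNOWN:
                undecided.append(candidate)   # defer to a costlier layer
            # SAFE candidates are simply dropped
        candidates = undecided
    return confirmed, candidates   # confirmed bugs, unresolved residue

# Toy buffer-overflow layer over (buffer size, index) pairs: it can
# only decide accesses whose index is a constant.
cheap = lambda c: (BUG if c[1] >= c[0] else SAFE) if isinstance(c[1], int) else UNKNOWN
print(layered_check([(10, 12), (10, 3), (10, "i")], [cheap]))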

Cite as

Cristina Cifuentes and Bernhard Scholz. Parfait - Designing a Scalable Bug Checker. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{cifuentes_et_al:DagSemProc.08161.5,
  author =	{Cifuentes, Cristina and Scholz, Bernhard},
  title =	{{Parfait - Designing a Scalable Bug Checker}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Martin, Florian and Riis Nielson, Hanne and Riva, Claudio and Schordan, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.5},
  URN =		{urn:nbn:de:0030-drops-15737},
  doi =		{10.4230/DagSemProc.08161.5},
  annote =	{Keywords: Static analysis, demand driven, parallelizable}
}
Document
Scalable Analysis via Machine Learning: Predicting Memory Dependencies Precisely

Authors: Lars Gesellensetter

Published in: Dagstuhl Seminar Proceedings, Volume 8161, Scalable Program Analysis (2008)


Abstract
Using machine learning to yield scalable program analyses: program analysis tackles the problem of predicting the behavior or certain properties of the considered program code. The challenge lies in determining the dynamic run-time behavior statically, at compile time. While in rare cases it is possible to determine exact dynamic properties statically, in many cases, e.g., when analyzing memory dependencies, only imprecise information can be found. To overcome this, we apply machine learning (ML) techniques, which are particularly suited for this task: they yield highly scalable predictors and are safely applicable when erroneous predictions merely have an impact on program optimality but not on correctness. In this talk, I present our approach to mitigating the impact of the memory gap. Over the last decade, computer performance has often been dominated by memory speed, which has not managed to keep pace with ever-increasing CPU rates. We consider novel speculative optimization techniques for memory accesses to reduce their effective latency. We trained predictors to learn the memory dependencies of a given pair of accesses, and use the result in our optimization to decide about the profitability of a given optimization step.
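
The general recipe, training a classifier offline on features of access pairs and consulting it only where a wrong answer costs performance rather than correctness, can be sketched with scikit-learn. The features, training data, and choice of learner below are invented for illustration; the paper's actual feature set is not reproduced here.

from sklearn.tree import DecisionTreeClassifier

# Toy features per access pair: (same base register?, byte distance of
# constant offsets, difference in loop depth). Illustrative only.
X = [[1, 0, 0], [1, 8, 0], [0, 4, 1], [0, 0, 2], [1, 0, 1]]
y = [1, 0, 0, 0, 1]   # 1 = the two accesses may depend on each other

model = DecisionTreeClassifier(random_state=0).fit(X, y)

def may_depend(features):
    # A misprediction costs only an optimization opportunity (or a
    # speculation rollback), never correctness; that safety property
    # is what makes machine learning admissible here.
    return bool(model.predict([features])[0])

print(may_depend([1, 0, 0]))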

Cite as

Lars Gesellensetter. Scalable Analysis via Machine Learning: Predicting Memory Dependencies Precisely. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{gesellensetter:DagSemProc.08161.6,
  author =	{Gesellensetter, Lars},
  title =	{{Scalable Analysis via Machine Learning: Predicting Memory Dependencies Precisely}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--3},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Martin, Florian and Riis Nielson, Hanne and Riva, Claudio and Schordan, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.6},
  URN =		{urn:nbn:de:0030-drops-15745},
  doi =		{10.4230/DagSemProc.08161.6},
  annote =	{Keywords: Program Analysis, Alias Analysis, Memory Depdencies, Speculative Optimizations, Machine Learning}
}
Document
Source-To-Source Analysis with SATIrE - an Example Revisited

Authors: Markus Schordan

Published in: Dagstuhl Seminar Proceedings, Volume 8161, Scalable Program Analysis (2008)


Abstract
Source-to-source analysis aims at supporting the reuse of analysis results, similar to code reuse. The reuse of program code is a common technique which attempts to save time and costs by reducing redundant work. We want to avoid re-analyzing parts of a software system, such as library code. In the ideal case the analysis results are directly associated with the program itself. Source-to-source analysis supports this through program annotations. Furthermore, to get the best out of available software analysis tools, we aim at enabling the combination of the analysis results of different tools. In order to allow this, tools must be able to process one another's analysis results. This enables numerous applications such as automatic annotation of interfaces, testing of analyses by checking the results of an analysis against provided annotations, domain-aware analysis by utilizing domain-specific program annotations, and making analysis results persistent as annotations in source code. The design of the Static Analysis Tool Integration Engine (SATIrE) allows mapping source code annotations to its intermediate program representation as well as generating source code annotations from analysis results that are attached to the intermediate representation. The technical challenges are the design of the analysis information annotation language, the bidirectional propagation of the analysis information through the different phases of the internal translation processes, and the combination of the different analyses through the plug-in mechanism. In its current version SATIrE targets C/C++ programs. In this paper we present the approach of source-to-source analysis and show, in a detailed example analysis, how we support this approach in SATIrE.
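
The bidirectional mapping described above, lifting source annotations onto the intermediate representation and writing analysis results back out as annotations, can be pictured with a toy round-trip. The comment syntax and the line-number "IR" below are our invention, not ARAL.

import re

ANNOT = re.compile(r"/\*\s*analysis:\s*(\w+)=(\w+)\s*\*/")

def parse_annotations(source):
    # Lift annotations into facts keyed by line number; a real frontend
    # would attach them to AST or ICFG nodes instead.
    return {i: m.groups()
            for i, line in enumerate(source.splitlines())
            if (m := ANNOT.search(line))}

def emit_annotations(source, facts):
    # Write (possibly refined) facts back into the source text.
    out = []
    for i, line in enumerate(source.splitlines()):
        text = ANNOT.sub("", line).rstrip()
        if i in facts:
            key, value = facts[i]
            text += f"  /* analysis: {key}={value} */"
        out.append(text)
    return "\n".join(out)

src = "x = y + z;  /* analysis: interval=top */"
facts = parse_annotations(src)
facts[0] = ("interval", "positive")   # an analysis refined the fact
print(emit_annotations(src, facts))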

Cite as

Markus Schordan. Source-To-Source Analysis with SATIrE - an Example Revisited. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{schordan:DagSemProc.08161.7,
  author =	{Schordan, Markus},
  title =	{{Source-To-Source Analysis with SATIrE - an Example Revisited}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--13},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Martin, Florian and Riis Nielson, Hanne and Riva, Claudio and Schordan, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.7},
  URN =		{urn:nbn:de:0030-drops-15693},
  doi =		{10.4230/DagSemProc.08161.7},
  annote =	{Keywords: Source-to-source analysis, ARAL, Annotation Language}
}
Document
Towards Distributed Memory Parallel Program Analysis

Authors: Daniel J. Quinlan, Gergö Barany, and Thomas Panas

Published in: Dagstuhl Seminar Proceedings, Volume 8161, Scalable Program Analysis (2008)


Abstract
Our work presents a parallel attribute evaluation for distributed-memory parallel computer architectures, where previously only shared-memory parallel support for this technique had been developed. Attribute evaluation is part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed-memory parallel attribute evaluation mechanism to support user-defined global program analyses required for some forms of security analysis, which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify how data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.
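
The division of labour, per-file attribute evaluation in parallel while the framework rather than the user moves results between processes, can be sketched with Python's multiprocessing as a shared-nothing stand-in for a distributed-memory machine. The toy "attribute" and the function names are ours, not ROSE's API.

from multiprocessing import Pool

def evaluate_file(item):
    # Local (per-file) attribute evaluation; only the synthesized facts
    # that must be globally visible are returned to the coordinator.
    path, source = item
    return path, {"call_sites": source.count("(")}   # toy attribute

def global_analysis(files):
    # How the results travel between processes is decided here, by the
    # framework (the Pool's result channel), not by the analysis writer.
    with Pool() as pool:
        return dict(pool.map(evaluate_file, sorted(files.items())))

if __name__ == "__main__":
    print(global_analysis({"a.c": "f(); g();", "b.c": "h(x);"}))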

Cite as

Daniel J. Quinlan, Gergö Barany, and Thomas Panas. Towards Distributed Memory Parallel Program Analysis. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{quinlan_et_al:DagSemProc.08161.8,
  author =	{Quinlan, Daniel J. and Barany, Gerg\"{o} and Panas, Thomas},
  title =	{{Towards Distributed Memory Parallel Program Analysis}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--9},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Martin, Florian and Riis Nielson, Hanne and Riva, Claudio and Schordan, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.8},
  URN =		{urn:nbn:de:0030-drops-15685},
  doi =		{10.4230/DagSemProc.08161.8},
  annote =	{Keywords: Parallel computing, attribute evaluation, program analysis}
}
Document
Value Flow Graph Analysis with SATIrE

Authors: Gergö Barany

Published in: Dagstuhl Seminar Proceedings, Volume 8161, Scalable Program Analysis (2008)


Abstract
Partial redundancy elimination is a common program optimization that attempts to improve execution time by removing superfluous computations from a program. There are two well-known classes of such techniques: syntactic and semantic methods. While semantic optimization is more powerful, traditional algorithms based on SSA form are complicated, heuristic in nature, and unable to perform certain useful optimizations. The value flow graph is a syntactic program representation modeling semantic equivalences; it allows the combination of simple syntactic partial redundancy elimination with a powerful semantic analysis. This yields an optimization that is computationally optimal and simpler than traditional semantic methods. This talk discusses partial redundancy elimination using the value flow graph. A source-to-source optimizer for C++ was implemented using the SATIrE program analysis and transformation system. Two tools integrated in SATIrE were used in the implementation: ROSE is a framework for arbitrary analyses and source-to-source transformations of C++ programs, and PAG is a tool for generating data-flow analyzers from functional specifications.
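
Why a semantic representation pays off: after the copies x = a and y = b, the expressions a + b and x + y differ syntactically but compute the same value, so a purely syntactic method misses the redundancy. The sketch below demonstrates the underlying idea with local value numbering; it is our illustration of the principle, not the value flow graph construction itself.

def local_value_numbering(stmts):
    # stmts: (target, expr) pairs, where expr is a variable name (a copy)
    # or a tuple (op, a, b).  Returns the targets whose value was
    # already computed under another name.
    numbers, var_num = {}, {}
    fresh = iter(range(1_000_000))

    def number(v):
        if v not in var_num:
            var_num[v] = next(fresh)   # unknown inputs get fresh numbers
        return var_num[v]

    redundant = []
    for target, expr in stmts:
        if isinstance(expr, tuple):
            op, a, b = expr
            key = (op, number(a), number(b))
            if key in numbers:
                redundant.append(target)   # semantically redundant
            else:
                numbers[key] = next(fresh)
            var_num[target] = numbers[key]
        else:
            var_num[target] = number(expr)   # copies share the number
    return redundant

print(local_value_numbering([
    ("x", "a"), ("y", "b"),
    ("t1", ("+", "a", "b")),
    ("t2", ("+", "x", "y")),   # reported: same value as t1
]))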

Cite as

Gergö Barany. Value Flow Graph Analysis with SATIrE. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{barany:DagSemProc.08161.9,
  author =	{Barany, Gerg\"{o}},
  title =	{{Value Flow Graph Analysis with SATIrE}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--6},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Martin, Florian and Riis Nielson, Hanne and Riva, Claudio and Schordan, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.9},
  URN =		{urn:nbn:de:0030-drops-15709},
  doi =		{10.4230/DagSemProc.08161.9},
  annote =	{Keywords: Partial redundancy elimination, value flow analysis, source-to-source optimization}
}
Document
WCET Analysis: The Annotation Language Challenge

Authors: Raimund Kirner, Jens Knoop, Adrian Prantl, Markus Schordan, and Ingomar Wenzel

Published in: OASIcs, Volume 6, 7th International Workshop on Worst-Case Execution Time Analysis (WCET'07) (2007)


Abstract
Worst-case execution time (WCET) analysis is indispensable for the successful design and development of systems which, in addition to their functional constraints, have to satisfy hard real-time constraints. The expressiveness and usability of annotation languages, which are used by algorithms and tools for WCET analysis in order to separate feasible from infeasible program paths, have a crucial impact on the precision and performance of these algorithms and tools. In this paper we thus propose to complement the WCET tool challenge, which has recently been launched successfully, by a second, closely related challenge: the WCET annotation language challenge. We believe that contributions towards mastering this challenge will be essential for the next major step in advancing the field of WCET analysis.

Cite as

Raimund Kirner, Jens Knoop, Adrian Prantl, Markus Schordan, and Ingomar Wenzel. WCET Analysis: The Annotation Language Challenge. In 7th International Workshop on Worst-Case Execution Time Analysis (WCET'07). Open Access Series in Informatics (OASIcs), Volume 6, pp. 1-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2007)


BibTeX

@InProceedings{kirner_et_al:OASIcs.WCET.2007.1197,
  author =	{Kirner, Raimund and Knoop, Jens and Prantl, Adrian and Schordan, Markus and Wenzel, Ingomar},
  title =	{{WCET Analysis: The Annotation Language Challenge}},
  booktitle =	{7th International Workshop on Worst-Case Execution Time Analysis (WCET'07)},
  pages =	{1--17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-05-7},
  ISSN =	{2190-6807},
  year =	{2007},
  volume =	{6},
  editor =	{Rochange, Christine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.WCET.2007.1197},
  URN =		{urn:nbn:de:0030-drops-11974},
  doi =		{10.4230/OASIcs.WCET.2007.1197},
  annote =	{Keywords: Worst-case execution time analysis, WCET, path description, annotation language challenge, expressiveness, convenience}
}
  • Refine by Author
  • 7 Schordan, Markus
  • 6 Knoop, Jens
  • 5 Prantl, Adrian
  • 4 Kirner, Raimund
  • 3 Kadlec, Albrecht

  • Refine by Keyword
  • 2 Static analysis
  • 2 WCET analysis
  • 2 Worst-case execution time (WCET) analysis
  • 1 ARAL
  • 1 Alias Analysis

  • Refine by Type
  • 14 document

  • Refine by Publication Year
  • 12 2008
  • 1 2007
  • 1 2009
