Dagstuhl Seminar Proceedings, Volume 8161



Publication Details

  • Published: 2008-08-28
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik

Documents

Document
08161 Abstracts Collection – Scalable Program Analysis

Authors: Florian Martin, Hanne Riis Nielson, Claudio Riva, and Markus Schordan


Abstract
From April 13 to April 18, 2008, the Dagstuhl Seminar 08161 "Scalable Program Analysis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Florian Martin, Hanne Riis Nielson, Claudio Riva, and Markus Schordan. 08161 Abstracts Collection – Scalable Program Analysis. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{martin_et_al:DagSemProc.08161.1,
  author =	{Martin, Florian and Riis Nielson, Hanne and Riva, Claudio and Schordan, Markus},
  title =	{{08161 Abstracts Collection – Scalable Program Analysis}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--17},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Florian Martin and Hanne Riis Nielson and Claudio Riva and Markus Schordan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.1},
  URN =		{urn:nbn:de:0030-drops-15766},
  doi =		{10.4230/DagSemProc.08161.1},
  annote =	{Keywords: Static analysis, security, pointer analysis, data flow analysis, error detection, concurrency}
}
Document
Average Case Analysis of Some Elimination-Based Data-Flow Analysis Algorithms

Authors: Johann Blieberger


Abstract
The average case of some elimination-based data-flow analysis algorithms is analyzed mathematically. Besides allowing a comparison of the algorithms' timing behavior, this also provides insight into how well the underlying statistical assumptions match practical settings.
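As a rough, illustrative sketch of the elimination idea these algorithms share (not the paper's algorithms; all names here are hypothetical): instead of iterating data-flow equations to a fixed point, transfer functions are composed along the graph, shown below for an acyclic flow graph with gen/kill bit-vector transfer functions.

def compose(f, g):
    # apply f, then g; gen/kill transfer functions compose in closed form:
    # g(f(X)) = gen_g | (gen_f - kill_g) | (X - (kill_f | kill_g))
    (gen_f, kill_f), (gen_g, kill_g) = f, g
    return (gen_g | (gen_f - kill_g), kill_f | kill_g)

def eliminate(order, preds, funcs, entry_fact):
    # order: topological order of the acyclic flow graph's nodes;
    # preds[v]: predecessors of v; funcs[(u, v)]: the (gen, kill) pair
    # of edge (u, v); every non-entry node must have a predecessor.
    path = {order[0]: (set(), set())}            # identity function at entry
    for v in order[1:]:
        fs = [compose(path[u], funcs[(u, v)]) for u in preds[v]]
        # merge point of a may-analysis: union of gens, intersection of kills
        path[v] = (set().union(*(g for g, _ in fs)),
                   set.intersection(*(k for _, k in fs)))
    # one application of the composed function yields the fact at each node
    return {v: gen | (entry_fact - kill) for v, (gen, kill) in path.items()}

Handling loops additionally requires closing the composed function over each loop body, which is roughly where elimination methods differ and where their average-case behavior becomes interesting.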

Cite as

Johann Blieberger. Average Case Analysis of Some Elimination-Based Data-Flow Analysis Algorithms. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{blieberger:DagSemProc.08161.2,
  author =	{Blieberger, Johann},
  title =	{{Average Case Analysis of Some Elimination-Based Data-Flow Analysis Algorithms}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--12},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Florian Martin and Hanne Riis Nielson and Claudio Riva and Markus Schordan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.2},
  URN =		{urn:nbn:de:0030-drops-15722},
  doi =		{10.4230/DagSemProc.08161.2},
  annote =	{Keywords: Average case analysis, elimination-based data-flow analysis algorithms, reducible flow graphs}
}
Document
Data-Flow Analysis for Multi-Core Computing Systems: A Reminder to Reverse Data-Flow Analysis

Authors: Jens Knoop


Abstract
The increasing demands for highly performant, proven-correct, easily maintainable, extensible programs, together with the continuous growth of real-world programs, strengthen the pressure for powerful and scalable program analyses for program development and code generation. Multi-core computing systems offer new chances for enhancing the scalability of program analyses, if the additional computing power offered by these systems can be used effectively. This, however, poses new challenges on the analysis side. In principle, it requires program analyses that can easily be parallelized and mapped to multi-core architectures. In this paper we recall reverse data-flow analysis, which was introduced and investigated in the context of demand-driven data-flow analysis, as one class of program analyses that is particularly suitable for this.
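As a hedged illustration of why this class parallelizes well (a toy sketch, not Knoop's formulation; all names invented): in a demand-driven setting, each reverse query (for example, "may the definition of a variable at one node reach another node?") triggers its own independent backward search over the control-flow graph, so separate queries can be distributed over separate cores.

def reaches(preds, def_of, var, site, n):
    # May the definition of `var` at node `site` reach node `n`?
    # Reverse search from n; any node redefining `var` blocks the path.
    seen, work = set(), list(preds.get(n, ()))
    while work:
        m = work.pop()
        if m in seen:
            continue
        seen.add(m)
        if m == site:
            return True                  # reached the defining node
        if def_of.get(m) == var:
            continue                     # definition is killed here
        work.extend(preds.get(m, ()))
    return False

# Independent queries map naturally onto cores, e.g. (sketch):
#   from multiprocessing import Pool
#   with Pool() as pool:
#       answers = pool.starmap(reaches, queries)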

Cite as

Jens Knoop. Data-Flow Analysis for Multi-Core Computing Systems: A Reminder to Reverse Data-Flow Analysis. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{knoop:DagSemProc.08161.3,
  author =	{Knoop, Jens},
  title =	{{Data-Flow Analysis for Multi-Core Computing Systems: A Reminder to Reverse Data-Flow Analysis}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Florian Martin and Hanne Riis Nielson and Claudio Riva and Markus Schordan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.3},
  URN =		{urn:nbn:de:0030-drops-15753},
  doi =		{10.4230/DagSemProc.08161.3},
  annote =	{Keywords: Multi-core computing systems, scalable program analysis, reverse data-flow analysis, demand-driven data-flow analysis}
}
Document
Dependence Cluster Causes

Authors: Dave Binkley


Abstract
A dependence cluster is a maximal set of program components that all depend upon one another. For small programs, programmers as well as static-analysis tools can overcome the negative effects of large dependence clusters. However, this ability diminishes as program size increases. Thus, the existence of large dependence clusters presents a serious challenge to the scalability of modern software. Recent ongoing work into the existence and causes of dependence clusters is presented. A better understanding of clusters and their causes is a precursor to the construction of more informed analysis tools and, ideally, the eventual breaking or proactive avoidance of large dependence clusters.
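Reading "components that all depend upon one another" as mutual reachability, a dependence cluster corresponds to a strongly connected component of the program's dependence graph. A small self-contained sketch (the dependence graph below is invented):

def reachable(adj, start):
    # all nodes reachable from start, including start itself
    seen, work = set(), [start]
    while work:
        n = work.pop()
        if n not in seen:
            seen.add(n)
            work.extend(adj.get(n, ()))
    return seen

def clusters(adj):
    reach = {n: reachable(adj, n) for n in adj}
    # n and m share a cluster iff each reaches the other
    groups = {frozenset(m for m in adj
                        if m in reach[n] and n in reach[m])
              for n in adj}
    return [set(g) for g in groups if len(g) > 1]

deps = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["a"]}
print(clusters(deps))   # one cluster: {a, b, c}; d depends on it but is outside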

Cite as

Dave Binkley. Dependence Cluster Causes. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{binkley:DagSemProc.08161.4,
  author =	{Binkley, Dave},
  title =	{{Dependence Cluster Causes}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--13},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Florian Martin and Hanne Riis Nielson and Claudio Riva and Markus Schordan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.4},
  URN =		{urn:nbn:de:0030-drops-15711},
  doi =		{10.4230/DagSemProc.08161.4},
  annote =	{Keywords: Data Dependence, Control Dependence, Slice, Cluster}
}
Document
Parfait - Designing a Scalable Bug Checker

Authors: Cristina Cifuentes and Bernhard Scholz


Abstract
We present the design of Parfait, a static layered program analysis framework for bug checking, designed for scalability and precision: it aims to scale to millions of lines of code while keeping false positive rates low. The Parfait framework is inherently parallelizable and makes use of demand-driven analyses. In this paper we provide an example of several layers of analyses for buffer overflow, summarize our initial implementation for C, and provide preliminary results. Results are quantified in terms of correctly reported, false positive, and false negative rates against the NIST SAMATE synthetic benchmarks for C code.
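A minimal sketch of the layered design (names hypothetical, not Parfait's API): candidate bug sites flow through layers ordered from cheap to expensive; each layer confirms a bug, proves a site safe, or defers it to the next, more precise layer, so the common case stays fast and each layer remains parallelizable across independent candidates.

def run_layers(candidates, layers):
    # layers: analysis functions ordered cheap -> expensive, each
    # returning "bug", "safe", or None (undecided at this precision)
    reports = []
    for analyse in layers:
        undecided = []
        for c in candidates:
            verdict = analyse(c)
            if verdict == "bug":
                reports.append(c)         # confirmed, report it
            elif verdict is None:
                undecided.append(c)       # escalate to the next layer
        candidates = undecided
    return reports, candidates            # confirmed bugs, still unknown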

Cite as

Cristina Cifuentes and Bernhard Scholz. Parfait - Designing a Scalable Bug Checker. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{cifuentes_et_al:DagSemProc.08161.5,
  author =	{Cifuentes, Cristina and Scholz, Bernhard},
  title =	{{Parfait - Designing a Scalable Bug Checker}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Florian Martin and Hanne Riis Nielson and Claudio Riva and Markus Schordan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.5},
  URN =		{urn:nbn:de:0030-drops-15737},
  doi =		{10.4230/DagSemProc.08161.5},
  annote =	{Keywords: Static analysis, demand driven, parallelizable}
}
Document
Scalable Analysis via Machine Learning: Predicting Memory Dependencies Precisely

Authors: Lars Gesellensetter


Abstract
Program analysis tackles the problem of predicting the behavior, or certain properties, of the program code under consideration. The challenge lies in determining the dynamic runtime behavior statically, at compile time. While in rare cases exact dynamic properties can be determined statically, in many cases, e.g., when analyzing memory dependencies, only imprecise information can be found. To overcome this, we apply machine learning (ML) techniques, which are particularly suited for this task. They yield highly scalable predictors and are safely applicable when erroneous predictions merely have an impact on program optimality, but not on correctness. In this talk, I present our approach to mitigating the impact of the memory gap. Over the last decade, computer performance has often been dominated by memory speed, which has not managed to keep pace with ever-increasing CPU rates. We consider novel speculative optimization techniques for memory accesses to reduce their effective latency. We trained predictors to learn the memory dependencies of a given pair of accesses, and use the result in our optimizer to decide about the profitability of a given optimization step.
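As a hedged illustration of the general recipe (the features, data, and labels below are invented, and scikit-learn is assumed; this is not the paper's model): a classifier is trained on cheap features of a pair of memory accesses and predicts whether they may depend. A wrong prediction only costs optimization opportunity, not correctness, provided the speculative optimization is guarded at run time.

from sklearn.tree import DecisionTreeClassifier

# one row per access pair; columns (invented features): same base
# pointer?, |offset delta| in bytes, same loop nest?, pointer
# arithmetic involved?
X = [[1, 0, 1, 0],
     [1, 8, 1, 0],
     [0, 0, 0, 1],
     [0, 4, 1, 1]]
y = [1, 0, 0, 1]   # 1 = dependence observed in training runs (invented labels)

model = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(model.predict([[1, 0, 1, 1]]))   # prediction for an unseen pair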

Cite as

Lars Gesellensetter. Scalable Analysis via Machine Learning: Predicting Memory Dependencies Precisely. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{gesellensetter:DagSemProc.08161.6,
  author =	{Gesellensetter, Lars},
  title =	{{Scalable Analysis via Machine Learning: Predicting Memory Dependencies Precisely}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--3},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Florian Martin and Hanne Riis Nielson and Claudio Riva and Markus Schordan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.6},
  URN =		{urn:nbn:de:0030-drops-15745},
  doi =		{10.4230/DagSemProc.08161.6},
  annote =	{Keywords: Program Analysis, Alias Analysis, Memory Dependencies, Speculative Optimizations, Machine Learning}
}
Document
Source-To-Source Analysis with SATIrE - an Example Revisited

Authors: Markus Schordan


Abstract
Source-to-source analysis aims at supporting the reuse of analysis results, similar to code reuse. The reuse of program code is a common technique which attempts to save time and costs by reducing redundant work. We want to avoid re-analyzing parts of a software system, such as library code. In the ideal case the analysis results are directly associated with the program itself. Source-to-source analysis supports this through program annotations. Furthermore, to get the best out of available software analysis tools, we aim at enabling the combination of the analysis results of different tools. To allow this, tools must be able to process another tool's analysis results. This enables numerous applications such as automatic annotation of interfaces, testing of analyses by checking the results of an analysis against provided annotations, domain-aware analysis by utilizing domain-specific program annotations, and making analysis results persistent as annotations in source code. The design of the Static Analysis Tool Integration Engine (SATIrE) allows mapping source code annotations to its intermediate program representation as well as generating source code annotations from analysis results attached to the intermediate representation. The technical challenges are the design of the analysis information annotation language, the bidirectional propagation of the analysis information through different phases of the internal translation processes, and the combination of the different analyses through the plug-in mechanism. In its current version SATIrE targets C/C++ programs. In this paper we present the approach of source-to-source analysis and show in a detailed example analysis how we support this approach in SATIrE.
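A minimal sketch of the write-back half of such a workflow (the annotation syntax below is invented; SATIrE's actual annotation language is ARAL): analysis facts keyed by source line are emitted back into the program text as comments, so that a later tool run can parse them instead of re-analyzing.

def annotate(source_lines, facts):
    # facts: {line_number: "analysis fact"} produced by a previous run
    out = []
    for i, line in enumerate(source_lines, start=1):
        if i in facts:
            out.append(f"// @analysis: {facts[i]}")
        out.append(line)
    return out

code = ["int f(int* p) {", "  return *p + 1;", "}"]
print("\n".join(annotate(code, {2: "p is non-null"})))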

Cite as

Markus Schordan. Source-To-Source Analysis with SATIrE - an Example Revisited. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{schordan:DagSemProc.08161.7,
  author =	{Schordan, Markus},
  title =	{{Source-To-Source Analysis with SATIrE - an Example Revisited}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--13},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Florian Martin and Hanne Riis Nielson and Claudio Riva and Markus Schordan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.7},
  URN =		{urn:nbn:de:0030-drops-15693},
  doi =		{10.4230/DagSemProc.08161.7},
  annote =	{Keywords: Source-to-source analysis, ARAL, Annotation Language}
}
Document
Towards Distributed Memory Parallel Program Analysis

Authors: Daniel J. Quinlan, Gergö Barany, and Thomas Panas


Abstract
Our work presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is a part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.
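To illustrate the shape of such an analysis (a toy sketch under assumed names, not ROSE's API; mpi4py stands in for the MPI layer and the data is invented): each rank evaluates a synthesized attribute bottom-up over its share of the per-file ASTs, and the partial results are gathered and merged into the whole-program result, with the communication hidden from the analysis writer.

from mpi4py import MPI

# toy per-file "ASTs"; in reality these would come from the frontend
all_file_asts = [
    {"kind": "call", "children": []},
    {"kind": "fn", "children": [{"kind": "call", "children": []}]},
]

def count_calls(node):
    # a synthesized attribute: call sites in the subtree, computed bottom-up
    return (node["kind"] == "call") + sum(count_calls(c) for c in node["children"])

comm = MPI.COMM_WORLD
mine = all_file_asts[comm.Get_rank()::comm.Get_size()]   # this rank's share
partial = sum(count_calls(ast) for ast in mine)
totals = comm.gather(partial, root=0)                    # merge step
if comm.Get_rank() == 0:
    print("whole-program call sites:", sum(totals))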

Cite as

Daniel J. Quinlan, Gergö Barany, and Thomas Panas. Towards Distributed Memory Parallel Program Analysis. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{quinlan_et_al:DagSemProc.08161.8,
  author =	{Quinlan, Daniel J. and Barany, Gerg\"{o} and Panas, Thomas},
  title =	{{Towards Distributed Memory Parallel Program Analysis}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--9},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Florian Martin and Hanne Riis Nielson and Claudio Riva and Markus Schordan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.8},
  URN =		{urn:nbn:de:0030-drops-15685},
  doi =		{10.4230/DagSemProc.08161.8},
  annote =	{Keywords: Parallel computing, attribute evaluation, program analysis}
}
Document
Value Flow Graph Analysis with SATIrE

Authors: Gergö Barany


Abstract
Partial redundancy elimination is a common program optimization that attempts to improve execution time by removing superfluous computations from a program. There are two well-known classes of such techniques: syntactic and semantic methods. While semantic optimization is more powerful, traditional algorithms based on SSA form are complicated, heuristic in nature, and unable to perform certain useful optimizations. The value flow graph is a syntactic program representation modeling semantic equivalences; it allows the combination of simple syntactic partial redundancy elimination with a powerful semantic analysis. This yields an optimization that is computationally optimal and simpler than traditional semantic methods. This talk discusses partial redundancy elimination using the value flow graph. A source-to-source optimizer for C++ was implemented using the SATIrE program analysis and transformation system. Two tools integrated in SATIrE were used in the implementation: ROSE is a framework for arbitrary analyses and source-to-source transformations of C++ programs, and PAG is a tool for generating data-flow analyzers from functional specifications.
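A tiny illustration of the syntactic/semantic gap that motivates the value flow graph (a toy value-numbering pass, not the value flow graph construction itself): after the copy b = a, the expressions a + 1 and b + 1 differ syntactically but compute the same value, so a purely syntactic method misses the redundancy.

from itertools import count

def value_numbers(block):
    # block: list of (dst, op, args) three-address instructions
    vn, table, redundant = {}, {}, []
    fresh = count()
    for dst, op, args in block:
        if op == "copy":                     # copies just forward the value number
            vn[dst] = vn.get(args[0], args[0])
            continue
        key = (op, tuple(vn.get(a, a) for a in args))
        if key in table:                     # same operator applied to same values
            redundant.append(dst)
            vn[dst] = table[key]
        else:
            vn[dst] = table[key] = next(fresh)
    return redundant

block = [("b", "copy", ("a",)),
         ("t1", "add", ("a", "1")),
         ("t2", "add", ("b", "1"))]          # t2 recomputes t1's value
print(value_numbers(block))                  # ['t2']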

Cite as

Gergö Barany. Value Flow Graph Analysis with SATIrE. In Scalable Program Analysis. Dagstuhl Seminar Proceedings, Volume 8161, pp. 1-6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{barany:DagSemProc.08161.9,
  author =	{Barany, Gerg\"{o}},
  title =	{{Value Flow Graph Analysis with SATIrE}},
  booktitle =	{Scalable Program Analysis},
  pages =	{1--6},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8161},
  editor =	{Florian Martin and Hanne Riis Nielson and Claudio Riva and Markus Schordan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.08161.9},
  URN =		{urn:nbn:de:0030-drops-15709},
  doi =		{10.4230/DagSemProc.08161.9},
  annote =	{Keywords: Partial redundancy elimination, value flow analysis, source-to-source optimization}
}
