Dagstuhl Seminar Proceedings, Volume 10111



Publication Details

  • Published: 2010-06-28
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik

Documents
Document
10111 Abstracts Collection – Practical Software Testing : Tool Automation and Human Factors

Authors: Mark Harman, Henry Muccini, Wolfram Schulte, and Tao Xie


Abstract
From March 14 to March 19, 2010, the Dagstuhl Seminar 10111 "Practical Software Testing : Tool Automation and Human Factors" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are collected in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

Cite as

Mark Harman, Henry Muccini, Wolfram Schulte, and Tao Xie. 10111 Abstracts Collection – Practical Software Testing : Tool Automation and Human Factors. In Practical Software Testing : Tool Automation and Human Factors. Dagstuhl Seminar Proceedings, Volume 10111, pp. 1-11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{harman_et_al:DagSemProc.10111.1,
  author =	{Harman, Mark and Muccini, Henry and Schulte, Wolfram and Xie, Tao},
  title =	{{10111 Abstracts Collection – Practical Software Testing : Tool Automation and Human Factors}},
  booktitle =	{Practical Software Testing : Tool Automation and Human Factors},
  pages =	{1--11},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10111},
  editor =	{Mark Harman and Henry Muccini and Wolfram Schulte and Tao Xie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10111.1},
  URN =		{urn:nbn:de:0030-drops-26267},
  doi =		{10.4230/DagSemProc.10111.1},
  annote =	{Keywords: Software testing, Test generation, Test automation, Test oracles, Testing tools, Human-computer interaction, Code-based testing, Specification-based testing}
}
Document
10111 Executive Summary – Practical Software Testing: Tool Automation and Human Factors

Authors: Mark Harman, Henry Muccini, Wolfram Schulte, and Tao Xie


Abstract
The main goal of the seminar "Practical Software Testing: Tool Automation and Human Factors" was to bring together academics working on algorithms, methods, and techniques for practical software testing with practitioners interested in developing more soundly based and well-understood testing processes and practices. The seminar's purpose was to make researchers aware of industry's problems, and practitioners aware of research approaches. The seminar focused in particular on test automation and human factors. In the week of March 14-19, 2010, 40 researchers from 11 countries (Canada, France, Germany, Italy, Luxembourg, the Netherlands, Sweden, Switzerland, South Africa, the United Kingdom, and the United States) discussed their recent work and recent and future trends in software testing. The seminar consisted of five main types of presentations and activities: topic-oriented presentations, research-oriented presentations, short self-introduction presentations, tool demos, and working-group meetings and presentations.

Cite as

Mark Harman, Henry Muccini, Wolfram Schulte, and Tao Xie. 10111 Executive Summary – Practical Software Testing: Tool Automation and Human Factors. In Practical Software Testing : Tool Automation and Human Factors. Dagstuhl Seminar Proceedings, Volume 10111, pp. 1-5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{harman_et_al:DagSemProc.10111.2,
  author =	{Harman, Mark and Muccini, Henry and Schulte, Wolfram and Xie, Tao},
  title =	{{10111 Executive Summary – Practical Software Testing: Tool Automation and Human Factors}},
  booktitle =	{Practical Software Testing : Tool Automation and Human Factors},
  pages =	{1--5},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10111},
  editor =	{Mark Harman and Henry Muccini and Wolfram Schulte and Tao Xie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10111.2},
  URN =		{urn:nbn:de:0030-drops-26234},
  doi =		{10.4230/DagSemProc.10111.2},
  annote =	{Keywords: Software testing, Test generation, Test automation, Test oracles, Testing tools, Human-computer interaction, Code-based testing, Specification-based testing}
}
Document
AUTOMOCK: Automated Synthesis of a Mock Environment for Test Case Generation

Authors: Nadia Alshahwan, Yue Jia, Kiran Lakhotia, Gordon Fraser, David Shuler, and Paolo Tonella


Abstract
During testing, there are several reasons to exclude some of the components used by the unit under test, such as: (1) the component affects the state of the world in an irreversible way; (2) the component is not accessible for testing purposes (e.g., a web service); (3) the component introduces a major performance degradation to the testing phase (e.g., due to long computations); (4) it is hard (i.e., statistically unlikely) to obtain the output required by the test from the component. In such cases, we replace the component with a mock. In this paper, we integrate the synthesis of mock components with the generation of test cases for the current testing goal (e.g., coverage). To avoid the generation of meaningless data, which may lead to assertion violations not related to bugs, we include a weak mock postcondition. We consider ways to automatically synthesize such a postcondition. We empirically evaluate the quality of the mocks generated by our approach, as well as the benefits mocks introduce in terms of improved coverage and improved performance of the test case generator.
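
A minimal Python sketch makes the idea concrete: a component with irreversible effects is replaced by a mock, and a weak postcondition filters out mock outputs that no real implementation could produce. The payment-gateway interface, the postcondition, and all names below are illustrative assumptions of this sketch, not taken from the AUTOMOCK tool itself.

# Illustrative sketch only, not the AUTOMOCK implementation.
from unittest.mock import MagicMock

def make_mock_gateway(charge_result):
    # Replace a slow or irreversible payment component with a mock.
    gateway = MagicMock()
    gateway.charge.return_value = charge_result
    return gateway

def check_weak_postcondition(result):
    # A weak postcondition rules out meaningless mock data (e.g., a negative
    # transaction id) that could trigger assertion violations unrelated to bugs.
    assert result["status"] in {"ok", "declined"}
    assert result["transaction_id"] >= 0

if __name__ == "__main__":
    mock = make_mock_gateway({"status": "ok", "transaction_id": 42})
    result = mock.charge(amount=10)
    check_weak_postcondition(result)
    print("mocked charge:", result)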

Cite as

Nadia Alshahwan, Yue Jia, Kiran Lakhotia, Gordon Fraser, David Shuler, and Paolo Tonella. AUTOMOCK: Automated Synthesis of a Mock Environment for Test Case Generation. In Practical Software Testing : Tool Automation and Human Factors. Dagstuhl Seminar Proceedings, Volume 10111, pp. 1-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{alshahwan_et_al:DagSemProc.10111.3,
  author =	{Alshahwan, Nadia and Jia, Yue and Lakhotia, Kiran and Fraser, Gordon and Shuler, David and Tonella, Paolo},
  title =	{{AUTOMOCK: Automated Synthesis of a Mock Environment for Test Case Generation}},
  booktitle =	{Practical Software Testing : Tool Automation and Human Factors},
  pages =	{1--4},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10111},
  editor =	{Mark Harman and Henry Muccini and Wolfram Schulte and Tao Xie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10111.3},
  URN =		{urn:nbn:de:0030-drops-26180},
  doi =		{10.4230/DagSemProc.10111.3},
  annote =	{Keywords: Test case generation, code analysis, automated software testing}
}
Document
Computing and Diagnosing Changes in Unit Test Energy Consumption

Authors: Andrew J. Ko, Michal Young, Jamie Andrews, Brian P. Robinson, and Mark Grechanik


Abstract
Many developers have reason to be concerned with power consumption. For example, mobile app developers want to minimize how much power their applications draw while still providing useful functionality. However, developers have few tools to get feedback about changes to their application's power consumption behavior as they implement an application and change it over time. We present a tool that, using a team's existing test cases, performs repeated measurements of energy consumption based on instructions executed, objects generated, and blocking latency, generating a distribution of energy use estimates for each test run and recording these distributions as a time series. When these distributions change substantially, we inform the developer of the change and offer diagnostic information about the elements of their code potentially responsible for it and the inputs involved. With this information, we believe developers will be better able to relate recent changes in their code to changes in energy consumption, enabling them to incorporate energy consumption into their software evolution decisions.
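
A hedged sketch of the detection step, assuming per-test-run energy estimates have already been collected; the 20% relative-mean threshold and the sample data are invented for illustration and are not the authors' statistical test.

from statistics import mean

def energy_changed(before, after, threshold=0.2):
    # Flag a substantial change when the mean estimate shifts by more
    # than `threshold` (here 20%) between two versions of the code.
    shift = abs(mean(after) - mean(before)) / mean(before)
    return shift > threshold, shift

if __name__ == "__main__":
    # Energy estimates (e.g., joules) for one unit test across repeated runs.
    v1 = [1.02, 0.98, 1.05, 1.00, 0.99]
    v2 = [1.31, 1.28, 1.35, 1.30, 1.29]  # after a code change
    changed, shift = energy_changed(v1, v2)
    print(f"changed={changed}, relative shift={shift:.0%}")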

Cite as

Andrew J. Ko, Michal Young, Jamie Andrews, Brian P. Robinson, and Mark Grechanik. Computing and Diagnosing Changes in Unit Test Energy Consumption. In Practical Software Testing : Tool Automation and Human Factors. Dagstuhl Seminar Proceedings, Volume 10111, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{ko_et_al:DagSemProc.10111.4,
  author =	{Ko, Andrew J. and Young, Michal and Andrews, Jamie and Robinson, Brian P. and Grechanik, Mark},
  title =	{{Computing and Diagnosing Changes in Unit Test Energy Consumption}},
  booktitle =	{Practical Software Testing : Tool Automation and Human Factors},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10111},
  editor =	{Mark Harman and Henry Muccini and Wolfram Schulte and Tao Xie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10111.4},
  URN =		{urn:nbn:de:0030-drops-26248},
  doi =		{10.4230/DagSemProc.10111.4},
  annote =	{Keywords: Energy, oracles}
}
Document
FITE - Future Integrated Testing Environment

Authors: Patrice Godefroid, Leonardo Mariani, Andrea Polini, Nikolai Tillmann, Willem Visser, and Michael W. Whalen


Abstract
It is well known that the later software errors are discovered during the development process, the more costly they are to repair. Recently, automatic tools based on static and dynamic analysis have become widely used in industry to detect errors such as null pointer dereferences, array indexing errors, and assertion violations. However, these techniques are typically applied late in the development cycle, and thus the errors they detect are expensive to repair. These techniques can also suffer from scalability and presentation issues because they are applied late in the development cycle. To address these issues, we suggest that code should be continuously analyzed from an early stage of development, preferably as the code is written. This allows developers to get instant feedback and repair errors as they are introduced, rather than later when repair is more expensive. This analysis should also be incremental in nature to scale better. Additionally, the presentation of errors in static and dynamic analysis tools can be improved because only a small increment of code is analyzed at a time.
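
As an illustration of the incremental analysis argued for above, here is a small Python sketch that re-analyzes only functions whose source changed since the last pass; the toy analyze check and all names are assumptions of this sketch, not the FITE design.

import hashlib

_cache = {}  # function name -> (source hash, findings)

def analyze(name, source):
    # Stand-in for a real static check; flags a possible None dereference.
    return ["possible None dereference"] if "None" in source and ".attr" in source else []

def incremental_pass(functions):
    findings = {}
    for name, source in functions.items():
        digest = hashlib.sha256(source.encode()).hexdigest()
        cached = _cache.get(name)
        if cached and cached[0] == digest:
            findings[name] = cached[1]      # unchanged: reuse prior result
        else:
            result = analyze(name, source)  # changed: re-analyze this unit only
            _cache[name] = (digest, result)
            findings[name] = result
    return findings

if __name__ == "__main__":
    funcs = {"f": "def f(x):\n    x = None\n    return x.attr\n"}
    print(incremental_pass(funcs))  # analyzed now
    print(incremental_pass(funcs))  # served from the cache on the next pass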

Cite as

Patrice Godefroid, Leonardo Mariani, Andrea Polini, Nikolai Tillmann, Willem Visser, and Michael W. Whalen. FITE - Future Integrated Testing Environment. In Practical Software Testing : Tool Automation and Human Factors. Dagstuhl Seminar Proceedings, Volume 10111, pp. 1-7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{godefroid_et_al:DagSemProc.10111.5,
  author =	{Godefroid, Patrice and Mariani, Leonardo and Polini, Andrea and Tillmann, Nikolai and Visser, Willem and Whalen, Michael W.},
  title =	{{FITE - Future Integrated Testing Environment}},
  booktitle =	{Practical Software Testing : Tool Automation and Human Factors},
  pages =	{1--7},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10111},
  editor =	{Mark Harman and Henry Muccini and Wolfram Schulte and Tao Xie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10111.5},
  URN =		{urn:nbn:de:0030-drops-26191},
  doi =		{10.4230/DagSemProc.10111.5},
  annote =	{Keywords: Incremental analysis, incremental testing, human factors, static analysis, model checking}
}
Document
Groundwork for the Development of Testing Plans for Concurrent Software

Authors: Eileen Kraemer and Laura Dillon


Abstract
While multi-threading has become commonplace in many application domains (e.g., embedded systems, digital signal processing (DSP), networks, IP services, and graphics), multi-threaded code often requires complex coordination of threads. As a result, multi-threaded implementations are prone to subtle bugs that are difficult and time-consuming to locate. Moreover, current testing techniques that address multi-threading are generally costly, and their effectiveness is unknown. The development of cost-effective testing plans requires an in-depth study of the nature, frequency, and cost of concurrency errors in the context of real-world applications. The full paper will lay the groundwork for such a study, with the purpose of informing the creation of a parametric cost model for testing multi-threaded software. The current version of the paper provides motivation for the study, an outline of the full paper, and a bibliography of related papers.
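
To illustrate the kind of subtle bug such a study would catalogue, here is a classic lost-update race in Python; this toy example is ours, not drawn from the paper.

import threading

counter = 0

def increment():
    global counter
    for _ in range(100_000):
        counter += 1  # read-modify-write is not atomic: updates can be lost

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # may print less than 400000, depending on interpreter and scheduling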

Cite as

Eileen Kraemer and Laura Dillon. Groundwork for the Development of Testing Plans for Concurrent Software. In Practical Software Testing : Tool Automation and Human Factors. Dagstuhl Seminar Proceedings, Volume 10111, pp. 1-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{kraemer_et_al:DagSemProc.10111.6,
  author =	{Kraemer, Eileen and Dillon, Laura},
  title =	{{Groundwork for the Development of Testing Plans for Concurrent Software}},
  booktitle =	{Practical Software Testing : Tool Automation and Human Factors},
  pages =	{1--4},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10111},
  editor =	{Mark Harman and Henry Muccini and Wolfram Schulte and Tao Xie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10111.6},
  URN =		{urn:nbn:de:0030-drops-26215},
  doi =		{10.4230/DagSemProc.10111.6},
  annote =	{Keywords: Concurrency, Testing}
}
Document
Introducing Continuous Systematic Testing of Evolving Software

Authors: Mary Jean Harrold, Darko Marinov, Stephen Oney, Mauro Pezzè, Adam Porter, John Penix, Per Runeson, and Shin Yoo


Abstract
In today's evolutionary development of software, continuous testing is needed to ensure that the software still functions after changes. Test automation helps manage the large number of executions needed, but there is a limit to how many automated tests can be executed. Systematic approaches to test selection are therefore needed for automated tests as well. This manuscript defines this situation and outlines a general method and tool framework for addressing it. Experiences from different companies are collected to illustrate how it may be put into practice.
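
One simple systematic selection strategy of the kind called for above is coverage-based selection: rerun only tests that execute changed files. The Python sketch below is a generic illustration under assumed data shapes, not the paper's framework.

def select_tests(coverage_map, changed_files):
    # coverage_map: test name -> set of source files the test executes.
    return sorted(t for t, files in coverage_map.items() if files & changed_files)

if __name__ == "__main__":
    coverage = {
        "test_login":   {"auth.py", "db.py"},
        "test_search":  {"search.py", "db.py"},
        "test_billing": {"billing.py"},
    }
    print(select_tests(coverage, {"db.py"}))  # ['test_login', 'test_search']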

Cite as

Mary Jean Harrold, Darko Marinov, Stephen Oney, Mauro Pezzè, Adam Porter, John Penix, Per Runeson, and Shin Yoo. Introducing Continuous Systematic Testing of Evolving Software. In Practical Software Testing : Tool Automation and Human Factors. Dagstuhl Seminar Proceedings, Volume 10111, pp. 1-8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{harrold_et_al:DagSemProc.10111.7,
  author =	{Harrold, Mary Jean and Marinov, Darko and Oney, Stephen and Pezz\`{e}, Mauro and Porter, Adam and Penix, John and Runeson, Per and Yoo, Shin},
  title =	{{Introducing Continuous Systematic Testing of Evolving Software}},
  booktitle =	{Practical Software Testing : Tool Automation and Human Factors},
  pages =	{1--8},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10111},
  editor =	{Mark Harman and Henry Muccini and Wolfram Schulte and Tao Xie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10111.7},
  URN =		{urn:nbn:de:0030-drops-26228},
  doi =		{10.4230/DagSemProc.10111.7},
  annote =	{Keywords: Regression testing, continuous testing, test selection}
}
Document
Model-Based Testing for the Cloud

Authors: Antonia Bertolino, Wolfgang Grieskamp, Robert Hierons, Yves Le Traon, Bruno Legeard, Henry Muccini, Amit Paradkar, David Rosenblum, and Jan Tretmans


Abstract
Software in the cloud is characterised by the need to be highly adaptive and continuously available. Incremental changes are applied to the deployed system and need to be tested in the field. Different configurations need to be tested. Higher quality standards for both functional and non-functional properties are imposed on these systems, as they often face large and diverse customer bases and/or are used as services by different peer service implementations. The properties of interest include interoperability, privacy, security, reliability, performance, resource use, timing constraints, service dependencies, availability, and so on. This paper discusses the state of the art in model-based testing of cloud systems. It focuses on two central aspects of the problem domain: (a) dealing with the adaptive and dynamic character of cloud software in model-based testing, by developing new online and offline test strategies, and (b) dealing with the variety of modeling concerns for functional and non-functional properties, by devising a unified framework for them where possible. Having discussed the state of the art, we identify challenges and future directions.
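
To illustrate the online strategy mentioned in point (a), here is a hedged Python sketch in which test inputs are chosen while the test runs, so the strategy could adapt to a system that reconfigures itself; the two-state model and the conforming fake service are invented for illustration.

import random

MODEL = {  # state -> {input: expected next state}
    "idle":    {"start": "serving"},
    "serving": {"stop": "idle", "ping": "serving"},
}

class FakeCloudService:
    # Stand-in for the deployed system; it happens to conform to the model.
    def __init__(self):
        self.state = "idle"
    def apply(self, inp):
        self.state = MODEL[self.state][inp]
        return self.state

def online_test(sut, steps=10):
    state = "idle"
    for _ in range(steps):
        inp = random.choice(list(MODEL[state]))  # choose the next input on the fly
        observed = sut.apply(inp)
        expected = MODEL[state][inp]
        assert observed == expected, f"after {inp}: got {observed}, expected {expected}"
        state = expected

online_test(FakeCloudService())
print("online conformance run passed")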

Cite as

Antonia Bertolino, Wolfgang Grieskamp, Robert Hierons, Yves Le Traon, Bruno Legeard, Henry Muccini, Amit Paradkar, David Rosenblum, and Jan Tretmans. Model-Based Testing for the Cloud. In Practical Software Testing : Tool Automation and Human Factors. Dagstuhl Seminar Proceedings, Volume 10111, pp. 1-11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{bertolino_et_al:DagSemProc.10111.8,
  author =	{Bertolino, Antonia and Grieskamp, Wolfgang and Hierons, Robert and Le Traon, Yves and Legeard, Bruno and Muccini, Henry and Paradkar, Amit and Rosenblum, David and Tretmans, Jan},
  title =	{{Model-Based Testing for the Cloud}},
  booktitle =	{Practical Software Testing : Tool Automation and Human Factors},
  pages =	{1--11},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10111},
  editor =	{Mark Harman and Henry Muccini and Wolfram Schulte and Tao Xie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10111.8},
  URN =		{urn:nbn:de:0030-drops-26251},
  doi =		{10.4230/DagSemProc.10111.8},
  annote =	{Keywords: Cloud computing, Model based testing, Non-functional properties}
}
Document
Model-based Testing: Next Generation Functional Software Testing

Authors: Bruno Legeard


Abstract
The idea of model-based testing (MBT) is to use an explicit abstract model of a system under test (SUT) and its environment to automatically derive tests for the SUT: the behavior of the model is interpreted as the intended behavior of the SUT. The technology of automated model-based test case generation has matured to the point where large-scale deployments are becoming commonplace. The prerequisites for success, such as the qualification of the test team, the availability of an integrated tool chain, and suitable methods, are now identified, and a wide range of commercial and open-source tools is available. Although MBT will not solve all testing problems, it is an important and useful technique that brings significant progress over the state of the practice in functional software testing effectiveness, and it can increase productivity and improve functional coverage. In this talk, we will address the current trend of deploying MBT in industry, particularly in the Test Centers of Excellence (TCoE) managed by the big system integrators, as a vector for the "industrialization" of software testing.
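
A hedged sketch of the offline flavor of MBT described above: a test sequence is derived from an abstract model before any execution, here by covering every transition of an invented vending-machine model at least once. The greedy walk assumes the model is strongly connected; real MBT tools use far richer models and generation strategies.

MODEL = {  # state -> {input: next state}
    "ready":   {"coin": "paid"},
    "paid":    {"button": "vending", "refund": "ready"},
    "vending": {"take": "ready"},
}

def transition_coverage_test(model, start):
    # Greedily extend one walk until every (state, input) pair is exercised.
    uncovered = {(s, i) for s, trans in model.items() for i in trans}
    state, test = start, []
    while uncovered:
        fresh = [i for i in model[state] if (state, i) in uncovered]
        inp = fresh[0] if fresh else next(iter(model[state]))  # revisit if needed
        uncovered.discard((state, inp))
        test.append(inp)
        state = model[state][inp]
    return test

print(transition_coverage_test(MODEL, "ready"))
# -> ['coin', 'button', 'take', 'coin', 'refund']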

Cite as

Bruno Legeard. Model-based Testing: Next Generation Functional Software Testing. In Practical Software Testing : Tool Automation and Human Factors. Dagstuhl Seminar Proceedings, Volume 10111, pp. 1-13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2010)


BibTeX

@InProceedings{legeard:DagSemProc.10111.9,
  author =	{Legeard, Bruno},
  title =	{{Model-based Testing: Next Generation Functional Software Testing}},
  booktitle =	{Practical Software Testing : Tool Automation and Human Factors},
  pages =	{1--13},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2010},
  volume =	{10111},
  editor =	{Mark Harman and Henry Muccini and Wolfram Schulte and Tao Xie},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10111.9},
  URN =		{urn:nbn:de:0030-drops-26207},
  doi =		{10.4230/DagSemProc.10111.9},
  annote =	{Keywords: Model-based testing, functional testing, test automation, process industrialization}
}
