2 Search Results for "Mohror, Kathryn"


Document
Understanding I/O Behavior in Scientific and Data-Intensive Computing (Dagstuhl Seminar 21332)

Authors: Philip Carns, Julian Kunkel, Kathryn Mohror, and Martin Schulz

Published in: Dagstuhl Reports, Volume 11, Issue 7 (2021)


Abstract
Two key changes are driving an immediate need for deeper understanding of I/O workloads in high-performance computing (HPC): applications are evolving beyond traditional bulk-synchronous models to include integrated multistep workflows, in situ analysis, artificial intelligence, and data analytics methods; and storage system designs are evolving beyond a two-tiered file system and archive model to complex hierarchies containing temporary, fast tiers of storage close to compute resources with markedly different performance properties. Both of these changes represent a significant departure from the decades-long status quo and require investigation from storage researchers and practitioners to understand their impacts on overall I/O performance. Without an in-depth understanding of I/O workload behavior, storage system designers, I/O middleware developers, facility operators, and application developers will not know how best to design or utilize the additional tiers for optimal performance of a given I/O workload. The goal of this Dagstuhl Seminar was to bring together experts in I/O performance analysis and storage system architecture to collectively evaluate how our community is capturing and analyzing I/O workloads on HPC systems, identify any gaps in our methodologies, and determine how to develop a better in-depth understanding of their impact on HPC systems. Our discussions were lively and resulted in identifying critical needs for research in the area of understanding I/O behavior. We document those discussions in this report.
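
To make concrete the kind of I/O workload capture the seminar examined, the sketch below shows the library-interposition technique used by HPC I/O characterization tools such as Darshan: a shared library preloaded into an unmodified application wraps write() and tallies the bytes it moves. This is a minimal, hypothetical C sketch of the technique only (the "iotally" names are invented for illustration), not a tool produced by the seminar.

/* iotally.c - hypothetical sketch of library-preload I/O instrumentation,
 * in the spirit of characterization tools such as Darshan.
 * Build:  gcc -shared -fPIC -o libiotally.so iotally.c -ldl
 * Use:    LD_PRELOAD=./libiotally.so ./your_app
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <unistd.h>

/* Running byte tally. Not thread-safe; a real tool would use atomics
 * and record far richer statistics (per-file, per-rank, timing). */
static unsigned long long bytes_written;

/* Interpose write(): forward to the next (libc) definition, then count. */
ssize_t write(int fd, const void *buf, size_t count) {
    static ssize_t (*real_write)(int, const void *, size_t);
    if (!real_write)
        real_write = (ssize_t (*)(int, const void *, size_t))
                         dlsym(RTLD_NEXT, "write");
    ssize_t n = real_write(fd, buf, count);
    if (n > 0)
        bytes_written += (unsigned long long)n;
    return n;
}

/* Report the tally when the instrumented process exits. */
__attribute__((destructor))
static void report(void) {
    fprintf(stderr, "[iotally] total bytes written: %llu\n", bytes_written);
}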

Cite as

Philip Carns, Julian Kunkel, Kathryn Mohror, and Martin Schulz. Understanding I/O Behavior in Scientific and Data-Intensive Computing (Dagstuhl Seminar 21332). In Dagstuhl Reports, Volume 11, Issue 7, pp. 16-75, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Article{carns_et_al:DagRep.11.7.16,
  author =	{Carns, Philip and Kunkel, Julian and Mohror, Kathryn and Schulz, Martin},
  title =	{{Understanding I/O Behavior in Scientific and Data-Intensive Computing (Dagstuhl Seminar 21332)}},
  pages =	{16--75},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2021},
  volume =	{11},
  number =	{7},
  editor =	{Carns, Philip and Kunkel, Julian and Mohror, Kathryn and Schulz, Martin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.11.7.16},
  URN =		{urn:nbn:de:0030-drops-155891},
  doi =		{10.4230/DagRep.11.7.16},
  annote =	{Keywords: I/O performance measurement, Understanding user I/O patterns, HPC I/O, I/O characterization}
}
Document
Challenges and Opportunities of User-Level File Systems for HPC (Dagstuhl Seminar 17202)

Authors: André Brinkmann, Kathryn Mohror, and Weikuan Yu

Published in: Dagstuhl Reports, Volume 7, Issue 5 (2018)


Abstract
The performance gap between magnetic disks and data processing on HPC systems has grown so large that efficient data processing can only be achieved by introducing non-volatile memory (NVRAM) as a new storage tier. Although the benefits of hierarchical storage have been adequately demonstrated, to the point that the newest leadership-class HPC systems will employ burst buffers, critical questions remain for supporting hierarchical storage systems, including: How should we present hierarchical storage systems to user applications, such that they are easy to use and application code is portable across systems? How should we manage data movement through a storage hierarchy for the best performance and resilience of data? How do particular I/O use cases mandate the way we manage data? There have been many efforts to explore this space in the form of file systems, with increasingly more implemented at the user level. This is because it is relatively easy to swap in new, specialized user-level file systems for use by applications on a case-by-case basis, as opposed to the current mainstream approach of using general-purpose, system-level file systems, which may not be optimized for HPC workloads and must be installed by administrators. In contrast, file systems at the user level can be tailored to specific HPC workloads for high performance and can be used by applications without administrator intervention. Many user-level file system developers have found themselves “having to reinvent the wheel” to implement various optimizations in their file systems. Thus, a main goal of this meeting was to bring together experts in I/O performance, file systems, and storage, and to collectively explore the space of current and future problems and solutions for I/O on hierarchical storage systems, in order to begin a community effort toward enabling user-level file system support for HPC systems. We had a lively week of learning about each other’s approaches as well as unique I/O use cases that can influence the design of community-driven file and storage system standards. The agenda for this meeting contained talks from participants on the following high-level topics: HPC storage and I/O support today; what HPC users need for I/O; existing user-level file system efforts; object stores and other alternative storage systems; and components for building user-level file systems. The talks were short and intended to be conversation starters for more in-depth discussions with the whole group. The participants engaged in lengthy discussions on various questions that arose from the talks, including: Are we ready to program to a memory hierarchy versus block devices? Are the needs of HPC users reflected in our existing file systems and storage systems? Should we drop or keep POSIX moving forward? What do we mean when we say "user-level file system", and do we all mean the same thing? How should the IO500 benchmark be defined so that it is fair and useful? And how are stage-in and stage-out actually going to work? The report for this seminar contains a record of the talks from the participants as well as the resulting discussions. Our hope is that the effort initiated during this seminar will result in long-term collaborations that will benefit the HPC community as a whole.
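
To illustrate what "user-level file system" means in practice, the C sketch below uses the FUSE library to serve a tiny read-only file system from an unprivileged process, with no administrator intervention — precisely the property the abstract highlights. It is a minimal, hypothetical example (the "hellofs" name and file contents are invented); production HPC user-level file systems would add the data-placement, caching, and tiering logic discussed at the seminar.

/* hellofs.c - hypothetical toy user-level file system using libfuse
 * (API version 26); read-only, one file, served from user space.
 * Build:  gcc hellofs.c -o hellofs `pkg-config fuse --cflags --libs`
 * Use:    ./hellofs /tmp/mnt    (then: cat /tmp/mnt/hello)
 */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

static const char *hello_path = "/hello";
static const char *hello_body = "hello from user space\n";

/* Report attributes for the root directory and the single file. */
static int hello_getattr(const char *path, struct stat *st) {
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = (off_t)strlen(hello_body);
    } else {
        return -ENOENT;
    }
    return 0;
}

/* List the root directory's entries. */
static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                         off_t off, struct fuse_file_info *fi) {
    (void)off; (void)fi;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    fill(buf, ".", NULL, 0);
    fill(buf, "..", NULL, 0);
    fill(buf, hello_path + 1, NULL, 0);
    return 0;
}

/* Serve reads from the in-memory string. */
static int hello_read(const char *path, char *buf, size_t size, off_t off,
                      struct fuse_file_info *fi) {
    (void)fi;
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    size_t len = strlen(hello_body);
    if ((size_t)off >= len)
        return 0;
    if ((size_t)off + size > len)
        size = len - (size_t)off;
    memcpy(buf, hello_body + off, size);
    return (int)size;
}

static struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .read    = hello_read,
};

int main(int argc, char *argv[]) {
    return fuse_main(argc, argv, &hello_ops, NULL);
}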

Cite as

André Brinkmann, Kathryn Mohror, and Weikuan Yu. Challenges and Opportunities of User-Level File Systems for HPC (Dagstuhl Seminar 17202). In Dagstuhl Reports, Volume 7, Issue 5, pp. 97-139, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@Article{brinkmann_et_al:DagRep.7.5.97,
  author =	{Brinkmann, Andr\'{e} and Mohror, Kathryn and Yu, Weikuan},
  title =	{{Challenges and Opportunities of User-Level File Systems for HPC (Dagstuhl Seminar 17202)}},
  pages =	{97--139},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2017},
  volume =	{7},
  number =	{5},
  editor =	{Brinkmann, Andr\'{e} and Mohror, Kathryn and Yu, Weikuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.7.5.97},
  URN =		{urn:nbn:de:0030-drops-82820},
  doi =		{10.4230/DagRep.7.5.97},
  annote =	{Keywords: High Performance Computing, I/O and Storage, Object Stores, User-level Storage Systems}
}
