Search Results

Documents authored by Springer, Matthias


Document
Artifact
DynaSOAr: A Parallel Memory Allocator for Object-Oriented Programming on GPUs with Efficient Memory Access (Artifact)

Authors: Matthias Springer and Hidehiko Masuhara

Published in: DARTS, Volume 5, Issue 2, Special Issue of the 33rd European Conference on Object-Oriented Programming (ECOOP 2019)


Abstract
This artifact contains the source code of DynaSOAr, a CUDA framework for Single-Method Multiple-Objects (SMMO) applications. SMMO is a class of object-oriented programs in which parallelism is expressed by running the same method on all objects of a type. DynaSOAr is a dynamic memory allocator combined with a data layout DSL and a parallel do-all operation. The artifact provides a tutorial explaining the DynaSOAr API, along with nine benchmark applications from different domains. All benchmarks can be configured to use a different memory allocator, allowing for a comparison with other state-of-the-art memory allocators.
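To make the SMMO pattern described above concrete, here is a minimal plain-CUDA sketch of its shape: one kernel applies the same method to every object of a type. It intentionally does not use the DynaSOAr API; the class Body, its fields, and the kernel name are hypothetical illustrations only.

// Hypothetical illustration of the SMMO pattern in plain CUDA (not the DynaSOAr API).
#include <cuda_runtime.h>

struct Body {
  float pos, vel;
  // The "single method" that is run on all objects of the type.
  __device__ void move(float dt) { pos += vel * dt; }
};

__global__ void parallel_do_move(Body* bodies, int n, float dt) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) bodies[i].move(dt);  // each thread applies move() to one object
}

int main() {
  const int n = 1024;
  Body* d_bodies;
  cudaMalloc(&d_bodies, n * sizeof(Body));
  cudaMemset(d_bodies, 0, n * sizeof(Body));
  parallel_do_move<<<(n + 255) / 256, 256>>>(d_bodies, n, 0.5f);
  cudaDeviceSynchronize();
  cudaFree(d_bodies);
  return 0;
}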

Cite as

Matthias Springer and Hidehiko Masuhara. DynaSOAr: A Parallel Memory Allocator for Object-Oriented Programming on GPUs with Efficient Memory Access (Artifact). In Special Issue of the 33rd European Conference on Object-Oriented Programming (ECOOP 2019). Dagstuhl Artifacts Series (DARTS), Volume 5, Issue 2, pp. 2:1-2:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{springer_et_al:DARTS.5.2.2,
  author =	{Springer, Matthias and Masuhara, Hidehiko},
  title =	{{DynaSOAr: A Parallel Memory Allocator for Object-Oriented Programming on GPUs with Efficient Memory Access}},
  pages =	{2:1--2:2},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2019},
  volume =	{5},
  number =	{2},
  editor =	{Springer, Matthias and Masuhara, Hidehiko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.5.2.2},
  URN =		{urn:nbn:de:0030-drops-107793},
  doi =		{10.4230/DARTS.5.2.2},
  annote =	{Keywords: CUDA, Data Layout, Dynamic Memory Allocation, GPUs, Object-oriented Programming, SIMD, Single-Instruction Multiple-Objects, Structure of Arrays}
}
Document
DynaSOAr: A Parallel Memory Allocator for Object-Oriented Programming on GPUs with Efficient Memory Access

Authors: Matthias Springer and Hidehiko Masuhara

Published in: LIPIcs, Volume 134, 33rd European Conference on Object-Oriented Programming (ECOOP 2019)


Abstract
Object-oriented programming has long been regarded as too inefficient for SIMD high-performance computing, despite the fact that many important HPC applications have an inherent object structure. On SIMD accelerators, including GPUs, this is mainly due to performance problems with memory allocation and memory access: There are a few libraries that support parallel memory allocation directly on accelerator devices, but all of them suffer from uncoalesced memory accesses. We discovered a broad class of object-oriented programs with many important real-world applications that can be implemented efficiently on massively parallel SIMD accelerators. We call this class Single-Method Multiple-Objects (SMMO), because parallelism is expressed by running a method on all objects of a type. To make fast GPU programming available to domain experts who are less experienced in GPU programming, we developed DynaSOAr, a CUDA framework for SMMO applications. DynaSOAr consists of (1) a fully-parallel, lock-free, dynamic memory allocator, (2) a data layout DSL and (3) an efficient, parallel do-all operation. DynaSOAr achieves performance superior to state-of-the-art GPU memory allocators by controlling both memory allocation and memory access. DynaSOAr improves the usage of allocated memory with a Structure of Arrays (SOA) data layout and achieves low memory fragmentation through efficient management of free and allocated memory blocks with lock-free, hierarchical bitmaps. Contrary to other allocators, our design is heavily based on atomic operations, trading raw (de)allocation performance for better overall application performance. In our benchmarks, DynaSOAr achieves a speedup of application code of up to 3x over state-of-the-art allocators. Moreover, DynaSOAr manages heap memory more efficiently than other allocators, allowing programmers to run up to 2x larger problem sizes with the same amount of memory.
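The three components listed in the abstract fit together roughly as in the sketch below: the data layout DSL declares a class's fields so the allocator can store them in SOA layout, and the parallel do-all runs a method on every live object of a type. This is a minimal sketch assuming API names as they appear in the paper's examples (SoaAllocator, AllocatorT::Base, declare_field_types, Field, AllocatorHandle, parallel_do); exact names and signatures may differ from the released framework.

// Sketch of an SMMO application using a DynaSOAr-style API (names assumed from the paper).
#include "dynasoar.h"   // framework header; name assumed

class Body;  // pre-declare all classes managed by the allocator

// Allocator type: maximum number of objects and the managed classes.
using AllocatorT = SoaAllocator</*kNumObjects=*/ 1024 * 1024, Body>;
__device__ AllocatorT* device_allocator;        // device-side allocator pointer
AllocatorHandle<AllocatorT>* allocator_handle;  // host-side handle

class Body : public AllocatorT::Base {
 public:
  // Data layout DSL: declare field types so they can be stored in SOA layout.
  declare_field_types(Body, float, float, float)

 private:
  Field<Body, 0> pos_;   // float
  Field<Body, 1> vel_;   // float
  Field<Body, 2> mass_;  // float

 public:
  __device__ Body(float pos, float vel, float mass)
      : pos_(pos), vel_(vel), mass_(mass) {}

  // The method that the parallel do-all runs on every Body object.
  __device__ void update() { pos_ += vel_ * 0.5f; }  // time step hard-coded for brevity
};

int main() {
  // Create the allocator on the host and make it visible to device code.
  allocator_handle = new AllocatorHandle<AllocatorT>();
  AllocatorT* dev_ptr = allocator_handle->device_pointer();
  cudaMemcpyToSymbol(device_allocator, &dev_ptr, sizeof(AllocatorT*));

  // Objects would be created from device code, e.g. new(device_allocator) Body(...)
  // inside an initialization kernel (creation mechanism assumed from the paper).

  // Parallel do-all: run Body::update on all live Body objects.
  allocator_handle->parallel_do<Body, &Body::update>();
  return 0;
}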

Cite as

Matthias Springer and Hidehiko Masuhara. DynaSOAr: A Parallel Memory Allocator for Object-Oriented Programming on GPUs with Efficient Memory Access. In 33rd European Conference on Object-Oriented Programming (ECOOP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 134, pp. 17:1-17:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{springer_et_al:LIPIcs.ECOOP.2019.17,
  author =	{Springer, Matthias and Masuhara, Hidehiko},
  title =	{{DynaSOAr: A Parallel Memory Allocator for Object-Oriented Programming on GPUs with Efficient Memory Access}},
  booktitle =	{33rd European Conference on Object-Oriented Programming (ECOOP 2019)},
  pages =	{17:1--17:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-111-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{134},
  editor =	{Donaldson, Alastair F.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2019.17},
  URN =		{urn:nbn:de:0030-drops-108098},
  doi =		{10.4230/LIPIcs.ECOOP.2019.17},
  annote =	{Keywords: CUDA, Data Layout, Dynamic Memory Allocation, GPUs, Object-oriented Programming, SIMD, Single-Instruction Multiple-Objects, Structure of Arrays}
}