5 Search Results for "Eisenbach, Susan"


Document
Artifact
Implementation of SHAPES Case Studies (Artifact)

Authors: Alexandros Tasos, Juliana Franco, Sophia Drossopoulou, Tobias Wrigstad, and Susan Eisenbach

Published in: DARTS, Volume 6, Issue 2, Special Issue of the 34th European Conference on Object-Oriented Programming (ECOOP 2020)


Abstract
Our main paper presents SHAPES, a language extension which offers developers fine-grained control over the placement of data in memory via pooling and clustering, whilst retaining both memory safety and object abstraction. As part of the development of SHAPES, we wanted to investigate the usefulness of the concepts SHAPES brings to the table. To that end, we implemented five case studies. This publication provides the corresponding code, together with instructions on how to run the case studies and reproduce the results we report.

Cite as

Alexandros Tasos, Juliana Franco, Sophia Drossopoulou, Tobias Wrigstad, and Susan Eisenbach. Implementation of SHAPES Case Studies (Artifact). In Special Issue of the 34th European Conference on Object-Oriented Programming (ECOOP 2020). Dagstuhl Artifacts Series (DARTS), Volume 6, Issue 2, pp. 19:1-19:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@Article{tasos_et_al:DARTS.6.2.19,
  author =	{Tasos, Alexandros and Franco, Juliana and Drossopoulou, Sophia and Wrigstad, Tobias and Eisenbach, Susan},
  title =	{{Implementation of SHAPES Case Studies (Artifact)}},
  pages =	{19:1--19:3},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2020},
  volume =	{6},
  number =	{2},
  editor =	{Tasos, Alexandros and Franco, Juliana and Drossopoulou, Sophia and Wrigstad, Tobias and Eisenbach, Susan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.6.2.19},
  URN =		{urn:nbn:de:0030-drops-132167},
  doi =		{10.4230/DARTS.6.2.19},
  annote =	{Keywords: Cache utilisation, Data representation, Memory safety}
}
Document
SCICO Journal-first
Reshape Your Layouts, Not Your Programs: A Safe Language Extension for Better Cache Locality (SCICO Journal-first)

Authors: Alexandros Tasos, Juliana Franco, Sophia Drossopoulou, Tobias Wrigstad, and Susan Eisenbach

Published in: LIPIcs, Volume 166, 34th European Conference on Object-Oriented Programming (ECOOP 2020)


Abstract
The vast gap between CPU and RAM speed means that, on modern architectures, developers need to carefully consider data placement in memory to exploit spatial and temporal cache locality and use CPU caches effectively. To that end, developers have devised various data-placement strategies: for objects that should be close in memory, a contiguous pool of objects is allocated and new instances are constructed inside it; an array of objects is clustered into multiple arrays, each holding the values of a specific field of the objects. Such data placements, however, have to be performed manually. As a result, readability, maintainability, memory safety, and key OO concepts such as encapsulation and object identity are sacrificed, and the business logic has to be modified accordingly.

We propose a language extension, SHAPES, which aims to offer developers high-level, fine-grained control over data placement whilst retaining memory safety and the look-and-feel of OO. SHAPES extends an OO language with the concepts of pools and layouts: developers declare pools that contain objects of a specific type and specify each pool’s layout. A layout specifies how objects in a pool are laid out in memory, that is, how the values of the fields of the pool’s objects are grouped together into clusters. Objects stored in pools behave identically to ordinary, standalone objects; the type system allows the code to be oblivious to the layout being used. This means that the business logic is completely decoupled from any placement concerns, and the developer need not deviate from the spirit of OO to better utilise the cache.

In this paper, we present the features of SHAPES, as well as the design rationale behind each feature. We then showcase the merit of SHAPES through a sequence of case studies; we claim that, compared to the manual pooling and clustering of objects, SHAPES improves readability and maintainability while delivering comparable (i.e., on par or better) performance.

We also present SHAPES^h, an OO calculus which models the SHAPES ideas; we formalise its type system and prove soundness. The SHAPES^h type system uses ideas from Ownership Types [Clarke et al., 2013] and Java Generics [Gosling et al., 2014]: in SHAPES^h, pools are part of the types, and SHAPES^h class and type definitions are enriched with pool parameters. Moreover, class pool parameters carry bounds, which is what allows the business logic of SHAPES to remain oblivious to the layout being used. SHAPES^h types also enforce pool uniformity and homogeneity: a pool is uniform if it contains objects of the same class only; a pool is homogeneous if the corresponding fields of all its objects point to objects in the same pool. These properties allow for a more efficient implementation.

With performance in mind, we also designed SHAPES^l, an untyped, unsafe low-level language with no explicit support for objects or pools. We argue that it is possible to translate SHAPES^l into existing low-level intermediate representations, such as LLVM [Lattner and Adve, 2004], present the translation of SHAPES^h into SHAPES^l, and show its soundness. Thus, we expect SHAPES to offer developers more fine-grained control over data placement without sacrificing memory safety or the OO look-and-feel.
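To make the manual strategies described in the abstract concrete, here is a minimal C++ sketch (the Particle class, its fields, and the function names are hypothetical and not taken from the paper). The same field-wise computation is written once against an array of whole objects and once against a hand-clustered representation with one array per field; the clustered version improves cache locality for traversals that touch a single field, but it gives up object identity and encapsulation and forces the business logic to be rewritten. This is the coupling SHAPES removes: pools and layouts are declared separately, and code written against a pool stays oblivious to the layout chosen for it.

// Hypothetical illustration of the manual strategies the abstract describes.
#include <cstddef>
#include <vector>

// Natural OO representation: an array of whole objects.
struct Particle {
    double x, y, z;   // position
    double mass;
};

double total_mass(const std::vector<Particle>& ps) {
    double m = 0.0;
    for (const auto& p : ps) m += p.mass;   // drags x, y, z into the cache too
    return m;
}

// Manual clustering: each field lives in its own array, so a mass-only
// traversal reads contiguous memory, but object identity and encapsulation
// are gone and the traversal code must be rewritten against indices.
struct ParticlePool {
    std::vector<double> x, y, z;
    std::vector<double> mass;
};

double total_mass_clustered(const ParticlePool& pool) {
    double m = 0.0;
    for (std::size_t i = 0; i < pool.mass.size(); ++i) m += pool.mass[i];
    return m;
}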

Cite as

Alexandros Tasos, Juliana Franco, Sophia Drossopoulou, Tobias Wrigstad, and Susan Eisenbach. Reshape Your Layouts, Not Your Programs: A Safe Language Extension for Better Cache Locality (SCICO Journal-first). In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 31:1-31:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{tasos_et_al:LIPIcs.ECOOP.2020.31,
  author =	{Tasos, Alexandros and Franco, Juliana and Drossopoulou, Sophia and Wrigstad, Tobias and Eisenbach, Susan},
  title =	{{Reshape Your Layouts, Not Your Programs: A Safe Language Extension for Better Cache Locality}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{31:1--31:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.31},
  URN =		{urn:nbn:de:0030-drops-131887},
  doi =		{10.4230/LIPIcs.ECOOP.2020.31},
  annote =	{Keywords: Cache utilisation, Data representation, Memory safety}
}
Document
K-LLVM: A Relatively Complete Semantics of LLVM IR

Authors: Liyi Li and Elsa L. Gunter

Published in: LIPIcs, Volume 166, 34th European Conference on Object-Oriented Programming (ECOOP 2020)


Abstract
LLVM [Lattner and Adve, 2004] is designed for the compile-time, link-time and run-time optimization of programs written in various programming languages. The language supported by LLVM and targeted by modern compilers is LLVM IR [llvm.org, 2018]. In this paper we define K-LLVM, a reference semantics for LLVM IR. To the best of our knowledge, K-LLVM is the most complete formal LLVM IR semantics to date, covering all LLVM IR instructions, the intrinsic functions in the LLVM documentation, and the Standard C library functions that are necessary to execute many LLVM IR programs. Additionally, K-LLVM formulates an abstract machine that executes all LLVM IR instructions. The machine allows us to describe our formal semantics in terms of simulating a conceptual virtual machine that runs LLVM IR programs, including non-deterministic programs. Even though the K-LLVM memory model in this paper is assumed to be sequentially consistent and does not include all LLVM concurrency memory behaviors, the design of K-LLVM’s data layout allows the K-LLVM abstract machine to execute some LLVM IR programs that previous semantics did not cover, such as the full range of LLVM IR behaviors for the interaction among LLVM IR casting, pointer arithmetic, memory operations and some memory flags (e.g. readonly) of function headers. Additionally, the memory model is modularized in a manner that supports investigating other memory models. To validate K-LLVM, we have implemented it in 𝕂 [Roşu, 2016], which generated an interpreter for LLVM IR. Using this interpreter, we ran 1,385 unit test programs and around 3,000 concrete LLVM IR programs, and K-LLVM passed all of them.
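As a rough illustration of the behaviours mentioned above (this example is ours, not from the paper), the following small C++ program, when compiled by Clang without optimization, produces LLVM IR that combines pointer arithmetic (getelementptr), pointer/integer casts (ptrtoint and inttoptr), and memory operations (load and store); giving such interactions a precise meaning is the kind of coverage a semantics like K-LLVM aims for.

// Hypothetical example of the pointer casting / arithmetic / memory
// interactions a complete LLVM IR semantics must account for.
#include <cstdint>
#include <cstdio>

int main() {
    int buf[4] = {10, 20, 30, 40};

    int* p = buf + 2;                                           // pointer arithmetic -> getelementptr
    std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(p);  // -> ptrtoint
    int* q = reinterpret_cast<int*>(bits);                      // -> inttoptr

    *q = 99;                      // -> store
    std::printf("%d\n", buf[2]);  // -> load; prints 99
    return 0;
}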

Cite as

Liyi Li and Elsa L. Gunter. K-LLVM: A Relatively Complete Semantics of LLVM IR. In 34th European Conference on Object-Oriented Programming (ECOOP 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 166, pp. 7:1-7:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{li_et_al:LIPIcs.ECOOP.2020.7,
  author =	{Li, Liyi and Gunter, Elsa L.},
  title =	{{K-LLVM: A Relatively Complete Semantics of LLVM IR}},
  booktitle =	{34th European Conference on Object-Oriented Programming (ECOOP 2020)},
  pages =	{7:1--7:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-154-2},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{166},
  editor =	{Hirschfeld, Robert and Pape, Tobias},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2020.7},
  URN =		{urn:nbn:de:0030-drops-131649},
  doi =		{10.4230/LIPIcs.ECOOP.2020.7},
  annote =	{Keywords: LLVM, formal semantics, K framework, memory model, abstract machine}
}
Document
A Tour of Gallifrey, a Language for Geodistributed Programming

Authors: Mae Milano, Rolph Recto, Tom Magrino, and Andrew C. Myers

Published in: LIPIcs, Volume 136, 3rd Summit on Advances in Programming Languages (SNAPL 2019)


Abstract
Programming efficient distributed, concurrent systems requires new abstractions that go beyond traditional sequential programming. But programmers already have trouble getting sequential code right, so simplicity is essential. The core problem is that low-latency, high-availability access to data requires replication of mutable state. Keeping replicas fully consistent is expensive, so the question is how to expose asynchronously replicated objects to programmers in a way that allows them to reason simply about their code. We propose an answer to this question in our ongoing work designing a new language, Gallifrey, which provides orthogonal replication through _restrictions_ with _merge strategies_, _contingencies_ for conflicts arising from concurrency, and _branches_, a novel concurrency control construct inspired by version control, to contain provisional behavior.

Cite as

Mae Milano, Rolph Recto, Tom Magrino, and Andrew C. Myers. A Tour of Gallifrey, a Language for Geodistributed Programming. In 3rd Summit on Advances in Programming Languages (SNAPL 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 136, pp. 11:1-11:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{milano_et_al:LIPIcs.SNAPL.2019.11,
  author =	{Milano, Mae and Recto, Rolph and Magrino, Tom and Myers, Andrew C.},
  title =	{{A Tour of Gallifrey, a Language for Geodistributed Programming}},
  booktitle =	{3rd Summit on Advances in Programming Languages (SNAPL 2019)},
  pages =	{11:1--11:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-113-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{136},
  editor =	{Lerner, Benjamin S. and Bod{\'\i}k, Rastislav and Krishnamurthi, Shriram},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SNAPL.2019.11},
  URN =		{urn:nbn:de:0030-drops-105549},
  doi =		{10.4230/LIPIcs.SNAPL.2019.11},
  annote =	{Keywords: programming languages, distributed systems, weak consistency, linear types}
}
Document
DynaSOAr: A Parallel Memory Allocator for Object-Oriented Programming on GPUs with Efficient Memory Access

Authors: Matthias Springer and Hidehiko Masuhara

Published in: LIPIcs, Volume 134, 33rd European Conference on Object-Oriented Programming (ECOOP 2019)


Abstract
Object-oriented programming has long been regarded as too inefficient for SIMD high-performance computing, despite the fact that many important HPC applications have an inherent object structure. On SIMD accelerators, including GPUs, this is mainly due to performance problems with memory allocation and memory access: there are a few libraries that support parallel memory allocation directly on accelerator devices, but all of them suffer from uncoalesced memory accesses. We discovered a broad class of object-oriented programs with many important real-world applications that can be implemented efficiently on massively parallel SIMD accelerators. We call this class Single-Method Multiple-Objects (SMMO), because parallelism is expressed by running a method on all objects of a type.

To make fast GPU programming available to domain experts who are less experienced in GPU programming, we developed DynaSOAr, a CUDA framework for SMMO applications. DynaSOAr consists of (1) a fully parallel, lock-free, dynamic memory allocator, (2) a data layout DSL and (3) an efficient, parallel do-all operation. DynaSOAr achieves performance superior to state-of-the-art GPU memory allocators by controlling both memory allocation and memory access. DynaSOAr improves the usage of allocated memory with a Structure of Arrays (SOA) data layout and achieves low memory fragmentation through efficient management of free and allocated memory blocks with lock-free, hierarchical bitmaps. In contrast to other allocators, our design is heavily based on atomic operations, trading raw (de)allocation performance for better overall application performance. In our benchmarks, DynaSOAr achieves a speedup of application code of up to 3x over state-of-the-art allocators. Moreover, DynaSOAr manages heap memory more efficiently than other allocators, allowing programmers to run up to 2x larger problem sizes with the same amount of memory.
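The two ideas named in the abstract, the Structure of Arrays layout and the SMMO pattern of running one method on all objects of a type, can be sketched in plain C++ as follows. The BodyPool class, its fields, and the do-all function are made-up names; the loop below is sequential, whereas DynaSOAr performs the do-all in parallel on the GPU over its own SOA-managed storage.

// Hypothetical CPU-side sketch of the SOA layout and the SMMO do-all pattern.
#include <cstddef>
#include <vector>

// Structure of Arrays: one array per field, so that when every object runs
// the same method, accesses to a given field are contiguous (coalesced on a GPU).
struct BodyPool {
    std::vector<float> pos_x, pos_y;
    std::vector<float> vel_x, vel_y;

    std::size_t size() const { return pos_x.size(); }

    // The "method" applied to one logical object.
    void move(std::size_t i, float dt) {
        pos_x[i] += vel_x[i] * dt;
        pos_y[i] += vel_y[i] * dt;
    }
};

// do-all: run the method on every object of the type (SMMO).
void do_all_move(BodyPool& pool, float dt) {
    for (std::size_t i = 0; i < pool.size(); ++i) pool.move(i, dt);
}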

Cite as

Matthias Springer and Hidehiko Masuhara. DynaSOAr: A Parallel Memory Allocator for Object-Oriented Programming on GPUs with Efficient Memory Access. In 33rd European Conference on Object-Oriented Programming (ECOOP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 134, pp. 17:1-17:37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{springer_et_al:LIPIcs.ECOOP.2019.17,
  author =	{Springer, Matthias and Masuhara, Hidehiko},
  title =	{{DynaSOAr: A Parallel Memory Allocator for Object-Oriented Programming on GPUs with Efficient Memory Access}},
  booktitle =	{33rd European Conference on Object-Oriented Programming (ECOOP 2019)},
  pages =	{17:1--17:37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-111-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{134},
  editor =	{Donaldson, Alastair F.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2019.17},
  URN =		{urn:nbn:de:0030-drops-108098},
  doi =		{10.4230/LIPIcs.ECOOP.2019.17},
  annote =	{Keywords: CUDA, Data Layout, Dynamic Memory Allocation, GPUs, Object-oriented Programming, SIMD, Single-Instruction Multiple-Objects, Structure of Arrays}
}