10 Search Results for "Curzel, Serena"


Volume

OASIcs, Volume 116

15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)

PARMA-DITAM 2024, January 18, 2024, Munich, Germany

Editors: João Bispo, Sotirios Xydis, Serena Curzel, and Luís Miguel Sousa

Document
Complete Volume
OASIcs, Volume 116, PARMA-DITAM 2024, Complete Volume

Authors: João Bispo, Sotirios Xydis, Serena Curzel, and Luís Miguel Sousa

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
OASIcs, Volume 116, PARMA-DITAM 2024, Complete Volume

Cite as

15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 1-88, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Proceedings{bispo_et_al:OASIcs.PARMA-DITAM.2024,
  title =	{{OASIcs, Volume 116, PARMA-DITAM 2024, Complete Volume}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{1--88},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024},
  URN =		{urn:nbn:de:0030-drops-196938},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024},
  annote =	{Keywords: OASIcs, Volume 116, PARMA-DITAM 2024, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: João Bispo, Sotirios Xydis, Serena Curzel, and Luís Miguel Sousa

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

João Bispo, Sotirios Xydis, Serena Curzel, and Luís Miguel Sousa. Front Matter, Table of Contents, Preface, Conference Organization. In 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 0:i-0:x, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bispo_et_al:OASIcs.PARMA-DITAM.2024.0,
  author =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{0:i--0:x},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024.0},
  URN =		{urn:nbn:de:0030-drops-196947},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
High-Level Synthesis Developments in the Context of European Space Technology Research (Invited Talk)

Authors: Fabrizio Ferrandi, Michele Fiorito, Claudio Barone, Giovanni Gozzi, and Serena Curzel

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
European efforts to boost competitiveness in the space services sector promote the research and development of advanced software and hardware solutions. The EU-funded HERMES project contributes to the effort by qualifying radiation-hardened, high-performance programmable microprocessors and developing a software ecosystem that facilitates the deployment of complex applications on such platforms. The main objectives of the project include reaching a technology readiness level of 6 (i.e., validated and demonstrated in a relevant environment) for the rad-hard NG-ULTRA FPGA with its ceramic hermetic package CGA 1752, developed within projects of the European Space Agency, the French National Centre for Space Studies, and the European Union. An equally important share of the project is dedicated to the development and validation of tools that support multicore software programming and FPGA acceleration. The HERMES project selected the Bambu High-Level Synthesis tool to integrate C/C++-to-Verilog/VHDL translation capabilities into its development ecosystem. In HERMES, Bambu has been and will be extended to support new FPGA targets, architectural models, model-based design, and input applications. The increased performance offered by FPGAs is thus made available even to software developers who do not have hardware design expertise.
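
To illustrate the kind of input such a High-Level Synthesis flow consumes, here is a minimal, hypothetical C++ kernel of the sort a tool like Bambu can translate into Verilog/VHDL; the function name and interface are illustrative and not taken from the HERMES project.

// Top-level function handed to the HLS tool: each loop iteration becomes
// hardware operations scheduled onto the generated datapath, and the pointer
// arguments become memory or bus interfaces of the resulting RTL module.
extern "C" void saxpy(const float* x, const float* y, float* out,
                      float a, unsigned n) {
  for (unsigned i = 0; i < n; ++i) {
    out[i] = a * x[i] + y[i];
  }
}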

Cite as

Fabrizio Ferrandi, Michele Fiorito, Claudio Barone, Giovanni Gozzi, and Serena Curzel. High-Level Synthesis Developments in the Context of European Space Technology Research (Invited Talk). In 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 1:1-1:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ferrandi_et_al:OASIcs.PARMA-DITAM.2024.1,
  author =	{Ferrandi, Fabrizio and Fiorito, Michele and Barone, Claudio and Gozzi, Giovanni and Curzel, Serena},
  title =	{{High-Level Synthesis Developments in the Context of European Space Technology Research}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{1:1--1:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024.1},
  URN =		{urn:nbn:de:0030-drops-196951},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024.1},
  annote =	{Keywords: High-Level Synthesis, rad-hard FPGAs}
}
Document
Accelerator-Driven Data Arrangement to Minimize Transformers Run-Time on Multi-Core Architectures

Authors: Alireza Amirshahi, Giovanni Ansaloni, and David Atienza

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
The increasing complexity of transformer models in artificial intelligence drives up their computational costs, memory usage, and energy consumption. Hardware acceleration tackles the ensuing challenges by designing processors and accelerators tailored for transformer models, supporting their computation hotspots with high efficiency. However, memory bandwidth can hinder improvements in hardware accelerators. Against this backdrop, in this paper we propose a novel memory arrangement strategy, governed by the hardware accelerator’s kernel size, which effectively minimizes off-chip data access. This arrangement is particularly beneficial for end-to-end transformer model inference, where most of the computation is based on general matrix multiplication (GEMM) operations. Additionally, we address the overhead of non-GEMM operations in transformer models within the scope of this memory data arrangement. Our study explores the implementation and effectiveness of the proposed accelerator-driven data arrangement approach in both single- and multi-core systems. Our evaluation demonstrates that our approach can achieve up to a 2.7x speedup when executing inference with state-of-the-art transformers.
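
As a rough sketch of the underlying idea (not the authors' implementation), the following C++ snippet repacks a row-major matrix into contiguous tiles matching a hypothetical accelerator kernel size K, so that each kernel invocation reads one sequential block from off-chip memory instead of K strided rows.

#include <cstddef>
#include <vector>

// Repack a row-major N x N matrix into K x K tiles stored contiguously.
// Assumes n is a multiple of k; a real implementation would pad the edges.
std::vector<float> tile_rowmajor(const std::vector<float>& a,
                                 std::size_t n, std::size_t k) {
  std::vector<float> tiled(n * n);
  std::size_t idx = 0;
  for (std::size_t bi = 0; bi < n; bi += k)        // tile row
    for (std::size_t bj = 0; bj < n; bj += k)      // tile column
      for (std::size_t i = bi; i < bi + k; ++i)    // rows inside the tile
        for (std::size_t j = bj; j < bj + k; ++j)  // columns inside the tile
          tiled[idx++] = a[i * n + j];
  return tiled;  // each K x K tile is now one contiguous off-chip burst
}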

Cite as

Alireza Amirshahi, Giovanni Ansaloni, and David Atienza. Accelerator-Driven Data Arrangement to Minimize Transformers Run-Time on Multi-Core Architectures. In 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 2:1-2:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{amirshahi_et_al:OASIcs.PARMA-DITAM.2024.2,
  author =	{Amirshahi, Alireza and Ansaloni, Giovanni and Atienza, David},
  title =	{{Accelerator-Driven Data Arrangement to Minimize Transformers Run-Time on Multi-Core Architectures}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{2:1--2:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024.2},
  URN =		{urn:nbn:de:0030-drops-196960},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024.2},
  annote =	{Keywords: Memory arrangement, Data layout, Hardware accelerators, Transformer models, Multi-core, System simulation}
}
Document
Zero-Copy, Minimal-Blackout Virtual Machine Migrations Using Disaggregated Shared Memory

Authors: Andreas Grapentin, Felix Eberhardt, Tobias Zagorni, Andreas Polze, Michele Gazzetti, and Christian Pinto

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
We propose a new live-migration paradigm for virtual machines called zero-copy migration. By making the working set of the virtual machine available on the destination host through transparently byte-addressable disaggregated memory, we remove the need for a pre-copy phase while simultaneously reducing the performance impact of the post-copy phase. We describe an open-source implementation of the proposed paradigm based on QEMU-KVM and libvirt, and we evaluate the efficiency of the approach with a deployment on a functional hardware prototype of a memory disaggregation system realized using ThymesisFlow. Using a series of configurable benchmarks, we show that the lead time and blackout time of the migration are equal to best-case scenarios of traditional pre-copy, post-copy and hybrid approaches. Key performance metrics from the perspective of applications running in the virtual machine, such as memory latency and throughput, are improved by up to three orders of magnitude, increasing both flexibility and responsiveness of live-migrations in the datacenter.
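
Purely as a conceptual illustration of byte-addressable disaggregated memory (not the paper's QEMU-KVM/libvirt implementation), the sketch below maps a hypothetical remote-memory device into the local address space with mmap; the device path is invented, but the point is that loads and stores then reach remote memory directly, which is what removes the pre-copy phase.

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
  // Hypothetical character device exposing a remote memory window,
  // as a ThymesisFlow-like fabric might; the path is illustrative only.
  int fd = open("/dev/remote_mem0", O_RDWR);
  if (fd < 0) { perror("open"); return 1; }

  const size_t len = 1ull << 30;  // 1 GiB window
  void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

  // Ordinary loads and stores now reach the remote host's memory, so the
  // destination can run the VM while its pages still live on the source.
  volatile std::uint8_t first = static_cast<volatile std::uint8_t*>(p)[0];
  (void)first;

  munmap(p, len);
  close(fd);
  return 0;
}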

Cite as

Andreas Grapentin, Felix Eberhardt, Tobias Zagorni, Andreas Polze, Michele Gazzetti, and Christian Pinto. Zero-Copy, Minimal-Blackout Virtual Machine Migrations Using Disaggregated Shared Memory. In 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 3:1-3:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{grapentin_et_al:OASIcs.PARMA-DITAM.2024.3,
  author =	{Grapentin, Andreas and Eberhardt, Felix and Zagorni, Tobias and Polze, Andreas and Gazzetti, Michele and Pinto, Christian},
  title =	{{Zero-Copy, Minimal-Blackout Virtual Machine Migrations Using Disaggregated Shared Memory}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{3:1--3:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024.3},
  URN =		{urn:nbn:de:0030-drops-196972},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024.3},
  annote =	{Keywords: disaggregation, disaggregated memory, vm live migration, thymesisflow, power9, opencapi, performance evaluation, zero copy}
}
Document
Precision Tuning the Rust Memory-Safe Programming Language

Authors: Gabriele Magnani, Lev Denisov, Daniele Cattaneo, Giovanni Agosta, and Stefano Cherubin

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
Precision tuning is an increasingly common approach for exploiting the tradeoff between energy efficiency or speedup, and accuracy. Its effectiveness is particularly strong whenever the maximum performance must be extracted from a computing system, such as embedded platforms. In these contexts, current engineering practice sees a dominance of memory-unsafe programming languages such as C and C++. However, the unsafe nature of these languages has come under great scrutiny as it leads to significant software vulnerabilities. Hence, safer programming languages which prevent memory-related bugs by design have been proposed as a replacement. Amongst these safer programming languages, one of the most popular has been Rust. In this work we adapt a state-of-the-art precision tuning tool, TAFFO, to operate on Rust code. By porting the PolyBench/C benchmark suite to Rust, we show that the effectiveness of the precision tuning is not affected by the use of a safer programming language, and moreover the safety properties of the language can be successfully preserved. Specifically, using TAFFO and Rust we achieved up to a 15× speedup over the base Rust code, thanks to the use of precision tuning.
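
To convey what precision tuning does under the hood, here is a conceptual sketch only: the paper targets Rust and uses TAFFO's actual transformation, whereas this fragment is hand-written C++ that replaces a floating-point accumulation with 16.16 fixed-point arithmetic, trading accuracy for cheaper integer operations.

#include <cstddef>
#include <cstdint>
#include <vector>

// 16.16 fixed-point: a value v is stored as round(v * 2^16) in an int32_t.
using fix16 = std::int32_t;
constexpr int FRAC_BITS = 16;

fix16 to_fix(float v)   { return static_cast<fix16>(v * (1 << FRAC_BITS)); }
float to_float(fix16 v) { return static_cast<float>(v) / (1 << FRAC_BITS); }

fix16 fix_mul(fix16 a, fix16 b) {
  // Widen to 64 bits to keep the full product, then drop the extra fraction bits.
  return static_cast<fix16>((static_cast<std::int64_t>(a) * b) >> FRAC_BITS);
}

// Dot product computed entirely in integer arithmetic, as a precision-tuned
// version of the floating-point original might be (overflow handling omitted).
float dot_fixed(const std::vector<float>& x, const std::vector<float>& y) {
  fix16 acc = 0;
  for (std::size_t i = 0; i < x.size(); ++i)
    acc += fix_mul(to_fix(x[i]), to_fix(y[i]));
  return to_float(acc);
}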

Cite as

Gabriele Magnani, Lev Denisov, Daniele Cattaneo, Giovanni Agosta, and Stefano Cherubin. Precision Tuning the Rust Memory-Safe Programming Language. In 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 4:1-4:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{magnani_et_al:OASIcs.PARMA-DITAM.2024.4,
  author =	{Magnani, Gabriele and Denisov, Lev and Cattaneo, Daniele and Agosta, Giovanni and Cherubin, Stefano},
  title =	{{Precision Tuning the Rust Memory-Safe Programming Language}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{4:1--4:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024.4},
  URN =		{urn:nbn:de:0030-drops-196989},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024.4},
  annote =	{Keywords: Approximate Computing, Memory Safety, Precision Tuning}
}
Document
Embedded Multi-Core Code Generation with Cross-Layer Parallelization

Authors: Oliver Oey, Michael Huebner, Timo Stripf, and Juergen Becker

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
In this paper, we present a method for optimizing C code for embedded multi-core systems using cross-layer parallelization. The method has two phases: first, the algorithm is developed without any optimization for the target platform; second, the code is optimized and parallelized across four defined layers (algorithm, code, task, and data) for efficient execution on the target hardware. Each layer focuses on selected hardware characteristics. By using an iterative approach, individual kernels and composite algorithms can be adapted very well to execution on the hardware without further adaptation of the algorithm itself. The realization of this cross-layer parallelization consists of algorithm recognition, code transformations, task distribution, and insertion of synchronization and communication statements. The method is evaluated first on a common kernel and then on a sample image processing algorithm to showcase the benefits of the approach. Compared to other methods that rely on only two or three of these layers, an additional 20 to 30% performance gain can be achieved.
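
As a simplified illustration of the task and data layers (an assumed OpenMP-style sketch, not the output of the paper's code generator), the loop below distributes contiguous chunks of image rows across cores after the algorithm and code layers have fixed the kernel structure.

#include <vector>

// Data layer: each core works on a contiguous block of rows.
// Task layer: the OpenMP pragma distributes the blocks; its implicit barrier
// plays the role of the inserted synchronization statements.
void blur_rows(const std::vector<float>& in, std::vector<float>& out,
               int rows, int cols) {
  #pragma omp parallel for schedule(static)
  for (int r = 1; r < rows - 1; ++r) {
    for (int c = 0; c < cols; ++c) {
      out[r * cols + c] = (in[(r - 1) * cols + c] +
                           in[r * cols + c] +
                           in[(r + 1) * cols + c]) / 3.0f;
    }
  }
}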

Cite as

Oliver Oey, Michael Huebner, Timo Stripf, and Juergen Becker. Embedded Multi-Core Code Generation with Cross-Layer Parallelization. In 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 5:1-5:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{oey_et_al:OASIcs.PARMA-DITAM.2024.5,
  author =	{Oey, Oliver and Huebner, Michael and Stripf, Timo and Becker, Juergen},
  title =	{{Embedded Multi-Core Code Generation with Cross-Layer Parallelization}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{5:1--5:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024.5},
  URN =		{urn:nbn:de:0030-drops-196990},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024.5},
  annote =	{Keywords: Parallelization, multi-core Processors, model-based Development, Code Generation}
}
Document
Accelerating Large-Scale Graph Processing with FPGAs: Lesson Learned and Future Directions

Authors: Marco Procaccini, Amin Sahebi, Marco Barbone, Wayne Luk, Georgi Gaydadjiev, and Roberto Giorgi

Published in: OASIcs, Volume 116, 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)


Abstract
Processing graphs on a large scale presents a range of difficulties, including irregular memory access patterns, device memory limitations, and the need for effective partitioning in distributed systems, all of which can lead to performance problems on traditional architectures such as CPUs and GPUs. To address these challenges, recent research emphasizes the use of Field-Programmable Gate Arrays (FPGAs) within distributed frameworks, harnessing the power of FPGAs in a distributed environment for accelerated graph processing. This paper examines the effectiveness of a multi-FPGA distributed architecture in combination with a partitioning system to improve data locality and reduce inter-partition communication. Utilizing Hadoop at a higher level, the framework maps the graph to the hardware, efficiently distributing pre-processed data to FPGAs. The FPGA processing engine, integrated into a cluster framework, optimizes data transfers, using offline partitioning for large-scale graph distribution. A first evaluation of the framework is based on the popular PageRank algorithm, which assigns a value to each node in a graph based on its importance. For large-scale graphs, the single-FPGA solution outperformed the GPU solution, which was restricted by memory capacity, achieving a 26x speedup over the CPU compared to the GPU's 12x. Moreover, when a single FPGA device was limited by the size of the graph, our performance model showed that a distributed system with multiple FPGAs could increase performance by around 12x. This highlights the effectiveness of our solution for handling large datasets that surpass on-chip memory restrictions.
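
For reference, a host-side sketch of the partitioned PageRank idea (the data structures are hypothetical; the actual framework drives FPGA engines through Hadoop): edges are grouped into grid partitions so that each partition touches only a bounded vertex range, which is what allows a partition's vertex state to fit the accelerator's on-chip memory.

#include <algorithm>
#include <vector>

struct Edge { int src; int dst; };

// One grid partition: edges whose endpoints fall into a bounded vertex range,
// so the vertex state it needs fits in on-chip memory.
struct Partition { std::vector<Edge> edges; };

// One PageRank iteration over pre-built partitions (damping factor 0.85);
// each partition could be shipped to an FPGA engine independently.
void pagerank_step(const std::vector<Partition>& parts,
                   const std::vector<float>& rank,
                   const std::vector<int>& out_degree,
                   std::vector<float>& next_rank) {
  const float d = 0.85f;
  std::fill(next_rank.begin(), next_rank.end(), (1.0f - d) / rank.size());
  for (const Partition& p : parts)
    for (const Edge& e : p.edges)
      next_rank[e.dst] += d * rank[e.src] / out_degree[e.src];
}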

Cite as

Marco Procaccini, Amin Sahebi, Marco Barbone, Wayne Luk, Georgi Gaydadjiev, and Roberto Giorgi. Accelerating Large-Scale Graph Processing with FPGAs: Lesson Learned and Future Directions. In 15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024). Open Access Series in Informatics (OASIcs), Volume 116, pp. 6:1-6:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{procaccini_et_al:OASIcs.PARMA-DITAM.2024.6,
  author =	{Procaccini, Marco and Sahebi, Amin and Barbone, Marco and Luk, Wayne and Gaydadjiev, Georgi and Giorgi, Roberto},
  title =	{{Accelerating Large-Scale Graph Processing with FPGAs: Lesson Learned and Future Directions}},
  booktitle =	{15th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 13th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2024)},
  pages =	{6:1--6:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-307-2},
  ISSN =	{2190-6807},
  year =	{2024},
  volume =	{116},
  editor =	{Bispo, Jo\~{a}o and Xydis, Sotirios and Curzel, Serena and Sousa, Lu{\'\i}s Miguel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2024.6},
  URN =		{urn:nbn:de:0030-drops-197003},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2024.6},
  annote =	{Keywords: Graph processing, Distributed computing, Grid partitioning, FPGA, Accelerators}
}
Document
Invited Talk
SO(DA)^2: End-to-end Generation of Specialized Reconfigurable Architectures (Invited Talk)

Authors: Antonino Tumeo, Nicolas Bohm Agostini, Serena Curzel, Ankur Limaye, Cheng Tan, Vinay Amatya, Marco Minutoli, Vito Giovanni Castellana, Ang Li, and Joseph Manzano

Published in: OASIcs, Volume 100, 13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)


Abstract
Modern data analysis applications are complex workflows composed of algorithms with diverse behaviors. They may include digital signal processing, data filtering, reduction, compression, graph algorithms, and machine learning. Their performance is highly dependent on the volume, the velocity, and the structure of the data. They are used in many different domains (from small, embedded devices to large-scale, high-performance computing systems), but in all cases they need to provide answers with very low latency to enable real-time decision making and autonomy. Coarse-grained reconfigurable arrays (CGRAs), i.e., architectures composed of functional units able to perform complex operations, interconnected through a network-on-chip, whose datapath can be configured to map complex kernels, are a promising platform to accelerate these applications thanks to their adaptability. They provide higher flexibility than application-specific integrated circuits (ASICs) while offering increased energy efficiency and faster reconfiguration speed with respect to field-programmable gate arrays (FPGAs). However, designing and specializing CGRAs requires significant effort. The inherent flexibility of these devices makes the application mapping process equally important to the hardware design generation. To obtain efficient systems, approaches that simultaneously consider software and hardware optimizations are necessary. In this paper, we discuss the Software Defined Architectures for Data Analytics (SO(DA)²) toolchain, an end-to-end hardware/software codesign framework to generate custom reconfigurable architectures for data analytics applications. SO(DA)² is composed of a high-level compiler (SODA-OPT) and a hardware generator (OpenCGRA) and can automatically explore and generate optimal CGRA designs starting from high-level programming frameworks. SO(DA)² considers partial dynamic reconfiguration as a key element of the system design. We discuss the various elements of the framework and demonstrate the flow on a case study of a partially dynamically reconfigurable CGRA design for data streaming applications.
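
As a purely conceptual sketch of what a CGRA configuration looks like (not SODA-OPT or OpenCGRA output), the snippet below models a small grid of functional units, each holding one opcode and the neighbor that consumes its result; "configuring the datapath" amounts to filling in this table for a given kernel, which in SO(DA)² is derived from the compiler's dataflow graph rather than placed by hand.

#include <array>

// One functional unit of a hypothetical 4x4 CGRA: the operation it performs
// and the neighbor (or local register) that consumes its result.
enum class Op   { Nop, Add, Mul, Load, Store };
enum class Dest { Local, North, East, South, West };

struct FuConfig { Op op; Dest dest; };

using CgraConfig = std::array<std::array<FuConfig, 4>, 4>;

// Hand-placed multiply-accumulate chain along the top row, for illustration;
// a real mapper solves placement and routing automatically.
CgraConfig make_mac_config() {
  CgraConfig cfg{};                       // all FUs default to {Nop, Local}
  cfg[0][0] = {Op::Load,  Dest::East};    // stream operand a eastwards
  cfg[1][1] = {Op::Load,  Dest::North};   // operand b arrives from the row below
  cfg[0][1] = {Op::Mul,   Dest::East};    // a * b
  cfg[0][2] = {Op::Add,   Dest::East};    // accumulate
  cfg[0][3] = {Op::Store, Dest::Local};   // write the result back
  return cfg;
}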

Cite as

Antonino Tumeo, Nicolas Bohm Agostini, Serena Curzel, Ankur Limaye, Cheng Tan, Vinay Amatya, Marco Minutoli, Vito Giovanni Castellana, Ang Li, and Joseph Manzano. SO(DA)^2: End-to-end Generation of Specialized Reconfigurable Architectures (Invited Talk). In 13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022). Open Access Series in Informatics (OASIcs), Volume 100, pp. 1:1-1:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{tumeo_et_al:OASIcs.PARMA-DITAM.2022.1,
  author =	{Tumeo, Antonino and Agostini, Nicolas Bohm and Curzel, Serena and Limaye, Ankur and Tan, Cheng and Amatya, Vinay and Minutoli, Marco and Castellana, Vito Giovanni and Li, Ang and Manzano, Joseph},
  title =	{{SO(DA)^2: End-to-end Generation of Specialized Reconfigurable Architectures}},
  booktitle =	{13th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 11th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2022)},
  pages =	{1:1--1:15},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-231-0},
  ISSN =	{2190-6807},
  year =	{2022},
  volume =	{100},
  editor =	{Palumbo, Francesca and Bispo, Jo\~{a}o and Cherubin, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.PARMA-DITAM.2022.1},
  URN =		{urn:nbn:de:0030-drops-161175},
  doi =		{10.4230/OASIcs.PARMA-DITAM.2022.1},
  annote =	{Keywords: Reconfigurable architectures, data analytics}
}
  • Refine by Author
  • 4 Curzel, Serena
  • 2 Bispo, João
  • 2 Sousa, Luís Miguel
  • 2 Xydis, Sotirios
  • 1 Agosta, Giovanni

  • Refine by Classification
  • 3 Computer systems organization → Reconfigurable computing
  • 3 Software and its engineering → Compilers
  • 2 Computer systems organization → Embedded software
  • 2 Computer systems organization → Multicore architectures
  • 1 Computer systems organization → Architectures

  • Refine by Keyword
  • 1 Accelerators
  • 1 Approximate Computing
  • 1 Code Generation
  • 1 Conference Organization
  • 1 Data layout

  • Refine by Type
  • 9 document
  • 1 volume

  • Refine by Publication Year
  • 9 2024
  • 1 2022
