3 Search Results for "Amert, Tanya"


Document
Avoiding Pitfalls when Using NVIDIA GPUs for Real-Time Tasks in Autonomous Systems

Authors: Ming Yang, Nathan Otterness, Tanya Amert, Joshua Bakita, James H. Anderson, and F. Donelson Smith

Published in: LIPIcs, Volume 106, 30th Euromicro Conference on Real-Time Systems (ECRTS 2018)


Abstract
NVIDIA's CUDA API has enabled GPUs to be used as computing accelerators across a wide range of applications. This has resulted in performance gains in many application domains, but the underlying GPU hardware and software are subject to many non-obvious pitfalls. This is particularly problematic for safety-critical systems, where worst-case behaviors must be taken into account. While such behaviors were not a key concern for earlier CUDA users, the usage of GPUs in autonomous vehicles has taken CUDA programs out of the sole domain of computer-vision and machine-learning experts and into safety-critical processing pipelines. Certification is necessary in this new domain, which is problematic because GPU software may have been developed without any regard for worst-case behaviors. Pitfalls when using CUDA in real-time autonomous systems can result from the lack of specifics in official documentation and from developers of GPU software being unaware of the implications of their design choices with regard to real-time requirements. This paper focuses on the particular challenges facing the real-time community when utilizing CUDA-enabled GPUs for autonomous applications, and on best practices for applying real-time safety-critical principles.
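
To make the kind of pitfall alluded to above concrete, here is a minimal, hedged sketch (host-side code against the standard CUDA runtime API; all variable names and the choice of pitfall are illustrative assumptions, not taken from the paper) of one widely documented hazard: implicit synchronization with the legacy default stream.

// Hedged illustration, not reproduced from the paper: the legacy-default-stream pitfall.
// Streams created with cudaStreamCreate() are "blocking" streams, so any operation
// issued to the legacy default stream (stream 0) implicitly synchronizes with them,
// serializing work the programmer may have believed was independent.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  const size_t bytes = 1 << 20;
  float *h_buf, *d_a, *d_b;
  // Pinned host memory; with pageable memory, cudaMemcpyAsync silently
  // degrades to a synchronous copy, itself an easy-to-miss pitfall.
  cudaMallocHost((void**)&h_buf, bytes);
  cudaMalloc((void**)&d_a, bytes);
  cudaMalloc((void**)&d_b, bytes);

  cudaStream_t s1, s2;
  cudaStreamCreate(&s1);  // "blocking" streams (the default)
  cudaStreamCreate(&s2);

  // These two copies may overlap with each other...
  cudaMemcpyAsync(d_a, h_buf, bytes, cudaMemcpyHostToDevice, s1);
  cudaMemcpyAsync(d_b, h_buf, bytes, cudaMemcpyHostToDevice, s2);

  // ...but this copy on the legacy default stream acts as an implicit barrier:
  // it waits for the work above and delays anything issued after it. For
  // real-time tasks sharing a GPU, this is a hidden source of blocking.
  cudaMemcpyAsync(d_a, h_buf, bytes, cudaMemcpyHostToDevice, 0);

  // Common mitigations: create streams with cudaStreamCreateWithFlags(&s,
  // cudaStreamNonBlocking), or compile with nvcc's per-thread default stream option.
  cudaDeviceSynchronize();
  cudaStreamDestroy(s1);
  cudaStreamDestroy(s2);
  cudaFreeHost(h_buf);
  cudaFree(d_a);
  cudaFree(d_b);
  std::printf("copies issued and synchronized\n");
  return 0;
}

Making such defaults explicit in code, rather than relying on undocumented or version-dependent behavior, is in the spirit of the best practices the abstract mentions.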

Cite as

Ming Yang, Nathan Otterness, Tanya Amert, Joshua Bakita, James H. Anderson, and F. Donelson Smith. Avoiding Pitfalls when Using NVIDIA GPUs for Real-Time Tasks in Autonomous Systems. In 30th Euromicro Conference on Real-Time Systems (ECRTS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 106, pp. 20:1-20:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{yang_et_al:LIPIcs.ECRTS.2018.20,
  author =	{Yang, Ming and Otterness, Nathan and Amert, Tanya and Bakita, Joshua and Anderson, James H. and Smith, F. Donelson},
  title =	{{Avoiding Pitfalls when Using NVIDIA GPUs for Real-Time Tasks in Autonomous Systems}},
  booktitle =	{30th Euromicro Conference on Real-Time Systems (ECRTS 2018)},
  pages =	{20:1--20:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-075-0},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{106},
  editor =	{Altmeyer, Sebastian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2018.20},
  URN =		{urn:nbn:de:0030-drops-89845},
  doi =		{10.4230/LIPIcs.ECRTS.2018.20},
  annote =	{Keywords: real-time systems, graphics processing units, scheduling algorithms, parallel computing, embedded software}
}
Document
Using Lock Servers to Scale Real-Time Locking Protocols: Chasing Ever-Increasing Core Counts

Authors: Catherine E. Nemitz, Tanya Amert, and James H. Anderson

Published in: LIPIcs, Volume 106, 30th Euromicro Conference on Real-Time Systems (ECRTS 2018)


Abstract
During the past decade, parallelism-related issues have been at the forefront of real-time systems research due to the advent of multicore technologies. In the coming years, such issues will loom ever larger due to increasing core counts. Having more cores means a greater potential exists for platform capacity loss when the available parallelism cannot be fully exploited. In this paper, such capacity loss is considered in the context of real-time locking protocols. In this context, lock nesting becomes a key concern as it can result in transitive blocking chains that unnecessarily force tasks to execute sequentially. Such chains can be quite long on a larger machine. Contention-sensitive real-time locking protocols have been proposed as a means of "breaking" transitive blocking chains, but such protocols tend to have high overhead due to more complicated lock/unlock logic. To ease such overhead, the usage of lock servers is considered herein. In particular, four specific lock-server paradigms are proposed and many nuances concerning their deployment are explored. Experiments are presented that show that, by executing cache hot, lock servers can enable reductions in lock/unlock overhead of up to 86%. Such reductions make contention-sensitive protocols a viable approach in practice.
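
As a rough sketch of the lock-server idea (standard C++ threads and atomics; every name and the trivial index-order grant policy are illustrative assumptions, not the paper's four paradigms or its contention-sensitive protocols), the code below dedicates one thread to arbitrating a single mutex on behalf of client threads, so the lock's bookkeeping stays in that thread's cache.

// Hedged sketch of a dedicated lock-server thread; not the paper's implementation.
#include <array>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int kClients = 4;
enum ReqState : int { IDLE = 0, WANT_LOCK, HOLDS_LOCK, WANT_UNLOCK };

struct alignas(64) Slot {              // one cache line per client to avoid false sharing
  std::atomic<int> state{IDLE};
};

std::array<Slot, kClients> slots;
std::atomic<bool> stop{false};

// The server polls the request slots; the lock's bookkeeping (here just
// `holder`) never leaves its cache, which is the effect that enables the
// overhead reductions the abstract reports.
void lock_server() {
  int holder = -1;                     // index of the current holder, -1 if free
  while (!stop.load(std::memory_order_relaxed)) {
    if (holder >= 0 &&
        slots[holder].state.load(std::memory_order_acquire) == WANT_UNLOCK) {
      slots[holder].state.store(IDLE, std::memory_order_release);
      holder = -1;
    }
    if (holder < 0) {
      for (int i = 0; i < kClients; ++i) {  // simple index-order grant
        if (slots[i].state.load(std::memory_order_acquire) == WANT_LOCK) {
          holder = i;
          slots[i].state.store(HOLDS_LOCK, std::memory_order_release);
          break;
        }
      }
    }
  }
}

// Clients submit requests through their slot and spin locally until granted.
void client(int id, int* shared_counter) {
  for (int k = 0; k < 1000; ++k) {
    slots[id].state.store(WANT_LOCK, std::memory_order_release);
    while (slots[id].state.load(std::memory_order_acquire) != HOLDS_LOCK) {}
    ++*shared_counter;                 // critical section
    slots[id].state.store(WANT_UNLOCK, std::memory_order_release);
    while (slots[id].state.load(std::memory_order_acquire) != IDLE) {}
  }
}

int main() {
  int counter = 0;
  std::thread server(lock_server);
  std::vector<std::thread> clients;
  for (int i = 0; i < kClients; ++i) clients.emplace_back(client, i, &counter);
  for (auto& t : clients) t.join();
  stop.store(true);
  server.join();
  std::printf("counter = %d (expected %d)\n", counter, kClients * 1000);
  return 0;
}

Pinning the server to a dedicated core (for example with pthread_setaffinity_np on Linux) is what would keep it cache hot in practice; that step is omitted here for portability.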

Cite as

Catherine E. Nemitz, Tanya Amert, and James H. Anderson. Using Lock Servers to Scale Real-Time Locking Protocols: Chasing Ever-Increasing Core Counts. In 30th Euromicro Conference on Real-Time Systems (ECRTS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 106, pp. 25:1-25:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{nemitz_et_al:LIPIcs.ECRTS.2018.25,
  author =	{Nemitz, Catherine E. and Amert, Tanya and Anderson, James H.},
  title =	{{Using Lock Servers to Scale Real-Time Locking Protocols: Chasing Ever-Increasing Core Counts}},
  booktitle =	{30th Euromicro Conference on Real-Time Systems (ECRTS 2018)},
  pages =	{25:1--25:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-075-0},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{106},
  editor =	{Altmeyer, Sebastian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2018.25},
  URN =		{urn:nbn:de:0030-drops-89789},
  doi =		{10.4230/LIPIcs.ECRTS.2018.25},
  annote =	{Keywords: multiprocess locking protocols, nested locks, priority-inversion blocking, reader/writer locks, real-time locking protocols}
}
Document
Using Lock Servers to Scale Real-Time Locking Protocols: Chasing Ever-Increasing Core Counts (Artifact)

Authors: Catherine E. Nemitz, Tanya Amert, and James H. Anderson

Published in: DARTS, Volume 4, Issue 2, Special Issue of the 30th Euromicro Conference on Real-Time Systems (ECRTS 2018)


Abstract
During the past decade, parallelism-related issues have been at the forefront of real-time systems research due to the advent of multicore technologies. In the coming years, such issues will loom ever larger due to increasing core counts. Having more cores means a greater potential exists for platform capacity loss when the available parallelism cannot be fully exploited. In this work, such capacity loss is considered in the context of real-time locking protocols. In this context, lock nesting becomes a key concern as it can result in transitive blocking chains that unnecessarily force tasks to execute sequentially. Such chains can be quite long on a larger machine. Contention-sensitive real-time locking protocols have been proposed as a means of "breaking" transitive blocking chains, but such protocols tend to have high overhead due to more complicated lock/unlock logic. To ease such overhead, the usage of lock servers is considered herein. In particular, four specific lock-server paradigms are proposed and many nuances concerning their deployment are explored. Experiments are presented that show that, by executing cache hot, lock servers can enable reductions in lock/unlock overhead of up to 86%. Such reductions make contention-sensitive protocols a viable approach in practice. This artifact contains the implementation of two contention-sensitive locking protocol variants implemented with four proposed lock-server paradigms, as well as the experiments with which they were evaluated.

Cite as

Catherine E. Nemitz, Tanya Amert, and James H. Anderson. Using Lock Servers to Scale Real-Time Locking Protocols: Chasing Ever-Increasing Core Counts (Artifact). In Special Issue of the 30th Euromicro Conference on Real-Time Systems (ECRTS 2018). Dagstuhl Artifacts Series (DARTS), Volume 4, Issue 2, pp. 2:1-2:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@Article{nemitz_et_al:DARTS.4.2.2,
  author =	{Nemitz, Catherine E. and Amert, Tanya and Anderson, James H.},
  title =	{{Using Lock Servers to Scale Real-Time Locking Protocols: Chasing Ever-Increasing Core Counts (Artifact)}},
  pages =	{2:1--2:3},
  journal =	{Dagstuhl Artifacts Series},
  ISSN =	{2509-8195},
  year =	{2018},
  volume =	{4},
  number =	{2},
  editor =	{Nemitz, Catherine E. and Amert, Tanya and Anderson, James H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DARTS.4.2.2},
  URN =		{urn:nbn:de:0030-drops-89704},
  doi =		{10.4230/DARTS.4.2.2},
  annote =	{Keywords: multiprocess locking protocols, nested locks, priority-inversion blocking, reader/writer locks, real-time locking protocols}
}
  • Refine by Author
  • 3 Amert, Tanya
  • 3 Anderson, James H.
  • 2 Nemitz, Catherine E.
  • 1 Bakita, Joshua
  • 1 Otterness, Nathan

  • Refine by Classification
  • 2 Computer systems organization → Embedded and cyber-physical systems
  • 2 Computer systems organization → Real-time systems
  • 2 Software and its engineering → Process synchronization
  • 1 Computer systems organization → Embedded software
  • 1 Computer systems organization → Heterogeneous (hybrid) systems

  • Refine by Keyword
  • 2 multiprocess locking protocols
  • 2 nested locks
  • 2 priority-inversion blocking
  • 2 reader/writer locks
  • 2 real-time locking protocols

  • Refine by Type
  • 3 document

  • Refine by Publication Year
  • 3 2018
