History-Based Adaptive Work Distribution

Author: Evgenij Belikov




File
  • OASIcs.ICCSW.2014.3.pdf (0.96 MB, 8 pages)



Cite As

Evgenij Belikov. History-Based Adaptive Work Distribution. In 2014 Imperial College Computing Student Workshop (ICCSW 2014). Open Access Series in Informatics (OASIcs), Volume 43, pp. 3-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014).
https://doi.org/10.4230/OASIcs.ICCSW.2014.3

Abstract

Exploiting the parallelism of increasingly heterogeneous parallel architectures is challenging due to the complexity of parallelism management. To achieve high performance portability whilst preserving high productivity, high-level approaches to parallel programming delegate parallelism management, such as partitioning and work distribution, to the compiler and the run-time system. Random work stealing has proved efficient for well-structured workloads, but it neglects potentially useful context information that can be obtained through static analysis or run-time monitoring and used to improve load balancing, especially for irregular applications with highly varying thread granularity and thread creation patterns. We investigate the effectiveness of an adaptive work distribution scheme in improving load balancing for an extension of Haskell that provides a deterministic parallel programming model and supports both shared-memory and distributed-memory architectures. The scheme uses a less random variant of work stealing that takes past stealing successes and failures into account. We quantify run-time performance, communication overhead, and stealing success for four divide-and-conquer and data-parallel applications and three different update intervals on a commodity 64-core Beowulf cluster of multi-cores.
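
To make the abstract's core idea concrete, here is a minimal, hypothetical Haskell sketch of history-biased victim selection. It is not the paper's run-time system implementation; all names here (PeerId, StealHistory, score, pickVictim, recordSteal) are invented for illustration. The sketch keeps per-peer counts of past steal successes and failures and picks a victim with probability proportional to its observed success ratio, making stealing "less random" in the sense described above.

    -- History-biased victim selection: an illustrative sketch only.
    import qualified Data.Map.Strict as Map
    import System.Random (randomRIO)

    type PeerId = Int

    -- Per-peer record of past stealing outcomes.
    data StealHistory = StealHistory
      { successes :: !Int
      , failures  :: !Int
      } deriving Show

    type History = Map.Map PeerId StealHistory

    -- Smoothed success ratio; the +1/+2 (Laplace) smoothing keeps
    -- never-tried peers eligible, so the scheme stays exploratory.
    score :: StealHistory -> Double
    score h = (fromIntegral (successes h) + 1)
            / (fromIntegral (successes h + failures h) + 2)

    -- Choose a victim with probability proportional to its score:
    -- "less random" stealing, biased by past successes and failures.
    pickVictim :: History -> IO PeerId
    pickVictim hist = do
      let peers = Map.toList hist
          total = sum [score h | (_, h) <- peers]
      r <- randomRIO (0, total)
      pure (go r peers)
      where
        go _ [(p, _)]      = p
        go r ((p, h):rest) = let s = score h
                             in if r <= s then p else go (r - s) rest
        go _ []            = error "pickVictim: no peers"

    -- Record the outcome of an attempted steal from peer p.
    recordSteal :: PeerId -> Bool -> History -> History
    recordSteal p ok = Map.adjust bump p
      where
        bump h | ok        = h { successes = successes h + 1 }
               | otherwise = h { failures  = failures  h + 1 }

    main :: IO ()
    main = do
      let hist0 = Map.fromList [(p, StealHistory 0 0) | p <- [0 .. 3]]
          hist1 = recordSteal 2 True (recordSteal 1 False hist0)
      victim <- pickVictim hist1
      putStrLn ("chosen victim: " ++ show victim)

In the paper itself the bookkeeping lives in the run-time system and the history information is refreshed at configurable update intervals (the evaluation compares three of them); the smoothing above is just one simple way to balance exploiting historically good victims against occasionally probing the others.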
Keywords
  • Adaptive Load Balancing
  • Work Stealing
  • Work Pushing
  • High-Level Parallel Programming
  • Context-Awareness

