Dynamic Power Consumption of the Full Posit Processing Unit: Analysis and Experiments

Authors: Michele Piccoli, Davide Zoni, William Fornaciari, Giuseppe Massari, Marco Cococcioni, Federico Rossi, Sergio Saponara, Emanuele Ruffaldi




File

OASIcs.PARMA-DITAM.2023.6.pdf
  • Filesize: 3.03 MB
  • 11 pages

Author Details

Michele Piccoli
  • Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Polytechnic University of Milano, Italy
Davide Zoni
  • Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Polytechnic University of Milano, Italy
William Fornaciari
  • Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Polytechnic University of Milano, Italy
Giuseppe Massari
  • Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Polytechnic University of Milano, Italy
Marco Cococcioni
  • Dipartimento di Ingegneria dell'Informazione, University of Pisa, Italy
Federico Rossi
  • Dipartimento di Ingegneria dell'Informazione, University of Pisa, Italy
Sergio Saponara
  • Dipartimento di Ingegneria dell'Informazione, University of Pisa, Italy
Emanuele Ruffaldi
  • MMI spa, Pisa, Italy

Cite As

Michele Piccoli, Davide Zoni, William Fornaciari, Giuseppe Massari, Marco Cococcioni, Federico Rossi, Sergio Saponara, and Emanuele Ruffaldi. Dynamic Power Consumption of the Full Posit Processing Unit: Analysis and Experiments. In 14th Workshop on Parallel Programming and Run-Time Management Techniques for Many-Core Architectures and 12th Workshop on Design Tools and Architectures for Multicore Embedded Computing Platforms (PARMA-DITAM 2023). Open Access Series in Informatics (OASIcs), Volume 107, pp. 6:1-6:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023) https://doi.org/10.4230/OASIcs.PARMA-DITAM.2023.6

Abstract

Since its introduction in 2017, the Posit™ format for representing real numbers has attracted a lot of interest, as an alternative to IEEE 754 floating point representation. Several hardware implementations of arithmetic operations between posit numbers have also been proposed in recent years. In this work, we analyze the dynamic power consumption of the Full Posit Processing Unit (FPPU) recently developed at the University of Pisa. Experimental results show that we can model the dynamic power consumption of the FPPU with an acceptable approximation error from 2.84% (32-bit FPPU) to 7.32% (8-bit FPPU). Furthermore, from the synthesis of the power monitoring unit alongside the FPPU we demonstrate that the additional power module has an area cost that goes from ∼5% (32-bit FPPU) to ∼30% (8-bit FPPU) of the total unit area occupation.
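For readers unfamiliar with the format: a posit packs a sign bit, a variable-length run of identical bits (the "regime"), up to es exponent bits, and a fraction into a fixed-width word, giving tapered precision around 1.0. As an illustration only (not the FPPU's hardware implementation), a minimal software decoder might look as follows; `decode_posit` is a hypothetical helper, and the default es = 2 follows the 2022 posit standard:

```python
def decode_posit(word: int, nbits: int = 16, es: int = 2) -> float:
    """Decode an nbits-wide posit word to a Python float (illustrative sketch)."""
    mask = (1 << nbits) - 1
    word &= mask
    if word == 0:
        return 0.0
    if word == 1 << (nbits - 1):
        return float("nan")              # NaR ("Not a Real")
    sign = -1.0 if word >> (nbits - 1) else 1.0
    if sign < 0.0:
        word = (-word) & mask            # negative posits: two's complement
    s = format(word, f"0{nbits}b")[1:]   # bits after the sign bit
    run = len(s) - len(s.lstrip(s[0]))   # regime: run of identical bits
    k = run - 1 if s[0] == "1" else -run
    rest = s[run + 1:]                   # skip the regime terminator bit
    e_bits = rest[:es]
    # a truncated exponent field is padded with zeros on the right
    e = int(e_bits, 2) << (es - len(e_bits)) if e_bits else 0
    f_bits = rest[es:]
    frac = int(f_bits, 2) / (1 << len(f_bits)) if f_bits else 0.0
    return sign * 2.0 ** (k * (1 << es) + e) * (1.0 + frac)
```

For example, `decode_posit(0x4000)` yields 1.0 and `decode_posit(0xC000)` yields -1.0 for a 16-bit posit with es = 2.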

ACM Subject Classification
  • Hardware → Power estimation and optimization
  • Hardware → Arithmetic and datapath circuits
  • Hardware → Reconfigurable logic and FPGAs
Keywords
  • power estimation
  • computer arithmetic
  • posit numbers


References

  1. Nvidia TensorFloat32. URL: https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/.
  2. Tesla Dojo Technology: a Guide to Tesla’s Configurable Floating Point Formats & Arithmetic. https://tesla-cdn.thron.com/static/SBY4B9_tesla-dojo-technology_OPNZ0M.pdf, 2022.
  3. Giovanni Agosta, Marco Aldinucci, Carlos Alvarez, Roberto Ammendola, Yasir Arfat, Olivier Beaumont, Massimo Bernaschi, Andrea Biagioni, Tommaso Boccali, Berenger Bramas, Carlo Brandolese, Barbara Cantalupo, Mauro Carrozzo, Daniele Cattaneo, Alessandro Celestini, Massimo Celino, Iacopo Colonnelli, Paolo Cretaro, Pasqua D’Ambra, Marco Danelutto, Roberto Esposito, Lionel Eyraud-Dubois, Antonio Filgueras, William Fornaciari, Ottorino Frezza, Andrea Galimberti, Francesco Giacomini, Brice Goglin, Daniele Gregori, Abdou Guermouche, Francesco Iannone, Michal Kulczewski, Francesca Lo Cicero, Alessandro Lonardo, Alberto R. Martinelli, Michele Martinelli, Xavier Martorell, Giuseppe Massari, Simone Montangero, Gianluca Mittone, Raymond Namyst, Ariel Oleksiak, Paolo Palazzari, Pier Stanislao Paolucci, Federico Reghenzani, Cristian Rossi, Sergio Saponara, Francesco Simula, Federico Terraneo, Samuel Thibault, Massimo Torquati, Matteo Turisini, Piero Vicini, Miquel Vidal, Davide Zoni, and Giuseppe Zummo. Towards extreme scale technologies and accelerators for eurohpc hw/sw supercomputing applications for exascale: The textarossa approach. Microprocessors and Microsystems, 95:104679, 2022. URL: https://doi.org/10.1016/j.micpro.2022.104679.
  4. A. Agrawal, S. M. Mueller, B. M. Fleischer, X. Sun, N. Wang, J. Choi, and K. Gopalakrishnan. DLFloat: A 16-b floating point format designed for deep learning training and inference. In 2019 IEEE 26th Symp. on Computer Arithmetic (ARITH'19), pages 92-95, 2019. URL: https://doi.org/10.1109/ARITH.2019.00023.
  5. N. Burgess, J. Milanovic, N. Stephens, K. Monachopoulos, and D. Mansell. Bfloat16 processing for neural networks. In 2019 IEEE 26th Symp. on Computer Arithmetic (ARITH'19), pages 88-91, 2019. URL: https://doi.org/10.1109/ARITH.2019.00022.
  6. Z. Carmichael, H. F. Langroudi, C. Khazanov, J. Lillie, J. L. Gustafson, and D. Kudithipudi. Deep positron: A deep neural network using the posit number system. In 2019 Design, Automation Test in Europe Conference Exhibition (DATE), pages 1421-1426, 2019. Google Scholar
  7. M. Cococcioni, F. Rossi, E. Ruffaldi, S. Saponara, and B. Dupont de Dinechin. Novel arithmetics in deep neural networks signal processing for autonomous driving: Challenges and opportunities. IEEE Signal Processing Magazine, 38(1):97-110, 2021. URL: https://doi.org/10.1109/MSP.2020.2988436.
  8. Marco Cococcioni, Federico Rossi, Emanuele Ruffaldi, and Sergio Saponara. Fast approximations of activation functions in deep neural networks when using posit arithmetic. Sensors, 20(5), 2020. URL: https://www.mdpi.com/1424-8220/20/5/1515.
  9. Marco Cococcioni, Federico Rossi, Emanuele Ruffaldi, and Sergio Saponara. A lightweight posit processing unit for RISC-V processors in deep neural network applications. IEEE Transactions on Emerging Topics in Computing, pages 1-1, 2021. URL: https://doi.org/10.1109/TETC.2021.3120538.
  10. Marco Cococcioni, Federico Rossi, Emanuele Ruffaldi, and Sergio Saponara. Small reals representations for deep learning at the edge: A comparison. In John Gustafson and Vassil Dimitrov, editors, Next Generation Arithmetic, pages 117-133, Cham, 2022. Springer International Publishing. Google Scholar
  11. Marco Cococcioni, Federico Rossi, Emanuele Ruffaldi, Sergio Saponara, and Francesco Urbani. FPPU: Design and implementation of a pipelined full posit processing unit. Submitted, 2022. Google Scholar
  12. Luca Cremona, William Fornaciari, and Davide Zoni. Automatic identification and hardware implementation of a resource-constrained power model for embedded systems. Sustainable Computing: Informatics and Systems, 29:100467, 2021. URL: https://doi.org/10.1016/j.suscom.2020.100467.
  13. Seyed Hamed Fatemi Langroudi, Zachariah Carmichael, John Gustafson, and Dhireesha Kudithipudi. Positnn framework: Tapered precision deep learning inference for the edge. In 2019 IEEE Space Computing Conference (SCC), pages 53-59, July 2019. URL: https://doi.org/10.1109/SpaceComp.2019.00011.
  14. John L Gustafson and Isaac T Yonemoto. Beating floating point at its own game: Posit arithmetic. Supercomputing Frontiers and Innovations, 4(2):71-86, 2017. Google Scholar
  15. Riya Jain, Niraj Sharma, Farhad Merchant, Sachin Patkar, and Rainer Leupers. CLARINET: A RISC-V based framework for posit arithmetic empiricism. CoRR, abs/2006.00364, 2020. URL: http://arxiv.org/abs/2006.00364.
  16. Jeff Johnson. Rethinking floating point for deep learning. CoRR, abs/1811.01721, 2018. URL: http://arxiv.org/abs/1811.01721.
  17. Urs Köster, Tristan Webb, Xin Wang, Marcel Nassar, Arjun K Bansal, William Constable, Oguz Elibol, Scott Gray, Stewart Hall, Luke Hornof, et al. Flexpoint: An adaptive numerical format for efficient training of deep neural networks. In Proc. of the 31st Conference on Neural Information Processing Systems (NIPS'17), pages 1742-1752, 2017. Google Scholar
  18. J. Lu, C. Fang, M. Xu, J. Lin, and Z. Wang. Evaluations on deep neural networks training using posit number system. IEEE Transactions on Computers, pages 1-1, 2020. Google Scholar
  19. V. Popescu, M. Nassar, X. Wang, E. Tumer, and T. Webb. Flexpoint: Predictive numerics for deep learning. In Proc. of the 25th IEEE Symp. on Computer Arithmetic (ARITH'18), pages 1-4, 2018. URL: https://doi.org/10.1109/ARITH.2018.8464801.
  20. Sugandha Tiwari, Neel Gala, Chester Rebeiro, and V. Kamakoti. PERI: A posit enabled RISC-V core. CoRR, abs/1908.01466, 2019. URL: http://arxiv.org/abs/1908.01466.
  21. Davide Zoni, Luca Cremona, Alessandro Cilardo, Mirko Gagliardi, and William Fornaciari. PowerTap: All-digital power meter modeling for run-time power monitoring. Microprocessors and Microsystems, 63:128-139, 2018. URL: https://doi.org/10.1016/j.micpro.2018.07.007.
  22. Davide Zoni, Luca Cremona, and William Fornaciari. PowerProbe: Run-time power modeling through automatic RTL instrumentation. In 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 743-748, 2018. URL: https://doi.org/10.23919/DATE.2018.8342106.
  23. Davide Zoni, Luca Cremona, and William Fornaciari. All-digital control-theoretic scheme to optimize energy budget and allocation in multi-cores. IEEE Transactions on Computers, 69(5):706-721, 2020. URL: https://doi.org/10.1109/TC.2019.2963859.
  24. Davide Zoni, Luca Cremona, and William Fornaciari. All-digital energy-constrained controller for general-purpose accelerators and cpus. IEEE Embedded Systems Letters, 12(1):17-20, 2020. URL: https://doi.org/10.1109/LES.2019.2914136.
  25. Davide Zoni and Andrea Galimberti. Cost-effective fixed-point hardware support for RISC-V embedded systems. Journal of Systems Architecture, 126:102476, 2022. URL: https://doi.org/10.1016/j.sysarc.2022.102476.
  26. Davide Zoni, Andrea Galimberti, and William Fornaciari. An FPU design template to optimize the accuracy-efficiency-area trade-off. Sustainable Computing: Informatics and Systems, 29:100450, 2021. URL: https://doi.org/10.1016/j.suscom.2020.100450.