Paging with Dynamic Memory Capacity

Author: Enoch Peserico



File

  • LIPIcs.STACS.2019.56.pdf (0.5 MB, 18 pages)

Author Details

Enoch Peserico
  • Univ. Padova, Italy

Acknowledgements

We are indebted to Federica Bogo and Michele Scquizzato for their relentless encouragement and a number of fruitful discussions on this topic; and to the anonymous referees of STACS 2019 for their insightful, in-depth review of our work.

Cite As

Enoch Peserico. Paging with Dynamic Memory Capacity. In 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 126, pp. 56:1-56:18, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019)
https://doi.org/10.4230/LIPIcs.STACS.2019.56

Abstract

We study a generalization of the classic paging problem that allows the amount of available memory to vary over time - capturing a fundamental property of many modern computing realities, from cloud computing to multi-core and energy-optimized processors. It turns out that good performance in the "classic" case provides no performance guarantees when memory capacity fluctuates: roughly speaking, moving from static to dynamic capacity can mean the difference between optimality within a factor 2 in space and time, and suboptimality by an arbitrarily large factor. More precisely, adopting the competitive analysis framework, we show that some online paging algorithms, despite having an optimal (h,k)-competitive ratio when capacity remains constant, are not (3,k)-competitive for any arbitrarily large k in the presence of minimal capacity fluctuations. In this light it is surprising that several classic paging algorithms perform remarkably well even if memory capacity changes adversarially - in fact, even without taking those changes into explicit account! In particular, we prove that LFD still achieves the minimum number of faults, and that several classic online algorithms such as LRU have a "dynamic" (h,k)-competitive ratio that is the best one can achieve without knowledge of future page requests, even if one had perfect knowledge of future capacity fluctuations. Thus, with careful management, knowing/predicting future memory resources appears far less crucial to performance than knowing/predicting future data accesses. We characterize the optimal "dynamic" (h,k)-competitive ratio exactly, and show it has a somewhat complex expression that is almost but not quite equal to the "classic" ratio k/(k-h+1), thus proving a strict if minuscule separation between online paging performance achievable in the presence or absence of capacity fluctuations.
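The behavior the abstract describes can be made concrete with a small simulation. The sketch below (illustrative only; the function name `lru_faults` and the explicit capacity schedule are assumptions of this example, not notation from the paper) runs LRU on a request sequence while the memory capacity is allowed to change at every step, evicting least-recently-used pages whenever capacity shrinks, and counts page faults.

```python
from collections import OrderedDict

def lru_faults(requests, capacities):
    """Simulate LRU paging with a (possibly varying) memory capacity.

    requests[t] is the page requested at time t; capacities[t] is the
    capacity in effect when serving that request. When capacity shrinks,
    the least recently used pages are evicted until memory fits.
    Returns the total number of page faults.
    """
    memory = OrderedDict()  # pages in recency order (least recent first)
    faults = 0
    for page, cap in zip(requests, capacities):
        # Capacity may have dropped: evict least-recently-used pages first.
        while len(memory) > cap:
            memory.popitem(last=False)
        if page in memory:
            memory.move_to_end(page)   # hit: refresh recency
        else:
            faults += 1                # fault: bring the page in
            if len(memory) >= cap:
                memory.popitem(last=False)
            memory[page] = True
    return faults

# Same request sequence, two static capacities.
reqs = ['a', 'b', 'c', 'a', 'b', 'c']
print(lru_faults(reqs, [2] * 6))  # → 6: at capacity 2, every request faults
print(lru_faults(reqs, [3] * 6))  # → 3: at capacity 3, only cold misses
```

The two runs illustrate the familiar static-capacity sensitivity of LRU; passing a non-constant `capacities` list exercises the dynamic-capacity setting the paper studies, where the adversary controls both the requests and the capacity fluctuations.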

Subject Classification

ACM Subject Classification
  • Theory of computation → Caching and paging algorithms
Keywords
  • paging
  • cache
  • adaptive
  • elastic
  • online
  • competitive
  • virtual
  • energy
