Caching with Reserves

Authors: Sharat Ibrahimpur, Manish Purohit, Zoya Svitkina, Erik Vee, Joshua R. Wang



File

LIPIcs.APPROX-RANDOM.2022.52.pdf
  • Filesize: 0.7 MB
  • 16 pages

Document Identifiers

  • DOI: 10.4230/LIPIcs.APPROX/RANDOM.2022.52

Author Details

Sharat Ibrahimpur
  • University of Waterloo, Canada
Manish Purohit
  • Google Research, Mountain View, CA, USA
Zoya Svitkina
  • Google Research, Mountain View, CA, USA
Erik Vee
  • Google Research, Mountain View, CA, USA
Joshua R. Wang
  • Google Research, Mountain View, CA, USA

Acknowledgements

We would like to thank Sungjin Im for suggesting the caching with reserves problem and for useful discussions.

Cite As

Sharat Ibrahimpur, Manish Purohit, Zoya Svitkina, Erik Vee, and Joshua R. Wang. Caching with Reserves. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 245, pp. 52:1-52:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)
https://doi.org/10.4230/LIPIcs.APPROX/RANDOM.2022.52

Abstract

Caching is among the most well-studied topics in algorithm design, in part because it is such a fundamental component of many computer systems. Much of traditional caching research studies cache management for a single-user or single-processor environment. In this paper, we propose two related generalizations of the classical caching problem that capture issues arising in a multi-user or multi-processor environment. In the caching with reserves problem, a caching algorithm is required to maintain at least k_i pages belonging to user i in the cache at any time, for some given reserve capacities k_i. In the public-private caching problem, the cache of total size k is partitioned into subcaches: a private cache of size k_i for each user i and a shared public cache usable by any user. In both of these models, as in the classical caching framework, the objective of the algorithm is to dynamically maintain the cache so as to minimize the total number of cache misses. We show that the caching with reserves and public-private caching models are equivalent up to constant factors, and thus focus on the former. Unlike classical caching, both of these models turn out to be NP-hard even in the offline setting, where the page sequence is known in advance. For the offline setting, we design a 2-approximation algorithm whose analysis carefully tracks a potential function to bound the cost. In the online setting, we first design an O(ln k)-competitive fractional algorithm using the primal-dual framework, and then show how to convert it online into a randomized integral algorithm with the same guarantee.
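To make the reserve constraint concrete, below is a minimal Python sketch of a cache of total size k in which user i must keep at least k_i of its own pages resident at all times. The eviction rule shown is a plain LRU-style heuristic that merely respects the reserves; it is illustrative only and is not the paper's 2-approximation or primal-dual algorithm. The class and parameter names (ReserveCache, reserves, request) are hypothetical.

    from collections import OrderedDict, defaultdict

    class ReserveCache:
        """Cache of total size k in which user i must keep >= reserves[i] pages.

        Hypothetical illustration of the feasibility constraint only; the
        eviction rule is a simple LRU variant, not the algorithm from the paper.
        """

        def __init__(self, k, reserves):
            assert sum(reserves.values()) <= k, "reserves must fit in the cache"
            self.k = k                          # total cache size
            self.reserves = reserves            # user -> reserve capacity k_i
            self.cache = OrderedDict()          # page -> owner, oldest first
            self.count = defaultdict(int)       # user -> pages currently cached
            self.misses = 0

        def request(self, user, page):
            if page in self.cache:
                self.cache.move_to_end(page)    # hit: refresh recency
                return
            self.misses += 1                    # miss: page must be fetched
            if len(self.cache) >= self.k:
                self._evict(requester=user)
            self.cache[page] = user
            self.count[user] += 1

        def _evict(self, requester):
            # Scan from least to most recently used and evict the first page
            # whose removal keeps every user at or above its reserve.  The
            # requester's own pages are always safe to evict, since the
            # incoming page immediately restores that user's count.
            for victim, owner in self.cache.items():
                if owner == requester or self.count[owner] > self.reserves.get(owner, 0):
                    del self.cache[victim]
                    self.count[owner] -= 1
                    return
            raise RuntimeError("no evictable page; check the reserve setup")

    # Example: cache of size 3, users "a" and "b" each reserve one slot.
    cache = ReserveCache(k=3, reserves={"a": 1, "b": 1})
    for user, page in [("a", "p1"), ("b", "p2"), ("a", "p3"), ("b", "p4"), ("a", "p1")]:
        cache.request(user, page)
    print(cache.misses)  # 5: p1 is evicted when p4 arrives and missed again

Skipping pages whose owner sits exactly at its reserve is what keeps the cache feasible: as long as the reserves sum to at most k, some page can always be evicted on a miss. Choosing which page to evict well, rather than by recency alone, is exactly where the offline and online algorithms in the paper come in.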

Subject Classification

ACM Subject Classification
  • Theory of computation → Caching and paging algorithms
Keywords
  • Approximation Algorithms
  • Online Algorithms
  • Caching

