Fragmented ARES: Dynamic Storage for Large Objects

Authors: Chryssis Georgiou, Nicolas Nicolaou, and Andria Trigeorgi




File

LIPIcs.DISC.2022.25.pdf
  • Filesize: 1.53 MB
  • 24 pages


Author Details

Chryssis Georgiou
  • University of Cyprus, Nicosia, Cyprus
Nicolas Nicolaou
  • Algolysis Ltd, Limassol, Cyprus
Andria Trigeorgi
  • University of Cyprus, Nicosia, Cyprus

Cite As

Chryssis Georgiou, Nicolas Nicolaou, and Andria Trigeorgi. Fragmented ARES: Dynamic Storage for Large Objects. In 36th International Symposium on Distributed Computing (DISC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 246, pp. 25:1-25:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)
https://doi.org/10.4230/LIPIcs.DISC.2022.25

Abstract

Data availability is one of the most important features in distributed storage systems, made possible by data replication. Nowadays, data are generated rapidly, and developing efficient, scalable, and reliable storage systems has become one of the major challenges in high-performance computing. In this work, we develop and prove correct a dynamic, robust, and strongly consistent distributed shared memory suitable for handling large objects (such as files) and utilizing erasure coding. We do so by integrating an Adaptive, Reconfigurable, Atomic memory framework, called Ares, with the CoBFS framework, which relies on a block fragmentation technique to handle large objects. With the addition of Ares, we also enable the use of an erasure-coded algorithm to further split the data and to potentially improve storage efficiency at the replica servers and operation latency. Our development is complemented by an in-depth experimental evaluation on the Emulab and AWS EC2 testbeds, illustrating the benefits of our approach, as well as interesting tradeoffs.
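
The abstract describes a two-level treatment of large objects: CoBFS fragments an object into blocks, and ARES may further erasure-code each block across the replica servers. The sketch below is not the authors' implementation; it is a minimal illustration of that idea, with a hypothetical block size, an illustrative fragment count K, and a simple k-data-plus-one-XOR-parity code standing in for the [n, k] MDS codes (e.g. Reed-Solomon) that ARES supports. All names (fragment, encode_block, decode_block) are invented for this example.

```python
from typing import List, Optional

BLOCK_SIZE = 64 * 1024   # hypothetical block size in bytes (not a value from the paper)
K = 4                    # data fragments per block (illustrative only)


def fragment(obj: bytes, block_size: int = BLOCK_SIZE) -> List[bytes]:
    """Split a large object into fixed-size blocks (CoBFS-style fragmentation)."""
    return [obj[i:i + block_size] for i in range(0, len(obj), block_size)]


def encode_block(block: bytes, k: int = K) -> List[bytes]:
    """Split one block into k equal data fragments plus a single XOR parity fragment.

    Any one missing fragment can be rebuilt from the rest, loosely imitating the
    redundancy/storage trade-off of an [n, k] MDS code at the replica servers.
    """
    frag_len = max(1, -(-len(block) // k))           # ceiling division
    padded = block.ljust(frag_len * k, b"\0")        # pad so fragments are equal-sized
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = bytes(frag_len)                         # all-zero fragment
    for f in frags:
        parity = bytes(a ^ b for a, b in zip(parity, f))
    return frags + [parity]                          # n = k + 1 fragments


def decode_block(frags: List[Optional[bytes]], block_len: int, k: int = K) -> bytes:
    """Reassemble a block from n = k + 1 fragments, at most one of which is missing."""
    missing = [i for i, f in enumerate(frags) if f is None]
    assert len(missing) <= 1, "single-parity sketch tolerates one missing fragment"
    if missing:
        present = [f for f in frags if f is not None]
        rebuilt = present[0]
        for f in present[1:]:                        # XOR of the k present fragments
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, f))
        frags = list(frags)
        frags[missing[0]] = rebuilt
    return b"".join(frags[:k])[:block_len]           # drop parity and padding


if __name__ == "__main__":
    obj = b"some large shared object" * 10_000
    blocks = fragment(obj)
    coded = [encode_block(b) for b in blocks]
    coded[0][2] = None                               # pretend one fragment is unavailable
    restored = b"".join(decode_block(f, len(b)) for f, b in zip(coded, blocks))
    assert restored == obj
    print(f"{len(blocks)} blocks, {len(coded[0])} fragments each, object restored intact")
```

In a real deployment a proper MDS code would be used so that any k of the n coded fragments suffice to rebuild a block; the single-parity scheme above tolerates only one missing fragment, but it keeps the example self-contained and dependency-free.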

Subject Classification

ACM Subject Classification
  • Computing methodologies → Distributed algorithms
Keywords
  • Distributed storage
  • Large objects
  • Strong consistency
  • High access concurrency
  • Erasure code
  • Reconfiguration


References

  1. Cassandra. URL: https://cassandra.apache.org/_/index.html.
  2. Colossus. URL: https://cloud.google.com/blog/products/storage-data-transfer/a-peek-behind-colossus-googles-file-system.
  3. Data repository. URL: https://github.com/atrigeorgi/fragmentedARES-data.git.
  4. Dropbox. URL: https://www.dropbox.com/.
  5. Emulab network testbed. URL: https://www.emulab.net/.
  6. HDFS. URL: https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html.
  7. M.K. Aguilera, I. Keidar, D. Malkhi, and A. Shraer. Dynamic atomic storage without consensus. In Proceedings of the 28th ACM Symposium on Principles of Distributed Computing (PODC '09), pages 17-25, New York, NY, USA, 2009. ACM.
  8. A.F. Anta, C. Georgiou, T. Hadjistasi, E. Stavrakis, and A. Trigeorgi. Fragmented Objects: Boosting Concurrency of Shared Large Objects. In Proc. of SIROCCO, pages 1-18, 2021.
  9. Antonio Fernández Anta, Chryssis Georgiou, Theophanis Hadjistasi, Nicolas Nicolaou, Efstathios Stavrakis, and Andria Trigeorgi. Fragmented objects: Boosting concurrency of shared large objects. CoRR, abs/2102.12786, 2021. URL: http://arxiv.org/abs/2102.12786.
  10. H. Attiya. Robust Simulation of Shared Memory: 20 Years After. Bulletin of the EATCS, 100:99-114, 2010.
  11. H. Attiya, A. Bar-Noy, and D. Dolev. Sharing Memory Robustly in Message-Passing Systems. Journal of the ACM (JACM), 42(1):124-142, 1995.
  12. AWS EC2. URL: https://aws.amazon.com/ec2/.
  13. Paul Black. Ratcliff/Obershelp pattern recognition. Dictionary of Algorithms and Data Structures, 2021.
  14. A. Carpen-Amarie. BlobSeer as a Data-Storage Facility for Clouds: Self-Adaptation, Integration, Evaluation. PhD thesis, France, 2012.
  15. P. Dutta, R. Guerraoui, R.R. Levy, and A. Chakraborty. How fast can a distributed atomic read be? In Proc. of PODC, pages 236-245, 2004.
  16. Rui Fan and Nancy A. Lynch. Efficient replication of large data objects. In Faith Ellen Fich, editor, Distributed Computing, 17th International Conference, DISC 2003, Sorrento, Italy, October 1-3, 2003, Proceedings, volume 2848 of Lecture Notes in Computer Science, pages 75-91. Springer, 2003. URL: https://doi.org/10.1007/978-3-540-39989-6_6.
  17. E. Gafni and D. Malkhi. Elastic configuration maintenance via a parsimonious speculating snapshot solution. In Proc. of DISC 2015, volume 9363 of Lecture Notes in Computer Science, pages 140-153. Springer, 2015. URL: https://doi.org/10.1007/978-3-662-48653-5_10.
  18. C. Georgiou, N. Nicolaou, and A.A. Shvartsman. Fault-tolerant semifast implementations of atomic read/write registers. Journal of Parallel and Distributed Computing, 69(1):62-79, 2009.
  19. Chryssis Georgiou, Theophanis Hadjistasi, Nicolas Nicolaou, and Alexander A. Schwarzmann. Implementing three exchange read operations for distributed atomic storage. J. Parallel Distributed Comput., 163:97-113, 2022. URL: https://doi.org/10.1016/j.jpdc.2022.01.024.
  20. Chryssis Georgiou, Nicolas Nicolaou, and Andria Trigeorgi. Fragmented ARES: dynamic storage for large objects. CoRR, abs/2201.13292, 2022. URL: http://arxiv.org/abs/2201.13292.
  21. Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. The Google File System. In Proc. of the 19th ACM Symposium on Operating Systems Principles (SOSP), 2003.
  22. Seth Gilbert, Nancy A. Lynch, and Alexander A. Shvartsman. RAMBO: A robust, reconfigurable atomic memory service for dynamic networks. Distributed Comput., 23(4):225-272, 2010. URL: https://doi.org/10.1007/s00446-010-0117-1.
  23. Vincent Gramoli, Nicolas Nicolaou, and Alexander A. Schwarzmann. Consistent Distributed Storage. Synthesis Lectures on Distributed Computing Theory. Morgan & Claypool Publishers, 2021. URL: https://doi.org/10.2200/S01069ED1V01Y202012DCT017.
  24. M.P. Herlihy and J.M. Wing. Linearizability: A correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems, 12(3):463-492, 1990.
  25. L. Jehl, R. Vitenberg, and H. Meling. SmartMerge: A new approach to reconfiguration for atomic storage. In International Symposium on Distributed Computing (DISC), pages 154-169. Springer, 2015.
  26. N.A. Lynch and A.A. Shvartsman. Robust emulation of shared memory using dynamic quorum-acknowledged broadcasts. In Proc. of FTCS, pages 272-281, 1997.
  27. N. Nicolaou, A.F. Anta, and C. Georgiou. Cover-ability: Consistent versioning in asynchronous, fail-prone, message-passing environments. In Proc. of IEEE NCA 2016, pages 224-231. Institute of Electrical and Electronics Engineers Inc., 2016. URL: https://doi.org/10.1109/NCA.2016.7778622.
  28. Nicolas Nicolaou, Viveck Cadambe, N. Prakash, Andria Trigeorgi, Kishori M. Konwar, Muriel Medard, and Nancy Lynch. ARES: Adaptive, Reconfigurable, Erasure coded, Atomic Storage. ACM Transactions on Storage (TOS), 2022. Accepted. Also available at: https://arxiv.org/abs/1805.03727.
  29. Satadru Pan, Yunqiao Zhang, Atul Sikaria, Pavel Zakharov, Abhinav Sharma, Shiva Shankar, Mike Shuey, Richard Wareing, Monika Gangapuram, Guanglei Cao, Christian Preseau, Pratap Singh, Kestutis Patiejunas, J R Tipton, Theano Stavrinos, Ethan Katz-Bassett, and Wyatt Lloyd. Facebook’s Tectonic Filesystem: Efficiency from Exascale, 2021. URL: https://www.usenix.org/conference/fast21/presentation/pan.
  30. M.O. Rabin. Fingerprinting by random polynomials, 1981. URL: http://www.xmailserver.org/rabin.pdf.
  31. Irving S. Reed and Gustave Solomon. Polynomial codes over certain finite fields. Journal of the Society for Industrial and Applied Mathematics, 8:300-304, 1960.
  32. M. van Steen and A.S. Tanenbaum. Distributed Systems, 3rd ed. distributed-systems.net, 2017.
  33. A. Tridgell and P. Mackerras. The rsync algorithm. Technical Report TR-CS-96-05, The Australian National University, 1996.
  34. P. Viotti and M. Vukolić. Consistency in non-transactional distributed storage systems. ACM Computing Surveys (CSUR), 49:1-34, 2016.