Local Linearizability for Concurrent Container-Type Data Structures

Authors: Andreas Haas, Thomas A. Henzinger, Andreas Holzer, Christoph M. Kirsch, Michael Lippautz, Hannes Payer, Ali Sezgin, Ana Sokolova, Helmut Veith



Cite As

Andreas Haas, Thomas A. Henzinger, Andreas Holzer, Christoph M. Kirsch, Michael Lippautz, Hannes Payer, Ali Sezgin, Ana Sokolova, and Helmut Veith. Local Linearizability for Concurrent Container-Type Data Structures. In 27th International Conference on Concurrency Theory (CONCUR 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 59, pp. 6:1-6:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Abstract
The semantics of concurrent data structures is usually given by a sequential specification and a consistency condition. Linearizability is the most popular consistency condition due to its simplicity and general applicability. Nevertheless, for applications that do not require all guarantees offered by linearizability, recent research has focused on improving performance and scalability of concurrent data structures by relaxing their semantics. In this paper, we present local linearizability, a relaxed consistency condition that is applicable to container-type concurrent data structures like pools, queues, and stacks. While linearizability requires that the effect of each operation is observed by all threads at the same time, local linearizability only requires that for each thread T, the effects of its local insertion operations and the effects of those removal operations that remove values inserted by T are observed by all threads at the same time. We investigate theoretical and practical properties of local linearizability and its relationship to many existing consistency conditions. We present a generic implementation method for locally linearizable data structures that uses existing linearizable data structures as building blocks. Our implementations show performance and scalability improvements over the original building blocks and outperform the fastest existing container-type implementations.
Keywords

  • (concurrent) data structures
  • relaxed semantics
  • linearizability
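The generic implementation method mentioned in the abstract — per-thread instances of a linearizable building block, where each thread inserts into its own instance and removals may take from any instance — can be sketched as follows. This is an illustrative Python sketch, not the authors' Scal implementation: the class name `LocallyLinearizableQueue` and the single global lock are assumptions made for brevity (the paper's building blocks are lock-free linearizable queues).

```python
import threading
from collections import deque
from typing import Any, Dict, Optional

class LocallyLinearizableQueue:
    """Sketch of a locally linearizable queue built from per-thread
    FIFO segments.

    Each thread enqueues into its own backend segment, so the effects of a
    thread's insertions (and of the removals of values that thread inserted)
    are observed by all threads in one agreed-upon order. Dequeues may take
    from any segment, so no global FIFO order across threads is enforced.
    """

    def __init__(self) -> None:
        self._lock = threading.Lock()                 # protects the segment map
        self._segments: Dict[int, deque] = {}         # one FIFO backend per thread

    def enqueue(self, value: Any) -> None:
        tid = threading.get_ident()                   # the inserting thread's id
        with self._lock:
            self._segments.setdefault(tid, deque()).append(value)

    def dequeue(self) -> Optional[Any]:
        with self._lock:
            # Any scan order over segments is permitted by local
            # linearizability; a round-robin or randomized choice would
            # reduce contention in practice.
            for segment in self._segments.values():
                if segment:
                    return segment.popleft()
        return None                                   # empty: no segment has a value
```

Within a single thread the structure behaves as a plain FIFO queue, since all of that thread's values live in one segment; only the interleaving of values from different threads is relaxed.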


