11 Search Results for "Scott, Michael L."


Document
Engineering Edge Orientation Algorithms

Authors: Henrik Reinstädtler, Christian Schulz, and Bora Uçar

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Given an undirected graph G, the edge orientation problem asks for assigning a direction to each edge to convert G into a directed graph. The aim is to minimize the maximum out-degree of a vertex in the resulting directed graph. This problem, which is solvable in polynomial time, arises in many applications. An ongoing challenge in edge orientation algorithms is their scalability, particularly in handling large-scale networks with millions or billions of edges efficiently. We propose a novel algorithmic framework based on finding and manipulating simple paths to address this challenge. Our framework builds on an existing algorithm and allows many algorithmic choices. By carefully exploring these choices and engineering the underlying algorithms, we obtain an implementation that is more efficient and scalable than the current state of the art. Our experiments demonstrate significant performance improvements over state-of-the-art solvers: on average, our algorithm is 6.59 times faster.
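
As a rough illustration of the objective only (this is not the paper's path-based framework), the Python sketch below greedily orients each edge toward the endpoint with the smaller current out-degree; the result is merely a heuristic upper bound on the optimal maximum out-degree.

# Baseline heuristic for the edge orientation objective (illustrative only).
from collections import defaultdict

def greedy_orientation(edges):
    """edges: iterable of (u, v) pairs of an undirected graph."""
    out_deg = defaultdict(int)
    orientation = []
    for u, v in edges:
        # Send the edge out of the endpoint with fewer outgoing edges so far.
        if out_deg[u] <= out_deg[v]:
            orientation.append((u, v))
            out_deg[u] += 1
        else:
            orientation.append((v, u))
            out_deg[v] += 1
    return orientation, max(out_deg.values(), default=0)

# Example: a triangle can always be oriented with maximum out-degree 1,
# and the heuristic finds such an orientation here.
arcs, max_out = greedy_orientation([(1, 2), (2, 3), (3, 1)])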

Cite as

Henrik Reinstädtler, Christian Schulz, and Bora Uçar. Engineering Edge Orientation Algorithms. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 97:1-97:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{reinstadtler_et_al:LIPIcs.ESA.2024.97,
  author =	{Reinst\"{a}dtler, Henrik and Schulz, Christian and U\c{c}ar, Bora},
  title =	{{Engineering Edge Orientation Algorithms}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{97:1--97:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.97},
  URN =		{urn:nbn:de:0030-drops-211682},
  doi =		{10.4230/LIPIcs.ESA.2024.97},
  annote =	{Keywords: edge orientation, pseudoarboricity, graph algorithms}
}
Document
Cross Module Quickening - The Curious Case of C Extensions

Authors: Felix Berlakovich and Stefan Brunthaler

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Dynamic programming languages such as Python offer expressive power and programmer productivity at the expense of performance. Although the topic of optimizing Python has received considerable attention over the years, a key obstacle remains elusive: C extensions. Time and again, optimized run-time environments, such as JIT compilers and optimizing interpreters, fall short of optimizing across C extensions, as they cannot reason about the native code hiding underneath. To bridge this gap, we present an analysis of C extensions for Python. The analysis data indicates that C extensions come in different varieties. One variety merely speeds up a single task, such as reading a file and processing it directly in C. Another variety offers broad access through an API, resulting in a domain-specific language realized by function calls. While the former variety of C extensions offers little optimization potential for optimizing run-times, we find that the latter variety offers considerable optimization potential. This potential rests on dynamic locality that C extensions cannot readily tap. We introduce Cross-Module Quickening, a new interpreter-based optimization that leverages this untapped potential. The key idea is that C extensions can use an optimization interface to register highly optimized operations on C-extension-specific datatypes. A quickening interpreter uses this information to continuously specialize programs with C extensions. To quantify the attainable performance potential of going beyond C extensions, we demonstrate a concrete instantiation of Cross-Module Quickening for the CPython interpreter and the popular NumPy C extension. We evaluate our implementation with the NPBench benchmark suite and report performance improvements by a factor of up to 2.84.
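
The registration-and-specialization idea can be pictured with a small, purely hypothetical Python sketch (the names below are illustrative and not the paper's CPython interface): an extension registers fast handlers for its own types, and a call site is quickened to the matching handler once the operand type has been observed.

# Hypothetical sketch of the registration idea behind Cross-Module Quickening;
# the real mechanism lives inside the CPython interpreter, not in Python code.
_specialized = {}  # (operation name, operand type) -> fast handler

def register(op, typ, handler):
    _specialized[(op, typ)] = handler

class CallSite:
    """One call site that caches a specialized handler once observed."""
    def __init__(self, op, generic):
        self.op, self.generic, self.fast = op, generic, None

    def __call__(self, x, y):
        if self.fast is not None:
            return self.fast(x, y)
        handler = _specialized.get((self.op, type(x)))
        if handler is not None:
            self.fast = handler          # quicken: later calls skip the lookup
            return handler(x, y)
        return self.generic(x, y)        # fall back to the generic path

# Example: an "extension" registers a fast addition for its array type.
class MiniArray(list):
    pass

register("add", MiniArray, lambda a, b: MiniArray(x + y for x, y in zip(a, b)))
site = CallSite("add", generic=lambda a, b: a + b)
result = site(MiniArray([1, 2]), MiniArray([3, 4]))   # -> MiniArray([4, 6])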

Cite as

Felix Berlakovich and Stefan Brunthaler. Cross Module Quickening - The Curious Case of C Extensions. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 6:1-6:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{berlakovich_et_al:LIPIcs.ECOOP.2024.6,
  author =	{Berlakovich, Felix and Brunthaler, Stefan},
  title =	{{Cross Module Quickening - The Curious Case of C Extensions}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{6:1--6:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.6},
  URN =		{urn:nbn:de:0030-drops-208557},
  doi =		{10.4230/LIPIcs.ECOOP.2024.6},
  annote =	{Keywords: interpreter, optimizations, C extensions, Python}
}
Document
Mechanized HOL Reasoning in Set Theory

Authors: Simon Guilloud, Sankalp Gambhir, Andrea Gilot, and Viktor Kunčak

Published in: LIPIcs, Volume 309, 15th International Conference on Interactive Theorem Proving (ITP 2024)


Abstract
We present a mechanized embedding of higher-order logic (HOL) and algebraic data types (ADTs) into first-order logic with ZFC axioms. Our approach interprets types as sets, with function (arrow) types coinciding with set-theoretic function spaces. We assume traditional FOL syntax without notation for term-level binders. To embed λ-terms, we define a notion of context, which captures the closure of all abstractions occurring inside a term. We implement the embedding in the Lisa proof assistant for schematic first-order logic and its library based on axiomatic set theory (presented at ITP 2023). We show how to implement type checking and the proof steps of HOL Light as proof-producing tactics in Lisa. The embedded HOL theorems and proofs are interoperable with the existing Lisa library. This yields a form of soft type system supporting top-level polymorphism and ADTs within set theory. The approach offers tools for Lisa users to carry out HOL-style proofs within set theory. It also enables the import of HOL Light theorem statements into Lisa, as well as the replay of small HOL Light kernel proofs.
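
The heart of such an embedding is the standard set-theoretic reading of types and λ-abstractions; a schematic version (the notation here is ours, not Lisa's) is:

\[
\llbracket \mathrm{bool} \rrbracket = \{\top,\bot\}, \qquad
\llbracket A \Rightarrow B \rrbracket = \llbracket B \rrbracket^{\llbracket A \rrbracket}, \qquad
\llbracket \lambda x{:}A.\ t \rrbracket_{\sigma} = \{\, (a,\ \llbracket t \rrbracket_{\sigma[x \mapsto a]}) \mid a \in \llbracket A \rrbracket \,\},
\]

where \(\llbracket B \rrbracket^{\llbracket A \rrbracket}\) is the set-theoretic function space and \(\sigma\) assigns set values to the free variables of t.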

Cite as

Simon Guilloud, Sankalp Gambhir, Andrea Gilot, and Viktor Kunčak. Mechanized HOL Reasoning in Set Theory. In 15th International Conference on Interactive Theorem Proving (ITP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 309, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{guilloud_et_al:LIPIcs.ITP.2024.18,
  author =	{Guilloud, Simon and Gambhir, Sankalp and Gilot, Andrea and Kun\v{c}ak, Viktor},
  title =	{{Mechanized HOL Reasoning in Set Theory}},
  booktitle =	{15th International Conference on Interactive Theorem Proving (ITP 2024)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-337-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{309},
  editor =	{Bertot, Yves and Kutsia, Temur and Norrish, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2024.18},
  URN =		{urn:nbn:de:0030-drops-207464},
  doi =		{10.4230/LIPIcs.ITP.2024.18},
  annote =	{Keywords: Proof assistant, First Order Logic, Set Theory, Higher Order Logic}
}
Document
Constraint Modelling with LLMs Using In-Context Learning

Authors: Kostis Michailidis, Dimos Tsouros, and Tias Guns

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Constraint Programming (CP) allows for the modelling and solving of a wide range of combinatorial problems. However, modelling such problems using constraints over decision variables still requires significant expertise, both in conceptual thinking and syntactic use of modelling languages. In this work, we explore the potential of using pre-trained Large Language Models (LLMs) as coding assistants, to transform textual problem descriptions into concrete and executable CP specifications. We present different transformation pipelines with explicit intermediate representations, and we investigate the potential benefit of various retrieval-augmented example selection strategies for in-context learning. We evaluate our approach on two datasets from the literature, namely NL4Opt (optimisation) and Logic Grid Puzzles (satisfaction), and a heterogeneous set of exercises from a CP course. The results show that pre-trained LLMs have promising potential for initialising the modelling process, with retrieval-augmented in-context learning significantly enhancing their modelling capabilities.
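
A minimal sketch of the retrieval-augmented example-selection step, assuming placeholder functions embed() (text to vector) and llm_complete() (prompt to text) that stand in for whichever embedding model and LLM are used; none of these names or fields come from the paper.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_examples(problem_text, example_pool, embed, k=3):
    """Pick the k pool examples whose descriptions are most similar."""
    q = embed(problem_text)
    scored = sorted(example_pool,
                    key=lambda ex: cosine(q, embed(ex["description"])),
                    reverse=True)
    return scored[:k]

def build_prompt(problem_text, examples):
    shots = "\n\n".join(
        f"Problem:\n{ex['description']}\nModel:\n{ex['model_code']}"
        for ex in examples)
    return (f"{shots}\n\nProblem:\n{problem_text}\n"
            f"Model (executable constraint program):\n")

# Usage (hypothetical): llm_complete(build_prompt(text, select_examples(...)))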

Cite as

Kostis Michailidis, Dimos Tsouros, and Tias Guns. Constraint Modelling with LLMs Using In-Context Learning. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 20:1-20:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{michailidis_et_al:LIPIcs.CP.2024.20,
  author =	{Michailidis, Kostis and Tsouros, Dimos and Guns, Tias},
  title =	{{Constraint Modelling with LLMs Using In-Context Learning}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{20:1--20:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.20},
  URN =		{urn:nbn:de:0030-drops-207053},
  doi =		{10.4230/LIPIcs.CP.2024.20},
  annote =	{Keywords: Constraint Modelling, Constraint Acquisition, Constraint Programming, Large Language Models, In-Context Learning, Natural Language Processing, Named Entity Recognition, Retrieval-Augmented Generation, Optimisation}
}
Document
GPU Algorithm for Enumerating PL Spheres of Picard Number 4: Application to Toric Topology

Authors: Suyoung Choi, Hyeontae Jang, and Mathieu Vallée

Published in: LIPIcs, Volume 293, 40th International Symposium on Computational Geometry (SoCG 2024)


Abstract
The fundamental theorem of toric geometry states that a toric manifold is encoded by a complete non-singular fan, whose combinatorial structure is that of a PL sphere together with the set of generators of its rays. The wedge operation on a PL sphere increases its dimension without changing its Picard number. The seeds are the PL spheres that are not wedges. A PL sphere is toric colorable if it comes from a complete rational fan. A result of Choi and Park tells us that the set of toric seeds with a fixed Picard number p is finite. In fact, a toric PL sphere needs its facets to be bases of a binary matroid of corank p with neither coloops nor cocircuits of size 2. We present and use a GPU-friendly and computationally efficient algorithm to enumerate this set of seeds, up to simplicial isomorphism. Explicitly, it allows us to obtain this set of seeds for Picard number 4, which is of central importance in toric topology for the characterization of toric manifolds with small Picard number. This follows the work of Kleinschmidt (1988) and Batyrev (1991), who fully classified toric manifolds with Picard number ≤ 3.

Cite as

Suyoung Choi, Hyeontae Jang, and Mathieu Vallée. GPU Algorithm for Enumerating PL Spheres of Picard Number 4: Application to Toric Topology. In 40th International Symposium on Computational Geometry (SoCG 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 293, pp. 41:1-41:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{choi_et_al:LIPIcs.SoCG.2024.41,
  author =	{Choi, Suyoung and Jang, Hyeontae and Vall\'{e}e, Mathieu},
  title =	{{GPU Algorithm for Enumerating PL Spheres of Picard Number 4: Application to Toric Topology}},
  booktitle =	{40th International Symposium on Computational Geometry (SoCG 2024)},
  pages =	{41:1--41:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-316-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{293},
  editor =	{Mulzer, Wolfgang and Phillips, Jeff M.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2024.41},
  URN =		{urn:nbn:de:0030-drops-199864},
  doi =		{10.4230/LIPIcs.SoCG.2024.41},
  annote =	{Keywords: PL sphere, simplicial sphere, toric manifold, Picard number, weak pseudo-manifold, characteristic map, binary matroid, parallel computing, GPU programming}
}
Document
Dynamic L-Budget Clustering of Curves

Authors: Kevin Buchin, Maike Buchin, Joachim Gudmundsson, Lukas Plätz, Lea Thiel, and Sampson Wong

Published in: LIPIcs, Volume 294, 19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024)


Abstract
A key goal of clustering is data reduction. In center-based clustering of complex objects, therefore, not only the number of clusters but also the complexity of the centers plays a crucial role. We propose L-Budget Clustering as a unifying perspective on this task, optimizing the clustering under the constraint that the summed complexity of all centers is at most L. We present algorithms for clustering planar curves under the Fréchet distance, but note that our algorithms apply more generally to objects in metric spaces whenever a notion of simplification of objects is applicable. A scenario in which data reduction is of particular importance is when space is limited. Our main result is an efficient (8 + ε)-approximation algorithm with a (1 + ε)-resource augmentation that maintains an L-budget clustering under insertion of curves using only O(Lε^{-1}) space and O^*(L³log(L) + L²log(r^*/r₀)) time, where O^* hides factors of ε^{-1}.
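
The paper works with the (continuous) Fréchet distance; as a self-contained stand-in, the sketch below computes the standard discrete Fréchet distance between two polygonal curves given as lists of 2D points.

from functools import lru_cache
from math import dist

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between curves P and Q (lists of points)."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

# Example: two parallel horizontal segments at vertical distance 1
# discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]) == 1.0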

Cite as

Kevin Buchin, Maike Buchin, Joachim Gudmundsson, Lukas Plätz, Lea Thiel, and Sampson Wong. Dynamic L-Budget Clustering of Curves. In 19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 294, pp. 18:1-18:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{buchin_et_al:LIPIcs.SWAT.2024.18,
  author =	{Buchin, Kevin and Buchin, Maike and Gudmundsson, Joachim and Pl\"{a}tz, Lukas and Thiel, Lea and Wong, Sampson},
  title =	{{Dynamic L-Budget Clustering of Curves}},
  booktitle =	{19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024)},
  pages =	{18:1--18:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-318-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{294},
  editor =	{Bodlaender, Hans L.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2024.18},
  URN =		{urn:nbn:de:0030-drops-200588},
  doi =		{10.4230/LIPIcs.SWAT.2024.18},
  annote =	{Keywords: clustering, streaming algorithm, polygonal curves, Fr\'{e}chet distance, storage efficiency, simplification, approximation algorithms}
}
Document
Fast Nonblocking Persistence for Concurrent Data Structures

Authors: Wentao Cai, Haosen Wen, Vladimir Maksimovski, Mingzhe Du, Rafaello Sanna, Shreif Abdallah, and Michael L. Scott

Published in: LIPIcs, Volume 209, 35th International Symposium on Distributed Computing (DISC 2021)


Abstract
We present a fully lock-free variant of our recent Montage system for persistent data structures. The variant, nbMontage, adds persistence to almost any nonblocking concurrent structure without introducing significant overhead or blocking of any kind. Like its predecessor, nbMontage is buffered durably linearizable: it guarantees that the state recovered in the wake of a crash will represent a consistent prefix of pre-crash execution. Unlike its predecessor, nbMontage ensures wait-free progress of the persistence frontier, thereby bounding the number of recent updates that may be lost on a crash, and allowing a thread to force an update of the frontier (i.e., to perform a sync operation) without the risk of blocking. As an extra benefit, the helping mechanism employed by our wait-free sync significantly reduces its latency. Performance results for nonblocking queues, skip lists, trees, and hash tables rival those of custom data structures in the literature: dramatically faster than prior general-purpose systems, and generally within 50% of equivalent non-persistent structures placed in DRAM.

Cite as

Wentao Cai, Haosen Wen, Vladimir Maksimovski, Mingzhe Du, Rafaello Sanna, Shreif Abdallah, and Michael L. Scott. Fast Nonblocking Persistence for Concurrent Data Structures. In 35th International Symposium on Distributed Computing (DISC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 209, pp. 14:1-14:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{cai_et_al:LIPIcs.DISC.2021.14,
  author =	{Cai, Wentao and Wen, Haosen and Maksimovski, Vladimir and Du, Mingzhe and Sanna, Rafaello and Abdallah, Shreif and Scott, Michael L.},
  title =	{{Fast Nonblocking Persistence for Concurrent Data Structures}},
  booktitle =	{35th International Symposium on Distributed Computing (DISC 2021)},
  pages =	{14:1--14:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-210-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{209},
  editor =	{Gilbert, Seth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2021.14},
  URN =		{urn:nbn:de:0030-drops-148169},
  doi =		{10.4230/LIPIcs.DISC.2021.14},
  annote =	{Keywords: Persistent Memory, Nonblocking Progress, Buffered Durable Linearizability}
}
Document
Brief Announcement
Brief Announcement: Building Fast Recoverable Persistent Data Structures with Montage

Authors: Haosen Wen, Wentao Cai, Mingzhe Du, Benjamin Valpey, and Michael L. Scott

Published in: LIPIcs, Volume 179, 34th International Symposium on Distributed Computing (DISC 2020)


Abstract
The recent emergence of fast, dense, nonvolatile main memory suggests that certain long-lived data structures might remain in their natural, pointer-rich format across program runs and hardware reboots. Operations on such structures must be instrumented with explicit write-back and fence instructions to ensure consistency in the wake of a crash. Techniques to minimize the cost of this instrumentation are an active topic of current research. We present what we believe to be the first general-purpose approach to building buffered durably linearizable persistent data structures, and a system, Montage, to support that approach. Montage is built on top of the Ralloc nonblocking persistent allocator. It employs a slow-ticking epoch clock, and ensures that no operation appears to span an epoch boundary. If a crash occurs in epoch e, all work performed in epochs e and e-1 is lost, but all work from prior epochs is preserved. We describe the implementation of Montage, argue its correctness, and report on experiments confirming excellent performance for operations on queues, sets/mappings, and general graphs.
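
The epoch rule described here can be pictured with a toy, purely illustrative Python model (this is not Montage's implementation, which targets nonvolatile memory): updates are staged per epoch, and a crash in epoch e discards the work of epochs e and e-1 while everything older survives.

class BufferedMap:
    def __init__(self):
        self.epoch = 0
        self.persistent = {}          # state guaranteed to survive a crash
        self.pending = {0: {}}        # updates staged in recent epochs

    def put(self, key, value):
        self.pending[self.epoch][key] = value

    def tick(self):
        """Advance the slow-ticking epoch clock; epochs older than the two
        most recent ones become persistent."""
        self.epoch += 1
        self.pending[self.epoch] = {}
        for e in [e for e in self.pending if e < self.epoch - 1]:
            self.persistent.update(self.pending.pop(e))

    def crash_and_recover(self):
        """Model a crash in the current epoch: epochs e and e-1 are lost."""
        self.pending = {0: {}}
        self.epoch = 0
        return dict(self.persistent)

m = BufferedMap()
m.put("a", 1); m.tick(); m.put("b", 2); m.tick(); m.put("c", 3)
state = m.crash_and_recover()   # {"a": 1}: epoch-0 work persisted, rest lost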

Cite as

Haosen Wen, Wentao Cai, Mingzhe Du, Benjamin Valpey, and Michael L. Scott. Brief Announcement: Building Fast Recoverable Persistent Data Structures with Montage. In 34th International Symposium on Distributed Computing (DISC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 179, pp. 52:1-52:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{wen_et_al:LIPIcs.DISC.2020.52,
  author =	{Wen, Haosen and Cai, Wentao and Du, Mingzhe and Valpey, Benjamin and Scott, Michael L.},
  title =	{{Brief Announcement: Building Fast Recoverable Persistent Data Structures with Montage}},
  booktitle =	{34th International Symposium on Distributed Computing (DISC 2020)},
  pages =	{52:1--52:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-168-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{179},
  editor =	{Attiya, Hagit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2020.52},
  URN =		{urn:nbn:de:0030-drops-131304},
  doi =		{10.4230/LIPIcs.DISC.2020.52},
  annote =	{Keywords: Durable linearizability, consistency, persistence, fault tolerance}
}
Document
Typically-Correct Derandomization for Small Time and Space

Authors: William M. Hoza

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
Suppose a language L can be decided by a bounded-error randomized algorithm that runs in space S and time n * poly(S). We give a randomized algorithm for L that still runs in space O(S) and time n * poly(S) but uses only O(S) random bits; our algorithm has a low failure probability on all but a negligible fraction of inputs of each length. As an immediate corollary, there is a deterministic algorithm for L that runs in space O(S) and succeeds on all but a negligible fraction of inputs of each length. We also give several other complexity-theoretic applications of our technique.

Cite as

William M. Hoza. Typically-Correct Derandomization for Small Time and Space. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 9:1-9:39, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{hoza:LIPIcs.CCC.2019.9,
  author =	{Hoza, William M.},
  title =	{{Typically-Correct Derandomization for Small Time and Space}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{9:1--9:39},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.9},
  URN =		{urn:nbn:de:0030-drops-108317},
  doi =		{10.4230/LIPIcs.CCC.2019.9},
  annote =	{Keywords: Derandomization, pseudorandomness, space complexity}
}
Document
Dalí: A Periodically Persistent Hash Map

Authors: Faisal Nawab, Joseph Izraelevitz, Terence Kelly, Charles B. Morrey III, Dhruva R. Chakrabarti, and Michael L. Scott

Published in: LIPIcs, Volume 91, 31st International Symposium on Distributed Computing (DISC 2017)


Abstract
Technology trends suggest that byte-addressable nonvolatile memory (NVM) will supplant many uses of DRAM over the coming decade, raising the prospect of inexpensive recovery from power failures and similar faults. Ensuring the consistency of persistent state remains nontrivial, however, in the presence of volatile caches; cached values can "leak" back to persistent memory in arbitrary order. To ensure consistency, existing persistent memory algorithms use expensive, explicit write-back instructions to force each value back to memory before performing a dependent write, thereby incurring significant run-time overhead. To reduce this overhead, we present a new design paradigm that we call periodic persistence. In a periodically persistent data structure, updates are made "in place," but can safely leak back to memory in any order, because only those updates that are known to be valid will be heeded during recovery. To guarantee forward progress, we periodically force a write-back of all dirty data in the cache, ensuring that all "sufficiently old" updates have indeed become persistent, at which point they become semantically visible to the recovery process. As an example of periodic persistence, we present a transactional hash map, Dalí, together with an informal proof of safety (buffered durable linearizability). Experiments with a prototype implementation suggest that periodic persistence can offer substantially better performance than either file-based or incrementally persistent (per-access write-back) alternatives.
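
The periodic-persistence idea lends itself to a toy, purely illustrative Python model (not Dalí itself, which is a concurrent hash map over nonvolatile memory): writes go to an in-place working copy with no per-write write-back, a periodic flush validates the whole snapshot, and recovery ignores anything newer than the last completed flush.

class PeriodicMap:
    def __init__(self):
        self.cached = {}        # working copy, updated in place
        self.checkpoint = {}    # state known valid after the last full flush

    def put(self, key, value):
        self.cached[key] = value          # no per-write write-back or fence

    def periodic_flush(self):
        """Force every dirty entry back; the whole snapshot becomes valid."""
        self.checkpoint = dict(self.cached)

    def recover(self):
        """After a crash, ignore whatever leaked since the last flush."""
        self.cached = dict(self.checkpoint)
        return self.cached

m = PeriodicMap()
m.put("x", 1); m.periodic_flush(); m.put("y", 2)
m.recover()     # -> {"x": 1}: the un-flushed update to "y" is discarded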

Cite as

Faisal Nawab, Joseph Izraelevitz, Terence Kelly, Charles B. Morrey III, Dhruva R. Chakrabarti, and Michael L. Scott. Dalí: A Periodically Persistent Hash Map. In 31st International Symposium on Distributed Computing (DISC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 91, pp. 37:1-37:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{nawab_et_al:LIPIcs.DISC.2017.37,
  author =	{Nawab, Faisal and Izraelevitz, Joseph and Kelly, Terence and Morrey III, Charles B. and Chakrabarti, Dhruva R. and Scott, Michael L.},
  title =	{{Dal{\'\i}: A Periodically Persistent Hash Map}},
  booktitle =	{31st International Symposium on Distributed Computing (DISC 2017)},
  pages =	{37:1--37:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-053-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{91},
  editor =	{Richa, Andr\'{e}a},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2017.37},
  URN =		{urn:nbn:de:0030-drops-80148},
  doi =		{10.4230/LIPIcs.DISC.2017.37},
  annote =	{Keywords: data structure, nonvolatile memory, durable linearizability}
}
Document
Concurrent Computing in the Many-core Era (Dagstuhl Seminar 15021)

Authors: Michael Philippsen, Pascal Felber, Michael L. Scott, and J. Eliot B. Moss

Published in: Dagstuhl Reports, Volume 5, Issue 1 (2015)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 15021 "Concurrent computing in the many-core era". This seminar is a successor to Dagstuhl Seminars 08241 "Transactional memory: From implementation to application" and 12161 "Abstractions for scalable multicore computing", held in June 2008 and April 2012, respectively. The current seminar built on its predecessors by notably (1) broadening the scope to concurrency beyond transactional memory and shared-memory multicore abstractions, (2) focusing on the new challenges and potential uses of emerging hardware support for synchronization extensions, and (3) considering the increasing complexity resulting from the explosion in heterogeneity.

Cite as

Michael Philippsen, Pascal Felber, Michael L. Scott, and J. Eliot B. Moss. Concurrent Computing in the Many-core Era (Dagstuhl Seminar 15021). In Dagstuhl Reports, Volume 5, Issue 1, pp. 1-56, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@Article{philippsen_et_al:DagRep.5.1.1,
  author =	{Philippsen, Michael and Felber, Pascal and Scott, Michael L. and Moss, J. Eliot B.},
  title =	{{Concurrent Computing in the Many-core Era (Dagstuhl Seminar 15021)}},
  pages =	{1--56},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2015},
  volume =	{5},
  number =	{1},
  editor =	{Philippsen, Michael and Felber, Pascal and Scott, Michael L. and Moss, J. Eliot B.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.5.1.1},
  URN =		{urn:nbn:de:0030-drops-50105},
  doi =		{10.4230/DagRep.5.1.1},
  annote =	{Keywords: Multi-/many-core processors, Concurrent Programming, Synchronization, Transactional Memory, Programming Languages, Compilation}
}
  • Refine by Author
  • 4 Scott, Michael L.
  • 2 Cai, Wentao
  • 2 Du, Mingzhe
  • 2 Wen, Haosen
  • 1 Abdallah, Shreif
  • Show More...

  • Refine by Keyword
  • 1 Buffered Durable Linearizability
  • 1 C extensions
  • 1 Compilation
  • 1 Concurrent Programming
  • 1 Constraint Acquisition
  • Show More...

  • Refine by Type
  • 11 document

  • Refine by Publication Year
  • 6 2024
  • 1 2015
  • 1 2017
  • 1 2019
  • 1 2020
  • Show More...
