Search Results

Documents authored by Aksenov, Vitaly


Document
Toward Self-Adjusting k-Ary Search Tree Networks

Authors: Evgeniy Feder, Anton Paramonov, Pavel Mavrin, Iosif Salem, Vitaly Aksenov, and Stefan Schmid

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Datacenter networks are becoming increasingly flexible with the incorporation of new optical communication technologies, such as optical circuit switches, enabling self-adjusting topologies that can adapt to the traffic pattern in a demand-aware manner. In this paper, we take the first steps toward demand-aware and self-adjusting k-ary tree networks. These are more powerful generalizations of the existing binary search tree networks (such as SplayNet [Stefan Schmid et al., 2016]) that have been at the core of self-adjusting network (SAN) designs. k-ary search tree networks are a natural generalization offering higher node degrees, shorter routes, and local routing despite reconfigurations (since the search property is maintained). Our main results are two online heuristics for self-adjusting k-ary tree networks. Empirical results show that our heuristics outperform SplayNet on most real network traces and on synthetic traces with average to low locality, and are only slightly inferior to SplayNet on the remaining traces. We build our online algorithms by first solving the offline case: we compute an (optimal) static demand-aware network for arbitrary traffic patterns in 𝒪(n³ ⋅ k) time via dynamic programming, where n is the number of network nodes (e.g., datacenter racks), and we improve this bound for the special case of uniformly distributed traffic. We then present a centroid-based approach to demand-aware network design that we use in both the offline static and online settings. In the offline uniform-workload case, we construct this centroid network in linear time, 𝒪(n).
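
The k-ary search-tree structure underlying the design generalizes binary search trees to nodes holding up to k-1 keys and k children, which shortens routes while keeping routing local via the search property. As a rough illustration of that structure only (this is not the paper's centroid construction or its 𝒪(n³ ⋅ k) dynamic program), the sketch below builds a balanced k-ary search tree over a sorted key range, corresponding to the uniform-workload setting; the class and method names are hypothetical.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a balanced k-ary search tree over a sorted key range.
// Internal nodes hold k-1 separator keys and k children; leaves hold up to k-1 keys.
public class KAryTreeSketch {
    static final class Node {
        final List<Integer> keys = new ArrayList<>();   // separator keys (at most k-1)
        final List<Node> children = new ArrayList<>();  // subtrees (at most k); empty for leaves
    }

    // Build over keys[lo..hi), assuming the array is sorted (uniform-workload case).
    static Node build(int[] keys, int lo, int hi, int k) {
        if (lo >= hi) return null;                      // empty gaps become null children
        Node node = new Node();
        if (hi - lo < k) {                              // few keys left: store them in one leaf
            for (int i = lo; i < hi; i++) node.keys.add(keys[i]);
            return node;
        }
        // Choose k-1 evenly spaced separators and recurse on the k gaps between them.
        int prev = lo;
        for (int i = 1; i < k; i++) {
            int sep = lo + (int) ((long) (hi - lo) * i / k);
            node.children.add(build(keys, prev, sep, k));
            node.keys.add(keys[sep]);
            prev = sep + 1;
        }
        node.children.add(build(keys, prev, hi, k));
        return node;
    }

    public static void main(String[] args) {
        int[] keys = new int[15];
        for (int i = 0; i < keys.length; i++) keys[i] = i;
        Node root = build(keys, 0, keys.length, 4);     // a 4-ary tree over 15 keys
        System.out.println("root separators: " + root.keys);  // prints [3, 7, 11]
    }
}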

Cite as

Evgeniy Feder, Anton Paramonov, Pavel Mavrin, Iosif Salem, Vitaly Aksenov, and Stefan Schmid. Toward Self-Adjusting k-Ary Search Tree Networks. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 52:1-52:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{feder_et_al:LIPIcs.ESA.2024.52,
  author =	{Feder, Evgeniy and Paramonov, Anton and Mavrin, Pavel and Salem, Iosif and Aksenov, Vitaly and Schmid, Stefan},
  title =	{{Toward Self-Adjusting k-Ary Search Tree Networks}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{52:1--52:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.52},
  URN =		{urn:nbn:de:0030-drops-211235},
  doi =		{10.4230/LIPIcs.ESA.2024.52},
  annote =	{Keywords: self-adjusting networks, networks, splay-tree, k-ary tree}
}
Document
Brief Announcement: BatchBoost: Universal Batching for Concurrent Data Structures

Authors: Vitaly Aksenov, Michael Anoprenko, Alexander Fedorov, and Michael Spear

Published in: LIPIcs, Volume 281, 37th International Symposium on Distributed Computing (DISC 2023)


Abstract
Batching is a technique that stores multiple keys/values in each node of a data structure. In sequential search data structures, batching reduces latency by reducing the number of cache misses and shortening the chain of pointers to dereference. Applying batching to concurrent data structures is challenging because it is difficult to maintain the search property and keep contention low in the presence of batching. In this paper, we present BatchBoost, a general methodology for leveraging batching in concurrent search data structures. BatchBoost builds a search data structure from distinct "data" and "index" layers. The data layer’s purpose is to store a batch of key/value pairs in each of its nodes. The index layer uses an unmodified concurrent search data structure to route operations to a position in the data layer that is "close" to where the corresponding key should exist. The requirements on the index and data layers are low: with minimal effort, we composed three highly scalable concurrent search data structures, using three existing data structures as index layers and a batched version of the Lazy List as the data layer. The resulting BatchBoost data structures provide significant performance improvements over their original counterparts.
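
To make the two-layer idea concrete, the sketch below shows an index layer routing operations to data-layer nodes that each hold a batch of key/value pairs. It is not the paper's BatchBoost construction: it uses java.util.concurrent.ConcurrentSkipListMap as a stand-in index layer rather than the three index structures from the paper, a lock-protected TreeMap per node rather than the batched Lazy List, and it omits the revalidation a correct concurrent algorithm needs after routing; the class name and the batch capacity are made up.

import java.util.TreeMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Two-layer sketch: a concurrent map as the "index" layer routes each operation
// to a "data" node holding a batch of key/value pairs behind its own lock.
public class TwoLayerBatchSketch<V> {
    static final int BATCH_CAPACITY = 64;               // hypothetical batch size

    static final class DataNode<V> {                    // data-layer node: one batch
        final TreeMap<Integer, V> batch = new TreeMap<>();
    }

    // Index layer: maps the smallest key of each data node to that node.
    private final ConcurrentSkipListMap<Integer, DataNode<V>> index =
            new ConcurrentSkipListMap<>();

    public TwoLayerBatchSketch() {
        index.put(Integer.MIN_VALUE, new DataNode<>()); // sentinel leftmost node
    }

    public V get(int key) {
        DataNode<V> node = index.floorEntry(key).getValue();  // route via the index
        synchronized (node) {                                  // then search inside the batch
            return node.batch.get(key);
        }
    }

    public void put(int key, V value) {
        DataNode<V> node = index.floorEntry(key).getValue();
        synchronized (node) {                                  // a real algorithm would revalidate
            node.batch.put(key, value);                        // the routing here before writing
            if (node.batch.size() > BATCH_CAPACITY) {          // split an overfull batch in two
                int mid = node.batch.keySet().toArray(new Integer[0])[BATCH_CAPACITY / 2];
                DataNode<V> right = new DataNode<>();
                right.batch.putAll(node.batch.tailMap(mid));
                node.batch.tailMap(mid).clear();
                index.put(mid, right);                         // publish the new node via the index
            }
        }
    }

    public static void main(String[] args) {
        TwoLayerBatchSketch<String> m = new TwoLayerBatchSketch<>();
        for (int i = 0; i < 200; i++) m.put(i, "v" + i);
        System.out.println(m.get(123));                        // prints v123
    }
}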

Cite as

Vitaly Aksenov, Michael Anoprenko, Alexander Fedorov, and Michael Spear. Brief Announcement: BatchBoost: Universal Batching for Concurrent Data Structures. In 37th International Symposium on Distributed Computing (DISC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 281, pp. 35:1-35:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{aksenov_et_al:LIPIcs.DISC.2023.35,
  author =	{Aksenov, Vitaly and Anoprenko, Michael and Fedorov, Alexander and Spear, Michael},
  title =	{{Brief Announcement: BatchBoost: Universal Batching for Concurrent Data Structures}},
  booktitle =	{37th International Symposium on Distributed Computing (DISC 2023)},
  pages =	{35:1--35:7},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-301-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{281},
  editor =	{Oshman, Rotem},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2023.35},
  URN =		{urn:nbn:de:0030-drops-191612},
  doi =		{10.4230/LIPIcs.DISC.2023.35},
  annote =	{Keywords: Concurrency, Synchronization, Locality}
}
Document
The Splay-List: A Distribution-Adaptive Concurrent Skip-List

Authors: Vitaly Aksenov, Dan Alistarh, Alexandra Drozdova, and Amirkeivan Mohtashami

Published in: LIPIcs, Volume 179, 34th International Symposium on Distributed Computing (DISC 2020)


Abstract
The design and implementation of efficient concurrent data structures have received significant attention. However, most of this work has focused on concurrent data structures providing good worst-case guarantees. In real workloads, objects are often accessed at different rates, since access distributions may be non-uniform. Efficient distribution-adaptive data structures are known in the sequential case, e.g., splay trees; however, they are often hard to translate efficiently to the concurrent case. In this paper, we investigate distribution-adaptive concurrent data structures and propose a new design called the splay-list. At a high level, the splay-list is similar to a standard skip-list, with the key distinction that the height of each element adapts dynamically to its access rate: popular elements "move up," whereas rarely accessed elements decrease in height. We show that the splay-list provides order-optimal amortized complexity bounds for a subset of operations, while being amenable to efficient concurrent implementation. Experimental results show that the splay-list can leverage distribution-adaptivity to improve on the performance of classic concurrent designs, and can outperform the only previously known distribution-adaptive design in certain settings.
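
The sketch below is a sequential toy version of the height-adaptation idea only: every node counts its accesses and is promoted one level whenever the count crosses a hypothetical threshold, so popular keys end up with shorter search paths. The actual splay-list uses different counters and promotion/demotion rules and is designed for concurrent execution; none of that is reproduced here, and all names and constants are illustrative.

// Toy sequential sketch of "popular elements move up" in a skip-list-like structure.
public class AdaptiveSkipListSketch {
    static final int MAX_LEVEL = 16;
    static final long PROMOTE_EVERY = 8;  // hypothetical promotion threshold

    static final class Node {
        final int key;
        long hits;                        // how often this key was accessed
        final Node[] next = new Node[MAX_LEVEL];
        int level = 1;                    // number of levels this node currently occupies
        Node(int key) { this.key = key; }
    }

    private final Node head = new Node(Integer.MIN_VALUE);
    { head.level = MAX_LEVEL; }

    // Standard skip-list search, remembering the predecessor at every level.
    private Node[] findPredecessors(int key) {
        Node[] preds = new Node[MAX_LEVEL];
        Node cur = head;
        for (int lvl = MAX_LEVEL - 1; lvl >= 0; lvl--) {
            while (cur.next[lvl] != null && cur.next[lvl].key < key) cur = cur.next[lvl];
            preds[lvl] = cur;
        }
        return preds;
    }

    public void insert(int key) {
        Node[] preds = findPredecessors(key);
        Node node = new Node(key);        // new nodes start at the bottom level
        node.next[0] = preds[0].next[0];
        preds[0].next[0] = node;
    }

    public boolean contains(int key) {
        Node[] preds = findPredecessors(key);
        Node candidate = preds[0].next[0];
        if (candidate == null || candidate.key != key) return false;
        // Adaptation step: a frequently accessed node "moves up" one level.
        if (++candidate.hits % PROMOTE_EVERY == 0 && candidate.level < MAX_LEVEL) {
            int lvl = candidate.level++;
            candidate.next[lvl] = preds[lvl].next[lvl];
            preds[lvl].next[lvl] = candidate;
        }
        return true;
    }

    public static void main(String[] args) {
        AdaptiveSkipListSketch s = new AdaptiveSkipListSketch();
        for (int i = 0; i < 100; i++) s.insert(i);
        for (int i = 0; i < 1000; i++) s.contains(42);   // the hot key climbs the levels
        System.out.println(s.contains(42));              // true; key 42 now sits on several levels
    }
}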

Cite as

Vitaly Aksenov, Dan Alistarh, Alexandra Drozdova, and Amirkeivan Mohtashami. The Splay-List: A Distribution-Adaptive Concurrent Skip-List. In 34th International Symposium on Distributed Computing (DISC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 179, pp. 3:1-3:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{aksenov_et_al:LIPIcs.DISC.2020.3,
  author =	{Aksenov, Vitaly and Alistarh, Dan and Drozdova, Alexandra and Mohtashami, Amirkeivan},
  title =	{{The Splay-List: A Distribution-Adaptive Concurrent Skip-List}},
  booktitle =	{34th International Symposium on Distributed Computing (DISC 2020)},
  pages =	{3:1--3:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-168-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{179},
  editor =	{Attiya, Hagit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2020.3},
  URN =		{urn:nbn:de:0030-drops-130818},
  doi =		{10.4230/LIPIcs.DISC.2020.3},
  annote =	{Keywords: Data structures, self-adjusting, concurrency}
}
Document
Parallel Combining: Benefits of Explicit Synchronization

Authors: Vitaly Aksenov, Petr Kuznetsov, and Anatoly Shalyto

Published in: LIPIcs, Volume 125, 22nd International Conference on Principles of Distributed Systems (OPODIS 2018)


Abstract
A parallel batched data structure is designed to process synchronized batches of operations using a parallel program. In this paper, we propose parallel combining, a technique that implements a concurrent data structure from a parallel batched one. The idea is that we explicitly synchronize concurrent operations into batches: one of the processes becomes a combiner, which collects concurrent requests and initiates a parallel batched algorithm involving the owners (clients) of the collected requests. Intuitively, the cost of synchronizing the concurrent calls can be compensated by running the parallel batched algorithm. We validate the intuition via two applications. First, we use parallel combining to design a concurrent data structure optimized for read-dominated workloads, taking a dynamic graph data structure as an example. Second, we use a novel parallel batched priority queue to build a concurrent one. In both cases, we obtain performance gains over state-of-the-art algorithms.
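
For comparison with ordinary combining, the sketch below shows only the request-collection step, in the style of flat combining: threads publish requests, and whichever thread acquires the lock becomes the combiner and applies the whole pending batch to a sequential map. Parallel combining as proposed in the paper goes further, with the combiner initiating a parallel batched algorithm in which the waiting clients themselves help execute the batch; that part is not shown, and the names and the underlying map are illustrative choices.

import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantLock;

// Combining sketch: whichever thread wins the lock drains and executes the
// whole batch of pending requests on behalf of the other threads.
public class CombiningSketch {
    static final class Request {
        final int key, value;
        final CompletableFuture<Integer> done = new CompletableFuture<>();
        Request(int key, int value) { this.key = key; this.value = value; }
    }

    private final ConcurrentLinkedQueue<Request> pending = new ConcurrentLinkedQueue<>();
    private final ReentrantLock combinerLock = new ReentrantLock();
    private final Map<Integer, Integer> map = new TreeMap<>();   // sequential structure, combiner-only access

    // put(key, value): publish a request; either we become the combiner or another
    // combiner executes the request for us. Returns the previous value (-1 if none).
    public int put(int key, int value) throws Exception {
        Request req = new Request(key, value);
        pending.add(req);
        while (!req.done.isDone()) {                    // real implementations park instead of spinning
            if (combinerLock.tryLock()) {               // we become the combiner
                try {
                    Request r;
                    while ((r = pending.poll()) != null) {       // drain and apply the batch
                        Integer old = map.put(r.key, r.value);
                        r.done.complete(old == null ? -1 : old);
                    }
                } finally {
                    combinerLock.unlock();
                }
            }
        }
        return req.done.get();
    }

    public static void main(String[] args) throws Exception {
        CombiningSketch s = new CombiningSketch();
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                try {
                    for (int i = 0; i < 1000; i++) s.put(i, id);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println("all batches applied, map size = " + s.map.size());  // prints 1000
    }
}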

Cite as

Vitaly Aksenov, Petr Kuznetsov, and Anatoly Shalyto. Parallel Combining: Benefits of Explicit Synchronization. In 22nd International Conference on Principles of Distributed Systems (OPODIS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 125, pp. 11:1-11:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{aksenov_et_al:LIPIcs.OPODIS.2018.11,
  author =	{Aksenov, Vitaly and Kuznetsov, Petr and Shalyto, Anatoly},
  title =	{{Parallel Combining: Benefits of Explicit Synchronization}},
  booktitle =	{22nd International Conference on Principles of Distributed Systems (OPODIS 2018)},
  pages =	{11:1--11:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-098-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{125},
  editor =	{Cao, Jiannong and Ellen, Faith and Rodrigues, Luis and Ferreira, Bernardo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2018.11},
  URN =		{urn:nbn:de:0030-drops-100713},
  doi =		{10.4230/LIPIcs.OPODIS.2018.11},
  annote =	{Keywords: concurrent data structure, parallel batched data structure, combining}
}