Search Results

Documents authored by Sadakane, Kunihiko


Document
A Simple Representation of Tree Covering Utilizing Balanced Parentheses and Efficient Implementation of Average-Case Optimal RMQs

Authors: Kou Hamada, Sankardeep Chakraborty, Seungbum Jo, Takuto Koriyama, Kunihiko Sadakane, and Srinivasa Rao Satti

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Tree covering is a technique for decomposing a tree into smaller trees with desirable properties, and has been employed in various succinct data structures. However, significant hurdles stand in the way of a practical implementation of tree covering: many pointers are used to maintain the tree-covering hierarchy, and the indices for tree navigational queries consume theoretically negligible yet practically vast space. To tackle these problems, we propose a simple representation of tree covering using a balanced-parenthesis (BP) representation. The key to the proposal is the observation that every micro tree splits into at most two intervals in the BP representation. Utilizing this representation, we propose several data structures that represent a tree and its tree cover, which consequently allow micro-tree compression with arbitrary coding and efficient tree navigational queries. We also apply our data structure to the average-case optimal RMQ of Munro et al. [ESA 2021] and implement the resulting RMQ data structure. Our RMQ data structures use less than 2n bits and process queries in practical time in several settings of the performance evaluation, reducing the gap between theoretical space complexity and actual space consumption. For example, our implementation consumes 1.822n bits and processes queries in 5µs on average for random queries and in 13µs on average for the worst query widths. We also implement tree navigational operations while using the same amount of space as the RMQ data structures. We believe the representation can be widely utilized for designing practically memory-efficient data structures based on tree covering.
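As background for the two ingredients named above, the balanced-parenthesis (BP) encoding and range minimum queries, the following toy Python sketch encodes an ordered tree as a BP string via DFS and answers RMQs by a linear scan; it only illustrates the query semantics and is not the paper's tree-covering structure.

# Minimal illustration (not the paper's data structure): encode an ordered
# tree as a balanced-parenthesis (BP) string and answer RMQs naively.

def to_bp(tree, root):
    """tree: dict mapping node -> list of children; returns the BP string."""
    out = []
    def dfs(v):
        out.append("(")
        for c in tree.get(v, []):
            dfs(c)
        out.append(")")
    dfs(root)
    return "".join(out)

def rmq(a, i, j):
    """Index of a minimum of a[i..j], found by scanning the range."""
    best = i
    for k in range(i + 1, j + 1):
        if a[k] < a[best]:
            best = k
    return best

if __name__ == "__main__":
    tree = {0: [1, 4], 1: [2, 3]}
    print(to_bp(tree, 0))               # ((()())())
    print(rmq([3, 1, 4, 1, 5], 1, 3))   # 1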

Cite as

Kou Hamada, Sankardeep Chakraborty, Seungbum Jo, Takuto Koriyama, Kunihiko Sadakane, and Srinivasa Rao Satti. A Simple Representation of Tree Covering Utilizing Balanced Parentheses and Efficient Implementation of Average-Case Optimal RMQs. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 64:1-64:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hamada_et_al:LIPIcs.ESA.2024.64,
  author =	{Hamada, Kou and Chakraborty, Sankardeep and Jo, Seungbum and Koriyama, Takuto and Sadakane, Kunihiko and Satti, Srinivasa Rao},
  title =	{{A Simple Representation of Tree Covering Utilizing Balanced Parentheses and Efficient Implementation of Average-Case Optimal RMQs}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{64:1--64:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.64},
  URN =		{urn:nbn:de:0030-drops-211359},
  doi =		{10.4230/LIPIcs.ESA.2024.64},
  annote =	{Keywords: Hypersuccinct trees, Succinct data structures, Range minimum queries, Binary trees}
}
Document
Succinct Data Structure for Chordal Graphs with Bounded Vertex Leafage

Authors: Girish Balakrishnan, Sankardeep Chakraborty, N. S. Narayanaswamy, and Kunihiko Sadakane

Published in: LIPIcs, Volume 294, 19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024)


Abstract
Chordal graphs form a well-studied, large graph class that is also a strict super-class of path graphs. Munro and Wu (ISAAC 2018) have given an (n²/4 + o(n²))-bit succinct representation for n-vertex unlabeled chordal graphs. A chordal graph G = (V,E) is the intersection graph of sub-trees of a tree T. Based on this characterization, the two parameters of chordal graphs which we consider in this work are leafage, introduced by Lin, McKee and West (Discussiones Mathematicae Graph Theory 1998), and vertex leafage, introduced by Chaplick and Stacho (Discret. Appl. Math. 2014). Leafage is the minimum number of leaves in any possible tree T characterizing G. Let L(u) denote the number of leaves of the sub-tree in T corresponding to u ∈ V, and let k = max_{u ∈ V} L(u); the smallest k for which there exists a tree T for G is called its vertex leafage. In this work, we improve the worst-case information-theoretic lower bound of Munro and Wu (ISAAC 2018) for n-vertex unlabeled chordal graphs when the vertex leafage is bounded and the leafage is unbounded. The class of unlabeled k-vertex-leafage chordal graphs, denoted 𝒢_k, consisting of all chordal graphs with vertex leafage at most k and unbounded leafage, is introduced for the first time. For k > 0 in o(n^c), c > 0, we obtain a lower bound of ((k-1)n log n - kn log k - O(log n)) bits on the size of any data structure that encodes a graph in 𝒢_k. Further, for every k-vertex-leafage chordal graph G and k > 1 in o(n^c), c > 0, we present a ((k-1)n log n + o(kn log n))-bit succinct data structure, constructed using the succinct data structure for path graphs with (k-1)n vertices. Our data structure supports adjacency queries in O(k log n) time and, using an additional 2n log n bits, neighbourhood queries in O(k² d_v log n + log² n) time, where d_v is the degree of v ∈ V.
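To make the subtree-intersection characterization concrete, here is a toy sketch (not the paper's succinct data structure): each vertex of G is assigned a set of nodes of a hypothetical host tree T, and two vertices are adjacent iff their subtrees share a node; leafage and vertex leafage then measure the leaves of these subtrees.

# Illustration of the intersection-graph characterization of chordal graphs
# (a toy sketch, not the paper's data structure).

def intersection_graph(subtrees):
    """subtrees: dict vertex -> set of nodes of a host tree T.
    Returns the edges: u ~ v iff their subtrees intersect."""
    verts = list(subtrees)
    edges = set()
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            if subtrees[u] & subtrees[v]:
                edges.add((u, v))
    return edges

if __name__ == "__main__":
    # Host tree T is the path 1-2-3-4; each subtree here is a subpath.
    subtrees = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
    print(sorted(intersection_graph(subtrees)))   # [('a', 'b')]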

Cite as

Girish Balakrishnan, Sankardeep Chakraborty, N. S. Narayanaswamy, and Kunihiko Sadakane. Succinct Data Structure for Chordal Graphs with Bounded Vertex Leafage. In 19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 294, pp. 4:1-4:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{balakrishnan_et_al:LIPIcs.SWAT.2024.4,
  author =	{Balakrishnan, Girish and Chakraborty, Sankardeep and Narayanaswamy, N. S. and Sadakane, Kunihiko},
  title =	{{Succinct Data Structure for Chordal Graphs with Bounded Vertex Leafage}},
  booktitle =	{19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024)},
  pages =	{4:1--4:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-318-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{294},
  editor =	{Bodlaender, Hans L.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2024.4},
  URN =		{urn:nbn:de:0030-drops-200446},
  doi =		{10.4230/LIPIcs.SWAT.2024.4},
  annote =	{Keywords: succinct data structure, chordal graphs, leafage, vertex leafage, path graphs}
}
Document
Shortest Beer Path Queries Based on Graph Decomposition

Authors: Tesshu Hanaka, Hirotaka Ono, Kunihiko Sadakane, and Kosuke Sugiyama

Published in: LIPIcs, Volume 283, 34th International Symposium on Algorithms and Computation (ISAAC 2023)


Abstract
Given a directed edge-weighted graph G = (V, E) with beer vertices B ⊆ V, a beer path between two vertices u and v is a path between u and v that visits at least one beer vertex in B, and the beer distance between two vertices is the shortest length of a beer path between them. We consider indexing problems on beer paths: a graph is given a priori, and we construct data structures (called indexes) for the graph; later, given two vertices, we find the beer distance or a beer path between them using the indexes. For such a scheme, efficient algorithms using indexes for beer distance and beer path queries have been proposed for outerplanar graphs and interval graphs. For example, Bacic et al. (2021) present indexes of size O(n) for outerplanar graphs and an algorithm using them that answers the beer distance between two given vertices in O(α(n)) time, where α(⋅) is the inverse Ackermann function; this performance is shown to be optimal. This paper proposes indexing data structures and algorithms for beer path queries on general graphs based on two types of graph decomposition: tree decomposition and triconnected component decomposition. We propose indexes of size O(m + nr²) based on the triconnected component decomposition, where r is the size of the largest triconnected component. For a given query u, v ∈ V, our algorithm using the indexes outputs the beer distance in O(α(m)) query time. In particular, our indexing data structures and algorithms achieve the optimal performance (in both space and query time) for series-parallel graphs, which form a wider class than outerplanar graphs.
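For intuition, the beer distance can be computed without any index by routing through the best beer vertex, beer_dist(u,v) = min_{b ∈ B} (dist(u,b) + dist(b,v)); the sketch below does exactly that with plain Dijkstra and is only a baseline, not the decomposition-based index proposed in the paper.

# Naive beer-distance computation (no index): try every beer vertex b and
# take the cheapest u -> b -> v route in the directed, edge-weighted graph.
import heapq

def dijkstra(adj, src):
    dist = {v: float("inf") for v in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue
        for w, cost in adj[v]:
            if d + cost < dist[w]:
                dist[w] = d + cost
                heapq.heappush(pq, (dist[w], w))
    return dist

def beer_distance(adj, radj, beer, u, v):
    du = dijkstra(adj, u)       # distances from u along forward edges
    dv = dijkstra(radj, v)      # distances to v, via the reversed graph
    return min(du[b] + dv[b] for b in beer)

if __name__ == "__main__":
    adj  = {1: [(2, 1)], 2: [(3, 1)], 3: []}
    radj = {1: [], 2: [(1, 1)], 3: [(2, 1)]}
    print(beer_distance(adj, radj, beer={2}, u=1, v=3))   # 2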

Cite as

Tesshu Hanaka, Hirotaka Ono, Kunihiko Sadakane, and Kosuke Sugiyama. Shortest Beer Path Queries Based on Graph Decomposition. In 34th International Symposium on Algorithms and Computation (ISAAC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 283, pp. 37:1-37:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hanaka_et_al:LIPIcs.ISAAC.2023.37,
  author =	{Hanaka, Tesshu and Ono, Hirotaka and Sadakane, Kunihiko and Sugiyama, Kosuke},
  title =	{{Shortest Beer Path Queries Based on Graph Decomposition}},
  booktitle =	{34th International Symposium on Algorithms and Computation (ISAAC 2023)},
  pages =	{37:1--37:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-289-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{283},
  editor =	{Iwata, Satoru and Kakimura, Naonori},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2023.37},
  URN =		{urn:nbn:de:0030-drops-193397},
  doi =		{10.4230/LIPIcs.ISAAC.2023.37},
  annote =	{Keywords: graph algorithm, shortest path problem, SPQR tree}
}
Document
Invited Talk
Succinct Representations of Graphs (Invited Talk)

Authors: Kunihiko Sadakane

Published in: LIPIcs, Volume 248, 33rd International Symposium on Algorithms and Computation (ISAAC 2022)


Abstract
We consider the problem of finding succinct representations of graphs, that is, encodings using asymptotically the minimum number of bits which support queries on the graphs efficiently. For ordered trees, a special class of graphs, there exist many theoretical results and practical implementations. On the other hand, for wider classes of graphs, though there are many results on counting the number of non-isomorphic graphs belonging to a graph class, there were few results on their succinct representations until recently. In this talk, we review some recent results on succinct representations of graphs such as interval, permutation, circle, circular-arc, trapezoid, circle-trapezoid, k-polygon, circle-polygon, cograph, separable, ptolemaic, distance hereditary, clique width k, block, cactus, series-parallel, planar, tree width k, path, boxicity k, chordal bipartite, strongly chordal, chordal graphs, etc.

Cite as

Kunihiko Sadakane. Succinct Representations of Graphs (Invited Talk). In 33rd International Symposium on Algorithms and Computation (ISAAC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 248, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{sadakane:LIPIcs.ISAAC.2022.1,
  author =	{Sadakane, Kunihiko},
  title =	{{Succinct Representations of Graphs}},
  booktitle =	{33rd International Symposium on Algorithms and Computation (ISAAC 2022)},
  pages =	{1:1--1:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-258-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{248},
  editor =	{Bae, Sang Won and Park, Heejin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2022.1},
  URN =		{urn:nbn:de:0030-drops-172865},
  doi =		{10.4230/LIPIcs.ISAAC.2022.1},
  annote =	{Keywords: Graph Enumeration, Succinct Data Structure, Compression}
}
Document
Bi-Directional r-Indexes

Authors: Yuma Arakawa, Gonzalo Navarro, and Kunihiko Sadakane

Published in: LIPIcs, Volume 223, 33rd Annual Symposium on Combinatorial Pattern Matching (CPM 2022)


Abstract
Indexing highly repetitive texts is important in fields such as bioinformatics and versioned repositories. The run-length compression of the Burrows-Wheeler transform (BWT) provides a compressed representation particularly well-suited to text indexing. The r-index is one such index. It enables fast locating of the occurrences of a pattern within O(r) words of space, where r is the number of equal-letter runs in the BWT. Its locating mechanism is to maintain one suffix array sample during the backward search of the pattern, and to compute all the pattern positions from that sample once the backward search is complete. In this paper we develop this algorithm further and propose a new bi-directional text index, called the br-index, which supports extending the matched pattern in both the forward and backward directions, and locating the occurrences of the pattern at any step of the search, within O(r + r_R) words of space, where r_R is the number of equal-letter runs in the BWT of the reversed text. Our experiments show that the br-index captures the long repetitions of the text, and outperforms the existing indexes in text searching allowing some mismatches except in an internal part.
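To ground the quantities r and r_R used above, the following sketch computes a naive Burrows-Wheeler transform and counts its equal-letter runs for a text and for its reverse; the quadratic construction is purely illustrative and is not part of the br-index.

# Illustrative computation of r (number of equal-letter BWT runs) and r_R
# (runs of the BWT of the reversed text); naive construction, intuition only.

def bwt(text):
    text += "$"                                   # unique terminator
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def runs(s):
    return 1 + sum(1 for a, b in zip(s, s[1:]) if a != b)

if __name__ == "__main__":
    t = "mississippi"
    print(runs(bwt(t)), runs(bwt(t[::-1])))       # r and r_R for this text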

Cite as

Yuma Arakawa, Gonzalo Navarro, and Kunihiko Sadakane. Bi-Directional r-Indexes. In 33rd Annual Symposium on Combinatorial Pattern Matching (CPM 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 223, pp. 11:1-11:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{arakawa_et_al:LIPIcs.CPM.2022.11,
  author =	{Arakawa, Yuma and Navarro, Gonzalo and Sadakane, Kunihiko},
  title =	{{Bi-Directional r-Indexes}},
  booktitle =	{33rd Annual Symposium on Combinatorial Pattern Matching (CPM 2022)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-234-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{223},
  editor =	{Bannai, Hideo and Holub, Jan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2022.11},
  URN =		{urn:nbn:de:0030-drops-161386},
  doi =		{10.4230/LIPIcs.CPM.2022.11},
  annote =	{Keywords: Compressed text indexes, Burrows-Wheeler Transform, highly repetitive text collections}
}
Document
Space-Efficient Data Structure for Posets with Applications

Authors: Tatsuya Yanagita, Sankardeep Chakraborty, Kunihiko Sadakane, and Srinivasa Rao Satti

Published in: LIPIcs, Volume 227, 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022)


Abstract
Space-efficient data structures for partially ordered sets, or posets, are a well-researched field. It is known that a poset with n elements can be represented in n²/4 + o(n²) bits [Munro and Nicholson, 2016] and also in (1 + ε)n log n + 2nk + o(nk) bits [Farzan and Fischer, 2011], where k is the width of the poset. In this paper, we reduce the latter data structure to 2n(k-1) + o(nk) bits by considering a topological labeling of the elements of the poset. Also exploiting the topological labeling, we propose a new data structure that answers queries on the transitive reduction graph of a poset faster, though queries on the transitive closure graph are computed more slowly. Moreover, we propose an alternative data structure for topologically labeled posets that answers both kinds of queries faster, though it uses 3nk - 2n + o(nk) bits of space. Additionally, we discuss the advantages of these data structures from the perspective of an application to BlockDAG, a more scalable version of blockchain.
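The two graph views of a poset queried above can be illustrated on a small example: the sketch below computes the transitive closure (reachability, i.e. the order relation) and the transitive reduction (the cover relation) of a DAG by brute force; it is not the proposed succinct structure.

# Toy illustration of the two graph views of a poset: the transitive closure
# (is x <= y?) and the transitive reduction (the Hasse diagram).

def transitive_closure(dag):
    reach = {v: set(ws) for v, ws in dag.items()}
    for k in dag:                       # Warshall-style closure
        for v in dag:
            if k in reach[v]:
                reach[v] |= reach[k]
    return reach

def transitive_reduction(dag):
    reach = transitive_closure(dag)
    red = {}
    for v, ws in dag.items():
        # keep edge v -> w only if no other successor u of v also reaches w
        red[v] = {w for w in ws if not any(w in reach[u] for u in ws if u != w)}
    return red

if __name__ == "__main__":
    dag = {1: {2, 3}, 2: {3}, 3: set()}
    print(transitive_closure(dag))    # {1: {2, 3}, 2: {3}, 3: set()}
    print(transitive_reduction(dag))  # {1: {2}, 2: {3}, 3: set()}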

Cite as

Tatsuya Yanagita, Sankardeep Chakraborty, Kunihiko Sadakane, and Srinivasa Rao Satti. Space-Efficient Data Structure for Posets with Applications. In 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 227, pp. 33:1-33:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{yanagita_et_al:LIPIcs.SWAT.2022.33,
  author =	{Yanagita, Tatsuya and Chakraborty, Sankardeep and Sadakane, Kunihiko and Satti, Srinivasa Rao},
  title =	{{Space-Efficient Data Structure for Posets with Applications}},
  booktitle =	{18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022)},
  pages =	{33:1--33:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-236-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{227},
  editor =	{Czumaj, Artur and Xin, Qin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2022.33},
  URN =		{urn:nbn:de:0030-drops-161931},
  doi =		{10.4230/LIPIcs.SWAT.2022.33},
  annote =	{Keywords: Succinct Data Structures, Posets, Blockchain}
}
Document
Complete Volume
LIPIcs, Volume 212, ISAAC 2021, Complete Volume

Authors: Hee-Kap Ahn and Kunihiko Sadakane

Published in: LIPIcs, Volume 212, 32nd International Symposium on Algorithms and Computation (ISAAC 2021)


Abstract
LIPIcs, Volume 212, ISAAC 2021, Complete Volume

Cite as

32nd International Symposium on Algorithms and Computation (ISAAC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 212, pp. 1-1188, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Proceedings{ahn_et_al:LIPIcs.ISAAC.2021,
  title =	{{LIPIcs, Volume 212, ISAAC 2021, Complete Volume}},
  booktitle =	{32nd International Symposium on Algorithms and Computation (ISAAC 2021)},
  pages =	{1--1188},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-214-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{212},
  editor =	{Ahn, Hee-Kap and Sadakane, Kunihiko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2021},
  URN =		{urn:nbn:de:0030-drops-154329},
  doi =		{10.4230/LIPIcs.ISAAC.2021},
  annote =	{Keywords: LIPIcs, Volume 212, ISAAC 2021, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Hee-Kap Ahn and Kunihiko Sadakane

Published in: LIPIcs, Volume 212, 32nd International Symposium on Algorithms and Computation (ISAAC 2021)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

32nd International Symposium on Algorithms and Computation (ISAAC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 212, pp. 0:i-0:xviii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{ahn_et_al:LIPIcs.ISAAC.2021.0,
  author =	{Ahn, Hee-Kap and Sadakane, Kunihiko},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{32nd International Symposium on Algorithms and Computation (ISAAC 2021)},
  pages =	{0:i--0:xviii},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-214-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{212},
  editor =	{Ahn, Hee-Kap and Sadakane, Kunihiko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2021.0},
  URN =		{urn:nbn:de:0030-drops-154330},
  doi =		{10.4230/LIPIcs.ISAAC.2021.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Enumerating Range Modes

Authors: Kentaro Sumigawa, Sankardeep Chakraborty, Kunihiko Sadakane, and Srinivasa Rao Satti

Published in: LIPIcs, Volume 181, 31st International Symposium on Algorithms and Computation (ISAAC 2020)


Abstract
Given a sequence of elements, we consider the problem of indexing the sequence to support range mode queries: given a query range, find the element with maximum frequency in the range. We give indexing data structures for this problem; given a sequence, we construct a data structure that can be used later to process arbitrary queries. Our algorithms are efficient when the maximum frequency is small. We also consider a natural generalization of the problem, the range mode enumeration problem, for which no efficient algorithms were previously known. Our algorithms have query times that are linear in the output size plus small additional terms.
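As a baseline for the indexing problem described above, a range mode query (and its enumeration variant) can always be answered with no index at all by counting frequencies inside the query range; the sketch below does this in time linear in the range length.

# Index-free baseline for range mode and range mode enumeration:
# count frequencies inside the query range [i, j] and report the maximum.
from collections import Counter

def range_mode(a, i, j):
    """Return (mode, frequency) of a[i..j] (0-indexed, inclusive)."""
    counts = Counter(a[i:j + 1])
    mode, freq = counts.most_common(1)[0]
    return mode, freq

def range_mode_enum(a, i, j):
    """Enumerate all elements attaining the maximum frequency in a[i..j]."""
    counts = Counter(a[i:j + 1])
    freq = max(counts.values())
    return sorted(x for x, c in counts.items() if c == freq), freq

if __name__ == "__main__":
    a = [1, 2, 2, 3, 3, 1]
    print(range_mode(a, 1, 4))        # one mode of frequency 2 (ties arbitrary)
    print(range_mode_enum(a, 1, 4))   # ([2, 3], 2)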

Cite as

Kentaro Sumigawa, Sankardeep Chakraborty, Kunihiko Sadakane, and Srinivasa Rao Satti. Enumerating Range Modes. In 31st International Symposium on Algorithms and Computation (ISAAC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 181, pp. 29:1-29:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{sumigawa_et_al:LIPIcs.ISAAC.2020.29,
  author =	{Sumigawa, Kentaro and Chakraborty, Sankardeep and Sadakane, Kunihiko and Satti, Srinivasa Rao},
  title =	{{Enumerating Range Modes}},
  booktitle =	{31st International Symposium on Algorithms and Computation (ISAAC 2020)},
  pages =	{29:1--29:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-173-3},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{181},
  editor =	{Cao, Yixin and Cheng, Siu-Wing and Li, Minming},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2020.29},
  URN =		{urn:nbn:de:0030-drops-133732},
  doi =		{10.4230/LIPIcs.ISAAC.2020.29},
  annote =	{Keywords: range mode, space-efficient data structure, enumeration algorithm}
}
Document
Storing Set Families More Compactly with Top ZDDs

Authors: Kotaro Matsuda, Shuhei Denzumi, and Kunihiko Sadakane

Published in: LIPIcs, Volume 160, 18th International Symposium on Experimental Algorithms (SEA 2020)


Abstract
Zero-suppressed Binary Decision Diagrams (ZDDs) are data structures for representing set families in a compressed form. With ZDDs, many valuable operations on set families can be done in time polynomial in the ZDD size. In some cases, however, the size of the ZDD representing a large set family becomes too large to store in main memory. This paper proposes the top ZDD, a novel representation of ZDDs which uses less space than existing ones. The top ZDD extends the top tree, a data structure for compressing trees, to compress directed acyclic graphs by sharing identical subgraphs. We prove that navigational operations on ZDDs can be done in time poly-logarithmic in the ZDD size, and show that there exist set families for which the size of the top ZDD is exponentially smaller than that of the ZDD. We also show experimentally that our top ZDDs are smaller than ZDDs on real data.
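The two mechanisms mentioned above, sharing of identical sub-DAGs and the zero-suppression rule, can be seen in a tiny hash-consed ZDD; this is a generic textbook-style sketch, not the paper's top ZDD.

# Tiny ZDD sketch illustrating node sharing (hash-consing) and the
# zero-suppression rule; not the paper's top ZDD.
ZERO, ONE = "0", "1"          # terminals: empty family and {∅}
_unique = {}                  # (var, lo, hi) -> node id, shares identical subgraphs
_nodes = {}                   # node id -> (var, lo, hi)

def get_node(var, lo, hi):
    if hi == ZERO:            # zero-suppression: skip variables that never occur
        return lo
    key = (var, lo, hi)
    if key not in _unique:
        nid = f"n{len(_unique)}"
        _unique[key] = nid
        _nodes[nid] = key
    return _unique[key]

def count_sets(node):
    """Number of sets in the family rooted at node."""
    if node == ZERO:
        return 0
    if node == ONE:
        return 1
    _, lo, hi = _nodes[node]
    return count_sets(lo) + count_sets(hi)

if __name__ == "__main__":
    # Family {{1}, {1,2}} over the variable order 1 < 2:
    n2 = get_node(2, ONE, ONE)      # after taking 1, element 2 is optional
    root = get_node(1, ZERO, n2)    # every set in the family contains 1
    print(count_sets(root))          # 2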

Cite as

Kotaro Matsuda, Shuhei Denzumi, and Kunihiko Sadakane. Storing Set Families More Compactly with Top ZDDs. In 18th International Symposium on Experimental Algorithms (SEA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 160, pp. 6:1-6:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{matsuda_et_al:LIPIcs.SEA.2020.6,
  author =	{Matsuda, Kotaro and Denzumi, Shuhei and Sadakane, Kunihiko},
  title =	{{Storing Set Families More Compactly with Top ZDDs}},
  booktitle =	{18th International Symposium on Experimental Algorithms (SEA 2020)},
  pages =	{6:1--6:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-148-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{160},
  editor =	{Faro, Simone and Cantone, Domenico},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2020.6},
  URN =		{urn:nbn:de:0030-drops-120809},
  doi =		{10.4230/LIPIcs.SEA.2020.6},
  annote =	{Keywords: top tree, Zero-suppressed Decision Diagram, space-efficient data structure}
}
Document
Compressed Orthogonal Search on Suffix Arrays with Applications to Range LCP

Authors: Kotaro Matsuda, Kunihiko Sadakane, Tatiana Starikovskaya, and Masakazu Tateshita

Published in: LIPIcs, Volume 161, 31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020)


Abstract
We propose a space-efficient data structure for orthogonal range search on suffix arrays. For the general two-dimensional orthogonal range search problem on a set of n points, there exists an n log n (1 + o(1))-bit data structure supporting O(log n)-time counting queries [Mäkinen, Navarro 2007], and this space matches the information-theoretic lower bound. However, if we focus on a point set representing a suffix array, there is a chance to obtain a more space-efficient data structure. We show that this is indeed possible. Namely, we propose a data structure for orthogonal range search on suffix arrays which uses O((1/ε) n (H₀ + 1)) bits, where H₀ is the order-0 entropy of the string, and answers a counting query in O(n^ε) time for any constant ε > 0. As an application, we give an O((1/ε) n (H₀ + 1))-bit data structure for the range LCP problem.
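For context, the point set in question consists of the pairs (i, SA[i]) for a suffix array SA; the sketch below builds a naive suffix array and answers a two-dimensional range counting query by scanning, with no compression and no entropy bound.

# Naive illustration of orthogonal range counting on the point set
# {(i, SA[i])} induced by a suffix array; intuition only, no compression.

def suffix_array(text):
    return sorted(range(len(text)), key=lambda i: text[i:])

def range_count(points, x1, x2, y1, y2):
    """Count points (x, y) with x1 <= x <= x2 and y1 <= y <= y2."""
    return sum(1 for x, y in points if x1 <= x <= x2 and y1 <= y <= y2)

if __name__ == "__main__":
    sa = suffix_array("banana")
    points = list(enumerate(sa))            # (rank i, text position SA[i])
    print(sa)                                # [5, 3, 1, 0, 4, 2]
    print(range_count(points, 0, 2, 1, 5))   # suffixes of rank 0..2 starting in [1, 5]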

Cite as

Kotaro Matsuda, Kunihiko Sadakane, Tatiana Starikovskaya, and Masakazu Tateshita. Compressed Orthogonal Search on Suffix Arrays with Applications to Range LCP. In 31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 161, pp. 23:1-23:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{matsuda_et_al:LIPIcs.CPM.2020.23,
  author =	{Matsuda, Kotaro and Sadakane, Kunihiko and Starikovskaya, Tatiana and Tateshita, Masakazu},
  title =	{{Compressed Orthogonal Search on Suffix Arrays with Applications to Range LCP}},
  booktitle =	{31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020)},
  pages =	{23:1--23:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-149-8},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{161},
  editor =	{G{\o}rtz, Inge Li and Weimann, Oren},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2020.23},
  URN =		{urn:nbn:de:0030-drops-121482},
  doi =		{10.4230/LIPIcs.CPM.2020.23},
  annote =	{Keywords: Orthogonal Range Search, Succinct Data Structure, Suffix Array}
}
Document
Indexing Graph Search Trees and Applications

Authors: Sankardeep Chakraborty and Kunihiko Sadakane

Published in: LIPIcs, Volume 138, 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019)


Abstract
We consider the problem of compactly representing the Depth First Search (DFS) tree of a given undirected or directed graph with n vertices and m edges, while supporting various DFS-related queries efficiently in the RAM model with logarithmic word size. We study this problem in two well-known models: the indexing model and the encoding model. While most of these queries can be supported easily in constant time using O(n lg n) bits of extra space, our goal here is, more specifically, to beat this trivial O(n lg n)-bit space bound, yet not compromise too much on the running time of these queries. In the indexing model, the space bound of our solution involves the quantity m; hence, we obtain different bounds for sparse and dense graphs respectively. In the encoding model, we first give a space lower bound, followed by an almost optimal data structure with extremely fast query time. Central to our algorithm is a partitioning of the DFS tree into connected subtrees, and a compact way to store these connections. Finally, we also apply these techniques to compactly index the shortest path structure and biconnectivity structures, among others.
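For intuition, a basic DFS-tree query that such indexes support is ancestorship; the sketch below is the trivial baseline the paper aims to beat, storing explicit pre/post timestamps (Θ(n lg n) bits) and answering ancestor queries by interval containment.

# Trivial O(n lg n)-bit baseline for DFS-tree queries: store pre/post
# timestamps and answer ancestor queries by interval containment.

def dfs_timestamps(adj, root):
    pre, post, clock = {}, {}, [0]
    def dfs(v):
        pre[v] = clock[0]; clock[0] += 1
        for w in adj.get(v, []):
            if w not in pre:
                dfs(w)
        post[v] = clock[0]; clock[0] += 1
    dfs(root)
    return pre, post

def is_ancestor(pre, post, u, v):
    """True iff u is an ancestor of v in the DFS tree (u == v counts)."""
    return pre[u] <= pre[v] and post[v] <= post[u]

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [2], 2: []}
    pre, post = dfs_timestamps(adj, 0)
    print(is_ancestor(pre, post, 0, 2), is_ancestor(pre, post, 2, 1))  # True False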

Cite as

Sankardeep Chakraborty and Kunihiko Sadakane. Indexing Graph Search Trees and Applications. In 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 138, pp. 67:1-67:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chakraborty_et_al:LIPIcs.MFCS.2019.67,
  author =	{Chakraborty, Sankardeep and Sadakane, Kunihiko},
  title =	{{Indexing Graph Search Trees and Applications}},
  booktitle =	{44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019)},
  pages =	{67:1--67:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-117-7},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{138},
  editor =	{Rossmanith, Peter and Heggernes, Pinar and Katoen, Joost-Pieter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2019.67},
  URN =		{urn:nbn:de:0030-drops-110112},
  doi =		{10.4230/LIPIcs.MFCS.2019.67},
  annote =	{Keywords: Depth First Search Tree, Compact Data Structures, Encoding Schemes}
}
Document
An O(1)-Approximation Algorithm for Dynamic Weighted Vertex Cover with Soft Capacity

Authors: Hao-Ting Wei, Wing-Kai Hon, Paul Horn, Chung-Shou Liao, and Kunihiko Sadakane

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
This study considers the soft capacitated vertex cover problem in a dynamic setting. This problem generalizes the dynamic model of the vertex cover problem, which has been intensively studied in recent years. Given a dynamically changing vertex-weighted graph G = (V,E), which allows edge insertions and edge deletions, the goal is to design a data structure that maintains an approximate minimum vertex cover while satisfying the capacity constraint of each vertex. That is, when picking a copy of a vertex v in the cover, the number of v's incident edges covered by that copy is at most a given capacity of v. We extend Bhattacharya et al.'s work [SODA'15 and ICALP'15] to obtain a deterministic primal-dual algorithm for maintaining a constant-factor approximate minimum capacitated vertex cover with O(log n / ε) amortized update time, where n is the number of vertices in the graph. The algorithm can be extended to (1) a more general model in which each edge is associated with a non-uniform and unsplittable demand, and (2) the more general capacitated set cover problem.
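The soft-capacity constraint above can be made concrete: if every edge is assigned to one endpoint that covers it, vertex v must be opened in ⌈load(v)/cap(v)⌉ copies, and the objective is the weighted number of copies. The hypothetical sketch below merely evaluates that cost for a given assignment; it is not the paper's dynamic primal-dual algorithm.

# Evaluate the objective of a soft-capacitated vertex cover: given an
# assignment of every edge to one of its endpoints, vertex v is opened in
# ceil(load(v) / cap(v)) copies, each copy covering at most cap(v) edges.
from math import ceil
from collections import Counter

def cover_cost(assignment, weight, cap):
    """assignment: dict edge -> covering endpoint; returns the weighted cost."""
    load = Counter(assignment.values())
    return sum(weight[v] * ceil(load[v] / cap[v]) for v in load)

if __name__ == "__main__":
    edges = [("a", "b"), ("a", "c"), ("a", "d")]
    assignment = {e: "a" for e in edges}      # cover all three edges by vertex a
    print(cover_cost(assignment, weight={"a": 5}, cap={"a": 2}))   # 5 * ceil(3/2) = 10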

Cite as

Hao-Ting Wei, Wing-Kai Hon, Paul Horn, Chung-Shou Liao, and Kunihiko Sadakane. An O(1)-Approximation Algorithm for Dynamic Weighted Vertex Cover with Soft Capacity. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 27:1-27:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{wei_et_al:LIPIcs.APPROX-RANDOM.2018.27,
  author =	{Wei, Hao-Ting and Hon, Wing-Kai and Horn, Paul and Liao, Chung-Shou and Sadakane, Kunihiko},
  title =	{{An O(1)-Approximation Algorithm for Dynamic Weighted Vertex Cover with Soft Capacity}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{27:1--27:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.27},
  URN =		{urn:nbn:de:0030-drops-94312},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.27},
  annote =	{Keywords: approximation algorithm, dynamic algorithm, primal-dual, vertex cover}
}
Document
Compression with the tudocomp Framework

Authors: Patrick Dinklage, Johannes Fischer, Dominik Köppl, Marvin Löbel, and Kunihiko Sadakane

Published in: LIPIcs, Volume 75, 16th International Symposium on Experimental Algorithms (SEA 2017)


Abstract
We present a framework facilitating the implementation and comparison of text compression algorithms. We evaluate its features by a case study on two novel compression algorithms based on the Lempel-Ziv compression schemes that perform well on highly repetitive texts.
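As a taste of the Lempel-Ziv schemes the case study builds on, here is a minimal LZ77-style factorization with a naive longest-previous-match search; it is a generic sketch, not one of the two tudocomp algorithms.

# Minimal LZ77-style factorization: each factor is either a fresh character
# or (position, length) of the longest match starting earlier in the text.
def lz77(text):
    factors, i = [], 0
    while i < len(text):
        best_len, best_pos = 0, 0
        for j in range(i):                       # naive search over earlier starts
            length = 0
            while i + length < len(text) and text[j + length] == text[i + length]:
                length += 1
            if length > best_len:
                best_len, best_pos = length, j
        if best_len == 0:
            factors.append(text[i]); i += 1
        else:
            factors.append((best_pos, best_len)); i += best_len
    return factors

if __name__ == "__main__":
    print(lz77("abababab"))   # ['a', 'b', (0, 6)] -- repetitive text compresses well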

Cite as

Patrick Dinklage, Johannes Fischer, Dominik Köppl, Marvin Löbel, and Kunihiko Sadakane. Compression with the tudocomp Framework. In 16th International Symposium on Experimental Algorithms (SEA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 75, pp. 13:1-13:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{dinklage_et_al:LIPIcs.SEA.2017.13,
  author =	{Dinklage, Patrick and Fischer, Johannes and K\"{o}ppl, Dominik and L\"{o}bel, Marvin and Sadakane, Kunihiko},
  title =	{{Compression with the tudocomp Framework}},
  booktitle =	{16th International Symposium on Experimental Algorithms (SEA 2017)},
  pages =	{13:1--13:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-036-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{75},
  editor =	{Iliopoulos, Costas S. and Pissis, Solon P. and Puglisi, Simon J. and Raman, Rajeev},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2017.13},
  URN =		{urn:nbn:de:0030-drops-76015},
  doi =		{10.4230/LIPIcs.SEA.2017.13},
  annote =	{Keywords: lossless compression, compression framework, compression library, algorithm engineering, application of string algorithms}
}
Document
Space-Efficient Dictionaries for Parameterized and Order-Preserving Pattern Matching

Authors: Arnab Ganguly, Wing-Kai Hon, Kunihiko Sadakane, Rahul Shah, Sharma V. Thankachan, and Yilin Yang

Published in: LIPIcs, Volume 54, 27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016)


Abstract
Let S and S' be two strings of the same length. We consider the following two variants of string matching.
* Parameterized Matching: The characters of S and S' are partitioned into static characters and parameterized characters. The strings are a parameterized match iff the static characters match exactly and there exists a one-to-one function which renames the parameterized characters in S to those in S'.
* Order-Preserving Matching: The strings are an order-preserving match iff for any two integers i, j in [1, |S|], S[i] <= S[j] iff S'[i] <= S'[j].
Let P be a collection of d patterns {P_1, P_2, ..., P_d} of total length n characters, which are chosen from an alphabet Sigma. Given a text T, also over Sigma, we consider the dictionary indexing problem under the above definitions of string matching. Specifically, the task is to index P, such that we can report all positions j where at least one of the patterns P_i in P is a parameterized match (resp. order-preserving match) with the same-length substring of T starting at j. Previous best-known indexes occupy O(n * log(n)) bits and can report all occ positions in O(|T| * log(|Sigma|) + occ) time. We present space-efficient indexes that occupy O(n * log(|Sigma|+d) * log(n)) bits and report all occ positions in O(|T| * (log(|Sigma|) + log_{|Sigma|}(n)) + occ) time for parameterized matching and in O(|T| * log(n) + occ) time for order-preserving matching.
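Both matching notions defined above are straightforward to check for a single pair of equal-length strings; the sketch below implements exactly those definitions (the paper's contribution is the space-efficient dictionary index, not these checks).

# Direct checks of the two matching notions for two equal-length strings;
# static characters are given as a set.
def parameterized_match(s, t, static):
    fwd, bwd = {}, {}
    for a, b in zip(s, t):
        if (a in static) != (b in static):
            return False
        if a in static:
            if a != b:
                return False
        else:                      # parameterized characters: one-to-one renaming
            if fwd.setdefault(a, b) != b or bwd.setdefault(b, a) != a:
                return False
    return True

def order_preserving_match(s, t):
    n = len(s)
    return all((s[i] <= s[j]) == (t[i] <= t[j]) for i in range(n) for j in range(n))

if __name__ == "__main__":
    print(parameterized_match("axay", "bxby", static={"x", "y"}))  # True: a -> b
    print(order_preserving_match([1, 5, 3], [10, 50, 30]))         # True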

Cite as

Arnab Ganguly, Wing-Kai Hon, Kunihiko Sadakane, Rahul Shah, Sharma V. Thankachan, and Yilin Yang. Space-Efficient Dictionaries for Parameterized and Order-Preserving Pattern Matching. In 27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 54, pp. 2:1-2:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{ganguly_et_al:LIPIcs.CPM.2016.2,
  author =	{Ganguly, Arnab and Hon, Wing-Kai and Sadakane, Kunihiko and Shah, Rahul and Thankachan, Sharma V. and Yang, Yilin},
  title =	{{Space-Efficient Dictionaries for Parameterized and Order-Preserving Pattern Matching}},
  booktitle =	{27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016)},
  pages =	{2:1--2:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-012-5},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{54},
  editor =	{Grossi, Roberto and Lewenstein, Moshe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2016.2},
  URN =		{urn:nbn:de:0030-drops-60736},
  doi =		{10.4230/LIPIcs.CPM.2016.2},
  annote =	{Keywords: Parameterized Matching, Order-preserving Matching, Dictionary Indexing, Aho-Corasick Automaton, Sparsification}
}
Document
Space-Time Trade-offs for Stack-Based Algorithms

Authors: Luis Barba, Matias Korman, Stefan Langerman, Rodrigo I. Silveira, and Kunihiko Sadakane

Published in: LIPIcs, Volume 20, 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)


Abstract
In memory-constrained algorithms we have read-only access to the input, and the number of additional variables is limited. In this paper we introduce the compressed stack technique, a method that allows us to transform algorithms whose space bottleneck is a stack into memory-constrained algorithms. Given an algorithm A that runs in O(n) time using a stack of length Θ(n), we can modify it so that it runs in O(n²/2^s) time using a workspace of O(s) variables (for any s ∈ o(log n)), or in O(n log n / log p) time using O(p log n / log p) variables (for any 2 <= p <= n). We also show how the technique can be applied to solve various geometric problems, namely computing the convex hull of a simple polygon, a triangulation of a monotone polygon, the shortest path between two points inside a monotone polygon, a 1-dimensional pyramid approximation of a 1-dimensional vector, and the visibility profile of a point inside a simple polygon. Our approach exceeds or matches the best-known results for these problems in constant-workspace models (when they exist), and gives a trade-off between the size of the workspace and the running time. To the best of our knowledge, this is the first general framework for obtaining memory-constrained algorithms.
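For a feel of what "space bottleneck is a stack" means, below is a related stack-based computation (the upper hull of points sorted by x-coordinate, in the style of Andrew's monotone chain, rather than the simple-polygon setting of the paper): the explicit stack may hold Θ(n) points, which is exactly the kind of storage the compressed stack technique trades for running time. This is the plain algorithm, not the compressed version.

# Standard stack-based upper-hull construction for points sorted by x:
# the explicit stack may hold Theta(n) points, the space bottleneck that
# the compressed stack technique targets.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def upper_hull(points):
    stack = []
    for p in points:                       # points assumed sorted by x
        while len(stack) >= 2 and cross(stack[-2], stack[-1], p) >= 0:
            stack.pop()                    # pop points that turn left / are collinear
        stack.append(p)
    return stack

if __name__ == "__main__":
    pts = [(0, 0), (1, 2), (2, 1), (3, 3), (4, 0)]
    print(upper_hull(pts))   # [(0, 0), (1, 2), (3, 3), (4, 0)]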

Cite as

Luis Barba, Matias Korman, Stefan Langerman, Rodrigo I. Silveira, and Kunihiko Sadakane. Space-Time Trade-offs for Stack-Based Algorithms. In 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 20, pp. 281-292, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{barba_et_al:LIPIcs.STACS.2013.281,
  author =	{Barba, Luis and Korman, Matias and Langerman, Stefan and Silveira, Rodrigo I. and Sadakane, Kunihiko},
  title =	{{Space-Time Trade-offs for Stack-Based Algorithms}},
  booktitle =	{30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)},
  pages =	{281--292},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-50-7},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{20},
  editor =	{Portier, Natacha and Wilke, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2013.281},
  URN =		{urn:nbn:de:0030-drops-39411},
  doi =		{10.4230/LIPIcs.STACS.2013.281},
  annote =	{Keywords: space-time trade-off, constant workspace, stack algorithms}
}