Search Results

Documents authored by Kurita, Kazuhiro


Document
Polynomial-Delay Enumeration of Large Maximal Common Independent Sets in Two Matroids

Authors: Yasuaki Kobayashi, Kazuhiro Kurita, and Kunihiro Wasa

Published in: LIPIcs, Volume 272, 48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023)


Abstract
Finding a maximum cardinality common independent set in two matroids (also known as Matroid Intersection) is a classical combinatorial optimization problem, which generalizes several well-known problems, such as finding a maximum bipartite matching, a maximum colorful forest, and an arborescence in directed graphs. Enumerating all maximal common independent sets in two (or more) matroids is a classical enumeration problem. In this paper, we address an "intersection" of these problems: Given two matroids and a threshold τ, the goal is to enumerate all maximal common independent sets in the matroids with cardinality at least τ. We show that this problem can be solved in polynomial delay and polynomial space. We also discuss how to enumerate all maximal common independent sets of two matroids in non-increasing order of their cardinalities.
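As a small, hedged illustration of the objects in this abstract (not the paper's polynomial-delay enumeration algorithm), the following Python sketch greedily grows a single maximal common independent set, assuming the two matroids are given by hypothetical independence oracles indep1 and indep2.

from typing import Callable, Iterable, Set, TypeVar

E = TypeVar("E")

def maximal_common_independent_set(
    ground_set: Iterable[E],
    indep1: Callable[[Set[E]], bool],
    indep2: Callable[[Set[E]], bool],
) -> Set[E]:
    # Greedily keep an element whenever the grown set stays independent in both
    # matroids.  Independence is downward closed, so every skipped element also
    # fails against the final set, which makes the result maximal (though not
    # necessarily maximum, and not ordered by cardinality as in the paper).
    current: Set[E] = set()
    for e in ground_set:
        candidate = current | {e}
        if indep1(candidate) and indep2(candidate):
            current = candidate
    return current

# Toy usage: two partition matroids on {0, ..., 5}, where a set is independent
# if it contains at most one element per residue class modulo 2 (resp. 3).
def partition_oracle(modulus: int) -> Callable[[Set[int]], bool]:
    return lambda s: len({x % modulus for x in s}) == len(s)

print(sorted(maximal_common_independent_set(range(6),
                                            partition_oracle(2),
                                            partition_oracle(3))))  # [0, 1]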

Cite as

Yasuaki Kobayashi, Kazuhiro Kurita, and Kunihiro Wasa. Polynomial-Delay Enumeration of Large Maximal Common Independent Sets in Two Matroids. In 48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 272, pp. 58:1-58:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kobayashi_et_al:LIPIcs.MFCS.2023.58,
  author =	{Kobayashi, Yasuaki and Kurita, Kazuhiro and Wasa, Kunihiro},
  title =	{{Polynomial-Delay Enumeration of Large Maximal Common Independent Sets in Two Matroids}},
  booktitle =	{48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023)},
  pages =	{58:1--58:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-292-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{272},
  editor =	{Leroux, J\'{e}r\^{o}me and Lombardy, Sylvain and Peleg, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2023.58},
  URN =		{urn:nbn:de:0030-drops-185921},
  doi =		{10.4230/LIPIcs.MFCS.2023.58},
  annote =	{Keywords: Polynomial-delay enumeration, Ranked Enumeration, Matroid intersection, Reverse search}
}
Document
Optimal LZ-End Parsing Is Hard

Authors: Hideo Bannai, Mitsuru Funakoshi, Kazuhiro Kurita, Yuto Nakashima, Kazuhisa Seto, and Takeaki Uno

Published in: LIPIcs, Volume 259, 34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023)


Abstract
LZ-End is a variant of the well-known Lempel-Ziv parsing family such that each phrase of the parsing has a previous occurrence, with the additional constraint that the previous occurrence must end at the end of a previous phrase. LZ-End was initially proposed as a greedy parsing, where each phrase is determined greedily from left to right, as the longest factor that satisfies the above constraint [Kreft & Navarro, 2010]. In this work, we consider an optimal LZ-End parsing that has the minimum number of phrases among such parsings. We show that a decision version of computing the optimal LZ-End parsing is NP-complete by a reduction from the vertex cover problem. Moreover, we give a MAX-SAT formulation for the optimal LZ-End parsing, adapting an approach for computing various NP-hard repetitiveness measures recently presented by Bannai et al. [2022]. We also consider the approximation ratio of the size of the greedy LZ-End parsing to the size of the optimal LZ-End parsing, and give a lower bound on this ratio that asymptotically approaches 2.
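To illustrate only the greedy parsing described above (the hardness result concerns the optimal parsing), here is a brute-force Python sketch of a Kreft-Navarro-style greedy LZ-End parser, where each phrase is a copied part plus one explicit character. Function and variable names are hypothetical, and the quadratic-or-worse scan is purely for exposition.

def greedy_lz_end(text: str):
    # Each phrase is a copied part whose source occurrence ends at the end of a
    # previous phrase, followed by one explicit character; the copied part is
    # chosen greedily as long as possible.  Returns (copy_length, explicit_char)
    # pairs.  Brute force: every candidate length is checked against every
    # previous phrase end.
    n = len(text)
    phrase_ends = set()     # 0-based positions where previous phrases end
    phrases = []
    i = 0
    while i < n:
        best = 0
        for length in range(1, n - i):   # leave room for the explicit character
            target = text[i:i + length]
            if any(e - length + 1 >= 0 and text[e - length + 1:e + 1] == target
                   for e in phrase_ends):
                best = length
        phrases.append((best, text[i + best]))
        phrase_ends.add(i + best)
        i += best + 1
    return phrases

print(greedy_lz_end("ababababa$"))
# [(0, 'a'), (0, 'b'), (2, 'a'), (4, '$')] -- the phrases the greedy rule gives;
# the optimal parsing studied in the paper may use fewer phrases on other inputs.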

Cite as

Hideo Bannai, Mitsuru Funakoshi, Kazuhiro Kurita, Yuto Nakashima, Kazuhisa Seto, and Takeaki Uno. Optimal LZ-End Parsing Is Hard. In 34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 259, pp. 3:1-3:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bannai_et_al:LIPIcs.CPM.2023.3,
  author =	{Bannai, Hideo and Funakoshi, Mitsuru and Kurita, Kazuhiro and Nakashima, Yuto and Seto, Kazuhisa and Uno, Takeaki},
  title =	{{Optimal LZ-End Parsing Is Hard}},
  booktitle =	{34th Annual Symposium on Combinatorial Pattern Matching (CPM 2023)},
  pages =	{3:1--3:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-276-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{259},
  editor =	{Bulteau, Laurent and Lipt\'{a}k, Zsuzsanna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2023.3},
  URN =		{urn:nbn:de:0030-drops-179571},
  doi =		{10.4230/LIPIcs.CPM.2023.3},
  annote =	{Keywords: Data Compression, LZ-End, Repetitiveness measures}
}
Document
Efficient Enumerations for Minimal Multicuts and Multiway Cuts

Authors: Kazuhiro Kurita and Yasuaki Kobayashi

Published in: LIPIcs, Volume 170, 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020)


Abstract
Let G = (V, E) be an undirected graph and let B ⊆ V × V be a set of terminal pairs. A node/edge multicut is a subset of vertices/edges of G whose removal destroys all the paths between every terminal pair in B. The problem of computing a minimum node/edge multicut is NP-hard and extensively studied from several viewpoints. In this paper, we study the problem of enumerating all minimal node multicuts. We give an incremental polynomial delay enumeration algorithm for minimal node multicuts, which extends an enumeration algorithm due to Khachiyan et al. (Algorithmica, 2008) for minimal edge multicuts. Important special cases of node/edge multicuts are node/edge multiway cuts, where the set of terminal pairs contains every pair of vertices in some subset T ⊆ V, that is, B = T × T. We improve the running time bound for this special case: We devise a polynomial delay and exponential space enumeration algorithm for minimal node multiway cuts and a polynomial delay and space enumeration algorithm for minimal edge multiway cuts.
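As a hedged illustration of the definitions only (not the enumeration algorithms of the paper), the following Python sketch checks whether a given vertex set is a minimal node multicut using plain BFS; the function names are hypothetical.

from collections import deque

def separated(adj, removed, s, t):
    # BFS from s in the graph with the vertices in `removed` deleted; True iff
    # no path to t survives (removing a terminal itself trivially separates).
    if s in removed or t in removed:
        return True
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return False
        for v in adj[u]:
            if v not in removed and v not in seen:
                seen.add(v)
                queue.append(v)
    return True

def is_node_multicut(adj, terminal_pairs, cut):
    return all(separated(adj, cut, s, t) for s, t in terminal_pairs)

def is_minimal_node_multicut(adj, terminal_pairs, cut):
    # Being a multicut is monotone under taking supersets, so minimality only
    # needs to be checked against the sets obtained by dropping one vertex.
    if not is_node_multicut(adj, terminal_pairs, cut):
        return False
    return all(not is_node_multicut(adj, terminal_pairs, cut - {v}) for v in cut)

# Path 1 - 2 - 3 - 4 with the single terminal pair (1, 4):
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(is_minimal_node_multicut(adj, [(1, 4)], {2}))     # True
print(is_minimal_node_multicut(adj, [(1, 4)], {2, 3}))  # False (not minimal)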

Cite as

Kazuhiro Kurita and Yasuaki Kobayashi. Efficient Enumerations for Minimal Multicuts and Multiway Cuts. In 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 170, pp. 60:1-60:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kurita_et_al:LIPIcs.MFCS.2020.60,
  author =	{Kurita, Kazuhiro and Kobayashi, Yasuaki},
  title =	{{Efficient Enumerations for Minimal Multicuts and Multiway Cuts}},
  booktitle =	{45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020)},
  pages =	{60:1--60:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-159-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{170},
  editor =	{Esparza, Javier and Kr\'{a}l', Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2020.60},
  URN =		{urn:nbn:de:0030-drops-127272},
  doi =		{10.4230/LIPIcs.MFCS.2020.60},
  annote =	{Keywords: Multicuts, Multiway cuts, Enumeration algorithms}
}
Document
Finding the Anticover of a String

Authors: Mai Alzamel, Alessio Conte, Shuhei Denzumi, Roberto Grossi, Costas S. Iliopoulos, Kazuhiro Kurita, and Kunihiro Wasa

Published in: LIPIcs, Volume 161, 31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020)


Abstract
A k-anticover of a string x is a set of pairwise distinct factors of x of equal length k, such that every symbol of x is contained in an occurrence of at least one of those factors. The existence of a k-anticover can be seen as a notion of non-redundancy, which has applications in computational biology, where anticovers are associated with various non-regulatory mechanisms. In this paper we address the complexity of the problem of finding a k-anticover of a string x if it exists, showing that the decision problem is NP-complete on general strings for k ≥ 3. We also show that the problem admits a polynomial-time solution for k = 2. For unbounded k, we provide an exact exponential algorithm to find a k-anticover of a string of length n (or determine that none exists), which runs in O*(min{3^{(n-k)/3}, (k(k+1)/2)^{n/(k+1)}}) time using polynomial space.
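The following Python sketch illustrates the definition as stated in the abstract (where any occurrence of a chosen factor contributes to coverage): a checker for the k-anticover property plus a brute-force finder intended only for very short strings. It is not the exponential algorithm of the paper, and the function names are hypothetical.

from itertools import combinations

def is_k_anticover(x, k, factors):
    # Pairwise distinct length-k factors of x whose occurrences together cover
    # every position of x.
    factors = list(factors)
    if len(set(factors)) != len(factors) or any(len(f) != k for f in factors):
        return False
    covered = [False] * len(x)
    for f in factors:
        start = x.find(f)
        if start == -1:
            return False              # not a factor of x at all
        while start != -1:
            for pos in range(start, start + k):
                covered[pos] = True
            start = x.find(f, start + 1)
    return all(covered)

def find_k_anticover(x, k):
    # Exhaust subsets of the distinct length-k factors, smallest first.
    distinct = sorted({x[i:i + k] for i in range(len(x) - k + 1)})
    for size in range(1, len(distinct) + 1):
        for subset in combinations(distinct, size):
            if is_k_anticover(x, k, subset):
                return subset
    return None

print(find_k_anticover("abaababa", 2))   # ('ab', 'ba')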

Cite as

Mai Alzamel, Alessio Conte, Shuhei Denzumi, Roberto Grossi, Costas S. Iliopoulos, Kazuhiro Kurita, and Kunihiro Wasa. Finding the Anticover of a String. In 31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 161, pp. 2:1-2:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{alzamel_et_al:LIPIcs.CPM.2020.2,
  author =	{Alzamel, Mai and Conte, Alessio and Denzumi, Shuhei and Grossi, Roberto and Iliopoulos, Costas S. and Kurita, Kazuhiro and Wasa, Kunihiro},
  title =	{{Finding the Anticover of a String}},
  booktitle =	{31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020)},
  pages =	{2:1--2:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-149-8},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{161},
  editor =	{G{\o}rtz, Inge Li and Weimann, Oren},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2020.2},
  URN =		{urn:nbn:de:0030-drops-121270},
  doi =		{10.4230/LIPIcs.CPM.2020.2},
  annote =	{Keywords: Anticover, String algorithms, Stringology, NP-complete}
}
Document
Efficient Enumeration of Dominating Sets for Sparse Graphs

Authors: Kazuhiro Kurita, Kunihiro Wasa, Hiroki Arimura, and Takeaki Uno

Published in: LIPIcs, Volume 123, 29th International Symposium on Algorithms and Computation (ISAAC 2018)


Abstract
A dominating set D of a graph G is a set of vertices such that every vertex of G is in D or has a neighbor in D. Enumeration of minimal dominating sets in a graph is one of the central problems in enumeration, since it corresponds to the enumeration of minimal hypergraph transversals. However, the enumeration of dominating sets that includes non-minimal ones has received much less attention. In this paper, we address enumeration problems for dominating sets in sparse graphs, namely degenerate graphs and graphs with large girth, and we propose two algorithms for solving these problems. The first algorithm enumerates all the dominating sets of a k-degenerate graph in O(k) time per solution using O(n + m) space, where n and m are the numbers of vertices and edges of the input graph, respectively. That is, the algorithm is optimal for graphs of constant degeneracy, such as trees, planar graphs, and H-minor-free graphs for some fixed H. The second algorithm enumerates all the dominating sets in constant time per solution for input graphs with girth at least nine.
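As a hedged illustration of the objects being enumerated (not the O(k)-per-solution or constant-time-per-solution algorithms of the paper), here is a naive Python sketch that enumerates all dominating sets of a tiny graph, including non-minimal ones, by exhausting subsets; the names are hypothetical.

from itertools import combinations

def is_dominating_set(adj, d):
    # Every vertex is in d or has a neighbor in d.
    return all(v in d or any(u in d for u in adj[v]) for v in adj)

def enumerate_dominating_sets(adj):
    # Exponential subset enumeration, for illustration on tiny graphs only.
    vertices = list(adj)
    for size in range(len(vertices) + 1):
        for subset in combinations(vertices, size):
            if is_dominating_set(adj, set(subset)):
                yield set(subset)

# 4-cycle 1 - 2 - 3 - 4 - 1: every pair of vertices dominates the cycle.
adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
for ds in enumerate_dominating_sets(adj):
    print(sorted(ds))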

Cite as

Kazuhiro Kurita, Kunihiro Wasa, Hiroki Arimura, and Takeaki Uno. Efficient Enumeration of Dominating Sets for Sparse Graphs. In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 123, pp. 8:1-8:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{kurita_et_al:LIPIcs.ISAAC.2018.8,
  author =	{Kurita, Kazuhiro and Wasa, Kunihiro and Arimura, Hiroki and Uno, Takeaki},
  title =	{{Efficient Enumeration of Dominating Sets for Sparse Graphs}},
  booktitle =	{29th International Symposium on Algorithms and Computation (ISAAC 2018)},
  pages =	{8:1--8:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-094-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{123},
  editor =	{Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2018.8},
  URN =		{urn:nbn:de:0030-drops-99560},
  doi =		{10.4230/LIPIcs.ISAAC.2018.8},
  annote =	{Keywords: Enumeration algorithm, polynomial amortized time, dominating set, girth, degeneracy}
}