Search Results

Documents authored by Ohsaka, Naoto


Document
Track A: Algorithms, Complexity and Games
Optimal PSPACE-Hardness of Approximating Set Cover Reconfiguration

Authors: Shuichi Hirahara and Naoto Ohsaka

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In the Minmax Set Cover Reconfiguration problem, given a set system ℱ over a universe 𝒰 and its two covers 𝒞^start and 𝒞^goal of size k, we wish to transform 𝒞^start into 𝒞^goal by repeatedly adding or removing a single set of ℱ while covering the universe 𝒰 in every intermediate state. The objective is to minimize the maximum size of any intermediate cover during transformation. We prove that Minmax Set Cover Reconfiguration and Minmax Dominating Set Reconfiguration are PSPACE-hard to approximate within a factor of 2 - 1/polyloglog N, where N is the size of the universe and the number of vertices in a graph, respectively, improving upon Ohsaka (SODA 2024) [Ohsaka, 2024] and Karthik C. S. and Manurangsi (2023) [Karthik C. S. and Manurangsi, 2023]. This is the first result that exhibits a sharp threshold for the approximation factor of any reconfiguration problem, because both problems admit a 2-factor approximation algorithm due to Ito, Demaine, Harvey, Papadimitriou, Sideri, Uehara, and Uno (Theor. Comput. Sci., 2011) [Takehiro Ito et al., 2011]. Our proof is based on a reconfiguration analogue of the FGLSS reduction [Feige et al., 1996] from Probabilistically Checkable Reconfiguration Proofs of Hirahara and Ohsaka (STOC 2024) [Hirahara and Ohsaka, 2024]. We also prove that for any constant ε ∈ (0,1), Minmax Hypergraph Vertex Cover Reconfiguration on poly(ε^-1)-uniform hypergraphs is PSPACE-hard to approximate within a factor of 2 - ε.
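To make the minmax objective concrete, here is a minimal Python sketch (not from the paper) that verifies a candidate reconfiguration sequence of covers: it checks the single-set add/remove rule and the covering constraint, and reports the largest intermediate cover, i.e., the quantity to be minimized. The helper names and the toy instance are hypothetical.

from itertools import chain

def is_cover(universe, cover):
    # a cover is a set of frozensets; check that it hits every element of the universe
    return set(universe) <= set(chain.from_iterable(cover))

def max_intermediate_size(universe, sequence):
    # Verify a reconfiguration sequence for Minmax Set Cover Reconfiguration:
    # consecutive covers must differ by adding or removing exactly one set,
    # every cover must still cover the universe, and the value of the sequence
    # is the largest cover size it ever uses (the quantity to be minimized).
    for prev, curr in zip(sequence, sequence[1:]):
        if len(prev ^ curr) != 1:
            raise ValueError("consecutive covers must differ by exactly one set")
    if not all(is_cover(universe, c) for c in sequence):
        raise ValueError("every intermediate state must cover the universe")
    return max(len(c) for c in sequence)

# Toy instance (illustrative only): universe {1, 2, 3, 4} with four sets.
A, B = frozenset({1, 2}), frozenset({3, 4})
C, D = frozenset({1, 3}), frozenset({2, 4})
sequence = [{A, B}, {A, B, C}, {A, B, C, D}, {B, C, D}, {C, D}]
print(max_intermediate_size({1, 2, 3, 4}, sequence))
# -> 4: this sequence simply adds the goal cover and then removes the start cover,
#       the strategy behind the 2-factor approximation mentioned above.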

Cite as

Shuichi Hirahara and Naoto Ohsaka. Optimal PSPACE-Hardness of Approximating Set Cover Reconfiguration. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 85:1-85:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.ICALP.2024.85,
  author =	{Hirahara, Shuichi and Ohsaka, Naoto},
  title =	{{Optimal PSPACE-Hardness of Approximating Set Cover Reconfiguration}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{85:1--85:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.85},
  URN =		{urn:nbn:de:0030-drops-202283},
  doi =		{10.4230/LIPIcs.ICALP.2024.85},
  annote =	{Keywords: reconfiguration problems, hardness of approximation, probabilistic proof systems, FGLSS reduction}
}
Document
Track A: Algorithms, Complexity and Games
Alphabet Reduction for Reconfiguration Problems

Authors: Naoto Ohsaka

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We present a reconfiguration analogue of alphabet reduction à la Dinur (J. ACM, 2007) and its applications. Given a binary constraint graph G and its two satisfying assignments ψ^ini and ψ^tar, the Maxmin 2-CSP Reconfiguration problem asks to transform ψ^ini into ψ^tar by repeatedly changing the value of a single vertex so that the minimum fraction of satisfied edges is maximized. We demonstrate a polynomial-time reduction from Maxmin 2-CSP Reconfiguration with arbitrarily large alphabet size W ∈ ℕ to itself with universal alphabet size W₀ ∈ ℕ such that (1) perfect completeness is preserved, and (2) if any reconfiguration for the former violates an ε-fraction of edges, then an Ω(ε)-fraction of edges must be unsatisfied during any reconfiguration for the latter. The crux of the construction is the reconfigurability of Hadamard codes, which makes it possible to reconfigure between a pair of codewords while avoiding getting too close to the other codewords. Combining this alphabet reduction with the gap amplification of Ohsaka (SODA 2024), we can amplify the 1 vs. 1-ε gap for arbitrarily small ε ∈ (0,1) up to a 1 vs. 1-ε₀ gap for some universal ε₀ ∈ (0,1) without blowing up the alphabet size. In particular, a 1 vs. 1-ε₀ gap version of Maxmin 2-CSP Reconfiguration with alphabet size W₀ is PSPACE-hard, given a probabilistically checkable reconfiguration proof system with any soundness error 1-ε due to Hirahara and Ohsaka (STOC 2024) and Karthik C. S. and Manurangsi (2023). As an immediate corollary, we show that there exists a universal constant ε₀ ∈ (0,1) such that many popular reconfiguration problems are PSPACE-hard to approximate within a factor of 1-ε₀, including those of 3-SAT, Independent Set, Vertex Cover, Clique, Dominating Set, and Set Cover. This cannot be achieved by the gap amplification of Ohsaka alone, which makes the alphabet size gigantic depending on ε^-1.
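As a concrete reading of the objective, the following Python sketch (not from the paper) evaluates one reconfiguration sequence for Maxmin 2-CSP Reconfiguration: it enforces the one-vertex-at-a-time rule and returns the minimum fraction of satisfied edges along the way. The constraint encoding and the triangle toy instance are illustrative assumptions.

def frac_satisfied(constraints, assignment):
    # constraints: dict mapping an edge (u, v) to the set of allowed value pairs
    ok = sum((assignment[u], assignment[v]) in allowed
             for (u, v), allowed in constraints.items())
    return ok / len(constraints)

def reconfiguration_value(constraints, sequence):
    # Value of one reconfiguration sequence for Maxmin 2-CSP Reconfiguration:
    # the minimum fraction of satisfied edges over all intermediate assignments.
    # Consecutive assignments must differ on exactly one vertex.
    for prev, curr in zip(sequence, sequence[1:]):
        if sum(prev[v] != curr[v] for v in prev) != 1:
            raise ValueError("consecutive assignments must differ on one vertex")
    return min(frac_satisfied(constraints, a) for a in sequence)

# Toy instance (illustrative only): equality constraints on a triangle over {0, 1}.
EQ = {(0, 0), (1, 1)}
constraints = {("a", "b"): EQ, ("b", "c"): EQ, ("a", "c"): EQ}
psi_ini = {"a": 0, "b": 0, "c": 0}
psi_tar = {"a": 1, "b": 1, "c": 1}
sequence = [psi_ini, {"a": 1, "b": 0, "c": 0}, {"a": 1, "b": 1, "c": 0}, psi_tar]
print(reconfiguration_value(constraints, sequence))
# -> 1/3: any path from all-zeros to all-ones must pass through a mixed assignment,
#         which violates two of the three equality edges.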

Cite as

Naoto Ohsaka. Alphabet Reduction for Reconfiguration Problems. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 113:1-113:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ohsaka:LIPIcs.ICALP.2024.113,
  author =	{Ohsaka, Naoto},
  title =	{{Alphabet Reduction for Reconfiguration Problems}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{113:1--113:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.113},
  URN =		{urn:nbn:de:0030-drops-202560},
  doi =		{10.4230/LIPIcs.ICALP.2024.113},
  annote =	{Keywords: reconfiguration problems, hardness of approximation, Hadamard codes, alphabet reduction}
}
Document
Gap Preserving Reductions Between Reconfiguration Problems

Authors: Naoto Ohsaka

Published in: LIPIcs, Volume 254, 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)


Abstract
Combinatorial reconfiguration is a growing research field studying problems on the transformability between a pair of solutions of a search problem. For example, in SAT Reconfiguration, for a Boolean formula φ and two satisfying truth assignments σ_𝗌 and σ_𝗍 for φ, we are asked to determine whether there is a sequence of satisfying truth assignments for φ starting from σ_𝗌 and ending with σ_𝗍, each resulting from the previous one by flipping a single variable assignment. We consider the approximability of optimization variants of reconfiguration problems; e.g., Maxmin SAT Reconfiguration asks to maximize the minimum fraction of satisfied clauses of φ during the transformation from σ_𝗌 to σ_𝗍. By solving such optimization variants approximately, we may obtain a reasonable reconfiguration sequence comprising almost-satisfying truth assignments. In this study, we prove a series of gap-preserving reductions to give evidence that a host of reconfiguration problems are PSPACE-hard to approximate, under a plausible assumption. Our starting point is a new working hypothesis called the Reconfiguration Inapproximability Hypothesis (RIH), which asserts that a gap version of Maxmin CSP Reconfiguration is PSPACE-hard. This hypothesis may be thought of as a reconfiguration analogue of the PCP theorem. Our main result is PSPACE-hardness of approximating Maxmin 3-SAT Reconfiguration of bounded occurrence under RIH. The crux of its proof is a gap-preserving reduction from Maxmin Binary CSP Reconfiguration to itself of bounded degree. Because a simple application of the degree reduction technique using expander graphs due to Papadimitriou and Yannakakis (J. Comput. Syst. Sci., 1991) does not preserve perfect completeness, we modify the alphabet as if each vertex could take a pair of values simultaneously. To meet the soundness requirement, we further apply an explicit family of near-Ramanujan graphs and the expander mixing lemma. As an application of the main result, we demonstrate that under RIH, optimization variants of popular reconfiguration problems are PSPACE-hard to approximate, including Nondeterministic Constraint Logic due to Hearn and Demaine (Theor. Comput. Sci., 2005), Independent Set Reconfiguration, Clique Reconfiguration, and Vertex Cover Reconfiguration.
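To illustrate the Maxmin SAT Reconfiguration objective, here is a minimal Python sketch (not from the paper): it walks through a sequence of single-variable flips and reports the minimum fraction of satisfied clauses encountered. The clause encoding and the two-clause toy formula are illustrative only.

def frac_clauses_satisfied(cnf, assignment):
    # A clause is a list of literals; a literal is a pair (variable, wanted_value).
    sat = sum(any(assignment[v] == wanted for v, wanted in clause) for clause in cnf)
    return sat / len(cnf)

def sequence_value(cnf, start, flips):
    # Walk through single-variable flips starting from `start` and return the
    # minimum fraction of satisfied clauses seen along the way: the quantity
    # Maxmin SAT Reconfiguration asks to maximize over all flip sequences.
    assignment = dict(start)
    worst = frac_clauses_satisfied(cnf, assignment)
    for var in flips:
        assignment[var] = not assignment[var]
        worst = min(worst, frac_clauses_satisfied(cnf, assignment))
    return worst

# Toy formula (illustrative only): (x OR y) AND (NOT x OR NOT y).
phi = [[("x", True), ("y", True)], [("x", False), ("y", False)]]
sigma_s = {"x": True, "y": False}
sigma_t = {"x": False, "y": True}
print(sequence_value(phi, sigma_s, flips=["x", "y"]))
# -> 0.5: the intermediate all-false assignment breaks the clause (x OR y),
#         so this instance has no exact reconfiguration, but value 1/2 is achievable.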

Cite as

Naoto Ohsaka. Gap Preserving Reductions Between Reconfiguration Problems. In 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 254, pp. 49:1-49:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{ohsaka:LIPIcs.STACS.2023.49,
  author =	{Ohsaka, Naoto},
  title =	{{Gap Preserving Reductions Between Reconfiguration Problems}},
  booktitle =	{40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)},
  pages =	{49:1--49:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-266-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{254},
  editor =	{Berenbrink, Petra and Bouyer, Patricia and Dawar, Anuj and Kant\'{e}, Mamadou Moustapha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2023.49},
  URN =		{urn:nbn:de:0030-drops-177016},
  doi =		{10.4230/LIPIcs.STACS.2023.49},
  annote =	{Keywords: combinatorial reconfiguration, hardness of approximation, gap reduction}
}
Document
On the Parameterized Intractability of Determinant Maximization

Authors: Naoto Ohsaka

Published in: LIPIcs, Volume 248, 33rd International Symposium on Algorithms and Computation (ISAAC 2022)


Abstract
In the Determinant Maximization problem, given an n × n positive semi-definite matrix A in ℚ^{n × n} and an integer k, we are required to find a k × k principal submatrix of A having the maximum determinant. This problem is known to be NP-hard and was further proven to be W[1]-hard with respect to k by Koutis (2006); i.e., an f(k)⋅n^𝒪(1)-time algorithm is unlikely to exist for any computable function f. However, there is still room to explore its parameterized complexity in restricted cases, in the hope of overcoming the general-case parameterized intractability. In this study, we rule out the fixed-parameter tractability of Determinant Maximization even if an input matrix is extremely sparse or of low rank, or an approximate solution is acceptable. We first prove that Determinant Maximization is NP-hard and W[1]-hard even if an input matrix is an arrowhead matrix; i.e., the underlying graph formed by nonzero entries is a star, implying that structural sparsity is not helpful. By contrast, we show that Determinant Maximization is solvable in polynomial time on tridiagonal matrices. Thereafter, we demonstrate W[1]-hardness with respect to the rank r of an input matrix. Our result is stronger than Koutis' result in the sense that any k × k principal submatrix is singular whenever k > r. We finally give evidence that it is W[1]-hard to approximate Determinant Maximization parameterized by k within a factor of 2^{-c√k} for some universal constant c > 0. Our hardness result is conditional on the Parameterized Inapproximability Hypothesis posed by Lokshtanov, Ramanujan, Saurabh, and Zehavi (2020), which asserts that a gap version of the Binary Constraint Satisfaction Problem is W[1]-hard. To complement this result, we develop an ε-additive approximation algorithm that runs in ε^{-r²}⋅r^𝒪(r³)⋅n^𝒪(1) time, where r is the rank of the input matrix, provided that the diagonal entries are bounded.
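The problem statement is easy to make concrete: the following Python sketch (not from the paper) enumerates all k × k principal submatrices by brute force, which takes roughly n^k time and so illustrates why fixed-parameter tractability in k is the interesting question. The function name and the toy Gram-matrix instance are illustrative only; NumPy is assumed to be available.

from itertools import combinations
import numpy as np

def determinant_maximization(A, k):
    # Brute-force reference for Determinant Maximization: enumerate every k x k
    # principal submatrix of the PSD matrix A and keep the one with the largest
    # determinant.  This takes about C(n, k) * k^3 time, the kind of n^Theta(k)
    # behaviour that the W[1]-hardness results suggest cannot be avoided in general.
    n = A.shape[0]
    best_val, best_idx = float("-inf"), None
    for S in combinations(range(n), k):
        val = float(np.linalg.det(A[np.ix_(S, S)]))
        if val > best_val:
            best_val, best_idx = val, S
    return best_val, best_idx

# Toy instance (illustrative only): the Gram matrix of three vectors in the plane.
V = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
A = V @ V.T                            # positive semi-definite by construction
print(determinant_maximization(A, 2))  # -> determinant 4.0 on indices (0, 2), up to floating point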

Cite as

Naoto Ohsaka. On the Parameterized Intractability of Determinant Maximization. In 33rd International Symposium on Algorithms and Computation (ISAAC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 248, pp. 46:1-46:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{ohsaka:LIPIcs.ISAAC.2022.46,
  author =	{Ohsaka, Naoto},
  title =	{{On the Parameterized Intractability of Determinant Maximization}},
  booktitle =	{33rd International Symposium on Algorithms and Computation (ISAAC 2022)},
  pages =	{46:1--46:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-258-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{248},
  editor =	{Bae, Sang Won and Park, Heejin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2022.46},
  URN =		{urn:nbn:de:0030-drops-173316},
  doi =		{10.4230/LIPIcs.ISAAC.2022.46},
  annote =	{Keywords: Determinant maximization, Parameterized complexity, Approximability}
}
Document
On the Power of Tree-Depth for Fully Polynomial FPT Algorithms

Authors: Yoichi Iwata, Tomoaki Ogasawara, and Naoto Ohsaka

Published in: LIPIcs, Volume 96, 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)


Abstract
There are many classical problems in P whose time complexities have not been improved over the past decades. Recent studies of "Hardness in P" have revealed that, for several such problems, the current fastest algorithm is the best possible under some complexity assumptions. To bypass this difficulty, the concept of "FPT inside P" has been introduced. For a problem with the current best time complexity O(n^c), the goal is to design an algorithm running in k^{O(1)}n^{c'} time for a parameter k and a constant c' < c. In this paper, we investigate the complexity of graph problems in P parameterized by tree-depth, a graph parameter related to tree-width. We show that a simple divide-and-conquer method can solve many graph problems, including Weighted Matching, Negative Cycle Detection, Minimum Weight Cycle, Replacement Paths, and 2-hop Cover, in O(td⋅m) time or O(td⋅(m + n log n)) time, where td is the tree-depth of the input graph. Because any graph of tree-width tw has tree-depth at most (tw+1)log_2 n, our algorithms also run in O(tw⋅m⋅log n) time or O(tw⋅(m + n log n)⋅log n) time. These results match or improve the previous best algorithms parameterized by tree-width. In particular, we resolve an open problem posed by Fomin et al. (SODA 2017) by giving a fully polynomial FPT algorithm for Weighted Matching parameterized by tree-width.
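The divide-and-conquer principle behind these algorithms rests on the recursive structure of tree-depth itself: removing the root of a tree-depth decomposition splits the graph into components of strictly smaller tree-depth. The following Python sketch (not from the paper, and exponential-time rather than the paper's O(td⋅m)-type bounds) merely spells out that recursive definition on a toy graph; the function names are illustrative.

from functools import lru_cache

def tree_depth(vertices, edges):
    # Exact tree-depth via its recursive definition: for a connected graph,
    # td(G) = 1 + min over root choices v of the max tree-depth of the
    # components of G - v; for a disconnected graph, td is the max over its
    # components.  This exponential brute force only illustrates the
    # root-removal recursion that divide-and-conquer over tree-depth exploits.
    adj = {v: set() for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)

    def components(vs):
        remaining, comps = set(vs), []
        while remaining:
            stack, comp = [next(iter(remaining))], set()
            while stack:
                u = stack.pop()
                if u in comp:
                    continue
                comp.add(u)
                stack.extend(w for w in adj[u] if w in remaining)
            remaining -= comp
            comps.append(frozenset(comp))
        return comps

    @lru_cache(maxsize=None)
    def td(vs):
        if not vs:
            return 0
        comps = components(vs)
        if len(comps) > 1:
            return max(td(c) for c in comps)
        return 1 + min(td(vs - {v}) for v in vs)

    return td(frozenset(vertices))

print(tree_depth([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))  # path on 4 vertices -> 3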

Cite as

Yoichi Iwata, Tomoaki Ogasawara, and Naoto Ohsaka. On the Power of Tree-Depth for Fully Polynomial FPT Algorithms. In 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 96, pp. 41:1-41:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{iwata_et_al:LIPIcs.STACS.2018.41,
  author =	{Iwata, Yoichi and Ogasawara, Tomoaki and Ohsaka, Naoto},
  title =	{{On the Power of Tree-Depth for Fully Polynomial FPT Algorithms}},
  booktitle =	{35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)},
  pages =	{41:1--41:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-062-0},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{96},
  editor =	{Niedermeier, Rolf and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2018.41},
  URN =		{urn:nbn:de:0030-drops-85255},
  doi =		{10.4230/LIPIcs.STACS.2018.41},
  annote =	{Keywords: Fully Polynomial FPT Algorithm, Tree-Depth, Divide-and-Conquer}
}