Search Results

Documents authored by Yoshida, Yuichi


Document
RANDOM
Stochastic Distance in Property Testing

Authors: Uri Meir, Gregory Schwartzman, and Yuichi Yoshida

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We introduce a novel concept termed "stochastic distance" for property testing. Diverging from the traditional definition of distance, where a distance t implies that there exist t edges that can be added to ensure a graph possesses a certain property (such as k-edge-connectivity), our new notion implies that there is a high probability that adding t random edges will endow the graph with the desired property. While formulating testers based on this new distance proves challenging in a sequential environment, it is much easier in a distributed setting. Taking k-edge-connectivity as a case study, we design ultra-fast testing algorithms in the CONGEST model. Our introduction of stochastic distance offers a more natural fit for the distributed setting, providing a promising avenue for future research in emerging models of computation.
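
To make the definition above concrete, the following Python sketch estimates by Monte Carlo simulation whether adding t uniformly random non-edges makes a graph k-edge-connected with high probability. It only illustrates the stochastic-distance notion, not the paper's CONGEST-model tester; the use of networkx, the trial count, and the success threshold are assumptions.

# Hedged sketch: Monte Carlo check of whether adding t random edges makes a
# graph k-edge-connected with high probability (the "stochastic distance" idea).
# Not the paper's distributed tester; parameters and helpers are illustrative.
import itertools
import random
import networkx as nx

def likely_k_connected_after_t_random_edges(G, k, t, trials=200, success_prob=0.9):
    nodes = list(G.nodes)
    non_edges = [e for e in itertools.combinations(nodes, 2) if not G.has_edge(*e)]
    successes = 0
    for _ in range(trials):
        H = G.copy()
        H.add_edges_from(random.sample(non_edges, min(t, len(non_edges))))
        if nx.edge_connectivity(H) >= k:
            successes += 1
    return successes / trials >= success_prob

if __name__ == "__main__":
    path = nx.path_graph(8)   # every edge is a bridge, so the graph is only 1-edge-connected
    print(likely_k_connected_after_t_random_edges(path, k=2, t=6))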

Cite as

Uri Meir, Gregory Schwartzman, and Yuichi Yoshida. Stochastic Distance in Property Testing. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 57:1-57:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{meir_et_al:LIPIcs.APPROX/RANDOM.2024.57,
  author =	{Meir, Uri and Schwartzman, Gregory and Yoshida, Yuichi},
  title =	{{Stochastic Distance in Property Testing}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{57:1--57:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.57},
  URN =		{urn:nbn:de:0030-drops-210506},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.57},
  annote =	{Keywords: Connectivity, k-edge connectivity}
}
Document
Track A: Algorithms, Complexity and Games
Lipschitz Continuous Allocations for Optimization Games

Authors: Soh Kumabe and Yuichi Yoshida

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In cooperative game theory, the primary focus is the equitable allocation of payoffs or costs among agents. However, in the practical applications of cooperative games, accurately representing games is challenging. In such cases, using an allocation method sensitive to small perturbations in the game can lead to various problems, including dissatisfaction among agents and the potential for manipulation by agents seeking to maximize their own benefits. Therefore, the allocation method must be robust against game perturbations. In this study, we explore optimization games, in which the value of the characteristic function is provided as the optimal value of an optimization problem. To assess the robustness of the allocation methods, we use the Lipschitz constant, which quantifies the extent of change in the allocation vector in response to a unit perturbation in the weight vector of the underlying problem. Thereafter, we provide an algorithm for the matching game that returns an allocation belonging to the (1/2-ε)-approximate core with Lipschitz constant O(ε^{-1}). Additionally, we provide an algorithm for a minimum spanning tree game that returns an allocation belonging to the 4-approximate core with a constant Lipschitz constant. The Shapley value is a popular allocation that satisfies several desirable properties. Therefore, we investigate the robustness of the Shapley value. We demonstrate that the Lipschitz constant of the Shapley value for the minimum spanning tree is constant, whereas that for the matching game is Ω(log n), where n denotes the number of vertices.
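
As a small, purely illustrative companion to the quantities discussed above, the Python sketch below computes the Shapley value of a tiny weighted matching game by brute force and reports how the allocation moves after a unit perturbation of one edge weight. The example weights, the brute-force characteristic function, and the player names are hypothetical; the sketch does not implement the paper's algorithms or reproduce its Lipschitz-constant bounds.

# Hedged sketch: Shapley value of a toy matching game, before and after a unit
# perturbation of a single edge weight.  Brute force only; illustrative data.
from itertools import permutations

def max_matching_value(players, weights):
    """Characteristic function: maximum-weight matching within the coalition."""
    players = list(players)
    if len(players) < 2:
        return 0.0
    p0, rest = players[0], players[1:]
    best = max_matching_value(rest, weights)          # p0 left unmatched
    for q in rest:                                    # p0 matched to q
        w = weights.get(frozenset((p0, q)), 0.0)
        best = max(best, w + max_matching_value([r for r in rest if r != q], weights))
    return best

def shapley(all_players, weights):
    phi = {p: 0.0 for p in all_players}
    perms = list(permutations(all_players))
    for order in perms:
        coalition = []
        for p in order:
            before = max_matching_value(coalition, weights)
            coalition.append(p)
            phi[p] += max_matching_value(coalition, weights) - before
    return {p: v / len(perms) for p, v in phi.items()}

players = ["a", "b", "c", "d"]
weights = {frozenset(("a", "b")): 2.0, frozenset(("b", "c")): 1.0, frozenset(("c", "d")): 2.0}
before = shapley(players, weights)
weights[frozenset(("b", "c"))] += 1.0                 # unit perturbation of one weight
after = shapley(players, weights)
print({p: round(after[p] - before[p], 3) for p in players})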

Cite as

Soh Kumabe and Yuichi Yoshida. Lipschitz Continuous Allocations for Optimization Games. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 102:1-102:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kumabe_et_al:LIPIcs.ICALP.2024.102,
  author =	{Kumabe, Soh and Yoshida, Yuichi},
  title =	{{Lipschitz Continuous Allocations for Optimization Games}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{102:1--102:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.102},
  URN =		{urn:nbn:de:0030-drops-202456},
  doi =		{10.4230/LIPIcs.ICALP.2024.102},
  annote =	{Keywords: Cooperative Games, Lipschitz Continuity}
}
Document
Track A: Algorithms, Complexity and Games
Testing Spreading Behavior in Networks with Arbitrary Topologies

Authors: Augusto Modanese and Yuichi Yoshida

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Given the full topology of a network, how hard is it to test if it is evolving according to a local rule or is far from doing so? Inspired by the works of Goldreich and Ron (J. ACM, 2017) and Nakar and Ron (ICALP, 2021), we initiate the study of property testing in dynamic environments with arbitrary topologies. Our focus is on the simplest non-trivial rule that can be tested, which corresponds to the 1-BP rule of bootstrap percolation and models a simple spreading behavior: Every "infected" node stays infected forever, and each "healthy" node becomes infected if and only if it has at least one infected neighbor. Our results are subdivided into two main groups: - If we are testing a single time step of evolution, then the query complexity is O(Δ/ε) or Õ(√n/ε) (whichever is smaller), where Δ and n are the maximum degree of a node and the number of vertices in the underlying graph, respectively. We also give lower bounds for both one- and two-sided error testers that match our upper bounds up to Δ = o(√n) and Δ = O(n^{1/3}), respectively. If ε is constant, then the first of these also holds against adaptive testers. - When testing the environment over T time steps, we have two algorithms that need O(Δ^{T-1}/εT) and Õ(|E|/εT) queries, respectively, where E is the set of edges of the underlying graph. All of our algorithms are one-sided error, and all of them are also non-adaptive, with the single exception of the more complex Õ(√n/ε)-query tester for the case T = 2.
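
The 1-BP rule itself is simple enough to state in a few lines of code. The Python sketch below is only a reference simulator and consistency check for the rule, not one of the paper's sublinear-query testers; the adjacency-list format and the example are assumptions.

# Hedged sketch of the 1-BP rule: infected nodes stay infected, and a healthy
# node becomes infected iff it has at least one infected neighbor.
def bp_step(adj, infected):
    """adj: dict node -> iterable of neighbors; infected: set of nodes."""
    new_infected = set(infected)
    for v, neighbors in adj.items():
        if v not in infected and any(u in infected for u in neighbors):
            new_infected.add(v)
    return new_infected

def consistent_with_one_step(adj, before, after):
    """True iff `after` is exactly the 1-BP successor of `before`."""
    return bp_step(adj, before) == after

# Tiny example on a 4-cycle (hypothetical data).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(bp_step(adj, {0}))                              # {0, 1, 3}
print(consistent_with_one_step(adj, {0}, {0, 1, 3}))  # True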

Cite as

Augusto Modanese and Yuichi Yoshida. Testing Spreading Behavior in Networks with Arbitrary Topologies. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 112:1-112:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{modanese_et_al:LIPIcs.ICALP.2024.112,
  author =	{Modanese, Augusto and Yoshida, Yuichi},
  title =	{{Testing Spreading Behavior in Networks with Arbitrary Topologies}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{112:1--112:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.112},
  URN =		{urn:nbn:de:0030-drops-202554},
  doi =		{10.4230/LIPIcs.ICALP.2024.112},
  annote =	{Keywords: Property testing, bootstrap percolation, local phenomena, expander graphs}
}
Document
Average Sensitivity of the Knapsack Problem

Authors: Soh Kumabe and Yuichi Yoshida

Published in: LIPIcs, Volume 244, 30th Annual European Symposium on Algorithms (ESA 2022)


Abstract
In resource allocation, we often require that the output allocation of an algorithm is stable against input perturbation because frequent reallocation is costly and untrustworthy. Varma and Yoshida (SODA'21) formalized this requirement for algorithms as the notion of average sensitivity. Here, the average sensitivity of an algorithm on an input instance is, roughly speaking, the average size of the symmetric difference of the output for the instance and that for the instance with one item deleted, where the average is taken over the deleted item. In this work, we consider the average sensitivity of the knapsack problem, a representative example of a resource allocation problem. We first show a (1-ε)-approximation algorithm for the knapsack problem with average sensitivity O(ε^{-1}log ε^{-1}). Then, we complement this result by showing that any (1-ε)-approximation algorithm has average sensitivity Ω(ε^{-1}). As an application of our algorithm, we consider the incremental knapsack problem in the random-order setting, where the goal is to maintain a good solution while items arrive one by one in a random order. Specifically, we show that for any ε > 0, there exists a (1-ε)-approximation algorithm with amortized recourse O(ε^{-1}log ε^{-1}) and amortized update time O(log n+f_ε), where n is the total number of items and f_ε > 0 is a value depending on ε.
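
The brute-force Python sketch below makes the average-sensitivity definition concrete for a deterministic exact solver: it computes the average, over the deleted item, of the symmetric difference between the output on the full instance and the output with that item removed. The DP solver and its tie-breaking are illustrative assumptions; the paper's low-sensitivity algorithm is a randomized (1-ε)-approximation and is substantially more involved.

# Hedged sketch: average sensitivity of a deterministic knapsack algorithm,
# i.e. the average over deleted items of |A(I) Δ A(I - item)|.
def knapsack_opt(items, capacity):
    """items: list of (name, weight, value); returns a frozenset of chosen names."""
    best = {0: (0, frozenset())}                      # used weight -> (value, chosen set)
    for name, w, v in items:
        for used, (val, chosen) in sorted(best.items()):
            if used + w <= capacity and (used + w not in best or best[used + w][0] < val + v):
                best[used + w] = (val + v, chosen | {name})
    return max(best.items(), key=lambda kv: kv[1][0])[1][1]

def average_sensitivity(items, capacity):
    full = knapsack_opt(items, capacity)
    diffs = [len(full ^ knapsack_opt(items[:i] + items[i + 1:], capacity))
             for i in range(len(items))]
    return sum(diffs) / len(items)

items = [("a", 2, 3), ("b", 3, 4), ("c", 4, 5), ("d", 5, 8)]   # hypothetical data
print(average_sensitivity(items, capacity=9))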

Cite as

Soh Kumabe and Yuichi Yoshida. Average Sensitivity of the Knapsack Problem. In 30th Annual European Symposium on Algorithms (ESA 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 244, pp. 75:1-75:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{kumabe_et_al:LIPIcs.ESA.2022.75,
  author =	{Kumabe, Soh and Yoshida, Yuichi},
  title =	{{Average Sensitivity of the Knapsack Problem}},
  booktitle =	{30th Annual European Symposium on Algorithms (ESA 2022)},
  pages =	{75:1--75:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-247-1},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{244},
  editor =	{Chechik, Shiri and Navarro, Gonzalo and Rotenberg, Eva and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2022.75},
  URN =		{urn:nbn:de:0030-drops-170136},
  doi =		{10.4230/LIPIcs.ESA.2022.75},
  annote =	{Keywords: Average Sensitivity, Knapsack Problem, FPRAS}
}
Document
Track A: Algorithms, Complexity and Games
Downsampling for Testing and Learning in Product Distributions

Authors: Nathaniel Harms and Yuichi Yoshida

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We study distribution-free property testing and learning problems where the unknown probability distribution is a product distribution over ℝ^d. For many important classes of functions, such as intersections of halfspaces, polynomial threshold functions, convex sets, and k-alternating functions, the known algorithms either have complexity that depends on the support size of the distribution, or are proven to work only for specific examples of product distributions. We introduce a general method, which we call downsampling, that resolves these issues. Downsampling uses a notion of "rectilinear isoperimetry" for product distributions, which further strengthens the connection between isoperimetry, testing and learning. Using this technique, we attain new efficient distribution-free algorithms under product distributions on ℝ^d: 1) A simpler proof for non-adaptive, one-sided monotonicity testing of functions [n]^d → {0,1}, and improved sample complexity for testing monotonicity over unknown product distributions, from O(d⁷) [Black, Chakrabarty, & Seshadhri, SODA 2020] to O(d³). 2) Polynomial-time agnostic learning algorithms for functions of a constant number of halfspaces, and constant-degree polynomial threshold functions; 3) An exp{O(dlog(dk))}-time agnostic learning algorithm, and an exp{O(dlog(dk))}-sample tolerant tester, for functions of k convex sets; and a 2^O(d) sample-based one-sided tester for convex sets; 4) An exp{O(k√d)}-time agnostic learning algorithm for k-alternating functions, and a sample-based tolerant tester with the same complexity.

Cite as

Nathaniel Harms and Yuichi Yoshida. Downsampling for Testing and Learning in Product Distributions. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 71:1-71:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{harms_et_al:LIPIcs.ICALP.2022.71,
  author =	{Harms, Nathaniel and Yoshida, Yuichi},
  title =	{{Downsampling for Testing and Learning in Product Distributions}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{71:1--71:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.71},
  URN =		{urn:nbn:de:0030-drops-164123},
  doi =		{10.4230/LIPIcs.ICALP.2022.71},
  annote =	{Keywords: property testing, learning, monotonicity, halfspaces, intersections of halfspaces, polynomial threshold functions}
}
Document
One-Tape Turing Machine and Branching Program Lower Bounds for MCSP

Authors: Mahdi Cheraghchi, Shuichi Hirahara, Dimitrios Myrisiotis, and Yuichi Yoshida

Published in: LIPIcs, Volume 187, 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)


Abstract
For a size parameter s: ℕ → ℕ, the Minimum Circuit Size Problem (denoted by MCSP[s(n)]) is the problem of deciding whether the minimum circuit size of a given function f : {0,1}ⁿ → {0,1} (represented by a string of length N : = 2ⁿ) is at most a threshold s(n). A recent line of work exhibited "hardness magnification" phenomena for MCSP: A very weak lower bound for MCSP implies a breakthrough result in complexity theory. For example, McKay, Murray, and Williams (STOC 2019) implicitly showed that, for some constant μ₁ > 0, if MCSP[2^{μ₁⋅ n}] cannot be computed by a one-tape Turing machine (with an additional one-way read-only input tape) running in time N^{1.01}, then P≠NP. In this paper, we present the following new lower bounds against one-tape Turing machines and branching programs: 1) A randomized two-sided error one-tape Turing machine (with an additional one-way read-only input tape) cannot compute MCSP[2^{μ₂⋅n}] in time N^{1.99}, for some constant μ₂ > μ₁. 2) A non-deterministic (or parity) branching program of size o(N^{1.5}/log N) cannot compute MKTP, which is a time-bounded Kolmogorov complexity analogue of MCSP. This is shown by directly applying the Nečiporuk method to MKTP, which previously appeared to be difficult. 3) The size of any non-deterministic, co-non-deterministic, or parity branching program computing MCSP is at least N^{1.5-o(1)}. These results are the first non-trivial lower bounds for MCSP and MKTP against one-tape Turing machines and non-deterministic branching programs, and essentially match the best-known lower bounds for any explicit functions against these computational models. The first result is based on recent constructions of pseudorandom generators for read-once oblivious branching programs (ROBPs) and combinatorial rectangles (Forbes and Kelley, FOCS 2018; Viola 2019). En route, we obtain several related results: 1) There exists a (local) hitting set generator with seed length Õ(√N) secure against read-once polynomial-size non-deterministic branching programs on N-bit inputs. 2) Any read-once co-non-deterministic branching program computing MCSP must have size at least 2^Ω̃(N).

Cite as

Mahdi Cheraghchi, Shuichi Hirahara, Dimitrios Myrisiotis, and Yuichi Yoshida. One-Tape Turing Machine and Branching Program Lower Bounds for MCSP. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 23:1-23:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{cheraghchi_et_al:LIPIcs.STACS.2021.23,
  author =	{Cheraghchi, Mahdi and Hirahara, Shuichi and Myrisiotis, Dimitrios and Yoshida, Yuichi},
  title =	{{One-Tape Turing Machine and Branching Program Lower Bounds for MCSP}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{23:1--23:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.23},
  URN =		{urn:nbn:de:0030-drops-136681},
  doi =		{10.4230/LIPIcs.STACS.2021.23},
  annote =	{Keywords: Minimum Circuit Size Problem, Kolmogorov Complexity, One-Tape Turing Machines, Branching Programs, Lower Bounds, Pseudorandom Generators, Hitting Set Generators}
}
Document
Ordered Graph Limits and Their Applications

Authors: Omri Ben-Eliezer, Eldar Fischer, Amit Levi, and Yuichi Yoshida

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
The emerging theory of graph limits exhibits an analytic perspective on graphs, showing that many important concepts and tools in graph theory and its applications can be described more naturally (and sometimes proved more easily) in analytic language. We extend the theory of graph limits to the ordered setting, presenting a limit object for dense vertex-ordered graphs, which we call an orderon. As a special case, this yields limit objects for matrices whose rows and columns are ordered, and for dynamic graphs that expand (via vertex insertions) over time. Along the way, we devise an ordered locality-preserving variant of the cut distance between ordered graphs, showing that two graphs are close with respect to this distance if and only if they are similar in terms of their ordered subgraph frequencies. We show that the space of orderons is compact with respect to this distance notion, which is key to a successful analysis of combinatorial objects through their limits. For the proof we combine techniques used in the unordered setting with several new techniques specifically designed to overcome the challenges arising in the ordered setting. We derive several applications of the ordered limit theory in extremal combinatorics, sampling, and property testing in ordered graphs. In particular, we prove a new ordered analogue of the well-known result by Alon and Stav [RS&A'08] on the furthest graph from a hereditary property; this is the first known result of this type in the ordered setting. Unlike the unordered regime, here the Erdős–Rényi random graph 𝐆(n, p) with an ordering over the vertices is not always asymptotically the furthest from the property for some p. However, using our ordered limit theory, we show that random graphs generated by a stochastic block model, where the blocks are consecutive in the vertex ordering, are (approximately) the furthest. Additionally, we describe an alternative analytic proof of the ordered graph removal lemma [Alon et al., FOCS'17].

Cite as

Omri Ben-Eliezer, Eldar Fischer, Amit Levi, and Yuichi Yoshida. Ordered Graph Limits and Their Applications. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 42:1-42:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{beneliezer_et_al:LIPIcs.ITCS.2021.42,
  author =	{Ben-Eliezer, Omri and Fischer, Eldar and Levi, Amit and Yoshida, Yuichi},
  title =	{{Ordered Graph Limits and Their Applications}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{42:1--42:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.42},
  URN =		{urn:nbn:de:0030-drops-135815},
  doi =		{10.4230/LIPIcs.ITCS.2021.42},
  annote =	{Keywords: graph limits, ordered graph, graphon, cut distance, removal lemma}
}
Document
Sensitivity Analysis of the Maximum Matching Problem

Authors: Yuichi Yoshida and Samson Zhou

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We consider the sensitivity of algorithms for the maximum matching problem against edge and vertex modifications. When an algorithm A for the maximum matching problem is deterministic, the sensitivity of A on G is defined as max_{e ∈ E(G)}|A(G) △ A(G - e)|, where G-e is the graph obtained from G by removing an edge e ∈ E(G) and △ denotes the symmetric difference. When A is randomized, the sensitivity is defined as max_{e ∈ E(G)}d_{EM}(A(G),A(G-e)), where d_{EM}(⋅,⋅) denotes the earth mover's distance between two distributions. Thus, the sensitivity measures the change in the output of an algorithm when the input is slightly perturbed. Algorithms with low sensitivity, or stable algorithms, are desirable because they are robust to edge failure or attack. In this work, we show a randomized (1-ε)-approximation algorithm with worst-case sensitivity O_ε(1), which substantially improves upon the (1-ε)-approximation algorithm of Varma and Yoshida (SODA'21) that obtains average sensitivity n^{O(1/(1+ε²))}, and we show a deterministic 1/2-approximation algorithm with sensitivity exp(O(log^* n)) for bounded-degree graphs. We then show that any deterministic constant-factor approximation algorithm must have sensitivity Ω(log^* n). Our results imply that randomized algorithms are strictly more powerful than deterministic ones in that the former can achieve sensitivity independent of n whereas the latter cannot. We also show analogous results for vertex sensitivity, where we remove a vertex instead of an edge. Finally, we introduce the notion of normalized weighted sensitivity, a natural generalization of sensitivity that accounts for the weights of deleted edges. For a graph with weight function w, the normalized weighted sensitivity is defined to be the total weight of the edges in the symmetric difference of the outputs, normalized by the weight of the altered edge, i.e., max_{e ∈ E(G)} (1/w(e))⋅w(A(G) △ A(G - e)). Hence, the normalized weighted sensitivity measures the weighted change in the output of an algorithm when the input is slightly perturbed, normalized by the weight of the perturbation. We show that if all edges in a graph have polynomially bounded weight, then given a trade-off parameter α > 2, there exists an algorithm that outputs a 1/(4α)-approximation to the maximum weighted matching in O(m log_α n) time, with normalized weighted sensitivity O(1).
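
For a deterministic algorithm, the sensitivity defined above can be computed directly by brute force. The Python sketch below does so for a simple greedy maximal-matching routine; the greedy algorithm, its scan order, and the example graph are illustrative assumptions and are unrelated to the low-sensitivity algorithms of the paper.

# Hedged sketch: worst-case edge-deletion sensitivity of a deterministic
# matching algorithm, max over e of |A(G) Δ A(G - e)|.
def greedy_matching(edges):
    """edges: list of (u, v) pairs; returns a maximal matching as a frozenset."""
    matched, matching = set(), set()
    for u, v in edges:                      # fixed scan order -> deterministic output
        if u not in matched and v not in matched:
            matching.add((u, v))
            matched.update((u, v))
    return frozenset(matching)

def edge_sensitivity(edges, algo=greedy_matching):
    base = algo(edges)
    return max(len(base ^ algo(edges[:i] + edges[i + 1:])) for i in range(len(edges)))

# Path a-b-c-d (hypothetical data): deleting the first edge changes the output a lot.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
print(greedy_matching(edges))     # matching {('a', 'b'), ('c', 'd')}
print(edge_sensitivity(edges))    # 3 here: deleting ('a','b') flips the whole matching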

Cite as

Yuichi Yoshida and Samson Zhou. Sensitivity Analysis of the Maximum Matching Problem. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 58:1-58:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{yoshida_et_al:LIPIcs.ITCS.2021.58,
  author =	{Yoshida, Yuichi and Zhou, Samson},
  title =	{{Sensitivity Analysis of the Maximum Matching Problem}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{58:1--58:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.58},
  URN =		{urn:nbn:de:0030-drops-135979},
  doi =		{10.4230/LIPIcs.ITCS.2021.58},
  annote =	{Keywords: Sensitivity analysis, maximum matching, graph algorithms}
}
Document
Weakly Submodular Function Maximization Using Local Submodularity Ratio

Authors: Richard Santiago and Yuichi Yoshida

Published in: LIPIcs, Volume 181, 31st International Symposium on Algorithms and Computation (ISAAC 2020)


Abstract
Weak submodularity is a natural relaxation of the diminishing returns property, which is equivalent to submodularity. Weak submodularity has been used to show that many (monotone) functions that arise in practice can be efficiently maximized with provable guarantees. In this work we introduce two natural generalizations of weak submodularity for non-monotone functions. We show that an efficient randomized greedy algorithm has provable approximation guarantees for maximizing these functions subject to a cardinality constraint. We then provide a more refined analysis that takes into account that the weak submodularity parameter may change (sometimes improving) throughout the execution of the algorithm. This leads to improved approximation guarantees in some settings. We provide applications of our results for monotone and non-monotone maximization problems.
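
For context, the sketch below computes the classical submodularity ratio (the quantity that weak submodularity bounds away from zero) by brute force on a tiny ground set. The paper's local and non-monotone generalizations refine this quantity; the toy set function and the restriction to positive denominators are illustrative assumptions.

# Hedged sketch: brute-force (classical) submodularity ratio
#   gamma = min over disjoint L, S (S nonempty, positive denominator) of
#           sum_{x in S} (f(L ∪ {x}) - f(L)) / (f(L ∪ S) - f(L)).
from itertools import chain, combinations

def subsets(ground):
    ground = list(ground)
    return chain.from_iterable(combinations(ground, r) for r in range(len(ground) + 1))

def submodularity_ratio(f, ground):
    gamma = float("inf")
    for L in map(set, subsets(ground)):
        for S in map(set, subsets(ground - L)):
            if not S:
                continue
            denom = f(L | S) - f(L)
            if denom <= 0:
                continue
            gamma = min(gamma, sum(f(L | {x}) - f(L) for x in S) / denom)
    return gamma

# Toy monotone set function on {0, 1, 2} (hypothetical values).
values = {(): 0, (0,): 2, (1,): 2, (2,): 1, (0, 1): 3, (0, 2): 3, (1, 2): 3, (0, 1, 2): 4}
f = lambda S: values[tuple(sorted(S))]
print(submodularity_ratio(f, {0, 1, 2}))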

Cite as

Richard Santiago and Yuichi Yoshida. Weakly Submodular Function Maximization Using Local Submodularity Ratio. In 31st International Symposium on Algorithms and Computation (ISAAC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 181, pp. 64:1-64:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{santiago_et_al:LIPIcs.ISAAC.2020.64,
  author =	{Santiago, Richard and Yoshida, Yuichi},
  title =	{{Weakly Submodular Function Maximization Using Local Submodularity Ratio}},
  booktitle =	{31st International Symposium on Algorithms and Computation (ISAAC 2020)},
  pages =	{64:1--64:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-173-3},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{181},
  editor =	{Cao, Yixin and Cheng, Siu-Wing and Li, Minming},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2020.64},
  URN =		{urn:nbn:de:0030-drops-134082},
  doi =		{10.4230/LIPIcs.ISAAC.2020.64},
  annote =	{Keywords: weakly submodular, non-monotone, local submodularity ratio}
}
Document
Distribution-Free Testing of Linear Functions on ℝⁿ

Authors: Noah Fleming and Yuichi Yoshida

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
We study the problem of testing whether a function f:ℝⁿ → ℝ is linear (i.e., both additive and homogeneous) in the distribution-free property testing model, where the distance between functions is measured with respect to an unknown probability distribution over ℝⁿ. We show that, given query access to f, sampling access to the unknown distribution as well as the standard Gaussian, and ε > 0, we can distinguish additive functions from functions that are ε-far from additive functions with O((1/ε)log(1/ε)) queries, independent of n. Furthermore, under the assumption that f is a continuous function, the additivity tester can be extended to a distribution-free tester for linearity using the same number of queries. On the other hand, we show that if we are only allowed to get values of f on sampled points, then any distribution-free tester requires Ω(n) samples, even if the underlying distribution is the standard Gaussian.
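
A naive version of the query/sample model is easy to state in code: query f at chosen points, sample x from the unknown distribution and y from the standard Gaussian, and check additivity f(x + y) = f(x) + f(y). The sketch below is only that naive check with an arbitrary number of trials; it is not the O((1/ε)log(1/ε))-query tester of the paper, and the trial count, comparison tolerance, and example functions are assumptions.

# Hedged sketch: naive additivity check with query access to f, sample access
# to an unknown distribution, and sample access to the standard Gaussian.
import numpy as np

def naive_additivity_test(f, sample_unknown, n, trials=100, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = sample_unknown()                 # sample from the unknown distribution
        y = rng.standard_normal(n)           # sample from the standard Gaussian
        if not np.isclose(f(x + y), f(x) + f(y)):
            return "reject"
    return "accept"

# Hypothetical usage: a linear f passes; a perturbed f is rejected with high probability.
n = 5
w = np.arange(1.0, n + 1.0)
linear = lambda x: float(w @ x)
broken = lambda x: float(w @ x + 0.5 * x[0] ** 2)
unknown_rng = np.random.default_rng(1)
unknown = lambda: unknown_rng.uniform(-1.0, 1.0, n)
print(naive_additivity_test(linear, unknown, n))   # accept
print(naive_additivity_test(broken, unknown, n))   # reject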

Cite as

Noah Fleming and Yuichi Yoshida. Distribution-Free Testing of Linear Functions on ℝⁿ. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 22:1-22:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{fleming_et_al:LIPIcs.ITCS.2020.22,
  author =	{Fleming, Noah and Yoshida, Yuichi},
  title =	{{Distribution-Free Testing of Linear Functions on $\mathbb{R}^n$}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{22:1--22:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.22},
  URN =		{urn:nbn:de:0030-drops-117076},
  doi =		{10.4230/LIPIcs.ITCS.2020.22},
  annote =	{Keywords: Property Testing, Distribution-Free Testing, Linearity Testing}
}
Document
Sublinear-Time Quadratic Minimization via Spectral Decomposition of Matrices

Authors: Amit Levi and Yuichi Yoshida

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We design a sublinear-time approximation algorithm for quadratic function minimization problems with a better error bound than the previous algorithm by Hayashi and Yoshida (NIPS'16). Our approximation algorithm can be modified to handle the case where the minimization is done over a sphere. The analysis of our algorithms combines results from graph limit theory with a novel spectral decomposition of matrices. Specifically, we prove that a matrix A can be decomposed into a structured part and a pseudorandom part, where the structured part is a block matrix with a polylogarithmic number of blocks, such that in each block all the entries are the same, and the pseudorandom part has a small spectral norm, achieving a better error bound than the existing decomposition theorem of Frieze and Kannan (FOCS'96). As an additional application of the decomposition theorem, we give a sublinear-time approximation algorithm for computing the top singular values of a matrix.

Cite as

Amit Levi and Yuichi Yoshida. Sublinear-Time Quadratic Minimization via Spectral Decomposition of Matrices. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 17:1-17:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{levi_et_al:LIPIcs.APPROX-RANDOM.2018.17,
  author =	{Levi, Amit and Yoshida, Yuichi},
  title =	{{Sublinear-Time Quadratic Minimization via Spectral Decomposition of Matrices}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{17:1--17:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.17},
  URN =		{urn:nbn:de:0030-drops-94210},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.17},
  annote =	{Keywords: Quadratic function minimization, Approximation Algorithms, Matrix spectral decomposition, Graph limits}
}
Document
A New Approximation Guarantee for Monotone Submodular Function Maximization via Discrete Convexity

Authors: Tasuku Soma and Yuichi Yoshida

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
In monotone submodular function maximization, approximation guarantees based on the curvature of the objective function have been extensively studied in the literature. However, the notion of curvature is often pessimistic, and we rarely obtain improved approximation guarantees, even for very simple objective functions. In this paper, we provide a novel approximation guarantee by extracting an M^♮-concave function h: 2^E → ℝ_+, a notion in discrete convex analysis, from the objective function f: 2^E → ℝ_+. We introduce a novel notion called the M^♮-concave curvature of a given set function f, which measures how much f deviates from an M^♮-concave function, and show that we can obtain a (1-γ/e-ε)-approximation to the problem of maximizing f under a cardinality constraint in polynomial time, where γ is the value of the M^♮-concave curvature and ε > 0 is an arbitrary constant. Then, we show that we can obtain nontrivial approximation guarantees for various problems by applying the proposed algorithm.

Cite as

Tasuku Soma and Yuichi Yoshida. A New Approximation Guarantee for Monotone Submodular Function Maximization via Discrete Convexity. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 99:1-99:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{soma_et_al:LIPIcs.ICALP.2018.99,
  author =	{Soma, Tasuku and Yoshida, Yuichi},
  title =	{{A New Approximation Guarantee for Monotone Submodular Function Maximization via Discrete Convexity}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{99:1--99:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.99},
  URN =		{urn:nbn:de:0030-drops-91033},
  doi =		{10.4230/LIPIcs.ICALP.2018.99},
  annote =	{Keywords: Submodular Function, Approximation Algorithm, Discrete Convex Analysis}
}
Document
Streaming Algorithms for Maximizing Monotone Submodular Functions under a Knapsack Constraint

Authors: Chien-Chung Huang, Naonori Kakimura, and Yuichi Yoshida

Published in: LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)


Abstract
In this paper, we consider the problem of maximizing a monotone submodular function subject to a knapsack constraint in the streaming setting. In particular, the elements arrive sequentially, and at any point of time the algorithm has access only to a small fraction of the data stored in primary memory. For this problem, we propose a (0.363-ε)-approximation algorithm, requiring only a single pass through the data; moreover, we propose a (0.4-ε)-approximation algorithm requiring a constant number of passes through the data. The required memory space of both algorithms depends only on the size of the knapsack capacity and ε.
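
To illustrate the streaming setting (and only that), the sketch below implements a generic single-pass density-threshold heuristic: an arriving element is kept when its marginal gain per unit weight clears a threshold derived from a guess of the optimum. This is not the paper's (0.363-ε)- or (0.4-ε)-approximation algorithm; the threshold rule, the oracle interface, and the coverage example are assumptions.

# Hedged sketch: single-pass density-threshold heuristic for monotone
# submodular maximization under a knapsack constraint.  Illustrative only.
def stream_knapsack(stream, f, capacity, opt_guess):
    chosen, used = [], 0.0
    for element, weight in stream:           # one pass over the stream
        gain = f(chosen + [element]) - f(chosen)
        if used + weight <= capacity and gain / weight >= opt_guess / (2 * capacity):
            chosen.append(element)
            used += weight
    return chosen

# Toy coverage function (hypothetical data): f(S) = size of the covered union.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 2, 3, 4, 5}}
f = lambda S: len(set().union(*(sets[x] for x in S))) if S else 0
stream = [("a", 1.0), ("b", 1.0), ("c", 2.0), ("d", 3.0)]
print(stream_knapsack(stream, f, capacity=3.0, opt_guess=5.0))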

Cite as

Chien-Chung Huang, Naonori Kakimura, and Yuichi Yoshida. Streaming Algorithms for Maximizing Monotone Submodular Functions under a Knapsack Constraint. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 11:1-11:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{huang_et_al:LIPIcs.APPROX-RANDOM.2017.11,
  author =	{Huang, Chien-Chung and Kakimura, Naonori and Yoshida, Yuichi},
  title =	{{Streaming Algorithms for Maximizing Monotone Submodular Functions under a Knapsack Constraint}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-044-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{81},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.11},
  URN =		{urn:nbn:de:0030-drops-75602},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2017.11},
  annote =	{Keywords: submodular functions, single-pass streaming, multiple-pass streaming, constant approximation}
}
Document
Robust Approximation of Temporal CSP

Authors: Suguru Tamaki and Yuichi Yoshida

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
A temporal constraint language G is a set of relations with first-order definitions in (ℚ; <). Let CSP(G) denote the set of constraint satisfaction problem instances with relations from G. CSP(G) admits robust approximation if, for any ε ≥ 0, given a (1-ε)-satisfiable instance of CSP(G), we can compute an assignment that satisfies at least a (1-f(ε))-fraction of constraints in polynomial time. Here, f(ε) is some function satisfying f(0) = 0 and f(ε) → 0 as ε → 0. Firstly, we give a qualitative characterization of robust approximability: Assuming the Unique Games Conjecture, we give a necessary and sufficient condition on G under which CSP(G) admits robust approximation. Secondly, we give a quantitative characterization of robust approximability: Assuming the Unique Games Conjecture, we precisely characterize how f(ε) depends on ε for each G. We show that our robust approximation algorithms can be run in almost linear time.

Cite as

Suguru Tamaki and Yuichi Yoshida. Robust Approximation of Temporal CSP. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 419-432, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{tamaki_et_al:LIPIcs.APPROX-RANDOM.2014.419,
  author =	{Tamaki, Suguru and Yoshida, Yuichi},
  title =	{{Robust Approximation of Temporal CSP}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{419--432},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.419},
  URN =		{urn:nbn:de:0030-drops-47135},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.419},
  annote =	{Keywords: constraint satisfaction, maximum satisfiability, approximation algorithm, hardness of approximation, infinite domain}
}
Document
Exact and Approximation Algorithms for the Maximum Constraint Satisfaction Problem over the Point Algebra

Authors: Yoichi Iwata and Yuichi Yoshida

Published in: LIPIcs, Volume 20, 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)


Abstract
We study the constraint satisfaction problem over the point algebra. In this problem, an instance consists of a set of variables and a set of binary constraints of the forms (x < y), (x ≤ y), (x ≠ y), or (x = y). Then, the objective is to assign integers to the variables so as to satisfy as many constraints as possible. This problem contains many important problems such as Correlation Clustering, Maximum Acyclic Subgraph, and Feedback Arc Set. We first give an exact algorithm that runs in O*(3^{(log 5 / log 6) n}) time, which improves the previous best O*(3^n) obtained by standard dynamic programming. Our algorithm combines dynamic programming with the split-and-list technique. The split-and-list technique involves matrix products, and we make use of the sparsity of matrices to speed up the computation. As for approximation, we give a 0.4586-approximation algorithm when the objective is maximizing the number of satisfied constraints, and give an O(log n log log n)-approximation algorithm when the objective is minimizing the number of unsatisfied constraints.
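
The constraint language is small enough that a toy evaluator fits in a few lines. The Python sketch below scores an assignment against point-algebra constraints and brute-forces the best assignment over a handful of variables (values in range(n) suffice, since only order and equality matter). The constraint encoding and the example are assumptions; the exhaustive search is exponential and is neither the paper's exact algorithm nor its approximation algorithms.

# Hedged sketch: evaluate and brute-force a point-algebra Max-CSP instance
# with constraints of the forms x < y, x <= y, x != y, x = y.
from itertools import product
import operator

OPS = {"<": operator.lt, "<=": operator.le, "!=": operator.ne, "=": operator.eq}

def satisfied(assignment, constraints):
    return sum(OPS[op](assignment[x], assignment[y]) for x, op, y in constraints)

def brute_force_max(variables, constraints):
    best = max(product(range(len(variables)), repeat=len(variables)),
               key=lambda vals: satisfied(dict(zip(variables, vals)), constraints))
    return dict(zip(variables, best)), satisfied(dict(zip(variables, best)), constraints)

# Tiny cyclic instance (hypothetical): not all four constraints can hold at once.
variables = ["x", "y", "z"]
constraints = [("x", "<", "y"), ("y", "<", "z"), ("z", "<", "x"), ("x", "!=", "z")]
print(brute_force_max(variables, constraints))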

Cite as

Yoichi Iwata and Yuichi Yoshida. Exact and Approximation Algorithms for the Maximum Constraint Satisfaction Problem over the Point Algebra. In 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 20, pp. 127-138, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{iwata_et_al:LIPIcs.STACS.2013.127,
  author =	{Iwata, Yoichi and Yoshida, Yuichi},
  title =	{{Exact and Approximation Algorithms for the Maximum Constraint Satisfaction Problem over the Point Algebra}},
  booktitle =	{30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)},
  pages =	{127--138},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-50-7},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{20},
  editor =	{Portier, Natacha and Wilke, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2013.127},
  URN =		{urn:nbn:de:0030-drops-39282},
  doi =		{10.4230/LIPIcs.STACS.2013.127},
  annote =	{Keywords: Constraint Satisfaction Problems, Point Algebra, Exact Algorithms, Approximation Algorithms}
}